WorldWideScience

Sample records for decomposition based optimization

  1. Decomposition based parallel processing technique for efficient collaborative optimization

    International Nuclear Information System (INIS)

    Park, Hyung Wook; Kim, Sung Chan; Kim, Min Soo; Choi, Dong Hoon

    2000-01-01

    In practical design studies, most designers solve multidisciplinary problems with complex design structures. These problems involve hundreds of analyses and thousands of variables, and the sequence in which the processes are executed affects the speed of the total design cycle. Thus it is very important for the designer to reorder the original design processes to minimize total cost and time. This is accomplished by decomposing the large multidisciplinary problem into several MultiDisciplinary Analysis SubSystems (MDASS) and processing them in parallel. This paper proposes a new strategy for parallel decomposition of multidisciplinary problems that raises design efficiency by using a genetic algorithm, and shows the relationship between decomposition and Multidisciplinary Design Optimization (MDO) methodology.

  2. Optimal (Solvent) Mixture Design through a Decomposition Based CAMD methodology

    DEFF Research Database (Denmark)

    Achenie, L.; Karunanithi, Arunprakash T.; Gani, Rafiqul

    2004-01-01

    Computer Aided Molecular/Mixture design (CAMD) is one of the most promising techniques for solvent design and selection. A decomposition-based CAMD methodology has been formulated in which the mixture design problem is solved as a series of molecular and mixture design sub-problems. This approach is...

  3. Parallel processing based decomposition technique for efficient collaborative optimization

    International Nuclear Information System (INIS)

    Park, Hyung Wook; Kim, Sung Chan; Kim, Min Soo; Choi, Dong Hoon

    2001-01-01

    In practical design studies, most designers solve multidisciplinary problems with large and complex design systems. These problems involve hundreds of analyses and thousands of variables, and the sequence in which the processes are executed affects the speed of the total design cycle. Thus it is very important for the designer to reorder the original design processes to minimize total computational cost. This is accomplished by decomposing the large multidisciplinary problem into several MultiDisciplinary Analysis SubSystems (MDASS) and processing them in parallel. This paper proposes a new strategy for parallel decomposition of multidisciplinary problems that raises design efficiency by using a genetic algorithm, and shows the relationship between decomposition and Multidisciplinary Design Optimization (MDO) methodology.

  4. Set-Based Discrete Particle Swarm Optimization Based on Decomposition for Permutation-Based Multiobjective Combinatorial Optimization Problems.

    Science.gov (United States)

    Yu, Xue; Chen, Wei-Neng; Gu, Tianlong; Zhang, Huaxiang; Yuan, Huaqiang; Kwong, Sam; Zhang, Jun

    2017-08-07

    This paper studies a specific class of multiobjective combinatorial optimization problems (MOCOPs), namely the permutation-based MOCOPs. Many commonly seen MOCOPs, e.g., the multiobjective traveling salesman problem (MOTSP) and the multiobjective project scheduling problem (MOPSP), belong to this problem class, and they can be very different. However, as the permutation-based MOCOPs share the inherent similarity that the structure of their search space is usually shaped like a permutation tree, this paper proposes a generic multiobjective set-based particle swarm optimization methodology based on decomposition, termed MS-PSO/D. To match the properties of permutation-based MOCOPs, MS-PSO/D utilizes an element-based representation and a constructive approach, through which feasible solutions under constraints can be generated step by step following the permutation-tree-shaped structure, and problem-related heuristic information is introduced into the constructive approach for efficiency. To address the multiobjective optimization issues, a decomposition strategy is employed, in which the problem is converted into multiple single-objective subproblems according to a set of weight vectors. In addition, a flexible mechanism for diversity control is provided in MS-PSO/D. Extensive experiments have been conducted to study MS-PSO/D on two permutation-based MOCOPs, namely the MOTSP and the MOPSP. Experimental results validate that the proposed methodology is promising.
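
    A minimal sketch of the constructive, heuristic-guided permutation building described above, assuming a TSP-style distance matrix; the selection-weight matrix `pher` and the exponent `beta` are illustrative stand-ins for the set-based probability machinery of the actual MS-PSO/D algorithm, which is not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def construct_tour(dist, pher, beta=2.0):
        """Build a permutation step by step along the 'permutation tree':
        at each step, pick the next city among the remaining ones with
        probability proportional to (selection weight) * (heuristic)^beta,
        where the heuristic is inverse distance."""
        n = dist.shape[0]
        tour = [0]
        remaining = set(range(1, n))
        while remaining:
            cur = tour[-1]
            cand = np.array(sorted(remaining))
            weights = pher[cur, cand] * (1.0 / (dist[cur, cand] + 1e-12)) ** beta
            probs = weights / weights.sum()
            nxt = int(rng.choice(cand, p=probs))
            tour.append(nxt)
            remaining.remove(nxt)
        return tour

    # toy instance: 6 random points in the plane
    pts = rng.random((6, 2))
    dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
    pher = np.ones((6, 6))   # uniform selection weights for the sketch
    print(construct_tour(dist, pher))
    ```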

  5. Image Watermarking Algorithm Based on Multiobjective Ant Colony Optimization and Singular Value Decomposition in Wavelet Domain

    Directory of Open Access Journals (Sweden)

    Khaled Loukhaoukha

    2013-01-01

    We present a new optimal watermarking scheme based on the discrete wavelet transform (DWT) and singular value decomposition (SVD) using multiobjective ant colony optimization (MOACO). A binary watermark is decomposed using a singular value decomposition. Then, the singular values are embedded in a detail subband of the host image. The trade-off between watermark transparency and robustness is controlled by multiple scaling factors (MSFs) instead of a single scaling factor (SSF). Determining the optimal values of the MSFs is a difficult problem, and multiobjective ant colony optimization is used to determine these values. Experimental results show much improved performance of the proposed scheme in terms of transparency and robustness compared to other watermarking schemes. Furthermore, it does not suffer from a high probability of false positive detection of the watermarks.
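
    A rough numpy-only sketch of the embedding step, assuming a one-level Haar transform in place of a general DWT; the fixed `msf` vector stands in for the per-singular-value scaling factors that MOACO would actually optimize.

    ```python
    import numpy as np

    def haar2(x):
        # one-level 2-D Haar transform (orthonormal), numpy only
        a = (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 2
        h = (x[0::2, 0::2] - x[0::2, 1::2] + x[1::2, 0::2] - x[1::2, 1::2]) / 2
        v = (x[0::2, 0::2] + x[0::2, 1::2] - x[1::2, 0::2] - x[1::2, 1::2]) / 2
        d = (x[0::2, 0::2] - x[0::2, 1::2] - x[1::2, 0::2] + x[1::2, 1::2]) / 2
        return a, h, v, d

    rng = np.random.default_rng(1)
    host = rng.random((128, 128))
    wm = (rng.random((64, 64)) > 0.5).astype(float)   # binary watermark

    a, h, v, d = haar2(host)
    Uw, Sw, Vtw = np.linalg.svd(wm)

    # embed the watermark's singular values into the diagonal-detail subband,
    # one scaling factor per singular value (the MSFs an optimizer would tune)
    msf = np.full(Sw.shape, 0.05)          # fixed here; MOACO would search these
    Ud, Sd, Vtd = np.linalg.svd(d)
    d_marked = Ud @ np.diag(Sd + msf * Sw) @ Vtd
    print(np.abs(d - d_marked).max())      # embedding distortion in the subband
    ```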

  6. Model-Based Speech Signal Coding Using Optimized Temporal Decomposition for Storage and Broadcasting Applications

    Science.gov (United States)

    Athaudage, Chandranath R. N.; Bradley, Alan B.; Lech, Margaret

    2003-12-01

    A dynamic programming-based optimization strategy for a temporal decomposition (TD) model of speech, and its application to low-rate speech coding for storage and broadcasting, is presented. In previous work with the spectral stability-based event localizing (SBEL) TD algorithm, the event localization was performed based on a spectral stability criterion. Although this approach gave reasonably good results, there was no assurance of the optimality of the event locations. In the present work, we have optimized the event localizing task using a dynamic programming-based optimization strategy. Simulation results show that an improved TD model accuracy can be achieved. A methodology for incorporating the optimized TD algorithm within the standard MELP speech coder for the efficient compression of speech spectral information is also presented. The performance evaluation results revealed that the proposed speech coding scheme achieves 50%-60% compression of speech spectral information with negligible degradation in the decoded speech quality.

  7. Primal Recovery from Consensus-Based Dual Decomposition for Distributed Convex Optimization

    NARCIS (Netherlands)

    Simonetto, A.; Jamali-Rad, H.

    2015-01-01

    Dual decomposition has been successfully employed in a variety of distributed convex optimization problems solved by a network of computing and communicating nodes. Often, when the cost function is separable but the constraints are coupled, the dual decomposition scheme involves local parallel
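
    For intuition, a small self-contained example of dual decomposition with primal recovery by ergodic averaging, on a toy separable problem with one coupling constraint; this is a generic textbook scheme, not the consensus-based algorithm of the paper.

    ```python
    import numpy as np

    # separable problem: minimize sum_i (x_i - c_i)^2  subject to  sum_i x_i = b
    c = np.array([1.0, 2.0, 3.0])
    b = 3.0

    lam = 0.0                 # dual variable for the coupling constraint
    x_avg = np.zeros_like(c)  # running average: primal recovery from dual iterates
    alpha = 0.5

    for k in range(1, 2001):
        # each node minimizes its local Lagrangian (x_i - c_i)^2 + lam * x_i
        x = c - lam / 2.0
        # dual (sub)gradient ascent step; the gradient is the constraint violation
        lam += (alpha / k) * (x.sum() - b)
        # ergodic average of the primal iterates
        x_avg += (x - x_avg) / k

    print(x, x_avg)   # both approach the optimum [0, 1, 2] (sum = 3, lam -> 2)
    ```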

  8. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy

    Science.gov (United States)

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-01

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase the freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. In addition, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The dosimetric requirements are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes the beam weights by minimizing the deviation from the soft constraints subject to the hard constraints, with a constraint on the ℓ1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into a lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight, and the weights of the remaining beams are optimized using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. Furthermore, the SVDLP
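
    A toy illustration of the compress/solve/back-project idea, assuming a synthetic low-rank influence matrix; the real SVDLP solves a linear program with hard/soft dose constraints and an ℓ1 sparsity term, which this least-squares sketch omits.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # toy "influence matrix": dose to 200 voxels from 50 candidate beams,
    # built to be highly degenerate (low numerical rank), as in the paper
    D = rng.random((200, 8)) @ rng.random((8, 50))
    p = rng.random(200)                      # prescribed dose vector

    # compress: truncated SVD of the influence matrix
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    r = int((s > 1e-10 * s[0]).sum())        # numerical rank (here 8)
    Ur, sr, Vtr = U[:, :r], s[:r], Vt[:r]

    # optimize in the r-dimensional space: min ||Ur y - p||, with y = diag(sr) Vtr w
    y = Ur.T @ p                             # least-squares solution (Ur orthonormal)

    # back-project to reconstruct beam weights (minimum-norm solution)
    w = Vtr.T @ (y / sr)
    # residual is the best achievable within the range of the beam model
    print(r, np.linalg.norm(D @ w - p))
    ```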

  9. A singular value decomposition linear programming (SVDLP) optimization technique for circular cone based robotic radiotherapy.

    Science.gov (United States)

    Liang, Bin; Li, Yongbao; Wei, Ran; Guo, Bin; Xu, Xuang; Liu, Bo; Li, Jiafeng; Wu, Qiuwen; Zhou, Fugen

    2018-01-05

    With robot-controlled linac positioning, robotic radiotherapy systems such as CyberKnife significantly increase the freedom of radiation beam placement, but also impose more challenges on treatment plan optimization. The resampling mechanism in the vendor-supplied treatment planning system (MultiPlan) cannot fully explore the increased beam direction search space. In addition, a sparse treatment plan (using fewer beams) is desired to improve treatment efficiency. This study proposes a singular value decomposition linear programming (SVDLP) optimization technique for circular collimator based robotic radiotherapy. The SVDLP approach initializes the input beams by simulating the process of covering the entire target volume with equivalent beam tapers. The dosimetric requirements are modeled as hard and soft constraints, and the sparsity of the treatment plan is achieved by compressive sensing. The proposed linear programming (LP) model optimizes the beam weights by minimizing the deviation from the soft constraints subject to the hard constraints, with a constraint on the ℓ1 norm of the beam weight. A singular value decomposition (SVD) based acceleration technique was developed for the LP model. Based on the degeneracy of the influence matrix, the model is first compressed into a lower dimension for optimization, and then back-projected to reconstruct the beam weight. After beam weight optimization, the number of beams is reduced by removing the beams with low weight, and the weights of the remaining beams are optimized using the same model. This beam reduction technique is further validated by a mixed integer programming (MIP) model. The SVDLP approach was tested on a lung case. The results demonstrate that the SVD acceleration technique speeds up the optimization by a factor of 4.8. Furthermore, the beam reduction achieves a similar plan quality to the globally optimal plan obtained by the MIP model, but is one to two orders of magnitude faster. Furthermore, the SVDLP

  10. A novel signal compression method based on optimal ensemble empirical mode decomposition for bearing vibration signals

    Science.gov (United States)

    Guo, Wei; Tse, Peter W.

    2013-01-01

    Today, remote machine condition monitoring is popular due to the continuous advancement in wireless communication. Bearings are the most frequently and easily failed components in many rotating machines. To accurately identify the type of bearing fault, large amounts of vibration data need to be collected. However, the volume of transmitted data cannot be too high because the bandwidth of wireless communication is limited. To solve this problem, the data are usually compressed before being transmitted to a remote maintenance center. This paper proposes a novel signal compression method that can substantially reduce the amount of data that needs to be transmitted without sacrificing the accuracy of fault identification. The proposed signal compression method is based on ensemble empirical mode decomposition (EEMD), which is an effective method for adaptively decomposing a vibration signal into different bands of signal components, termed intrinsic mode functions (IMFs). An optimization method was designed to automatically select appropriate EEMD parameters for the analyzed signal, and in particular to select the appropriate level of the added white noise in the EEMD method. An index termed the relative root-mean-square error was used to evaluate the decomposition performance under different noise levels to find the optimal level. After applying the optimal EEMD method to a vibration signal, the IMF relating to the bearing fault can be extracted from the original vibration signal. Compressing this signal component leaves a much smaller proportion of data samples to be retained for transmission and further reconstruction. The proposed compression method was also compared with the popular wavelet compression method. Experimental results demonstrate that the optimization of EEMD parameters can automatically find appropriate EEMD parameters for the analyzed signals, and the IMF-based compression method provides a higher compression ratio, while retaining the bearing defect
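
    A schematic of the noise-level scan, with a crude two-band moving-average split standing in for a real EEMD implementation, and one plausible reading of the relative RMSE index; the paper's exact definitions may differ.

    ```python
    import numpy as np

    def decompose(signal, noise_amp, rng, trials=20):
        """Stand-in for EEMD: ensemble-average crude two-band splits of
        noisy copies. (A real implementation returns the full set of IMFs.)"""
        fasts, slows = [], []
        for _ in range(trials):
            s = signal + noise_amp * rng.standard_normal(signal.size)
            slow = np.convolve(s, np.ones(11) / 11, mode="same")
            fasts.append(s - slow)
            slows.append(slow)
        return np.mean(fasts, axis=0), np.mean(slows, axis=0)

    def relative_rmse(signal, recon):
        return np.sqrt(np.mean((signal - recon) ** 2)) / np.std(signal)

    rng = np.random.default_rng(3)
    t = np.linspace(0, 1, 1024)
    signal = np.sin(2 * np.pi * 5 * t) + 0.3 * np.sin(2 * np.pi * 80 * t)

    # scan candidate white-noise amplitudes, score each decomposition by how
    # well its summed components reconstruct the signal (one plausible index)
    scores = {}
    for amp in (0.05, 0.1, 0.2, 0.4):
        fast, slow = decompose(signal, amp, rng)
        scores[amp] = relative_rmse(signal, fast + slow)
    best = min(scores, key=scores.get)
    print(scores, "-> chosen noise amplitude:", best)
    ```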

  11. An Improved Multiobjective Optimization Evolutionary Algorithm Based on Decomposition for Complex Pareto Fronts.

    Science.gov (United States)

    Jiang, Shouyong; Yang, Shengxiang

    2016-02-01

    The multiobjective evolutionary algorithm based on decomposition (MOEA/D) has been shown to be very efficient in solving multiobjective optimization problems (MOPs). In practice, the Pareto-optimal front (POF) of many MOPs has complex characteristics. For example, the POF may have a long tail, a sharp peak, or disconnected regions, which significantly degrades the performance of MOEA/D. This paper proposes an improved MOEA/D for handling such complex problems. In the proposed algorithm, a two-phase strategy (TP) is employed to divide the whole optimization procedure into two phases. Based on the crowdedness of the solutions found in the first phase, the algorithm decides whether or not to dedicate computational resources to handling unsolved subproblems in the second phase. In addition, a new niche scheme is introduced into the improved MOEA/D to guide the selection of mating parents and avoid producing duplicate solutions, which is very helpful for maintaining the population diversity when the POF of the MOP being optimized is discontinuous. The performance of the proposed algorithm is investigated on some existing benchmark MOPs and newly designed MOPs with complex POF shapes, in comparison with several MOEA/D variants and other approaches. The experimental results show that the proposed algorithm produces promising performance on these complex problems.
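
    The decomposition step itself is standard: a set of weight vectors turns the MOP into scalar subproblems, for example via the Tchebycheff aggregation. A minimal sketch with made-up candidate objective vectors:

    ```python
    import numpy as np

    def tchebycheff(f, weight, z_star):
        """Scalarize an objective vector f into one subproblem value:
        g(x | w, z*) = max_i w_i * |f_i(x) - z*_i|."""
        return np.max(weight * np.abs(f - z_star))

    # evenly spread weight vectors define the subproblems (2 objectives here)
    n_sub = 5
    w1 = np.linspace(0, 1, n_sub)
    weights = np.stack([w1, 1 - w1], axis=1)
    z_star = np.zeros(2)   # ideal point (best value seen per objective)

    f_candidates = np.array([[0.2, 0.9], [0.5, 0.5], [0.9, 0.1]])
    for w in weights:
        vals = [tchebycheff(f, w, z_star) for f in f_candidates]
        print(w, "-> best candidate:", int(np.argmin(vals)))
    ```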

  12. Optimization of dual-energy CT acquisitions for proton therapy using projection-based decomposition.

    Science.gov (United States)

    Vilches-Freixas, Gloria; Létang, Jean Michel; Ducros, Nicolas; Rit, Simon

    2017-09-01

    Dual-energy computed tomography (DECT) has been presented as a valid alternative to single-energy CT to reduce the uncertainty of the conversion of patient CT numbers to proton stopping power ratio (SPR) of tissues relative to water. The aim of this work was to optimize DECT acquisition protocols from simulations of X-ray images for the treatment planning of proton therapy using a projection-based dual-energy decomposition algorithm. We have investigated the effect of various voltages and tin filtration combinations on the SPR map accuracy and precision, and the influence of the dose allocation between the low-energy (LE) and the high-energy (HE) acquisitions. For all spectra combinations, virtual CT projections of the Gammex phantom were simulated with a realistic energy-integrating detector response model. Two situations were simulated: an ideal case without noise (infinite dose) and a realistic situation with Poisson noise corresponding to a 20 mGy total central dose. To determine the optimal dose balance, the proportion of LE-dose with respect to the total dose was varied from 10% to 90% while keeping the central dose constant, for four dual-energy spectra. SPR images were derived using a two-step projection-based decomposition approach. The ranges of 70 MeV, 90 MeV, and 100 MeV proton beams onto the adult female (AF) reference computational phantom of the ICRP were analytically determined from the reconstructed SPR maps. The energy separation between the incident spectra had a strong impact on the SPR precision. Maximizing the incident energy gap reduced image noise. However, the energy gap was not a good metric to evaluate the accuracy of the SPR. In terms of SPR accuracy, a large variability of the optimal spectra was observed when studying each phantom material separately. The SPR accuracy was almost flat in the 30-70% LE-dose range, while the precision showed a minimum slightly shifted in favor of lower LE-dose. Photon noise in the SPR images (20 mGy dose

  13. Fusion of remote sensing images based on pyramid decomposition with Baldwinian Clonal Selection Optimization

    Science.gov (United States)

    Jin, Haiyan; Xing, Bei; Wang, Lei; Wang, Yanyan

    2015-11-01

    In this paper, we put forward a novel fusion method for remote sensing images based on the contrast pyramid (CP) using the Baldwinian Clonal Selection Algorithm (BCSA), referred to as CPBCSA. Compared with classical methods based on the transform domain, the method proposed in this paper adopts an improved heuristic evolutionary algorithm, wherein the clonal selection algorithm includes Baldwinian learning. In the process of image fusion, BCSA automatically adjusts the fusion coefficients of different sub-bands decomposed by CP according to the value of the fitness function. BCSA also adaptively controls the optimal search direction of the coefficients and accelerates the convergence rate of the algorithm. Finally, the fusion images are obtained via weighted integration of the optimal fusion coefficients and CP reconstruction. Our experiments show that the proposed method outperforms existing methods in terms of both visual effect and objective evaluation criteria, and the fused images are more suitable for human visual or machine perception.

  14. Synthesis of Phase-Only Reconfigurable Linear Arrays Using Multiobjective Invasive Weed Optimization Based on Decomposition

    Directory of Open Access Journals (Sweden)

    Yan Liu

    2014-01-01

    Synthesis of a phase-only reconfigurable array aims at finding a common amplitude distribution and different phase distributions for the array to form different patterns. In this paper, the synthesis problem is formulated as a multiobjective optimization problem and solved by a newly proposed algorithm, MOEA/D-IWO. First, novel strategies are introduced into invasive weed optimization (IWO) to make the original IWO fit for solving multiobjective optimization problems; then, the modified IWO is integrated into the framework of the well-proven, competitive multiobjective optimization algorithm MOEA/D to form the new MOEA/D-IWO algorithm. Finally, two sets of experiments are carried out to illustrate the effectiveness of MOEA/D-IWO. In addition, MOEA/D-IWO is compared with MOEA/D-DE, a new version of MOEA/D. The comparison results show the superiority of MOEA/D-IWO and indicate its potential for solving antenna array synthesis problems.

  15. Using combinatorial problem decomposition for optimizing plutonium inventory management

    International Nuclear Information System (INIS)

    Niquil, Y.; Gondran, M.; Voskanian, A.; Paris-11 Univ., 91 - Orsay

    1997-03-01

    Plutonium inventory management optimization can be modeled as a very large 0-1 linear program. To solve it, problem decomposition is necessary, since other classic techniques are not efficient at such a size. The first decomposition consists in favoring the constraints that are the most difficult to satisfy and the variables that have the highest influence on the cost: fortunately, both correspond to stock output decisions. The second decomposition consists in mixing continuous linear program solving and integer linear program solving. Moreover, the first decisions to be taken are systematically favored, for they are based on data considered to be certain, whereas the data supporting later decisions are known with less accuracy and confidence. (author)

  16. Generalized Benders’ Decomposition for topology optimization problems

    DEFF Research Database (Denmark)

    Munoz Queupumil, Eduardo Javier; Stolpe, Mathias

    2011-01-01

    This article considers the non-linear mixed 0–1 optimization problems that appear in topology optimization of load carrying structures. The main objective is to present a Generalized Benders' Decomposition (GBD) method for solving single and multiple load minimum compliance (maximum stiffness) problems with discrete design variables to global optimality. We present the theoretical aspects of the method, including a proof of finite convergence and conditions for obtaining global optimal solutions. The method is also linked to, and compared with, an Outer-Approximation approach and a mixed 0–1 semidefinite programming formulation of the considered problem. Several ways to accelerate the method are suggested and an implementation is described. Finally, a set of truss topology optimization problems are numerically solved to global optimality.

  17. Daily Peak Load Forecasting Based on Complete Ensemble Empirical Mode Decomposition with Adaptive Noise and Support Vector Machine Optimized by Modified Grey Wolf Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Shuyu Dai

    2018-01-01

    Daily peak load forecasting is an important part of power load forecasting. Its accuracy greatly influences the formulation of power generation plans, power grid dispatching, power grid operation, and the power supply reliability of the power system. Therefore, it is of great significance to construct a suitable model to realize accurate prediction of the daily peak load. A novel daily peak load forecasting model, CEEMDAN-MGWO-SVM (Complete Ensemble Empirical Mode Decomposition with Adaptive Noise and Support Vector Machine Optimized by Modified Grey Wolf Optimization Algorithm), is proposed in this paper. Firstly, the model uses the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) algorithm to decompose the daily peak load sequence into multiple subsequences. Then, the model of modified grey wolf optimization and support vector machine (MGWO-SVM) is adopted to forecast the subsequences. Finally, the forecasting sequence is reconstructed and the forecasting result is obtained. Using CEEMDAN achieves noise reduction for the non-stationary daily peak load sequence, which makes the daily peak load sequence more regular. The model adopts the grey wolf optimization algorithm improved by introducing a population dynamic evolution operator and a nonlinear convergence factor to enhance the global search ability and avoid falling into local optima, which can better optimize the parameters of the SVM algorithm and improve the forecasting accuracy of the daily peak load. In this paper, three cases are used to test the forecasting accuracy of the CEEMDAN-MGWO-SVM model. We choose the models EEMD-MGWO-SVM (Ensemble Empirical Mode Decomposition and Support Vector Machine Optimized by Modified Grey Wolf Optimization Algorithm), MGWO-SVM (Support Vector Machine Optimized by Modified Grey Wolf Optimization Algorithm), GWO-SVM (Support Vector Machine Optimized by Grey Wolf Optimization Algorithm), SVM (Support Vector

  18. An optimization approach for fitting canonical tensor decompositions.

    Energy Technology Data Exchange (ETDEWEB)

    Dunlavy, Daniel M. (Sandia National Laboratories, Albuquerque, NM); Acar, Evrim; Kolda, Tamara Gibson

    2009-02-01

    Tensor decompositions are higher-order analogues of matrix decompositions and have proven to be powerful tools for data analysis. In particular, we are interested in the canonical tensor decomposition, otherwise known as the CANDECOMP/PARAFAC decomposition (CPD), which expresses a tensor as the sum of component rank-one tensors and is used in a multitude of applications such as chemometrics, signal processing, neuroscience, and web analysis. The task of computing the CPD, however, can be difficult. The typical approach is based on alternating least squares (ALS) optimization, which can be remarkably fast but is not very accurate. Previously, nonlinear least squares (NLS) methods have also been recommended; existing NLS methods are accurate but slow. In this paper, we propose the use of gradient-based optimization methods. We discuss the mathematical calculation of the derivatives and further show that they can be computed efficiently, at the same cost as one iteration of ALS. Computational experiments demonstrate that the gradient-based optimization methods are much more accurate than ALS and orders of magnitude faster than NLS.
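
    For reference, a compact numpy implementation of the ALS baseline the paper compares against (the gradient-based methods it proposes are not shown); conventions follow the standard mode-n unfolding.

    ```python
    import numpy as np

    def unfold(T, mode):
        return np.reshape(np.moveaxis(T, mode, 0), (T.shape[mode], -1), order="F")

    def khatri_rao(A, B):
        # column-wise Kronecker product
        return (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])

    def cp_als(T, rank, n_iter=200, seed=0):
        """Alternating least squares for the CANDECOMP/PARAFAC decomposition
        of a 3-way tensor: T ~ sum_r a_r (outer) b_r (outer) c_r."""
        rng = np.random.default_rng(seed)
        A, B, C = (rng.standard_normal((n, rank)) for n in T.shape)
        for _ in range(n_iter):
            A = unfold(T, 0) @ khatri_rao(C, B) @ np.linalg.pinv((C.T @ C) * (B.T @ B))
            B = unfold(T, 1) @ khatri_rao(C, A) @ np.linalg.pinv((C.T @ C) * (A.T @ A))
            C = unfold(T, 2) @ khatri_rao(B, A) @ np.linalg.pinv((B.T @ B) * (A.T @ A))
        return A, B, C

    # exact rank-2 tensor: ALS should drive the relative fit error to ~0
    rng = np.random.default_rng(1)
    A0, B0, C0 = (rng.standard_normal((n, 2)) for n in (4, 5, 6))
    T = np.einsum("ir,jr,kr->ijk", A0, B0, C0)
    A, B, C = cp_als(T, rank=2)
    err = np.linalg.norm(T - np.einsum("ir,jr,kr->ijk", A, B, C))
    print(err / np.linalg.norm(T))
    ```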

  19. Post-decomposition optimizations using pattern matching and rule-based clustering for multi-patterning technology

    Science.gov (United States)

    Wang, Lynn T.-N.; Madhavan, Sriram

    2018-03-01

    A pattern matching and rule-based polygon clustering methodology with DFM scoring is proposed to detect decomposition-induced manufacturability detractors and fix the layout designs prior to manufacturing. A pattern matcher scans the layout for pre-characterized patterns from a library. If a pattern is detected, rule-based clustering identifies the neighboring polygons that interact with those captured by the pattern. Then, DFM scores are computed for the possible layout fixes, and the fix with the best score is applied. The proposed methodology was applied to the metal 2 layer of two 20 nm products with a chip area of 11 mm². All the hotspots were resolved. The number of DFM spacing violations decreased by 7-15%.

  20. Using combinatorial problem decomposition for optimizing plutonium inventory management

    Energy Technology Data Exchange (ETDEWEB)

    Niquil, Y.; Gondran, M. [Electricite de France (EDF), 92 - Clamart (France). Direction des Etudes et Recherches; Voskanian, A. [Electricite de France (EDF), 92 - Clamart (France). Direction des Etudes et Recherches; Paris-11 Univ., 91 - Orsay (France). Lab. de Recherche en Informatique]

    1997-03-01

    Plutonium inventory management optimization can be modeled as a very large 0-1 linear program. To solve it, problem decomposition is necessary, since other classic techniques are not efficient at such a size. The first decomposition consists in favoring the constraints that are the most difficult to satisfy and the variables that have the highest influence on the cost: fortunately, both correspond to stock output decisions. The second decomposition consists in mixing continuous linear program solving and integer linear program solving. Moreover, the first decisions to be taken are systematically favored, for they are based on data considered to be certain, whereas the data supporting later decisions are known with less accuracy and confidence. (author) 7 refs.

  1. Dictionary-Based Tensor Canonical Polyadic Decomposition

    Science.gov (United States)

    Cohen, Jeremy Emile; Gillis, Nicolas

    2018-04-01

    To ensure the interpretability of extracted sources in tensor decomposition, we introduce in this paper a dictionary-based tensor canonical polyadic decomposition which enforces one factor to belong exactly to a known dictionary. A new formulation of sparse coding is proposed which enables dictionary-based canonical polyadic decomposition of high-dimensional tensors. The benefits of using a dictionary in tensor decomposition models are explored both in terms of parameter identifiability and estimation accuracy. The performance of the proposed algorithms is evaluated on the decomposition of simulated data and the unmixing of hyperspectral images.

  2. Decomposition in conic optimization with partially separable structure

    DEFF Research Database (Denmark)

    Sun, Yifan; Andersen, Martin Skovgaard; Vandenberghe, Lieven

    2014-01-01

    Decomposition techniques for linear programming are difficult to extend to conic optimization problems with general nonpolyhedral convex cones because the conic inequalities introduce an additional nonlinear coupling between the variables. However, in many applications the convex cones have...

  3. Parallel Algorithms for Graph Optimization using Tree Decompositions

    Energy Technology Data Exchange (ETDEWEB)

    Sullivan, Blair D [ORNL; Weerapurage, Dinesh P [ORNL; Groer, Christopher S [ORNL

    2012-06-01

    Although many NP-hard graph optimization problems can be solved in polynomial time on graphs of bounded tree-width, the adoption of these techniques into mainstream scientific computation has been limited due to the high memory requirements of the necessary dynamic programming tables and excessive runtimes of sequential implementations. This work addresses both challenges by proposing a set of new parallel algorithms for all steps of a tree decomposition-based approach to solve the maximum weighted independent set problem. A hybrid OpenMP/MPI implementation includes a highly scalable parallel dynamic programming algorithm leveraging the MADNESS task-based runtime, and computational results demonstrate scaling. This work enables a significant expansion of the scale of graphs on which exact solutions to maximum weighted independent set can be obtained, and forms a framework for solving additional graph optimization problems with similar techniques.
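
    To convey the flavor of the dynamic programming involved, here is the treewidth-1 special case: maximum weighted independent set on a tree, with per-node in/out tables that a full tree-decomposition DP generalizes to per-bag tables.

    ```python
    # Maximum weighted independent set by dynamic programming on a tree:
    # each node carries two table entries, the best weight with the node
    # excluded vs. included.

    def mwis_tree(adj, weight, root=0):
        def dp(v, parent):
            excl, incl = 0, weight[v]
            for u in adj[v]:
                if u == parent:
                    continue
                e, i = dp(u, v)
                excl += max(e, i)   # child may be in or out
                incl += e           # child must be out if v is in
            return excl, incl
        return max(dp(root, -1))

    adj = {0: [1, 2], 1: [0, 3, 4], 2: [0], 3: [1], 4: [1]}
    w = {0: 3, 1: 5, 2: 2, 3: 4, 4: 4}
    print(mwis_tree(adj, w))   # -> 11 (take nodes 0, 3, 4)
    ```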

  4. Forecasting of Energy Consumption in China Based on Ensemble Empirical Mode Decomposition and Least Squares Support Vector Machine Optimized by Improved Shuffled Frog Leaping Algorithm

    Directory of Open Access Journals (Sweden)

    Shuyu Dai

    2018-04-01

    For social development, energy is a crucial material whose consumption affects the stable and sustained development of the natural environment and economy. Currently, China is the largest energy consumer in the world. Therefore, establishing an appropriate energy consumption prediction model and accurately forecasting energy consumption in China have practical significance, and can provide a scientific basis for China to formulate reasonable energy production plans and energy-saving and emissions-reduction policies to boost sustainable development. To forecast energy consumption in China accurately, considering the main driving factors of energy consumption, a novel model, EEMD-ISFLA-LSSVM (Ensemble Empirical Mode Decomposition and Least Squares Support Vector Machine Optimized by Improved Shuffled Frog Leaping Algorithm), is proposed in this article. The prediction accuracy of energy consumption is influenced by various factors. In this article, first considering population, GDP (Gross Domestic Product), industrial structure (the proportion of the secondary industry added value), energy consumption structure, energy intensity, carbon emissions intensity, total imports and exports, and other influencing factors of energy consumption, the main driving factors of energy consumption are screened as the model input according to the sorting of grey relational degrees to realize feature dimension reduction. Then, the original energy consumption sequence of China is decomposed into multiple subsequences by Ensemble Empirical Mode Decomposition for de-noising. Next, the ISFLA-LSSVM (Least Squares Support Vector Machine Optimized by Improved Shuffled Frog Leaping Algorithm) model is adopted to forecast each subsequence, and the prediction sequences are reconstructed to obtain the forecasting result. After that, the data from 1990 to 2009 are taken as the training set, and the data from 2010 to 2016 are taken as the test set to make an

  5. A new decomposition-based computer-aided molecular/mixture design methodology for the design of optimal solvents and solvent mixtures

    DEFF Research Database (Denmark)

    Karunanithi, A.T.; Achenie, L.E.K.; Gani, Rafiqul

    2005-01-01

    This paper presents a novel computer-aided molecular/mixture design (CAMD) methodology for the design of optimal solvents and solvent mixtures. The molecular/mixture design problem is formulated as a mixed integer nonlinear programming (MINLP) model in which a performance objective is to be optimized subject to structural, property, and process constraints. The general molecular/mixture design problem is divided into two parts. For optimal single-compound design, the first part is solved. For mixture design, the single-compound design is first carried out to identify candidates and then the second part is solved to determine the optimal mixture. The decomposition of the CAMD MINLP model into relatively easy to solve subproblems is essentially a partitioning of the constraints from the original set. This approach is illustrated through two case studies. The first case study involves...

  6. Incentivized optimal advert assignment via utility decomposition

    NARCIS (Netherlands)

    Kelly, F.; Key, P.; Walton, N.

    2014-01-01

    We consider a large-scale Ad-auction where adverts are assigned over a potentially infinite number of searches. We capture the intrinsic asymmetries in information between advertisers, the advert platform and the space of searches: advertisers know and can optimize the average performance of their

  7. Optimization and Assessment of Wavelet Packet Decompositions with Evolutionary Computation

    Directory of Open Access Journals (Sweden)

    Schell Thomas

    2003-01-01

    In image compression, the wavelet transformation is a state-of-the-art component. Recently, wavelet packet decomposition has received considerable interest. A popular approach for wavelet packet decomposition is the near-best-basis algorithm using nonadditive cost functions. In contrast to additive cost functions, the wavelet packet decomposition of the near-best-basis algorithm is only suboptimal. We apply methods from the field of evolutionary computation (EC) to test the quality of the near-best-basis results. We observe a phenomenon: the results of the near-best-basis algorithm are inferior in terms of cost-function optimization but are superior in terms of rate/distortion performance compared to the EC methods.

  8. Optimization of wavelet decomposition for image compression and feature preservation.

    Science.gov (United States)

    Lo, Shih-Chung B; Li, Huai; Freedman, Matthew T

    2003-09-01

    A neural-network-based framework has been developed to search for an optimal wavelet kernel that can be used for a specific image processing task. In this paper, a linear convolution neural network was employed to seek a wavelet that minimizes errors and maximizes compression efficiency for an image or a defined image pattern, such as microcalcifications in mammograms and bone in computed tomography (CT) head images. We have used this method to evaluate the performance of tap-4 wavelets on mammograms, CTs, magnetic resonance images, and Lena images. We found that the Daubechies wavelet, or wavelets with similar filtering characteristics, can produce the highest compression efficiency with the smallest mean-square error for many image patterns, including general image textures as well as microcalcifications in digital mammograms. However, the Haar wavelet produces the best results on sharp edges and low-noise smooth areas. We also found that a special wavelet whose low-pass filter coefficients are (0.32252136, 0.85258927, 1.38458542, and -0.14548269) produces the best preservation outcomes in all tested microcalcification features, including the peak signal-to-noise ratio, the contrast, and the figure of merit in the wavelet lossy compression scheme. Having analyzed the spectrum of the wavelet filters, we can characterize the compression outcomes and feature preservation as a function of the wavelet. This newly developed optimization approach can be generalized to other image analysis applications where a wavelet decomposition is employed.

  9. Steganography based on pixel intensity value decomposition

    Science.gov (United States)

    Abdulla, Alan Anwar; Sellahewa, Harin; Jassim, Sabah A.

    2014-05-01

    This paper focuses on steganography based on pixel intensity value decomposition. A number of existing schemes such as binary, Fibonacci, Prime, Natural, Lucas, and Catalan-Fibonacci (CF) are evaluated in terms of payload capacity and stego quality. A new technique based on a specific representation is proposed to decompose pixel intensity values into 16 (virtual) bit-planes suitable for embedding purposes. The proposed decomposition has a desirable property whereby the sum of all bit-planes does not exceed the maximum pixel intensity value, i.e. 255. Experimental results demonstrate that the proposed technique offers an effective compromise between the payload capacity and stego quality of existing embedding techniques based on pixel intensity value decomposition. Its capacity is equal to that of binary and Lucas, while it offers a higher capacity than Fibonacci, Prime, Natural, and CF when the secret bits are embedded in the 1st least significant bit (LSB). When the secret bits are embedded in higher bit-planes, i.e., the 2nd LSB to the 8th most significant bit (MSB), the proposed scheme has more capacity than Natural-numbers-based embedding. However, from the 6th bit-plane onwards, the proposed scheme offers better stego quality. In general, the proposed decomposition scheme has less effect on pixel quality than most existing pixel intensity value decomposition techniques when embedding messages in higher bit-planes.
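
    A small sketch of pixel-value decomposition over a non-binary weight sequence (Fibonacci shown), with a one-digit toggle as the embedding step; the paper's specific 16-plane representation and its validity guards are not reproduced here.

    ```python
    # Decompose a pixel intensity (0..255) over a weight sequence and embed a
    # secret bit in a chosen "virtual bit-plane" by toggling that digit.
    # Binary uses weights 1,2,4,...; Fibonacci uses 1,2,3,5,8,... with the
    # greedy (Zeckendorf) expansion.

    def fib_weights(limit=255):
        w = [1, 2]
        while w[-1] + w[-2] <= limit:
            w.append(w[-1] + w[-2])
        return w

    def to_digits(value, weights):
        digits = []
        for wt in reversed(weights):     # greedy, most significant first
            digits.append(1 if value >= wt else 0)
            value -= wt if value >= wt else 0
        return digits[::-1]              # return least significant first

    weights = fib_weights()
    pixel = 173
    digits = to_digits(pixel, weights)
    assert sum(d * w for d, w in zip(digits, weights)) == pixel

    digits[0] = 1 - digits[0]            # toggle the 1st "LSB" for illustration
    stego = sum(d * w for d, w in zip(digits, weights))
    print(pixel, "->", stego)            # distortion is at most weights[0] = 1
    ```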

  10. Aerospace engineering design by systematic decomposition and multilevel optimization

    Science.gov (United States)

    Sobieszczanski-Sobieski, J.; Barthelemy, J. F. M.; Giles, G. L.

    1984-01-01

    A method for systematic analysis and optimization of large engineering systems is described, based on decomposing a large task into a set of smaller subtasks that are solved concurrently. The subtasks may be arranged in hierarchical levels. Analyses are carried out in each subtask using inputs received from other subtasks, and are followed by optimizations carried out from the bottom up. Each optimization at the lower levels is augmented by an analysis of its sensitivity to the inputs received from other subtasks, to account for the couplings among the subtasks in a formal manner. The analysis and optimization operations alternate iteratively until they converge to a system design whose performance is maximized with all constraints satisfied. The method, which is still under development, is tentatively validated by test cases in structural applications and an aircraft configuration optimization.

  11. Eigenvalue Decomposition-Based Modified Newton Algorithm

    Directory of Open Access Journals (Sweden)

    Wen-jun Wang

    2013-01-01

    When the Hessian matrix is not positive definite, the Newton direction may not be a descent direction. A new method, named the eigenvalue decomposition-based modified Newton algorithm, is presented: it first takes the eigenvalue decomposition of the Hessian matrix, then replaces the negative eigenvalues with their absolute values, and finally reconstructs the Hessian matrix and modifies the search direction. The new search direction is always a descent direction. The convergence of the algorithm is proven and a conclusion on the convergence rate is presented qualitatively. Finally, a numerical experiment is given comparing the convergence domains of the modified algorithm and the classical algorithm.
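
    A direct transcription of the idea, with a small numerical check on an indefinite Hessian; the small-eigenvalue floor is an added guard, not part of the abstract.

    ```python
    import numpy as np

    def modified_newton_direction(grad, hess):
        """Eigendecompose the Hessian, replace negative eigenvalues by their
        absolute values, rebuild, and solve for the search direction, which
        is then guaranteed to be a descent direction."""
        eigval, eigvec = np.linalg.eigh(hess)
        eigval = np.abs(eigval)
        eigval[eigval < 1e-12] = 1e-12          # guard against singularity (added)
        H_mod = eigvec @ np.diag(eigval) @ eigvec.T
        return -np.linalg.solve(H_mod, grad)

    # indefinite Hessian: the raw Newton step is not a descent direction here
    H = np.array([[2.0, 0.0], [0.0, -1.0]])
    g = np.array([1.0, 1.0])
    d_newton = -np.linalg.solve(H, g)
    d_mod = modified_newton_direction(g, H)
    print("g.d (Newton):  ", g @ d_newton)   # positive: ascent
    print("g.d (modified):", g @ d_mod)      # negative: descent
    ```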

  12. Sensitivity Analysis of the Proximal-Based Parallel Decomposition Methods

    Directory of Open Access Journals (Sweden)

    Feng Ma

    2014-01-01

    The proximal-based parallel decomposition methods were recently proposed to solve structured convex optimization problems. These algorithms are eligible for parallel computation and can be used efficiently for solving large-scale separable problems. In this paper, compared with the previous theoretical results, we show that the range of the involved parameters can be enlarged while convergence can still be established. Preliminary numerical tests on the stable principal component pursuit problem testify to the advantages of the enlargement.

  13. Pitfalls in VAR based return decompositions: A clarification

    DEFF Research Database (Denmark)

    Engsted, Tom; Pedersen, Thomas Quistgaard; Tanggaard, Carsten

    Based on Chen and Zhao's (2009) criticism of VAR based return decompositions, we explain in detail the various limitations and pitfalls involved in such decompositions. First, we show that Chen and Zhao's interpretation of their excess bond return decomposition is wrong: the residual component in their analysis is not "cashflow news" but "interest rate news", which should not be zero. Consequently, in contrast to what Chen and Zhao claim, their decomposition does not serve as a valid caution against VAR based decompositions. Second, we point out that in order for VAR based decompositions to be valid...

  14. An optimized ensemble local mean decomposition method for fault detection of mechanical components

    International Nuclear Information System (INIS)

    Zhang, Chao; Chen, Shuai; Wang, Jianguo; Li, Zhixiong; Hu, Chao; Zhang, Xiaogang

    2017-01-01

    Mechanical transmission systems have been widely adopted in most industrial applications, and issues related to the maintenance of these systems have attracted considerable attention in the past few decades. The recently developed ensemble local mean decomposition (ELMD) method shows satisfactory performance in fault detection of mechanical components for preventing catastrophic failures and reducing maintenance costs. However, the performance of ELMD often heavily depends on proper selection of its model parameters. To this end, this paper proposes an optimized ensemble local mean decomposition (OELMD) method to determine an optimum set of ELMD parameters for vibration signal analysis. In OELMD, an error index termed the relative root-mean-square error (Relative RMSE) is used to evaluate the decomposition performance of ELMD with a certain amplitude of added white noise. Once a maximum Relative RMSE, corresponding to an optimal noise amplitude, is determined, OELMD then identifies the optimal noise bandwidth and ensemble number based on the Relative RMSE and the signal-to-noise ratio (SNR), respectively. Thus, all three critical parameters of ELMD (i.e. noise amplitude, noise bandwidth, and ensemble number) are optimized by OELMD. The effectiveness of OELMD was evaluated using experimental vibration signals measured from three different mechanical components (i.e. a rolling bearing, a gear, and a diesel engine) under faulty operation conditions. (paper)

  15. An optimized ensemble local mean decomposition method for fault detection of mechanical components

    Science.gov (United States)

    Zhang, Chao; Li, Zhixiong; Hu, Chao; Chen, Shuai; Wang, Jianguo; Zhang, Xiaogang

    2017-03-01

    Mechanical transmission systems have been widely adopted in most industrial applications, and issues related to the maintenance of these systems have attracted considerable attention in the past few decades. The recently developed ensemble local mean decomposition (ELMD) method shows satisfactory performance in fault detection of mechanical components for preventing catastrophic failures and reducing maintenance costs. However, the performance of ELMD often heavily depends on proper selection of its model parameters. To this end, this paper proposes an optimized ensemble local mean decomposition (OELMD) method to determine an optimum set of ELMD parameters for vibration signal analysis. In OELMD, an error index termed the relative root-mean-square error (Relative RMSE) is used to evaluate the decomposition performance of ELMD with a certain amplitude of added white noise. Once a maximum Relative RMSE, corresponding to an optimal noise amplitude, is determined, OELMD then identifies the optimal noise bandwidth and ensemble number based on the Relative RMSE and the signal-to-noise ratio (SNR), respectively. Thus, all three critical parameters of ELMD (i.e. noise amplitude, noise bandwidth, and ensemble number) are optimized by OELMD. The effectiveness of OELMD was evaluated using experimental vibration signals measured from three different mechanical components (i.e. a rolling bearing, a gear, and a diesel engine) under faulty operation conditions.

  16. Cluster analysis by optimal decomposition of induced fuzzy sets

    Energy Technology Data Exchange (ETDEWEB)

    Backer, E

    1978-01-01

    Unsupervised pattern recognition is addressed, and the concept of fuzzy sets is explored to provide the investigator (data analyst) with the additional information supplied by pattern-class membership values, beyond the classical pattern-class assignments. The basic ideas behind the pattern recognition problem, the clustering problem, and the concept of fuzzy sets in cluster analysis are discussed, and a brief review of the literature on fuzzy cluster analysis is given. Some mathematical aspects of fuzzy set theory are briefly discussed; in particular, a measure of fuzziness is suggested. The optimization-clustering problem is characterized. Then the fundamental idea behind affinity decomposition is considered. Next, further analysis takes place with respect to the partitioning-characterization functions. The iterative optimization procedure is then addressed. The reclassification function is investigated and convergence properties are examined. Finally, several experiments in support of the suggested method are described. Four object data sets serve as appropriate test cases. 120 references, 70 figures, 11 tables. (RWR)

  17. Optimization and kinetics decomposition of monazite using NaOH

    International Nuclear Information System (INIS)

    MV Purwani; Suyanti; Deddy Husnurrofiq

    2015-01-01

    Decomposition of monazite with NaOH has been carried out at high temperature in a furnace. The parameters studied were the NaOH/monazite ratio, temperature, and decomposition time. From the decomposition experiments on 100 grams of monazite with NaOH, it can be concluded that the greater the NaOH/monazite ratio, the greater the conversion. In the temperature range of 400-700 °C, the decomposition reaction rate constant increases with increasing temperature. The optimum NaOH/monazite ratio was 1.5 and the optimum time was 3 hours. The relation between the NaOH/monazite ratio (x) and the conversion (y) follows the polynomial equation y = 0.1579x² - 0.2855x + 0.8301. The decomposition reaction of monazite with NaOH is a second-order reaction; the relationship between temperature (T) and the reaction rate constant (k) is ln k = -1006.8/T + 6.106, i.e. k = 448.541·e^(-1006.8/T), with frequency factor A = 448.541 and activation energy E = 8.371 kJ/mol. (author)
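
    A quick consistency check of the reported kinetics: the Arrhenius parameters follow directly from the fitted line ln k = -1006.8/T + 6.106 (the example temperature below is arbitrary).

    ```python
    import numpy as np

    R = 8.314  # gas constant, J/(mol K)

    # reported fit: ln k = -1006.8/T + 6.106
    slope, intercept = -1006.8, 6.106

    E = -slope * R          # activation energy, J/mol
    A = np.exp(intercept)   # frequency factor

    print(f"E = {E / 1000:.3f} kJ/mol")   # ~8.371 kJ/mol, matching the abstract
    print(f"A = {A:.1f}")                 # ~448.5, matching A = 448.541

    # rate constant at, e.g., 700 degC (973 K)
    T = 973.0
    print(f"k(973 K) = {A * np.exp(-E / (R * T)):.2f}")
    ```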

  18. Forecasting outpatient visits using empirical mode decomposition coupled with back-propagation artificial neural networks optimized by particle swarm optimization.

    Science.gov (United States)

    Huang, Daizheng; Wu, Zhihui

    2017-01-01

    Accurately predicting the trend of outpatient visits by mathematical modeling can help policy makers manage hospitals effectively, reasonably organize schedules for human resources and finances, and appropriately distribute hospital material resources. In this study, a hybrid method based on empirical mode decomposition and back-propagation artificial neural networks optimized by particle swarm optimization is developed to forecast outpatient visits on the basis of monthly numbers. The outpatient-visit data from January 2005 to December 2013 are retrieved as the original time series. The original time series is then decomposed into a finite and often small number of intrinsic mode functions by the empirical mode decomposition technique. Next, a three-layer back-propagation artificial neural network is constructed to forecast each intrinsic mode function. To improve network performance and avoid falling into a local minimum, particle swarm optimization is employed to optimize the weights and thresholds of the back-propagation artificial neural networks. Finally, the superposition of the forecasting results of the intrinsic mode functions is regarded as the ultimate forecasting value. Simulation indicates that the proposed method attains a better performance index than the other four methods.

  19. Avoiding spurious submovement decompositions: a globally optimal algorithm

    International Nuclear Information System (INIS)

    Rohrer, Brandon Robinson; Hogan, Neville

    2003-01-01

    Evidence for the existence of discrete submovements underlying continuous human movement has motivated many attempts to extract them. Although they produce visually convincing results, all of the methodologies that have been employed are prone to produce spurious decompositions. Examples of potential failures are given. A branch-and-bound algorithm for submovement extraction, capable of global nonlinear minimization (and hence capable of avoiding spurious decompositions), is developed and demonstrated.

  20. A Decomposition-Based Pricing Method for Solving a Large-Scale MILP Model for an Integrated Fishery

    Directory of Open Access Journals (Sweden)

    M. Babul Hasan

    2007-01-01

    The integrated fishery problem (IFP) can be decomposed into a trawler-scheduling subproblem and a fish-processing subproblem in two different ways by relaxing different sets of constraints. We tried conventional decomposition techniques, including subgradient optimization and Dantzig-Wolfe decomposition, both of which were unacceptably slow. We then developed a decomposition-based pricing method for solving the large fishery model, which gives excellent computation times. Numerical results for several planning-horizon models are presented.

  1. INDDGO: Integrated Network Decomposition & Dynamic programming for Graph Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Groer, Christopher S [ORNL; Sullivan, Blair D [ORNL; Weerapurage, Dinesh P [ORNL

    2012-10-01

    It is well-known that dynamic programming algorithms can utilize tree decompositions to provide a way to solve some NP-hard problems on graphs where the complexity is polynomial in the number of nodes and edges in the graph, but exponential in the width of the underlying tree decomposition. However, there has been relatively little computational work done to determine the practical utility of such dynamic programming algorithms. We have developed software to construct tree decompositions using various heuristics and have created a fast, memory-efficient dynamic programming implementation for solving maximum weighted independent set. We describe our software and the algorithms we have implemented, focusing on memory saving techniques for the dynamic programming. We compare the running time and memory usage of our implementation with other techniques for solving maximum weighted independent set, including a commercial integer programming solver and a semi-definite programming solver. Our results indicate that it is possible to solve some instances where the underlying decomposition has width much larger than suggested by the literature. For certain types of problems, our dynamic programming code runs several times faster than these other methods.

  2. Primal Decomposition-Based Method for Weighted Sum-Rate Maximization in Downlink OFDMA Systems

    Directory of Open Access Journals (Sweden)

    Weeraddana Chathuranga

    2010-01-01

    We consider the weighted sum-rate maximization problem in downlink Orthogonal Frequency Division Multiple Access (OFDMA) systems. Motivated by the increasing popularity of OFDMA in future wireless technologies, a low complexity suboptimal resource allocation algorithm is obtained for the joint optimization of multiuser subcarrier assignment and power allocation. The algorithm is based on an approximated primal decomposition method, inspired by exact primal decomposition techniques. The original nonconvex optimization problem is divided into two subproblems which can be solved independently. Numerical results are provided to compare the performance of the proposed algorithm to Lagrange relaxation based suboptimal methods as well as to the optimal exhaustive-search-based method. Despite its reduced computational complexity, the proposed algorithm provides close-to-optimal performance.

  3. Simulation-optimization of large agro-hydrosystems using a decomposition approach

    Science.gov (United States)

    Schuetze, Niels; Grundmann, Jens

    2014-05-01

    In this contribution, a stochastic simulation-optimization framework for decision support for the optimal planning and operation of the water supply of large agro-hydrosystems is presented. It is based on a decomposition solution strategy which allows for (i) the use of numerical process models together with efficient Monte Carlo simulations for a reliable estimation of higher quantiles of the minimum agricultural water demand for full and deficit irrigation strategies at small scale (farm level), and (ii) the utilization of the optimization results at small scale for solving water resources management problems at regional scale. As a secondary result of several simulation-optimization runs at the smaller scale, stochastic crop-water production functions (SCWPF) for different crops are derived, which can be used as a basic tool for assessing the impact of climate variability on the risk to potential yield. In addition, the microeconomic impacts of climate change and the vulnerability of the agro-ecological systems are evaluated. The developed methodology is demonstrated through its application to a real-world case study for the South Al-Batinah region in the Sultanate of Oman, where a coastal aquifer is affected by saltwater intrusion due to excessive groundwater withdrawal for irrigated agriculture.

  4. Spectral decomposition of optimal asset-liability management

    NARCIS (Netherlands)

    Decamps, M.; de Schepper, A.; Goovaerts, M.

    2009-01-01

    This paper concerns optimal asset-liability management when the assets and the liabilities are modeled by means of correlated geometric Brownian motions as suggested in Gerber and Shiu [2003. Geometric Brownian motion models for assets and liabilities: from pension funding to optimal dividends.

  5. Multisensors Cooperative Detection Task Scheduling Algorithm Based on Hybrid Task Decomposition and MBPSO

    Directory of Open Access Journals (Sweden)

    Changyun Liu

    2017-01-01

    A multisensor scheduling algorithm based on hybrid task decomposition and modified binary particle swarm optimization (MBPSO) is proposed. Firstly, aiming at the complex relationship between sensor resources and tasks, a hybrid task decomposition method is presented, and the resource scheduling problem is decomposed into subtasks; the sensor resource scheduling problem is thereby transformed into the matching problem of sensors and subtasks. Secondly, a resource-match optimization model based on the sensor resources and tasks is established, which considers several factors, such as target priority, detection benefit, handover times, and resource load. Finally, the MBPSO algorithm is proposed to solve the match optimization model effectively; it is based on improved update rules for particle velocity and position using a doubt factor and a modified sigmoid function. The experimental results show that the proposed algorithm performs better in terms of convergence speed, search capability, solution accuracy, and efficiency.
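
    For background, one velocity/position update of a plain binary PSO, where bits are resampled through a sigmoid of the velocity; the paper's doubt factor and modified sigmoid are specific refinements not reproduced here.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    def sigmoid(v):
        return 1.0 / (1.0 + np.exp(-v))

    # binary PSO: velocities are real-valued, positions are bits sampled
    # through a sigmoid of the velocity
    n_particles, n_bits = 4, 6
    x = rng.integers(0, 2, (n_particles, n_bits)).astype(float)
    v = np.zeros_like(x)
    pbest, gbest = x.copy(), x[0].copy()   # placeholders for the best solutions

    w_inertia, c1, c2 = 0.7, 1.5, 1.5
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v = w_inertia * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = (rng.random(x.shape) < sigmoid(v)).astype(float)
    print(x)
    ```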

  6. Benders’ Decomposition for Curriculum-Based Course Timetabling

    DEFF Research Database (Denmark)

    Bagger, Niels-Christian F.; Sørensen, Matias; Stidsen, Thomas R.

    2018-01-01

    In this paper we applied Benders’ decomposition to the Curriculum-Based Course Timetabling (CBCT) problem. The objective of the CBCT problem is to assign a set of lectures to time slots and rooms. Our approach was based on segmenting the problem into time scheduling and room allocation problems ... feasibility. We compared our algorithm with other approaches from the literature for a total of 32 data instances. We obtained a lower bound on 23 of the instances, which were at least as good as the lower bounds obtained by the state-of-the-art, and on eight of these, our lower bounds were higher. On two of the instances, our lower bound was an improvement of the currently best-known. Lastly, we compared our decomposition to the model without the decomposition on an additional six instances, which are much larger than the other 32. To our knowledge, this was the first time that lower bounds were calculated ...

  7. Global sensitivity analysis for fuzzy inputs based on the decomposition of fuzzy output entropy

    Science.gov (United States)

    Shi, Yan; Lu, Zhenzhou; Zhou, Yicheng

    2018-06-01

    To analyse the components of fuzzy output entropy, a decomposition method for fuzzy output entropy is first presented. After the decomposition, the total fuzzy output entropy can be expressed as the sum of the component fuzzy entropies contributed by the fuzzy inputs. Based on this decomposition, a new global sensitivity analysis model is established for measuring the effects of the uncertainties of fuzzy inputs on the output. The global sensitivity analysis model can not only rank the importance of the fuzzy inputs but also reflect, to a certain degree, the structural composition of the response function. Several examples illustrate the validity of the proposed global sensitivity analysis, making it a useful reference for engineering design and optimization of structural systems.

  8. Distributed Prognostics Based on Structural Model Decomposition

    Data.gov (United States)

    National Aeronautics and Space Administration — Within systems health management, prognostics focuses on predicting the remaining useful life of a system. In the model-based prognostics paradigm, physics-based...

  9. Asynchronous Task-Based Polar Decomposition on Manycore Architectures

    KAUST Repository

    Sukkari, Dalal

    2016-10-25

    This paper introduces the first asynchronous, task-based implementation of the polar decomposition on manycore architectures. Based on a new formulation of the iterative QR dynamically-weighted Halley algorithm (QDWH) for the calculation of the polar decomposition, the proposed implementation replaces the original and hostile LU factorization for the condition number estimator by the more adequate QR factorization to enable software portability across various architectures. Relying on fine-grained computations, the novel task-based implementation is also capable of taking advantage of the identity structure of the matrix involved during the QDWH iterations, which decreases the overall algorithmic complexity. Furthermore, the artifactual synchronization points have been severely weakened compared to previous implementations, unveiling look-ahead opportunities for better hardware occupancy. The overall QDWH-based polar decomposition can then be represented as a directed acyclic graph (DAG), where nodes represent computational tasks and edges define the inter-task data dependencies. The StarPU dynamic runtime system is employed to traverse the DAG, to track the various data dependencies and to asynchronously schedule the computational tasks on the underlying hardware resources, resulting in an out-of-order task scheduling. Benchmarking experiments show significant improvements against existing state-of-the-art high performance implementations (i.e., Intel MKL and Elemental) for the polar decomposition on latest shared-memory vendors' systems (i.e., Intel Haswell/Broadwell/Knights Landing, NVIDIA K80/P100 GPUs and IBM Power8), while maintaining high numerical accuracy.
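
    A serial NumPy sketch of the QR-based QDWH iteration that underlies the implementation, with the dynamically weighted Halley coefficients as published by Nakatsukasa et al.; the asynchronous, StarPU-scheduled machinery of the paper is not reproduced here.

        import numpy as np

        def qdwh_polar(A, tol=1e-12, maxit=20):
            """Serial QDWH iteration: A = U H with U orthogonal, H symmetric PSD.

            A minimal dense sketch of the QR-based dynamically weighted Halley
            iteration; norm and condition estimates are computed exactly for
            brevity, where a production code would use cheap estimators.
            """
            m, n = A.shape
            X = A / np.linalg.norm(A, 2)          # scale so sigma_max(X) = 1
            l = 1.0 / np.linalg.cond(X)           # lower bound on sigma_min(X)
            for _ in range(maxit):
                X_old = X
                # dynamically weighted Halley coefficients
                d = (4.0 * (1.0 - l * l) / l**4) ** (1.0 / 3.0)
                a = np.sqrt(1.0 + d) + 0.5 * np.sqrt(
                    8.0 - 4.0 * d + 8.0 * (2.0 - l * l) / (l * l * np.sqrt(1.0 + d)))
                b = (a - 1.0) ** 2 / 4.0
                c = a + b - 1.0
                # QR-based update: [sqrt(c) X; I] = [Q1; Q2] R
                Q, _ = np.linalg.qr(np.vstack([np.sqrt(c) * X, np.eye(n)]))
                Q1, Q2 = Q[:m], Q[m:]
                X = (b / c) * X + (a - b / c) / np.sqrt(c) * (Q1 @ Q2.T)
                l = l * (a + b * l * l) / (1.0 + c * l * l)
                if np.linalg.norm(X - X_old, 'fro') < tol:
                    break
            U = X
            H = U.T @ A
            return U, 0.5 * (H + H.T)             # symmetrize H

        A = np.random.default_rng(0).standard_normal((6, 6))
        U, H = qdwh_polar(A)
        print(np.linalg.norm(U.T @ U - np.eye(6)), np.linalg.norm(U @ H - A))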

  10. Maximum production rate optimization for sulphuric acid decomposition process in tubular plug-flow reactor

    International Nuclear Information System (INIS)

    Wang, Chao; Chen, Lingen; Xia, Shaojun; Sun, Fengrui

    2016-01-01

    A sulphuric acid decomposition process in a tubular plug-flow reactor with fixed inlet flow rate and completely controllable exterior wall temperature profile and reactant pressure profile is studied in this paper by using finite-time thermodynamics. The maximum production rate of the aimed product SO2 and the optimal exterior wall temperature profile and reactant pressure profile are obtained by using a nonlinear programming method. The optimal reactor with the maximum production rate is then compared with a reference reactor with a linear exterior wall temperature profile and with the optimal reactor with minimum entropy generation rate. The results show that the production rate of SO2 in the optimal reactor with the maximum production rate increases by more than 7%. The optimization of the temperature profile has little influence on the production rate, while the optimization of the reactant pressure profile can significantly increase it. The results obtained may provide guidelines for the design of real tubular reactors. - Highlights: • Sulphuric acid decomposition process in tubular plug-flow reactor is studied. • Fixed inlet flow rate and controllable temperature and pressure profiles are set. • Maximum production rate of aimed product SO2 is obtained. • Corresponding optimal temperature and pressure profiles are derived. • Production rate of SO2 of optimal reactor increases by 7%.

  11. Structural system identification based on variational mode decomposition

    Science.gov (United States)

    Bagheri, Abdollah; Ozbulut, Osman E.; Harris, Devin K.

    2018-03-01

    In this paper, a new structural identification method is proposed to identify the modal properties of engineering structures based on dynamic response decomposition using the variational mode decomposition (VMD). The VMD approach is a decomposition algorithm that has been developed as a means to overcome some of the drawbacks and limitations of the empirical mode decomposition method. The VMD-based modal identification algorithm decomposes the acceleration signal into a series of distinct modal responses and their respective center frequencies, such that when combined their cumulative modal responses reproduce the original acceleration response. The decaying amplitude of the extracted modal responses is then used to identify the modal damping ratios using a linear fitting function on the modal response data. Finally, after extracting modal responses from the available sensors, the mode shape vector for each of the decomposed modes in the system is identified from all obtained modal response data. To demonstrate the efficiency of the algorithm, a series of numerical, laboratory, and field case studies was evaluated. The laboratory case study utilized the vibration response of a three-story shear frame, whereas the field study leveraged the ambient vibration response of a pedestrian bridge to characterize the modal properties of the structure. The modal properties of the shear frame were computed using an analytical approach for comparison with the experimental modal frequencies. Results from these case studies demonstrated that the proposed method is efficient and accurate in identifying modal data of the structures.
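
    The damping-identification step lends itself to a short illustration: given one decomposed modal response (here synthesized directly rather than produced by VMD), the damping ratio follows from a linear fit to the logarithm of the Hilbert envelope. A hedged sketch:

        import numpy as np
        from scipy.signal import hilbert

        # synthetic decomposed modal response (what VMD would return for one mode)
        fs, fn, zeta = 200.0, 3.0, 0.02        # sampling rate, modal freq, damping
        t = np.arange(0.0, 10.0, 1.0 / fs)
        mode = np.exp(-zeta * 2 * np.pi * fn * t) * np.cos(2 * np.pi * fn * t)

        # damping from the Hilbert envelope: ln A(t) = ln A0 - zeta * wn * t,
        # so a linear fit to log(envelope) yields the decay rate zeta * wn
        env = np.abs(hilbert(mode))
        core = slice(50, -50)                  # trim end effects of the transform
        slope = np.polyfit(t[core], np.log(env[core]), 1)[0]
        print("identified damping ratio:", round(-slope / (2 * np.pi * fn), 4))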

  12. Decomposition with thermoeconomic isolation applied to the optimal synthesis/design and operation of an advanced tactical aircraft system

    International Nuclear Information System (INIS)

    Rancruel, Diego F.; Spakovsky, Michael R. von

    2006-01-01

    A decomposition methodology based on the concept of 'thermoeconomic isolation' and applied to the synthesis/design and operational optimization of an advanced tactical fighter aircraft is the focus of this paper. The total system is composed of six sub-systems of which five participate with degrees of freedom (493) in the optimization. They are the propulsion sub-system (PS), the environmental control sub-system (ECS), the fuel loop sub-system (FLS), the vapor compression and Polyalphaolefin (PAO) loops sub-system (VC/PAOS), and the airframe sub-system (AFS). The sixth sub-system comprises the expendable and permanent payloads as well as the equipment group. For each of the first five, detailed thermodynamic, geometric, physical, and aerodynamic models at both design and off-design were formulated and implemented. The most promising set of aircraft sub-system and system configurations were then determined based on both an energy integration and aerodynamic performance analysis at each stage of the mission (including the transient ones). Conceptual, time, and physical decomposition were subsequently applied to the synthesis/design and operational optimization of these aircraft configurations as well as to the highly dynamic process of heat generation and dissipation internal to the sub-systems. The physical decomposition strategy used (i.e., Iterative Local-Global Optimization, ILGO) is the first to successfully closely approach the theoretical condition of 'thermoeconomic isolation' when applied to highly complex, highly dynamic non-linear systems. Developed at our Center for Energy Systems Research, it has been effectively applied to a number of complex stationary and transportation applications.

  13. Decomposition with thermoeconomic isolation applied to the optimal synthesis/design and operation of an advanced tactical aircraft system

    Energy Technology Data Exchange (ETDEWEB)

    Rancruel, Diego F. [Center for Energy Systems Research, Department of Mechanical Engineering, Virginia Polytechnic Institute and State University, Blacksburg, VA 24060 (United States); Spakovsky, Michael R. von [Center for Energy Systems Research, Department of Mechanical Engineering, Virginia Polytechnic Institute and State University, Blacksburg, VA 24060 (United States)]. E-mail: vonspako@vt.edu

    2006-12-15

    A decomposition methodology based on the concept of 'thermoeconomic isolation' and applied to the synthesis/design and operational optimization of an advanced tactical fighter aircraft is the focus of this paper. The total system is composed of six sub-systems of which five participate with degrees of freedom (493) in the optimization. They are the propulsion sub-system (PS), the environmental control sub-system (ECS), the fuel loop subsystem (FLS), the vapor compression and Polyalphaolefin (PAO) loops sub-system (VC/PAOS), and the airframe sub-system (AFS). The sixth subsystem comprises the expendable and permanent payloads as well as the equipment group. For each of the first five, detailed thermodynamic, geometric, physical, and aerodynamic models at both design and off-design were formulated and implemented. The most promising set of aircraft sub-system and system configurations were then determined based on both an energy integration and aerodynamic performance analysis at each stage of the mission (including the transient ones). Conceptual, time, and physical decomposition were subsequently applied to the synthesis/design and operational optimization of these aircraft configurations as well as to the highly dynamic process of heat generation and dissipation internal to the subsystems. The physical decomposition strategy used (i.e. Iterative Local-Global Optimization-ILGO) is the first to successfully closely approach the theoretical condition of 'thermoeconomic isolation' when applied to highly complex, highly dynamic non-linear systems. Developed at our Center for Energy Systems research, it has been effectively applied to a number of complex stationary and transportation applications.

  14. A novel method for EMG decomposition based on matched filters

    Directory of Open Access Journals (Sweden)

    Ailton Luiz Dias Siqueira Júnior

    Introduction: Decomposition of electromyography (EMG) signals into the constituent motor unit action potentials (MUAPs) can allow for deeper insights into the underlying processes associated with the neuromuscular system. The vast majority of the methods for EMG decomposition found in the literature depend on complex algorithms and specific instrumentation. As an attempt to contribute to solving these issues, we propose a method based on a bank of matched filters for the decomposition of EMG signals. Methods: Four main units comprise our method: a bank of matched filters, a peak detector, a motor unit classifier and an overlapping resolution module. The system’s performance was evaluated with simulated and real EMG data. Classification accuracy was measured by comparing the responses of the system with known data from the simulator and with the annotations of a human expert. Results: The results show that decomposition of non-overlapping MUAPs can be achieved with up to 99% accuracy for signals with up to 10 active motor units and a signal-to-noise ratio (SNR) of 10 dB. For overlapping MUAPs with up to 10 motor units per signal and an SNR of 20 dB, the technique allows for correct classification of approximately 71% of the MUAPs. The method is capable of processing, decomposing and classifying a 50 ms window of data in less than 5 ms using a standard desktop computer. Conclusion: This article contributes to the ongoing research on EMG decomposition by describing a novel technique capable of delivering high rates of success by means of a fast algorithm, suggesting its possible use in future real-time embedded applications, such as myoelectric prosthesis control and biofeedback systems.
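
    A toy sketch of the matched-filter-bank front end with a peak detector, assuming zero-mean, unit-norm templates and an illustrative threshold; the motor unit classifier and overlap-resolution units of the method are not reproduced.

        import numpy as np
        from scipy.signal import find_peaks

        def matched_filter_detect(emg, templates, fs, threshold=0.7):
            """Correlate an EMG window with MUAP templates and pick firing peaks.

            Each normalized template acts as a matched filter; correlation peaks
            above `threshold` times the strongest response are kept as tentative
            firings of that motor unit.
            """
            detections = []
            for unit, tpl in enumerate(templates):
                tpl = (tpl - tpl.mean()) / (np.linalg.norm(tpl) + 1e-12)
                r = np.correlate(emg, tpl, mode='same')
                peaks, _ = find_peaks(r, height=threshold * np.abs(r).max(),
                                      distance=int(0.005 * fs))
                detections += [(p / fs, unit) for p in peaks]
            return sorted(detections)        # (time in s, motor-unit index) pairs

        # toy demo: embed one template twice in noise
        rng = np.random.default_rng(0)
        tpl = np.hanning(21) * np.sin(np.linspace(0, 2 * np.pi, 21))
        sig = 0.05 * rng.standard_normal(2000)
        for k in (400, 1200):
            sig[k:k + 21] += tpl
        print(matched_filter_detect(sig, [tpl], fs=1000.0))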

  15. An Optimal Strategy for Accurate Bulge-to-disk Decomposition of Disk Galaxies

    Energy Technology Data Exchange (ETDEWEB)

    Gao Hua [Department of Astronomy, School of Physics, Peking University, Beijing 100871 (China); Ho, Luis C. [Kavli Institute for Astronomy and Astrophysics, Peking University, Beijing 100871 (China)

    2017-08-20

    The development of two-dimensional (2D) bulge-to-disk decomposition techniques has shown their advantages over traditional one-dimensional (1D) techniques, especially for galaxies with non-axisymmetric features. However, the full potential of 2D techniques has yet to be fully exploited. Secondary morphological features in nearby disk galaxies, such as bars, lenses, rings, disk breaks, and spiral arms, are seldom accounted for in 2D image decompositions, even though some image-fitting codes, such as GALFIT, are capable of handling them. We present detailed, 2D multi-model and multi-component decomposition of high-quality R-band images of a representative sample of nearby disk galaxies selected from the Carnegie-Irvine Galaxy Survey, using the latest version of GALFIT. The sample consists of five barred and five unbarred galaxies, spanning Hubble types from S0 to Sc. Traditional 1D decomposition is also presented for comparison. In detailed case studies of the 10 galaxies, we successfully model the secondary morphological features. Through a comparison of best-fit parameters obtained from different input surface brightness models, we identify morphological features that significantly impact bulge measurements. We show that nuclear and inner lenses/rings and disk breaks must be properly taken into account to obtain accurate bulge parameters, whereas outer lenses/rings and spiral arms have a negligible effect. We provide an optimal strategy to measure bulge parameters of typical disk galaxies, as well as prescriptions to estimate realistic uncertainties of them, which will benefit subsequent decomposition of a larger galaxy sample.
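
    For orientation, the traditional 1D decomposition mentioned above can be sketched as a Sersic bulge plus exponential disk fit to a radial surface brightness profile; the profile below is synthetic and all parameter values are illustrative (this is not GALFIT's 2D multi-component fit).

        import numpy as np
        from scipy.optimize import curve_fit
        from scipy.special import gammaincinv

        def sersic_plus_disk(r, mu_e, r_e, n, mu_0, h):
            """Surface brightness [mag/arcsec^2]: Sersic bulge + exponential disk."""
            b_n = gammaincinv(2 * n, 0.5)                    # exact Sersic b_n
            I_bulge = 10 ** (-0.4 * mu_e) * np.exp(-b_n * ((r / r_e) ** (1 / n) - 1))
            I_disk = 10 ** (-0.4 * mu_0) * np.exp(-r / h)
            return -2.5 * np.log10(I_bulge + I_disk)

        # synthetic profile with known parameters plus measurement noise
        r = np.linspace(0.5, 60, 120)
        truth = (18.0, 2.5, 2.0, 20.0, 12.0)
        mu = sersic_plus_disk(r, *truth) \
             + np.random.default_rng(0).normal(0, 0.05, r.size)

        popt, _ = curve_fit(sersic_plus_disk, r, mu,
                            p0=(19, 3, 1.5, 21, 10), maxfev=10000)
        print("best-fit (mu_e, r_e, n, mu_0, h):", np.round(popt, 2))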

  16. An Optimal Strategy for Accurate Bulge-to-disk Decomposition of Disk Galaxies

    Science.gov (United States)

    Gao, Hua; Ho, Luis C.

    2017-08-01

    The development of two-dimensional (2D) bulge-to-disk decomposition techniques has shown their advantages over traditional one-dimensional (1D) techniques, especially for galaxies with non-axisymmetric features. However, the full potential of 2D techniques has yet to be fully exploited. Secondary morphological features in nearby disk galaxies, such as bars, lenses, rings, disk breaks, and spiral arms, are seldom accounted for in 2D image decompositions, even though some image-fitting codes, such as GALFIT, are capable of handling them. We present detailed, 2D multi-model and multi-component decomposition of high-quality R-band images of a representative sample of nearby disk galaxies selected from the Carnegie-Irvine Galaxy Survey, using the latest version of GALFIT. The sample consists of five barred and five unbarred galaxies, spanning Hubble types from S0 to Sc. Traditional 1D decomposition is also presented for comparison. In detailed case studies of the 10 galaxies, we successfully model the secondary morphological features. Through a comparison of best-fit parameters obtained from different input surface brightness models, we identify morphological features that significantly impact bulge measurements. We show that nuclear and inner lenses/rings and disk breaks must be properly taken into account to obtain accurate bulge parameters, whereas outer lenses/rings and spiral arms have a negligible effect. We provide an optimal strategy to measure bulge parameters of typical disk galaxies, as well as prescriptions to estimate realistic uncertainties of them, which will benefit subsequent decomposition of a larger galaxy sample.

  17. An Optimal Strategy for Accurate Bulge-to-disk Decomposition of Disk Galaxies

    International Nuclear Information System (INIS)

    Gao Hua; Ho, Luis C.

    2017-01-01

    The development of two-dimensional (2D) bulge-to-disk decomposition techniques has shown their advantages over traditional one-dimensional (1D) techniques, especially for galaxies with non-axisymmetric features. However, the full potential of 2D techniques has yet to be fully exploited. Secondary morphological features in nearby disk galaxies, such as bars, lenses, rings, disk breaks, and spiral arms, are seldom accounted for in 2D image decompositions, even though some image-fitting codes, such as GALFIT, are capable of handling them. We present detailed, 2D multi-model and multi-component decomposition of high-quality R-band images of a representative sample of nearby disk galaxies selected from the Carnegie-Irvine Galaxy Survey, using the latest version of GALFIT. The sample consists of five barred and five unbarred galaxies, spanning Hubble types from S0 to Sc. Traditional 1D decomposition is also presented for comparison. In detailed case studies of the 10 galaxies, we successfully model the secondary morphological features. Through a comparison of best-fit parameters obtained from different input surface brightness models, we identify morphological features that significantly impact bulge measurements. We show that nuclear and inner lenses/rings and disk breaks must be properly taken into account to obtain accurate bulge parameters, whereas outer lenses/rings and spiral arms have a negligible effect. We provide an optimal strategy to measure bulge parameters of typical disk galaxies, as well as prescriptions to estimate realistic uncertainties of them, which will benefit subsequent decomposition of a larger galaxy sample.

  18. Empirical projection-based basis-component decomposition method

    Science.gov (United States)

    Brendel, Bernhard; Roessl, Ewald; Schlomka, Jens-Peter; Proksa, Roland

    2009-02-01

    Advances in the development of semiconductor based, photon-counting x-ray detectors stimulate research in the domain of energy-resolving pre-clinical and clinical computed tomography (CT). For counting detectors acquiring x-ray attenuation in at least three different energy windows, an extended basis component decomposition can be performed in which, in addition to the conventional approach of Alvarez and Macovski, a third basis component is introduced, e.g., a gadolinium based CT contrast material. After the decomposition of the measured projection data into the basis component projections, conventional filtered-backprojection reconstruction is performed to obtain the basis-component images. In recent work, this basis component decomposition was obtained by maximizing the likelihood function of the measurements. This procedure is time consuming and often unstable for excessively noisy data or low intrinsic energy resolution of the detector. Therefore, alternative procedures are of interest. Here, we introduce a generalization of the idea of empirical dual-energy processing published by Stenner et al. to multi-energy, photon-counting CT raw data. Instead of working in the image domain, we use prior spectral knowledge about the acquisition system (tube spectra, bin sensitivities) to parameterize the line integrals of the basis component decomposition directly in the projection domain. We compare this empirical approach with the maximum-likelihood (ML) approach considering image noise and image bias (artifacts) and see that only a moderate noise increase is to be expected for small bias in the empirical approach. Given the drastic reduction of pre-processing time, the empirical approach is considered a viable alternative to the ML approach.

  19. Resonance-Based Sparse Signal Decomposition and its Application in Mechanical Fault Diagnosis: A Review.

    Science.gov (United States)

    Huang, Wentao; Sun, Hongjian; Wang, Weijie

    2017-06-03

    Mechanical equipment is the heart of industry. For this reason, mechanical fault diagnosis has drawn considerable attention. In terms of the rich information hidden in fault vibration signals, the processing and analysis techniques of vibration signals have become a crucial research issue in the field of mechanical fault diagnosis. Based on the theory of sparse decomposition, Selesnick proposed a novel nonlinear signal processing method: resonance-based sparse signal decomposition (RSSD). Since being put forward, RSSD has become widely recognized, and many RSSD-based methods have been developed to guide mechanical fault diagnosis. This paper attempts to summarize and review the theoretical developments and application advances of RSSD in mechanical fault diagnosis, and to provide a more comprehensive reference for those interested in RSSD and mechanical fault diagnosis. Following a brief introduction of RSSD's theoretical foundation, applications of RSSD in mechanical fault diagnosis are categorized, according to their optimization direction, into five aspects: original RSSD, parameter-optimized RSSD, subband-optimized RSSD, integrated optimized RSSD, and RSSD combined with other methods. On this basis, outstanding issues in current RSSD study are also pointed out, together with possible solutions. We hope this review will provide an insightful reference for researchers and readers who are interested in RSSD and mechanical fault diagnosis.

  20. A time-domain decomposition iterative method for the solution of distributed linear quadratic optimal control problems

    Science.gov (United States)

    Heinkenschloss, Matthias

    2005-01-01

    We study a class of time-domain decomposition-based methods for the numerical solution of large-scale linear quadratic optimal control problems. Our methods are based on a multiple shooting reformulation of the linear quadratic optimal control problem as a discrete-time optimal control (DTOC) problem. The optimality conditions for this DTOC problem lead to a linear block tridiagonal system. The diagonal blocks are invertible and are related to the original linear quadratic optimal control problem restricted to smaller time-subintervals. This motivates the application of block Gauss-Seidel (GS)-type methods for the solution of the block tridiagonal systems. Numerical experiments show that the spectral radii of the block GS iteration matrices are larger than one for typical applications, but that the eigenvalues of the iteration matrices decay to zero fast. Hence, while the GS method is not expected to converge for typical applications, it can be effective as a preconditioner for Krylov-subspace methods. This is confirmed by our numerical tests. A byproduct of this research is the insight that certain instantaneous control techniques can be viewed as the application of one step of the forward block GS method to the DTOC optimality system.

  1. Agent-Based Optimization

    CERN Document Server

    Jędrzejowicz, Piotr; Kacprzyk, Janusz

    2013-01-01

    This volume presents a collection of original research works by leading specialists focusing on novel and promising approaches in which the multi-agent system paradigm is used to support, enhance or replace traditional approaches to solving difficult optimization problems. The editors have invited several well-known specialists to present their solutions, tools, and models falling under the common denominator of agent-based optimization. The book consists of eight chapters covering examples of application of the multi-agent paradigm and respective customized tools to solve difficult optimization problems arising in different areas such as machine learning, scheduling, transportation and, more generally, distributed and cooperative problem solving.

  2. Satellite Image Time Series Decomposition Based on EEMD

    Directory of Open Access Journals (Sweden)

    Yun-long Kong

    2015-11-01

    Satellite Image Time Series (SITS) have recently been of great interest due to the emerging remote sensing capabilities for Earth observation. Trend and seasonal components are two crucial elements of SITS. In this paper, a novel framework for SITS decomposition based on Ensemble Empirical Mode Decomposition (EEMD) is proposed. EEMD is achieved by sifting an ensemble of adaptive orthogonal components called Intrinsic Mode Functions (IMFs). EEMD is noise-assisted and overcomes the drawback of mode mixing in conventional Empirical Mode Decomposition (EMD). Inspired by these advantages, the aim of this work is to employ EEMD to decompose SITS into IMFs and to choose relevant IMFs for the separation of seasonal and trend components. In a series of simulations, IMFs extracted by EEMD achieved a clear representation with physical meaning. The experimental results on 16-day composites of Moderate Resolution Imaging Spectroradiometer (MODIS) Normalized Difference Vegetation Index (NDVI) and Global Environment Monitoring Index (GEMI) time series with disturbance illustrate the effectiveness and stability of the proposed approach for monitoring tasks, such as the detection of abrupt changes.
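
    A minimal sketch of the EEMD-based separation on a synthetic NDVI-like series, assuming the third-party PyEMD package (distributed on PyPI as EMD-signal); the IMF-selection heuristic via zero-crossing counts is a simplification of the paper's choice of relevant IMFs.

        import numpy as np
        from PyEMD import EEMD    # third-party "EMD-signal" package, assumed installed

        # synthetic 16-day NDVI-like series: weak trend + annual cycle + noise
        t = np.arange(230.0)                       # ~10 years of 16-day composites
        ndvi = (0.3 + 0.0005 * t + 0.2 * np.sin(2 * np.pi * t / 23.0)
                + 0.03 * np.random.default_rng(0).standard_normal(t.size))

        eemd = EEMD(trials=100, noise_width=0.05)
        imfs = eemd.eemd(ndvi)                     # rows ordered from fine to coarse

        # crude relevance heuristic: a zero-crossing count gives each IMF a mean
        # period; the IMF closest to the annual cycle (23 samples) is "seasonal"
        periods = [2.0 * len(imf) / max((np.diff(np.sign(imf)) != 0).sum(), 1)
                   for imf in imfs]
        k = int(np.argmin([abs(p - 23.0) for p in periods]))
        seasonal, trend = imfs[k], imfs[-1]        # coarsest component ~ trend
        print(f"{len(imfs)} IMFs; seasonal IMF mean period ~ {periods[k]:.1f} samples")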

  3. Sparse time-frequency decomposition based on dictionary adaptation.

    Science.gov (United States)

    Hou, Thomas Y; Shi, Zuoqiang

    2016-04-13

    In this paper, we propose a time-frequency analysis method to obtain instantaneous frequencies and the corresponding decomposition by solving an optimization problem. In this optimization problem, the basis that is used to decompose the signal is not known a priori. Instead, it is adapted to the signal and is determined as part of the optimization problem. In this sense, this optimization problem can be seen as a dictionary adaptation problem, in which the dictionary is adaptive to one signal rather than to a training set as in dictionary learning. This dictionary adaptation problem is solved iteratively by using the augmented Lagrangian multiplier (ALM) method. We further accelerate the ALM method in each iteration by using the fast wavelet transform. We apply our method to decompose several signals, including signals with poor scale separation, signals with outliers or polluted by noise, and a real signal. The results show that this method can give accurate recovery of both the instantaneous frequencies and the intrinsic mode functions. © 2016 The Author(s).

  4. Design of tailor-made chemical blend using a decomposition-based computer-aided approach

    DEFF Research Database (Denmark)

    Yunus, Nor Alafiza; Gernaey, Krist; Manan, Z.A.

    2011-01-01

    Computer aided techniques form an efficient approach to solve chemical product design problems such as the design of blended liquid products (chemical blending). In chemical blending, one tries to find the best candidate, which satisfies the product targets defined in terms of desired product ... methodology for blended liquid products that identifies a set of feasible chemical blends. The blend design problem is formulated as a Mixed Integer Nonlinear Programming (MINLP) model where the objective is to find the optimal blended gasoline or diesel product subject to types of chemicals ... and their compositions and a set of desired target properties of the blended product as design constraints. This blend design problem is solved using a decomposition approach, which eliminates infeasible and/or redundant candidates gradually through a hierarchy of (property) model based constraints. This decomposition ...

  5. A new decomposition method for parallel processing multi-level optimization

    International Nuclear Information System (INIS)

    Park, Hyung Wook; Kim, Min Soo; Choi, Dong Hoon

    2002-01-01

    In practical designs, most multidisciplinary problems have large and complicated design systems. Since multidisciplinary problems involve hundreds of analyses and thousands of variables, the grouping of analyses and the order of the analyses within each group affect the speed of the total design cycle. Therefore, it is very important to reorder and regroup the original design processes in order to minimize the total computational cost, by decomposing large multidisciplinary problems into several MultiDisciplinary Analysis SubSystems (MDASS) and by processing them in parallel. In this study, a new decomposition method is proposed for parallel processing of multidisciplinary design optimization, such as Collaborative Optimization (CO) and the Individual Discipline Feasible (IDF) method. Numerical results for two example problems are presented to show the feasibility of the proposed method.

  6. Variance decomposition-based sensitivity analysis via neural networks

    International Nuclear Information System (INIS)

    Marseguerra, Marzio; Masini, Riccardo; Zio, Enrico; Cojazzi, Giacomo

    2003-01-01

    This paper illustrates a method for efficiently performing multiparametric sensitivity analyses of the reliability model of a given system. These analyses are of great importance for the identification of critical components in highly hazardous plants, such as nuclear or chemical ones, thus providing significant insights for their risk-based design and management. The technique used to quantify the importance of a component parameter with respect to the system model is based on a classical decomposition of the variance. When the model of the system is realistically complicated (e.g. by aging, stand-by, maintenance, etc.), its analytical evaluation soon becomes impractical and one is better off resorting to Monte Carlo simulation techniques which, however, could be computationally burdensome. Therefore, since the variance decomposition method requires a large number of system evaluations, each one to be performed by Monte Carlo, the need arises for possibly substituting the Monte Carlo simulation model with a fast, approximated algorithm. Here we investigate an approach which makes use of neural networks, appropriately trained on the results of a Monte Carlo system reliability/availability evaluation, to quickly provide, with reasonable approximation, the values of the quantities of interest for the sensitivity analyses. The work was a joint effort between the Department of Nuclear Engineering of the Polytechnic of Milan, Italy, and the Institute for Systems, Informatics and Safety, Nuclear Safety Unit of the Joint Research Centre in Ispra, Italy, which sponsored the project.
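
    A compact sketch of the idea: train a cheap surrogate (here a scikit-learn MLP standing in for the paper's neural network) on a limited budget of model runs, then estimate variance-decomposition (Sobol first-order) indices on the surrogate with a standard pick-freeze estimator; the toy model and sample sizes are illustrative.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        def expensive_model(x):
            """Toy stand-in for a Monte Carlo reliability evaluation."""
            return np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2

        # train a fast surrogate on a limited budget of "expensive" evaluations
        X_train = rng.uniform(-np.pi, np.pi, (500, 2))
        net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                           random_state=0).fit(X_train, expensive_model(X_train))

        # first-order Sobol indices via the pick-freeze (Jansen) estimator,
        # evaluated on the cheap surrogate instead of the simulation model
        N = 20000
        A = rng.uniform(-np.pi, np.pi, (N, 2))
        B = rng.uniform(-np.pi, np.pi, (N, 2))
        yA, yB = net.predict(A), net.predict(B)
        var = yA.var()
        for i in range(2):
            ABi = A.copy()
            ABi[:, i] = B[:, i]                  # resample coordinate i only
            S_i = 1.0 - np.mean((yB - net.predict(ABi)) ** 2) / (2.0 * var)
            print(f"first-order index S_{i + 1} ~ {S_i:.2f}")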

  7. Aligning observed and modelled behaviour based on workflow decomposition

    Science.gov (United States)

    Wang, Lu; Du, YuYue; Liu, Wei

    2017-09-01

    When business processes are mostly supported by information systems, the availability of event logs generated from these systems, as well as the requirement of appropriate process models, are increasing. Business processes can be discovered, monitored and enhanced by extracting process-related information. However, some events cannot be correctly identified because of the explosion of the amount of event logs. Therefore, a new process mining technique is proposed based on a workflow decomposition method in this paper. Petri nets (PNs) are used to describe business processes, and then conformance checking of event logs and process models is investigated. A decomposition approach is proposed to divide large process models and event logs into several separate parts that can be analysed independently, while an alignment approach based on a state equation method in PN theory enhances the performance of conformance checking. Both approaches are implemented in the process mining framework ProM. The correctness and effectiveness of the proposed methods are illustrated through experiments.

  8. Palm vein recognition based on directional empirical mode decomposition

    Science.gov (United States)

    Lee, Jen-Chun; Chang, Chien-Ping; Chen, Wei-Kuei

    2014-04-01

    Directional empirical mode decomposition (DEMD) has recently been proposed to make empirical mode decomposition suitable for texture analysis. Using DEMD, samples are decomposed into a series of images, referred to as two-dimensional intrinsic mode functions (2-D IMFs), from finer to larger scale. A DEMD-based two-directional, two-dimensional linear discriminant analysis (2DLDA) method for palm vein recognition is proposed. The proposed method progresses through three steps: (i) a set of 2-D IMF features of various scales and orientations are extracted using DEMD, (ii) the 2DLDA method is then applied to reduce the dimensionality of the feature space in both the row and column directions, and (iii) the nearest neighbor classifier is used for classification. We also propose two strategies for using the set of 2-D IMF features: ensemble DEMD vein representation (EDVR) and multichannel DEMD vein representation (MDVR). In experiments using palm vein databases, the proposed MDVR-based 2DLDA method achieved a recognition accuracy of 99.73%, thereby demonstrating its feasibility for palm vein recognition.

  9. An effective secondary decomposition approach for wind power forecasting using extreme learning machine trained by crisscross optimization

    International Nuclear Information System (INIS)

    Yin, Hao; Dong, Zhen; Chen, Yunlong; Ge, Jiafei; Lai, Loi Lei; Vaccaro, Alfredo; Meng, Anbo

    2017-01-01

    Highlights: • A secondary decomposition approach is applied in the data pre-processing. • The empirical mode decomposition is used to decompose the original time series. • IMF1 continues to be decomposed by applying wavelet packet decomposition. • Crisscross optimization algorithm is applied to train extreme learning machine. • The proposed SHD-CSO-ELM outperforms other previous methods in the literature. - Abstract: Large-scale integration of wind energy into the electric grid is restricted by its inherent intermittence and volatility. So the increased utilization of wind power necessitates its accurate prediction. The contribution of this study is to develop a new hybrid forecasting model for short-term wind power prediction by using a secondary hybrid decomposition approach. In the data pre-processing phase, the empirical mode decomposition is used to decompose the original time series into several intrinsic mode functions (IMFs). A unique feature is that the generated IMF1 continues to be decomposed into appropriate and detailed components by applying wavelet packet decomposition. In the training phase, all the transformed sub-series are forecasted with an extreme learning machine trained by our recently developed crisscross optimization algorithm (CSO). The final predicted values are obtained from aggregation. The results show that: (a) The performance of empirical mode decomposition can be significantly improved with its IMF1 decomposed by wavelet packet decomposition. (b) The CSO algorithm has satisfactory performance in addressing the premature convergence problem when applied to optimize extreme learning machine. (c) The proposed approach has great advantage over other previous hybrid models in terms of prediction accuracy.
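
    The secondary decomposition step can be sketched as follows, assuming the third-party PyEMD and PyWavelets packages; the ELM forecaster and the crisscross optimizer are not reproduced, only the construction of the sub-series that would be forecasted and aggregated.

        import numpy as np
        import pywt                # PyWavelets, assumed installed
        from PyEMD import EMD      # "EMD-signal" package, assumed installed

        # toy wind-power-like series
        t = np.arange(1024.0)
        x = (np.sin(0.03 * t) + 0.5 * np.sin(0.31 * t)
             + 0.2 * np.random.default_rng(0).standard_normal(t.size))

        imfs = EMD().emd(x)                   # primary decomposition
        imf1 = imfs[0]                        # highest-frequency, hardest to predict

        # secondary decomposition of IMF1 by wavelet packets: 3 levels -> 8 sub-bands
        wp = pywt.WaveletPacket(imf1, wavelet='db4', mode='symmetric', maxlevel=3)
        subbands = [node.data for node in wp.get_level(3, order='natural')]

        # each sub-band and each remaining IMF would be forecasted separately
        # (by the CSO-trained ELM in the paper) and the forecasts aggregated
        series_to_forecast = subbands + list(imfs[1:])
        print(len(series_to_forecast), "sub-series to forecast and aggregate")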

  10. An ill-conditioning conformal radiotherapy analysis based on singular values decomposition

    International Nuclear Information System (INIS)

    Lefkopoulos, D.; Grandjean, P.; Bendada, S.; Dominique, C.; Platoni, K.; Schlienger, M.

    1995-01-01

    Clinical experience in stereotactic radiotherapy of irregular complex lesions has shown that optimization algorithms are necessary to improve the dose distribution. We have developed a general optimization procedure which can be applied to different conformal irradiation techniques. In this presentation the procedure is tested on the stereotactic radiotherapy modality for complex cerebral lesions treated with a multi-isocentric technique based on the 'associated targets methodology'. In this inverse procedure we use singular value decomposition (SVD) analysis, which proposes several optimal solutions for the narrow-beam weights of each isocentre. The SVD analysis quantifies the ill-conditioning of the dosimetric calculation of the stereotactic irradiation using the condition number, which is the ratio of the largest to the smallest singular value. Our dose distribution optimization approach consists of studying the influence of the irradiation parameters on the stereotactic radiotherapy inverse problem. The adjustment of the different irradiation parameters in the 'SVD optimizer' procedure is realized taking into account the ratio of reconstruction quality to computation time, which will permit a more efficient use of the 'SVD optimizer' in clinical applications for real 3D lesions. The evaluation criteria for the choice of satisfactory solutions are based on dose-volume histograms and clinical considerations. We will present the efficiency of the 'SVD optimizer' in analysing and predicting the ill-conditioning in stereotactic radiotherapy and in recognizing the topography of the different beams in order to create an optimal reconstructed weighting vector. The planning of stereotactic treatments using the 'SVD optimizer' is examined for mono-isocentrically and complex dual-isocentrically treated lesions. The application of the SVD optimization technique provides conformal dose distributions for complex intracranial lesions. It is a general optimization procedure.
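
    The core quantities are easy to illustrate: the condition number from an SVD of a (here random, purely illustrative) dose matrix, and a truncated-SVD reconstruction of the beam-weight vector that discards the noise-amplifying small singular values. A hedged sketch:

        import numpy as np

        rng = np.random.default_rng(0)
        D = rng.random((200, 30)) ** 4          # toy dose matrix: points x narrow beams
        d_target = rng.random(200)              # prescribed dose at each point

        U, s, Vt = np.linalg.svd(D, full_matrices=False)
        print("condition number:", s[0] / s[-1])    # large ratio = ill-conditioned

        # truncated-SVD reconstruction of the weight vector: keep only the k
        # dominant singular components to suppress noise-amplifying directions
        k = np.searchsorted(-s, -s[0] * 1e-3)       # drop sigma_i < 1e-3 * sigma_1
        w = Vt[:k].T @ ((U[:, :k].T @ d_target) / s[:k])
        print(f"kept {k}/{s.size} components, residual "
              f"{np.linalg.norm(D @ w - d_target):.3f}")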

  11. Automatic classification of visual evoked potentials based on wavelet decomposition

    Science.gov (United States)

    Stasiakiewicz, Paweł; Dobrowolski, Andrzej P.; Tomczykiewicz, Kazimierz

    2017-04-01

    Diagnosis of the part of the visual system that is responsible for conducting compound action potentials is generally based on visual evoked potentials generated as a result of stimulation of the eye by an external light source. The condition of the patient's visual path is assessed by a set of parameters that describe the extremes of the time-domain characteristic, called waves. The decision process is compound, and the diagnosis therefore depends significantly on the experience of the doctor. The authors developed a procedure - based on wavelet decomposition and linear discriminant analysis - that ensures automatic classification of visual evoked potentials. The algorithm makes it possible to assign an individual case to the normal or pathological class. The proposed classifier has a 96.4% sensitivity at a 10.4% probability of false alarm in a group of 220 cases, and the area under the ROC curve equals 0.96, which, from the medical point of view, is a very good result.
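
    A minimal sketch of the wavelet-energy-plus-LDA pipeline on synthetic stand-in epochs; the wavelet choice, decomposition level, and data are illustrative assumptions, not the paper's settings.

        import numpy as np
        import pywt
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

        def wavelet_features(epoch, wavelet='db4', level=5):
            """Energy of each wavelet decomposition band of one VEP epoch."""
            coeffs = pywt.wavedec(epoch, wavelet, level=level)
            return np.array([np.sum(c ** 2) for c in coeffs])

        # synthetic epochs: "normal" has an earlier, larger P100-like bump
        rng = np.random.default_rng(0)
        t = np.linspace(0, 0.5, 256)

        def make_epoch(lat, amp):
            """One synthetic epoch: Gaussian bump at latency `lat` plus noise."""
            return amp * np.exp(-((t - lat) / 0.02) ** 2) \
                   + 0.5 * rng.standard_normal(t.size)

        X = np.array([wavelet_features(make_epoch(0.10, 5.0)) for _ in range(60)]
                     + [wavelet_features(make_epoch(0.14, 2.0)) for _ in range(60)])
        y = np.array([0] * 60 + [1] * 60)       # 0 = normal, 1 = pathological

        clf = LinearDiscriminantAnalysis().fit(X[::2], y[::2])    # train on half
        print("held-out accuracy:", clf.score(X[1::2], y[1::2]))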

  12. Quantum game theory based on the Schmidt decomposition

    International Nuclear Information System (INIS)

    Ichikawa, Tsubasa; Tsutsui, Izumi; Cheon, Taksu

    2008-01-01

    We present a novel formulation of quantum game theory based on the Schmidt decomposition, which has the merit that the entanglement of quantum strategies is manifestly quantified. We apply this formulation to 2-player, 2-strategy symmetric games and obtain a complete set of quantum Nash equilibria. Apart from those available with maximal entanglement, these quantum Nash equilibria are extensions of the Nash equilibria in classical game theory. The phase structure of the equilibria is determined for all values of entanglement, and thereby the possibility of resolving the dilemmas by entanglement in the game of Chicken, the Battle of the Sexes, the Prisoners' Dilemma, and the Stag Hunt is examined. We find that entanglement transforms these dilemmas into each other but cannot resolve them, except in the Stag Hunt game, where the dilemma can be alleviated to a certain degree.

  13. Quantum Image Encryption Algorithm Based on Image Correlation Decomposition

    Science.gov (United States)

    Hua, Tianxiang; Chen, Jiamin; Pei, Dongju; Zhang, Wenquan; Zhou, Nanrun

    2015-02-01

    A novel quantum gray-level image encryption and decryption algorithm based on image correlation decomposition is proposed. The correlation among image pixels is established by utilizing the superposition and measurement principle of quantum states. And a whole quantum image is divided into a series of sub-images. These sub-images are stored into a complete binary tree array constructed previously and then randomly performed by one of the operations of quantum random-phase gate, quantum revolving gate and Hadamard transform. The encrypted image can be obtained by superimposing the resulting sub-images with the superposition principle of quantum states. For the encryption algorithm, the keys are the parameters of random phase gate, rotation angle, binary sequence and orthonormal basis states. The security and the computational complexity of the proposed algorithm are analyzed. The proposed encryption algorithm can resist brute force attack due to its very large key space and has lower computational complexity than its classical counterparts.

  14. A study on optimal task decomposition of networked parallel computing using PVM

    International Nuclear Information System (INIS)

    Seong, Kwan Jae; Kim, Han Gyoo

    1998-01-01

    A numerical study is performed to investigate the effect of task decomposition on networked parallel processes using the Parallel Virtual Machine (PVM). In our study, a PVM program distributed over a network of workstations is used to solve a finite difference version of a one-dimensional heat equation, where the natural choice of PVM programming structure is the master-slave paradigm, with the aim of finding an optimal configuration resulting in the least computing time, including the communication overhead among machines. Given a set of PVM tasks comprising one master and five slave programs, it is found that there exists a pseudo-optimal number of machines, which does not necessarily coincide with the number of tasks, that yields the best performance when the network is under light usage. Increasing the number of machines beyond this optimum does not improve computing performance, since the increase in communication overhead among the excess machines offsets the decrease in CPU time obtained by distributing the PVM tasks among them. However, when the network traffic is heavy, the results exhibit a more random character, which is explained by the random nature of the data transfer time.
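
    The existence of a pseudo-optimal machine count can be illustrated with a simple cost model in which computation shrinks with the number of machines while master-slave communication overhead grows with it; the constants below are assumed, not measured values from the study.

        import numpy as np

        # illustrative cost model: T(n) = T_comp / n + T_comm * (n - 1), where the
        # first term is the distributed computation and the second the master-slave
        # communication overhead
        T_comp, T_comm = 120.0, 1.5          # seconds, assumed values
        n = np.arange(1, 21)
        T = T_comp / n + T_comm * (n - 1)
        n_opt = n[np.argmin(T)]
        print(f"pseudo-optimal machine count: {n_opt} "
              f"(analytic sqrt(T_comp/T_comm) ~ {np.sqrt(T_comp / T_comm):.1f})")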

  15. A Novel Memetic Algorithm Based on Decomposition for Multiobjective Flexible Job Shop Scheduling Problem

    Directory of Open Access Journals (Sweden)

    Chun Wang

    2017-01-01

    A novel multiobjective memetic algorithm based on decomposition (MOMAD) is proposed to solve the multiobjective flexible job shop scheduling problem (MOFJSP), which simultaneously minimizes makespan, total workload, and critical workload. Firstly, a population is initialized by employing an integration of different machine assignment and operation sequencing strategies. Secondly, a multiobjective memetic algorithm based on decomposition is presented by introducing a local search to MOEA/D. The Tchebycheff approach of MOEA/D converts the three-objective optimization problem into several single-objective optimization subproblems, and the weight vectors are grouped by K-means clustering. Some good individuals corresponding to different weight vectors are selected by the tournament mechanism of a local search. In the experiments, the influence of three different aggregation functions is first studied. Moreover, the effect of the proposed local search is investigated. Finally, MOMAD is compared with eight state-of-the-art algorithms on a series of well-known benchmark instances, and the experimental results show that the proposed algorithm outperforms or at least has comparable performance to the other algorithms.
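
    The Tchebycheff conversion at the heart of MOEA/D is compact enough to show directly; the objective and weight values below are illustrative.

        import numpy as np

        def tchebycheff(f, lam, z_star):
            """MOEA/D Tchebycheff aggregation g(x | lam, z*) = max_i lam_i * |f_i - z*_i|.

            f      : objective vector of one schedule
                     (makespan, total workload, critical workload)
            lam    : weight vector defining one single-objective subproblem
            z_star : ideal point, i.e. best value seen so far for each objective
            """
            f, lam, z_star = map(np.asarray, (f, lam, z_star))
            return float(np.max(lam * np.abs(f - z_star)))

        # a candidate schedule improves the subproblem if it lowers this scalar
        print(tchebycheff(f=[40, 210, 55], lam=[0.5, 0.2, 0.3], z_star=[38, 200, 50]))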

  16. Analysis of large fault trees based on functional decomposition

    International Nuclear Information System (INIS)

    Contini, Sergio; Matuzas, Vaidas

    2011-01-01

    With the advent of the Binary Decision Diagrams (BDD) approach in fault tree analysis, a significant enhancement has been achieved with respect to previous approaches, both in terms of efficiency and accuracy of the overall outcome of the analysis. However, the exponential increase of the number of nodes with the complexity of the fault tree may prevent the construction of the BDD. In these cases, the only way to complete the analysis is to reduce the complexity of the BDD by applying the truncation technique, which nevertheless implies the problem of estimating the truncation error or upper and lower bounds of the top-event unavailability. This paper describes a new method to analyze large coherent fault trees which can be advantageously applied when the working memory is not sufficient to construct the BDD. It is based on the decomposition of the fault tree into simpler disjoint fault trees containing a lower number of variables. The analysis of each simple fault tree is performed by using all the computational resources. The results from the analysis of all simpler fault trees are re-combined to obtain the results for the original fault tree. Two decomposition methods are herewith described: the first aims at determining the minimal cut sets (MCS) and the upper and lower bounds of the top-event unavailability; the second can be applied to determine the exact value of the top-event unavailability. Potentialities, limitations and possible variations of these methods will be discussed with reference to the results of their application to some complex fault trees.

  17. Analysis of large fault trees based on functional decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Contini, Sergio, E-mail: sergio.contini@jrc.i [European Commission, Joint Research Centre, Institute for the Protection and Security of the Citizen, 21020 Ispra (Italy); Matuzas, Vaidas [European Commission, Joint Research Centre, Institute for the Protection and Security of the Citizen, 21020 Ispra (Italy)

    2011-03-15

    With the advent of the Binary Decision Diagrams (BDD) approach in fault tree analysis, a significant enhancement has been achieved with respect to previous approaches, both in terms of efficiency and accuracy of the overall outcome of the analysis. However, the exponential increase of the number of nodes with the complexity of the fault tree may prevent the construction of the BDD. In these cases, the only way to complete the analysis is to reduce the complexity of the BDD by applying the truncation technique, which nevertheless implies the problem of estimating the truncation error or upper and lower bounds of the top-event unavailability. This paper describes a new method to analyze large coherent fault trees which can be advantageously applied when the working memory is not sufficient to construct the BDD. It is based on the decomposition of the fault tree into simpler disjoint fault trees containing a lower number of variables. The analysis of each simple fault tree is performed by using all the computational resources. The results from the analysis of all simpler fault trees are re-combined to obtain the results for the original fault tree. Two decomposition methods are herewith described: the first aims at determining the minimal cut sets (MCS) and the upper and lower bounds of the top-event unavailability; the second can be applied to determine the exact value of the top-event unavailability. Potentialities, limitations and possible variations of these methods will be discussed with reference to the results of their application to some complex fault trees.

  18. Identifying key nodes in multilayer networks based on tensor decomposition.

    Science.gov (United States)

    Wang, Dingjie; Wang, Haitao; Zou, Xiufen

    2017-06-01

    The identification of essential agents in multilayer networks characterized by different types of interactions is a crucial and challenging topic, one that is essential for understanding the topological structure and dynamic processes of multilayer networks. In this paper, we use the fourth-order tensor to represent multilayer networks and propose a novel method to identify essential nodes based on CANDECOMP/PARAFAC (CP) tensor decomposition, referred to as the EDCPTD centrality. This method is based on the perspective of multilayer networked structures, which integrate the information of edges among nodes and links between different layers to quantify the importance of nodes in multilayer networks. Three real-world multilayer biological networks are used to evaluate the performance of the EDCPTD centrality. The bar chart and ROC curves of these multilayer networks indicate that the proposed approach is a good alternative index to identify real important nodes. Meanwhile, by comparing the behavior of both the proposed method and the aggregated single-layer methods, we demonstrate that neglecting the multiple relationships between nodes may lead to incorrect identification of the most versatile nodes. Furthermore, the Gene Ontology functional annotation demonstrates that the identified top nodes based on the proposed approach play a significant role in many vital biological processes. Finally, we have implemented many centrality methods of multilayer networks (including our method and the published methods) and created a visual software based on the MATLAB GUI, called ENMNFinder, which can be used by other researchers.
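
    A hedged sketch of the pipeline, assuming the third-party tensorly package: build a fourth-order adjacency tensor, take a CP (PARAFAC) decomposition, and score nodes from the node-mode factor matrices. The scoring below is a simplification; the published EDCPTD index aggregates the CP factors more carefully.

        import numpy as np
        import tensorly as tl
        from tensorly.decomposition import parafac   # tensorly, assumed installed

        rng = np.random.default_rng(0)
        n_nodes, n_layers = 20, 3

        # 4th-order adjacency tensor (node, node, layer, layer): diagonal layer
        # slices hold intra-layer edges, node-diagonal entries the layer couplings
        T = np.zeros((n_nodes, n_nodes, n_layers, n_layers))
        for l in range(n_layers):
            A = (rng.random((n_nodes, n_nodes)) < 0.15).astype(float)
            T[:, :, l, l] = np.maximum(A, A.T)        # undirected intra-layer edges
        for i in range(n_nodes):
            T[i, i, :, :] = 1.0                       # uniform inter-layer coupling

        weights, factors = parafac(tl.tensor(T), rank=4)
        # simplified scoring: combine the two node-mode factor matrices
        score = np.abs(factors[0]).sum(axis=1) + np.abs(factors[1]).sum(axis=1)
        print("top-5 candidate key nodes:", np.argsort(score)[::-1][:5])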

  19. AN IMPROVED INTERFEROMETRIC CALIBRATION METHOD BASED ON INDEPENDENT PARAMETER DECOMPOSITION

    Directory of Open Access Journals (Sweden)

    J. Fan

    2018-04-01

    Interferometric SAR is sensitive to earth surface undulation. The accuracy of interferometric parameters plays a significant role in precise digital elevation model (DEM). The interferometric calibration is to obtain high-precision global DEM by calculating the interferometric parameters using ground control points (GCPs). However, interferometric parameters are always calculated jointly, making them difficult to decompose precisely. In this paper, we propose an interferometric calibration method based on independent parameter decomposition (IPD). Firstly, the parameters related to the interferometric SAR measurement are determined based on the three-dimensional reconstruction model. Secondly, the sensitivity of interferometric parameters is quantitatively analyzed after the geometric parameters are completely decomposed. Finally, each interferometric parameter is calculated based on IPD and the interferometric calibration model is established. We take Weinan of Shanxi province as an example and choose 4 TerraDEM-X image pairs to carry out the interferometric calibration experiment. The results show that the elevation accuracy of all SAR images is better than 2.54 m after interferometric calibration. Furthermore, the proposed method can obtain an accuracy of DEM products better than 2.43 m in the flat area and 6.97 m in the mountainous area, which proves the correctness and effectiveness of the proposed IPD-based interferometric calibration method. The results provide a technical basis for topographic mapping at 1:50000 and even larger scales in flat and mountainous areas.

  20. An Improved Interferometric Calibration Method Based on Independent Parameter Decomposition

    Science.gov (United States)

    Fan, J.; Zuo, X.; Li, T.; Chen, Q.; Geng, X.

    2018-04-01

    Interferometric SAR is sensitive to earth surface undulation. The accuracy of interferometric parameters plays a significant role in precise digital elevation model (DEM). The interferometric calibration is to obtain high-precision global DEM by calculating the interferometric parameters using ground control points (GCPs). However, interferometric parameters are always calculated jointly, making them difficult to decompose precisely. In this paper, we propose an interferometric calibration method based on independent parameter decomposition (IPD). Firstly, the parameters related to the interferometric SAR measurement are determined based on the three-dimensional reconstruction model. Secondly, the sensitivity of interferometric parameters is quantitatively analyzed after the geometric parameters are completely decomposed. Finally, each interferometric parameter is calculated based on IPD and interferometric calibration model is established. We take Weinan of Shanxi province as an example and choose 4 TerraDEM-X image pairs to carry out interferometric calibration experiment. The results show that the elevation accuracy of all SAR images is better than 2.54 m after interferometric calibration. Furthermore, the proposed method can obtain the accuracy of DEM products better than 2.43 m in the flat area and 6.97 m in the mountainous area, which can prove the correctness and effectiveness of the proposed IPD based interferometric calibration method. The results provide a technical basis for topographic mapping of 1:50000 and even larger scale in the flat area and mountainous area.

  1. Optimized waveform relaxation domain decomposition method for discrete finite volume non stationary convection diffusion equation

    International Nuclear Information System (INIS)

    Berthe, P.M.

    2013-01-01

    In the context of nuclear waste repositories, we consider the numerical discretization of the non-stationary convection diffusion equation. Discontinuous physical parameters and heterogeneous space and time scales lead us to use different space and time discretizations in different parts of the domain. In this work, we choose the discrete duality finite volume (DDFV) scheme and the discontinuous Galerkin scheme in time, coupled by an optimized Schwarz waveform relaxation (OSWR) domain decomposition method, because this allows the use of non-conforming space-time meshes. The main difficulty lies in finding an upwind discretization of the convective flux which remains local to a sub-domain and such that the multi-domain scheme is equivalent to the mono-domain one. These difficulties are first dealt with in the one-dimensional context, where different discretizations are studied. The chosen scheme introduces a hybrid unknown on the cell interfaces. The idea of upwinding with respect to this hybrid unknown is extended to the DDFV scheme in the two-dimensional setting. The well-posedness of the scheme and of an equivalent multi-domain scheme is shown. The latter is solved by an OSWR algorithm, the convergence of which is proved. The optimized parameters in the Robin transmission conditions are obtained by studying the continuous or discrete convergence rates. Several test cases, one of which is inspired by nuclear waste repositories, illustrate these results. (author) [fr]

  2. Kernel based pattern analysis methods using eigen-decompositions for reading Icelandic sagas

    DEFF Research Database (Denmark)

    Christiansen, Asger Nyman; Carstensen, Jens Michael

    We want to test the applicability of kernel based eigen-decomposition methods, compared to the traditional eigen-decomposition methods. We have implemented and tested three kernel based methods, namely PCA, MAF and MNF, all using a Gaussian kernel. We tested the methods on a multispectral image of a page in the book 'Hauksbok', which contains Icelandic sagas.
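
    A small sketch of the kernel PCA step with a Gaussian (RBF) kernel, treating pixels of a multispectral cube as samples; the data here are a synthetic stand-in, not the Hauksbok image, and scikit-learn's KernelPCA replaces the authors' own implementation.

        import numpy as np
        from sklearn.decomposition import KernelPCA

        # synthetic stand-in for a multispectral page image: h x w pixels, 9 bands
        rng = np.random.default_rng(0)
        h, w, bands = 48, 48, 9
        ink = rng.random((h, w)) < 0.2                     # "ink" pixels
        cube = rng.normal(0.6, 0.05, (h, w, bands))
        cube[ink] -= np.linspace(0.3, 0.1, bands)          # band-dependent darkening

        X = cube.reshape(-1, bands)                        # one sample per pixel
        kpca = KernelPCA(n_components=3, kernel='rbf', gamma=15.0)
        scores = kpca.fit_transform(X)                     # nonlinear components
        component_images = scores.reshape(h, w, 3)         # view as pseudo-images
        print(component_images.shape)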

  3. On the Dual-Decomposition-Based Resource and Power Allocation with Sleeping Strategy for Heterogeneous Networks

    KAUST Repository

    Alsharoa, Ahmad M.

    2015-05-01

    In this paper, the problem of radio and power resource management in Long Term Evolution heterogeneous networks (LTE HetNets) is investigated. The goal is to minimize the total power consumption of the network while satisfying the user quality of service determined by each target data rate. We study a model where one macrocell base station is placed in the cell center, and multiple small cell base stations and femtocell access points are distributed around it. The dual decomposition technique is adopted to jointly optimize the power and carrier allocation in the downlink direction, in addition to the selection of turned-off small cell base stations. Our numerical results investigate the performance of the proposed scheme versus different system parameters and show significant savings in total power consumption. © 2015 IEEE.
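
    A minimal sketch of the dual-decomposition idea in a toy setting: a single station allocates power over independent carriers to meet a target sum rate at minimum total power, with the coupling constraint handled by a subgradient update of one Lagrange multiplier. The channel gains, rate target and step size are arbitrary placeholders, not the paper's HetNet model.

        import numpy as np

        rng = np.random.default_rng(1)
        g = rng.uniform(0.2, 2.0, size=8)   # placeholder carrier gains
        R_target = 10.0                     # target sum rate (bit/s/Hz)

        # For a fixed multiplier lam, the Lagrangian
        #   sum_i p_i - lam * (sum_i log2(1 + g_i * p_i) - R_target)
        # separates per carrier; each subproblem has a closed-form solution.
        def primal(lam):
            return np.maximum(0.0, lam / np.log(2.0) - 1.0 / g)

        lam, step = 1.0, 0.05
        for _ in range(2000):
            p = primal(lam)
            rate = np.sum(np.log2(1.0 + g * p))
            lam = max(0.0, lam + step * (R_target - rate))  # dual ascent

        print("total power %.3f at sum rate %.3f" % (p.sum(), rate))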

  4. QR-decomposition based SENSE reconstruction using parallel architecture.

    Science.gov (United States)

    Ullah, Irfan; Nisar, Habab; Raza, Haseeb; Qasim, Malik; Inam, Omair; Omer, Hammad

    2018-04-01

    Magnetic Resonance Imaging (MRI) is a powerful medical imaging technique that provides essential clinical information about the human body. One major limitation of MRI is its long scan time. Implementation of advanced MRI algorithms on a parallel architecture (to exploit inherent parallelism) has great potential to reduce the scan time. Sensitivity Encoding (SENSE) is a Parallel Magnetic Resonance Imaging (pMRI) algorithm that utilizes receiver coil sensitivities to reconstruct MR images from the acquired under-sampled k-space data. At the heart of SENSE lies the inversion of a rectangular encoding matrix. This work presents a novel implementation of a GPU-based SENSE algorithm, which employs QR decomposition for the inversion of the rectangular encoding matrix. For a fair comparison, the performance of the proposed GPU-based SENSE reconstruction is evaluated against single- and multi-core CPU implementations using OpenMP. Several experiments against various acceleration factors (AFs) are performed using multichannel (8, 12 and 30) phantom and in-vivo human head and cardiac datasets. Experimental results show that the GPU significantly reduces the computation time of SENSE reconstruction as compared to multi-core CPU (approximately 12x speedup) and single-core CPU (approximately 53x speedup) without any degradation in the quality of the reconstructed images. Copyright © 2018 Elsevier Ltd. All rights reserved.
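
    The linear-algebra core of the approach, inverting a tall encoding matrix through QR decomposition, can be sketched in a few lines of NumPy as a CPU stand-in for the GPU implementation; the complex encoding matrix below is random, not actual coil sensitivities.

        import numpy as np

        rng = np.random.default_rng(2)
        E = rng.normal(size=(240, 64)) + 1j * rng.normal(size=(240, 64))
        y = rng.normal(size=240) + 1j * rng.normal(size=240)

        # QR-based least-squares solve of E x = y:
        # E = Q R with orthonormal Q, so x = R^{-1} Q^H y.
        Q, R = np.linalg.qr(E)
        x_qr = np.linalg.solve(R, Q.conj().T @ y)

        # Agreement with the reference least-squares solver.
        x_ls = np.linalg.lstsq(E, y, rcond=None)[0]
        print(np.allclose(x_qr, x_ls))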

  5. Tree decomposition based fast search of RNA structures including pseudoknots in genomes.

    Science.gov (United States)

    Song, Yinglei; Liu, Chunmei; Malmberg, Russell; Pan, Fangfang; Cai, Liming

    2005-01-01

    Searching genomes for RNA secondary structure with computational methods has become an important approach to the annotation of non-coding RNAs. However, due to the lack of efficient algorithms for accurate RNA structure-sequence alignment, computer programs capable of fast and effective genome-wide search for RNA secondary structures have not been available. In this paper, a novel RNA structure profiling model is introduced based on the notion of a conformational graph to specify the consensus structure of an RNA family. Tree decomposition yields a small tree width t for such conformational graphs (e.g., t = 2 for stem loops and only a slight increase for pseudoknots). Within this modelling framework, the optimal alignment of a sequence to the structure model corresponds to finding a maximum valued isomorphic subgraph and consequently can be accomplished through dynamic programming on the tree decomposition of the conformational graph in time O(k^t N^2), where k is a small parameter and N is the size of the profiled RNA structure. Experiments show that the application of the alignment algorithm to search in genomes yields the same search accuracy as methods based on a covariance model, with a significant reduction in computation time. In particular, very accurate searches for tmRNAs in bacterial genomes and for telomerase RNAs in yeast genomes can be accomplished in days, as opposed to the months required by other methods. The tree decomposition based search tool is free upon request and can be downloaded at http://w.uga.edu/RNA-informatics/software/index.php.

  6. A novel hybrid decomposition-and-ensemble model based on CEEMD and GWO for short-term PM2.5 concentration forecasting

    Science.gov (United States)

    Niu, Mingfei; Wang, Yufang; Sun, Shaolong; Li, Yongwu

    2016-06-01

    To enhance prediction reliability and accuracy, a hybrid model based on the promising principle of "decomposition and ensemble" and a recently proposed meta-heuristic called the grey wolf optimizer (GWO) is introduced for daily PM2.5 concentration forecasting. Compared with existing PM2.5 forecasting methods, the proposed model improves both prediction accuracy and the hit rates of directional prediction. The proposed model involves three main steps: (i) decomposing the original PM2.5 series into several intrinsic mode functions (IMFs) via complementary ensemble empirical mode decomposition (CEEMD) to simplify the complex data; (ii) individually predicting each IMF with support vector regression (SVR) optimized by GWO; (iii) integrating all predicted IMFs into the ensemble result as the final prediction by another SVR optimized by GWO. Seven benchmark models, including single artificial intelligence (AI) models, other decomposition-ensemble models with different decomposition methods, and models with the same decomposition-ensemble method but optimized by different algorithms, are considered to verify the superiority of the proposed hybrid model. The empirical study indicates that the proposed hybrid decomposition-ensemble model is remarkably superior to all considered benchmark models in both prediction accuracy and hit rates of directional prediction.
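
    A hedged sketch of the decomposition-and-ensemble recipe using the PyEMD package (assumed installed as EMD-signal) and scikit-learn's SVR; CEEMDAN stands in for CEEMD, fixed SVR hyperparameters stand in for GWO tuning, and the series is synthetic rather than real PM2.5 data.

        import numpy as np
        from PyEMD import CEEMDAN           # assumed: pip install EMD-signal
        from sklearn.svm import SVR

        rng = np.random.default_rng(3)
        t = np.arange(400)
        series = (np.sin(0.1 * t) + 0.5 * np.sin(0.6 * t)
                  + 0.2 * rng.normal(size=t.size))   # toy "PM2.5" series

        # Step 1: decompose the series into intrinsic mode functions.
        imfs = CEEMDAN()(series)

        # Step 2: one-step-ahead SVR per IMF using lagged values as features.
        def one_step_forecast(x, lags=8):
            X = np.column_stack([x[i:len(x) - lags + i] for i in range(lags)])
            model = SVR(C=10.0, gamma="scale").fit(X, x[lags:])
            return model.predict(x[-lags:].reshape(1, -1))[0]

        # Step 3: ensemble - here a plain sum of the per-IMF forecasts
        # (the paper uses a second GWO-tuned SVR for this step).
        print("next-step forecast:", sum(one_step_forecast(f) for f in imfs))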

  7. Optimal pattern synthesis for speech recognition based on principal component analysis

    Science.gov (United States)

    Korsun, O. N.; Poliyev, A. V.

    2018-02-01

    An algorithm for building an optimal pattern for automatic speech recognition, which increases the probability of correct recognition, is developed and presented in this work. The optimal pattern formation is based on the decomposition of an initial pattern into principal components, which makes it possible to reduce the dimension of the multi-parameter optimization problem. At the next step the training samples are introduced and the optimal estimates for the principal component decomposition coefficients are obtained by a numeric parameter optimization algorithm. Finally, we consider the experimental results, which show the improvement in speech recognition introduced by the proposed optimization algorithm.

  8. Enhancement of dynamic myocardial perfusion PET images based on low-rank plus sparse decomposition.

    Science.gov (United States)

    Lu, Lijun; Ma, Xiaomian; Mohy-Ud-Din, Hassan; Ma, Jianhua; Feng, Qianjin; Rahmim, Arman; Chen, Wufan

    2018-02-01

    The absolute quantification of dynamic myocardial perfusion (MP) PET imaging is challenged by the limited spatial resolution of individual frame images due to division of the data into shorter frames. This study aims to develop a method for the restoration and enhancement of dynamic PET images. We propose that the image restoration model should be based on multiple constraints rather than a single constraint, given that the image characteristics are hardly described by a single constraint alone. At the same time, it may be possible, but not optimal, to regularize the image with multiple constraints simultaneously. Fortunately, MP PET images can be decomposed into a superposition of background and dynamic components via low-rank plus sparse (L + S) decomposition. Thus, we propose an L + S decomposition based MP PET image restoration model and express it as a convex optimization problem. An iterative soft thresholding algorithm was developed to solve the problem. Using realistic dynamic 82Rb MP PET scan data, we optimized and compared its performance with other restoration methods. The proposed method resulted in substantial visual as well as quantitative accuracy improvements in terms of noise versus bias performance, as demonstrated in extensive 82Rb MP PET simulations. In particular, the myocardium defect in the MP PET images showed an improved visual as well as contrast versus noise tradeoff. The proposed algorithm was also applied to an 8-min clinical cardiac 82Rb MP PET study performed on the GE Discovery PET/CT, and demonstrated improved quantitative accuracy (CNR and SNR) compared to other algorithms. The proposed method is effective for the restoration and enhancement of dynamic PET images. Copyright © 2017 Elsevier B.V. All rights reserved.
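
    A hedged sketch of a generic low-rank plus sparse split via alternating singular-value and elementwise soft thresholding; the matrix (frames in columns), thresholds and iteration count are illustrative placeholders rather than the paper's convex restoration model.

        import numpy as np

        def soft(x, tau):
            """Elementwise soft thresholding."""
            return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

        def l_plus_s(M, tau_l=1.0, tau_s=0.1, iters=100):
            """Split M into low-rank L (background) + sparse S (dynamics)."""
            L, S = np.zeros_like(M), np.zeros_like(M)
            for _ in range(iters):
                U, sv, Vt = np.linalg.svd(M - S, full_matrices=False)
                L = (U * soft(sv, tau_l)) @ Vt     # singular-value thresholding
                S = soft(M - L, tau_s)             # sparse residual update
            return L, S

        # Toy "dynamic frames": rank-1 background + sparse events + noise.
        rng = np.random.default_rng(4)
        M = (np.outer(rng.random(64), np.ones(30))
             + 2.0 * (rng.random((64, 30)) < 0.05)
             + 0.01 * rng.normal(size=(64, 30)))

        L, S = l_plus_s(M)
        print("rank(L) =", np.linalg.matrix_rank(L, tol=1e-3),
              " nnz(S) =", int(np.count_nonzero(S)))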

  9. Eigenvalue-eigenvector decomposition (EED) analysis of dissimilarity and covariance matrix obtained from total synchronous fluorescence spectral (TSFS) data sets of herbal preparations: Optimizing the classification approach

    Science.gov (United States)

    Tarai, Madhumita; Kumar, Keshav; Divya, O.; Bairi, Partha; Mishra, Kishor Kumar; Mishra, Ashok Kumar

    2017-09-01

    The present work compares the dissimilarity and covariance based unsupervised chemometric classification approaches by taking the total synchronous fluorescence spectroscopy data sets acquired for the cumin and non-cumin based herbal preparations. The conventional decomposition method involves eigenvalue-eigenvector analysis of the covariance of the data set and finds the factors that can explain the overall major sources of variation present in the data set. The conventional approach does this irrespective of the fact that the samples belong to intrinsically different groups and hence leads to poor class separation. The present work shows that classification of such samples can be optimized by performing the eigenvalue-eigenvector decomposition on the pair-wise dissimilarity matrix.
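
    The two routes compared above can be illustrated compactly: eigendecomposition of the covariance matrix (classical PCA) versus eigendecomposition of a double-centred pairwise dissimilarity matrix, in the spirit of classical multidimensional scaling; the two synthetic classes below stand in for cumin and non-cumin TSFS spectra.

        import numpy as np

        rng = np.random.default_rng(5)
        A = rng.normal(size=(20, 50)) + np.linspace(0, 1, 50)   # class 1
        B = rng.normal(size=(20, 50)) - np.linspace(0, 1, 50)   # class 2
        X = np.vstack([A, B])

        # Route 1: eigendecomposition of the covariance matrix (PCA scores).
        Xc = X - X.mean(axis=0)
        evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
        scores_cov = Xc @ evecs[:, ::-1][:, :2]

        # Route 2: eigendecomposition of the double-centred squared
        # Euclidean dissimilarity matrix (classical MDS scores).
        D2 = np.square(np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1))
        n = D2.shape[0]
        J = np.eye(n) - np.ones((n, n)) / n
        evals_d, evecs_d = np.linalg.eigh(-0.5 * J @ D2 @ J)
        scores_dis = (evecs_d[:, ::-1][:, :2]
                      * np.sqrt(np.maximum(evals_d[::-1][:2], 0)))

        print(scores_cov.shape, scores_dis.shape)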

  10. Chaotic Multiobjective Evolutionary Algorithm Based on Decomposition for Test Task Scheduling Problem

    Directory of Open Access Journals (Sweden)

    Hui Lu

    2014-01-01

    Full Text Available The test task scheduling problem (TTSP) is a complex optimization problem with many local optima. In this paper, a hybrid chaotic multiobjective evolutionary algorithm based on decomposition (CMOEA/D) is presented to avoid becoming trapped in local optima and to obtain high quality solutions. First, we propose an improved integrated encoding scheme (IES) to increase efficiency. Then ten chaotic maps are applied to the multiobjective evolutionary algorithm based on decomposition (MOEA/D) in three phases, namely population initialization and the crossover and mutation operators. To identify a good approach for hybridizing MOEA/D with chaos and to demonstrate the effectiveness of the improved IES, several experiments are performed. The Pareto fronts and the statistical results demonstrate that different chaotic maps in different phases have different effects on solving the TTSP, especially the circle map and the ICMIC map. The degree of similarity between the distribution of a chaotic map and the problem is an essential factor for the application of chaotic maps. In addition, experiments comparing CMOEA/D with variable neighborhood MOEA/D (VNM) indicate that our algorithm has the best performance in solving the TTSP.
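
    A small sketch of the chaotic-initialization idea: iterating a chaotic map (logistic and circle maps shown, with common textbook constants rather than the paper's settings) to spread an initial population over [0, 1]^dim.

        import numpy as np

        def logistic_map(x, r=4.0):
            return r * x * (1.0 - x)

        def circle_map(x, omega=0.5, k=2.2):
            return (x + omega - (k / (2 * np.pi)) * np.sin(2 * np.pi * x)) % 1.0

        def chaotic_population(pop_size, dim, step, seed=0.37):
            """Fill a population in [0,1]^dim by iterating a chaotic map."""
            pop, x = np.empty((pop_size, dim)), seed
            for i in range(pop_size):
                for j in range(dim):
                    x = step(x)
                    pop[i, j] = x
            return pop

        print(chaotic_population(20, 5, logistic_map).shape,
              chaotic_population(20, 5, circle_map).shape)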

  11. Photoacoustic imaging optimization with raw signal deconvolution and empirical mode decomposition

    Science.gov (United States)

    Guo, Chengwen; Wang, Jing; Qin, Yu; Zhan, Hongchen; Yuan, Jie; Cheng, Qian; Wang, Xueding

    2018-02-01

    The photoacoustic (PA) signal of an ideal optical absorber is a single N-shape wave. PA signals of complicated biological tissue can be considered as the combination of individual N-shape waves. However, the N-shape wave basis not only complicates the subsequent work, but also results in aliasing between adjacent micro-structures, which deteriorates the quality of the final PA images. In this paper, we propose a method to improve PA image quality through signal processing applied directly to the raw signals, including deconvolution and empirical mode decomposition (EMD). During the deconvolution procedure, the raw PA signals are deconvolved with a system-dependent point spread function (PSF) which is measured in advance. Then, EMD is adopted to adaptively re-shape the PA signals under two constraints, positive polarity and spectral consistency. With our proposed method, the resulting PA images yield more detailed structural information. Micro-structures are clearly separated and revealed. To validate the effectiveness of this method, we present numerical simulations and phantom studies consisting of a densely distributed point-source model and a blood vessel model. In the future, our study might hold potential for clinical PA imaging as it can help to distinguish micro-structures in the optimized images and even measure the size of objects from deconvolved signals.
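
    The deconvolution step alone can be sketched with frequency-domain Wiener deconvolution against a known point spread function; the PSF, the idealized N-shape source and the regularization constant are placeholders (the paper measures its PSF in advance and follows with EMD, omitted here).

        import numpy as np

        rng = np.random.default_rng(6)
        n = 512
        t = np.arange(n)

        source = np.zeros(n)
        source[200:210] = np.linspace(1, -1, 10)     # idealized N-shape wave
        psf = np.exp(-0.5 * ((t - n // 2) / 4.0) ** 2)
        psf /= psf.sum()

        # Simulated raw signal: blur with the PSF, then add noise.
        H = np.fft.fft(np.fft.ifftshift(psf))
        raw = np.real(np.fft.ifft(np.fft.fft(source) * H))
        raw += 0.01 * rng.normal(size=n)

        # Wiener deconvolution: X = Y * conj(H) / (|H|^2 + eps).
        Y = np.fft.fft(raw)
        restored = np.real(np.fft.ifft(Y * np.conj(H) / (np.abs(H) ** 2 + 1e-3)))
        print("relative restoration error:",
              np.linalg.norm(restored - source) / np.linalg.norm(source))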

  12. The thermal decomposition behavior of ammonium perchlorate and of an ammonium-perchlorate-based composite propellant

    Energy Technology Data Exchange (ETDEWEB)

    Behrens, R.; Minier, L.

    1998-03-24

    The thermal decomposition of ammonium perchlorate (AP) and ammonium-perchlorate-based composite propellants is studied using the simultaneous thermogravimetric modulated beam mass spectrometry (STMBMS) technique. The main objective of the present work is to evaluate whether the STMBMS can provide new data on these materials that will have sufficient detail on the reaction mechanisms and associated reaction kinetics to permit creation of a detailed model of the thermal decomposition process. Such a model is a necessary ingredient to engineering models of ignition and slow-cookoff for these AP-based composite propellants. Results show that the decomposition of pure AP is controlled by two processes. One occurs at lower temperatures (240 to 270 °C), produces mainly H2O, O2, Cl2, N2O and HCl, and is shown to occur in the solid phase within the AP particles. 200 µm diameter AP particles undergo 25% decomposition in the solid phase, whereas 20 µm diameter AP particles undergo only 13% decomposition. The second process is dissociative sublimation of AP to NH3 + HClO4 followed by the decomposition of, and reaction between, these two products in the gas phase. The dissociative sublimation process occurs over the entire temperature range of AP decomposition, but only becomes dominant at temperatures above those for the solid-phase decomposition. AP-based composite propellants are used extensively in both small tactical rocket motors and large strategic rocket systems.

  13. On the Use of Generalized Volume Scattering Models for the Improvement of General Polarimetric Model-Based Decomposition

    Directory of Open Access Journals (Sweden)

    Qinghua Xie

    2017-01-01

    Full Text Available Recently, a general polarimetric model-based decomposition framework was proposed by Chen et al., which addresses several well-known limitations in previous decomposition methods and implements a simultaneous full-parameter inversion by using complete polarimetric information. However, it only employs four typical models to characterize the volume scattering component, which limits the parameter inversion performance. To overcome this issue, this paper presents two general polarimetric model-based decomposition methods by incorporating the generalized volume scattering model (GVSM) or the simplified adaptive volume scattering model (SAVSM), proposed by Antropov et al. and Huang et al., respectively, into the general decomposition framework proposed by Chen et al. By doing so, the final volume coherency matrix structure is selected from a wide range of volume scattering models within a continuous interval according to the data itself, without adding unknowns. Moreover, the new approaches rely on one nonlinear optimization stage instead of four as in the previous method proposed by Chen et al. In addition, the parameter inversion procedure adopts the modified algorithm proposed by Xie et al., which leads to higher accuracy and more physically reliable output parameters. A number of Monte Carlo simulations of polarimetric synthetic aperture radar (PolSAR) data are carried out and show that the proposed method with GVSM yields an overall improvement in the final accuracy of estimated parameters and outperforms both the version using SAVSM and the original approach. In addition, C-band Radarsat-2 and L-band AIRSAR fully polarimetric images over the San Francisco region are also used for testing purposes. A detailed comparison and analysis of decomposition results over different land-cover types are conducted. According to this study, the use of general decomposition models leads to a more accurate quantitative retrieval of target parameters. However, there

  14. Risk Based Optimal Fatigue Testing

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Faber, M.H.; Kroon, I.B.

    1992-01-01

    Optimal fatigue life testing of materials is considered. Based on minimization of the total expected costs of a mechanical component a strategy is suggested to determine the optimal stress range levels for which additional experiments are to be performed together with an optimal value...

  15. Hourly forecasting of global solar radiation based on multiscale decomposition methods: A hybrid approach

    International Nuclear Information System (INIS)

    Monjoly, Stéphanie; André, Maïna; Calif, Rudy; Soubdhan, Ted

    2017-01-01

    This paper introduces a new approach for forecasting solar radiation series 1 h ahead. We investigated several techniques for the multiscale decomposition of clear-sky index K_c data, such as Empirical Mode Decomposition (EMD), Ensemble Empirical Mode Decomposition (EEMD) and Wavelet Decomposition. From these different methods, we built 11 decomposition components and one residual signal spanning different time scales. We applied classic forecasting models based on a linear method (autoregressive process, AR) and a nonlinear method (neural network, NN). The choice of forecasting method adapts to the characteristics of each component. Hence, we propose a modeling process built from a hybrid structure according to the defined flowchart. An analysis of the predictive performance for solar forecasting from the different multiscale decompositions and forecast models is presented. With multiscale decomposition, the solar forecast accuracy is significantly improved, particularly using the wavelet decomposition method. Moreover, multistep forecasting with the proposed hybrid method resulted in additional improvement. For example, in terms of RMSE error, the forecasting error obtained with the classical NN model is about 25.86%; this error decreases to 16.91% with the EMD-Hybrid model, 14.06% with the EEMD-Hybrid model and 7.86% with the WD-Hybrid model. - Highlights: • Hourly forecasting of GHI in tropical climate with many cloud formation processes. • Clear-sky index decomposition using three multiscale decomposition methods. • Combination of multiscale decomposition methods with AR-NN models to predict GHI. • Comparison of the proposed hybrid model with the classical models (AR, NN). • Best results using the Wavelet-Hybrid model in comparison with classical models.
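
    A hedged sketch of the wavelet-hybrid idea using PyWavelets and statsmodels (both assumed available): decompose a toy clear-sky-index-like series into per-level components that sum back to the original, forecast each component one step ahead with an AR model, and sum the component forecasts. The NN branch and the real K_c data are omitted.

        import numpy as np
        import pywt
        from statsmodels.tsa.ar_model import AutoReg

        rng = np.random.default_rng(7)
        t = np.arange(512)
        series = np.sin(2 * np.pi * t / 48) + 0.3 * rng.normal(size=t.size)

        # Multiscale decomposition: reconstruct one component per wavelet
        # level so that the components sum back to the original series.
        coeffs = pywt.wavedec(series, "db4", level=3)
        components = []
        for i in range(len(coeffs)):
            kept = [c if j == i else np.zeros_like(c)
                    for j, c in enumerate(coeffs)]
            components.append(pywt.waverec(kept, "db4")[:len(series)])

        # Forecast each component one step ahead with AR and sum.
        forecast = sum(AutoReg(c, lags=24).fit()
                       .predict(start=len(c), end=len(c))[0]
                       for c in components)
        print("one-step-ahead forecast:", forecast)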

  16. Subspace-Based Noise Reduction for Speech Signals via Diagonal and Triangular Matrix Decompositions

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Jensen, Søren Holdt

    We survey the definitions and use of rank-revealing matrix decompositions in single-channel noise reduction algorithms for speech signals. Our algorithms are based on the rank-reduction paradigm and, in particular, signal subspace techniques. The focus is on practical working algorithms, using both...... diagonal (eigenvalue and singular value) decompositions and rank-revealing triangular decompositions (ULV, URV, VSV, ULLV and ULLIV). In addition we show how the subspace-based algorithms can be evaluated and compared by means of simple FIR filter interpretations. The algorithms are illustrated...... with working Matlab code and applications in speech processing....
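
    A minimal sketch of the signal-subspace idea behind such algorithms: embed a noisy signal in a Hankel matrix, truncate its SVD to the dominant signal rank, and average anti-diagonals to recover an enhanced signal. The rank and sizes are illustrative choices, not the ULV/URV/VSV machinery of the paper.

        import numpy as np

        rng = np.random.default_rng(8)
        n, m = 200, 40
        t = np.arange(n)
        clean = np.sin(0.2 * t) + 0.5 * np.sin(0.45 * t)
        noisy = clean + 0.4 * rng.normal(size=n)

        # Hankel embedding and rank reduction (rank 4 = two real sinusoids).
        H = np.array([noisy[i:i + m] for i in range(n - m + 1)])
        U, s, Vt = np.linalg.svd(H, full_matrices=False)
        Hk = (U[:, :4] * s[:4]) @ Vt[:4]

        # De-Hankelization: average along anti-diagonals.
        denoised, counts = np.zeros(n), np.zeros(n)
        for i in range(Hk.shape[0]):
            denoised[i:i + m] += Hk[i]
            counts[i:i + m] += 1
        denoised /= counts

        mse = lambda x: np.mean((x - clean) ** 2)
        print("SNR gain: %.1f dB" % (10 * np.log10(mse(noisy) / mse(denoised))))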

  17. A Decomposition Model for HPLC-DAD Data Set and Its Solution by Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Lizhi Cui

    2014-01-01

    Full Text Available This paper proposes a separation method, based on the model of Generalized Reference Curve Measurement and the algorithm of Particle Swarm Optimization (GRCM-PSO), for High Performance Liquid Chromatography with Diode Array Detection (HPLC-DAD) data sets. Firstly, initial parameters are generated to construct reference curves for the chromatogram peaks of the compounds based on their physical principles. Then, a Generalized Reference Curve Measurement (GRCM) model is designed to transform these parameters to scalar values, which indicate the fitness of all parameters. Thirdly, rough solutions are found by searching an individual target for every parameter, and reinitialization is executed only around these rough solutions. Then, the Particle Swarm Optimization (PSO) algorithm is adopted to obtain the optimal parameters by minimizing the fitness of these new parameters given by the GRCM model. Finally, spectra for the compounds are estimated based on the optimal parameters and the HPLC-DAD data set. Through simulations and experiments, the following conclusions are drawn: (1) the GRCM-PSO method can separate the chromatogram peaks and spectra from the HPLC-DAD data set without knowing the number of compounds in advance, even when severe overlap and white noise exist; (2) the GRCM-PSO method is able to handle real HPLC-DAD data sets.
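
    A generic global-best PSO sketch, fitting the three parameters of a single Gaussian peak to noisy data as a stand-in for the GRCM fitness and the multi-peak chromatogram model; the swarm settings are common defaults, not the paper's.

        import numpy as np

        rng = np.random.default_rng(9)
        t = np.linspace(0, 10, 200)
        data = 2.0 * np.exp(-0.5 * ((t - 4.0) / 0.8) ** 2) \
               + 0.05 * rng.normal(size=t.size)

        def fitness(p):              # p = (amplitude, center, width)
            model = p[0] * np.exp(-0.5 * ((t - p[1]) / p[2]) ** 2)
            return np.mean((model - data) ** 2)

        lo, hi = np.array([0.1, 0.0, 0.1]), np.array([5.0, 10.0, 3.0])
        pos = rng.uniform(lo, hi, size=(30, 3))
        vel = np.zeros_like(pos)
        pbest = pos.copy()
        pbest_val = np.array([fitness(p) for p in pos])
        gbest = pbest[pbest_val.argmin()].copy()

        for _ in range(200):
            r1, r2 = rng.random((2, 30, 3))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos = np.clip(pos + vel, lo, hi)
            vals = np.array([fitness(p) for p in pos])
            better = vals < pbest_val
            pbest[better], pbest_val[better] = pos[better], vals[better]
            gbest = pbest[pbest_val.argmin()].copy()

        print("estimated (amplitude, center, width):", np.round(gbest, 3))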

  18. A Hybrid Model Based on Wavelet Decomposition-Reconstruction in Track Irregularity State Forecasting

    Directory of Open Access Journals (Sweden)

    Chaolong Jia

    2015-01-01

    Full Text Available The wavelet transform adapts automatically to the requirements of time-frequency signal analysis, can focus on any detail of a signal, and decomposes a function into a representation over a series of simple basis functions, which is of theoretical and practical significance. Therefore, this paper subdivides track irregularity time series based on the idea of wavelet decomposition-reconstruction, and seeks the best-fitting forecast models for the detail signals and the approximation signal obtained through wavelet decomposition of the track irregularity time series. On this basis, the piecewise gray-ARMA recursive model based on wavelet decomposition and reconstruction (PG-ARMARWDR) and the piecewise ANN-ARMA recursive model based on wavelet decomposition and reconstruction (PANN-ARMARWDR) are proposed. Comparison and analysis of the two models show that both can achieve high accuracy.

  19. Nonlinear QR code based optical image encryption using spiral phase transform, equal modulus decomposition and singular value decomposition

    Science.gov (United States)

    Kumar, Ravi; Bhaduri, Basanta; Nishchal, Naveen K.

    2018-01-01

    In this study, we propose a quick response (QR) code based nonlinear optical image encryption technique using the spiral phase transform (SPT), equal modulus decomposition (EMD) and singular value decomposition (SVD). First, the primary image is converted into a QR code and then multiplied with a spiral phase mask (SPM). Next, the product is spiral phase transformed with a particular spiral phase function, and further, EMD is performed on the output of the SPT, which results in two complex images, Z1 and Z2. Among these, Z1 is further Fresnel propagated with distance d, and Z2 is reserved as a decryption key. Afterwards, SVD is performed on the Fresnel propagated output to get three decomposed matrices, i.e., one diagonal matrix and two unitary matrices. The two unitary matrices are modulated with two different SPMs and then the inverse SVD is performed using the diagonal matrix and the modulated unitary matrices to get the final encrypted image. Numerical simulation results confirm the validity and effectiveness of the proposed technique. The proposed technique is robust against noise attack, specific attack, and brute-force attack. Simulation results are presented in support of the proposed idea.

  1. Ultra-precision machining induced phase decomposition at surface of Zn-Al based alloy

    International Nuclear Information System (INIS)

    To, S.; Zhu, Y.H.; Lee, W.B.

    2006-01-01

    The microstructural changes and phase transformation of an ultra-precision machined Zn-Al based alloy were examined using X-ray diffraction and back-scattered electron microscopy techniques. Decomposition of the Zn-rich η phase and the related changes in crystal orientation were detected at the surface of the ultra-precision machined alloy specimen. The effects of the machining parameters, such as cutting speed and depth of cut, on the phase decomposition are discussed in comparison with the tensile- and rolling-induced microstructural changes and phase decomposition.

  2. Research and Application of a Hybrid Forecasting Model Based on Data Decomposition for Electrical Load Forecasting

    Directory of Open Access Journals (Sweden)

    Yuqi Dong

    2016-12-01

    Full Text Available Accurate short-term electrical load forecasting plays a pivotal role in the national economy and people's livelihood by providing effective future plans and ensuring a reliable supply of sustainable electricity. Although considerable work has been done to select suitable models and optimize model parameters for forecasting the short-term electrical load, few models are built based on the characteristics of the time series, although these characteristics have a great impact on forecasting accuracy. For that reason, this paper proposes a hybrid model based on data decomposition that considers the periodicity, trend and randomness of the original electrical load time series data. After preprocessing and analyzing the original time series, a generalized regression neural network optimized by a genetic algorithm is used to forecast the short-term electrical load. The experimental results demonstrate that the proposed hybrid model not only achieves a good fit, but also approximates the actual values well when dealing with nonlinear time series data exhibiting periodicity, trend and randomness.

  3. Capturing molecular multimode relaxation processes in excitable gases based on decomposition of acoustic relaxation spectra

    Science.gov (United States)

    Zhu, Ming; Liu, Tingting; Wang, Shu; Zhang, Kesheng

    2017-08-01

    Existing two-frequency reconstructive methods can only capture primary (single) molecular relaxation processes in excitable gases. In this paper, we present a reconstructive method based on the novel decomposition of frequency-dependent acoustic relaxation spectra to capture the entire molecular multimode relaxation process. This decomposition of acoustic relaxation spectra is developed from the frequency-dependent effective specific heat, indicating that a multi-relaxation process is the sum of the interior single-relaxation processes. Based on this decomposition, we can reconstruct the entire multi-relaxation process by capturing the relaxation times and relaxation strengths of N interior single-relaxation processes, using the measurements of acoustic absorption and sound speed at 2N frequencies. Experimental data for the gas mixtures CO2-N2 and CO2-O2 validate our decomposition and reconstruction approach.

  4. Fast heap transform-based QR-decomposition of real and complex matrices: algorithms and codes

    Science.gov (United States)

    Grigoryan, Artyom M.

    2015-03-01

    In this paper, we describe a new look at the application of Givens rotations to the QR-decomposition problem, which is similar to the method of Householder transformations. We apply the concept of the discrete heap transform, or signal-induced unitary transforms, which was introduced by Grigoryan (2006) and used in signal and image processing. Both cases of real and complex nonsingular matrices are considered and examples of performing QR-decomposition of square matrices are given. The proposed method of QR-decomposition for complex matrices is novel, differs from the known method of complex Givens rotations, and is based on analytical equations for the heap transforms. Many examples illustrating the proposed heap transform method of QR-decomposition are given, the algorithms are described in detail, and MATLAB-based codes are included.
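
    For comparison with such rotation-based schemes, a plain Givens-rotation QR decomposition of a real matrix takes only a few lines; this is the textbook construction, not the paper's heap-transform method.

        import numpy as np

        def givens_qr(A):
            """QR decomposition of a real matrix by Givens rotations."""
            m, n = A.shape
            R = A.astype(float).copy()
            Q = np.eye(m)
            for j in range(n):
                for i in range(m - 1, j, -1):
                    a, b = R[i - 1, j], R[i, j]
                    r = np.hypot(a, b)
                    if r == 0.0:
                        continue
                    c, s = a / r, b / r
                    G = np.array([[c, s], [-s, c]])   # zeroes R[i, j]
                    R[[i - 1, i], :] = G @ R[[i - 1, i], :]
                    Q[:, [i - 1, i]] = Q[:, [i - 1, i]] @ G.T
            return Q, R

        A = np.random.default_rng(10).normal(size=(5, 4))
        Q, R = givens_qr(A)
        print(np.allclose(Q @ R, A), np.allclose(Q.T @ Q, np.eye(5)))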

  5. Non-linear scalable TFETI domain decomposition based contact algorithm

    Czech Academy of Sciences Publication Activity Database

    Dobiáš, Jiří; Pták, Svatopluk; Dostál, Z.; Vondrák, V.; Kozubek, T.

    2010-01-01

    Vol. 10, No. 1 (2010), pp. 1-10 ISSN 1757-8981. [9th World Congress on Computational Mechanics. Sydney, 19.07.2010 - 23.07.2010] R&D Projects: GA ČR GA101/08/0574 Institutional research plan: CEZ:AV0Z20760514 Keywords: finite element method * domain decomposition method * contact Subject RIV: BA - General Mathematics http://iopscience.iop.org/1757-899X/10/1/012161/pdf/1757-899X_10_1_012161.pdf

  6. Advances in audio watermarking based on singular value decomposition

    CERN Document Server

    Dhar, Pranab Kumar

    2015-01-01

    This book introduces audio watermarking methods for copyright protection, which has drawn extensive attention for securing digital data from unauthorized copying. The book is divided into two parts. First, an audio watermarking method in the discrete wavelet transform (DWT) and discrete cosine transform (DCT) domains using singular value decomposition (SVD) and quantization is introduced. This method is robust against various attacks and provides good imperceptible watermarked sounds. Then, an audio watermarking method in the fast Fourier transform (FFT) domain using SVD and Cartesian-polar transformation (CPT) is presented. This method has high imperceptibility and high data payload, and it provides good robustness against various attacks. These techniques allow media owners to protect copyright and to show authenticity and ownership of their material in a variety of applications. · Features new methods of audio watermarking for copyright protection and ownership protection · Outl...

  7. Base catalyzed decomposition of toxic and hazardous chemicals

    International Nuclear Information System (INIS)

    Rogers, C.J.; Kornel, A.; Sparks, H.L.

    1991-01-01

    There are vast amounts of toxic and hazardous chemicals which have pervaded our environment during the past fifty years, leaving us with serious, crucial problems of remediation and disposal. The accumulation of polychlorinated biphenyls (PCBs), polychlorinated dibenzo-p-dioxins (PCDDs, ''dioxins'') and pesticides in soil sediments and living systems is a serious problem that is receiving considerable attention owing to the cancer-causing nature of these synthetic compounds. US EPA scientists developed in 1989 and 1990 two novel chemical processes to effect the dehalogenation of chlorinated solvents, PCBs, PCDDs, PCDFs, PCP and other pollutants in soil, sludge, sediment and liquids. This improved technology employs hydrogen as a nucleophile to replace halogens on halogenated compounds. Hydrogen as a nucleophile is not influenced by steric hindrance, unlike other nucleophiles, so complete dehalogenation of organohalogens can be achieved. This report discusses the base catalyzed decomposition of toxic and hazardous chemicals.

  8. Presentation of a Modified Boustrophedon Decomposition Algorithm for Optimal Configuration of Flat Fields to use in Path Planning Systems of Agricultural Vehicles

    Directory of Open Access Journals (Sweden)

    R Goudarzi

    2018-03-01

    Full Text Available Introduction The demand for pre-determined optimal coverage paths in agricultural environments has increased due to the growing application of field robots and autonomous field machines. The coverage path planning problem (CPP) has been extensively studied in robotics and many algorithms have been provided for many settings, but differences and limitations in agriculture have led to several heuristic and modified adaptive methods derived from robotics. In this paper, a modified and enhanced version of a decomposition algorithm currently used in robotics (boustrophedon cellular decomposition) is presented as the main part of a path planning system for agricultural vehicles. The developed algorithm is based on parallelization with the edges of the polygon representing the environment, to satisfy the requirements of the problem as far as possible. This idea is based on the "minimum facing to the cost making condition", which in turn is derived from the encounter concept as a basis of cost-making factors. Materials and Methods Generally, a line termed a slice in boustrophedon cellular decomposition (BCD) sweeps an area in a pre-determined direction and decomposes the area only at critical points (where two segments can be extended to the top and bottom of the point). Furthermore, the sweep line direction does not change until the decomposition finishes. To implement the BCD with the parallelization method, two modifications were applied in order to provide a modified version of the boustrophedon cellular decomposition (M-BCD). In the first modification, the longest edge (base edge) is targeted, and the sweep line direction is set in line with the base edge direction (the sweep direction is set perpendicular to the sweep line direction). The sweep line then moves through the environment and stops at the first (nearest) critical point. The next sweep direction will be the same as the previous one, if the lengths of the polygon's newly added edges during the decomposition are less than or equal to the

  9. Implementation of QR-decomposition based on CORDIC for unitary MUSIC algorithm

    Science.gov (United States)

    Lounici, Merwan; Luan, Xiaoming; Saadi, Wahab

    2013-07-01

    DOA (Direction Of Arrival) estimation with subspace methods such as MUSIC (MUltiple SIgnal Classification) and ESPRIT (Estimation of Signal Parameters via Rotational Invariance Techniques) is based on an accurate estimation of the eigenvalues and eigenvectors of the covariance matrix. QR decomposition is implemented with the COordinate Rotation DIgital Computer (CORDIC) algorithm. QRD requires only additions and shifts [6], so it is faster and more regular than other methods. In this article the hardware architecture of an EVD (EigenValue Decomposition) processor based on a TSA (triangular systolic array) for QR decomposition is proposed. Using the Xilinx System Generator (XSG), the design is implemented and the estimated logic device resource values are presented for different matrix sizes.

  10. Digital Image Stabilization Method Based on Variational Mode Decomposition and Relative Entropy

    Directory of Open Access Journals (Sweden)

    Duo Hao

    2017-11-01

    Full Text Available Cameras mounted on vehicles frequently suffer from image shake due to the vehicles' motions. To remove jitter motions and preserve intentional motions, a hybrid digital image stabilization method is proposed that uses variational mode decomposition (VMD) and relative entropy (RE). In this paper, the global motion vector (GMV) is initially decomposed into several narrow-banded modes by VMD. REs, which exhibit the difference in probability distribution between two modes, are then calculated to identify the intentional and jitter motion modes. Finally, the summation of the jitter motion modes constitutes the jitter motions, whereas the subtraction of the resulting sum from the GMV represents the intentional motions. The proposed stabilization method is compared with several known methods, namely, the median filter (MF), Kalman filter (KF), wavelet decomposition (WD) method, empirical mode decomposition (EMD)-based method, and enhanced EMD-based method, to evaluate stabilization performance. Experimental results show that the proposed method outperforms the other stabilization methods.

  11. Ozone decomposition

    Directory of Open Access Journals (Sweden)

    Batakliev Todor

    2014-06-01

    Full Text Available Catalytic ozone decomposition is of great significance because ozone is a toxic substance commonly found or generated in human environments (aircraft cabins, offices with photocopiers, laser printers, sterilizers). Considerable work on ozone decomposition has been reported in the literature. This review provides a comprehensive summary of the literature, concentrating on analysis of the physico-chemical properties, synthesis and catalytic decomposition of ozone. This is supplemented by a review of kinetics and catalyst characterization which ties together the previously reported results. Noble metals and oxides of transition metals have been found to be the most active substances for ozone decomposition. The high price of precious metals stimulated the use of metal oxide catalysts, particularly catalysts based on manganese oxide. It has been determined that the kinetics of ozone decomposition is first order. A mechanism of the catalytic ozone decomposition reaction is discussed, based on detailed spectroscopic investigations of the catalytic surface, showing the existence of peroxide and superoxide surface intermediates.

  12. APPROACH ON INTELLIGENT OPTIMIZATION DESIGN BASED ON COMPOUND KNOWLEDGE

    Institute of Scientific and Technical Information of China (English)

    Yao Jianchu; Zhou Ji; Yu Jun

    2003-01-01

    A concept of an intelligent optimal design approach is proposed, organized around a compound knowledge model. The compound knowledge consists of modularized quantitative knowledge, inclusive experience knowledge and case-based sample knowledge. By using this compound knowledge model, the abundant quantitative information of mathematical programming and the symbolic knowledge of artificial intelligence can be united in a single model. The intelligent optimal design model based on such compound knowledge, and the automatically generated decomposition principles based on it, are also presented. In practice, the approach has been applied to production planning, process scheduling and production process optimization in a refining and chemical plant, and a great profit has been achieved. Notably, the methods and principles are adaptable not only to the continuous process industry, but also to discrete manufacturing.

  13. On practical challenges of decomposition-based hybrid forecasting algorithms for wind speed and solar irradiation

    International Nuclear Information System (INIS)

    Wang, Yamin; Wu, Lei

    2016-01-01

    This paper presents a comprehensive analysis of the practical challenges of empirical mode decomposition (EMD) based algorithms for wind speed and solar irradiation forecasts that have been largely neglected in the literature, and proposes an alternative approach to mitigate these challenges. Specifically, the challenges are: (1) Decomposed sub-series are very sensitive to the original time series data. That is, sub-series of the new time series, consisting of the original one plus a limited number of new data samples, may differ significantly from those used in training the forecasting models. In turn, forecasting models established on the original sub-series may not be suitable for the newly decomposed sub-series and have to be retrained frequently; and (2) Key environmental factors usually play a critical role in non-decomposition based methods for forecasting wind speed and solar irradiation. However, it is difficult to incorporate such critical environmental factors into the forecasting models of the individual decomposed sub-series, because the correlation between the original data and the environmental factors is lost after decomposition. Numerical case studies on wind speed and solar irradiation forecasting show that the performance of existing EMD-based forecasting methods can be worse than that of non-decomposition based forecasting models, and is not effective in practical cases. Finally, an approximated forecasting model based on EMD is proposed to mitigate the challenges and achieve better forecasting results than existing EMD-based forecasting algorithms and non-decomposition based forecasting models on practical wind speed and solar irradiation forecasting cases. - Highlights: • Two challenges of existing EMD-based forecasting methods are discussed. • Significant changes of sub-series in each step of the rolling forecast procedure. • Difficulties in incorporating environmental factors into sub-series forecasting models. • The approximated forecasting method is proposed to
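
    The first challenge is easy to reproduce: decomposing a series, and then the same series with a handful of extra samples, can give noticeably different IMFs. The sketch below uses the PyEMD package (assumed installed as EMD-signal) on a synthetic signal.

        import numpy as np
        from PyEMD import EMD               # assumed: pip install EMD-signal

        rng = np.random.default_rng(11)
        t = np.linspace(0, 20, 600)
        signal = (np.sin(2 * np.pi * 0.5 * t) + 0.5 * np.sin(2 * np.pi * 2.1 * t)
                  + 0.2 * rng.normal(size=t.size))

        # Decompose the original series and the series extended by 10 samples.
        imfs_old = EMD()(signal[:500])
        imfs_new = EMD()(signal[:510])

        # Compare the overlapping part of the first IMF: ideally identical,
        # in practice the decomposition can shift noticeably.
        diff = (np.linalg.norm(imfs_new[0][:500] - imfs_old[0])
                / np.linalg.norm(imfs_old[0]))
        print(f"{len(imfs_old)} vs {len(imfs_new)} IMFs; "
              f"relative change in IMF 1: {diff:.2%}")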

  14. Thermal Decomposition Behaviors and Burning Characteristics of AN/Nitramine-Based Composite Propellant

    Science.gov (United States)

    Naya, Tomoki; Kohga, Makoto

    2015-04-01

    Ammonium nitrate (AN) has attracted much attention as an oxidizer due to its clean burning nature. However, an AN-based composite propellant has the disadvantages of a low burning rate and poor ignitability. In this study, we added a nitramine, cyclotrimethylene trinitramine (RDX) or cyclotetramethylene tetranitramine (HMX), as a high-energy material to AN propellants to overcome these disadvantages. The thermal decomposition and burning rate characteristics of the prepared propellants were examined as the ratio of AN to nitramine was varied. In the thermal decomposition process, AN/RDX propellants showed unique mass loss peaks in the lower temperature range that were not observed for AN or RDX propellants alone. AN and RDX decomposed continuously, almost as a single oxidizer, in the AN/RDX propellant. In contrast, AN/HMX propellants exhibited thermal decomposition characteristics similar to those of AN and HMX, which decomposed almost separately in the thermal decomposition of the AN/HMX propellant. The ignitability was improved and the burning rate increased by the addition of nitramine for both AN/RDX and AN/HMX propellants. The increase in burning rate of AN/RDX propellants was greater than that of AN/HMX. The difference in the thermal decomposition and burning characteristics was caused by the interaction between AN and RDX.

  15. Partial differential equation-based approach for empirical mode decomposition: application on image analysis.

    Science.gov (United States)

    Niang, Oumar; Thioune, Abdoulaye; El Gueirea, Mouhamed Cheikh; Deléchelle, Eric; Lemoine, Jacques

    2012-09-01

    The major problem with the empirical mode decomposition (EMD) algorithm is its lack of a theoretical framework, which makes it difficult to characterize and evaluate the approach. In this paper, we propose, in the 2-D case, the use of an alternative implementation to the algorithmic definition of the so-called "sifting process" used in the original Huang EMD method. This approach, based on partial differential equations (PDEs), was presented by Niang in previous works, in 2005 and 2007, and relies on a nonlinear diffusion-based filtering process to solve the mean envelope estimation problem. In the 1-D case, the efficiency of the PDE-based method, compared to the original EMD algorithmic version, was also illustrated in a recent paper. Recently, several 2-D extensions of the EMD method have been proposed. Despite some effort, 2-D versions of EMD perform poorly and are very time-consuming. In this paper, an extension of the PDE-based approach to 2-D space is therefore extensively described. This approach has been applied to both signal and image decomposition. The obtained results confirm the usefulness of the new PDE-based sifting process for the decomposition of various kinds of data. Some results are provided for the case of image decomposition. The effectiveness of the approach encourages its use in a number of signal and image applications such as denoising, detrending, or texture analysis.

  16. A novel ECG data compression method based on adaptive Fourier decomposition

    Science.gov (United States)

    Tan, Chunyu; Zhang, Liming

    2017-12-01

    This paper presents a novel electrocardiogram (ECG) compression method based on adaptive Fourier decomposition (AFD). AFD is a newly developed signal decomposition approach, which can decompose a signal with fast convergence, and hence reconstruct ECG signals with high fidelity. Unlike most of the high performance algorithms, our method does not make use of any preprocessing operation before compression. Huffman coding is employed for further compression. Validated with 48 ECG recordings of MIT-BIH arrhythmia database, the proposed method achieves the compression ratio (CR) of 35.53 and the percentage root mean square difference (PRD) of 1.47% on average with N = 8 decomposition times and a robust PRD-CR relationship. The results demonstrate that the proposed method has a good performance compared with the state-of-the-art ECG compressors.

  17. Adaptive variational mode decomposition method for signal processing based on mode characteristic

    Science.gov (United States)

    Lian, Jijian; Liu, Zhuo; Wang, Haijun; Dong, Xiaofeng

    2018-07-01

    Variational mode decomposition is a completely non-recursive decomposition model in which all the modes are extracted concurrently. However, the model requires a preset mode number, which limits the adaptability of the method, since a large deviation in the preset mode number will cause modes to be discarded or mixed. Hence, a method called Adaptive Variational Mode Decomposition (AVMD) is proposed to automatically determine the mode number based on the characteristics of the intrinsic mode functions. The method was used to analyze simulated signals and measured signals from a hydropower plant. Comparisons were also conducted to evaluate its performance against VMD, EMD and EWT. The results indicate that the proposed method has strong adaptability and is robust to noise. It can determine the mode number appropriately, without mode mixing, even when the signal frequencies are relatively close.

  18. Simulation-based optimization parametric optimization techniques and reinforcement learning

    CERN Document Server

    Gosavi, Abhijit

    2003-01-01

    Simulation-Based Optimization: Parametric Optimization Techniques and Reinforcement Learning introduces the evolving area of simulation-based optimization. The book's objective is two-fold: (1) It examines the mathematical governing principles of simulation-based optimization, thereby providing the reader with the ability to model relevant real-life problems using these techniques. (2) It outlines the computational technology underlying these methods. Taken together these two aspects demonstrate that the mathematical and computational methods discussed in this book do work. Broadly speaking, the book has two parts: (1) parametric (static) optimization and (2) control (dynamic) optimization. Some of the book's special features are: *An accessible introduction to reinforcement learning and parametric-optimization techniques. *A step-by-step description of several algorithms of simulation-based optimization. *A clear and simple introduction to the methodology of neural networks. *A gentle introduction to converg...

  19. Orbital-Optimized MP3 and MP2.5 with Density-Fitting and Cholesky Decomposition Approximations.

    Science.gov (United States)

    Bozkaya, Uğur

    2016-03-08

    Efficient implementations of the orbital-optimized MP3 and MP2.5 methods with the density-fitting (DF-OMP3 and DF-OMP2.5) and Cholesky decomposition (CD-OMP3 and CD-OMP2.5) approaches are presented. The DF/CD-OMP3 and DF/CD-OMP2.5 methods are applied to a set of alkanes to compare the computational cost with the conventional orbital-optimized MP3 (OMP3) [Bozkaya J. Chem. Phys. 2011, 135, 224103] and the orbital-optimized MP2.5 (OMP2.5) [Bozkaya and Sherrill J. Chem. Phys. 2014, 141, 204105]. Our results demonstrate that the DF-OMP3 and DF-OMP2.5 methods provide considerably lower computational costs than OMP3 and OMP2.5. Further application results show that the orbital-optimized methods are very helpful for the study of open-shell noncovalent interactions, aromatic bond dissociation energies, and hydrogen transfer reactions. We conclude that the DF-OMP3 and DF-OMP2.5 methods are very promising for molecular systems with challenging electronic structures.

  1. Asynchronous Task-Based Polar Decomposition on Single Node Manycore Architectures

    KAUST Repository

    Sukkari, Dalal E.; Ltaief, Hatem; Faverge, Mathieu; Keyes, David E.

    2017-01-01

    This paper introduces the first asynchronous, task-based formulation of the polar decomposition and its corresponding implementation on manycore architectures. Based on a formulation of the iterative QR dynamically-weighted Halley algorithm (QDWH) for the calculation of the polar decomposition, the proposed implementation replaces the original LU factorization for the condition number estimator by the more adequate QR factorization to enable software portability across various architectures. Relying on fine-grained computations, the novel task-based implementation is capable of taking advantage of the identity structure of the matrix involved during the QDWH iterations, which decreases the overall algorithmic complexity. Furthermore, the artifactual synchronization points have been weakened compared to previous implementations, unveiling look-ahead opportunities for better hardware occupancy. The overall QDWH-based polar decomposition can then be represented as a directed acyclic graph (DAG), where nodes represent computational tasks and edges define the inter-task data dependencies. The StarPU dynamic runtime system is employed to traverse the DAG, to track the various data dependencies and to asynchronously schedule the computational tasks on the underlying hardware resources, resulting in an out-of-order task scheduling. Benchmarking experiments show significant improvements against existing state-of-the-art high performance implementations for the polar decomposition on latest shared-memory vendors' systems, while maintaining numerical accuracy.

  2. Kernel based eigenvalue-decomposition methods for analysing ham

    DEFF Research Database (Denmark)

    Christiansen, Asger Nyman; Nielsen, Allan Aasbjerg; Møller, Flemming

    2010-01-01

    methods, such as PCA, MAF or MNF. We therefore investigated the applicability of kernel based versions of these transformations. This meant implementing the kernel based methods and developing new theory, since kernel based MAF and MNF are not described in the literature yet. The traditional methods only...... have two factors that are useful for segmentation, and none of them can be used to segment the two types of meat. The kernel based methods have a lot of useful factors and they are able to capture the subtle differences in the images. This is illustrated in Figure 1. You can see a comparison of the most...... useful factor of PCA and kernel based PCA, respectively, in Figure 2. The factor of the kernel based PCA turned out to be able to segment the two types of meat, and in general that factor is much more distinct compared to the traditional factor. After the orthogonal transformation a simple thresholding

  3. Michelson interferometer based interleaver design using classic IIR filter decomposition.

    Science.gov (United States)

    Cheng, Chi-Hao; Tang, Shasha

    2013-12-16

    An elegant method to design a Michelson interferometer based interleaver using a classic infinite impulse response (IIR) filter, such as a Butterworth, Chebyshev, or elliptic filter, as a starting point is presented. The proposed design method allows engineers to design a Michelson interferometer based interleaver from specifications seamlessly. Simulation results are presented to demonstrate the validity of the proposed design method.

  4. An Efficient Local Correlation Matrix Decomposition Approach for the Localization Implementation of Ensemble-Based Assimilation Methods

    Science.gov (United States)

    Zhang, Hongqin; Tian, Xiangjun

    2018-04-01

    Ensemble-based data assimilation methods often use the so-called localization scheme to improve the representation of the ensemble background error covariance (Be). Extensive research has been undertaken to reduce the computational cost of these methods by using the localized ensemble samples to localize Be by means of a direct decomposition of the local correlation matrix C. However, the computational costs of the direct decomposition of the local correlation matrix C are still extremely high due to its high dimension. In this paper, we propose an efficient local correlation matrix decomposition approach based on the concept of alternating directions. This approach is intended to avoid direct decomposition of the correlation matrix. Instead, we first decompose the correlation matrix into 1-D correlation matrices in the three coordinate directions, then construct their empirical orthogonal function decomposition at low resolution. This procedure is followed by the 1-D spline interpolation process to transform the above decompositions to the high-resolution grid. Finally, an efficient correlation matrix decomposition is achieved by computing the very similar Kronecker product. We conducted a series of comparison experiments to illustrate the validity and accuracy of the proposed local correlation matrix decomposition approach. The effectiveness of the proposed correlation matrix decomposition approach and its efficient localization implementation of the nonlinear least-squares four-dimensional variational assimilation are further demonstrated by several groups of numerical experiments based on the Advanced Research Weather Research and Forecasting model.
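
    The final Kronecker-product step can be illustrated directly in NumPy: build 1-D correlation matrices along each coordinate direction and form their Kronecker product instead of storing the full 3-D correlation matrix. Grid sizes, correlation lengths and the separability assumption are placeholders for the paper's construction.

        import numpy as np

        def corr_1d(n, length_scale):
            """1-D Gaussian correlation matrix on a unit-spaced grid."""
            x = np.arange(n)
            return np.exp(-0.5 * ((x[:, None] - x[None, :]) / length_scale) ** 2)

        # Small placeholder grid with nx * ny * nz = 10 * 12 * 8 points.
        Cx, Cy, Cz = corr_1d(10, 2.0), corr_1d(12, 3.0), corr_1d(8, 1.5)

        # Separable approximation: C ~ Cz (x) Cy (x) Cx.
        C = np.kron(Cz, np.kron(Cy, Cx))
        print("full matrix shape:", C.shape)        # (960, 960)

        # The 1-D factors store 10^2 + 12^2 + 8^2 entries instead of 960^2,
        # which is the point of the decomposition on high-dimensional grids.
        print("storage ratio: %.2e" % ((10**2 + 12**2 + 8**2) / 960.0**2))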

  6. Numerical difficulties associated with using equality constraints to achieve multi-level decomposition in structural optimization

    Science.gov (United States)

    Thareja, R.; Haftka, R. T.

    1986-01-01

    There has been recent interest in multidisciplinary multilevel optimization applied to large engineering systems. The usual approach is to divide the system into a hierarchy of subsystems with ever increasing detail in the analysis focus. Equality constraints are usually placed on various design quantities at every successive level to ensure consistency between levels. In many previous applications these equality constraints were eliminated by reducing the number of design variables. In complex systems this may not be possible and these equality constraints may have to be retained in the optimization process. In this paper the impact of such a retention is examined for a simple portal frame problem. It is shown that the equality constraints introduce numerical difficulties, and that the numerical solution becomes very sensitive to optimization parameters for a wide range of optimization algorithms.

  7. Grid-based electronic structure calculations: The tensor decomposition approach

    Energy Technology Data Exchange (ETDEWEB)

    Rakhuba, M.V., E-mail: rakhuba.m@gmail.com [Skolkovo Institute of Science and Technology, Novaya St. 100, 143025 Skolkovo, Moscow Region (Russian Federation); Oseledets, I.V., E-mail: i.oseledets@skoltech.ru [Skolkovo Institute of Science and Technology, Novaya St. 100, 143025 Skolkovo, Moscow Region (Russian Federation); Institute of Numerical Mathematics, Russian Academy of Sciences, Gubkina St. 8, 119333 Moscow (Russian Federation)

    2016-05-01

    We present a fully grid-based approach for solving Hartree–Fock and all-electron Kohn–Sham equations based on low-rank approximation of three-dimensional electron orbitals. Due to the low-rank structure, the total complexity of the algorithm scales linearly with the one-dimensional grid size. Linear complexity allows for the usage of fine grids, e.g. 8192^3, and thus a cheap extrapolation procedure. We test the proposed approach on closed-shell atoms up to argon, several molecules and clusters of hydrogen atoms. All tests show systematic convergence to the required accuracy.

  8. MRI Volume Fusion Based on 3D Shearlet Decompositions.

    Science.gov (United States)

    Duan, Chang; Wang, Shuai; Wang, Xue Gang; Huang, Qi Hong

    2014-01-01

    Nowadays many MRI scans can give 3D volume data with different contrasts, but the observers may want to view various contrasts in the same 3D volume. The conventional 2D medical fusion methods can only fuse the 3D volume data layer by layer, which may lead to the loss of interframe correlative information. In this paper, a novel 3D medical volume fusion method based on 3D band limited shearlet transform (3D BLST) is proposed. And this method is evaluated upon MRI T2* and quantitative susceptibility mapping data of 4 human brains. Both the perspective impression and the quality indices indicate that the proposed method has a better performance than conventional 2D wavelet, DT CWT, and 3D wavelet, DT CWT based fusion methods.

  9. MRI Volume Fusion Based on 3D Shearlet Decompositions

    Directory of Open Access Journals (Sweden)

    Chang Duan

    2014-01-01

    Full Text Available Nowadays many MRI scans can give 3D volume data with different contrasts, but the observers may want to view various contrasts in the same 3D volume. The conventional 2D medical fusion methods can only fuse the 3D volume data layer by layer, which may lead to the loss of interframe correlative information. In this paper, a novel 3D medical volume fusion method based on 3D band limited shearlet transform (3D BLST) is proposed. And this method is evaluated upon MRI T2* and quantitative susceptibility mapping data of 4 human brains. Both the perspective impression and the quality indices indicate that the proposed method has a better performance than conventional 2D wavelet, DT CWT, and 3D wavelet, DT CWT based fusion methods.

  10. A novel hybrid model for air quality index forecasting based on two-phase decomposition technique and modified extreme learning machine.

    Science.gov (United States)

    Wang, Deyun; Wei, Shuai; Luo, Hongyuan; Yue, Chenqiang; Grunder, Olivier

    2017-02-15

    The randomness, non-stationarity and irregularity of air quality index (AQI) series make AQI forecasting difficult. To enhance forecast accuracy, a novel hybrid forecasting model combining a two-phase decomposition technique and an extreme learning machine (ELM) optimized by the differential evolution (DE) algorithm is developed for AQI forecasting in this paper. In phase I, complementary ensemble empirical mode decomposition (CEEMD) is utilized to decompose the AQI series into a set of intrinsic mode functions (IMFs) with different frequencies; in phase II, in order to further handle the high-frequency IMFs which would increase the forecast difficulty, variational mode decomposition (VMD) is employed to decompose the high-frequency IMFs into a number of variational modes (VMs). Then, the ELM model optimized by the DE algorithm is applied to forecast all the IMFs and VMs. Finally, the forecast value of each high-frequency IMF is obtained by adding up the forecast results of all corresponding VMs, and the forecast series of AQI is obtained by aggregating the forecast results of all IMFs. To verify and validate the proposed model, two daily AQI series from July 1, 2014 to June 30, 2016 collected from Beijing and Shanghai, China, are taken as test cases for the empirical study. The experimental results show that the proposed hybrid model based on the two-phase decomposition technique is remarkably superior to all other considered models in forecast accuracy. Copyright © 2016 Elsevier B.V. All rights reserved.

  11. Benders decomposition for discrete material optimization in laminate design with local failure criteria

    DEFF Research Database (Denmark)

    Munoz, Eduardo; Stolpe, Mathias; Bendsøe, Martin P.

    2009-01-01

    … in any discrete angle optimization design or material selection problem. The mathematical modeling of this problem is more general than that of standard topology optimization. When considering only two material candidates with a considerable difference in stiffness, it corresponds exactly … to a topology optimization problem. The problem is modeled as a discrete design problem coming from a finite element discretization of the continuum problem. This discretization is made of shell or plate elements. For each element (selection domain), only one of the material candidates must be selected … of the relaxed master problem and the current best compliance (weight) found get close enough with respect to a certain tolerance. The method is investigated by computational means, using the finite element method to solve the analysis problems and a commercial branch-and-cut method for solving the relaxed master …

  12. Lifecycle-Based Swarm Optimization Method for Numerical Optimization

    Directory of Open Access Journals (Sweden)

    Hai Shen

    2014-01-01

    Full Text Available Bioinspired optimization algorithms have been widely used to solve various scientific and engineering problems. Inspired by the biological lifecycle, this paper presents a novel optimization algorithm called lifecycle-based swarm optimization (LSO). The biological lifecycle includes four stages: birth, growth, reproduction, and death. Through this process, even though an individual organism dies, the species does not perish; furthermore, the species gains a stronger ability to adapt to the environment and achieves better evolution. LSO simulates the biological lifecycle process through six optimization operators: chemotactic, assimilation, transposition, crossover, selection, and mutation. In addition, the spatial distribution of the initial population follows a clumped distribution. Experiments were conducted on unconstrained benchmark optimization problems and mechanical design optimization problems. The unconstrained benchmark problems include both unimodal and multimodal cases to demonstrate optimal performance and stability, while the mechanical design problem tests the algorithm's practicability. The results demonstrate remarkable performance of the LSO algorithm on all chosen benchmark functions when compared to several successful optimization techniques.

  13. Near-lossless multichannel EEG compression based on matrix and tensor decompositions.

    Science.gov (United States)

    Dauwels, Justin; Srinivasan, K; Reddy, M Ramasubba; Cichocki, Andrzej

    2013-05-01

    A novel near-lossless compression algorithm for multichannel electroencephalogram (MC-EEG) is proposed based on matrix/tensor decomposition models. MC-EEG is represented in suitable multiway (multidimensional) forms to efficiently exploit temporal and spatial correlations simultaneously. Several matrix/tensor decomposition models are analyzed in view of efficient decorrelation of the multiway forms of MC-EEG. A compression algorithm is built based on the principle of “lossy plus residual coding,” consisting of a matrix/tensor decomposition-based coder in the lossy layer followed by arithmetic coding in the residual layer. This approach guarantees a specifiable maximum absolute error between original and reconstructed signals. The compression algorithm is applied to three different scalp EEG datasets and an intracranial EEG dataset, each with different sampling rate and resolution. The proposed algorithm achieves attractive compression ratios compared to compressing individual channels separately. For similar compression ratios, the proposed algorithm achieves nearly fivefold lower average error compared to a similar wavelet-based volumetric MC-EEG compression algorithm.
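
    The error guarantee in the "lossy plus residual" scheme comes from uniformly quantizing the residual: the reconstruction error is then bounded by half the quantization step regardless of how crude the lossy layer is. A toy NumPy sketch with a truncated SVD standing in for the matrix-decomposition coder (illustrative only; the paper entropy-codes the residual with arithmetic coding):

        import numpy as np

        rng = np.random.default_rng(0)
        X = rng.standard_normal((16, 512))    # toy multichannel record

        # Lossy layer: rank-4 truncated SVD approximation
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X_lossy = (U[:, :4] * s[:4]) @ Vt[:4]

        # Residual layer: uniform quantization with step q bounds the
        # final reconstruction error by q/2 (near-lossless)
        q = 0.01
        codes = np.round((X - X_lossy) / q).astype(np.int32)

        X_rec = X_lossy + codes * q
        assert np.max(np.abs(X - X_rec)) <= q / 2 + 1e-12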

  14. Frequency hopping signal detection based on wavelet decomposition and Hilbert-Huang transform

    Science.gov (United States)

    Zheng, Yang; Chen, Xihao; Zhu, Rui

    2017-07-01

    Frequency hopping (FH) signals are widely adopted by military communications as a kind of low probability of interception signal. Therefore, it is very important to research FH signal detection algorithms. Existing detection algorithms for FH signals based on time-frequency analysis cannot satisfy the time and frequency resolution requirements at the same time due to the influence of the window function. In order to solve this problem, an algorithm based on wavelet decomposition and the Hilbert-Huang transform (HHT) is proposed. The proposed algorithm removes the noise of the received signals by wavelet decomposition and detects the FH signals by the Hilbert-Huang transform. Simulation results show that the proposed algorithm takes into account both the time resolution and the frequency resolution. Correspondingly, the accuracy of FH signal detection can be improved.

  15. Tissue artifact removal from respiratory signals based on empirical mode decomposition.

    Science.gov (United States)

    Liu, Shaopeng; Gao, Robert X; John, Dinesh; Staudenmayer, John; Freedson, Patty

    2013-05-01

    On-line measurement of respiration plays an important role in monitoring human physical activities. Such measurement commonly employs sensing belts secured around the rib cage and abdomen of the test subject. Affected by the movement of body tissues, respiratory signals typically have a low signal-to-noise ratio. Removing tissue artifacts is therefore critical to ensuring effective respiration analysis. This paper presents a signal decomposition technique for tissue artifact removal from respiratory signals, based on empirical mode decomposition (EMD). An algorithm based on mutual information and power criteria was devised to automatically select appropriate intrinsic mode functions for tissue artifact removal and respiratory signal reconstruction. Performance of the EMD algorithm was evaluated through simulations and real-life experiments (N = 105). Comparison with the conventionally applied low-pass filtering confirmed the effectiveness of the technique in tissue artifact removal.

  16. Adaptive Fourier decomposition based R-peak detection for noisy ECG Signals.

    Science.gov (United States)

    Ze Wang; Chi Man Wong; Feng Wan

    2017-07-01

    An adaptive Fourier decomposition (AFD) based R-peak detection method is proposed for noisy ECG signals. Although many QRS detection methods have been proposed in the literature, most require high signal quality. The proposed method extracts the R waves from the energy domain using the AFD and determines the R-peak locations based on the key decomposition parameters, achieving denoising and R-peak detection at the same time. Validated on clinical ECG signals from the MIT-BIH Arrhythmia Database, the proposed method shows better performance than the Pan-Tompkins (PT) algorithm in both situations: the native PT and the PT with a denoising process.

  17. Reliability-Based Optimization in Structural Engineering

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Sørensen, John Dalsgaard

    1994-01-01

    In this paper reliability-based optimization problems in structural engineering are formulated on the basis of the classical decision theory. Several formulations are presented: reliability-based optimal design of structural systems with component or systems reliability constraints, reliability …

  18. Efficient Divide-And-Conquer Classification Based on Feature-Space Decomposition

    OpenAIRE

    Guo, Qi; Chen, Bo-Wei; Jiang, Feng; Ji, Xiangyang; Kung, Sun-Yuan

    2015-01-01

    This study presents a divide-and-conquer (DC) approach based on feature-space decomposition for classification. When large-scale datasets are present, typical approaches employ truncated kernel methods on the feature space or DC approaches on the sample space. However, these do not guarantee separability between classes, owing to overfitting. To overcome such problems, this work proposes a novel DC approach on feature spaces consisting of three steps. Firstly, we divide the feature ...

  19. A Deep Learning Prediction Model Based on Extreme-Point Symmetric Mode Decomposition and Cluster Analysis

    OpenAIRE

    Li, Guohui; Zhang, Songling; Yang, Hong

    2017-01-01

    Aiming at the irregularity of nonlinear signals and the difficulty of predicting them, a deep learning prediction model based on extreme-point symmetric mode decomposition (ESMD) and cluster analysis is proposed. Firstly, the original data is decomposed by ESMD to obtain a finite number of intrinsic mode functions (IMFs) and residuals. Secondly, fuzzy c-means is used to cluster the decomposed components, and then a deep belief network (DBN) is used to predict them. Finally, the reconstructed ...

  20. Systems-based decomposition schemes for the approximate solution of multi-term fractional differential equations

    Science.gov (United States)

    Ford, Neville J.; Connolly, Joseph A.

    2009-07-01

    We give a comparison of the efficiency of three alternative decomposition schemes for the approximate solution of multi-term fractional differential equations using the Caputo form of the fractional derivative. The schemes we compare are based on conversion of the original problem into a system of equations. We review alternative approaches and consider how the most appropriate numerical scheme may be chosen to solve a particular equation.

  1. A copyright protection scheme for digital images based on shuffled singular value decomposition and visual cryptography.

    Science.gov (United States)

    Devi, B Pushpa; Singh, Kh Manglem; Roy, Sudipta

    2016-01-01

    This paper proposes a new watermarking algorithm based on the shuffled singular value decomposition and visual cryptography for copyright protection of digital images. It generates the ownership and identification shares of the image based on visual cryptography. It decomposes the image into low and high frequency sub-bands. The low frequency sub-band is further divided into blocks of the same size after shuffling it, and then the singular value decomposition is applied to each randomly selected block. Shares are generated by comparing one of the elements in the first column of the left orthogonal matrix with its corresponding element in the right orthogonal matrix of the singular value decomposition of the block of the low frequency sub-band. The experimental results show that the proposed scheme clearly verifies the copyright of the digital images and is robust enough to withstand several image processing attacks. Comparison with other related visual cryptography-based algorithms reveals that the proposed method gives better performance. The proposed method is especially resilient against the rotation attack.
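
    The share-generation step described above is easy to mimic: one bit per block, obtained by comparing an element of the left singular matrix with the corresponding element of the right one. A small NumPy sketch under assumed parameters (4x4 blocks, element (1, 0); the shuffling, sub-band split and visual-cryptography share encoding are omitted):

        import numpy as np

        def share_bits(blocks):
            # One bit per block from comparing corresponding elements of
            # the left and right orthogonal matrices of the block's SVD
            bits = []
            for B in blocks:
                U, s, Vt = np.linalg.svd(B)
                V = Vt.T
                bits.append(1 if abs(U[1, 0]) > abs(V[1, 0]) else 0)
            return np.array(bits)

        rng = np.random.default_rng(1)
        blocks = [rng.random((4, 4)) for _ in range(8)]
        print(share_bits(blocks))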

  2. Adaptive Hybrid Visual Servo Regulation of Mobile Robots Based on Fast Homography Decomposition

    Directory of Open Access Journals (Sweden)

    Chunfu Wu

    2015-01-01

    Full Text Available For a monocular camera-based mobile robot system, an adaptive hybrid visual servo regulation algorithm based on a fast homography decomposition method is proposed to drive the mobile robot to its desired position and orientation, even when the object's imaging depth and the camera's extrinsic position parameters are unknown. Firstly, the particular properties of the homography caused by the mobile robot's 2-DOF motion are taken into account to derive a fast homography decomposition method. Secondly, the homography matrix and the extracted orientation error, incorporated with a single feature point of the desired view, are utilized to form an error vector and its open-loop error function. Finally, Lyapunov-based techniques are exploited to construct an adaptive regulation control law, followed by experimental verification. The experimental results show that the proposed fast homography decomposition method is not only simple and efficient, but also highly precise. Meanwhile, the designed control law enables mobile robot position and orientation regulation despite the lack of depth information and the camera's extrinsic position parameters.

  3. Ammonia synthesis and decomposition on a Ru-based catalyst modeled by first-principles

    DEFF Research Database (Denmark)

    Hellman, A.; Honkala, Johanna Karoliina; Remediakis, Ioannis

    2009-01-01

    A recently published first-principles model for ammonia synthesis on an unpromoted Ru-based catalyst is extended to also describe ammonia decomposition. In addition, further analysis concerning trends in ammonia productivity, surface conditions during the reaction, and macro-properties, such as apparent activation energies and reaction orders, is provided. All observed trends in activity are captured by the model, and the absolute value of ammonia synthesis/decomposition productivity is predicted to within a factor of 1-100 depending on the experimental conditions. Moreover it is shown: (i) that small changes in the relative adsorption potential energies are sufficient to get a quantitative agreement between theory and experiment (Appendix A) and (ii) that it is possible to reproduce results from the first-principles model by a simple micro-kinetic model (Appendix B).

  4. Reliability-based optimization of engineering structures

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2008-01-01

    The theoretical basis for reliability-based structural optimization within the framework of Bayesian statistical decision theory is briefly described. Reliability-based cost benefit problems are formulated and exemplified with structural optimization. The basic reliability-based optimization problems are generalized to the following extensions: interactive optimization, inspection and repair costs, systematic reconstruction, and re-assessment of existing structures. Illustrative examples are presented, including a simple introductory example and a decision problem related to bridge re…

  5. Gradient-based methods for production optimization of oil reservoirs

    Energy Technology Data Exchange (ETDEWEB)

    Suwartadi, Eka

    2012-07-01

    Production optimization for water flooding in the secondary phase of oil recovery is the main topic of this thesis. The emphasis is on numerical optimization algorithms, tested on case examples using simple hypothetical oil reservoirs. Gradient-based optimization, which utilizes adjoint-based gradient computation, is used to solve the optimization problems. The first contribution of this thesis is to address output-constraint problems. These kinds of constraints are natural in production optimization; limiting total water production and water cut at producer wells are examples of such constraints. To maintain the feasibility of an optimization solution, a Lagrangian barrier method is proposed to handle the output constraints. This method incorporates the output constraints into the objective function, thus avoiding additional computations for the constraints' gradient (Jacobian) which may be detrimental to the efficiency of the adjoint method. The second contribution is the study of the use of second-order adjoint-gradient information for production optimization. In order to speed up the convergence rate of the optimization, one usually uses quasi-Newton approaches such as the BFGS and SR1 methods. These methods compute an approximation of the inverse of the Hessian matrix given the first-order gradient from the adjoint method. The methods may not give significant speedup if the Hessian is ill-conditioned. We have developed and implemented the Hessian matrix computation using the adjoint method. Due to the high computational cost of the Newton method itself, we instead compute the Hessian-times-vector product, which is used in a conjugate gradient algorithm. Finally, the last contribution of this thesis is surrogate optimization for water flooding in the presence of output constraints. Two kinds of model order reduction techniques are applied to build surrogate models: proper orthogonal decomposition (POD) and the discrete empirical interpolation method (DEIM).
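
    The Hessian-times-vector idea mentioned above never forms the Hessian: a truncated conjugate-gradient loop only needs products H·v, which can be obtained from one extra adjoint solve or, as sketched here, approximated by finite differences of the gradient. A generic NumPy sketch with a stand-in quadratic objective (not the thesis code):

        import numpy as np

        def grad(x):
            # Stand-in for an adjoint-computed gradient of the objective
            A = np.diag([1.0, 4.0, 9.0])
            return A @ x - np.ones(3)

        def hess_vec(x, v, eps=1e-6):
            # Finite-difference approximation of the Hessian-vector product
            return (grad(x + eps * v) - grad(x - eps * v)) / (2 * eps)

        def newton_cg_step(x, iters=20, tol=1e-10):
            # Solve H p = -g approximately with conjugate gradients
            g = grad(x)
            p, r = np.zeros_like(x), -g.copy()
            d = r.copy()
            for _ in range(iters):
                Hd = hess_vec(x, d)
                alpha = r @ r / (d @ Hd)
                p += alpha * d
                r_new = r - alpha * Hd
                if np.linalg.norm(r_new) < tol:
                    break
                d = r_new + (r_new @ r_new) / (r @ r) * d
                r = r_new
            return p

        x = np.zeros(3)
        print(x + newton_cg_step(x))   # one Newton-CG step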

  6. The Ontology of Knowledge Based Optimization

    OpenAIRE

    Nasution, Mahyuddin K. M.

    2012-01-01

    Optimization has become a central topic of study in mathematics, with many areas and different applications. However, many themes of optimization that came from different areas do not have close ties to the original concepts. This paper addresses some variants of optimization problems using an ontology in order to build a basis of knowledge about optimization, and then uses it to enhance strategies to achieve knowledge-based optimization.

  7. Detection of the ice assertion on aircraft using empirical mode decomposition enhanced by multi-objective optimization

    Science.gov (United States)

    Bagherzadeh, Seyed Amin; Asadi, Davood

    2017-05-01

    In search of a precise method for analyzing nonlinear and non-stationary flight data of an aircraft in the icing condition, an Empirical Mode Decomposition (EMD) algorithm enhanced by multi-objective optimization is introduced. In the proposed method, dissimilar IMF definitions are considered by the Genetic Algorithm (GA) in order to find the best decision parameters of the signal trend. To resolve disadvantages of the classical algorithm caused by the envelope concept, the signal trend is estimated directly in the proposed method. Furthermore, in order to simplify the performance and understanding of the EMD algorithm, the proposed method obviates the need for a repeated sifting process. The proposed enhanced EMD algorithm is verified by some benchmark signals. Afterwards, the enhanced algorithm is applied to simulated flight data in the icing condition in order to detect the ice assertion on the aircraft. The results demonstrate the effectiveness of the proposed EMD algorithm in aircraft ice detection by providing a figure of merit for the icing severity.

  8. COMPOSITE POLYMERIC ADDITIVES DESIGNATED FOR CONCRETE MIXES BASED ON POLYACRYLATES, PRODUCTS OF THERMAL DECOMPOSITION OF POLYAMIDE-6 AND LOW-MOLECULAR POLYETHYLENE

    Directory of Open Access Journals (Sweden)

    Polyakov Vyacheslav Sergeevich

    2012-07-01

    … the optimal composite additive, which increases the stiffening time of the cement grout and improves the water resistance and the compressive strength of the concrete, is a composition of polyacrylates and polymethacrylates, products of thermal decomposition of polyamide-6, and low-molecular polyethylene in the weight ratio of 1:1:0.5.

  9. Distributed Optimization based Dynamic Tariff for Congestion Management in Distribution Networks

    DEFF Research Database (Denmark)

    Huang, Shaojun; Wu, Qiuwei; Zhao, Haoran

    2017-01-01

    This paper proposes a distributed optimization based dynamic tariff (DDT) method for congestion management in distribution networks with high penetration of electric vehicles (EVs) and heat pumps (HPs). The DDT method employs a decomposition based optimization method to have aggregators explicitly … is able to minimize the overall energy consumption cost and line loss cost, which is different from previous decomposition-based methods such as multi-agent system methods. In addition, a reconditioning method and an integral controller are introduced to improve convergence of the distributed optimization … where challenges arise due to multiple congestion points, multiple types of flexible demands and network constraints. The case studies demonstrate the efficacy of the DDT method for congestion management in distribution networks.

  10. The design and implementation of signal decomposition system of CL multi-wavelet transform based on DSP builder

    Science.gov (United States)

    Huang, Yan; Wang, Zhihui

    2015-12-01

    With the development of FPGAs, DSP Builder is widely applied to design system-level algorithms. The CL multi-wavelet algorithm is more advanced and effective than scalar wavelets in signal decomposition. Thus, a CL multi-wavelet system based on DSP Builder is designed for the first time in this paper. The system mainly contains three parts: a pre-filtering subsystem, a one-level decomposition subsystem and a two-level decomposition subsystem. It can be converted into the hardware description language VHDL by the Signal Compiler block, which can be used in Quartus II. Analysis of the energy indicator shows that this system outperforms the Daubechies wavelet in signal decomposition. Furthermore, it has proved to be suitable for the implementation of signal fusion based on SoPC hardware, and it will become a solid foundation in this new field.
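
    Independently of the hardware flow, the energy indicator mentioned above is simply the fraction of signal energy captured in each subband after decomposition. A plain-NumPy one-level Haar example (not the CL multi-wavelet, and not the DSP Builder design):

        import numpy as np

        def haar_level(x):
            # One-level Haar analysis: approximation and detail bands
            x = x[: len(x) // 2 * 2]
            a = (x[0::2] + x[1::2]) / np.sqrt(2)
            d = (x[0::2] - x[1::2]) / np.sqrt(2)
            return a, d

        t = np.linspace(0, 1, 256, endpoint=False)
        sig = np.sin(2 * np.pi * 5 * t) + 0.1 * np.sin(2 * np.pi * 60 * t)

        a, d = haar_level(sig)
        energy = lambda v: float(np.sum(v ** 2))
        # Energy indicator: fraction of signal energy per subband
        print(energy(a) / energy(sig), energy(d) / energy(sig))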

  11. Decomposition and Cross-Product-Based Method for Computing the Dynamic Equation of Robots

    Directory of Open Access Journals (Sweden)

    Ching-Long Shih

    2012-08-01

    Full Text Available This paper aims to demonstrate a clear relationship between Lagrange equations and Newton-Euler equations regarding computational methods for robot dynamics, from which we derive a systematic method for using either symbolic or on-line numerical computations. Based on the decomposition approach and the cross-product operation, a computing method for robot dynamics can be easily developed. The advantages of this computing framework are that it can be used for both symbolic and on-line numeric computation purposes, and that it can also be applied to biped systems as well as some simple closed-chain robot systems.

  12. Ensemble empirical mode decomposition based fluorescence spectral noise reduction for low concentration PAHs

    Science.gov (United States)

    Wang, Shu-tao; Yang, Xue-ying; Kong, De-ming; Wang, Yu-tian

    2017-11-01

    A new noise reduction method based on ensemble empirical mode decomposition (EEMD) is proposed to improve the detection of fluorescence spectra. Polycyclic aromatic hydrocarbon (PAH) pollutants, an important current source of environmental pollution, are highly oncogenic. PAH pollutants can be detected by fluorescence spectroscopy; however, the instrument produces noise in the experiment, and weak fluorescent signals can be affected by it, so we propose a way to denoise the spectra and improve the detection. Firstly, we use a fluorescence spectrometer to detect PAHs and obtain fluorescence spectra. Subsequently, noise is reduced by the EEMD algorithm. Finally, the experimental results show that the proposed method is feasible.

  13. Phase-only asymmetric optical cryptosystem based on random modulus decomposition

    Science.gov (United States)

    Xu, Hongfeng; Xu, Wenhui; Wang, Shuaihua; Wu, Shaofan

    2018-06-01

    We propose a phase-only asymmetric optical cryptosystem based on random modulus decomposition (RMD). The cryptosystem is presented for effectively improving the capacity to resist various attacks, including the attack of iterative algorithms. On the one hand, RMD and phase encoding are combined to remove the constraints that can be used in the attacking process. On the other hand, the security keys (geometrical parameters) introduced by Fresnel transform can increase the key variety and enlarge the key space simultaneously. Numerical simulation results demonstrate the strong feasibility, security and robustness of the proposed cryptosystem. This cryptosystem will open up many new opportunities in the application fields of optical encryption and authentication.

  14. A Matrix-Free Posterior Ensemble Kalman Filter Implementation Based on a Modified Cholesky Decomposition

    Directory of Open Access Journals (Sweden)

    Elias D. Nino-Ruiz

    2017-07-01

    Full Text Available In this paper, a matrix-free posterior ensemble Kalman filter implementation based on a modified Cholesky decomposition is proposed. The method works as follows: the precision matrix of the background error distribution is estimated based on a modified Cholesky decomposition. The resulting estimator can be expressed in terms of Cholesky factors which can be updated based on a series of rank-one matrices in order to approximate the precision matrix of the analysis distribution. By using this matrix, the posterior ensemble can be built by either sampling from the posterior distribution or using synthetic observations. Furthermore, the computational effort of the proposed method is linear with regard to the model dimension and the number of observed components from the model domain. Experimental tests are performed making use of the Lorenz-96 model. The results reveal that the accuracy of the proposed implementation in terms of root-mean-square error is similar to, and in some cases better than, that of a well-known ensemble Kalman filter (EnKF) implementation: the local ensemble transform Kalman filter. In addition, the results are comparable to those obtained by the EnKF with large ensemble sizes.
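
    For reference, the modified Cholesky estimator at the heart of the method regresses each state variable on its predecessors; the regression coefficients and residual variances assemble the precision matrix directly. A compact dense (unlocalized) NumPy sketch, assuming ensemble anomalies X with one row per state variable:

        import numpy as np

        def modified_cholesky_precision(X):
            # Estimate a precision matrix from ensemble anomalies X (n x m)
            # via the modified Cholesky decomposition B^{-1} = T' D^{-1} T,
            # with T unit lower triangular and D diagonal.
            n, m = X.shape
            T = np.eye(n)
            D = np.empty(n)
            D[0] = X[0] @ X[0] / (m - 1)
            for i in range(1, n):
                Z = X[:i]                      # predecessors of variable i
                # Least-squares regression of X[i] on its predecessors
                beta, *_ = np.linalg.lstsq(Z.T, X[i], rcond=None)
                T[i, :i] = -beta
                resid = X[i] - beta @ Z
                D[i] = resid @ resid / (m - 1)
            return T.T @ np.diag(1.0 / D) @ T

        rng = np.random.default_rng(2)
        X = rng.standard_normal((5, 200))
        X -= X.mean(axis=1, keepdims=True)     # anomalies
        print(np.round(modified_cholesky_precision(X), 2))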

  15. A Type-2 Block-Component-Decomposition Based 2D AOA Estimation Algorithm for an Electromagnetic Vector Sensor Array

    Directory of Open Access Journals (Sweden)

    Yu-Fei Gao

    2017-04-01

    Full Text Available This paper investigates a two-dimensional angle of arrival (2D AOA) estimation algorithm for the electromagnetic vector sensor (EMVS) array based on Type-2 block component decomposition (BCD) tensor modeling. Such a tensor decomposition method can take full advantage of the multidimensional structural information of electromagnetic signals to accomplish blind estimation of array parameters with higher resolution. However, existing tensor decomposition methods encounter many restrictions in applications to the EMVS array, such as the strict requirement on the uniqueness conditions of the decomposition, the inability to handle partially-polarized signals, etc. To solve these problems, this paper investigates tensor modeling for partially-polarized signals of an L-shaped EMVS array. The 2D AOA estimation algorithm based on rank-(L1, L2, ·) BCD is developed, and the uniqueness condition of the decomposition is analyzed. By means of the estimated steering matrix, the proposed algorithm can automatically achieve angle pair-matching. Numerical experiments demonstrate that the present algorithm has the advantages of both accuracy and robustness of parameter estimation. Even under conditions of lower SNR, small angular separation and limited snapshots, the proposed algorithm still possesses better performance than subspace methods and the canonical polyadic decomposition (CPD) method.

  16. Exact Partial Information Decompositions for Gaussian Systems Based on Dependency Constraints

    Directory of Open Access Journals (Sweden)

    Jim W. Kay

    2018-03-01

    Full Text Available The Partial Information Decomposition, introduced by Williams P. L. et al. (2010), provides a theoretical framework to characterize and quantify the structure of multivariate information sharing. A new method (I_dep) has recently been proposed by James R. G. et al. (2017) for computing a two-predictor partial information decomposition over discrete spaces. A lattice of maximum entropy probability models is constructed based on marginal dependency constraints, and the unique information that a particular predictor has about the target is defined as the minimum increase in joint predictor-target mutual information when that particular predictor-target marginal dependency is constrained. Here, we apply the I_dep approach to Gaussian systems, for which the marginally constrained maximum entropy models are Gaussian graphical models. Closed form solutions for the I_dep PID are derived for both univariate and multivariate Gaussian systems. Numerical and graphical illustrations are provided, together with practical and theoretical comparisons of the I_dep PID with the minimum mutual information partial information decomposition (I_mmi), which was discussed by Barrett A. B. (2015). The results obtained using I_dep appear to be more intuitive than those given with other methods, such as I_mmi, in which the redundant and unique information components are constrained to depend only on the predictor-target marginal distributions. In particular, it is proved that the I_mmi method generally produces larger estimates of redundancy and synergy than does the I_dep method. In discussion of the practical examples, the PIDs are complemented by the use of tests of deviance for the comparison of Gaussian graphical models.

  17. Polarimetric SAR interferometry-based decomposition modelling for reliable scattering retrieval

    Science.gov (United States)

    Agrawal, Neeraj; Kumar, Shashi; Tolpekin, Valentyn

    2016-05-01

    Fully polarimetric SAR (PolSAR) data are used for scattering information retrieval from a single SAR resolution cell. A single SAR resolution cell may contain contributions from more than one scattering object, so single- or dual-polarized data do not provide all the possible scattering information; fully polarimetric data are used to overcome this problem. It was observed in a previous study that fully polarimetric data of different dates provide different scattering values for the same object, and the coefficient of determination obtained from linear regression between volume scattering and aboveground biomass (AGB) differs for SAR datasets of different dates. Scattering values are important input elements for modelling forest aboveground biomass. In this research work an approach is proposed to get reliable scattering from an interferometric pair of fully polarimetric RADARSAT-2 data. The field survey for data collection was carried out for the Barkot forest from November 10th to December 5th, 2014. Stratified random sampling was used to collect field data for circumference at breast height (CBH) and tree height measurement. Field-measured AGB was compared with the volume scattering elements obtained from decomposition modelling of individual PolSAR images and the PolInSAR coherency matrix. Yamaguchi 4-component decomposition was implemented to retrieve scattering elements from the SAR data. PolInSAR-based decomposition was the great challenge in this work, and it was implemented with certain assumptions to create a Hermitian coherency matrix from the co-registered polarimetric interferometric pair of SAR data. Regression analysis between field-measured AGB and the volume scattering element obtained from PolInSAR data showed the highest coefficient of determination (0.589). The same regression with volume scattering elements of the individual SAR images showed coefficients of determination of 0.49 and 0.50 for the master and slave images respectively. This study recommends use of …

  18. Determination of knock characteristics in spark ignition engines: an approach based on ensemble empirical mode decomposition

    International Nuclear Information System (INIS)

    Li, Ning; Liang, Caiping; Yang, Jianguo; Zhou, Rui

    2016-01-01

    Knock is one of the major constraints to improving the performance and thermal efficiency of spark ignition (SI) engines. It can also result in severe permanent engine damage under certain operating conditions. Based on the ensemble empirical mode decomposition (EEMD), this paper proposes a new approach to determine the knock characteristics in SI engines. By adding uniformly distributed and finite white Gaussian noise, the EEMD can preserve signal continuity in different scales and therefore alleviates the mode-mixing problem occurring in the classic empirical mode decomposition (EMD). The feasibility of applying the EEMD to detect the knock signatures of a test SI engine via the pressure signal measured from the combustion chamber and the vibration signal measured from the cylinder head is investigated. Experimental results show that the EEMD-based method is able to detect the knock signatures from both the pressure signal and the vibration signal, even in the initial stage of knock. Finally, by comparing the application results with those obtained by the short-time Fourier transform (STFT), Wigner–Ville distribution (WVD) and discrete wavelet transform (DWT), the superiority of the EEMD method in determining knock characteristics is demonstrated. (paper)

  19. A new solar power output prediction based on hybrid forecast engine and decomposition model.

    Science.gov (United States)

    Zhang, Weijiang; Dang, Hongshe; Simoes, Rolando

    2018-06-12

    Given the growing role of photovoltaic (PV) energy as a clean energy source in electrical networks and its uncertain nature, PV energy prediction has been studied by researchers in recent decades. This problem directly affects operation of the power network, and due to the high volatility of the signal an accurate prediction model is demanded. A new prediction model based on the Hilbert-Huang transform (HHT) and the integration of improved empirical mode decomposition (IEMD) with feature selection and a forecast engine is presented in this paper. The proposed approach is divided into three main sections. In the first section, the signal is decomposed by the proposed IEMD as an accurate decomposition tool. To increase the accuracy of the proposed method, a new interpolation method is used instead of cubic spline curve (CSC) fitting in the EMD. The obtained output is then entered into the new feature selection procedure to choose the best candidate inputs. Finally, the signal is predicted by a hybrid forecast engine composed of support vector regression (SVR) based on an intelligent algorithm. The effectiveness of the proposed approach has been verified on a number of real-world engineering test cases in comparison with other well-known models. The obtained results prove the validity of the proposed method. Copyright © 2018 ISA. Published by Elsevier Ltd. All rights reserved.

  20. Calculation and decomposition of indirect carbon emissions from residential consumption in China based on the input–output model

    International Nuclear Information System (INIS)

    Zhu Qin; Peng Xizhe; Wu Kaiya

    2012-01-01

    Based on the input–output model and the comparable price input–output tables, the current paper investigates the indirect carbon emissions from residential consumption in China in 1992–2005, and examines the impacts on the emissions using the structural decomposition method. The results demonstrate that the rise of the residential consumption level played a dominant role in the growth of residential indirect emissions. The persistent decline of the carbon emission intensity of industrial sectors presented a significant negative effect on the emissions. The change in the intermediate demand of industrial sectors resulted in an overall positive effect, except in the initial years. The increase in population prompted the indirect emissions to a certain extent; however, population size is no longer the main reason for the growth of the emissions. The change in the consumption structure showed a weak positive effect, demonstrating the importance for China to control and slow down the increase in the emissions while in the process of optimizing the residential consumption structure. The results imply that the means for restructuring the economy and improving efficiency, rather than for lowering the consumption scale, should be adopted by China to achieve the targets of energy conservation and emission reduction. - Highlights: ► We build the input–output model of indirect carbon emissions from residential consumption. ► We calculate the indirect emissions using the comparable price input–output tables. ► We examine the impacts on the indirect emissions using the structural decomposition method. ► The change in the consumption structure showed a weak positive effect on the emissions. ► China's population size is no longer the main reason for the growth of the emissions.
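
    For readers unfamiliar with the input-output machinery: indirect emissions follow from applying sectoral emission intensities to the Leontief inverse times final (residential) demand. A minimal NumPy illustration with a hypothetical three-sector economy (all numbers made up):

        import numpy as np

        A = np.array([[0.10, 0.20, 0.05],     # technical coefficients
                      [0.15, 0.10, 0.10],
                      [0.05, 0.05, 0.20]])
        f = np.array([2.0, 1.5, 0.8])         # CO2 intensity per unit output
        y = np.array([100.0, 80.0, 60.0])     # residential final demand

        L = np.linalg.inv(np.eye(3) - A)      # Leontief inverse (I - A)^-1
        indirect_emissions = f @ L @ y        # total embodied CO2
        print(indirect_emissions)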

  1. Decomposition of Polarimetric SAR Images Based on Second- and Third-order Statistics Analysis

    Science.gov (United States)

    Kojima, S.; Hensley, S.

    2012-12-01

    There are many papers concerning the decomposition of polarimetric SAR imagery. Most of them are based on the second-order statistics analysis that Freeman and Durden [1] suggested for the reflection symmetry condition, which implies that the co-polarization and cross-polarization correlations are close to zero. Since then a number of improvements and enhancements have been proposed to better understand the underlying backscattering mechanisms present in polarimetric SAR images. For example, Yamaguchi et al. [2] added the helix component to Freeman's model and developed a 4-component scattering model for the non-reflection symmetry condition. In addition, Arii et al. [3] developed an adaptive model-based decomposition method that could estimate both the mean orientation angle and a degree of randomness for the canopy scattering for each pixel in a SAR image without the reflection symmetry condition. The purpose of this research is to develop a new decomposition method based on second- and third-order statistics analysis to estimate the surface, dihedral, volume and helix scattering components from polarimetric SAR images without specific assumptions concerning the model for the volume scattering. In addition, we evaluate this method by using both simulation and real UAVSAR data and compare it with other methods. We express the volume scattering component using the wire formula and formulate the relationship equation between the backscattered echo and each component, such as the surface, dihedral, volume and helix, via linearization based on second- and third-order statistics. In the third-order statistics, we calculate the correlation of the correlation coefficients for each polarimetric channel and obtain one new relationship equation to estimate each polarization component, such as HH, VV and VH, for the volume. As a result, the equation for the helix component in this method is the same formula as the one in Yamaguchi's method. However, the equation for the volume …

  2. Chatter identification in milling of Inconel 625 based on recurrence plot technique and Hilbert vibration decomposition

    Directory of Open Access Journals (Sweden)

    Lajmert Paweł

    2018-01-01

    Full Text Available In this paper the cutting stability in the milling process of the nickel-based alloy Inconel 625 is analysed. This problem is often considered theoretically, but the theoretical findings do not always agree with experimental results. For this reason, the paper presents different methods for instability identification during the real machining process. A stability lobe diagram is created based on data obtained in an impact test of an end mill. Next, cutting tests were conducted in which the axial depth of cut was gradually increased in order to find the stability limit. Finally, based on the cutting force measurements, the stability estimation problem is investigated using the recurrence plot technique and the Hilbert vibration decomposition method.
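
    The recurrence plot used above is simple to form: threshold the pairwise distances between delay-embedded samples of the measured signal. A small NumPy sketch with arbitrarily chosen embedding and threshold (not tied to the paper's cutting-force data):

        import numpy as np

        def recurrence_plot(x, dim=3, delay=2, eps=0.1):
            # Binary recurrence matrix of a delay-embedded scalar series
            n = len(x) - (dim - 1) * delay
            emb = np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])
            dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
            return (dist < eps).astype(np.uint8)

        t = np.linspace(0, 4 * np.pi, 400)
        force = np.sin(3 * t) + 0.05 * np.random.default_rng(3).standard_normal(t.size)
        R = recurrence_plot(force)
        # The recurrence rate rises as the dynamics become more regular
        print(R.mean())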

  3. Coarse-to-fine markerless gait analysis based on PCA and Gauss-Laguerre decomposition

    Science.gov (United States)

    Goffredo, Michela; Schmid, Maurizio; Conforto, Silvia; Carli, Marco; Neri, Alessandro; D'Alessio, Tommaso

    2005-04-01

    Human movement analysis is generally performed through the utilization of marker-based systems, which allow reconstructing, with high levels of accuracy, the trajectories of markers placed on specific points of the human body. Marker-based systems, however, show some drawbacks that can be overcome by the use of video systems applying markerless techniques. In this paper, a specifically designed computer vision technique for the detection and tracking of relevant body points is presented. It is based on the Gauss-Laguerre decomposition, and a principal component analysis (PCA) technique is used to circumscribe the region of interest. Results obtained on both synthetic and experimental tests show a significant reduction of the computational costs, with no significant reduction of the tracking accuracy.

  4. Bayesian Multi-Energy Computed Tomography reconstruction approaches based on decomposition models

    International Nuclear Information System (INIS)

    Cai, Caifang

    2013-01-01

    Multi-Energy Computed Tomography (MECT) makes it possible to obtain multiple fractions of basis materials without segmentation. In medical applications, one is the soft-tissue-equivalent water fraction and the other is the hard-matter-equivalent bone fraction. Practical MECT measurements are usually obtained with polychromatic X-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log pre-processing and the water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on non-linear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Following Bayesian inference, the decomposition fractions and observation variance are estimated by using the joint Maximum A Posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a non-quadratic cost function. To solve it, the use of a monotone Conjugate Gradient (CG) algorithm with suboptimal descent steps is proposed. The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also …

  5. Qualitative Fault Isolation of Hybrid Systems: A Structural Model Decomposition-Based Approach

    Science.gov (United States)

    Bregon, Anibal; Daigle, Matthew; Roychoudhury, Indranil

    2016-01-01

    Quick and robust fault diagnosis is critical to ensuring safe operation of complex engineering systems. A large number of techniques are available to provide fault diagnosis in systems with continuous dynamics. However, many systems in aerospace and industrial environments are best represented as hybrid systems that consist of discrete behavioral modes, each with its own continuous dynamics. These hybrid dynamics make the on-line fault diagnosis task computationally more complex due to the large number of possible system modes and the existence of autonomous mode transitions. This paper presents a qualitative fault isolation framework for hybrid systems based on structural model decomposition. The fault isolation is performed by analyzing the qualitative information of the residual deviations. However, in hybrid systems this process becomes complex due to the possible existence of observation delays, which can cause observed deviations to be inconsistent with the expected deviations for the current mode of the system. The great advantage of structural model decomposition is that (i) it allows designing residuals that respond to only a subset of the faults, and (ii) every time a mode change occurs, only a subset of the residuals needs to be reconfigured, thus reducing the complexity of the reasoning process for isolation purposes. To demonstrate and test the validity of our approach, we use an electric circuit simulation as the case study.

  6. Wavelet decomposition based principal component analysis for face recognition using MATLAB

    Science.gov (United States)

    Sharma, Mahesh Kumar; Sharma, Shashikant; Leeprechanon, Nopbhorn; Ranjan, Aashish

    2016-03-01

    For the realization of face recognition systems in the static as well as the real-time frame, algorithms such as principal component analysis, independent component analysis, linear discriminant analysis, neural networks and genetic algorithms have been used for decades. This paper discusses a wavelet decomposition based principal component analysis approach for face recognition. Principal component analysis is chosen over other algorithms due to its relative simplicity, efficiency, and robustness. The term face recognition stands for identifying a person from his facial gestures; in some sense it resembles factor analysis, i.e., extraction of the principal components of an image. Principal component analysis is subject to some drawbacks, mainly its poor discriminatory power and the large computational load of finding eigenvectors. These drawbacks can be greatly reduced by combining wavelet transform decomposition for feature extraction with principal component analysis for pattern representation and classification, analyzing the facial gestures in the space and time domains, where frequency and time are used interchangeably. From the experimental results, it is envisaged that this face recognition method achieves a significant improvement in recognition rate as well as better computational efficiency.

  7. Empirical Research on China’s Carbon Productivity Decomposition Model Based on Multi-Dimensional Factors

    Directory of Open Access Journals (Sweden)

    Jianchang Lu

    2015-04-01

    Full Text Available Based on the international community's analysis of the present CO2 emissions situation, a Log Mean Divisia Index (LMDI) decomposition model is proposed in this paper, aiming to reflect the decomposition of carbon productivity. The model is designed by analyzing the factors that affect carbon productivity. China's contribution to carbon productivity is analyzed along the dimensions of influencing factors, regional structure and industrial structure. The study concludes that: (a) economic output, provincial carbon productivity and energy structure are the most influential factors, which is consistent with China's current actual policy; (b) the distribution patterns of economic output, carbon productivity and energy structure in different regions do not coincide with the traditional view of China's regional economic development patterns; (c) considering regional protectionism, the actual regional situation needs to be taken into account at the same time; (d) in the study of the industrial structure, the contribution of industry is the most prominent factor for China's carbon productivity, while industrial restructuring has not yet been carried out well enough.
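
    For reference, additive LMDI attributes a change in an aggregate V = sum_i x_i*y_i to its factors through logarithmic-mean weights, and the attribution is exact. A generic two-factor NumPy sketch with made-up data (not the paper's dataset):

        import numpy as np

        def logmean(a, b):
            # Logarithmic mean L(a, b); equals a when a == b
            return np.where(np.isclose(a, b), a, (a - b) / (np.log(a) - np.log(b)))

        # Two multiplicative factors per sector, base year 0 and target year 1
        x0, x1 = np.array([1.0, 2.0]), np.array([1.2, 1.9])   # e.g. activity
        y0, y1 = np.array([0.5, 0.8]), np.array([0.4, 0.9])   # e.g. intensity

        V0, V1 = x0 * y0, x1 * y1
        w = logmean(V1, V0)

        effect_x = float(np.sum(w * np.log(x1 / x0)))   # contribution of factor x
        effect_y = float(np.sum(w * np.log(y1 / y0)))   # contribution of factor y

        # Additive LMDI is exact: the effects sum to the total change
        assert np.isclose(effect_x + effect_y, V1.sum() - V0.sum())
        print(effect_x, effect_y)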

  8. A demodulating approach based on local mean decomposition and its applications in mechanical fault diagnosis

    International Nuclear Information System (INIS)

    Chen, Baojia; He, Zhengjia; Chen, Xuefeng; Cao, Hongrui; Cai, Gaigai; Zi, Yanyang

    2011-01-01

    Since machinery fault vibration signals are usually multicomponent modulation signals, how to decompose complex signals into a set of mono-components whose instantaneous frequency (IF) has physical sense has become a key issue. Local mean decomposition (LMD) is a new kind of time–frequency analysis approach which can decompose a signal adaptively into a set of product function (PF) components. In this paper, a modulation feature extraction method based on LMD is proposed. The envelope of a PF is the instantaneous amplitude (IA), and the derivative of the unwrapped phase of the pure frequency-modulated (FM) signal is the IF. The computed IF and IA are displayed together in the form of a time–frequency representation (TFR). Modulation features can be extracted from spectrum analysis of the IA and IF. In order to make the IF physically meaningful, the phase-unwrapping algorithm and an IF processing method based on extrema are presented in detail along with a simulated FM signal example. Besides, the dependence of the LMD method on the signal-to-noise ratio (SNR) is also investigated by analyzing synthetic signals to which Gaussian noise is added. As a result, the recommended critical SNRs for PF decomposition and IF extraction are given according to the practical application. Successful fault diagnosis on a rolling bearing and a gear of locomotive bogies shows that LMD has better identification capacity for modulation signal processing and is very suitable for failure detection in rotating machinery.
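
    Once a mono-component PF is available, its IA and IF follow from standard demodulation; the sketch below uses Hilbert demodulation from SciPy for that final step (the LMD sifting that produces the PF is omitted, and the test tone is made up):

        import numpy as np
        from scipy.signal import hilbert

        fs = 1000.0
        t = np.arange(0, 1, 1 / fs)
        # Toy mono-component: amplitude- and frequency-modulated tone
        pf = (1 + 0.3 * np.cos(2 * np.pi * 3 * t)) * np.sin(
            2 * np.pi * 50 * t + 2 * np.sin(2 * np.pi * 5 * t))

        analytic = hilbert(pf)
        ia = np.abs(analytic)                          # instantaneous amplitude
        phase = np.unwrap(np.angle(analytic))
        inst_freq = np.diff(phase) * fs / (2 * np.pi)  # instantaneous frequency (Hz)

        print(ia[:3], inst_freq[:3])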

  9. Sparse Localization with a Mobile Beacon Based on LU Decomposition in Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Chunhui Zhao

    2015-09-01

    Full Text Available Node localization is the core problem in wireless sensor networks. It can be solved with the help of powerful beacons, which are equipped with global positioning system devices and so know their own locations. In this article, we present a novel sparse localization approach with a mobile beacon based on LU decomposition. Our scheme first translates the node localization problem into a 1-sparse vector recovery problem by establishing a sparse localization model. Then, LU decomposition pre-processing is adopted to solve the problem that the measurement matrix does not meet the restricted isometry property. Later, the 1-sparse vector can be exactly recovered by compressive sensing. Finally, as the 1-sparse vector is only approximately sparse, a weighted centroid scheme is introduced to accurately locate the node. Simulation and analysis show that our scheme has better localization performance and lower requirements for the mobile beacon than the MAP+GC, MAP-M, and MAP-MN schemes. In addition, obstacles and DOI have little effect on the novel scheme, and it has great localization performance under low SNR; thus, the proposed scheme is robust.
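
    Recovering a 1-sparse vector from compressive measurements reduces to picking the dictionary column that best matches the observation. A toy NumPy sketch of that step (grid size, measurement matrix and noise level are made up; the LU pre-processing is omitted):

        import numpy as np

        rng = np.random.default_rng(4)
        n_grid, n_meas = 100, 20
        Phi = rng.standard_normal((n_meas, n_grid))   # measurement matrix

        x = np.zeros(n_grid)
        x[37] = 1.0                                   # node occupies grid cell 37
        y = Phi @ x + 0.01 * rng.standard_normal(n_meas)

        # 1-sparse recovery: the most correlated (normalized) column wins
        scores = np.abs(Phi.T @ y) / np.linalg.norm(Phi, axis=0)
        print(int(np.argmax(scores)))                 # -> 37 with high probability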

  10. CT Image Sequence Restoration Based on Sparse and Low-Rank Decomposition

    Science.gov (United States)

    Gou, Shuiping; Wang, Yueyue; Wang, Zhilong; Peng, Yong; Zhang, Xiaopeng; Jiao, Licheng; Wu, Jianshe

    2013-01-01

    Blurry organ boundaries and soft tissue structures present a major challenge in biomedical image restoration. In this paper, we propose a low-rank decomposition-based method for computed tomography (CT) image sequence restoration, where the CT image sequence is decomposed into a sparse component and a low-rank component. A new point spread function for the Wiener filter is employed to efficiently remove blur in the sparse component, while Wiener filtering with a Gaussian PSF is used to recover the average image of the low-rank component. The recovered CT image sequence is then obtained by combining the recovered low-rank image with the recovered sparse image sequence. Our method achieves restoration results with higher contrast, sharper organ boundaries and richer soft tissue structure information, compared with existing CT image restoration methods. The robustness of our method was assessed with numerical experiments using three different low-rank models: Robust Principal Component Analysis (RPCA), Linearized Alternating Direction Method with Adaptive Penalty (LADMAP) and Go Decomposition (GoDec). Experimental results demonstrated that the RPCA model was the most suitable for CT images with small noise, whereas the GoDec model was the best for heavily noisy CT images. PMID:24023764
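
    A GoDec-style alternating split into low-rank and sparse parts can be sketched in a few lines; the rank, threshold and data here are illustrative, and this is not the authors' full restoration pipeline (which also includes Wiener deblurring).

        import numpy as np

        def sparse_lowrank_split(X, rank=2, lam=0.5, iters=100):
            # Alternate a truncated-SVD low-rank step with a soft-threshold
            # sparse step, in the spirit of GoDec/RPCA.
            S = np.zeros_like(X)
            for _ in range(iters):
                U, s, Vt = np.linalg.svd(X - S, full_matrices=False)
                L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
                R = X - L
                S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
            return L, S

        rng = np.random.default_rng(1)
        L_true = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 60))
        S_true = np.zeros_like(L_true)
        S_true[rng.random(L_true.shape) < 0.05] = 5.0   # sparse "detail" spikes
        L, S = sparse_lowrank_split(L_true + S_true)
        print(np.linalg.matrix_rank(L), np.count_nonzero(S))  # L near rank 2; S on the spikes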

  11. The Speech multi features fusion perceptual hash algorithm based on tensor decomposition

    Science.gov (United States)

    Huang, Y. B.; Fan, M. H.; Zhang, Q. Y.

    2018-03-01

    With constant progress in modern speech communication technologies, speech data are prone to be attacked by noise or malicious tampering. In order to give the speech perceptual hash algorithm strong robustness and high efficiency, this paper proposes a speech perceptual hash algorithm based on tensor decomposition and multiple features. The algorithm obtains speech components by wavelet packet decomposition and analyses their perceptual features: the LPCC, LSP and ISP features of each speech component are extracted to constitute the speech feature tensor. Speech authentication is done by generating hash values through feature-matrix quantification using the median. Experimental results show that the proposed algorithm is robust to content-preserving operations compared with similar algorithms, and it is able to resist the attack of common background noise. The algorithm is also computationally efficient, so it is able to meet the real-time requirements of speech communication and complete speech authentication quickly.

  12. Regional income inequality model based on Theil index decomposition and weighted variance coefficient

    Science.gov (United States)

    Sitepu, H. R.; Darnius, O.; Tambunan, W. N.

    2018-03-01

    Regional income inequality is an important issue in the study of the economic development of a region. Rapid economic development may not be in accordance with people’s per capita income. Methods of measuring regional income inequality have been suggested by many experts. This research used the Theil index and the weighted variation coefficient to measure regional income inequality. The decomposition of regional income inequality into workforce productivity and workforce participation, based on the Theil index, can be presented as a linear relation. Using the economic assumption for sector j, the sectoral income value, and the workforce rate, the workforce productivity imbalance can be decomposed into inter-sector and intra-sector components. Next, the weighted variation coefficient is defined for the revenue and the productivity of the workforce. From the square of the weighted variation coefficient, it was found that the decomposition of the regional revenue imbalance can be analyzed by finding out how much each component contributes to the regional imbalance, which, in this research, was analyzed across nine sectors of economic business.
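
    The core Theil computation and its between/within decomposition can be sketched as follows; the regional numbers are hypothetical, and the paper's weighted variation coefficient is not reproduced.

        import numpy as np

        def theil_T(income, population):
            # Theil T index from income and population shares.
            y = income / income.sum()
            p = population / population.sum()
            return float(np.sum(y * np.log(y / p)))

        # Hypothetical regional data, grouped into two sectors.
        income = np.array([30.0, 20.0, 25.0, 25.0])
        pop = np.array([10.0, 15.0, 10.0, 15.0])
        groups = [np.array([0, 1]), np.array([2, 3])]

        Yg = np.array([income[g].sum() for g in groups])
        Pg = np.array([pop[g].sum() for g in groups])
        T_between = theil_T(Yg, Pg)
        T_within = sum((Yg[i] / income.sum()) * theil_T(income[g], pop[g])
                       for i, g in enumerate(groups))
        # The total index decomposes exactly into between + within components.
        print(theil_T(income, pop), T_between + T_within)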

  13. Multi-Scale Pixel-Based Image Fusion Using Multivariate Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    Naveed ur Rehman

    2015-05-01

    Full Text Available A novel scheme to perform the fusion of multiple images using the multivariate empirical mode decomposition (MEMD algorithm is proposed. Standard multi-scale fusion techniques make a priori assumptions regarding input data, whereas standard univariate empirical mode decomposition (EMD-based fusion techniques suffer from inherent mode mixing and mode misalignment issues, characterized respectively by either a single intrinsic mode function (IMF containing multiple scales or the same indexed IMFs corresponding to multiple input images carrying different frequency information. We show that MEMD overcomes these problems by being fully data adaptive and by aligning common frequency scales from multiple channels, thus enabling their comparison at a pixel level and subsequent fusion at multiple data scales. We then demonstrate the potential of the proposed scheme on a large dataset of real-world multi-exposure and multi-focus images and compare the results against those obtained from standard fusion algorithms, including the principal component analysis (PCA, discrete wavelet transform (DWT and non-subsampled contourlet transform (NCT. A variety of image fusion quality measures are employed for the objective evaluation of the proposed method. We also report the results of a hypothesis testing approach on our large image dataset to identify statistically-significant performance differences.
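
    For contrast with MEMD, the DWT baseline mentioned above can be sketched with a max-magnitude fusion rule on detail coefficients; this assumes the pywt package and uses random stand-in images, not the paper's dataset.

        import numpy as np
        import pywt

        def dwt_fuse(img_a, img_b, wavelet="db2", level=2):
            # Average the approximation bands; keep the max-magnitude detail
            # coefficient at every position and scale.
            ca = pywt.wavedec2(img_a, wavelet, level=level)
            cb = pywt.wavedec2(img_b, wavelet, level=level)
            fused = [0.5 * (ca[0] + cb[0])]
            for da, db in zip(ca[1:], cb[1:]):
                fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                                   for a, b in zip(da, db)))
            return pywt.waverec2(fused, wavelet)

        rng = np.random.default_rng(2)
        a, b = rng.random((64, 64)), rng.random((64, 64))
        print(dwt_fuse(a, b).shape)  # (64, 64)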

  14. Effects of catalyst-bed’s structure parameters on decomposition and combustion characteristics of an ammonium dinitramide (ADN)-based thruster

    International Nuclear Information System (INIS)

    Yu, Yu-Song; Li, Guo-Xiu; Zhang, Tao; Chen, Jun; Wang, Meng

    2015-01-01

    Highlights: • The decomposition and combustion process is investigated by a numerical method. • Heat transfer in the catalyst bed is modeled using a non-isothermal and radiation model. • Wall heat transfer can affect the distribution of temperature and species. • The catalyst-bed length, diameter and wall thickness are optimized. - Abstract: The present investigation numerically studies the evolution of decomposition and combustion within an ADN-based thruster, and the effects of the catalyst-bed’s three structure parameters (length, diameter, and wall thickness) on the general performance of the thruster are systematically investigated. Based upon the calculated results, the distribution of temperature follows a Gaussian manner at the exits of the catalyst bed and the combustion chamber, and the temperature is noticeably affected by each of the three structure parameters. As each of the three structure parameters rises, the temperature first increases and then decreases, and there exists an optimal design value at which the temperature is highest. Comparing the maximal temperature at the combustion chamber’s exit and the specific impulse shows that the wall thickness plays an important role in the general performance of the ADN-based thruster, while the catalyst-bed’s length has the weakest effect on the general performance among the three structure parameters.

  15. WEALTH-BASED INEQUALITY IN CHILD IMMUNIZATION IN INDIA: A DECOMPOSITION APPROACH.

    Science.gov (United States)

    Debnath, Avijit; Bhattacharjee, Nairita

    2018-05-01

    Summary: Despite years of health and medical advancement, children still suffer from infectious diseases that are vaccine preventable. India reacted in 1978 by launching the Expanded Programme on Immunization in an attempt to reduce the incidence of vaccine-preventable diseases (VPDs). Although the nation has made remarkable progress over the years, there is significant variation in immunization coverage across different socioeconomic strata. This study attempted to identify the determinants of wealth-based inequality in child immunization using a new, modified method. The present study was based on 11,001 eligible ever-married women aged 15-49 and their children aged 12-23 months. Data were from the third District Level Household and Facility Survey (DLHS-3) of India, 2007-08. Using an approximation of Erreygers' decomposition technique, the study identified unequal access to antenatal care as the main factor associated with inequality in immunization coverage in India.

  16. Multiple image encryption scheme based on pixel exchange operation and vector decomposition

    Science.gov (United States)

    Xiong, Y.; Quan, C.; Tay, C. J.

    2018-02-01

    We propose a new multiple-image encryption scheme based on a pixel exchange operation and basic vector decomposition in the Fourier domain. In this algorithm, original images are imported via a pixel exchange operator, from which scrambled images and pixel position matrices are obtained. Scrambled images encrypted into phase information are imported using the proposed algorithm, and phase keys are obtained from the difference between the scrambled images and the synthesized vectors in a charge-coupled device (CCD) plane. The final synthesized vector is used as the input to a double random phase encoding (DRPE) scheme. In the proposed encryption scheme, the pixel position matrices and phase keys serve as additional private keys to enhance the security of the cryptosystem, which is based on a 4-f system. Numerical simulations are presented to demonstrate the feasibility and robustness of the proposed encryption scheme.

  17. Dynamic Power Dispatch Considering Electric Vehicles and Wind Power Using Decomposition Based Multi-Objective Evolutionary Algorithm

    Directory of Open Access Journals (Sweden)

    Boyang Qu

    2017-12-01

    Full Text Available The intermittency of wind power and the large-scale integration of electric vehicles (EVs) bring new challenges to the reliability and economy of power system dispatching. In this paper, a novel multi-objective dynamic economic emission dispatch (DEED) model is proposed considering EVs and the uncertainties of wind power. The total fuel cost and pollutant emission are considered as the optimization objectives, and the vehicle-to-grid (V2G) power and the conventional generator output power are set as the decision variables. The stochastic wind power is derived from a Weibull probability distribution function. Under the premise of meeting the system energy and users’ travel demands, the charging and discharging behavior of the EVs is dynamically managed. Moreover, we propose a two-step dynamic constraint processing strategy for the decision variables based on a penalty function and, on this basis, improve the Multi-Objective Evolutionary Algorithm Based on Decomposition (MOEA/D). The proposed model and approach are verified on a 10-generator system. The results demonstrate that the proposed DEED model and the improved MOEA/D algorithm are effective and reasonable.
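
    The decomposition step at the heart of MOEA/D can be illustrated with a Tchebycheff scalarization; the objective values and weights below are stand-ins, not the DEED model itself.

        import numpy as np

        def tchebycheff(F, w, z_star):
            # Scalar subproblem value: weighted max deviation from the ideal point.
            return np.max(w * np.abs(F - z_star), axis=-1)

        # Illustrative two-objective values (cost, emission), already scaled.
        F = np.array([[0.1, 0.9], [0.4, 0.5], [0.8, 0.2]])
        w = np.array([0.5, 0.5])        # the weight vector of one subproblem
        z_star = F.min(axis=0)          # running estimate of the ideal point
        print(tchebycheff(F, w, z_star))  # the smallest value wins this subproblem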

  18. Emergy-Based Regional Socio-Economic Metabolism Analysis: An Application of Data Envelopment Analysis and Decomposition Analysis

    OpenAIRE

    Zilong Zhang; Xingpeng Chen; Peter Heck

    2014-01-01

    Integrated analysis on socio-economic metabolism could provide a basis for understanding and optimizing regional sustainability. The paper conducted socio-economic metabolism analysis by means of the emergy accounting method coupled with data envelopment analysis and decomposition analysis techniques to assess the sustainability of Qingyang city and its eight sub-region system, as well as to identify the major driving factors of performance change during 2000–2007, to serve as the basis for f...

  19. Classifiers based on optimal decision rules

    KAUST Repository

    Amin, Talha

    2013-11-25

    Based on a dynamic programming approach, we design algorithms for the sequential optimization of exact and approximate decision rules relative to length and coverage [3, 4]. In this paper, we use optimal rules to construct classifiers and study two questions: (i) which rules are better from the point of view of classification, exact or approximate; and (ii) which order of optimization gives better classifier performance: length, length+coverage, coverage, or coverage+length. Experimental results show that, on average, classifiers based on exact rules are better than classifiers based on approximate rules, and sequential optimization (length+coverage or coverage+length) is better than ordinary optimization (length or coverage).

  20. Classifiers based on optimal decision rules

    KAUST Repository

    Amin, Talha M.; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2013-01-01

    Based on a dynamic programming approach, we design algorithms for the sequential optimization of exact and approximate decision rules relative to length and coverage [3, 4]. In this paper, we use optimal rules to construct classifiers and study two questions: (i) which rules are better from the point of view of classification, exact or approximate; and (ii) which order of optimization gives better classifier performance: length, length+coverage, coverage, or coverage+length. Experimental results show that, on average, classifiers based on exact rules are better than classifiers based on approximate rules, and sequential optimization (length+coverage or coverage+length) is better than ordinary optimization (length or coverage).

  1. Analysis of Human's Motions Based on Local Mean Decomposition in Through-wall Radar Detection

    Science.gov (United States)

    Lu, Qi; Liu, Cai; Zeng, Zhaofa; Li, Jing; Zhang, Xuebing

    2016-04-01

    Observation of human motions through a wall is an important issue in security applications and search-and-rescue. Radar has advantages in looking through walls where other sensors give low performance or cannot be used at all. Ultrawideband (UWB) radar has high spatial resolution as a result of employing ultranarrow pulses. It is able to distinguish closely positioned targets and provide time-lapse information on targets. Moreover, UWB radar shows good performance in wall penetration because the inherently short pulses spread their energy over a broad frequency range. Human motions show periodic features including respiration, the swing of arms and legs, and fluctuations of the torso. Detection of human targets is based on the fact that there is always periodic motion due to breathing or other body movements like walking. The radar gains reflections from each human body part and adds the reflections at each time sample. The periodic movements cause micro-Doppler modulation in the reflected radar signals. Time-frequency analysis methods are considered effective tools to analyze and extract the micro-Doppler effects caused by periodic movements in the reflected radar signal; examples are the short-time Fourier transform (STFT), the wavelet transform (WT), and the Hilbert-Huang transform (HHT). The local mean decomposition (LMD), initially developed by Smith (2005), decomposes amplitude- and frequency-modulated signals into a small set of product functions (PFs), each of which is the product of an envelope signal and a frequency-modulated signal from which a time-varying instantaneous phase and instantaneous frequency can be derived. By bypassing the Hilbert transform, LMD has no demodulation error coming from the window effect and involves no negative frequency without physical sense. Also, the instantaneous attributes obtained by LMD are more stable and precise than those obtained by empirical mode decomposition (EMD), because LMD uses smoothed local means and amplitudes obtained by moving averaging rather than cubic-spline envelopes.
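
    As a minimal illustration of the time-frequency view (using the STFT mentioned above rather than LMD itself), here is a spectrogram of a synthetic micro-Doppler-like return; the carrier and modulation parameters are invented for the example.

        import numpy as np
        from scipy.signal import spectrogram

        fs = 1000
        t = np.arange(0, 2, 1 / fs)
        # Carrier phase-modulated at 1.5 Hz, mimicking a periodic body motion.
        x = np.cos(2 * np.pi * 100 * t + 6 * np.sin(2 * np.pi * 1.5 * t))
        f, tt, Sxx = spectrogram(x, fs=fs, nperseg=128, noverlap=96)
        # The spectral ridge oscillates around 100 Hz at the modulation rate.
        print(f[np.argmax(Sxx, axis=0)][:10])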

  2. Deconvolutions based on singular value decomposition and the pseudoinverse: a guide for beginners.

    Science.gov (United States)

    Hendler, R W; Shrager, R I

    1994-01-01

    Singular value decomposition (SVD) is deeply rooted in the theory of linear algebra, and because of this is not readily understood by a large group of researchers who could profit from its application. In this paper, we discuss the subject on a level that should be understandable to scientists who are not well versed in linear algebra. However, because it is necessary that certain key concepts in linear algebra be appreciated in order to comprehend what is accomplished by SVD, we present the section, 'Bare basics of linear algebra'. This is followed by a discussion of the theory of SVD. Next we present step-by-step examples to illustrate how SVD is applied to deconvolute a titration involving a mixture of three pH indicators. One noiseless case is presented as well as two cases where either a fixed or varying noise level is present. Finally, we discuss additional deconvolutions of mixed spectra based on the use of the pseudoinverse.
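

    The deconvolution step the paper builds up to reduces to a pseudoinverse multiplication; a minimal sketch with synthetic spectra of three "indicators" (hypothetical data, not the paper's titration):

        import numpy as np

        rng = np.random.default_rng(3)
        S = np.abs(rng.standard_normal((100, 3)))   # pure spectra, one per indicator
        C_true = np.abs(rng.standard_normal((3, 25)))          # concentrations
        D = S @ C_true + 0.01 * rng.standard_normal((100, 25)) # measured mixtures

        # Deconvolution via the pseudoinverse: the least-squares estimate C = S⁺D.
        C_est = np.linalg.pinv(S) @ D
        print(np.max(np.abs(C_est - C_true)))       # small residual error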

  3. Decomposition of atmospheric water content into cluster contributions based on theoretical association equilibrium constants

    International Nuclear Information System (INIS)

    Slanina, Z.

    1987-01-01

    Water vapor is treated as an equilibrium mixture of water clusters (H 2 O)i using quantum-chemical evaluation of the equilibrium constants of the water associations. The model is adapted to the conditions of atmospheric humidity, and a decomposition algorithm is suggested that uses the temperature and the mass concentration of water as input information; it is used to demonstrate the evaluation of water oligomer populations in the Earth's atmosphere. An upper limit on the populations is established based on the water content of saturated aqueous vapor. It is proved that the cluster population in saturated water vapor, as well as in the Earth's atmosphere for a typical temperature/humidity profile, increases with increasing temperature.

  4. A domain decomposition approach for full-field measurements based identification of local elastic parameters

    KAUST Repository

    Lubineau, Gilles

    2015-03-01

    We propose a domain decomposition formalism specifically designed for the identification of local elastic parameters based on full-field measurements. This technique is made possible by a multi-scale implementation of the constitutive compatibility method. Contrary to classical approaches, the constitutive compatibility method first resolves some eigenmodes of the stress field over the structure rather than directly trying to recover the material properties. A two-step micro/macro reconstruction of the stress field is performed: a Dirichlet identification problem is first solved over every subdomain, and the macroscopic equilibrium is then ensured between the subdomains in a second step. We apply the method to large linear elastic 2D identification problems to efficiently produce estimates of the material properties at a much lower computational cost than classical approaches.

  5. Fringe-projection profilometry based on two-dimensional empirical mode decomposition.

    Science.gov (United States)

    Zheng, Suzhen; Cao, Yiping

    2013-11-01

    In 3D shape measurement, because deformed fringes often contain low-frequency information degraded by random noise and background intensity information, a new fringe-projection profilometry method is proposed based on 2D empirical mode decomposition (2D-EMD). The fringe pattern is first decomposed into a number of intrinsic mode functions by 2D-EMD. Because the method provides partial noise reduction, the background components can be removed to obtain the fundamental components needed to perform the Hilbert transformation to retrieve the phase information. The 2D-EMD can effectively extract the modulation phase of a single-direction fringe and an inclined fringe pattern because it is a fully 2D analysis method and considers the relationship between adjacent lines of a fringe pattern. In addition, as the method does not add noise repeatedly, as ensemble EMD does, the data processing time is shortened. Computer simulations and experiments prove the feasibility of this method.

  6. An epileptic seizures detection algorithm based on the empirical mode decomposition of EEG.

    Science.gov (United States)

    Orosco, Lorena; Laciar, Eric; Correa, Agustina Garces; Torres, Abel; Graffigna, Juan P

    2009-01-01

    Epilepsy is a neurological disorder that affects around 50 million people worldwide. Seizure detection is an important component in the diagnosis of epilepsy. In this study, the Empirical Mode Decomposition (EMD) method was applied to the development of an automatic epileptic seizure detection algorithm. The algorithm first computes the Intrinsic Mode Functions (IMFs) of EEG records, then calculates the energy of each IMF and performs the detection based on an energy threshold and a minimum-duration decision. The algorithm was tested on 9 invasive EEG records provided and validated by the Epilepsy Center of the University Hospital of Freiburg. In the 90 segments analyzed (39 with epileptic seizures), the sensitivity and specificity obtained with the method were 56.41% and 75.86%, respectively. It can be concluded that EMD is a promising method for epileptic seizure detection in EEG records.
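
    A sketch of the energy-threshold detection rule, assuming the PyEMD package (pip install EMD-signal) and an illustrative threshold; the Freiburg data and the exact duration rule are not reproduced.

        import numpy as np
        from PyEMD import EMD  # pip install EMD-signal

        def detect_seizure(eeg, fs, energy_thresh, min_dur_s=1.0):
            # Flag the segment when any IMF's energy exceeds the threshold;
            # the minimum-duration rule reduces to a segment-length check here.
            imfs = EMD()(eeg)                     # rows are IMFs
            energies = np.sum(imfs ** 2, axis=1)
            return bool(np.any(energies > energy_thresh)) and len(eeg) / fs >= min_dur_s

        fs = 256
        t = np.arange(0, 4, 1 / fs)
        eeg = np.sin(2 * np.pi * 3 * t) + 0.5 * np.random.default_rng(4).standard_normal(t.size)
        print(detect_seizure(eeg, fs, energy_thresh=50.0))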

  7. Synthesis and thermal decomposition kinetics of Th(IV) complex with unsymmetrical Schiff base ligand

    International Nuclear Information System (INIS)

    Fan Yuhua; Bi Caifeng; Liu Siquan; Yang Lirong; Liu Feng; Ai Xiaokang

    2006-01-01

    A new unsymmetrical Schiff base ligand (H 2 LLi) was synthesized using L-lysine, o-vanillin and salicylaldehyde. The thorium(IV) complex of this ligand, [Th(H 2 L)(NO 3 )](NO 3 ) 2 ·3H 2 O, has been prepared and characterized by elemental analyses, IR, UV and molar conductance. The thermal decomposition kinetics of the complex for the second stage was studied under non-isothermal conditions by TG and DTG methods. The kinetic equation may be expressed as dα/dt = A·e^(−E/RT)·(1/2)(1−α)·[−ln(1−α)]^(−1). The kinetic parameters (E, A), the activation entropy ΔS‡ and the activation free energy ΔG‡ were also calculated. (author)

  8. A hybrid bird mating optimizer algorithm with teaching-learning-based optimization for global numerical optimization

    Directory of Open Access Journals (Sweden)

    Qingyang Zhang

    2015-02-01

    Full Text Available The Bird Mating Optimizer (BMO) is a novel meta-heuristic optimization algorithm inspired by the intelligent mating behavior of birds. However, it is still insufficient in convergence speed and solution quality. To overcome these drawbacks, this paper proposes a hybrid algorithm (TLBMO), which is established by combining the advantages of Teaching-learning-based optimization (TLBO) and the Bird Mating Optimizer (BMO). The performance of TLBMO is evaluated on 23 benchmark functions and compared with seven state-of-the-art approaches, namely BMO, TLBO, Artificial Bee Colony (ABC), Particle Swarm Optimization (PSO), Fast Evolution Programming (FEP), Differential Evolution (DE), and Group Search Optimization (GSO). Experimental results indicate that the proposed method performs better than the other existing algorithms for global numerical optimization.
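
    The TLBO half of the hybrid can be sketched as one teacher-plus-learner generation; this is a generic TLBO step on a sphere function, not the TLBMO hybrid itself.

        import numpy as np

        def tlbo_step(pop, fit, objective, rng):
            n, d = pop.shape
            # Teacher phase: move the class toward the best learner.
            teacher = pop[np.argmin(fit)]
            TF = rng.integers(1, 3)                     # teaching factor in {1, 2}
            cand = pop + rng.random((n, d)) * (teacher - TF * pop.mean(axis=0))
            cand_fit = np.apply_along_axis(objective, 1, cand)
            better = cand_fit < fit
            pop[better], fit[better] = cand[better], cand_fit[better]
            # Learner phase: each learner moves relative to a random peer.
            for i in range(n):
                j = int(rng.integers(n))
                if j == i:
                    continue
                step = pop[i] - pop[j] if fit[i] < fit[j] else pop[j] - pop[i]
                trial = pop[i] + rng.random(d) * step
                f = objective(trial)
                if f < fit[i]:
                    pop[i], fit[i] = trial, f
            return pop, fit

        sphere = lambda v: float(np.sum(v ** 2))
        rng = np.random.default_rng(5)
        pop = rng.uniform(-5, 5, (20, 4))
        fit = np.apply_along_axis(sphere, 1, pop)
        for _ in range(50):
            pop, fit = tlbo_step(pop, fit, sphere, rng)
        print(fit.min())   # close to the optimum at 0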

  9. A medium term bulk production cost model based on decomposition techniques

    Energy Technology Data Exchange (ETDEWEB)

    Ramos, A.; Munoz, L. [Univ. Pontificia Comillas, Madrid (Spain). Inst. de Investigacion Tecnologica; Martinez-Corcoles, F.; Martin-Corrochano, V. [IBERDROLA, Madrid (Spain)

    1995-11-01

    This model provides the minimum variable cost subject to operating constraints (generation, transmission and fuel constraints). Generation constraints include the power reserve margin with respect to the system peak load, Kirchhoff's first law at each node, hydro energy scheduling, maintenance scheduling, and generation limitations. Transmission constraints cover Kirchhoff's second law and transmission limitations. The generation and transmission economic dispatch is approximated by the linearized (also called DC) load flow. Network losses are included as a non-linear approximation. Fuel constraints include minimum consumption quotas and fuel scheduling for domestic coal thermal plants. This production costing problem is formulated as a large-scale non-linear optimization problem solved by the generalized Benders decomposition method. The master problem determines the inter-period decisions, i.e., maintenance, fuel and hydro scheduling, and each subproblem solves the intra-period decisions, i.e., the generation and transmission economic dispatch for one period. The model has been implemented in GAMS, a mathematical programming language. An application to the large-scale Spanish electric power system is presented. 11 refs

  10. Reliability Based Optimization of Structural Systems

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    1987-01-01

    The optimization problem of designing structural systems such that their reliability is satisfactory during the whole lifetime of the structure is considered in this paper. Some of the quantities modelling the loads and the strength of the structure are modelled as random variables. The reliability...... is estimated using first-order reliability methods (FORM). The design problem is formulated as the optimization problem of minimizing a given cost function such that the reliability of the single elements satisfies given requirements or such that the system reliability satisfies a given requirement....... For these optimization problems it is described how a sensitivity analysis can be performed. Next, new optimization procedures to solve the optimization problems are presented. Two of these procedures solve the system-reliability-based optimization problem sequentially using quasi-analytical derivatives. Finally...

  11. Reliability Based Optimization of Fire Protection

    DEFF Research Database (Denmark)

    Thoft-Christensen, Palle

    It is well known that fire is one of the major risks of serious damage or total loss for several types of structures such as nuclear installations, buildings, offshore platforms/topsides etc. This paper presents a methodology and software for reliability-based optimization of the layout of passive fire protection (PFP) of firewalls and structural members. The paper is partly based on research performed within the EU supported research project B/E-4359 "Optimized Fire Safety of Offshore Structures" and partly on research supported by the Danish Technical Research Council (see Thoft-Christensen [1]). Special emphasis is put on the optimization software developed within the project.

  12. Optical colour image watermarking based on phase-truncated linear canonical transform and image decomposition

    Science.gov (United States)

    Su, Yonggang; Tang, Chen; Li, Biyuan; Lei, Zhenkun

    2018-05-01

    This paper presents a novel optical colour image watermarking scheme based on the phase-truncated linear canonical transform (PT-LCT) and image decomposition (ID). In the proposed scheme, a PT-LCT-based asymmetric cryptography is designed to encode the colour watermark into a noise-like pattern, and an ID-based multilevel embedding method is constructed to embed the encoded colour watermark into a colour host image. The PT-LCT-based asymmetric cryptography, which can be optically implemented by double random phase encoding with a quadratic phase system, provides higher security against various common cryptographic attacks. The ID-based multilevel embedding method, which can be digitally implemented by a computer, disperses the information of the colour watermark better within the colour host image. The proposed colour image watermarking scheme possesses high security and achieves higher robustness while preserving the watermark’s invisibility. The good performance of the proposed scheme has been demonstrated by extensive experiments and comparison with other relevant schemes.

  13. Rapid surface defect detection based on singular value decomposition using steel strips as an example

    Science.gov (United States)

    Sun, Qianlai; Wang, Yin; Sun, Zhiyi

    2018-05-01

    For most surface defect detection methods based on image processing, image segmentation is a prerequisite for determining and locating the defect. In our previous work, a method based on singular value decomposition (SVD) was used to determine and approximately locate surface defects on steel strips without image segmentation. In the SVD-based method, the image to be inspected is projected onto its first left and right singular vectors respectively. If there are defects in the image, there are sharp changes in the projections, so the defects can be determined and located according to sharp changes in the projections of each image to be inspected. This method is simple and practical, but the SVD must be performed for each image to be inspected. Owing to the high time complexity of SVD itself, it does not have a significant advantage in time consumption over image-segmentation-based methods. Here, we present an improved SVD-based method. In the improved method, a defect-free image acquired under the same environment as the images to be inspected is taken as a reference image. The singular vectors of each image to be inspected are replaced by the singular vectors of the reference image, and the SVD is performed only once for the reference image, off-line, before detection of the defects, thus greatly reducing the time required. The improved method is more conducive to real-time defect detection. Experimental results confirm its validity.
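
    A sketch of the improved scheme under stated assumptions (synthetic images; the singular vectors of a defect-free reference are computed once off-line and reused for every inspected image):

        import numpy as np

        rng = np.random.default_rng(6)
        reference = 1.0 + 0.1 * rng.random((128, 128))    # defect-free strip image
        U, s, Vt = np.linalg.svd(reference, full_matrices=False)
        u1, v1 = U[:, 0], Vt[0]                           # computed once, off-line

        def defect_profiles(image):
            # Project the inspected image on the reference singular vectors.
            return u1 @ image, image @ v1                 # column / row profiles

        test = reference.copy()
        test[60:64, 30:34] += 2.0                         # synthetic defect
        col_prof, row_prof = defect_profiles(test)
        # Sharp changes in the profiles mark the defect's column and row position.
        print(int(np.argmax(np.abs(np.diff(col_prof)))),
              int(np.argmax(np.abs(np.diff(row_prof)))))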

  14. A new linear back projection algorithm to electrical tomography based on measuring data decomposition

    Science.gov (United States)

    Sun, Benyuan; Yue, Shihong; Cui, Ziqiang; Wang, Huaxiang

    2015-12-01

    As an advanced measurement technique that is non-radiant and non-intrusive, with rapid response and low cost, the electrical tomography (ET) technique has developed rapidly in recent decades. The ET imaging algorithm plays an important role in the ET imaging process. Linear back projection (LBP) is the most used ET algorithm due to its advantages of a dynamic imaging process, real-time response, and easy realization. But the LBP algorithm is of low spatial resolution due to the natural ‘soft field’ effect and the ‘ill-posed solution’ problem; thus its applicable range is greatly limited. In this paper, an original data decomposition method is proposed in which every ET measurement is decomposed into two independent new data based on the positive and negative sensing areas of the measurement. Consequently, the total number of measurements is extended to twice the number of original measurements, effectively reducing the ‘ill-posed solution’ problem. On the other hand, an index to measure the ‘soft field’ effect is proposed. The index shows that the decomposed data can distinguish between the different contributions of the various units (pixels) to any ET measurement, and can efficiently reduce the ‘soft field’ effect in the ET imaging process. In light of the data decomposition method, a new linear back projection algorithm is proposed to improve the spatial resolution of the ET image. A series of simulations and experiments validate the proposed algorithm in terms of real-time performance and the improvement in spatial resolution.
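
    The plain LBP step that the paper builds on can be sketched as a normalized back projection; the random sensitivity matrix is a stand-in for a real ET sensor model.

        import numpy as np

        rng = np.random.default_rng(7)
        n_meas, n_pix = 104, 100                    # e.g. a 16-electrode system
        S = np.abs(rng.standard_normal((n_meas, n_pix)))   # sensitivity matrix
        g_true = np.zeros(n_pix); g_true[45] = 1.0         # true image (one inclusion)
        lam = S @ g_true                                   # normalized measurements

        # LBP: back-project every measurement through its sensitivity map and
        # normalize by the summed sensitivity of each pixel.
        g_lbp = (S.T @ lam) / (S.T @ np.ones(n_meas))
        print(int(np.argmax(g_lbp)))                # peaks at (or near) pixel 45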

  15. A new linear back projection algorithm to electrical tomography based on measuring data decomposition

    International Nuclear Information System (INIS)

    Sun, Benyuan; Yue, Shihong; Cui, Ziqiang; Wang, Huaxiang

    2015-01-01

    As an advanced measurement technique that is non-radiant and non-intrusive, with rapid response and low cost, the electrical tomography (ET) technique has developed rapidly in recent decades. The ET imaging algorithm plays an important role in the ET imaging process. Linear back projection (LBP) is the most used ET algorithm due to its advantages of a dynamic imaging process, real-time response, and easy realization. But the LBP algorithm is of low spatial resolution due to the natural ‘soft field’ effect and the ‘ill-posed solution’ problem; thus its applicable range is greatly limited. In this paper, an original data decomposition method is proposed in which every ET measurement is decomposed into two independent new data based on the positive and negative sensing areas of the measurement. Consequently, the total number of measurements is extended to twice the number of original measurements, effectively reducing the ‘ill-posed solution’ problem. On the other hand, an index to measure the ‘soft field’ effect is proposed. The index shows that the decomposed data can distinguish between the different contributions of the various units (pixels) to any ET measurement, and can efficiently reduce the ‘soft field’ effect in the ET imaging process. In light of the data decomposition method, a new linear back projection algorithm is proposed to improve the spatial resolution of the ET image. A series of simulations and experiments validate the proposed algorithm in terms of real-time performance and the improvement in spatial resolution. (paper)

  16. Hyperspectral chemical plume detection algorithms based on multidimensional iterative filtering decomposition.

    Science.gov (United States)

    Cicone, A; Liu, J; Zhou, H

    2016-04-13

    Chemicals released in the air can be extremely dangerous for human beings and the environment. Hyperspectral images can be used to identify chemical plumes, however the task can be extremely challenging. Assuming we know a priori that some chemical plume, with a known frequency spectrum, has been photographed using a hyperspectral sensor, we can use standard techniques such as the so-called matched filter or adaptive cosine estimator, plus a properly chosen threshold value, to identify the position of the chemical plume. However, due to noise and inadequate sensing, the accurate identification of chemical pixels is not easy even in this apparently simple situation. In this paper, we present a post-processing tool that, in a completely adaptive and data-driven fashion, allows us to improve the performance of any classification methods in identifying the boundaries of a plume. This is done using the multidimensional iterative filtering (MIF) algorithm (Cicone et al. 2014 (http://arxiv.org/abs/1411.6051); Cicone & Zhou 2015 (http://arxiv.org/abs/1507.07173)), which is a non-stationary signal decomposition method like the pioneering empirical mode decomposition method (Huang et al. 1998 Proc. R. Soc. Lond. A 454, 903. (doi:10.1098/rspa.1998.0193)). Moreover, based on the MIF technique, we propose also a pre-processing method that allows us to decorrelate and mean-centre a hyperspectral dataset. The cosine similarity measure, which often fails in practice, appears to become a successful and outperforming classifier when equipped with such a pre-processing method. We show some examples of the proposed methods when applied to real-life problems. © 2016 The Author(s).
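
    The matched filter mentioned as the baseline detector can be sketched directly; the synthetic background and target spectrum below are stand-ins, and the MIF pre/post-processing is not reproduced.

        import numpy as np

        def matched_filter_scores(cube, target):
            # cube: (n_pixels, n_bands); target: (n_bands,) known plume spectrum.
            mu = cube.mean(axis=0)
            X = cube - mu
            Sigma = np.cov(X, rowvar=False) + 1e-6 * np.eye(cube.shape[1])
            w = np.linalg.solve(Sigma, target - mu)
            w /= (target - mu) @ w                  # unit response to the target
            return X @ w

        rng = np.random.default_rng(8)
        cube = rng.standard_normal((500, 30))       # background pixels
        t = rng.standard_normal(30)                 # target spectrum
        cube[:25] += 0.8 * t                        # implant a weak plume
        scores = matched_filter_scores(cube, t)
        print((np.argsort(scores)[-25:] < 25).mean())  # most top scores are plume pixels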

  17. Ambiguity attacks on robust blind image watermarking scheme based on redundant discrete wavelet transform and singular value decomposition

    Directory of Open Access Journals (Sweden)

    Khaled Loukhaoukha

    2017-12-01

    Full Text Available Among the emergent applications of digital watermarking are copyright protection and proof of ownership. Recently, Makbol and Khoo (2013) proposed for these applications a new robust blind image watermarking scheme based on the redundant discrete wavelet transform (RDWT) and the singular value decomposition (SVD). In this paper, we present two ambiguity attacks on this algorithm which show that it fails when used to provide robustness applications like owner identification, proof of ownership, and transaction tracking. Keywords: Ambiguity attack, Image watermarking, Singular value decomposition, Redundant discrete wavelet transform

  18. Genetic algorithm based separation cascade optimization

    International Nuclear Information System (INIS)

    Mahendra, A.K.; Sanyal, A.; Gouthaman, G.; Bera, T.K.

    2008-01-01

    The conventional separation cascade design procedure does not give an optimum design because of squaring-off, and because the flow rates and the separation factor of the element vary with stage location. Multi-component isotope separation further complicates the design procedure. Cascade design can be stated as a constrained multi-objective optimization. The cascade's expectation from the separating element is multi-objective, i.e., overall separation factor, cut, optimum feed and separative power. The decision maker may aspire to more comprehensive multi-objective goals where optimization of the cascade is coupled with exploration of the separating element's optimization vector space. In real life there are many issues which make it important to understand the decision maker's perception of the cost-quality-speed trade-off and the consistency of preferences. The genetic algorithm (GA) is one such evolutionary technique that can be used for cascade design optimization. This paper addresses various issues involved in the GA-based multi-objective optimization of the separation cascade. A reference-point-based optimization methodology with a GA-based Pareto optimality concept for the separation cascade was found pragmatic and promising. This method should be explored, tested, examined and further developed for binary as well as multi-component separations. (author)

  19. Structural investigation of oxovanadium(IV) Schiff base complexes: X-ray crystallography, electrochemistry and kinetic of thermal decomposition

    Czech Academy of Sciences Publication Activity Database

    Asadi, M.; Asadi, Z.; Savaripoor, N.; Dušek, Michal; Eigner, Václav; Shorkaei, M.R.; Sedaghat, M.

    2015-01-01

    Roč. 136, Feb (2015), 625-634 ISSN 1386-1425 R&D Projects: GA ČR(CZ) GAP204/11/0809 Institutional support: RVO:68378271 Keywords: Oxovanadium(IV) complexes * Schiff base * Kinetics of thermal decomposition * Electrochemistry Subject RIV: BM - Solid Matter Physics; Magnetism Impact factor: 2.653, year: 2015

  20. Capturing alternative secondary structures of RNA by decomposition of base-pairing probabilities.

    Science.gov (United States)

    Hagio, Taichi; Sakuraba, Shun; Iwakiri, Junichi; Mori, Ryota; Asai, Kiyoshi

    2018-02-19

    It is known that functional RNAs often switch their functions by forming different secondary structures. Popular tools for RNA secondary structure prediction, however, predict a single 'best' structure and do not produce alternative structures. There are bioinformatics tools to predict suboptimal structures, but it is difficult to detect which alternative secondary structures are essential. We propose a new computational method to detect essential alternative secondary structures from RNA sequences by decomposing the base-pairing probability matrix. The decomposition is calculated by a newly implemented software tool, RintW, which efficiently computes the base-pairing probability distributions over the Hamming distance from arbitrary reference secondary structures. The proposed approach has been demonstrated on the ROSE element RNA thermometer sequence and the lysine riboswitch, showing that it captures conformational changes in secondary structures. We have shown that alternative secondary structures are captured by decomposing base-pairing probabilities over the Hamming distance. Source code is available from http://www.ncRNA.org/RintW .

  1. Bivariate empirical mode decomposition for ECG-based biometric identification with emotional data.

    Science.gov (United States)

    Ferdinando, Hany; Seppanen, Tapio; Alasaarela, Esko

    2017-07-01

    Emotions modulate ECG signals such that they might affect ECG-based biometric identification in real-life applications. This motivates the search for feature extraction methods on which the emotional state of the subject has minimum impact. This paper evaluates feature extraction based on bivariate empirical mode decomposition (BEMD) for biometric identification when emotion is considered. Using ECG signals from the Mahnob-HCI database for affect recognition, the features were statistical distributions of the dominant frequency after applying BEMD analysis to the ECG signals. The achieved accuracy was 99.5% with high consistency, using a kNN classifier in 10-fold cross-validation to identify 26 subjects when the emotional states of the subjects were ignored. When the emotional states of the subjects were considered, the proposed method also delivered high accuracy, around 99.4%. We conclude that the proposed method offers emotion-independent features for ECG-based biometric identification. The proposed method needs further evaluation with other classifiers and with variation in the ECG signals, e.g. normal ECG vs. ECG with arrhythmias, ECG from various ages, and ECG from other affective databases.
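
    The evaluation protocol (kNN with 10-fold cross-validation) is easy to sketch with scikit-learn; the features below are random stand-ins for the BEMD-derived statistics, not the Mahnob-HCI data.

        import numpy as np
        from sklearn.model_selection import cross_val_score
        from sklearn.neighbors import KNeighborsClassifier

        rng = np.random.default_rng(13)
        # Stand-in features: per-recording statistics of IMF dominant frequencies.
        n_subjects, recs, n_feat = 26, 20, 8
        X = np.vstack([rng.normal(loc=s, scale=0.5, size=(recs, n_feat))
                       for s in range(n_subjects)])
        y = np.repeat(np.arange(n_subjects), recs)

        scores = cross_val_score(KNeighborsClassifier(n_neighbors=3), X, y, cv=10)
        print(round(scores.mean(), 3))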

  2. Investigating properties of the cardiovascular system using innovative analysis algorithms based on ensemble empirical mode decomposition.

    Science.gov (United States)

    Yeh, Jia-Rong; Lin, Tzu-Yu; Chen, Yun; Sun, Wei-Zen; Abbod, Maysam F; Shieh, Jiann-Shing

    2012-01-01

    The cardiovascular system is known to be nonlinear and nonstationary. Traditional linear algorithms for assessing arterial stiffness and the systemic resistance of the cardiac system suffer from the problem of nonstationarity or are inconvenient in practical applications. In this pilot study, two new assessment methods were developed: the first is an ensemble empirical mode decomposition based reflection index (EEMD-RI), while the second is based on the phase shift between ECG and blood pressure (BP) on the cardiac oscillation. Both methods utilise the EEMD algorithm, which is suitable for nonlinear and nonstationary systems. These methods were used to investigate the arterial stiffness and systemic resistance of a pig's cardiovascular system via ECG and BP. The experiment simulated a sequence of continuous changes in blood pressure, from a steady condition to high blood pressure by clamping the artery, and the inverse by relaxing the artery. As a hypothesis, the arterial stiffness and systemic resistance should vary with the blood pressure due to clamping and relaxing the artery. The results show statistically significant correlations between BP, the EEMD-based RI, and the phase shift between ECG and BP on the cardiac oscillation. The two assessment results demonstrate the merits of EEMD for signal analysis.

  3. Development of GPT-based optimization algorithm

    International Nuclear Information System (INIS)

    White, J.R.; Chapman, D.M.; Biswas, D.

    1985-01-01

    The University of Lowell and Westinghouse Electric Corporation are involved in a joint effort to evaluate the potential benefits of generalized/depletion perturbation theory (GPT/DPT) methods for a variety of light water reactor (LWR) physics applications. One part of that work has focused on the development of a GPT-based optimization algorithm for the overall design, analysis, and optimization of LWR reload cores. The use of GPT sensitivity data in formulating the fuel management optimization problem is conceptually straightforward; it is the actual execution of the concept that is challenging. Thus, the purpose of this paper is to address some of the major difficulties, to outline our approach to these problems, and to present some illustrative examples of an efficient GPT-based optimization scheme

  4. Interactive Reliability-Based Optimal Design

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Thoft-Christensen, Palle; Siemaszko, A.

    1994-01-01

    Interactive design/optimization of large, complex structural systems is considered. The objective function is assumed to model the expected costs. The constraints are reliability-based and/or related to deterministic code requirements. Solution of this optimization problem is divided into four main...... tasks, namely finite element analyses, sensitivity analyses, reliability analyses and application of an optimization algorithm. In the paper it is shown how these four tasks can be linked effectively and how existing information on design variables, Lagrange multipliers and the Hessian matrix can

  5. Real-time tumor ablation simulation based on the dynamic mode decomposition method

    KAUST Repository

    Bourantas, George C.; Ghommem, Mehdi; Kagadis, George C.; Katsanos, Konstantinos H.; Loukopoulos, Vassilios C.; Burganos, Vasilis N.; Nikiforidis, George C.

    2014-01-01

    Purpose: The dynamic mode decomposition (DMD) method is used to provide reliable forecasting of tumor ablation treatment simulation in real time, which is much needed in medical practice. To achieve this, an extended Pennes bioheat model must
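
    Although the abstract is truncated, the standard exact-DMD step it relies on is compact enough to sketch; the decaying temperature-like patterns below are synthetic stand-ins for bioheat simulation snapshots.

        import numpy as np

        def dmd(X, r):
            # Exact DMD: X: (n_state, n_snapshots), column k = state at step k.
            X1, X2 = X[:, :-1], X[:, 1:]
            U, s, Vt = np.linalg.svd(X1, full_matrices=False)
            U, s, Vt = U[:, :r], s[:r], Vt[:r]
            A_tilde = U.conj().T @ X2 @ Vt.conj().T / s      # projected operator
            eigvals, W = np.linalg.eig(A_tilde)
            modes = X2 @ Vt.conj().T / s @ W                 # exact DMD modes
            return eigvals, modes

        # Toy field: two spatial patterns decaying at known per-step rates.
        n, m = 200, 60
        x = np.linspace(0, 1, n)[:, None]
        t = np.arange(m)[None, :]
        X = np.sin(np.pi * x) * 0.97 ** t + 0.3 * np.sin(3 * np.pi * x) * 0.90 ** t
        eigvals, _ = dmd(X, r=2)
        print(np.sort(np.abs(eigvals)))   # ~[0.90, 0.97]: decay rates recovered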

  6. A robust indicator based on singular value decomposition for flaw feature detection from noisy ultrasonic signals

    Science.gov (United States)

    Cui, Ximing; Wang, Zhe; Kang, Yihua; Pu, Haiming; Deng, Zhiyang

    2018-05-01

    Singular value decomposition (SVD) has been proven to be an effective de-noising tool for flaw echo signal feature detection in ultrasonic non-destructive evaluation (NDE). However, the arbitrary manner in which an effective singular value is selected weakens the robustness of this technique, and improper selection of effective singular values leads to poor SVD de-noising performance. What is more, the computational complexity of SVD is too large for real-time applications. In this paper, to eliminate the uncertainty in SVD de-noising, a novel flaw indicator, named the maximum singular value indicator (MSI) and based on short-time SVD (STSVD), is proposed for flaw feature detection from measured signals in ultrasonic NDE. In this technique, the measured signal is first truncated into overlapping short-time data segments to localize the feature information of a transient flaw echo, and the MSI is then obtained from the SVD of each short-time data segment. Research shows that this indicator can clearly indicate the location of ultrasonic flaw signals, and the computational complexity of the STSVD-based indicator is significantly reduced with the algorithm proposed in this paper. Both simulation and experiments show that this technique is very efficient for real-time flaw detection from noisy data.
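
    A sketch of the MSI idea follows; how each short-time segment is arranged into a matrix before SVD is an assumption here (an 8-row reshape), as is the synthetic flaw echo.

        import numpy as np

        def msi(signal, win=64, step=8, rows=8):
            # Largest singular value of each short-time segment; the 8-row
            # reshape of a window into a matrix is an illustrative choice.
            out = []
            for start in range(0, len(signal) - win + 1, step):
                seg = signal[start:start + win].reshape(rows, -1)
                out.append(np.linalg.svd(seg, compute_uv=False)[0])
            return np.array(out)

        rng = np.random.default_rng(9)
        s = 0.3 * rng.standard_normal(2048)
        s[1000:1064] += 1.5 * np.hanning(64) * np.sin(2 * np.pi * 0.2 * np.arange(64))
        ind = msi(s)
        print(int(np.argmax(ind)) * 8)              # indicator peaks near sample 1000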

  7. Video steganography based on bit-plane decomposition of wavelet-transformed video

    Science.gov (United States)

    Noda, Hideki; Furuta, Tomofumi; Niimi, Michiharu; Kawaguchi, Eiji

    2004-06-01

    This paper presents a steganography method using lossy compressed video, which provides a natural way to send a large amount of secret data. The proposed method is based on wavelet compression for video data and bit-plane complexity segmentation (BPCS) steganography. BPCS steganography makes use of bit-plane decomposition and the characteristics of the human vision system, where noise-like regions in the bit-planes of a dummy image are replaced with secret data without deteriorating image quality. In wavelet-based video compression methods such as the 3-D set partitioning in hierarchical trees (SPIHT) algorithm and Motion-JPEG2000, the wavelet coefficients of the discrete wavelet transformed video are quantized into a bit-plane structure, and therefore BPCS steganography can be applied in the wavelet domain. 3-D SPIHT-BPCS steganography and Motion-JPEG2000-BPCS steganography are presented and tested; they are the integration of 3-D SPIHT video coding with BPCS steganography, and that of Motion-JPEG2000 with BPCS, respectively. Experimental results show that 3-D SPIHT-BPCS is superior to Motion-JPEG2000-BPCS with regard to embedding performance. In 3-D SPIHT-BPCS steganography, embedding rates of around 28% of the compressed video size are achieved for a twelve-bit representation of wavelet coefficients with no noticeable degradation in video quality.
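
    Bit-plane decomposition and the BPCS complexity measure that decides which regions are "noise-like" can be sketched as follows; the threshold value mentioned in the comment is illustrative.

        import numpy as np

        def bit_planes(block):
            # Decompose an 8-bit block into its eight binary bit-planes.
            return [(block >> k) & 1 for k in range(8)]

        def complexity(plane):
            # BPCS border complexity: 0-1 transitions / maximum possible.
            n = plane.shape[0]
            changes = (np.sum(plane[:, 1:] != plane[:, :-1])
                       + np.sum(plane[1:] != plane[:-1]))
            return changes / (2 * n * (n - 1))

        rng = np.random.default_rng(10)
        block = rng.integers(0, 256, (8, 8), dtype=np.uint8)
        # Planes whose complexity exceeds a threshold (e.g. 0.3) count as
        # noise-like and are the ones BPCS replaces with secret data.
        print([round(complexity(p), 2) for p in bit_planes(block)])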

  8. A new approach for crude oil price analysis based on empirical mode decomposition

    International Nuclear Information System (INIS)

    Zhang, Xun; Wang, Shou-Yang; Lai, K.K.

    2008-01-01

    The importance of understanding the underlying characteristics of international crude oil price movements attracts much attention from academic researchers and business practitioners. Due to the intrinsic complexity of the oil market, however, most of them fail to produce consistently good results. Empirical Mode Decomposition (EMD), recently proposed by Huang et al., appears to be a novel data analysis method for nonlinear and non-stationary time series. By decomposing a time series into a small number of independent and concretely implicational intrinsic modes based on scale separation, EMD explains the generation of time series data from a novel perspective. Ensemble EMD (EEMD) is a substantial improvement of EMD which can better separate the scales naturally by adding white noise series to the original time series and then treating the ensemble averages as the true intrinsic modes. In this paper, we extend EEMD to crude oil price analysis. First, three crude oil price series with different time ranges and frequencies are decomposed into several independent intrinsic modes, from high to low frequency. Second, the intrinsic modes are composed into a fluctuating process, a slowly varying part and a trend based on fine-to-coarse reconstruction. The economic meanings of the three components are identified as short term fluctuations caused by normal supply-demand disequilibrium or some other market activities, the effect of a shock of a significant event, and a long term trend. Finally, the EEMD is shown to be a vital technique for crude oil price analysis. (author)
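
    A minimal EEMD-plus-fine-to-coarse sketch, assuming the PyEMD package and a toy price series; the paper's significance test for where the trend begins is reduced here to printing partial-sum means.

        import numpy as np
        from PyEMD import EEMD  # pip install EMD-signal

        rng = np.random.default_rng(11)
        t = np.linspace(0, 10, 500)
        # Toy "price" series: trend + cycle + noise, standing in for oil prices.
        price = 20 + 2 * t + 5 * np.sin(2 * np.pi * t) + rng.standard_normal(t.size)

        imfs = EEMD(trials=50)(price)   # rows: IMFs, high to low frequency

        # Fine-to-coarse reconstruction: the partial sum whose mean departs
        # from zero marks where the slowly varying part / trend begins.
        partial_means = [np.sum(imfs[:k], axis=0).mean() for k in range(1, len(imfs) + 1)]
        print(np.round(partial_means, 2))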

  9. Multicrack Localization in Rotors Based on Proper Orthogonal Decomposition Using Fractal Dimension and Gapped Smoothing Method

    Directory of Open Access Journals (Sweden)

    Zhiwen Lu

    2016-01-01

    Full Text Available Multicrack localization in operating rotor systems is still a challenge today. Focusing on this challenge, a new approach based on proper orthogonal decomposition (POD) is proposed for multicrack localization in rotors. A two-disc rotor-bearing system with breathing cracks is established by the finite element method, and simulated sensors are distributed along the rotor to obtain the steady-state transverse responses required by POD. Based on the discontinuities introduced in the proper orthogonal modes (POMs) at the locations of cracks, the characteristic POM (CPOM), which is sensitive to crack locations and robust to noise, is selected for crack localization. Instead of using the CPOM directly, which has difficulty localizing incipient cracks, damage indexes using the fractal dimension (FD) and the gapped smoothing method (GSM) are adopted in order to extract the locations more efficiently. The method proposed in this work is validated as effective for multicrack localization in rotors by numerical experiments on rotors in different crack configuration cases, considering the effects of noise. In addition, the feasibility of using fewer sensors is also investigated.
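
    The POD step reduces to an SVD of the snapshot matrix; a minimal sketch with stand-in sensor data (the FD/GSM damage indexes are not reproduced):

        import numpy as np

        def pod_modes(snapshots):
            # snapshots: (n_sensors, n_time); columns are response samples.
            mean = snapshots.mean(axis=1, keepdims=True)
            U, s, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
            return U, s   # columns of U are the POMs, ordered by energy

        rng = np.random.default_rng(12)
        x = np.linspace(0, 1, 30)[:, None]          # 30 "sensors" along the rotor
        t = np.linspace(0, 4 * np.pi, 400)[None, :]
        resp = np.sin(np.pi * x) * np.sin(t) + 0.01 * rng.standard_normal((30, 400))
        U, s = pod_modes(resp)
        # A crack would appear as a local discontinuity in a characteristic POM;
        # gapped smoothing compares U[:, k] against a smoothed copy to expose it.
        print(np.round(s[:3], 2))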

  10. Electrocardiogram signal denoising based on empirical mode decomposition technique: an overview

    International Nuclear Information System (INIS)

    Han, G.; Lin, B.; Xu, Z.

    2017-01-01

    The electrocardiogram (ECG) signal is a nonlinear, non-stationary and weak signal which reflects whether the heart is functioning normally or abnormally. The ECG signal is susceptible to various kinds of noise such as high/low-frequency noise, powerline interference and baseline wander. Hence, the removal of noise from the ECG signal is a vital link in ECG signal processing and plays a significant role in the detection and diagnosis of heart diseases. This review describes the recent developments in ECG signal denoising based on the Empirical Mode Decomposition (EMD) technique, including high-frequency noise removal, powerline interference separation, baseline wander correction, the combination of EMD with other methods, and the EEMD technique. EMD is a promising, though not yet perfect, method for processing nonlinear and non-stationary signals like the ECG signal. Combining EMD with other algorithms is a good solution for improving the performance of noise cancellation. The pros and cons of the EMD technique in ECG signal denoising are discussed in detail. Finally, future work and challenges in ECG signal denoising based on the EMD technique are clarified.

  11. Electrocardiogram signal denoising based on empirical mode decomposition technique: an overview

    Science.gov (United States)

    Han, G.; Lin, B.; Xu, Z.

    2017-03-01

    The electrocardiogram (ECG) signal is a nonlinear, non-stationary and weak signal which reflects whether the heart is functioning normally or abnormally. The ECG signal is susceptible to various kinds of noise such as high/low-frequency noise, powerline interference and baseline wander. Hence, the removal of noise from the ECG signal is a vital link in ECG signal processing and plays a significant role in the detection and diagnosis of heart diseases. This review describes the recent developments in ECG signal denoising based on the Empirical Mode Decomposition (EMD) technique, including high-frequency noise removal, powerline interference separation, baseline wander correction, the combination of EMD with other methods, and the EEMD technique. EMD is a promising, though not yet perfect, method for processing nonlinear and non-stationary signals like the ECG signal. Combining EMD with other algorithms is a good solution for improving the performance of noise cancellation. The pros and cons of the EMD technique in ECG signal denoising are discussed in detail. Finally, future work and challenges in ECG signal denoising based on the EMD technique are clarified.

  12. Demonstration of base catalyzed decomposition process, Navy Public Works Center, Guam, Mariana Islands

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, A.J.; Freeman, H.D.; Brown, M.D.; Zacher, A.H.; Neuenschwander, G.N.; Wilcox, W.A.; Gano, S.R. [Pacific Northwest National Lab., Richland, WA (United States); Kim, B.C.; Gavaskar, A.R. [Battelle Columbus Div., OH (United States)

    1996-02-01

    Base Catalyzed Decomposition (BCD) is a chemical dehalogenation process designed for treating soils and other substrates contaminated with polychlorinated biphenyls (PCBs), pesticides, dioxins, furans, and other hazardous organic substances. PCBs are heavy organic liquids once widely used in industry as lubricants, heat transfer oils, and transformer dielectric fluids. In 1976, production was banned when PCBs were recognized as carcinogenic substances. It was estimated that significant quantities (one billion tons) of U.S. soils, including areas on U.S. military bases outside the country, were contaminated by PCB leaks and spills, and cleanup activities began. The BCD technology was developed in response to these activities. This report details the evolution of the process, from inception to deployment in Guam, and describes the process and system components provided to the Navy to meet the remediation requirements. The report is divided into several sections to cover the range of development and demonstration activities. Section 2.0 gives an overview of the project history. Section 3.0 describes the process chemistry and remediation steps involved. Section 4.0 provides a detailed description of each component and the specific development activities. Section 5.0 details the testing and deployment operations and provides the results of the individual demonstration campaigns. Section 6.0 gives an economic assessment of the process. Section 7.0 presents the conclusions and recommendations from this project. The appendices contain equipment and instrument lists, equipment drawings, and detailed run and analytical data.

  13. Hierarchical prediction of industrial water demand based on refined Laspeyres decomposition analysis.

    Science.gov (United States)

    Shang, Yizi; Lu, Shibao; Gong, Jiaguo; Shang, Ling; Li, Xiaofei; Wei, Yongping; Shi, Hongwang

    2017-12-01

    A recent study decomposed the changes in industrial water use into three hierarchies (output, technology, and structure) using a refined Laspeyres decomposition model, and found monotonic and exclusive trends in the output and technology hierarchies. Based on that research, this study proposes a hierarchical prediction approach to forecast future industrial water demand. Three water demand scenarios (high, medium, and low) were established based on potential future industrial structural adjustments and used to predict water demand for the structural hierarchy. The predictive results of this approach were compared with results from a grey prediction model (GPM (1, 1)). The comparison shows that the results of the two approaches were basically identical, differing by less than 10%. Taking Tianjin, China, as a case, and using data from 2003-2012, this study predicts that industrial water demand will continuously increase, reaching 580 million m 3 , 776.4 million m 3 , and approximately 1.09 billion m 3 by the years 2015, 2020 and 2025, respectively. It is concluded that Tianjin will soon face another water crisis if no immediate measures are taken. This study recommends that Tianjin adjust its industrial structure with water savings as the main objective, and actively seek new sources of water to increase its supply.
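
    The grey prediction comparison model is compact enough to sketch; the demand series below is hypothetical, not the Tianjin data.

        import numpy as np

        def gm11_forecast(x0, horizon):
            # Grey prediction model GM(1,1): fit on the accumulated series.
            x0 = np.asarray(x0, dtype=float)
            x1 = np.cumsum(x0)                               # accumulated series
            z = 0.5 * (x1[1:] + x1[:-1])                     # background values
            B = np.column_stack([-z, np.ones(len(z))])
            a, b = np.linalg.lstsq(B, x0[1:], rcond=None)[0]
            k = np.arange(len(x0) + horizon)
            x1_hat = (x0[0] - b / a) * np.exp(-a * k) + b / a
            return np.diff(x1_hat)[len(x0) - 1:]             # forecast values

        # Hypothetical annual industrial water use (10^8 m3), ten observations.
        demand = [4.1, 4.3, 4.4, 4.7, 4.9, 5.0, 5.2, 5.4, 5.5, 5.8]
        print(np.round(gm11_forecast(demand, horizon=3), 2))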

  14. A hybrid filtering method based on a novel empirical mode decomposition for friction signals

    International Nuclear Information System (INIS)

    Li, Chengwei; Zhan, Liwei

    2015-01-01

    During a measurement, the measured signal usually contains noise. To remove the noise and preserve the important features of the signal, we introduce a hybrid filtering method that uses a new intrinsic mode function (NIMF) and a modified Hausdorff distance. The NIMF is defined as the difference between the noisy signal and each intrinsic mode function (IMF), which is obtained by empirical mode decomposition (EMD), ensemble EMD, complementary ensemble EMD, or complete ensemble EMD with adaptive noise (CEEMDAN). Relevant mode selection is based on the similarity between the first NIMF and the rest of the NIMFs. With this filtering method, EMD and its improved versions are used to filter the simulated and friction signals. The friction signal between an airplane tire and the runway is recorded during a simulated airplane touchdown and features spikes of various amplitudes and noise. The filtering effectiveness of the four hybrid filtering methods is compared and discussed. The results show that the filtering method based on CEEMDAN outperforms the other signal filtering methods. (paper)
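
    A minimal numpy sketch of the NIMF selection step is given below; it assumes the IMFs have already been computed by any EMD variant (e.g., PyEMD's EMD/EEMD/CEEMDAN), and it substitutes a plain correlation similarity for the paper's modified Hausdorff distance, with an illustrative threshold.

```python
import numpy as np

def nimf_filter(x, imfs, threshold=0.5):
    """Hybrid filtering sketch. x: noisy 1-D signal; imfs: (n_imfs, len(x))
    array from any EMD variant. Returns the filtered signal."""
    # New intrinsic mode functions: noisy signal minus each IMF.
    nimfs = x[None, :] - imfs
    # Similarity between the first NIMF and the others (plain correlation
    # here; the paper uses a modified Hausdorff distance).
    sims = np.array([abs(np.corrcoef(nimfs[0], n)[0, 1]) for n in nimfs])
    # The first mode whose NIMF stops resembling the first NIMF marks the
    # boundary between noise-dominated and signal-dominated modes.
    below = np.where(sims < threshold)[0]
    cutoff = below[0] if below.size else 1
    # Discard the leading (noise-dominated) modes and keep the rest.
    return x - imfs[:cutoff].sum(axis=0)
```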

  15. Environmental life-cycle comparisons of two polychlorinated biphenyl remediation technologies: Incineration and base catalyzed decomposition

    International Nuclear Information System (INIS)

    Hu Xintao; Zhu Jianxin; Ding Qiong

    2011-01-01

    Highlights: • We study the environmental impacts of two remediation technologies, Infrared High Temperature Incineration (IHTI) and Base Catalyzed Decomposition (BCD). • Combined midpoint/damage approaches were calculated for the two technologies. • The results showed that the major environmental impacts arose from energy consumption. • BCD has a lower environmental impact than IHTI in terms of single score. - Abstract: Remediation action is critical for the management of polychlorinated biphenyl (PCB) contaminated sites. Dozens of remediation technologies developed internationally can be divided into two general categories: incineration and non-incineration. In this paper, life cycle assessment (LCA) was carried out to study the environmental impacts of these two kinds of remediation technologies at selected PCB contaminated sites, where Infrared High Temperature Incineration (IHTI) and Base Catalyzed Decomposition (BCD) were selected as representatives of incineration and non-incineration. A combined midpoint/damage approach was adopted, using SimaPro 7.2 and IMPACT 2002+, to assess the human toxicity, ecotoxicity, climate change impact, and resource consumption of the five subsystems of the IHTI and BCD technologies, respectively. It was found that the major environmental impacts through the whole life cycle arose from energy consumption in both the IHTI and BCD processes. For IHTI, the primary and secondary combustion subsystem contributes more than 50% of the midpoint impacts concerning carcinogens, respiratory inorganics, respiratory organics, terrestrial ecotoxicity, terrestrial acidification/eutrophication and global warming. In the BCD process, the rotary kiln reactor subsystem presents the highest contribution to almost all the midpoint impacts, including global warming, non-renewable energy, non-carcinogens, terrestrial ecotoxicity and respiratory inorganics. In terms of midpoint impacts, the characterization values for global warming from IHTI and BCD were about 432.35 and 38.5 kg CO2-eq per ton of PCB-containing soil, respectively.

  16. Deterministic and probabilistic interval prediction for short-term wind power generation based on variational mode decomposition and machine learning methods

    International Nuclear Information System (INIS)

    Zhang, Yachao; Liu, Kaipei; Qin, Liang; An, Xueli

    2016-01-01

    Highlights: • Variational mode decomposition is adopted to process the original wind power series. • A novel combined model based on machine learning methods is established. • An improved differential evolution algorithm is proposed for weight adjustment. • Probabilistic interval prediction is performed by quantile regression averaging. - Abstract: Given the increasingly significant energy crisis, the exploitation and utilization of new clean energy is gaining more and more attention. As an important category of renewable energy, wind power generation has become the most rapidly growing renewable energy source in China. However, the intermittency and volatility of wind power have restricted the large-scale integration of wind turbines into power systems. High-precision wind power forecasting is an effective measure to alleviate the negative influence of wind power generation on power systems. In this paper, a novel combined model is proposed to improve the prediction performance of short-term wind power forecasting. Variational mode decomposition is first adopted to handle the instability of the raw wind power series, and the subseries are reconstructed by measuring the sample entropy of the decomposed modes. Base models are then established for each subseries respectively. On this basis, the combined model is developed based on the optimal virtual prediction scheme, whose weight matrix is dynamically adjusted by a self-adaptive multi-strategy differential evolution algorithm. In addition, a probabilistic interval prediction model based on quantile regression averaging and variational mode decomposition-based hybrid models is presented to quantify the potential risks of the wind power series. The simulation results indicate that: (1) the normalized mean absolute errors of the proposed combined model from one-step to three-step forecasting are 4.34%, 6.49% and 7.76%, respectively, which are much lower than those of the base models and the hybrid models.
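
    A rough sketch of the weight-adjustment idea follows, using scipy's standard differential evolution in place of the paper's self-adaptive multi-strategy variant, with synthetic stand-ins for the base-model forecasts.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Synthetic base-model forecasts (n_models, n_samples) and observed power.
rng = np.random.default_rng(0)
actual = np.sin(np.linspace(0, 8, 200)) ** 2
forecasts = np.vstack([actual + rng.normal(0, s, 200) for s in (0.05, 0.1, 0.2)])

def mae(w):
    # Normalize to nonnegative weights summing to one, then score the
    # combined forecast by mean absolute error.
    w = np.abs(w) / (np.abs(w).sum() + 1e-12)
    return np.mean(np.abs(w @ forecasts - actual))

res = differential_evolution(mae, bounds=[(0, 1)] * 3, seed=1)
print(np.abs(res.x) / np.abs(res.x).sum(), res.fun)
```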

  17. Simulation-based optimization of thermal systems

    International Nuclear Information System (INIS)

    Jaluria, Yogesh

    2009-01-01

    This paper considers the design and optimization of thermal systems on the basis of the mathematical and numerical modeling of the system. Many complexities are often encountered in practical thermal processes and systems, making the modeling challenging and involved. These include property variations, complicated regions, combined transport mechanisms, chemical reactions, and intricate boundary conditions. The paper briefly presents approaches that may be used to accurately simulate these systems. Validation of the numerical model is a particularly critical aspect and is discussed. It is important to couple the modeling with the system performance, design, control and optimization. This aspect, which has often been ignored in the literature, is considered in this paper. Design of thermal systems based on concurrent simulation and experimentation is also discussed in terms of dynamic data-driven optimization methods. Optimization of the system and of the operating conditions is needed to minimize costs and improve product quality and system performance. Different optimization strategies that are currently used for thermal systems are outlined, focusing on new and emerging strategies. Of particular interest is multi-objective optimization, since most thermal systems involve several important objective functions, such as heat transfer rate and pressure in electronic cooling systems. A few practical thermal systems are considered in greater detail to illustrate these approaches and to present typical simulation, design and optimization results

  18. Coverage-based constraints for IMRT optimization

    Science.gov (United States)

    Mescher, H.; Ulrich, S.; Bangert, M.

    2017-09-01

    Radiation therapy treatment planning requires an incorporation of uncertainties in order to guarantee an adequate irradiation of the tumor volumes. In current clinical practice, uncertainties are accounted for implicitly with an expansion of the target volume according to generic margin recipes. Alternatively, it is possible to account for uncertainties by explicit minimization of objectives that describe worst-case treatment scenarios, the expectation value of the treatment, or the coverage probability of the target volumes during treatment planning. In this note we show that approaches relying on objectives to induce a specific coverage of the clinical target volumes are inevitably sensitive to variation of the relative weighting of the objectives. To address this issue, we introduce coverage-based constraints for intensity-modulated radiation therapy (IMRT) treatment planning. Our implementation follows the concept of coverage-optimized planning, which considers explicit error scenarios to calculate and optimize patient-specific probabilities q(d̂, v̂) of covering a specific target volume fraction v̂ with a certain dose d̂. Using a constraint-based reformulation of coverage-based objectives, we eliminate the trade-off between coverage and competing objectives during treatment planning. In-depth convergence tests including 324 treatment plan optimizations demonstrate the reliability of coverage-based constraints for varying levels of probability, dose and volume. General clinical applicability of coverage-based constraints is demonstrated for two cases. A sensitivity analysis regarding penalty variations within this planning study, based on IMRT treatment planning using (1) coverage-based constraints, (2) coverage-based objectives, (3) probabilistic optimization, (4) robust optimization and (5) conventional margins, illustrates the potential benefit of coverage-based constraints that do not require tedious adjustment of target volume objectives.
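
    The coverage probability the constraints act on can be estimated directly from sampled error scenarios; a minimal numpy sketch (with made-up doses and shapes) follows.

```python
import numpy as np

def coverage_probability(dose_scenarios, voxel_mask, d_hat, v_hat):
    """q(d_hat, v_hat): probability, over explicit error scenarios, that at
    least a fraction v_hat of the target volume receives dose >= d_hat.

    dose_scenarios: (n_scenarios, n_voxels) dose distributions, one per
    sampled setup/range error; voxel_mask: boolean target-volume mask.
    """
    target_doses = dose_scenarios[:, voxel_mask]
    covered_fraction = (target_doses >= d_hat).mean(axis=1)  # per scenario
    return (covered_fraction >= v_hat).mean()

# Toy usage with made-up numbers: 200 scenarios, 5000 voxels.
rng = np.random.default_rng(0)
doses = rng.normal(60.0, 2.0, size=(200, 5000))
mask = np.ones(5000, dtype=bool)
print(coverage_probability(doses, mask, d_hat=58.0, v_hat=0.95))
```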

  19. A Tensor Decomposition-Based Approach for Detecting Dynamic Network States From EEG.

    Science.gov (United States)

    Mahyari, Arash Golibagh; Zoltowski, David M; Bernat, Edward M; Aviyente, Selin

    2017-01-01

    Functional connectivity (FC), defined as the statistical dependency between distinct brain regions, has been an important tool in understanding cognitive brain processes. Most current work in FC has focused on the assumption of temporally stationary networks. However, recent empirical work indicates that FC is dynamic due to cognitive functions. The purpose of this paper is to understand the dynamics of FC for understanding the formation and dissolution of networks of the brain. In this paper, we introduce a two-step approach to characterize the dynamics of functional connectivity networks (FCNs) by first identifying change points at which the network connectivity across subjects shows significant changes and then summarizing the FCNs between consecutive change points. The proposed approach is based on a tensor representation of FCNs across time and subjects, yielding a four-mode tensor. The change points are identified using a subspace distance measure on low-rank approximations to the tensor at each time point. The network summarization is then obtained through tensor-matrix projections across the subject and time modes. The proposed framework is applied to electroencephalogram (EEG) data collected during a cognitive control task. The detected change points are consistent with the a priori known ERN interval. The results show significant connectivities in medial-frontal regions, which are consistent with widely observed ERN amplitude measures. The tensor-based method outperforms conventional matrix-based methods such as singular value decomposition in terms of both change-point detection and state summarization. The proposed tensor-based method captures the topological structure of FCNs, which provides more accurate change-point detection and state summarization.
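
    A simplified numpy sketch of the change-point step is shown below; it replaces the paper's four-mode tensor low-rank approximation with a per-time-slice SVD and measures subspace change via principal angles, so the shapes and rank are illustrative assumptions.

```python
import numpy as np

def subspace_distance_series(fcn_tensor, rank=3):
    """fcn_tensor: (time, channels, channels, subjects) array of functional
    connectivity networks. Returns a distance between the leading rank-r
    subspaces of consecutive time slices (a simple stand-in for the paper's
    low-rank tensor approximation step)."""
    T = fcn_tensor.shape[0]
    subspaces = []
    for t in range(T):
        # Unfold the slice along the channel mode and keep the top-r left
        # singular vectors as that time point's signal subspace.
        slice_t = fcn_tensor[t].reshape(fcn_tensor.shape[1], -1)
        U, _, _ = np.linalg.svd(slice_t, full_matrices=False)
        subspaces.append(U[:, :rank])
    dists = []
    for t in range(1, T):
        # Distance from principal angles: the largest sin(theta) comes from
        # the smallest singular value of U_prev^H U_curr.
        s = np.linalg.svd(subspaces[t - 1].T @ subspaces[t], compute_uv=False)
        dists.append(np.sqrt(max(0.0, 1.0 - np.min(s) ** 2)))
    return np.array(dists)  # peaks suggest candidate change points
```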

  20. Benchmarking of a T-wave alternans detection method based on empirical mode decomposition.

    Science.gov (United States)

    Blanco-Velasco, Manuel; Goya-Esteban, Rebeca; Cruz-Roldán, Fernando; García-Alberola, Arcadi; Rojo-Álvarez, José Luis

    2017-07-01

    T-wave alternans (TWA) is a fluctuation of the ST-T complex occurring on an every-other-beat basis in the surface electrocardiogram (ECG). It has been shown to be an informative risk stratifier for sudden cardiac death, though the lack of a gold standard to benchmark detection methods has promoted the use of synthetic signals. This work proposes a novel signal model to study the performance of TWA detection. Additionally, the methodological validation of a denoising technique based on empirical mode decomposition (EMD), which is used here along with the spectral method (SM), is also tackled. The proposed test bed system is based on the following guidelines: (1) use of open source databases to enable experimental replication; (2) use of real ECG signals and physiological noise; (3) inclusion of randomized TWA episodes. Both sensitivity (Se) and specificity (Sp) are analyzed separately. Also, a nonparametric hypothesis test, based on bootstrap resampling, is used to determine whether the presence of the EMD block actually improves the performance. The results show an outstanding specificity when the EMD block is used, even in very noisy conditions (0.96 compared to 0.72 for SNR = 8 dB), always superior to that of the conventional SM alone. Regarding the sensitivity, the EMD method also outperforms in noisy conditions (0.57 compared to 0.46 for SNR = 8 dB), while it decreases in noiseless conditions. The proposed test setting, designed to analyze the performance, guarantees that the actual physiological variability of the cardiac system is reproduced. The use of the EMD-based block in noisy environments enables the identification of most patients with fatal arrhythmias. Copyright © 2017 Elsevier B.V. All rights reserved.

  1. Thermal decomposition of dimethoxymethane and dimethyl carbonate catalyzed by solid acids and bases

    International Nuclear Information System (INIS)

    Fu Yuchuan; Zhu Haiyan; Shen Jianyi

    2005-01-01

    The thermal decomposition of dimethoxymethane (DMM) and dimethyl carbonate (DMC) on MgO, H-ZSM-5, SiO2, γ-Al2O3 and ZnO was studied using a fixed-bed isothermal reactor equipped with an online gas chromatograph. It was found that DMM was stable on MgO at temperatures up to 623 K, while it decomposed over the acidic H-ZSM-5 with 99% conversion at 423 K. On the other hand, DMC was easily decomposed on both the strong solid base and the strong solid acid. The conversion of DMC was 76% on MgO at 473 K, and 98% on H-ZSM-5 at 423 K. DMC decomposed even more readily on the amphoteric γ-Al2O3. Both DMM and DMC were relatively stable on SiO2, which possesses little surface acidity or basicity. They were even more stable on ZnO, with conversions of DMM and DMC of about 1.5% at 573 K. Thus, metal oxides with either strong acidity or basicity are not suitable for the selective oxidation of DMM to DMC, while ZnO may be used as a component for the reaction

  2. THE STUDY OF SPECTRUM RECONSTRUCTION BASED ON FUZZY SET FULL CONSTRAINT AND MULTIENDMEMBER DECOMPOSITION

    Directory of Open Access Journals (Sweden)

    Y. Sun

    2017-09-01

    Hyperspectral imaging systems can obtain spectral and spatial information simultaneously, with bandwidths down to the level of 10 nm or even less. Therefore, hyperspectral remote sensing has the ability to detect certain kinds of objects that cannot be detected in wide-band remote sensing, making it one of the most active areas of remote sensing. In this study, under fuzzy-set full constraints, a Normalized Multi-Endmember Decomposition Method (NMEDM) for vegetation, water, and soil was proposed to reconstruct hyperspectral data using a large number of high-quality multispectral data and auxiliary spectral library data. This study considered spatial and temporal variation and decreased the calculation time required to reconstruct the hyperspectral data. The results of spectral reconstruction based on NMEDM show that the reconstructed data are of good quality and have practical applications, which makes it possible to carry out spectral feature identification. This method also extends the depth and breadth of remote sensing data applications, helping to explore the relationship between multispectral and hyperspectral data.

  3. Automatic decomposition of a complex hologram based on the virtual diffraction plane framework

    International Nuclear Information System (INIS)

    Jiao, A S M; Tsang, P W M; Lam, Y K; Poon, T-C; Liu, J-P; Lee, C-C

    2014-01-01

    Holography is a technique for capturing the hologram of a three-dimensional scene. In many applications, it is often pertinent to retain specific items of interest in the hologram, rather than retaining the full information, which may cause distraction in the analytical process that follows. For a real optical image that is captured with a camera or scanner, this process can be realized by applying image segmentation algorithms to decompose an image into its constituent entities. However, because it is different from an optical image, classic image segmentation methods cannot be applied directly to a hologram, as each pixel in the hologram carries holistic, rather than local, information of the object scene. In this paper, we propose a method to perform automatic decomposition of a complex hologram based on a recently proposed technique called the virtual diffraction plane (VDP) framework. Briefly, a complex hologram is back-propagated to a hypothetical plane known as the VDP. Next, the image on the VDP is automatically decomposed, through the use of the segmentation on the magnitude of the VDP image, into multiple sub-VDP images, each representing the diffracted waves of an isolated entity in the scene. Finally, each sub-VDP image is reverted back to a hologram. As such, a complex hologram can be decomposed into a plurality of subholograms, each representing a discrete object in the scene. We have demonstrated the successful performance of our proposed method by decomposing a complex hologram that is captured through the optical scanning holography (OSH) technique. (papers)

  4. Crude oil price analysis and forecasting based on variational mode decomposition and independent component analysis

    Science.gov (United States)

    E, Jianwei; Bao, Yanling; Ye, Jimin

    2017-10-01

    As one of the most vital energy resources in the world, crude oil plays a significant role in the international economic market. The fluctuation of the crude oil price has attracted academic and commercial attention. Many methods exist for forecasting the trend of the crude oil price, but traditional models often fail to predict it accurately. A hybrid method is therefore proposed in this paper, which combines variational mode decomposition (VMD), independent component analysis (ICA) and the autoregressive integrated moving average (ARIMA), called VMD-ICA-ARIMA. The purpose of this study is to analyze the influence factors of the crude oil price and to predict the future crude oil price. The major steps are as follows: First, applying the VMD model to the original signal (the crude oil price), the mode functions are decomposed adaptively. Second, independent components are separated by ICA, and how the independent components affect the crude oil price is analyzed. Finally, the crude oil price is forecast by the ARIMA model; the forecast trend demonstrates that the crude oil price declines periodically. Compared with the benchmark ARIMA and EEMD-ICA-ARIMA, VMD-ICA-ARIMA can forecast the crude oil price more accurately.
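
    A minimal sketch of the ICA and ARIMA stages with scikit-learn and statsmodels appears below; the VMD stage has no standard-library equivalent, so placeholder modes (a moving-average trend and its residual) stand in for the VMD output.

```python
import numpy as np
from sklearn.decomposition import FastICA
from statsmodels.tsa.arima.model import ARIMA

# Placeholder price series and "VMD modes" (a moving-average trend plus its
# residual); a real run would take the modes from a VMD implementation.
rng = np.random.default_rng(1)
price = np.cumsum(rng.normal(0, 1, 500)) + 60.0
trend = np.convolve(price, np.ones(20) / 20, mode="same")
modes = np.vstack([trend, price - trend])

# ICA separates statistically independent components, interpreted in the
# paper as influence factors on the crude oil price.
sources = FastICA(n_components=2, random_state=0).fit_transform(modes.T)

# ARIMA forecasts the price series itself.
fit = ARIMA(price, order=(1, 1, 1)).fit()
print(fit.forecast(steps=5))        # next five (synthetic) observations
```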

  5. Singular value decomposition based feature extraction technique for physiological signal analysis.

    Science.gov (United States)

    Chang, Cheng-Ding; Wang, Chien-Chih; Jiang, Bernard C

    2012-06-01

    Multiscale entropy (MSE) is one of the popular techniques used to calculate and describe the complexity of physiological signals. Many studies use this approach to detect changes in the physiological conditions of the human body. However, MSE results are easily affected by noise and trends, leading to incorrect estimation of MSE values. In this paper, singular value decomposition (SVD) is adopted in place of MSE to extract the features of physiological signals, and a support vector machine (SVM) is adopted to classify the different physiological states. A test data set from the PhysioNet website was used, and the classification results showed that using SVD to extract features of the physiological signal could attain a classification accuracy rate of 89.157%, which is higher than that obtained using the MSE value (71.084%). The results show the proposed analysis procedure is effective and appropriate for distinguishing different physiological states. This promising result could be used as a reference for doctors in the diagnosis of congestive heart failure (CHF) disease.
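
    The feature-extraction idea can be sketched in a few lines: embed the signal in a Hankel (trajectory) matrix, use its leading singular values as features, and classify with an SVM. The embedding dimension, number of singular values, and synthetic two-class data below are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import hankel
from sklearn.svm import SVC

def svd_features(signal, dim=20, n_sv=10):
    """Leading singular values of the trajectory (Hankel) matrix of a 1-D
    signal, used as a noise- and trend-robust feature vector."""
    H = hankel(signal[:dim], signal[dim - 1:])
    return np.linalg.svd(H, compute_uv=False)[:n_sv]

# Toy two-class problem standing in for, e.g., normal vs. CHF recordings.
rng = np.random.default_rng(0)
X = [svd_features(np.sin(0.1 * np.arange(300)) + rng.normal(0, a, 300))
     for a in [0.2] * 30 + [1.0] * 30]
y = [0] * 30 + [1] * 30
print(SVC(kernel="rbf").fit(X, y).score(X, y))
```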

  6. Multivariate Empirical Mode Decomposition Based Signal Analysis and Efficient-Storage in Smart Grid

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Lu [University of Tennessee, Knoxville (UTK)]; Albright, Austin P [ORNL]; Rahimpour, Alireza [University of Tennessee, Knoxville (UTK)]; Guo, Jiandong [University of Tennessee, Knoxville (UTK)]; Qi, Hairong [University of Tennessee, Knoxville (UTK)]; Liu, Yilu [University of Tennessee (UTK) and Oak Ridge National Laboratory (ORNL)]

    2017-01-01

    Wide-area measurement systems (WAMSs) are used in smart grid systems to enable the efficient monitoring of grid dynamics. However, the overwhelming amount of data and the severe contamination from noise often impede the effective and efficient analysis and storage of WAMS-generated measurements. To solve this problem, we propose a novel framework that takes advantage of Multivariate Empirical Mode Decomposition (MEMD), a fully data-driven approach to analyzing non-stationary signals, dubbed MEMD-based Signal Analysis (MSA). The frequency measurements are considered as a linear superposition of different oscillatory components and noise. The low-frequency components, corresponding to the long-term trend and inter-area oscillations, are grouped and compressed by MSA using the mean shift clustering algorithm. Higher-frequency components, mostly noise and potentially part of high-frequency inter-area oscillations, are analyzed using Hilbert spectral analysis and delineated by their statistical behavior. By conducting experiments on both synthetic and real-world data, we show that the proposed framework can capture the characteristics, such as trends and inter-area oscillations, while reducing the data storage requirements.
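
    A toy sketch of the grouping step follows, clustering IMFs by their dominant FFT frequency with scikit-learn's mean shift; the fixed bandwidth and synthetic IMFs are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import MeanShift

def group_imfs_by_frequency(imfs, fs, bandwidth=0.5):
    """Cluster IMFs (e.g., from MEMD) by dominant frequency so that the
    low-frequency trend/inter-area group can be compressed together."""
    freqs = np.fft.rfftfreq(imfs.shape[1], d=1.0 / fs)
    dominant = np.array([freqs[np.argmax(np.abs(np.fft.rfft(imf)))]
                         for imf in imfs])
    labels = MeanShift(bandwidth=bandwidth).fit_predict(dominant.reshape(-1, 1))
    return dominant, labels

# Toy "IMFs" at 0.1, 0.12 and 1.5 Hz, sampled at 10 Hz for 60 s.
t = np.arange(0, 60, 0.1)
imfs = np.vstack([np.sin(2 * np.pi * f * t) for f in (0.1, 0.12, 1.5)])
print(group_imfs_by_frequency(imfs, fs=10.0)[1])  # two frequency groups
```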

  7. Spatial and Inter-temporal Sources of Poverty, Inequality and Gender Disparities in Cameroon: a Regression-Based Decomposition Analysis

    OpenAIRE

    Boniface Ngah Epo; Francis Menjo Baye; Nadine Teme Angele Manga

    2011-01-01

    This study applies the regression-based inequality decomposition technique to explain poverty and inequality trends in Cameroon. We also identify gender-related factors which explain income disparities and discrimination, based on the 2001 and 2007 Cameroon household consumption surveys. The results show that education, health, employment in the formal sector, age cohorts, household size, gender, ownership of farmland and urban versus rural residence explain household economic wellbeing; dispa...

  8. CFD-based optimization in plastics extrusion

    Science.gov (United States)

    Eusterholz, Sebastian; Elgeti, Stefanie

    2018-05-01

    This paper presents novel ideas in numerical design of mixing elements in single-screw extruders. The actual design process is reformulated as a shape optimization problem, given some functional, but possibly inefficient initial design. Thereby automatic optimization can be incorporated and the design process is advanced, beyond the simulation-supported, but still experience-based approach. This paper proposes concepts to extend a method which has been developed and validated for die design to the design of mixing-elements. For simplicity, it focuses on single-phase flows only. The developed method conducts forward-simulations to predict the quasi-steady melt behavior in the relevant part of the extruder. The result of each simulation is used in a black-box optimization procedure based on an efficient low-order parameterization of the geometry. To minimize user interaction, an objective function is formulated that quantifies the products' quality based on the forward simulation. This paper covers two aspects: (1) It reviews the set-up of the optimization framework as discussed in [1], and (2) it details the necessary extensions for the optimization of mixing elements in single-screw extruders. It concludes with a presentation of first advances in the unsteady flow simulation of a metering and mixing section with the SSMUM [2] using the Carreau material model.

  9. Decomposition mechanism of trichloroethylene based on by-product distribution in the hybrid barrier discharge plasma process

    Energy Technology Data Exchange (ETDEWEB)

    Han, Sang-Bo [Industry Applications Research Laboratory, Korea Electrotechnology Research Institute, Changwon, Kyeongnam (Korea, Republic of)]; Oda, Tetsuji [Department of Electrical Engineering, The University of Tokyo, Tokyo 113-8656 (Japan)]

    2007-05-15

    The hybrid barrier discharge plasma process combined with ozone decomposition catalysts was studied experimentally for decomposing dilute trichloroethylene (TCE). Based on a fundamental experiment on catalytic activities for ozone decomposition, MnO2 was selected for application in the main experiments for its higher catalytic abilities than other metal oxides. The lower the initial TCE concentration in the working gas, the larger the ozone concentration generated by the barrier discharge plasma treatment. Near-complete decomposition of dichloro-acetylchloride (DCAC) into Cl2 and COx was observed for initial TCE concentrations of less than 250 ppm. C=C π-bond cleavage in TCE gave the carbon single bond of DCAC through an oxidation reaction during the barrier discharge plasma treatment. The DCAC was easily broken down in the subsequent catalytic reaction. As the oxygen concentration in the working gas was varied, oxygen radicals in the plasma space reacted strongly with precursors of DCAC compared with those of trichloro-acetaldehyde. A chlorine radical chain reaction is considered a plausible decomposition mechanism in the barrier discharge plasma treatment. The potential energy of oxygen radicals at the surface of the catalyst is considered an important factor in causing reactive chemical reactions.

  10. Design and cost of the sulfuric acid decomposition reactor for the sulfur based hydrogen processes - HTR2008-58009

    International Nuclear Information System (INIS)

    Hu, T. Y.; Connolly, S. M.; Lahoda, E. J.; Kriel, W.

    2008-01-01

    The key interface component between the reactor and chemical systems for the sulfuric acid based processes to make hydrogen is the sulfuric acid decomposition reactor. The materials issues for the decomposition reactor are severe, since sulfuric acid must be heated, vaporized and decomposed. SiC has been identified and proven by others to be an acceptable material. However, SiC has a significant design issue when it must be interfaced with metals for connection to the remainder of the process. Westinghouse has developed a design utilizing SiC for the high-temperature portions of the reactor that are in contact with the sulfuric acid, and polymer-coated steel for the low-temperature portions. This design is expected to have a reasonable cost for an operating lifetime of 20 years. It can be readily maintained in the field, and is transportable by truck (maximum OD is 4.5 meters). This paper summarizes the detailed engineering design of the Westinghouse decomposition reactor and the decomposition reactor's capital cost. (authors)

  11. Thermal Decomposition Characteristics of Orthorhombic Ammonium Perchlorate (o-AP) and an o-AP/HTPB-Based Propellant

    International Nuclear Information System (INIS)

    BEHRENS JR., RICHARD; MINIER, LEANNA M.G.

    1999-01-01

    A study to characterize the low-temperature reactive processes of o-AP and an AP/HTPB-based propellant (class 1.3) is being conducted in the laboratory using the techniques of simultaneous thermogravimetric modulated beam mass spectrometry (STMBMS) and scanning electron microscopy (SEM). The results presented in this paper are a follow-up of previous work that showed the overall decomposition to be complex and controlled by both physical and chemical processes. The decomposition is characterized by the occurrence of one major event that consumes up to approximately 35% of the AP, depending upon particle size, and leaves behind a porous agglomerate of AP. The major gaseous products released during this event include H2O, O2, Cl2, N2O and HCl. The recent efforts provide further insight into the decomposition processes of o-AP. The temporal behaviors of the gas formation rates (GFRs) for the products indicate that the major decomposition event consists of three chemical channels. The first and third channels are affected by the pressure in the reaction cell and occur at the surface or in the gas phase above the surface of the AP particles. The second channel is not affected by pressure and accounts for the solid-phase reactions characteristic of o-AP. The third channel involves the interactions of the decomposition products with the surface of the AP. SEM images of partially decomposed o-AP provide insight into how the morphology changes as the decomposition progresses. A conceptual model has been developed, based upon the STMBMS and SEM results, that provides a basic description of the processes. The thermal decomposition characteristics of the propellant are evaluated from the identities of the products and the temporal behaviors of their GFRs. First, the volatile components in the propellant evolve from the propellant as it is heated. Second, the hot AP (and HClO4) at the AP-binder interface oxidize the binder through reactions that

  12. Reliability-Based Optimization of Wind Turbines

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Tarp-Johansen, N.J.

    2004-01-01

    Reliability-based optimization of the main tower and monopile foundation of an offshore wind turbine is considered. Different formulations of the objective function are considered, including benefits and the building and failure costs of the wind turbine. Also different reconstruction policies in case...

  13. Performance-based Pareto optimal design

    NARCIS (Netherlands)

    Sariyildiz, I.S.; Bittermann, M.S.; Ciftcioglu, O.

    2008-01-01

    A novel approach for performance-based design is presented, where Pareto optimality is pursued. Design requirements may contain linguistic information, which is difficult to bring into computation or to estimate impartially and consistently from case to case. Fuzzy logic and soft computing are

  14. The duality of optimal exercise and domineering claims: a Doob-Meyer decomposition approach to the Snell envelope

    NARCIS (Netherlands)

    Jamshidian, F.

    We develop a concept of a “domineering claim” and apply it to the existence, uniqueness and properties of optimal stopping times in continuous time. The notion pinpoints a key observation of pathwise optimality implicit in Davis and Karatzas. It also ties in well with several formulations of a

  15. Three-Component Decomposition Based on Stokes Vector for Compact Polarimetric SAR

    Directory of Open Access Journals (Sweden)

    Hanning Wang

    2015-09-01

    In this paper, a three-component decomposition algorithm is proposed for processing compact polarimetric SAR images. By using the correspondence between the covariance matrix and the Stokes vector, three-component scattering models for the CTLR and DCP modes are established. The explicit expression of the decomposition results is then derived by setting the contribution of volume scattering as a free parameter. The degree of depolarization is taken as the upper bound of the free parameter, to enforce the constraint that the weighting factor of each scattering component should be nonnegative. Several methods are investigated to estimate the free parameter suitable for decomposition. The feasibility of this algorithm is validated with AIRSAR data over San Francisco and RADARSAT-2 data over Flevoland.

  16. Optimization of modal filters based on arrays of piezoelectric sensors

    International Nuclear Information System (INIS)

    Pagani, Carlos C Jr; Trindade, Marcelo A

    2009-01-01

    Modal filters may be obtained by a properly designed weighted sum of the output signals of an array of sensors distributed on the host structure. Although several research groups have been interested in techniques for designing and implementing modal filters based on a given array of sensors, the effect of the array topology on the effectiveness of the modal filter has received much less attention. In particular, it is known that some parameters, such as size, shape and location of a sensor, are very important in determining the observability of a vibration mode. Hence, this paper presents a methodology for the topological optimization of an array of sensors in order to maximize the effectiveness of a set of selected modal filters. This is done using a genetic algorithm optimization technique for the selection of 12 piezoceramic sensors from an array of 36 piezoceramic sensors regularly distributed on an aluminum plate, which maximize the filtering performance, over a given frequency range, of a set of modal filters, each one aiming to isolate one of the first vibration modes. The vectors of the weighting coefficients for each modal filter are evaluated using QR decomposition of the complex frequency response function matrix. Results show that the array topology is not very important for lower frequencies but it greatly affects the filter effectiveness for higher frequencies. Therefore, it is possible to improve the effectiveness and frequency range of a set of modal filters by optimizing the topology of an array of sensors. Indeed, using 12 properly located piezoceramic sensors bonded on an aluminum plate it is shown that the frequency range of a set of modal filters may be enlarged by 25–50%
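
    The weight computation can be sketched directly: with H the complex FRF matrix (rows are frequency points, columns are sensors) and g_target a desired single-mode FRF, least-squares weights follow from a QR factorization of H. The shapes and random data below are illustrative, not the paper's measured plate data.

```python
import numpy as np

def modal_filter_weights(H, g_target):
    """H: (n_freqs, n_sensors) complex FRF matrix; g_target: (n_freqs,)
    desired single-mode FRF. Returns weights w minimizing ||H w - g_target||,
    computed via QR factorization as in the paper's weight evaluation."""
    Q, R = np.linalg.qr(H)                    # reduced QR of the FRF matrix
    return np.linalg.solve(R, Q.conj().T @ g_target)

# Illustrative shapes: 12 selected sensors, 200 frequency points.
rng = np.random.default_rng(0)
H = rng.normal(size=(200, 12)) + 1j * rng.normal(size=(200, 12))
g = rng.normal(size=200) + 1j * rng.normal(size=200)
w = modal_filter_weights(H, g)
print(np.linalg.norm(H @ w - g))              # least-squares residual
```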

  17. Parameter optimization toward optimal microneedle-based dermal vaccination.

    Science.gov (United States)

    van der Maaden, Koen; Varypataki, Eleni Maria; Yu, Huixin; Romeijn, Stefan; Jiskoot, Wim; Bouwstra, Joke

    2014-11-20

    Microneedle-based vaccination has several advantages over vaccination using conventional hypodermic needles. Microneedles are used to deliver a drug into the skin in a minimally invasive and potentially pain-free manner. Moreover, the skin is a potent immune organ that is highly suitable for vaccination. However, there are several factors that influence the ability of microneedles to penetrate the skin and the immune responses upon microneedle-based immunization. In this study we assessed several different microneedle arrays for their ability to penetrate ex vivo human skin by using trypan blue and (fluorescently or radioactively labeled) ovalbumin. Next, these different microneedles and several factors, including the dose of ovalbumin, the effect of using an impact-insertion applicator, the skin location of microneedle application, and the area of microneedle application, were tested in vivo in mice. The penetration ability and the dose of ovalbumin delivered into the skin were shown to depend on the use of an applicator and on the microneedle geometry and the size of the array. Besides microneedle penetration, the factors described above influenced the immune responses upon microneedle-based vaccination in vivo. It was shown that the ovalbumin-specific antibody responses upon microneedle-based vaccination could be increased up to 12-fold when an impact-insertion applicator was used, up to 8-fold when microneedles were applied over a larger surface area, and up to 36-fold depending on the location of microneedle application. Therefore, these influencing factors should be considered to optimize microneedle-based dermal immunization technologies. Copyright © 2014 Elsevier B.V. All rights reserved.

  18. Newton-Raphson based modified Laplace Adomian decomposition method for solving quadratic Riccati differential equations

    Directory of Open Access Journals (Sweden)

    Mishra Vinod

    2016-01-01

    The numerical Laplace transform method is applied to approximate the solution of nonlinear (quadratic) Riccati differential equations, combined with the Adomian decomposition method. A new technique is proposed in this work by reintroducing the unknown function in the Adomian polynomials via the well-known Newton-Raphson formula. The solutions obtained by the iterative algorithm are expressed as an infinite series. The simplicity and efficacy of the method are demonstrated with some examples, in which comparisons are made among the exact solutions, ADM (Adomian decomposition method), HPM (Homotopy perturbation method), the Taylor series method and the proposed scheme.
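
    For orientation, here is a small sympy sketch of the plain Adomian decomposition method (without the paper's Laplace transform and Newton-Raphson modification) on the quadratic Riccati test problem y' = 1 - y², y(0) = 0, whose exact solution is tanh(t).

```python
import sympy as sp

t = sp.symbols('t')

# Plain ADM for y' = 1 - y**2, y(0) = 0 (exact solution: tanh(t)).
# For the quadratic nonlinearity N(y) = y**2 the Adomian polynomials are
#   A_n = sum_{k=0}^{n} y_k * y_{n-k}.
y = [sp.integrate(sp.Integer(1), (t, 0, t))]   # y_0 = y(0) + integral of 1 = t
for n in range(5):
    A_n = sum(y[k] * y[n - k] for k in range(n + 1))
    y.append(-sp.integrate(A_n, (t, 0, t)))    # y_{n+1} = -integral of A_n

print(sp.expand(sum(y)))                  # t - t**3/3 + 2*t**5/15 - ...
print(sp.series(sp.tanh(t), t, 0, 8))     # matches the tanh Taylor series
```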

  19. Speech Denoising in White Noise Based on Signal Subspace Low-rank Plus Sparse Decomposition

    Directory of Open Access Journals (Sweden)

    yuan Shuai

    2017-01-01

    In this paper, a new subspace speech enhancement method using low-rank and sparse decomposition is presented. In the proposed method, we first structure the corrupted data as a Toeplitz matrix and estimate its effective rank for the underlying human speech signal. Then the low-rank and sparse decomposition is performed, guided by the estimated speech rank, to remove the noise. Extensive experiments have been carried out under white Gaussian noise conditions, and the experimental results show that the proposed method performs better than conventional speech enhancement methods, in terms of yielding less residual noise and lower speech distortion.
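
    A minimal numpy/scipy sketch of the low-rank subspace step is given below; it uses a Hankel embedding, a fixed rank instead of the paper's data-driven rank estimate, and omits the sparse term for brevity.

```python
import numpy as np
from scipy.linalg import hankel

def subspace_denoise(x, dim=64, rank=12):
    """Low-rank sketch of the subspace step: embed the noisy frame in a
    Hankel (Toeplitz-like) matrix, keep the leading `rank` singular
    components as the speech subspace, and average anti-diagonals back to
    a 1-D signal."""
    H = hankel(x[:dim], x[dim - 1:])            # H[i, j] = x[i + j]
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    # Anti-diagonal averaging maps the low-rank matrix back to a signal.
    out = np.zeros(len(x))
    counts = np.zeros(len(x))
    for i in range(L.shape[0]):
        out[i:i + L.shape[1]] += L[i]
        counts[i:i + L.shape[1]] += 1
    return out / counts
```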

  20. Dynamic relationships between microbial biomass, respiration, inorganic nutrients and enzyme activities: informing enzyme based decomposition models

    Directory of Open Access Journals (Sweden)

    Daryl L Moorhead

    2013-08-01

    We re-examined data from a recent litter decay study to determine if additional insights could be gained to inform decomposition modeling. Rinkes et al. (2013) conducted 14-day laboratory incubations of sugar maple (Acer saccharum) or white oak (Quercus alba) leaves, mixed with sand (0.4% organic C content) or loam (4.1% organic C). They measured microbial biomass C, carbon dioxide efflux, soil ammonium, nitrate, and phosphate concentrations, and β-glucosidase (BG), β-N-acetyl-glucosaminidase (NAG), and acid phosphatase (AP) activities on days 1, 3, and 14. Analyses of relationships among variables yielded different insights than the original analyses of individual variables. For example, although respiration rates per g soil were higher for loam than sand, rates per g soil C were actually higher for sand than loam, and rates per g microbial C showed little difference between treatments. Microbial biomass C peaked on day 3, when biomass-specific activities of enzymes were lowest, suggesting uptake of litter C without extracellular hydrolysis. This result refuted a common model assumption that all enzyme production is constitutive and thus proportional to biomass, and/or indicated that part of litter decay is independent of enzyme activity. The length and angle of vectors defined by ratios of enzyme activities (BG/NAG versus BG/AP) represent relative microbial investments in C-acquiring (length), and N- and P-acquiring (angle) enzymes. Shorter lengths on day 3 suggested low C limitation, whereas greater lengths on day 14 suggested an increase in C limitation with decay. The soils and litter in this study generally had stronger P limitation (angles > 45°). Reductions in vector angles to < 45° for sand by day 14 suggested a shift to N limitation. These relational variables inform enzyme-based models, and are usually much less ambiguous when obtained from a single study in which measurements were made on the same samples than when extrapolated from separate studies.

  1. Environmental life-cycle comparisons of two polychlorinated biphenyl remediation technologies: incineration and base catalyzed decomposition.

    Science.gov (United States)

    Hu, Xintao; Zhu, Jianxin; Ding, Qiong

    2011-07-15

    Remediation action is critical for the management of polychlorinated biphenyl (PCB) contaminated sites. Dozens of remediation technologies developed internationally can be divided into two general categories: incineration and non-incineration. In this paper, life cycle assessment (LCA) was carried out to study the environmental impacts of these two kinds of remediation technologies at selected PCB contaminated sites, where Infrared High Temperature Incineration (IHTI) and Base Catalyzed Decomposition (BCD) were selected as representatives of incineration and non-incineration. A combined midpoint/damage approach was adopted, using SimaPro 7.2 and IMPACT 2002+, to assess the human toxicity, ecotoxicity, climate change impact, and resource consumption of the five subsystems of the IHTI and BCD technologies, respectively. It was found that the major environmental impacts through the whole life cycle arose from energy consumption in both the IHTI and BCD processes. For IHTI, the primary and secondary combustion subsystem contributes more than 50% of the midpoint impacts concerning carcinogens, respiratory inorganics, respiratory organics, terrestrial ecotoxicity, terrestrial acidification/eutrophication and global warming. In the BCD process, the rotary kiln reactor subsystem presents the highest contribution to almost all the midpoint impacts, including global warming, non-renewable energy, non-carcinogens, terrestrial ecotoxicity and respiratory inorganics. In terms of midpoint impacts, the characterization values for global warming from IHTI and BCD were about 432.35 and 38.5 kg CO2-eq per ton of PCB-containing soil, respectively. The LCA results showed that the single score of the BCD environmental impact was 1468.97 Pt, while IHTI's score was 2785.15 Pt, which indicates that BCD potentially has a lower environmental impact than IHTI technology in the remediation of PCB-contaminated soil. Copyright © 2011 Elsevier B.V. All rights reserved.

  2. Modeling pollen time series using seasonal-trend decomposition procedure based on LOESS smoothing.

    Science.gov (United States)

    Rojo, Jesús; Rivero, Rosario; Romero-Morte, Jorge; Fernández-González, Federico; Pérez-Badia, Rosa

    2017-02-01

    Analysis of airborne pollen concentrations provides valuable information on plant phenology and is thus a useful tool in agriculture (for predicting harvests in crops such as the olive and for deciding when to apply phytosanitary treatments) as well as in medicine and the environmental sciences. Variations in airborne pollen concentrations, moreover, are indicators of changing plant life cycles. By modeling pollen time series, we can not only identify the variables influencing pollen levels but also predict future pollen concentrations. In this study, airborne pollen time series were modeled using a seasonal-trend decomposition procedure based on LOcally wEighted Scatterplot Smoothing (LOESS), known as STL. The data series (daily Poaceae pollen concentrations over the period 2006-2014) was broken up into seasonal and residual (stochastic) components. The seasonal component was compared with data on Poaceae flowering phenology obtained by field sampling. Residuals were fitted to a model generated from daily temperature and rainfall values, and daily pollen concentrations, using partial least squares regression (PLSR). This method was then applied to predict daily pollen concentrations for 2014 (independent validation data) using results for the seasonal component of the time series and estimates of the residual component for the period 2006-2013. The correlation between predicted and observed values was r = 0.79 (correlation coefficient) for the pre-peak period (i.e., the period prior to the peak pollen concentration) and r = 0.63 for the post-peak period. Separate analysis of each of the components of the pollen data series enables the sources of variability to be identified more accurately than by analysis of the original non-decomposed data series, and for this reason this procedure has proved to be a suitable technique for analyzing the main environmental factors influencing airborne pollen concentrations.
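
    The decomposition-plus-regression pipeline can be sketched with statsmodels' STL and scikit-learn's PLSR; the synthetic pollen and meteorological series below are placeholders for the study's 2006-2014 observations.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL
from sklearn.cross_decomposition import PLSRegression

# Synthetic stand-in for the daily Poaceae pollen series; period=365
# extracts the annual pollen season.
days = pd.date_range("2006-01-01", periods=365 * 8, freq="D")
rng = np.random.default_rng(0)
doy = np.arange(len(days))
pollen = pd.Series(
    np.clip(50 * np.sin(2 * np.pi * doy / 365), 0, None)
    + rng.gamma(2.0, 2.0, len(days)), index=days)

res = STL(pollen, period=365).fit()          # seasonal + trend + residual

# PLSR relates the stochastic residual to daily meteorology (placeholder
# temperature and rainfall); predictions add back the seasonal component.
meteo = np.column_stack([20 + 10 * np.sin(2 * np.pi * doy / 365)
                         + rng.normal(0, 2, len(days)),
                         rng.gamma(1.0, 1.5, len(days))])
pls = PLSRegression(n_components=2).fit(meteo, res.resid.values)
predicted = res.seasonal.values + pls.predict(meteo).ravel()
```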

  3. An Improved Algorithm to Delineate Urban Targets with Model-Based Decomposition of PolSAR Data

    Directory of Open Access Journals (Sweden)

    Dingfeng Duan

    2017-10-01

    In model-based decomposition algorithms using polarimetric synthetic aperture radar (PolSAR) data, urban targets are typically identified based on the existence of strong double-bounce scattering. However, urban targets with large azimuth orientation angles (AOAs) produce strong volumetric scattering that appears similar to the scattering characteristics of tree canopies. Due to this scattering ambiguity, urban targets can be classified into the vegetation category if the usual classification scheme of model-based PolSAR decomposition algorithms is followed. To resolve the ambiguity and ultimately reduce misclassification, we introduced a correlation coefficient that characterizes the scattering mechanisms of urban targets with variable AOAs. An existing volumetric scattering model was then modified, and a PolSAR decomposition algorithm developed. The validity and effectiveness of the algorithm were examined using four PolSAR datasets. The algorithm was valid and effective in delineating urban targets with a wide range of AOAs, and applicable to a broad range of ground targets in urban areas and in upland and flooded forest stands.

  4. Comparison of two interpolation methods for empirical mode decomposition based evaluation of radiographic femur bone images.

    Science.gov (United States)

    Udhayakumar, Ganesan; Sujatha, Chinnaswamy Manoharan; Ramakrishnan, Swaminathan

    2013-01-01

    Analysis of bone strength in radiographic images is an important component of the estimation of bone quality in diseases such as osteoporosis. Conventional radiographic femur bone images are used to analyze the bone architecture using the bi-dimensional empirical mode decomposition method. Surface interpolation of the local maxima and minima points of an image is a crucial part of bi-dimensional empirical mode decomposition, and the choice of appropriate interpolation depends on the specific structure of the problem. In this work, two interpolation methods for bi-dimensional empirical mode decomposition are analyzed to characterize the trabecular architecture of radiographic femur bone images. The trabecular bone regions of normal and osteoporotic femur bone images (N = 40) recorded under standard conditions are used for this study. The compressive and tensile strength regions of the images are delineated using pre-processing procedures. The delineated images are decomposed into their corresponding intrinsic mode functions using interpolation methods such as the multiquadric radial basis function and hierarchical b-spline techniques. Results show that bi-dimensional empirical mode decomposition analyses using both interpolations are able to represent architectural variations of femur bone radiographic images. As the strength of the bone depends on architectural variation in addition to bone mass, this study appears to be clinically useful.

  5. Amplitude-cyclic frequency decomposition of vibration signals for bearing fault diagnosis based on phase editing

    Science.gov (United States)

    Barbini, L.; Eltabach, M.; Hillis, A. J.; du Bois, J. L.

    2018-03-01

    In rotating machine diagnosis, different spectral tools are used to analyse vibration signals. Despite their good diagnostic performance, such tools are usually refined, computationally complex to implement, and require oversight by an expert user. This paper introduces an intuitive and easy-to-implement method for vibration analysis: amplitude-cyclic frequency decomposition. This method first separates vibration signals according to their spectral amplitudes, and second uses the squared envelope spectrum to reveal the presence of cyclostationarity at each amplitude level. The intuitive idea is that in a rotating machine different components contribute vibrations at different amplitudes; for instance, defective bearings contribute a very weak signal in contrast to gears. This paper also introduces a new quantity, the decomposition squared envelope spectrum, which enables separation between the components of a rotating machine. The amplitude-cyclic frequency decomposition and the decomposition squared envelope spectrum are tested on real-world signals, at both stationary and varying speeds, using data from a wind turbine gearbox and an aircraft engine. In addition, a benchmark comparison to the spectral correlation method is presented.
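
    The squared envelope spectrum at the heart of the method is a few lines of numpy/scipy (the amplitude-level separation step is omitted here, and the toy modulation frequency is illustrative).

```python
import numpy as np
from scipy.signal import hilbert

def squared_envelope_spectrum(x, fs):
    """Squared envelope spectrum: lines appear at the cyclic frequencies of
    a cyclostationary component, e.g. a bearing fault repetition rate."""
    env_sq = np.abs(hilbert(x)) ** 2       # squared envelope via Hilbert
    env_sq -= env_sq.mean()                # remove the DC line
    ses = np.abs(np.fft.rfft(env_sq)) ** 2
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return freqs, ses

# Toy usage: a 100 Hz carrier amplitude-modulated at a 7 Hz "fault" rate.
fs = 1000.0
t = np.arange(0, 2, 1 / fs)
x = (1 + 0.5 * np.sin(2 * np.pi * 7 * t)) * np.sin(2 * np.pi * 100 * t)
freqs, ses = squared_envelope_spectrum(x, fs)
print(freqs[np.argmax(ses)])               # ~7 Hz
```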

  6. A new multivariate empirical mode decomposition method for improving the performance of SSVEP-based brain-computer interface

    Science.gov (United States)

    Chen, Yi-Feng; Atal, Kiran; Xie, Sheng-Quan; Liu, Quan

    2017-08-01

    Objective. Accurate and efficient detection of steady-state visual evoked potentials (SSVEP) in the electroencephalogram (EEG) is essential for the related brain-computer interface (BCI) applications. Approach. Although canonical correlation analysis (CCA) has been applied extensively and successfully to SSVEP recognition, the spontaneous EEG activities and artifacts that often occur during data recording can deteriorate the recognition performance. Therefore, it is meaningful to extract a few frequency sub-bands of interest to avoid or reduce the influence of unrelated brain activity and artifacts. This paper presents an improved method to detect the frequency component associated with SSVEP using multivariate empirical mode decomposition (MEMD) and CCA (MEMD-CCA). EEG signals from nine healthy volunteers were recorded to evaluate the performance of the proposed method for SSVEP recognition. Main results. We compared our method with CCA and the temporally local multivariate synchronization index (TMSI). The results suggest that MEMD-CCA achieved significantly higher accuracy than standard CCA and TMSI. It gave improvements of 1.34%, 3.11%, 3.33%, 10.45%, 15.78%, 18.45%, 15.00% and 14.22% on average over CCA at time windows from 0.5 s to 5 s, and 0.55%, 1.56%, 7.78%, 14.67%, 13.67%, 7.33% and 7.78% over TMSI from 0.75 s to 5 s. The method outperformed filter-based decomposition (FB), empirical mode decomposition (EMD) and wavelet decomposition (WT) based CCA for SSVEP recognition. Significance. The results demonstrate the ability of the proposed MEMD-CCA to improve the performance of SSVEP-based BCI.
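
    The CCA scoring step common to these methods can be sketched with scikit-learn; in MEMD-CCA the EEG would first be reduced to SSVEP-related MEMD sub-bands, which is omitted here, and the sampling rate, harmonics, and toy data are illustrative.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def ssvep_score(eeg, fs, f, n_harmonics=2):
    """Canonical correlation between multi-channel EEG (n_samples,
    n_channels) and sine/cosine references at frequency f; the candidate
    stimulus frequency with the highest score is the detected target."""
    t = np.arange(eeg.shape[0]) / fs
    refs = np.column_stack([fn(2 * np.pi * (h + 1) * f * t)
                            for h in range(n_harmonics)
                            for fn in (np.sin, np.cos)])
    u, v = CCA(n_components=1).fit_transform(eeg, refs)
    return np.corrcoef(u.ravel(), v.ravel())[0, 1]

# Toy usage: 4-channel EEG dominated by a 12 Hz SSVEP, 250 Hz sampling.
rng = np.random.default_rng(0)
fs = 250.0
t = np.arange(0, 2, 1 / fs)
eeg = np.sin(2 * np.pi * 12 * t)[:, None] + rng.normal(0, 1, (len(t), 4))
print(max((ssvep_score(eeg, fs, f), f) for f in (8.0, 10.0, 12.0)))  # 12 Hz
```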

  7. Biogeography-Based Optimization with Orthogonal Crossover

    Directory of Open Access Journals (Sweden)

    Quanxi Feng

    2013-01-01

    Biogeography-based optimization (BBO) is a new biogeography-inspired, population-based algorithm, which mainly uses the migration operator to share information among solutions. Similar to the crossover operator in genetic algorithms, the migration operator is a probabilistic operator and only generates a vertex of the hyperrectangle defined by the emigration and immigration vectors. Therefore, the exploration ability of BBO may be limited. The orthogonal crossover operator with quantization technique (QOX) is based on orthogonal design and can generate representative solutions in the solution space. In this paper, a BBO variant is presented that embeds the QOX operator in the BBO algorithm. Additionally, a modified migration equation is used to improve population diversity. Several experiments are conducted on 23 benchmark functions. Experimental results show that the proposed algorithm is capable of locating the optimal or close-to-optimal solution. Comparisons with other variants of BBO algorithms and state-of-the-art orthogonal-based evolutionary algorithms demonstrate that our proposed algorithm possesses a faster global convergence rate, high-precision solutions, and stronger robustness. Finally, the analysis of the performance of QOX indicates that QOX plays a key role in the proposed algorithm.

  8. A New Efficient Algorithm for the 2D WLP-FDTD Method Based on Domain Decomposition Technique

    Directory of Open Access Journals (Sweden)

    Bo-Ao Xu

    2016-01-01

    This letter introduces a new efficient algorithm for the two-dimensional weighted Laguerre polynomials finite difference time-domain (WLP-FDTD) method, based on a domain decomposition scheme. By using the domain decomposition finite difference technique, the whole computational domain is decomposed into several subdomains. The conventional WLP-FDTD and the efficient WLP-FDTD methods are, respectively, used to eliminate the splitting error and to speed up the calculation in the different subdomains. A joint calculation scheme is presented to reduce the amount of computation. With the proposed scheme, iteration is not essential for obtaining accurate results. A numerical example indicates that the efficiency and accuracy are improved compared with the efficient WLP-FDTD method.

  9. A Data-Driven Stochastic Reactive Power Optimization Considering Uncertainties in Active Distribution Networks and Decomposition Method

    DEFF Research Database (Denmark)

    Ding, Tao; Yang, Qingrun; Yang, Yongheng

    2018-01-01

    To address the uncertain output of distributed generators (DGs) for reactive power optimization in active distribution networks, the stochastic programming model is widely used. The model is employed to find an optimal control strategy with minimum expected network loss while satisfying all... In this paper, a data-driven modeling approach is introduced to assume that the probability distribution from the historical data is uncertain within a confidence set. Furthermore, a data-driven stochastic programming model is formulated as a two-stage problem, where the first-stage variables find the optimal control for discrete reactive power compensation equipment under the worst probability distribution of the second-stage recourse. The second-stage variables are adjusted to the uncertain probability distribution. In particular, this two-stage problem has a special structure so that the second-stage problem

  10. SEBAL-based Daily Actual Evapotranspiration Forecasting using Wavelets Decomposition Analysis and Multivariate Relevance Vector Machines

    Science.gov (United States)

    Torres, A. F.

    2011-12-01

    Agricultural lands are sources of food and energy for populations around the globe. These lands are vulnerable to the impacts of climate change, including variations in rainfall regimes and weather patterns and decreased availability of water for irrigation. In addition, it is not unusual for irrigated agriculture to be forced to divert less water in order to make it available for other uses, e.g. human consumption. As part of the implementation of better policies for water control and management, irrigation companies and water user associations have in recent decades implemented water conveyance and distribution monitoring systems along with soil moisture sensor networks. These systems allow them to manage and distribute water among users based on their requirements and water availability, while collecting information about actual soil moisture conditions in representative crop fields. In spite of this, the water deliveries requested by farmers/water users are typically based on total water share, tradition and past experience of irrigation, which in most cases do not correspond to the actual crop evapotranspiration, already affected by climate change. It is therefore necessary to provide actual information about crop water requirements to water users/managers, so they can better quantify the required versus available water for the irrigation events along the irrigation season. To estimate actual evapotranspiration over a spatial extent, the Surface Energy Balance Algorithm for Land (SEBAL) has demonstrated its effectiveness using satellite or airborne data. Nonetheless, the estimation is restricted to the day when the geospatial information was obtained. Without precise information on future daily crop water demand, there is a continuous challenge for the implementation of better water distribution and management policies in the irrigation system. The purpose of this study is to investigate the plausibility of using

  11. Optimal depth-based regional frequency analysis

    Directory of Open Access Journals (Sweden)

    H. Wazneh

    2013-06-01

    Full Text Available Classical methods of regional frequency analysis (RFA) of hydrological variables face two drawbacks: (1) the restriction to a particular region, which can lead to a loss of some information, and (2) the definition of a region that generates a border effect. To reduce the impact of these drawbacks on regional modeling performance, an iterative method was proposed recently, based on the statistical notion of the depth function and a weight function φ. This depth-based RFA (DBRFA) approach was shown to be superior to traditional approaches in terms of flexibility, generality and performance. The main difficulty of the DBRFA approach is the optimal choice of the weight function φ (e.g., φ minimizing estimation errors). In order to avoid a subjective choice and naïve selection procedures for φ, the aim of the present paper is to propose an algorithm-based procedure to optimize the DBRFA and automate the choice of φ according to objective performance criteria. This procedure is applied to estimate flood quantiles in three different regions in North America. One of the findings from the application is that the optimal weight function depends on the considered region and can also quantify the region's homogeneity. By comparing the DBRFA to the canonical correlation analysis (CCA) method, results show that the DBRFA approach leads to better performances both in terms of relative bias and mean square error.

  12. Optimal depth-based regional frequency analysis

    Science.gov (United States)

    Wazneh, H.; Chebana, F.; Ouarda, T. B. M. J.

    2013-06-01

    Classical methods of regional frequency analysis (RFA) of hydrological variables face two drawbacks: (1) the restriction to a particular region which can lead to a loss of some information and (2) the definition of a region that generates a border effect. To reduce the impact of these drawbacks on regional modeling performance, an iterative method was proposed recently, based on the statistical notion of the depth function and a weight function φ. This depth-based RFA (DBRFA) approach was shown to be superior to traditional approaches in terms of flexibility, generality and performance. The main difficulty of the DBRFA approach is the optimal choice of the weight function φ (e.g., φ minimizing estimation errors). In order to avoid a subjective choice and naïve selection procedures of φ, the aim of the present paper is to propose an algorithm-based procedure to optimize the DBRFA and automate the choice of φ according to objective performance criteria. This procedure is applied to estimate flood quantiles in three different regions in North America. One of the findings from the application is that the optimal weight function depends on the considered region and can also quantify the region's homogeneity. By comparing the DBRFA to the canonical correlation analysis (CCA) method, results show that the DBRFA approach leads to better performances both in terms of relative bias and mean square error.

  13. Optimal interference code based on machine learning

    Science.gov (United States)

    Qian, Ye; Chen, Qian; Hu, Xiaobo; Cao, Ercong; Qian, Weixian; Gu, Guohua

    2016-10-01

    In this paper, we analyze the characteristics of pseudo-random codes, taking the m sequence as a case study. Based on coding theory, we introduce the jamming methods and simulate the interference effect and its probability model in MATLAB. From the length of decoding time the adversary spends, we derive the optimal formula and optimal coefficients based on machine learning, and thereby obtain a new optimal interference code. First, in the recognition phase, this study judges the effect of interference by simulating the decoding time of the laser seeker. Next, laser active deception jamming is used to simulate the interference process in the tracking phase. To improve the performance of the interference, the model is simulated in MATLAB. We determine the least number of pulse intervals that must be received, from which the precise interval number of the laser pointer for m-sequence encoding follows. To find the shortest interval, the greatest-common-divisor method is chosen. Then, combining this with the coding regularity found earlier, we restore the pulse intervals of the pseudo-random code that has already been received. Finally, we can control the time period of the laser interference, obtain the optimal interference code, and increase the probability of successful interference.
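
    As a concrete illustration of two ingredients this abstract relies on, the following minimal Python sketch (not the authors' code; the taps, seed, and intervals are illustrative) generates an m sequence with a linear-feedback shift register and recovers the elementary pulse spacing of a received code as the greatest common divisor of the observed intervals.

    ```python
    import math
    from functools import reduce

    def m_sequence(taps, state, length):
        """Generate an m sequence from a Fibonacci LFSR.
        `taps` are 0-indexed feedback positions, `state` is the seed bits."""
        out = []
        for _ in range(length):
            out.append(state[-1])
            fb = 0
            for t in taps:
                fb ^= state[t]
            state = [fb] + state[:-1]
        return out

    def base_interval(observed_intervals):
        """Recover the elementary pulse spacing as the GCD of the intervals."""
        return reduce(math.gcd, observed_intervals)

    # 4-stage LFSR (x^4 + x + 1) giving a period-15 m sequence
    print(m_sequence(taps=[0, 3], state=[1, 0, 0, 1], length=15))
    print(base_interval([120, 180, 300]))  # -> 60
    ```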

  14. Decomposition techniques

    Science.gov (United States)

    Chao, T.T.; Sanzolone, R.F.

    1992-01-01

    Sample decomposition is a fundamental and integral step in the procedure of geochemical analysis. It is often the limiting factor for sample throughput, especially with the recent application of fast, modern multi-element measurement instrumentation. The complexity of geological materials makes it necessary to choose the sample decomposition technique that is compatible with the specific objective of the analysis. When selecting a decomposition technique, consideration should be given to the chemical and mineralogical characteristics of the sample, elements to be determined, precision and accuracy requirements, sample throughput, technical capability of personnel, and time constraints. This paper addresses these concerns and discusses the attributes and limitations of many techniques of sample decomposition along with examples of their application to geochemical analysis. The chemical properties of reagents as to their function as decomposition agents are also reviewed. The section on acid dissolution techniques addresses the various inorganic acids that are used individually or in combination in both open and closed systems. Fluxes used in sample fusion are discussed. The promising microwave-oven technology and the emerging field of automation are also examined. A section on applications highlights the use of decomposition techniques for the determination of Au, platinum group elements (PGEs), Hg, U, hydride-forming elements, rare earth elements (REEs), and multi-elements in geological materials. Partial dissolution techniques used for geochemical exploration, which have been treated in detail elsewhere, are not discussed here; nor are fire-assaying for noble metals and decomposition techniques for X-ray fluorescence or nuclear methods. © 1992.

  15. A decomposition heuristics based on multi-bottleneck machines for large-scale job shop scheduling problems

    Directory of Open Access Journals (Sweden)

    Yingni Zhai

    2014-10-01

    Full Text Available Purpose: A decomposition heuristic based on multi-bottleneck machines for large-scale job shop scheduling problems (JSP) is proposed. Design/methodology/approach: In the algorithm, a number of sub-problems are constructed by iteratively decomposing the large-scale JSP according to the process route of each job; the solution of the large-scale JSP is then obtained by iteratively solving the sub-problems. In order to improve the sub-problems' solving efficiency and the solution quality, a detection method for multi-bottleneck machines based on the critical path is proposed, by which the unscheduled operations can be decomposed into bottleneck operations and non-bottleneck operations. According to the principle that "the bottleneck leads the performance of the whole manufacturing system" in TOC (Theory of Constraints), the bottleneck operations are scheduled by a genetic algorithm for high solution quality, and the non-bottleneck operations are scheduled by dispatching rules to improve solving efficiency. Findings: In the process of the sub-problems' construction, partial operations in the previously scheduled sub-problem are carried over into the successive sub-problem for re-optimization; this strategy improves the solution quality of the algorithm. In the process of solving the sub-problems, evaluating a chromosome's fitness by predicting the global scheduling objective value also improves the solution quality. Research limitations/implications: In this research, there are some assumptions which reduce the complexity of the large-scale scheduling problem. They are as follows: the processing route of each job is predetermined, and the processing time of each operation is fixed; there is no machine breakdown, and no preemption of the operations is allowed. These assumptions should be considered if the algorithm is used in an actual job shop. Originality/value: The research provides an efficient scheduling method for the

  16. Factors Affecting Regional Per-Capita Carbon Emissions in China Based on an LMDI Factor Decomposition Model

    Science.gov (United States)

    Dong, Feng; Long, Ruyin; Chen, Hong; Li, Xiaohui; Yang, Qingliang

    2013-01-01

    China is considered to be the main carbon producer in the world. The per-capita carbon emissions indicator is an important measure of the regional carbon emissions situation. This study used the LMDI factor decomposition model and panel co-integration test in a two-step method to analyze the factors that affect per-capita carbon emissions. The main results are as follows. (1) In 1997, Eastern China, Central China, and Western China ranked first, second, and third in per-capita carbon emissions, while in 2009 the pecking order changed to Eastern China, Western China, and Central China. (2) According to the LMDI decomposition results, the key driver boosting per-capita carbon emissions in the three economic regions of China between 1997 and 2009 was economic development; in terms of restraining the growth of per-capita carbon emissions, the effect of energy efficiency was much greater than that of energy structure. (3) Based on the decomposition, the panel co-integration test of the factors affecting per-capita carbon emissions showed that Central China had the best energy structure elasticity for its regional per-capita carbon emissions; Central China also ranked first for energy efficiency elasticity, while Western China ranked first for economic development elasticity. PMID:24353753
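
    The LMDI machinery behind this record can be made concrete. Assuming a multiplicative identity of the form per-capita emissions = carbon factor of the energy mix × energy intensity × GDP per capita (the numbers below are illustrative, not the paper's data), additive LMDI assigns each factor a contribution, and the contributions sum exactly to the total change:

    ```python
    import math

    def logmean(a, b):
        """Logarithmic mean; the LMDI weight."""
        return a if a == b else (a - b) / (math.log(a) - math.log(b))

    def lmdi_additive(factors0, factors1):
        """Additive LMDI-I: split the change in V = prod(factors)
        into one contribution per factor (no residual term)."""
        v0, v1 = math.prod(factors0), math.prod(factors1)
        w = logmean(v1, v0)
        effects = [w * math.log(f1 / f0) for f0, f1 in zip(factors0, factors1)]
        assert abs(sum(effects) - (v1 - v0)) < 1e-9  # decomposition is exact
        return effects

    # carbon factor, energy intensity, GDP per capita (base year vs. end year)
    print(lmdi_additive((2.5, 0.8, 1.2), (2.4, 0.7, 1.8)))
    ```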

  17. Dynamic Load Balancing Based on Constrained K-D Tree Decomposition for Parallel Particle Tracing

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Jiang; Guo, Hanqi; Yuan, Xiaoru; Hong, Fan; Peterka, Tom

    2018-01-01

    Particle tracing is a fundamental technique in flow field data visualization. In this work, we present a novel dynamic load balancing method for parallel particle tracing. Specifically, we employ a constrained k-d tree decomposition approach to dynamically redistribute tasks among processes. Each process is initially assigned a regularly partitioned block along with a duplicated ghost layer, subject to the memory limit. During particle tracing, the k-d tree decomposition is performed dynamically by constraining the cutting planes to the overlap range of the duplicated data. This ensures that particles are reassigned to processes as evenly as possible and, on the other hand, that the particles newly assigned to a process always lie within its block. Results show the good load balance and high efficiency of our method.
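
    A minimal sketch of the constrained-cut idea, assuming the overlap range of the duplicated ghost data is known per axis (the helper below is hypothetical, not the authors' implementation): the cutting plane is the particle median clamped into that range, so the split is as even as the constraint allows while both children stay inside locally resident data.

    ```python
    import numpy as np

    def constrained_cut(coords, axis, overlap_lo, overlap_hi):
        """k-d tree cutting plane on `axis`: the particle median, clamped
        into the overlap range of the duplicated ghost layers."""
        median = np.median(coords[:, axis])
        return float(np.clip(median, overlap_lo, overlap_hi))

    rng = np.random.default_rng(0)
    pts = rng.uniform(0.0, 10.0, size=(1000, 3))
    cut = constrained_cut(pts, axis=0, overlap_lo=4.0, overlap_hi=6.0)
    left, right = pts[pts[:, 0] <= cut], pts[pts[:, 0] > cut]
    print(cut, len(left), len(right))
    ```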

  18. Controllable pneumatic generator based on the catalytic decomposition of hydrogen peroxide

    International Nuclear Information System (INIS)

    Kim, Kyung-Rok; Kim, Kyung-Soo; Kim, Soohyun

    2014-01-01

    This paper presents a novel compact and controllable pneumatic generator that uses hydrogen peroxide decomposition. A fuel micro-injector using a piston-pump mechanism is devised and tested to control the chemical decomposition rate. By controlling the injection rate, the feedback controller maintains the pressure of the gas reservoir at a desired level. Thermodynamic analysis and experiments are performed to demonstrate the feasibility of the proposed pneumatic generator. Using a prototype of the pneumatic generator, it takes 6 s to reach 3.5 bars with a reservoir volume of 200 ml at room temperature, which is sufficiently rapid and effective to maintain the repetitive lifting of a 1 kg mass.

  19. Controllable pneumatic generator based on the catalytic decomposition of hydrogen peroxide

    Science.gov (United States)

    Kim, Kyung-Rok; Kim, Kyung-Soo; Kim, Soohyun

    2014-07-01

    This paper presents a novel compact and controllable pneumatic generator that uses hydrogen peroxide decomposition. A fuel micro-injector using a piston-pump mechanism is devised and tested to control the chemical decomposition rate. By controlling the injection rate, the feedback controller maintains the pressure of the gas reservoir at a desired level. Thermodynamic analysis and experiments are performed to demonstrate the feasibility of the proposed pneumatic generator. Using a prototype of the pneumatic generator, it takes 6 s to reach 3.5 bars with a reservoir volume of 200 ml at room temperature, which is sufficiently rapid and effective to maintain the repetitive lifting of a 1 kg mass.

  20. Single interval longwave radiation scheme based on the net exchanged rate decomposition with bracketing

    Czech Academy of Sciences Publication Activity Database

    Geleyn, J.- F.; Mašek, Jan; Brožková, Radmila; Kuma, P.; Degrauwe, D.; Hello, G.; Pristov, N.

    2017-01-01

    Roč. 143, č. 704 (2017), s. 1313-1335 ISSN 0035-9009 R&D Projects: GA MŠk(CZ) LO1415 Institutional support: RVO:86652079 Keywords : numerical weather prediction * climate models * clouds * parameterization * atmospheres * formulation * absorption * scattering * accurate * database * longwave radiative transfer * broadband approach * idealized optical paths * net exchanged rate decomposition * bracketing * selective intermittency Subject RIV: DG - Athmosphere Sciences, Meteorology OBOR OECD: Meteorology and atmospheric sciences Impact factor: 3.444, year: 2016

  1. Java-Based Coupling for Parallel Predictive-Adaptive Domain Decomposition

    Directory of Open Access Journals (Sweden)

    Cécile Germain‐Renaud

    1999-01-01

    Full Text Available Adaptive domain decomposition exemplifies the problem of integrating heterogeneous software components with intermediate coupling granularity. This paper describes an experiment where a data-parallel (HPF) client interfaces with a sequential computation server through Java. We show that seamless integration of data-parallelism is possible, but requires most of the tools from the Java palette: Java Native Interface (JNI), Remote Method Invocation (RMI), callbacks and threads.

  2. Probabilistic inference with noisy-threshold models based on a CP tensor decomposition

    Czech Academy of Sciences Publication Activity Database

    Vomlel, Jiří; Tichavský, Petr

    2014-01-01

    Roč. 55, č. 4 (2014), s. 1072-1092 ISSN 0888-613X R&D Projects: GA ČR GA13-20012S; GA ČR GA102/09/1278 Institutional support: RVO:67985556 Keywords : Bayesian networks * Probabilistic inference * Candecomp-Parafac tensor decomposition * Symmetric tensor rank Subject RIV: JD - Computer Applications, Robotics Impact factor: 2.451, year: 2014 http://library.utia.cas.cz/separaty/2014/MTR/vomlel-0427059.pdf

  3. The Fault Diagnosis of Rolling Bearing Based on Ensemble Empirical Mode Decomposition and Random Forest

    OpenAIRE

    Qin, Xiwen; Li, Qiaoling; Dong, Xiaogang; Lv, Siqi

    2017-01-01

    Accurate diagnosis of rolling bearing faults is of great significance for the normal operation of machinery and equipment. A method combining Ensemble Empirical Mode Decomposition (EEMD) and Random Forest (RF) is proposed. Firstly, the original signal is decomposed into several intrinsic mode functions (IMFs) by EEMD, and the effective IMFs are selected. Then their energy entropy is calculated as the feature. Finally, the classification is performed by RF. In addition, the wavelet meth...

  4. Dynamic load balancing algorithm for molecular dynamics based on Voronoi cells domain decompositions

    Energy Technology Data Exchange (ETDEWEB)

    Fattebert, J.-L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Richards, D.F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Glosli, J.N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2012-12-01

    We present a new algorithm for automatic parallel load balancing in classical molecular dynamics. It assumes a spatial domain decomposition of particles into Voronoi cells. It is a gradient method which attempts to minimize a cost function by displacing the Voronoi sites associated with each processor/sub-domain along steepest-descent directions. Excellent load balance has been obtained for quasi-2D and 3D practical applications, with up to 440·10^6 particles on 65,536 MPI tasks.

  5. Cost versus life cycle assessment-based environmental impact optimization of drinking water production plants.

    Science.gov (United States)

    Capitanescu, F; Rege, S; Marvuglia, A; Benetto, E; Ahmadi, A; Gutiérrez, T Navarrete; Tiruta-Barna, L

    2016-07-15

    Empowering decision makers with cost-effective solutions for reducing the environmental burden of industrial processes, at both the design and operation stages, is nowadays a major worldwide concern. The paper addresses this issue for the sector of drinking water production plants (DWPPs), seeking optimal solutions that trade off operation cost and life cycle assessment (LCA)-based environmental impact while satisfying outlet water quality criteria. This leads to a challenging bi-objective constrained optimization problem, which relies on a computationally expensive, intricate process-modelling simulator of the DWPP and has to be solved with a limited computational budget. Since mathematical programming methods are unusable in this case, the paper examines the performance in tackling these challenges of six off-the-shelf state-of-the-art global meta-heuristic optimization algorithms suitable for such simulation-based optimization, namely the Strength Pareto Evolutionary Algorithm (SPEA2), Non-dominated Sorting Genetic Algorithm (NSGA-II), Indicator-Based Evolutionary Algorithm (IBEA), Multi-Objective Evolutionary Algorithm based on Decomposition (MOEA/D), Differential Evolution (DE), and Particle Swarm Optimization (PSO). The results of optimization reveal that good reductions in both the operating cost and the environmental impact of the DWPP can be obtained. Furthermore, NSGA-II outperforms the other competing algorithms, while MOEA/D and DE perform unexpectedly poorly. Copyright © 2016 Elsevier Ltd. All rights reserved.
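
    All six meta-heuristics compared in this study ultimately rank candidate operating points by Pareto dominance over the (cost, impact) pair. A minimal, library-free sketch of that core notion (the numbers are illustrative, not the study's data):

    ```python
    def dominates(a, b):
        """True if solution a Pareto-dominates b (both objectives minimized)."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(points):
        """Return the non-dominated subset of (cost, impact) points."""
        return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

    solutions = [(10.0, 5.0), (8.0, 7.0), (9.0, 6.0), (12.0, 4.0), (11.0, 6.5)]
    print(pareto_front(solutions))  # (11.0, 6.5) is dominated and drops out
    ```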

  6. Risk-based optimization of land reclamation

    International Nuclear Information System (INIS)

    Lendering, K.T.; Jonkman, S.N.; Gelder, P.H.A.J.M. van; Peters, D.J.

    2015-01-01

    Large-scale land reclamations are generally constructed by means of a landfill well above mean sea level. This can be costly in areas where good quality fill material is scarce. An alternative to save materials and costs is a ‘polder terminal’. The quay wall acts as a flood defense and the terminal level is well below the level of the quay wall. Compared with a conventional terminal, the costs are lower, but an additional flood risk is introduced. In this paper, a risk-based optimization is developed for a conventional and a polder terminal. It considers the investment and residual flood risk. The method takes into account both the quay wall and terminal level, which determine the probability and damage of flooding. The optimal quay wall level is found by solving a Lambert function numerically. The terminal level is bounded by engineering boundary conditions, i.e. piping and uplift of the cover layer of the terminal yard. It is found that, for a representative case study, the saving of reclamation costs for a polder terminal is larger than the increase of flood risk. The model is applicable to other cases of land reclamation and to similar optimization problems in flood risk management. - Highlights: • A polder terminal can be an attractive alternative for a conventional terminal. • A polder terminal is feasible at locations with high reclamation cost. • A risk-based approach is required to determine the optimal protection levels. • The depth of the polder terminal yard is bounded by uplifting of the cover layer. • This paper can support decisions regarding alternatives for port expansions.

  7. Distance-Based Functional Diversity Measures and Their Decomposition: A Framework Based on Hill Numbers

    Science.gov (United States)

    Chiu, Chun-Huo; Chao, Anne

    2014-01-01

    Hill numbers (or the “effective number of species”) are increasingly used to characterize species diversity of an assemblage. This work extends Hill numbers to incorporate species pairwise functional distances calculated from species traits. We derive a parametric class of functional Hill numbers, which quantify “the effective number of equally abundant and (functionally) equally distinct species” in an assemblage. We also propose a class of mean functional diversity (per species), which quantifies the effective sum of functional distances between a fixed species and all other species. The product of the functional Hill number and the mean functional diversity thus quantifies the (total) functional diversity, i.e., the effective total distance between species of the assemblage. The three measures (functional Hill numbers, mean functional diversity and total functional diversity) quantify different aspects of species trait space, and all are based on species abundance and species pairwise functional distances. When all species are equally distinct, our functional Hill numbers reduce to ordinary Hill numbers. When species abundances are not considered or species are equally abundant, our total functional diversity reduces to the sum of all pairwise distances between species of an assemblage. The functional Hill numbers and the mean functional diversity both satisfy a replication principle, implying the total functional diversity satisfies a quadratic replication principle. When there are multiple assemblages defined by the investigator, each of the three measures of the pooled assemblage (gamma) can be multiplicatively decomposed into alpha and beta components, and the two components are independent. The resulting beta component measures pure functional differentiation among assemblages and can be further transformed to obtain several classes of normalized functional similarity (or differentiation) measures, including N-assemblage functional generalizations of
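
    Two building blocks of this framework can be stated compactly: the ordinary Hill number ^qD = (sum_i p_i^q)^(1/(1-q)) and Rao's quadratic entropy Q = sum_ij d_ij p_i p_j. The sketch below computes both from abundances and a pairwise trait-distance matrix (illustrative values); the paper's functional Hill numbers combine these ingredients, and their exact form is not reproduced here.

    ```python
    import numpy as np

    def hill_number(p, q):
        """Ordinary Hill number ^qD = (sum p_i^q)^(1/(1-q));
        q = 1 is the limit case exp(Shannon entropy)."""
        p = np.asarray(p, dtype=float)
        p = p[p > 0]
        if np.isclose(q, 1.0):
            return float(np.exp(-np.sum(p * np.log(p))))
        return float(np.sum(p ** q) ** (1.0 / (1.0 - q)))

    def rao_q(p, d):
        """Rao's quadratic entropy: mean pairwise functional distance
        between two randomly drawn individuals."""
        p = np.asarray(p, dtype=float)
        return float(p @ np.asarray(d, dtype=float) @ p)

    p = [0.5, 0.3, 0.2]
    d = [[0.0, 1.0, 2.0], [1.0, 0.0, 1.5], [2.0, 1.5, 0.0]]
    print(hill_number(p, 0), hill_number(p, 2))  # 3.0, ~2.63
    print(rao_q(p, d))                           # 0.88
    ```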

  8. Distance-based functional diversity measures and their decomposition: a framework based on Hill numbers.

    Directory of Open Access Journals (Sweden)

    Chun-Huo Chiu

    Full Text Available Hill numbers (or the "effective number of species") are increasingly used to characterize species diversity of an assemblage. This work extends Hill numbers to incorporate species pairwise functional distances calculated from species traits. We derive a parametric class of functional Hill numbers, which quantify "the effective number of equally abundant and (functionally) equally distinct species" in an assemblage. We also propose a class of mean functional diversity (per species), which quantifies the effective sum of functional distances between a fixed species and all other species. The product of the functional Hill number and the mean functional diversity thus quantifies the (total) functional diversity, i.e., the effective total distance between species of the assemblage. The three measures (functional Hill numbers, mean functional diversity and total functional diversity) quantify different aspects of species trait space, and all are based on species abundance and species pairwise functional distances. When all species are equally distinct, our functional Hill numbers reduce to ordinary Hill numbers. When species abundances are not considered or species are equally abundant, our total functional diversity reduces to the sum of all pairwise distances between species of an assemblage. The functional Hill numbers and the mean functional diversity both satisfy a replication principle, implying the total functional diversity satisfies a quadratic replication principle. When there are multiple assemblages defined by the investigator, each of the three measures of the pooled assemblage (gamma) can be multiplicatively decomposed into alpha and beta components, and the two components are independent. The resulting beta component measures pure functional differentiation among assemblages and can be further transformed to obtain several classes of normalized functional similarity (or differentiation) measures, including N-assemblage functional

  9. Pixel-based OPC optimization based on conjugate gradients.

    Science.gov (United States)

    Ma, Xu; Arce, Gonzalo R

    2011-01-31

    Optical proximity correction (OPC) methods are resolution enhancement techniques (RET) used extensively in the semiconductor industry to improve the resolution and pattern fidelity of optical lithography. In pixel-based OPC (PBOPC), the mask is divided into small pixels, each of which is modified during the optimization process. Two critical issues in PBOPC are the required computational complexity of the optimization process, and the manufacturability of the optimized mask. Most current OPC optimization methods apply the steepest descent (SD) algorithm to improve image fidelity augmented by regularization penalties to reduce the complexity of the mask. Although simple to implement, the SD algorithm converges slowly. The existing regularization penalties, however, fall short in meeting the mask rule check (MRC) requirements often used in semiconductor manufacturing. This paper focuses on developing OPC optimization algorithms based on the conjugate gradient (CG) method which exhibits much faster convergence than the SD algorithm. The imaging formation process is represented by the Fourier series expansion model which approximates the partially coherent system as a sum of coherent systems. In order to obtain more desirable manufacturability properties of the mask pattern, a MRC penalty is proposed to enlarge the linear size of the sub-resolution assistant features (SRAFs), as well as the distances between the SRAFs and the main body of the mask. Finally, a projection method is developed to further reduce the complexity of the optimized mask pattern.
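
    The convergence advantage claimed for CG over steepest descent is easiest to see on the underlying quadratic model. Below is a generic textbook CG routine for minimizing 0.5 x^T A x - b^T x (a sketch, not the paper's PBOPC implementation):

    ```python
    import numpy as np

    def conjugate_gradient(A, b, x0, tol=1e-10, max_iter=1000):
        """CG for symmetric positive definite A; search directions are
        A-conjugate, which buys faster convergence than steepest descent."""
        x = x0.copy()
        r = b - A @ x      # residual = negative gradient
        p = r.copy()
        for _ in range(max_iter):
            rr = r @ r
            if np.sqrt(rr) < tol:
                break
            Ap = A @ p
            alpha = rr / (p @ Ap)        # exact line search along p
            x += alpha * p
            r -= alpha * Ap
            p = r + ((r @ r) / rr) * p   # conjugate direction update
        return x

    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    print(conjugate_gradient(A, b, np.zeros(2)))  # ~[0.0909, 0.6364]
    ```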

  10. Sequential ensemble-based optimal design for parameter estimation

    Energy Technology Data Exchange (ETDEWEB)

    Man, Jun [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Li, Weixuan [Pacific Northwest National Laboratory, Richland Washington USA; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside California USA

    2016-10-01

    The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
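
    For orientation, here is a minimal stochastic-EnKF analysis step of the kind SEOD builds on (a generic sketch with illustrative dimensions, not the authors' implementation):

    ```python
    import numpy as np

    def enkf_analysis(ensemble, obs, H, R, rng):
        """One stochastic EnKF update. ensemble: (n_state, n_members),
        obs: (n_obs,), H: (n_obs, n_state), R: obs-error covariance."""
        n_mem = ensemble.shape[1]
        X = ensemble - ensemble.mean(axis=1, keepdims=True)
        P = X @ X.T / (n_mem - 1)                      # sample covariance
        K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)   # Kalman gain
        # perturbed observations, one independent draw per member
        D = obs[:, None] + rng.multivariate_normal(np.zeros(len(obs)), R, n_mem).T
        return ensemble + K @ (D - H @ ensemble)

    rng = np.random.default_rng(1)
    ens = rng.normal(0.0, 1.0, size=(2, 50))  # prior ensemble of 2 parameters
    H = np.array([[1.0, 0.0]])                # we observe the first parameter
    post = enkf_analysis(ens, np.array([0.8]), H, np.array([[0.01]]), rng)
    print(post.mean(axis=1))                  # first component pulled toward 0.8
    ```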

  11. PRODUCT OPTIMIZATION METHOD BASED ON ANALYSIS OF OPTIMAL VALUES OF THEIR CHARACTERISTICS

    Directory of Open Access Journals (Sweden)

    Constantin D. STANESCU

    2016-05-01

    Full Text Available The paper presents an original method of optimizing products based on the analysis of the optimal values of their characteristics. The optimization method comprises a statistical model and an analytical model. With this original method, the optimal product or material can be obtained easily and quickly.

  12. Robust optimization based upon statistical theory.

    Science.gov (United States)

    Sobotta, B; Söhn, M; Alber, M

    2010-08-01

    Organ movement is still the biggest challenge in cancer treatment despite advances in online imaging. Due to the resulting geometric uncertainties, the delivered dose cannot be predicted precisely at treatment planning time. Consequently, all associated dose metrics (e.g., EUD and maxDose) are random variables with a patient-specific probability distribution. The method that the authors propose makes these distributions the basis of the optimization and evaluation process. The authors start from a model of motion derived from patient-specific imaging. On a multitude of geometry instances sampled from this model, a dose metric is evaluated. The resulting pdf of this dose metric is termed outcome distribution. The approach optimizes the shape of the outcome distribution based on its mean and variance. This is in contrast to the conventional optimization of a nominal value (e.g., PTV EUD) computed on a single geometry instance. The mean and variance allow for an estimate of the expected treatment outcome along with the residual uncertainty. Besides being applicable to the target, the proposed method also seamlessly includes the organs at risk (OARs). The likelihood that a given value of a metric is reached in the treatment is predicted quantitatively. This information reveals potential hazards that may occur during the course of the treatment, thus helping the expert to find the right balance between the risk of insufficient normal tissue sparing and the risk of insufficient tumor control. By feeding this information to the optimizer, outcome distributions can be obtained where the probability of exceeding a given OAR maximum and that of falling short of a given target goal can be minimized simultaneously. The method is applicable to any source of residual motion uncertainty in treatment delivery. Any model that quantifies organ movement and deformation in terms of probability distributions can be used as basis for the algorithm. Thus, it can generate dose

  13. Effects of magnesium-based hydrogen storage materials on the thermal decomposition, burning rate, and explosive heat of ammonium perchlorate-based composite solid propellant.

    Science.gov (United States)

    Liu, Leili; Li, Jie; Zhang, Lingyao; Tian, Siyu

    2018-01-15

    MgH2, Mg2NiH4, and Mg2CuH3 were prepared, and their structure and hydrogen storage properties were determined through X-ray photoelectron spectroscopy and a thermal analyzer. The effects of MgH2, Mg2NiH4, and Mg2CuH3 on the thermal decomposition, burning rate, and explosive heat of an ammonium perchlorate-based composite solid propellant were subsequently studied. Results indicated that MgH2, Mg2NiH4, and Mg2CuH3 can decrease the thermal decomposition peak temperature and increase the total released heat of decomposition; these compounds thus improve the thermal decomposition of the propellant. The burning rates of the propellant increased when Mg-based hydrogen storage materials were used as promoters. The burning rates also increased when MgH2 was used instead of Al in the propellant, but the explosive heat was not enlarged, even though the combustion heat of MgH2 is higher than that of Al. A possible mechanism was thus proposed. Copyright © 2017. Published by Elsevier B.V.

  14. Gas Sensing Analysis of Ag-Decorated Graphene for Sulfur Hexafluoride Decomposition Products Based on the Density Functional Theory

    Directory of Open Access Journals (Sweden)

    Xiaoxing Zhang

    2016-11-01

    Full Text Available Detection of the decomposition products of sulfur hexafluoride (SF6) is one of the best ways to diagnose early latent insulation faults in gas-insulated equipment, and the occurrence of sudden accidents can be avoided effectively by finding early latent faults. Recently, functionalized graphene, a kind of gas-sensing material, has been reported to show good application prospects in the gas sensor field. Therefore, calculations were performed to analyze the gas-sensing properties of intrinsic graphene (Int-graphene) and a functionalized graphene-based material, Ag-decorated graphene (Ag-graphene), for the decomposition products of SF6, including SO2F2, SOF2, and SO2, based on density functional theory (DFT). We thoroughly investigated a series of parameters characterizing the gas-sensing properties of the adsorption process for single gas molecules (SO2F2, SOF2, SO2) and double gas molecules (2SO2F2, 2SOF2, 2SO2) on Ag-graphene, including adsorption energy, net charge transfer, electronic density of states, and the highest occupied and lowest unoccupied molecular orbitals. The results showed that the Ag atom significantly enhances the electrochemical reactivity of graphene, reflected in the change of conductivity during the adsorption process. SO2F2 and SO2 gas molecules on Ag-graphene presented chemisorption, and the adsorption strength was SO2F2 > SO2, while SOF2 absorption on Ag-graphene was physical adsorption. Thus, we concluded that Ag-graphene shows good selectivity and high sensitivity to SO2F2. The results can provide a helpful guide for exploring Ag-graphene materials in experiments for monitoring the insulation status of SF6-insulated equipment based on detecting the decomposition products of SF6.

  15. α-Decomposition for estimating parameters in common cause failure modeling based on causal inference

    International Nuclear Information System (INIS)

    Zheng, Xiaoyu; Yamaguchi, Akira; Takata, Takashi

    2013-01-01

    The traditional α-factor model has focused on the occurrence frequencies of common cause failure (CCF) events. Global α-factors in the α-factor model are defined as fractions of the failure probability for particular groups of components. However, there are unknown uncertainties in CCF parameter estimation owing to the scarcity of available failure data. Joint distributions of CCF parameters are actually determined by a set of possible causes, which are characterized by their CCF-triggering abilities and occurrence frequencies. In the present paper, the process of α-decomposition (the Kelly-CCF method) is developed to learn about the sources of uncertainty in CCF parameter estimation. Moreover, it aims to evaluate the CCF risk significance of different causes, expressed as decomposed α-factors. First, a hybrid Bayesian network is adopted to reveal the relationship between potential causes and failures. Second, because all potential causes have different occurrence frequencies and abilities to trigger dependent or independent failures, a regression model is provided and proved by conditional probability: global α-factors are expressed by explanatory variables (the causes' occurrence frequencies) and parameters (the decomposed α-factors). Finally, an example is provided to illustrate the process of hierarchical Bayesian inference for the α-decomposition process. This study shows that the α-decomposition method can integrate failure information from the cause, component and system levels. It can parameterize the CCF risk significance of possible causes and can update the probability distributions of global α-factors. Moreover, it provides a reliable way to evaluate uncertainty sources and reduce the uncertainty in probabilistic risk assessment. It is recommended to build databases including CCF parameters and the corresponding causes' occurrence frequencies for each targeted system
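
    For orientation, the classical point estimate of the global α-factors that this paper decomposes is simply the observed fraction of events of each failure multiplicity (a standard textbook formula; the counts are illustrative):

    ```python
    def alpha_factors(event_counts):
        """Classical point estimates of global alpha-factors.
        event_counts[k-1] = number of events in which exactly k components
        of the common cause group failed."""
        total = sum(event_counts)
        return [n / total for n in event_counts]

    # a group of 3 components: 90 single, 8 double, 2 triple failures
    print(alpha_factors([90, 8, 2]))  # -> [0.9, 0.08, 0.02]
    ```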

  16. Support Vector Regression Model Based on Empirical Mode Decomposition and Auto Regression for Electric Load Forecasting

    Directory of Open Access Journals (Sweden)

    Hong-Juan Li

    2013-04-01

    Full Text Available Electric load forecasting is an important issue for a power utility, associated with the management of daily operations such as energy transfer scheduling, unit commitment, and load dispatch. Inspired by the strong non-linear learning capability of support vector regression (SVR), this paper presents an SVR model hybridized with the empirical mode decomposition (EMD) method and auto regression (AR) for electric load forecasting. The electric load data of the New South Wales (Australia) market are employed for comparing the forecasting performances of different forecasting models. The results confirm the validity of the idea that the proposed model can simultaneously provide forecasting with good accuracy and interpretability.
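
    A minimal sketch of the decompose-then-forecast idea on a synthetic load signal, assuming the third-party PyEMD and scikit-learn packages are available; the paper's actual EMD/SVR/AR hybridization is more elaborate than this one-step-ahead toy:

    ```python
    import numpy as np
    from PyEMD import EMD          # assumed available (pip: EMD-signal)
    from sklearn.svm import SVR

    def lagged(series, n_lags):
        """Build a lag matrix X and the next-value target y."""
        X = np.column_stack([series[i:len(series) - n_lags + i]
                             for i in range(n_lags)])
        return X, series[n_lags:]

    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 500)
    load = (np.sin(2 * np.pi * t) + 0.3 * np.sin(9 * np.pi * t)
            + 0.1 * rng.normal(size=t.size))

    imfs = EMD().emd(load)         # intrinsic mode functions + residue
    forecast = 0.0
    for imf in imfs:               # forecast each component, then sum
        X, y = lagged(imf, n_lags=24)
        model = SVR(kernel="rbf", C=10.0).fit(X[:-1], y[:-1])
        forecast += model.predict(X[-1:])[0]
    print("one-step-ahead forecast:", forecast)
    ```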

  17. Analytical singular-value decomposition of three-dimensional, proximity-based SPECT systems

    Energy Technology Data Exchange (ETDEWEB)

    Barrett, Harrison H. [Arizona Univ., Tucson, AZ (United States). College of Optical Sciences; Arizona Univ., Tucson, AZ (United States). Center for Gamma-Ray Imaging; Holen, Roel van [Ghent Univ. (Belgium). Medical Image and Signal Processing (MEDISIP); Arizona Univ., Tucson, AZ (United States). Center for Gamma-Ray Imaging

    2011-07-01

    An operator formalism is introduced for the description of SPECT imaging systems that use solid-angle effects rather than pinholes or collimators, as in recent work by Mitchell and Cherry. The object is treated as a 3D function, without discretization, and the data are 2D functions on the detectors. An analytic singular-value decomposition of the resulting integral operator is performed and used to compute the measurement and null components of the objects. The results of the theory are confirmed with a Landweber algorithm that does not require a system matrix. (orig.)
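
    The Landweber algorithm mentioned at the end needs only forward- and adjoint-operator applications, never an explicit system matrix, which is why it suits such systems. A generic sketch on a toy 1D deblurring problem (not the paper's SPECT operator):

    ```python
    import numpy as np

    def landweber(forward, adjoint, data, x0, step, n_iter=200):
        """Landweber iteration x <- x + step * A^T (b - A x); only the
        operators are needed, never a stored system matrix."""
        x = x0.copy()
        for _ in range(n_iter):
            x += step * adjoint(data - forward(x))
        return x

    # toy problem: A = convolution with a symmetric box kernel
    kernel = np.ones(5) / 5.0
    forward = lambda x: np.convolve(x, kernel, mode="same")
    adjoint = forward              # symmetric kernel => A^T = A
    x_true = np.zeros(100)
    x_true[40:60] = 1.0
    b = forward(x_true)
    x_rec = landweber(forward, adjoint, b, np.zeros(100), step=0.5)
    print(np.linalg.norm(forward(x_rec) - b))  # data residual shrinks
    ```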

  18. Progressivity of personal income tax in Croatia: decomposition of tax base and rate effects

    Directory of Open Access Journals (Sweden)

    Ivica Urban

    2006-09-01

    Full Text Available This paper presents progressivity breakdowns for Croatian personal income tax (henceforth PIT) in 1997 and 2004. The decompositions reveal how the elements of the system – tax schedule, allowances, deductions and credits – contribute to the achievement of progressivity over the quantiles of pre-tax income distribution. Through the use of ‘single parameter’ Gini indices, the social decision maker’s (henceforth SDM) relatively more or less favorable inclination toward taxpayers in the lower tails of pre-tax income distribution is accounted for. Simulations are undertaken to show how the introduction of a flat-rate system would affect progressivity.

  19. The Fault Diagnosis of Rolling Bearing Based on Ensemble Empirical Mode Decomposition and Random Forest

    Directory of Open Access Journals (Sweden)

    Xiwen Qin

    2017-01-01

    Full Text Available Accurate diagnosis of rolling bearing faults is of great significance for the normal operation of machinery and equipment. A method combining Ensemble Empirical Mode Decomposition (EEMD) and Random Forest (RF) is proposed. Firstly, the original signal is decomposed into several intrinsic mode functions (IMFs) by EEMD, and the effective IMFs are selected. Then their energy entropy is calculated as the feature. Finally, the classification is performed by RF. In addition, the wavelet method is also used in the proposed process, in the same way as EEMD. The results of the comparison show that the EEMD method is more accurate than the wavelet method.

  20. Tissue decomposition from dual energy CT data for MC based dose calculation in particle therapy

    Energy Technology Data Exchange (ETDEWEB)

    Hünemohr, Nora, E-mail: n.huenemohr@dkfz.de; Greilich, Steffen [Medical Physics in Radiation Oncology, German Cancer Research Center, 69120 Heidelberg (Germany); Paganetti, Harald; Seco, Joao [Department of Radiation Oncology, Massachusetts General Hospital and Harvard Medical School, Boston, Massachusetts 02114 (United States); Jäkel, Oliver [Medical Physics in Radiation Oncology, German Cancer Research Center, 69120 Heidelberg, Germany and Department of Radiation Oncology and Radiation Therapy, University Hospital of Heidelberg, 69120 Heidelberg (Germany)

    2014-06-15

    Purpose: The authors describe a novel method of predicting mass density and elemental mass fractions of tissues from dual energy CT (DECT) data for Monte Carlo (MC) based dose planning. Methods: The relative electron density ϱ_e and effective atomic number Z_eff are calculated for 71 tabulated tissue compositions. For MC simulations, the mass density is derived via one linear fit in ϱ_e that covers the entire range of tissue compositions (except lung tissue). Elemental mass fractions are predicted from ϱ_e and Z_eff in combination. Since particle therapy dose planning and verification is especially sensitive to accurate material assignment, differences from the ground truth are further analyzed for mass density, I-value predictions, and stopping power ratios (SPR) for ions. Dose studies with monoenergetic protons and carbon ions in 12 tissues which showed the largest differences between single energy CT (SECT) and DECT are presented with respect to range uncertainties. The standard approach (SECT) and the new DECT approach are compared to reference Bragg peak positions. Results: Mean deviations from ground truth in mass density predictions could be reduced for soft tissue from (0.5±0.6)% (SECT) to (0.2±0.2)% with the DECT method. Maximum SPR deviations could be reduced significantly for soft tissue from 3.1% (SECT) to 0.7% (DECT) and for bone tissue from 0.8% to 0.1%. Mean I-value deviations could be reduced for soft tissue from (1.1±1.4)% (SECT) to (0.4±0.3)% with the presented method. Predictions of elemental composition were improved for every element. Mean and maximum deviations from ground truth of all elemental mass fractions could be reduced by at least a half with DECT compared to SECT (except soft tissue hydrogen and nitrogen, where the reduction was slightly smaller). The carbon and oxygen mass fraction predictions profit especially from the DECT information. Dose studies showed that most of the 12 selected tissues would

  1. Location based Network Optimizations for Mobile Wireless Networks

    DEFF Research Database (Denmark)

    Nielsen, Jimmy Jessen

    selection in Wi-Fi networks and predictive handover optimization in heterogeneous wireless networks. The investigations in this work have indicated that location based network optimizations are beneficial compared to typical link measurement based approaches. Especially the knowledge of geographical...

  2. Optimization of the Case Based Reasoning Systems

    International Nuclear Information System (INIS)

    Mohamed, A.H.

    2014-01-01

    Intrusion Detection Systems (IDS) are of great importance in protecting the integrity of information spread all over the world through networks. Many case-based systems have addressed the varied methods of unauthorized users/hackers that confront the developers of IDS. The proposed system introduces a new hybrid approach that uses a genetic algorithm to optimize a case-based IDS. It can detect new anomalies appearing in the network and use the cases in the case library to determine the suitable solution for their behavior. The suggested system can solve the problem either by reusing an identical old solution or by adapting the optimum one until the targeted solution is reached. The proposed system has been applied to block unauthorized users/hackers from attacking medical images for radiotherapy of cancer diseases during their transmission over the web, and it demonstrates acceptable performance in this application.

  3. Analysis of temporal-longitudinal-latitudinal characteristics in the global ionosphere based on tensor rank-1 decomposition

    Science.gov (United States)

    Lu, Shikun; Zhang, Hao; Li, Xihai; Li, Yihong; Niu, Chao; Yang, Xiaoyun; Liu, Daizhi

    2018-03-01

    Combining analyses of spatial and temporal characteristics of the ionosphere is of great significance for scientific research and engineering applications. Tensor decomposition is performed to explore the temporal-longitudinal-latitudinal characteristics in the ionosphere. Three-dimensional tensors are established based on the time series of ionospheric vertical total electron content maps obtained from the Centre for Orbit Determination in Europe. To obtain large-scale characteristics of the ionosphere, rank-1 decomposition is used to obtain U^{(1)}, U^{(2)}, and U^{(3)}, which are the resulting vectors for the time, longitude, and latitude modes, respectively. Our initial finding is that the correspondence between the frequency spectrum of U^{(1)} and solar variation indicates that rank-1 decomposition primarily describes large-scale temporal variations in the global ionosphere caused by the Sun. Furthermore, the time lags between the maxima of the ionospheric U^{(2)} and solar irradiation range from 1 to 3.7 h without seasonal dependence. The differences in time lags may indicate different interactions between processes in the magnetosphere-ionosphere-thermosphere system. Based on the dataset displayed in geomagnetic coordinates, the position of the barycenter of U^{(3)} provides evidence for north-south asymmetry (NSA) in the large-scale ionospheric variations. The daily variation in such asymmetry indicates the influences of solar ionization. The diurnal geomagnetic coordinate variations in U^{(3)} show that the large-scale EIA (equatorial ionization anomaly) variations during the day and night have similar characteristics. Considering the influences of geomagnetic disturbance on ionospheric behavior, we select the geomagnetically quiet GIMs (global ionosphere maps) to construct the ionospheric tensor. The results indicate that geomagnetic disturbances have little effect on large-scale ionospheric characteristics.
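
    Rank-1 CP decomposition of a 3-way tensor can be computed with a short alternating scheme. The sketch below (a generic routine on synthetic data, not the authors' pipeline) returns unit-norm mode vectors analogous to U^{(1)}, U^{(2)}, and U^{(3)} for the time, longitude, and latitude modes:

    ```python
    import numpy as np

    def rank1_cp(T, n_iter=100):
        """Rank-1 CP: T ~ lam * u1 o u2 o u3 with unit-norm factors,
        fitted by alternating updates (higher-order power iteration)."""
        I, J, K = T.shape
        u1, u2, u3 = (np.ones(n) / np.sqrt(n) for n in (I, J, K))
        for _ in range(n_iter):
            u1 = np.einsum("ijk,j,k->i", T, u2, u3); u1 /= np.linalg.norm(u1)
            u2 = np.einsum("ijk,i,k->j", T, u1, u3); u2 /= np.linalg.norm(u2)
            u3 = np.einsum("ijk,i,j->k", T, u1, u2); u3 /= np.linalg.norm(u3)
        lam = np.einsum("ijk,i,j,k->", T, u1, u2, u3)
        return lam, u1, u2, u3

    rng = np.random.default_rng(0)
    a, b, c = rng.normal(size=12), rng.normal(size=8), rng.normal(size=6)
    T = np.einsum("i,j,k->ijk", a, b, c) + 0.01 * rng.normal(size=(12, 8, 6))
    lam, u1, u2, u3 = rank1_cp(T)
    print(abs(u1 @ (a / np.linalg.norm(a))))  # ~1: time mode recovered
    ```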

  4. Harmonic analysis of traction power supply system based on wavelet decomposition

    Science.gov (United States)

    Dun, Xiaohong

    2018-05-01

    With the rapid development of high-speed rail and heavy-haul transport, AC-drive electric locomotives and EMUs operate on a large scale across the country, and the electrified railway has become the main harmonic source in China's power grid. This calls for timely monitoring, assessment, and mitigation of the power quality problems of the electrified railway. The wavelet transform was developed on the basis of Fourier analysis; its basic idea comes from harmonic analysis, and it rests on a rigorous theoretical model. It has inherited and developed the localization idea of the Gabor transform while overcoming drawbacks such as the fixed window and the lack of discrete orthogonality, making it a much-studied spectral analysis tool. Wavelet analysis takes gradually finer time-domain steps in the high-frequency part so as to focus on any detail of the signal being analyzed, thereby comprehensively analyzing the harmonics of the traction power supply system; the pyramid algorithm is used to speed up the wavelet decomposition. The MATLAB simulation shows that using wavelet decomposition for harmonic spectrum analysis of the traction power supply system is effective.
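
    A minimal sketch of the multi-level (pyramid-algorithm) wavelet decomposition described here, assuming the PyWavelets package and a synthetic traction current sampled at 10 kHz (all parameters illustrative); the per-level frequency bands follow from the dyadic splitting:

    ```python
    import numpy as np
    import pywt  # assumed available (PyWavelets)

    fs = 10_000
    t = np.arange(0, 0.2, 1 / fs)
    # 50 Hz fundamental plus 3rd and 7th harmonics as a stand-in signal
    i_tr = (np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 150 * t)
            + 0.1 * np.sin(2 * np.pi * 350 * t))

    # Mallat pyramid algorithm: coefficients [cA5, cD5, cD4, ..., cD1]
    coeffs = pywt.wavedec(i_tr, "db4", level=5)
    for lvl, d in zip(range(5, 0, -1), coeffs[1:]):
        lo, hi = fs / 2 ** (lvl + 1), fs / 2 ** lvl
        print(f"detail D{lvl}: ~({lo:.0f}, {hi:.0f}) Hz, "
              f"energy {np.sum(d ** 2):.2f}")
    ```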

  5. Path planning of decentralized multi-quadrotor based on fuzzy-cell decomposition algorithm

    Science.gov (United States)

    Iswanto, Wahyunggoro, Oyas; Cahyadi, Adha Imam

    2017-04-01

    The paper presents a path-planning algorithm for multiple quadrotors so that they can move toward a goal quickly while avoiding obstacles in a cluttered area. Path planning poses several problems, including how to reach the goal position quickly while avoiding static and dynamic obstacles. To overcome these problems, the paper combines a fuzzy logic algorithm with a cell decomposition algorithm. Fuzzy logic is an artificial intelligence technique applicable to robot path planning that can detect static and dynamic obstacles. Cell decomposition is a graph-theoretic algorithm used to build a map of the robot's paths. Using the two algorithms, the robot can reach the goal position and avoid obstacles, but it takes considerable time because they cannot find the shortest path. Therefore, this paper describes a modification of the algorithms by adding a potential field algorithm that assigns weight values to the map, applied to each quadrotor under decentralized control, so that each quadrotor can move to the goal position quickly along the shortest path. The simulations conducted show that multiple quadrotors can avoid various obstacles and find the shortest path using the proposed algorithms.

  6. Image reconstruction of fluorescent molecular tomography based on the tree structured Schur complement decomposition

    Directory of Open Access Journals (Sweden)

    Wang Jiajun

    2010-05-01

    Full Text Available Background: The inverse problem of fluorescence molecular tomography (FMT) often involves complex large-scale matrix operations, which may lead to unacceptable computational errors and complexity. In this research, a tree-structured Schur complement decomposition strategy is proposed to accelerate the reconstruction process and reduce the computational complexity. Additionally, an adaptive regularization scheme is developed to improve the ill-posedness of the inverse problem. Methods: The global system is decomposed level by level with the Schur complement system along two paths in the tree structure. The resultant subsystems are solved in combination with the biconjugate gradient method. The mesh for the inverse problem is generated incorporating the prior information. During the reconstruction, the regularization parameters are adaptive not only to the spatial variations but also to the variations of the objective function, to tackle the ill-posed nature of the inverse problem. Results: Simulation results demonstrate that the tree-structured Schur complement decomposition strategy clearly outperforms previous methods, such as the conventional conjugate gradient (CG) and Schur CG methods, in both reconstruction accuracy and speed. Compared with the Tikhonov regularization method, the adaptive regularization scheme significantly improves the ill-posedness of the inverse problem. Conclusions: The methods proposed in this paper can significantly improve the reconstructed image quality of FMT and accelerate the reconstruction process.
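
    The Schur complement elimination at the heart of this strategy can be shown on a generic 2x2 block system (a minimal dense-matrix sketch; the paper applies it level by level to large sparse FMT systems):

    ```python
    import numpy as np

    def schur_solve(A, B, C, D, f, g):
        """Solve [[A, B], [C, D]] [x, y] = [f, g] by eliminating x:
        S = D - C A^{-1} B gives a smaller system for y, then x follows."""
        A_inv_B = np.linalg.solve(A, B)
        A_inv_f = np.linalg.solve(A, f)
        S = D - C @ A_inv_B                      # Schur complement of A
        y = np.linalg.solve(S, g - C @ A_inv_f)
        x = A_inv_f - A_inv_B @ y
        return x, y

    rng = np.random.default_rng(2)
    n, m = 6, 3
    M = rng.normal(size=(n + m, n + m)) + (n + m) * np.eye(n + m)
    A, B, C, D = M[:n, :n], M[:n, n:], M[n:, :n], M[n:, n:]
    rhs = rng.normal(size=n + m)
    x, y = schur_solve(A, B, C, D, rhs[:n], rhs[n:])
    print(np.allclose(M @ np.concatenate([x, y]), rhs))  # True
    ```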

  7. Automated polyp measurement based on colon structure decomposition for CT colonography

    Science.gov (United States)

    Wang, Huafeng; Li, Lihong C.; Han, Hao; Peng, Hao; Song, Bowen; Wei, Xinzhou; Liang, Zhengrong

    2014-03-01

    Accurate assessment of colorectal polyp size is of great significance for the early diagnosis and management of colorectal cancers. Due to the complexity of colon structure, polyps with diverse geometric characteristics grow from different landform surfaces. In this paper, we present a new colon decomposition approach for polyp measurement. We first apply an efficient maximum a posteriori expectation-maximization (MAP-EM) partial volume segmentation algorithm to achieve an effective electronic cleansing of the colon. The global colon structure is then decomposed into different kinds of morphological shapes, e.g. haustral folds or haustral wall. Meanwhile, the polyp location is identified by an automatic computer-aided detection algorithm. By integrating the colon structure decomposition with the computer-aided detection system, a patch volume of colon polyps is extracted. Thus, polyp size assessment can be achieved by finding the abnormal protrusion on a relatively uniform morphological surface from the decomposed colon landform. We evaluated our method via physical phantom and clinical datasets. Experimental results demonstrate the feasibility of our method in consistently quantifying the size of polyp volume and, therefore, facilitating characterization for clinical management.

  8. Nickel Oxide (NiO nanoparticles prepared by solid-state thermal decomposition of Nickel (II schiff base precursor

    Directory of Open Access Journals (Sweden)

    Aliakbar Dehno Khalaji

    2015-06-01

    Full Text Available In this paper, plate-like NiO nanoparticles were prepared by one-pot solid-state thermal decomposition of a nickel(II) Schiff base complex as a new precursor. First, the nickel(II) Schiff base precursor was prepared by solid-state grinding of nickel(II) nitrate hexahydrate, Ni(NO3)2·6H2O, and the Schiff base ligand N,N′-bis(salicylidene)benzene-1,4-diamine for 30 min without using any solvent, catalyst, template or surfactant. It was characterized by Fourier transform infrared spectroscopy (FT-IR) and elemental analysis (CHN). The resultant solid was subsequently annealed in an electrical furnace at 450 °C for 3 h in an air atmosphere. Nanoparticles of NiO were produced and characterized by X-ray powder diffraction (XRD) over a 2θ range of 0-140°, FT-IR spectroscopy, scanning electron microscopy (SEM) and transmission electron microscopy (TEM). The XRD and FT-IR results showed that the product is pure and has good crystallinity with a cubic structure, because no characteristic peaks of impurities were observed, while the SEM and TEM results showed that the obtained product consists of tiny, aggregated plate-like particles with a narrow size distribution and an average size of 10-40 nm. The results show that the solid-state thermal decomposition method is simple, environmentally friendly, safe and suitable for the preparation of NiO nanoparticles. This method can also be used to synthesize nanoparticles of other metal oxides.

  9. TiO2 Immobilized on Manihot Carbon: Optimal Preparation and Evaluation of Its Activity in the Decomposition of Indigo Carmine

    Science.gov (United States)

    Antonio-Cisneros, Cynthia M.; Dávila-Jiménez, Martín M.; Elizalde-González, María P.; García-Díaz, Esmeralda

    2015-01-01

    Applications of carbon-TiO2 materials have attracted attention in nanotechnology due to their synergic effects. We report the immobilization of TiO2 on carbon prepared from residues of the plant Manihot, commercial TiO2 and glycerol. The objective was to obtain a moderate loading of the anatase phase by preserving the carbonaceous external surface and micropores of the composite. Two preparation methods were compared, including mixing dry precursors and immobilization using a glycerol slurry. The evaluation of the micropore blocking was performed using nitrogen adsorption isotherms. The results indicated that it was possible to use Manihot residues and glycerol to prepare an anatase-containing material with a basic surface and a significant SBET value. The activities of the prepared materials were tested in a decomposition assay of indigo carmine. The TiO2/carbon eliminated nearly 100% of the dye under UV irradiation using the optimal conditions found by a Taguchi L4 orthogonal array considering the specific surface, temperature and initial concentration. The reaction was monitored by UV-Vis spectrophotometry and LC-ESI-(Qq)-TOF-MS, enabling the identification of some intermediates. No isatin-5-sulfonic acid was detected after a 60 min photocatalytic reaction, and three sulfonated aromatic amines, including 4-amino-3-hydroxybenzenesulfonic acid, 2-(2-amino-5-sulfophenyl)-2-oxoacetic acid and 2-amino-5-sulfobenzoic acid, were present in the reaction mixture. PMID:25588214

  11. Performance-based shape optimization of continuum structures

    International Nuclear Information System (INIS)

    Liang Qingquan

    2010-01-01

    This paper presents a performance-based optimization (PBO) method for optimal shape design of continuum structures with stiffness constraints. Performance-based design concepts are incorporated in the shape optimization theory to achieve optimal designs. In the PBO method, the traditional shape optimization problem of minimizing the weight of a continuum structure with displacement or mean compliance constraints is transformed to the problem of maximizing the performance of the structure. The optimal shape of a continuum structure is obtained by gradually eliminating inefficient finite elements from the structure until its performance is maximized. Performance indices are employed to monitor the performance of optimized shapes in an optimization process. Performance-based optimality criteria are incorporated in the PBO method to identify the optimum from the optimization process. The PBO method is used to produce optimal shapes of plane stress continuum structures and plates in bending. Benchmark numerical results are provided to demonstrate the effectiveness of the PBO method for generating the maximum stiffness shape design of continuum structures. It is shown that the PBO method developed overcomes the limitations of traditional shape optimization methods in optimal design of continuum structures. Performance-based optimality criteria presented can be incorporated in any shape and topology optimization methods to obtain optimal designs of continuum structures.

  12. Decomposition techniques in mathematical programming: engineering and science applications

    CERN Document Server

    Conejo, Antonio J; Minguez, Roberto; Garcia-Bertrand, Raquel

    2006-01-01

    Optimization plainly dominates the design, planning, operation, and control of engineering systems. This is a book on optimization that considers particular cases of optimization problems, those with a decomposable structure that can be advantageously exploited. Those decomposable optimization problems are ubiquitous in engineering and science applications. The book considers problems with both complicating constraints and complicating variables, and analyzes linear and nonlinear problems, with and without integer variables. The decomposition techniques analyzed include Dantzig-Wolfe, Benders, Lagrangian relaxation, Augmented Lagrangian decomposition, and others. Heuristic techniques are also considered. Additionally, a comprehensive sensitivity analysis for characterizing the solution of optimization problems is carried out. This material is particularly novel and of high practical interest. This book is built based on many clarifying, illustrative, and computational examples, which facilitate the learning p...

  13. Danburite decomposition by hydrochloric acid

    International Nuclear Information System (INIS)

    Mamatov, E.D.; Ashurov, N.A.; Mirsaidov, U.

    2011-01-01

    The present article is devoted to the decomposition of danburite from the Ak-Arkhar deposit of Tajikistan by hydrochloric acid. The interaction of the boron-containing ores of the Ak-Arkhar deposit with mineral acids, including hydrochloric acid, was studied. The optimal conditions for the extraction of valuable components from the danburite composition were determined, as was the chemical composition of the Ak-Arkhar danburite. The kinetics of the decomposition of calcined danburite by hydrochloric acid was studied, and the apparent activation energy of the decomposition process was calculated.
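
    The last step, extracting an apparent activation energy from rate constants measured at several temperatures, follows the standard Arrhenius fit; the numbers below are invented, not the danburite data:

        # Fit ln k = ln A - Ea/(R*T); the slope of ln k vs 1/T gives -Ea/R.
        import numpy as np

        R = 8.314                                       # J mol^-1 K^-1
        T = np.array([323.0, 333.0, 343.0, 353.0])      # K (hypothetical)
        k = np.array([1.2e-4, 2.6e-4, 5.3e-4, 1.0e-3])  # s^-1 (hypothetical)

        slope, _ = np.polyfit(1.0 / T, np.log(k), 1)
        Ea = -slope * R
        print(f"apparent activation energy ~ {Ea / 1000:.0f} kJ/mol")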

  14. Improved performance of parallel surface/packed-bed discharge reactor for indoor VOCs decomposition: optimization of the reactor structure

    International Nuclear Information System (INIS)

    Jiang, Nan; Hui, Chun-Xue; Li, Jie; Lu, Na; Shang, Ke-Feng; Wu, Yan; Mizuno, Akira

    2015-01-01

    The purpose of this paper is to develop a high-efficiency air-cleaning system for volatile organic compounds (VOCs) existing in the workshop of a chemical factory. A novel parallel surface/packed-bed discharge (PSPBD) reactor, which utilizes a combination of surface discharge (SD) plasma with packed-bed discharge (PBD) plasma, was designed and employed for VOCs removal in a closed vessel. In order to optimize the structure of the PSPBD reactor, the discharge characteristic, benzene removal efficiency, and energy yield were compared for different discharge lengths, quartz tube diameters, shapes of external high-voltage electrode, packed-bed discharge gaps, and packing pellet sizes, respectively. In the circulation test, 52.8% of benzene was removed and the energy yield reached 0.79 mg kJ−1 after a 210 min discharge treatment in the PSPBD reactor, which was 10.3% and 0.18 mg kJ−1 higher, respectively, than in the SD reactor, and 21.8% and 0.34 mg kJ−1 higher, respectively, than in the PBD reactor at 53 J l−1. The improved performance in benzene removal and energy yield can be attributed to the plasma chemistry effect of the sequential processing in the PSPBD reactor. The VOCs mineralization and organic intermediates generated during discharge treatment were followed by COx selectivity and FT-IR analyses. The experimental results indicate that the PSPBD plasma process is an effective and energy-efficient approach for VOCs removal in an indoor environment. (paper)

  15. Automatic screening of obstructive sleep apnea from the ECG based on empirical mode decomposition and wavelet analysis

    International Nuclear Information System (INIS)

    Mendez, M O; Cerutti, S; Bianchi, A M; Corthout, J; Van Huffel, S; Matteucci, M; Penzel, T

    2010-01-01

    This study analyses two different methods to detect obstructive sleep apnea (OSA) during sleep time based only on the ECG signal. OSA is a common sleep disorder caused by repetitive occlusions of the upper airways, which produces a characteristic pattern on the ECG. ECG features, such as the heart rate variability (HRV) and the QRS peak area, contain information suitable for making a fast, non-invasive and simple screening of sleep apnea. Fifty recordings freely available on Physionet have been included in this analysis, subdivided into a training set and a testing set. We investigated the possibility of using the recently proposed method of empirical mode decomposition (EMD) for this application, comparing the results with the ones obtained through the well-established wavelet analysis (WA). By these decomposition techniques, several features have been extracted from the ECG signal and complemented with a series of standard HRV time domain measures. The best performing feature subset, selected through a sequential feature selection (SFS) method, was used as the input of linear and quadratic discriminant classifiers. In this way we were able to classify the signals on a minute-by-minute basis as apneic or nonapneic with different best-subset sizes, obtaining an accuracy of up to 89% with WA and 85% with EMD. Furthermore, 100% correct discrimination of apneic patients from normal subjects was achieved independently of the feature extractor. Finally, the same procedure was repeated by pooling features from standard HRV time domain, EMD and WA together in order to investigate whether the two decomposition techniques could provide complementary features. The obtained accuracy was 89%, similar to the one achieved using only wavelet analysis as the feature extractor; however, some complementary features in EMD and WA are evident.

  16. Forest height estimation from mountain forest areas using general model-based decomposition for polarimetric interferometric synthetic aperture radar images

    Science.gov (United States)

    Minh, Nghia Pham; Zou, Bin; Cai, Hongjun; Wang, Chengyi

    2014-01-01

    The estimation of forest parameters over mountain forest areas using polarimetric interferometric synthetic aperture radar (PolInSAR) images is of great interest in remote sensing applications. For mountain forest areas, scattering mechanisms are strongly affected by ground topography variations. Most previous studies modeling the microwave backscattering signatures of forest areas have been carried out over relatively flat areas. Therefore, a new algorithm for forest height estimation over mountain forest areas using the general model-based decomposition (GMBD) for PolInSAR images is proposed. This algorithm enables the retrieval of not only the forest parameters, but also the magnitude associated with each mechanism. In addition, general double- and single-bounce scattering models are proposed to fit the cross-polarization and off-diagonal terms by separating their independent orientation angle, which remained unachieved in previous model-based decompositions. The efficiency of the proposed approach is demonstrated with simulated data from PolSARProSim software and ALOS-PALSAR spaceborne PolInSAR datasets over the Kalimantan area, Indonesia. Experimental results indicate that forest height can be effectively estimated by GMBD.

  17. Reliability-Based Optimization of Structural Elements

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    In this paper structural elements from an optimization point of view are considered, i.e. only the geometry of a structural element is optimized. Reliability modelling of the structural element is discussed both from an element point of view and from a system point of view. The optimization...

  18. Entropy-Based Method of Choosing the Decomposition Level in Wavelet Threshold De-noising

    Directory of Open Access Journals (Sweden)

    Yan-Fang Sang

    2010-06-01

    Full Text Available In this paper, the energy distributions of various noises following normal, log-normal and Pearson-III distributions are first described quantitatively using the wavelet energy entropy (WEE, and the results are compared and discussed. Then, on the basis of these analytic results, a method for use in choosing the decomposition level (DL in wavelet threshold de-noising (WTD is put forward. Finally, the performance of the proposed method is verified by analysis of both synthetic and observed series. Analytic results indicate that the proposed method is easy to operate and suitable for various signals. Moreover, contrary to traditional white noise testing which depends on “autocorrelations”, the proposed method uses energy distributions to distinguish real signals and noise in noisy series, therefore the chosen DL is reliable, and the WTD results of time series can be improved.
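
    A minimal sketch of the wavelet energy entropy itself, assuming the PyWavelets package and a Daubechies wavelet (both arbitrary choices here, not necessarily the paper's):

        # WEE: Shannon entropy of the energy distribution over wavelet
        # sub-bands; compare it across candidate decomposition levels.
        import numpy as np
        import pywt

        def wavelet_energy_entropy(x, wavelet="db4", level=5):
            coeffs = pywt.wavedec(x, wavelet, level=level)
            energy = np.array([np.sum(c ** 2) for c in coeffs])
            p = energy / energy.sum()
            return -np.sum(p * np.log(p))

        noise = np.random.default_rng(1).normal(size=1024)  # white noise test
        for dl in range(2, 7):
            print(dl, round(wavelet_energy_entropy(noise, level=dl), 3))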

  19. Multidisciplinary Product Decomposition and Analysis Based on Design Structure Matrix Modeling

    DEFF Research Database (Denmark)

    Habib, Tufail

    2014-01-01

    Design structure matrix (DSM) modeling in complex system design supports defining the physical and logical configuration of subsystems, components, and their relationships. This modeling includes product decomposition, identification of interfaces, and structure analysis to increase the architectural... interactions across subsystems and components. For this purpose, the Cambridge Advanced Modeller (CAM) software tool is used to develop the system matrix. The analysis of the product (printer) architecture includes clustering and partitioning as well as structure analysis of the system. The DSM analysis is helpful... understanding of the system. Since product architecture has broad implications in relation to product life cycle issues, in this paper, a mechatronic product is decomposed into subsystems and components, and then a DSM model is developed to examine the extent of modularity in the system and to manage multiple...

  20. Model Reduction Based on Proper Generalized Decomposition for the Stochastic Steady Incompressible Navier--Stokes Equations

    KAUST Repository

    Tamellini, L.; Le Maître, O.; Nouy, A.

    2014-01-01

    In this paper we consider a proper generalized decomposition method to solve the steady incompressible Navier-Stokes equations with random Reynolds number and forcing term. The aim of such a technique is to compute a low-cost reduced basis approximation of the full stochastic Galerkin solution of the problem at hand. A particular algorithm, inspired by the Arnoldi method for solving eigenproblems, is proposed for an efficient greedy construction of a deterministic reduced basis approximation. This algorithm decouples the computation of the deterministic and stochastic components of the solution, thus allowing reuse of preexisting deterministic Navier-Stokes solvers. It has the remarkable property of only requiring the solution of m uncoupled deterministic problems for the construction of an m-dimensional reduced basis rather than M coupled problems of the full stochastic Galerkin approximation space, with m ≪ M (up to one order of magnitude for the problem at hand in this work). © 2014 Society for Industrial and Applied Mathematics.

  1. Non invasive transcostal focusing based on the decomposition of the time reversal operator: in vitro validation

    Science.gov (United States)

    Cochard, Étienne; Prada, Claire; Aubry, Jean-François; Fink, Mathias

    2010-03-01

    Thermal ablation induced by high intensity focused ultrasound has produced promising clinical results to treat hepatocarcinoma and other liver tumors. However skin burns have been reported due to the high absorption of ultrasonic energy by the ribs. This study proposes a method to produce an acoustic field focusing on a chosen target while sparing the ribs, using the decomposition of the time-reversal operator (DORT method). The idea is to apply an excitation weight vector to the transducers array which is orthogonal to the subspace of emissions focusing on the ribs. The ratio of the energies absorbed at the focal point and on the ribs has been enhanced up to 100-fold as demonstrated by the measured specific absorption rates.

  2. Ensemble Empirical Mode Decomposition based methodology for ultrasonic testing of coarse grain austenitic stainless steels.

    Science.gov (United States)

    Sharma, Govind K; Kumar, Anish; Jayakumar, T; Purnachandra Rao, B; Mariyappa, N

    2015-03-01

    A signal processing methodology is proposed in this paper for effective reconstruction of ultrasonic signals in coarse grained, highly scattering austenitic stainless steel. The proposed methodology comprises Ensemble Empirical Mode Decomposition (EEMD) processing of the ultrasonic signals and the application of a signal minimisation algorithm to selected Intrinsic Mode Functions (IMFs) obtained by EEMD. The methodology is applied to ultrasonic signals obtained from austenitic stainless steel specimens of different grain sizes, with and without defects. The influence of probe frequency and signal data length on the EEMD decomposition is also investigated. For a particular sampling rate and probe frequency, the same range of IMFs can be used to reconstruct the ultrasonic signal, irrespective of the grain size in the range of 30-210 μm investigated in this study. The methodology is successfully employed for the detection of defects in 50 mm thick coarse grain austenitic stainless steel specimens. A signal to noise ratio improvement of better than 15 dB is observed for the ultrasonic signal obtained from a 25 mm deep flat bottom hole in a 200 μm grain size specimen. For ultrasonic signals obtained from defects at different depths, a minimum of 7 dB extra enhancement in SNR is achieved as compared to the sum-of-selected-IMFs approach. The application of the minimisation algorithm to the EEMD processed signal in the proposed methodology proves to be effective for adaptive signal reconstruction with improved signal to noise ratio. The methodology was further employed for successful imaging of defects in a B-scan. Copyright © 2014. Published by Elsevier B.V.
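
    A sketch of the decomposition-and-partial-reconstruction step, assuming the third-party PyEMD package (pip install EMD-signal); the IMF band kept here is an arbitrary choice, not the paper's selection rule or its minimisation algorithm:

        # EEMD splits the noisy A-scan into IMFs; summing a subset of IMFs
        # reconstructs the signal with (ideally) less scattering noise.
        import numpy as np
        from PyEMD import EEMD

        t = np.linspace(0.0, 1.0, 2000)
        echo = np.exp(-((t - 0.6) / 0.01) ** 2) * np.sin(2 * np.pi * 200 * t)
        signal = echo + 0.5 * np.random.default_rng(2).normal(size=t.size)

        imfs = EEMD(trials=50).eemd(signal, t)   # noise-assisted ensemble EMD
        reconstructed = imfs[1:4].sum(axis=0)    # keep a mid-frequency band
        print(imfs.shape, reconstructed.shape)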

  3. An acceleration technique for 2D MOC based on Krylov subspace and domain decomposition methods

    International Nuclear Information System (INIS)

    Zhang Hongbo; Wu Hongchun; Cao Liangzhi

    2011-01-01

    Highlights: → We convert MOC into a linear system solved by GMRES as an acceleration method. → We use a domain decomposition method to overcome the inefficiency on large matrices. → Parallel technology is applied and a matched ray tracing system is developed. → Results show good efficiency even in large-scale and strong scattering problems. → The emphasis is that the technique is geometry-flexible. - Abstract: The method of characteristics (MOC) has great geometrical flexibility but poor computational efficiency in neutron transport calculations. The generalized minimal residual (GMRES) method, a type of Krylov subspace method, is utilized to accelerate the 2D generalized geometry characteristics solver AutoMOC. In this technique, a form of linear algebraic equation system for angular flux moments and boundary fluxes is derived to replace the conventional characteristics sweep (i.e. inner iteration) scheme, and the GMRES method is then implemented as an efficient linear system solver. This acceleration method is proved to be reliable in theory and simple to implement. Furthermore, as it introduces no restriction in geometry treatment, it is suitable for accelerating an arbitrary geometry MOC solver. However, it is observed that the speedup decreases when the matrix becomes larger. The spatial domain decomposition method and multiprocessing parallel technology are therefore employed to overcome the problem. The calculation domain is partitioned into several sub-domains. For each of them, a smaller matrix is established and solved by GMRES, and the adjacent sub-domains are coupled by 'inner-edges', where the trajectory mismatches are considered adequately. Moreover, a matched ray tracing system is developed on the basis of AutoCAD, which allows a user to define the sub-domains on demand conveniently. Numerical results demonstrate that the acceleration techniques are efficient without loss of accuracy, even in the case of large-scale and strong scattering problems.
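
    The algebraic core of the acceleration is easy to mimic: write the transport sweep as a fixed point x = Bx + c and hand (I - B)x = c to GMRES. The sparse stand-in below only illustrates that recasting; it is nothing like a real MOC operator:

        import numpy as np
        import scipy.sparse as sp
        from scipy.sparse.linalg import gmres

        n = 500
        B = 0.5 * sp.random(n, n, density=0.01, random_state=3)  # "sweep" op
        A = sp.eye(n) - B                       # linear system (I - B) x = c
        c = np.random.default_rng(3).normal(size=n)

        x, info = gmres(A, c, restart=30)       # Krylov solve replaces sweeps
        print(info, np.linalg.norm(A @ x - c))  # info == 0 means converged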

  4. Surface EMG decomposition based on K-means clustering and convolution kernel compensation.

    Science.gov (United States)

    Ning, Yong; Zhu, Xiangjun; Zhu, Shanan; Zhang, Yingchun

    2015-03-01

    A new approach has been developed by combining the K-means clustering (KMC) method and a modified convolution kernel compensation (CKC) method for multichannel surface electromyogram (EMG) decomposition. The KMC method was first utilized to cluster vectors of observations at different time instants and then estimate the initial innervation pulse train (IPT). The CKC method, modified with a novel multistep iterative process, was conducted to update the estimated IPT. The performance of the proposed K-means clustering-modified CKC (KmCKC) approach was evaluated by reconstructing IPTs from both simulated and experimental surface EMG signals. The KmCKC approach successfully reconstructed all 10 IPTs from the simulated surface EMG signals with true positive rates (TPR) of over 90% at a low signal-to-noise ratio (SNR) of -10 dB. More than 10 motor units were also successfully extracted from the 64-channel experimental surface EMG signals of the first dorsal interosseous (FDI) muscles when a contraction force was held at 8 N by using the KmCKC approach. A "two-source" test was further conducted with 64-channel surface EMG signals. The high percentage of common MUs and common pulses (over 92% at all force levels) between the IPTs reconstructed from the two independent groups of surface EMG signals demonstrates the reliability and capability of the proposed KmCKC approach in multichannel surface EMG decomposition. Results from both simulated and experimental data are consistent and confirm that the proposed KmCKC approach can successfully reconstruct IPTs with high accuracy at different levels of contraction.
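
    The first (clustering) stage can be sketched with scikit-learn; the synthetic array below merely stands in for the 64-channel recording, and the CKC refinement stage is not reproduced:

        # Cluster per-instant observation vectors; instants falling in one
        # cluster provide the initial innervation pulse train (IPT) guess.
        import numpy as np
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(4)
        emg = rng.normal(size=(64, 5000))     # 64 channels x 5000 samples
        vectors = emg.T                       # one observation vector per instant

        km = KMeans(n_clusters=10, n_init=10, random_state=0).fit(vectors)
        ipt0 = (km.labels_ == 0).astype(int)  # candidate firing instants
        print(ipt0.sum(), "instants assigned to cluster 0")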

  5. Optimal truss and frame design from projected homogenization-based topology optimization

    DEFF Research Database (Denmark)

    Larsen, S. D.; Sigmund, O.; Groen, J. P.

    2018-01-01

    In this article, we propose a novel method to obtain a near-optimal frame structure, based on the solution of a homogenization-based topology optimization model. The presented approach exploits the equivalence between Michell’s problem of least-weight trusses and a compliance minimization problem...... using optimal rank-2 laminates in the low volume fraction limit. In a fully automated procedure, a discrete structure is extracted from the homogenization-based continuum model. This near-optimal structure is post-optimized as a frame, where the bending stiffness is continuously decreased, to allow...

  6. Research on Multiaircraft Cooperative Suppression Interference Array Based on an Improved Multiobjective Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Huan Zhang

    2017-01-01

    Full Text Available For the problem of multiaircraft cooperative suppression interference array (MACSIA) against the enemy air defense radar network in electronic warfare mission planning, firstly, the concept of a route planning security zone is proposed and a solution for obtaining the minimum width of the security zone based on mathematical morphology is put forward. Secondly, the minimum width of the security zone and the sum of the distances between each jamming aircraft and the center of the radar network are regarded as objective functions, and a multiobjective optimization model of MACSIA is built; an improved multiobjective particle swarm optimization algorithm is then used to solve the model. The decomposition mechanism is adopted and proportional distribution is used to maintain the diversity of the newly found nondominated solutions. Finally, the Pareto optimal solutions are analyzed by simulation, and the optimal MACSIA schemes for each jamming aircraft suppressing the enemy air defense radar network are obtained, verifying that the built multiobjective optimization model is correct. The results also show that the improved multiobjective particle swarm optimization algorithm is feasible and effective for solving the MACSIA problem.

  7. GA based CNC turning center exploitation process parameters optimization

    Directory of Open Access Journals (Sweden)

    Z. Car

    2009-01-01

    Full Text Available This paper presents machining parameter (turning process) optimization based on the use of artificial intelligence. To obtain greater efficiency and productivity of the machine tool, optimal cutting parameters have to be obtained. In order to find the optimal cutting parameters, the genetic algorithm (GA) has been used as an optimal solution finder. The optimization has to yield minimum machining time and minimum production cost, while considering technological and material constraints.

  8. A full-spectral Bayesian reconstruction approach based on the material decomposition model applied in dual-energy computed tomography

    International Nuclear Information System (INIS)

    Cai, C.; Rodet, T.; Mohammad-Djafari, A.; Legoupil, S.

    2013-01-01

    Purpose: Dual-energy computed tomography (DECT) makes it possible to obtain two fractions of basis materials without segmentation. One is the soft-tissue equivalent water fraction and the other is the hard-matter equivalent bone fraction. Practical DECT measurements are usually obtained with polychromatic x-ray beams. Existing reconstruction approaches based on linear forward models that do not account for the beam polychromaticity fail to estimate the correct decomposition fractions and result in beam-hardening artifacts (BHA). The existing BHA correction approaches either need to refer to calibration measurements or suffer from the noise amplification caused by the negative-log preprocessing and the ill-conditioned water and bone separation problem. To overcome these problems, statistical DECT reconstruction approaches based on nonlinear forward models that account for the beam polychromaticity show great potential for giving accurate fraction images. Methods: This work proposes a full-spectral Bayesian reconstruction approach which allows the reconstruction of high quality fraction images from ordinary polychromatic measurements. This approach is based on a Gaussian noise model with unknown variance assigned directly to the projections without taking the negative log. Referring to Bayesian inference, the decomposition fractions and observation variance are estimated by using the joint maximum a posteriori (MAP) estimation method. Subject to an adaptive prior model assigned to the variance, the joint estimation problem is then simplified into a single estimation problem. It transforms the joint MAP estimation problem into a minimization problem with a nonquadratic cost function. To solve it, the use of a monotone conjugate gradient algorithm with suboptimal descent steps is proposed. Results: The performance of the proposed approach is analyzed with both simulated and experimental data. The results show that the proposed Bayesian approach is robust to noise and materials. It is also

  9. CFD based draft tube hydraulic design optimization

    International Nuclear Information System (INIS)

    McNabb, J; Murry, N; Mullins, B F; Devals, C; Kyriacou, S A

    2014-01-01

    The draft tube design of a hydraulic turbine, particularly in low to medium head applications, plays an important role in determining the efficiency and power characteristics of the overall machine, since an important proportion of the available energy, being in kinetic form leaving the runner, needs to be recovered by the draft tube into static head. For large units, these efficiency and power characteristics can equate to large sums of money when considering the anticipated selling price of the energy produced over the machine's life-cycle. This same draft tube design is also a key factor in determining the overall civil costs of the powerhouse, primarily in excavation and concreting, which can amount to similar orders of magnitude as the price of the energy produced. Therefore, there is a need to find the optimum compromise between these two conflicting requirements. In this paper, an elaborate approach is described for dealing with this optimization problem. First, the draft tube's detailed geometry is defined as a function of a comprehensive set of design parameters (about 20 of which a subset is allowed to vary during the optimization process) and are then used in a non-uniform rational B-spline based geometric modeller to fully define the wetted surfaces geometry. Since the performance of the draft tube is largely governed by 3D viscous effects, such as boundary layer separation from the walls and swirling flow characteristics, which in turn governs the portion of the available kinetic energy which will be converted into pressure, a full 3D meshing and Navier-Stokes analysis is performed for each design. What makes this even more challenging is the fact that the inlet velocity distribution to the draft tube is governed by the runner at each of the various operating conditions that are of interest for the exploitation of the powerhouse. In order to determine these inlet conditions, a combined steady-state runner and an initial draft tube analysis

  11. Logic-based methods for optimization: combining optimization and constraint satisfaction

    CERN Document Server

    Hooker, John

    2011-01-01

    A pioneering look at the fundamental role of logic in optimization and constraint satisfaction. While recent efforts to combine optimization and constraint satisfaction have received considerable attention, little has been said about using logic in optimization as the key to unifying the two fields. Logic-Based Methods for Optimization develops for the first time a comprehensive conceptual framework for integrating optimization and constraint satisfaction, then goes a step further and shows how extending logical inference to optimization allows for more powerful as well as flexible

  12. Symmetric Tensor Decomposition

    DEFF Research Database (Denmark)

    Brachat, Jerome; Comon, Pierre; Mourrain, Bernard

    2010-01-01

    We present an algorithm for decomposing a symmetric tensor, of dimension n and order d, as a sum of rank-1 symmetric tensors, extending the algorithm of Sylvester devised in 1886 for binary forms. We recall the correspondence between the decomposition of a homogeneous polynomial in n variables...... of polynomial equations of small degree in non-generic cases. We propose a new algorithm for symmetric tensor decomposition, based on this characterization and on linear algebra computations with Hankel matrices. The impact of this contribution is two-fold. First it permits an efficient computation...... of the decomposition of any tensor of sub-generic rank, as opposed to widely used iterative algorithms with unproved global convergence (e.g. Alternate Least Squares or gradient descents). Second, it gives tools for understanding uniqueness conditions and for detecting the rank....

  13. Optimizing a Water Simulation based on Wavefront Parameter Optimization

    OpenAIRE

    Lundgren, Martin

    2017-01-01

    DICE, a Swedish game company, wanted a more realistic water simulation. Currently, most large scale water simulations used in games are based upon ocean simulation technology. These techniques falter when used in other scenarios, such as coastlines. In order to produce a more realistic simulation, a new one was created based upon the water simulation technique "Wavefront Parameter Interpolation". This technique involves a rather extensive preprocess that enables ocean simulations to have inte...

  14. Dose optimization for dual-energy contrast-enhanced digital mammography based on an energy-resolved photon-counting detector: A Monte Carlo simulation study

    Science.gov (United States)

    Lee, Youngjin; Lee, Seungwan; Kang, Sooncheol; Eom, Jisoo

    2017-03-01

    Dual-energy contrast-enhanced digital mammography (CEDM) has been used to decompose breast images and improve diagnostic accuracy for tumor detection. However, this technique causes an increase in radiation dose and an inaccuracy in material decomposition due to the limitations of conventional X-ray detectors. In this study, we simulated dual-energy CEDM with an energy-resolved photon-counting detector (ERPCD) for reducing the radiation dose and improving the quantitative accuracy of material decomposition images. The ERPCD-based dual-energy CEDM was compared to the conventional dual-energy CEDM in terms of radiation dose and quantitative accuracy. The correlation between radiation dose and image quality was also evaluated for optimizing the ERPCD-based dual-energy CEDM technique. The results showed that the material decomposition errors of the ERPCD-based dual-energy CEDM were 0.56-0.67 times those of the conventional dual-energy CEDM. The imaging performance of the proposed technique was optimized at a radiation dose of 1.09 mGy, which is half the mean glandular dose (MGD) for a single-view mammogram. It can be concluded that the ERPCD-based dual-energy CEDM with an optimal exposure level is able to improve the quality of material decomposition images as well as reduce the radiation dose.

  15. Multifractal features of EUA and CER futures markets by using multifractal detrended fluctuation analysis based on empirical mode decomposition

    International Nuclear Information System (INIS)

    Cao, Guangxi; Xu, Wei

    2016-01-01

    Based on daily price data of carbon emission rights in the futures markets of Certified Emission Reduction (CER) and European Union Allowances (EUA), we analyze the multiscale characteristics of the markets by using empirical mode decomposition (EMD) and multifractal detrended fluctuation analysis (MFDFA) based on EMD. The complexity of the daily returns of the CER and EUA futures markets changes with multiple time scales and exhibits multilayered features. The two markets also exhibit clear multifractal characteristics and long-range correlation. We employ shuffle and surrogate approaches to analyze the origins of the multifractality. The long-range correlations and fat-tail distributions contribute significantly to the multifractality. Furthermore, we analyze the influence of high returns on the multifractality by using a threshold method. The multifractality of the two futures markets is related to the presence of high values of returns in the price series.
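
    For reference, the q-order fluctuation function at the heart of MF-DFA can be sketched as follows (standard polynomial detrending; the paper's variant detrends with EMD instead):

        # MF-DFA: profile -> segment-wise detrending -> q-order averaging.
        import numpy as np

        def mfdfa(x, scales, qs, order=1):
            profile = np.cumsum(x - x.mean())
            Fq = np.empty((len(qs), len(scales)))
            for j, s in enumerate(scales):
                n_seg = profile.size // s
                segs = profile[:n_seg * s].reshape(n_seg, s)
                t = np.arange(s)
                f2 = np.array([np.mean((seg - np.polyval(
                    np.polyfit(t, seg, order), t)) ** 2) for seg in segs])
                for k, q in enumerate(qs):
                    Fq[k, j] = np.mean(f2 ** (q / 2.0)) ** (1.0 / q)
            return Fq

        x = np.random.default_rng(5).normal(size=4096)   # monofractal test
        scales = np.array([16, 32, 64, 128, 256])
        qs = np.array([-4.0, -2.0, 2.0, 4.0])            # q = 0 omitted
        logF = np.log(mfdfa(x, scales, qs))
        h = [np.polyfit(np.log(scales), logF[k], 1)[0] for k in range(len(qs))]
        print(np.round(h, 2))   # h(q) ~ 0.5 for all q => no multifractality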

  16. Comparative analysis of gradient-field-based orientation estimation methods and regularized singular-value decomposition for fringe pattern processing.

    Science.gov (United States)

    Sun, Qi; Fu, Shujun

    2017-09-20

    Fringe orientation is an important feature of fringe patterns and has a wide range of applications such as guiding fringe pattern filtering, phase unwrapping, and abstraction. Estimating fringe orientation is a basic task for subsequent processing of fringe patterns. However, various noise, singular and obscure points, and orientation data degeneration lead to inaccurate calculations of fringe orientation. Thus, to deepen the understanding of orientation estimation and to better guide orientation estimation in fringe pattern processing, some advanced gradient-field-based orientation estimation methods are compared and analyzed. At the same time, following the ideas of smoothing regularization and computing of bigger gradient fields, a regularized singular-value decomposition (RSVD) technique is proposed for fringe orientation estimation. To compare the performance of these gradient-field-based methods, quantitative results and visual effect maps of orientation estimation are given on simulated and real fringe patterns that demonstrate that the RSVD produces the best estimation results at a cost of relatively less time.

  17. Optimization Strategies for Hardware-Based Cofactorization

    Science.gov (United States)

    Loebenberger, Daniel; Putzka, Jens

    We use the specific structure of the inputs to the cofactorization step in the general number field sieve (GNFS) in order to optimize the runtime for the cofactorization step on a hardware cluster. An optimal distribution of bitlength-specific ECM modules is proposed and compared to existing ones. With our optimizations we obtain a speedup between 17% and 33% of the cofactorization step of the GNFS when compared to the runtime of an unoptimized cluster.

  18. Energy saving analysis and management modeling based on index decomposition analysis integrated energy saving potential method: Application to complex chemical processes

    International Nuclear Information System (INIS)

    Geng, Zhiqiang; Gao, Huachao; Wang, Yanqing; Han, Yongming; Zhu, Qunxiong

    2017-01-01

    Highlights: • An integrated framework that combines IDA with the energy saving potential method is proposed. • An energy saving analysis and management framework for complex chemical processes is obtained. • The proposed method is effective for energy optimization and carbon emission reduction in complex chemical processes. - Abstract: Energy saving and management of complex chemical processes play a crucial role in sustainable development. In order to analyze the effects that technology, management level, and production structure have on energy efficiency and energy saving potential, this paper proposes a novel integrated framework that combines index decomposition analysis (IDA) with the energy saving potential method. The IDA method can effectively obtain the energy activity, energy hierarchy and energy intensity effects in a data-driven way, reflecting the impact of energy usage. The energy saving potential method can verify the correctness of the improvement direction proposed by the IDA method. Meanwhile, energy efficiency improvement, energy consumption reduction and energy savings can be visually discovered with the proposed framework. A demonstration analysis of ethylene production has verified the practicality of the proposed method, and the corresponding improvements for ethylene production were obtained from it. The energy efficiency index and the energy saving potential of the worst months can be increased by 6.7% and 7.4%, respectively, and the carbon emissions can be reduced by 7.4-8.2%.
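
    The additive LMDI form typically used in such IDA studies is compact; the two-period, one-sector numbers below are invented purely to show the identity:

        # LMDI-I: dE = activity + structure + intensity effects, weighted by
        # the logarithmic mean of the two energy levels.
        import numpy as np

        def logmean(a, b):
            return a if a == b else (a - b) / (np.log(a) - np.log(b))

        Q0, S0, I0 = 100.0, 0.40, 2.0   # base year: activity, share, intensity
        Q1, S1, I1 = 120.0, 0.35, 1.8   # current year (hypothetical)
        E0, E1 = Q0 * S0 * I0, Q1 * S1 * I1

        w = logmean(E1, E0)
        effects = [w * np.log(r) for r in (Q1 / Q0, S1 / S0, I1 / I0)]
        print(E1 - E0, sum(effects))    # the three effects sum to dE exactly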

  19. Phase stability and decomposition processes in Ti-Al based intermetallics

    Energy Technology Data Exchange (ETDEWEB)

    Nakai, Kiyomichi [Department of Materials Science and Engineering, Faculty of Engineering, Ehime University, 3 Bunkyo-cho, Matsuyama 790 (Japan); Ono, Toshiaki [Department of Materials Science and Engineering, Faculty of Engineering, Ehime University, 3 Bunkyo-cho, Matsuyama 790 (Japan); Ohtsubo, Hiroyuki [Department of Materials Science and Engineering, Faculty of Engineering, Ehime University, 3 Bunkyo-cho, Matsuyama 790 (Japan); Ohmori, Yasuya [Department of Materials Science and Engineering, Faculty of Engineering, Ehime University, 3 Bunkyo-cho, Matsuyama 790 (Japan)

    1995-02-28

    The high-temperature phase equilibria and the phase decomposition of the α and β phases were studied by crystallographic analysis of the solidification microstructures of Ti-48at.%Al and Ti-48at.%Al-2at.%X (X=Mn, Cr, Mo) alloys. The effects on the phase stability of Zr and O atoms penetrating from the specimen surface were also examined for Ti-48at.%Al and Ti-50at.%Al alloys. The third elements Cr and Mo shift the β phase region to higher Al concentrations, and the β phase is ordered to the β₂ phase. The Zr and O atoms stabilize the β and α phases, respectively. In the Zr-stabilized β phase, α₂ laths form with accompanying surface relief, and stacking faults which relax the elastic strain owing to lattice deformation are introduced after the formation of α₂ order domains. Thus shear is thought to operate after the phase transition from β to α₂ by short-range diffusion. A similar analysis was conducted for the Ti-Al binary system, and the transformation was interpreted from the qualitatively constructed CCT diagram.

  20. Stability monitoring for BWR based on singular value decomposition method using artificial neural network

    International Nuclear Information System (INIS)

    Tsuji, Masashi; Shimazu, Yoichiro; Michishita, Hiroshi

    2005-01-01

    A new method for evaluating the decay ratios in a boiling water reactor (BWR) using the singular value decomposition (SVD) method has been proposed. In this method, a signal component closely related to BWR stability can be extracted from the independent components of the neutron noise signal decomposed by the SVD method. However, real-time stability monitoring by the SVD method requires an efficient procedure for screening such components. For efficient screening, an artificial neural network (ANN) with three layers was adopted. The trained ANN was applied to decomposed components of local power range monitor (LPRM) signals that were measured in stability experiments conducted in the Ringhals-1 BWR. In each LPRM signal, multiple candidates were screened from the decomposed components. However, decay ratios could be estimated by introducing appropriate criteria for selecting the most suitable component among the candidates. The estimated decay ratios are almost identical to those evaluated by visual screening in a previous study. The selected components commonly have the largest singular value, the largest decay ratio and the least squared fitting error among the candidates. By virtue of the excellent screening performance of the trained ANN, real-time stability monitoring by the SVD method can be applied in practice. (author)
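
    The decomposition step itself reduces to one SVD call; the toy record below plants a slowly growing 0.5 Hz oscillation in several noisy channels in place of real LPRM signals (the ANN screening stage is not reproduced):

        import numpy as np

        rng = np.random.default_rng(6)
        t = np.arange(4096) / 25.0                    # 25 Hz sampling, assumed
        osc = np.sin(2 * np.pi * 0.5 * t) * np.exp(0.01 * t)  # resonance-like
        x = np.outer(rng.uniform(0.5, 1.5, 8), osc)   # 8 LPRM-like channels
        x += 0.8 * rng.normal(size=x.shape)

        U, s, Vt = np.linalg.svd(x, full_matrices=False)
        component = s[0] * Vt[0]      # time course of the dominant component
        print(np.round(s[:3], 1))     # first singular value should dominate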

  1. Intuitive Density Functional Theory-Based Energy Decomposition Analysis for Protein-Ligand Interactions.

    Science.gov (United States)

    Phipps, M J S; Fox, T; Tautermann, C S; Skylaris, C-K

    2017-04-11

    First-principles quantum mechanical calculations with methods such as density functional theory (DFT) allow the accurate calculation of interaction energies between molecules. These interaction energies can be dissected into chemically relevant components such as electrostatics, polarization, and charge transfer using energy decomposition analysis (EDA) approaches. Typically EDA has been used to study interactions between small molecules; however, it has great potential to be applied to large biomolecular assemblies such as protein-protein and protein-ligand interactions. We present an application of EDA calculations to the study of ligands that bind to the thrombin protein, using the ONETEP program for linear-scaling DFT calculations. Our approach goes beyond simply providing the components of the interaction energy; we are also able to provide visual representations of the changes in density that happen as a result of polarization and charge transfer, thus pinpointing the functional groups between the ligand and protein that participate in each kind of interaction. We also demonstrate with this approach that we can focus on studying parts (fragments) of ligands. The method is relatively insensitive to the protocol that is used to prepare the structures, and the results obtained are therefore robust. This is an application to a real protein drug target of a whole new capability where accurate DFT calculations can produce both energetic and visual descriptors of interactions. These descriptors can be used to provide insights for tailoring interactions, as needed for example in drug design.

  2. Variational mode decomposition based approach for accurate classification of color fundus images with hemorrhages

    Science.gov (United States)

    Lahmiri, Salim; Shmuel, Amir

    2017-11-01

    Diabetic retinopathy is a disease that can cause a loss of vision. An early and accurate diagnosis helps to improve treatment of the disease and prognosis. One of the earliest characteristics of diabetic retinopathy is the appearance of retinal hemorrhages. The purpose of this study is to design a fully automated system for the detection of hemorrhages in a retinal image. In the first stage of our proposed system, a retinal image is processed with variational mode decomposition (VMD) to obtain the first variational mode, which captures the high frequency components of the original image. In the second stage, four texture descriptors are extracted from the first variational mode. Finally, a classifier trained with all computed texture descriptors is used to distinguish between images of healthy and unhealthy retinas with hemorrhages. Experimental results showed evidence of the effectiveness of the proposed system for detection of hemorrhages in the retina, since a perfect detection rate was achieved. Our proposed system for detecting diabetic retinopathy is simple and easy to implement. It requires only short processing time, and it yields higher accuracy in comparison with previously proposed methods for detecting diabetic retinopathy.
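
    Mode extraction of this kind can be sketched in one dimension with the third-party vmdpy package (pip install vmdpy); the parameters are assumptions, and the paper applies a two-dimensional VMD to whole fundus images rather than this 1D toy signal:

        import numpy as np
        from vmdpy import VMD

        t = np.linspace(0.0, 1.0, 1000)
        f = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)

        # alpha: bandwidth constraint, K: number of modes, tol: stop criterion
        u, u_hat, omega = VMD(f, alpha=2000, tau=0.0, K=3, DC=0, init=1,
                              tol=1e-7)
        hf_mode = u[np.argmax(omega[-1])]   # mode with the highest centre freq
        print(u.shape, np.round(omega[-1], 3))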

  3. Short-Term Wind Speed Forecasting Using Decomposition-Based Neural Networks Combining Abnormal Detection Method

    Directory of Open Access Journals (Sweden)

    Xuejun Chen

    2014-01-01

    Full Text Available As one of the most promising renewable resources for electricity generation, wind energy is acknowledged for its significant environmental contributions and economic competitiveness. Because wind speed fluctuates strongly, it is quite difficult to describe the characteristics of wind or to estimate the power output that will be injected into the grid. In particular, short-term wind speed forecasting, an essential support for regulatory actions and short-term load dispatching planning during the operation of wind farms, is currently regarded as one of the most difficult problems to be solved. This paper contributes to short-term wind speed forecasting by developing two three-stage hybrid approaches; both are combinations of the five-three-Hanning (53H) weighted average smoothing method, the ensemble empirical mode decomposition (EEMD) algorithm, and nonlinear autoregressive (NAR) neural networks. The chosen datasets are ten-minute wind speed observations, including twelve samples, and our simulation indicates that the proposed methods perform much better than the traditional ones when addressing short-term wind speed forecasting problems.

  4. Transmission tariffs based on optimal power flow

    International Nuclear Information System (INIS)

    Wangensteen, Ivar; Gjelsvik, Anders

    1998-01-01

    This report discusses transmission pricing as a means of obtaining optimal scheduling and dispatch in a power system. This optimality includes consumption as well as generation. The report concentrates on how prices can be used as signals towards operational decisions of market participants (generators, consumers). The main focus is on deregulated systems with open access to the network. The optimal power flow theory, with demand side modelling included, is briefly reviewed. It turns out that the marginal costs obtained from the optimal power flow give the optimal transmission tariff for the particular load flow in question. There is also a correspondence between losses and optimal prices. Emphasis is on simple examples that demonstrate the connection between optimal power flow results and tariffs. Various cases, such as open access and single owner, are discussed. A key result is that the location of the ''marketplace'' in the open access case does not influence the net economical result for any of the parties involved (generators, network owner, consumer). The optimal power flow is instantaneous, and in its standard form cannot deal with energy constrained systems that are coupled in time, such as hydropower systems with reservoirs. A simplified example of how the theory can be extended to such a system is discussed. An example of the influence of security constraints on prices is also given. 4 refs., 24 figs., 7 tabs
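
    The statement that OPF marginal costs give the tariff can be made concrete with a two-bus toy problem: the duals of the nodal balance constraints are the nodal prices. A sketch with SciPy's LP solver, under a hypothetical 60 MW line limit:

        import numpy as np
        from scipy.optimize import linprog

        # variables: [g1, g2, flow 1->2]; 100 MW load at bus 2
        c = [10.0, 30.0, 0.0]                    # $/MWh generation costs
        A_eq = [[1.0, 0.0, -1.0],                # bus 1 balance: g1 - flow = 0
                [0.0, 1.0,  1.0]]                # bus 2 balance: g2 + flow = 100
        b_eq = [0.0, 100.0]
        bounds = [(0, 80), (0, 100), (-60, 60)]  # capacities, line limit

        res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=bounds, method="highs")
        print(res.x)                  # dispatch: 60 MW from g1, 40 MW from g2
        print(res.eqlin.marginals)    # balance duals = nodal prices
                                      # (10 and 30 $/MWh up to sign convention)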

  5. Phantom-less bone mineral density (BMD) measurement using dual energy computed tomography-based 3-material decomposition

    Science.gov (United States)

    Hofmann, Philipp; Sedlmair, Martin; Krauss, Bernhard; Wichmann, Julian L.; Bauer, Ralf W.; Flohr, Thomas G.; Mahnken, Andreas H.

    2016-03-01

    Osteoporosis is a degenerative bone disease usually diagnosed at the manifestation of fragility fractures, which severely endanger the health of especially the elderly. To ensure timely therapeutic countermeasures, noninvasive and widely applicable diagnostic methods are required. Currently the primary quantifiable indicator of bone stability, bone mineral density (BMD), is obtained either by DEXA (dual-energy X-ray absorptiometry) or qCT (quantitative CT). Both have respective advantages and disadvantages, with DEXA being considered the gold standard. For timely diagnosis of osteoporosis, another CT-based method is presented. A dual energy CT reconstruction workflow is being developed to evaluate BMD from lumbar spine (L1-L4) DE-CT images. The workflow is ROI-based and automated for practical use. A dual energy 3-material decomposition algorithm is used to differentiate bone from soft tissue and fat attenuation. The algorithm uses material attenuation coefficients at different beam energy levels. The bone fraction of the three different tissues is used to calculate the amount of hydroxylapatite in the trabecular bone of the corpus vertebrae inside a predefined ROI. Calibrations have been performed to obtain volumetric bone mineral density (vBMD) without having to add a calibration phantom or to use special scan protocols or hardware. Accuracy and precision depend on image noise and are comparable to those of qCT. Clinical indications are in accordance with the DEXA gold standard. The decomposition-based workflow reveals bone degradation effects normally not visible on standard CT images, which would induce errors in normal qCT results.
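
    Per voxel, the 3-material step amounts to a 3x3 linear solve: two measured attenuations plus the constraint that the fractions sum to one. The coefficients below are illustrative placeholders, not calibrated values:

        import numpy as np

        # linear attenuation of (bone mineral, soft tissue, fat), 1/cm
        mu_low  = np.array([3.00, 0.55, 0.45])   # at the low-kV spectrum
        mu_high = np.array([1.50, 0.45, 0.40])   # at the high-kV spectrum

        A = np.vstack([mu_low, mu_high, np.ones(3)])
        y = np.array([0.95, 0.62, 1.0])          # measured mu values, sum = 1

        f_bone, f_soft, f_fat = np.linalg.solve(A, y)
        print(round(f_bone, 3), round(f_soft, 3), round(f_fat, 3))
        # ~0.171 / 0.629 / 0.200 for these placeholder inputs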

  6. Numerical simulation of ammonium dinitramide (ADN)-based non-toxic aerospace propellant decomposition and combustion in a monopropellant thruster

    International Nuclear Information System (INIS)

    Zhang, Tao; Li, Guoxiu; Yu, Yusong; Sun, Zuoyu; Wang, Meng; Chen, Jun

    2014-01-01

    Highlights: • The decomposition and combustion processes of an ADN-based thruster are studied. • The distribution of droplets is obtained for the process of spray impingement on a wire mesh. • Two-temperature models are adopted to describe the heat transfer in porous media. • The influences of different mass fluxes and porosities are studied. - Abstract: Ammonium dinitramide (ADN) monopropellant is currently the most promising among all 'green propellants'. In this paper, the decomposition and combustion process of liquid ADN-based ternary mixtures for propulsion is numerically studied. The R-R distribution model is used to set the initial boundary conditions of the droplet distribution resulting from spray impingement on a wire mesh, based on PDA experiments. To simulate the heat-transfer characteristics between the gas and solid phases, a two-temperature porous medium model of the catalytic bed is used. An 11-species, 7-reaction chemistry model is used to study the catalytic and combustion processes. The final distributions of temperature, pressure, and the various species concentrations in the ADN thruster are obtained. The simulation results agree well with previous experimental data, and the demonstration of the ADN thruster confirms that good steady-state operation is achieved. The effects of the spray inlet mass flux and porosity on monopropellant thruster performance are analyzed. The numerical results further show that a larger inlet mass flux results in better thruster performance and that a catalytic bed porosity value of 0.5 exhibits the best thruster performance. These findings can serve as a key reference for designing and testing non-toxic aerospace monopropellant thrusters.

  7. A primal-dual decomposition based interior point approach to two-stage stochastic linear programming

    NARCIS (Netherlands)

    A.B. Berkelaar (Arjan); C.L. Dert (Cees); K.P.B. Oldenkamp; S. Zhang (Shuzhong)

    1999-01-01

    Decision making under uncertainty is a challenge faced by many decision makers. Stochastic programming is a major tool developed to deal with optimization under uncertainty, and it has found applications in, e.g., finance, such as asset-liability and bond-portfolio management.

  8. Gyroscope-driven mouse pointer with an EMOTIV® EEG headset and data analysis based on Empirical Mode Decomposition.

    Science.gov (United States)

    Rosas-Cholula, Gerardo; Ramirez-Cortes, Juan Manuel; Alarcon-Aquino, Vicente; Gomez-Gil, Pilar; Rangel-Magdaleno, Jose de Jesus; Reyes-Garcia, Carlos

    2013-08-14

    This paper presents a project on the development of a cursor control emulating the typical operations of a computer mouse, using gyroscope and eye-blinking electromyographic signals which are obtained through a commercial 16-electrode wireless headset, recently released by Emotiv. The cursor position is controlled using information from a gyroscope included in the headset. The clicks are generated through the user's blinking with an adequate detection procedure based on the spectral-like technique called Empirical Mode Decomposition (EMD). EMD is proposed as a simple and quick computational tool, yet effective, aimed at artifact reduction from head movements as well as a method to detect blinking signals for mouse control. A Kalman filter is used as state estimator for mouse position control and jitter removal. The average detection rate obtained was 94.9%. The experimental setup and some obtained results are presented.
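
    The jitter-removal role of the Kalman filter can be sketched with a scalar random-walk state model (the headset's actual model and noise levels are assumptions):

        import numpy as np

        def kalman_1d(z, q=1e-3, r=1.0):
            x, p = z[0], 1.0                # state estimate and its variance
            out = np.empty_like(z)
            for i, zi in enumerate(z):
                p += q                      # predict: random-walk position
                k = p / (p + r)             # Kalman gain
                x += k * (zi - x)           # update with the new measurement
                p *= (1.0 - k)
                out[i] = x
            return out

        rng = np.random.default_rng(7)
        raw = np.cumsum(rng.normal(0, 0.2, 300)) + rng.normal(0, 1.0, 300)
        smooth = kalman_1d(raw)             # cursor track with jitter removed
        print(raw.std().round(2), (raw - smooth).std().round(2))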

  10. Quantifying immediate price impact of trades based on the k-shell decomposition of stock trading networks

    Science.gov (United States)

    Xie, Wen-Jie; Li, Ming-Xia; Xu, Hai-Chuan; Chen, Wei; Zhou, Wei-Xing; Stanley, H. Eugene

    2016-10-01

    Traders in a stock market exchange stock shares and form a stock trading network. Trades at different positions of the stock trading network may contain different information. We construct stock trading networks based on limit order book data and classify traders into k classes using the k-shell decomposition method. We investigate the influence of trading behaviors on the price impact by comparing a closed national market (A-shares) with an international market (B-shares), individuals and institutions, partially filled and filled trades, buyer-initiated and seller-initiated trades, and trades at different positions of a trading network. Institutional traders use professional trading strategies to reduce their price impact, and individuals at the same positions in the trading network have a higher price impact than institutions. We also find that trades in the core have higher price impacts than those in the peripheral shells.
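
    The k-shell classification step described above is easy to reproduce with standard graph tooling. A minimal sketch using networkx on an invented toy trading network; the edge list is made up for illustration, whereas the paper builds its networks from limit order book data:

```python
import networkx as nx

# Toy trading network: nodes are trader IDs, edges mean two traders
# were counterparties in at least one trade (illustrative data).
edges = [(1, 2), (1, 3), (2, 3), (3, 4), (4, 5), (5, 6),
         (1, 4), (2, 4), (6, 7), (7, 8)]
G = nx.Graph(edges)

# k-shell decomposition: core_number gives the largest k such that the
# node belongs to the k-core; nodes with equal value form one shell.
core = nx.core_number(G)
shells = {}
for node, k in core.items():
    shells.setdefault(k, []).append(node)

for k in sorted(shells, reverse=True):
    print(f"shell k={k}: traders {sorted(shells[k])}")
```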

  11. A novel approach for baseline correction in 1H-MRS signals based on ensemble empirical mode decomposition.

    Science.gov (United States)

    Parto Dezfouli, Mohammad Ali; Dezfouli, Mohsen Parto; Rad, Hamidreza Saligheh

    2014-01-01

    Proton magnetic resonance spectroscopy ((1)H-MRS) is a non-invasive diagnostic tool for measuring biochemical changes in the human body. Acquired (1)H-MRS signals may be corrupted by a wideband baseline signal generated by macromolecules. Recently, several methods have been developed for the correction of such baseline signals; however, most of them are unable to estimate the baseline in complex, overlapped signals. In this study, a novel automatic baseline correction method is proposed for (1)H-MRS spectra based on ensemble empirical mode decomposition (EEMD). The method was applied to both simulated data and in-vivo (1)H-MRS signals of the human brain. The results demonstrate the efficiency of the proposed method in removing the baseline from (1)H-MRS signals.
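
    A minimal sketch of EEMD-based baseline removal on a synthetic spectrum, again assuming the third-party PyEMD package. Treating the slowest IMFs plus the residue as the baseline estimate is a common heuristic; the paper's actual selection rule may differ:

```python
import numpy as np
from PyEMD import EEMD  # assumes the PyEMD package (pip install EMD-signal)

# Synthetic "spectrum": sharp peaks riding on a broad baseline.
x = np.linspace(0, 10, 2048)
peaks = np.exp(-((x - 3) ** 2) / 0.005) + 0.6 * np.exp(-((x - 6) ** 2) / 0.008)
baseline = 0.3 * np.sin(0.4 * x) + 0.05 * x
signal = peaks + baseline + 0.01 * np.random.randn(x.size)

# Ensemble EMD: the wideband baseline ends up in the slowest IMFs and
# the residue, so subtracting them approximates baseline removal.
eemd = EEMD(trials=50)
imfs = eemd(signal)
baseline_est = imfs[-2:].sum(axis=0)   # slowest IMF + residue (heuristic)
corrected = signal - baseline_est
```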

  12. Resident Load Influence Analysis Method for Price Based on Non-intrusive Load Monitoring and Decomposition Data

    Science.gov (United States)

    Jiang, Wenqian; Zeng, Bo; Yang, Zhou; Li, Gang

    2018-01-01

    In non-intrusive load monitoring, load decomposition can reflect the running state of each appliance, which helps users reduce unnecessary energy costs. In the context of time-of-use (TOU) pricing as a demand-side management measure, this paper proposes a method for analyzing the influence of TOU pricing on residential loads based on non-intrusive load monitoring data. By classifying residential loads from their current signals, the appliance types and the time series of self-elasticity and cross-elasticity can be obtained. Tests on actual household load data show that, under TOU pricing, the operation of some appliances is shifted in time: electricity use during peak-price periods decreases while use during lower-price periods increases, with a certain regularity.

  13. Real-time tumor ablation simulation based on the dynamic mode decomposition method

    KAUST Repository

    Bourantas, George C.

    2014-05-01

    Purpose: The dynamic mode decomposition (DMD) method is used to provide reliable real-time forecasting of tumor ablation treatment simulations, which is much needed in medical practice. To achieve this, an extended Pennes bioheat model must be employed, taking into account both the water evaporation phenomenon and the tissue damage during tumor ablation. Methods: A meshless point collocation solver is used for the numerical solution of the governing equations. The results obtained are used by the DMD method to forecast the numerical solution faster than the meshless solver. The procedure is first validated against analytical and numerical predictions for simple problems. The DMD method is then applied to three-dimensional simulations that involve modeling of tumor ablation and account for metabolic heat generation, blood perfusion, and heat ablation using realistic values for the various parameters. Results: The present method offers a very fast numerical solution to bioheat transfer, which is of clinical significance in medical practice. It also sidesteps the mathematical treatment of boundaries between tumor and healthy tissue, which is usually a tedious procedure with some inevitable degree of approximation. The DMD method provides excellent predictions of the temperature profile in tumors and in the healthy parts of the tissue, for linear and nonlinear thermal properties of the tissue. Conclusions: The low computational cost renders the use of DMD suitable for in situ real-time tumor ablation simulations without sacrificing accuracy. In this way, tumor ablation treatment planning is feasible using just a personal computer thanks to the simplicity of the numerical procedure used. The geometrical data can be provided directly by the medical image modalities used in everyday practice. © 2014 American Association of Physicists in Medicine.
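
    The core of DMD forecasting (fit a low-rank linear operator to a sequence of solution snapshots, then propagate its eigendecomposition forward in time) fits in a few lines of numpy. A minimal sketch on an invented decaying travelling pattern standing in for a temperature field; the rank r = 6 and all data are illustrative:

```python
import numpy as np

def dmd(X, Y, r):
    # Exact DMD: find a low-rank A with Y ~ A X (columns are snapshots).
    U, s, Vh = np.linalg.svd(X, full_matrices=False)
    U, s, Vh = U[:, :r], s[:r], Vh[:r, :]
    A_tilde = U.conj().T @ Y @ Vh.conj().T / s      # r x r projected operator
    eigvals, W = np.linalg.eig(A_tilde)
    modes = (Y @ Vh.conj().T / s) @ W               # exact DMD modes
    return eigvals, modes

# Toy "temperature field": 200 spatial points, 60 time steps of a
# decaying travelling pattern (stand-in for a bioheat simulation).
x = np.linspace(0, 1, 200)[:, None]
t = np.arange(60)[None, :] * 0.1
data = np.exp(-0.3 * t) * np.sin(8 * x - 2 * t) + 2.0

eigvals, modes = dmd(data[:, :-1], data[:, 1:], r=6)
amps = np.linalg.lstsq(modes, data[:, 0], rcond=None)[0]

# Forecast 20 steps past the training window from the DMD spectrum.
k = 79
forecast = (modes @ (amps * eigvals**k)).real
```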

  14. Benthic algae stimulate leaf litter decomposition in detritus-based headwater streams: a case of aquatic priming effect?

    Science.gov (United States)

    Danger, Michael; Cornut, Julien; Chauvet, Eric; Chavez, Paola; Elger, Arnaud; Lecerf, Antoine

    2013-07-01

    In detritus-based ecosystems, autochthonous primary production contributes very little to the detritus pool. Yet primary producers may still influence the functioning of these ecosystems through complex interactions with decomposers and detritivores. Recent studies have suggested that, in aquatic systems, small amounts of labile carbon (C) (e.g., producer exudates) could increase the mineralization of more recalcitrant organic-matter pools (e.g., leaf litter). This process, called the priming effect, should be exacerbated under low-nutrient conditions and may alter the nature of interactions among microbial groups, from competition under low-nutrient conditions to indirect mutualism under high-nutrient conditions. Theoretical models further predict that primary producers may be competitively excluded when allochthonous C sources enter an ecosystem. In this study, the effects of a benthic diatom on aquatic hyphomycetes, bacteria, and leaf litter decomposition were investigated under two nutrient levels in a factorial microcosm experiment simulating detritus-based, headwater stream ecosystems. Contrary to theoretical expectations, diatoms and decomposers were able to coexist under both nutrient conditions. Under low-nutrient conditions, diatoms increased the leaf litter decomposition rate by 20% compared with treatments where they were absent. No effect was observed under high-nutrient conditions. The increase in leaf litter mineralization rate induced a positive feedback on diatom densities. We attribute these results to the priming effect of labile C exudates from primary producers. The presence of diatoms in combination with fungal decomposers also promoted decomposer diversity and, under low-nutrient conditions, led to a significant decrease in the leaf litter C:P ratio that could improve secondary production. Results from our microcosm experiment suggest new mechanisms by which primary producers may influence organic matter dynamics, even in ecosystems where autochthonous primary production is negligible.

  15. Structural investigation of oxovanadium(IV) Schiff base complexes: X-ray crystallography, electrochemistry and kinetic of thermal decomposition.

    Science.gov (United States)

    Asadi, Mozaffar; Asadi, Zahra; Savaripoor, Nooshin; Dusek, Michal; Eigner, Vaclav; Shorkaei, Mohammad Ranjkesh; Sedaghat, Moslem

    2015-02-05

    A series of new VO(IV) complexes of tetradentate N2O2 Schiff base ligands (L(1)-L(4)) were synthesized and characterized by FT-IR, UV-vis and elemental analysis. The structure of the complex VOL(1)⋅DMF was also investigated by X-ray crystallography, which revealed a vanadyl center with distorted octahedral coordination where the 2-aza and 2-oxo coordinating sites of the ligand were perpendicular to the "-yl" oxygen. The electrochemical properties of the vanadyl complexes were investigated by cyclic voltammetry. A good correlation was observed between the oxidation potentials and the electron-withdrawing character of the substituents on the Schiff base ligands, showing the following trend: 5-MeO > 5-H > 5-Br > 5-Cl. Furthermore, the kinetic parameters of thermal decomposition were calculated using the Coats-Redfern equation. According to the Coats-Redfern plots, the kinetics of thermal decomposition of the studied complexes is first-order in all stages, the free energy of activation for each stage is larger than that of the previous one, and the complexes have good thermal stability. The preparation of VOL(1)⋅DMF also yielded another compound, a vanadium oxide [VO]X, with a different crystal habit (platelets instead of prisms) and without the L(1) ligand, consisting of a V10O28 cage with diaminium and dimethylammonium moieties as counter ions. Because its crystal structure was also new, we report it along with the targeted complex. Copyright © 2014 Elsevier B.V. All rights reserved.

  16. A Hybrid Forecasting Model Based on Empirical Mode Decomposition and the Cuckoo Search Algorithm: A Case Study for Power Load

    Directory of Open Access Journals (Sweden)

    Jiani Heng

    2016-01-01

    Full Text Available Power load forecasting always plays a considerable role in the management of a power system, as accurate forecasting provides a guarantee for the daily operation of the power grid. It has been widely demonstrated that hybrid forecasts can improve forecast performance compared with individual forecasts. In this paper, a hybrid forecasting approach, comprising EMD (Empirical Mode Decomposition), CSA (Cuckoo Search Algorithm), and WNN (Wavelet Neural Network), is proposed. This approach yields a more valid forecasting structure and more stable results than traditional ANN (Artificial Neural Network) models such as BPNN (Back Propagation Neural Network), GABPNN (Back Propagation Neural Network Optimized by Genetic Algorithm), and WNN. To evaluate the forecasting performance of the proposed model, half-hourly power load data from New South Wales, Australia are used as a case study. The experimental results demonstrate that the proposed hybrid model is not only simple but also able to satisfactorily approximate the actual power load, and it can be an effective tool in planning and dispatch for smart grids.

  17. Global stability-based design optimization of truss structures using ...

    Indian Academy of Sciences (India)

    Furthermore, a pure Pareto-ranking-based multi-objective optimization model is employed for the design optimization of the truss structure with multiple objectives. The computational performance of the optimization model is increased by implementing an island model into its evolutionary search mechanism. The proposed ...

  18. Process optimization of friction stir welding based on thermal models

    DEFF Research Database (Denmark)

    Larsen, Anders Astrup

    2010-01-01

    This thesis investigates how to apply optimization methods to numerical models of a friction stir welding process. The work is intended as a proof-of-concept using different methods that are applicable to models of high complexity, possibly with high computational cost, and without the possibility...... information of the high-fidelity model. The optimization schemes are applied to stationary thermal models of differing complexity of the friction stir welding process. The optimization problems considered are based on optimizing the temperature field in the workpiece by finding optimal translational speed....... Also an optimization problem based on a microstructure model is solved, allowing the hardness distribution in the plate to be optimized. The use of purely thermal models represents a simplification of the real process; nonetheless, it shows the applicability of the optimization methods considered...

  19. PARTICLE SWARM OPTIMIZATION BASED OF THE MAXIMUM ...

    African Journals Online (AJOL)

    2010-06-30

    Jun 30, 2010 ... Keywords: Particle Swarm Optimization (PSO), photovoltaic system, MPOP, ... systems from one hand and because of the instantaneous change of ..... Because of the P-V characteristics this heuristic method is used to seek ...

  20. Product portfolio optimization based on substitution

    DEFF Research Database (Denmark)

    Myrodia, Anna; Moseley, A.; Hvam, Lars

    2017-01-01

    The development of production capabilities has led to a proliferation of the product variety offered to customers. Yet this does not directly imply an increase in manufacturers' profitability, nor in customers' satisfaction. Consequently, recent research focuses on portfolio optimization through...... substitution and standardization techniques. However, when re-defining the portfolio, strategic market decisions are characterized by uncertainty due to several parameters. In this study, using a GAMS optimization model, we present a method for supporting strategic decisions on substitution by quantifying the impact...

  1. A Hybrid Optimization Framework with POD-based Order Reduction and Design-Space Evolution Scheme

    Science.gov (United States)

    Ghoman, Satyajit S.

    The main objective of this research is to develop an innovative multi-fidelity multi-disciplinary design, analysis and optimization suite that integrates certain solution generation codes and newly developed innovative tools to improve the overall optimization process. The research performed herein is divided into two parts: (1) the development of an MDAO framework by integration of variable fidelity physics-based computational codes, and (2) enhancements to such a framework by incorporating innovative features extending its robustness. The first part of this dissertation describes the development of a conceptual Multi-Fidelity Multi-Strategy and Multi-Disciplinary Design Optimization Environment (M3DOE), in the context of aircraft wing optimization. M3DOE provides the user a capability to optimize configurations with a choice of (i) the level of fidelity desired, (ii) the use of a single-step or multi-step optimization strategy, and (iii) a combination of a series of structural and aerodynamic analyses. The modularity of M3DOE allows it to be a part of other inclusive optimization frameworks. M3DOE is demonstrated within the context of shape and sizing optimization of the wing of a Generic Business Jet aircraft. Two different optimization objectives, viz. dry weight minimization and cruise range maximization, are studied by conducting one low-fidelity and two high-fidelity optimization runs to demonstrate the application scope of M3DOE. The second part of this dissertation describes the development of an innovative hybrid optimization framework that extends the robustness of M3DOE by employing a proper orthogonal decomposition-based design-space order reduction scheme combined with an evolutionary algorithm technique. The POD method of extracting dominant modes from an ensemble of candidate configurations is used for the design-space order reduction. The snapshot of the candidate population is updated iteratively using the evolutionary algorithm technique of

  2. Optimal design of hydraulic excavator working device based on multiple surrogate models

    Directory of Open Access Journals (Sweden)

    Qingying Qiu

    2016-05-01

    Full Text Available The optimal design of hydraulic excavator working device is often characterized by computationally expensive analysis methods such as finite element analysis. Significant difficulties also exist when using a sensitivity-based decomposition approach to such practical engineering problems because explicit mathematical formulas between the objective function and design variables are impossible to formulate. An effective alternative is known as the surrogate model. The purpose of this article is to provide a comparative study on multiple surrogate models, including the response surface methodology, Kriging, radial basis function, and support vector machine, and select the one that best fits the optimization of the working device. In this article, a new modeling strategy based on the combination of the dimension variables between hinge joints and the forces loaded on hinge joints of the working device is proposed. In addition, the extent to which the accuracy of the surrogate models depends on different design variables is presented. The bionic intelligent optimization algorithm is then used to obtain the optimal results, which demonstrate that the maximum stresses calculated by the predicted method and finite element analysis are quite similar, but the efficiency of the former is much higher than that of the latter.
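
    For readers who want to see the surrogate idea in code: a minimal comparison of a Kriging surrogate and an RBF surrogate on an invented stand-in for an expensive finite element response, using scikit-learn and scipy. The toy function, sample sizes, and kernel settings are all assumptions for illustration:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from scipy.interpolate import RBFInterpolator

# Stand-in for an expensive FEA response: max stress vs two hinge dimensions.
rng = np.random.default_rng(0)
X = rng.uniform(0.1, 1.0, size=(40, 2))            # sampled designs
y = np.sin(6 * X[:, 0]) + X[:, 1] ** 2             # "FEA" output (toy)

kriging = GaussianProcessRegressor(kernel=RBF(length_scale=0.2)).fit(X, y)
rbf = RBFInterpolator(X, y)

X_test = rng.uniform(0.1, 1.0, size=(200, 2))
y_true = np.sin(6 * X_test[:, 0]) + X_test[:, 1] ** 2
for name, pred in [("Kriging", kriging.predict(X_test)),
                   ("RBF", rbf(X_test))]:
    rmse = np.sqrt(np.mean((pred - y_true) ** 2))
    print(f"{name:8s} RMSE on unseen designs: {rmse:.4f}")
```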

  3. An Airway Network Flow Assignment Approach Based on an Efficient Multiobjective Optimization Framework

    Directory of Open Access Journals (Sweden)

    Xiangmin Guan

    2015-01-01

    Full Text Available Considering reducing airspace congestion and flight delay simultaneously, this paper formulates the airway network flow assignment (ANFA) problem as a multiobjective optimization model and presents a new multiobjective optimization framework to solve it. Firstly, an effective multi-island parallel evolution algorithm with multiple evolution populations is employed to improve the optimization capability. Secondly, the nondominated sorting genetic algorithm II is applied for each population. In addition, a cooperative coevolution algorithm is adapted to divide the ANFA problem into several low-dimensional biobjective optimization problems which are easier to deal with. Finally, in order to maintain the diversity of solutions and to avoid prematurity, a dynamic adjustment operator based on solution congestion degree is specifically designed for the ANFA problem. Simulation results using real traffic data from the China air route network and daily flight plans demonstrate that the proposed approach can improve solution quality effectively, showing superiority over existing approaches such as the multiobjective genetic algorithm, the well-known multiobjective evolutionary algorithm based on decomposition, and a cooperative coevolution multiobjective algorithm, as well as other parallel evolution algorithms with different migration topologies.

  4. Effects of endogenous factors on regional land-use carbon emissions based on the Grossman decomposition model: a case study of Zhejiang Province, China.

    Science.gov (United States)

    Wu, Cifang; Li, Guan; Yue, Wenze; Lu, Rucheng; Lu, Zhangwei; You, Heyuan

    2015-02-01

    The impact of land-use change on greenhouse gas emissions has become a core issue in current studies on global change and the carbon cycle. However, a comprehensive evaluation of the effects of land-use changes on carbon emissions is still needed. This paper applied the Grossman decomposition model to estimate the scale, structural, and management effects of land-use carbon emissions based on final energy consumption, by establishing the relationship between land-use types and carbon emissions from energy consumption. It was shown that land-use carbon emissions increased from 169.5624 million tons in 2000 to 637.0984 million tons in 2010, with an average annual growth rate of 14.15%. Meanwhile, land-use carbon intensity increased from 17.59 t/ha in 2000 to 64.42 t/ha in 2010, with an average annual growth rate of 13.86%. The results indicated that rapid industrialization and urbanization in Zhejiang Province promptly increased urban land and industrial land, which consequently drove the extensive growth of land-use emissions. The structural and management effects did not mitigate land-use carbon emissions. On the contrary, both factors evidently contributed to the growth of carbon emissions because of the rigid demand for energy-intensive land-use types and the absence of land management. The results call for policies that optimize land-use structures and strengthen land-use management.
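
    The scale/structure/management split can be illustrated with a generic additive decomposition of total emissions C = A * sum_i(s_i * e_i), i.e. total land area times land-use shares times per-hectare intensities. This is a simple Laspeyres-style sketch with invented numbers, not the paper's Grossman formulation or data, and it ignores the interaction residual that a full decomposition must allocate:

```python
import numpy as np

# Land areas (ha) and emission intensities (t/ha) by land-use type for
# two years (illustrative numbers, not the paper's data).
area0 = np.array([100., 50., 30.]);  inten0 = np.array([5., 20., 2.])
area1 = np.array([110., 80., 25.]);  inten1 = np.array([6., 30., 2.])

A0, A1 = area0.sum(), area1.sum()          # total land (scale)
s0, s1 = area0 / A0, area1 / A1            # land-use structure (shares)
C0, C1 = (area0 * inten0).sum(), (area1 * inten1).sum()

# Additive three-factor decomposition, changing one factor at a time
# from the base year (interaction terms between factors are ignored).
scale_effect     = (A1 - A0) * (s0 * inten0).sum()
structure_effect = A0 * ((s1 - s0) * inten0).sum()
intensity_effect = A0 * (s0 * (inten1 - inten0)).sum()
print("total change:", C1 - C0)
print("sum of effects (residual excluded):",
      scale_effect + structure_effect + intensity_effect)
```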

  5. Effects of Endogenous Factors on Regional Land-Use Carbon Emissions Based on the Grossman Decomposition Model: A Case Study of Zhejiang Province, China

    Science.gov (United States)

    Wu, Cifang; Li, Guan; Yue, Wenze; Lu, Rucheng; Lu, Zhangwei; You, Heyuan

    2015-02-01

    The impact of land-use change on greenhouse gas emissions has become a core issue in current studies on global change and the carbon cycle. However, a comprehensive evaluation of the effects of land-use changes on carbon emissions is still needed. This paper applied the Grossman decomposition model to estimate the scale, structural, and management effects of land-use carbon emissions based on final energy consumption, by establishing the relationship between land-use types and carbon emissions from energy consumption. It was shown that land-use carbon emissions increased from 169.5624 million tons in 2000 to 637.0984 million tons in 2010, with an average annual growth rate of 14.15%. Meanwhile, land-use carbon intensity increased from 17.59 t/ha in 2000 to 64.42 t/ha in 2010, with an average annual growth rate of 13.86%. The results indicated that rapid industrialization and urbanization in Zhejiang Province promptly increased urban land and industrial land, which consequently drove the extensive growth of land-use emissions. The structural and management effects did not mitigate land-use carbon emissions. On the contrary, both factors evidently contributed to the growth of carbon emissions because of the rigid demand for energy-intensive land-use types and the absence of land management. The results call for policies that optimize land-use structures and strengthen land-use management.

  6. Decomposition methods for unsupervised learning

    DEFF Research Database (Denmark)

    Mørup, Morten

    2008-01-01

    This thesis presents the application and development of decomposition methods for Unsupervised Learning. It covers topics from classical factor analysis based decomposition and its variants such as Independent Component Analysis, Non-negative Matrix Factorization and Sparse Coding...... methods and clustering problems is derived both in terms of classical point clustering but also in terms of community detection in complex networks. A guiding principle throughout this thesis is the principle of parsimony. Hence, the goal of Unsupervised Learning is here posed as striving for simplicity...... in the decompositions. Thus, it is demonstrated how a wide range of decomposition methods explicitly or implicitly strive to attain this goal. Applications of the derived decompositions are given ranging from multi-media analysis of image and sound data, analysis of biomedical data such as electroencephalography...

  7. Efficient decomposition and linearization methods for the stochastic transportation problem

    International Nuclear Information System (INIS)

    Holmberg, K.

    1993-01-01

    The stochastic transportation problem can be formulated as a convex transportation problem with a nonlinear objective function and linear constraints. We compare several different methods based on decomposition techniques and linearization techniques for this problem, trying to find the most efficient method or combination of methods. We discuss and test a separable programming approach, the Frank-Wolfe method with and without modifications, the new technique of mean value cross decomposition, and the better-known Lagrangian relaxation with subgradient optimization, as well as combinations of these approaches. Computational tests are presented, indicating that some new combination methods are quite efficient for large scale problems. (authors) (27 refs.)
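
    To make the linearization idea concrete: the Frank-Wolfe method repeatedly linearizes the convex objective and solves the resulting ordinary (linear) transportation problem as a subproblem. A minimal sketch with an invented quadratic cost, using scipy's LP solver for the subproblem; it is unrelated to the paper's test instances:

```python
import numpy as np
from scipy.optimize import linprog

# Convex transportation problem: minimize sum 0.5*Q_ij*x_ij^2 subject
# to the usual supply/demand constraints (Q invented for illustration).
supply = np.array([30., 20.]); demand = np.array([25., 15., 10.])
m, n = supply.size, demand.size
Q = np.array([[2., 1., 3.], [1., 4., 2.]])

def grad(x):                       # gradient of the quadratic objective
    return Q * x

# Equality constraints: row sums = supply, column sums = demand.
A_eq, b_eq = [], []
for i in range(m):
    row = np.zeros((m, n)); row[i, :] = 1; A_eq.append(row.ravel())
    b_eq.append(supply[i])
for j in range(n):
    col = np.zeros((m, n)); col[:, j] = 1; A_eq.append(col.ravel())
    b_eq.append(demand[j])

x = np.outer(supply, demand).ravel() / demand.sum()   # feasible start
for k in range(100):
    # The Frank-Wolfe subproblem is a linear transportation problem.
    res = linprog(grad(x.reshape(m, n)).ravel(), A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method="highs")
    step = 2.0 / (k + 2.0)                            # standard FW step
    x = x + step * (res.x - x)

print("objective:", 0.5 * np.sum(Q * x.reshape(m, n) ** 2))
```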

  8. Amorphization of Fe-based alloy via wet mechanical alloying assisted by PCA decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Neamţu, B.V., E-mail: Bogdan.Neamtu@stm.utcluj.ro [Materials Science and Engineering Department, Technical University of Cluj-Napoca, 103-105, Muncii Avenue, 400641, Cluj-Napoca (Romania); Chicinaş, H.F.; Marinca, T.F. [Materials Science and Engineering Department, Technical University of Cluj-Napoca, 103-105, Muncii Avenue, 400641, Cluj-Napoca (Romania); Isnard, O. [Université Grenoble Alpes, Institut NEEL, F-38042, Grenoble (France); CNRS, Institut NEEL, 25 rue des martyrs, BP166, F-38042, Grenoble (France); Pană, O. [National Institute for Research and Development of Isotopic and Molecular Technologies, 65-103 Donath Street, 400293, Cluj-Napoca (Romania); Chicinaş, I. [Materials Science and Engineering Department, Technical University of Cluj-Napoca, 103-105, Muncii Avenue, 400641, Cluj-Napoca (Romania)

    2016-11-01

    used as microalloying elements which could provide the required extra amount of metalloids. - Highlights: • Amorphization of Fe{sub 75}Si{sub 20}B{sub 5} alloy via wet mechanical alloying is assisted by PCA decomposition. • Powder amorphization was not achieved even after 140 hours of dry MA. • Wet MA using different PCAs leads to powder amorphization at different MA durations. • Regardless of PCA type, contamination with 2.3 wt% C is needed for amorphization.

  9. Optimization of a space based radiator

    International Nuclear Information System (INIS)

    Sam, Kien Fan Cesar Hung; Deng Zhongmin

    2011-01-01

    Nowadays there is increased demand for satellite weight reduction to reduce costs. Thermal control system designers face the challenge of reducing both the weight of the system and the required heater power while maintaining the component temperatures within their design ranges. The main purpose of this paper is to present an optimization of a heat pipe radiator applied to a practical engineering design application. For this study, a communications satellite payload panel was considered. Four radiator areas were defined instead of a centralized one in order to improve the heat rejection into space; the radiator's dimensions were determined considering the worst hot-case scenario, solar fluxes, heat dissipation, and the components' design temperature upper limits. Dimensions, thermal properties of the structural panel, optical properties, and degradation/contamination of thermal control coatings were also considered. A thermal model was constructed for thermal analysis, and two heat pipe network designs were evaluated and compared. The model that allowed better radiator efficiency was selected for parametric thermal analysis and optimization. The aim is to find the minimum size of the heat pipe network that still complies with the thermal control requirements without increasing power consumption. - Highlights: →Heat pipe radiator optimization applied to a practical engineering design application. →The heat pipe radiator of a communications satellite panel is optimized. →A thermal model was built for parametric thermal analysis and optimization. →The optimal heat pipe network size is determined for the optimal weight solution. →The thermal compliance was verified by transient thermal analysis.

  10. Identification of Diethyl 2,5-Dioxahexane Dicarboxylate and Polyethylene Carbonate as Decomposition Products of Ethylene Carbonate Based Electrolytes by Fourier Transform Infrared Spectroscopy

    KAUST Repository

    Shi, Feifei; Zhao, Hui; Liu, Gao; Ross, Philip N.; Somorjai, Gabor A.; Komvopoulos, Kyriakos

    2014-01-01

    The formation of passive films on electrodes due to electrolyte decomposition significantly affects the reversibility of Li-ion batteries (LIBs); however, understanding of the electrolyte decomposition process is still lacking. The decomposition products of ethylene carbonate (EC)-based electrolytes on Sn and Ni electrodes are investigated in this study by Fourier transform infrared (FTIR) spectroscopy. The reference compounds, diethyl 2,5-dioxahexane dicarboxylate (DEDOHC) and polyethylene carbonate (poly-EC), were synthesized, and their chemical structures were characterized by FTIR spectroscopy and nuclear magnetic resonance (NMR). Assignment of the vibration frequencies of these compounds was assisted by quantum chemical (Hartree-Fock) calculations. The effect of Li-ion solvation on the FTIR spectra was studied by introducing the synthesized reference compounds into the electrolyte. EC decomposition products formed on Sn and Ni electrodes were identified as DEDOHC and poly-EC by matching the features of surface species formed on the electrodes with reference spectra. The results of this study demonstrate the importance of accounting for the solvation effect in FTIR analysis of the decomposition products forming on LIB electrodes. © 2014 American Chemical Society.

  11. Identification of Diethyl 2,5-Dioxahexane Dicarboxylate and Polyethylene Carbonate as Decomposition Products of Ethylene Carbonate Based Electrolytes by Fourier Transform Infrared Spectroscopy

    KAUST Repository

    Shi, Feifei

    2014-07-10

    The formation of passive films on electrodes due to electrolyte decomposition significantly affects the reversibility of Li-ion batteries (LIBs); however, understanding of the electrolyte decomposition process is still lacking. The decomposition products of ethylene carbonate (EC)-based electrolytes on Sn and Ni electrodes are investigated in this study by Fourier transform infrared (FTIR) spectroscopy. The reference compounds, diethyl 2,5-dioxahexane dicarboxylate (DEDOHC) and polyethylene carbonate (poly-EC), were synthesized, and their chemical structures were characterized by FTIR spectroscopy and nuclear magnetic resonance (NMR). Assignment of the vibration frequencies of these compounds was assisted by quantum chemical (Hartree-Fock) calculations. The effect of Li-ion solvation on the FTIR spectra was studied by introducing the synthesized reference compounds into the electrolyte. EC decomposition products formed on Sn and Ni electrodes were identified as DEDOHC and poly-EC by matching the features of surface species formed on the electrodes with reference spectra. The results of this study demonstrate the importance of accounting for the solvation effect in FTIR analysis of the decomposition products forming on LIB electrodes. © 2014 American Chemical Society.

  12. A solution approach based on Benders decomposition for the preventive maintenance scheduling problem of a stochastic large-scale energy system

    DEFF Research Database (Denmark)

    Lusby, Richard Martin; Muller, Laurent Flindt; Petersen, Bjørn

    2013-01-01

    This paper describes a Benders decomposition-based framework for solving the large-scale energy management problem that was posed for the ROADEF 2010 challenge. The problem was taken from the power industry and entailed scheduling the outage dates for a set of nuclear power plants, which need...... to be regularly taken down for refueling and maintenance, in such a way that the expected cost of meeting the power demand in a number of potential scenarios is minimized. We show that the problem structure naturally lends itself to Benders decomposition; however, not all constraints can be included in the mixed...
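
    For readers unfamiliar with the mechanics, Benders decomposition alternates between a master problem over the first-stage decisions and scenario subproblems whose dual solutions generate cuts. Below is a minimal single-cut (L-shaped) sketch on an invented two-stage LP; the closed-form dual stands in for the LP dual solve of a real subproblem, and the model has nothing to do with the ROADEF instances:

```python
import numpy as np
from scipy.optimize import linprog

# Toy two-stage stochastic LP:  min c*x + E_s[ q*y_s ]
# s.t. 0 <= x <= 10, and per scenario s: y_s >= h_s - x, y_s >= 0.
c, q = 1.0, 1.5
scenarios = [(0.3, 4.0), (0.4, 7.0), (0.3, 12.0)]   # (probability, demand h_s)

def scenario_dual(x, h):
    # Dual of the recourse LP  min q*y s.t. y >= h - x, y >= 0  is
    # max pi*(h - x) s.t. 0 <= pi <= q; solvable by inspection here.
    return q if h - x > 0 else 0.0

cuts = []                                # each cut: theta >= const - coef*x
for it in range(50):
    # Master problem over (x, theta); cuts become A_ub rows.
    A_ub = [[-coef, -1.0] for coef, const in cuts] or None
    b_ub = [-const for coef, const in cuts] or None
    res = linprog([c, 1.0], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, 10), (-1e3, None)], method="highs")
    x, theta = res.x
    # Aggregate the scenario duals into one optimality cut.
    pis = [(p, scenario_dual(x, h), h) for p, h in scenarios]
    coef = sum(p * pi for p, pi, _ in pis)
    const = sum(p * pi * h for p, pi, h in pis)
    if theta >= const - coef * x - 1e-8:   # master matches recourse cost
        break
    cuts.append((coef, const))

print(f"optimal first-stage x = {x:.2f}, total cost = {c*x + theta:.2f}")
```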

  13. Practical mathematical optimization basic optimization theory and gradient-based algorithms

    CERN Document Server

    Snyman, Jan A

    2018-01-01

    This textbook presents a wide range of tools for a course in mathematical optimization for upper undergraduate and graduate students in mathematics, engineering, computer science, and other applied sciences. Basic optimization principles are presented with emphasis on gradient-based numerical optimization strategies and algorithms for solving both smooth and noisy discontinuous optimization problems. Attention is also paid to the difficulties of expense of function evaluations and the existence of multiple minima that often unnecessarily inhibit the use of gradient-based methods. This second edition addresses further advancements of gradient-only optimization strategies to handle discontinuities in objective functions. New chapters discuss the construction of surrogate models as well as new gradient-only solution strategies and numerical optimization using Python. A special Python module is electronically available (via springerlink) that makes the new algorithms featured in the text easily accessible and dir...

  14. Proper orthogonal decomposition-based estimations of the flow field from particle image velocimetry wall-gradient measurements in the backward-facing step flow

    International Nuclear Information System (INIS)

    Nguyen, Thien Duy; Wells, John Craig; Mokhasi, Paritosh; Rempfer, Dietmar

    2010-01-01

    In this paper, particle image velocimetry (PIV) results from the recirculation zone of a backward-facing step flow, for which the Reynolds number is 2800 based on the bulk velocity upstream of the step and the step height (h = 16.5 mm), are used to demonstrate the capability of proper orthogonal decomposition (POD)-based measurement models. Three-component PIV velocity fields are decomposed by POD into a set of spatial basis functions and a set of temporal coefficients. The measurement models are built to relate the low-order POD coefficients, determined from an ensemble of 1050 PIV fields by the 'snapshot' method, to the time-resolved wall gradients measured by a near-wall technique called stereo interfacial PIV. These models are evaluated in terms of reconstruction and prediction of the low-order temporal POD coefficients of the velocity fields. To determine the estimation coefficients of the measurement models, linear stochastic estimation (LSE), quadratic stochastic estimation (QSE), principal component regression (PCR) and kernel ridge regression (KRR) are applied. We denote these approaches LSE-POD, QSE-POD, PCR-POD and KRR-POD. In addition to comparing the accuracy of the measurement models, we introduce multi-time POD-based estimations in which past and future information on the wall-gradient events is used separately or in combination. The results show that the multi-time estimation approaches can improve the prediction process. Among these approaches, the proposed multi-time KRR-POD estimation with an optimized window of past wall-gradient information yields the best prediction. Such a multi-time KRR-POD approach offers a useful tool for real-time estimation of the velocity field based on wall-gradient data
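
    A bare-bones sketch of the snapshot-POD plus LSE pipeline on synthetic data: build a low-order POD basis from an ensemble of fields, then fit a least-squares linear map from simultaneous wall-gradient measurements to the POD coefficients. The array sizes echo the record's 1050-snapshot ensemble, but all data, the three-mode truncation, and the eight "wall sensors" are invented for illustration:

```python
import numpy as np

# Synthetic ensemble: 1050 "PIV snapshots" of a velocity field with 500
# spatial points, driven by a few latent temporal coefficients.
rng = np.random.default_rng(1)
n_snap, n_space, n_wall = 1050, 500, 8
latent = rng.standard_normal((n_snap, 3))
spatial = rng.standard_normal((3, n_space))
U = latent @ spatial + 0.05 * rng.standard_normal((n_snap, n_space))

# Snapshot POD: subtract the mean, then an SVD gives spatial modes and
# temporal coefficients (equivalent to the "method of snapshots").
U_mean = U.mean(axis=0)
_, s, Vh = np.linalg.svd(U - U_mean, full_matrices=False)
modes = Vh[:3]                                  # low-order POD basis
a = (U - U_mean) @ modes.T                      # temporal coefficients

# Wall gradients correlated with the field (stand-in for stereo
# interfacial PIV data), plus measurement noise.
W = latent @ rng.standard_normal((3, n_wall))
W += 0.1 * rng.standard_normal((n_snap, n_wall))

# LSE-POD: least-squares linear map from wall gradients to POD
# coefficients; new wall measurements then estimate the whole field.
L, *_ = np.linalg.lstsq(W, a, rcond=None)
a_est = W @ L
field_est = U_mean + a_est @ modes
print("coefficient RMS error:", np.sqrt(np.mean((a_est - a) ** 2)))
```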

  15. Topology optimization based on the harmony search method

    International Nuclear Information System (INIS)

    Lee, Seung-Min; Han, Seog-Young

    2017-01-01

    A new topology optimization scheme based on a Harmony search (HS) as a metaheuristic method was proposed and applied to static stiffness topology optimization problems. To apply the HS to topology optimization, the variables in HS were transformed to those in topology optimization. Compliance was used as an objective function, and harmony memory was defined as the set of the optimized topology. Also, a parametric study for Harmony memory considering rate (HMCR), Pitch adjusting rate (PAR), and Bandwidth (BW) was performed to find the appropriate range for topology optimization. Various techniques were employed such as a filtering scheme, simple average scheme and harmony rate. To provide a robust optimized topology, the concept of the harmony rate update rule was also implemented. Numerical examples are provided to verify the effectiveness of the HS by comparing the optimal layouts of the HS with those of Bidirectional evolutionary structural optimization (BESO) and Artificial bee colony algorithm (ABCA). The following conclusions could be made: (1) The proposed topology scheme is very effective for static stiffness topology optimization problems in terms of stability, robustness and convergence rate. (2) The suggested method provides a symmetric optimized topology despite the fact that the HS is a stochastic method like the ABCA. (3) The proposed scheme is applicable and practical in manufacturing since it produces a solid-void design of the optimized topology. (4) The suggested method appears to be very effective for large scale problems like topology optimization.

  16. Topology optimization based on the harmony search method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung-Min; Han, Seog-Young [Hanyang University, Seoul (Korea, Republic of)

    2017-06-15

    A new topology optimization scheme based on a Harmony search (HS) as a metaheuristic method was proposed and applied to static stiffness topology optimization problems. To apply the HS to topology optimization, the variables in HS were transformed to those in topology optimization. Compliance was used as an objective function, and harmony memory was defined as the set of the optimized topology. Also, a parametric study for Harmony memory considering rate (HMCR), Pitch adjusting rate (PAR), and Bandwidth (BW) was performed to find the appropriate range for topology optimization. Various techniques were employed such as a filtering scheme, simple average scheme and harmony rate. To provide a robust optimized topology, the concept of the harmony rate update rule was also implemented. Numerical examples are provided to verify the effectiveness of the HS by comparing the optimal layouts of the HS with those of Bidirectional evolutionary structural optimization (BESO) and Artificial bee colony algorithm (ABCA). The following conclusions could be made: (1) The proposed topology scheme is very effective for static stiffness topology optimization problems in terms of stability, robustness and convergence rate. (2) The suggested method provides a symmetric optimized topology despite the fact that the HS is a stochastic method like the ABCA. (3) The proposed scheme is applicable and practical in manufacturing since it produces a solid-void design of the optimized topology. (4) The suggested method appears to be very effective for large scale problems like topology optimization.
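
    The HS operators named above (harmony memory, HMCR, PAR, BW) are easiest to see in a compact implementation. A minimal continuous-variable sketch follows; the paper applies the same operators to topology design variables, and all parameter values below are illustrative defaults:

```python
import numpy as np

def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=5000):
    # Minimal HS for continuous minimization.
    rng = np.random.default_rng(0)
    lo, hi = np.asarray(bounds, dtype=float).T
    hm = rng.uniform(lo, hi, size=(hms, lo.size))      # harmony memory
    cost = np.apply_along_axis(f, 1, hm)
    for _ in range(iters):
        new = rng.uniform(lo, hi)                      # random base harmony
        pick = rng.random(lo.size) < hmcr              # memory consideration
        new[pick] = hm[rng.integers(hms, size=pick.sum()), pick]
        adjust = pick & (rng.random(lo.size) < par)    # pitch adjustment
        new[adjust] += bw * rng.uniform(-1, 1, adjust.sum()) * (hi - lo)[adjust]
        new = np.clip(new, lo, hi)
        worst = cost.argmax()
        if (c := f(new)) < cost[worst]:                # replace worst harmony
            hm[worst], cost[worst] = new, c
    return hm[cost.argmin()], cost.min()

x, fx = harmony_search(lambda z: np.sum(z ** 2), bounds=[(-5, 5)] * 4)
print("best point:", x, "cost:", fx)
```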

  17. Reliability-Based Optimization of Series Systems of Parallel Systems

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Sørensen, John Dalsgaard

    1993-01-01

    Reliability-based design of structural systems is considered. In particular, systems where the reliability model is a series system of parallel systems are treated. A sensitivity analysis for this class of problems is presented. Optimization problems with series systems of parallel systems...... optimization of series systems of parallel systems, but it is also efficient in reliability-based optimization of series systems in general....

  18. Dose optimization for dual-energy contrast-enhanced digital mammography based on an energy-resolved photon-counting detector: A Monte Carlo simulation study

    International Nuclear Information System (INIS)

    Lee, Youngjin; Lee, Seungwan; Kang, Sooncheol; Eom, Jisoo

    2017-01-01

    Dual-energy contrast-enhanced digital mammography (CEDM) has been used to decompose breast images and improve diagnostic accuracy for tumor detection. However, this technique increases the radiation dose and suffers from inaccuracy in material decomposition due to the limitations of conventional X-ray detectors. In this study, we simulated dual-energy CEDM with an energy-resolved photon-counting detector (ERPCD) to reduce the radiation dose and improve the quantitative accuracy of material decomposition images. The ERPCD-based dual-energy CEDM was compared to the conventional dual-energy CEDM in terms of radiation dose and quantitative accuracy. The correlation between radiation dose and image quality was also evaluated to optimize the ERPCD-based dual-energy CEDM technique. The results showed that the material decomposition errors of the ERPCD-based dual-energy CEDM were 0.56–0.67 times those of the conventional dual-energy CEDM. The imaging performance of the proposed technique was optimized at a radiation dose of 1.09 mGy, which is half of the mean glandular dose (MGD) for a single-view mammogram. It can be concluded that the ERPCD-based dual-energy CEDM with an optimal exposure level is able to improve the quality of material decomposition images as well as reduce the radiation dose. - Highlights: • Dual-energy mammography based on a photon-counting detector was simulated. • Radiation dose and image quality were evaluated to optimize the proposed technique. • The proposed technique reduced the radiation dose as well as improved image quality. • The proposed technique was optimized at a radiation dose of 1.09 mGy.

  19. Geometrically based optimization for extracranial radiosurgery

    International Nuclear Information System (INIS)

    Liu Ruiguo; Wagner, Thomas H; Buatti, John M; Modrick, Joseph; Dill, John; Meeks, Sanford L

    2004-01-01

    For static beam conformal intracranial radiosurgery, the geometry of the beam arrangement dominates the overall dose distribution. Maximizing beam separation in three dimensions decreases beam overlap, thus maximizing dose conformality and the dose gradient outside of the target volume. Webb proposed arrangements of isotropically convergent beams that could be used as the starting point for a radiotherapy optimization process. We have developed an extracranial radiosurgery optimization method by extending Webb's isotropic beam arrangements to deliverable beam arrangements. This method uses an arrangement of N maximally separated converging vectors within the space available for beam delivery. Each bouquet of isotropic beam vectors is generated by a random sampling process that iteratively maximizes beam separation. Next, the beam arrangement is optimized for critical structure avoidance while maintaining minimal overlap between beam entrance and exit pathways. This geometrically optimized beam set can then be used as a template for either conformal beam or intensity modulated extracranial radiosurgery. Preliminary results suggest that using this technique with conformal beam planning provides high plan conformality, a steep dose gradient outside of the tumour volume, and acceptable critical structure avoidance in the majority of clinical cases
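
    The separation-maximization step lends itself to a compact sketch: sample bouquets of unit vectors in the deliverable half-space and keep the arrangement whose smallest pairwise angle is largest. This best-of-many-samples variant is a simplification of the iterative maximization described above, and the hemisphere constraint is an invented stand-in for real couch and gantry limits:

```python
import numpy as np

def max_separation_beams(n_beams, n_trials=20000, seed=0):
    # Keep the random bouquet of unit vectors whose minimum pairwise
    # angle is largest, i.e. the most separated beam arrangement.
    rng = np.random.default_rng(seed)
    best, best_angle = None, -1.0
    for _ in range(n_trials):
        v = rng.standard_normal((n_beams, 3))
        v[:, 2] = np.abs(v[:, 2])                   # restrict to z >= 0
        v /= np.linalg.norm(v, axis=1, keepdims=True)
        cosines = v @ v.T
        np.fill_diagonal(cosines, -1.0)             # ignore self-pairs
        min_angle = np.arccos(np.clip(cosines.max(), -1, 1))
        if min_angle > best_angle:
            best, best_angle = v, min_angle
    return best, np.degrees(best_angle)

beams, angle = max_separation_beams(8)
print(f"minimum pairwise separation achieved: {angle:.1f} degrees")
```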

  20. Optimal separable bases and molecular collisions

    International Nuclear Information System (INIS)

    Poirier, L.W.

    1997-12-01

    A new methodology is proposed for the efficient determination of Green's functions and eigenstates for quantum systems of two or more dimensions. For a given Hamiltonian, the best possible separable approximation is obtained from the set of all Hilbert space operators. It is shown that this determination itself, as well as the solution of the resultant approximation, are problems of reduced dimensionality for most systems of physical interest. Moreover, the approximate eigenstates constitute the optimal separable basis, in the sense of self-consistent field theory. These distorted waves give rise to a Born series with optimized convergence properties. Analytical results are presented for an application of the method to the two-dimensional shifted harmonic oscillator system. The primary interest, however, is quantum reactive scattering in molecular systems. For numerical calculations, the use of distorted waves corresponds to numerical preconditioning. The new methodology therefore gives rise to an optimized preconditioning scheme for the efficient calculation of reactive and inelastic scattering amplitudes, especially at intermediate energies. This scheme is particularly suited to discrete variable representations (DVRs) and iterative sparse matrix methods commonly employed in such calculations. State-to-state and cumulative reactive scattering results obtained via the optimized preconditioner are presented for the two-dimensional collinear H + H2 → H2 + H system. Computational time and memory requirements for this system are drastically reduced in comparison with other methods, and results are obtained for previously prohibitive energy regimes

  1. Ultra-High-Speed Travelling Wave Protection of Transmission Line Using Polarity Comparison Principle Based on Empirical Mode Decomposition

    Directory of Open Access Journals (Sweden)

    Dong Wang

    2015-01-01

    Full Text Available The traditional polarity-comparison-based travelling wave protection, which uses the initial wave information, is affected by the initial fault angle, the bus structure, and external faults, and it ignores the relationship between the magnitude and polarity of the travelling wave. The resulting tripping failures and malfunctions have limited the further application of this protection principle. Therefore, this paper presents an ultra-high-speed travelling wave protection scheme using an integral-based polarity comparison principle. After empirical mode decomposition of the original travelling wave, the first-order intrinsic mode function is used as the protection object. Based on the relationship between the magnitude and polarity of the travelling wave, this paper demonstrates the feasibility of using the travelling wave magnitude, which contains polarity information, as the direction criterion. The direction criterion is integrated over a period after the fault to avoid wave-head detection failure. Through PSCAD simulation of a typical 500 kV transmission system, the reliability and sensitivity of the travelling wave protection were verified under the influence of different factors.

  2. Application of spectral decomposition of LIDAR-based headwind profiles in windshear detection at the Hong Kong International Airport

    Directory of Open Access Journals (Sweden)

    Tsz-Chun Wu

    2018-01-01

    Full Text Available In aviation, rapidly fluctuating headwind/tailwind may lead to high horizontal windshear, posing potential safety hazards to aircraft. So far, windshear alerts are issued by considering directly the headwind differences measured along the aircraft flight path (e.g., based on Doppler velocities from remote sensing). In this paper, we propose and demonstrate a new methodology for windshear alerting based on the technique of spectral decomposition. Through Fourier transformation of the LIDAR-based headwind profiles in 2012 and 2014 at arrival corridors 07LA and 25RA of the Hong Kong International Airport (HKIA), we study the occurrence of windshear in the spectral domain. Using a threshold-based approach, we investigate the performance of single- and multiple-channel detection algorithms and validate the results against pilot reports. With the receiver operating characteristic (ROC) diagram, we successfully demonstrate the feasibility of this approach by showing comparable performance of the triple-channel detection algorithm and a consistent hit rate gain (at 07LA in particular) of 4.5 to 8% for quadruple-channel detection against GLYGA, the currently operational algorithm at HKIA. We also observe that some length scales are particularly sensitive to windshear events, which may be closely related to the local geography of HKIA. This study opens a new door for windshear detection methodology in the spectral domain for the aviation community.
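
    A toy version of the threshold-per-channel idea: Fourier-transform a headwind profile sampled along the glide path, then trip an alert when the spectral amplitude in any chosen wavelength channel exceeds its threshold. The channel bounds and thresholds below are invented for illustration and are unrelated to GLYGA's actual parameters:

```python
import numpy as np

def windshear_alert(headwind, dx, channels, thresholds):
    # FFT the headwind profile sampled every dx metres along the glide
    # path, then compare spectral amplitude in selected wavelength bands
    # ("channels") against per-channel thresholds; alert if any trips.
    spec = np.abs(np.fft.rfft(headwind - headwind.mean()))
    wavelengths = 1.0 / np.fft.rfftfreq(headwind.size, d=dx)[1:]
    amp = spec[1:]                                    # drop the DC bin
    trips = []
    for (lo, hi), thr in zip(channels, thresholds):
        band = (wavelengths >= lo) & (wavelengths <= hi)
        trips.append(amp[band].max(initial=0.0) > thr)
    return any(trips)

# Illustrative profile: 3 km of headwind with a 600 m oscillation.
x = np.arange(0, 3000.0, 10.0)
headwind = 2.0 * np.sin(2 * np.pi * x / 600.0) + 0.3 * np.random.randn(x.size)
print(windshear_alert(headwind, dx=10.0,
                      channels=[(400, 800), (800, 1600)],
                      thresholds=[50.0, 50.0]))
```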

  3. The Optimal Wavelengths for Light Absorption Spectroscopy Measurements Based on Genetic Algorithm-Particle Swarm Optimization

    Science.gov (United States)

    Tang, Ge; Wei, Biao; Wu, Decao; Feng, Peng; Liu, Juan; Tang, Yuan; Xiong, Shuangfei; Zhang, Zheng

    2018-03-01

    To select the optimal wavelengths in light extinction spectroscopy measurements, genetic algorithm-particle swarm optimization (GAPSO), based on the genetic algorithm (GA) and particle swarm optimization (PSO), is adopted. The change of the optimal wavelength positions under different feature size parameters and distribution parameters is evaluated. Moreover, the Monte Carlo method based on random probability is used to identify the number of optimal wavelengths, and good inversion results for the particle size distribution are obtained. The method proved to have the advantage of resisting noise. In order to verify the feasibility of the algorithm, spectra with bands ranging from 200 to 1000 nm are computed. Based on this, the measured data of standard particles are used to verify the algorithm.

  4. Experimental investigation of the catalytic decomposition and combustion characteristics of a non-toxic ammonium dinitramide (ADN)-based monopropellant thruster

    Science.gov (United States)

    Chen, Jun; Li, Guoxiu; Zhang, Tao; Wang, Meng; Yu, Yusong

    2016-12-01

    Low-toxicity ammonium dinitramide (ADN)-based aerospace propulsion systems currently show promise for applications such as satellite attitude control. In the present work, the decomposition and combustion processes of an ADN-based monopropellant thruster were systematically studied, using a thermally stable catalyst to promote the decomposition reaction. The performance of the ADN propulsion system was investigated using a ground test system under vacuum, and the physical properties of the ADN-based propellant were also examined. Using this system, the effects of the preheating temperature and feed pressure on the combustion characteristics and thruster performance during steady-state operation were observed. The results indicate that the propellant and catalyst employed in this work, as well as the design and manufacture of the thruster, met the performance requirements. Moreover, the 1 N ADN thruster generated a specific impulse of 223 s, demonstrating the efficacy of the new catalyst. The thruster operational parameters (specifically, the preheating temperature and feed pressure) were found to have a significant effect on the decomposition and combustion processes within the thruster, and the performance of the thruster was demonstrated to improve at higher feed pressures and elevated preheating temperatures. A preheating temperature as low as 140 °C was found to activate the catalytic decomposition and combustion processes effectively compared with the other conditions examined. The data obtained in this study should be beneficial to future systematic and in-depth investigations of the combustion mechanism and characteristics within an ADN thruster.

  5. Physical bases for diffusion welding processes optimization

    International Nuclear Information System (INIS)

    Bulygina, S.M.; Berber, N.N.; Mukhambetov, D.G.

    1999-01-01

    Diffusion welding is one of the widespread methods for joining different materials. It is achieved through mutual diffusion of atoms across the contacting surfaces during long-duration holding under heating and compression. Depending on the properties of the welded parts, the welding regime is defined by three parameters: temperature, pressure, and time. The problem of diffusion welding optimization consists in determining the lowest values of these parameters that still comply with the quality requirements for the welded joint. In this work, diffusion welding experiments were carried out at the calculated temperature and for a given surface roughness. Tests were conducted on samples of iron and an iron-nickel alloy with dimensions of 1·1·1 cm3. The optimal regime of diffusion welding of the examined samples in vacuum is defined. It includes compression of the samples, heating, and isothermal holding at 650 deg C for 0.5 h, and it affords the required homogeneity of the joint

  6. Interleaver Optimization using Population-Based Metaheuristics

    Czech Academy of Sciences Publication Activity Database

    Snášel, V.; Platoš, J.; Krömer, P.; Abraham, A.; Ouddane, N.; Húsek, Dušan

    2010-01-01

    Roč. 20, č. 5 (2010), s. 591-608 ISSN 1210-0552 R&D Projects: GA ČR GA205/09/1079 Grant - others:GA ČR(CZ) GA102/09/1494 Institutional research plan: CEZ:AV0Z10300504 Keywords : turbo codes * global optimization * genetic algorithms * differential evolution * noisy communication channel Subject RIV: IN - Informatics, Computer Science Impact factor: 0.511, year: 2010

  7. Defining a region of optimization based on engine usage data

    Science.gov (United States)

    Jiang, Li; Lee, Donghoon; Yilmaz, Hakan; Stefanopoulou, Anna

    2015-08-04

    Methods and systems for engine control optimization are provided. One or more operating conditions of a vehicle engine are detected. A value for each of a plurality of engine control parameters is determined based on the detected one or more operating conditions of the vehicle engine. A range of the most commonly detected operating conditions of the vehicle engine is identified and a region of optimization is defined based on the range of the most commonly detected operating conditions of the vehicle engine. The engine control optimization routine is initiated when the one or more operating conditions of the vehicle engine are within the defined region of optimization.

  8. Optimal design of the heat pipe using TLBO (teaching–learning-based optimization) algorithm

    International Nuclear Information System (INIS)

    Rao, R.V.; More, K.C.

    2015-01-01

    The heat pipe is a highly efficient and reliable heat transfer component. It is a closed container designed to transfer a large amount of heat in a system. Since the heat pipe operates on a closed two-phase cycle, its heat transfer capacity is greater than that of solid conductors, and its thermal response time is shorter. The three major elemental parts of the rotating heat pipe are: a cylindrical evaporator, a truncated cone condenser, and a fixed amount of working fluid. In this paper, a recently proposed stochastic optimization algorithm called TLBO (Teaching–Learning-Based Optimization) is used for single-objective as well as multi-objective design optimization of the heat pipe. It is easy to implement, does not make use of derivatives, and can be applied to unconstrained or constrained problems. Two heat pipe examples are presented in this paper. The results of applying the TLBO algorithm to the design optimization of the heat pipe are compared with those of the NPGA (Niched Pareto Genetic Algorithm), GEM (Grenade Explosion Method) and GEO (Generalized Extremal Optimization). It is found that the TLBO algorithm produced better results than those obtained using the NPGA, GEM and GEO algorithms. - Highlights: • The TLBO (Teaching–Learning-Based Optimization) algorithm is used for the design and optimization of a heat pipe. • Two examples of heat pipe design and optimization are presented. • The TLBO algorithm proved better than the other optimization algorithms in terms of results and convergence

  9. A Novel Optimal Control Method for Impulsive-Correction Projectile Based on Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Ruisheng Sun

    2016-01-01

    Full Text Available This paper presents a new parametric optimization approach based on a modified particle swarm optimization (PSO) to design a class of impulsive-correction projectiles with discrete, flexible-time-interval, and finite-energy control. In terms of optimal control theory, the task is formulated as minimizing the number of working impulses and the control error, which involves reference model linearization, boundary conditions, and a discontinuous objective function. These features make it difficult to find the global optimum by directly applying other optimization approaches, for example, the hp-adaptive pseudospectral method. Consequently, the PSO mechanism is employed for the optimal setting of the impulsive control, with the time intervals between two neighboring lateral impulses taken as design variables, which keeps the optimization process brief. A modification of the basic PSO algorithm is developed to improve the convergence speed of the optimization by linearly decreasing the inertia weight. In addition, a suboptimal control and guidance law based on the PSO technique is put forward for real-time online design in practice. Finally, a simulation case coupled with a nonlinear flight dynamics model is used to validate the modified PSO control algorithm. The results of the comparative study illustrate that the proposed optimal control algorithm performs well in obtaining the optimal control efficiently and accurately, and it provides a reference approach to handling such impulsive-correction problems.
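
    The modification highlighted above, a linearly decreasing inertia weight, is a one-line change to basic PSO. A minimal sketch on a toy objective; all parameter values are common textbook defaults, not the paper's settings:

```python
import numpy as np

def pso(f, bounds, n_particles=30, iters=200, w_max=0.9, w_min=0.4,
        c1=2.0, c2=2.0, seed=0):
    # Basic PSO with a linearly decreasing inertia weight: w falls
    # from w_max to w_min over the run.
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi, (n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    gbest = pbest[pbest_val.argmin()].copy()
    for k in range(iters):
        w = w_max - (w_max - w_min) * k / (iters - 1)   # inertia schedule
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

best, val = pso(lambda z: np.sum(z ** 2), bounds=[(-10, 10)] * 5)
print("best point:", best, "cost:", val)
```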

  10. Application of the Decomposition Method to the Design Complexity of Computer-based Display

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyoung Ju; Lee, Seung Woo; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Park, Jin Kyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2012-05-15

    The importance of the design of human machine interfaces (HMIs) for human performance and safety has long been recognized in process industries. In the case of nuclear power plants (NPPs), HMIs have significant implications for safety, since poor implementation of HMIs can impair the operators' information searching ability, which is considered one of the important aspects of human behavior. To support and increase the efficiency of the operators' information searching behavior, advanced HMIs based on computer technology are provided. Operators in an advanced main control room (MCR) acquire information through the video display units (VDUs) and the large display panel (LDP) required for the operation of NPPs. These computer-based displays contain a very large quantity of information and present it in a greater variety of formats than a conventional MCR. For example, they contain more elements such as abbreviations, labels, icons, symbols, coding, and highlighting than conventional ones. As computer-based displays contain more information, the complexity of the elements becomes greater because each element is less distinctive. A greater understanding is emerging about the effectiveness of computer-based display designs, including how distinctively display elements should be designed. According to Gestalt theory, people tend to group elements that share attributes such as shape, color or pattern, following the principle of similarity. Therefore, it is necessary to consider not only the human operator's perception but also the number of elements composing the computer-based display.

  11. Application of the Decomposition Method to the Design Complexity of Computer-based Display

    International Nuclear Information System (INIS)

    Kim, Hyoung Ju; Lee, Seung Woo; Seong, Poong Hyun; Park, Jin Kyun

    2012-01-01

    The importance of the design of human machine interfaces (HMIs) for human performance and safety has long been recognized in process industries. In the case of nuclear power plants (NPPs), HMIs have significant implications for safety, since poor implementation of HMIs can impair the operators' information searching ability, which is considered one of the important aspects of human behavior. To support and increase the efficiency of the operators' information searching behavior, advanced HMIs based on computer technology are provided. Operators in an advanced main control room (MCR) acquire information through the video display units (VDUs) and the large display panel (LDP) required for the operation of NPPs. These computer-based displays contain a very large quantity of information and present it in a greater variety of formats than a conventional MCR. For example, they contain more elements such as abbreviations, labels, icons, symbols, coding, and highlighting than conventional ones. As computer-based displays contain more information, the complexity of the elements becomes greater because each element is less distinctive. A greater understanding is emerging about the effectiveness of computer-based display designs, including how distinctively display elements should be designed. According to Gestalt theory, people tend to group elements that share attributes such as shape, color or pattern, following the principle of similarity. Therefore, it is necessary to consider not only the human operator's perception but also the number of elements composing the computer-based display.

  12. Empty tracks optimization based on Z-Map model

    Science.gov (United States)

    Liu, Le; Yan, Guangrong; Wang, Zaijun; Zang, Genao

    2017-12-01

    For parts with many features, there are many empty (non-cutting) tool tracks during machining. If these tracks are not optimized, machining efficiency is seriously affected. In this paper, the characteristics of empty tracks are studied in detail. Combining an existing optimization algorithm with a Z-Map model, a new track optimization method is proposed. In this method, the tool tracks are divided into unit processing sections, and Z-Map model simulation is used to analyze the order constraints between the unit segments. The empty-stroke optimization problem is transformed into a TSP with sequential constraints, which is then solved by a genetic algorithm. This optimization method can handle both simple and complex structural parts, effectively planning the empty tracks and greatly improving processing efficiency.
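
    A minimal genetic algorithm for the TSP step described above might look as follows. Note that the paper's sequential (machining-order) constraints are omitted for brevity, so this sketch handles only the unconstrained tour.

        import numpy as np

        def ga_tsp(dist, pop_size=100, gens=500, mut_rate=0.2, seed=0):
            """Minimal GA for the TSP: tournament selection, ordered crossover,
            swap mutation. (The paper additionally enforces machining-order
            constraints, which this sketch omits.)"""
            rng = np.random.default_rng(seed)
            n = len(dist)
            def tour_len(t):
                return dist[t, np.roll(t, -1)].sum()
            pop = [rng.permutation(n) for _ in range(pop_size)]
            for _ in range(gens):
                fit = np.array([tour_len(t) for t in pop])
                new_pop = [pop[int(np.argmin(fit))]]  # elitism: keep best tour
                while len(new_pop) < pop_size:
                    # Tournament selection of two parents.
                    i = rng.integers(pop_size, size=2)
                    j = rng.integers(pop_size, size=2)
                    p1 = pop[i[0]] if fit[i[0]] < fit[i[1]] else pop[i[1]]
                    p2 = pop[j[0]] if fit[j[0]] < fit[j[1]] else pop[j[1]]
                    # Ordered crossover: copy a slice of p1, fill the rest from p2.
                    a, b = sorted(rng.integers(n, size=2))
                    child = -np.ones(n, dtype=int)
                    child[a:b] = p1[a:b]
                    rest = [c for c in p2 if c not in child[a:b]]
                    child[np.where(child < 0)[0]] = rest
                    if rng.random() < mut_rate:  # swap mutation
                        u, v = rng.integers(n, size=2)
                        child[u], child[v] = child[v], child[u]
                    new_pop.append(child)
                pop = new_pop
            fit = np.array([tour_len(t) for t in pop])
            return pop[int(np.argmin(fit))], float(fit.min())

        # Example with a random distance matrix over 20 points:
        # pts = np.random.default_rng(0).random((20, 2))
        # dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
        # tour, length = ga_tsp(dist)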

  13. Measurement and decomposition of energy efficiency of Northeast China based on super efficiency DEA model and Malmquist index.

    Science.gov (United States)

    Ma, Xiaojun; Liu, Yan; Wei, Xiaoxue; Li, Yifan; Zheng, Mengchen; Li, Yudong; Cheng, Chaochao; Wu, Yumei; Liu, Zhaonan; Yu, Yuanbo

    2017-08-01

    Environmental problems have become a pressing international issue, and experts and scholars pay increasing attention to energy efficiency. Unlike most studies, which analyze changes of total-factor energy efficiency (TFEE) at the inter-provincial or regional-city level, here TFEE is calculated as the ratio of target energy input to actual energy input based on data for prefecture-level cities, which is more accurate. Many studies treat total factor productivity (TFP) as TFEE in provincial-level analyses. This paper computes TFEE more reliably by super efficiency DEA, observes the changes of TFEE, analyzes its relation with TFP, and shows that TFP is not equal to TFEE. Additionally, the internal influences on TFEE are obtained via Malmquist index decomposition, and the external influences on TFEE are then analyzed with Tobit models. The results demonstrate that Heilongjiang has the highest TFEE, followed by Jilin, while Liaoning has the lowest TFEE. Finally, some policy suggestions are proposed based on the influences on energy efficiency and the study results.

  14. A Cutting Pattern Recognition Method for Shearers Based on Improved Ensemble Empirical Mode Decomposition and a Probabilistic Neural Network

    Directory of Open Access Journals (Sweden)

    Jing Xu

    2015-10-01

    Full Text Available In order to guarantee the stable operation of shearers and promote construction of an automatic coal mining working face, an online cutting pattern recognition method with high accuracy and speed, based on Improved Ensemble Empirical Mode Decomposition (IEEMD) and a Probabilistic Neural Network (PNN), is proposed. An industrial microphone is installed on the shearer and the cutting sound is collected as the recognition criterion, overcoming the disadvantages of traditional detectors: large size, contact measurement and low identification rate. To avoid end-point effects and get rid of undesirable intrinsic mode function (IMF) components in the initial signal, IEEMD is conducted on the sound. End-point continuation based on the practical storage data is performed first to overcome the end-point effect. Next, the average correlation coefficient, calculated from the correlation of the first IMF with the others, is introduced to select essential IMFs. Then the energy and standard deviation of the remaining IMFs are extracted as features, and a PNN is applied to classify the cutting patterns. Finally, a simulation example, with an accuracy of 92.67%, and an industrial application prove the efficiency and correctness of the proposed method.
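
    One plausible reading of the IMF selection and feature extraction steps is sketched below. It assumes the IMFs have already been produced by an (ensemble) EMD implementation, and the exact form of the paper's average-correlation rule is our assumption.

        import numpy as np

        def select_imfs_and_features(imfs):
            """Given IMFs of the cutting sound (shape: n_imfs x n_samples), keep
            IMFs whose correlation with the first IMF exceeds the average
            correlation coefficient (our reading of the selection rule), then
            extract energy and standard deviation of each kept IMF as features."""
            imfs = np.asarray(imfs)
            corr = np.array([abs(np.corrcoef(imfs[0], imf)[0, 1]) for imf in imfs[1:]])
            threshold = corr.mean()  # the average correlation coefficient
            kept = [imfs[0]] + [imf for imf, c in zip(imfs[1:], corr) if c >= threshold]
            feats = []
            for imf in kept:
                feats += [float(np.sum(imf**2)), float(np.std(imf))]  # energy, std
            return np.array(feats)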

  15. An optimized digital watermarking algorithm in wavelet domain based on differential evolution for color image.

    Science.gov (United States)

    Cui, Xinchun; Niu, Yuying; Zheng, Xiangwei; Han, Yingshuai

    2018-01-01

    In this paper, a new color watermarking algorithm based on differential evolution is proposed. A color host image is first converted from RGB space to YIQ space, which is more suitable for the human visual system. Then, three-level discrete wavelet transformation is applied to the luminance component Y, generating four frequency sub-bands, and singular value decomposition is performed on these sub-bands. In the watermark embedding process, discrete wavelet transformation is applied to the watermark image after scrambling encryption. The new algorithm uses a differential evolution algorithm with adaptive optimization to choose appropriate scaling factors. Experimental results show that the proposed algorithm performs better in terms of invisibility and robustness.
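
    A hedged sketch of the scaling-factor search: SciPy's differential_evolution can tune per-sub-band embedding strengths against a fitness that trades imperceptibility against robustness. The embed, extract, and attack callables stand in for the paper's DWT+SVD pipeline and are hypothetical, as is the fitness weighting.

        import numpy as np
        from scipy.optimize import differential_evolution

        # Hypothetical fitness: balance imperceptibility (PSNR of the watermarked
        # image) against robustness (correlation of the extracted watermark after
        # a simulated attack). embed/extract/attack are placeholders for the
        # paper's DWT+SVD pipeline, not a specified API.
        def fitness(scales, host, mark, embed, extract, attack):
            marked = embed(host, mark, scales)
            psnr = 10 * np.log10(255.0**2 / np.mean((host - marked) ** 2))
            rec = extract(attack(marked), scales)
            nc = np.corrcoef(mark.ravel(), rec.ravel())[0, 1]
            return -(0.5 * psnr / 50.0 + 0.5 * nc)  # maximize both, so minimize negative

        # One scaling factor per sub-band (LL, LH, HL, HH), searched over [0.01, 0.5]:
        # result = differential_evolution(
        #     fitness, bounds=[(0.01, 0.5)] * 4,
        #     args=(host, mark, embed, extract, attack), maxiter=50)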

  16. Intelligent Fault Diagnosis of HVCB with Feature Space Optimization-Based Random Forest.

    Science.gov (United States)

    Ma, Suliang; Chen, Mingxuan; Wu, Jianwen; Wang, Yuhao; Jia, Bowen; Jiang, Yuan

    2018-04-16

    Mechanical faults of high-voltage circuit breakers (HVCBs) commonly occur over long-term operation, so extracting fault features and identifying the fault type have become key issues for ensuring the security and reliability of the power supply. Based on wavelet packet decomposition technology and the random forest algorithm, an effective identification system was developed in this paper. First, because Shannon entropy gives an incomplete description, the wavelet packet time-frequency energy rate (WTFER) was adopted as the input vector for the classifier model in the feature selection procedure. Then, a random forest classifier was used to diagnose the HVCB fault, assess the importance of the feature variables and optimize the feature space. Finally, the approach was verified on actual HVCB vibration signals covering six typical fault classes. The comparative experiments show that the classification accuracy of the proposed method reached 93.33% with the original feature space and up to 95.56% with the optimized input feature vector. This indicates that the feature optimization procedure is successful, and that the proposed diagnosis algorithm has higher efficiency and robustness than traditional methods.
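
    A rough sketch of the feature pipeline, assuming PyWavelets for the wavelet packet decomposition and scikit-learn for the random forest. The energy-rate definition and the median-importance pruning rule are our assumptions, not the paper's exact procedure.

        import numpy as np
        import pywt
        from sklearn.ensemble import RandomForestClassifier

        def wp_energy_rate(signal, wavelet="db4", level=3):
            """Wavelet packet time-frequency energy rate: the fraction of total
            energy in each terminal node of the wavelet packet tree."""
            wp = pywt.WaveletPacket(signal, wavelet=wavelet, maxlevel=level)
            energies = np.array([np.sum(node.data**2)
                                 for node in wp.get_level(level, order="natural")])
            return energies / energies.sum()

        # X_raw: (n_samples, n_points) vibration signals; y: fault class labels.
        # X = np.array([wp_energy_rate(s) for s in X_raw])
        # rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
        # keep = rf.feature_importances_ > np.median(rf.feature_importances_)
        # rf_opt = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:, keep], y)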

  17. TRUST MODEL FOR SOCIAL NETWORK USING SINGULAR VALUE DECOMPOSITION

    Directory of Open Access Journals (Sweden)

    Davis Bundi Ntwiga

    2016-06-01

    Full Text Available For effective interactions to take place in a social network, trust is important. We model the trust of agents using peer-to-peer reputation ratings in the network, which form a real-valued matrix. Singular value decomposition discounts the reputation ratings to estimate trust levels, since trust is the subjective probability of future expectations based on current reputation ratings. Reputation and trust are closely related, and singular value decomposition is an ideal technique for eliminating error when estimating trust from reputation ratings. Trust estimation from reputation is found to be optimal at a discounting level of 20%.
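
    One way to read the SVD discounting is as rank truncation of the reputation matrix. The mapping from the 20% discounting level to the number of retained components below is our assumption, not the paper's stated rule.

        import numpy as np

        def svd_trust(ratings, discount=0.2):
            """Rank-truncated SVD of the peer-to-peer reputation matrix: keep the
            dominant singular components and drop the rest as noise. The 20%
            level is the value the record reports as optimal; interpreting it as
            dropping the trailing 20% of components is our assumption."""
            U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
            k = max(1, int(round((1.0 - discount) * len(s))))
            trust = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
            return np.clip(trust, 0.0, None)

        # Example: 5 agents rating each other on [0, 1]; zero diagonal (no self-rating).
        R = np.random.default_rng(1).random((5, 5)) * (1 - np.eye(5))
        T = svd_trust(R)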

  18. Elitism set based particle swarm optimization and its application

    Directory of Open Access Journals (Sweden)

    Yanxia Sun

    2017-01-01

    Full Text Available Topology plays an important role in enabling Particle Swarm Optimization (PSO) to achieve good optimization performance. It is difficult to find one topology structure that lets the particles outperform all others, since optimization performance depends not only on the searching abilities of the particles but also on the type of optimization problem. Three elitist-set-based PSO algorithms without an explicit topology structure are proposed in this paper. An elitist set, based on the individual best experience, is used for communication among the particles. Moreover, to avoid premature convergence of the particles, different statistical methods are used in the three proposed methods. The performance of the proposed PSOs is compared with the results of standard PSO 2011 and several PSOs with different topologies, and the simulation results and comparisons demonstrate that the proposed PSO with adaptive probabilistic preference achieves good optimization performance.

  19. Shape signature based on Ricci flow and optimal mass transportation

    Science.gov (United States)

    Luo, Wei; Su, Zengyu; Zhang, Min; Zeng, Wei; Dai, Junfei; Gu, Xianfeng

    2014-11-01

    A shape signature based on surface Ricci flow and optimal mass transportation is introduced for the purpose of surface comparison. First, the surface is conformally mapped onto the plane by Ricci flow, which induces a measure on the planar domain. Second, the unique optimal mass transport map is computed that transports the new measure to the canonical measure on the plane. The map is obtained by a convex optimization process. This optimal transport map encodes all the information of the Riemannian metric on the surface. The shape signature consists of the optimal transport map, together with the mean curvature, which can fully recover the original surface. The discrete theories of surface Ricci flow and optimal mass transportation are explained thoroughly. The algorithms are given in detail. The signature is tested on human facial surfaces with different expressions acquired by a structured-light 3-D scanner based on the phase-shifting method. The experimental results demonstrate the efficiency and efficacy of the method.

  20. Optimal, Reliability-Based Code Calibration

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    2002-01-01

    Reliability based code calibration is considered in this paper. It is described how the results of FORM based reliability analysis may be related to the partial safety factors and characteristic values. The code calibration problem is presented in a decision theoretical form and it is discussed how...... of reliability based code calibration of LRFD based design codes....

  1. Support vector machines optimization based theory, algorithms, and extensions

    CERN Document Server

    Deng, Naiyang; Zhang, Chunhua

    2013-01-01

    Support Vector Machines: Optimization Based Theory, Algorithms, and Extensions presents an accessible treatment of the two main components of support vector machines (SVMs): classification problems and regression problems. The book emphasizes the close connection between optimization theory and SVMs, since optimization is one of the pillars on which SVMs are built. The authors share insight on many of their research achievements. They give a precise interpretation of statistical learning theory for C-support vector classification. They also discuss regularized twin support vector machines.

  2. Optimal design of RTCs in digital circuit fault self-repair based on global signal optimization

    Institute of Scientific and Technical Information of China (English)

    Zhang Junbin; Cai Jinyan; Meng Yafeng

    2016-01-01

    Since digital circuits have been widely and thoroughly applied in various fields, electronic systems are increasingly complicated and require greater reliability. Faults may occur in electronic systems in complicated environments. If immediate field repairs are not made, electronic systems will not run normally, leading to serious losses. The traditional method for improving system reliability, based on redundant fault-tolerant techniques, has been unable to meet the requirements. Therefore, on the basis of the evolvable-hardware-based and reparation-balance-technology-based electronic circuit fault self-repair strategy proposed in our preliminary work, the optimal design of rectification circuits (RTCs) in electronic circuit fault self-repair based on global signal optimization is investigated in depth in this paper. First, the basic theory of RTC optimal design based on global signal optimization is proposed. Second, relevant considerations and suitable ranges are analyzed. Then, the basic flow of RTC optimal design is presented. Finally, a typical circuit is selected for simulation verification, and detailed simulation analysis is made of five circumstances that occur during RTC evolution. The simulation results prove that, compared with a conventionally designed RTC, an RTC designed with global signal optimization is lower in hardware cost, faster in circuit evolution, higher in convergence precision, and higher in circuit evolution success rate. Therefore, the global-signal-optimization-based RTC optimal design method applied in electronic circuit fault self-repair technology is proven to be feasible, effective, and advantageous.

  3. CONFAC Decomposition Approach to Blind Identification of Underdetermined Mixtures Based on Generating Function Derivatives

    NARCIS (Netherlands)

    de Almeida, Andre L. F.; Luciani, Xavier; Stegeman, Alwin; Comon, Pierre

    This work proposes a new tensor-based approach to solve the problem of blind identification of underdetermined mixtures of complex-valued sources exploiting the cumulant generating function (CGF) of the observations. We show that a collection of second-order derivatives of the CGF of the

  4. Kinetics of thermal decomposition and kinetics of substitution reaction of nano uranyl Schiff base complexes

    Czech Academy of Sciences Publication Activity Database

    Asadi, Z.; Zeinali, A.; Dušek, Michal; Eigner, Václav

    2014-01-01

    Roč. 46, č. 12 (2014), s. 718-729 ISSN 0538-8066 R&D Projects: GA ČR(CZ) GAP204/11/0809 Institutional support: RVO:68378271 Keywords : uranyl * Schiff base * kinetics * anticancer activity Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 1.517, year: 2014

  5. Portfolio optimization based on nonparametric estimation methods

    Directory of Open Access Journals (Sweden)

    Mahsa Ghandehari

    2017-03-01

    Full Text Available One of the major issues investors face in capital markets is deciding which stocks to invest in and selecting an optimal portfolio. This is done by assessing risk and expected return. In the portfolio selection problem, if asset returns are normally distributed, variance and standard deviation are used as risk measures. But expected asset returns are not necessarily normal and sometimes differ dramatically from a normal distribution. This paper introduces conditional value at risk (CVaR) as a risk measure in a nonparametric framework and, for a given expected return, derives the optimal portfolio; the method is compared with the linear programming method. The data used in this study consist of monthly returns of 15 companies selected from the top 50 companies of the Tehran Stock Exchange (as of winter 1392), covering April 1388 to June 1393 (Iranian calendar). The results of this study show the superiority of the nonparametric method over the linear programming method, and the nonparametric method is much faster.
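
    The CVaR optimization can be illustrated with the standard Rockafellar-Uryasev scenario LP, which we take to be the spirit of the paper's nonparametric approach; the paper's exact estimator may differ.

        import numpy as np
        from scipy.optimize import linprog

        def min_cvar_portfolio(returns, target, beta=0.95):
            """Rockafellar-Uryasev LP for a minimum-CVaR long-only portfolio.
            `returns` is an (N scenarios x n assets) matrix of historical returns."""
            N, n = returns.shape
            # Decision vector: [w (n weights), alpha (1), u (N tail slacks)]
            c = np.concatenate([np.zeros(n), [1.0], np.full(N, 1.0 / ((1 - beta) * N))])
            # u_i >= -r_i.w - alpha  <=>  -r_i.w - alpha - u_i <= 0
            A_ub = np.hstack([-returns, -np.ones((N, 1)), -np.eye(N)])
            b_ub = np.zeros(N)
            # Required expected return: -mean_return.w <= -target
            A_ub = np.vstack([A_ub, np.concatenate([-returns.mean(axis=0), np.zeros(N + 1)])])
            b_ub = np.append(b_ub, -target)
            A_eq = np.concatenate([np.ones(n), np.zeros(N + 1)])[None, :]  # weights sum to 1
            bounds = [(0, None)] * n + [(None, None)] + [(0, None)] * N
            res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0], bounds=bounds)
            return res.x[:n], res.fun  # optimal weights and CVaR value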

  6. Optimal separable bases and series expansions

    International Nuclear Information System (INIS)

    Poirier, B.

    1997-01-01

    A method is proposed for the efficient calculation of the Green's functions and eigenstates for quantum systems of two or more dimensions. For a given Hamiltonian, the best possible separable approximation is obtained from the set of all Hilbert-space operators. It is shown that this determination itself, as well as the solution of the resultant approximation, is a problem of reduced dimensionality. Moreover, the approximate eigenstates constitute the optimal separable basis, in the sense of self-consistent field theory. The full solution is obtained from the approximation via iterative expansion. In the time-independent perturbation expansion for instance, all of the first-order energy corrections are zero. In the Green's function case, we have a distorted-wave Born series with optimized convergence properties. This series may converge even when the usual Born series diverges. Analytical results are presented for an application of the method to the two-dimensional shifted harmonic-oscillator system, in the course of which the quantum tanh² potential problem is solved exactly. The universal presence of bound states in the latter is shown to imply long-lived resonances in the former. In a comparison with other theoretical methods, we find that the reaction path Hamiltonian fails to predict such resonances. © 1997 The American Physical Society

  7. Inverse scale space decomposition

    DEFF Research Database (Denmark)

    Schmidt, Marie Foged; Benning, Martin; Schönlieb, Carola-Bibiane

    2018-01-01

    We investigate the inverse scale space flow as a decomposition method for decomposing data into generalised singular vectors. We show that the inverse scale space flow, based on convex and even and positively one-homogeneous regularisation functionals, can decompose data represented by the application of a forward operator to a linear combination of generalised singular vectors into its individual singular vectors. We verify that for this decomposition to hold true, two additional conditions on the singular vectors are sufficient: orthogonality in the data space and inclusion of partial sums of the subgradients of the singular vectors in the subdifferential of the regularisation functional at zero. We also address the converse question of when the inverse scale space flow returns a generalised singular vector given that the initial data is arbitrary (and therefore not necessarily in the range ...

  8. An integrated reliability-based design optimization of offshore towers

    International Nuclear Information System (INIS)

    Karadeniz, Halil; Togan, Vedat; Vrouwenvelder, Ton

    2009-01-01

    After recognizing the uncertainty in parameters such as material, loading and geometry, in contrast with conventional optimization, the reliability-based design optimization (RBDO) concept has become more meaningful for achieving an economical design; it combines a reliability analysis with an optimization algorithm. RBDO procedures include structural analysis, reliability analysis and sensitivity analysis, both for optimization and for reliability. The efficiency of an RBDO system depends on these numerical algorithms. In this work, an integrated system of algorithms is proposed to implement the RBDO of offshore towers, which are subjected to extreme wave loading. The numerical strategies interacting with each other to fulfill the RBDO of the towers are as follows: (a) a structural analysis program, SAPOS; (b) an optimization program, SQP; and (c) a reliability analysis program based on FORM. A demonstration with an example tripod tower under reliability constraints based on limit states of critical stress, buckling and natural frequency is presented.

  9. An integrated reliability-based design optimization of offshore towers

    Energy Technology Data Exchange (ETDEWEB)

    Karadeniz, Halil [Faculty of Civil Engineering and Geosciences, Delft University of Technology, Delft (Netherlands)], E-mail: h.karadeniz@tudelft.nl; Togan, Vedat [Department of Civil Engineering, Karadeniz Technical University, Trabzon (Turkey); Vrouwenvelder, Ton [Faculty of Civil Engineering and Geosciences, Delft University of Technology, Delft (Netherlands)

    2009-10-15

    After recognizing the uncertainty in parameters such as material, loading and geometry, in contrast with conventional optimization, the reliability-based design optimization (RBDO) concept has become more meaningful for achieving an economical design; it combines a reliability analysis with an optimization algorithm. RBDO procedures include structural analysis, reliability analysis and sensitivity analysis, both for optimization and for reliability. The efficiency of an RBDO system depends on these numerical algorithms. In this work, an integrated system of algorithms is proposed to implement the RBDO of offshore towers, which are subjected to extreme wave loading. The numerical strategies interacting with each other to fulfill the RBDO of the towers are as follows: (a) a structural analysis program, SAPOS; (b) an optimization program, SQP; and (c) a reliability analysis program based on FORM. A demonstration with an example tripod tower under reliability constraints based on limit states of critical stress, buckling and natural frequency is presented.

  10. A Blind Adaptive Color Image Watermarking Scheme Based on Principal Component Analysis, Singular Value Decomposition and Human Visual System

    Directory of Open Access Journals (Sweden)

    M. Imran

    2017-09-01

    Full Text Available A blind adaptive color image watermarking scheme based on principal component analysis, singular value decomposition, and the human visual system is proposed. The use of principal component analysis to decorrelate the three color channels of the host image improves the perceptual quality of the watermarked image, while the human visual system and a fuzzy inference system help to improve both imperceptibility and robustness by selecting an adaptive scaling factor, so that areas more prone to noise can carry more information than less prone areas. To achieve security, the location of watermark embedding is kept secret and used as a key at the time of watermark extraction; for capacity, both singular values and vectors are involved in the watermark embedding process. As a result, four contradictory requirements (imperceptibility, robustness, security and capacity) are achieved, as the results suggest. Both subjective and objective methods are used to examine the performance of the proposed scheme. For subjective analysis, the watermarked images and the watermarks extracted from attacked watermarked images are shown. For objective analysis of imperceptibility, peak signal to noise ratio, structural similarity index, visual information fidelity and normalized color difference are used; for objective analysis of robustness, normalized correlation, bit error rate, normalized hamming distance and global authentication rate are used. Security is checked by using different keys to extract the watermark. The proposed scheme is compared with state-of-the-art watermarking techniques and shows better performance, as the results suggest.

  11. Improving forecasting accuracy of medium and long-term runoff using artificial neural network based on EEMD decomposition.

    Science.gov (United States)

    Wang, Wen-chuan; Chau, Kwok-wing; Qiu, Lin; Chen, Yang-bo

    2015-05-01

    Hydrological time series forecasting is one of the most important applications in modern hydrology, especially for effective reservoir management. In this research, an artificial neural network (ANN) model coupled with ensemble empirical mode decomposition (EEMD) is presented for forecasting medium and long-term runoff time series. First, the original runoff time series is decomposed into a finite and often small number of intrinsic mode functions (IMFs) and a residual series using the EEMD technique, for attaining deeper insight into the data characteristics. Then all IMF components and the residue are predicted, respectively, through appropriate ANN models. Finally, the forecasted results of the modeled IMFs and residual series are summed to formulate an ensemble forecast for the original annual runoff series. Two annual reservoir runoff time series from Biuliuhe and Mopanshan in China are investigated using the developed model based on four performance evaluation measures (RMSE, MAPE, R and NSEC). The results obtained in this work indicate that EEMD can effectively enhance forecasting accuracy and the proposed EEMD-ANN model can attain significant improvement over the ANN approach in medium and long-term runoff time series forecasting. Copyright © 2015 Elsevier Inc. All rights reserved.
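
    A compact decompose-predict-sum sketch in the spirit of the EEMD-ANN model, assuming the PyEMD package for EEMD and a small scikit-learn MLP per component; the network size, lag count, and the use of PyEMD are our choices, not the paper's.

        import numpy as np
        from PyEMD import EEMD                      # one available EEMD implementation
        from sklearn.neural_network import MLPRegressor

        def eemd_ann_forecast(series, lags=4):
            """EEMD splits the runoff series into components (IMFs, plus a residue
            depending on the implementation), one small ANN is fitted per
            component on lagged values, and the one-step forecasts are summed."""
            components = EEMD().eemd(np.asarray(series, dtype=float))
            forecast = 0.0
            for comp in components:
                X = np.array([comp[i:i + lags] for i in range(len(comp) - lags)])
                y = comp[lags:]
                model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000,
                                     random_state=0).fit(X, y)
                forecast += model.predict(comp[-lags:].reshape(1, -1))[0]
            return forecast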

  12. Seismic spectral decomposition and analysis based on Wigner–Ville distribution for sandstone reservoir characterization in West Sichuan depression

    International Nuclear Information System (INIS)

    Wu, Xiaoyang; Liu, Tianyou

    2010-01-01

    Reflections from a hydrocarbon-saturated zone are generally expected to have a tendency to be low frequency. Previous work has shown the application of seismic spectral decomposition for low-frequency shadow detection. In this paper, we further analyse the characteristics of spectral amplitude in fractured sandstone reservoirs with different fluid saturations using the Wigner–Ville distribution (WVD)-based method. We describe the geometric structure of cross-terms due to the bilinear nature of the WVD and eliminate the cross-terms using the smoothed pseudo-WVD (SPWVD) with independent time and frequency Gaussian kernels as smoothing windows. The SPWVD is finally applied to seismic data from the West Sichuan depression. We focus our study on the comparison of SPWVD spectral amplitudes resulting from different fluid contents. It shows that prolific gas reservoirs feature higher peak spectral amplitude at higher peak frequency, which attenuates faster than in low-quality gas reservoirs and dry or wet reservoirs. This can be regarded as a spectral attenuation signature for future exploration in the study area.

  13. General filtering method for electronic speckle pattern interferometry fringe images with various densities based on variational image decomposition.

    Science.gov (United States)

    Li, Biyuan; Tang, Chen; Gao, Guannan; Chen, Mingming; Tang, Shuwei; Lei, Zhenkun

    2017-06-01

    Filtering off speckle noise from a fringe image is one of the key tasks in electronic speckle pattern interferometry (ESPI). In general, ESPI fringe images can be divided into three categories: low-density, high-density, and variable-density fringe images. In this paper, we present a general filtering method based on variational image decomposition that can filter speckle noise from ESPI fringe images of all these densities. In our method, a variable-density ESPI fringe image is decomposed into low-density fringes, high-density fringes, and noise; a low-density fringe image into low-density fringes and noise; and a high-density fringe image into high-density fringes and noise. We give suitable function spaces to describe low-density fringes, high-density fringes, and noise, respectively. We then construct several models and numerical algorithms for ESPI fringe images of various densities, and investigate the performance of these models in extensive experiments. Finally, we compare our proposed models with the windowed Fourier transform method and the coherence-enhancing diffusion partial differential equation filter, which may be the most effective filtering methods at present. Furthermore, we use the proposed method to filter a collection of experimentally obtained ESPI fringe images of poor quality. The experimental results demonstrate the performance of our proposed method.

  14. A Combined Methodology to Eliminate Artifacts in Multichannel Electrogastrogram Based on Independent Component Analysis and Ensemble Empirical Mode Decomposition.

    Science.gov (United States)

    Sengottuvel, S; Khan, Pathan Fayaz; Mariyappa, N; Patel, Rajesh; Saipriya, S; Gireesan, K

    2018-06-01

    Cutaneous measurements of electrogastrogram (EGG) signals are heavily contaminated by artifacts due to cardiac activity, breathing, motion, and electrode drifts, whose effective elimination remains an open problem. A common methodology is proposed, combining independent component analysis (ICA) and ensemble empirical mode decomposition (EEMD), to denoise gastric slow-wave signals in multichannel EGG data. Sixteen electrodes are fixed over the upper abdomen to measure EGG signals under three gastric conditions, namely preprandial, immediately postprandial, and 2 h postprandial, for three healthy subjects and a subject with a gastric disorder. Instantaneous frequencies of the intrinsic mode functions obtained with the EEMD technique are analyzed to identify and remove each of the artifacts individually. A critical investigation of the proposed ICA-EEMD method reveals its ability to provide higher attenuation of artifacts and lower distortion than the ICA-EMD method and conventional techniques, like bandpass and adaptive filtering. Characteristic changes in the slow-wave frequencies across the three gastric conditions could be determined from the denoised signals in all cases. The results therefore encourage the use of the EEMD-based technique for denoising gastric signals in clinical practice.

  15. Assessment of autonomic nervous system by using empirical mode decomposition-based reflection wave analysis during non-stationary conditions

    International Nuclear Information System (INIS)

    Chang, C C; Hsiao, T C; Kao, S C; Hsu, H Y

    2014-01-01

    Arterial blood pressure (ABP) is an important indicator of cardiovascular circulation and exhibits various intrinsic regulations. It has been found that the intrinsic characteristics of blood vessels can be assessed quantitatively by ABP analysis (called reflection wave analysis (RWA)), but conventional RWA is insufficient for assessment under non-stationary conditions, such as the Valsalva maneuver. Recently, a novel adaptive method called empirical mode decomposition (EMD) was proposed for non-stationary data analysis. This study proposes an RWA algorithm based on EMD (EMD-RWA). A total of 51 subjects participated in this study, including 39 healthy subjects and 12 patients with autonomic nervous system (ANS) dysfunction. The results showed that EMD-RWA provided reliable estimation of reflection time at baseline and during head-up tilt (HUT). Moreover, the estimated reflection time is able to assess ANS function non-invasively, both in normal, healthy subjects and in patients with ANS dysfunction. EMD-RWA provides a new approach to reflection time estimation in non-stationary conditions, and also helps with non-invasive ANS assessment. (paper)

  16. Single-Trial Decoding of Bistable Perception Based on Sparse Nonnegative Tensor Decomposition

    Science.gov (United States)

    Wang, Zhisong; Maier, Alexander; Logothetis, Nikos K.; Liang, Hualou

    2008-01-01

    The study of the neuronal correlates of the spontaneous alternation in perception elicited by bistable visual stimuli is promising for understanding the mechanisms of neural information processing and the neural basis of visual perception and perceptual decision-making. In this paper, we develop a sparse nonnegative tensor factorization (NTF)-based method to extract features from the local field potential (LFP), collected from the middle temporal (MT) visual cortex in a macaque monkey, for decoding its bistable structure-from-motion (SFM) perception. We apply the feature extraction approach to the multichannel time-frequency representation of the intracortical LFP data. The advantage of the sparse NTF-based feature extraction approach lies in its capability to yield components common across the space, time, and frequency domains, yet discriminative across different conditions, without prior knowledge of the discriminating frequency bands and temporal windows for a specific subject. We employ a support vector machine (SVM) classifier based on the features of the NTF components for single-trial decoding of the reported perception. Our results suggest that although other bands also have certain discriminability, the gamma band feature carries the most discriminative information for bistable perception, and that imposing sparseness constraints on the nonnegative tensor factorization improves extraction of this feature. PMID:18528515

  17. An optimal FFT-based anisotropic power spectrum estimator

    Energy Technology Data Exchange (ETDEWEB)

    Hand, Nick; Seljak, Uroš [Department of Astronomy, University of California, Berkeley, CA 94720 (United States); Li, Yin; Slepian, Zachary, E-mail: nhand@berkeley.edu, E-mail: yin.li@berkeley.edu, E-mail: zslepian@lbl.gov, E-mail: useljak@berkeley.edu [Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States)

    2017-07-01

    Measurements of line-of-sight dependent clustering via the galaxy power spectrum's multipole moments constitute a powerful tool for testing theoretical models in large-scale structure. Recent work shows that this measurement, including a moving line-of-sight, can be accelerated using Fast Fourier Transforms (FFTs) by decomposing the Legendre polynomials into products of Cartesian vectors. Here, we present a faster, optimal means of using FFTs for this measurement. We avoid redundancy present in the Cartesian decomposition by using a spherical harmonic decomposition of the Legendre polynomials. With this method, a given multipole of order ℓ requires only 2ℓ+1 FFTs rather than the (ℓ+1)(ℓ+2)/2 FFTs of the Cartesian approach. For the hexadecapole (ℓ = 4), this translates to 40% fewer FFTs, with increased savings for higher ℓ. The reduction in wall-clock time enables the calculation of finely-binned wedges in P(k,μ), obtained by computing multipoles up to a large ℓ_max and combining them. This transformation has a number of advantages. We demonstrate that by using non-uniform bins in μ, we can isolate plane-of-sky (angular) systematics to a narrow bin at μ ≅ 0 while eliminating the contamination from all other bins. We also show that the covariance matrix of clustering wedges binned uniformly in μ becomes ill-conditioned when combining multipoles up to large values of ℓ_max, but that the problem can be avoided with non-uniform binning. As an example, we present results using ℓ_max = 16, for which our procedure requires a factor of 3.4 fewer FFTs than the Cartesian method, while removing the first μ bin leads only to a 7% increase in statistical error on fσ_8, as compared to a 54% increase with ℓ_max = 4.

  18. An optimal FFT-based anisotropic power spectrum estimator

    Science.gov (United States)

    Hand, Nick; Li, Yin; Slepian, Zachary; Seljak, Uroš

    2017-07-01

    Measurements of line-of-sight dependent clustering via the galaxy power spectrum's multipole moments constitute a powerful tool for testing theoretical models in large-scale structure. Recent work shows that this measurement, including a moving line-of-sight, can be accelerated using Fast Fourier Transforms (FFTs) by decomposing the Legendre polynomials into products of Cartesian vectors. Here, we present a faster, optimal means of using FFTs for this measurement. We avoid redundancy present in the Cartesian decomposition by using a spherical harmonic decomposition of the Legendre polynomials. With this method, a given multipole of order l requires only 2l+1 FFTs rather than the (l+1)(l+2)/2 FFTs of the Cartesian approach. For the hexadecapole (l = 4), this translates to 40% fewer FFTs, with increased savings for higher l. The reduction in wall-clock time enables the calculation of finely-binned wedges in P(k,μ), obtained by computing multipoles up to a large l_max and combining them. This transformation has a number of advantages. We demonstrate that by using non-uniform bins in μ, we can isolate plane-of-sky (angular) systematics to a narrow bin at μ ≃ 0 while eliminating the contamination from all other bins. We also show that the covariance matrix of clustering wedges binned uniformly in μ becomes ill-conditioned when combining multipoles up to large values of l_max, but that the problem can be avoided with non-uniform binning. As an example, we present results using l_max = 16, for which our procedure requires a factor of 3.4 fewer FFTs than the Cartesian method, while removing the first μ bin leads only to a 7% increase in statistical error on fσ_8, as compared to a 54% increase with l_max = 4.
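
    As a quick arithmetic cross-check of the FFT counts quoted in these two records, the short sketch below (assuming the factor of 3.4 counts all even multipoles from 0 up to l_max = 16, which is our reading) reproduces both the 40% saving for the hexadecapole alone and the overall factor of about 3.4.

        # Cross-check of the quoted FFT counts for even multipoles 0, 2, ..., l_max.
        def fft_counts(l_max):
            ls = range(0, l_max + 1, 2)
            spherical = sum(2 * l + 1 for l in ls)               # spherical harmonic route
            cartesian = sum((l + 1) * (l + 2) // 2 for l in ls)  # Cartesian route
            return spherical, cartesian

        print(2 * 4 + 1, (4 + 1) * (4 + 2) // 2)  # 9 vs 15: 40% fewer FFTs for l = 4 alone
        s, c = fft_counts(16)
        print(s, c, round(c / s, 1))              # 153 vs 525: factor ~3.4 fewer FFTs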

  19. OPF-Based Optimal Location of Two Systems Two Terminal HVDC to Power System Optimal Operation

    Directory of Open Access Journals (Sweden)

    Mehdi Abolfazli

    2013-04-01

    Full Text Available In this paper, a suitable mathematical model of the two-terminal HVDC system, a power injection model, is provided for optimal power flow (OPF) and OPF-based optimal location. The ability of voltage source converter (VSC)-based HVDC to independently control active and reactive power is well represented by the model. The model is used to develop an OPF-based algorithm for the optimal location of two two-terminal HVDC systems, minimizing total fuel cost and active power losses as the objective function. The optimization framework is modeled as non-linear programming (NLP) and solved with the Matlab and GAMS software. The proposed algorithm is implemented on the IEEE 14- and 30-bus test systems. The simulation results show the ability of the two two-terminal HVDC systems to improve power system operation. Furthermore, the two-terminal HVDC arrangement is compared with PST and OUPFC in power system operation, from economic and technical aspects.

  20. Optimizing block-based maintenance under random machine usage

    NARCIS (Netherlands)

    de Jonge, Bram; Jakobsons, Edgars

    Existing studies on maintenance optimization generally assume that machines are either used continuously, or that times until failure do not depend on the actual usage. In practice, however, these assumptions are often not realistic. In this paper, we consider block-based maintenance optimization

  1. Reliability-Based Optimization of Series Systems of Parallel Systems

    DEFF Research Database (Denmark)

    Enevoldsen, I.; Sørensen, John Dalsgaard

    Reliability-based design of structural systems is considered. Especially systems where the reliability model is a series system of parallel systems are analysed. A sensitivity analysis for this class of problems is presented. Direct and sequential optimization procedures to solve the optimization...

  2. Optimization of microgrids based on controller designing for ...

    African Journals Online (AJOL)

    The power quality of a microgrid during islanded operation is strongly related to the controller performance of the DGs. Therefore, a new optimal control strategy for distributed-generation-based inverters connected to the generalized microgrid is proposed. This work develops optimal control algorithms for the DG ...

  3. Fast matrix factorization algorithm for DOSY based on the eigenvalue decomposition and the difference approximation focusing on the size of observed matrix

    International Nuclear Information System (INIS)

    Tanaka, Yuho; Uruma, Kazunori; Furukawa, Toshihiro; Nakao, Tomoki; Izumi, Kenya; Utsumi, Hiroaki

    2017-01-01

    This paper deals with an analysis problem for diffusion-ordered NMR spectroscopy (DOSY). DOSY is formulated as a matrix factorization problem for a given observed matrix. To solve this problem, the direct exponential curve resolution algorithm (DECRA) is well known. DECRA is based on singular value decomposition; the advantage of this algorithm is that no initial value is required. However, DECRA requires a long calculation time, depending on the size of the given observed matrix, due to the singular value decomposition, and this is a serious problem in practical use. Thus, this paper proposes a new analysis algorithm for DOSY that achieves a short calculation time. To solve the matrix factorization for DOSY without singular value decomposition, this paper focuses on the size of the given observed matrix. The observed matrix in DOSY is a rectangular matrix with more columns than rows, owing to the limited measuring time; thus, the proposed algorithm transforms the given observed matrix into a small observed matrix. The proposed algorithm applies eigenvalue decomposition and difference approximation to the small observed matrix, and the matrix factorization problem for DOSY is solved. Simulations and a data analysis show that the proposed algorithm achieves a shorter calculation time than DECRA while giving analysis results similar to those of DECRA. (author)

  4. A parallel domain decomposition-based implicit method for the Cahn-Hilliard-Cook phase-field equation in 3D

    KAUST Repository

    Zheng, Xiang

    2015-03-01

    We present a numerical algorithm for simulating the spinodal decomposition described by the three dimensional Cahn-Hilliard-Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton-Krylov-Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of the parallel performance, it is found that the implicit domain decomposition approach scales well on supercomputers with a large number of processors. © 2015 Elsevier Inc.

  5. A parallel domain decomposition-based implicit method for the Cahn–Hilliard–Cook phase-field equation in 3D

    International Nuclear Information System (INIS)

    Zheng, Xiang; Yang, Chao; Cai, Xiao-Chuan; Keyes, David

    2015-01-01

    We present a numerical algorithm for simulating the spinodal decomposition described by the three dimensional Cahn–Hilliard–Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton–Krylov–Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of the parallel performance, it is found that the implicit domain decomposition approach scales well on supercomputers with a large number of processors

  6. A parallel domain decomposition-based implicit method for the Cahn-Hilliard-Cook phase-field equation in 3D

    Science.gov (United States)

    Zheng, Xiang; Yang, Chao; Cai, Xiao-Chuan; Keyes, David

    2015-03-01

    We present a numerical algorithm for simulating the spinodal decomposition described by the three dimensional Cahn-Hilliard-Cook (CHC) equation, which is a fourth-order stochastic partial differential equation with a noise term. The equation is discretized in space and time based on a fully implicit, cell-centered finite difference scheme, with an adaptive time-stepping strategy designed to accelerate the progress to equilibrium. At each time step, a parallel Newton-Krylov-Schwarz algorithm is used to solve the nonlinear system. We discuss various numerical and computational challenges associated with the method. The numerical scheme is validated by a comparison with an explicit scheme of high accuracy (and unreasonably high cost). We present steady state solutions of the CHC equation in two and three dimensions. The effect of the thermal fluctuation on the spinodal decomposition process is studied. We show that the existence of the thermal fluctuation accelerates the spinodal decomposition process and that the final steady morphology is sensitive to the stochastic noise. We also show the evolution of the energies and statistical moments. In terms of the parallel performance, it is found that the implicit domain decomposition approach scales well on supercomputers with a large number of processors.
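
    The implicit treatment these three records describe can be miniaturized: below is a 1D backward-Euler Cahn-Hilliard step solved with SciPy's newton_krylov, standing in for the papers' 3D cell-centered scheme with Newton-Krylov-Schwarz and adaptive time-stepping. The Cook noise term and the domain-decomposition preconditioner are omitted, and all grid and physical parameters are illustrative.

        import numpy as np
        from scipy.optimize import newton_krylov

        N, L, eps, dt = 128, 1.0, 0.02, 1e-4
        h = L / N

        def lap(u):
            """Periodic cell-centered finite-difference Laplacian."""
            return (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / h**2

        def ch_residual(c_new, c_old):
            mu = c_new**3 - c_new - eps**2 * lap(c_new)  # chemical potential
            return c_new - c_old - dt * lap(mu)          # backward-Euler residual

        rng = np.random.default_rng(0)
        c = 1e-2 * rng.standard_normal(N)  # small perturbation around c = 0
        for step in range(100):
            # One fully implicit step: solve the nonlinear system with Newton-Krylov.
            c = newton_krylov(lambda x: ch_residual(x, c), c, f_tol=1e-10)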

  7. Iron-based Nanocomposite Synthesised by Microwave Plasma Decomposition of Iron Pentacarbonyl

    Czech Academy of Sciences Publication Activity Database

    David, Bohumil; Pizúrová, Naděžda; Schneeweiss, Oldřich; Hoder, T.; Kudrle, V.; Janča, J.

    2007-01-01

    Roč. 263, - (2007), s. 147-152 ISSN 1012-0386. [Diffusion and Thermodynamics of Materials /IX/. Brno, 13.09.2006-15.09.2006] R&D Projects: GA ČR GA202/04/0221 Institutional research plan: CEZ:AV0Z20410507 Keywords : iron-based nanopowder * synthesis * microwave plasma method Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 0.483, year: 2005 http://www.scientific.net/3-908451-35-3/3.html

  8. Optimization algorithm based on densification and dynamic canonical descent

    Science.gov (United States)

    Bousson, K.; Correia, S. D.

    2006-07-01

    Stochastic methods have gained some popularity in global optimization in that most of them do not assume the cost functions to be differentiable. They have the capability to avoid being trapped by local optima, and may converge even faster than gradient-based optimization methods on some problems. The present paper proposes an optimization method which reduces the search space by means of densification curves, coupled with the dynamic canonical descent algorithm. The performance of the new method is shown on several known problems classically used for testing optimization algorithms, and it proves to outperform competitive algorithms such as simulated annealing and genetic algorithms.

  9. Optimization Research of Generation Investment Based on Linear Programming Model

    Science.gov (United States)

    Wu, Juan; Ge, Xueqian

    Linear programming is an important branch of operations research and a mathematical method that assists people in scientific management. GAMS is an advanced simulation and optimization modeling language that combines complex mathematical programming, such as linear programming (LP), nonlinear programming (NLP), and mixed-integer programming (MIP), with system simulation. In this paper, based on a linear programming model, optimized generation investment decision-making is simulated and analyzed. Finally, the optimal installed capacity of power plants and the final total cost are obtained, which provides a rational decision-making basis for optimized investment.
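
    A toy instance of the kind of LP involved, with made-up cost and demand figures (scipy.optimize.linprog as the solver here; the paper uses GAMS). None of the numbers come from the record.

        import numpy as np
        from scipy.optimize import linprog

        # Choose installed capacity (MW) of three hypothetical plant types to meet
        # a peak-demand requirement at minimum total annualized cost.
        cost = np.array([60.0, 85.0, 40.0])       # $/kW-yr per type (illustrative)
        peak_demand = 1000.0                      # MW of firm capacity required
        firm_factor = np.array([0.9, 0.95, 0.5])  # fraction of capacity that is firm

        res = linprog(
            c=cost,                                    # minimize total cost
            A_ub=[-firm_factor], b_ub=[-peak_demand],  # firm capacity >= peak demand
            bounds=[(0, 800), (0, 600), (0, 300)],     # per-type build limits (MW)
        )
        print(res.x, res.fun)  # optimal capacity mix and minimum cost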

  10. A 16-Channel Nonparametric Spike Detection ASIC Based on EC-PC Decomposition.

    Science.gov (United States)

    Wu, Tong; Xu, Jian; Lian, Yong; Khalili, Azam; Rastegarnia, Amir; Guan, Cuntai; Yang, Zhi

    2016-02-01

    In extracellular neural recording experiments, detecting neural spikes is an important step for reliable information decoding. A successful implementation in integrated circuits can achieve substantial data volume reduction, potentially enabling wireless operation and closed-loop systems. In this paper, we report a 16-channel neural spike detection chip based on a customized spike detection method called the exponential component-polynomial component (EC-PC) algorithm. This algorithm features reliable prediction of spikes by applying a probability threshold. The chip takes raw data as input and outputs three data streams simultaneously: field potentials, band-pass filtered neural data, and spiking probability maps. The algorithm parameters are configured on-chip automatically based on input data, which avoids manual parameter tuning. The chip has been tested with both in vivo experiments for functional verification and bench-top experiments for quantitative performance assessment. The system has a total power consumption of 1.36 mW and occupies an area of 6.71 mm² for 16 channels. When tested on synthesized datasets with spikes and noise segments extracted from in vivo preparations and scaled according to required precisions, the chip outperforms other detectors. A credit-card-sized prototype board has been developed to provide power and data management through a USB port.

  11. Nonlinear Prediction Model for Hydrologic Time Series Based on Wavelet Decomposition

    Science.gov (United States)

    Kwon, H.; Khalil, A.; Brown, C.; Lall, U.; Ahn, H.; Moon, Y.

    2005-12-01

    Traditionally, forecasting and characterization of hydrologic systems are performed using many techniques. Stochastic linear methods such as AR and ARIMA, and nonlinear ones such as tools based on statistical learning theory, have been used extensively. The difficulty common to all methods is determining sufficient and necessary information and predictors for a successful prediction. Relationships between hydrologic variables are often highly nonlinear and interrelated across temporal scales. A new hybrid approach is proposed for the simulation of hydrologic time series, combining the wavelet transform and a nonlinear time series model. The wavelet transform is adopted to decompose a hydrologic nonlinear process into a set of mono-component signals, which are simulated by the nonlinear model. The hybrid methodology is formulated so as to improve the accuracy of long-term forecasting. The proposed hybrid model yields much better results in terms of capturing and reproducing the time-frequency properties of the system at hand, and prediction results are promising when compared with traditional univariate time series models. An application demonstrating the plausibility of the proposed methodology is provided, and the results show that the wavelet-based time series model can simulate and forecast hydrologic variables reasonably well. This ultimately serves the purpose of integrated water resources planning and management.
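
    A minimal sketch of the hybrid decompose-then-predict idea, using PyWavelets for the decomposition and a least-squares AR model standing in for the nonlinear per-component model; the AR stand-in, wavelet choice, and parameters are ours, not the record's.

        import numpy as np
        import pywt

        def wavelet_components(series, wavelet="db4", level=3):
            """Split a series into additive components: reconstruct each wavelet
            level separately by zeroing all other coefficient arrays."""
            coeffs = pywt.wavedec(series, wavelet, level=level)
            comps = []
            for i in range(len(coeffs)):
                masked = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
                comps.append(pywt.waverec(masked, wavelet)[:len(series)])
            return comps  # the components sum back to the original series

        def ar_one_step(x, p=4):
            """Least-squares AR(p) one-step-ahead forecast of a single component."""
            X = np.array([x[i:i + p] for i in range(len(x) - p)])
            coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
            return float(x[-p:] @ coef)

        # Hybrid forecast: decompose, predict each component, sum the predictions.
        # forecast = sum(ar_one_step(c) for c in wavelet_components(runoff_series))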

  12. Dynamic Regulatory Network Reconstruction for Alzheimer’s Disease Based on Matrix Decomposition Techniques

    Directory of Open Access Journals (Sweden)

    Wei Kong

    2014-01-01

    Full Text Available Alzheimer’s disease (AD) is the most common form of dementia and leads to irreversible neurodegenerative damage of the brain. Finding the dynamic responses of genes, signaling proteins, transcription factor (TF) activities, and regulatory networks over the progressively deteriorating course of AD would represent a significant advance in discovering the pathogenesis of AD. However, high-throughput technologies for measuring TF activities are not yet available on a genome-wide scale. In this study, based on DNA microarray gene expression data and a priori information on TFs, the network component analysis (NCA) algorithm is applied to determine the TF activities and their regulatory influences on target genes (TGs) in incipient, moderate, and severe AD. On that basis, the dynamic gene regulatory networks of the deteriorating course of AD were reconstructed. To select significant genes that are differentially expressed in different stages of AD, independent component analysis (ICA) was used; it performs better than traditional clustering methods and can successfully group one gene into several meaningful biological processes. The molecular biological analysis showed that changes in TF activities and interactions of signaling proteins in mitosis, the cell cycle, immune response, and inflammation play an important role in the deterioration of AD.

  13. Shared Reed-Muller Decision Diagram Based Thermal-Aware AND-XOR Decomposition of Logic Circuits

    Directory of Open Access Journals (Sweden)

    Apangshu Das

    2016-01-01

    Full Text Available The increased number of complex functional units exerts high power-density within a very-large-scale integration (VLSI) chip, which results in overheating. Power-density translates directly into temperature, which reduces the yield of the circuit. An adverse effect of power-density reduction, however, is an increase in area, so there is a trade-off between area and power-density. In this paper, we introduce a Shared Reed-Muller Decision Diagram (SRMDD) based on fixed-polarity AND-XOR decomposition to represent multi-output Boolean functions. By recursively applying transformations and reductions, we obtain a compact SRMDD. A heuristic based on a Genetic Algorithm (GA) increases the sharing of product terms by judicious choice of the polarity of input variables in the SRMDD expansion, and a suitable area/power-density trade-off has been enumerated. This is the first effort to incorporate power-density as a measure of temperature estimation in the AND-XOR expansion process. The results of logic synthesis are incorporated with physical design in the CADENCE digital synthesis tool to obtain the floor-plan silicon area and power profile. The proposed thermal-aware synthesis has been validated by obtaining the absolute temperature of the synthesized circuits using the HotSpot tool. We have experimented with 29 benchmark circuits. The minimized AND-XOR circuit realization shows average improvements of up to 15.23% in silicon area and up to 17.02% in temperature over sum-of-products (SOP) based logic minimization.

  14. Novel Verification Method for Timing Optimization Based on DPSO

    Directory of Open Access Journals (Sweden)

    Chuandong Chen

    2018-01-01

    Full Text Available Timing optimization for logic circuits is one of the key steps in logic synthesis. Existing results are mainly based on various intelligence algorithms; hence, they are neither comparable with timing optimization data collected by the mainstream electronic design automation (EDA) tool nor able to verify the superiority of intelligence algorithms over the EDA tool in terms of optimization ability. To address these shortcomings, a novel verification method is proposed in this study. First, a discrete particle swarm optimization (DPSO) algorithm was applied to optimize the timing of mixed polarity Reed-Muller (MPRM) logic circuits. Second, the Design Compiler (DC) algorithm was used to optimize the timing of the same MPRM logic circuits through special settings and constraints. Finally, the timing optimization results of the two algorithms were compared on MCNC benchmark circuits: DPSO demonstrates an average reduction of 9.7% in the timing delays of critical paths relative to DC. The proposed verification method directly ascertains whether an intelligence algorithm has better timing optimization ability than DC.

  15. A comparison of physically and radiobiologically based optimization for IMRT

    International Nuclear Information System (INIS)

    Jones, Lois; Hoban, Peter

    2002-01-01

    Many optimization techniques for intensity modulated radiotherapy have now been developed. The majority of these techniques, including all the commercial systems that are available, are based on physical dose methods of assessment. Some techniques have also been based on radiobiological models. None of the radiobiological optimization techniques, however, has assessed the clinically realistic situation of considering both tumor and normal cells within the target volume. This study uses a ratio-based fluence optimization technique to compare a dose-based optimization method described previously with two biologically based models. The biologically based methods use the values of equivalent uniform dose calculated for the tumor cells and integral biological effective dose for normal cells. The first biologically based method includes only tumor cells in the target volume, while the second considers both tumor and normal cells in the target volume. All three methods achieve good conformation to the target volume. The biologically based optimization without normal tissue in the target volume shows a high-dose region in the center of the target volume, which is reduced when the normal tissues are also considered. This effect occurs because the normal tissues in the target volume require the optimization to reduce the dose and therefore limit the maximum dose to that volume.
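
    For reference, the equivalent uniform dose of a structure is commonly computed with Niemierko's generalized mean over its dose distribution; the abstract does not state the authors' exact formulation, so the following standard form is indicative only:

        \mathrm{EUD} = \left( \frac{1}{N} \sum_{i=1}^{N} d_i^{\,a} \right)^{1/a}

    where d_i is the dose to voxel i, N is the number of voxels, and a is a tissue-specific parameter (large negative a emphasizes cold spots in tumors; large positive a emphasizes hot spots in normal tissue).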

  16. FABRICATION OF CNTS BY TOLUENE DECOMPOSITION IN A NEW REACTOR BASED ON AN ATMOSPHERIC PRESSURE PLASMA JET COUPLED TO A CVD SYSTEM

    Directory of Open Access Journals (Sweden)

    FELIPE RAMÍREZ-HERNÁNDEZ

    2017-03-01

    Full Text Available Here, we present a method to produce carbon nanotubes (CNTs) based on the coupling of two conventional techniques used for the preparation of nanostructures: an arc-jet as a source of plasma and a chemical vapour deposition (CVD) system. We call this system an "atmospheric pressure plasma (APP)-enhanced CVD" (APPE-CVD) system. This reactor was used to grow CNTs on non-flat aluminosilicate substrates by the decomposition of toluene (the carbon source) in the presence of ferrocene (the catalyst). Both CNTs and carbon by-products were collected at three different temperatures (780, 820 and 860 °C) in different regions of the APPE-CVD system. These samples were analysed by thermogravimetric analysis (TGA/DTG), scanning electron microscopy (SEM) and Raman spectroscopy in order to determine the effect of the APP on the thermal stability of the as-grown CNTs. It was found that the amount of metal catalyst in the synthesised CNTs is reduced by applying the APP, with 820 °C being the optimal temperature to produce CNTs with a high yield and carbon purity (95 wt.%). In contrast, when the synthesis temperature was fixed at 780 °C or 860 °C, amorphous carbon or CNTs with various structural defects, respectively, were formed in the APPE-CVD reactor. We recommend the use of non-flat aluminosilicate particles as supports to increase CNT yield and to facilitate the removal of deposits from the substrate surface. The approach implemented here (synthesising CNTs using the APPE-CVD reactor) may be useful for producing these nanostructures on a gram scale for use in basic studies, and may also be scaled up for mass production.

  17. Gold Redox Catalysis through Base-Initiated Diazonium Decomposition toward Alkene, Alkyne, and Allene Activation.

    Science.gov (United States)

    Dong, Boliang; Peng, Haihui; Motika, Stephen E; Shi, Xiaodong

    2017-08-16

    The discovery of photoassisted diazonium activation toward gold(I) oxidation greatly extended the scope of gold redox catalysis by avoiding the use of a strong oxidant. Practical issues that limit the application of this new type of chemistry are its relatively low efficiency (long reaction time and low conversion) and the strict reaction condition control that is necessary (degassing and an inert reaction environment). Herein, an alternative photo-free protocol has been developed through Lewis base induced diazonium activation. With this method, an unreactive Au(I) catalyst was used in combination with Na2CO3 and diazonium salts to produce a Au(III) intermediate. The efficient activation of various substrates, including alkynes, alkenes and allenes, was achieved, followed by rapid Au(III) reductive elimination, which yielded the C-C coupling products in good to excellent yields. Relative to the previously reported photoactivation method, our approach offers greater efficiency and versatility through faster reaction rates and a broader reaction scope. Challenging substrates such as electron-rich/neutral allenes, which could not be activated under the photoinitiation conditions (<5% yield), could be activated to yield the desired coupling products in good to excellent yield. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  18. A new physics-based method for detecting weak nuclear signals via spectral decomposition

    International Nuclear Information System (INIS)

    Chan, Kung-Sik; Li, Jinzheng; Eichinger, William; Bai, Erwei

    2012-01-01

    We propose a new physics-based method to determine the presence of the spectral signature of one or more nuclides in a poorly resolved spectrum with weak signatures. The method differs from traditional methods that rely primarily on peak-finding algorithms. The new approach considers each of the signatures in the library to be a linear combination of subspectra, obtained by assuming a signature consisting of just one of the unique gamma rays emitted by the nuclide. We propose a Poisson regression model for deducing which nuclides are present in the observed spectrum. In recognition that a radiation source generally comprises few nuclear materials, the underlying Poisson model is sparse, i.e. most of the regression coefficients are zero (positive coefficients correspond to the presence of nuclear materials). We develop an iterative algorithm for a penalized likelihood estimation that promotes sparsity. We illustrate the efficacy of the proposed method by simulations covering a variety of poorly resolved, low signal-to-noise ratio (SNR) situations, which show that the proposed approach enjoys excellent empirical performance even with an SNR as low as -15 dB.
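
    The estimation step can be sketched in a few lines (an illustration of sparsity-promoting penalized Poisson likelihood, not the authors' exact iterative algorithm): with a library matrix A whose columns are the nuclide subspectra and an observed count spectrum y, a multiplicative update of Richardson-Lucy type drives most weights toward zero. The library, penalty weight, and iteration count below are arbitrary assumptions.

        import numpy as np

        def sparse_poisson_unmix(A, y, lam=0.05, iters=2000):
            # minimize sum(A w - y * log(A w)) + lam * sum(w), with w >= 0,
            # via multiplicative updates (fixed point of the KKT conditions)
            w = np.ones(A.shape[1])
            colsum = A.sum(axis=0)
            for _ in range(iters):
                mu = A @ w + 1e-12                    # expected counts per channel
                w *= (A.T @ (y / mu)) / (colsum + lam)
            return w                                  # near-zero entries = absent nuclides

        # hypothetical library of 5 candidate nuclides over 64 channels
        rng = np.random.default_rng(0)
        A = np.abs(rng.normal(size=(64, 5)))
        y = rng.poisson(A @ np.array([2.0, 0.0, 0.0, 1.5, 0.0]))
        w_hat = sparse_poisson_unmix(A, y)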

  19. Intelligent fault recognition strategy based on adaptive optimized multiple centers

    Science.gov (United States)

    Zheng, Bo; Li, Yan-Feng; Huang, Hong-Zhong

    2018-06-01

    For recognition principles based on a single optimized center, one important issue is that data with a nonlinear separatrix cannot be recognized accurately. In order to solve this problem, a novel recognition strategy based on adaptively optimized multiple centers is proposed in this paper. This strategy recognizes data sets with a nonlinear separatrix by using multiple centers. Meanwhile, priority levels are introduced into the multi-objective optimization, covering recognition accuracy, the number of optimized centers, and the distance relationship. According to the characteristics of the data, the priority levels are adjusted to control the number of optimized centers adaptively while maintaining the original accuracy. The proposed method is compared with other methods, including the support vector machine (SVM), neural networks, and the Bayesian classifier. The results demonstrate that the proposed strategy has the same or even better recognition ability across different distribution characteristics of data.

  20. Integrated Reliability-Based Optimal Design of Structures

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Thoft-Christensen, Palle

    1987-01-01

    In conventional optimal design of structural systems the weight or the initial cost of the structure is usually used as objective function. Further, the constraints require that the stresses and/or strains at some critical points have to be less than some given values. Finally, all variables...-based optimal design is discussed. Next, an optimal inspection and repair strategy for existing structural systems is presented. An optimization problem is formulated, where the objective is to minimize the expected total future cost of inspection and repair subject to the constraint that the reliability... value. The reliability can be measured from an element and/or a systems point of view. A number of methods to solve reliability-based optimization problems has been suggested, see e.g. Frangopol [1], Murotsu et al. [2], Thoft-Christensen & Sørensen [3] and Sørensen [4]. For structures where...

  1. Topology Optimization of Passive Micromixers Based on Lagrangian Mapping Method

    Directory of Open Access Journals (Sweden)

    Yuchen Guo

    2018-03-01

    Full Text Available This paper presents an optimization-based design method for passive micromixers for immiscible fluids, i.e., the case in which the Peclet number is infinitely large. Based on the topology optimization method, an optimization model is constructed to find the optimal layout of the passive micromixers. Unlike topology optimization methods built on an Eulerian description of the convection-diffusion dynamics, the proposed method considers the extreme case, where mixing is dominated completely by convection with negligible diffusion. In this method, the mixing dynamics is modeled by the mapping method, a Lagrangian description that can deal with the convection-dominated case. Several numerical examples are presented to demonstrate the validity of the proposed method.

  2. Bi-objective optimization for multi-modal transportation routing planning problem based on Pareto optimality

    Directory of Open Access Journals (Sweden)

    Yan Sun

    2015-09-01

    Full Text Available Purpose: The purpose of this study is to solve the multi-modal transportation routing planning problem, which aims to select an optimal route to move a consignment of goods from its origin to its destination through the multi-modal transportation network, with the optimization viewed from two perspectives: cost and time. Design/methodology/approach: In this study, a bi-objective mixed integer linear programming model is proposed to optimize the multi-modal transportation routing planning problem. Minimizing the total transportation cost and the total transportation time are set as the optimization objectives of the model. In order to balance the benefit between the two objectives, Pareto optimality is utilized to solve the model by gaining its Pareto frontier, which can provide the multi-modal transportation operator (MTO) and customers with better decision support; it is obtained by the normalized normal constraint method. Then, an experimental case study is designed to verify the feasibility of the model and of Pareto optimality by using the mathematical programming software Lingo. Finally, a sensitivity analysis of the demand and supply in the multi-modal transportation organization is performed based on the designed case. Findings: The calculation results indicate that the proposed model and Pareto optimality perform well in dealing with the bi-objective optimization. The sensitivity analysis also clearly shows the influence of variations in demand and supply on the multi-modal transportation organization. Therefore, this method can be promoted to practical use. Originality/value: A bi-objective mixed integer linear programming model is proposed to optimize the multi-modal transportation routing planning problem, and a Pareto frontier based sensitivity analysis of demand and supply in the multi-modal transportation organization is performed on the designed case.

  3. optimization of object tracking based on enhanced imperialist ...

    African Journals Online (AJOL)

    Damuut and Dogara

    A typical example is the Roman Empire which had influence or control over ... the Enhanced Imperialist Competitive Algorithm (EICA) in optimizing the generated ... segment the video frame into a number of regions based on visual features like ...

  4. Optimizing ring-based CSR sources

    International Nuclear Information System (INIS)

    Byrd, J.M.; De Santis, S.; Hao, Z.; Martin, M.C.; Munson, D.V.; Li, D.; Nishimura, H.; Robin, D.S.; Sannibale, F.; Schlueter, R.D.; Schoenlein, R.; Jung, J.Y.; Venturini, M.; Wan, W.; Zholents, A.A.; Zolotorev, M.

    2004-01-01

    Coherent synchrotron radiation (CSR) is a fascinating phenomenon recently observed in electron storage rings and shows tremendous promise as a high power source of radiation at terahertz frequencies. However, because of the properties of the radiation and the electron beams needed to produce it, there are a number of interesting features of the storage ring that can be optimized for CSR. Furthermore, CSR has been observed in three distinct forms: as steady pulses from short bunches, bursts from growth of spontaneous modulations in high current bunches, and from micro modulations imposed on a bunch from laser slicing. These processes have their relative merits as sources and can be improved via the ring design. The terahertz (THz) and sub-THz region of the electromagnetic spectrum lies between the infrared and the microwave. This boundary region is beyond the normal reach of optical and electronic measurement techniques and sources associated with these better-known neighbors. Recent research has demonstrated a relatively high power source of THz radiation from electron storage rings: coherent synchrotron radiation (CSR). Besides offering high power, CSR enables broadband optical techniques to be extended to nearly the microwave region, and has inherently sub-picosecond pulses. As a result, new opportunities for scientific research and applications are enabled across a diverse array of disciplines: condensed matter physics, medicine, manufacturing, and space and defense industries. CSR will have a strong impact on THz imaging, spectroscopy, femtosecond dynamics, and driving novel non-linear processes. CSR is emitted by bunches of accelerated charged particles when the bunch length is shorter than the wavelength being emitted. When this criterion is met, all the particles emit in phase, and a single-cycle electromagnetic pulse results with an intensity proportional to the square of the number of particles in the bunch. It is this quadratic dependence that can

  5. Speckle imaging using the principal value decomposition method

    International Nuclear Information System (INIS)

    Sherman, J.W.

    1978-01-01

    Obtaining diffraction-limited images in the presence of atmospheric turbulence is a topic of current interest. Two types of approaches have evolved: real-time correction and speckle imaging. A speckle imaging reconstruction method was developed using an "optimal" filtering approach. This method is based on a nonlinear integral equation which is solved by principal value decomposition. The method was implemented on a CDC 7600 for study. The restoration algorithm is discussed and its performance is illustrated. 7 figures

  6. Optimal portfolio model based on WVAR

    OpenAIRE

    Hao, Tianyu

    2012-01-01

    This article focuses on using a new measurement of risk -- Weighted Value at Risk (WVAR) -- to develop a new method of portfolio construction. Starting from the TVAR solving problem and based on MATLAB software, the historical simulation method is used (avoiding the assumption that the income distribution is normal), building on the results of previous studies. The U.S. Nasdaq composite index is studied, combining the Simpson formula for the solution of TVAR with its deeper study; then, through the representation of WVAR for...

  7. Reliability-Based Structural Optimization of Wave Energy Converters

    DEFF Research Database (Denmark)

    Ambühl, Simon; Kramer, Morten; Sørensen, John Dalsgaard

    2014-01-01

    More and more wave energy converter (WEC) concepts are reaching prototype level. Once the prototype level is reached, the next step in order to further decrease the levelized cost of energy (LCOE) is optimizing the overall system with a focus on structural and maintenance (inspection) costs......, as well as on the harvested power from the waves. The target of a fully-developed WEC technology is not maximizing its power output, but minimizing the resulting LCOE. This paper presents a methodology to optimize the structural design of WECs based on a reliability-based optimization problem...

  8. An Improved Real-Coded Population-Based Extremal Optimization Method for Continuous Unconstrained Optimization Problems

    Directory of Open Access Journals (Sweden)

    Guo-Qiang Zeng

    2014-01-01

    Full Text Available As a novel evolutionary optimization method, extremal optimization (EO) has been successfully applied to a variety of combinatorial optimization problems. However, applications of EO to continuous optimization problems are relatively rare. This paper proposes an improved real-coded population-based EO method (IRPEO) for continuous unconstrained optimization problems. The key operations of IRPEO include generation of a real-coded random initial population, evaluation of individual and population fitness, selection of bad elements according to a power-law probability distribution, generation of a new population based on uniform random mutation, and updating of the population by accepting the new population unconditionally. Experimental results on 10 benchmark test functions of dimension N=30 show that IRPEO is competitive with or even better than recently reported genetic algorithm (GA) variants with different mutation operations in terms of simplicity, effectiveness, and efficiency. Furthermore, the superiority of IRPEO over other evolutionary algorithms such as the original population-based EO, particle swarm optimization (PSO), and the hybrid PSO-EO is also demonstrated by the experimental results on some benchmark functions.
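
    The operations listed above can be condensed into a very small sketch (an illustration only: ranking coordinates by their deviation from the current best solution is a simple stand-in for the paper's fitness evaluation of individual elements, and all parameter values are arbitrary):

        import numpy as np

        def irpeo(f, lo, hi, dim=10, pop=20, tau=1.5, iters=2000, seed=0):
            rng = np.random.default_rng(seed)
            X = rng.uniform(lo, hi, (pop, dim))       # real-coded random initial population
            fit = np.apply_along_axis(f, 1, X)
            best, best_f = X[fit.argmin()].copy(), fit.min()
            ranks = np.arange(1, dim + 1)
            p = ranks ** -tau                         # power-law selection over ranks
            p /= p.sum()
            for _ in range(iters):
                for i in range(pop):
                    # rank coordinates worst-to-best by deviation from the best solution
                    order = np.argsort(-np.abs(X[i] - best))
                    k = order[rng.choice(dim, p=p)]   # pick a "bad" element, power-law biased
                    X[i, k] = rng.uniform(lo, hi)     # uniform random mutation
                    fit[i] = f(X[i])                  # new population accepted unconditionally
                if fit.min() < best_f:
                    best, best_f = X[fit.argmin()].copy(), fit.min()
            return best, best_f

        x_star, f_star = irpeo(lambda x: float(np.sum(x ** 2)), -5.0, 5.0)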

  9. Optimal perturbations for nonlinear systems using graph-based optimal transport

    Science.gov (United States)

    Grover, Piyush; Elamvazhuthi, Karthik

    2018-06-01

    We formulate and solve a class of finite-time transport and mixing problems in the set-oriented framework. The aim is to obtain optimal discrete-time perturbations in nonlinear dynamical systems to transport a specified initial measure on the phase space to a final measure in finite time. The measure is propagated under system dynamics in between the perturbations via the associated transfer operator. Each perturbation is described by a deterministic map in the measure space that implements a version of Monge-Kantorovich optimal transport with quadratic cost. Hence, the optimal solution minimizes a sum of quadratic costs on phase space transport due to the perturbations applied at specified times. The action of the transport map is approximated by a continuous pseudo-time flow on a graph, resulting in a tractable convex optimization problem. This problem is solved via state-of-the-art solvers to global optimality. We apply this algorithm to a problem of transport between measures supported on two disjoint almost-invariant sets in a chaotic fluid system, and to a finite-time optimal mixing problem by choosing the final measure to be uniform. In both cases, the optimal perturbations are found to exploit the phase space structures, such as lobe dynamics, leading to efficient global transport. As the time-horizon of the problem is increased, the optimal perturbations become increasingly localized. Hence, by combining the transfer operator approach with ideas from the theory of optimal mass transportation, we obtain a discrete-time graph-based algorithm for optimal transport and mixing in nonlinear systems.

  10. Joint global optimization of tomographic data based on particle swarm optimization and decision theory

    Science.gov (United States)

    Paasche, H.; Tronicke, J.

    2012-04-01

    In many near-surface geophysical applications, multiple tomographic data sets are routinely acquired to explore subsurface structures and parameters. Linking the model generation process of multi-method geophysical data sets can significantly reduce ambiguities in geophysical data analysis and model interpretation. Most geophysical inversion approaches rely on local-search optimization methods, which find an optimal model in the vicinity of a user-given starting model, so the final solution may critically depend on the initial model. Alternatively, global optimization (GO) methods have been used to invert geophysical data. They explore the solution space in more detail and determine the optimal model independently of the starting model. Additionally, they can be used to find sets of optimal models, allowing a further analysis of model parameter uncertainties. Here we employ particle swarm optimization (PSO) to realize the global optimization of tomographic data. PSO is an emergent method based on swarm intelligence, characterized by fast and robust convergence towards optimal solutions. The fundamental principle of PSO is inspired by nature, since the algorithm mimics the behavior of a flock of birds searching for food: a number of particles cruise a multi-dimensional solution space striving to find optimal model solutions explaining the acquired data. The particles communicate their positions and success and direct their movement according to the position of the currently most successful particle of the swarm. The success of a particle, i.e., the quality of the model it has currently found, must be uniquely quantifiable to identify the swarm leader. When jointly inverting disparate data sets, the optimization solution has to satisfy multiple optimization objectives, at least one for each data set. Unique determination of the most successful particle currently leading the swarm is then not possible. Instead, only statements about the Pareto...
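
    The single-objective core of PSO described here fits in a few lines (a generic textbook sketch with arbitrary inertia and acceleration constants, not the authors' multi-objective variant):

        import numpy as np

        def pso(f, lo, hi, dim=10, n_particles=30, iters=500,
                w=0.7, c1=1.5, c2=1.5, seed=0):
            rng = np.random.default_rng(seed)
            X = rng.uniform(lo, hi, (n_particles, dim))     # positions
            V = np.zeros((n_particles, dim))                # velocities
            P, pf = X.copy(), np.apply_along_axis(f, 1, X)  # personal bests
            g, gf = P[pf.argmin()].copy(), pf.min()         # swarm leader
            for _ in range(iters):
                r1, r2 = rng.random((2, n_particles, dim))
                V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)
                X = np.clip(X + V, lo, hi)
                fx = np.apply_along_axis(f, 1, X)
                better = fx < pf                            # update personal bests
                P[better], pf[better] = X[better], fx[better]
                if pf.min() < gf:                           # update swarm leader
                    g, gf = P[pf.argmin()].copy(), pf.min()
            return g, gf

        # usage: minimize a 10-D sphere function
        best, best_f = pso(lambda x: float(np.sum(x ** 2)), -5.0, 5.0)

    For the joint-inversion case discussed above, the scalar comparison fx < pf would be replaced by a Pareto-dominance test, which is exactly where the unique identification of a single swarm leader breaks down.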

  11. Cooperative Bacterial Foraging Optimization

    Directory of Open Access Journals (Sweden)

    Hanning Chen

    2009-01-01

    Full Text Available Bacterial Foraging Optimization (BFO) is a novel optimization algorithm based on the social foraging behavior of E. coli bacteria. This paper presents a variation on the original BFO algorithm, namely the Cooperative Bacterial Foraging Optimization (CBFO), which significantly improves the original BFO in solving complex optimization problems. This significant improvement is achieved by applying two cooperative approaches to the original BFO, namely serial heterogeneous cooperation on the implicit space decomposition level and serial heterogeneous cooperation on the hybrid space decomposition level. The experiments compare the performance of two CBFO variants with the original BFO, the standard PSO and a real-coded GA on four widely used benchmark functions. The new method shows a marked improvement in performance over the original BFO and appears to be comparable with the PSO and GA.

  12. Optimal policy for value-based decision-making.

    Science.gov (United States)

    Tajima, Satohiro; Drugowitsch, Jan; Pouget, Alexandre

    2016-08-18

    For decades now, normative theories of perceptual decisions, and their implementation as drift diffusion models, have driven and significantly improved our understanding of human and animal behaviour and the underlying neural processes. While similar processes seem to govern value-based decisions, we still lack the theoretical understanding of why this ought to be the case. Here, we show that, similar to perceptual decisions, drift diffusion models implement the optimal strategy for value-based decisions. Such optimal decisions require the models' decision boundaries to collapse over time, and to depend on a priori knowledge about reward contingencies. Diffusion models only implement the optimal strategy under specific task assumptions, and cease to be optimal once we start relaxing these assumptions, by, for example, using non-linear utility functions. Our findings thus provide the much-needed theory for value-based decisions, explain the apparent similarity to perceptual decisions, and predict conditions under which this similarity should break down.

  13. Hydrothermal decomposition of liquid crystal in subcritical water

    International Nuclear Information System (INIS)

    Zhuang, Xuning; He, Wenzhi; Li, Guangming; Huang, Juwen; Lu, Shangming; Hou, Lianjiao

    2014-01-01

    Highlights: • Hydrothermal technology can effectively decompose the liquid crystal 4-octoxy-4′-cyanobiphenyl. • The decomposition rate reached 97.6% under the optimized conditions. • 4-Octoxy-4′-cyanobiphenyl was mainly decomposed into simple and innocuous products. • The mechanism analysis reveals the decomposition reaction process. - Abstract: Treatment of liquid crystals has important significance for environmental protection and human health. This study proposes a hydrothermal process to decompose the liquid crystal 4-octoxy-4′-cyanobiphenyl. Experiments were conducted in a 5.7 mL stainless steel tube reactor heated by a salt bath. Factors affecting the decomposition rate of 4-octoxy-4′-cyanobiphenyl were evaluated with HPLC, and the liquid decomposition products were characterized by GC-MS. Under optimized conditions, i.e., 0.2 mL H2O2 supply, pH 6, temperature 275 °C and reaction time 5 min, 97.6% of the 4-octoxy-4′-cyanobiphenyl was decomposed into simple and environment-friendly products. Based on the mechanism analysis and product characterization, a possible hydrothermal decomposition pathway is proposed. The results indicate that hydrothermal technology is a promising choice for liquid crystal treatment.

  14. Enstrophy-based proper orthogonal decomposition of flow past rotating cylinder at super-critical rotating rate

    Science.gov (United States)

    Sengupta, Tapan K.; Gullapalli, Atchyut

    2016-11-01

    A spinning cylinder rotating about its axis experiences a transverse force (lift); an account of this basic aerodynamic phenomenon is known in textbooks as the Robins-Magnus effect. Prandtl studied this flow with an inviscid irrotational model and postulated an upper limit of the lift experienced by the cylinder at a critical rotation rate. This non-dimensional rate is the ratio of the surface speed due to rotation to the oncoming free-stream speed. Prandtl predicted a maximum lift coefficient of CLmax = 4π at the critical rotation rate of two. In recent times, evidence has shown the violation of this upper limit, as in the experiments of Tokumaru and Dimotakis ["The lift of a cylinder executing rotary motions in a uniform flow," J. Fluid Mech. 255, 1-10 (1993)] and in the computed solution of Sengupta et al. ["Temporal flow instability for Magnus-Robins effect at high rotation rates," J. Fluids Struct. 17, 941-953 (2003)]. In the latter reference, this was explained as a temporal instability affecting the flow at higher Reynolds numbers and rotation rates (>2). Here, we analyze the flow past a rotating cylinder at a super-critical rotation rate (=2.5) by enstrophy-based proper orthogonal decomposition (POD) of direct simulation results. POD identifies the most energetic modes and enables flow field reconstruction with a reduced number of modes. One of the motivations for the present study is to explain the shedding of puffs of vortices at low Reynolds number (Re = 60) and high rotation rate, due to an instability originating in the vicinity of the cylinder, using solutions of the Navier-Stokes equation (NSE) computed from t = 0 to t = 300 following an impulsive start. This instability is also explained through the disturbance mechanical energy equation, which was established earlier in Sengupta et al. ["Temporal flow instability for Magnus-Robins effect at high rotation rates," J. Fluids Struct. 17, 941-953 (2003)].
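
    The POD step itself reduces to a singular value decomposition of the mean-subtracted snapshot matrix. The sketch below is a generic (not enstrophy-weighted) illustration, with the grid size, snapshot count, and 99% energy cutoff chosen arbitrarily:

        import numpy as np

        rng = np.random.default_rng(0)
        snaps = rng.normal(size=(5000, 200))     # hypothetical: 5000 grid points x 200 snapshots
        mean = snaps.mean(axis=1, keepdims=True)
        Phi, s, Vt = np.linalg.svd(snaps - mean, full_matrices=False)
        # columns of Phi are the POD modes; rows of Vt are their time coefficients
        energy = s**2 / np.sum(s**2)             # fraction of fluctuation energy per mode
        r = int(np.searchsorted(np.cumsum(energy), 0.99)) + 1   # modes for 99% of the energy
        recon = mean + Phi[:, :r] @ np.diag(s[:r]) @ Vt[:r]     # reduced-order reconstruction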

  15. Sources of energy productivity change in China during 1997–2012: A decomposition analysis based on the Luenberger productivity indicator

    International Nuclear Information System (INIS)

    Wang, Ke; Wei, Yi-Ming

    2016-01-01

    Given that different energy inputs play different roles in production and that energy policy decision making requires an evaluation of productivity change in individual energy inputs to provide insight into the scope for improving the utilization of specific energy inputs, this study develops, based on the Luenberger productivity indicator and data envelopment analysis models, an aggregated specific energy productivity indicator combining the individual energy input productivity indicators that account for the contributions of each specific energy input toward energy productivity change. In addition, these indicators can be further decomposed into four factors: pure efficiency change, scale efficiency change, pure technology change, and scale of technology change. These decompositions enable a determination of which specific energy input is the driving force of energy productivity change and which of the four factors is the primary contributor to energy productivity change. An empirical analysis of China's energy productivity change over the period 1997–2012 indicates that (i) China's energy productivity growth may be overestimated if the energy consumption structure is omitted; (ii) with regard to the contribution of specific energy inputs toward energy productivity growth, oil and electricity show positive contributions, but coal and natural gas show negative contributions; (iii) energy-specific productivity changes are mainly caused by technical changes rather than efficiency changes; and (iv) the Porter Hypothesis is partially supported in China, in that carbon emissions control regulations may lead to energy productivity growth. - Highlights: • An energy input specific Luenberger productivity indicator is proposed. • It enables examination of the contribution of specific energy input productivity change. • It can be decomposed to identify pure and scale efficiency changes, as well as pure and scale technical changes. • China's energy productivity growth may...

  16. Segment-based dose optimization using a genetic algorithm

    International Nuclear Information System (INIS)

    Cotrutz, Cristian; Xing Lei

    2003-01-01

    Intensity modulated radiation therapy (IMRT) inverse planning is conventionally done in two steps. Firstly, the intensity maps of the treatment beams are optimized using a dose optimization algorithm. Each of them is then decomposed into a number of segments using a leaf-sequencing algorithm for delivery. An alternative approach is to pre-assign a fixed number of field apertures and directly optimize the shapes and weights of the apertures. While the latter approach has the advantage of eliminating the leaf-sequencing step, the optimization of aperture shapes is less straightforward than beamlet-based optimization because of the complex dependence of the dose on the field shapes and their weights. In this work we report a genetic algorithm for segment-based optimization. Unlike a gradient iterative approach or simulated annealing, the algorithm finds the optimum solution from a population of candidate plans. In this technique, each solution is encoded using three chromosomes: one for the positions of the left-bank leaves of each segment, the second for the positions of the right-bank leaves, and the third for the weights of the segments defined by the first two chromosomes. The convergence towards the optimum is realized by crossover and mutation operators that ensure proper exchange of information between the three chromosomes of all the solutions in the population. The algorithm is applied to a phantom and a prostate case, and the results are compared with those obtained using beamlet-based optimization. The main conclusion drawn from this study is that genetic optimization of segment shapes and weights can produce highly conformal dose distributions. In addition, our study also confirms previous findings that fewer segments are generally needed to generate plans comparable with those obtained using beamlet-based optimization. Thus the technique may have useful applications in facilitating IMRT treatment planning.
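
    The three-chromosome encoding lends itself to a compact representation (a sketch with hypothetical segment and leaf counts; the dose model and fitness function, which do the real work, are omitted):

        import numpy as np

        rng = np.random.default_rng(1)
        N_SEG, N_LEAF, MAX_POS = 5, 20, 40     # hypothetical counts, not from the paper

        def random_individual():
            left = rng.integers(0, MAX_POS - 1, (N_SEG, N_LEAF))      # chromosome 1: left-bank leaves
            right = left + rng.integers(1, MAX_POS, (N_SEG, N_LEAF))  # chromosome 2: right bank kept right of left
            weights = rng.random(N_SEG)                               # chromosome 3: segment weights
            return left, np.minimum(right, MAX_POS), weights

        def crossover(a, b):
            # exchange whole segments past a cut point, consistently across all three chromosomes
            cut = int(rng.integers(1, N_SEG))
            return tuple(np.concatenate([x[:cut], y[cut:]], axis=0) for x, y in zip(a, b))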

  17. Modified Chaos Particle Swarm Optimization-Based Optimized Operation Model for Stand-Alone CCHP Microgrid

    Directory of Open Access Journals (Sweden)

    Fei Wang

    2017-07-01

    Full Text Available The optimized dispatch of different distributed generations (DGs) in a stand-alone microgrid (MG) is of great significance to the operation's reliability and economy, especially in the context of the energy crisis and environmental pollution. Based on a controllable load (CL) and a combined cooling-heating-power (CCHP) model of the micro-gas turbine (MT), a multi-objective optimization model with relevant constraints to optimize the generation cost, load-cut compensation and environmental benefit is proposed in this paper. The MG studied in this paper consists of photovoltaics (PV), a wind turbine (WT), a fuel cell (FC), a diesel engine (DE), an MT and energy storage (ES). Four typical scenarios were designed according to different day types (work day or weekend) and weather conditions (sunny or rainy) in view of the uncertainty of renewable energy in variable situations and load fluctuation. A modified dispatch strategy for CCHP is presented to further improve the operational economy without reducing consumer comfort. Chaotic optimization and an elite retention strategy are introduced into basic particle swarm optimization (PSO) to propose modified chaos particle swarm optimization (MCPSO), whose search capability and convergence speed are greatly improved. Simulation results validate the correctness of the proposed model and the effectiveness of the MCPSO algorithm in the optimized operation of a stand-alone MG.

  18. GENETIC ALGORITHM BASED CONCEPT DESIGN TO OPTIMIZE NETWORK LOAD BALANCE

    Directory of Open Access Journals (Sweden)

    Ashish Jain

    2012-07-01

    Full Text Available Multi-constraint optimal network load balancing is an NP-hard problem and an important part of traffic engineering. In this research we first balance the network load using classical methods (a brute-force approach and dynamic programming), but the results show the limitation of these methods: the optimization of balanced network load with an increasing number of nodes and demands is intractable using the classical approach because the solution set increases exponentially. In such cases, optimization techniques such as evolutionary algorithms can be employed for optimizing network load balance. In this paper we analyze the proposed classical algorithm, and an evolutionary genetic approach is devised and proposed for optimizing the balanced network load.

  19. Interactive Reliability-Based Optimization of Structural Systems

    DEFF Research Database (Denmark)

    Pedersen, Claus

    In order to introduce the basic concepts within the field of reliability-based structural optimization problems, this chapter is devoted to a brief outline of the basic theories. Therefore, this chapter is of a more formal nature and is used as a basis for the remaining parts of the thesis. In section 2.2 a general non-linear optimization problem and corresponding terminology are presented, whereupon optimality conditions and the standard form of an iterative optimization algorithm are outlined. Subsequently, the special properties and characteristics concerning structural optimization problems are treated in section 2.3. With respect to the reliability evaluation, the basic theory behind a reliability analysis and estimation of the probability of failure by the First-Order Reliability Method (FORM) and the iterative Rackwitz-Fiessler (RF) algorithm are considered in section 2.5, in which...
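
    For context, the RF iteration in standard normal space can be sketched as below (a generic textbook implementation with a toy linear limit state; the transformation of non-normal variables, which RF also handles, is omitted):

        import numpy as np

        def form_beta(g, grad_g, u0, tol=1e-8, iters=100):
            # Rackwitz-Fiessler iteration in standard normal space: converges to
            # the most probable failure point u*, with reliability index beta = ||u*||.
            u = np.asarray(u0, dtype=float)
            for _ in range(iters):
                gv, gr = g(u), grad_g(u)
                u_new = (gr @ u - gv) / (gr @ gr) * gr
                if np.linalg.norm(u_new - u) < tol:
                    u = u_new
                    break
                u = u_new
            return float(np.linalg.norm(u)), u

        # toy linear limit state g(u) = 3 - u1 - u2 (failure when g <= 0)
        beta, u_star = form_beta(lambda u: 3.0 - u[0] - u[1],
                                 lambda u: np.array([-1.0, -1.0]),
                                 [0.0, 0.0])
        # beta = 3 / sqrt(2) ~ 2.12; failure probability ~ Phi(-beta)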

  20. Robust optimization-based DC optimal power flow for managing wind generation uncertainty

    Science.gov (United States)

    Boonchuay, Chanwit; Tomsovic, Kevin; Li, Fangxing; Ongsakul, Weerakorn

    2012-11-01

    Integrating wind generation into the wider grid causes a number of challenges to traditional power system operation. Given the relatively large wind forecast errors, congestion management tools based on optimal power flow (OPF) need to be improved. In this paper, a robust optimization (RO)-based DCOPF is proposed to determine the optimal generation dispatch and locational marginal prices (LMPs) for a day-ahead competitive electricity market while considering the risk of dispatch cost variation. The basic concept is to use the dispatch to hedge against the possibility of reduced or increased wind generation. The proposed RO-based DCOPF is compared with a stochastic non-linear programming (SNP) approach on a modified PJM 5-bus system. Preliminary test results show that the proposed DCOPF model can provide a lower dispatch cost than the SNP approach.

  1. An elitist teaching-learning-based optimization algorithm for solving complex constrained optimization problems

    Directory of Open Access Journals (Sweden)

    Vivek Patel

    2012-08-01

    Full Text Available Nature-inspired population-based algorithms constitute a research field that simulates different natural phenomena to solve a wide range of problems. Researchers have proposed several algorithms considering different natural phenomena. Teaching-Learning-based optimization (TLBO) is one of the recently proposed population-based algorithms; it simulates the teaching-learning process of the classroom and does not require any algorithm-specific control parameters. In this paper, the elitism concept is introduced into the TLBO algorithm and its effect on the performance of the algorithm is investigated. The effects of common controlling parameters such as the population size and the number of generations on the performance of the algorithm are also investigated. The proposed algorithm is tested on 35 constrained benchmark functions with different characteristics, and its performance is compared with that of other well known optimization algorithms. The proposed algorithm can be applied to various optimization problems of the industrial environment.
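
    The parameter-free teaching-learning cycle is easy to reproduce (a minimal sketch of plain TLBO; the paper's elitism extension, which would re-insert the best solutions after each generation, is left out, and the bounds and sizes are arbitrary):

        import numpy as np

        def tlbo(f, lo, hi, dim=10, pop=30, iters=500, seed=0):
            rng = np.random.default_rng(seed)
            X = rng.uniform(lo, hi, (pop, dim))
            fit = np.apply_along_axis(f, 1, X)
            for _ in range(iters):
                # Teacher phase: move everyone toward the best learner
                teacher = X[fit.argmin()]
                TF = rng.integers(1, 3)                     # teaching factor, 1 or 2
                Xn = np.clip(X + rng.random((pop, dim)) * (teacher - TF * X.mean(axis=0)), lo, hi)
                fn = np.apply_along_axis(f, 1, Xn)
                improved = fn < fit                         # greedy acceptance
                X[improved], fit[improved] = Xn[improved], fn[improved]
                # Learner phase: learn pairwise from a random classmate
                for i in range(pop):
                    j = int(rng.integers(pop))
                    if j == i:
                        continue
                    step = X[i] - X[j] if fit[i] < fit[j] else X[j] - X[i]
                    cand = np.clip(X[i] + rng.random(dim) * step, lo, hi)
                    fc = f(cand)
                    if fc < fit[i]:
                        X[i], fit[i] = cand, fc
            k = fit.argmin()
            return X[k], fit[k]

        best, best_f = tlbo(lambda x: float(np.sum(x ** 2)), -5.0, 5.0)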

  2. Optimal design of planar slider-crank mechanism using teaching-learning-based optimization algorithm

    International Nuclear Information System (INIS)

    Chaudhary, Kailash; Chaudhary, Himanshu

    2015-01-01

    In this paper, a two-stage optimization technique is presented for the optimum design of a planar slider-crank mechanism. The slider-crank mechanism needs to be dynamically balanced to reduce vibrations and noise in the engine and to improve vehicle performance. For dynamic balancing, minimization of the shaking force and the shaking moment is achieved by finding the optimum mass distribution of the crank and connecting rod using the equimomental system of point-masses in the first stage of the optimization. In the second stage, their shapes are synthesized systematically by a closed parametric curve, i.e., a cubic B-spline curve, corresponding to the optimum inertial parameters found in the first stage. The multi-objective optimization problem to minimize both the shaking force and the shaking moment is solved using the Teaching-learning-based optimization algorithm (TLBO), and its computational performance is compared with that of a Genetic algorithm (GA).

  3. Optimal design of planar slider-crank mechanism using teaching-learning-based optimization algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Chaudhary, Kailash; Chaudhary, Himanshu [Malaviya National Institute of Technology, Jaipur (India)

    2015-11-15

    In this paper, a two-stage optimization technique is presented for the optimum design of a planar slider-crank mechanism. The slider-crank mechanism needs to be dynamically balanced to reduce vibrations and noise in the engine and to improve vehicle performance. For dynamic balancing, minimization of the shaking force and the shaking moment is achieved by finding the optimum mass distribution of the crank and connecting rod using the equimomental system of point-masses in the first stage of the optimization. In the second stage, their shapes are synthesized systematically by a closed parametric curve, i.e., a cubic B-spline curve, corresponding to the optimum inertial parameters found in the first stage. The multi-objective optimization problem to minimize both the shaking force and the shaking moment is solved using the Teaching-learning-based optimization algorithm (TLBO), and its computational performance is compared with that of a Genetic algorithm (GA).

  4. Functional decomposition with an efficient input support selection for sub-functions based on information relationship measures

    NARCIS (Netherlands)

    Rawski, M.; Jozwiak, L.; Luba, T.

    2001-01-01

    The functional decomposition of binary and multi-valued discrete functions and relations has been gaining more and more recognition. It has important applications in many fields of modern digital system engineering, such as combinational and sequential logic synthesis for VLSI systems, pattern

  5. Optimization of a Fuzzy-Logic-Control-Based MPPT Algorithm Using the Particle Swarm Optimization Technique

    Directory of Open Access Journals (Sweden)

    Po-Chen Cheng

    2015-06-01

    Full Text Available In this paper, an asymmetrical fuzzy-logic-control (FLC)-based maximum power point tracking (MPPT) algorithm for photovoltaic (PV) systems is presented. Two membership function (MF) design methodologies that can improve the effectiveness of the proposed asymmetrical FLC-based MPPT methods are then proposed. The first method can quickly determine the input MF setting values via the power–voltage (P–V) curve of solar cells under standard test conditions (STC). The second method uses the particle swarm optimization (PSO) technique to optimize the input MF setting values. Because the PSO approach must target and optimize a cost function, a cost function design methodology that meets the performance requirements of practical photovoltaic generation systems (PGSs) is also proposed. According to the simulated and experimental results, the proposed asymmetrical FLC-based MPPT method has the highest fitness value; therefore, it can successfully address the tracking speed/tracking accuracy dilemma, unlike the traditional perturb and observe (P&O) and symmetrical FLC-based MPPT algorithms. Compared to the conventional FLC-based MPPT method, the obtained optimal asymmetrical FLC-based MPPT can improve the transient time and the MPPT tracking accuracy by 25.8% and 0.98% under STC, respectively.

  6. Optimal Dispatching of Active Distribution Networks Based on Load Equilibrium

    Directory of Open Access Journals (Sweden)

    Xiao Han

    2017-12-01

    Full Text Available This paper focuses on the optimal intraday scheduling of a distribution system that includes renewable energy (RE) generation, energy storage systems (ESSs), and thermostatically controlled loads (TCLs), and that offers time-of-use pricing to customers. Unlike previous studies, this study examines how to optimize the allocation of electric energy and improve the equilibrium of the load curve. Accordingly, we propose the concept of load equilibrium entropy to quantify the overall equilibrium of the load curve and reflect the allocation optimization of electric energy. Based on this entropy, we build a novel multi-objective optimal dispatching model to minimize the operational cost and maximize the load curve equilibrium. To aggregate TCLs into the optimization objective, we introduce the concept of a virtual power plant (VPP) and propose a calculation method for VPP operating characteristics based on the equivalent thermal parameter model and the state-queue control method. The Particle Swarm Optimization algorithm was employed to solve the optimization problems. The simulation results illustrate that the proposed dispatching model can reduce system operational costs, curtail peak load, and improve efficiency, and they also verify that the load equilibrium entropy can be used as a novel index of load characteristics.
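
    The abstract does not define the load equilibrium entropy explicitly; one natural reading, used in the sketch below purely as an assumption, is the normalized Shannon entropy of the per-period load shares, which equals 1 only for a perfectly flat load curve:

        import numpy as np

        def load_equilibrium_entropy(load):
            # Assumed form: normalized Shannon entropy of the load shares.
            # 1.0 = perfectly flat curve; smaller values = peakier curve.
            p = np.asarray(load, dtype=float)
            p = p / p.sum()
            return float(-(p * np.log(p)).sum() / np.log(len(p)))

        flat = load_equilibrium_entropy(np.ones(24))                  # -> 1.0
        peaky = load_equilibrium_entropy(np.r_[np.ones(23), 10.0])    # < 1.0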

  7. Orthogonal Analysis Based Performance Optimization for Vertical Axis Wind Turbine

    Directory of Open Access Journals (Sweden)

    Lei Song

    2016-01-01

    Full Text Available The geometrical shape of a vertical axis wind turbine (VAWT) is determined by multiple structural parameters. Since there are interactions among these parameters, traditional research approaches, which usually focus on one parameter at a time, cannot characterize the performance of the wind turbine accurately. In order to capture the overall effect on a novel VAWT, we first use a single-parameter optimization method to obtain optimal values of the structural parameters individually by the Computational Fluid Dynamics (CFD) method; based on these results, we then use an orthogonal analysis method to investigate the influence of interactions of the structural parameters on the performance of the wind turbine and to obtain an optimal combination of the structural parameters that accounts for the interactions. Results of the analysis of variance indicate that interactions among the structural parameters influence the performance of the wind turbine, and the optimization results based on orthogonal analysis achieve higher wind energy utilization than those of traditional research approaches.

  8. Optimization-based topology identification of complex networks

    International Nuclear Information System (INIS)

    Tang Sheng-Xue; Chen Li; He Yi-Gang

    2011-01-01

    In many cases, the topological structure of a complex network is unknown or uncertain, and it is of significance to identify the exact topological structure. An optimization-based method of identifying the topological structure of a complex network is proposed in this paper. Identification of the exact network topological structure is converted into a minimization problem by using an estimated network. Then, an improved quantum-behaved particle swarm optimization algorithm is used to solve the optimization problem. Compared with the previous adaptive-synchronization-based method, the proposed method is simple and effective and is particularly well suited to identifying the topological structure of synchronization complex networks. In some cases where the states of a complex network are only partially observable, the exact topological structure of the network can also be identified by using the proposed method. Finally, numerical simulations are provided to show the effectiveness of the proposed method. (general)

  9. Length scale and manufacturability in density-based topology optimization

    DEFF Research Database (Denmark)

    Lazarov, Boyan Stefanov; Wang, Fengwen; Sigmund, Ole

    2016-01-01

    Since its original introduction in structural design, density-based topology optimization has been applied to a number of other fields such as microelectromechanical systems, photonics, acoustics and fluid mechanics. The methodology has been well accepted in industrial design processes where it can...... provide competitive designs in terms of cost, materials and functionality under a wide set of constraints. However, the optimized topologies are often considered as conceptual due to loosely defined topologies and the need of postprocessing. Subsequent amendments can affect the optimized design...

  10. Cooperative Game Study of Airlines Based on Flight Frequency Optimization

    Directory of Open Access Journals (Sweden)

    Wanming Liu

    2014-01-01

    Full Text Available By applying game theory, the relationship between airline ticket price and optimal flight frequency is analyzed. The paper establishes the payoff matrix of flight frequency in the non-cooperation scenario and a flight frequency optimization model in the cooperation scenario. The airline alliance profit distribution is converted into a profit distribution game based on cooperative game theory. The profit distribution game is proved to be convex, and there exists an optimal distribution strategy. The results show that joining the airline alliance can increase an airline's overall profit, that changes in negotiated prices and costs are beneficial to the profit distribution of large airlines, and that the distribution result is in accordance with aviation development.

  11. Genetic-evolution-based optimization methods for engineering design

    Science.gov (United States)

    Rao, S. S.; Pan, T. S.; Dhingra, A. K.; Venkayya, V. B.; Kumar, V.

    1990-01-01

    This paper demonstrates the applicability of a biological model, based on genetic evolution, to engineering design optimization. Algorithms embodying the ideas of reproduction, crossover, and mutation are developed and applied to solve different types of structural optimization problems. Both continuous and discrete variable optimization problems are solved. A two-bay truss designed for maximum fundamental frequency is considered to demonstrate the continuous variable case. The selection of locations of actuators in an actively controlled structure, for minimum energy dissipation, is considered to illustrate the discrete variable case.

  12. Maximum length scale in density based topology optimization

    DEFF Research Database (Denmark)

    Lazarov, Boyan Stefanov; Wang, Fengwen

    2017-01-01

    The focus of this work is on two new techniques for imposing maximum length scale in topology optimization. Restrictions on the maximum length scale provide designers with full control over the optimized structure and open possibilities to tailor the optimized design for broader range...... of manufacturing processes by fulfilling the associated technological constraints. One of the proposed methods is based on combination of several filters and builds on top of the classical density filtering which can be viewed as a low pass filter applied to the design parametrization. The main idea...

  13. Comparison Between Wind Power Prediction Models Based on Wavelet Decomposition with Least-Squares Support Vector Machine (LS-SVM) and Artificial Neural Network (ANN)

    Directory of Open Access Journals (Sweden)

    Maria Grazia De Giorgi

    2014-08-01

    Full Text Available A high penetration of wind energy into the electricity market requires a parallel development of efficient wind power forecasting models. Different hybrid forecasting methods were applied to wind power prediction, using historical data and numerical weather predictions (NWP). A comparative study was carried out for the prediction of the power production of a wind farm located in complex terrain. The performances of Least-Squares Support Vector Machine (LS-SVM) with Wavelet Decomposition (WD) were evaluated at different time horizons and compared to hybrid Artificial Neural Network (ANN)-based methods. It is acknowledged that hybrid methods based on LS-SVM with WD mostly outperform other methods. A decomposition of the commonly known root mean square error was beneficial for a better understanding of the origin of the differences between prediction and measurement and to compare the accuracy of the different models. A sensitivity analysis was also carried out in order to underline the impact that each input had in the network training process for ANN. In the case of ANN with the WD technique, the sensitivity analysis was repeated on each component obtained by the decomposition.

  14. Clustering via Kernel Decomposition

    DEFF Research Database (Denmark)

    Have, Anna Szynkowiak; Girolami, Mark A.; Larsen, Jan

    2006-01-01

    Methods for spectral clustering have been proposed recently which rely on the eigenvalue decomposition of an affinity matrix. In this work it is proposed that the affinity matrix be created based on the elements of a non-parametric density estimator. This matrix is then decomposed to obtain posterior probabilities of class membership using an appropriate form of nonnegative matrix factorization. The troublesome selection of hyperparameters such as kernel width and number of clusters can be handled using standard cross-validation methods, as is demonstrated on a number of diverse data sets.
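
    A conventional spectral clustering pipeline built on the same eigenvalue decomposition idea is sketched below; the Gaussian-kernel affinity and the final k-means step are generic stand-ins for the paper's density-estimator affinity and NMF-based posterior computation, and the kernel width is an arbitrary assumption.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.metrics.pairwise import euclidean_distances

        def spectral_cluster(X, k, sigma=1.0, seed=0):
            A = np.exp(-euclidean_distances(X, squared=True) / (2.0 * sigma**2))  # affinity
            d = A.sum(axis=1)
            M = A / np.sqrt(np.outer(d, d))    # normalized affinity D^{-1/2} A D^{-1/2}
            _, vecs = np.linalg.eigh(M)        # eigenvalues in ascending order
            U = vecs[:, -k:]                   # top-k eigenvectors
            U /= np.linalg.norm(U, axis=1, keepdims=True) + 1e-12
            return KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(U)

        # usage: two well-separated Gaussian blobs
        X = np.vstack([np.random.randn(50, 2), np.random.randn(50, 2) + 5])
        labels = spectral_cluster(X, k=2)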

  15. An opinion formation based binary optimization approach for feature selection

    Science.gov (United States)

    Hamedmoghadam, Homayoun; Jalili, Mahdi; Yu, Xinghuo

    2018-02-01

    This paper proposes a novel optimization method based on opinion formation in complex network systems. The proposed optimization technique mimics the human-human interaction mechanism based on a mathematical model derived from the social sciences. Our method encodes a subset of selected features as the opinion of an artificial agent and simulates the opinion formation process among a population of agents to solve the feature selection problem. The agents interact using an underlying interaction network structure and reach consensus in their opinions while finding better solutions to the problem. A number of mechanisms are employed to avoid getting trapped in local minima. We compare the performance of the proposed method with a number of classical population-based optimization methods and a state-of-the-art opinion-formation-based method. Our experiments on a number of high-dimensional datasets reveal that the proposed algorithm outperforms the others.

  16. Discounted cost model for condition-based maintenance optimization

    International Nuclear Information System (INIS)

    Weide, J.A.M. van der; Pandey, M.D.; Noortwijk, J.M. van

    2010-01-01

    This paper presents methods to evaluate the reliability and optimize the maintenance of engineering systems that are damaged by shocks or transients arriving randomly in time and overall degradation is modeled as a cumulative stochastic point process. The paper presents a conceptually clear and comprehensive derivation of formulas for computing the discounted cost associated with a maintenance policy combining both condition-based and age-based criteria for preventive maintenance. The proposed discounted cost model provides a more realistic basis for optimizing the maintenance policies than those based on the asymptotic, non-discounted cost rate criterion.

  17. Mathematical modelling of the decomposition of explosives

    International Nuclear Information System (INIS)

    Smirnov, Lev P

    2010-01-01

    Studies on mathematical modelling of the molecular and supramolecular structures of explosives and the elementary steps and overall processes of their decomposition are analyzed. Investigations on the modelling of combustion and detonation taking into account the decomposition of explosives are also considered. It is shown that the solution of problems related to the decomposition kinetics of explosives requires the use of a complex strategy based on the methods and concepts of chemical physics, solid state physics and theoretical chemistry instead of an empirical approach.

  18. Cancer Classification Based on Support Vector Machine Optimized by Particle Swarm Optimization and Artificial Bee Colony.

    Science.gov (United States)

    Gao, Lingyun; Ye, Mingquan; Wu, Changrong

    2017-11-29

    Intelligent optimization algorithms have advantages in dealing with complex nonlinear problems accompanied by good flexibility and adaptability. In this paper, the FCBF (Fast Correlation-Based Feature selection) method is used to filter irrelevant and redundant features in order to improve the quality of cancer classification. Then, we perform classification based on SVM (Support Vector Machine) optimized by PSO (Particle Swarm Optimization) combined with ABC (Artificial Bee Colony) approaches, which is represented as PA-SVM. The proposed PA-SVM method is applied to nine cancer datasets, including five datasets of outcome prediction and a protein dataset of ovarian cancer. By comparison with other classification methods, the results demonstrate the effectiveness and the robustness of the proposed PA-SVM method in handling various types of data for cancer classification.

  19. Global Optimization Based on the Hybridization of Harmony Search and Particle Swarm Optimization Methods

    Directory of Open Access Journals (Sweden)

    A. P. Karpenko

    2014-01-01

    Full Text Available We consider a class of stochastic search algorithms for global optimization which various publications call behavioural, intellectual, metaheuristic, nature-inspired, swarm, multi-agent, population, etc.; we use the last term. Experience in using population algorithms to solve global optimization problems shows that applying one such algorithm alone is not always effective. Therefore, great attention is now paid to the hybridization of population algorithms for global optimization. Hybrid algorithms combine different algorithms, or identical algorithms with different values of their free parameters; thus the efficiency of one algorithm can compensate for the weakness of another. The purposes of this work are the development of a hybrid global optimization algorithm based on the known harmony search (HS) and particle swarm optimization (PSO) algorithms, its software implementation, and a study of its efficiency on a number of known benchmark problems and on a problem of dimensional optimization of a truss structure. We state the global optimization problem, review the basic HS and PSO algorithms, give a flow chart of the proposed hybrid algorithm, called PSO-HS, present the results of computational experiments with the developed algorithm and software, and formulate the main results of the work and prospects for its development.

  20. Multi-Objective Optimization of a Hybrid ESS Based on Optimal Energy Management Strategy for LHDs

    Directory of Open Access Journals (Sweden)

    Jiajun Liu

    2017-10-01

    Full Text Available Energy storage systems (ESS) play an important role in the performance of mining vehicles. A hybrid ESS combining both batteries (BTs) and supercapacitors (SCs) is one of the most promising solutions. As a case study, this paper discusses the optimal hybrid ESS sizing and energy management strategy (EMS) of 14-ton underground load-haul-dump vehicles (LHDs). Three novel contributions are added to the relevant literature. First, a multi-objective optimization is formulated regarding energy consumption and the total cost of a hybrid ESS, which are the key factors of LHDs, and a battery capacity degradation model is used. During the process, dynamic programming (DP)-based EMS is employed to obtain the optimal energy consumption and hybrid ESS power profiles. Second, a 10-year life cycle cost model of a hybrid ESS for LHDs is established to calculate the total cost, including capital cost, operating cost, and replacement cost. According to the optimization results, three solutions chosen from the Pareto front are compared comprehensively, and the optimal one is selected. Finally, the optimal and battery-only options are compared quantitatively using the same objectives, and the hybrid ESS is found to be a more economical and efficient option.

  1. Cover crop-based ecological weed management: exploration and optimization

    NARCIS (Netherlands)

    Kruidhof, H.M.

    2008-01-01

    Keywords: organic farming, ecologically-based weed management, cover crops, green manure, allelopathy, Secale cereale, Brassica napus, Medicago sativa

    Cover crop-based ecological weed management: exploration and optimization. In organic farming systems, weed control is recognized as one

  2. GPU-Monte Carlo based fast IMRT plan optimization

    Directory of Open Access Journals (Sweden)

    Yongbao Li

    2014-03-01

    Full Text Available Purpose: Intensity-modulated radiation treatment (IMRT) plan optimization needs pre-calculated beamlet dose distributions. Pencil-beam or superposition/convolution type algorithms are typically used because of their high computation speed. However, inaccurate beamlet dose distributions, particularly in cases with high levels of inhomogeneity, may mislead the optimization and hinder the resulting plan quality. It is desirable to use Monte Carlo (MC) methods for beamlet dose calculations, yet the long computational time from repeated dose calculations for a large number of beamlets prevents this application. Our objective is to integrate a GPU-based MC dose engine in lung IMRT optimization using a novel two-step workflow. Methods: A GPU-based MC code, gDPM, is used. Each particle is tagged with the index of the beamlet its source particle comes from, and deposited doses are stored separately for each beamlet based on this index. Due to the limited GPU memory size, a pyramid space is allocated for each beamlet, and dose outside this space is neglected. A two-step optimization workflow is proposed for fast MC-based optimization. In the first step, a rough dose calculation is conducted with only a small number of particles per beamlet, and plan optimization follows to get an approximate fluence map. In the second step, more accurate beamlet doses are calculated, where the number of particles sampled for a beamlet is proportional to the intensity determined previously. A second round of optimization is conducted, yielding the final result. Results: For a lung case with 5317 beamlets, 10^5 particles per beamlet in the first round and 10^8 particles per beam in the second round are enough to get a good plan quality. The total simulation time is 96.4 sec. Conclusion: A fast GPU-based MC dose calculation method along with a novel two-step optimization workflow are developed. The high efficiency allows the use of MC for IMRT optimizations.

  3. Discrete Biogeography Based Optimization for Feature Selection in Molecular Signatures.

    Science.gov (United States)

    Liu, Bo; Tian, Meihong; Zhang, Chunhua; Li, Xiangtao

    2015-04-01

    Biomarker discovery from high-dimensional data is a complex task in the development of efficient cancer diagnosis and classification. However, such data are usually redundant and noisy, and only a subset of them presents distinct profiles for different classes of samples. Thus, selecting highly discriminative genes from gene expression data has become increasingly interesting in the field of bioinformatics. In this paper, a discrete biogeography based optimization is proposed to select a good subset of informative genes relevant to the classification. In the proposed algorithm, the Fisher-Markov selector is first used to choose a fixed number of genes. Second, to make biogeography based optimization suitable for the feature selection problem, a discrete migration model and a discrete mutation model are proposed to balance the exploration and exploitation ability. Discrete biogeography based optimization, which we call DBBO, is then obtained by integrating the discrete migration model and the discrete mutation model. Finally, DBBO is used for feature selection, with three classifiers employed under 10-fold cross-validation. In order to show the effectiveness and efficiency of the algorithm, the proposed approach is tested on four breast cancer benchmark datasets. Compared with the genetic algorithm, particle swarm optimization, the differential evolution algorithm and hybrid biogeography based optimization, experimental results demonstrate that the proposed method is better than, or at least comparable with, previous methods from the literature when considering the quality of the solutions obtained.

  4. Trust regions in Kriging-based optimization with expected improvement

    Science.gov (United States)

    Regis, Rommel G.

    2016-06-01

    The Kriging-based Efficient Global Optimization (EGO) method works well on many expensive black-box optimization problems. However, it does not seem to perform well on problems with steep and narrow global minimum basins and on high-dimensional problems. This article develops a new Kriging-based optimization method called TRIKE (Trust Region Implementation in Kriging-based optimization with Expected improvement) that implements a trust-region-like approach where each iterate is obtained by maximizing an Expected Improvement (EI) function within some trust region. This trust region is adjusted depending on the ratio of the actual improvement to the EI. This article also develops the Kriging-based CYCLONE (CYClic Local search in OptimizatioN using Expected improvement) method that uses a cyclic pattern to determine the search regions where the EI is maximized. TRIKE and CYCLONE are compared with EGO on 28 test problems with up to 32 dimensions and on a 36-dimensional groundwater bioremediation application in appendices supplied as an online supplement available at http://dx.doi.org/10.1080/0305215X.2015.1082350. The results show that both algorithms yield substantial improvements over EGO and they are competitive with a radial basis function method.
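
    For illustration, the two ingredients that TRIKE combines can be sketched as below (the Kriging model itself is abstracted away, and the growth/shrink thresholds are assumptions, not the article's tuned values):

        # Sketch: expected improvement plus a trust-region-like radius update.
        import numpy as np
        from scipy.stats import norm

        def expected_improvement(mu, sigma, f_min):
            # Standard EI for minimization given the predictive mean and std. dev.
            sigma = np.maximum(sigma, 1e-12)
            z = (f_min - mu) / sigma
            return (f_min - mu) * norm.cdf(z) + sigma * norm.pdf(z)

        def update_radius(radius, actual_improvement, ei, eta=0.1,
                          shrink=0.5, grow=2.0):
            # Grow the region when the surrogate's promise is realized,
            # shrink it otherwise (ratio of actual to expected improvement).
            ratio = actual_improvement / max(ei, 1e-12)
            return radius * (grow if ratio > eta else shrink)

        print(expected_improvement(np.array([0.5]), np.array([0.2]), f_min=0.6))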

  5. Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization

    Science.gov (United States)

    Abdulameer, Mohammed Hasan; Othman, Zulaiha Ali

    2014-01-01

    Existing face recognition methods utilize particle swarm optimizer (PSO) and opposition based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented. PMID:24790584

  6. Support Vector Machine Based on Adaptive Acceleration Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Mohammed Hasan Abdulameer

    2014-01-01

    Full Text Available Existing face recognition methods utilize particle swarm optimizer (PSO) and opposition based particle swarm optimizer (OPSO) to optimize the parameters of SVM. However, the utilization of random values in the velocity calculation decreases the performance of these techniques; that is, during the velocity computation, we normally use random values for the acceleration coefficients and this creates randomness in the solution. To address this problem, an adaptive acceleration particle swarm optimization (AAPSO) technique is proposed. To evaluate our proposed method, we employ both face and iris recognition based on AAPSO with SVM (AAPSO-SVM). In the face and iris recognition systems, performance is evaluated using two human face databases, YALE and CASIA, and the UBiris dataset. In this method, we initially perform feature extraction and then recognition on the extracted features. In the recognition process, the extracted features are used for SVM training and testing. During the training and testing, the SVM parameters are optimized with the AAPSO technique, and in AAPSO, the acceleration coefficients are computed using the particle fitness values. The parameters in SVM, which are optimized by AAPSO, perform efficiently for both face and iris recognition. A comparative analysis between our proposed AAPSO-SVM and the PSO-SVM technique is presented.
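
    A sketch of the velocity update this implies is given below; the exact fitness-to-coefficient mapping in the paper is not reproduced, so the normalized-fitness weighting (for a minimization problem) is an illustrative assumption.

        # Sketch: acceleration coefficients from particle fitness, not randomness.
        import numpy as np

        def aapso_velocity(v, x, pbest, gbest, fit, fit_best, fit_worst, w=0.7):
            # quality -> 1 for the best particle, 0 for the worst (minimization).
            quality = (fit_worst - fit) / max(fit_worst - fit_best, 1e-12)
            c1 = 0.5 + 1.5 * quality  # cognitive coefficient in [0.5, 2.0]
            c2 = 2.5 - c1             # social coefficient, complementary
            return w * v + c1 * (pbest - x) + c2 * (gbest - x)

        v = aapso_velocity(np.zeros(2), np.ones(2), np.zeros(2), np.full(2, -1.0),
                           fit=0.8, fit_best=0.1, fit_worst=1.0)
        print(v)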

  7. Identification method for gas-liquid two-phase flow regime based on singular value decomposition and least square support vector machine

    International Nuclear Information System (INIS)

    Sun Bin; Zhou Yunlong; Zhao Peng; Guan Yuebo

    2007-01-01

    Aiming at the non-stationary characteristics of differential pressure fluctuation signals of gas-liquid two-phase flow, and at the slow convergence and the liability of dropping into local minima of BP neural networks, a flow regime identification method based on Singular Value Decomposition (SVD) and Least Square Support Vector Machine (LS-SVM) is presented. First of all, the Empirical Mode Decomposition (EMD) method is used to decompose the differential pressure fluctuation signals of gas-liquid two-phase flow into a number of stationary Intrinsic Mode Functions (IMFs), from which the initial feature vector matrix is formed. By applying the singular value decomposition technique to the initial feature vector matrices, the singular values are obtained. Finally, the singular values serve as the flow regime characteristic vector input to the LS-SVM classifier, and flow regimes are identified by the output of the classifier. The identification results for four typical flow regimes of air-water two-phase flow in a horizontal pipe show that this method achieves a high identification rate. (authors)
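
    The feature-extraction chain reads, in outline, as below; the PyEMD package is assumed for the empirical mode decomposition (any EMD implementation would do), and the resulting singular values would feed the LS-SVM classifier.

        # Sketch: EMD -> IMF matrix -> singular values as flow-regime features.
        import numpy as np
        from PyEMD import EMD  # assumption: the EMD-signal (PyEMD) package

        def svd_flow_features(signal, n_features=6):
            imfs = EMD().emd(signal)                   # rows are the IMFs
            s = np.linalg.svd(imfs, compute_uv=False)  # singular value spectrum
            s = np.pad(s, (0, max(0, n_features - s.size)))[:n_features]
            return s  # characteristic vector for the downstream classifier

        t = np.linspace(0, 1, 1024)
        x = np.sin(40 * t) + 0.5 * np.sin(7 * t) + 0.1 * np.random.randn(t.size)
        print(svd_flow_features(x))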

  8. Low-rank canonical-tensor decomposition of potential energy surfaces: application to grid-based diagrammatic vibrational Green's function theory

    Science.gov (United States)

    Rai, Prashant; Sargsyan, Khachik; Najm, Habib; Hermes, Matthew R.; Hirata, So

    2017-09-01

    A new method is proposed for a fast evaluation of high-dimensional integrals of potential energy surfaces (PES) that arise in many areas of quantum dynamics. It decomposes a PES into a canonical low-rank tensor format, reducing its integral into a relatively short sum of products of low-dimensional integrals. The decomposition is achieved by the alternating least squares (ALS) algorithm, requiring only a small number of single-point energy evaluations. Therefore, it eradicates a force-constant evaluation as the hotspot of many quantum dynamics simulations and also possibly lifts the curse of dimensionality. This general method is applied to the anharmonic vibrational zero-point and transition energy calculations of molecules using the second-order diagrammatic vibrational many-body Green's function (XVH2) theory with a harmonic-approximation reference. In this application, high dimensional PES and Green's functions are both subjected to a low-rank decomposition. Evaluating the molecular integrals over a low-rank PES and Green's functions as sums of low-dimensional integrals using the Gauss-Hermite quadrature, this canonical-tensor-decomposition-based XVH2 (CT-XVH2) achieves an accuracy of 0.1 cm^-1 or higher and nearly an order of magnitude speedup as compared with the original algorithm using force constants for water and formaldehyde.

  9. Low-rank canonical-tensor decomposition of potential energy surfaces: application to grid-based diagrammatic vibrational Green's function theory

    International Nuclear Information System (INIS)

    Rai, Prashant; Sargsyan, Khachik; Najm, Habib; Hermes, Matthew R.; Hirata, So

    2017-01-01

    Here, a new method is proposed for a fast evaluation of high-dimensional integrals of potential energy surfaces (PES) that arise in many areas of quantum dynamics. It decomposes a PES into a canonical low-rank tensor format, reducing its integral into a relatively short sum of products of low-dimensional integrals. The decomposition is achieved by the alternating least squares (ALS) algorithm, requiring only a small number of single-point energy evaluations. Therefore, it eradicates a force-constant evaluation as the hotspot of many quantum dynamics simulations and also possibly lifts the curse of dimensionality. This general method is applied to the anharmonic vibrational zero-point and transition energy calculations of molecules using the second-order diagrammatic vibrational many-body Green's function (XVH2) theory with a harmonic-approximation reference. In this application, high dimensional PES and Green's functions are both subjected to a low-rank decomposition. Evaluating the molecular integrals over a low-rank PES and Green's functions as sums of low-dimensional integrals using the Gauss–Hermite quadrature, this canonical-tensor-decomposition-based XVH2 (CT-XVH2) achieves an accuracy of 0.1 cm^-1 or higher and nearly an order of magnitude speedup as compared with the original algorithm using force constants for water and formaldehyde.
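
    The core numerical step, a canonical (CP) decomposition fitted by ALS, can be sketched in a few lines for a 3-way tensor; this is a generic textbook ALS rather than the authors' implementation, and the rank is a user-chosen assumption.

        # Sketch: canonical-tensor (CP) decomposition of a 3-way array by ALS.
        import numpy as np

        def khatri_rao(A, B):
            # Column-wise Kronecker product, shape (I*J, R).
            return (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])

        def cp_als(T, rank, iters=200, seed=0):
            rng = np.random.default_rng(seed)
            dims = T.shape
            factors = [rng.standard_normal((d, rank)) for d in dims]
            for _ in range(iters):
                for n in range(3):
                    others = [factors[m] for m in range(3) if m != n]
                    kr = khatri_rao(*others)  # matches the C-order mode-n unfolding
                    gram = np.ones((rank, rank))
                    for m in range(3):
                        if m != n:
                            gram *= factors[m].T @ factors[m]
                    unfold = np.moveaxis(T, n, 0).reshape(dims[n], -1)
                    factors[n] = unfold @ kr @ np.linalg.pinv(gram)
            return factors

        # Recover a synthetic rank-2 tensor; the residual should be tiny.
        rng = np.random.default_rng(1)
        A, B, C = (rng.standard_normal((d, 2)) for d in (5, 6, 7))
        T = np.einsum("ir,jr,kr->ijk", A, B, C)
        Ah, Bh, Ch = cp_als(T, rank=2)
        print(np.linalg.norm(T - np.einsum("ir,jr,kr->ijk", Ah, Bh, Ch)))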

  10. Fault feature extraction method based on local mean decomposition Shannon entropy and improved kernel principal component analysis model

    Directory of Open Access Journals (Sweden)

    Jinlu Sheng

    2016-07-01

    Full Text Available To effectively extract the typical features of a bearing, a new method relating local mean decomposition (LMD) Shannon entropy and an improved kernel principal component analysis (KPCA) model is proposed. First, features are extracted by a time–frequency domain method: local mean decomposition separates the signal into product functions, and Shannon entropy is used to process the original separated product functions so as to obtain the initial features. However, the extracted features still contain superfluous information, so a nonlinear multi-feature fusion technique, kernel principal component analysis, is introduced to fuse the features. The kernel principal component analysis is improved by a weight factor. The extracted characteristic features are input into a Morlet wavelet kernel support vector machine to obtain a bearing running state classification model, by which the bearing running state is identified. Both test and actual cases were analyzed.

  11. A systematic optimization for graphene-based supercapacitors

    Science.gov (United States)

    Deuk Lee, Sung; Lee, Han Sung; Kim, Jin Young; Jeong, Jaesik; Kahng, Yung Ho

    2017-08-01

    Increasing the energy-storage density for supercapacitors is critical for their applications. Many researchers have attempted to identify optimal candidate component materials to achieve this goal, but investigations into systematically optimizing their mixing rate for maximizing the performance of each candidate material have been insufficient, which hinders the progress in their technology. In this study, we employ a statistically systematic method to determine the optimum mixing ratio of three components that constitute graphene-based supercapacitor electrodes: reduced graphene oxide (rGO), acetylene black (AB), and polyvinylidene fluoride (PVDF). By using the extreme-vertices design, the optimized proportion is determined to be (rGO: AB: PVDF  =  0.95: 0.00: 0.05). The corresponding energy-storage density increases by a factor of 2 compared with that of non-optimized electrodes. Electrochemical and microscopic analyses are performed to determine the reason for the performance improvements.

  12. Teaching learning based optimization algorithm and its engineering applications

    CERN Document Server

    Rao, R Venkata

    2016-01-01

    Describing a new optimization algorithm, the “Teaching-Learning-Based Optimization (TLBO),” in a clear and lucid style, this book maximizes reader insights into how the TLBO algorithm can be used to solve continuous and discrete optimization problems involving single or multiple objectives. As the algorithm operates on the principle of teaching and learning, where teachers influence the quality of learners’ results, the elitist version of TLBO algorithm (ETLBO) is described along with applications of the TLBO algorithm in the fields of electrical engineering, mechanical design, thermal engineering, manufacturing engineering, civil engineering, structural engineering, computer engineering, electronics engineering, physics and biotechnology. The book offers a valuable resource for scientists, engineers and practitioners involved in the development and usage of advanced optimization algorithms.

  13. EUD-based biological optimization for carbon ion therapy

    International Nuclear Information System (INIS)

    Brüningk, Sarah C.; Kamp, Florian; Wilkens, Jan J.

    2015-01-01

    Purpose: Treatment planning for carbon ion therapy requires an accurate modeling of the biological response of each tissue to estimate the clinical outcome of a treatment. The relative biological effectiveness (RBE) accounts for this biological response on a cellular level but does not refer to the actual impact on the organ as a whole. For photon therapy, the concept of equivalent uniform dose (EUD) represents a simple model to take the organ response into account, yet so far no formulation of EUD has been reported that is suitable to carbon ion therapy. The authors introduce the concept of an equivalent uniform effect (EUE) that is directly applicable to both ion and photon therapies and exemplarily implemented it as a basis for biological treatment plan optimization for carbon ion therapy. Methods: In addition to a classical EUD concept, which calculates a generalized mean over the RBE-weighted dose distribution, the authors propose the EUE to simplify the optimization process of carbon ion therapy plans. The EUE is defined as the biologically equivalent uniform effect that yields the same probability of injury as the inhomogeneous effect distribution in an organ. Its mathematical formulation is based on the generalized mean effect using an effect-volume parameter to account for different organ architectures and is thus independent of a reference radiation. For both EUD concepts, quadratic and logistic objective functions are implemented into a research treatment planning system. A flexible implementation allows choosing for each structure between biological effect constraints per voxel and EUD constraints per structure. Exemplary treatment plans are calculated for a head-and-neck patient for multiple combinations of objective functions and optimization parameters. Results: Treatment plans optimized using an EUE-based objective function were comparable to those optimized with an RBE-weighted EUD-based approach. In agreement with previous results from photon
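
    At the heart of both concepts is a generalized (power) mean; a minimal sketch of the classical EUD is given below, where replacing the RBE-weighted dose per voxel by a local biological effect would yield an EUE-style quantity (the parameter values are illustrative, not from the paper).

        # Sketch: generalized-mean EUD over a dose distribution.
        import numpy as np

        def generalized_eud(dose, volume, a):
            # EUD = (sum_i v_i * d_i**a)**(1/a); `a` is the volume-effect
            # (organ-architecture) parameter and v_i are fractional volumes.
            volume = np.asarray(volume, dtype=float)
            volume = volume / volume.sum()
            return float((volume * np.asarray(dose, dtype=float) ** a).sum() ** (1.0 / a))

        # Large `a` weights hot spots heavily, as in serial-type organs.
        print(generalized_eud([60.0, 55.0, 20.0], [0.2, 0.3, 0.5], a=8.0))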

  14. A Novel Consensus-Based Particle Swarm Optimization-Assisted Trust-Tech Methodology for Large-Scale Global Optimization.

    Science.gov (United States)

    Zhang, Yong-Feng; Chiang, Hsiao-Dong

    2017-09-01

    A novel three-stage methodology, termed the "consensus-based particle swarm optimization (PSO)-assisted Trust-Tech methodology," to find global optimal solutions for nonlinear optimization problems is presented. It is composed of Trust-Tech methods, consensus-based PSO, and local optimization methods that are integrated to compute a set of high-quality local optimal solutions that can contain the global optimal solution. The proposed methodology compares very favorably with several recently developed PSO algorithms based on a set of small-dimension benchmark optimization problems and 20 large-dimension test functions from the CEC 2010 competition. The analytical basis for the proposed methodology is also provided. Experimental results demonstrate that the proposed methodology can rapidly obtain high-quality optimal solutions that can contain the global optimal solution. The scalability of the proposed methodology is promising.

  15. Pareto-Ranking Based Quantum-Behaved Particle Swarm Optimization for Multiobjective Optimization

    Directory of Open Access Journals (Sweden)

    Na Tian

    2015-01-01

    Full Text Available A study on Pareto-ranking based quantum-behaved particle swarm optimization (QPSO) for multiobjective optimization problems is presented in this paper. During the iteration, an external repository is maintained to remember the nondominated solutions, from which the global best position is chosen. A comparison between different elitist selection strategies (preference order, sigma value, and random selection) is performed on four benchmark functions and two metrics. The results demonstrate that QPSO with preference order has performance comparable to sigma value, depending on the number of objectives. Finally, QPSO with sigma value is applied to solve multiobjective flexible job-shop scheduling problems.
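
    For reference, the quantum-behaved position update at the core of QPSO is compact enough to sketch in full; in the multiobjective variant, `gbest` would be drawn from the external repository of nondominated solutions (e.g., by sigma value), which is the part this paper studies.

        # Sketch: one QPSO position update for the whole swarm.
        import numpy as np

        def qpso_step(X, pbest, gbest, beta=0.75, rng=None):
            rng = rng or np.random.default_rng()
            n, d = X.shape
            mbest = pbest.mean(axis=0)              # mean of the personal bests
            phi = rng.random((n, d))
            p = phi * pbest + (1 - phi) * gbest     # local attractor per particle
            u = rng.random((n, d))
            sign = np.where(rng.random((n, d)) < 0.5, -1.0, 1.0)
            return p + sign * beta * np.abs(mbest - X) * np.log(1.0 / u)

        X = np.random.uniform(-5, 5, (10, 2))
        print(qpso_step(X, pbest=X.copy(), gbest=X[0]))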

  16. Wavelet optimization for content-based image retrieval in medical databases.

    Science.gov (United States)

    Quellec, G; Lamard, M; Cazuguel, G; Cochener, B; Roux, C

    2010-04-01

    We propose in this article a content-based image retrieval (CBIR) method for diagnosis aid in medical fields. In the proposed system, images are indexed in a generic fashion, without extracting domain-specific features: a signature is built for each image from its wavelet transform. These image signatures characterize the distribution of wavelet coefficients in each subband of the decomposition. A distance measure is then defined to compare two image signatures and thus retrieve the most similar images in a database when a query image is submitted by a physician. To retrieve relevant images from a medical database, the signatures and the distance measure must be related to the medical interpretation of images. As a consequence, we introduce several degrees of freedom in the system so that it can be tuned to any pathology and image modality. In particular, we propose to adapt the wavelet basis, within the lifting scheme framework, and to use a custom decomposition scheme. Weights are also introduced between subbands. All these parameters are tuned by an optimization procedure, using the medical grading of each image in the database to define a performance measure. The system is assessed on two medical image databases: one for diabetic retinopathy follow up and one for screening mammography, as well as a general purpose database. Results are promising: a mean precision of 56.50%, 70.91% and 96.10% is achieved for these three databases, when five images are returned by the system.
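
    In outline, the signature and distance components might look as follows (PyWavelets assumed; the per-subband statistic and the weights are the degrees of freedom that the paper's optimization procedure would tune):

        # Sketch: wavelet subband signatures and a weighted retrieval distance.
        import numpy as np
        import pywt

        def wavelet_signature(image, wavelet="db2", level=3):
            coeffs = pywt.wavedec2(image, wavelet, level=level)
            sig = [np.std(coeffs[0])]                  # approximation subband
            for cH, cV, cD in coeffs[1:]:              # detail subbands per level
                sig.extend([np.std(cH), np.std(cV), np.std(cD)])
            return np.array(sig)

        def signature_distance(s1, s2, weights=None):
            w = np.ones_like(s1) if weights is None else weights
            return float(np.sum(w * np.abs(s1 - s2)))  # weighted L1 between signatures

        rng = np.random.default_rng(0)
        img1, img2 = rng.random((64, 64)), rng.random((64, 64))
        print(signature_distance(wavelet_signature(img1), wavelet_signature(img2)))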

  17. On the hadron mass decomposition

    Science.gov (United States)

    Lorcé, Cédric

    2018-02-01

    We argue that the standard decompositions of the hadron mass overlook pressure effects, and hence should be interpreted with great care. Based on the semiclassical picture, we propose a new decomposition that properly accounts for these pressure effects. Because of Lorentz covariance, we stress that the hadron mass decomposition automatically comes along with a stability constraint, which we discuss for the first time. We show also that if a hadron is seen as made of quarks and gluons, one cannot decompose its mass into more than two contributions without running into trouble with the consistency of the physical interpretation. In particular, the so-called quark mass and trace anomaly contributions appear to be purely conventional. Based on the current phenomenological values, we find that on average quarks exert a repulsive force inside nucleons, balanced exactly by the gluon attractive force.

  18. On the hadron mass decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Lorce, Cedric [Universite Paris-Saclay, Centre de Physique Theorique, Ecole Polytechnique, CNRS, Palaiseau (France)

    2018-02-15

    We argue that the standard decompositions of the hadron mass overlook pressure effects, and hence should be interpreted with great care. Based on the semiclassical picture, we propose a new decomposition that properly accounts for these pressure effects. Because of Lorentz covariance, we stress that the hadron mass decomposition automatically comes along with a stability constraint, which we discuss for the first time. We show also that if a hadron is seen as made of quarks and gluons, one cannot decompose its mass into more than two contributions without running into trouble with the consistency of the physical interpretation. In particular, the so-called quark mass and trace anomaly contributions appear to be purely conventional. Based on the current phenomenological values, we find that on average quarks exert a repulsive force inside nucleons, balanced exactly by the gluon attractive force. (orig.)

  19. Simulation-based optimization of sustainable national energy systems

    International Nuclear Information System (INIS)

    Batas Bjelić, Ilija; Rajaković, Nikola

    2015-01-01

    The goals of the EU2030 energy policy should be achieved cost-effectively by employing the optimal mix of supply and demand side technical measures, including energy efficiency, renewable energy and structural measures. In this paper, the achievement of these goals is modeled by introducing an innovative method of soft-linking EnergyPLAN with the generic optimization program (GenOpt). This soft-link enables simulation-based optimization, guided by the chosen optimization algorithm, rather than manual adjustments of the decision vectors. In order to obtain EnergyPLAN simulations within the optimization loop of GenOpt, the decision vectors should be chosen and explained in GenOpt for scenarios created in EnergyPLAN. The result of the optimization loop is an optimal national energy master plan (as a case study, the energy policy of Serbia was taken), followed by a sensitivity analysis of the exogenous assumptions, with a focus on the contribution of the smart electricity grid to the achievement of the EU2030 goals. It is shown that the increase in the policy-induced total costs of less than 3% is not significant. This general method could be further improved and used worldwide in the optimal planning of sustainable national energy systems. - Highlights: • Innovative method of soft-linking of EnergyPLAN with GenOpt has been introduced. • Optimal national energy master plan has been developed (the case study for Serbia). • Sensitivity analysis on the exogenous world energy and emission price development outlook. • Focus on the contribution of smart energy systems to the EU2030 goals. • Innovative soft-linking methodology could be further improved and used worldwide.

  20. Limited-memory adaptive snapshot selection for proper orthogonal decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Oxberry, Geoffrey M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kostova-Vassilevska, Tanya [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Arrighi, Bill [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Chand, Kyle [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-04-02

    Reduced order models are useful for accelerating simulations in many-query contexts, such as optimization, uncertainty quantification, and sensitivity analysis. However, offline training of reduced order models can have prohibitively expensive memory and floating-point operation costs in high-performance computing applications, where memory per core is limited. To overcome this limitation for proper orthogonal decomposition, we propose a novel adaptive selection method for snapshots in time that limits offline training costs by selecting snapshots according to an error control mechanism similar to that found in adaptive time-stepping ordinary differential equation solvers. The error estimator used in this work is related to theory bounding the approximation error in time of proper orthogonal decomposition-based reduced order models, and memory usage is minimized by computing the singular value decomposition using a single-pass incremental algorithm. Results for a viscous Burgers' test problem demonstrate convergence in the limit as the algorithm error tolerances go to zero; in this limit, the full order model is recovered to within discretization error. The resulting method can be used on supercomputers to generate proper orthogonal decomposition-based reduced order models, or as a subroutine within hyperreduction algorithms that require taking snapshots in time, or within greedy algorithms for sampling parameter space.
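
    A stripped-down sketch of error-controlled snapshot selection is shown below: a snapshot enters the basis only when its projection error exceeds a tolerance. A genuinely single-pass incremental SVD, as used in the report, would also update the singular values, which this Gram-Schmidt variant omits.

        # Sketch: grow a POD basis only when the projection error is too large.
        import numpy as np

        def adaptive_pod(snapshot_stream, tol=1e-3):
            basis = None
            for s in snapshot_stream:
                if basis is None:
                    basis = (s / np.linalg.norm(s)).reshape(-1, 1)
                    continue
                residual = s - basis @ (basis.T @ s)   # projection error
                if np.linalg.norm(residual) > tol * np.linalg.norm(s):
                    q = residual / np.linalg.norm(residual)
                    basis = np.hstack([basis, q.reshape(-1, 1)])
            return basis

        # Toy stream of traveling profiles on a 1-D grid (Burgers'-like).
        x = np.linspace(0.0, 1.0, 200)
        stream = (np.exp(-100.0 * (x - 0.1 - 0.02 * k) ** 2) for k in range(40))
        V = adaptive_pod(stream, tol=1e-2)
        print(V.shape)  # (200, r): basis size r is chosen by the error control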

  1. An Image Morphing Technique Based on Optimal Mass Preserving Mapping

    Science.gov (United States)

    Zhu, Lei; Yang, Yan; Haker, Steven; Tannenbaum, Allen

    2013-01-01

    Image morphing, or image interpolation in the time domain, deals with the metamorphosis of one image into another. In this paper, a new class of image morphing algorithms is proposed based on the theory of optimal mass transport. The L2 mass moving energy functional is modified by adding an intensity penalizing term, in order to reduce the undesired double exposure effect. It is an intensity-based approach and, thus, is parameter free. The optimal warping function is computed using an iterative gradient descent approach. This proposed morphing method is also extended to doubly connected domains using a harmonic parameterization technique, along with finite-element methods. PMID:17547128

  2. A Framework for Constrained Optimization Problems Based on a Modified Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Biwei Tang

    2016-01-01

    Full Text Available This paper develops a particle swarm optimization (PSO) based framework for constrained optimization problems (COPs). Aiming at enhancing the performance of PSO, a modified PSO algorithm, named SASPSO 2011, is proposed by adding a newly developed self-adaptive strategy to the standard particle swarm optimization 2011 (SPSO 2011) algorithm. Since the convergence of PSO is of great importance and significantly influences the performance of PSO, this paper first theoretically investigates the convergence of SASPSO 2011. Then, a parameter selection principle guaranteeing the convergence of SASPSO 2011 is provided. Subsequently, a SASPSO 2011-based framework is established to solve COPs. Attempting to increase the diversity of solutions and decrease optimization difficulties, the adaptive relaxation method, which is combined with the feasibility-based rule, is applied to handle constraints of COPs and evaluate candidate solutions in the developed framework. Finally, the proposed method is verified through 4 benchmark test functions and 2 real-world engineering problems against six PSO variants and some well-known methods proposed in the literature. Simulation results confirm that the proposed method is highly competitive in terms of the solution quality and can be considered as a vital alternative to solve COPs.

  3. Optimization of DNA Sensor Model Based Nanostructured Graphene Using Particle Swarm Optimization Technique

    Directory of Open Access Journals (Sweden)

    Hediyeh Karimi

    2013-01-01

    Full Text Available It has been predicted that the nanomaterials of graphene will be among the candidate materials for postsilicon electronics due to their astonishing properties such as high carrier mobility, thermal conductivity, and biocompatibility. Graphene is a semimetal zero gap nanomaterial with demonstrated ability to be employed as an excellent candidate for DNA sensing. Graphene-based DNA sensors have been used to detect the DNA adsorption to examine a DNA concentration in an analyte solution. In particular, there is an essential need for developing the cost-effective DNA sensors holding the fact that it is suitable for the diagnosis of genetic or pathogenic diseases. In this paper, particle swarm optimization technique is employed to optimize the analytical model of a graphene-based DNA sensor which is used for electrical detection of DNA molecules. The results are reported for 5 different concentrations, covering a range from 0.01 nM to 500 nM. The comparison of the optimized model with the experimental data shows an accuracy of more than 95% which verifies that the optimized model is reliable for being used in any application of the graphene-based DNA sensor.

  4. Rapid Optimal Generation Algorithm for Terrain Following Trajectory Based on Optimal Control

    Institute of Scientific and Technical Information of China (English)

    杨剑影; 张海; 谢邦荣; 尹健

    2004-01-01

    Based on optimal control theory, a three-dimensional direct generation algorithm is proposed for anti-ground low-altitude penetration tasks over complex terrain. By optimizing the terrain following (TF) objective function, terrain coordinate system, missile dynamic model and control vector, the TF issue is turned into an improved optimal control problem whose mathematical model is simple and which does not require the second-order terrain derivative. Simulation results prove that this method is reasonable and feasible. The TF precision is in the range of 0.3 m to 3.0 m, and the planning time is less than 30 min. The method has the advantages of rapidity and precision, and great application value.

  5. Trafficability Analysis at Traffic Crossing and Parameters Optimization Based on Particle Swarm Optimization Method

    Directory of Open Access Journals (Sweden)

    Bin He

    2014-01-01

    Full Text Available In city traffic, it is important to improve transportation efficiency, and the spacing of a platoon should be shortened when crossing an intersection. The best method to deal with this problem is automatic control of vehicles. In this paper, a mathematical model is established for the platoon's longitudinal movement, and a systematic analysis of the longitudinal control law is presented for the platoon of vehicles. However, parameter calibration for the platoon model is relatively difficult because the model is complex and the parameters are coupled with each other. In this paper, the particle swarm optimization method is introduced to effectively optimize the parameters of the platoon. The proposed method effectively finds the optimal parameters based on simulations and makes the platoon spacing shorter.

  6. Sizing optimization of skeletal structures using teaching-learning based optimization

    Directory of Open Access Journals (Sweden)

    Vedat Toğan

    2017-03-01

    Full Text Available Teaching Learning Based Optimization (TLBO) is one of the non-traditional techniques that simulate natural phenomena in a numerical algorithm. TLBO mimics the teaching-learning process occurring between a teacher and students in a classroom. A parameter named the teaching factor, TF, seems to be the only tuning parameter in TLBO. Although the value of the teaching factor is determined by an equation, the value of 1 or 2 has been used by researchers for TF. This study intends to explore the effect of the variation of the teaching factor TF on the performance of TLBO. This effect is demonstrated by solving structural optimization problems, including truss and frame structures, under stress and displacement constraints. The results indicate that the variation of TF in the TLBO process does not change the results obtained at the end of the optimization procedure when the computational cost of TLBO is ignored.
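
    Both TLBO phases are compact enough to sketch; below, TF is drawn as 1 or 2, which is exactly the choice whose effect the study examines. This is a minimal sketch for an unconstrained objective, not the paper's constrained structural setup.

        # Sketch: one TLBO iteration (teacher phase + learner phase).
        import numpy as np

        def tlbo_iteration(X, f, rng=None):
            rng = rng or np.random.default_rng()
            n, d = X.shape
            fitness = np.array([f(x) for x in X])
            # Teacher phase: move learners toward the best solution found.
            teacher = X[fitness.argmin()]
            TF = rng.integers(1, 3)  # teaching factor: 1 or 2
            X_new = X + rng.random((n, d)) * (teacher - TF * X.mean(axis=0))
            # Learner phase: learn from a randomly chosen peer.
            for i in range(n):
                j = rng.choice([k for k in range(n) if k != i])
                step = X[j] - X[i] if f(X[j]) < f(X[i]) else X[i] - X[j]
                candidate = X_new[i] + rng.random(d) * step
                if f(candidate) < f(X_new[i]):
                    X_new[i] = candidate
            # Greedy selection against the previous generation.
            keep = np.array([f(a) < b for a, b in zip(X_new, fitness)])
            X[keep] = X_new[keep]
            return X

        X = np.random.uniform(-5.0, 5.0, (15, 2))
        for _ in range(50):
            X = tlbo_iteration(X, lambda x: float(np.sum(x ** 2)))
        print(min(np.sum(x ** 2) for x in X))  # approaches 0 on the sphere function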

  7. Grey Wolf Optimizer Based on Powell Local Optimization Method for Clustering Analysis

    Directory of Open Access Journals (Sweden)

    Sen Zhang

    2015-01-01

    Full Text Available One heuristic evolutionary algorithm recently proposed is the grey wolf optimizer (GWO), inspired by the leadership hierarchy and hunting mechanism of grey wolves in nature. This paper presents an extended GWO algorithm based on the Powell local optimization method, which we call PGWO. The PGWO algorithm significantly improves the original GWO in solving complex optimization problems. Clustering is a popular data analysis and data mining technique; hence, the PGWO could be applied in solving clustering problems. In this study, the PGWO algorithm is first tested on seven benchmark functions. Second, the PGWO algorithm is used for data clustering on nine data sets. Compared to other state-of-the-art evolutionary algorithms, the results of benchmark and data clustering demonstrate the superior performance of the PGWO algorithm.

  8. Cooperative Coevolution with Formula-Based Variable Grouping for Large-Scale Global Optimization.

    Science.gov (United States)

    Wang, Yuping; Liu, Haiyan; Wei, Fei; Zong, Tingting; Li, Xiaodong

    2017-08-09

    For a large-scale global optimization (LSGO) problem, divide-and-conquer is usually considered an effective strategy to decompose the problem into smaller subproblems, each of which can then be solved individually. Among these decomposition methods, variable grouping is shown to be promising in recent years. Existing variable grouping methods usually assume the problem to be black-box (i.e., assuming that an analytical model of the objective function is unknown), and they attempt to learn appropriate variable grouping that would allow for a better decomposition of the problem. In such cases, these variable grouping methods do not make a direct use of the formula of the objective function. However, it can be argued that many real-world problems are white-box problems, that is, the formulas of objective functions are often known a priori. These formulas of the objective functions provide rich information which can then be used to design an effective variable group method. In this article, a formula-based grouping strategy (FBG) for white-box problems is first proposed. It groups variables directly via the formula of an objective function which usually consists of a finite number of operations (i.e., the four arithmetic operations "+", "−", "×", "÷" and composite operations of basic elementary functions). In FBG, the operations are classified into two classes: one resulting in nonseparable variables, and the other resulting in separable variables. In FBG, variables can be automatically grouped into a suitable number of non-interacting subcomponents, with variables in each subcomponent being interdependent. FBG can easily be applied to any white-box problem and can be integrated into a cooperative coevolution framework. Based on FBG, a novel cooperative coevolution algorithm with formula-based variable grouping (so-called CCF) is proposed in this article for decomposing a large-scale white-box problem

  9. Reliability-based performance simulation for optimized pavement maintenance

    International Nuclear Information System (INIS)

    Chou, Jui-Sheng; Le, Thanh-Son

    2011-01-01

    Roadway pavement maintenance is essential for driver safety and highway infrastructure efficiency. However, regular preventive maintenance and rehabilitation (M and R) activities are extremely costly. Unfortunately, the funds available for the M and R of highway pavement are often given lower priority compared to other national development policies; therefore, available funds must be allocated wisely. Maintenance strategies are typically implemented by optimizing only the cost whilst the reliability of facility performance is neglected. This study proposes a novel algorithm using the multi-objective particle swarm optimization (MOPSO) technique to evaluate the cost-reliability tradeoff in a flexible maintenance strategy based on non-dominant solutions. Moreover, a probabilistic model for regression parameters is employed to assess reliability-based performance. A numerical example of a highway pavement project is illustrated to demonstrate the efficacy of the proposed MOPSO algorithms. The analytical results show that the proposed approach can help decision makers to optimize roadway maintenance plans. - Highlights: • A novel algorithm using multi-objective particle swarm optimization technique. • Evaluation of the cost-reliability tradeoff in a flexible maintenance strategy. • A probabilistic model for regression parameters is employed to assess reliability-based performance. • The proposed approach can help decision makers to optimize roadway maintenance plans.

  10. Reliability-based performance simulation for optimized pavement maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Chou, Jui-Sheng, E-mail: jschou@mail.ntust.edu.tw [Department of Construction Engineering, National Taiwan University of Science and Technology (Taiwan Tech), 43 Sec. 4, Keelung Rd., Taipei 106, Taiwan (China); Le, Thanh-Son [Department of Construction Engineering, National Taiwan University of Science and Technology (Taiwan Tech), 43 Sec. 4, Keelung Rd., Taipei 106, Taiwan (China)

    2011-10-15

    Roadway pavement maintenance is essential for driver safety and highway infrastructure efficiency. However, regular preventive maintenance and rehabilitation (M and R) activities are extremely costly. Unfortunately, the funds available for the M and R of highway pavement are often given lower priority compared to other national development policies; therefore, available funds must be allocated wisely. Maintenance strategies are typically implemented by optimizing only the cost whilst the reliability of facility performance is neglected. This study proposes a novel algorithm using the multi-objective particle swarm optimization (MOPSO) technique to evaluate the cost-reliability tradeoff in a flexible maintenance strategy based on non-dominant solutions. Moreover, a probabilistic model for regression parameters is employed to assess reliability-based performance. A numerical example of a highway pavement project is illustrated to demonstrate the efficacy of the proposed MOPSO algorithms. The analytical results show that the proposed approach can help decision makers to optimize roadway maintenance plans. - Highlights: • A novel algorithm using multi-objective particle swarm optimization technique. • Evaluation of the cost-reliability tradeoff in a flexible maintenance strategy. • A probabilistic model for regression parameters is employed to assess reliability-based performance. • The proposed approach can help decision makers to optimize roadway maintenance plans.

  11. Protection from wintertime rainfall reduces nutrient losses and greenhouse gas emissions during the decomposition of poultry and horse manure-based amendments.

    Science.gov (United States)

    Maltais-Landry, Gabriel; Neufeld, Katarina; Poon, David; Grant, Nicholas; Nesic, Zoran; Smukler, Sean

    2018-04-01

    Manure-based soil amendments (herein "amendments") are important fertility sources, but differences among amendment types and management can significantly affect their nutrient value and environmental impacts. A 6-month in situ decomposition experiment was conducted to determine how protection from wintertime rainfall affected nutrient losses and greenhouse gas (GHG) emissions in poultry (broiler chicken and turkey) and horse amendments. Changes in total nutrient concentration were measured every 3 months, changes in ammonium (NH4+) and nitrate (NO3-) concentrations every month, and GHG emissions of carbon dioxide (CO2), methane (CH4), and nitrous oxide (N2O) every 7-14 days. Poultry amendments maintained higher nutrient concentrations (except for K), higher emissions of CO2 and N2O, and lower CH4 emissions than horse amendments. Exposing amendments to rainfall increased total N and NH4+ losses in poultry amendments, P losses in turkey and horse amendments, and K losses and cumulative N2O emissions for all amendments. However, it did not affect CO2 or CH4 emissions. Overall, rainfall exposure would decrease total N inputs by 37% (horse), 59% (broiler chicken), or 74% (turkey) for a given application rate (wet weight basis) after 6 months of decomposition, with similar losses for NH4+ (69-96%), P (41-73%), and K (91-97%). This study confirms the benefits of facilities protected from rainfall to reduce nutrient losses and GHG emissions during amendment decomposition. The impact of rainfall protection on nutrient losses and GHG emissions was monitored during the decomposition of broiler chicken, turkey, and horse manure-based soil amendments. Amendments exposed to rainfall had large ammonium and potassium losses, resulting in a 37-74% decrease in N inputs when compared with amendments protected from rainfall. Nitrous oxide emissions were also higher with rainfall exposure, although it had no effect on carbon dioxide and methane emissions.

  12. Optimization for PET imaging based on phantom study and NEC density

    International Nuclear Information System (INIS)

    Daisaki, Hiromitsu; Shimada, Naoki; Shinohara, Hiroyuki

    2012-01-01

    In consideration of the requirements for global standardization and quality control of PET imaging, the present study outlines a phantom study to decide both scan and reconstruction parameters based on the FDG-PET/CT procedure guideline in Japan, followed by an optimization of scan duration based on NEC density. In the phantom study, scan and reconstruction parameters were decided by visual assessment and physical indexes (N10mm, NECphantom, QH,10mm/N10mm) so as to visualize a hot spot of 10 mm diameter with standardized uptake value (SUV)=4 explicitly. Simultaneously, the Recovery Coefficient (RC) was evaluated to confirm that PET images were sufficiently quantitative. Scan durations were optimized by Body Mass Index (BMI) based on a retrospective analysis of NEC density. The correlation between the visual score of clinical FDG-PET images and NEC density fell after the optimization of scan duration. Both inter-institution and inter-patient variability were decreased by performing the phantom study based on the procedure guideline and optimizing the scan duration based on NEC density, which seems useful for practicing highly precise examinations and promoting high-quality controlled studies. (author)

  13. Group search optimiser-based optimal bidding strategies with no Karush-Kuhn-Tucker optimality conditions

    Science.gov (United States)

    Yadav, Naresh Kumar; Kumar, Mukesh; Gupta, S. K.

    2017-03-01

    General strategic bidding procedure has been formulated in the literature as a bi-level searching problem, in which the offer curve tends to minimise the market clearing function and to maximise the profit. Computationally, this is complex, and hence researchers have adopted Karush-Kuhn-Tucker (KKT) optimality conditions to transform the model into a single-level maximisation problem. However, the profit maximisation problem with KKT optimality conditions poses a great challenge to classical optimisation algorithms, and the problem has become more complex after the inclusion of transmission constraints. This paper simplifies the profit maximisation problem as a minimisation function, in which the transmission constraints, the operating limits and the ISO market clearing functions are considered with no KKT optimality conditions. The derived function is solved using the group search optimiser (GSO), a robust population-based optimisation algorithm. Experimental investigation is carried out on IEEE 14- as well as IEEE 30-bus systems, and the performance is compared against differential evolution-based strategic bidding, genetic algorithm-based strategic bidding and particle swarm optimisation-based strategic bidding methods. The simulation results demonstrate that the profit obtained through GSO-based bidding strategies is higher than with the other three methods.

  14. Radiation protection optimization using a knowledge based methodology

    International Nuclear Information System (INIS)

    Reyes-Jimenez, J.; Tsoukalas, L.H.

    1991-01-01

    This paper presents a knowledge based methodology for radiological planning and radiation protection optimization. The cost-benefit methodology described in International Commission on Radiological Protection Report No. 37 is employed within a knowledge based framework for the purpose of planning maintenance activities while optimizing radiation protection. 1, 2 The methodology is demonstrated through an application to a heating, ventilation and air conditioning (HVAC) system. HVAC is used to reduce radioactivity concentration levels in selected contaminated multi-compartment models at nuclear power plants when higher than normal radiation levels are detected. The overall objective is to reduce personnel exposure resulting from airborne radioactivity when routine or maintenance access is required in contaminated areas. 2 figs, 15 refs

  15. ENERGY OPTIMIZATION IN CLUSTER BASED WIRELESS SENSOR NETWORKS

    Directory of Open Access Journals (Sweden)

    T. SHANKAR

    2014-04-01

    Full Text Available Wireless sensor networks (WSN) are made up of sensor nodes which are usually battery-operated devices, and hence energy saving of sensor nodes is a major design issue. To prolong the network's lifetime, minimization of energy consumption should be implemented at all layers of the network protocol stack, starting from the physical layer up to the application layer, including cross-layer optimization. Optimizing energy consumption is the main concern when designing and planning the operation of a WSN. The clustering technique is one of the methods utilized to extend the lifetime of the network by applying data aggregation and balancing energy consumption among the sensor nodes. This paper proposes new versions of the Low Energy Adaptive Clustering Hierarchy (LEACH) protocol, called Advanced Optimized Low Energy Adaptive Clustering Hierarchy (AOLEACH), Optimal Deterministic Low Energy Adaptive Clustering Hierarchy (ODLEACH), and Varying Probability Distance Low Energy Adaptive Clustering Hierarchy (VPDL), in combination with the Shuffled Frog Leaping Algorithm (SFLA), that enable selecting the best adaptive cluster heads using an improved threshold energy distribution compared to the LEACH protocol, and rotating the cluster head position for uniform energy dissipation based on energy levels. The proposed algorithms optimize the lifetime of the network by increasing the first node death (FND) time and the number of alive nodes, thereby increasing the lifetime of the network.
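
    For context, all of these variants modify the classic LEACH cluster-head election threshold, sketched below: node n volunteers as cluster head in round r with probability T(n), where p is the desired cluster-head fraction.

        # Sketch: the classic LEACH cluster-head election threshold T(n).
        import random

        def leach_threshold(p, r, is_candidate):
            # is_candidate: node has not served as head in the last 1/p rounds.
            if not is_candidate:
                return 0.0
            return p / (1.0 - p * (r % int(round(1.0 / p))))

        def elects_itself(p, r, is_candidate):
            return random.random() < leach_threshold(p, r, is_candidate)

        # With p = 0.1, the threshold rises each round until it reaches 1.0
        # at r = 9, guaranteeing every candidate serves once per 10 rounds.
        print([elects_itself(0.1, r, True) for r in range(10)])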

  16. Enhancing product robustness in reliability-based design optimization

    International Nuclear Information System (INIS)

    Zhuang, Xiaotian; Pan, Rong; Du, Xiaoping

    2015-01-01

    Different types of uncertainties need to be addressed in a product design optimization process. In this paper, the uncertainties in both product design variables and environmental noise variables are considered. Reliability-based design optimization (RBDO) is integrated with robust product design (RPD) to concurrently reduce the production cost and the long-term operation cost, including quality loss, in the process of product design. This problem leads to a multi-objective optimization with probabilistic constraints. In addition, the model uncertainties associated with a surrogate model derived from numerical computation methods, such as finite element analysis, are addressed. A hierarchical experimental design approach, augmented by a sequential sampling strategy, is proposed to construct the response surface of the product performance function for finding optimal design solutions. The proposed method is demonstrated through an engineering example. - Highlights: • A unifying framework for integrating RBDO and RPD is proposed. • An implicit product performance function is considered. • The design problem is solved by sequential optimization and reliability assessment. • A sequential sampling technique is developed for improving design optimization. • A comparison with traditional RBDO is provided.
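
    To make the notion of a probabilistic constraint concrete, the sketch below solves a toy one-dimensional RBDO problem: minimise a cost over the design variable d subject to a Monte Carlo estimate of the failure probability P[g(d, X) < 0] staying below a target. The limit state, noise model and numbers are invented for illustration; the paper's surrogate-based sequential sampling is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
P_TARGET = 0.01                          # allowed failure probability
X = rng.normal(0.0, 0.3, size=20_000)    # environmental noise samples

def cost(d):
    return d ** 2                        # assumed production-cost proxy

def failure_prob(d):
    g = d + X - 1.0                      # assumed limit state: fail if g < 0
    return np.mean(g < 0.0)

# Brute-force search over a 1-D design grid; a surrogate model with
# sequential sampling would replace this loop in the paper's setting.
grid = np.linspace(0.5, 3.0, 251)
feasible = [d for d in grid if failure_prob(d) <= P_TARGET]
d_star = min(feasible, key=cost)
print(d_star, cost(d_star), failure_prob(d_star))
```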

  17. Design Optimization of Mechanical Components Using an Enhanced Teaching-Learning Based Optimization Algorithm with Differential Operator

    Directory of Open Access Journals (Sweden)

    B. Thamaraikannan

    2014-01-01

    Full Text Available This paper studies in detail the background and implementation of a teaching-learning based optimization (TLBO) algorithm with a differential operator for the optimization of a few mechanical components that are essential to most mechanical engineering applications. Like most other heuristic techniques, TLBO is a population-based method and uses a population of solutions to proceed to the global solution. A differential operator is incorporated into TLBO for an effective search for better solutions. To validate the effectiveness of the proposed method, three typical optimization problems are considered: first, minimizing the weight of a belt-pulley drive; second, minimizing the volume of a closed-coil helical spring; and finally, minimizing the weight of a hollow shaft. Simulation results on these mechanical component optimization problems reveal the ability of the proposed methodology to find better optimal solutions than other optimization algorithms.
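
    A compact sketch of TLBO with a DE-style differential perturbation, of the kind the record describes, is given below on a simple sphere objective. The objective and all constants are illustrative stand-ins for the belt-pulley, helical-spring and hollow-shaft design problems.

```python
import numpy as np

def sphere(x):
    return float(np.sum(x ** 2))

def tlbo_de(f, dim=10, n=30, iters=300, lo=-5.0, hi=5.0, F=0.5, seed=0):
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lo, hi, (n, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(iters):
        teacher = pop[np.argmin(fit)]
        mean = pop.mean(axis=0)
        for i in range(n):
            # Teacher phase: move toward the teacher, away from the mean.
            TF = rng.integers(1, 3)            # teaching factor, 1 or 2
            trial = pop[i] + rng.random(dim) * (teacher - TF * mean)
            # Differential operator: add a scaled difference of two members.
            r1, r2 = rng.choice(n, size=2, replace=False)
            trial = np.clip(trial + F * (pop[r1] - pop[r2]), lo, hi)
            if f(trial) < fit[i]:
                pop[i], fit[i] = trial, f(trial)
            # Learner phase: learn from a randomly chosen classmate.
            j = rng.integers(n)
            step = pop[j] - pop[i] if fit[j] < fit[i] else pop[i] - pop[j]
            trial = np.clip(pop[i] + rng.random(dim) * step, lo, hi)
            if f(trial) < fit[i]:
                pop[i], fit[i] = trial, f(trial)
    return pop[np.argmin(fit)], fit.min()

best, val = tlbo_de(sphere)
print(val)
```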

  18. A modified teaching–learning based optimization for multi-objective optimal power flow problem

    International Nuclear Information System (INIS)

    Shabanpour-Haghighi, Amin; Seifi, Ali Reza; Niknam, Taher

    2014-01-01

    Highlights: • A new modified teaching–learning based algorithm is proposed. • A self-adaptive wavelet mutation strategy is used to enhance the performance. • To avoid reaching a large repository size, a fuzzy clustering technique is used. • An efficient, smart population selection is utilized. • Simulations show the superiority of this algorithm compared with other ones. - Abstract: In this paper, a modified teaching–learning based optimization algorithm is analyzed to solve the multi-objective optimal power flow problem, considering the total fuel cost and total emission of the units. The modified phase of the optimization algorithm utilizes a self-adaptive wavelet mutation strategy. Moreover, a fuzzy clustering technique is proposed to avoid an extremely large repository size, alongside a smart population selection for the next iteration. These techniques make the algorithm search a larger space to find the optimal solutions while retaining a good convergence speed. The IEEE 30-bus and 57-bus systems are used to illustrate the performance of the proposed algorithm, and the results are compared with those in the literature. It is verified that the proposed approach performs better than other techniques.
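
    The self-adaptive wavelet mutation can be sketched as follows: a Morlet-type mother wavelet, evaluated at a random point and damped by a dilation parameter that grows with the iteration count, yields a mutation magnitude that is large early on and small near convergence. The shape constants below are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def wavelet_mutation(x, lo, hi, t, t_max, rng, g=10_000.0, shape=5.0):
    # Dilation a runs from 1 (start) to g (end), damping the wavelet output.
    a = np.exp(-np.log(g) * (1.0 - t / t_max) ** shape + np.log(g))
    phi = rng.uniform(-2.5 * a, 2.5 * a)
    # Morlet-type mother wavelet: exp(-u^2/2) * cos(5u), scaled by 1/sqrt(a).
    u = phi / a
    sigma = (1.0 / np.sqrt(a)) * np.exp(-u ** 2 / 2.0) * np.cos(5.0 * u)
    # Positive sigma pushes x toward the upper bound, negative toward the lower,
    # so the mutant always stays inside [lo, hi].
    return x + sigma * ((hi - x) if sigma > 0 else (x - lo))

rng = np.random.default_rng(0)
x = 0.3
for t in range(1, 101):
    x = wavelet_mutation(x, 0.0, 1.0, t, 100, rng)
print(x)
```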

  19. MVMO-based approach for optimal placement and tuning of ...

    African Journals Online (AJOL)

    DR OKE

    ... differential evolution (DE) algorithm with adaptive crossover operator ... are assigned by using a sequential scheme which accounts for mean and ... the representative scenarios from a probabilistic model based on Monte Carlo ... Comparison of average convergence of MVMO-S with other metaheuristic optimization methods.

  20. Runtime Optimizations for Tree-Based Machine Learning Models

    NARCIS (Netherlands)

    N. Asadi; J.J.P. Lin (Jimmy); A.P. de Vries (Arjen)

    2014-01-01

    Tree-based models have proven to be an effective solution for web ranking as well as other machine learning problems in diverse domains. This paper focuses on optimizing the runtime performance of applying such models to make predictions, specifically using gradient-boosted regression trees (GBRTs).
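
    One runtime idea in this space can be sketched in a few lines: store the tree in flat, contiguous arrays rather than as linked node objects, so prediction is a tight, cache-friendly loop. The tiny hand-built tree below is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

# Node i tests feature[feat[i]] <= thresh[i]; children sit at left[i] and
# right[i]. Leaves are flagged with feat[i] == -1 and store their output
# value in thresh[i].
feat   = np.array([0,   1,   -1,  -1,  -1])
thresh = np.array([0.5, 0.3, 1.0, 2.0, 3.0])
left   = np.array([1,   2,   -1,  -1,  -1])
right  = np.array([4,   3,   -1,  -1,  -1])

def predict(x):
    i = 0
    while feat[i] != -1:   # descend until a leaf is reached
        i = left[i] if x[feat[i]] <= thresh[i] else right[i]
    return thresh[i]

print(predict(np.array([0.2, 0.7])))  # -> 2.0
```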