WorldWideScience

Sample records for matrix optimization procedures

  1. Sequential optimization of matrix chain multiplication relative to different cost functions

    KAUST Repository

    Chikalov, Igor; Hussain, Shahid; Moshkov, Mikhail

    2011-01-01

In this paper, we present a methodology to optimize matrix chain multiplication sequentially relative to different cost functions, such as the total number of scalar multiplications and the communication overhead in a multiprocessor environment. For n matrices, our optimization procedure requires O(n^3) arithmetic operations per cost function. This work is done in the framework of a dynamic programming extension that allows sequential optimization relative to different criteria. © 2011 Springer-Verlag Berlin Heidelberg.
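
    The abstract refers to the O(n^3) dynamic program over matrix chains. The sketch below shows only the classic single-criterion version of that recurrence (minimizing scalar multiplications); the paper's sequential multi-criteria extension is not reproduced, and the function name and example dimensions are illustrative.

```python
# Classic O(n^3) dynamic program for matrix chain multiplication, minimizing
# the total number of scalar multiplications (single-criterion baseline only).

def matrix_chain_order(dims):
    """dims[i-1] x dims[i] is the size of matrix A_i, for i = 1..n."""
    n = len(dims) - 1
    # cost[i][j]: minimal scalar multiplications to compute A_i ... A_j (1-based)
    cost = [[0] * (n + 1) for _ in range(n + 1)]
    split = [[0] * (n + 1) for _ in range(n + 1)]
    for length in range(2, n + 1):               # chain length
        for i in range(1, n - length + 2):
            j = i + length - 1
            cost[i][j] = float("inf")
            for k in range(i, j):                # split point
                q = cost[i][k] + cost[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                if q < cost[i][j]:
                    cost[i][j], split[i][j] = q, k
    return cost[1][n], split

# Example: A1 is 10x30, A2 is 30x5, A3 is 5x60 -> optimal cost 4500
best_cost, _ = matrix_chain_order([10, 30, 5, 60])
print(best_cost)
```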

  2. Optimized Projection Matrix for Compressive Sensing

    Directory of Open Access Journals (Sweden)

    Jianping Xu

    2010-01-01

Full Text Available Compressive sensing (CS) is mainly concerned with low-coherence pairs, since the number of samples needed to recover the signal is proportional to the mutual coherence between the projection matrix and the sparsifying matrix. Until now, papers on CS have typically assumed the projection matrix to be a random matrix. In this paper, aiming at minimizing the mutual coherence, a method is proposed to optimize the projection matrix. This method is based on equiangular tight frame (ETF) design, because an ETF has minimum coherence. Since the problem cannot be solved exactly due to its complexity, an alternating minimization type method is used to find a feasible solution. The optimally designed projection matrix can further reduce the number of samples necessary for recovery or improve the recovery accuracy. The proposed method demonstrates better performance than conventional optimization methods, which brings benefits to both basis pursuit and orthogonal matching pursuit.
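
    As a minimal sketch of the quantity being minimized here, the following computes the mutual coherence of the equivalent dictionary formed by a projection matrix and a sparsifying basis; the ETF-based alternating minimization of the paper is not reproduced, and the random matrices are purely illustrative.

```python
import numpy as np

def mutual_coherence(Phi, Psi):
    """Largest absolute normalized inner product between columns of the
    equivalent dictionary D = Phi @ Psi."""
    D = Phi @ Psi
    D = D / np.linalg.norm(D, axis=0, keepdims=True)   # unit-norm columns
    G = np.abs(D.T @ D)                                # Gram matrix
    np.fill_diagonal(G, 0.0)                           # ignore self-products
    return G.max()

# Example: random Gaussian projection vs. an orthonormal sparsifying basis
rng = np.random.default_rng(0)
Phi = rng.standard_normal((20, 64))                    # m x n projection matrix
Psi = np.linalg.qr(rng.standard_normal((64, 64)))[0]   # orthonormal basis
print(mutual_coherence(Phi, Psi))
```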

  3. Application of mixture experimental design in the formulation and optimization of matrix tablets containing carbomer and hydroxy-propylmethylcellulose.

    Science.gov (United States)

    Petrovic, Aleksandra; Cvetkovic, Nebojsa; Ibric, Svetlana; Trajkovic, Svetlana; Djuric, Zorica; Popadic, Dragica; Popovic, Radmila

    2009-12-01

Using mixture experimental design, the effect of the carbomer (Carbopol® 971P NF) and hydroxypropylmethylcellulose (Methocel® K100M or Methocel® K4M) combination on the release profile and on the mechanism of drug liberation from matrix tablets was investigated. A numerical optimization procedure was also applied to establish and obtain a formulation with the desired drug release. The amount of TP released, the release rate and the mechanism varied with the carbomer ratio in the total matrix and the HPMC viscosity. Increasing carbomer fractions led to a decrease in drug release. Anomalous diffusion was found in all matrices containing carbomer, while Case-II transport was predominant for tablets based on HPMC only. The predicted and obtained profiles for the optimized formulations showed similarity. These results indicate that simplex lattice mixture experimental design and a numerical optimization procedure can be applied during development to obtain a sustained-release matrix formulation with the desired release profile.

  4. Comparison of transition-matrix sampling procedures

    DEFF Research Database (Denmark)

    Yevick, D.; Reimer, M.; Tromborg, Bjarne

    2009-01-01

We compare the accuracy of the multicanonical procedure with that of transition-matrix models of static and dynamic communication system properties incorporating different acceptance rules. We find that for appropriate ranges of the underlying numerical parameters, algorithmically simple yet highly accurate procedures can be employed in place of the standard multicanonical sampling algorithm.

  5. Optimal pinnate leaf-like network/matrix structure for enhanced conductive cooling

    International Nuclear Information System (INIS)

    Hu, Liguo; Zhou, Han; Zhu, Hanxing; Fan, Tongxiang; Zhang, Di

    2015-01-01

Highlights: • We present a pinnate leaf-like network/matrix structure for conductive cooling. • We study the effect of matrix thickness on network conductive cooling performance. • Matrix thickness determines the optimal distance between collection channels in the network. • We determine the optimal network architecture from a global perspective. • The optimal network greatly reduces the maximum temperature difference in the network. - Abstract: Heat generated in electronic devices has to be effectively removed because excessive temperature strongly impairs their performance and reliability. Embedding a high-thermal-conductivity network into an electronic device is an effective method to conduct the generated heat to the outside. In this study, inspired by the pinnate leaf, we present a pinnate leaf-like network embedded in the matrix (i.e., the electronic device) to cool the matrix by conduction and develop a method to construct the optimal network. In this method, we first investigate the effect of the matrix thickness on the conductive cooling performance of the network, and then optimize the network architecture from a global perspective so as to minimize the maximum temperature difference between the heat sink and the matrix. The results indicate that the matrix thickness determines the optimal distance between neighboring collection channels in the network, which minimizes the maximum temperature difference between the matrix and the network, and that the optimal network greatly reduces the maximum temperature difference in the network. The results can serve as a design guide for efficient conductive cooling of electronic devices.

  6. Matrix-assisted laser desorption/ionization sample preparation optimization for structural characterization of poly(styrene-co-pentafluorostyrene) copolymers

    International Nuclear Information System (INIS)

    Tisdale, Evgenia; Kennedy, Devin; Wilkins, Charles

    2014-01-01

Graphical abstract: -- Highlights: •We optimized sample preparation for MALDI TOF poly(styrene-co-pentafluorostyrene) copolymers. •Influence of matrix choice was investigated. •Influence of matrix/analyte ratio was examined. •Influence of analyte/salt ratio (for Ag+ salt) was studied. -- Abstract: The influence of the sample preparation parameters (the choice of the matrix, matrix:analyte ratio, salt:analyte ratio) was investigated and optimal conditions were established for the MALDI time-of-flight mass spectrometry analysis of poly(styrene-co-pentafluorostyrene) copolymers. These were synthesized by atom transfer radical polymerization. Use of 2,5-dihydroxybenzoic acid as matrix resulted in spectra with consistently high ion yields for all matrix:analyte:salt ratios tested. The optimized MALDI procedure was successfully applied to the characterization of three copolymers obtained by varying the conditions of the polymerization reaction. It was possible to establish the nature of the end groups, calculate molecular weight distributions, and determine the individual length distributions for the styrene and pentafluorostyrene monomers contained in the resulting copolymers. Based on the data obtained, it was concluded that the individual styrene chain length distributions are more sensitive to a change in the composition of the catalyst (the addition of a small amount of CuBr2) than is the pentafluorostyrene component distribution.

  7. Matrix-assisted laser desorption/ionization sample preparation optimization for structural characterization of poly(styrene-co-pentafluorostyrene) copolymers

    Energy Technology Data Exchange (ETDEWEB)

    Tisdale, Evgenia; Kennedy, Devin; Wilkins, Charles, E-mail: cwilkins@uark.edu

    2014-01-15

Graphical abstract: -- Highlights: •We optimized sample preparation for MALDI TOF poly(styrene-co-pentafluorostyrene) copolymers. •Influence of matrix choice was investigated. •Influence of matrix/analyte ratio was examined. •Influence of analyte/salt ratio (for Ag+ salt) was studied. -- Abstract: The influence of the sample preparation parameters (the choice of the matrix, matrix:analyte ratio, salt:analyte ratio) was investigated and optimal conditions were established for the MALDI time-of-flight mass spectrometry analysis of poly(styrene-co-pentafluorostyrene) copolymers. These were synthesized by atom transfer radical polymerization. Use of 2,5-dihydroxybenzoic acid as matrix resulted in spectra with consistently high ion yields for all matrix:analyte:salt ratios tested. The optimized MALDI procedure was successfully applied to the characterization of three copolymers obtained by varying the conditions of the polymerization reaction. It was possible to establish the nature of the end groups, calculate molecular weight distributions, and determine the individual length distributions for the styrene and pentafluorostyrene monomers contained in the resulting copolymers. Based on the data obtained, it was concluded that the individual styrene chain length distributions are more sensitive to a change in the composition of the catalyst (the addition of a small amount of CuBr2) than is the pentafluorostyrene component distribution.

  8. A surrogate based multistage-multilevel optimization procedure for multidisciplinary design optimization

    OpenAIRE

    Yao, W.; Chen, X.; Ouyang, Q.; Van Tooren, M.

    2011-01-01

    Optimization procedure is one of the key techniques to address the computational and organizational complexities of multidisciplinary design optimization (MDO). Motivated by the idea of synthetically exploiting the advantage of multiple existing optimization procedures and meanwhile complying with the general process of satellite system design optimization in conceptual design phase, a multistage-multilevel MDO procedure is proposed in this paper by integrating multiple-discipline-feasible (M...

  9. A surrogate based multistage-multilevel optimization procedure for multidisciplinary design optimization

    NARCIS (Netherlands)

    Yao, W.; Chen, X.; Ouyang, Q.; Van Tooren, M.

    2011-01-01

    Optimization procedure is one of the key techniques to address the computational and organizational complexities of multidisciplinary design optimization (MDO). Motivated by the idea of synthetically exploiting the advantage of multiple existing optimization procedures and meanwhile complying with

  10. Concurrent material-fabrication optimization of metal-matrix laminates under thermo-mechanical loading

    Science.gov (United States)

    Saravanos, D. A.; Morel, M. R.; Chamis, C. C.

    1991-01-01

A methodology is developed to tailor fabrication and material parameters of metal-matrix laminates for maximum loading capacity under thermomechanical loads. The stresses during the thermomechanical response are minimized subject to failure constraints and bounds on the laminate properties. The thermomechanical response of the laminate is simulated using nonlinear composite mechanics. Evaluations of the method on a graphite/copper symmetric cross-ply laminate were performed. The cross-ply laminate required different optimum fabrication procedures than a unidirectional composite. Also, the consideration of the thermomechanical cycle had a significant effect on the predicted optimal process.

  11. Matrix-assisted laser desorption/ionization sample preparation optimization for structural characterization of poly(styrene-co-pentafluorostyrene) copolymers.

    Science.gov (United States)

    Tisdale, Evgenia; Kennedy, Devin; Xu, Xiaodong; Wilkins, Charles

    2014-01-15

    The influence of the sample preparation parameters (the choice of the matrix, matrix:analyte ratio, salt:analyte ratio) was investigated and optimal conditions were established for the MALDI time-of-flight mass spectrometry analysis of the poly(styrene-co-pentafluorostyrene) copolymers. These were synthesized by atom transfer radical polymerization. Use of 2,5-dihydroxybenzoic acid as matrix resulted in spectra with consistently high ion yields for all matrix:analyte:salt ratios tested. The optimized MALDI procedure was successfully applied to the characterization of three copolymers obtained by varying the conditions of polymerization reaction. It was possible to establish the nature of the end groups, calculate molecular weight distributions, and determine the individual length distributions for styrene and pentafluorostyrene monomers, contained in the resulting copolymers. Based on the data obtained, it was concluded that individual styrene chain length distributions are more sensitive to the change in the composition of the catalyst (the addition of small amount of CuBr2) than is the pentafluorostyrene component distribution. Copyright © 2013 Elsevier B.V. All rights reserved.

  12. A computational technique to identify the optimal stiffness matrix for a discrete nuclear fuel assembly model

    International Nuclear Information System (INIS)

    Park, Nam-Gyu; Kim, Kyoung-Joo; Kim, Kyoung-Hong; Suh, Jung-Min

    2013-01-01

Highlights: ► An identification method for the optimal stiffness matrix of a fuel assembly structure is discussed. ► The least squares optimization method is introduced, and a closed-form solution of the problem is derived. ► The method can be extended to systems with a limited number of modes. ► The identification error due to a perturbed mode shape matrix is analyzed. ► Verification examples show that the proposed procedure leads to a reliable solution. -- Abstract: A reactor core structural model which is used to evaluate the structural integrity of the core contains nuclear fuel assembly models. Since the reactor core consists of many nuclear fuel assemblies, the use of a refined fuel assembly model leads to a considerable amount of computing time for performing nonlinear analyses such as the prediction of seismically induced vibration behaviors. The computational time could be reduced by replacing the detailed fuel assembly model with a simplified model that has fewer degrees of freedom, but the dynamic characteristics of the detailed model must be maintained in the simplified model. Such a model, based on an optimal design method, is proposed in this paper. That is, when a mass matrix and a mode shape matrix are given, the optimal stiffness matrix of a discrete fuel assembly model can be estimated by applying the least squares minimization method. The verification of the method is completed by comparing test results and simulation results. This paper shows that the simplified model's dynamic behaviors are quite similar to experimental results and that the suggested method is suitable for identifying a reliable mathematical model for fuel assemblies.
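
    To illustrate the kind of least-squares identification the abstract describes, the sketch below estimates a stiffness matrix from a given mass matrix, mode shapes and natural frequencies via K Φ ≈ M Φ Λ. It is a generic pseudoinverse solution under that assumption, not the paper's closed-form result; the function name and the 2-DOF example system are illustrative.

```python
import numpy as np

def identify_stiffness(M, Phi, omega):
    """Least-squares stiffness estimate from K @ Phi = M @ Phi @ Lambda,
    where Lambda = diag(omega**2). A symmetrizing step is applied afterwards."""
    Lam = np.diag(omega ** 2)
    rhs = M @ Phi @ Lam
    K = rhs @ np.linalg.pinv(Phi)     # solve K Phi = rhs in the least-squares sense
    return 0.5 * (K + K.T)            # enforce symmetry

# Tiny example with a known 2-DOF system
M = np.diag([2.0, 1.0])
K_true = np.array([[400.0, -200.0], [-200.0, 200.0]])
eigvals, eigvecs = np.linalg.eig(np.linalg.solve(M, K_true))  # K phi = w^2 M phi
order = np.argsort(eigvals)
omega = np.sqrt(eigvals[order])
Phi = eigvecs[:, order]
print(identify_stiffness(M, Phi, omega))   # recovers ~ K_true
```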

  13. A Novel Measurement Matrix Optimization Approach for Hyperspectral Unmixing

    Directory of Open Access Journals (Sweden)

    Su Xu

    2017-01-01

Full Text Available Each pixel in the hyperspectral unmixing process is modeled as a linear combination of endmembers, which can be expressed in the form of linear combinations of a number of pure spectral signatures that are known in advance. However, the limitations of Gaussian random matrices, in terms of computational complexity and sparsity, affect efficiency and accuracy. This paper proposes a novel approach for the optimization of the measurement matrix in compressive sensing (CS) theory for hyperspectral unmixing. Firstly, a new Toeplitz-structured chaotic measurement matrix (TSCMM) is formed by pseudo-random chaotic elements, which can be implemented by simple hardware; secondly, rank-revealing QR factorization with eigenvalue decomposition is presented to speed up the measurement time; finally, an orthogonal gradient descent method for measurement matrix optimization is used to achieve optimal incoherence. Experimental results demonstrate that the proposed approach can lead to better CS reconstruction performance with low extra computational cost in hyperspectral unmixing.

  14. Controller tuning with evolutionary multiobjective optimization a holistic multiobjective optimization design procedure

    CERN Document Server

    Reynoso Meza, Gilberto; Sanchis Saez, Javier; Herrero Durá, Juan Manuel

    2017-01-01

    This book is devoted to Multiobjective Optimization Design (MOOD) procedures for controller tuning applications, by means of Evolutionary Multiobjective Optimization (EMO). It presents developments in tools, procedures and guidelines to facilitate this process, covering the three fundamental steps in the procedure: problem definition, optimization and decision-making. The book is divided into four parts. The first part, Fundamentals, focuses on the necessary theoretical background and provides specific tools for practitioners. The second part, Basics, examines a range of basic examples regarding the MOOD procedure for controller tuning, while the third part, Benchmarking, demonstrates how the MOOD procedure can be employed in several control engineering problems. The fourth part, Applications, is dedicated to implementing the MOOD procedure for controller tuning in real processes.

  15. Random Matrix Approach for Primal-Dual Portfolio Optimization Problems

    Science.gov (United States)

    Tada, Daichi; Yamamoto, Hisashi; Shinzato, Takashi

    2017-12-01

    In this paper, we revisit the portfolio optimization problems of the minimization/maximization of investment risk under constraints of budget and investment concentration (primal problem) and the maximization/minimization of investment concentration under constraints of budget and investment risk (dual problem) for the case that the variances of the return rates of the assets are identical. We analyze both optimization problems by the Lagrange multiplier method and the random matrix approach. Thereafter, we compare the results obtained from our proposed approach with the results obtained in previous work. Moreover, we use numerical experiments to validate the results obtained from the replica approach and the random matrix approach as methods for analyzing both the primal and dual portfolio optimization problems.

  16. All-at-once Optimization for Coupled Matrix and Tensor Factorizations

    DEFF Research Database (Denmark)

    Evrim, Acar Ataman; Kolda, Tamara G.; Dunlavy, Daniel M.

    2011-01-01

…e.g., the person by person social network matrix or the restaurant by category matrix, and higher-order tensors, e.g., the "ratings" tensor of the form restaurant by meal by person. In this paper, we are particularly interested in fusing data sets with the goal of capturing their underlying latent structures. We formulate this problem as a coupled matrix and tensor factorization (CMTF) problem, where heterogeneous data sets are modeled by fitting outer-product models to higher-order tensors and matrices in a coupled manner. Unlike traditional approaches solving this problem using alternating algorithms, we propose an all-at-once optimization approach called CMTF-OPT (CMTF-OPTimization), which is a gradient-based optimization approach for joint analysis of matrices and higher-order tensors. We also extend the algorithm to handle coupled incomplete data sets. Using numerical experiments, we demonstrate…

  17. Constructing HVS-Based Optimal Substitution Matrix Using Enhanced Differential Evolution

    Directory of Open Access Journals (Sweden)

    Shu-Fen Tu

    2013-01-01

Full Text Available Least significant bit (LSB) substitution is a method of information hiding. The secret message is embedded into the last k bits of a cover-image in order to evade the notice of hackers. Security and stego-image quality are the two main limitations of the LSB substitution method. Therefore, some researchers have proposed an LSB substitution matrix to address these two issues. Finding the optimal LSB substitution matrix can be conceptualized as a problem of combinatorial optimization. In this paper, we adopt a different heuristic method based on other researchers’ method, called enhanced differential evolution (EDE), to construct an optimal LSB substitution matrix. Differing from other researchers, we adopt an HVS-based measurement as the fitness function and embed the secret by modifying the pixel to the closest value rather than simply substituting the LSBs. Our scheme extracts the secret by modular operations, as simple LSB substitution does. The experimental results show that the proposed embedding algorithm indeed improves the imperceptibility of stego-images substantially.

  18. Efficient Reanalysis Procedures in Structural Topology Optimization

    DEFF Research Database (Denmark)

    Amir, Oded

This thesis examines efficient solution procedures for the structural analysis problem within topology optimization. The research is motivated by the observation that when the nested approach to structural optimization is applied, most of the computational effort is invested in repeated solutions … on approximate reanalysis. For cases where memory limitations require the utilization of iterative equation solvers, we suggest efficient procedures based on alternative termination criteria for such solvers. These approaches are tested on two- and three-dimensional topology optimization problems including …

  19. Use of the contingency matrix in the TRACK-MATCH procedure for two projections

    International Nuclear Information System (INIS)

    Baginyan, S.A.; Moroz, V.I.

    1985-01-01

When analysing the work of geometrical reconstruction programs, it is noted that if the TRACK-MATCH procedure is successful, it guarantees the correctness of the event measurement. This serves as a basis for applying the TRACK-MATCH procedure as a test for event mask measurements. Such use of the procedure does not require point-to-point correspondence between track images in different projections; it is sufficient to establish that the TRACK-MATCH procedure admits a solution. It is shown that the problem of point-to-point correspondence between track images in different projections is reduced to contingency matrix analysis. It is stated that a non-zero determinant of the contingency matrix is sufficient for the TRACK-MATCH procedure to be solvable.

  20. Optimization of MIMO Systems Capacity Using Large Random Matrix Methods

    Directory of Open Access Journals (Sweden)

    Philippe Loubaton

    2012-11-01

    Full Text Available This paper provides a comprehensive introduction of large random matrix methods for input covariance matrix optimization of mutual information of MIMO systems. It is first recalled informally how large system approximations of mutual information can be derived. Then, the optimization of the approximations is discussed, and important methodological points that are not necessarily covered by the existing literature are addressed, including the strict concavity of the approximation, the structure of the argument of its maximum, the accuracy of the large system approach with regard to the number of antennas, or the justification of iterative water-filling optimization algorithms. While the existing papers have developed methods adapted to a specific model, this contribution tries to provide a unified view of the large system approximation approach.
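
    The abstract mentions the justification of iterative water-filling optimization algorithms. As a hedged sketch of the basic building block, the code below allocates power over parallel channels by bisection on the water level; it is a textbook water-filling routine under assumed channel gains, not the large-system covariance optimization developed in the paper.

```python
import numpy as np

def water_filling(gains, total_power):
    """Allocate power p_i = max(0, mu - 1/g_i) over parallel channels so that
    sum(p_i) = total_power, maximizing sum(log(1 + g_i * p_i))."""
    g = np.asarray(gains, dtype=float)
    lo, hi = 0.0, total_power + 1.0 / g.min()    # bracket for the water level mu
    for _ in range(100):                         # bisection on mu
        mu = 0.5 * (lo + hi)
        p = np.maximum(0.0, mu - 1.0 / g)
        if p.sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.0, 0.5 * (lo + hi) - 1.0 / g)

print(water_filling([1.0, 0.5, 0.1], total_power=2.0))   # -> approx [1.5, 0.5, 0.0]
```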

  1. Generalized canonical analysis based on optimizing matrix correlations and a relation with IDIOSCAL

    NARCIS (Netherlands)

    Kiers, Henk A.L.; Cléroux, R.; Ten Berge, Jos M.F.

    1994-01-01

    Carroll's method for generalized canonical analysis of two or more sets of variables is shown to optimize the sum of squared inner-product matrix correlations between a consensus matrix and matrices with canonical variates for each set of variables. In addition, the method that analogously optimizes

  2. Role of metastructural matrixes in optimization ecotourism

    Directory of Open Access Journals (Sweden)

    A. N. Leuchin

    2010-01-01

Full Text Available The article considers the possibilities of the anthropocentric and ecocentric development paradigms of ecotourism. The updating role of the institutional functions of ecotourism is specified by means of metastructural matrixes for the optimization of tourist-institutional space (TIS). Long-range directions of socio-ecological interaction in the ecotourism system are designated, and measures for optimizing this interaction are considered.

  3. Rank-Optimized Logistic Matrix Regression toward Improved Matrix Data Classification.

    Science.gov (United States)

    Zhang, Jianguang; Jiang, Jianmin

    2018-02-01

While existing logistic regression suffers from overfitting and often fails to consider structural information, we propose a novel matrix-based logistic regression to overcome these weaknesses. In the proposed method, 2D matrices are directly used to learn two groups of parameter vectors along each dimension without vectorization, which allows the proposed method to fully exploit the underlying structural information embedded inside the 2D matrices. Further, we add a joint [Formula: see text]-norm on the two parameter matrices, which are organized by aligning each group of parameter vectors in columns. This added co-regularization term has two roles: enhancing the effect of regularization and optimizing the rank during the learning process. With our proposed fast iterative solution, we carried out extensive experiments. The results show that in comparison to both traditional tensor-based methods and vector-based regression methods, our proposed solution achieves better performance for matrix data classification.
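
    To make the "parameter vectors along each dimension" idea concrete, here is a hedged sketch of a rank-one bilinear logistic model for matrix-valued inputs, trained by plain gradient descent. It is only a simplified special case under assumed settings; the paper's joint-norm co-regularization, rank optimization and fast iterative solver are not reproduced, and all names and the toy data are illustrative.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_bilinear_logistic(X, y, n_iter=200, lr=0.1, lam=1e-2):
    """X: (N, r, c) matrix samples, y: (N,) labels in {0, 1}.
    Model: p = sigmoid(u^T X_i v + b), i.e. one parameter vector per dimension."""
    N, r, c = X.shape
    rng = np.random.default_rng(0)
    u, v, b = rng.standard_normal(r) * 0.01, rng.standard_normal(c) * 0.01, 0.0
    for _ in range(n_iter):
        scores = np.einsum("i,nij,j->n", u, X, v) + b
        err = sigmoid(scores) - y                           # d(loss)/d(score)
        grad_u = np.einsum("n,nij,j->i", err, X, v) / N + lam * u
        grad_v = np.einsum("n,nij,i->j", err, X, u) / N + lam * v
        u -= lr * grad_u
        v -= lr * grad_v
        b -= lr * err.mean()
    return u, v, b

# Toy data: label depends on the top-left block mean of each 5x4 matrix
rng = np.random.default_rng(1)
X = rng.standard_normal((300, 5, 4))
y = (X[:, :2, :2].mean(axis=(1, 2)) > 0).astype(float)
u, v, b = fit_bilinear_logistic(X, y)
acc = ((sigmoid(np.einsum("i,nij,j->n", u, X, v) + b) > 0.5) == y).mean()
print(acc)
```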

  4. Procedure and methodology of Radiation Protection optimization

    International Nuclear Information System (INIS)

    Wang Hengde

    1995-01-01

Optimization of radiation protection is one of the most important principles in the system of radiation protection. The paper introduces the basic principles of radiation protection optimization in general, and describes in detail the procedure for implementing radiation protection optimization and the methods for selecting the optimized radiation protection option, in accordance with ICRP 55. Finally, some economic concepts relating to the estimation of costs are discussed briefly.

  5. Prolonged release matrix tablet of pyridostigmine bromide: formulation and optimization using statistical methods.

    Science.gov (United States)

    Bolourchian, Noushin; Rangchian, Maryam; Foroutan, Seyed Mohsen

    2012-07-01

The aim of this study was to design and optimize a prolonged-release matrix formulation of pyridostigmine bromide, an effective drug in myasthenia gravis and nerve gas poisoning, using hydrophilic and hydrophobic polymers via a D-optimal experimental design. HPMC and carnauba wax as retarding agents, as well as tricalcium phosphate, were used in the matrix formulation and considered as independent variables. Tablets were prepared by the wet granulation technique, and the percentages of drug released at 1 (Y(1)), 4 (Y(2)) and 8 (Y(3)) hours were considered as dependent variables (responses) in this investigation. These experimental responses were best fitted by cubic, cubic and linear models, respectively. The optimal formulation obtained in this study, consisting of 12.8 % HPMC, 24.4 % carnauba wax and 26.7 % tricalcium phosphate, had a suitable prolonged-release behavior following the Higuchi model, in which observed and predicted values were very close. The study revealed that a D-optimal design can facilitate the optimization of prolonged-release matrix tablets containing pyridostigmine bromide. Accelerated stability studies confirmed that the optimized formulation remains unchanged after exposure to stability conditions for six months.

  6. Analysis and optimization of blood-testing procedures.

    NARCIS (Netherlands)

    Bar-Lev, S.K.; Boxma, O.J.; Perry, D.; Vastazos, L.P.

    2017-01-01

    This paper is devoted to the performance analysis and optimization of blood testing procedures. We present a queueing model of two queues in series, representing the two stages of a blood-testing procedure. Service (testing) in stage 1 is performed in batches, whereas it is done individually in

  7. A novel variable selection approach that iteratively optimizes variable space using weighted binary matrix sampling.

    Science.gov (United States)

    Deng, Bai-chuan; Yun, Yong-huan; Liang, Yi-zeng; Yi, Lun-zhao

    2014-10-07

In this study, a new optimization algorithm called the Variable Iterative Space Shrinkage Approach (VISSA), which is based on the idea of model population analysis (MPA), is proposed for variable selection. Unlike most of the existing optimization methods for variable selection, VISSA statistically evaluates the performance of the variable space in each step of the optimization. Weighted binary matrix sampling (WBMS) is proposed to generate sub-models that span the variable subspace. Two rules are highlighted during the optimization procedure. First, the variable space shrinks in each step. Second, the new variable space outperforms the previous one. The second rule, which is rarely satisfied by most of the existing methods, is the core of the VISSA strategy. Compared with some promising variable selection methods such as competitive adaptive reweighted sampling (CARS), Monte Carlo uninformative variable elimination (MCUVE) and iteratively retaining informative variables (IRIV), VISSA showed better prediction ability for the calibration of NIR data. In addition, VISSA is user-friendly; only a few insensitive parameters are needed, and the program terminates automatically without any additional conditions. The Matlab codes for implementing VISSA are freely available on the website: https://sourceforge.net/projects/multivariateanalysis/files/VISSA/.
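
    A rough sketch of the weighted binary matrix sampling idea follows: binary rows drawn with per-variable weights define sub-models, and the weights are updated from the best-performing subsets. This is only an illustration of the general principle under assumed details; the criteria, shrinkage schedule and stopping rule of the published VISSA algorithm are not reproduced, and `eval_model` / `cv_rmse` are hypothetical helpers.

```python
import numpy as np

def wbms_step(weights, n_models, X, y, eval_model, top_frac=0.1, rng=None):
    """One illustrative iteration of weighted binary matrix sampling.
    weights: per-variable inclusion probabilities in [0, 1].
    eval_model: callable(X_subset, y) -> error (lower is better)."""
    rng = rng or np.random.default_rng()
    p = X.shape[1]
    # Binary sampling matrix: each row is one candidate variable subset
    B = rng.random((n_models, p)) < weights
    errors = np.array([eval_model(X[:, row], y) if row.any() else np.inf
                       for row in B])
    best = B[np.argsort(errors)[: max(1, int(top_frac * n_models))]]
    # New weights = inclusion frequency of each variable among the best subsets
    return best.mean(axis=0)

# Usage sketch (hypothetical helper): cross-validated RMSE of a linear model
# new_w = wbms_step(np.full(X.shape[1], 0.5), 1000, X, y, cv_rmse)
```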

  8. Matrix-based introduction to multivariate data analysis

    CERN Document Server

    Adachi, Kohei

    2016-01-01

    This book enables readers who may not be familiar with matrices to understand a variety of multivariate analysis procedures in matrix forms. Another feature of the book is that it emphasizes what model underlies a procedure and what objective function is optimized for fitting the model to data. The author believes that the matrix-based learning of such models and objective functions is the fastest way to comprehend multivariate data analysis. The text is arranged so that readers can intuitively capture the purposes for which multivariate analysis procedures are utilized: plain explanations of the purposes with numerical examples precede mathematical descriptions in almost every chapter. This volume is appropriate for undergraduate students who already have studied introductory statistics. Graduate students and researchers who are not familiar with matrix-intensive formulations of multivariate data analysis will also find the book useful, as it is based on modern matrix formulations with a special emphasis on ...

  9. Optimal matrix product states for the Heisenberg spin chain

    International Nuclear Information System (INIS)

    Latorre, Jose I; Pico, Vicent

    2009-01-01

We present some exact results for the optimal matrix product state (MPS) approximation to the ground state of the infinite isotropic Heisenberg spin-1/2 chain. Our approach is based on the systematic use of Schmidt decompositions to reduce the problem of approximating the ground state of a spin chain to an analytical minimization. This allows one to show that the results of standard simulations, e.g. density matrix renormalization group and infinite time evolving block decimation, do correspond to the result obtained by this minimization strategy and, thus, both methods deliver optimal MPS with the same energy but, otherwise, different properties. We also find that translational and rotational symmetries cannot be maintained simultaneously by the MPS ansatz of minimum energy and present explicit constructions for each case. Furthermore, we analyze symmetry restoration and quantify it to uncover new scaling relations. The method we propose can be extended to any translationally invariant Hamiltonian.
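
    Since the approach rests on Schmidt decompositions, here is a minimal numerical sketch of a Schmidt decomposition of a bipartite pure state via the SVD. It illustrates the tool only, not the analytical minimization of the paper; the singlet example is standard.

```python
import numpy as np

def schmidt_decomposition(psi, dim_left, dim_right):
    """Schmidt decomposition of a bipartite pure state |psi> of dimension
    dim_left * dim_right. Returns Schmidt coefficients and the two bases."""
    M = psi.reshape(dim_left, dim_right)
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    return s, U, Vh

# Singlet state (|01> - |10>)/sqrt(2): two equal Schmidt coefficients 1/sqrt(2)
psi = np.array([0.0, 1.0, -1.0, 0.0]) / np.sqrt(2.0)
s, U, Vh = schmidt_decomposition(psi, 2, 2)
print(s)   # [0.7071..., 0.7071...]
```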

  10. A procedure for multi-objective optimization of tire design parameters

    Directory of Open Access Journals (Sweden)

    Nikola Korunović

    2015-04-01

Full Text Available The identification of optimal tire design parameters for satisfying different requirements, i.e. tire performance characteristics, plays an essential role in tire design. In order to improve tire performance characteristics, formulation and solving of a multi-objective optimization problem must be performed. This paper presents a multi-objective optimization procedure for the determination of optimal tire design parameters for the simultaneous minimization of strain energy density at two distinctive zones inside the tire. It consists of four main stages: pre-analysis, design of experiment, mathematical modeling and multi-objective optimization. The advantage of the proposed procedure is that the multi-objective optimization is based on the Pareto concept, which enables design engineers to obtain a complete set of optimization solutions and choose a suitable tire design. Furthermore, modeling of the relationships between tire design parameters and objective functions based on multiple regression analysis minimizes the computational and modeling effort. The adequacy of the proposed tire design multi-objective optimization procedure has been validated by performing experimental trials based on the finite element method.

  11. Optimality criteria for the components of anisotropic constitutive matrix

    DEFF Research Database (Denmark)

    Pedersen, Pauli; Pedersen, Niels Leergaard

    2014-01-01

…densities is equal value for the weighted elastic energy densities, as a natural extension of the optimality criterion for a single load case. The second optimality criterion for the components of a constitutive matrix (of unit norm) is proportionality to corresponding weighted strain components with the same proportionality factor $\widehat{\lambda}$ for all the components, as shortly specified by $C_{ijkl} = \widehat{\lambda} \sum_{n} \eta_{n} (\epsilon_{ij})_{n} (\epsilon_{kl})_{n}$, in traditional notation (n indicates the load case). These simple analytical results should be communicated, in spite…

  12. Combining kernel matrix optimization and regularization to improve particle size distribution retrieval

    Science.gov (United States)

    Ma, Qian; Xia, Houping; Xu, Qiang; Zhao, Lei

    2018-05-01

    A new method combining Tikhonov regularization and kernel matrix optimization by multi-wavelength incidence is proposed for retrieving particle size distribution (PSD) in an independent model with improved accuracy and stability. In comparison to individual regularization or multi-wavelength least squares, the proposed method exhibited better anti-noise capability, higher accuracy and stability. While standard regularization typically makes use of the unit matrix, it is not universal for different PSDs, particularly for Junge distributions. Thus, a suitable regularization matrix was chosen by numerical simulation, with the second-order differential matrix found to be appropriate for most PSD types.
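
    As a hedged sketch of the regularization ingredient described above, the code below performs Tikhonov-regularized inversion with a second-order difference regularization matrix in place of the identity. The kernel-matrix optimization by multi-wavelength incidence is not reproduced; the kernel matrix A, measured signal g and the regularization parameter are assumed inputs.

```python
import numpy as np

def second_order_diff_matrix(n):
    """(n-2) x n second-order finite-difference matrix, used as the
    regularization matrix L instead of the identity."""
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]
    return L

def tikhonov_solve(A, g, lam, L=None):
    """Minimize ||A f - g||^2 + lam * ||L f||^2 via the normal equations."""
    n = A.shape[1]
    L = np.eye(n) if L is None else L
    return np.linalg.solve(A.T @ A + lam * (L.T @ L), A.T @ g)

# Usage sketch: A would be the (optimized) kernel matrix, g the measured signal
# f_retrieved = tikhonov_solve(A, g, lam=1e-3, L=second_order_diff_matrix(A.shape[1]))
```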

  13. Optimization of matrix tablets controlled drug release using Elman dynamic neural networks and decision trees.

    Science.gov (United States)

    Petrović, Jelena; Ibrić, Svetlana; Betz, Gabriele; Đurić, Zorica

    2012-05-30

The main objective of the study was to develop artificial intelligence methods for the optimization of drug release from matrix tablets regardless of the matrix type. Static and dynamic artificial neural networks of the same topology were developed to model dissolution profiles of different matrix tablet types (hydrophilic/lipid) using formulation composition, compression force used for tableting, and tablet porosity and tensile strength as input data. The potential application of decision trees in discovering knowledge from experimental data was also investigated. Polyethylene oxide polymer and glyceryl palmitostearate were used as matrix-forming materials for the hydrophilic and lipid matrix tablets, respectively, whereas the selected model drugs were diclofenac sodium and caffeine. Matrix tablets were prepared by the direct compression method and tested for in vitro dissolution profiles. Optimization of the static and dynamic neural networks used for modeling of drug release was performed using Monte Carlo simulations or a genetic algorithm optimizer. Decision trees were constructed following discretization of the data. Calculated difference (f(1)) and similarity (f(2)) factors for predicted and experimentally obtained dissolution profiles of the test matrix tablet formulations indicate that Elman dynamic neural networks as well as decision trees are capable of accurate predictions of both hydrophilic and lipid matrix tablet dissolution profiles. Elman neural networks were compared to the most frequently used static network, the multi-layered perceptron, and the superiority of Elman networks has been demonstrated. The developed methods allow a simple, yet very precise way of predicting drug release for both hydrophilic and lipid matrix tablets having controlled drug release. Copyright © 2012 Elsevier B.V. All rights reserved.
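
    The abstract compares predicted and measured dissolution profiles using the difference (f1) and similarity (f2) factors. The sketch below implements the commonly used regulatory definitions of these two factors (assumed here, not taken from the paper); the example time points and release percentages are illustrative.

```python
import numpy as np

def difference_factor_f1(ref, test):
    """f1 = 100 * sum|R_t - T_t| / sum R_t  (lower means more similar)."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    return 100.0 * np.abs(ref - test).sum() / ref.sum()

def similarity_factor_f2(ref, test):
    """f2 = 50 * log10(100 / sqrt(1 + mean((R_t - T_t)^2)))  (>= 50 is 'similar')."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + np.mean((ref - test) ** 2)))

ref = [20, 45, 70, 85, 95]      # % released at each time point (illustrative)
test = [18, 42, 68, 88, 96]
print(difference_factor_f1(ref, test), similarity_factor_f2(ref, test))
```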

  14. Optimal fabrication processes for unidirectional metal-matrix composites: A computational simulation

    Science.gov (United States)

    Saravanos, D. A.; Murthy, P. L. N.; Morel, M.

    1990-01-01

    A method is proposed for optimizing the fabrication process of unidirectional metal matrix composites. The temperature and pressure histories are optimized such that the residual microstresses of the composite at the end of the fabrication process are minimized and the material integrity throughout the process is ensured. The response of the composite during the fabrication is simulated based on a nonlinear micromechanics theory. The optimal fabrication problem is formulated and solved with non-linear programming. Application cases regarding the optimization of the fabrication cool-down phases of unidirectional ultra-high modulus graphite/copper and silicon carbide/titanium composites are presented.

  15. Optimal fabrication processes for unidirectional metal-matrix composites - A computational simulation

    Science.gov (United States)

    Saravanos, D. A.; Murthy, P. L. N.; Morel, M.

    1990-01-01

    A method is proposed for optimizing the fabrication process of unidirectional metal matrix composites. The temperature and pressure histories are optimized such that the residual microstresses of the composite at the end of the fabrication process are minimized and the material integrity throughout the process is ensured. The response of the composite during the fabrication is simulated based on a nonlinear micromechanics theory. The optimal fabrication problem is formulated and solved with nonlinear programming. Application cases regarding the optimization of the fabrication cool-down phases of unidirectional ultra-high modulus graphite/copper and silicon carbide/titanium composites are presented.

  16. Development and Application of a Tool for Optimizing Composite Matrix Viscoplastic Material Parameters

    Science.gov (United States)

    Murthy, Pappu L. N.; Naghipour Ghezeljeh, Paria; Bednarcyk, Brett A.

    2018-01-01

    This document describes a recently developed analysis tool that enhances the resident capabilities of the Micromechanics Analysis Code with the Generalized Method of Cells (MAC/GMC) and its application. MAC/GMC is a composite material and laminate analysis software package developed at NASA Glenn Research Center. The primary focus of the current effort is to provide a graphical user interface (GUI) capability that helps users optimize highly nonlinear viscoplastic constitutive law parameters by fitting experimentally observed/measured stress-strain responses under various thermo-mechanical conditions for braided composites. The tool has been developed utilizing the MATrix LABoratory (MATLAB) (The Mathworks, Inc., Natick, MA) programming language. Illustrative examples shown are for a specific braided composite system wherein the matrix viscoplastic behavior is represented by a constitutive law described by seven parameters. The tool is general enough to fit any number of experimentally observed stress-strain responses of the material. The number of parameters to be optimized, as well as the importance given to each stress-strain response, are user choice. Three different optimization algorithms are included: (1) Optimization based on gradient method, (2) Genetic algorithm (GA) based optimization and (3) Particle Swarm Optimization (PSO). The user can mix and match the three algorithms. For example, one can start optimization with either 2 or 3 and then use the optimized solution to further fine tune with approach 1. The secondary focus of this paper is to demonstrate the application of this tool to optimize/calibrate parameters for a nonlinear viscoplastic matrix to predict stress-strain curves (for constituent and composite levels) at different rates, temperatures and/or loading conditions utilizing the Generalized Method of Cells. After preliminary validation of the tool through comparison with experimental results, a detailed virtual parametric study is

  17. Improving the efficiency of aerodynamic shape optimization procedures

    Science.gov (United States)

    Burgreen, Greg W.; Baysal, Oktay; Eleshaky, Mohamed E.

    1992-01-01

    The computational efficiency of an aerodynamic shape optimization procedure which is based on discrete sensitivity analysis is increased through the implementation of two improvements. The first improvement involves replacing a grid point-based approach for surface representation with a Bezier-Bernstein polynomial parameterization of the surface. Explicit analytical expressions for the grid sensitivity terms are developed for both approaches. The second improvement proposes the use of Newton's method in lieu of an alternating direction implicit (ADI) methodology to calculate the highly converged flow solutions which are required to compute the sensitivity coefficients. The modified design procedure is demonstrated by optimizing the shape of an internal-external nozzle configuration. A substantial factor of 8 decrease in computational time for the optimization process was achieved by implementing both of the design improvements.

  18. A PROCEDURE FOR DETERMINING OPTIMAL FACILITY LOCATION AND SUB-OPTIMAL POSITIONS

    Directory of Open Access Journals (Sweden)

    P.K. Dan

    2012-01-01

    Full Text Available

ENGLISH ABSTRACT: This research presents a methodology for determining the optimal location of a new facility that has physical flow interactions of various degrees with other existing facilities, in the presence of barriers impeding the shortest flow path. It also determines sub-optimal iso-cost positions, with the additional cost or penalty incurred by not siting the facility at the computed optimal point. The proposed methodology considers all types of quadrilateral barrier or forbidden-region configurations in order to generalize and bypass such impenetrable obstacles, and adopts a scheme of searching through the vertices of the quadrilaterals to determine the alternative shortest flow path. This procedure of obstacle avoidance is novel. Software has been developed to facilitate computations for the search algorithm to determine the optimal and iso-cost co-ordinates. The test results are presented.

AFRIKAANSE OPSOMMING: The research treats a procedure for determining the optimal establishment location of an enterprise with flow from other existing facilities in the presence of a variety of constraints. The procedure yields sub-optimal iso-cost establishment locations together with the cost that arises from deviating from the unconstrained optimal solution cost. The procedure makes use of an ingenious search method, applied to quadrilateral geometric representations, for determining the shortest routes that bypass barriers. The procedure is supported by software. Test results are presented.

  19. Optimization of the BLASTN substitution matrix for prediction of non-specific DNA microarray hybridization

    DEFF Research Database (Denmark)

    Eklund, Aron Charles; Friis, Pia; Wernersson, Rasmus

    2010-01-01

…BLASTN accuracy by modifying the substitution matrix and gap penalties. We generated gene expression microarray data for samples in which 1 or 10% of the target mass was an exogenous spike of known sequence. We found that the 10% spike induced 2-fold intensity changes in 3% of the probes, two-thirds of which were decreases in intensity likely caused by bulk hybridization. These changes were correlated with similarity between the spike and probe sequences. Interestingly, even very weak similarities tended to induce a change in probe intensity with the 10% spike. Using these data, we optimized the BLASTN substitution matrix to more accurately identify probes susceptible to non-specific hybridization with the spike. Relative to the default substitution matrix, the optimized matrix features a decreased score for A–T base pairs relative to G–C base pairs, resulting in a 5–15% increase in area under the ROC curve…

  20. Evaluation of relevant information for optimal reflector modeling through data assimilation procedures

    International Nuclear Information System (INIS)

    Argaud, J.P.; Bouriquet, B.; Clerc, T.; Lucet-Sanchez, F.; Poncot, A.

    2015-01-01

The goal of this study is to determine the amount of information required to obtain a relevant optimisation of the parameters of physical models in neutronic diffusion calculations by data assimilation, and to determine which information best reaches the optimum accuracy at the cheapest cost. To evaluate the quality of the optimisation, we study the covariance matrix that represents the accuracy of the optimised parameter. This matrix is a classical output of the data assimilation procedure, and it is the main information about the accuracy and sensitivity of the optimal parameter determination. We present some results collected in the field of neutronic simulation for a PWR type reactor. We seek to optimise the reflector parameters that characterise the neutronic reflector surrounding the whole reactive core. On the basis of the configurations studied, it has been shown that with data assimilation we can determine a global strategy to optimise the quality of the result with respect to the amount of information provided. The consequence of this is a cost reduction in terms of measurement and/or computing time with respect to the basic approach. Another result is that using multi-campaign data rather than data from a single campaign significantly improves the efficiency of the parameter optimisation.
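
    For the covariance matrix mentioned above, a hedged sketch of the standard linear-Gaussian data assimilation formula is given below: the posterior (analysis) covariance shrinks as more observations are assimilated. The matrices are purely illustrative stand-ins, not the neutronics model or its observation operator.

```python
import numpy as np

def analysis_covariance(B, H, R):
    """Posterior covariance A = (B^-1 + H^T R^-1 H)^-1 for a linear observation
    operator H, background covariance B and observation covariance R."""
    return np.linalg.inv(np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H)

# Adding more (independent) observations shrinks the posterior variance:
B = np.diag([1.0, 1.0])              # prior uncertainty on 2 reflector parameters
R1 = np.eye(2) * 0.5                 # one measurement campaign
H = np.eye(2)
A1 = analysis_covariance(B, H, R1)
A2 = analysis_covariance(A1, H, R1)  # assimilate a second, similar campaign
print(np.trace(A1), np.trace(A2))    # trace decreases with more information
```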

  1. The Optimization on Ranks and Inertias of a Quadratic Hermitian Matrix Function and Its Applications

    Directory of Open Access Journals (Sweden)

    Yirong Yao

    2013-01-01

    Full Text Available We solve optimization problems on the ranks and inertias of the quadratic Hermitian matrix function subject to a consistent system of matrix equations and . As applications, we derive necessary and sufficient conditions for the solvability to the systems of matrix equations and matrix inequalities , and in the Löwner partial ordering to be feasible, respectively. The findings of this paper widely extend the known results in the literature.

  2. Performance modeling and optimization of sparse matrix-vector multiplication on NVIDIA CUDA platform

    NARCIS (Netherlands)

    Xu, S.; Xue, W.; Lin, H.X.

    2011-01-01

    In this article, we discuss the performance modeling and optimization of Sparse Matrix-Vector Multiplication (SpMV) on NVIDIA GPUs using CUDA. SpMV has a very low computation-data ratio and its performance is mainly bound by the memory bandwidth. We propose optimization of SpMV based on ELLPACK from
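
    The abstract refers to SpMV optimization based on the ELLPACK format. As a hedged CPU-side sketch of that storage scheme only (the CUDA kernels and the performance model of the paper are not reproduced), the code below packs a matrix into ELLPACK arrays and performs the corresponding multiply; the example matrix is illustrative.

```python
import numpy as np

def to_ellpack(A_dense):
    """Pack a dense matrix into ELLPACK arrays (values, column indices),
    padded to the maximum number of non-zeros per row."""
    n_rows = A_dense.shape[0]
    nnz_cols = [np.nonzero(row)[0] for row in A_dense]
    width = max(len(c) for c in nnz_cols)
    cols = np.zeros((n_rows, width), dtype=int)
    vals = np.zeros((n_rows, width))
    for i, c in enumerate(nnz_cols):
        cols[i, :len(c)] = c
        vals[i, :len(c)] = A_dense[i, c]
    return vals, cols

def ellpack_spmv(vals, cols, x):
    """y = A @ x using ELLPACK storage; padded entries carry value 0."""
    return (vals * x[cols]).sum(axis=1)

A = np.array([[4.0, 0.0, 1.0], [0.0, 3.0, 0.0], [2.0, 0.0, 5.0]])
x = np.array([1.0, 2.0, 3.0])
vals, cols = to_ellpack(A)
print(ellpack_spmv(vals, cols, x), A @ x)   # both [7., 6., 17.]
```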

  3. A Matrix Splitting Method for Composite Function Minimization

    KAUST Repository

    Yuan, Ganzhao

    2016-12-07

Composite function minimization captures a wide spectrum of applications in both computer vision and machine learning. It includes bound constrained optimization and cardinality regularized optimization as special cases. This paper proposes and analyzes a new Matrix Splitting Method (MSM) for minimizing composite functions. It can be viewed as a generalization of the classical Gauss-Seidel method and the Successive Over-Relaxation method for solving linear systems in the literature. Incorporating a new Gaussian elimination procedure, the matrix splitting method achieves state-of-the-art performance. For convex problems, we establish the global convergence, convergence rate, and iteration complexity of MSM, while for non-convex problems, we prove its global convergence. Finally, we validate the performance of our matrix splitting method on two particular applications: nonnegative matrix factorization and cardinality regularized sparse coding. Extensive experiments show that our method outperforms existing composite function minimization techniques in terms of both efficiency and efficacy.
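
    Since MSM is described as a generalization of the Gauss-Seidel splitting, here is a minimal sketch of the classical Gauss-Seidel iteration viewed as the matrix splitting A = (D + L) + U for a linear system. This illustrates only the classical baseline, not the composite-function MSM algorithm itself; the example system is illustrative.

```python
import numpy as np

def gauss_seidel(A, b, n_iter=100):
    """Solve A x = b with the splitting A = M + N, M = D + L (lower triangle),
    N = U (strict upper triangle): x_{k+1} = M^{-1} (b - N x_k)."""
    M = np.tril(A)                 # D + L
    N = A - M                      # strict upper part
    x = np.zeros_like(b, dtype=float)
    for _ in range(n_iter):
        x = np.linalg.solve(M, b - N @ x)   # a triangular solve in practice
    return x

A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 5.0]])  # diagonally dominant
b = np.array([1.0, 2.0, 3.0])
print(gauss_seidel(A, b), np.linalg.solve(A, b))   # both give the same solution
```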

  4. A Matrix Splitting Method for Composite Function Minimization

    KAUST Repository

    Yuan, Ganzhao; Zheng, Wei-Shi; Ghanem, Bernard

    2016-01-01

Composite function minimization captures a wide spectrum of applications in both computer vision and machine learning. It includes bound constrained optimization and cardinality regularized optimization as special cases. This paper proposes and analyzes a new Matrix Splitting Method (MSM) for minimizing composite functions. It can be viewed as a generalization of the classical Gauss-Seidel method and the Successive Over-Relaxation method for solving linear systems in the literature. Incorporating a new Gaussian elimination procedure, the matrix splitting method achieves state-of-the-art performance. For convex problems, we establish the global convergence, convergence rate, and iteration complexity of MSM, while for non-convex problems, we prove its global convergence. Finally, we validate the performance of our matrix splitting method on two particular applications: nonnegative matrix factorization and cardinality regularized sparse coding. Extensive experiments show that our method outperforms existing composite function minimization techniques in terms of both efficiency and efficacy.

  5. KBLAS: An Optimized Library for Dense Matrix-Vector Multiplication on GPU Accelerators

    KAUST Repository

    Abdelfattah, Ahmad

    2016-05-11

KBLAS is an open-source, high-performance library that provides optimized kernels for a subset of Level 2 BLAS functionalities on CUDA-enabled GPUs. Since performance of dense matrix-vector multiplication is hindered by the overhead of memory accesses, a double-buffering optimization technique is employed to overlap data motion with computation. After identifying a proper set of tuning parameters, KBLAS efficiently runs on various GPU architectures while avoiding code rewriting and retaining compliance with the standard BLAS API. Another optimization technique allows ensuring coalesced memory access when dealing with submatrices, especially for high-level dense linear algebra algorithms. All KBLAS kernels have been leveraged to a multi-GPU environment, which requires the introduction of new APIs. Considering general matrices, KBLAS is very competitive with existing state-of-the-art kernels and provides a smoother performance across a wide range of matrix dimensions. Considering symmetric and Hermitian matrices, the KBLAS performance outperforms existing state-of-the-art implementations on all matrix sizes and achieves asymptotically up to 50% and 60% speedup against the best competitor on single-GPU and multi-GPU systems, respectively. Performance results also validate our performance model. A subset of KBLAS high-performance kernels have been integrated into NVIDIA's standard BLAS implementation (cuBLAS) for larger dissemination, starting from version 6.0. © 2016 ACM.

  6. A Fast Reactive Power Optimization in Distribution Network Based on Large Random Matrix Theory and Data Analysis

    Directory of Open Access Journals (Sweden)

    Wanxing Sheng

    2016-05-01

Full Text Available In this paper, a reactive power optimization method based on historical data is investigated to solve the dynamic reactive power optimization problem in distribution networks. In order to reflect the variation of loads, network loads are represented in the form of a random matrix. Load similarity (LS) is defined to measure the degree of similarity between the loads on different days, and the calculation method for the load similarity of the load random matrix (LRM) is presented. By calculating the load similarity between the forecasting random matrix and the random matrix of historical load, the historical reactive power optimization dispatching scheme that best matches the forecasting load can be found for reactive power control usage. The differences in daily load curves between working days and weekends in different seasons are considered in the proposed method. The proposed method is tested on a standard 14-node distribution network with three different types of load. The computational results demonstrate that the proposed method for reactive power optimization is fast, feasible and effective in distribution networks.

  7. Optimal power transaction matrix rescheduling under multilateral open access environment

    International Nuclear Information System (INIS)

    Moghaddam, M.P.; Raoofat, M.; Haghifam, M.R.

    2004-01-01

    This paper addresses a new concept for determining optimal transactions between different entities in a multilateral environment while benefits of both buyer and seller entities are taken into account with respect to the rules of the system. At the same time, constraints of the network are met, which leads to an optimal power flow problem. A modified power transaction matrix is proposed for modeling the environment. The optimization method in this paper is the continuation method, which is suited for complex situations of power system studies. This complexity will become more serious when dual interaction between financial and electrical subsystems of competitive power system are taken into account. The proposed approach is tested on a typical network with satisfactory results. (author)

  8. A procedure for multi-objective optimization of tire design parameters

    OpenAIRE

    Nikola Korunović; Miloš Madić; Miroslav Trajanović; Miroslav Radovanović

    2015-01-01

    The identification of optimal tire design parameters for satisfying different requirements, i.e. tire performance characteristics, plays an essential role in tire design. In order to improve tire performance characteristics, formulation and solving of multi-objective optimization problem must be performed. This paper presents a multi-objective optimization procedure for determination of optimal tire design parameters for simultaneous minimization of strain energy density at two distinctive zo...

  9. Optimal broadband Mueller matrix ellipsometer using multi-waveplates with flexibly oriented axes

    International Nuclear Information System (INIS)

    Gu, Honggang; Chen, Xiuguo; Jiang, Hao; Zhang, Chuanwei; Liu, Shiyuan

    2016-01-01

Accurate measurement of the Mueller matrix over a broad band is highly desirable for the characterization of nanostructures and nanomaterials. In this paper, we propose a general composite waveplate (GCW) that consists of multiple waveplates with flexibly oriented axes as a polarization modulating component in the Mueller matrix ellipsometer (MME). Although it is a common practice to make achromatic retarders by combining multiple waveplates, the novelty of the GCW is that both the retardances and the azimuths of the fast axes of the single waveplates in the GCW are flexible parameters to be optimized, which is different from the conventional design where single waveplates are usually arranged in a symmetrical layout or with their fast axes parallel or perpendicular to each other. Consequently, the GCW can provide many more flexibilities to adapt to the optimization of the MME over a broad band. A quartz triplate, as a concrete example of the GCW, is designed and used in a house-made MME. The experimental results on air demonstrate that the house-made MME using the optimally designed quartz triplates has an accuracy better than 0.2% and a precision better than 0.1% in the Mueller matrix measurement over a broad spectral range of 200∼1000 nm. The house-made MME exhibits high measurement repeatability, better than 0.004 nm, in testing a series of standard SiO2/Si samples with nominal oxide layer thicknesses ranging from 2 nm to 1000 nm. (paper)
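
    To illustrate what a composite waveplate model looks like, the sketch below composes the Mueller matrices of several linear retarders with arbitrary fast-axis azimuths (one common sign convention is assumed). The retardances and azimuths in the example are arbitrary placeholders, not the optimized quartz-triplate design, and the instrument calibration of the paper is not reproduced.

```python
import numpy as np

def rot(theta):
    """Mueller rotation matrix for a frame rotation by angle theta (radians)."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return np.array([[1, 0, 0, 0],
                     [0, c, s, 0],
                     [0, -s, c, 0],
                     [0, 0, 0, 1.0]])

def retarder(theta, delta):
    """Mueller matrix of a linear retarder with retardance delta and fast axis
    at azimuth theta."""
    cd, sd = np.cos(delta), np.sin(delta)
    M0 = np.array([[1, 0, 0, 0],
                   [0, 1, 0, 0],
                   [0, 0, cd, sd],
                   [0, 0, -sd, cd]])
    return rot(-theta) @ M0 @ rot(theta)

def composite(plates):
    """Mueller matrix of a stack of waveplates, light passing them in order."""
    M = np.eye(4)
    for theta, delta in plates:
        M = retarder(theta, delta) @ M
    return M

# Example: a 'triplate' with illustrative (not optimized) retardances and azimuths
plates = [(np.deg2rad(10), np.pi / 2),
          (np.deg2rad(55), np.pi / 3),
          (np.deg2rad(-20), np.pi / 2)]
print(np.round(composite(plates), 3))
```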

  10. MIMO-OFDM Chirp Waveform Diversity Design and Implementation Based on Sparse Matrix and Correlation Optimization

    Directory of Open Access Journals (Sweden)

    Wang Wen-qin

    2015-02-01

    Full Text Available The waveforms used in Multiple-Input Multiple-Output (MIMO) Synthetic Aperture Radar (SAR) should have a large time-bandwidth product and good ambiguity function performance. A scheme to design multiple orthogonal MIMO SAR Orthogonal Frequency Division Multiplexing (OFDM) chirp waveforms by combining a sparse matrix with correlation optimization is proposed. First, the MIMO SAR waveform design problem is reduced to the joint design of hopping frequencies and amplitudes. Then an iterative exhaustive search algorithm is adopted to optimally design the code matrix under constraints that minimize the block correlation coefficient of the sparse matrix and the sum of the cross-correlation peaks. The amplitude matrix is adaptively designed by minimizing the cross-correlation peaks with a genetic algorithm. Additionally, the impacts of the number of waveforms, the hopping frequency interval and the selectable frequency index are analyzed. The simulation results verify that the proposed scheme can design multiple orthogonal, large time-bandwidth product OFDM chirp waveforms with low cross-correlation peaks and sidelobes, and that it improves the ambiguity performance.

  11. Enhanced MALDI-TOF MS Analysis of Phosphopeptides Using an Optimized DHAP/DAHC Matrix

    Science.gov (United States)

    Hou, Junjie; Xie, Zhensheng; Xue, Peng; Cui, Ziyou; Chen, Xiulan; Li, Jing; Cai, Tanxi; Wu, Peng; Yang, Fuquan

    2010-01-01

    Selecting an appropriate matrix solution is one of the most effective means of increasing the ionization efficiency of phosphopeptides in matrix-assisted laser-desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS). In this study, we systematically assessed matrix combinations of 2, 6-dihydroxyacetophenone (DHAP) and diammonium hydrogen citrate (DAHC), and demonstrated that the low ratio DHAP/DAHC matrix was more effective in enhancing the ionization of phosphopeptides. Low femtomole level of phosphopeptides from the tryptic digests of α-casein and β-casein was readily detected by MALDI-TOF-MS in both positive and negative ion mode without desalination or phosphopeptide enrichment. Compared with the DHB/PA matrix, the optimized DHAP/DAHC matrix yielded superior sample homogeneity and higher phosphopeptide measurement sensitivity, particularly when multiple phosphorylated peptides were assessed. Finally, the DHAP/DAHC matrix was applied to identify phosphorylation sites from α-casein and β-casein and to characterize two phosphorylation sites from the human histone H1 treated with Cyclin-Dependent Kinase-1 (CDK1) by MALDI-TOF/TOF MS. PMID:20339515

  12. Enhanced MALDI-TOF MS Analysis of Phosphopeptides Using an Optimized DHAP/DAHC Matrix

    Directory of Open Access Journals (Sweden)

    Junjie Hou

    2010-01-01

    Full Text Available Selecting an appropriate matrix solution is one of the most effective means of increasing the ionization efficiency of phosphopeptides in matrix-assisted laser-desorption/ionization time-of-flight mass spectrometry (MALDI-TOF-MS). In this study, we systematically assessed matrix combinations of 2,6-dihydroxyacetophenone (DHAP) and diammonium hydrogen citrate (DAHC), and demonstrated that the low-ratio DHAP/DAHC matrix was more effective in enhancing the ionization of phosphopeptides. Low femtomole levels of phosphopeptides from the tryptic digests of α-casein and β-casein were readily detected by MALDI-TOF-MS in both positive and negative ion mode without desalination or phosphopeptide enrichment. Compared with the DHB/PA matrix, the optimized DHAP/DAHC matrix yielded superior sample homogeneity and higher phosphopeptide measurement sensitivity, particularly when multiple phosphorylated peptides were assessed. Finally, the DHAP/DAHC matrix was applied to identify phosphorylation sites from α-casein and β-casein and to characterize two phosphorylation sites from the human histone H1 treated with Cyclin-Dependent Kinase-1 (CDK1) by MALDI-TOF/TOF MS.

  13. Optimization of procedure for calibration with radiometer/photometer

    International Nuclear Information System (INIS)

    Detilly, Isabelle

    2009-01-01

    A test procedure for the calibration of International Light radiometer/photometers is established at the Laboratorio de Fotometria y Tecnologia Laser (LAFTA) de la Escuela de Ingenieria Electrica de la Universidad de Costa Rica. Two photometric benches are used as the experimental setup, and two calibrations of the International Light instrument were performed. A basic procedure established in the laboratory is used for calibration from measurements of illuminance and luminous intensity. The results showed some variations that depend on the photometric benches used in the calibration process, on the programming of the radiometer/photometer and on the applied methodology. The calibration procedure with the radiometer/photometer can be improved by optimizing the programming process of the measurement instrument, and possible errors can be minimized by following the recommended procedure. (author) [es

  14. Optimization of the Dutch Matrix Test by Random Selection of Sentences From a Preselected Subset

    Directory of Open Access Journals (Sweden)

    Rolph Houben

    2015-04-01

    Full Text Available Matrix tests are available for speech recognition testing in many languages. For an accurate measurement, a steep psychometric function of the speech materials is required. For existing tests, it would be beneficial if it were possible to further optimize the available materials by increasing the function’s steepness. The objective is to show whether the steepness of the psychometric function of an existing matrix test can be increased by selecting a homogeneous subset of recordings with the steepest sentence-based psychometric functions. We took data from a previous multicenter evaluation of the Dutch matrix test (45 normal-hearing listeners). Based on half of the data set, the sentences (140 out of 311) with a similar speech reception threshold and with the steepest psychometric function (≥9.7%/dB) were first selected. Subsequently, the steepness of the psychometric function for this selection was calculated from the remaining (unused) second half of the data set. The calculation showed that the slope increased from 10.2%/dB to 13.7%/dB. The resulting subset did not allow the construction of enough balanced test lists. Therefore, the measurement procedure was changed to select the sentences randomly during testing. Random selection may interfere with a representative occurrence of phonemes. However, in our material, the median phonemic occurrence remained close to that of the original test. This finding indicates that phonemic occurrence is not a critical factor. The work highlights the possibility that existing speech tests might be improved by selecting sentences with a steep psychometric function.
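
    The sentence-selection step can be pictured with a small sketch: fit a logistic psychometric function per sentence and keep those whose slope at the 50% point is at least 9.7%/dB. The logistic parameterization, the toy data and the helper names psychometric and fit_sentence are assumptions for illustration; this is not the evaluation code used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(snr, srt, slope):
    """Logistic psychometric function; 'slope' is the steepness (proportion
    correct per dB) at the 50% point, 'srt' the speech reception threshold."""
    return 1.0 / (1.0 + np.exp(4.0 * slope * (srt - snr)))

def fit_sentence(snrs, proportion_correct):
    """Fit SRT and slope for one sentence from its measured proportions correct."""
    popt, _ = curve_fit(psychometric, snrs, proportion_correct,
                        p0=[-8.0, 0.10], maxfev=10000)
    return popt  # (srt_dB, slope_per_dB)

# Toy data for a few sentences (SNR in dB, observed proportion correct).
rng = np.random.default_rng(1)
snrs = np.linspace(-16, 0, 9)
sentences = {}
for name, true_srt, true_slope in [("s1", -9.0, 0.14), ("s2", -7.5, 0.08), ("s3", -8.5, 0.11)]:
    p = psychometric(snrs, true_srt, true_slope)
    obs = np.clip(p + rng.normal(0, 0.03, p.size), 0, 1)
    sentences[name] = fit_sentence(snrs, obs)

# Keep only sentences with slope >= 9.7 %/dB, as in the abstract.
selected = {k: v for k, v in sentences.items() if v[1] * 100 >= 9.7}
print({k: (round(srt, 2), round(100 * sl, 1)) for k, (srt, sl) in selected.items()})
```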

  15. Abdominoplasty for Ladd's procedure: optimizing access and esthetics

    African Journals Online (AJOL)

    Abdominoplasty for Ladd's procedure: optimizing access and esthetics. Rachel Aliotta, Neilendu Kundu, Anthony Stallion, Christi Cavaliere. Abstract. Rotational anomalies occur when there is an abnormal arrest of rotation in the embryonic gut during development. The characteristic population affected is considered to be ...

  16. On the employment of lambda carrageenan in a matrix system. III. Optimization of a lambda carrageenan-HPMC hydrophilic matrix

    NARCIS (Netherlands)

    Bonferoni, MC; Rossi, S; Ferrari, F; Bertoni, M; Bolhuis, GK; Caramella, C

    1998-01-01

    The lambda carrageenan/HPMC ratio in matrix tablets has been optimized in order to obtain pH-independent release profiles of chlorpheniramine maleate, a freely soluble drug. Release profiles in acidic (pH 1.2) and neutral (pH 6.8) media were fitted according to the Weibull and the power law models.

  17. Information matrix estimation procedures for cognitive diagnostic models.

    Science.gov (United States)

    Liu, Yanlou; Xin, Tao; Andersson, Björn; Tian, Wei

    2018-03-06

    Two new methods to estimate the asymptotic covariance matrix for marginal maximum likelihood estimation of cognitive diagnosis models (CDMs), the inverse of the observed information matrix and the sandwich-type estimator, are introduced. Unlike several previous covariance matrix estimators, the new methods take into account both the item and structural parameters. The relationships between the observed information matrix, the empirical cross-product information matrix, the sandwich-type covariance matrix and the two approaches proposed by de la Torre (2009, J. Educ. Behav. Stat., 34, 115) are discussed. Simulation results show that, for a correctly specified CDM and Q-matrix or with a slightly misspecified probability model, the observed information matrix and the sandwich-type covariance matrix exhibit good performance with respect to providing consistent standard errors of item parameter estimates. However, with substantial model misspecification only the sandwich-type covariance matrix exhibits robust performance. © 2018 The British Psychological Society.
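
    The sandwich-type estimator has the generic form A⁻¹BA⁻¹, with A the observed information and B the empirical cross-product of per-respondent score vectors. The sketch below shows only this generic linear algebra on assumed inputs; it is not tied to any particular CDM likelihood or to the authors' implementation.

```python
import numpy as np

def observed_information(hessian):
    """Observed information matrix: negative Hessian of the log-likelihood
    evaluated at the MML estimates (assumed already computed)."""
    return -hessian

def sandwich_covariance(hessian, scores):
    """Sandwich-type covariance A^{-1} B A^{-1}, where A is the observed
    information and B the empirical cross-product of per-respondent scores."""
    a = observed_information(hessian)              # p x p
    b = scores.T @ scores                          # p x p, sum_i s_i s_i^T
    a_inv = np.linalg.inv(a)
    return a_inv @ b @ a_inv

# Toy inputs: 500 respondents, 6 parameters (item + structural stacked).
rng = np.random.default_rng(2)
scores = rng.normal(0, 0.3, size=(500, 6))         # per-respondent score vectors
hessian = -(scores.T @ scores + 5.0 * np.eye(6))   # a plausible negative-definite Hessian
cov = sandwich_covariance(hessian, scores)
print("standard errors:", np.sqrt(np.diag(cov)).round(4))
```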

  18. Numerical Procedure for Optimizing Dye-Sensitized Solar Cells

    Directory of Open Access Journals (Sweden)

    Mihai Razvan Mitroi

    2014-01-01

    Full Text Available We propose a numerical procedure consisting of a simplified physical model and a numerical method with the aim of optimizing the performance parameters of dye-sensitized solar cells (DSSCs). We calculate the real rate of absorbed photons (in the dye spectral range), Greal(x), by introducing a factor β<1 in order to simplify the treatment of light absorption and reflection at the TCO electrode. We consider the electrical transport to be purely diffusive and the recombination process to occur only between electrons in the TiO2 conduction band and anions in the electrolyte. The numerical method used permits solving the system of differential equations resulting from the physical model. We apply the proposed numerical procedure to a classical DSSC based on a ruthenium dye in order to validate it. For this, we simulate the J-V characteristics and calculate the main parameters: short-circuit current density Jsc, open-circuit voltage Voc, fill factor FF, and power conversion efficiency η. We analyze the influence of the nature of the semiconductor (TiO2) and the dye, and also the influence of different technological parameters, on the performance parameters of DSSCs. The obtained results show that the proposed numerical procedure is suitable for developing a numerical simulation platform for improving DSSC performance by choosing the optimal parameters.

  19. A regularized matrix factorization approach to induce structured sparse-low-rank solutions in the EEG inverse problem

    DEFF Research Database (Denmark)

    Montoya-Martinez, Jair; Artes-Rodriguez, Antonio; Pontil, Massimiliano

    2014-01-01

    We consider the estimation of the Brain Electrical Sources (BES) matrix from noisy electroencephalographic (EEG) measurements, commonly named as the EEG inverse problem. We propose a new method to induce neurophysiological meaningful solutions, which takes into account the smoothness, structured sparsity, and low rank of the BES matrix. The method is based on the factorization of the BES matrix as a product of a sparse coding matrix and a dense latent source matrix. The structured sparse-low-rank structure is enforced by minimizing a regularized functional that includes the ℓ21-norm of the coding matrix and the squared Frobenius norm of the latent source matrix. We develop an alternating optimization algorithm to solve the resulting nonsmooth-nonconvex minimization problem. We analyze the convergence of the optimization procedure, and we compare, under different synthetic scenarios...
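
    A simplified sketch of the alternating scheme: with Y ≈ A C Z, the latent source matrix Z is updated by ridge regression and the coding matrix C by a proximal-gradient step with row-wise (ℓ2,1) soft-thresholding. The step size, regularizer placement and toy dimensions are assumptions; this is not the authors' exact algorithm.

```python
import numpy as np

def group_soft_threshold(c, tau):
    """Row-wise (ell_2,1) proximal operator: shrink each row of C."""
    norms = np.linalg.norm(c, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return scale * c

def alternating_factorization(y, a, k, lam_c=0.1, lam_z=0.01, iters=200):
    """Minimize ||Y - A C Z||_F^2 + lam_c ||C||_{2,1} + lam_z ||Z||_F^2
    by alternating a ridge update of Z and a proximal-gradient step on C."""
    n_sources = a.shape[1]
    rng = np.random.default_rng(0)
    c = rng.normal(0, 0.1, size=(n_sources, k))
    z = rng.normal(0, 0.1, size=(k, y.shape[1]))
    for _ in range(iters):
        # Z-step: ridge regression with M = A C fixed.
        m = a @ c
        z = np.linalg.solve(m.T @ m + lam_z * np.eye(k), m.T @ y)
        # C-step: one proximal-gradient (ISTA) step on the smooth part.
        grad = a.T @ (a @ c @ z - y) @ z.T
        step = 1.0 / (np.linalg.norm(a, 2) ** 2 * np.linalg.norm(z, 2) ** 2 + 1e-12)
        c = group_soft_threshold(c - step * grad, step * lam_c)
    return c, z

rng = np.random.default_rng(3)
a = rng.normal(size=(32, 200))          # lead-field (electrodes x sources)
y = rng.normal(size=(32, 100))          # EEG measurements (electrodes x time)
c, z = alternating_factorization(y, a, k=5)
print("active source rows:", int((np.linalg.norm(c, axis=1) > 1e-8).sum()))
```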

  20. Aerial photogrammetry procedure optimized for micro uav

    Directory of Open Access Journals (Sweden)

    T. Anai

    2014-06-01

    Full Text Available This paper proposes an automatic aerial photogrammetry procedure optimized for a micro UAV with autonomous flight capability. The most important goal of our proposed method is to reduce the processing cost of fully automatic reconstruction of a DSM from the large number of images obtained from the micro UAV. For this goal, we have developed an automatic corresponding-point generation procedure using a feature point tracking algorithm that takes into account the position and attitude information obtained from the GPS-IMU integrated on board the micro UAV. In addition, we have developed an automatic exterior orientation and registration procedure based on the automatically generated corresponding points on each image and the position and attitude information from the micro UAV. Moreover, in order to reconstruct a precise DSM, we have developed an area-based matching process that takes edge information into account. In this paper, we describe the processing flow of our automatic aerial photogrammetry. Moreover, the accuracy assessment is also described. Furthermore, some applications of the automatic reconstruction of DSMs are also presented.

  1. Optimization of Coil Element Configurations for a Matrix Gradient Coil.

    Science.gov (United States)

    Kroboth, Stefan; Layton, Kelvin J; Jia, Feng; Littin, Sebastian; Yu, Huijun; Hennig, Jurgen; Zaitsev, Maxim

    2018-01-01

    Recently, matrix gradient coils (also termed multi-coils or multi-coil arrays) were introduced for imaging and B0 shimming with 24, 48, and even 84 coil elements. However, in imaging applications, providing one amplifier per coil element is not always feasible due to high cost and technical complexity. In this simulation study, we show that an 84-channel matrix gradient coil (head insert for brain imaging) is able to create a wide variety of field shapes even if the number of amplifiers is reduced. An optimization algorithm was implemented that obtains groups of coil elements, such that a desired target field can be created by driving each group with an amplifier. This limits the number of amplifiers to the number of coil element groups. Simulated annealing is used due to the NP-hard combinatorial nature of the given problem. A spherical harmonic basis set up to the full third order within a sphere of 20-cm diameter in the center of the coil was investigated as target fields. We show that the median normalized least squares error for all target fields is below approximately 5% for 12 or more amplifiers. At the same time, the dissipated power stays within reasonable limits. With a relatively small set of amplifiers, switches can be used to sequentially generate spherical harmonics up to third order. The costs associated with a matrix gradient coil can be lowered, which increases the practical utility of matrix gradient coils.
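
    A minimal simulated-annealing sketch of the grouping idea: given an assignment of coil elements to amplifier groups, the per-group currents are fitted to a target field by least squares, and the assignment is perturbed one element at a time. The random field maps stand in for the simulated element fields, and the cooling schedule and move set are assumptions, not the study's implementation.

```python
import numpy as np

def group_fit_error(fields, groups, n_groups, target):
    """Least-squares error of the target field when each group of coil
    elements is driven by a single amplifier current."""
    basis = np.zeros((fields.shape[0], n_groups))
    for g in range(n_groups):
        basis[:, g] = fields[:, groups == g].sum(axis=1)
    currents, *_ = np.linalg.lstsq(basis, target, rcond=None)
    return np.linalg.norm(basis @ currents - target) / np.linalg.norm(target)

def anneal_grouping(fields, target, n_groups, steps=5000, t0=0.05, seed=0):
    """Simulated annealing over element-to-group assignments."""
    rng = np.random.default_rng(seed)
    n_elem = fields.shape[1]
    groups = rng.integers(0, n_groups, size=n_elem)
    err = group_fit_error(fields, groups, n_groups, target)
    for step in range(steps):
        temp = t0 * (1.0 - step / steps) + 1e-6
        cand = groups.copy()
        cand[rng.integers(n_elem)] = rng.integers(n_groups)   # move one element
        cand_err = group_fit_error(fields, cand, n_groups, target)
        if cand_err < err or rng.random() < np.exp((err - cand_err) / temp):
            groups, err = cand, cand_err
    return groups, err

rng = np.random.default_rng(4)
fields = rng.normal(size=(500, 84))          # field of each of 84 elements at 500 points
target = fields @ rng.normal(size=84)        # a reachable target field (toy)
groups, err = anneal_grouping(fields, target, n_groups=12)
print("normalized least-squares error with 12 amplifiers:", round(err, 4))
```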

  2. Time Optimal Synchronization Procedure and Associated Feedback Loops

    CERN Document Server

    Angoletta, Maria Elena; CERN. Geneva. ATS Department

    2016-01-01

    A procedure to increase the speed of currently used synchronization loops in a synchrotron by an order of magnitude is presented. Beam dynamics constraints impose an upper limit on excursions in the stable phase angle, and the procedure presented exploits this limit to arrive at the synchronized state from an arbitrary initial state in the fastest possible way. Detailed corrector design for the beam phase loop, the differential frequency loop and the final synchronization loop is also presented. Finally, an overview of the synchronization methods currently deployed in some other CERN machines is provided, together with a brief comparison with the newly proposed time-optimal algorithm.

  3. Procedural Optimization Models for Multiobjective Flexible JSSP

    Directory of Open Access Journals (Sweden)

    Elena Simona NICOARA

    2013-01-01

    Full Text Available The most challenging issues related to manufacturing efficiency occur when the jobs to be scheduled are structurally different, when these jobs allow flexible routings on the equipment and when multiple objectives are required. This framework, called Multi-objective Flexible Job Shop Scheduling Problems (MOFJSSP), applicable to many real processes, has been less reported in the literature than the JSSP framework, which has been extensively formalized, modeled and analyzed from many perspectives. The MOFJSSP lies, as do many other NP-hard problems, in a tedious place where the vast optimization theory meets the real-world context. The paper brings to discussion the optimization models best suited to the MOFJSSP and analyzes in detail the genetic algorithms and agent-based models as the most appropriate procedural models.

  4. Optimizing Sparse Matrix-Multiple Vectors Multiplication for Nuclear Configuration Interaction Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Aktulga, Hasan Metin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Buluc, Aydin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Williams, Samuel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Yang, Chao [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2014-08-14

    Obtaining highly accurate predictions of the properties of light atomic nuclei using the configuration interaction (CI) approach requires computing a few extremal eigenpairs of the many-body nuclear Hamiltonian matrix. In the Many-body Fermion Dynamics for nuclei (MFDn) code, a block eigensolver is used for this purpose. Due to the large size of the sparse matrices involved, a significant fraction of the time spent on the eigenvalue computations is associated with the multiplication of a sparse matrix (and the transpose of that matrix) with multiple vectors (SpMM and SpMM-T). Existing implementations of SpMM and SpMM-T significantly underperform expectations. Thus, in this paper, we present and analyze optimized implementations of SpMM and SpMM-T. We base our implementation on the compressed sparse blocks (CSB) matrix format and target systems with multi-core architectures. We develop a performance model that allows us to understand and estimate the performance characteristics of our SpMM kernel implementations, and demonstrate the efficiency of our implementation on a series of real-world matrices extracted from MFDn. In particular, we obtain a 3-4x speedup on the requisite operations over good implementations based on the commonly used compressed sparse row (CSR) matrix format. The improvements in the SpMM kernel suggest we may attain roughly a 40% speedup in the overall execution time of the block eigensolver used in MFDn.
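
    The benefit of multiplying by a block of vectors at once (SpMM) rather than looping over single-vector SpMVs can be seen even at the SciPy level, as in the small timing sketch below. This says nothing about the CSB-based multicore kernels developed in the paper; it only illustrates the operation being optimized, and the matrix here is a random stand-in rather than a nuclear Hamiltonian.

```python
import time
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(5)
n, nvec = 200_000, 8
a = sp.random(n, n, density=2e-5, format="csr", random_state=5)  # sparse stand-in matrix
x = rng.normal(size=(n, nvec))                                   # block of vectors

t0 = time.perf_counter()
y_block = a @ x                                                  # SpMM: one pass over A
t_block = time.perf_counter() - t0

t0 = time.perf_counter()
y_cols = np.column_stack([a @ x[:, j] for j in range(nvec)])     # nvec separate SpMVs
t_cols = time.perf_counter() - t0

assert np.allclose(y_block, y_cols)
print(f"SpMM: {t_block:.3f}s  repeated SpMV: {t_cols:.3f}s")
```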

  5. On the limit matrix obtained in the homogenization of an optimal ...

    Indian Academy of Sciences (India)


    and, if so, identify that matrix and estimate the constants ˜βm and ˜βM. When Bε = Aε, it is well-known (cf. Murat [7]) that indeed B# = A0, the H-limit of Aε. It turns out that the solution to this problem is closely related to the question of homogenizing an associated optimal control problem. Let Uad ⊂ L2( ) be a closed convex ...

  6. Optimization by GRASP greedy randomized adaptive search procedures

    CERN Document Server

    Resende, Mauricio G C

    2016-01-01

    This is the first book to cover GRASP (Greedy Randomized Adaptive Search Procedures), a metaheuristic that has enjoyed wide success in practice with a broad range of applications to real-world combinatorial optimization problems. The state-of-the-art coverage and carefully crafted pedagogical style make this book highly accessible as an introductory text not only to GRASP, but also to combinatorial optimization, greedy algorithms, local search, and path-relinking, as well as to heuristics and metaheuristics in general. The focus is on algorithmic and computational aspects of applied optimization with GRASP, with emphasis given to the end-user, providing sufficient information on the broad spectrum of advances in applied optimization with GRASP. For the more advanced reader, chapters on hybridization with path-relinking and parallel and continuous GRASP present these topics in a clear and concise fashion. Additionally, the book offers a very complete annotated bibliography of GRASP and combinatorial optimizat...

  7. Minimal representation of matrix valued white stochastic processes and U–D factorisation of algorithms for optimal control

    NARCIS (Netherlands)

    Willigenburg, van L.G.; Koning, de W.L.

    2013-01-01

    Two different descriptions are used in the literature to formulate the optimal dynamic output feedback control problem for linear dynamical systems with white stochastic parameters and quadratic criteria, called the optimal compensation problem. One describes the matrix valued white stochastic

  8. Optimizing strassen matrix multiply on GPUs

    KAUST Repository

    ul Hasan Khan, Ayaz; Al-Mouhamed, Mayez; Fatayer, Allam

    2015-01-01

    © 2015 IEEE. Many-core systems are basically designed for applications having large data parallelism. Strassen Matrix Multiply (MM) can be formulated as a depth-first (DFS) traversal of a recursion tree where all cores work in parallel on computing each of the NxN sub-matrices, which reduces storage at the cost of large data motion to gather and aggregate the results. We propose Strassen and Winograd algorithms (S-MM and W-MM) based on three optimizations: a set of basic algebra functions to reduce overhead, invoking an efficient library (CUBLAS 5.5), and parameter-tuning of parametric kernels to improve resource occupancy. On GPUs, W-MM and S-MM with one recursion level outperform the CUBLAS 5.5 library, running up to twice as fast for large arrays satisfying N>=2048 and N>=3072, respectively. Compared to the NVIDIA SDK library, S-MM and W-MM achieved a speedup between 20x and 80x for the above arrays. The proposed approach can be used to enhance the performance of the CUBLAS and MKL libraries.
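
    One recursion level of Strassen's algorithm, the structure referred to above, is easy to write down in NumPy: seven half-size products replace the usual eight. The sketch below only illustrates the algorithm and verifies it against a plain matrix product; it is unrelated to the CUDA/CUBLAS kernels tuned in the paper.

```python
import numpy as np

def strassen_one_level(a, b):
    """One level of Strassen's recursion (7 half-size products instead of 8).
    Assumes square matrices with even dimension; sub-products use plain '@'."""
    n = a.shape[0] // 2
    a11, a12, a21, a22 = a[:n, :n], a[:n, n:], a[n:, :n], a[n:, n:]
    b11, b12, b21, b22 = b[:n, :n], b[:n, n:], b[n:, :n], b[n:, n:]
    m1 = (a11 + a22) @ (b11 + b22)
    m2 = (a21 + a22) @ b11
    m3 = a11 @ (b12 - b22)
    m4 = a22 @ (b21 - b11)
    m5 = (a11 + a12) @ b22
    m6 = (a21 - a11) @ (b11 + b12)
    m7 = (a12 - a22) @ (b21 + b22)
    c = np.empty_like(a)
    c[:n, :n] = m1 + m4 - m5 + m7
    c[:n, n:] = m3 + m5
    c[n:, :n] = m2 + m4
    c[n:, n:] = m1 - m2 + m3 + m6
    return c

rng = np.random.default_rng(6)
a, b = rng.normal(size=(512, 512)), rng.normal(size=(512, 512))
assert np.allclose(strassen_one_level(a, b), a @ b)
```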

  9. Optimizing strassen matrix multiply on GPUs

    KAUST Repository

    ul Hasan Khan, Ayaz

    2015-06-01

    © 2015 IEEE. Many-core systems are basically designed for applications having large data parallelism. Strassen Matrix Multiply (MM) can be formulated as a depth-first (DFS) traversal of a recursion tree where all cores work in parallel on computing each of the NxN sub-matrices, which reduces storage at the cost of large data motion to gather and aggregate the results. We propose Strassen and Winograd algorithms (S-MM and W-MM) based on three optimizations: a set of basic algebra functions to reduce overhead, invoking an efficient library (CUBLAS 5.5), and parameter-tuning of parametric kernels to improve resource occupancy. On GPUs, W-MM and S-MM with one recursion level outperform the CUBLAS 5.5 library, running up to twice as fast for large arrays satisfying N>=2048 and N>=3072, respectively. Compared to the NVIDIA SDK library, S-MM and W-MM achieved a speedup between 20x and 80x for the above arrays. The proposed approach can be used to enhance the performance of the CUBLAS and MKL libraries.

  10. Microseed matrix screening for optimization in protein crystallization: what have we learned?

    Science.gov (United States)

    D'Arcy, Allan; Bergfors, Terese; Cowan-Jacob, Sandra W; Marsh, May

    2014-09-01

    Protein crystals obtained in initial screens typically require optimization before they are of X-ray diffraction quality. Seeding is one such optimization method. In classical seeding experiments, the seed crystals are put into new, albeit similar, conditions. The past decade has seen the emergence of an alternative seeding strategy: microseed matrix screening (MMS). In this strategy, the seed crystals are transferred into conditions unrelated to the seed source. Examples of MMS applications from in-house projects and the literature include the generation of multiple crystal forms and different space groups, better diffracting crystals and crystallization of previously uncrystallizable targets. MMS can be implemented robotically, making it a viable option for drug-discovery programs. In conclusion, MMS is a simple, time- and cost-efficient optimization method that is applicable to many recalcitrant crystallization problems.

  11. Studies on the optimization of deformation processed metal metal matrix composites

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, Tim W. [Iowa State Univ., Ames, IA (United States)

    1994-01-04

    A methodology for the production of deformation processed metal metal matrix composites from hyper-eutectic copper-chromium alloys was developed. This methodology was derived from a basic study of the precipitation phenomena in these alloys encompassing evaluation of microstructural, electrical, and mechanical properties. The methodology developed produces material with a superior combination of electrical and mechanical properties compared to those presently available in commercial alloys. New and novel alloying procedures were investigated to extend the range of production methods available for these materials. These studies focused on the use of High Pressure Gas Atomization and the development of new containment technologies for the liquid alloy. This allowed the production of alloys with a much more refined starting microstructure and lower contamination than available by other methods. The knowledge gained in the previous studies was used to develop two completely new families of deformation processed metal metal matrix composites. These composites are based on immiscible alloys with yttrium and magnesium matrices and refractory metal reinforcement. This work extends the physical property range available in deformation processed metal metal matrix composites. Additionally, it also represents new ways to apply these metals in engineering applications.

  12. Beyond the drugs : non-pharmacological strategies to optimize procedural care in children

    NARCIS (Netherlands)

    Leroy, Piet L.; Costa, Luciane R.; Emmanouil, Dimitris; van Beukering, Alice; Franck, Linda S.

    2016-01-01

    Purpose of review Painful and/or stressful medical procedures place a substantial burden on sick children. There is good evidence that procedural comfort can be optimized by a comprehensive comfort-directed policy containing the triad of non-pharmacological strategies (NPS) in all cases, timely or

  13. Compressive sensing using optimized sensing matrix for face verification

    Science.gov (United States)

    Oey, Endra; Jeffry; Wongso, Kelvin; Tommy

    2017-12-01

    Biometrics appears as one of the solutions capable of solving the problems that occur with password-based data access; for example, passwords can be forgotten and it is hard to recall many different passwords. With biometrics, the physical characteristics of a person can be captured and used in the identification process. In this research, facial biometrics is used in the verification process to determine whether the user has the authority to access the data or not. Facial biometrics is chosen for its low-cost implementation and because it generates quite accurate results for user identification. The face verification system adopted in this research uses the Compressive Sensing (CS) technique, which aims to reduce the dimension size as well as encrypt the data in the form of a facial test image, where the image is represented by sparse signals. The encrypted data can be reconstructed using a sparse coding algorithm. Two types of sparse coding, namely Orthogonal Matching Pursuit (OMP) and Iteratively Reweighted Least Squares-ℓp (IRLS-ℓp), are compared in this face verification research. The reconstructed sparse signals are then compared, via the Euclidean norm, with the sparse signal of the user previously saved in the system to determine the validity of the facial test image. The system accuracies obtained in this research are 99% for IRLS with a face verification response time of 4.917 seconds and 96.33% for OMP with a response time of 0.4046 seconds using a non-optimized sensing matrix, and 99% for IRLS with a response time of 13.4791 seconds and 98.33% for OMP with a response time of 3.1571 seconds using an optimized sensing matrix.
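
    Orthogonal Matching Pursuit, one of the two sparse-coding algorithms compared above, can be sketched compactly: greedily select the sensing-matrix column most correlated with the residual, then re-fit the selected coefficients by least squares. The Gaussian sensing matrix and problem sizes below are illustrative assumptions; the face-verification pipeline, the sensing-matrix optimization and the IRLS-ℓp solver are not reproduced.

```python
import numpy as np

def omp(phi, y, sparsity):
    """Orthogonal Matching Pursuit: greedily pick the column of the sensing
    matrix most correlated with the residual, then re-fit by least squares."""
    residual, support = y.copy(), []
    for _ in range(sparsity):
        correlations = np.abs(phi.T @ residual)
        correlations[support] = 0.0                  # do not re-pick chosen columns
        support.append(int(np.argmax(correlations)))
        coeffs, *_ = np.linalg.lstsq(phi[:, support], y, rcond=None)
        residual = y - phi[:, support] @ coeffs
    x_hat = np.zeros(phi.shape[1])
    x_hat[support] = coeffs
    return x_hat

rng = np.random.default_rng(7)
n, m, k = 256, 80, 8                                  # signal length, measurements, sparsity
phi = rng.normal(size=(m, n)) / np.sqrt(m)            # (non-optimized) Gaussian sensing matrix
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.normal(size=k)
y = phi @ x
x_hat = omp(phi, y, k)
print("relative error:", round(np.linalg.norm(x - x_hat) / np.linalg.norm(x), 6))
```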

  14. A new approach of optimization procedure for superconducting integrated circuits

    International Nuclear Information System (INIS)

    Saitoh, K.; Soutome, Y.; Tarutani, Y.; Takagi, K.

    1999-01-01

    We have developed and tested a new circuit simulation procedure for superconducting integrated circuits which can be used to optimize circuit parameters. This method reveals a stable operation region in the circuit parameter space in connection with the global bias margin by means of a contour plot of the global bias margin versus the circuit parameters. An optimal set of parameters with margins larger than those of the initial values has been found in the stable region. (author)

  15. Matrix-product states for strongly correlated systems and quantum information processing

    International Nuclear Information System (INIS)

    Saberi, Hamed

    2008-01-01

    This thesis offers new developments in matrix-product state theory for studying the strongly correlated systems and quantum information processing through three major projects: In the first project, we perform a systematic comparison between Wilson's numerical renormalization group (NRG) and White's density-matrix renormalization group (DMRG). The NRG method for solving quantum impurity models yields a set of energy eigenstates that have the form of matrix-product states (MPS). White's DMRG for treating quantum lattice problems can likewise be reformulated in terms of MPS. Thus, the latter constitute a common algebraic structure for both approaches. We exploit this fact to compare the NRG approach for the single-impurity Anderson model to a variational matrix-product state approach (VMPS), equivalent to single-site DMRG. For the latter, we use an ''unfolded'' Wilson chain, which brings about a significant reduction in numerical costs compared to those of NRG. We show that all NRG eigenstates (kept and discarded) can be reproduced using VMPS, and compare the difference in truncation criteria, sharp vs. smooth in energy space, of the two approaches. Finally, we demonstrate that NRG results can be improved upon systematically by performing a variational optimization in the space of variational matrix-product states, using the states produced by NRG as input. In the second project we demonstrate how the matrix-product state formalism provides a flexible structure to solve the constrained optimization problem associated with the sequential generation of entangled multiqubit states under experimental restrictions. We consider a realistic scenario in which an ancillary system with a limited number of levels performs restricted sequential interactions with qubits in a row. The proposed method relies on a suitable local optimization procedure, yielding an efficient recipe for the realistic and approximate sequential generation of any entangled multiqubit state. We give

  16. Matrix-product states for strongly correlated systems and quantum information processing

    Energy Technology Data Exchange (ETDEWEB)

    Saberi, Hamed

    2008-12-12

    This thesis offers new developments in matrix-product state theory for studying the strongly correlated systems and quantum information processing through three major projects: In the first project, we perform a systematic comparison between Wilson's numerical renormalization group (NRG) and White's density-matrix renormalization group (DMRG). The NRG method for solving quantum impurity models yields a set of energy eigenstates that have the form of matrix-product states (MPS). White's DMRG for treating quantum lattice problems can likewise be reformulated in terms of MPS. Thus, the latter constitute a common algebraic structure for both approaches. We exploit this fact to compare the NRG approach for the single-impurity Anderson model to a variational matrix-product state approach (VMPS), equivalent to single-site DMRG. For the latter, we use an ''unfolded'' Wilson chain, which brings about a significant reduction in numerical costs compared to those of NRG. We show that all NRG eigenstates (kept and discarded) can be reproduced using VMPS, and compare the difference in truncation criteria, sharp vs. smooth in energy space, of the two approaches. Finally, we demonstrate that NRG results can be improved upon systematically by performing a variational optimization in the space of variational matrix-product states, using the states produced by NRG as input. In the second project we demonstrate how the matrix-product state formalism provides a flexible structure to solve the constrained optimization problem associated with the sequential generation of entangled multiqubit states under experimental restrictions. We consider a realistic scenario in which an ancillary system with a limited number of levels performs restricted sequential interactions with qubits in a row. The proposed method relies on a suitable local optimization procedure, yielding an efficient recipe for the realistic and approximate sequential generation of any

  17. Concomitant use of the matrix strategy and the mand-model procedure in teaching graphic symbol combinations.

    Science.gov (United States)

    Nigam, Ravi; Schlosser, Ralf W; Lloyd, Lyle L

    2006-09-01

    Matrix strategies employing parts of speech arranged in systematic language matrices and milieu language teaching strategies have been successfully used to teach word combining skills to children who have cognitive disabilities and some functional speech. The present study investigated the acquisition and generalized production of two-term semantic relationships in a new population using new types of symbols. Three children with cognitive disabilities and little or no functional speech were taught to combine graphic symbols. The matrix strategy and the mand-model procedure were used concomitantly as intervention procedures. A multiple probe design across sets of action-object combinations with generalization probes of untrained combinations was used to teach the production of graphic symbol combinations. Results indicated that two of the three children learned the early syntactic-semantic rule of combining action-object symbols and demonstrated generalization to untrained action-object combinations and generalization across trainers. The results and future directions for research are discussed.

  18. Numerical Optimization Design of Dynamic Quantizer via Matrix Uncertainty Approach

    Directory of Open Access Journals (Sweden)

    Kenji Sawada

    2013-01-01

    Full Text Available In networked control systems, continuous-valued signals are compressed to discrete-valued signals via quantizers and then transmitted/received through communication channels. Such quantization often degrades the control performance; a quantizer must be designed that minimizes the output difference between before and after the quantizer is inserted. In terms of the broadbandization and the robustness of the networked control systems, we consider the continuous-time quantizer design problem. In particular, this paper describes a numerical optimization method for a continuous-time dynamic quantizer considering the switching speed. Using a matrix uncertainty approach of sampled-data control, we clarify that both the temporal and spatial resolution constraints can be considered in analysis and synthesis, simultaneously. Finally, for the slow switching, we compare the proposed and the existing methods through numerical examples. From the examples, a new insight is presented for the two-step design of the existing continuous-time optimal quantizer.

  19. Optimization of sparse matrix-vector multiplication on emerging multicore platforms

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Samuel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Berkeley, CA (United States); Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Vuduc, Richard [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Shalf, John [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Yelick, Katherine [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Berkeley, CA (United States); Demmel, James [Univ. of California, Berkeley, CA (United States)

    2007-01-01

    We are witnessing a dramatic change in computer architecture due to the multicore paradigm shift, as every electronic device from cell phones to supercomputers confronts parallelism of unprecedented scale. To fully unleash the potential of these systems, the HPC community must develop multicore specific optimization methodologies for important scientific computations. In this work, we examine sparse matrix-vector multiply (SpMV) - one of the most heavily used kernels in scientific computing - across a broad spectrum of multicore designs. Our experimental platform includes the homogeneous AMD dual-core and Intel quad-core designs, the heterogeneous STI Cell, as well as the first scientific study of the highly multithreaded Sun Niagara2. We present several optimization strategies especially effective for the multicore environment, and demonstrate significant performance improvements compared to existing state-of-the-art serial and parallel SpMV implementations. Additionally, we present key insights into the architectural tradeoffs of leading multicore design strategies, in the context of demanding memory-bound numerical algorithms.

  20. NMR structural refinement of an extrahelical adenosine tridecamer d(CGCAGAATTCGCG)2 via a hybrid relaxation matrix procedure

    International Nuclear Information System (INIS)

    Nikonowicz, E.P.; Meadows, R.P.; Gorenstein, D.G.

    1990-01-01

    Until very recently interproton distances from NOESY experiments have been derived solely from the two-spin approximation method. Unfortunately, even at short mixing times, there is a significant error in many of these distances. A complete relaxation matrix approach employing a matrix eigenvalue/eigenvector solution to the Bloch equations avoids the approximation of the two-spin method. The authors calculated the structure of an extrahelical adenosine tridecamer oligodeoxyribonucleotide duplex, d(CGCAGAATTCGCG)2, by an iterative refinement approach using a hybrid relaxation matrix method combined with restrained molecular dynamics calculations. Distances from the 2D NOESY spectra have been calculated from the relaxation rate matrix, which has been evaluated from a hybrid NOESY volume matrix comprising elements from the experiment and those calculated from an initial structure. The hybrid-matrix-derived distances have then been used in a restrained molecular dynamics procedure to obtain a new structure that better approximates the NOESY spectra. The resulting partially refined structure is then used to calculate an improved theoretical NOESY volume matrix, which is once again merged with the experimental matrix until refinement is complete. Although the crystal structure of the tridecamer clearly shows the extrahelical adenosine looped out away from the duplex, the NOESY distance restrained hybrid matrix/molecular dynamics structural refinement establishes that the extrahelical adenosine stacks into the duplex
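
    The contrast between the complete relaxation matrix treatment and the two-spin approximation can be illustrated with a toy calculation: NOESY volumes follow exp(-R·tau_m), while the two-spin approximation keeps only the linear term. The rate constants, distances and leakage term below are arbitrary illustration values, not parameters fitted to the tridecamer data.

```python
import numpy as np
from scipy.linalg import expm

def relaxation_matrix(distances, sigma_ref=-0.5, r_ref=2.5, leak=1.0):
    """Toy relaxation rate matrix R: cross-relaxation rates scale as r^-6
    relative to a reference distance; the diagonal adds a leakage rate.
    All constants are illustrative, not fitted spectral densities."""
    sigma = sigma_ref * (r_ref / distances) ** 6   # off-diagonal cross-relaxation
    np.fill_diagonal(sigma, 0.0)
    rho = -sigma.sum(axis=1) + leak                # self-relaxation on the diagonal
    return np.diag(rho) + sigma

rng = np.random.default_rng(8)
n = 6
d = rng.uniform(2.0, 5.0, size=(n, n))
d = (d + d.T) / 2.0
np.fill_diagonal(d, 1.0)                           # diagonal placeholder; zeroed in relaxation_matrix

r = relaxation_matrix(d)
tau_m = 0.3                                        # mixing time in seconds
v_full = expm(-r * tau_m)                          # complete relaxation matrix NOE volumes
v_twospin = np.eye(n) - r * tau_m                  # two-spin (initial-rate) approximation
off = ~np.eye(n, dtype=bool)
print("max off-diagonal difference:", round(np.abs(v_full - v_twospin)[off].max(), 4))
```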

  1. A novel experimental design method to optimize hydrophilic matrix formulations with drug release profiles and mechanical properties.

    Science.gov (United States)

    Choi, Du Hyung; Lim, Jun Yeul; Shin, Sangmun; Choi, Won Jun; Jeong, Seong Hoon; Lee, Sangkil

    2014-10-01

    To investigate the effects of hydrophilic polymers on the matrix system, an experimental design method was developed to integrate response surface methodology and time series modeling. Moreover, the relationships among the polymers in the matrix system were studied with the evaluation of physical properties including water uptake, mass loss, diffusion, and gelling index. A mixture simplex lattice design was proposed while considering eight input control factors: polyethylene glycol 6000 (x1), polyethylene oxide (PEO) N-10 (x2), PEO 301 (x3), PEO coagulant (x4), PEO 303 (x5), hydroxypropyl methylcellulose (HPMC) 100SR (x6), HPMC 4000SR (x7), and HPMC 10(5) SR (x8). With the modeling, optimal formulations were obtained for four types of targets. The optimal formulations showed that four factors (x1, x2, x3, and x8) were significant and the other four input factors (x4, x5, x6, and x7) were not significant with respect to the drug release profiles. Moreover, the optimization results were analyzed with estimated values, target values, absolute biases, and relative biases based on observed times for the drug release rates with the four different targets. The results showed that the optimal solutions and target values had consistent patterns with small biases. Based on the physical properties of the optimal solutions, the type and ratio of the hydrophilic polymer and the relationships between polymers significantly influenced the physical properties of the system and the drug release. This experimental design method is very useful in formulating a matrix system with optimal drug release. Moreover, it can distinctly confirm the relationships between excipients and their effects on the system with extensive and intensive evaluations. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.

  2. Time-oriented experimental design method to optimize hydrophilic matrix formulations with gelation kinetics and drug release profiles.

    Science.gov (United States)

    Shin, Sangmun; Choi, Du Hyung; Truong, Nguyen Khoa Viet; Kim, Nam Ah; Chu, Kyung Rok; Jeong, Seong Hoon

    2011-04-04

    A new experimental design methodology was developed by integrating the response surface methodology and the time series modeling. The major purposes were to identify significant factors in determining swelling and release rate from matrix tablets and their relative factor levels for optimizing the experimental responses. Properties of tablet swelling and drug release were assessed with ten factors and two default factors, a hydrophilic model drug (terazosin) and magnesium stearate, and compared with target values. The selected input control factors were arranged in a mixture simplex lattice design with 21 experimental runs. The obtained optimal settings for gelation were PEO, LH-11, Syloid, and Pharmacoat with weight ratios of 215.33 (88.50%), 5.68 (2.33%), 19.27 (7.92%), and 3.04 (1.25%), respectively. The optimal settings for drug release were PEO and citric acid with weight ratios of 191.99 (78.91%) and 51.32 (21.09%), respectively. Based on the results of matrix swelling and drug release, the optimal solutions, target values, and validation experiment results over time were similar and showed consistent patterns with very small biases. The experimental design methodology could be a very promising experimental design method to obtain maximum information with limited time and resources. It could also be very useful in formulation studies by providing a systematic and reliable screening method to characterize significant factors in the sustained release matrix tablet. Copyright © 2011 Elsevier B.V. All rights reserved.

  3. Alternating optimization method based on nonnegative matrix factorizations for deep neural networks

    OpenAIRE

    Sakurai, Tetsuya; Imakura, Akira; Inoue, Yuto; Futamura, Yasunori

    2016-01-01

    The backpropagation algorithm for calculating gradients has been widely used in computation of weights for deep neural networks (DNNs). This method requires derivatives of objective functions and has some difficulties finding appropriate parameters such as learning rate. In this paper, we propose a novel approach for computing weight matrices of fully-connected DNNs by using two types of semi-nonnegative matrix factorizations (semi-NMFs). In this method, optimization processes are performed b...

  4. Implementation of IMDCT Block of an MP3 Decoder through Optimization on the DCT Matrix

    Directory of Open Access Journals (Sweden)

    M. Galabov

    2004-12-01

    Full Text Available The paper describes an attempt to create an efficient dedicated MP3-decoder, according to the MPEG-1 Layer III standard. A new method of Inverse Modified Discrete Cosine Transform by optimization on the Discrete Cosine Transform (DCT) matrix is proposed and an assembler program for a Digital Signal Processor is developed. In addition, a program to calculate the DCT using Lee's algorithm for any matrix of the size 2M is created. The experimental results have proven that the decoder is able to stream and decode MP3 in real time.

  5. A practical optimization procedure for radial BWR fuel lattice design using tabu search with a multiobjective function

    International Nuclear Information System (INIS)

    Francois, J.L.; Martin-del-Campo, C.; Francois, R.; Morales, L.B.

    2003-01-01

    An optimization procedure based on the tabu search (TS) method was developed for the design of radial enrichment and gadolinia distributions for boiling water reactor (BWR) fuel lattices. The procedure was coded in a computing system in which the optimization code uses the tabu search method to select potential solutions and the HELIOS code to evaluate them. The goal of the procedure is to search for an optimal fuel utilization, looking for a lattice with minimum average enrichment, with minimum deviation of reactivity targets and with a local power peaking factor (PPF) lower than a limit value. Time-dependent-depletion (TDD) effects were considered in the optimization process. The additive utility function method was used to convert the multiobjective optimization problem into a single objective problem. A strategy to reduce the computing time employed by the optimization was developed and is explained in this paper. An example is presented for a 10x10 fuel lattice with 10 different fuel compositions. The main contribution of this study is the development of a practical TDD optimization procedure for BWR fuel lattice design, using TS with a multiobjective function, and a strategy to economize computing time
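
    A generic tabu-search loop with an additive multiobjective utility can be sketched as below. In the actual procedure each candidate lattice is evaluated with the HELIOS code; here a hypothetical surrogate evaluate function replaces that physics calculation, and the enrichment set, objective weights and PPF limit are all assumptions made only so the search loop can be shown end to end.

```python
import numpy as np

ENRICHMENTS = [1.6, 2.4, 3.2, 3.95, 4.4]      # candidate rod enrichments (w/o), assumed
N_RODS, K_TARGET, PPF_LIMIT = 10, 1.05, 1.40

def evaluate(design):
    """Hypothetical surrogate for the lattice-physics evaluation (the real
    procedure calls HELIOS): average enrichment, a reactivity proxy, a PPF proxy."""
    avg = float(np.mean(design))
    k_inf = 0.9 + 0.04 * avg
    ppf = 1.0 + 0.12 * (max(design) - min(design))
    return avg, k_inf, ppf

def utility(design):
    """Additive utility combining the objectives (weights are assumptions)."""
    avg, k_inf, ppf = evaluate(design)
    penalty = 10.0 * max(0.0, ppf - PPF_LIMIT)
    return 1.0 * avg + 5.0 * abs(k_inf - K_TARGET) + penalty   # to be minimized

def tabu_search(iters=300, tenure=15, seed=10):
    rng = np.random.default_rng(seed)
    current = [float(rng.choice(ENRICHMENTS)) for _ in range(N_RODS)]
    best, best_u = list(current), utility(current)
    tabu = {}                                    # (rod, enrichment) -> iteration when allowed again
    for it in range(iters):
        candidates = []
        for rod in range(N_RODS):
            for e in ENRICHMENTS:
                if e != current[rod] and tabu.get((rod, e), 0) <= it:
                    cand = list(current)
                    cand[rod] = e
                    candidates.append((utility(cand), rod, cand))
        u, rod, cand = min(candidates, key=lambda t: t[0])   # best non-tabu neighbour
        tabu[(rod, current[rod])] = it + tenure              # forbid reverting this rod for a while
        current = cand
        if u < best_u:
            best, best_u = list(cand), u
    return best, best_u

best, best_u = tabu_search()
print("best lattice:", best, "utility:", round(best_u, 4))
```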

  6. Development of multidisciplinary design optimization procedures for smart composite wings and turbomachinery blades

    Science.gov (United States)

    Jha, Ratneshwar

    Multidisciplinary design optimization (MDO) procedures have been developed for smart composite wings and turbomachinery blades. The analysis and optimization methods used are computationally efficient and sufficiently rigorous. Therefore, the developed MDO procedures are well suited for actual design applications. The optimization procedure for the conceptual design of composite aircraft wings with surface bonded piezoelectric actuators involves the coupling of structural mechanics, aeroelasticity, aerodynamics and controls. The load carrying member of the wing is represented as a single-celled composite box beam. Each wall of the box beam is analyzed as a composite laminate using a refined higher-order displacement field to account for the variations in transverse shear stresses through the thickness. Therefore, the model is applicable for the analysis of composite wings of arbitrary thickness. Detailed structural modeling issues associated with piezoelectric actuation of composite structures are considered. The governing equations of motion are solved using the finite element method to analyze practical wing geometries. Three-dimensional aerodynamic computations are performed using a panel code based on the constant-pressure lifting surface method to obtain steady and unsteady forces. The Laplace domain method of aeroelastic analysis produces root-loci of the system which gives an insight into the physical phenomena leading to flutter/divergence and can be efficiently integrated within an optimization procedure. The significance of the refined higher-order displacement field on the aeroelastic stability of composite wings has been established. The effect of composite ply orientations on flutter and divergence speeds has been studied. The Kreisselmeier-Steinhauser (K-S) function approach is used to efficiently integrate the objective functions and constraints into a single envelope function. The resulting unconstrained optimization problem is solved using the

  7. Design Procedure for High-Speed PM Motors Aided by Optimization Algorithms

    Directory of Open Access Journals (Sweden)

    Francesco Cupertino

    2018-02-01

    Full Text Available This paper considers the electromagnetic and structural co-design of surface permanent magnet synchronous machines for high-speed applications, with the aid of a Pareto optimization procedure. The aim of this work is to present a design procedure for the aforementioned machines that relies on the combined use of optimization algorithms and finite element analysis. The proposed approach allows easy analysis of the results and a lowering of the computational burden. The proposed design method is presented through a practical example starting from the specifications of an aeronautical actuator. The design procedure is based on static finite element simulations for the electromagnetic analysis and on analytical formulas for the structural design. The final results are validated through detailed transient finite element analysis to verify both the electromagnetic and structural performance. The step-by-step presentation of the proposed design methodology allows the reader to easily adapt it to different specifications. Finally, a comparison between a distributed-winding (24 slots) and a concentrated-winding (6 slots) machine is presented, demonstrating the advantages of the former winding arrangement for high-speed applications.

  8. ℋ-matrix techniques for approximating large covariance matrices and estimating its parameters

    KAUST Repository

    Litvinenko, Alexander; Genton, Marc G.; Sun, Ying; Keyes, David E.

    2016-01-01

    In this work the task is to use the available measurements to estimate unknown hyper-parameters (variance, smoothness parameter and covariance length) of the covariance function. We do it by maximizing the joint log-likelihood function. This is a non-convex and non-linear problem. To overcome cubic complexity in linear algebra, we approximate the discretised covariance function in the hierarchical (ℋ-) matrix format. The ℋ-matrix format has a log-linear computational cost and storage O(knlogn), where rank k is a small integer. On each iteration step of the optimization procedure the covariance matrix itself, its determinant and its Cholesky decomposition are recomputed within ℋ-matrix format. (© 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)
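
    A dense-matrix sketch of the estimation step: the negative Gaussian log-likelihood over (variance, correlation length) for an exponential covariance is evaluated with a Cholesky factorization and minimized with SciPy. The point of the paper is to perform exactly this linear algebra in the ℋ-matrix format at log-linear cost; the O(n³) dense factorization below is only a stand-in, and the covariance family, nugget and optimizer are assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import cdist

def neg_log_likelihood(log_params, locations, z):
    """Gaussian negative log-likelihood with an exponential covariance
    C(h) = variance * exp(-h / length); small nugget fixed for stability."""
    variance, length = np.exp(log_params)
    h = cdist(locations, locations)
    c = variance * np.exp(-h / length) + 1e-8 * np.eye(len(z))
    chol = np.linalg.cholesky(c)            # O(n^3) dense; H-matrices make this log-linear
    alpha = np.linalg.solve(chol.T, np.linalg.solve(chol, z))
    log_det = 2.0 * np.log(np.diag(chol)).sum()
    return 0.5 * (z @ alpha + log_det + len(z) * np.log(2 * np.pi))

rng = np.random.default_rng(11)
locations = rng.uniform(0, 1, size=(300, 2))
h = cdist(locations, locations)
true_c = 2.0 * np.exp(-h / 0.2)                               # variance 2.0, length 0.2
z = np.linalg.cholesky(true_c + 1e-8 * np.eye(300)) @ rng.normal(size=300)

res = minimize(neg_log_likelihood, x0=np.log([1.0, 0.1]),
               args=(locations, z), method="Nelder-Mead")
print("estimated (variance, length):", np.exp(res.x).round(3))
```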

  9. ℋ-matrix techniques for approximating large covariance matrices and estimating its parameters

    KAUST Repository

    Litvinenko, Alexander

    2016-10-25

    In this work the task is to use the available measurements to estimate unknown hyper-parameters (variance, smoothness parameter and covariance length) of the covariance function. We do it by maximizing the joint log-likelihood function. This is a non-convex and non-linear problem. To overcome cubic complexity in linear algebra, we approximate the discretised covariance function in the hierarchical (ℋ-) matrix format. The ℋ-matrix format has a log-linear computational cost and storage O(knlogn), where rank k is a small integer. On each iteration step of the optimization procedure the covariance matrix itself, its determinant and its Cholesky decomposition are recomputed within ℋ-matrix format. (© 2016 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim)

  10. Optimized procedure for calibration and verification multileaf collimator from Elekta Synergy accelerator

    International Nuclear Information System (INIS)

    Castel Millan, A.; Perellezo Mazon, A.; Fernandez Ibiza, J.; Arnalte Olloquequi, M.; Armengol Martinez, S.; Rodriguez Rey, A.; Guedea Edo, F.

    2011-01-01

    The objective of this work is to design an optimized procedure for the calibration and verification of a multileaf collimator that allows the EPID and the image plate to be used in a complementary way, with different processing systems. With this procedure we have two equivalent alternatives, since the same parameters are obtained for the calibration of the multileaf collimator of the Elekta Synergy accelerator.

  11. Optimization of Sparse Matrix-Vector Multiplication on Emerging Multicore Platforms

    Energy Technology Data Exchange (ETDEWEB)

    Williams, Samuel; Oliker, Leonid; Vuduc, Richard; Shalf, John; Yelick, Katherine; Demmel, James

    2008-10-16

    We are witnessing a dramatic change in computer architecture due to the multicore paradigm shift, as every electronic device from cell phones to supercomputers confronts parallelism of unprecedented scale. To fully unleash the potential of these systems, the HPC community must develop multicore specific-optimization methodologies for important scientific computations. In this work, we examine sparse matrix-vector multiply (SpMV) - one of the most heavily used kernels in scientific computing - across a broad spectrum of multicore designs. Our experimental platform includes the homogeneous AMD quad-core, AMD dual-core, and Intel quad-core designs, the heterogeneous STI Cell, as well as one of the first scientific studies of the highly multithreaded Sun Victoria Falls (a Niagara2 SMP). We present several optimization strategies especially effective for the multicore environment, and demonstrate significant performance improvements compared to existing state-of-the-art serial and parallel SpMV implementations. Additionally, we present key insights into the architectural trade-offs of leading multicore design strategies, in the context of demanding memory-bound numerical algorithms.

  12. Patience of matrix games

    DEFF Research Database (Denmark)

    Hansen, Kristoffer Arnsfelt; Ibsen-Jensen, Rasmus; Podolskii, Vladimir V.

    2013-01-01

    For matrix games we study how small nonzero probability must be used in optimal strategies. We show that for image win–lose–draw games (i.e. image matrix games) nonzero probabilities smaller than image are never needed. We also construct an explicit image win–lose game such that the unique optimal...
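
    For concrete experiments with small games, the optimal mixed strategy (and hence its smallest nonzero probability) can be obtained by linear programming, as in the sketch below. This is only a numerical inspection tool; it does not reproduce the paper's combinatorial constructions or its bounds.

```python
import numpy as np
from scipy.optimize import linprog

def optimal_row_strategy(a):
    """Solve the zero-sum matrix game for the row player: maximize the game
    value v subject to x^T A >= v for every column, x a probability vector."""
    m, n = a.shape
    c = np.concatenate([np.zeros(m), [-1.0]])            # minimize -v
    a_ub = np.hstack([-a.T, np.ones((n, 1))])            # v - (A^T x)_j <= 0 for all columns j
    b_ub = np.zeros(n)
    a_eq = np.concatenate([np.ones(m), [0.0]]).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]
    res = linprog(c, A_ub=a_ub, b_ub=b_ub, A_eq=a_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[m]

# A small win-lose-draw game (payoffs in {-1, 0, 1}), chosen arbitrarily.
a = np.array([[ 0,  1, -1],
              [-1,  0,  1],
              [ 1, -1,  0]])          # rock-paper-scissors
x, v = optimal_row_strategy(a)
nonzero = x[x > 1e-9]
print("value:", round(v, 6), " smallest nonzero probability:", round(nonzero.min(), 6))
```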

  13. Formulation development and optimization of sustained release matrix tablet of Itopride HCl by response surface methodology and its evaluation of release kinetics.

    Science.gov (United States)

    Bose, Anirbandeep; Wong, Tin Wui; Singh, Navjot

    2013-04-01

    The objective of the present investigation was to develop and formulate sustained release (SR) matrix tablets of Itopride HCl using different polymer combinations and fillers, to optimize them by the Central Composite Design response surface methodology for different drug release variables, and to evaluate the drug release pattern of the optimized product. Sustained release matrix tablets of various combinations were prepared with the cellulose-based polymer hydroxypropyl methylcellulose (HPMC) and polyvinylpyrrolidone (PVP), with lactose as filler. Study of pre-compression and post-compression parameters facilitated the screening of a formulation with the best characteristics, which then underwent an optimization study by response surface methodology (Central Composite Design). The optimized tablet was further subjected to scanning electron microscopy to reveal its release pattern. The in vitro study revealed that combining HPMC K100M (24.65 mg) with PVP (20 mg) and using lactose as filler sustained the action for more than 12 h. The developed sustained release matrix tablet of improved efficacy can perform therapeutically better than a conventional tablet.
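
    Release kinetics of matrix tablets are commonly evaluated by fitting the Korsmeyer-Peppas power law Mt/M∞ = k·t^n to the cumulative release data, as sketched below. The data points are invented for illustration and are not the Itopride results; the exponent thresholds shown apply to cylindrical tablets.

```python
import numpy as np
from scipy.optimize import curve_fit

def korsmeyer_peppas(t, k, n):
    """Power-law release model: fraction released = k * t^n (valid up to ~60%)."""
    return k * np.power(t, n)

# Hypothetical cumulative-release data (hours, fraction released).
t = np.array([0.5, 1, 2, 3, 4, 6, 8])
frac = np.array([0.08, 0.12, 0.19, 0.25, 0.30, 0.39, 0.47])

(k, n), _ = curve_fit(korsmeyer_peppas, t, frac, p0=[0.1, 0.5])
mechanism = ("Fickian diffusion" if n <= 0.45 else
             "anomalous (non-Fickian) transport" if n < 0.89 else
             "Case-II transport")
print(f"k = {k:.3f}, n = {n:.2f} -> {mechanism} (cylindrical-tablet thresholds)")
```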

  14. Optimization of wet digestion procedure of blood and tissue for selenium determination by means of 75Se tracer

    International Nuclear Information System (INIS)

    Holynska, B.; Lipinska, K.

    1977-01-01

    Selenium-75 tracer has been used for optimization of the analytical procedure for selenium determination in blood and tissue. A wet digestion procedure and the reduction of selenium to its elemental form with tellurium as coprecipitant have been tested. The recovery of selenium obtained with the optimized analytical procedure amounts to 95% and the precision is 4.2%. (author)

  15. Optimization of Multiresonant Wireless Power Transfer Network Based on Generalized Coupled Matrix

    Directory of Open Access Journals (Sweden)

    Qiang Zhao

    2017-01-01

    Full Text Available Magnetic coupling resonant wireless power transfer network (MCRWPTN) systems can realize real-time, high-efficiency wireless power transfer for some electrical equipment within a certain spatial scale, which resolves the contradiction between power transfer efficiency and power transfer distance in wireless power transfer. A fully coupled resonant energy transfer model for multirelay coils and ports is established. A dynamic adaptive impedance matching control, based on the full coupling matrix and a particle swarm optimization algorithm based on annealing, is developed for the MCRWPTN. Furthermore, as an example, a network with twenty nodes is analyzed, and the transmission coefficient with the highest power transfer efficiency is found using the optimization algorithm, while the coupling constraints are considered simultaneously. Finally, the effectiveness of the proposed method is proved by the simulation results.
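    The optimizer referred to above belongs to the particle swarm family; the sketch below is a generic particle swarm minimization loop on a toy objective (it omits the annealing modification and the coupled-network model, and all parameter names are illustrative).

      import numpy as np

      def pso_minimize(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
          """Generic particle swarm minimization of f: R^dim -> R."""
          rng = np.random.default_rng(0)
          lo, hi = bounds
          x = rng.uniform(lo, hi, (n_particles, dim))           # particle positions
          v = np.zeros((n_particles, dim))                      # particle velocities
          pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
          gbest = pbest[np.argmin(pbest_val)].copy()            # best position seen so far
          for _ in range(iters):
              r1, r2 = rng.random((2, n_particles, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
              x = np.clip(x + v, lo, hi)
              vals = np.array([f(p) for p in x])
              improved = vals < pbest_val
              pbest[improved], pbest_val[improved] = x[improved], vals[improved]
              gbest = pbest[np.argmin(pbest_val)].copy()
          return gbest, f(gbest)

      # Toy stand-in for an efficiency objective: a simple quadratic bowl.
      best, val = pso_minimize(lambda p: float(np.sum((p - 1.2) ** 2)), dim=4)
      print(np.round(best, 3), round(val, 6))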

  16. Optimal reduction of flexible dynamic system

    International Nuclear Information System (INIS)

    Jankovic, J.

    1994-01-01

    Dynamic system reduction is a basic procedure in various problems of active control synthesis for flexible structures. In this paper a direct method for system reduction by explicit extraction of the modes included in the reduced model form is presented. A criterion for optimal discrete approximation of the system in the synthesis of the reduced dynamic model is also presented. The proposed method of system decomposition is discussed in relation to the Schur method of solving the matrix algebraic Riccati equation as a condition for system reduction. Using the presented method, the procedure of flexible system reduction is illustrated with a corresponding example. The procedure is powerful in problems of active control synthesis of flexible system vibrations.

  17. Matrix Completion Optimization for Localization in Wireless Sensor Networks for Intelligent IoT

    Directory of Open Access Journals (Sweden)

    Thu L. N. Nguyen

    2016-05-01

    Full Text Available Localization in wireless sensor networks (WSNs) is one of the primary functions of the intelligent Internet of Things (IoT) that offers automatically discoverable services, while the localization accuracy is a key issue to evaluate the quality of those services. In this paper, we develop a framework to solve the Euclidean distance matrix completion problem, which is an important technical problem for distance-based localization in WSNs. The sensor network localization problem is described as a low-rank dimensional Euclidean distance completion problem with known nodes. The task is to find the sensor locations through recovery of missing entries of a squared distance matrix when the dimension of the data is small compared to the number of data points. We solve a relaxation optimization problem using a modification of Newton’s method, where the cost function depends on the squared distance matrix. The solution obtained in our scheme achieves a lower complexity and can perform better if we use it as an initial guess for an iterative local search of another, higher-precision localization scheme. Simulation results show the effectiveness of our approach.
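    As a point of reference for the distance-matrix formulation, once a squared Euclidean distance matrix is complete, node coordinates can be recovered (up to a rigid motion) by classical multidimensional scaling; the sketch below shows only that final step on a toy matrix and is not the Newton-type completion scheme of the record.

      import numpy as np

      def coords_from_squared_edm(D2, dim):
          """Recover point coordinates from a complete squared Euclidean distance matrix."""
          n = D2.shape[0]
          J = np.eye(n) - np.ones((n, n)) / n        # centering matrix
          B = -0.5 * J @ D2 @ J                      # Gram matrix of the centered points
          w, V = np.linalg.eigh(B)
          idx = np.argsort(w)[::-1][:dim]            # keep the 'dim' largest eigenvalues
          return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

      # Toy check: random 2-D sensor positions, rebuilt from their pairwise squared distances.
      rng = np.random.default_rng(1)
      P = rng.random((6, 2))
      D2 = np.sum((P[:, None, :] - P[None, :, :]) ** 2, axis=-1)
      X = coords_from_squared_edm(D2, dim=2)
      print(np.allclose(np.sum((X[:, None] - X[None]) ** 2, axis=-1), D2, atol=1e-8))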

  18. Bandwidth Optimization of Normal Equation Matrix in Bundle Block Adjustment in Multi-baseline Rotational Photography

    Directory of Open Access Journals (Sweden)

    WANG Xiang

    2016-02-01

    Full Text Available A new bandwidth optimization method for the normal equation matrix in bundle block adjustment in multi-baseline rotational close range photography, based on image index re-sorting, is proposed. The equivalent exposure station of each image is calculated from its object space coverage and its relationship with other adjacent images. Then, according to the coordinate relations between equivalent exposure stations, new logical indices of all images are computed, from which the optimized bandwidth value can be obtained. Experimental results show that the bandwidth determined by the proposed method is significantly smaller than its original value; thus the operational efficiency, as well as the memory consumption, of multi-baseline rotational close range photography in real-data applications is improved to a certain extent.
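    The effect of such a reordering can be illustrated with a standard permutation such as reverse Cuthill-McKee, which is not the image-index re-sorting of the record but shows, on a toy sparse symmetric matrix, how a permutation shrinks the bandwidth of a normal-equation-like matrix.

      import numpy as np
      from scipy.sparse import eye, random as sparse_random
      from scipy.sparse.csgraph import reverse_cuthill_mckee

      def bandwidth(A):
          """Half-bandwidth of a sparse matrix: the largest |row - col| over its nonzeros."""
          r, c = A.nonzero()
          return int(np.max(np.abs(r - c)))

      # Toy symmetric sparse stand-in for a normal equation matrix.
      M = sparse_random(200, 200, density=0.02, random_state=0)
      A = ((M + M.T) + eye(200)).tocsr()

      perm = reverse_cuthill_mckee(A, symmetric_mode=True)
      A_perm = A[perm][:, perm]
      print("bandwidth before:", bandwidth(A), "after:", bandwidth(A_perm))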

  19. Robust Structured Control Design via LMI Optimization

    DEFF Research Database (Denmark)

    Adegas, Fabiano Daher; Stoustrup, Jakob

    2011-01-01

    This paper presents a new procedure for discrete-time robust structured control design. Parameter-dependent nonconvex conditions for stabilizable and induced L2-norm performance controllers are solved by an iterative linear matrix inequalities (LMI) optimization. A wide class of controller structures, including decentralized controllers of any order, fixed-order dynamic output feedback, and static output feedback, can be designed robust to polytopic uncertainties. Stability is proven by a parameter-dependent Lyapunov function. Numerical examples on robust stability margins show that the proposed procedure can...

  20. Optimized data evaluation for k0-based NAA

    International Nuclear Information System (INIS)

    Van Sluijs, R.; Bossus, D.A.W.

    1999-01-01

    k0-NAA allows the simultaneous analysis of up to 67 elements. The k0 method is based on calculations using a special library instead of measuring standards. For an efficient use of the method, the calculations and resulting raw data require optimized evaluation procedures. In this paper two efficient procedures for nuclide identification and gamma interference correction are outlined. For a fast computation of the source-detector efficiency and coincidence correction factors the matrix interpolation technique is introduced. (author)

  1. A New Agile Radiating System Called Electromagnetic Band Gap Matrix Antenna

    Directory of Open Access Journals (Sweden)

    Hussein Abou Taam

    2014-01-01

    Full Text Available Civil and military applications are increasingly in need of agile antenna devices that respond to wireless telecommunications, radar, and electronic warfare requirements. The objective of this paper is to design a new agile antenna system called the electromagnetic band gap (EBG) matrix. The working principle of this antenna is based on the radiating aperture theory and constitutes the subject of an accepted CNRS patent. In order to highlight the interest and the originality of this antenna, we present a comparison between it and a classical patch array, only for the one-dimensional (1D) configuration, by using a rigorous full wave simulation (CST Microwave software). In addition, the EBG matrix antenna can be controlled by specific synthesis algorithms. These algorithms use, inside their optimization loop, an analysis procedure to evaluate the radiation pattern. The analysis procedure is described and validated at the end of this paper.

  2. The human error rate assessment and optimizing system HEROS - a new procedure for evaluating and optimizing the man-machine interface in PSA

    International Nuclear Information System (INIS)

    Richei, A.; Hauptmanns, U.; Unger, H.

    2001-01-01

    A new procedure allowing the probabilistic evaluation and optimization of the man-machine system is presented. This procedure and the resulting expert system HEROS, which is an acronym for Human Error Rate Assessment and Optimizing System, are based on the fuzzy set theory. Most of the well-known procedures employed for the probabilistic evaluation of human factors involve the use of vague linguistic statements on performance shaping factors to select and to modify basic human error probabilities from the associated databases. This implies a large portion of subjectivity. Vague statements are expressed here in terms of fuzzy numbers or intervals which allow mathematical operations to be performed on them. A model of the man-machine system is the basis of the procedure. A fuzzy rule-based expert system was derived from ergonomic and psychological studies. Hence, it does not rely on a database, whose transferability to situations different from its origin is questionable. In this way, subjective elements are eliminated to a large extent. HEROS facilitates the importance analysis for the evaluation of human factors, which is necessary for optimizing the man-machine system. HEROS is applied to the analysis of a simple diagnosis task of the operating personnel in a nuclear power plant.

  3. TLM modeling and system identification of optimized antenna structures

    Directory of Open Access Journals (Sweden)

    N. Fichtner

    2008-05-01

    Full Text Available The transmission line matrix (TLM) method in conjunction with the genetic algorithm (GA) is presented for the bandwidth optimization of a low profile patch antenna. The optimization routine is supplemented by a system identification (SI) procedure. By means of the SI, the model parameters of the structure are estimated and used to reduce the total TLM simulation time. The SI utilizes a new stability criterion of the physical poles for the parameter extraction.

  4. Optimization of exposure procedures for sub-quarter-micron CMOS applications

    Science.gov (United States)

    Hotta, Shoji; Onozuka, Toshihiko; Fukumoto, Keiko; Shirai, Seiichiro; Okazaki, Shinji

    1998-06-01

    We investigated various exposure procedures to minimize the Critical Dimension (CD) variation for the patterning of sub- quarter micron gates. To examine dependence of the CD variation on the pattern pitch and defocus conditions, the light intensity profiles of four different mask structures: (1) a binary mask with clear field, (2) a binary mask with dark field, (3) a phase-edge type phase-shifting mask (a phase-edge PSM) with clear field, and (4) a halftone phase- shifting mask (a halftone PSM) were compared, where exposure wavelength was 248 nm and numerical aperture (NA) of KrF stepper was 0.55. For 200-nm gate patterns, dependence of the CD variation on the pattern pitch and defocus conditions was minimized by a phase-edge PSM with clear field. By optimizing the illumination condition for a phase-edge PSM exposure, we obtained the CD variation of 10 nm at the minimum gate pitch of 0.8 micrometer and the defocus condition of plus or minus 0.4 micrometer. Applying the optimized exposure procedure to the device fabrication process, we obtained the total CD variation of plus or minus 27 nm.

  5. Variational optimization algorithms for uniform matrix product states

    Science.gov (United States)

    Zauner-Stauber, V.; Vanderstraeten, L.; Fishman, M. T.; Verstraete, F.; Haegeman, J.

    2018-01-01

    We combine the density matrix renormalization group (DMRG) with matrix product state tangent space concepts to construct a variational algorithm for finding ground states of one-dimensional quantum lattices in the thermodynamic limit. A careful comparison of this variational uniform matrix product state algorithm (VUMPS) with infinite density matrix renormalization group (IDMRG) and with infinite time evolving block decimation (ITEBD) reveals substantial gains in convergence speed and precision. We also demonstrate that VUMPS works very efficiently for Hamiltonians with long-range interactions and also for the simulation of two-dimensional models on infinite cylinders. The new algorithm can be conveniently implemented as an extension of an already existing DMRG implementation.

  6. Procedures for finding optimal layouts of vehicle components with respect to durability

    Energy Technology Data Exchange (ETDEWEB)

    Eschenauer, H.A.; Idelberger, H. [Univ. of Siegen (Germany); Bieker, G.; Rottler, A. [Bombardier, Siegen-Netphen (Germany); Weinert, M. [Ford Motor Comp., Cologne (Germany)

    2007-07-01

    When designing complete systems or system components, it is of vital importance for the manufacturers to optimally fulfill the continuously increasing demands pertaining to safety, durability, reduction of energy consumption, noise reduction, improvement of comfort, accuracy, etc. This applies to all types of traffic and transportation systems like rail vehicles, automobiles, airplanes and ships. By combining structural analysis and simulation methods with optimization algorithms, required specifications can be met faster and more reliably, and hence the production development cycles can be substantially reduced. This paper gives an overview of the results of a method that combines, on the one hand, a damage approximation that is as precise as possible and, on the other hand, a load-time history with few different load cycles, so that a nonlinear calculation can be performed in the shortest possible time. Simulations with rigidly and elastically modeled components like bogie frames or carbodies show that, depending on the type of modeling, substantial differences may occur with respect to the dynamic behavior and the interaction quantities between the bodies. This aspect has to be taken into consideration for a quantitatively sufficient fatigue strength and durability calculation. Mathematical optimization procedures are in general an efficient tool to guarantee the optimal fulfillment of all required design objectives and constraints in all stages of the design process. Some of the procedures are illustrated with two examples (bogie frame, carbody). (orig.)

  7. Profiling the Fatty Acids Content of Ornamental Camellia Seeds Cultivated in Galicia by an Optimized Matrix Solid-Phase Dispersion Extraction

    Directory of Open Access Journals (Sweden)

    Carmen Garcia-Jares

    2017-10-01

    Full Text Available Camellia (a genus of flowering plants of fam. Theaceae) is one of the main crops in Asia, where tea and oil from leaves and seeds have been utilized for thousands of years. This plant is excellently adapted to the climate and soil of Galicia (northwestern Spain) and northern Portugal, where it is grown not only as an ornamental plant, but to be evaluated as a source of bioactive compounds. In this work, the main fatty acids were extracted from Camellia seeds of four varieties of Camellia: sasanqua, reticulata, japonica and sinensis, by means of matrix-solid phase dispersion (MSPD), and analyzed by gas chromatography (GC) with MS detection of the corresponding methyl esters. MSPD constitutes an efficient and greener alternative to conventional extraction techniques, moreover if it is combined with the use of green solvents such as limonene. The optimization of the MSPD extraction procedure has been conducted using a multivariate approach based on strategies of experimental design, which enabled the simultaneous evaluation of the factors influencing the extraction efficiency as well as interactions between factors. The optimized method was applied to characterize the fatty acids profiles of four Camellia varieties seeds, allowing us to compare their fatty acid composition.

  8. Profiling the Fatty Acids Content of Ornamental Camellia Seeds Cultivated in Galicia by an Optimized Matrix Solid-Phase Dispersion Extraction

    Science.gov (United States)

    Garcia-Jares, Carmen; Sanchez-Nande, Marta; Lamas, Juan Pablo; Lores, Marta

    2017-01-01

    Camellia (genus of flowering plants of fam. Theaceae) is one of the main crops in Asia, where tea and oil from leaves and seeds have been utilized for thousands of years. This plant is excellently adapted to the climate and soil of Galicia (northwestern Spain) and northern Portugal where it is grown not only as an ornamental plant, but to be evaluated as a source of bioactive compounds. In this work, the main fatty acids were extracted from Camellia seeds of four varieties of Camellia: sasanqua, reticulata, japonica and sinensis, by means of matrix-solid phase dispersion (MSPD), and analyzed by gas chromatography (GC) with MS detection of the corresponding methyl esters. MSPD constitutes an efficient and greener alternative to conventional extraction techniques, moreover if it is combined with the use of green solvents such as limonene. The optimization of the MSPD extraction procedure has been conducted using a multivariate approach based on strategies of experimental design, which enabled the simultaneous evaluation of the factors influencing the extraction efficiency as well as interactions between factors. The optimized method was applied to characterize the fatty acids profiles of four Camellia varieties seeds, allowing us to compare their fatty acid composition. PMID:29039745

  9. Radiological protection in coronary procedures. Is it sufficient with the practices optimizations?

    International Nuclear Information System (INIS)

    Cotelo, Elena D.

    2001-01-01

    The number of percutaneous transluminal coronary procedures (PTCA) per million inhabitants in Uruguay is similar to that of developed countries. Between 1995 and 1999, PTCA procedures increased by 86%. Although the 'Fondo Nacional de Recursos' finances the interventional cardiology (IC) procedures of 90% of the inhabitants, the number of IC procedures performed on patients of the public hospital is lower than that on patients from private hospitals. All 6 IC facilities are in the capital of the country. The number of IC procedures increases as the distance between the hospital and the capital decreases. This study also shows that not a single facility performs quality control tests, that 50% of the X-ray equipment is more than 10 years old, and that the IC staff does not have radiation protection education. We conclude that it is necessary to establish Quality Assurance Programmes as soon as possible. Although the objective of this work was to obtain information to optimize the IC procedures, the results show that it is necessary to include the Principle of Justification of the procedures in radiation protection education for the IC staff. (author)

  10. Optimal decision procedures for satisfiability in fragments of alternating-time temporal logics

    DEFF Research Database (Denmark)

    Goranko, Valentin; Vester, Steen

    2014-01-01

    We consider several natural fragments of the alternating-time temporal logics ATL* and ATL with restrictions on the nesting between temporal operators and strategic quantifiers. We develop optimal decision procedures for satisfiability in these fragments, showing that they have much lower complexi...

  11. Matrix completion by deep matrix factorization.

    Science.gov (United States)

    Fan, Jicong; Cheng, Jieyu

    2018-02-01

    Conventional methods of matrix completion are linear methods that are not effective in handling data of nonlinear structures. Recently a few researchers attempted to incorporate nonlinear techniques into matrix completion, but considerable limitations still exist. In this paper, a novel method called deep matrix factorization (DMF) is proposed for nonlinear matrix completion. Different from conventional matrix completion methods that are based on linear latent variable models, DMF is on the basis of a nonlinear latent variable model. DMF is formulated as a deep-structure neural network, in which the inputs are the low-dimensional unknown latent variables and the outputs are the partially observed variables. In DMF, the inputs and the parameters of the multilayer neural network are simultaneously optimized to minimize the reconstruction errors for the observed entries. Then the missing entries can be readily recovered by propagating the latent variables to the output layer. DMF is compared with state-of-the-art methods of linear and nonlinear matrix completion in the tasks of toy matrix completion, image inpainting and collaborative filtering. The experimental results verify that DMF is able to provide higher matrix completion accuracy than existing methods do and DMF is applicable to large matrices. Copyright © 2017 Elsevier Ltd. All rights reserved.
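    A minimal sketch of the idea described above, assuming PyTorch is available: the latent inputs and the network weights are treated as free parameters and fitted only on the observed entries. The sizes, architecture, and synthetic data are placeholders, not the authors' implementation.

      import torch

      torch.manual_seed(0)
      n, m, d, hidden = 100, 40, 5, 32

      # Synthetic nonlinear low-rank data with roughly 40% of entries observed.
      Z_true = torch.randn(n, d)
      X = torch.tanh(Z_true @ torch.randn(d, m))
      mask = (torch.rand(n, m) < 0.4).float()

      # Latent variables and decoder network are optimized jointly.
      Z = torch.nn.Parameter(0.1 * torch.randn(n, d))
      decoder = torch.nn.Sequential(
          torch.nn.Linear(d, hidden), torch.nn.Tanh(), torch.nn.Linear(hidden, m))
      opt = torch.optim.Adam([Z, *decoder.parameters()], lr=1e-2)

      for step in range(2000):
          opt.zero_grad()
          loss = (((decoder(Z) - X) ** 2) * mask).sum() / mask.sum()   # observed entries only
          loss.backward()
          opt.step()

      # Missing entries are read off the decoder output.
      with torch.no_grad():
          err = (((decoder(Z) - X) ** 2) * (1 - mask)).sum() / (1 - mask).sum()
      print("mean squared error on the missing entries:", float(err))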

  12. Evaluation and optimization of the man-machine interface in nuclear power plants using the HEROS procedure in PSA

    International Nuclear Information System (INIS)

    Richei, A.; Koch, M.K.; Unger, H.; Hauptmanns, U.

    1998-01-01

    For the probabilistic evaluation and optimization of the man-machine system a new procedure is developed. This procedure and the resulting expert system, which is based on the fuzzy set theory, are called HEROS, an acronym for Human Error Rate Assessment and Optimizing System. The existing procedures for the probabilistic evaluation of human errors have several disadvantages. In all of these procedures, including the best-known ones, fuzzy verbal expressions are often used for the evaluation of human factors. This use of verbal expressions for describing performance-shaping factors, enabling the evaluation of human factors, is the basic approach of HEROS. A model of a modern man-machine system is the basis of the procedure. Results from ergonomic studies are used to establish a rule-based expert system. HEROS simplifies the importance analysis for the evaluation of human factors, which is necessary for the optimization of the man-machine system. It can be used in all areas of probabilistic safety assessment. The application of HEROS in the scope of accident management procedures, and the comparison with the results of other procedures, is shown as an example of the usefulness and substantially more extensive applicability of this new procedure. (author)

  13. A new experimental design method to optimize formulations focusing on a lubricant for hydrophilic matrix tablets.

    Science.gov (United States)

    Choi, Du Hyung; Shin, Sangmun; Khoa Viet Truong, Nguyen; Jeong, Seong Hoon

    2012-09-01

    A robust experimental design method was developed with the well-established response surface methodology and time series modeling to facilitate the formulation development process with magnesium stearate incorporated into hydrophilic matrix tablets. Two directional analyses and a time-oriented model were utilized to optimize the experimental responses. Evaluations of tablet gelation and drug release were conducted with two factors x₁ and x₂: one was a formulation factor (the amount of magnesium stearate) and the other was a processing factor (mixing time), respectively. Moreover, different batch sizes (100 and 500 tablet batches) were also evaluated to investigate an effect of batch size. The selected input control factors were arranged in a mixture simplex lattice design with 13 experimental runs. The obtained optimal settings of magnesium stearate for gelation were 0.46 g, 2.76 min (mixing time) for a 100 tablet batch and 1.54 g, 6.51 min for a 500 tablet batch. The optimal settings for drug release were 0.33 g, 7.99 min for a 100 tablet batch and 1.54 g, 6.51 min for a 500 tablet batch. The exact ratio and mixing time of magnesium stearate could be formulated according to the resulting hydrophilic matrix tablet properties. The newly designed experimental method provided very useful information for characterizing significant factors and hence to obtain optimum formulations allowing for a systematic and reliable experimental design method.

  14. Canonical polyadic decomposition of third-order semi-nonnegative semi-symmetric tensors using LU and QR matrix factorizations

    Science.gov (United States)

    Wang, Lu; Albera, Laurent; Kachenoura, Amar; Shu, Huazhong; Senhadji, Lotfi

    2014-12-01

    Semi-symmetric three-way arrays are essential tools in blind source separation (BSS), particularly in independent component analysis (ICA). These arrays can be built by resorting to higher order statistics of the data. The canonical polyadic (CP) decomposition of such semi-symmetric three-way arrays allows us to identify the so-called mixing matrix, which contains the information about the intensities of some latent source signals present in the observation channels. In addition, in many applications, such as magnetic resonance spectroscopy (MRS), the columns of the mixing matrix are viewed as relative concentrations of the spectra of the chemical components. Therefore, the two loading matrices of the three-way array, which are equal to the mixing matrix, are nonnegative. Most existing CP algorithms handle the symmetry and the nonnegativity separately. Up to now, very few of them consider both the semi-nonnegativity and the semi-symmetry structure of the three-way array. Nevertheless, like all the methods based on line search, trust region strategies, and alternating optimization, they appear to be dependent on initialization, requiring in practice a multi-initialization procedure. In order to overcome this drawback, we propose two new methods, based respectively on LU and QR matrix factorizations, to solve the problem of CP decomposition of semi-nonnegative semi-symmetric three-way arrays. Firstly, we rewrite the constrained optimization problem as an unconstrained one. In fact, the nonnegativity constraint of the two symmetric modes is ensured by means of a square change of variable. Secondly, a Jacobi-like optimization procedure is adopted because of its good convergence property. More precisely, the two new methods use LU and QR matrix factorizations, respectively, which consist in formulating high-dimensional optimization problems into several sequential polynomial and rational subproblems. By using both LU

  15. Subgrid-scale scalar flux modelling based on optimal estimation theory and machine-learning procedures

    Science.gov (United States)

    Vollant, A.; Balarac, G.; Corre, C.

    2017-09-01

    New procedures are explored for the development of models in the context of large eddy simulation (LES) of a passive scalar. They rely on the combination of the optimal estimator theory with machine-learning algorithms. The concept of optimal estimator allows the identification of the most accurate set of parameters to be used when deriving a model. The model itself can then be defined by training an artificial neural network (ANN) on a database derived from the filtering of direct numerical simulation (DNS) results. This procedure leads to a subgrid scale model displaying good structural performance, which allows LESs to be performed very close to the filtered DNS results. However, this first procedure does not control the functional performance, so that the model can fail when the flow configuration differs from the training database. Another procedure is then proposed, where the model functional form is imposed and the ANN used only to define the model coefficients. The training step is a bi-objective optimisation in order to control both structural and functional performances. The model derived from this second procedure proves to be more robust. It also provides stable LESs for a turbulent plane jet flow configuration very far from the training database but over-estimates the mixing process in that case.
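    A schematic of the first procedure, under the assumption that a filtered-DNS database has already been assembled into input-output pairs; the data below are a synthetic stand-in and the scikit-learn regressor is just one convenient ANN implementation.

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPRegressor

      # Synthetic stand-in for a filtered-DNS database: the inputs play the role of the
      # resolved-scale quantities selected by the optimal estimator analysis, and the
      # target plays the role of the subgrid-scale scalar flux.
      rng = np.random.default_rng(0)
      X = rng.standard_normal((5000, 4))
      y = 0.8 * X[:, 0] * X[:, 1] - 0.3 * X[:, 2] ** 2 + 0.05 * rng.standard_normal(5000)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
      ann = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
      ann.fit(X_tr, y_tr)
      print("R^2 on held-out samples:", round(ann.score(X_te, y_te), 3))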

  16. An automatic procedure for optimizing fuel loading in consideration of the effect of burnup nonuniformity in assembly

    International Nuclear Information System (INIS)

    Wang Guoli.

    1988-01-01

    The effect of burnup nonuniformity across the assembly on optimizing fuel loading in the core is investigated. Some new rules which can be used for optimizing fuel loading in the core are proposed. A new automatic procedure for optimizing fuel loading in the core is described.

  17. Multi-Objective Optimization of Experiments Using Curvature and Fisher Information Matrix

    Directory of Open Access Journals (Sweden)

    Erica Manesso

    2017-11-01

    Full Text Available The bottleneck in creating dynamic models of biological networks and processes often lies in estimating unknown kinetic model parameters from experimental data. In this regard, experimental conditions have a strong influence on parameter identifiability and should therefore be optimized to give the maximum information for parameter estimation. Existing model-based design of experiment (MBDOE methods commonly rely on the Fisher information matrix (FIM for defining a metric of data informativeness. When the model behavior is highly nonlinear, FIM-based criteria may lead to suboptimal designs, as the FIM only accounts for the linear variation in the model outputs with respect to the parameters. In this work, we developed a multi-objective optimization (MOO MBDOE, for which the model nonlinearity was taken into consideration through the use of curvature. The proposed MOO MBDOE involved maximizing data informativeness using a FIM-based metric and at the same time minimizing the model curvature. We demonstrated the advantages of the MOO MBDOE over existing FIM-based and other curvature-based MBDOEs in an application to the kinetic modeling of fed-batch fermentation of baker’s yeast.
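    For orientation, the FIM side of such a design metric can be sketched as follows: build the sensitivity matrix of the model outputs with respect to the parameters by finite differences and score a candidate experiment by the log-determinant of the resulting Fisher information matrix (a D-optimality criterion). The toy model and noise level below are invented, and the curvature term described above is not included.

      import numpy as np

      def model(theta, t):
          """Toy two-parameter model: exponential decay a * exp(-k t)."""
          a, k = theta
          return a * np.exp(-k * t)

      def fisher_information(theta, t, sigma=0.05, eps=1e-6):
          """FIM = S^T S / sigma^2, with the sensitivity matrix S from central differences."""
          p = len(theta)
          S = np.zeros((len(t), p))
          for j in range(p):
              d = np.zeros(p)
              d[j] = eps
              S[:, j] = (model(theta + d, t) - model(theta - d, t)) / (2 * eps)
          return S.T @ S / sigma ** 2

      def d_criterion(theta, t):
          """D-optimality score of sampling times t (larger means more informative)."""
          sign, logdet = np.linalg.slogdet(fisher_information(theta, t))
          return logdet if sign > 0 else -np.inf

      theta0 = np.array([1.0, 0.5])
      print(d_criterion(theta0, np.linspace(0.1, 1.0, 5)),   # early, clustered samples
            d_criterion(theta0, np.linspace(0.1, 8.0, 5)))   # samples spread over the decay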

  18. Core design optimization by integration of a fast 3-D nodal code in a heuristic search procedure

    Energy Technology Data Exchange (ETDEWEB)

    Geemert, R. van; Leege, P.F.A. de; Hoogenboom, J.E.; Quist, A.J. [Delft University of Technology, NL-2629 JB Delft (Netherlands)

    1998-07-01

    An automated design tool is being developed for the Hoger Onderwijs Reactor (HOR) in Delft, the Netherlands, which is a 2 MWth swimming-pool type research reactor. As a black box evaluator, the 3-D nodal code SILWER, which up to now has been used only for evaluation of predetermined core designs, is integrated in the core optimization procedure. SILWER is a part of PSI's ELCOS package and features optional additional thermal-hydraulic, control rods and xenon poisoning calculations. This allows for fast and accurate evaluation of different core designs during the optimization search. Special attention is paid to handling the in- and output files for SILWER such that no adjustment of the code itself is required for its integration in the optimization programme. The optimization objective, the safety and operation constraints, as well as the optimization procedure, are discussed. (author)

  19. Core design optimization by integration of a fast 3-D nodal code in a heuristic search procedure

    International Nuclear Information System (INIS)

    Geemert, R. van; Leege, P.F.A. de; Hoogenboom, J.E.; Quist, A.J.

    1998-01-01

    An automated design tool is being developed for the Hoger Onderwijs Reactor (HOR) in Delft, the Netherlands, which is a 2 MWth swimming-pool type research reactor. As a black box evaluator, the 3-D nodal code SILWER, which up to now has been used only for evaluation of predetermined core designs, is integrated in the core optimization procedure. SILWER is a part of PSI's ELCOS package and features optional additional thermal-hydraulic, control rods and xenon poisoning calculations. This allows for fast and accurate evaluation of different core designs during the optimization search. Special attention is paid to handling the in- and output files for SILWER such that no adjustment of the code itself is required for its integration in the optimization programme. The optimization objective, the safety and operation constraints, as well as the optimization procedure, are discussed. (author)

  20. Spatiotemporal matrix image formation for programmable ultrasound scanners

    Science.gov (United States)

    Berthon, Beatrice; Morichau-Beauchant, Pierre; Porée, Jonathan; Garofalakis, Anikitos; Tavitian, Bertrand; Tanter, Mickael; Provost, Jean

    2018-02-01

    As programmable ultrasound scanners become more common in research laboratories, it is increasingly important to develop robust software-based image formation algorithms that can be obtained in a straightforward fashion for different types of probes and sequences with a small risk of error during implementation. In this work, we argue that as the computational power keeps increasing, it is becoming practical to directly implement an approximation to the matrix operator linking reflector point targets to the corresponding radiofrequency signals via thoroughly validated and widely available simulations software. Once such a spatiotemporal forward-problem matrix is constructed, standard and thus highly optimized inversion procedures can be leveraged to achieve very high quality images in real time. Specifically, we show that spatiotemporal matrix image formation produces images of similar or enhanced quality when compared against standard delay-and-sum approaches in phantoms and in vivo, and show that this approach can be used to form images even when using non-conventional probe designs for which adapted image formation algorithms are not readily available.
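    A toy illustration of the general idea, with an invented random forward operator standing in for the simulator-derived spatiotemporal matrix: each column holds the signal of a point reflector at one pixel, and the image is recovered from the measured signals by a standard Tikhonov-regularized least-squares inversion.

      import numpy as np

      rng = np.random.default_rng(0)
      n_samples, n_pixels = 400, 100

      # Stand-in forward operator: column j = simulated signal of a reflector at pixel j.
      A = rng.standard_normal((n_samples, n_pixels))

      # Synthetic scene with three reflectors, plus a little measurement noise.
      x_true = np.zeros(n_pixels)
      x_true[[12, 47, 80]] = [1.0, 0.6, 0.8]
      y = A @ x_true + 0.01 * rng.standard_normal(n_samples)

      # x = argmin ||A x - y||^2 + lam * ||x||^2  (regularized inversion of the forward matrix).
      lam = 1e-2
      x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_pixels), A.T @ y)
      print("strongest recovered pixels:", sorted(np.argsort(x_hat)[-3:]))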

  1. Development of an optimized procedure bridging design and structural analysis codes for the automatized design of the SMART

    International Nuclear Information System (INIS)

    Kim, Tae Wan; Park, Keun Bae; Choi, Suhn; Kim, Kang Soo; Jeong, Kyeong Hoon; Lee, Gyu Mahn

    1998-09-01

    In this report, an optimized design and analysis procedure is established to apply to the SMART (System-integrated Modular Advanced ReacTor) development. The aim of the optimized procedure is to minimize time consumption and engineering effort by streamlining the design and feedback interactions. To achieve this goal, the data and information generated through the design development should be directly transferred to the analysis program with minimum operation. The verification of the design concept requires considerable effort since the communication between design and analysis involves a time-consuming stage for the conversion of input information. In this report, an optimized procedure bridging the design and analysis stages is established, utilizing IDEAS, ABAQUS and ANSYS. (author). 3 refs., 2 tabs., 5 figs

  2. Impact of a yogurt matrix and cell microencapsulation on the survival of Lactobacillus reuteri in three in vitro gastric digestion procedures.

    Science.gov (United States)

    Champagne, C P; Raymond, Y; Guertin, N; Martoni, C J; Jones, M L; Mainville, I; Arcand, Y

    2015-01-01

    The goal of this study was to assess the interaction between microencapsulation and a yogurt food matrix on the survival of Lactobacillus reuteri NCIMB 30242 in four different in vitro systems that simulate a gastric environment. The four systems were: United States Pharmacopeia (USP) solutions, a static two-step (STS) procedure which included simulated food ingredients, a constantly dynamic digestion procedure (IViDiS), as well as a multi-step dynamic digestion scheme (S'IViDiS). The pH profiles of the various procedures varied between systems, with acidity levels being: USP > STS > IViDiS = S'IViDiS. Addition of a food matrix increased the pH in all systems except for the USP methodology. Microencapsulation in alginate-based gels was effective in protecting the cells in model solutions when no food ingredients were present. The stability of the probiotic culture in the in vitro gastric environments was enhanced when (1) yogurt or simulated food ingredients were present in the medium in sufficient quantity, and (2) the pH was higher. The procedure-comparison data of this study will be helpful in interpreting the literature with respect to viable counts of probiotics obtained from different static or dynamic in vitro gastric systems.

  3. Fabrication process optimization for improved mechanical properties of Al 7075/SiCp metal matrix composites

    Directory of Open Access Journals (Sweden)

    Dipti Kanta Das

    2016-04-01

    Full Text Available Two sets of nine different silicon carbide particulate (SiCp) reinforced Al 7075 Metal Matrix Composites (MMCs) were fabricated using the liquid metallurgy stir casting process. Mean particle size and weight percentage of the reinforcement were varied according to a Taguchi L9 Design of Experiments (DOE). One set of the cast composites was then heat treated to T6 condition. Optical micrographs of the MMCs reveal consistent dispersion of reinforcements in the matrix phase. Mechanical properties were determined for both as-cast and heat treated MMCs for comparison of the experimental results. Linear regression models were developed for the mechanical properties of the heat treated MMCs using the least squares method of regression analysis. The fabrication process parameters were then optimized using Taguchi based grey relational analysis for the multiple mechanical properties of the heat treated MMCs. The largest value of mean grey relational grade was obtained for the composite with mean particle size 6.18 µm and 25 weight % of reinforcement. The optimal combination of process parameters was then verified through confirmation experiments, which resulted in a 42% improvement in the grey relational grade. Finally, the percentage of contribution of each process parameter to the multiple performance characteristics was calculated through Analysis of Variance (ANOVA).
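    The grey relational grade used in that optimization can be computed as in the sketch below, which uses a generic larger-is-better normalization and invented response values rather than the measured properties of the record.

      import numpy as np

      def grey_relational_grade(responses, zeta=0.5):
          """Grey relational grade per run; rows = runs, columns = responses (larger is better)."""
          x = (responses - responses.min(axis=0)) / (responses.max(axis=0) - responses.min(axis=0))
          delta = 1.0 - x                                  # deviation from the ideal (normalized) sequence
          coeff = (delta.min() + zeta * delta.max()) / (delta + zeta * delta.max())
          return coeff.mean(axis=1)                        # average the coefficients over the responses

      # Hypothetical L9 results, e.g. tensile strength (MPa) and hardness (HV) of each run.
      runs = np.array([[210, 62], [235, 70], [228, 66], [250, 74], [242, 69],
                       [260, 76], [231, 65], [255, 73], [247, 71]], dtype=float)
      grade = grey_relational_grade(runs)
      print("best run:", int(np.argmax(grade)) + 1, "grades:", np.round(grade, 3))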

  4. Optimal Procedure for siting of Nuclear Power Plant

    International Nuclear Information System (INIS)

    Aziuddin, Khairiah Binti; Park, Seo Yeon; Roh, Myung Sub

    2013-01-01

    This study discusses a simulation approach for sensitivity analysis of the weights of multi-criteria decision models. The simulation procedures can also be used to aid the actual decision process, particularly when the task is to select a subset of superior alternatives. The aim of this study is to identify the criteria or parameters that are sensitive to the weighting factor and can affect the results of the decision making process to determine the optimal nuclear power plant (NPP) site. To perform this study, we adhere to IAEA NS-R-3 and DS 433. The siting process for a nuclear installation consists of site survey and site selection stages. The siting process generally consists of an investigation of a large region to select one or more candidate sites by surveying the sites. After comparing the ROI, two candidate sites, Wolsong and Kori, are compared for the final determination. Some assumptions are taken into consideration due to limitations and constraints throughout performing this study. Sensitivity analysis of multi-criteria decision models is performed in this study to determine the optimal site in the site selection stage. Logical Decisions software will be employed as a tool to perform this analysis. Logical Decisions software helps to formulate the preferences and then rank the alternatives. It provides clarification of the rankings and hence aids the decision makers in evaluating the alternatives, and finally in drawing a conclusion on the selection of the optimal site.

  5. Optimal Procedure for siting of Nuclear Power Plant

    Energy Technology Data Exchange (ETDEWEB)

    Aziuddin, Khairiah Binti; Park, Seo Yeon; Roh, Myung Sub [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of)

    2013-10-15

    This study discusses a simulation approach for sensitivity analysis of the weights of multi-criteria decision models. The simulation procedures can also be used to aid the actual decision process, particularly when the task is to select a subset of superior alternatives. The aim of this study is to identify the criteria or parameters that are sensitive to the weighting factor and can affect the results of the decision making process to determine the optimal nuclear power plant (NPP) site. To perform this study, we adhere to IAEA NS-R-3 and DS 433. The siting process for a nuclear installation consists of site survey and site selection stages. The siting process generally consists of an investigation of a large region to select one or more candidate sites by surveying the sites. After comparing the ROI, two candidate sites, Wolsong and Kori, are compared for the final determination. Some assumptions are taken into consideration due to limitations and constraints throughout performing this study. Sensitivity analysis of multi-criteria decision models is performed in this study to determine the optimal site in the site selection stage. Logical Decisions software will be employed as a tool to perform this analysis. Logical Decisions software helps to formulate the preferences and then rank the alternatives. It provides clarification of the rankings and hence aids the decision makers in evaluating the alternatives, and finally in drawing a conclusion on the selection of the optimal site.

  6. Structural analysis and optimization procedure of the TFTR device substructure

    International Nuclear Information System (INIS)

    Driesen, G.

    1975-10-01

    A structural evaluation of the TFTR device substructure is performed in order to verify the feasibility of the proposed design concept as well as to establish a design optimization procedure for minimizing the material and fabrication cost of the substructure members. A preliminary evaluation of the seismic capability is also presented. The design concept on which the analysis is based is consistent with that described in the Conceptual Design Status Briefing report dated June 18, 1975

  7. Estimating cellular parameters through optimization procedures: elementary principles and applications

    Directory of Open Access Journals (Sweden)

    Akatsuki eKimura

    2015-03-01

    Full Text Available Construction of quantitative models is a primary goal of quantitative biology, which aims to understand cellular and organismal phenomena in a quantitative manner. In this article, we introduce optimization procedures to search for parameters in a quantitative model that can reproduce experimental data. The aim of optimization is to minimize the sum of squared errors (SSE) in a prediction or to maximize the likelihood. A (local) maximum of the likelihood or (local) minimum of the SSE can efficiently be identified using gradient approaches. Addition of a stochastic process enables us to identify the global maximum/minimum without becoming trapped in local maxima/minima. Sampling approaches take advantage of increasing computational power to test numerous sets of parameters in order to determine the optimum set. By combining Bayesian inference with gradient or sampling approaches, we can estimate both the optimum parameters and the form of the likelihood function related to the parameters. Finally, we introduce four examples of research that utilize parameter optimization to obtain biological insights from quantified data: transcriptional regulation, bacterial chemotaxis, morphogenesis, and cell cycle regulation. With practical knowledge of parameter optimization, cell and developmental biologists can develop realistic models that reproduce their observations and thus, obtain mechanistic insights into phenomena of interest.
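    A minimal example of the SSE-minimization step described here, fitting a toy two-parameter model to synthetic data with a gradient-based least-squares routine; the model, the data, and the starting guess are all placeholders.

      import numpy as np
      from scipy.optimize import least_squares

      def model(theta, t):
          """Toy model: exponential relaxation with rate k towards baseline b."""
          k, b = theta
          return b + (1.0 - b) * np.exp(-k * t)

      # Synthetic 'experimental' data generated from known parameters plus noise.
      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 10.0, 50)
      y_obs = model([0.7, 0.2], t) + 0.02 * rng.standard_normal(t.size)

      # Minimize the sum of squared errors between the model prediction and the observations.
      fit = least_squares(lambda th: model(th, t) - y_obs, x0=[0.1, 0.0])
      print("estimated parameters:", np.round(fit.x, 3))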

  8. Use of fractional factorial design for optimization of digestion procedures followed by multi-element determination of essential and non-essential elements in nuts using ICP-OES technique.

    Science.gov (United States)

    Momen, Awad A; Zachariadis, George A; Anthemidis, Aristidis N; Stratis, John A

    2007-01-15

    Two digestion procedures have been tested on nut samples for application in the determination of essential (Cr, Cu, Fe, Mg, Mn, Zn) and non-essential (Al, Ba, Cd, Pb) elements by inductively coupled plasma-optical emission spectrometry (ICP-OES). These included wet digestions with HNO3/H2SO4 and HNO3/H2SO4/H2O2. The latter one is recommended for better analyte recoveries. The factors considered (HNO3 volume, H2O2 volume, digestion time, pre-digestion time, temperature of the hot plate and sample weight) were used for optimization of the sample digestion procedures. For this purpose a Plackett-Burman fractional factorial design, which involves eight experiments, was adopted. The HNO3 and H2O2 volumes and the digestion time were found to be the most important parameters. The instrumental conditions were also optimized (using a peanut matrix rather than aqueous standard solutions) considering radio-frequency (rf) incident power, nebulizer argon gas flow rate and sample uptake flow rate. The analytical performance, such as limits of detection (LOD < 0.74 µg g-1), precision of the overall procedures (relative standard deviation between 2.0 and 8.2%) and accuracy (relative errors between 0.4 and 11%), was assessed statistically to evaluate the developed analytical procedures. The good agreement between measured and certified values for all analytes (relative error < 11%) with respect to IAEA-331 (spinach leaves) and IAEA-359 (cabbage) indicates that the developed analytical method is well suited for further studies on the fate of major elements in nuts and possibly similar matrices.

  9. Performance evaluation of matrix gradient coils.

    Science.gov (United States)

    Jia, Feng; Schultz, Gerrit; Testud, Frederik; Welz, Anna Masako; Weber, Hans; Littin, Sebastian; Yu, Huijun; Hennig, Jürgen; Zaitsev, Maxim

    2016-02-01

    In this paper, we present a new performance measure of a matrix coil (also known as multi-coil) from the perspective of efficient, local, non-linear encoding without explicitly considering target encoding fields. An optimization problem based on a joint optimization for the non-linear encoding fields is formulated. Based on the derived objective function, a figure of merit of a matrix coil is defined, which is a generalization of a previously known resistive figure of merit for traditional gradient coils. A cylindrical matrix coil design with a high number of elements is used to illustrate the proposed performance measure. The results are analyzed to reveal novel features of matrix coil designs, which allowed us to optimize coil parameters, such as number of coil elements. A comparison to a scaled, existing multi-coil is also provided to demonstrate the use of the proposed performance parameter. The assessment of a matrix gradient coil profits from using a single performance parameter that takes the local encoding performance of the coil into account in relation to the dissipated power.

  10. Efficient high-precision matrix algebra on parallel architectures for nonlinear combinatorial optimization

    KAUST Repository

    Gunnels, John; Lee, Jon; Margulies, Susan

    2010-01-01

    We provide a first demonstration of the idea that matrix-based algorithms for nonlinear combinatorial optimization problems can be efficiently implemented. Such algorithms were mainly conceived by theoretical computer scientists for proving efficiency. We are able to demonstrate the practicality of our approach by developing an implementation on a massively parallel architecture, and exploiting scalable and efficient parallel implementations of algorithms for ultra high-precision linear algebra. Additionally, we have delineated and implemented the necessary algorithmic and coding changes required in order to address problems several orders of magnitude larger, dealing with the limits of scalability from memory footprint, computational efficiency, reliability, and interconnect perspectives. © Springer and Mathematical Programming Society 2010.

  11. Efficient high-precision matrix algebra on parallel architectures for nonlinear combinatorial optimization

    KAUST Repository

    Gunnels, John

    2010-06-01

    We provide a first demonstration of the idea that matrix-based algorithms for nonlinear combinatorial optimization problems can be efficiently implemented. Such algorithms were mainly conceived by theoretical computer scientists for proving efficiency. We are able to demonstrate the practicality of our approach by developing an implementation on a massively parallel architecture, and exploiting scalable and efficient parallel implementations of algorithms for ultra high-precision linear algebra. Additionally, we have delineated and implemented the necessary algorithmic and coding changes required in order to address problems several orders of magnitude larger, dealing with the limits of scalability from memory footprint, computational efficiency, reliability, and interconnect perspectives. © Springer and Mathematical Programming Society 2010.

  12. Optimized in vitro procedure for assessing the cytocompatibility of magnesium-based biomaterials.

    Science.gov (United States)

    Jung, Ole; Smeets, Ralf; Porchetta, Dario; Kopp, Alexander; Ptock, Christoph; Müller, Ute; Heiland, Max; Schwade, Max; Behr, Björn; Kröger, Nadja; Kluwe, Lan; Hanken, Henning; Hartjen, Philip

    2015-09-01

    Magnesium (Mg) is a promising biomaterial for degradable implant applications that has been extensively studied in vitro and in vivo in recent years. In this study, we developed a procedure that allows an optimized and uniform in vitro assessment of the cytocompatibility of Mg-based materials while respecting the standard protocol DIN EN ISO 10993-5:2009. The mouse fibroblast line L-929 was chosen as the preferred assay cell line and MEM supplemented with 10% FCS, penicillin/streptomycin and 4mM l-glutamine as the favored assay medium. The procedure consists of (1) an indirect assessment of effects of soluble Mg corrosion products in material extracts and (2) a direct assessment of the surface compatibility in terms of cell attachment and cytotoxicity originating from active corrosion processes. The indirect assessment allows the quantification of cell-proliferation (BrdU-assay), viability (XTT-assay) as well as cytotoxicity (LDH-assay) of the mouse fibroblasts incubated with material extracts. Direct assessment visualizes cells attached to the test materials by means of live-dead staining. The colorimetric assays and the visual evaluation complement each other and the combination of both provides an optimized and simple procedure for assessing the cytocompatibility of Mg-based biomaterials in vitro. Copyright © 2015 Acta Materialia Inc. Published by Elsevier Ltd. All rights reserved.

  13. Some fundamental aspects of fault-tree and digraph-matrix relationships for a systems-interaction evaluation procedure

    International Nuclear Information System (INIS)

    Alesso, H.P.

    1982-01-01

    Recent events, such as Three Mile Island-2, Brown's Ferry-3, and Crystal River-3, have demonstrated that complex accidents can occur as a result of dependent (common-cause/mode) failures. These events are now being called Systems Interactions. A procedure for the identification and evaluation of Systems Interactions is being developed by the NRC. Several national laboratories and utilities have contributed preliminary procedures. As a result, there are several important views of the Systems Interaction problem. This report reviews some fundamental mathematical background of both fault-oriented and success-oriented risk analyses in order to bring out the advantages and disadvantages of each. In addition, it outlines several fault-oriented/dependency analysis approaches and several success-oriented/digraph-matrix approaches. The objective is to obtain a broad perspective of present options for solving the Systems Interaction problem

  14. Optimal Artificial Boundary Condition Configurations for Sensitivity-Based Model Updating and Damage Detection

    Science.gov (United States)

    2010-09-01

    matrix is used in many methods, like Jacobi or Gauss-Seidel, for solving linear systems. Also, no partial pivoting is necessary for a strictly column... problems that arise during the procedure, which, in general, converges to the solving of a linear system. The most common issue with the solution is the... iterative procedure to find an appropriate subset of parameters that produce an optimal solution, commonly known as forward selection. Then, the

  15. MDCT evaluation of pulmonary embolism in children and young adults following a lateral tunnel Fontan procedure: optimizing contrast-enhancement techniques

    International Nuclear Information System (INIS)

    Prabhu, Sanjay P.; Mahmood, Soran; Sena, Laureen; Lee, Edward Y.

    2009-01-01

    Pulmonary embolism (PE) is a life-threatening thromboembolic complication in patients who have undergone a Fontan procedure for augmenting pulmonary blood flow in the setting of single-ventricle physiology. In patients following a Fontan procedure, lack of proper contrast agent mixing in the right atrium and sluggish, low-velocity blood flow within the Fontan circulation often results in suboptimal contrast enhancement within the pulmonary artery for evaluating PE. Unfortunately, there is a paucity of information describing the optimal contrast-enhancement technique with multidetector CT (MDCT) for evaluating PE in children and young adults following a Fontan procedure. We illustrate the MDCT imaging findings of suboptimal contrast enhancement within the pulmonary artery, which can be mistaken for PE, in patients following a lateral Fontan procedure, and we discuss MDCT techniques to optimize contrast enhancement within the pulmonary artery in these patients for evaluating PE. The MDCT imaging findings in pediatric and young adult patients following a lateral Fontan procedure and with clinically suspected PE are illustrated. We describe intravenous contrast agent injection techniques that can be used to optimize the contrast enhancement in the pulmonary artery in patients following a lateral Fontan procedure. The use of a suboptimal contrast-enhancement technique led to initial misdiagnosis and incomplete evaluation of PE in the three patients following a lateral Fontan procedure. Imaging in two patients showed that optimal evaluation of thrombosis in the Fontan pathway and PE in the pulmonary arteries can be successfully achieved with simultaneous upper- and lower-limb injections of contrast agent. This series demonstrates that suboptimal contrast enhancement can result in misdiagnosis or incomplete evaluation of PE in children and young adults following a lateral Fontan procedure. Careful attention to optimizing contrast enhancement during MDCT examination for

  16. Design of an optimized biomixture for the degradation of carbofuran based on pesticide removal and toxicity reduction of the matrix.

    Science.gov (United States)

    Chin-Pampillo, Juan Salvador; Ruiz-Hidalgo, Karla; Masís-Mora, Mario; Carazo-Rojas, Elizabeth; Rodríguez-Rodríguez, Carlos E

    2015-12-01

    Pesticide biopurification systems contain a biologically active matrix (biomixture) responsible for the accelerated elimination of pesticides in wastewaters derived from pest control in crop fields. Biomixtures have been typically prepared using the volumetric composition 50:25:25 (lignocellulosic substrate/humic component/soil); nonetheless, formal composition optimization has not been performed so far. Carbofuran is an insecticide/nematicide of high toxicity widely employed in developing countries. Therefore, the composition of a highly efficient biomixture (composed of coconut fiber, compost, and soil, FCS) for the removal of carbofuran was optimized by means of a central composite design and response surface methodology. The volumetric content of soil and the ratio coconut fiber/compost were used as the design variables. The performance of the biomixture was assayed by considering the elimination of carbofuran, the mineralization of (14)C-carbofuran, and the residual toxicity of the matrix, as response variables. Based on the models, the optimal volumetric composition of the FCS biomixture consists of 45:13:42 (coconut fiber/compost/soil), which resulted in minimal residual toxicity and ∼99% carbofuran elimination after 3 days. This optimized biomixture considerably differs from the standard 50:25:25 composition, which remarks the importance of assessing the performance of newly developed biomixtures during the design of biopurification systems.
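    For reference, the response-surface step of such a central composite design can be sketched as fitting a full quadratic model in the two design variables and searching it for the optimum; the design points and response values below are invented placeholders, not the study's measurements.

      import numpy as np
      from scipy.optimize import minimize

      # Invented (soil volume %, fiber/compost ratio) design points and a response to be
      # minimized (think of it as a residual-toxicity score); real values would come from
      # the central composite design experiments.
      X = np.array([[25, 1.0], [25, 3.0], [45, 1.0], [45, 3.0], [35, 2.0],
                    [20, 2.0], [50, 2.0], [35, 0.5], [35, 3.5]], dtype=float)
      y = np.array([8.1, 7.2, 6.0, 4.9, 5.1, 7.5, 5.8, 6.9, 5.5])

      def quad_features(x):
          """Full quadratic feature vector(s): 1, x1, x2, x1*x2, x1^2, x2^2."""
          x1, x2 = x[..., 0], x[..., 1]
          return np.stack([np.ones_like(x1), x1, x2, x1 * x2, x1 ** 2, x2 ** 2], axis=-1)

      beta, *_ = np.linalg.lstsq(quad_features(X), y, rcond=None)   # fit the response surface

      def surface(x):
          return float(quad_features(np.asarray(x, dtype=float)) @ beta)

      # Search the fitted surface for the design that minimizes the response.
      res = minimize(surface, x0=[35.0, 2.0], bounds=[(20, 50), (0.5, 3.5)])
      print("predicted optimum (soil %, fiber/compost ratio):", np.round(res.x, 2))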

  17. Optimal Fluorescence Waveband Determination for Detecting Defective Cherry Tomatoes Using a Fluorescence Excitation-Emission Matrix

    Directory of Open Access Journals (Sweden)

    In-Suck Baek

    2014-11-01

    Full Text Available A multi-spectral fluorescence imaging technique was used to detect defective cherry tomatoes. The fluorescence excitation-emission matrix was measured for defect, sound-surface and stem areas to determine the optimal fluorescence excitation and emission wavelengths for discrimination. Two-way ANOVA revealed that the optimal excitation wavelength for detecting defect areas was 410 nm. Principal component analysis (PCA) was applied to the fluorescence emission spectra of all regions at 410 nm excitation to determine the emission wavelengths for defect detection. The major emission wavelengths for the detection were 688 nm and 506 nm. Fluorescence images combined with the determined emission wavebands demonstrated the feasibility of detecting defective cherry tomatoes with >98% accuracy. Multi-spectral fluorescence imaging has potential utility in non-destructive quality sorting of cherry tomatoes.

  18. Optimization of wet digestion procedure of blood and tissue for selenium determination by means of 75Se tracer

    International Nuclear Information System (INIS)

    Holynska, B.; Lipinska-Kalita, K.

    1977-01-01

    Selenium-75 tracer has been used for the optimization of an analytical procedure for selenium determination in blood and tissue. A wet digestion procedure and the reduction of selenium to its elemental form with tellurium as coprecipitant have been tested. It is seen that the use of a mixture of perchloric and sulphuric acid with sodium molybdate for the wet digestion of organic matter, followed by the reduction of selenium to its elemental form by a mixture of stannous chloride and hydroxylamine hydrochloride, results in very good recovery of selenium. The recovery of selenium obtained with the optimized analytical procedure amounts to 95% and the precision is 4.2%. (T.I.)

  19. Kohn-Sham potentials from electron densities using a matrix representation within finite atomic orbital basis sets

    Science.gov (United States)

    Zhang, Xing; Carter, Emily A.

    2018-01-01

    We revisit the static response function-based Kohn-Sham (KS) inversion procedure for determining the KS effective potential that corresponds to a given target electron density within finite atomic orbital basis sets. Instead of expanding the potential in an auxiliary basis set, we directly update the potential in its matrix representation. Through numerical examples, we show that the reconstructed density rapidly converges to the target density. Preliminary results are presented to illustrate the possibility of obtaining a local potential in real space from the optimized potential in its matrix representation. We have further applied this matrix-based KS inversion approach to density functional embedding theory. A proof-of-concept study of a solvated proton transfer reaction demonstrates the method's promise.

  20. Optimization of the semiautomated Macdonald and Savage procedure

    International Nuclear Information System (INIS)

    Kuvik, V.; Vrbova, L.

    1990-06-01

    Several topics concerning the Macdonald and Savage (MD/S) procedure used for the potentiometric determination of plutonium were included in the programme of Analytical Services requested from CCL Rez. One part of the work was focused on eliminating the variations of the Pu results. The sulfuric reagent should be aged at least for one week. The optimal composition of the sulfuric reagent is 2M sulfuric acid + 0.2M sulfuric acid + 0.25 cerous nitrate. The excess of 0.002 N permanganate may vary between 0.016 and 0.1 ml. A variation of 0.02M oxalic acid between 0.08 and 0.14 ml is acceptable. The presence of Ga results in a slight positive bias of the Pu determination at a ratio of Ga/Pu = 0.25. 3 refs, 18 tabs

  1. Fuzzy risk matrix

    International Nuclear Information System (INIS)

    Markowski, Adam S.; Mannan, M. Sam

    2008-01-01

    A risk matrix is a mechanism to characterize and rank process risks that are typically identified through one or more multifunctional reviews (e.g., process hazard analysis, audits, or incident investigation). This paper describes a procedure for developing a fuzzy risk matrix that may be used for emerging fuzzy logic applications in different safety analyses (e.g., LOPA). The fuzzification of the frequency and severity of the consequences of the incident scenario, which are the basic inputs for the fuzzy risk matrix, is described. Subsequently, using different risk matrix designs, fuzzy rules are established, enabling the development of fuzzy risk matrices. Three types of fuzzy risk matrix have been developed (low-cost, standard, and high-cost), and using a distillation column case study, the effect of the design on the final defuzzified risk index is demonstrated.
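
    The fuzzification / rule-evaluation / defuzzification chain described above can be sketched as follows. The membership functions, rule table, and category labels are illustrative assumptions, not the matrices developed in the paper.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Fuzzify frequency (log10 of events per year) and severity (0-10 index).
def fuzzify_frequency(f):
    return {"rare": tri(f, -6, -5, -3), "possible": tri(f, -4, -2.5, -1), "frequent": tri(f, -2, 0, 2)}

def fuzzify_severity(s):
    return {"minor": tri(s, 0, 1, 4), "serious": tri(s, 2, 5, 8), "catastrophic": tri(s, 6, 10, 14)}

# Illustrative rule table of a "standard" risk matrix: (frequency, severity) -> risk level.
RULES = {
    ("rare", "minor"): 1, ("rare", "serious"): 2, ("rare", "catastrophic"): 3,
    ("possible", "minor"): 2, ("possible", "serious"): 3, ("possible", "catastrophic"): 4,
    ("frequent", "minor"): 3, ("frequent", "serious"): 4, ("frequent", "catastrophic"): 5,
}

def fuzzy_risk_index(log10_frequency, severity):
    mu_f = fuzzify_frequency(log10_frequency)
    mu_s = fuzzify_severity(severity)
    # Mamdani-style inference: rule strength = min of the antecedent memberships,
    # defuzzification = weighted average of the rule consequents.
    strengths, levels = [], []
    for (f_label, s_label), level in RULES.items():
        strengths.append(min(mu_f[f_label], mu_s[s_label]))
        levels.append(level)
    strengths, levels = np.array(strengths), np.array(levels)
    return float((strengths * levels).sum() / strengths.sum())

print(fuzzy_risk_index(log10_frequency=-2.0, severity=6.5))  # defuzzified risk index
```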

  2. Optimal configuration of partial Mueller matrix polarimeter for measuring the ellipsometric parameters in the presence of Poisson shot noise and Gaussian noise

    Science.gov (United States)

    Quan, Naicheng; Zhang, Chunmin; Mu, Tingkui

    2018-05-01

    We address the optimal configuration of a partial Mueller matrix polarimeter used to determine the ellipsometric parameters in the presence of additive Gaussian noise and signal-dependent shot noise. The numerical results show that, for the PSG/PSA consisting of a variable retarder and a fixed polarizer, the detection process immune to these two types of noise is optimally composed of a 121.2° retardation with a pair of azimuths of ±71.34° and a 144.48° retardation with a pair of azimuths of ±31.56° for the measurement of four Mueller matrix elements. Compared with the existing configurations, the configuration presented in this paper can effectively decrease the measurement variance and thus statistically improve the measurement precision of the ellipsometric parameters.

  3. Systematic and efficient side chain optimization for molecular docking using a cheapest-path procedure.

    Science.gov (United States)

    Schumann, Marcel; Armen, Roger S

    2013-05-30

    Molecular docking of small molecules is an important procedure for computer-aided drug design. Modeling receptor side chain flexibility is often important or even crucial, as it allows the receptor to adopt new conformations as induced by ligand binding. However, the accurate and efficient incorporation of receptor side chain flexibility has proven to be a challenge due to the huge computational complexity required to adequately address this problem. Here we describe a new docking approach with a very fast, graph-based optimization algorithm for assignment of the near-optimal set of residue rotamers. We extensively validate our approach using the 40 DUD target benchmarks commonly used to assess virtual screening performance and demonstrate a large improvement using the developed side chain optimization over rigid receptor docking (average ROC AUC of 0.693 vs. 0.623). Compared to numerous benchmarks, the overall performance is better than nearly all other commonly used procedures. Furthermore, we provide a detailed analysis of the level of receptor flexibility observed in docking results for different classes of residues and elucidate potential avenues for further improvement. Copyright © 2013 Wiley Periodicals, Inc.
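
    The graph-based idea of assigning a near-optimal set of residue rotamers can be illustrated with a cheapest-path dynamic program over a chain of flexible residues. The energy tables below are random placeholders, and this sketch is not the authors' algorithm or scoring function.

```python
import numpy as np

rng = np.random.default_rng(1)
n_residues, n_rotamers = 6, 4

# Placeholder energies: self energy of each rotamer (residue/ligand/backbone term)
# and pairwise energy between rotamers of consecutive residues.
self_e = rng.normal(size=(n_residues, n_rotamers))
pair_e = rng.normal(size=(n_residues - 1, n_rotamers, n_rotamers))

# Cheapest-path dynamic programming (Viterbi-style) over the residue chain:
# cost[i, r] = best total energy of residues 0..i with residue i in rotamer r.
cost = np.zeros((n_residues, n_rotamers))
back = np.zeros((n_residues, n_rotamers), dtype=int)
cost[0] = self_e[0]
for i in range(1, n_residues):
    # candidate[r_prev, r] = cost so far + pair term + self term of rotamer r
    candidate = cost[i - 1][:, None] + pair_e[i - 1] + self_e[i][None, :]
    back[i] = np.argmin(candidate, axis=0)
    cost[i] = np.min(candidate, axis=0)

# Trace back the cheapest path to recover the rotamer assignment.
assignment = [int(np.argmin(cost[-1]))]
for i in range(n_residues - 1, 0, -1):
    assignment.append(int(back[i][assignment[-1]]))
assignment.reverse()
print("rotamer per residue:", assignment, "energy:", float(np.min(cost[-1])))
```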

  4. Optimization of the sample dilution for minimization of the matrix effects and achieving maximal sensitivity in XRFA

    International Nuclear Information System (INIS)

    Dimov, L.; Benova, M.

    1989-01-01

    The method of neutral medium dilution can lead to practically full leveling of the matrix, but the high degree of dilution at which this effect is achieved will inevitably result in loss of sensitivity. In the XRFA of heavy elements in a light matrix, the dependence of the fluorescence intensity upon concentration is characterized by gradually decreasing steepness, reaching saturation (as in the analysis of Pb in a Pb-concentrate). The dilution by a neutral medium can shift the concentration range to a zone of greater steepness, but this results in an increase of sensitivity towards the concentration in the initial (undiluted) material only up to a certain degree of dilution. This work presents an optimization of the degree of dilution by a neutral medium which achieves a sufficient leveling of the matrix when using different materials and also maximal sensitivity towards higher concentrations. Furthermore, an original solution is found to the problem of selecting a neutral medium enabling good homogenization to be achieved with various materials, particularly with marble flour. (author)

  5. Gas chromatographic-mass spectrometric analysis of urinary volatile organic metabolites: Optimization of the HS-SPME procedure and sample storage conditions.

    Science.gov (United States)

    Živković Semren, Tanja; Brčić Karačonji, Irena; Safner, Toni; Brajenović, Nataša; Tariba Lovaković, Blanka; Pizent, Alica

    2018-01-01

    Non-targeted metabolomics research of the human volatile urinary metabolome can be used to identify potential biomarkers associated with the changes in metabolism related to various health disorders. To ensure reliable analysis of urinary volatile organic metabolites (VOMs) by gas chromatography-mass spectrometry (GC-MS), parameters affecting the headspace-solid phase microextraction (HS-SPME) procedure have been evaluated and optimized. The influence of incubation and extraction temperatures and times, coating fibre material and salt addition on SPME efficiency was investigated by multivariate optimization methods using reduced factorial and Doehlert matrix designs. The results showed optimum values of 60°C for temperature, 50 min for extraction time, and 35 min for incubation time. The proposed conditions were applied to investigate urine samples' stability regarding different storage conditions and freeze-thaw processes. The sum of peak areas of urine samples stored at 4°C, -20°C, and -80°C up to six months showed a time-dependent decrease, although storage at -80°C resulted in a slight, non-significant reduction compared to the fresh sample. However, due to the volatile nature of the analysed compounds, more than two cycles of freezing/thawing of the sample stored for six months at -80°C should be avoided whenever possible. Copyright © 2017 Elsevier B.V. All rights reserved.
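
    For reference, a Doehlert design for two factors (such as the extraction temperature and time optimized above) consists of a centered regular hexagon in coded units. The sketch below generates the coded matrix and maps it onto hypothetical variable ranges; the ranges are placeholders, not those used in the study.

```python
import numpy as np

# Coded Doehlert matrix for two factors: center point plus a regular hexagon.
doehlert_coded = np.array([
    [ 0.0,  0.0],
    [ 1.0,  0.0], [ 0.5,  0.866], [-0.5,  0.866],
    [-1.0,  0.0], [-0.5, -0.866], [ 0.5, -0.866],
])

def decode(coded, centers, half_ranges):
    """Map coded levels onto real factor values: value = center + coded * half_range."""
    return centers + coded * half_ranges

# Placeholder ranges: extraction temperature 40-80 degC, extraction time 20-80 min.
centers = np.array([60.0, 50.0])
half_ranges = np.array([20.0, 30.0])
experiments = decode(doehlert_coded, centers, half_ranges)
for temp_c, time_min in experiments:
    print(f"run at {temp_c:5.1f} degC for {time_min:5.1f} min")
```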

  6. Statistical Optimization of Sustained Release Venlafaxine HCI Wax Matrix Tablet.

    Science.gov (United States)

    Bhalekar, M R; Madgulkar, A R; Sheladiya, D D; Kshirsagar, S J; Wable, N D; Desale, S S

    2008-01-01

    The purpose of this research was to prepare a sustained release drug delivery system of venlafaxine hydrochloride by using a wax matrix system. The effects of bees wax and carnauba wax on the drug release profile were investigated. A 3(2) full factorial design was applied to systematically optimize the drug release profile. Amounts of carnauba wax (X(1)) and bees wax (X(2)) were selected as independent variables, and release after 12 h and the time required for 50% drug release (t(50)) were selected as dependent variables. A mathematical model was generated for each response parameter. Both waxes retarded release after 12 h and increased the t(50), but bees wax showed a significant influence. The drug release pattern for all the formulation combinations was found to approach the Peppas kinetic model. A suitable combination of the two waxes provided a fairly well regulated release profile. The response surfaces and contour plots for each response parameter are presented for further interpretation of the results. The optimum formulations were chosen and their predicted results were found to be in close agreement with experimental findings.
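
    Since the release data were found to approach the Peppas kinetic model, a quick way to check this is a log-log regression of the fractional release, Mt/Minf = k·t^n, over the early part of the profile. The release values below are invented for illustration and are not the study's data.

```python
import numpy as np

# Hypothetical cumulative fractional release (Mt/Minf) at sampling times (h);
# the Korsmeyer-Peppas fit is normally restricted to the Mt/Minf <= 0.6 region.
t = np.array([0.5, 1.0, 2.0, 3.0, 4.0, 6.0, 8.0])
frac = np.array([0.08, 0.13, 0.21, 0.28, 0.34, 0.45, 0.55])

# Korsmeyer-Peppas model: Mt/Minf = k * t**n  ->  log(frac) = n*log(t) + log(k)
n, log_k = np.polyfit(np.log(t), np.log(frac), 1)
k = np.exp(log_k)
t50 = (0.5 / k) ** (1.0 / n)        # time for 50% release under the fitted model
print(f"release exponent n = {n:.2f}, k = {k:.3f}, estimated t50 = {t50:.1f} h")
```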

  7. Numerical study on optimal Stirling engine regenerator matrix designs taking into account the effects of matrix temperature oscillations

    International Nuclear Information System (INIS)

    Andersen, Stig Kildegard; Carlsen, Henrik; Thomsen, Per Grove

    2006-01-01

    A new regenerator matrix design that improves the efficiency of a Stirling engine has been developed in a numerical study of the existing SM5 Stirling engine. A new, detailed, one-dimensional Stirling engine model that delivers results in good agreement with experimental data was used for mapping the performance of the engine, for mapping the effects of regenerator matrix temperature oscillations, and for optimising the regenerator design. The regenerator matrix temperatures were found to oscillate in two modes. The first mode was oscillation of a nearly linear axial matrix temperature profile while the second mode bended the ends of the axial matrix temperature profile when gas flowed into the regenerator with a temperature significantly different from the matrix temperature. The first mode of oscillation improved the efficiency of the engine but the second mode reduced both the work output and efficiency of the engine. A new regenerator with three differently designed matrix sections that amplified the first mode of oscillation and reduced the second improved the efficiency of the engine from the current 32.9 to 33.2% with a 3% decrease in power output. An efficiency of 33.0% was achievable with uniform regenerator matrix properties

  8. Optimization of the beam shaping assembly in the D-D neutron generators-based BNCT using the response matrix method.

    Science.gov (United States)

    Kasesaz, Y; Khalafi, H; Rahmani, F

    2013-12-01

    Optimization of the Beam Shaping Assembly (BSA) has been performed using the MCNP4C Monte Carlo code to shape the 2.45 MeV neutrons that are produced in the D-D neutron generator. The optimal design of the BSA has been chosen by considering in-air figures of merit (FOM); it consists of 70 cm of Fluental as a moderator, 30 cm of Pb as a reflector, 2 mm of (6)Li as a thermal neutron filter and 2 mm of Pb as a gamma filter. The neutron beam can be evaluated by in-phantom parameters, from which the therapeutic gain can be derived. Direct evaluation of both sets of FOMs (in-air and in-phantom) is very time consuming. In this paper a Response Matrix (RM) method has been suggested to reduce the computing time. This method is based on considering the neutron spectrum at the beam exit and calculating the contribution of the various dose components in the phantom in order to build the Response Matrix. Results show good agreement between direct calculation and the RM method. Copyright © 2013 Elsevier Ltd. All rights reserved.
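
    The response-matrix shortcut described above amounts to precomputing, per energy group of the exit spectrum, the contribution of each in-phantom dose component, so that any candidate BSA spectrum can be folded without a new transport run. The numbers below are placeholders, not MCNP results.

```python
import numpy as np

# Response matrix R[i, g]: in-phantom dose component i per unit fluence in
# energy group g of the beam-exit spectrum (would come from a one-time set of
# Monte Carlo runs; the values here are arbitrary placeholders).
R = np.array([
    [0.02, 0.05, 0.30, 0.80, 1.20],   # thermal-neutron dose component
    [0.90, 0.60, 0.25, 0.10, 0.05],   # fast-neutron dose component
    [0.10, 0.12, 0.15, 0.18, 0.20],   # gamma dose component
])

def in_phantom_doses(exit_spectrum):
    """Fold a candidate exit spectrum (per-group fluence) with the response matrix."""
    return R @ np.asarray(exit_spectrum, dtype=float)

# Two candidate BSA configurations yield two exit spectra (placeholders);
# comparing their folded dose components avoids rerunning full in-phantom transport.
for spectrum in ([0.5, 0.3, 0.1, 0.07, 0.03], [0.2, 0.3, 0.3, 0.15, 0.05]):
    thermal, fast, gamma = in_phantom_doses(spectrum)
    print(f"thermal={thermal:.3f} fast={fast:.3f} gamma={gamma:.3f}")
```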

  9. Robust Portfolio Optimization Using Pseudodistances

    Science.gov (United States)

    2015-01-01

    The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios comparing them with some other portfolios known in literature. PMID:26468948

  10. Robust Portfolio Optimization Using Pseudodistances.

    Science.gov (United States)

    Toma, Aida; Leoni-Aubin, Samuela

    2015-01-01

    The presence of outliers in financial asset returns is a frequently occurring phenomenon which may lead to unreliable mean-variance optimized portfolios. This fact is due to the unbounded influence that outliers can have on the mean returns and covariance estimators that are inputs in the optimization procedure. In this paper we present robust estimators of mean and covariance matrix obtained by minimizing an empirical version of a pseudodistance between the assumed model and the true model underlying the data. We prove and discuss theoretical properties of these estimators, such as affine equivariance, B-robustness, asymptotic normality and asymptotic relative efficiency. These estimators can be easily used in place of the classical estimators, thereby providing robust optimized portfolios. A Monte Carlo simulation study and applications to real data show the advantages of the proposed approach. We study both in-sample and out-of-sample performance of the proposed robust portfolios comparing them with some other portfolios known in literature.
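
    To see where the robust estimators plug in, here is a minimal global-minimum-variance allocation using the closed form w ∝ Σ⁻¹1. The sample covariance below is only a stand-in for the pseudodistance-based estimator proposed in the paper, and the return series is simulated.

```python
import numpy as np

rng = np.random.default_rng(42)
returns = rng.normal(0.0005, 0.01, size=(500, 4))   # simulated daily returns, 4 assets

def min_variance_weights(cov):
    """Global minimum-variance portfolio: w = cov^{-1} 1 / (1' cov^{-1} 1)."""
    ones = np.ones(cov.shape[0])
    w = np.linalg.solve(cov, ones)
    return w / w.sum()

# The classical (outlier-sensitive) estimator; the paper's pseudodistance-based
# robust location/scatter estimates would be swapped in here.
cov_classical = np.cov(returns, rowvar=False)
print("weights:", np.round(min_variance_weights(cov_classical), 3))
```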

  11. Experimental design technique applied to the validation of an instrumental Neutron Activation Analysis procedure

    International Nuclear Information System (INIS)

    Santos, Uanda Paula de M. dos; Moreira, Edson Gonçalves

    2017-01-01

    In this study optimization of procedures and standardization of the Instrumental Neutron Activation Analysis (INAA) method were carried out for the determination of the elements bromine, chlorine, magnesium, manganese, potassium, sodium and vanadium in biological matrix materials using short irradiations at a pneumatic system. 2^k experimental designs were applied for evaluation of the individual contribution of selected variables of the analytical procedure in the final mass fraction result. The chosen experimental designs were the 2^3 and the 2^4, depending on the radionuclide half life. Different certified reference materials and multi-element comparators were analyzed considering the following variables: sample decay time, irradiation time, counting time and sample distance to detector. Comparator concentration, sample mass and irradiation time were maintained constant in this procedure. By means of the statistical analysis and theoretical and experimental considerations, the optimized experimental conditions were determined for the analytical methods that will be adopted for the validation procedure of INAA methods in the Neutron Activation Analysis Laboratory (LAN) of the Research Reactor Center (CRPq) at the Nuclear and Energy Research Institute (IPEN/CNEN-SP). Optimized conditions were estimated based on the results of z-score tests, main effect, interaction effects and better irradiation conditions. (author)
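
    A compact way to see how a 2^k design isolates the individual contribution of each variable: build the ±1 design matrix and compute main effects as the difference of response means at the high and low levels. The responses below are invented, and the factor names merely echo the variables listed in the record.

```python
import itertools
import numpy as np

factors = ["decay_time", "irradiation_time", "counting_time"]   # a 2^3 example
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))

# Hypothetical responses (e.g. mass fractions), one per run, in the same run order.
y = np.array([10.2, 10.8, 10.1, 11.0, 9.9, 10.9, 10.3, 11.2])

# Main effect of a factor = mean(y at +1) - mean(y at -1).
for j, name in enumerate(factors):
    effect = y[design[:, j] == 1].mean() - y[design[:, j] == -1].mean()
    print(f"main effect of {name}: {effect:+.2f}")

# Two-factor interaction effects use the elementwise product of the two columns.
for j, k in itertools.combinations(range(len(factors)), 2):
    col = design[:, j] * design[:, k]
    effect = y[col == 1].mean() - y[col == -1].mean()
    print(f"interaction {factors[j]} x {factors[k]}: {effect:+.2f}")
```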

  12. Experimental design technique applied to the validation of an instrumental Neutron Activation Analysis procedure

    Energy Technology Data Exchange (ETDEWEB)

    Santos, Uanda Paula de M. dos; Moreira, Edson Gonçalves, E-mail: uandapaula@gmail.com, E-mail: emoreira@ipen.br [Instituto de Pesquisas Energéticas e Nucleares (IPEN/CNEN-SP), São Paulo, SP (Brazil)

    2017-07-01

    In this study optimization of procedures and standardization of the Instrumental Neutron Activation Analysis (INAA) method were carried out for the determination of the elements bromine, chlorine, magnesium, manganese, potassium, sodium and vanadium in biological matrix materials using short irradiations at a pneumatic system. 2^k experimental designs were applied for evaluation of the individual contribution of selected variables of the analytical procedure in the final mass fraction result. The chosen experimental designs were the 2^3 and the 2^4, depending on the radionuclide half life. Different certified reference materials and multi-element comparators were analyzed considering the following variables: sample decay time, irradiation time, counting time and sample distance to detector. Comparator concentration, sample mass and irradiation time were maintained constant in this procedure. By means of the statistical analysis and theoretical and experimental considerations, the optimized experimental conditions were determined for the analytical methods that will be adopted for the validation procedure of INAA methods in the Neutron Activation Analysis Laboratory (LAN) of the Research Reactor Center (CRPq) at the Nuclear and Energy Research Institute (IPEN/CNEN-SP). Optimized conditions were estimated based on the results of z-score tests, main effect, interaction effects and better irradiation conditions. (author)

  13. Reduction of Under-Determined Linear Systems by Sparse Block Matrix Technique

    DEFF Research Database (Denmark)

    Tarp-Johansen, Niels Jacob; Poulsen, Peter Noe; Damkilde, Lars

    1996-01-01

    Under-determined linear equation systems occur in different engineering applications. In structural engineering they typically appear when applying the force method. As an example one could mention limit load analysis based on The Lower Bound Theorem. In this application there is a set of under-determined equilibrium equation restrictions in an LP-problem. A significant reduction of computer time spent on solving the LP-problem is achieved if the equilibrium equations are reduced before going into the optimization procedure. Experience has shown that for some structures one must apply full pivoting to ensure numerical stability of the aforementioned reduction. Moreover the coefficient matrix for the equilibrium equations is typically very sparse. The objective is to deal efficiently with the full pivoting reduction of sparse rectangular matrices using a dynamic storage scheme based on the block matrix concept.
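
    A dense illustration of the reduction step (eliminating under-determined equality constraints with full pivoting before entering the LP), using QR with column pivoting in place of the paper's sparse block-matrix scheme; the constraint data below are random placeholders and the matrix is assumed to have full row rank.

```python
import numpy as np
from scipy.linalg import qr

rng = np.random.default_rng(3)
m, n = 4, 10                      # 4 equilibrium equations, 10 unknowns (m < n)
A = rng.normal(size=(m, n))
b = rng.normal(size=m)

# Column-pivoted QR: A[:, perm] = Q @ R with R = [R1 | R2], R1 upper triangular.
Q, R, perm = qr(A, pivoting=True)
R1, R2 = R[:, :m], R[:, m:]

# Express the m "basic" unknowns in terms of the n - m "free" ones:
#   x_basic = R1^{-1} (Q^T b - R2 x_free)
x_free = np.zeros(n - m)          # any value the optimizer may later choose
x_basic = np.linalg.solve(R1, Q.T @ b - R2 @ x_free)

# Reassemble x in the original variable order and verify A x = b.
x = np.empty(n)
x[perm[:m]] = x_basic
x[perm[m:]] = x_free
print("residual:", np.linalg.norm(A @ x - b))
```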

  14. Multidisciplinary Design Optimization for Glass-Fiber Epoxy-Matrix Composite 5 MW Horizontal-Axis Wind-Turbine Blades

    Science.gov (United States)

    Grujicic, M.; Arakere, G.; Pandurangan, B.; Sellappan, V.; Vallejo, A.; Ozen, M.

    2010-11-01

    A multi-disciplinary design-optimization procedure has been introduced and used for the development of cost-effective glass-fiber reinforced epoxy-matrix composite 5 MW horizontal-axis wind-turbine (HAWT) blades. The turbine-blade cost-effectiveness has been defined using the cost of energy (CoE), i.e., a ratio of the three-blade HAWT rotor development/fabrication cost and the associated annual energy production. To assess the annual energy production as a function of the blade design and operating conditions, an aerodynamics-based computational analysis had to be employed. As far as the turbine blade cost is concerned, it is assessed for a given aerodynamic design by separately computing the blade mass and the associated blade-mass/size-dependent production cost. For each aerodynamic design analyzed, structural finite-element-based and post-processing life-cycle assessment analyses were employed in order to determine a minimal blade mass which ensures that the functional requirements pertaining to the quasi-static strength of the blade, fatigue-controlled blade durability and blade stiffness are satisfied. To determine the turbine-blade production cost (for the currently prevailing fabrication process, the wet lay-up), available data regarding the industry manufacturing experience were combined with the attendant blade mass, surface area, and the duration of the assumed production run. The work clearly revealed the challenges associated with simultaneously satisfying the strength, durability and stiffness requirements while maintaining a high level of wind-energy capture efficiency and a lower production cost.

  15. Numerical study on optimal Stirling engine regenerator matrix designs taking into account the effects of matrix temperature oscillations

    DEFF Research Database (Denmark)

    Andersen, Stig Kildegård; Carlsen, Henrik; Thomsen, Per Grove

    2006-01-01

    A new regenerator matrix design that improves the efficiency of a Stirling engine has been developed in a numerical study of the existing SM5 Stirling engine. A new, detailed, one-dimensional Stirling engine model that delivers results in good agreement with experimental data was used for mapping the performance of the engine, for mapping the effects of regenerator matrix temperature oscillations, and for optimising the regenerator design. The regenerator matrix temperatures were found to oscillate in two modes. The first mode was oscillation of a nearly linear axial matrix temperature profile, while the second mode bended the ends of the axial matrix temperature profile when gas flowed into the regenerator with a temperature significantly different from the matrix temperature. The first mode of oscillation improved the efficiency of the engine but the second mode reduced both the work output and the efficiency of the engine.

  16. Snapshot Mueller matrix polarimetry by wavelength polarization coding and application to the study of switching dynamics in a ferroelectric liquid crystal cell.

    Directory of Open Access Journals (Sweden)

    Le Jeune B.

    2010-06-01

    Full Text Available This paper describes a snapshot Mueller matrix polarimeter by wavelength polarization coding. This device is aimed at encoding polarization states in the spectral domain through use of a broadband source and high-order retarders. This allows one to measure a full Mueller matrix from a single spectrum whose acquisition time only depends on the detection system aperture. The theoretical fundamentals of this technique are developed prior to validation by experiments. The setup calibration is described as well as optimization and stabilization procedures. Then, it is used to study, by time-resolved Mueller matrix polarimetry, the switching dynamics in a ferroelectric liquid crystal cell.

  17. Multi-objective Optimization of Departure Procedures at Gimpo International Airport

    Science.gov (United States)

    Kim, Junghyun; Lim, Dongwook; Monteiro, Dylan Jonathan; Kirby, Michelle; Mavris, Dimitri

    2018-04-01

    Most aviation communities have increasing concerns about environmental impacts, which are directly linked to health issues for local residents near the airport. In this study, the environmental impact of different departure procedures was analyzed using the Aviation Environmental Design Tool (AEDT). First, actual operational data were compiled at Gimpo International Airport (March 20, 2017) from an open source. Two modifications were made in the AEDT to better model the operational circumstances, and preliminary AEDT simulations were performed according to the acquired operational procedures. Simulated noise results showed good agreement with noise measurement data at specific locations. Second, a multi-objective optimization of departure procedures was performed for the Boeing 737-800. Four design variables were selected and AEDT was linked to a variety of advanced design methods. The results showed that takeoff thrust had the greatest influence and that fuel burn and noise had an inverse relationship. Two points, representing the fuel burn and noise optima on the Pareto front, were parsed and run in AEDT to compare with the baseline. The results showed that the noise-optimum case reduced the Sound Exposure Level 80-dB noise exposure area by approximately 5%, while the fuel-burn-optimum case reduced total fuel burn by 1% relative to the baseline for aircraft-level analysis.
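
    The step of parsing the fuel-burn and noise optima off the Pareto front can be illustrated with a small non-dominated filter. The candidate points below are invented, not AEDT outputs.

```python
import numpy as np

# Hypothetical candidate departure procedures: (total fuel burn [kg], SEL 80-dB area [km^2]).
candidates = np.array([
    [5200, 31.0], [5150, 33.5], [5300, 29.5], [5100, 36.0],
    [5260, 31.5], [5400, 29.0], [5180, 36.5], [5500, 28.8],
])

def pareto_front(points):
    """Keep points not dominated by any other point (both objectives minimized)."""
    keep = []
    for i, p in enumerate(points):
        dominated = np.any(np.all(points <= p, axis=1) & np.any(points < p, axis=1))
        if not dominated:
            keep.append(i)
    return points[keep]

front = pareto_front(candidates)
fuel_opt = front[np.argmin(front[:, 0])]    # fuel-burn optimum
noise_opt = front[np.argmin(front[:, 1])]   # noise optimum
print("Pareto front:\n", front)
print("fuel optimum:", fuel_opt, "noise optimum:", noise_opt)
```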

  18. Optimization of digitization procedures in cultural heritage preservation

    Science.gov (United States)

    Martínez, Bea; Mitjà, Carles; Escofet, Jaume

    2013-11-01

    The digitization of both volumetric and flat objects is nowadays the preferred method to preserve cultural heritage items. High-quality digital files obtained from photographic plates, films and prints, paintings, drawings, gravures, fabrics and sculptures allow not only a wider diffusion and on-line transmission, but also the preservation of the original items from future handling. Early digitization procedures used scanners for flat opaque or translucent objects and cameras only for volumetric or flat, highly texturized materials. The technical obsolescence of high-end scanners and the improvement achieved by professional cameras have resulted in the wide use of cameras with digital backs to digitize any kind of cultural heritage item. Since the lens, the digital back, the software controlling the camera and the digital image processing provide a wide range of possibilities, it is necessary to standardize the methods used in the reproduction work so as to preserve the properties of the original item as faithfully as possible. This work presents an overview of the methods used for camera system characterization, as well as the best procedures for identifying and counteracting the effect of residual lens aberrations, sensor aliasing, image illumination, color management and image optimization by means of parametric image processing. As a corollary, the work shows some examples of the reproduction workflow applied to the digitization of valuable art pieces and glass plate black-and-white photographic negatives.

  19. The nuclear reaction matrix

    International Nuclear Information System (INIS)

    Krenciglowa, E.M.; Kung, C.L.; Kuo, T.T.S.; Osnes, E. (Department of Physics, State University of New York at Stony Brook, Stony Brook, New York 11794)

    1976-01-01

    Different definitions of the reaction matrix G appropriate to the calculation of nuclear structure are reviewed and discussed. Qualitative physical arguments are presented in support of a two-step calculation of the G-matrix for finite nuclei. In the first step the high-energy excitations are included using orthogonalized plane-wave intermediate states, and in the second step the low-energy excitations are added in, using harmonic oscillator intermediate states. Accurate calculations of G-matrix elements for nuclear structure calculations in the A ≈ 18 region are performed following this procedure and treating the Pauli exclusion operator Q_2p by the method of Tsai and Kuo. The treatment of Q_2p, the effect of the intermediate-state spectrum and the energy dependence of the reaction matrix are investigated in detail. The present matrix elements are compared with various matrix elements given in the literature. In particular, close agreement is obtained with the matrix elements calculated by Kuo and Brown using approximate methods

  20. Optimization of procedures for mercury-203 instrumental neutron activation analysis in human urine

    Energy Technology Data Exchange (ETDEWEB)

    Blotcky, A J; Claassen, J P [Nebraska Univ., Omaha, NE (United States). Medical Center; Fung, Y K [Nebraska Univ., Lincoln, NE (United States). Dept. of Chemistry; Meade, A G; Rack, E P [Nebraska Univ., Lincoln, NE (United States)

    1995-08-01

    Mercury, a known neurotoxin, has been implicated in the etiology and pathogenesis of such disease states as Alzheimer's and Parkinson's diseases. There is concern that the exposure to mercury vapor released from dental amalgam restorations is a potential health hazard. Measurement of mercury concentrations in blood or urine may be useful in the diagnosis of mercury poisoning and in assessing the extent of exposure. This study describes the optimization of pre-neutron activation analysis procedures such as sampling, selection of irradiation and counting vials, and acid digestion in order to minimize mercury loss via volatilization and/or permeation through containers, since the determination of mercury can be complicated by these potential losses. In the optimized procedure 20 mL of urine was spiked with three different concentrations of mercury, digested with concentrated nitric acid, and placed in polypropylene vials for irradiation and counting. Analysis was performed by subtracting the Se-75 photopeak contribution to the 279 keV Hg-203 photopeak and applying the method of standard additions. Urinary mercury concentrations in normal human subjects were determined to be of the order of 10 ng/mL. (author). 22 refs., 1 fig., 5 tabs.
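
    The method of standard additions referenced above reduces to a linear regression of the net Hg-203 signal against the added mercury concentration; the magnitude of the extrapolated x-intercept gives the original urinary concentration. The counts below are fabricated for illustration.

```python
import numpy as np

# Added Hg concentration in the urine aliquot (ng/mL) and net 279 keV Hg-203
# peak counts after the Se-75 correction (hypothetical values).
added = np.array([0.0, 5.0, 10.0, 20.0])
signal = np.array([312.0, 468.0, 610.0, 919.0])

# Fit signal = slope * added + intercept; the unknown concentration is the
# magnitude of the x-intercept: c0 = intercept / slope.
slope, intercept = np.polyfit(added, signal, 1)
c0 = intercept / slope
print(f"estimated urinary Hg: {c0:.1f} ng/mL")
```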

  1. Optimization of procedures for mercury-203 instrumental neutron activation analysis in human urine

    International Nuclear Information System (INIS)

    Blotcky, A.J.; Claassen, J.P.

    1995-01-01

    Mercury, a known neurotoxin, has been implicated in the etiology and pathogenesis of such disease states as Alzheimer's and Parkinson's diseases. There is concern that the exposure to mercury vapor released from dental amalgam restorations is a potential health hazard. Measurement of mercury concentrations in blood or urine may be useful in the diagnosis of mercury poisoning and in assessing the extent of exposure. This study describes the optimization of pre-neutron activation analysis procedures such as sampling, selection of irradiation and counting vials, and acid digestion in order to minimize mercury loss via volatilization and/or permeation through containers, since the determination of mercury can be complicated by these potential losses. In the optimized procedure 20 mL of urine was spiked with three different concentrations of mercury, digested with concentrated nitric acid, and placed in polypropylene vials for irradiation and counting. Analysis was performed by subtracting the Se-75 photopeak contribution to the 279 keV Hg-203 photopeak and applying the method of standard additions. Urinary mercury concentrations in normal human subjects were determined to be of the order of 10 ng/mL. (author). 22 refs., 1 fig., 5 tabs

  2. Optimization procedures in mammography: First results

    International Nuclear Information System (INIS)

    Espana Lopez, M. L.; Marcos de Paz, L.; Martin Rincon, C.; Jerez Sainz, I.; Lopez Franco, M. P.

    2001-01-01

    Optimization procedures in mammography using equipment with a single target/filter combination can be carried out through such diverse factors as target optical density, exposure technique factors, screen-film combination or processing cycle, in order to obtain an image adequate for diagnosis with an acceptable risk-benefit balance. Diverse studies show an increase in the Standardised Detection Rate of invasive carcinomas with an increase in optical density, among other factors. In our hospital an optimisation process has been established and, as a previous step, the target optical density has been increased up to 1.4 OD. The aim of this paper is to assess the impact of the optical density variation both on image quality and on the entrance surface dose and the average dose to the glandular tissue, comparing them with the results obtained in a previous study. The study has been carried out on a sample of 106 patients, with an average age of 53.4 years, considering 212 clinical images corresponding to the two projections of the same breast with an average compressed thickness of 4.86 cm. An increase of 16.6% in the entrance surface dose and 18% in the average dose to the glandular tissue has been recorded. All the clinical images have been evaluated by the physician as adequate for diagnosis. (Author) 16 refs.

  3. Application of Reduced Order Transonic Aerodynamic Influence Coefficient Matrix for Design Optimization

    Science.gov (United States)

    Pak, Chan-gi; Li, Wesley W.

    2009-01-01

    Supporting the Aeronautics Research Mission Directorate guidelines, the National Aeronautics and Space Administration [NASA] Dryden Flight Research Center is developing a multidisciplinary design, analysis, and optimization [MDAO] tool. This tool will leverage existing tools and practices, and allow the easy integration and adoption of new state-of-the-art software. Modern aircraft design at transonic speeds is a challenging task due to the computation time required for unsteady aeroelastic analysis using a Computational Fluid Dynamics [CFD] code. Design approaches in this speed regime are mainly based on manual trial and error. Because of the time required for unsteady CFD computations in the time domain, this considerably slows down the whole design process. These analyses are usually performed repeatedly to optimize the final design. As a result, there is considerable motivation to be able to perform aeroelastic calculations more quickly and inexpensively. This paper describes the development of an unsteady transonic aeroelastic design methodology for design optimization using a reduced modeling method and unsteady aerodynamic approximation. The method requires that the unsteady transonic aerodynamics be represented in the frequency or Laplace domain. A dynamically linear assumption is used for creating Aerodynamic Influence Coefficient [AIC] matrices in the transonic speed regime. Unsteady CFD computations are needed for the important columns of an AIC matrix, which correspond to the primary modes for flutter. Order reduction techniques, such as Guyan reduction and the improved reduction system, are used to reduce the size of the problem; transonic flutter can then be found by classic methods such as rational function approximation, p-k, p, root-locus, etc. Such a methodology could be incorporated into an MDAO tool for design optimization at a reasonable computational cost. The proposed technique is verified using the Aerostructures Test Wing 2 actually designed

  4. New procedure for the examination of the degradation of volatile organonitrogen compounds during the treatment of industrial effluents.

    Science.gov (United States)

    Boczkaj, Grzegorz; Makoś, Patrycja; Fernandes, Andre; Przyjazny, Andrzej

    2017-03-01

    We present a new procedure for the determination of 32 volatile organonitrogen compounds in samples of industrial effluents with a complex matrix. The procedure, based on dispersive liquid-liquid microextraction followed by gas chromatography with nitrogen-phosphorus and mass spectrometric detection, was optimized and validated. Optimization of the extraction included the type of extraction and disperser solvent, disperser solvent volume, pH, salting out effect, extraction, and centrifugation time. The procedure based on nitrogen-phosphorus detection was found to be superior, having lower limits of detection (0.0067-2.29 μg/mL) and quantitation as well as a wider linear range. The developed procedure was applied to the determination of content of volatile organonitrogen compounds in samples of raw effluents from the production of bitumens in which 13 compounds were identified at concentrations ranging from 0.15 to 10.86 μg/mL and in samples of effluents treated by various chemical methods. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Performance evaluation of different types of particle representation procedures of Particle Swarm Optimization in Job-shop Scheduling Problems

    Science.gov (United States)

    Izah Anuar, Nurul; Saptari, Adi

    2016-02-01

    This paper addresses the types of particle representation (encoding) procedures in a population-based stochastic optimization technique for solving scheduling problems in the job-shop manufacturing environment. It intends to evaluate and compare the performance of different particle representation procedures in Particle Swarm Optimization (PSO) in the case of solving Job-shop Scheduling Problems (JSP). Particle representation procedures refer to the mapping between the particle position in PSO and the scheduling solution in JSP. It is an important step to be carried out so that each particle in PSO can represent a schedule in JSP. Three procedures, namely Operation and Particle Position Sequence (OPPS), random keys representation and the random-key encoding scheme, are used in this study. These procedures have been tested on the FT06 and FT10 benchmark problems available in the OR-Library, where the objective is to minimize the makespan; the experiments were implemented in MATLAB. Based on the experimental results, it is discovered that OPPS gives the best performance in solving both benchmark problems. The contribution of this paper is the fact that it demonstrates to practitioners involved in complex scheduling problems that different particle representation procedures can have significant effects on the performance of PSO in solving JSP.
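
    The random-key idea mentioned above maps a continuous particle position to a discrete schedule by sorting: each dimension is a key, and the ranking of the keys yields an operation order. A minimal decoding sketch follows (not the paper's MATLAB code; the problem size is arbitrary).

```python
import numpy as np

n_jobs, n_machines = 3, 2
# A particle position has one continuous key per operation (n_jobs * n_machines keys).
rng = np.random.default_rng(7)
particle = rng.random(n_jobs * n_machines)

# Random-key decoding: sort the keys; each sorted key index modulo n_jobs names the
# job whose next operation is scheduled, so every job appears exactly n_machines times.
operation_order = [int(k % n_jobs) for k in np.argsort(particle)]
print("particle keys :", np.round(particle, 2))
print("decoded order :", operation_order)   # a valid operation-based JSP encoding
```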

  6. Definition of a matrix of the generalized parameters asymmetrical multiphase transmission lines

    Directory of Open Access Journals (Sweden)

    Suslov V.M.

    2005-12-01

    Full Text Available A simple algorithm, which does not require the introduction of wave characteristics, is offered for the definition of the matrix of generalized parameters of asymmetrical multiphase transmission lines. The definition of the parameter matrix is based on the matrix of primary specific (per-unit-length) parameters of the line and a simple iterative procedure. The number of iterations of the iterative procedure is determined by a prescribed error in the matrix ratio between the separate blocks of the determined matrix. This error is closely connected with the margin of error of the determined matrix.

  7. Hierarchical approach to optimization of parallel matrix multiplication on large-scale platforms

    KAUST Repository

    Hasanov, Khalid

    2014-03-04

    © 2014, Springer Science+Business Media New York. Many state-of-the-art parallel algorithms, which are widely used in scientific applications executed on high-end computing systems, were designed in the twentieth century with relatively small-scale parallelism in mind. Indeed, while in 1990s a system with few hundred cores was considered a powerful supercomputer, modern top supercomputers have millions of cores. In this paper, we present a hierarchical approach to optimization of message-passing parallel algorithms for execution on large-scale distributed-memory systems. The idea is to reduce the communication cost by introducing hierarchy and hence more parallelism in the communication scheme. We apply this approach to SUMMA, the state-of-the-art parallel algorithm for matrix–matrix multiplication, and demonstrate both theoretically and experimentally that the modified Hierarchical SUMMA significantly improves the communication cost and the overall performance on large-scale platforms.
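
    The hierarchy idea, reducing communication by grouping processors and summing partial products in two levels, can be mimicked in a serial sketch that partitions the shared dimension into groups and panels. This is a shape-only illustration of the Hierarchical SUMMA scheme, not an MPI implementation.

```python
import numpy as np

def hierarchical_summa_like(A, B, groups=2, blocks_per_group=4):
    """Two-level blocked accumulation over the shared k-dimension of C = A @ B."""
    n = A.shape[1]
    C = np.zeros((A.shape[0], B.shape[1]))
    group_edges = np.linspace(0, n, groups + 1, dtype=int)
    for g in range(groups):                       # outer level: processor groups
        group_partial = np.zeros_like(C)
        lo, hi = group_edges[g], group_edges[g + 1]
        block_edges = np.linspace(lo, hi, blocks_per_group + 1, dtype=int)
        for b in range(blocks_per_group):         # inner level: SUMMA-style panels
            s, e = block_edges[b], block_edges[b + 1]
            group_partial += A[:, s:e] @ B[s:e, :]
        C += group_partial                        # one reduction per group
    return C

rng = np.random.default_rng(0)
A, B = rng.random((64, 96)), rng.random((96, 48))
assert np.allclose(hierarchical_summa_like(A, B), A @ B)
```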

  8. Property-based design: optimization and characterization of polyvinyl alcohol (PVA) hydrogel and PVA-matrix composite for artificial cornea.

    Science.gov (United States)

    Jiang, Hong; Zuo, Yi; Zhang, Li; Li, Jidong; Zhang, Aiming; Li, Yubao; Yang, Xiaochao

    2014-03-01

    Each approach to artificial cornea design works toward the same goal: to develop a material that best mimics the important properties of the natural cornea. Accordingly, the selection and optimization of a corneal substitute should be based on its physicochemical properties. In this study, three types of polyvinyl alcohol (PVA) hydrogels with different polymerization degrees (PVA1799, PVA2499 and PVA2699) were prepared by freeze-thawing techniques. After characterization in terms of transparency, water content, water contact angle, mechanical properties, root-mean-square roughness and protein adsorption behavior, the optimized PVA2499 hydrogel, with properties similar to those of the natural cornea, was selected as the matrix material for the artificial cornea. Based on this, a biomimetic artificial cornea was fabricated with a core-and-skirt structure: a transparent PVA hydrogel core surrounded by a ringed PVA-matrix composite skirt composed of graphite, Fe-doped nano-hydroxyapatite (n-Fe-HA) and PVA hydrogel. Different ratios of graphite/n-Fe-HA can tune the skirt color from dark brown to light brown, which well simulates the iris color of Oriental eyes. Moreover, morphologic and mechanical examination showed that an integrated core-and-skirt artificial cornea was formed from an interpenetrating polymer network, with no phase separation on the interface between the core and the skirt.

  9. Optimization of a Multi-Step Procedure for Isolation of Chicken Bone Collagen

    OpenAIRE

    Cansu, İmran; Boran, Gökhan

    2015-01-01

    Chicken bone is not adequately utilized despite its high nutritional value and protein content. Although not a common raw material, chicken bone can be used in many different ways besides manufacturing of collagen products. In this study, a multi-step procedure was optimized to isolate chicken bone collagen for higher yield and quality for manufacture of collagen products. The chemical composition of chicken bone was 2.9% nitrogen corresponding to about 15.6% protein, 9.5% fat, 14.7% mineral ...

  10. Application of factorial designs and Doehlert matrix in optimization of experimental variables associated with the preconcentration and determination of vanadium and copper in seawater by inductively coupled plasma optical emission spectrometry

    International Nuclear Information System (INIS)

    Ferreira, Sergio L.C.; Queiroz, Adriana S.; Fernandes, Marcelo S.; Santos, Hilda C. dos

    2002-01-01

    In the present paper a procedure for preconcentration and determination of vanadium and copper in seawater using inductively coupled plasma optical emission spectrometry (ICP OES) is proposed, which is based on solid-phase extraction of vanadium (IV), vanadium (V) and copper (II) ions as 1-(2-pyridylazo)-2-naphthol (PAN) complexes by active carbon. The optimization process was carried out using two-level full factorials and Doehlert matrix designs. Four variables (PAN mass, pH, active carbon mass and shaking time) were regarded as factors in the optimization. Results of the two-level full factorial design 2^4 with 16 runs for vanadium extraction, based on the variance analysis (ANOVA), demonstrated that the factors pH and active carbon mass, besides the interaction (pH × active carbon mass), are statistically significant. For copper, the ANOVA revealed that the factors PAN mass, pH and active carbon mass and the interactions (PAN mass × pH) and (pH × active carbon mass) are statistically significant. Doehlert designs were applied in order to determine the optimum conditions for extraction. The procedure proposed allowed the determination of vanadium and copper with detection limits (3σ/S) of 73 and 94 ng l^-1, respectively. The precision, calculated as relative standard deviation (R.S.D.), was 1.22 and 1.37% for 12.50 μg l^-1 of vanadium and copper, respectively. The preconcentration factor was 80. The recovery achieved for determination of vanadium and copper in the presence of several cations demonstrated that this procedure improved the selectivity required for seawater analysis. The procedure was applied to the determination of vanadium and copper in seawater samples collected in Salvador City, Brazil. Results showed good agreement with other data reported in the literature

  11. Fast Bayesian optimal experimental design for seismic source inversion

    KAUST Repository

    Long, Quan

    2015-07-01

    We develop a fast method for optimally designing experiments in the context of statistical seismic source inversion. In particular, we efficiently compute the optimal number and locations of the receivers or seismographs. The seismic source is modeled by a point moment tensor multiplied by a time-dependent function. The parameters include the source location, moment tensor components, and start time and frequency in the time function. The forward problem is modeled by elastodynamic wave equations. We show that the Hessian of the cost functional, which is usually defined as the square of the weighted L2 norm of the difference between the experimental data and the simulated data, is proportional to the measurement time and the number of receivers. Consequently, the posterior distribution of the parameters, in a Bayesian setting, concentrates around the "true" parameters, and we can employ Laplace approximation and speed up the estimation of the expected Kullback-Leibler divergence (expected information gain), the optimality criterion in the experimental design procedure. Since the source parameters span several magnitudes, we use a scaling matrix for efficient control of the condition number of the original Hessian matrix. We use a second-order accurate finite difference method to compute the Hessian matrix and either sparse quadrature or Monte Carlo sampling to carry out numerical integration. We demonstrate the efficiency, accuracy, and applicability of our method on a two-dimensional seismic source inversion problem. © 2015 Elsevier B.V.
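
    Two ingredients of the method, the diagonal scaling that controls the Hessian's condition number and the Laplace/Gaussian approximation of the information gain, can be sketched as follows. The Hessian and prior below are synthetic placeholders, not outputs of the elastodynamic forward model.

```python
import numpy as np

def scale_hessian(H):
    """Diagonal scaling S H S with S = diag(1/sqrt(H_ii)) to tame the condition number."""
    s = 1.0 / np.sqrt(np.diag(H))
    return H * np.outer(s, s), s

def gaussian_kl(mu_post, cov_post, mu_prior, cov_prior):
    """KL divergence between Gaussian posterior and prior (Laplace-approximated gain)."""
    d = mu_prior.size
    inv_prior = np.linalg.inv(cov_prior)
    diff = mu_post - mu_prior
    _, logdet_prior = np.linalg.slogdet(cov_prior)
    _, logdet_post = np.linalg.slogdet(cov_post)
    return 0.5 * (np.trace(inv_prior @ cov_post) + diff @ inv_prior @ diff
                  - d + logdet_prior - logdet_post)

# Synthetic, badly scaled Hessian of the misfit (parameters spanning several magnitudes).
H = np.diag([1e8, 4e2, 9e-2, 2.5e-5]) + 1e-6
H_scaled, s = scale_hessian(H)
print("condition numbers:", np.linalg.cond(H), "->", np.linalg.cond(H_scaled))

# Laplace approximation: posterior covariance ~ inverse Hessian (in original units).
cov_post = np.linalg.inv(H)
cov_prior = np.diag([1e-6, 1e-1, 1e2, 1e6])
mu = np.zeros(4)
print("approximate expected information gain:", gaussian_kl(mu, cov_post, mu, cov_prior))
```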

  12. Fast Bayesian Optimal Experimental Design for Seismic Source Inversion

    KAUST Repository

    Long, Quan

    2016-01-06

    We develop a fast method for optimally designing experiments [1] in the context of statistical seismic source inversion [2]. In particular, we efficiently compute the optimal number and locations of the receivers or seismographs. The seismic source is modeled by a point moment tensor multiplied by a time-dependent function. The parameters include the source location, moment tensor components, and start time and frequency in the time function. The forward problem is modeled by the elastic wave equations. We show that the Hessian of the cost functional, which is usually defined as the square of the weighted L2 norm of the difference between the experimental data and the simulated data, is proportional to the measurement time and the number of receivers. Consequently, the posterior distribution of the parameters, in a Bayesian setting, concentrates around the true parameters, and we can employ Laplace approximation and speed up the estimation of the expected Kullback-Leibler divergence (expected information gain), the optimality criterion in the experimental design procedure. Since the source parameters span several magnitudes, we use a scaling matrix for efficient control of the condition number of the original Hessian matrix. We use a second-order accurate finite difference method to compute the Hessian matrix and either sparse quadrature or Monte Carlo sampling to carry out numerical integration. We demonstrate the efficiency, accuracy, and applicability of our method on a two-dimensional seismic source inversion problem.

  13. Fast Bayesian Optimal Experimental Design for Seismic Source Inversion

    KAUST Repository

    Long, Quan; Motamed, Mohammad; Tempone, Raul

    2016-01-01

    We develop a fast method for optimally designing experiments [1] in the context of statistical seismic source inversion [2]. In particular, we efficiently compute the optimal number and locations of the receivers or seismographs. The seismic source is modeled by a point moment tensor multiplied by a time-dependent function. The parameters include the source location, moment tensor components, and start time and frequency in the time function. The forward problem is modeled by the elastic wave equations. We show that the Hessian of the cost functional, which is usually defined as the square of the weighted L2 norm of the difference between the experimental data and the simulated data, is proportional to the measurement time and the number of receivers. Consequently, the posterior distribution of the parameters, in a Bayesian setting, concentrates around the true parameters, and we can employ Laplace approximation and speed up the estimation of the expected Kullback-Leibler divergence (expected information gain), the optimality criterion in the experimental design procedure. Since the source parameters span several magnitudes, we use a scaling matrix for efficient control of the condition number of the original Hessian matrix. We use a second-order accurate finite difference method to compute the Hessian matrix and either sparse quadrature or Monte Carlo sampling to carry out numerical integration. We demonstrate the efficiency, accuracy, and applicability of our method on a two-dimensional seismic source inversion problem.

  14. Matrix elasticity regulates the optimal cardiac myocyte shape for contractility

    Science.gov (United States)

    McCain, Megan L.; Yuan, Hongyan; Pasqualini, Francesco S.; Campbell, Patrick H.

    2014-01-01

    Concentric hypertrophy is characterized by ventricular wall thickening, fibrosis, and decreased myocyte length-to-width aspect ratio. Ventricular thickening is considered compensatory because it reduces wall stress, but the functional consequences of cell shape remodeling in this pathological setting are unknown. We hypothesized that decreases in myocyte aspect ratio allow myocytes to maximize contractility when the extracellular matrix becomes stiffer due to conditions such as fibrosis. To test this, we engineered neonatal rat ventricular myocytes into rectangles mimicking the 2-D profiles of healthy and hypertrophied myocytes on hydrogels with moderate (13 kPa) and high (90 kPa) elastic moduli. Actin alignment was unaffected by matrix elasticity, but sarcomere content was typically higher on stiff gels. Microtubule polymerization was higher on stiff gels, implying increased intracellular elastic modulus. On moderate gels, myocytes with moderate aspect ratios (∼7:1) generated the most peak systolic work compared with other cell shapes. However, on stiffer gels, low aspect ratios (∼2:1) generated the most peak systolic work. To compare the relative contributions of intracellular vs. extracellular elasticity to contractility, we developed an analytical model and used our experimental data to fit unknown parameters. Our model predicted that matrix elasticity dominates over intracellular elasticity, suggesting that the extracellular matrix may potentially be a more effective therapeutic target than microtubules. Our data and model suggest that myocytes with lower aspect ratios have a functional advantage when the elasticity of the extracellular matrix decreases due to conditions such as fibrosis, highlighting the role of the extracellular matrix in cardiac disease. PMID:24682394

  15. A materials selection procedure for sandwiched beams via parametric optimization with applications in automotive industry

    International Nuclear Information System (INIS)

    Aly, Mohamed F.; Hamza, Karim T.; Farag, Mahmoud M.

    2014-01-01

    Highlights: • Sandwich panels optimization model. • Sandwich panels design procedure. • Study of sandwich panels for automotive vehicle flooring. • Study of sandwich panels for truck cabin exterior. - Abstract: The future of the automotive industry faces many challenges in meeting increasingly strict restrictions on emissions, energy usage and recyclability of components alongside the need to maintain cost competitiveness. Weight reduction through innovative design of components and proper material selection can have a profound impact towards attaining such goals since most of the lifecycle energy usage occurs during the operation phase of a vehicle. In electric and hybrid vehicles, weight reduction has the additional important effect of extending the electric-mode driving range between charging stops or before switching to gasoline mode. This paper adopts parametric models for design optimization and material selection of sandwich panels with the objective of weight and cost minimization subject to structural integrity constraints such as strength, stiffness and buckling resistance. The proposed design procedure employs a pre-compiled library of candidate sandwich panel material combinations, for which optimization of the layered thicknesses is conducted and the best one is reported. Example demonstration studies from the automotive industry are presented for the replacement of Aluminum and Steel panels with polypropylene-filled sandwich panel alternatives

  16. Construction of the exact Fisher information matrix of Gaussian time series models by means of matrix differential rules

    NARCIS (Netherlands)

    Klein, A.A.B.; Melard, G.; Zahaf, T.

    2000-01-01

    The Fisher information matrix is of fundamental importance for the analysis of parameter estimation of time series models. In this paper the exact information matrix of a multivariate Gaussian time series model expressed in state space form is derived. A computationally efficient procedure is used

  17. 1,8-Bis(dimethylamino)naphthalene/9-aminoacridine: A new binary matrix for lipid fingerprinting of intact bacteria by matrix assisted laser desorption ionization mass spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Calvano, C.D., E-mail: cosimadamiana.calvano@uniba.it [Dipartimento di Chimica, Università degli Studi di Bari Aldo Moro, Via Orabona, 4, 70126 Bari (Italy); Monopoli, A.; Ditaranto, N. [Dipartimento di Chimica, Università degli Studi di Bari Aldo Moro, Via Orabona, 4, 70126 Bari (Italy); Palmisano, F. [Dipartimento di Chimica, Università degli Studi di Bari Aldo Moro, Via Orabona, 4, 70126 Bari (Italy); Centro Interdipartimentale di Ricerca S.M.A.R.T., Università degli Studi di Bari Aldo Moro, Via Orabona, 4, 70126 Bari (Italy)

    2013-10-10

    Graphical abstract: -- Highlights: •New binary matrix for less ionizable lipid analysis with no interfering peaks. •Combined MALDI and X-ray photoelectron spectroscopy (XPS) analyses. •Fast lipid fingerprint on Gram positive and Gram negative bacteria by MALDI MS. •Mapping of phospholipids by XPS imaging. •Very fast membrane lipid extraction procedure. -- Abstract: The effectiveness of a novel binary matrix composed of 1,8-bis(dimethylamino)naphthalene (DMAN; proton sponge) and 9-aminoacridine (9AA) for the direct lipid analysis of whole bacterial cells by matrix assisted laser desorption ionization mass spectrometry (MALDI MS) is demonstrated. Deprotonated analyte signals nearly free of matrix-related ions were observed in negative ion mode. The effect of the most important factors (laser energy, pulse voltage, DMAN/9AA ratio, analyte/matrix ratio) was investigated using a Box–Behnken response surface design followed by multi-response optimization in order to simultaneously maximize signal-to-noise (S/N) ratio and resolution. The chemical surface composition of single or mixed matrices was explored by X-ray photoelectron spectroscopy (XPS). Moreover, XPS imaging was used to map the spatial distribution of a model phospholipid in single or binary matrices. The DMAN/9AA binary matrix was then successfully applied to the analysis of intact Gram positive (Lactobacillus sanfranciscensis) or Gram negative (Escherichia coli) microorganisms. About fifty major membrane components (free fatty acids, mono-, di- and tri-glycerides, phospholipids, glycolipids and cardiolipins) were quickly and easily detected over a mass range spanning from ca. 200 to ca. 1600 m/z. Moreover, mass spectra with improved S/N ratio (compared to single matrices), reduced chemical noise and no formation of matrix-clusters were invariably obtained demonstrating the potential of this binary matrix to improve sensitivity.

  18. 1,8-Bis(dimethylamino)naphthalene/9-aminoacridine: A new binary matrix for lipid fingerprinting of intact bacteria by matrix assisted laser desorption ionization mass spectrometry

    International Nuclear Information System (INIS)

    Calvano, C.D.; Monopoli, A.; Ditaranto, N.; Palmisano, F.

    2013-01-01

    Graphical abstract: -- Highlights: •New binary matrix for less ionizable lipid analysis with no interfering peaks. •Combined MALDI and X-ray photoelectron spectroscopy (XPS) analyses. •Fast lipid fingerprint on Gram positive and Gram negative bacteria by MALDI MS. •Mapping of phospholipids by XPS imaging. •Very fast membrane lipid extraction procedure. -- Abstract: The effectiveness of a novel binary matrix composed of 1,8-bis(dimethylamino)naphthalene (DMAN; proton sponge) and 9-aminoacridine (9AA) for the direct lipid analysis of whole bacterial cells by matrix assisted laser desorption ionization mass spectrometry (MALDI MS) is demonstrated. Deprotonated analyte signals nearly free of matrix-related ions were observed in negative ion mode. The effect of the most important factors (laser energy, pulse voltage, DMAN/9AA ratio, analyte/matrix ratio) was investigated using a Box–Behnken response surface design followed by multi-response optimization in order to simultaneously maximize signal-to-noise (S/N) ratio and resolution. The chemical surface composition of single or mixed matrices was explored by X-ray photoelectron spectroscopy (XPS). Moreover, XPS imaging was used to map the spatial distribution of a model phospholipid in single or binary matrices. The DMAN/9AA binary matrix was then successfully applied to the analysis of intact Gram positive (Lactobacillus sanfranciscensis) or Gram negative (Escherichia coli) microorganisms. About fifty major membrane components (free fatty acids, mono-, di- and tri-glycerides, phospholipids, glycolipids and cardiolipins) were quickly and easily detected over a mass range spanning from ca. 200 to ca. 1600 m/z. Moreover, mass spectra with improved S/N ratio (compared to single matrices), reduced chemical noise and no formation of matrix-clusters were invariably obtained demonstrating the potential of this binary matrix to improve sensitivity

  19. Comparison of VFA titration procedures used for monitoring the biogas process

    DEFF Research Database (Denmark)

    Lützhøft, Hans-Christian Holten; Boe, Kanokwan; Fang, Cheng

    2014-01-01

    Titrimetric determination of volatile fatty acids (VFAs) contents is a common way to monitor a biogas process. However, digested manure from co-digestion biogas plants has a complex matrix with high concentrations of interfering components, resulting in varying results when using different … (GC) analysis. Two of the procedures are commonly used in biogas plants and two are discussed in literature. The results showed that the optimal titration results were obtained when 40 mL of four times diluted digested manure was gently stirred (200 rpm). Results from samples with different VFA…

  20. Ethical Matrix Manual

    NARCIS (Netherlands)

    Mepham, B.; Kaiser, M.; Thorstensen, E.; Tomkins, S.; Millar, K.

    2006-01-01

    The ethical matrix is a conceptual tool designed to help decision-makers (as individuals or working in groups) reach sound judgements or decisions about the ethical acceptability and/or optimal regulatory controls for existing or prospective technologies in the field of food and agriculture.

  1. Optimal portfolio strategy with cross-correlation matrix composed by DCCA coefficients: Evidence from the Chinese stock market

    Science.gov (United States)

    Sun, Xuelian; Liu, Zixian

    2016-02-01

    In this paper, a new estimator of the correlation matrix, composed of detrended cross-correlation coefficients (DCCA coefficients), is proposed to improve portfolio optimization. In contrast to Pearson's correlation coefficients (PCC), DCCA coefficients acquired by the detrended cross-correlation analysis (DCCA) method can describe the nonlinear correlation between assets and can be decomposed over different time scales. These properties make DCCA coefficients better suited to improving investment performance and more informative for investigating the scale behavior of portfolios. The minimum variance portfolio (MVP) model and the Mean-Variance (MV) model are used to evaluate the effectiveness of this improvement. A stability analysis shows the effect of the two kinds of correlation matrices on the estimation error of the portfolio weights. The observed scale behaviors are significant for risk management and could be used to optimize portfolio selection.
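
    To make the portfolio step concrete, a small sketch of minimum-variance weights computed from a given correlation matrix; the correlation input stands in for a DCCA-coefficient matrix (whose estimation is not reproduced here) and the numbers are made up.

        # Sketch: minimum-variance portfolio weights w ∝ Σ⁻¹·1 from a correlation
        # matrix and per-asset volatilities; inputs are illustrative placeholders.
        import numpy as np

        def min_variance_weights(corr, vols):
            cov = np.outer(vols, vols) * corr          # Σ = diag(σ) C diag(σ)
            w = np.linalg.solve(cov, np.ones(len(vols)))
            return w / w.sum()                         # normalize to sum to one

        corr = np.array([[1.0, 0.3, 0.1],
                         [0.3, 1.0, 0.2],
                         [0.1, 0.2, 1.0]])             # e.g. a DCCA-coefficient matrix
        vols = np.array([0.20, 0.25, 0.15])
        print(min_variance_weights(corr, vols))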

  2. Closed-form solutions for linear regulator design of mechanical systems including optimal weighting matrix selection

    Science.gov (United States)

    Hanks, Brantley R.; Skelton, Robert E.

    1991-01-01

    Vibration in modern structural and mechanical systems can be reduced in amplitude by increasing stiffness, redistributing stiffness and mass, and/or adding damping if design techniques are available to do so. Linear Quadratic Regulator (LQR) theory in modern multivariable control design attacks the general dissipative elastic system design problem in a global formulation. The optimal design, however, allows electronic connections and phase relations which are not physically practical or possible in passive structural-mechanical devices. The restriction of LQR solutions (to the Algebraic Riccati Equation) to design spaces which can be implemented as passive structural members and/or dampers is addressed. A general closed-form solution to the optimal free-decay control problem is presented which is tailored for structural-mechanical systems. The solution includes, as subsets, special cases such as the Rayleigh Dissipation Function and total energy. Weighting matrix selection is a constrained choice among several parameters to obtain desired physical relationships. The closed-form solution is also applicable to active control design for systems where perfect, collocated actuator-sensor pairs exist.
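
    For context, a standard LQR computation for a small spring-mass model, using SciPy's Riccati solver; the weighting matrices are arbitrary illustrative choices, and the paper's closed-form passive-design solution is not reproduced.

        # Sketch: solve the algebraic Riccati equation for a one-mass oscillator and
        # form the state-feedback gain u = -Kx; all numerical values are illustrative.
        import numpy as np
        from scipy.linalg import solve_continuous_are

        m, k = 1.0, 4.0                                   # mass and stiffness
        A = np.array([[0.0, 1.0], [-k / m, 0.0]])         # states: [position, velocity]
        B = np.array([[0.0], [1.0 / m]])
        Q = np.diag([10.0, 1.0])                          # state weighting (design choice)
        R = np.array([[0.1]])                             # control weighting

        P = solve_continuous_are(A, B, Q, R)              # Riccati solution
        K = np.linalg.solve(R, B.T @ P)                   # optimal gain
        print(K)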

  3. Optimization of institutional streams of green construction using the elements of the theory of matrix games

    Directory of Open Access Journals (Sweden)

    Prokhin Egor Anatol’evich

    2016-10-01

    Full Text Available Under modern conditions, the innovatization of construction is highly necessary, although it is associated with a number of problems that are primarily of institutional origin. The development of green construction in Russia is in its early stages, yet its necessity is growing in line with the trend toward energy efficiency and sustainable development. The innovation process of ecological construction follows a network model and requires optimization, with the aim of further development through an advanced institutional platform. The author proposes a conceptual scheme for an institutional platform of the innovation process of green construction and systematizes the institutional structures involved. The distinctive role of innovation and ecological institutions is substantiated. The author recommends a method for optimizing the institutional interaction of the participants, based on stakeholder theory and the theory of matrix games, aimed at stimulating innovative green technologies. Practical application of the proposed algorithms and methods will increase the efficiency of green construction development.
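
    As an illustration of the matrix-game tool the record refers to, a sketch that solves a two-player zero-sum game by linear programming; the payoff matrix is invented and has no connection to the paper's data.

        # Sketch: optimal mixed strategy for the row (maximizing) player of a zero-sum
        # matrix game, via the standard LP formulation "maximize v s.t. payoffᵀx ≥ v·1".
        import numpy as np
        from scipy.optimize import linprog

        def solve_matrix_game(payoff):
            m, n = payoff.shape
            c = np.zeros(m + 1); c[-1] = -1.0                       # variables [x, v], minimize -v
            A_ub = np.hstack([-payoff.T, np.ones((n, 1))])          # -payoffᵀx + v ≤ 0
            A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])   # Σx = 1
            bounds = [(0, None)] * m + [(None, None)]
            res = linprog(c, A_ub=A_ub, b_ub=np.zeros(n),
                          A_eq=A_eq, b_eq=np.array([1.0]), bounds=bounds)
            return res.x[:m], res.x[-1]                             # strategy, game value

        strategy, value = solve_matrix_game(np.array([[3.0, -1.0], [-2.0, 4.0]]))
        print(strategy, value)                                      # -> [0.6 0.4], 1.0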

  4. An optimization procedure for borehole emplacement in fractured media

    International Nuclear Information System (INIS)

    Billaux, D.; Guerin, F.

    1998-01-01

    Specifying the position and orientation of the 'next borehole(s)' in a fractured medium, from prior incomplete knowledge of the fracture field and depending on the objectives assigned to this new borehole(s), is a crucial point in the iterative process of site characterization. The work described here explicitly includes site knowledge and specific objectives in a tractable procedure that checks possible borehole characteristics, and rates all trial boreholes according to their compliance with objectives. The procedure is based on the following ideas : Firstly, the optimization problem is strongly constrained, since feasible borehole head locations and borehole dips are generally limited. Secondly, a borehole is an 'access point' to the fracture network. Finally, when performing a flow or tracer test, the information obtained through the monitoring system will be best if this system detects the largest possible share of the flow induced by the test, and if it cuts the most 'interesting' flow paths. The optimization is carried out in four steps. 1) All possible borehole configurations are defined and stored. Typically, several hundred possible boreholes are created. Existing boreholes are also specified. 2) Stochastic fracture networks reproducing known site characteristics are generated. 3) A purely geometrical rating of all boreholes is used to select the 'geometrically best' boreholes or groups of boreholes. 4) Among the boreholes selected by the geometrical rating, the best one(s) is chosen by simulating the experiment for which it will be used and checking flowrates through possible boreholes. This method is applied to study the emplacement of a set of five monitoring boreholes prior to the sinking of a shaft for a planned underground laboratory in a granite massif in France (Vienne site). Twelve geometrical parameters are considered for each possible borehole. A detailed statistical study helps decide on the shape of a minimization function. This is then used

  5. POLYMAT-C: a comprehensive SPSS program for computing the polychoric correlation matrix.

    Science.gov (United States)

    Lorenzo-Seva, Urbano; Ferrando, Pere J

    2015-09-01

    We provide a free noncommercial SPSS program that implements procedures for (a) obtaining the polychoric correlation matrix between a set of ordered categorical measures, so that it can be used as input for the SPSS factor analysis (FA) program; (b) testing the null hypothesis of zero population correlation for each element of the matrix by using appropriate simulation procedures; (c) obtaining valid and accurate confidence intervals via bootstrap resampling for those correlations found to be significant; and (d) performing, if necessary, a smoothing procedure that makes the matrix amenable to any FA estimation procedure. For the main purpose (a), the program uses a robust unified procedure that allows four different types of estimates to be obtained at the user's choice. Overall, we hope the program will be a very useful tool for the applied researcher, not only because it provides an appropriate input matrix for FA, but also because it allows the researcher to carefully check the appropriateness of the matrix for this purpose. The SPSS syntax, a short manual, and data files related to this article are available as Supplemental materials that are available for download with this article.
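
    The smoothing step mentioned in (d) can be illustrated with a simple eigenvalue-clipping scheme that makes an indefinite correlation matrix positive definite and restores the unit diagonal; this is a generic sketch, not necessarily the rule implemented in POLYMAT-C.

        # Sketch: smooth an indefinite (e.g. polychoric) correlation matrix by clipping
        # non-positive eigenvalues and rescaling back to unit diagonal.
        import numpy as np

        def smooth_correlation(R, eps=1e-6):
            vals, vecs = np.linalg.eigh((R + R.T) / 2.0)   # symmetrize, then decompose
            vals = np.clip(vals, eps, None)                # lift non-positive eigenvalues
            S = vecs @ np.diag(vals) @ vecs.T
            d = np.sqrt(np.diag(S))
            return S / np.outer(d, d)                      # ones back on the diagonal

        R = np.array([[1.00, 0.90, 0.70],
                      [0.90, 1.00, 0.95],
                      [0.70, 0.95, 1.00]])                 # slightly indefinite example
        print(np.linalg.eigvalsh(smooth_correlation(R)))   # all eigenvalues now positive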

  6. Extensions of Dynamic Programming: Decision Trees, Combinatorial Optimization, and Data Mining

    KAUST Repository

    Hussain, Shahid

    2016-01-01

    This thesis is devoted to the development of extensions of dynamic programming to the study of decision trees. The considered extensions allow us to make multi-stage optimization of decision trees relative to a sequence of cost functions, to count the number of optimal trees, and to study relationships: cost vs cost and cost vs uncertainty for decision trees by construction of the set of Pareto-optimal points for the corresponding bi-criteria optimization problem. The applications include study of totally optimal (simultaneously optimal relative to a number of cost functions) decision trees for Boolean functions, improvement of bounds on complexity of decision trees for diagnosis of circuits, study of time and memory trade-off for corner point detection, study of decision rules derived from decision trees, creation of new procedure (multi-pruning) for construction of classifiers, and comparison of heuristics for decision tree construction. Part of these extensions (multi-stage optimization) was generalized to well-known combinatorial optimization problems: matrix chain multiplication, binary search trees, global sequence alignment, and optimal paths in directed graphs.
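
    One of the combinatorial problems named in the thesis, matrix chain multiplication, admits the classic single-criterion dynamic program sketched below (scalar-multiplication cost only); the thesis's multi-stage, multi-criteria machinery is not reproduced.

        # Sketch: classic O(n^3) dynamic program for matrix chain multiplication.
        # dims[i-1] x dims[i] is the shape of matrix i; returns the minimal number of
        # scalar multiplications.
        def matrix_chain_order(dims):
            n = len(dims) - 1
            cost = [[0] * (n + 1) for _ in range(n + 1)]
            for length in range(2, n + 1):                 # length of the sub-chain
                for i in range(1, n - length + 2):
                    j = i + length - 1
                    cost[i][j] = min(
                        cost[i][k] + cost[k + 1][j] + dims[i - 1] * dims[k] * dims[j]
                        for k in range(i, j)
                    )
            return cost[1][n]

        print(matrix_chain_order([30, 35, 15, 5, 10, 20, 25]))   # 15125 (textbook example)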

  7. Extensions of Dynamic Programming: Decision Trees, Combinatorial Optimization, and Data Mining

    KAUST Repository

    Hussain, Shahid

    2016-07-10

    This thesis is devoted to the development of extensions of dynamic programming to the study of decision trees. The considered extensions allow us to make multi-stage optimization of decision trees relative to a sequence of cost functions, to count the number of optimal trees, and to study relationships: cost vs cost and cost vs uncertainty for decision trees by construction of the set of Pareto-optimal points for the corresponding bi-criteria optimization problem. The applications include study of totally optimal (simultaneously optimal relative to a number of cost functions) decision trees for Boolean functions, improvement of bounds on complexity of decision trees for diagnosis of circuits, study of time and memory trade-off for corner point detection, study of decision rules derived from decision trees, creation of new procedure (multi-pruning) for construction of classifiers, and comparison of heuristics for decision tree construction. Part of these extensions (multi-stage optimization) was generalized to well-known combinatorial optimization problems: matrix chain multiplication, binary search trees, global sequence alignment, and optimal paths in directed graphs.

  8. Closed-form solutions for linear regulator-design of mechanical systems including optimal weighting matrix selection

    Science.gov (United States)

    Hanks, Brantley R.; Skelton, Robert E.

    1991-01-01

    This paper addresses the restriction of Linear Quadratic Regulator (LQR) solutions to the algebraic Riccati Equation to design spaces which can be implemented as passive structural members and/or dampers. A general closed-form solution to the optimal free-decay control problem is presented which is tailored for structural-mechanical systems. The solution includes, as subsets, special cases such as the Rayleigh Dissipation Function and total energy. Weighting matrix selection is a constrained choice among several parameters to obtain desired physical relationships. The closed-form solution is also applicable to active control design for systems where perfect, collocated actuator-sensor pairs exist. Some examples of simple spring mass systems are shown to illustrate key points.

  9. General optimization procedure towards the design of a new family of minimal parameter spin-component-scaled double-hybrid density functional theory.

    Science.gov (United States)

    Roch, Loïc M; Baldridge, Kim K

    2017-10-04

    A general optimization procedure towards the development and implementation of a new family of minimal parameter spin-component-scaled double-hybrid (mSD) density functional theory (DFT) is presented. The nature of the proposed exchange-correlation functional establishes a methodology with minimal empiricism. This new family of double-hybrid (DH) density functionals is demonstrated using the PBEPBE functional, illustrating the optimization procedure to the mSD-PBEPBE method, with the performance characteristics shown for a set of non-covalent complexes covering a broad regime of weak interactions. With only two parameters, mSD-PBEPBE and its cost-effective counterpart, RI-mSD-PBEPBE, show a mean absolute error of ca. 0.4 kcal mol⁻¹ averaged over 66 weakly interacting systems. Following a successive 2D-grid refinement for a CBS extrapolation of the coefficients, the optimization procedure can be recommended for the design and implementation of a variety of additional DH methods using any of the plethora of currently available functionals.

  10. High performance matrix inversion based on LU factorization for multicore architectures

    KAUST Repository

    Dongarra, Jack

    2011-01-01

    The goal of this paper is to present an efficient implementation of an explicit matrix inversion of general square matrices on multicore computer architectures. The inversion procedure is split into four steps: 1) computing the LU factorization, 2) inverting the upper triangular U factor, 3) solving a linear system, whose solution yields the inverse of the original matrix, and 4) applying backward column pivoting on the inverted matrix. Using a tile data layout, which represents the matrix in the system memory with an optimized cache-aware format, the computation of the four steps is decomposed into computational tasks. A directed acyclic graph is generated on the fly which represents the program data flow. Its nodes represent tasks and its edges the data dependencies between them. Previous implementations of matrix inversion, available in state-of-the-art numerical libraries, suffer from unnecessary synchronization points; these are absent in our implementation so that the parallelism of the underlying hardware can be fully exploited. Our algorithmic approach allows these bottlenecks to be removed and the tasks to be executed with loose synchronization. A runtime environment system called QUARK is necessary to dynamically schedule our numerical kernels on the available processing units. The reported results from our LU-based matrix inversion implementation significantly outperform the state-of-the-art numerical libraries such as LAPACK (5x), MKL (5x) and ScaLAPACK (2.5x) on a contemporary AMD platform with four sockets and a total of 48 cores for a matrix of size 24000. A power consumption analysis shows that our high performance implementation is also energy efficient and consumes substantially less power than its competitors. © 2011 ACM.
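
    A much-simplified serial sketch of the same idea, factorize once and solve against the identity, using SciPy; the tile layout, task DAG and QUARK scheduling described in the record are not reproduced.

        # Sketch: explicit inverse via LU factorization (factor once, then solve
        # A·X = I); serial and dense, for illustration only.
        import numpy as np
        from scipy.linalg import lu_factor, lu_solve

        def invert_via_lu(A):
            lu, piv = lu_factor(A)                         # step 1: LU with partial pivoting
            return lu_solve((lu, piv), np.eye(A.shape[0])) # remaining steps folded into solves

        A = np.random.default_rng(0).standard_normal((500, 500))
        Ainv = invert_via_lu(A)
        print(np.allclose(A @ Ainv, np.eye(500), atol=1e-6))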

  11. Table-sized matrix model in fractional learning

    Science.gov (United States)

    Soebagyo, J.; Wahyudin; Mulyaning, E. C.

    2018-05-01

    This article explains a fraction learning model, the Table-Sized Matrix model, in which fraction representation and its operations are symbolized by a matrix. The Table-Sized Matrix is employed, alongside the area model, to develop problem-solving capabilities. The Table-Sized Matrix model referred to in this article is used to develop elementary school students' understanding of the fraction concept, which can then be generalized into procedural fluency (algorithms) for solving fraction problems and operations.

  12. Optimization of photovoltaic energy production through an efficient switching matrix

    Directory of Open Access Journals (Sweden)

    Pietro Romano

    2013-09-01

    Full Text Available This work presents a preliminary study on the implementation of a new system for power output maximization of photovoltaic generators under non-homogeneous conditions. The study evaluates the performance of an efficient switching matrix and the relevant automatic reconfiguration control algorithms. The switching matrix is installed between the PV generator and the inverter, allowing a large number of possible module configurations. The PV generator, the switching matrix and the intelligent controller have been simulated in Simulink. The proposed reconfiguration system improved the energy extracted by the PV generator under non-uniform solar irradiation conditions. Short calculation times of the proposed control algorithms allow their use in real-time applications, even when a larger number of PV modules is required.

  13. Automated procedure for selection of optimal refueling policies for light water reactors

    International Nuclear Information System (INIS)

    Lin, B.I.; Zolotar, B.; Weisman, J.

    1979-01-01

    An automated procedure determining a minimum cost refueling policy has been developed for light water reactors. The procedure is an extension of the equilibrium core approach previously devised for pressurized water reactors (PWRs). Use of 1 1/2-group theory has improved the accuracy of the nuclear model and eliminated tedious fitting of albedos. A simple heuristic algorithm for locating a good starting policy has materially reduced PWR computing time. Inclusion of void effects and use of the Haling principle for axial flux calculations extended the nuclear model to boiling water reactors (BWRs). A good initial estimate of the refueling policy is obtained by recognizing that a nearly uniform distribution of reactivity provides low-power peaking. The initial estimate is improved upon by interchanging groups of four assemblies and is subsequently refined by interchanging individual assemblies. The method yields very favorable results, is simpler than previously proposed BWR fuel optimization schemes, and retains power cost as the objective function

  14. A derived heuristics based multi-objective optimization procedure for micro-grid scheduling

    Science.gov (United States)

    Li, Xin; Deb, Kalyanmoy; Fang, Yanjun

    2017-06-01

    With the availability of different types of power generators to be used in an electric micro-grid system, their operation scheduling as the load demand changes with time becomes an important task. Besides satisfying load balance constraints and the generator's rated power, several other practicalities, such as limited availability of grid power and restricted ramping of power output from generators, must all be considered during the operation scheduling process, which makes it difficult to decide whether the optimization results are accurate and satisfactory. In solving such complex practical problems, heuristics-based customized optimization algorithms are suggested. However, due to nonlinear and complex interactions of variables, it is difficult to come up with heuristics in such problems off-hand. In this article, a two-step strategy is proposed in which the first task deciphers important heuristics about the problem and the second task utilizes the derived heuristics to solve the original problem in a computationally fast manner. Specifically, the specific operation scheduling is considered from a two-objective (cost and emission) point of view. The first task develops basic and advanced level knowledge bases offline from a series of prior demand-wise optimization runs and then the second task utilizes them to modify optimized solutions in an application scenario. Results on island and grid connected modes and several pragmatic formulations of the micro-grid operation scheduling problem clearly indicate the merit of the proposed two-step procedure.

  15. Convex Banding of the Covariance Matrix.

    Science.gov (United States)

    Bien, Jacob; Bunea, Florentina; Xiao, Luo

    2016-01-01

    We introduce a new sparse estimator of the covariance matrix for high-dimensional models in which the variables have a known ordering. Our estimator, which is the solution to a convex optimization problem, is equivalently expressed as an estimator which tapers the sample covariance matrix by a Toeplitz, sparsely-banded, data-adaptive matrix. As a result of this adaptivity, the convex banding estimator enjoys theoretical optimality properties not attained by previous banding or tapered estimators. In particular, our convex banding estimator is minimax rate adaptive in Frobenius and operator norms, up to log factors, over commonly-studied classes of covariance matrices, and over more general classes. Furthermore, it correctly recovers the bandwidth when the true covariance is exactly banded. Our convex formulation admits a simple and efficient algorithm. Empirical studies demonstrate its practical effectiveness and illustrate that our exactly-banded estimator works well even when the true covariance matrix is only close to a banded matrix, confirming our theoretical results. Our method compares favorably with all existing methods, in terms of accuracy and speed. We illustrate the practical merits of the convex banding estimator by showing that it can be used to improve the performance of discriminant analysis for classifying sound recordings.
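
    As a point of reference, a plain banding operation on a sample covariance matrix (zeroing entries beyond a chosen bandwidth) is sketched below; this is the simple estimator the convex method improves upon, not the convex banding estimator itself.

        # Sketch: simple banding of a sample covariance matrix for ordered variables.
        import numpy as np

        def band_covariance(S, bandwidth):
            p = S.shape[0]
            offsets = np.abs(np.subtract.outer(np.arange(p), np.arange(p)))
            return S * (offsets <= bandwidth)                # zero entries outside the band

        rng = np.random.default_rng(1)
        X = rng.standard_normal((200, 10))                   # 200 samples, 10 ordered variables
        S = np.cov(X, rowvar=False)
        print(band_covariance(S, bandwidth=2))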

  16. Design of an X-band accelerating structure using a newly developed structural optimization procedure

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Xiaoxia [Shanghai Institute of Applied Physics, Chinese Academy of Sciences, Shanghai 201800 (China); University of Chinese Academy of Sciences, Beijing 100049 (China); Fang, Wencheng; Gu, Qiang [Shanghai Institute of Applied Physics, Chinese Academy of Sciences, Shanghai 201800 (China); Zhao, Zhentang, E-mail: zhaozhentang@sinap.ac.cn [Shanghai Institute of Applied Physics, Chinese Academy of Sciences, Shanghai 201800 (China); University of Chinese Academy of Sciences, Beijing 100049 (China)

    2017-05-11

    An X-band high gradient accelerating structure is a challenging technology for implementation in advanced electron linear accelerator facilities. The present work discusses the design of an X-band accelerating structure for dedicated application to a compact hard X-ray free electron laser facility at the Shanghai Institute of Applied Physics, and numerous design optimizations are conducted with consideration for radio frequency (RF) breakdown, RF efficiency, short-range wakefields, and dipole/quadrupole field modes, to ensure good beam quality and a high accelerating gradient. The designed X-band accelerating structure is a constant gradient structure with a 4π/5 operating mode and input and output dual-feed couplers in a racetrack shape. The design process employs a newly developed effective optimization procedure for optimization of the X-band accelerating structure. In addition, the specific design of couplers providing high beam quality by eliminating dipole field components and reducing quadrupole field components is discussed in detail.

  17. Hyper-systolic matrix multiplication

    NARCIS (Netherlands)

    Lippert, Th.; Petkov, N.; Palazzari, P.; Schilling, K.

    A novel parallel algorithm for matrix multiplication is presented. It is based on a 1-D hyper-systolic processor abstraction. The procedure can be implemented on all types of parallel systems. © 2001 Elsevier Science B.V. All rights reserved.

  18. On topology optimization of plates with prestress

    DEFF Research Database (Denmark)

    Pedersen, Niels Leergaard

    2001-01-01

    In this work, topology optimization is used to optimize the compliance or eigenvalues of prestressed plates. The prestress is accounted for by including the force equivalent to the prestressing and adding the initial stress stiffness matrix to the original stiffness matrix. The calculation of the sensitivities is complicated because of the initial stress stiffness matrix, but the computational cost can be kept low by using the adjoint method. The topology optimization problem is solved using the solid isotropic material with penalization (SIMP) method in combination with the method of moving asymptotes (MMA)…

  19. Efficient Data Gathering Methods in Wireless Sensor Networks Using GBTR Matrix Completion

    Directory of Open Access Journals (Sweden)

    Donghao Wang

    2016-09-01

    Full Text Available To obtain efficient data gathering methods for wireless sensor networks (WSNs), a novel graph based transform regularized (GBTR) matrix completion algorithm is proposed. The graph based transform sparsity of the sensed data is explored and is used as a penalty term in the matrix completion problem. The proposed GBTR-ADMM algorithm utilizes the alternating direction method of multipliers (ADMM) in an iterative procedure to solve the constrained optimization problem. Since the performance of the ADMM method is sensitive to the number of constraints, the GBTR-A2DM2 algorithm is derived to accelerate the convergence of GBTR-ADMM. GBTR-A2DM2 benefits from merging two constraint conditions into one as well as from using a restart rule. A theoretical analysis shows that the proposed algorithms achieve satisfactory time complexity. Extensive simulation results verify that our proposed algorithms outperform the state-of-the-art algorithms for data collection problems in WSNs with respect to recovery accuracy, convergence rate, and energy consumption.

  20. Porting of the DBCSR library for Sparse Matrix-Matrix Multiplications to Intel Xeon Phi systems

    OpenAIRE

    Bethune, Iain; Gloess, Andeas; Hutter, Juerg; Lazzaro, Alfio; Pabst, Hans; Reid, Fiona

    2017-01-01

    Multiplication of two sparse matrices is a key operation in the simulation of the electronic structure of systems containing thousands of atoms and electrons. The highly optimized sparse linear algebra library DBCSR (Distributed Block Compressed Sparse Row) has been specifically designed to efficiently perform such sparse matrix-matrix multiplications. This library is the basic building block for linear scaling electronic structure theory and low scaling correlated methods in CP2K. It is para...

  1. Analysis of Nonlinear Dynamics by Square Matrix Method

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Li Hua [Brookhaven National Lab. (BNL), Upton, NY (United States). Energy and Photon Sciences Directorate. National Synchrotron Light Source II

    2016-07-25

    The nonlinear dynamics of a system with a periodic structure can be analyzed using a square matrix. In this paper, we show that because of the special properties of the square matrix constructed for nonlinear dynamics, we can reduce the dimension of the matrix from the original large number required for high-order calculation to a low dimension in the first step of the analysis. A stable Jordan decomposition is then obtained in much lower dimension. The transformation to Jordan form provides an excellent action-angle approximation to the solution of the nonlinear dynamics, in good agreement with trajectories and tunes obtained from tracking. More importantly, the deviation from constancy of the new action-angle variable provides a measure of the stability of the phase space trajectories and their tunes. Thus the square matrix provides a novel method to optimize a nonlinear dynamic system. The method is illustrated by many examples of comparison between theory and numerical simulation. Finally, in particular, we show that the square matrix method can be used for optimization to reduce the nonlinearity of a system.

  2. High-dimensional statistical inference: From vector to matrix

    Science.gov (United States)

    Zhang, Anru

    Statistical inference for sparse signals or low-rank matrices in high-dimensional settings is of significant interest in a range of contemporary applications. It has attracted significant recent attention in many fields including statistics, applied mathematics and electrical engineering. In this thesis, we consider several problems including sparse signal recovery (compressed sensing under restricted isometry) and low-rank matrix recovery (matrix recovery via rank-one projections and structured matrix completion). The first part of the thesis discusses compressed sensing and affine rank minimization in both noiseless and noisy cases and establishes sharp restricted isometry conditions for sparse signal and low-rank matrix recovery. The analysis relies on a key technical tool which represents points in a polytope by convex combinations of sparse vectors. The technique is elementary yet leads to sharp results. It is shown that, in compressed sensing, for any ε > 0 the conditions δ_k^A < 1/3 + ε, δ_k^A + θ_{k,k}^A < 1 + ε, or δ_{tk}^A < √((t - 1)/t) + ε are not sufficient to guarantee the exact recovery of all k-sparse signals for large k. A similar result also holds for matrix recovery. In addition, the conditions δ_k^A < 1/3, δ_k^A + θ_{k,k}^A < 1, δ_{tk}^A < √((t - 1)/t) and δ_r^M < 1/3, δ_r^M + θ_{r,r}^M < 1, δ_{tr}^M < √((t - 1)/t) are also shown to be sufficient, respectively, for stable recovery of approximately sparse signals and low-rank matrices in the noisy case. For the second part of the thesis, we introduce a rank-one projection model for low-rank matrix recovery and propose a constrained nuclear norm minimization method for stable recovery of low-rank matrices in the noisy case. The procedure is adaptive to the rank and robust against small perturbations. Both upper and lower bounds for the estimation accuracy under the Frobenius norm loss are obtained. The proposed estimator is shown to be rate-optimal under certain conditions. The
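
    For readers unfamiliar with the notation, the restricted isometry constant δ and the restricted orthogonality constant θ referred to above are the standard quantities below (a reminder of common definitions, not text from the thesis):

        % Restricted isometry constant of a sensing matrix A:
        \delta_k^A = \min\bigl\{\delta \ge 0 :\ (1-\delta)\|x\|_2^2 \le \|Ax\|_2^2 \le (1+\delta)\|x\|_2^2
                     \ \text{for all } k\text{-sparse } x \bigr\}
        % Restricted orthogonality constant for disjointly supported sparse vectors:
        \theta_{k_1,k_2}^A = \max\bigl\{\,|\langle Ax_1, Ax_2\rangle| :\ \|x_1\|_2 = \|x_2\|_2 = 1,\
                     x_1\ k_1\text{-sparse},\ x_2\ k_2\text{-sparse, disjoint supports}\bigr\}
        % Example of a sharp sufficient condition of the type quoted in the abstract:
        \delta_{tk}^A < \sqrt{\tfrac{t-1}{t}} \quad\Longrightarrow\quad
                     \text{exact recovery of all } k\text{-sparse signals (noiseless case)}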

  3. Optimized data fusion for K-means Laplacian clustering

    Science.gov (United States)

    Yu, Shi; Liu, Xinhai; Tranchevent, Léon-Charles; Glänzel, Wolfgang; Suykens, Johan A. K.; De Moor, Bart; Moreau, Yves

    2011-01-01

    Motivation: We propose a novel algorithm to combine multiple kernels and Laplacians for clustering analysis. The new algorithm is formulated on a Rayleigh quotient objective function and is solved as a bi-level alternating minimization procedure. Using the proposed algorithm, the coefficients of kernels and Laplacians can be optimized automatically. Results: Three variants of the algorithm are proposed. The performance is systematically validated on two real-life data fusion applications. The proposed Optimized Kernel Laplacian Clustering (OKLC) algorithms perform significantly better than other methods. Moreover, the coefficients of kernels and Laplacians optimized by OKLC show some correlation with the rank of performance of individual data source. Though in our evaluation the K values are predefined, in practical studies, the optimal cluster number can be consistently estimated from the eigenspectrum of the combined kernel Laplacian matrix. Availability: The MATLAB code of algorithms implemented in this paper is downloadable from http://homes.esat.kuleuven.be/~sistawww/bioi/syu/oklc.html. Contact: shiyu@uchicago.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20980271

  4. A package of procedures for solving optimization problems by the branch-and-bound method

    OpenAIRE

    Nestor, Natalia

    2012-01-01

    The practical aspects of implementing the branch-and-bound method are examined. The structure of a package of procedures for performing the basic operations in solving optimization problems is described. The package is designed as a program kernel that can be used for various exhaustive-search tasks with backtracking.
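
    A generic best-first branch-and-bound skeleton in the spirit of such a kernel is sketched below; the callables branch, bound, value and is_complete are hypothetical problem-specific hooks, not the package's actual interface.

        # Sketch: best-first branch and bound for a minimization problem. The caller
        # supplies how to split a node (branch), a lower bound (bound), the objective
        # of a complete node (value) and a completeness test (is_complete).
        import heapq

        def branch_and_bound(root, branch, bound, value, is_complete):
            best_value, best_solution = float("inf"), None
            heap, counter = [(bound(root), 0, root)], 0
            while heap:
                lb, _, node = heapq.heappop(heap)
                if lb >= best_value:                    # prune: cannot beat the incumbent
                    continue
                if is_complete(node):
                    if value(node) < best_value:
                        best_value, best_solution = value(node), node
                    continue
                for child in branch(node):              # expand and keep promising children
                    clb = bound(child)
                    if clb < best_value:
                        counter += 1                    # tie-breaker for the heap
                        heapq.heappush(heap, (clb, counter, child))
            return best_solution, best_value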

  5. Matrix Krylov subspace methods for image restoration

    Directory of Open Access Journals (Sweden)

    khalide jbilou

    2015-09-01

    Full Text Available In the present paper, we consider matrix Krylov subspace methods for solving ill-posed linear matrix equations, in particular those arising from the restoration of blurred and noisy images. Applying the well-known Tikhonov regularization procedure leads to a Sylvester matrix equation depending on the Tikhonov regularization parameter. We apply the matrix versions of the well-known Krylov subspace methods, namely the least squares (LSQR) and the conjugate gradient (CG) methods, to compute approximate solutions representing the restored images. Some numerical tests are presented to show the effectiveness of the proposed methods.

  6. Optimization Algorithm for Kalman Filter Exploiting the Numerical Characteristics of SINS/GPS Integrated Navigation Systems.

    Science.gov (United States)

    Hu, Shaoxing; Xu, Shike; Wang, Duhu; Zhang, Aiwu

    2015-11-11

    Aiming at addressing the problem of high computational cost of the traditional Kalman filter in SINS/GPS, a practical optimization algorithm with offline-derivation and parallel processing methods based on the numerical characteristics of the system is presented in this paper. The algorithm exploits the sparseness and/or symmetry of matrices to simplify the computational procedure. Thus plenty of invalid operations can be avoided by offline derivation using a block matrix technique. For enhanced efficiency, a new parallel computational mechanism is established by subdividing and restructuring calculation processes after analyzing the extracted "useful" data. As a result, the algorithm saves about 90% of the CPU processing time and 66% of the memory usage needed in a classical Kalman filter. Meanwhile, the method as a numerical approach needs no precise-loss transformation/approximation of system modules and the accuracy suffers little in comparison with the filter before computational optimization. Furthermore, since no complicated matrix theories are needed, the algorithm can be easily transplanted into other modified filters as a secondary optimization method to achieve further efficiency.
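
    As a reference point for the optimization described above, a textbook dense Kalman predict/update step is sketched below with made-up matrices; the offline-derived, sparsity-exploiting version in the paper is not reproduced.

        # Sketch: one predict/update cycle of a standard (dense) Kalman filter.
        import numpy as np

        def kalman_step(x, P, F, Q, H, R, z):
            x_pred = F @ x                                  # predict state
            P_pred = F @ P @ F.T + Q                        # predict covariance
            S = H @ P_pred @ H.T + R                        # innovation covariance
            K = P_pred @ H.T @ np.linalg.inv(S)             # Kalman gain
            x_new = x_pred + K @ (z - H @ x_pred)           # update state
            P_new = (np.eye(len(x)) - K @ H) @ P_pred       # update covariance
            return x_new, P_new

        # Illustrative constant-velocity example (values are placeholders).
        F = np.array([[1.0, 1.0], [0.0, 1.0]]); Q = 0.01 * np.eye(2)
        H = np.array([[1.0, 0.0]]);             R = np.array([[0.5]])
        x, P = kalman_step(np.zeros(2), np.eye(2), F, Q, H, R, z=np.array([1.2]))
        print(x)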

  7. Application of Newton's optimal power flow in voltage/reactive power control

    Energy Technology Data Exchange (ETDEWEB)

    Bjelogrlic, M.; Babic, B.S. (Electric Power Board of Serbia, Belgrade (YU)); Calovic, M.S. (Dept. of Electrical Engineering, University of Belgrade, Belgrade (YU)); Ristanovic, P. (Institute Nikola Tesla, Belgrade (YU))

    1990-11-01

    This paper considers an application of Newton's optimal power flow to the solution of the secondary voltage/reactive power control in transmission networks. An efficient computer program based on the latest achievements in the sparse matrix/vector techniques has been developed for this purpose. It is characterized by good robustness, accuracy and speed. A combined objective function appropriate for various system load levels with suitable constraints, for treatment of the power system security and economy is also proposed. For the real-time voltage/reactive power control, a suboptimal power flow procedure has been derived by using the reduced set of control variables. This procedure is based on the sensitivity theory applied to the determination of zones for the secondary voltage/reactive power control and corresponding reduced set of regulating sources, whose reactive outputs represent control variables in the optimal power flow program. As a result, the optimal power flow program output becomes a schedule to be used by operators in the process of the real-time voltage/reactive power control in both normal and emergency operating states.

  8. Variational Optimization of the Second-Order Density Matrix Corresponding to a Seniority-Zero Configuration Interaction Wave Function.

    Science.gov (United States)

    Poelmans, Ward; Van Raemdonck, Mario; Verstichel, Brecht; De Baerdemacker, Stijn; Torre, Alicia; Lain, Luis; Massaccesi, Gustavo E; Alcoba, Diego R; Bultinck, Patrick; Van Neck, Dimitri

    2015-09-08

    We perform a direct variational determination of the second-order (two-particle) density matrix corresponding to a many-electron system, under a restricted set of the two-index N-representability P-, Q-, and G-conditions. In addition, we impose a set of necessary constraints that the two-particle density matrix must be derivable from a doubly occupied many-electron wave function, i.e., a singlet wave function for which the Slater determinant decomposition only contains determinants in which spatial orbitals are doubly occupied. We rederive the two-index N-representability conditions first found by Weinhold and Wilson and apply them to various benchmark systems (linear hydrogen chains, He, N2, and CN⁻). This work is motivated by the fact that a doubly occupied many-electron wave function captures in many cases the bulk of the static correlation. Compared to the general case, the structure of doubly occupied two-particle density matrices causes the associated semidefinite program to have a very favorable scaling of L^3, where L is the number of spatial orbitals. Since the doubly occupied Hilbert space depends on the choice of the orbitals, variational calculation steps of the two-particle density matrix are interspersed with orbital-optimization steps (based on Jacobi rotations in the space of the spatial orbitals). We also point to the importance of symmetry breaking of the orbitals when performing calculations in a doubly occupied framework.

  9. Optimization of the procedure for the synthesis of calcium lactate pentahydrate in laboratory and semi-industrial conditions

    Directory of Open Access Journals (Sweden)

    Ušćumlić Gordana S.

    2009-01-01

    Full Text Available This paper is concerned with the development of an optimal laboratory procedure for the synthesis of calcium lactate pentahydrate and with the application of the results obtained in the design of a semi-industrial installation for its production. Calcium lactate is used as an additive in numerous food and pharmaceutical products and therefore has to satisfy strict quality requirements. For this reason, the synthesis procedure had to be optimized with respect to the selection of reactants, their molar ratio, the necessary laboratory equipment, the order of reactant addition, the working temperature, the isolation of the final product from the reaction mixture, the yield and the product quality. A semi-industrial installation for the production of calcium lactate pentahydrate was designed on the basis of the results of this investigation. The importance of this investigation arises from the fact that this salt is not produced in Serbia and the complete quantity (about 20 t per year) is imported.

  10. Machining of Metal Matrix Composites

    CERN Document Server

    2012-01-01

    Machining of Metal Matrix Composites provides the fundamentals and recent advances in the study of machining of metal matrix composites (MMCs). Each chapter is written by an international expert in this important field of research. Machining of Metal Matrix Composites gives the reader information on machining of MMCs with a special emphasis on aluminium matrix composites. Chapter 1 provides the mechanics and modelling of chip formation for traditional machining processes. Chapter 2 is dedicated to surface integrity when machining MMCs. Chapter 3 describes the machinability aspects of MMCs. Chapter 4 contains information on traditional machining processes and Chapter 5 is dedicated to the grinding of MMCs. Chapter 6 describes the dry cutting of MMCs with SiC particulate reinforcement. Finally, Chapter 7 is dedicated to computational methods and optimization in the machining of MMCs. Machining of Metal Matrix Composites can serve as a useful reference for academics, manufacturing and materials researchers, manu...

  11. Pointwise second-order necessary optimality conditions and second-order sensitivity relations in optimal control

    Science.gov (United States)

    Frankowska, Hélène; Hoehener, Daniel

    2017-06-01

    This paper is devoted to pointwise second-order necessary optimality conditions for the Mayer problem arising in optimal control theory. We first show that with every optimal trajectory it is possible to associate a solution p(·) of the adjoint system (as in the Pontryagin maximum principle) and a matrix solution W(·) of an adjoint matrix differential equation that satisfy a second-order transversality condition and a second-order maximality condition. These conditions seem to be a natural second-order extension of the maximum principle. We then prove a Jacobson-like necessary optimality condition for general control systems and measurable optimal controls that may be only "partially singular" and may take values on the boundary of control constraints. Finally we investigate the second-order sensitivity relations along optimal trajectories involving both p(·) and W(·).

  12. Effect of type and percentage of reinforcement for optimization of the cutting force in turning of Aluminium matrix nanocomposites using response surface methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Priyadarshi, Devinder [DAV Institute of Engineering and Technology, Jalandhar (India); Sharma, Rajesh Kumar [Institute of Technology, Hamirpur (India)

    2016-03-15

    Aluminium matrix composites (AMCs) now hold a significant share of raw materials in many applications. It is of prime importance to study the machinability of such composites so as to enhance their applicability. Sufficient work has been done for studying the machining of AMCs with particle reinforcements of micron range. This paper presents the study of AMCs with particle reinforcement of under micron range i.e. nanoparticles. This paper brings out the results of an experimental investigation of type and weight percent of nanoparticles on the tangential cutting force during turning operation. SiC, Gr and SiC-Gr (in equal proportions) were used with Al-6061 alloy as the matrix phase. The results indicate that composites with SiC require greater cutting force followed by hybrid and then Gr. Increase in the weight percent also significantly affected the magnitude of cutting force. RSM was used first to design and analyze the experiments and then to optimize the turning process and obtain optimal conditions of weight and type of reinforcements for turning operation.

  13. A TOTP-based enhanced route optimization procedure for mobile IPv6 to reduce handover delay and signalling overhead.

    Science.gov (United States)

    Shah, Peer Azmat; Hasbullah, Halabi B; Lawal, Ibrahim A; Aminu Mu'azu, Abubakar; Tang Jung, Low

    2014-01-01

    Due to the proliferation of handheld mobile devices, multimedia applications like Voice over IP (VoIP), video conferencing, network music, and online gaming are gaining popularity in recent years. These applications are well known to be delay sensitive and resource demanding. The mobility of mobile devices, running these applications, across different networks causes delay and service disruption. Mobile IPv6 was proposed to provide mobility support to IPv6-based mobile nodes for continuous communication when they roam across different networks. However, the Route Optimization procedure in Mobile IPv6 involves the verification of mobile node's reachability at the home address and at the care-of address (home test and care-of test) that results in higher handover delays and signalling overhead. This paper presents an enhanced procedure, time-based one-time password Route Optimization (TOTP-RO), for Mobile IPv6 Route Optimization that uses the concepts of shared secret Token, time based one-time password (TOTP) along with verification of the mobile node via direct communication and maintaining the status of correspondent node's compatibility. The TOTP-RO was implemented in network simulator (NS-2) and an analytical analysis was also made. Analysis showed that TOTP-RO has lower handover delays, packet loss, and signalling overhead with an increased level of security as compared to the standard Mobile IPv6's Return-Routability-based Route Optimization (RR-RO).
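
    The time-based one-time password primitive that TOTP-RO builds on can be sketched as below (an RFC 6238-style generator with a shared secret; the full TOTP-RO signalling between mobile and correspondent nodes is not shown, and the secret is a placeholder).

        # Sketch: RFC 6238-style TOTP from a shared secret token. The mobile node and
        # the correspondent node would each compute this over the shared Token and
        # compare the results during route optimization.
        import hashlib, hmac, struct, time

        def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
            counter = int(time.time()) // step               # time-based counter
            digest = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
            offset = digest[-1] & 0x0F                       # dynamic truncation (RFC 4226)
            code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
            return str(code).zfill(digits)

        print(totp(b"shared-secret-token"))                  # placeholder secret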

  14. A TOTP-Based Enhanced Route Optimization Procedure for Mobile IPv6 to Reduce Handover Delay and Signalling Overhead

    Science.gov (United States)

    Shah, Peer Azmat; Hasbullah, Halabi B.; Lawal, Ibrahim A.; Aminu Mu'azu, Abubakar; Tang Jung, Low

    2014-01-01

    Due to the proliferation of handheld mobile devices, multimedia applications like Voice over IP (VoIP), video conferencing, network music, and online gaming are gaining popularity in recent years. These applications are well known to be delay sensitive and resource demanding. The mobility of mobile devices, running these applications, across different networks causes delay and service disruption. Mobile IPv6 was proposed to provide mobility support to IPv6-based mobile nodes for continuous communication when they roam across different networks. However, the Route Optimization procedure in Mobile IPv6 involves the verification of mobile node's reachability at the home address and at the care-of address (home test and care-of test) that results in higher handover delays and signalling overhead. This paper presents an enhanced procedure, time-based one-time password Route Optimization (TOTP-RO), for Mobile IPv6 Route Optimization that uses the concepts of shared secret Token, time based one-time password (TOTP) along with verification of the mobile node via direct communication and maintaining the status of correspondent node's compatibility. The TOTP-RO was implemented in network simulator (NS-2) and an analytical analysis was also made. Analysis showed that TOTP-RO has lower handover delays, packet loss, and signalling overhead with an increased level of security as compared to the standard Mobile IPv6's Return-Routability-based Route Optimization (RR-RO). PMID:24688398

  15. Performance optimization of Sparse Matrix-Vector Multiplication for multi-component PDE-based applications using GPUs

    KAUST Repository

    Abdelfattah, Ahmad

    2016-05-23

    Simulations of many multi-component PDE-based applications, such as petroleum reservoirs or reacting flows, are dominated by the solution, on each time step and within each Newton step, of large sparse linear systems. The standard solver is a preconditioned Krylov method. Along with application of the preconditioner, memory-bound Sparse Matrix-Vector Multiplication (SpMV) is the most time-consuming operation in such solvers. Multi-species models produce Jacobians with a dense block structure, where the block size can be as large as a few dozen. Failing to exploit this dense block structure vastly underutilizes hardware capable of delivering high performance on dense BLAS operations. This paper presents a GPU-accelerated SpMV kernel for block-sparse matrices. Dense matrix-vector multiplications within the sparse-block structure leverage optimization techniques from the KBLAS library, a high performance library for dense BLAS kernels. The design ideas of KBLAS can be applied to block-sparse matrices. Furthermore, a technique is proposed to balance the workload among thread blocks when there are large variations in the lengths of nonzero rows. Multi-GPU performance is highlighted. The proposed SpMV kernel outperforms existing state-of-the-art implementations using matrices with real structures from different applications. Copyright © 2016 John Wiley & Sons, Ltd.
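
    A CPU-side sketch of the block-sparse idea using SciPy's BSR format is given below; the block size, matrix pattern and values are synthetic, and the GPU/KBLAS kernel itself is not reproduced.

        # Sketch: build a block-sparse matrix (BSR) whose nonzeros are dense blocks,
        # then apply a block-aware sparse matrix-vector product.
        import numpy as np
        from scipy.sparse import bsr_matrix, random as sparse_random

        rng = np.random.default_rng(0)
        block = 4                                            # dense block size (e.g. species count)
        coarse = sparse_random(64, 64, density=0.05, random_state=0, format="csr")
        # Expand each nonzero of the coarse pattern into a (scaled) dense 4x4 block.
        dense = np.kron(coarse.toarray(), rng.standard_normal((block, block)))
        A = bsr_matrix(dense, blocksize=(block, block))

        x = rng.standard_normal(A.shape[1])
        y = A @ x                                            # block-sparse SpMV
        print(A.shape, A.blocksize, y[:4])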

  16. Performance optimization of Sparse Matrix-Vector Multiplication for multi-component PDE-based applications using GPUs

    KAUST Repository

    Abdelfattah, Ahmad; Ltaief, Hatem; Keyes, David E.; Dongarra, Jack

    2016-01-01

    Simulations of many multi-component PDE-based applications, such as petroleum reservoirs or reacting flows, are dominated by the solution, on each time step and within each Newton step, of large sparse linear systems. The standard solver is a preconditioned Krylov method. Along with application of the preconditioner, memory-bound Sparse Matrix-Vector Multiplication (SpMV) is the most time-consuming operation in such solvers. Multi-species models produce Jacobians with a dense block structure, where the block size can be as large as a few dozen. Failing to exploit this dense block structure vastly underutilizes hardware capable of delivering high performance on dense BLAS operations. This paper presents a GPU-accelerated SpMV kernel for block-sparse matrices. Dense matrix-vector multiplications within the sparse-block structure leverage optimization techniques from the KBLAS library, a high performance library for dense BLAS kernels. The design ideas of KBLAS can be applied to block-sparse matrices. Furthermore, a technique is proposed to balance the workload among thread blocks when there are large variations in the lengths of nonzero rows. Multi-GPU performance is highlighted. The proposed SpMV kernel outperforms existing state-of-the-art implementations using matrices with real structures from different applications. Copyright © 2016 John Wiley & Sons, Ltd.

  17. Optimization of instrumental neutron activation analysis method by means of the 2^k experimental design technique aiming at the validation of analytical procedures

    International Nuclear Information System (INIS)

    Petroni, Robson; Moreira, Edson G.

    2013-01-01

    In this study, optimization of procedures and standardization of Instrumental Neutron Activation Analysis (INAA) methods were carried out for the determination of the elements arsenic, chromium, cobalt, iron, rubidium, scandium, selenium and zinc in biological materials. The aim is to validate the analytical methods for future accreditation at the National Institute of Metrology, Quality and Technology (INMETRO). The 2^k experimental design was applied to evaluate the individual contribution of selected variables of the analytical procedure to the final mass fraction result. Samples of Mussel Tissue Certified Reference Material and multi-element standards were analyzed considering the following variables: sample decay time, counting time and sample distance to detector. The standard multi-element concentration (comparator standard), mass of the sample and irradiation time were maintained constant in this procedure. By means of statistical analysis and theoretical and experimental considerations, the optimized experimental conditions were determined for the analytical methods that will be adopted in the validation procedure of INAA methods in the Neutron Activation Analysis Laboratory (LAN) of the Research Reactor Center (CRPq) at the Nuclear and Energy Research Institute (IPEN - CNEN/SP). Optimized conditions were estimated based on the results of z-score tests, main effects and interaction effects. The results obtained with the different experimental configurations were evaluated for accuracy (precision and trueness) for each measurement. (author)
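
    A minimal sketch of how a 2^k screening design and its main effects can be generated; the factor names mirror those in the record, but the response values are invented.

        # Sketch: 2^k full factorial design and main-effect estimation.
        from itertools import product
        import numpy as np

        factors = ["decay_time", "counting_time", "detector_distance"]   # k = 3
        design = np.array(list(product([-1, 1], repeat=len(factors))))   # 2^3 = 8 runs

        # Hypothetical responses (e.g. measured mass fractions), one per run:
        response = np.array([10.2, 10.5, 10.1, 10.6, 10.3, 10.4, 10.2, 10.7])

        # Main effect = mean(response at level +1) - mean(response at level -1)
        for j, name in enumerate(factors):
            effect = response[design[:, j] == 1].mean() - response[design[:, j] == -1].mean()
            print(f"{name}: {effect:+.3f}")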

  18. Random matrix theory and portfolio optimization in Moroccan stock exchange

    Science.gov (United States)

    El Alaoui, Marwane

    2015-09-01

    In this work, we use random matrix theory to analyze eigenvalues and to see whether pertinent information is present by using the Marčenko-Pastur distribution. To this end, we study the cross-correlations among stocks of the Casablanca Stock Exchange. Moreover, we clean the correlation matrix of noisy elements to see whether the gap between predicted risk and realized risk is reduced. We also analyze the distributions of eigenvector components and their degree of deviation by computing the inverse participation ratio. This analysis is a way to understand the correlation structure among the stocks of a Casablanca Stock Exchange portfolio.
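
    The eigenvalue-cleaning step can be sketched as follows, using the Marčenko-Pastur upper edge for i.i.d. returns as the noise threshold; the return series here is synthetic and the flattening rule is one common choice, not necessarily the one used in the paper.

        # Sketch: compare correlation-matrix eigenvalues with the Marčenko-Pastur upper
        # edge λ₊ = (1 + sqrt(N/T))² and flatten the noise band before rebuilding.
        import numpy as np

        def mp_clean(returns):
            T, N = returns.shape
            C = np.corrcoef(returns, rowvar=False)
            lam_plus = (1.0 + np.sqrt(N / T)) ** 2           # MP upper edge (unit variance)
            vals, vecs = np.linalg.eigh(C)
            noise = vals < lam_plus
            vals[noise] = vals[noise].mean()                 # flatten the noise band
            C_clean = vecs @ np.diag(vals) @ vecs.T
            d = np.sqrt(np.diag(C_clean))
            return C_clean / np.outer(d, d)                  # back to a correlation matrix

        returns = np.random.default_rng(2).standard_normal((1000, 50))   # synthetic data
        print(np.linalg.eigvalsh(mp_clean(returns))[-3:])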

  19. Soft Tissue Surgical Procedures for Optimizing Anterior Implant Esthetics

    Science.gov (United States)

    Ioannou, Andreas L.; Kotsakis, Georgios A.; McHale, Michelle G.; Lareau, Donald E.; Hinrichs, James E.; Romanos, Georgios E.

    2015-01-01

    Implant dentistry has been established as a predictable treatment with excellent clinical success to replace missing or nonrestorable teeth. A successful esthetic implant reconstruction is predicated on two fundamental components: the reproduction of the natural tooth characteristics on the implant crown and the establishment of soft tissue housing that will simulate a healthy periodontium. In order for an implant to optimally rehabilitate esthetics, the peri-implant soft tissues must be preserved and/or augmented by means of periodontal surgical procedures. Clinicians who practice implant dentistry should strive to achieve an esthetically successful outcome beyond just osseointegration. Knowledge of a variety of available techniques and proper treatment planning enables the clinician to meet the ever-increasing esthetic demands as requested by patients. The purpose of this paper is to enhance the implant surgeon's rationale and techniques beyond that of simply placing a functional restoration in an edentulous site to a level whereby an implant-supported restoration is placed in reconstructed soft tissue, so the site is indiscernible from a natural tooth. PMID:26124837

  20. Matrix formulation of pebble circulation in the pebbed code

    International Nuclear Information System (INIS)

    Gougar, H.D.; Terry, W.K.; Ougouag, A.M.

    2002-01-01

    The PEBBED technique provides a foundation for equilibrium fuel cycle analysis and optimization in pebble-bed cores in which the fuel elements are continuously flowing and, if desired, recirculating. In addition to the modern analysis techniques used in or being developed for the code, PEBBED incorporates a novel nuclide-mixing algorithm that allows for sophisticated recirculation patterns using a matrix generated from basic core parameters. Derived from a simple partitioning of the pebble flow, the elements of the recirculation matrix are used to compute the spatially averaged density of each nuclide at the entry plane from the nuclide densities of pebbles emerging from the discharge conus. The order of the recirculation matrix is a function of the flexibility and sophistication of the fuel handling mechanism. This formulation for coupling pebble flow and neutronics enables core design and fuel cycle optimization to be performed by the manipulation of a few key core parameters. The formulation is amenable to modern optimization techniques. (author)

  1. Optimizing Travel Time to Outpatient Interventional Radiology Procedures in a Multi-Site Hospital System Using a Google Maps Application.

    Science.gov (United States)

    Mandel, Jacob E; Morel-Ovalle, Louis; Boas, Franz E; Ziv, Etay; Yarmohammadi, Hooman; Deipolyi, Amy; Mohabir, Heeralall R; Erinjeri, Joseph P

    2018-02-20

    The purpose of this study is to determine whether a custom Google Maps application can optimize site selection when scheduling outpatient interventional radiology (IR) procedures within a multi-site hospital system. The Google Maps for Business Application Programming Interface (API) was used to develop an internal web application that uses real-time traffic data to determine the estimated travel time (ETT; minutes) and estimated travel distance (ETD; miles) from a patient's home to each nearby IR facility in our hospital system. Hypothetical patient home addresses based on the 33 cities comprising our institution's catchment area were used to determine the optimal IR site for hypothetical patients traveling from each city based on real-time traffic conditions. For 10/33 (30%) cities, there was discordance between the optimal IR site based on ETT and the optimal IR site based on ETD at non-rush hour or rush hour times. By choosing to travel to an IR site based on ETT rather than ETD, patients from discordant cities were predicted to save an average of 7.29 min during non-rush hour (p = 0.03) and 28.80 min during rush hour. These results suggest that a custom Google Maps application can optimize travel time when more than one location providing IR procedures is available within the same hospital system.

  2. The impact of initialization procedures on unsupervised unmixing of hyperspectral imagery using the constrained positive matrix factorization

    Science.gov (United States)

    Masalmah, Yahya M.; Vélez-Reyes, Miguel

    2007-04-01

    The authors proposed in previous papers the use of the constrained Positive Matrix Factorization (cPMF) to perform unsupervised unmixing of hyperspectral imagery. Two iterative algorithms were proposed to compute the cPMF, based on the Gauss-Seidel and penalty approaches to solving optimization problems. Results presented in previous papers have shown the potential of the proposed method to perform unsupervised unmixing in HYPERION and AVIRIS imagery. The performance of iterative methods is highly dependent on the initialization scheme. A good initialization scheme can improve convergence speed, determine whether or not a global minimum is found, and affect whether spectra with physical relevance are retrieved as endmembers. In this paper, different initializations using random selection, longest-norm pixels, and standard endmember selection routines are studied and compared using simulated and real data.
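    The cPMF code itself is not publicly packaged, so as a stand-in the sketch below uses scikit-learn's standard nonnegative matrix factorization to show how the choice of initialization (random versus NNDSVD-type schemes) changes the objective value and iteration count on synthetic mixed spectra; all data and parameters are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)
# Synthetic "hyperspectral" data flattened to pixels x bands,
# generated from 4 nonnegative endmembers and abundances.
endmembers = rng.random((4, 100))
abundances = rng.random((2000, 4))
X = abundances @ endmembers + 0.01 * rng.random((2000, 100))

for init in ("random", "nndsvd", "nndsvda"):
    model = NMF(n_components=4, init=init, max_iter=500, random_state=0)
    W = model.fit_transform(X)          # abundance-like factors
    H = model.components_               # endmember-like spectra
    print(f"init={init:8s} reconstruction error={model.reconstruction_err_:.4f} "
          f"iterations={model.n_iter_}")
```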

  3. Atomization of Cd in U+Zr matrix after chemical separation using GF-AAS

    International Nuclear Information System (INIS)

    Thulasidas, S.K.; Gupta, Santosh Kumar; Natarajan, V.

    2014-01-01

    Studies on the direct atomization of Cd in a U+Zr matrix were carried out, and the effects of matrix composition and matrix concentration on the analyte absorbance were investigated. Development of a method using graphite furnace atomic absorption spectrometry (GF-AAS) for the determination of Cd is required for FBR fuel (U+20%Zr) materials. It was reported that the absorbance signal for Cd is reduced by 50% at 20 mg/mL of U and 10 mg/mL of Zr matrix as compared to matrix-free solution. To use the method for U+Zr mixed oxide samples, the effect of varying composition of Zr in the U+Zr mixed matrix was studied. The results indicated that the Cd absorbance signal remained unaffected in the range 0-40% Zr in (U+Zr) mixed matrix with 20 mg/mL total matrix. Based on these studies, an analytical method was developed for the direct determination of Cd with 20% Zr in 20 mg/mL of U+Zr solution with optimized experimental parameters. The range of analysis was found to be 0.005-0.1 μg/mL for Cd with 20 mg/mL matrix; this leads to a detection limit of 0.25 ppm. To meet the specification limit of 0.1 ppm for Cd, it was necessary to separate the matrix from the sample using a solvent extraction method. It was reported that with 30% TBP + 70% CCl4 in 7M HNO3, a selective simultaneous extraction of U and Zr into the organic phase can be achieved. In the present studies, the same extraction procedure was used with a 100 mg U+Zr sample. The effect of U+Zr in the raffinate on Cd was also estimated. To validate the method, the extracted aqueous samples were also analyzed independently by ICP-AES (SPECTRO ARCOS SOP) and the results were compared. It was seen that Cd estimation by ICP-AES was likewise not affected in the presence of 10-50 μg/mL U+Zr.

  4. Matrix-product-state method with local basis optimization for nonequilibrium electron-phonon systems

    Science.gov (United States)

    Heidrich-Meisner, Fabian; Brockt, Christoph; Dorfner, Florian; Vidmar, Lev; Jeckelmann, Eric

    We present a method for simulating the time evolution of quasi-one-dimensional correlated systems with strongly fluctuating bosonic degrees of freedom (e.g., phonons) using matrix product states. For this purpose we combine the time-evolving block decimation (TEBD) algorithm with a local basis optimization (LBO) approach. We discuss the performance of our approach in comparison to TEBD with a bare boson basis, exact diagonalization, and diagonalization in a limited functional space. TEBD with LBO can reduce the computational cost by orders of magnitude when boson fluctuations are large and thus it allows one to investigate problems that are out of reach of other approaches. First, we test our method on the non-equilibrium dynamics of a Holstein polaron and show that it allows us to study the regime of strong electron-phonon coupling. Second, the method is applied to the scattering of an electronic wave packet off a region with electron-phonon coupling. Our study reveals a rich physics including transient self-trapping and dissipation. Supported by Deutsche Forschungsgemeinschaft (DFG) via FOR 1807.
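    The core of the local basis optimization idea can be sketched in a few lines: diagonalize the reduced density matrix of one bosonic site and keep its leading eigenvectors as the optimized local basis. The toy state below is random and the routine is only a conceptual sketch, not the TEBD-LBO implementation of the paper.

```python
import numpy as np

def optimal_local_basis(psi, site_dim, keep):
    """Return the `keep` leading eigenvectors of the reduced density matrix
    of one local (bosonic) site, i.e. the optimized local basis.

    `psi` is a normalized state vector reshaped as (site_dim, rest_dim).
    """
    psi = psi.reshape(site_dim, -1)
    rho_local = psi @ psi.conj().T                 # local reduced density matrix
    w, v = np.linalg.eigh(rho_local)               # eigenvalues in ascending order
    order = np.argsort(w)[::-1]
    return v[:, order[:keep]], w[order[:keep]]     # basis columns and their weights

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    n_boson, n_rest = 30, 8                        # bare boson cutoff x rest of the chain
    psi = rng.standard_normal(n_boson * n_rest)
    psi /= np.linalg.norm(psi)
    basis, weights = optimal_local_basis(psi, n_boson, keep=5)
    print(basis.shape, weights)                    # (30, 5) and the 5 largest weights
```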

  5. A Two Stage Solution Procedure for Production Planning System with Advance Demand Information

    Science.gov (United States)

    Ueno, Nobuyuki; Kadomoto, Kiyotaka; Hasuike, Takashi; Okuhara, Koji

    We model the ‘Naiji System’, a cooperation technique unique to manufacturers and suppliers in Japan. We propose a two-stage solution procedure for a production planning problem with advance demand information, which is called ‘Naiji’. Under demand uncertainty, this model is formulated as a nonlinear stochastic programming problem which minimizes the sum of production cost and inventory holding cost subject to a probabilistic constraint and some linear production constraints. Based on the convexity and the special structure of the correlation matrix in the problem, where inventory for different periods is not independent, we propose a two-stage solution procedure comprising the Mass Customization Production Planning & Management System (MCPS) and Variable Mesh Neighborhood Search (VMNS), based on meta-heuristics. It is shown that the proposed solution procedure obtains a near-optimal solution efficiently and is practical for making a good master production schedule for the suppliers.

  6. A TOTP-Based Enhanced Route Optimization Procedure for Mobile IPv6 to Reduce Handover Delay and Signalling Overhead

    Directory of Open Access Journals (Sweden)

    Peer Azmat Shah

    2014-01-01

    Full Text Available Due to the proliferation of handheld mobile devices, multimedia applications like Voice over IP (VoIP), video conferencing, network music, and online gaming are gaining popularity in recent years. These applications are well known to be delay sensitive and resource demanding. The mobility of mobile devices, running these applications, across different networks causes delay and service disruption. Mobile IPv6 was proposed to provide mobility support to IPv6-based mobile nodes for continuous communication when they roam across different networks. However, the Route Optimization procedure in Mobile IPv6 involves the verification of the mobile node's reachability at the home address and at the care-of address (home test and care-of test), which results in higher handover delays and signalling overhead. This paper presents an enhanced procedure, time-based one-time password Route Optimization (TOTP-RO), for Mobile IPv6 Route Optimization that uses the concepts of a shared secret Token and time-based one-time passwords (TOTP), along with verification of the mobile node via direct communication and maintaining the status of the correspondent node's compatibility. TOTP-RO was implemented in the network simulator NS-2 and an analytical analysis was also made. Analysis showed that TOTP-RO has lower handover delays, packet loss, and signalling overhead with an increased level of security as compared to the standard Mobile IPv6 Return-Routability-based Route Optimization (RR-RO).
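    For context, the time-based one-time password primitive (RFC 6238 style) that TOTP-RO builds on can be generated with Python's standard library as below; the shared Token, time step and digit count are illustrative assumptions, and this sketch does not reproduce the TOTP-RO signalling itself.

```python
import hmac
import hashlib
import struct
import time

def totp(shared_secret, timestep=30, digits=6, t=None):
    """Time-based one-time password: HOTP over a time-derived counter."""
    counter = int((time.time() if t is None else t) // timestep)
    msg = struct.pack(">Q", counter)                      # 8-byte big-endian counter
    digest = hmac.new(shared_secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = (struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)

if __name__ == "__main__":
    token = b"shared secret between MN and CN"            # hypothetical shared Token
    # Mobile node and correspondent node compute the same value within a time step,
    # so reachability can be checked without the full home/care-of test round trips.
    print(totp(token))
```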

  7. Cleaning the correlation matrix with a denoising autoencoder

    OpenAIRE

    Hayou, Soufiane

    2017-01-01

    In this paper, we use an adjusted autoencoder to estimate the true eigenvalues of the population correlation matrix from the sample correlation matrix when the number of samples is small. We show that the model outperforms the Rotational Invariant Estimator (Bouchaud) which is the optimal estimator in the sample eigenvectors basis when the dimension goes to infinity.

  8. Optimization of instrumental neutron activation analysis method by means of 2^k experimental design technique aiming the validation of analytical procedures

    Energy Technology Data Exchange (ETDEWEB)

    Petroni, Robson; Moreira, Edson G., E-mail: rpetroni@ipen.br, E-mail: emoreira@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2013-07-01

    In this study, optimization of procedures and standardization of Instrumental Neutron Activation Analysis (INAA) methods were carried out for the determination of the elements arsenic, chromium, cobalt, iron, rubidium, scandium, selenium and zinc in biological materials. The aim is to validate the analytical methods for future accreditation at the National Institute of Metrology, Quality and Technology (INMETRO). The 2^k experimental design was applied to evaluate the individual contribution of selected variables of the analytical procedure to the final mass fraction result. Samples of Mussel Tissue Certified Reference Material and multi-element standards were analyzed considering the following variables: sample decay time, counting time and sample-to-detector distance. The multi-element standard concentration (comparator standard), sample mass and irradiation time were kept constant in this procedure. By means of statistical analysis and theoretical and experimental considerations, the optimized experimental conditions were determined for the analytical methods that will be adopted for the validation procedure of INAA methods in the Neutron Activation Analysis Laboratory (LAN) of the Research Reactor Center (CRPq) at the Nuclear and Energy Research Institute (IPEN - CNEN/SP). Optimized conditions were estimated based on the results of z-score tests, main effects and interaction effects. The results obtained with the different experimental configurations were evaluated for accuracy (precision and trueness) for each measurement. (author)
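    A minimal sketch of how main and two-factor interaction effects are estimated from a full 2^3 design in coded (-1/+1) units is given below; the three factors mirror the variables studied above, but the response values are invented for illustration only.

```python
import numpy as np
from itertools import product

# Coded levels (-1, +1) for the three studied variables:
# decay time, counting time, sample-to-detector distance.
runs = np.array(list(product([-1, 1], repeat=3)))      # 2^3 = 8 runs
# Hypothetical measured mass fractions (arbitrary units), one per run.
y = np.array([10.2, 10.8, 9.9, 10.5, 11.4, 12.1, 11.0, 11.9])

labels = ["decay", "count", "dist"]
# Main effect: mean response at +1 minus mean response at -1.
for j, name in enumerate(labels):
    effect = y[runs[:, j] == 1].mean() - y[runs[:, j] == -1].mean()
    print(f"main effect {name:6s}: {effect:+.3f}")

# Two-factor interaction effects use the product of the coded columns.
for a in range(3):
    for b in range(a + 1, 3):
        sign = runs[:, a] * runs[:, b]
        effect = y[sign == 1].mean() - y[sign == -1].mean()
        print(f"interaction {labels[a]}x{labels[b]}: {effect:+.3f}")
```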

  9. Library designs for generic C++ sparse matrix computations of iterative methods

    Energy Technology Data Exchange (ETDEWEB)

    Pozo, R.

    1996-12-31

    A new library design is presented for generic sparse matrix C++ objects for use in iterative algorithms and preconditioners. This design extends previous work on C++ numerical libraries by providing a framework in which efficient algorithms can be written *independent* of the matrix layout or format. That is, rather than supporting different codes for each (element type) / (matrix format) combination, only one version of the algorithm need be maintained. This not only reduces the effort for library developers, but also simplifies the calling interface seen by library users. Furthermore, the underlying matrix library can be naturally extended to support user-defined objects, such as hierarchical block-structured matrices, or application-specific preconditioners. By utilizing optimized kernels whenever possible, the performance of such a framework can be shown to be competitive with optimized Fortran programs.
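    Although the abstract concerns a C++ design, the idea of writing an iterative algorithm once, independent of the matrix storage format, can be illustrated with SciPy, where the same conjugate-gradient call accepts different sparse formats or a matrix-free operator; this is a conceptual analogue, not the library described.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import cg, LinearOperator

n = 200
# 1D Laplacian: symmetric positive definite tridiagonal matrix.
A_csr = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)

# Same algorithm, different "formats": CSR, CSC, and a matrix-free operator.
A_csc = A_csr.tocsc()

def matvec(x):
    y = 2.0 * x
    y[:-1] -= x[1:]
    y[1:] -= x[:-1]
    return y

A_free = LinearOperator((n, n), matvec=matvec)

for name, A in [("csr", A_csr), ("csc", A_csc), ("matrix-free", A_free)]:
    x, info = cg(A, b)
    print(f"{name:12s} converged={info == 0} residual={np.linalg.norm(A_csr @ x - b):.2e}")
```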

  10. Study on the ultrasonic inspection method using the full matrix capture for the in service railway wheel

    Energy Technology Data Exchange (ETDEWEB)

    Peng, Jianping; Wang, Li; Zhang, Yu; Gao, Xiaorong; Wang, Zeyong; Peng, Chaoyong [NDT Research Center, School of Physical Science and Technology, Southwest Jiaotong University, Chengdu 610031 (China)

    2014-02-18

    The quality of wheels is especially important for the safety of high-speed railways. In this paper, a new ultrasonic array inspection method, full matrix capture (FMC), is studied and applied to high-speed railway wheel inspection, in particular inspection of the wheel web from the tread. Firstly, the principles of FMC and the total focusing method (TFM) algorithm are discussed, and a new optimization is applied to the standard FMC. Secondly, the fundamentals of the optimization are described in detail and its performance is analyzed. Finally, an experiment was carried out with a standard phased array block and a railway wheel, and the testing results are discussed and analyzed. It is demonstrated that this modification of the ultrasonic data acquisition and image reconstruction has higher efficiency and lower cost compared to the standard FMC procedure.
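    A minimal sketch of the total focusing method (TFM) reconstruction that typically follows FMC acquisition is shown below: for every image pixel, each transmit-receive A-scan is sampled at the transmit-to-pixel-to-receive time of flight and summed. The array geometry, sampling parameters and noise-only data are assumptions for illustration and do not implement the optimized scheme of the paper.

```python
import numpy as np

def tfm_image(fmc, elem_x, fs, c, grid_x, grid_z):
    """Total focusing method on full-matrix-capture data.

    fmc      : (n_elem, n_elem, n_samples) array of A-scans, fmc[tx, rx, :]
    elem_x   : x-positions of the array elements (m), elements at z = 0
    fs, c    : sampling frequency (Hz) and sound speed (m/s)
    grid_x/z : 1D arrays defining the image grid (m)
    """
    n_elem, _, n_samp = fmc.shape
    X, Z = np.meshgrid(grid_x, grid_z)                     # image pixels
    # Distance from every element to every pixel.
    d = np.sqrt((X[None, :, :] - elem_x[:, None, None]) ** 2 + Z[None, :, :] ** 2)
    image = np.zeros_like(X)
    for tx in range(n_elem):
        for rx in range(n_elem):
            tof = (d[tx] + d[rx]) / c                      # transmit + receive path
            idx = np.clip(np.round(tof * fs).astype(int), 0, n_samp - 1)
            image += fmc[tx, rx, idx]                      # delay-and-sum
    return np.abs(image)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n_elem, n_samp, fs, c = 16, 2000, 50e6, 5900.0
    fmc = 0.01 * rng.standard_normal((n_elem, n_elem, n_samp))   # noise-only demo data
    elem_x = (np.arange(n_elem) - n_elem / 2) * 0.6e-3
    img = tfm_image(fmc, elem_x, fs, c,
                    np.linspace(-10e-3, 10e-3, 81), np.linspace(1e-3, 40e-3, 121))
    print(img.shape)
```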

  11. Numerical solution of quadratic matrix equations for free vibration analysis of structures

    Science.gov (United States)

    Gupta, K. K.

    1975-01-01

    This paper is concerned with the efficient and accurate solution of the eigenvalue problem represented by quadratic matrix equations. Such matrix forms are obtained in connection with the free vibration analysis of structures, discretized by finite 'dynamic' elements, resulting in frequency-dependent stiffness and inertia matrices. The paper presents a new numerical solution procedure of the quadratic matrix equations, based on a combined Sturm sequence and inverse iteration technique enabling economical and accurate determination of a few required eigenvalues and associated vectors. An alternative procedure based on a simultaneous iteration procedure is also described when only the first few modes are the usual requirement. The employment of finite dynamic elements in conjunction with the presently developed eigenvalue routines results in a most significant economy in the dynamic analysis of structures.
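    A standard route to the quadratic eigenvalue problem (lambda^2 M + lambda C + K) x = 0 is linearization to a generalized eigenproblem of twice the size; the sketch below uses SciPy for that textbook approach, not the combined Sturm sequence and inverse iteration procedure of the paper, and the matrices are illustrative.

```python
import numpy as np
from scipy.linalg import eig

def quadratic_eig(M, C, K):
    """Solve (lam**2 * M + lam * C + K) x = 0 via companion linearization."""
    n = M.shape[0]
    I = np.eye(n)
    Z = np.zeros((n, n))
    # First companion form: A z = lam * B z with z = [x, lam*x].
    A = np.block([[Z, I],
                  [-K, -C]])
    B = np.block([[I, Z],
                  [Z, M]])
    lam, z = eig(A, B)
    return lam, z[:n, :]          # eigenvalues and the x-part of the eigenvectors

if __name__ == "__main__":
    # Small mass/damping/stiffness example (values are illustrative only).
    M = np.diag([2.0, 1.0, 3.0])
    K = np.array([[4.0, -1.0, 0.0], [-1.0, 3.0, -1.0], [0.0, -1.0, 2.0]])
    C = 0.05 * K                                   # light proportional damping
    lam, X = quadratic_eig(M, C, K)
    residual = max(np.linalg.norm((l ** 2 * M + l * C + K) @ X[:, i])
                   for i, l in enumerate(lam))
    print("max residual:", residual)
```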

  12. Accelerated Stochastic Matrix Inversion: General Theory and Speeding up BFGS Rules for Faster Second-Order Optimization

    KAUST Repository

    Gower, Robert M.

    2018-02-12

    We present the first accelerated randomized algorithm for solving linear systems in Euclidean spaces. One essential problem of this type is the matrix inversion problem. In particular, our algorithm can be specialized to invert positive definite matrices in such a way that all iterates (approximate solutions) generated by the algorithm are positive definite matrices themselves. This opens the way for many applications in the field of optimization and machine learning. As an application of our general theory, we develop the first accelerated (deterministic and stochastic) quasi-Newton updates. Our updates lead to provably more aggressive approximations of the inverse Hessian, and to speed-ups over classical non-accelerated rules in numerical experiments. Experiments with empirical risk minimization show that our rules can accelerate training of machine learning models.
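    As background for the accelerated rules, the classical (non-accelerated) BFGS inverse-Hessian update they compete with can be written in a few lines; the quadratic test problem and exact line search below are illustrative assumptions.

```python
import numpy as np

def bfgs_inverse_hessian_update(H, s, y):
    """Classical BFGS update of an inverse-Hessian approximation H,
    with s = x_{k+1} - x_k and y = grad f(x_{k+1}) - grad f(x_k); requires s @ y > 0."""
    rho = 1.0 / (y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

if __name__ == "__main__":
    # On a quadratic f(x) = 0.5 x^T A x with exact line search,
    # the updates drive H toward A^{-1}.
    rng = np.random.default_rng(4)
    Q = rng.standard_normal((5, 5))
    A = Q @ Q.T + 5.0 * np.eye(5)
    H, x = np.eye(5), rng.standard_normal(5)
    for _ in range(20):
        g = A @ x
        if np.linalg.norm(g) < 1e-12:
            break
        d = -H @ g
        alpha = -(g @ d) / (d @ A @ d)     # exact line search for the quadratic
        s = alpha * d
        x_new = x + s
        y = A @ x_new - g                  # gradient difference
        H = bfgs_inverse_hessian_update(H, s, y)
        x = x_new
    print("||H - inv(A)|| =", np.linalg.norm(H - np.linalg.inv(A)))
```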

  13. Procedure-specific pain management and outcome strategies

    DEFF Research Database (Denmark)

    Joshi, Girish P; Schug, Stephan A; Kehlet, Henrik

    2014-01-01

    Optimal dynamic pain relief is a prerequisite for optimizing post-operative recovery and reducing morbidity and convalescence. The procedure-specific pain management initiative aims to overcome the limitations of conventional guidelines and provide health-care professionals with practical recommendations formulated in a way that facilitates clinical decision making across all stages of the perioperative period. The procedure-specific evidence is supplemented with data from other similar surgical procedures and clinical practices to balance the benefits and risks of each analgesic technique, and measures such as optimizing fluid therapy and post-operative nursing care with early mobilization and oral feeding are utilized.

  14. Robust alignment of chromatograms by statistically analyzing the shifts matrix generated by moving window fast Fourier transform cross-correlation.

    Science.gov (United States)

    Zhang, Mingjing; Wen, Ming; Zhang, Zhi-Min; Lu, Hongmei; Liang, Yizeng; Zhan, Dejian

    2015-03-01

    Retention time shift is one of the most challenging problems during the preprocessing of massive chromatographic datasets. Here, an improved version of the moving window fast Fourier transform cross-correlation algorithm is presented to perform nonlinear and robust alignment of chromatograms by analyzing the shifts matrix generated by the moving window procedure. The shifts matrix in retention time can be estimated by fast Fourier transform cross-correlation with a moving window procedure. The refined shift of each scan point can be obtained by calculating the mode of the corresponding column of the shifts matrix. This version is simple, but more effective and robust than the previously published moving window fast Fourier transform cross-correlation method. It can handle nonlinear retention time shift robustly if a proper window size has been selected. The window size is the only parameter that needs to be adjusted and optimized. The properties of the proposed method are investigated by comparison with the previous moving window fast Fourier transform cross-correlation and recursive alignment by fast Fourier transform using chromatographic datasets. The pattern recognition results of a gas chromatography mass spectrometry dataset of metabolic syndrome can be improved significantly after preprocessing by this method. Furthermore, the proposed method is available as an open source package at https://github.com/zmzhang/MWFFT2. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
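    The two ingredients of the shifts-matrix idea, estimating a window's shift from the peak of an FFT cross-correlation and taking the mode over the resulting shifts, can be sketched as follows; the synthetic chromatogram, window size and step are assumptions for illustration, not the published MWFFT2 implementation.

```python
import numpy as np

def fft_xcorr_shift(reference, segment):
    """Signed shift such that np.roll(segment, shift) best matches reference,
    estimated from the peak of the circular FFT cross-correlation."""
    n = len(reference)
    xcorr = np.fft.ifft(np.fft.fft(reference) * np.conj(np.fft.fft(segment))).real
    lag = int(np.argmax(xcorr))
    return lag if lag <= n // 2 else lag - n       # map to a signed shift

def windowed_shifts(reference, sample, window, step):
    """One row of the shifts matrix: a shift estimate for each moving window."""
    return np.array([fft_xcorr_shift(reference[s:s + window], sample[s:s + window])
                     for s in range(0, len(reference) - window + 1, step)])

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    t = np.arange(4000)
    peaks = np.arange(300, 3800, 350)                               # synthetic peak positions
    reference = np.exp(-((t[:, None] - peaks) / 30.0) ** 2).sum(axis=1)
    sample = np.roll(reference, 12) + 0.01 * rng.standard_normal(t.size)  # shifted chromatogram
    shifts = windowed_shifts(reference, sample, window=600, step=200)
    values, counts = np.unique(shifts, return_counts=True)
    print("per-window shifts:", shifts, "-> mode:", values[np.argmax(counts)])
```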

  15. Application of Box-Behnken design to optimize multi-sorbent solid phase extraction for trace neonicotinoids in water containing high level of matrix substances.

    Science.gov (United States)

    Zhang, Junjie; Wei, Yanli; Li, Huizhen; Zeng, Eddy Y; You, Jing

    2017-08-01

    Extensive use of neonicotinoid insecticides has raised great concerns about their ecological risk. A reliable method to measure trace neonicotinoids in a complicated aquatic environment is a premise for assessing their aquatic risk. To effectively remove matrix interfering substances from field water samples before instrumental analysis with HPLC/MS/MS, a multi-sorbent solid phase extraction method was developed using Box-Behnken design. The optimized method employed 200 mg HLB/GCB (w/w, 8/2) as the sorbents and 6 mL of 20% acetone in acetonitrile as the elution solution. The method was applied for measuring neonicotinoids in water at a wide range of concentrations (0.03-100 μg/L) containing various amounts of matrix components. The recoveries of acetamiprid, imidacloprid, thiacloprid and thiamethoxam from the spiked samples ranged from 76.3% to 107%, while clothianidin and dinotefuran had relatively lower recoveries. The recoveries of neonicotinoids in water with various amounts of matrix interfering substances were comparable and the matrix removal rates were approximately 50%. The method was sensitive, with method detection limits in the range of 1.8-6.8 ng/L for all target neonicotinoids. Finally, the developed method was validated by measurement of trace neonicotinoids in natural water. Copyright © 2017 Elsevier B.V. All rights reserved.

  16. Survival comparison of the Ross procedure and mechanical valve replacement with optimal self-management anticoagulation therapy: propensity-matched cohort study.

    Science.gov (United States)

    Mokhles, M Mostafa; Körtke, Heinrich; Stierle, Ulrich; Wagner, Otto; Charitos, Efstratios I; Bogers, Ad J J C; Gummert, Jan; Sievers, Hans-Hinrich; Takkenberg, Johanna J M

    2011-01-04

    It is suggested that in young adults the Ross procedure results in better late patient survival compared with mechanical prosthesis implantation. We performed a propensity score-matched study that assessed late survival in young adult patients after a Ross procedure versus that after mechanical aortic valve replacement with optimal self-management anticoagulation therapy. We selected 918 Ross patients and 406 mechanical valve patients 18 to 60 years of age without dissection, aneurysm, or mitral valve replacement who survived an elective procedure (1994 to 2008). With the use of propensity score matching, late survival was compared between the 2 groups. Two hundred fifty-three patients with a mechanical valve (mean follow-up, 6.3 years) could be propensity matched to a Ross patient (mean follow-up, 5.1 years). Mean age of the matched cohort was 47.3 years in the Ross procedure group and 48.0 years in the mechanical valve group (P=0.17); the ratio of male to female patients was 3.2 in the Ross procedure group and 2.7 in the mechanical valve group (P=0.46). Linearized all-cause mortality rate was 0.53% per patient-year in the Ross procedure group compared with 0.30% per patient-year in the mechanical valve group (matched hazard ratio, 1.86; 95% confidence interval, 0.58 to 5.91; P=0.32). Late survival was comparable to that of the general German population. In comparable patients, there is no late survival difference in the first postoperative decade between the Ross procedure and mechanical aortic valve implantation with optimal anticoagulation self-management. Survival in these selected young adult patients closely resembles that of the general population, possibly as a result of highly specialized anticoagulation self-management, better timing of surgery, and improved patient selection in recent years.

  17. Developing Optimal Procedure of Emergency Outside Cooling Water Injection for APR1400 Extended SBO Scenario Using MARS Code

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Jong Rok; Oh, Seung Jong [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of)

    2013-10-15

    In this study, we examined optimum operator actions to mitigate an extended SBO using the MARS code. In particular, this paper focuses on analyzing an outside core cooling water injection scenario, with the aim of developing an optimal extended SBO procedure. Supplying outside emergency cooling water is the key feature of the flexible strategy in an extended SBO situation. An optimum strategy to maintain core cooling is developed for a typical extended SBO. The MARS APR1400 best estimate model was used to find the optimal procedure. The effect of RCP seal leakage was also given particular consideration. The recent Fukushima accident shows the importance of mitigation capability against extended SBO scenarios. In Korea, all nuclear power plants incorporated various measures against Fukushima-like events. For the APR1400 NPP, outside connectors are installed to inject cooling water using fire trucks or portable pumps. Using these connectors, outside cooling water can be provided to the reactor, steam generators (SG), containment spray system, and spent fuel pool. In the U.S., a similar approach is chosen to provide a diverse and flexible means to prevent fuel damage (core and SFP) in external event conditions resulting in extended loss of AC power and loss of ultimate heat sink. Hence, the hardware necessary to cope with an extended SBO is already available for the APR1400. However, considering the complex and stressful conditions encountered by operators during an extended SBO, it is important to develop guidelines/procedures to best cope with the event.

  18. An alternative approach to KP hierarchy in matrix models

    International Nuclear Information System (INIS)

    Bonora, L.; Xiong, C.S.

    1992-01-01

    We show that there exists an alternative procedure in order to extract differential hierarchies, such as the KdV hierarchy, from one-matrix models, without taking a continuum limit. To prove this we introduce the Toda lattice and reformulate it in operator form. We then consider the reduction to the systems appropriate for a one-matrix model. (orig.)

  19. Improving the ensemble-optimization method through covariance-matrix adaptation

    NARCIS (Netherlands)

    Fonseca, R.M.; Leeuwenburgh, O.; Hof, P.M.J. van den; Jansen, J.D.

    2015-01-01

    Ensemble optimization (referred to throughout the remainder of the paper as EnOpt) is a rapidly emerging method for reservoir-model-based production optimization. EnOpt uses an ensemble of controls to approximate the gradient of the objective function with respect to the controls. Current
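    The basic EnOpt gradient approximation referred to above can be sketched as the cross-covariance between an ensemble of perturbed controls and their objective values; the toy objective and step size below are assumptions, and the sketch omits the covariance-matrix adaptation that is the subject of the paper.

```python
import numpy as np

def enopt_gradient(objective, u, n_ensemble=50, sigma=0.1, rng=None):
    """EnOpt-style gradient estimate at controls `u`:
    cross-covariance between perturbed controls and their objective values."""
    rng = rng or np.random.default_rng()
    U = u + sigma * rng.standard_normal((n_ensemble, u.size))   # control ensemble
    J = np.array([objective(ui) for ui in U])
    dU = U - U.mean(axis=0)
    dJ = J - J.mean()
    return dU.T @ dJ / (n_ensemble - 1)          # cross-covariance C_uJ

if __name__ == "__main__":
    # Toy "production optimization": maximize a smooth concave objective.
    target = np.array([0.3, 0.7, 0.5])
    objective = lambda u: -np.sum((u - target) ** 2)
    u = np.zeros(3)
    rng = np.random.default_rng(6)
    for _ in range(200):
        g = enopt_gradient(objective, u, rng=rng)
        u = u + 2.0 * g                          # simple steepest-ascent update
    print("controls:", np.round(u, 3), "objective:", objective(u))
```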

  20. Co-movements among financial stocks and covariance matrix analysis

    OpenAIRE

    Sharifi, Saba

    2003-01-01

    The major theories of finance leading into the main body of this research are discussed, and our experiments on studying the risk and co-movements among stocks are presented. This study leads to the application of Random Matrix Theory (RMT). The idea of this theory refers to the importance of the empirically measured correlation (or covariance) matrix, C, in finance and particularly in the theory of optimal portfolios. However, this matrix has recently come into question, as a large part of ...

  1. Optimization in the utility maximization framework for conservation planning: a comparison of solution procedures in a study of multifunctional agriculture

    Science.gov (United States)

    Kreitler, Jason R.; Stoms, David M.; Davis, Frank W.

    2014-01-01

    Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management.
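    The gap between a greedy heuristic and the exact optimum for a budget-constrained utility maximization can be reproduced on a toy instance as below; parcel costs and utilities are random stand-ins, and brute force replaces the open-source integer programming solver used in the study.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)
n, budget = 15, 30.0
cost = rng.uniform(1, 10, n)                      # cost of conserving each parcel
utility = rng.uniform(1, 10, n)                   # multi-criteria benefit score

def total(sel, values):
    return sum(values[i] for i in sel)

# Greedy heuristic: pick parcels by benefit-to-cost ratio until the budget is spent.
greedy, spent = [], 0.0
for i in sorted(range(n), key=lambda i: utility[i] / cost[i], reverse=True):
    if spent + cost[i] <= budget:
        greedy.append(i)
        spent += cost[i]

# Exact optimum by brute force (fine for 15 parcels; an ILP solver scales further).
best_u = 0.0
for k in range(n + 1):
    for sel in combinations(range(n), k):
        if total(sel, cost) <= budget and total(sel, utility) > best_u:
            best_u = total(sel, utility)

gain = 100 * (best_u - total(greedy, utility)) / total(greedy, utility)
print(f"greedy utility={total(greedy, utility):.1f}  optimal={best_u:.1f}  gain={gain:.1f}%")
```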

  2. Optimization of a method based on micro-matrix solid-phase dispersion (micro-MSPD) for the determination of PCBs in mussel samples

    Directory of Open Access Journals (Sweden)

    Nieves Carro

    2017-03-01

    Full Text Available This paper reports the development and optimization of micro-matrix solid-phase dispersion (micro-MSPD) of nine polychlorinated biphenyls (PCBs) in mussel samples (Mytilus galloprovincialis) by using a two-level factorial design. Four variables (amount of sample, anhydrous sodium sulphate, Florisil and solvent volume) were considered as factors in the optimization process. The results suggested that only the interaction between the amount of anhydrous sodium sulphate and the solvent volume was statistically significant for the overall recovery of a trichlorinated compound, CB 28. Generally most of the considered species exhibited a similar behaviour: the sample and Florisil amounts had a positive effect on PCB extraction, and solvent volume and sulphate amount had a negative effect. The analytical determination and confirmation of PCBs were carried out by using GC-ECD and GC-MS/MS, respectively. The method was validated, having satisfactory precision and accuracy with RSD values below 6% and recoveries between 81 and 116% for all congeners. The optimized method was applied to the extraction of real mussel samples from two Galician Rías.

  3. Computational Characterization of Type I collagen-based Extra-cellular Matrix

    Science.gov (United States)

    Liang, Long; Jones, Christopher Allen Rucksack; Lin, Daniel; Jiao, Yang; Sun, Bo

    2015-03-01

    A model of the extracellular matrix (ECM) of collagen fibers has been built, in which cells can communicate with distant partners via fiber-mediated, long-range-transmitted stress states. The ECM is modeled as a spring-like fiber network derived from skeletonized confocal microscopy data. Different local and global perturbations have been performed on the network, each followed by an optimized global Monte-Carlo (MC) energy minimization leading to the deformed network in response to the perturbations. In the optimization, a highly efficient local energy update procedure is employed and force-directed MC moves are used, which results in convergence to the energy minimum state 20 times faster than with the commonly used random displacement trial moves in MC. Further analysis and visualization of the distribution and correlation of the resulting force network reveal that local perturbations can give rise to global impacts: force chains form with a linear extent much greater than the characteristic length scale associated with the perturbation sites and the average fiber length. This behavior provides strong evidence for our hypothesis of fiber-mediated long-range force transmission in ECM networks and the resulting long-range cell-cell mechanical signaling. ASU Seed Grant.

  4. Matrix-reinforcement reactivity in P/M titanium matrix composites

    International Nuclear Information System (INIS)

    Amigo, V.; Romero, F.; Salvador, M. D.; Busquets, D.

    2007-01-01

    The high reactivity of titanium and its tendency to form intermetallics make it difficult to obtain composites with this material and, in any case, require coating of the principal fibres used as reinforcement. Obtaining titanium composites reinforced with ceramic particles is proposed in this paper; for this reason, it is fundamental to evaluate the reactivity between the matrix and the reinforcement. Both titanium nitride and carbide (TiN and TiC) are investigated as materials of low reactivity, whereas titanium silicide (TiSi2) is also studied as a material of higher reactivity, as already stated by the scientific community. This reactivity is analysed by means of scanning electron microscopy (SEM), obtaining distribution maps of the elements that allow the possible influence of the sintering temperature and time to be established. In this way the matrix-reinforcement interactions are optimized to obtain suitable mechanical properties. (Author) 39 refs

  5. Optimal control of two coupled spinning particles in the Euler–Lagrange picture

    International Nuclear Information System (INIS)

    Delgado-Téllez, M; Ibort, A; Peña, T Rodríguez de la; Salmoni, R

    2016-01-01

    A family of optimal control problems for a single and two coupled spinning particles in the Euler–Lagrange formalism is discussed. A characteristic of such problems is that the equations controlling the system are implicit and a reduction procedure to deal with them must be carried out. The reduction of the implicit control equations arising in these problems will be discussed in the slightly more general setting of implicit equations defined by invariant one-forms on Lie groups. As an example the first order differential equations describing the extremal solutions of an optimal control problem for a single spinning particle, obtained by using Pontryagin’s Maximum Principle (PMP), will be found and shown to be completely integrable. Then, again using PMP, solutions for the problem of two coupled spinning particles will be characterized as solutions of a system of coupled non-linear matrix differential equations. The reduction of the implicit system will show that the reduced space for them is the product of the space of states for the independent systems, implying the absence of ‘entanglement’ in this instance. Finally, it will be shown that, in the case of identical systems, the degree three matrix polynomial differential equations determined by the optimal feedback law, constitute a completely integrable Hamiltonian system and some of its solutions are described explicitly. (paper)

  6. A pilot study of short-duration sputum pretreatment procedures for optimizing smear microscopy for tuberculosis.

    Directory of Open Access Journals (Sweden)

    Peter Daley

    2009-05-01

    Full Text Available Direct sputum smear microscopy for tuberculosis (TB) lacks sensitivity for the detection of acid fast bacilli. Sputum pretreatment procedures may enhance sensitivity. We did a pilot study to compare the diagnostic accuracy and incremental yield of two short-duration (<1 hour) sputum pretreatment procedures to optimize direct smears among patients with suspected TB at a referral hospital in India. Blinded laboratory comparison of bleach and universal sediment processing (USP) pretreated centrifuged auramine smears to direct Ziehl-Neelsen (ZN) and direct auramine smears and to solid (Lowenstein-Jensen, LJ) and liquid (BACTEC 460) culture. 178 pulmonary and extrapulmonary TB suspects were prospectively recruited during a one year period. Thirty six (20.2%) were positive by either solid or liquid culture. Direct ZN smear detected 22 of 36 cases and direct auramine smears detected 26 of 36 cases. Bleach and USP centrifugation detected 24 cases each, providing no incremental yield beyond direct smears. When compared to combined culture, pretreated smears were not more sensitive than direct smears (66.6% vs 61.1% (ZN) or 72.2% (auramine)), and were not more specific (92.3% vs 93.0% (ZN) or 97.2% (auramine)). Short-duration sputum pretreatment with bleach and USP centrifugation did not increase yield as compared to direct sputum smears. Further work is needed to confirm this in a larger study and also to determine whether longer duration pre-treatment might be effective in optimizing smear microscopy for TB.

  7. Topology optimization under stochastic stiffness

    Science.gov (United States)

    Asadpoure, Alireza

    Topology optimization is a systematic computational tool for optimizing the layout of materials within a domain for engineering design problems. It allows variation of structural boundaries and connectivities. This freedom in the design space often enables discovery of new, high performance designs. However, solutions obtained by performing the optimization in a deterministic setting may be impractical or suboptimal when considering real-world engineering conditions with inherent variabilities including (for example) variabilities in fabrication processes and operating conditions. The aim of this work is to provide a computational methodology for topology optimization in the presence of uncertainties associated with structural stiffness, such as uncertain material properties and/or structural geometry. Existing methods for topology optimization under deterministic conditions are first reviewed. Modifications are then proposed to improve the numerical performance of the so-called Heaviside Projection Method (HPM) in continuum domains. Next, two approaches, perturbation and Polynomial Chaos Expansion (PCE), are proposed to account for uncertainties in the optimization procedure. These approaches are intrusive, allowing tight and efficient coupling of the uncertainty quantification with the optimization sensitivity analysis. The work herein develops a robust topology optimization framework aimed at reducing the sensitivity of optimized solutions to uncertainties. The perturbation-based approach combines deterministic topology optimization with a perturbation method for the quantification of uncertainties. The use of perturbation transforms the problem of topology optimization under uncertainty to an augmented deterministic topology optimization problem. The PCE approach combines the spectral stochastic approach for the representation and propagation of uncertainties with an existing deterministic topology optimization technique. The resulting compact representations

  8. On the design of 1-3 piezo-composites using topology optimization

    DEFF Research Database (Denmark)

    Sigmund, Ole; Torquato, S.; Aksay, I.A.

    1998-01-01

    We use a topology optimization method to design 1-3 piezocomposites with optimal performance characteristics for hydrophone applications. The performance characteristics we focus on are the hydrostatic charge coefficient d_h^*, the hydrophone figure of merit d_h^* g_h^*, and the electromechanical coupling factor k_h^*. The piezocomposite consists of piezoelectric rods embedded in an optimal polymer matrix. We use the topology optimization method to design the optimal (porous) matrix microstructure. When we design for maximum d_h^* and d_h^* g_h^*, the optimal transversally ...

  9. Patient Dose Optimization in Fluoroscopically Guided Interventional Procedures. Final Report of a Coordinated Research Project

    International Nuclear Information System (INIS)

    2010-01-01

    In recent years, many surgical procedures have increasingly been replaced by interventional procedures that guide catheters into the arteries under X ray fluoroscopic guidance to perform a variety of operations such as ballooning, embolization, implantation of stents etc. The radiation exposure to patients and staff in such procedures is much higher than in simple radiographic examinations like X ray of chest or abdomen such that radiation induced skin injuries to patients and eye lens opacities among workers have been reported in the 1990's and after. Interventional procedures have grown both in frequency and importance during the last decade. This Coordinated Research Project (CRP) and TECDOC were developed within the International Atomic Energy Agency's (IAEA) framework of statutory responsibility to provide for the worldwide application of the standards for the protection of people against exposure to ionizing radiation. The CRP took place between 2003 and 2005 in six countries, with a view of optimizing the radiation protection of patients undergoing interventional procedures. The Fundamental Safety Principles and the International Basic Safety Standards for Protection against Ionizing Radiation (BSS) issued by the IAEA and co-sponsored by the Food and Agriculture Organization of the United Nations (FAO), the International Labour Organization (ILO), the World Health Organization (WHO), the Pan American Health Organization (PAHO) and the Nuclear Energy Agency (NEA), among others, require the radiation protection of patients undergoing medical exposures through justification of the procedures involved and through optimization. In keeping with its responsibility on the application of standards, the IAEA programme on Radiological Protection of Patients encourages the reduction of patient doses. To facilitate this, it has issued specific advice on the application of the BSS in the field of radiology in Safety Reports Series No. 39 and the three volumes on Radiation

  10. Simple expression for the quantum Fisher information matrix

    Science.gov (United States)

    Šafránek, Dominik

    2018-04-01

    Quantum Fisher information matrix (QFIM) is a cornerstone of modern quantum metrology and quantum information geometry. Apart from optimal estimation, it finds applications in description of quantum speed limits, quantum criticality, quantum phase transitions, coherence, entanglement, and irreversibility. We derive a surprisingly simple formula for this quantity, which, unlike previously known general expression, does not require diagonalization of the density matrix, and is provably at least as efficient. With a minor modification, this formula can be used to compute QFIM for any finite-dimensional density matrix. Because of its simplicity, it could also shed more light on the quantum information geometry in general.
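    A vectorization-based evaluation of the QFIM of the kind discussed above can be sketched as follows and cross-checked against the familiar eigen-decomposition formula; the exact expression and conventions of the paper should be taken from the source, and the single-qubit example is illustrative.

```python
import numpy as np

def qfim_vectorized(rho, drho_list):
    """QFIM from a vectorization-type formula,
    F_ij = 2 vec(d_i rho)^dag (conj(rho) kron I + I kron rho)^{-1} vec(d_j rho),
    valid for invertible density matrices (column-stacking vec convention)."""
    d = rho.shape[0]
    I = np.eye(d)
    M = np.kron(np.conj(rho), I) + np.kron(I, rho)
    vecs = [dr.flatten(order="F") for dr in drho_list]      # column-stacking vec
    sol = [np.linalg.solve(M, v) for v in vecs]
    k = len(drho_list)
    return np.array([[2 * np.real(np.vdot(vecs[i], sol[j])) for j in range(k)]
                     for i in range(k)])

def qfim_eigen(rho, drho_list):
    """Reference QFIM from the eigen-decomposition formula (full-rank rho)."""
    p, U = np.linalg.eigh(rho)
    k = len(drho_list)
    denom = p[:, None] + p[None, :]
    F = np.zeros((k, k))
    for i in range(k):
        A = U.conj().T @ drho_list[i] @ U
        for j in range(k):
            B = U.conj().T @ drho_list[j] @ U
            F[i, j] = 2 * np.real(np.sum(A.conj() * B / denom))
    return F

if __name__ == "__main__":
    # Single-qubit mixed state rho(theta, r) with two parameters (illustrative).
    sx = np.array([[0, 1], [1, 0]], complex)
    sz = np.array([[1, 0], [0, -1]], complex)
    theta, r = 0.4, 0.8
    rho = 0.5 * (np.eye(2) + r * (np.cos(theta) * sz + np.sin(theta) * sx))
    drho = [0.5 * r * (-np.sin(theta) * sz + np.cos(theta) * sx),     # d/d theta
            0.5 * (np.cos(theta) * sz + np.sin(theta) * sx)]          # d/d r
    print(np.round(qfim_vectorized(rho, drho), 6))
    print(np.round(qfim_eigen(rho, drho), 6))
```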

  11. Fibroin/dodecanol floating solidification microextraction for the preconcentration of trace levels of flavonoids in complex matrix samples.

    Science.gov (United States)

    Chen, Xuan; Li, Jie; Hu, Shuang; Bai, Xiaohong; Zhao, Haodong; Zhang, Yi

    2018-01-01

    A new fibroin/dodecanol floating solidification microextraction method, coupled with high performance liquid chromatography, was developed and applied for the enrichment and quantification of trace flavonoids in traditional Chinese medicine and biological samples. The fibroin sensitization mechanism is also described, and the influence of the sample matrix on the enrichment factor was investigated. In this method, a homogeneous fibroin/dodecanol dispersed solution was employed as the microextraction phase for flavonoids (myricetin, quercetin, isorhamnetin, chrysin, kaempferide). Several critical parameters affecting the performance, such as organic extractant, amount of fibroin in the organic extractant, volume of the extraction phase, dispersant, salt concentration, pH of the sample phase, stirring rate, extraction time, and volume of the sample phase, were tested and optimized. Under the optimized conditions, the enrichment factor of flavonoids ranged from 42.4 to 238.1 in different samples, excellent linearities with r^2 ≥ 0.9968 for all analytes were achieved, limits of detection were less than or equal to 5.0 ng/mL, and average recoveries were 92.5% to 115.0% in different samples. The new procedure is simple, fast, low cost, environmentally friendly and provides high enrichment factors; it can also be applied to the concentration and enrichment of trace flavonoids in other complex matrices. Copyright © 2017. Published by Elsevier B.V.

  12. Polyhedral and semidefinite programming methods in combinatorial optimization

    CERN Document Server

    Tunçel, Levent

    2010-01-01

    Since the early 1960s, polyhedral methods have played a central role in both the theory and practice of combinatorial optimization. Since the early 1990s, a new technique, semidefinite programming, has been increasingly applied to some combinatorial optimization problems. The semidefinite programming problem is the problem of optimizing a linear function of matrix variables, subject to finitely many linear inequalities and the positive semidefiniteness condition on some of the matrix variables. On certain problems, such as maximum cut, maximum satisfiability, maximum stable set and geometric r
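    A canonical example of the book's subject is the semidefinite programming relaxation of maximum cut; the sketch below states it with CVXPY and applies random-hyperplane rounding. The small graph and the choice of CVXPY (with its default SDP solver) are assumptions for illustration.

```python
import numpy as np
import cvxpy as cp

# Weighted graph on 5 nodes; W[i, j] is the edge weight.
W = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0],
              [1, 1, 0, 1, 1],
              [0, 1, 1, 0, 1],
              [0, 0, 1, 1, 0]], dtype=float)
n = W.shape[0]

# Max-cut SDP relaxation: maximize (1/4) sum_ij W_ij (1 - X_ij)
# over PSD matrices X with unit diagonal (X_ij plays the role of x_i * x_j).
X = cp.Variable((n, n), PSD=True)
objective = cp.Maximize(cp.sum(cp.multiply(W, 1 - X)) / 4)
problem = cp.Problem(objective, [cp.diag(X) == 1])
problem.solve()
print("SDP upper bound on max cut:", round(problem.value, 3))

# Goemans-Williamson style rounding: random hyperplane through a Gram factorization.
V = np.linalg.cholesky(X.value + 1e-9 * np.eye(n))
signs = np.sign(V @ np.random.default_rng(8).standard_normal(n))
cut = 0.25 * np.sum(W * (1 - np.outer(signs, signs)))
print("rounded cut value:", cut)
```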

  13. Two-dimensional 1H and 31P NMR spectra and restrained molecular dynamics structure of an oligodeoxyribonucleotide duplex refined via a hybrid relaxation matrix procedure

    International Nuclear Information System (INIS)

    Powers, R.; Jones, C.R.; Gorenstein, D.G.

    1990-01-01

    Assignment of the 1H and 31P resonances of a decamer DNA duplex, d(CGCTTAAGCG)2 was determined by two-dimensional COSY, NOESY and 1H-31P Pure Absorption phase Constant time (PAC) heteronuclear correlation spectroscopy. The solution structure of the decamer was calculated by an iterative hybrid relaxation matrix method combined with NOESY-distance restrained molecular dynamics. The distances from the 2D NOESY spectra were calculated from the relaxation rate matrix which were evaluated from a hybrid NOESY volume matrix comprising elements from the experiment and those calculated from an initial structure. The hybrid matrix-derived distances were then used in a restrained molecular dynamics procedure to obtain a new structure that better approximates the NOESY spectra. The resulting partially refined structure was then used to calculate an improved theoretical NOESY volume matrix which is once again merged with the experimental matrix until refinement is complete. JH3'-P coupling constants for each of the phosphates of the decamer were obtained from 1H-31P J-resolved selective proton flip 2D spectra. By using a modified Karplus relationship the C4'-C3'-O3'-P torsional angles were obtained. Comparison of the 31P chemical shifts and JH3'-P coupling constants of this sequence has allowed a greater insight into the various factors responsible for 31P chemical shift variations in oligonucleotides. It also provides an important probe of the sequence-dependent structural variation of the deoxyribose phosphate backbone of DNA in solution. These correlations are consistent with the hypothesis that changes in local helical structure perturb the deoxyribose phosphate backbone. The variation of the 31P chemical shift, and the degree of this variation from one base step to the next is proposed as a potential probe of local helical conformation within the DNA double helix

  14. The finite element response matrix method

    International Nuclear Information System (INIS)

    Nakata, H.; Martin, W.R.

    1983-02-01

    A new technique is developed with an alternative formulation of the response matrix method implemented with the finite element scheme. Two types of response matrices are generated from the Galerkin solution to the weak form of the diffusion equation subject to an arbitrary current and source. The piecewise polynomials are defined on two levels, the first for the local (assembly) calculations and the second for the global (core) response matrix calculations. This finite element response matrix technique was tested on two 2-dimensional test problems, the 2D IAEA benchmark problem and the Biblis benchmark problem, with satisfactory results. The computational time, although the current code is not extensively optimized, is of the same order as that of the well-established coarse mesh codes. Furthermore, the application of the finite element technique in an alternative formulation of the response matrix method permits the method to easily incorporate additional capabilities such as treatment of spatially dependent cross-sections, arbitrary geometrical configurations, and highly heterogeneous assemblies. (Author) [pt

  15. Viscoplastic Matrix Materials for Embedded 3D Printing.

    Science.gov (United States)

    Grosskopf, Abigail K; Truby, Ryan L; Kim, Hyoungsoo; Perazzo, Antonio; Lewis, Jennifer A; Stone, Howard A

    2018-03-16

    Embedded three-dimensional (EMB3D) printing is an emerging technique that enables free-form fabrication of complex architectures. In this approach, a nozzle is translated omnidirectionally within a soft matrix that surrounds and supports the patterned material. To optimize print fidelity, we have investigated the effects of matrix viscoplasticity on the EMB3D printing process. Specifically, we determine how matrix composition, print path and speed, and nozzle diameter affect the yielded region within the matrix. By characterizing the velocity and strain fields and analyzing the dimensions of the yielded regions, we determine that scaling relationships based on the Oldroyd number, Od, exist between these dimensions and the rheological properties of the matrix materials and printing parameters. Finally, we use EMB3D printing to create complex architectures within an elastomeric silicone matrix. Our methods and findings will both facilitate future characterization of viscoplastic matrices and motivate the development of new materials for EMB3D printing.

  16. Simultaneous determination of phenolic compounds in Equisetum palustre L. by ultra high performance liquid chromatography with tandem mass spectrometry combined with matrix solid-phase dispersion extraction.

    Science.gov (United States)

    Wei, Zuofu; Pan, Youzhi; Li, Lu; Huang, Yuyang; Qi, Xiaolin; Luo, Meng; Zu, Yuangang; Fu, Yujie

    2014-11-01

    A method based on matrix solid-phase dispersion extraction followed by ultra high performance liquid chromatography with tandem mass spectrometry is presented for the extraction and determination of phenolic compounds in Equisetum palustre. This method combines the high efficiency of matrix solid-phase dispersion extraction and the rapidity, sensitivity, and accuracy of ultra high performance liquid chromatography with tandem mass spectrometry. The influential parameters of the matrix solid-phase dispersion extraction were investigated and optimized. The optimized conditions were as follows: silica gel was selected as dispersing sorbent, the ratio of silica gel to sample was selected to be 2:1 (400/200 mg), and 8 mL of 80% methanol was used as elution solvent. Furthermore, a fast and sensitive ultra high performance liquid chromatography with tandem mass spectrometry method was developed for the determination of nine phenolic compounds in E. palustre. This method was carried out within <6 min, and exhibited satisfactory linearity, precision, and recovery. Compared with ultrasound-assisted extraction, the proposed matrix solid-phase dispersion procedure possessed higher extraction efficiency, and was more convenient and time saving with reduced requirements on sample and solvent amounts. All these results suggest that the developed method represents an excellent alternative for the extraction and determination of active components in plant matrices. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  17. Low-rank matrix approximation with manifold regularization.

    Science.gov (United States)

    Zhang, Zhenyue; Zhao, Keke

    2013-07-01

    This paper proposes a new model of low-rank matrix factorization that incorporates manifold regularization to the matrix factorization. Superior to the graph-regularized nonnegative matrix factorization, this new regularization model has globally optimal and closed-form solutions. A direct algorithm (for data with small number of points) and an alternate iterative algorithm with inexact inner iteration (for large scale data) are proposed to solve the new model. A convergence analysis establishes the global convergence of the iterative algorithm. The efficiency and precision of the algorithm are demonstrated numerically through applications to six real-world datasets on clustering and classification. Performance comparison with existing algorithms shows the effectiveness of the proposed method for low-rank factorization in general.

  18. Application of an optimized AM procedure following a SBO in a VVER1000

    International Nuclear Information System (INIS)

    Cherubini, Marco; D'Auria, Francesco; Petrangeli, Gianni; Muellner, Nikolaus

    2006-01-01

    The University of Pisa was involved in investigations of an Accident Management (AM) procedure based on passive feed water injection. Some experiments were performed to validate this possibility (e.g. in the LOBI and Bethsy facilities) and were fully analyzed by thermal hydraulic system codes. Recent activities in which the University of Pisa is engaged (also as leader) are focused on VVER-1000 safety analyses. The idea is now to use the acquired knowledge to explore whether a procedure based on passive feed water injection is applicable and can provide any benefits to the Russian-designed pressurized plant. The postulated accident is a station blackout, in which only passive systems are available. The proposed AM is based on secondary and primary side depressurisation in sequence. The secondary side depressurisation, performed by the BRU-A valves, has the purpose of passively feeding the SGs with the water left in the feed water lines and in the deaerators. The primary side depressurisation, via the PORV, is foreseen to keep the plant at the lowest pressure (to reduce the energy of the system) and to maximize the 'grace time' of the plant. Three cases are considered here: no operator action, application of the optimized AM sequence, and application of the AM procedure at the last time when it is effective. The intention of this paper is to show that, in case of an unlikely event such as a SBO, the implementation of a strategy based on systems not designed for specific safety applications can have a large impact on the 'grace time' of the plant. (author)

  19. An Experimental Study of Structural Identification of Bridges Using the Kinetic Energy Optimization Technique and the Direct Matrix Updating Method

    Directory of Open Access Journals (Sweden)

    Gwanghee Heo

    2016-01-01

    Full Text Available This paper aims to develop an SI (structural identification) technique using the KEOT and the DMUM to decide on optimal sensor locations and to update the FE model, respectively, which ultimately contributes to the composition of a more effective SHM. Owing to the characteristic structural flexing behavior of cable bridges (e.g., cable-stayed bridges and suspension bridges), which makes them vulnerable to any vibration, systematic and continuous structural health monitoring (SHM) is pivotal for them. Since it is necessary to select optimal measurement locations with the fewest possible measurements and also to accurately assess the structural state of a bridge for the development of an effective SHM, an SI technique is equally important to accurately determine the modal parameters of the current structure based on the optimally obtained data. In this study, the kinetic energy optimization technique (KEOT) was utilized to determine the optimal measurement locations, while the direct matrix updating method (DMUM) was utilized for FE model updating. As a result of the experiment, the required number of measurement locations derived from KEOT based on the target mode was reduced by approximately 80% compared to the initial number of measurement locations. Moreover, compared to the eigenvalue of the modal experiment, an improved FE model with a margin of error of less than 1% was derived from DMUM. Thus, the SI technique for cable-stayed bridges proposed in this study, which utilizes both KEOT and DMUM, is proven effective in minimizing the number of sensors while accurately determining the structural dynamic characteristics.

  20. Amorphous metal matrix composite ribbons

    International Nuclear Information System (INIS)

    Barczy, P.; Szigeti, F.

    1998-01-01

    Composite ribbons with amorphous matrix and ceramic (SiC, WC, MoB) particles were produced by modified planar melt flow casting methods. Weldability, abrasive wear and wood sanding examinations were carried out in order to find optimal material and technology for elevated wear resistance and sanding durability. The correlation between structure and composite properties is discussed. (author)

  1. Robust Face Recognition via Multi-Scale Patch-Based Matrix Regression.

    Directory of Open Access Journals (Sweden)

    Guangwei Gao

    Full Text Available In many real-world applications such as smart card solutions, law enforcement, surveillance and access control, the limited training sample size is the most fundamental problem. By making use of the low-rank structural information of the reconstructed error image, the so-called nuclear norm-based matrix regression has been demonstrated to be effective for robust face recognition with continuous occlusions. However, the recognition performance of nuclear norm-based matrix regression degrades greatly in the face of the small sample size problem. An alternative solution to tackle this problem is performing matrix regression on each patch and then integrating the outputs from all patches. However, it is difficult to set an optimal patch size across different databases. To fully utilize the complementary information from different patch scales for the final decision, we propose a multi-scale patch-based matrix regression scheme based on which the ensemble of multi-scale outputs can be achieved optimally. Extensive experiments on benchmark face databases validate the effectiveness and robustness of our method, which outperforms several state-of-the-art patch-based face recognition algorithms.

  2. Research and Development Progress of National Key Laboratory of Advanced Composites on Advanced Aeronautical Resin Matrix Composites

    Directory of Open Access Journals (Sweden)

    LI Bintai

    2016-06-01

    Full Text Available Applications and research progress in advanced aeronautical resin matrix composites by the National Key Laboratory of Advanced Composites (LAC) were summarized. A novel interlaminar toughening technology employing ultra-thin TP non-woven fabric was developed in LAC, which significantly improved the compression-after-impact (CAI) performance of composite laminates. Newly designed multilayer sandwich stealth composite structures exhibited good broadband radar-absorbing properties at 1-18 GHz. There were remarkable developments in high-toughness and high-temperature resin matrix composites, covering major composite processing technologies such as the prepreg-autoclave procedure, liquid composite molding and automated manufacturing, etc. Finally, numerical simulation and optimization methods were deliberately utilized in the study of composite curing behavior, resin flow and curing deformation. A composite material database was also established. In conclusion, LAC has been a great support for the development of aeronautical equipment, serving as an innovation leader, system driver, foundation builder and application enabler for aerocomposites.

  3. A matrix formulation of Frobenius power series solutions using products of 4X4 matrices

    Directory of Open Access Journals (Sweden)

    Jeremy Mandelkern

    2015-08-01

    Full Text Available In Coddington and Levinson [7, p. 119, Thm. 4.1] and Balser [4, p. 18-19, Thm. 5], matrix formulations of Frobenius theory, near a regular singular point, are given using 2X2 matrix recurrence relations yielding fundamental matrices consisting of two linearly independent solutions together with their quasi-derivatives. In this article we apply a reformulation of these matrix methods to the Bessel equation of nonintegral order. The reformulated approach of this article differs from [7] and [4] by its implementation of a new "vectorization" procedure that yields recurrence relations of an altogether different form: namely, it replaces the implicit 2X2 matrix recurrence relations of both [7] and [4] by explicit 4X4 matrix recurrence relations that are implemented by means only of 4X4 matrix products. This new idea of using a vectorization procedure may further enable the development of symbolic manipulator programs for matrix forms of the Frobenius theory.
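
    As a purely illustrative aside (not the authors' implementation), the scalar form of the Frobenius recurrence for the Bessel equation of order ν, namely a_k = -a_{k-2}/(k(k+2ν)) with vanishing odd coefficients, can be summed directly and checked against SciPy; this is the series that the 4X4 matrix products of the paper reorganize.

      from math import gamma
      from scipy.special import jv

      def bessel_series(nu, x, terms=30):
          """Frobenius series for J_nu(x): a_0 = 1/(2**nu * Gamma(nu+1)),
          a_k = -a_{k-2} / (k * (k + 2*nu)), odd coefficients vanish."""
          a = 1.0 / (2.0**nu * gamma(nu + 1.0))
          total = a * x**nu
          for k in range(2, 2 * terms, 2):
              a = -a / (k * (k + 2.0 * nu))
              total += a * x**(k + nu)
          return total

      nu, x = 0.5, 2.3                        # a nonintegral order, as in the record
      print(bessel_series(nu, x), jv(nu, x))  # the two values should agree closely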

  4. Elaboration of a computer code for the solution of a two-dimensional two-energy group diffusion problem using the matrix response method

    International Nuclear Information System (INIS)

    Alvarenga, M.A.B.

    1980-12-01

    An analytical procedure to solve the neutron diffusion equation in two dimensions and two energy groups was developed. The response matrix method was used, coupled with an expansion of the neutron flux in finite Fourier series. A computer code, 'MRF2D', was elaborated to implement the above-mentioned procedure for PWR reactor core calculations. Different core symmetry options are allowed by the code, which is also flexible enough to allow for improvements by means of algorithm optimization. The code performance was compared with that of a corner-mesh finite difference code named TVEDIM by using an International Atomic Energy Agency (IAEA) standard problem. The MRF2D code requires 12.7% less computer processing time to reach the same precision on the criticality eigenvalue. (Author) [pt

  5. Microlocal study of S-matrix singularity structure

    International Nuclear Information System (INIS)

    Kawai, Takahiro; Kyoto Univ.; Stapp, H.P.

    1975-01-01

    Support is adduced for two related conjectures of simplicity of the analytic structure of the S-matrix and related function; namely, Sato's conjecture that the S-matrix is a solution of a maximally over-determined system of pseudo-differential equations, and our conjecture that the singularity spectrum of any bubble diagram function has the conormal structure with respect to a canonical decomposition of the solutions of the relevant Landau equations. This latter conjecture eliminates the open sets of allowed singularities that existing procedures permit. (orig.) [de

  6. Optimization of Ni-Based WC/Co/Cr Composite Coatings Produced by Multilayer Laser Cladding

    Directory of Open Access Journals (Sweden)

    Andrea Angelastro

    2013-01-01

    Full Text Available As a surface coating technique, laser cladding (LC) has been developed for improving the wear, corrosion, and fatigue properties of mechanical components. The main advantage of this process is the capability of introducing hard particles such as SiC, TiC, and WC as reinforcements in a metallic matrix such as Ni-based alloy, Co-based alloy, and Fe-based alloy to form ceramic-metal composite coatings, which have very high hardness and good wear resistance. In this paper, Ni-based alloy (Colmonoy 227-F) and Tungsten Carbides/Cobalt/Chromium (WC/Co/Cr) composite coatings were fabricated by the multilayer laser cladding technique (MLC). An optimization procedure was implemented to obtain the combination of process parameters that minimizes the porosity and produces good adhesion to a stainless steel substrate. The optimization procedure was worked out with a mathematical model that was supported by an experimental analysis, which studied the shape of the clad track generated by melting coaxially fed powders with a laser. Microstructural and microhardness analyses completed the set of tests performed on the coatings.

  7. Massive IIA string theory and Matrix theory compactification

    International Nuclear Information System (INIS)

    Lowe, David A.; Nastase, Horatiu; Ramgoolam, Sanjaye

    2003-01-01

    We propose a Matrix theory approach to Romans' massive Type IIA supergravity. It is obtained by applying the procedure of Matrix theory compactifications to Hull's proposal of the massive Type IIA string theory as M-theory on a twisted torus. The resulting Matrix theory is a super-Yang-Mills theory on large N three-branes with a space-dependent noncommutativity parameter, which is also independently derived by a T-duality approach. We give evidence showing that the energies of a class of physical excitations of the super-Yang-Mills theory show the correct symmetry expected from massive Type IIA string theory in a lightcone quantization

  8. Max–min distance nonnegative matrix factorization

    KAUST Repository

    Wang, Jim Jing-Yan; Gao, Xin

    2014-01-01

    Nonnegative Matrix Factorization (NMF) has been a popular representation method for pattern classification problems. It tries to decompose a nonnegative matrix of data samples as the product of a nonnegative basis matrix and a nonnegative coefficient matrix. The columns of the coefficient matrix can be used as new representations of these data samples. However, traditional NMF methods ignore class labels of the data samples. In this paper, we propose a novel supervised NMF algorithm to improve the discriminative ability of the new representation by using the class labels. Using the class labels, we separate all the data sample pairs into within-class pairs and between-class pairs. To improve the discriminative ability of the new NMF representations, we propose to minimize the maximum distance of the within-class pairs in the new NMF space, and meanwhile to maximize the minimum distance of the between-class pairs. With this criterion, we construct an objective function and optimize it with regard to basis and coefficient matrices, and slack variables alternatively, resulting in an iterative algorithm. The proposed algorithm is evaluated on three pattern classification problems and experiment results show that it outperforms the state-of-the-art supervised NMF methods.

  9. Max–min distance nonnegative matrix factorization

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-10-26

    Nonnegative Matrix Factorization (NMF) has been a popular representation method for pattern classification problems. It tries to decompose a nonnegative matrix of data samples as the product of a nonnegative basis matrix and a nonnegative coefficient matrix. The columns of the coefficient matrix can be used as new representations of these data samples. However, traditional NMF methods ignore class labels of the data samples. In this paper, we propose a novel supervised NMF algorithm to improve the discriminative ability of the new representation by using the class labels. Using the class labels, we separate all the data sample pairs into within-class pairs and between-class pairs. To improve the discriminative ability of the new NMF representations, we propose to minimize the maximum distance of the within-class pairs in the new NMF space, and meanwhile to maximize the minimum distance of the between-class pairs. With this criterion, we construct an objective function and optimize it with regard to basis and coefficient matrices, and slack variables alternatively, resulting in an iterative algorithm. The proposed algorithm is evaluated on three pattern classification problems and experiment results show that it outperforms the state-of-the-art supervised NMF methods.
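
    As an illustrative aside (not code from either record above), the unsupervised NMF baseline that such supervised variants build on can be written in a few lines with the classical Lee-Seung multiplicative updates; the matrix sizes, random data and iteration count below are arbitrary choices for the sketch, and the max-min within-class/between-class terms of the paper are not implemented.

      import numpy as np

      def nmf_multiplicative(X, rank, iters=200, eps=1e-9):
          """Minimal unsupervised NMF sketch: X (m x n, nonnegative) ~ W @ H."""
          rng = np.random.default_rng(0)
          m, n = X.shape
          W = rng.random((m, rank)) + eps
          H = rng.random((rank, n)) + eps
          for _ in range(iters):
              # Lee-Seung multiplicative updates for the Frobenius-norm objective
              H *= (W.T @ X) / (W.T @ W @ H + eps)
              W *= (X @ H.T) / (W @ H @ H.T + eps)
          return W, H

      X = np.abs(np.random.default_rng(1).normal(size=(60, 40)))
      W, H = nmf_multiplicative(X, rank=5)
      print("reconstruction error:", np.linalg.norm(X - W @ H))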

  10. Closed-Loop Optimal Control Implementations for Space Applications

    Science.gov (United States)

    2016-12-01

    [Abstract not cleanly extracted. Recoverable record details: master's thesis, "Closed-Loop Optimal Control Implementations for Space Applications," by Colin S. Monk, December 2016; thesis advisor: Mark Karpenko.]

  11. Tensor operators in R-matrix approach

    International Nuclear Information System (INIS)

    Bytsko, A.G.; Rossijskaya Akademiya Nauk, St. Petersburg

    1995-12-01

    The definitions and some properties (e.g. the Wigner-Eckart theorem, the fusion procedure) of covariant and contravariant q-tensor operators for quasitriangular quantum Lie algebras are formulated in the R-matrix language. The case of U_q(sl(n)) (in particular, for n=2) is discussed in more detail. (orig.)

  12. Metal matrix composites synthesis, wear characteristics, machinability study of MMC brake drum

    CERN Document Server

    Natarajan, Nanjappan; Davim, J Paulo

    2015-01-01

    This book is dedicated to composite materials, presenting different synthesis processes, composite properties and their machining behaviour. The book describes also the problems on manufacturing of metal matrix composite components. Among others, it provides procedures for manufacturing of metal matrix composites and case studies.

  13. Parallelized preconditioned model building algorithm for matrix factorization

    OpenAIRE

    Kaya, Kamer; Birbil, İlker; Öztürk, Mehmet Kaan; Gohari, Amir

    2017-01-01

    Matrix factorization is a common task underlying several machine learning applications such as recommender systems, topic modeling, or compressed sensing. Given a large and possibly sparse matrix A, we seek two smaller matrices W and H such that their product is as close to A as possible. The objective is minimizing the sum of square errors in the approximation. Typically such problems involve hundreds of thousands of unknowns, so an optimizer must be exceptionally efficient. In this study, a...
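
    For orientation only, a generic (dense, single-threaded) alternating least squares loop for the same squared-error objective is sketched below; it is not the parallelized preconditioned model building algorithm of the record, and the matrix sizes and regularization value are arbitrary.

      import numpy as np

      def als_factorize(A, rank, iters=50, lam=1e-2):
          """Generic ALS for A ~ W @ H, minimizing squared error plus a small ridge term."""
          rng = np.random.default_rng(0)
          m, n = A.shape
          W = rng.normal(size=(m, rank))
          H = rng.normal(size=(rank, n))
          I = lam * np.eye(rank)
          for _ in range(iters):
              # Fix W and solve a ridge-regularized least squares for H, then swap roles
              H = np.linalg.solve(W.T @ W + I, W.T @ A)
              W = np.linalg.solve(H @ H.T + I, H @ A.T).T
          return W, H

      A = np.random.default_rng(1).normal(size=(100, 80))
      W, H = als_factorize(A, rank=10)
      print("relative error:", np.linalg.norm(A - W @ H) / np.linalg.norm(A))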

  14. Computation of a Reference Model for Robust Fault Detection and Isolation Residual Generation

    Directory of Open Access Journals (Sweden)

    Emmanuel Mazars

    2008-01-01

    Full Text Available This paper considers matrix inequality procedures to address the robust fault detection and isolation (FDI) problem for linear time-invariant systems subject to disturbances, faults, and polytopic or norm-bounded uncertainties. We propose a design procedure for an FDI filter that aims to minimize a weighted combination of the sensitivity of the residual signal to disturbances and modeling errors, and the deviation of the fault-to-residual dynamics from a fault-to-residual reference model, using the ℋ∞-norm as a measure. A key step in our procedure is the design of an optimal fault reference model. We show that the optimal design requires the solution of a quadratic matrix inequality (QMI) optimization problem. Since the solution of the optimal problem is intractable, we propose a linearization technique to derive a numerically tractable suboptimal design procedure that requires the solution of a linear matrix inequality (LMI) optimization. A jet engine example is employed to demonstrate the effectiveness of the proposed approach.
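
    For readers unfamiliar with LMI machinery, the sketch below shows how a small LMI feasibility problem is posed and solved with the third-party CVXPY package (assumed installed); the Lyapunov-stability LMI and the system matrix are generic stand-ins, not the QMI/LMI filter synthesis of the record.

      import cvxpy as cp
      import numpy as np

      # Hypothetical stable system matrix (not taken from the record above)
      A = np.array([[-1.0, 2.0],
                    [ 0.0, -3.0]])

      P = cp.Variable((2, 2), symmetric=True)
      eps = 1e-6
      # Lyapunov LMI: P > 0 and A^T P + P A < 0 (strictness enforced via a small margin)
      constraints = [P >> eps * np.eye(2),
                     A.T @ P + P @ A << -eps * np.eye(2)]
      prob = cp.Problem(cp.Minimize(cp.trace(P)), constraints)
      prob.solve()
      print("status:", prob.status)
      print("P =", P.value)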

  15. Evaluation and optimization of DNA extraction and purification procedures for soil and sediment samples.

    Science.gov (United States)

    Miller, D N; Bryant, J E; Madsen, E L; Ghiorse, W C

    1999-11-01

    We compared and statistically evaluated the effectiveness of nine DNA extraction procedures by using frozen and dried samples of two silt loam soils and a silt loam wetland sediment with different organic matter contents. The effects of different chemical extractants (sodium dodecyl sulfate [SDS], chloroform, phenol, Chelex 100, and guanidinium isothiocyanate), different physical disruption methods (bead mill homogenization and freeze-thaw lysis), and lysozyme digestion were evaluated based on the yield and molecular size of the recovered DNA. Pairwise comparisons of the nine extraction procedures revealed that bead mill homogenization with SDS combined with either chloroform or phenol optimized both the amount of DNA extracted and the molecular size of the DNA (maximum size, 16 to 20 kb). Neither lysozyme digestion before SDS treatment nor guanidine isothiocyanate treatment nor addition of Chelex 100 resin improved the DNA yields. Bead mill homogenization in a lysis mixture containing chloroform, SDS, NaCl, and phosphate-Tris buffer (pH 8) was found to be the best physical lysis technique when DNA yield and cell lysis efficiency were used as criteria. The bead mill homogenization conditions were also optimized for speed and duration with two different homogenizers. Recovery of high-molecular-weight DNA was greatest when we used lower speeds and shorter times (30 to 120 s). We evaluated four different DNA purification methods (silica-based DNA binding, agarose gel electrophoresis, ammonium acetate precipitation, and Sephadex G-200 gel filtration) for DNA recovery and removal of PCR inhibitors from crude extracts. Sephadex G-200 spin column purification was found to be the best method for removing PCR-inhibiting substances while minimizing DNA loss during purification. Our results indicate that for these types of samples, optimum DNA recovery requires brief, low-speed bead mill homogenization in the presence of a phosphate-buffered SDS-chloroform mixture, followed

  16. A Robust Statistics Approach to Minimum Variance Portfolio Optimization

    Science.gov (United States)

    Yang, Liusha; Couillet, Romain; McKay, Matthew R.

    2015-12-01

    We study the design of portfolios under a minimum risk criterion. The performance of the optimized portfolio relies on the accuracy of the estimated covariance matrix of the portfolio asset returns. For large portfolios, the number of available market returns is often of similar order to the number of assets, so that the sample covariance matrix performs poorly as a covariance estimator. Additionally, financial market data often contain outliers which, if not correctly handled, may further corrupt the covariance estimation. We address these shortcomings by studying the performance of a hybrid covariance matrix estimator based on Tyler's robust M-estimator and on Ledoit-Wolf's shrinkage estimator while assuming samples with heavy-tailed distribution. Employing recent results from random matrix theory, we develop a consistent estimator of (a scaled version of) the realized portfolio risk, which is minimized by optimizing online the shrinkage intensity. Our portfolio optimization method is shown via simulations to outperform existing methods both for synthetic and real market data.
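
    As a stripped-down illustration of the minimum-variance step only, the sketch below uses scikit-learn's Ledoit-Wolf shrinkage estimator on synthetic returns and the closed-form global minimum-variance weights w = C^{-1}1 / (1^T C^{-1} 1); the Tyler M-estimation stage and the online tuning of the shrinkage intensity studied in the record are omitted.

      import numpy as np
      from sklearn.covariance import LedoitWolf

      rng = np.random.default_rng(0)
      returns = rng.normal(scale=0.01, size=(250, 50))   # 250 days x 50 assets, synthetic

      # Shrinkage covariance estimate, then closed-form minimum-variance weights
      C = LedoitWolf().fit(returns).covariance_
      ones = np.ones(C.shape[0])
      w = np.linalg.solve(C, ones)
      w /= ones @ w

      print("weights sum to:", w.sum())
      print("in-sample portfolio variance:", w @ C @ w)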

  17. t matrix of metallic wire structures

    International Nuclear Information System (INIS)

    Zhan, T. R.; Chui, S. T.

    2014-01-01

    To study the electromagnetic resonance and scattering properties of complex structures of which metallic wire structures are constituents within multiple scattering theory, the t matrix of individual structures is needed. We have recently developed a rigorous and numerically efficient equivalent circuit theory in which retardation effects are taken into account for metallic wire structures. Here, we show how the t matrix can be calculated analytically within this theory. We illustrate our method with the example of split ring resonators. The density of states and cross sections for scattering and absorption are calculated, which are shown to be remarkably enhanced at resonant frequencies. The t matrix serves as the basic building block to evaluate the interaction of wire structures within the framework of multiple scattering theory. This will open the door to efficient design and optimization of assembly of wire structures

  18. Optimization of mechanical properties of Al-metal matrix composite produced by direct fusion of beverage cans

    International Nuclear Information System (INIS)

    Carrasco, C.; Inzunza, G.; Camurri, C.; Rodríguez, C.; Radovic, L.; Soldera, F.; Suarez, S.

    2014-01-01

    The collection of used beverage cans is limited in countries where they are not fabricated; their low value does not justify the extra charge of exporting them for further processing. To address this increasingly serious problem, here we optimize the properties of an aluminum metal matrix composite (Al-MMC) obtained through direct fusion of beverage cans by using the slag generated in the melting process as reinforcement. This method consists of a modified rheocasting process followed by thixoforming. Our main operational variable is the shear rate applied to a semi-solid bath, subsequent to which a suitable heat treatment (T8) is proposed to improve the mechanical properties. The microstructure, the phases obtained and their effect on composite mechanical properties are analyzed. The composite material produced has, under the best conditions, a yield stress of 175 MPa and a tensile strength of 273 MPa. These results demonstrate that the proposed process does indeed transform the used beverage cans into promising composite materials, e.g., for structural applications

  19. Optimization of mechanical properties of Al-metal matrix composite produced by direct fusion of beverage cans

    Energy Technology Data Exchange (ETDEWEB)

    Carrasco, C., E-mail: ccarrascoc@udec.cl [Department of Materials Engineering, University of Concepción, Edmundo Larenas 270, Concepción (Chile); Inzunza, G.; Camurri, C.; Rodríguez, C. [Department of Materials Engineering, University of Concepción, Edmundo Larenas 270, Concepción (Chile); Radovic, L. [Department of Chemical Engineering, University of Concepción, Edmundo Larenas 129, Concepción (Chile); Department of Energy and Geo-Environmental Engineering, Pennsylvania State University, University Park, PA 16802 (United States); Soldera, F.; Suarez, S. [Department of Materials Science, Saarland University, Campus D3.3, 66123 Saarbrücken (Germany)

    2014-11-03

    The collection of used beverage cans is limited in countries where they are not fabricated; their low value does not justify the extra charge of exporting them for further processing. To address this increasingly serious problem, here we optimize the properties of an aluminum metal matrix composite (Al-MMC) obtained through direct fusion of beverage cans by using the slag generated in the melting process as reinforcement. This method consists of a modified rheocasting process followed by thixoforming. Our main operational variable is the shear rate applied to a semi-solid bath, subsequent to which a suitable heat treatment (T8) is proposed to improve the mechanical properties. The microstructure, the phases obtained and their effect on composite mechanical properties are analyzed. The composite material produced has, under the best conditions, a yield stress of 175 MPa and a tensile strength of 273 MPa. These results demonstrate that the proposed process does indeed transform the used beverage cans into promising composite materials, e.g., for structural applications.

  20. Procedures for Dealing with Optimism Bias in Transport Planning

    DEFF Research Database (Denmark)

    Flyvbjerg, Bent; Glenting, Carsten; Rønnest, Arne Kvist

    The objectives of the document are to provide empirically based optimism bias up-lifts for selected reference classes of transport infrastructure projects and to provide guidance on using the established uplifts to produce more realistic forecasts for the individual project's capital expenditures. Furthermore, the underlying causes and institutional context for optimism bias in British transport projects are discussed and some possibilities for reducing optimism bias in project preparation and decision-making are identified.

  1. Virtual reality simulation for the optimization of endovascular procedures: current perspectives.

    Science.gov (United States)

    Rudarakanchana, Nung; Van Herzeele, Isabelle; Desender, Liesbeth; Cheshire, Nicholas J W

    2015-01-01

    Endovascular technologies are rapidly evolving, often requiring coordination and cooperation between clinicians and technicians from diverse specialties. These multidisciplinary interactions lead to challenges that are reflected in the high rate of errors occurring during endovascular procedures. Endovascular virtual reality (VR) simulation has evolved from simple benchtop devices to full physics simulators with advanced haptics and dynamic imaging and physiological controls. The latest developments in this field include the use of fully immersive simulated hybrid angiosuites to train whole endovascular teams in crisis resource management and novel technologies that enable practitioners to build VR simulations based on patient-specific anatomy. As our understanding of the skills, both technical and nontechnical, required for optimal endovascular performance improves, the requisite tools for objective assessment of these skills are being developed and will further enable the use of VR simulation in the training and assessment of endovascular interventionalists and their entire teams. Simulation training that allows deliberate practice without danger to patients may be key to bridging the gap between new endovascular technology and improved patient outcomes.

  2. A collocation finite element method with prior matrix condensation

    International Nuclear Information System (INIS)

    Sutcliffe, W.J.

    1977-01-01

    For thin shells with general loading, sixteen degrees of freedom have been used in a previous finite element solution procedure based on a collocation method instead of the usual variationally based procedures. Although the number of elements required was relatively small, the final matrix for the simultaneous solution of all unknowns could nevertheless become large for a complex compound structure. The purpose of the present paper is to demonstrate a method of reducing the final matrix size, allowing solution for large structures with comparatively small computer storage requirements while retaining the accuracy given by high-order displacement functions. Of the conditions imposed at the collocation points, a number are equilibrium conditions which must be satisfied independently of the overall compatibility of forces and deflections for the complete structure. (Auth.)

  3. Optimal admission to higher education

    DEFF Research Database (Denmark)

    Albæk, Karsten

    2016-01-01

    that documents the relevance of theory and illustrates how to apply optimal admission procedures. Indirect gains from optimal admission procedures include the potential for increasing entire cohorts of students' probability of graduating with a higher education degree, thereby increasing the skill level...

  4. Efficiency criterion for teleportation via channel matrix, measurement matrix and collapsed matrix

    Directory of Open Access Journals (Sweden)

    Xin-Wei Zha

    Full Text Available In this paper, three kinds of coefficient matrices (channel matrix, measurement matrix, collapsed matrix) associated with the pure state for teleportation are presented, and the general relation among the channel matrix, measurement matrix and collapsed matrix is obtained. In addition, a criterion for judging whether a state can be teleported successfully is given, depending on the relation between the number of parameters of an unknown state and the rank of the collapsed matrix. Keywords: Channel matrix, Measurement matrix, Collapsed matrix, Teleportation
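
    Whatever the precise relation established in the paper, the mechanical ingredients of the criterion are a matrix rank and a parameter count, which are easy to compute; the collapsed matrix below is an arbitrary toy example, not one taken from the paper.

      import numpy as np

      # Arbitrary toy collapsed matrix (illustration only)
      collapsed = np.array([[1.0, 0.0],
                            [0.0, 1.0]]) / np.sqrt(2.0)

      n_unknown_params = 2   # e.g., parameters of an unknown single-qubit state
      rank = np.linalg.matrix_rank(collapsed)

      # The record's criterion compares these two quantities; here we only compute them.
      print("collapsed-matrix rank:", rank, "| unknown-state parameters:", n_unknown_params)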

  5. Progress on matrix SiC processing and properties for fully ceramic microencapsulated fuel form

    International Nuclear Information System (INIS)

    Terrani, K.A.; Kiggans, J.O.; Silva, C.M.; Shih, C.; Katoh, Y.; Snead, L.L.

    2015-01-01

    The consolidation mechanism and resulting properties of the silicon carbide (SiC) matrix of fully ceramic microencapsulated (FCM) fuel form are discussed. The matrix is produced via the nano-infiltration transient eutectic-forming (NITE) process. Coefficient of thermal expansion, thermal conductivity, and strength characteristics of this SiC matrix have been characterized in the unirradiated state. An ad hoc methodology for estimation of thermal conductivity of the neutron-irradiated NITE–SiC matrix is also provided to aid fuel performance modeling efforts specific to this concept. Finally, specific processing methods developed for production of an optimal and reliable fuel form using this process are summarized. These various sections collectively report the progress made to date on production of optimal FCM fuel form to enable its application in light water and advanced reactors

  6. Optimization of a dual-rotating-retarder polarimeter as applied to a tunable infrared Mueller-matrix scatterometer

    International Nuclear Information System (INIS)

    Vap, J C; Nauyoks, S E; Marciniak, M A

    2013-01-01

    The value of Mueller-matrix (Mm) scatterometers lies in their ability to simultaneously characterize the polarimetric and directional scatter properties of a sample. To extend their utility to characterizing modern optical materials in the infrared (IR), which often have very narrow resonances yet interesting polarization and directional properties, the addition of tunable IR lasers and an achromatic dual-rotating-retarder (DRR) polarimeter is necessary. An optimization method has been developed for use with the tunable IR Mm scatterometer. This method is rooted in the application of random error analysis to three different DRR retardances, λ/5, λ/4 and λ/3, for three different analyzer (A)-to-generator (G) retarder rotation ratios, θ_A:θ_G = 34:26, 25:5 and 37.5:7.5, and a variable number of intensity measurements. The product of the error analysis is in terms of the level of error that could be expected from a free-space Mm extraction for the various retardances, retarder rotation ratios and number of intensity measurements of the DRR. The optimal DRR specifications identified are a λ/3 retardance and a Fourier rotation ratio, with the number of required collected measurements dependent on the level of error acceptable to the user. Experimental results corroborate this error analysis using an achromatic 110-degree retardance-configured DRR polarimeter at 5 µm wavelength, which resulted in consistent 1% error in its free-space Mm extractions. (paper)

  7. An algorithm for mass matrix calculation of internally constrained molecular geometries

    International Nuclear Information System (INIS)

    Aryanpour, Masoud; Dhanda, Abhishek; Pitsch, Heinz

    2008-01-01

    Dynamic models for molecular systems require the determination of the corresponding mass matrix. For constrained geometries, these computations are often not trivial but need special considerations. Here, assembling the mass matrix of internally constrained molecular structures is formulated as an optimization problem. Analytical expressions are derived for the solution of the different possible cases depending on the rank of the constraint matrix. Geometrical interpretations are further used to enhance the solution concept. As an application, we evaluate the mass matrix for a constrained molecule undergoing an electron-transfer reaction. The preexponential factor for this reaction is computed based on the harmonic model.

  8. An algorithm for mass matrix calculation of internally constrained molecular geometries.

    Science.gov (United States)

    Aryanpour, Masoud; Dhanda, Abhishek; Pitsch, Heinz

    2008-01-28

    Dynamic models for molecular systems require the determination of the corresponding mass matrix. For constrained geometries, these computations are often not trivial but need special considerations. Here, assembling the mass matrix of internally constrained molecular structures is formulated as an optimization problem. Analytical expressions are derived for the solution of the different possible cases depending on the rank of the constraint matrix. Geometrical interpretations are further used to enhance the solution concept. As an application, we evaluate the mass matrix for a constrained molecule undergoing an electron-transfer reaction. The preexponential factor for this reaction is computed based on the harmonic model.

  9. General factorization relations and consistency conditions in the sudden approximation via infinite matrix inversion

    International Nuclear Information System (INIS)

    Chan, C.K.; Hoffman, D.K.; Evans, J.W.

    1985-01-01

    Local, i.e., multiplicative, operators satisfy well-known linear factorization relations wherein matrix elements (between states associated with a complete set of wave functions) can be obtained as a linear combination of those out of the ground state (the input data). Analytic derivation of factorization relations for general state input data results in singular integral expressions for the coefficients, which can, however, be regularized using consistency conditions between matrix elements out of a single (nonground) state. Similar results hold for suitable ''symmetry class'' averaged matrix elements where the symmetry class projection operators are ''complete.'' In several cases where the wave functions or projection operators incorporate orthogonal polynomial dependence, we show that the ground state factorization relations have a simplified structure allowing an alternative derivation of the general factorization relations via an infinite matrix inversion procedure. This form is shown to have some advantages over previous versions. In addition, this matrix inversion procedure obtains all consistency conditions (which is not always the case from regularization of singular integrals)

  10. Helicopter Flight Procedures for Community Noise Reduction

    Science.gov (United States)

    Greenwood, Eric

    2017-01-01

    A computationally efficient, semiempirical noise model suitable for maneuvering flight noise prediction is used to evaluate the community noise impact of practical variations on several helicopter flight procedures typical of normal operations. Turns, "quick-stops," approaches, climbs, and combinations of these maneuvers are assessed. Relatively small variations in flight procedures are shown to cause significant changes to Sound Exposure Levels over a wide area. Guidelines are developed for helicopter pilots intended to provide effective strategies for reducing the negative effects of helicopter noise on the community. Finally, direct optimization of flight trajectories is conducted to identify low noise optimal flight procedures and quantify the magnitude of community noise reductions that can be obtained through tailored helicopter flight procedures. Physically realizable optimal turns and approaches are identified that achieve global noise reductions of as much as 10 dBA Sound Exposure Level.

  11. Modeling and optimization of LCD optical performance

    CERN Document Server

    Yakovlev, Dmitry A; Kwok, Hoi-Sing

    2015-01-01

    The aim of this book is to present the theoretical foundations of modeling the optical characteristics of liquid crystal displays, critically reviewing modern modeling methods and examining areas of applicability. The modern matrix formalisms of optics of anisotropic stratified media, most convenient for solving problems of numerical modeling and optimization of LCD, will be considered in detail. The benefits of combined use of the matrix methods will be shown, which generally provides the best compromise between physical adequacy and accuracy with computational efficiency and optimization fac

  12. Correction: General optimization procedure towards the design of a new family of minimal parameter spin-component-scaled double-hybrid density functional theory.

    Science.gov (United States)

    Roch, Loïc M; Baldridge, Kim K

    2018-02-07

    Correction for 'General optimization procedure towards the design of a new family of minimal parameter spin-component-scaled double-hybrid density functional theory' by Loïc M. Roch and Kim K. Baldridge, Phys. Chem. Chem. Phys., 2017, 19, 26191-26200.

  13. Optimizing Polymer Infusion Process for Thin Ply Textile Composites with Novel Matrix System

    Directory of Open Access Journals (Sweden)

    Somen K. Bhudolia

    2017-03-01

    Full Text Available For mass production of structural composites, use of different textile patterns, custom preforming, room temperature cure high performance polymers and simplistic manufacturing approaches are desired. Woven fabrics are widely used for infusion processes owing to their high permeability but their localised mechanical performance is affected due to inherent associated crimps. The current investigation deals with manufacturing low-weight textile carbon non-crimp fabrics (NCFs) composites with a room temperature cure epoxy and a novel liquid Methyl methacrylate (MMA) thermoplastic matrix, Elium®. Vacuum assisted resin infusion (VARI) process is chosen as a cost effective manufacturing technique. Process parameters optimisation is required for thin NCFs due to intrinsic resistance it offers to the polymer flow. Cycles of repetitive manufacturing studies were carried out to optimise the NCF-thermoset (TS) and NCF with novel reactive thermoplastic (TP) resin. It was noticed that the controlled and optimised usage of flow mesh, vacuum level and flow speed during the resin infusion plays a significant part in deciding the final quality of the fabricated composites. The material selections, the challenges met during the manufacturing and the methods to overcome these are deliberated in this paper. An optimal three stage vacuum technique developed to manufacture the TP and TS composites with high fibre volume and lower void content is established and presented.

  14. Optimizing Polymer Infusion Process for Thin Ply Textile Composites with Novel Matrix System.

    Science.gov (United States)

    Bhudolia, Somen K; Perrotey, Pavel; Joshi, Sunil C

    2017-03-15

    For mass production of structural composites, use of different textile patterns, custom preforming, room temperature cure high performance polymers and simplistic manufacturing approaches are desired. Woven fabrics are widely used for infusion processes owing to their high permeability but their localised mechanical performance is affected due to inherent associated crimps. The current investigation deals with manufacturing low-weight textile carbon non-crimp fabrics (NCFs) composites with a room temperature cure epoxy and a novel liquid Methyl methacrylate (MMA) thermoplastic matrix, Elium ® . Vacuum assisted resin infusion (VARI) process is chosen as a cost effective manufacturing technique. Process parameters optimisation is required for thin NCFs due to intrinsic resistance it offers to the polymer flow. Cycles of repetitive manufacturing studies were carried out to optimise the NCF-thermoset (TS) and NCF with novel reactive thermoplastic (TP) resin. It was noticed that the controlled and optimised usage of flow mesh, vacuum level and flow speed during the resin infusion plays a significant part in deciding the final quality of the fabricated composites. The material selections, the challenges met during the manufacturing and the methods to overcome these are deliberated in this paper. An optimal three stage vacuum technique developed to manufacture the TP and TS composites with high fibre volume and lower void content is established and presented.

  15. Optimization and critical evaluation of decellularization strategies to develop renal extracellular matrix scaffolds as biological templates for organ engineering and transplantation.

    Science.gov (United States)

    Caralt, M; Uzarski, J S; Iacob, S; Obergfell, K P; Berg, N; Bijonowski, B M; Kiefer, K M; Ward, H H; Wandinger-Ness, A; Miller, W M; Zhang, Z J; Abecassis, M M; Wertheim, J A

    2015-01-01

    The ability to generate patient-specific cells through induced pluripotent stem cell (iPSC) technology has encouraged development of three-dimensional extracellular matrix (ECM) scaffolds as bioactive substrates for cell differentiation with the long-range goal of bioengineering organs for transplantation. Perfusion decellularization uses the vasculature to remove resident cells, leaving an intact ECM template wherein new cells grow; however, a rigorous evaluative framework assessing ECM structural and biochemical quality is lacking. To address this, we developed histologic scoring systems to quantify fundamental characteristics of decellularized rodent kidneys: ECM structure (tubules, vessels, glomeruli) and cell removal. We also assessed growth factor retention--indicating matrix biofunctionality. These scoring systems evaluated three strategies developed to decellularize kidneys (1% Triton X-100, 1% Triton X-100/0.1% sodium dodecyl sulfate (SDS) and 0.02% Trypsin-0.05% EGTA/1% Triton X-100). Triton and Triton/SDS preserved renal microarchitecture and retained matrix-bound basic fibroblast growth factor and vascular endothelial growth factor. Trypsin caused structural deterioration and growth factor loss. Triton/SDS-decellularized scaffolds maintained 3 h of leak-free blood flow in a rodent transplantation model and supported repopulation with human iPSC-derived endothelial cells and tubular epithelial cells ex vivo. Taken together, we identify an optimal Triton/SDS-based decellularization strategy that produces a biomatrix that may ultimately serve as a rodent model for kidney bioengineering. © Copyright 2014 The American Society of Transplantation and the American Society of Transplant Surgeons.

  16. Extensions of linear-quadratic control, optimization and matrix theory

    CERN Document Server

    Jacobson, David H

    1977-01-01

    In this book, we study theoretical and practical aspects of computing methods for mathematical modelling of nonlinear systems. A number of computing techniques are considered, such as methods of operator approximation with any given accuracy; operator interpolation techniques including a non-Lagrange interpolation; methods of system representation subject to constraints associated with concepts of causality, memory and stationarity; methods of system representation with an accuracy that is the best within a given class of models; methods of covariance matrix estimation;methods for low-rank mat

  17. Design optimization of axial flow hydraulic turbine runner: Part II - multi-objective constrained optimization method

    Science.gov (United States)

    Peng, Guoyi; Cao, Shuliang; Ishizuka, Masaru; Hayama, Shinji

    2002-06-01

    This paper is concerned with the design optimization of axial flow hydraulic turbine runner blade geometry. In order to obtain a better design plan with good performance, a new comprehensive performance optimization procedure has been presented by combining a multi-variable multi-objective constrained optimization model with a Q3D inverse computation and a performance prediction procedure. With careful analysis of the inverse design of the axial hydraulic turbine runner, the total hydraulic loss and the cavitation coefficient are taken as optimization objectives and a comprehensive objective function is defined using weight factors. Parameters of a newly proposed blade bound circulation distribution function and parameters describing the positions of the blade leading and trailing edges in the meridional flow passage are taken as optimization variables. The optimization procedure has been applied to the design optimization of a Kaplan runner with specific speed of 440 kW. Numerical results show that the performance of the designed runner is successfully improved through optimization computation. The optimization model is validated and shows good convergence. With the multi-objective optimization model, it is possible to control the performance of the designed runner by adjusting the values of the weight factors defining the comprehensive objective function.

  18. Structural differences of matrix metalloproteinases. Homology modeling and energy minimization of enzyme-substrate complexes

    DEFF Research Database (Denmark)

    Terp, G E; Christensen, I T; Jørgensen, Flemming Steen

    2000-01-01

    Matrix metalloproteinases are extracellular enzymes taking part in the remodeling of extracellular matrix. The structures of the catalytic domain of MMP1, MMP3, MMP7 and MMP8 are known, but structures of enzymes belonging to this family still remain to be determined. A general approach to the homology modeling of matrix metalloproteinases, exemplified by the modeling of MMP2, MMP9, MMP12 and MMP14, is described. The models were refined using an energy minimization procedure developed for matrix metalloproteinases. This procedure includes incorporation of parameters for zinc and calcium ions in the AMBER 4.1 force field, applying a non-bonded approach and a full ion charge representation. Energy minimization of the apoenzymes yielded structures with distorted active sites, while reliable three-dimensional structures of the enzymes containing a substrate in the active site were obtained. The structural...

  19. Optimized manufacturable porous materials

    DEFF Research Database (Denmark)

    Andreassen, Erik; Andreasen, Casper Schousboe; Jensen, Jakob Søndergaard

    Topology optimization has been used to design two-dimensional material structures with specific elastic properties, but optimized designs of three-dimensional material structures are more scarcely seen, partly because they require more computational power, and partly because it is a major challenge to include manufacturing constraints in the optimization. This work focuses on incorporating manufacturability into the optimization procedure, allowing the resulting material structure to be manufactured directly using rapid manufacturing techniques, such as selective laser melting/sintering (SLM/S). The available manufacturing methods are best suited for porous materials (one constituent and void), but the optimization procedure can easily include more constituents. The elasticity tensor is found from one unit cell using the homogenization method together with a standard finite element (FE) discretization...

  20. Matrix Optical Absorption in UV-MALDI MS.

    Science.gov (United States)

    Robinson, Kenneth N; Steven, Rory T; Bunch, Josephine

    2018-03-01

    In ultraviolet matrix-assisted laser desorption/ionization mass spectrometry (UV-MALDI MS), matrix compound optical absorption governs the uptake of laser energy, which in turn has a strong influence on experimental results. Despite this, quantitative absorption measurements are lacking for most matrix compounds. Furthermore, despite the use of UV-MALDI MS to detect a vast range of compounds, investigations into the effects of laser energy have been primarily restricted to single classes of analytes. We report the absolute solid state absorption spectra of the matrix compounds α-cyano-4-hydroxycinnamic acid (CHCA), para-nitroaniline (PNA), 2-mercaptobenzothiazole (MBT), 2,5-dihydroxybenzoic acid (2,5-DHB), and 2,4,6-trihydroxyacetophenone (THAP). The desorption/ionization characteristics of these matrix compounds with respect to laser fluence were investigated using mixed systems of matrix with either angiotensin II, PC(34:1) lipid standard, or haloperidol, acting as representatives for typical classes of analyte encountered in UV-MALDI MS. The first absolute solid phase spectra for PNA, MBT, and THAP are reported; additionally, inconsistencies between previously published spectra for CHCA are resolved. In light of these findings, suggestions are made for experimental optimization with regard to matrix and laser wavelength selection. The relationship between matrix optical cross-section and the wavelength-dependent threshold fluence, the fluence of maximum ion yield, and R, a new descriptor for the change in ion intensity with fluence, is described. A matrix cross-section of 1.3 × 10⁻¹⁷ cm⁻² was identified as a potential minimum for desorption/ionization of analytes.

  1. Studies of mineralization in tissue culture: optimal conditions for cartilage calcification

    Science.gov (United States)

    Boskey, A. L.; Stiner, D.; Doty, S. B.; Binderman, I.; Leboy, P.

    1992-01-01

    The optimal conditions for obtaining a calcified cartilage matrix approximating that which exists in situ were established in a differentiating chick limb bud mesenchymal cell culture system. Using cells from stage 21-24 embryos in a micro-mass culture, at an optimal density of 0.5 million cells/20 microliters spot, the deposition of small crystals of hydroxyapatite on a collagenous matrix and matrix vesicles was detected by day 21 using X-ray diffraction, FT-IR microscopy, and electron microscopy. Optimal media, containing 1.1 mM Ca, 4 mM P, 25 micrograms/ml vitamin C, 0.3 mg/ml glutamine, no Hepes buffer, and 10% fetal bovine serum, produced matrix resembling the calcifying cartilage matrix of fetal chick long bones. Interestingly, higher concentrations of fetal bovine serum had an inhibitory effect on calcification. The cartilage phenotype was confirmed based on the cellular expression of cartilage collagen and proteoglycan mRNAs, the presence of type II and type X collagen, and cartilage type proteoglycan at the light microscopic level, and the presence of chondrocytes and matrix vesicles at the EM level. The system is proposed as a model for evaluating the events in cell mediated cartilage calcification.

  2. General 4–zero texture mass matrix parametrizations

    International Nuclear Information System (INIS)

    Barranco, J; Delepine, D; Lopez-Lozano, L

    2014-01-01

    The diagonalization of a non-Hermitian four-zero texture Yukawa matrix is performed with a general formalism. This procedure leads to 3 possibilities to parametrize the relation between the fermion masses and the elements of the corresponding Yukawa matrix. Then, the matrices that diagonalize each Yukawa mass matrix are combined in order to obtain 9 different theoretical CKM or PMNS mixing matrices [1]. Through a χ² analysis, we have constrained the values of the remaining free parameters such that the theoretical mixing matrix matches the latest experimental measurements of the mixing matrices. This analysis was done without assuming any approximations. In the case of the quark sector, it is found that only four different theoretical mixing matrices are compatible with the current high-precision experimental measurement of the CKM matrix elements. For the lepton sector, where the masses of the neutrinos are not known, we find that, independently of the parametrization chosen, the updated experimental measurements of the mixing angles in the PMNS matrix imply a mass of ∼0.05 eV for the heaviest left-handed neutrino.

  3. The Optimization of the Oiling Bath Cosmetic Composition Containing Rapeseed Phospholipids and Grapeseed Oil by the Full Factorial Design

    Directory of Open Access Journals (Sweden)

    Michał Górecki

    2015-04-01

    Full Text Available The proper condition of the hydrolipid mantle and the stratum corneum intercellular matrix determines effective protection against transepidermal water loss (TEWL). Some chemicals, improper use of cosmetics, poor hygiene, old age and some diseases cause disorders in the mentioned structures and lead to a TEWL increase. The aim of this study was to obtain the optimal formulation composition of an oiling bath cosmetic based on rapeseed phospholipids and a vegetable oil with high content of polyunsaturated fatty acids. In this work, the composition of the oiling bath form was calculated and the degree of oil dispersion after mixing the bath preparation with water was selected as the objective function in the optimizing procedure. A 2³ full factorial design was used in the study. The concentrations of rapeseed lecithin ethanol soluble fraction (LESF), alcohol (E) and non-ionic emulsifier (P) were optimized. Based on the calculations from our results, the optimal composition of the oiling bath cosmetic was: L (LESF) 5.0 g, E (anhydrous ethanol) 20.0 g and P (Polysorbate 85) 1.5 g. The optimization procedure used in the study allowed us to obtain an oiling bath cosmetic which gives an emulsion dispersion degree (5.001 × 10⁻⁵ cm⁻¹) more than 60% higher than that of the initial formulation composition (3.096 × 10⁻⁵ cm⁻¹).
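
    To make the 2³ design concrete, the sketch below builds the eight-run coded design matrix and estimates main effects by least squares; the response values are invented for illustration and are not the study's measurements.

      import itertools
      import numpy as np

      # Coded levels (-1/+1) for the three factors L (LESF), E (ethanol), P (emulsifier)
      runs = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)

      # Invented responses (oil dispersion degree) for the eight runs (illustration only)
      y = np.array([2.1, 2.8, 2.4, 3.3, 2.6, 3.9, 3.0, 5.0])

      # Model matrix with intercept plus main effects; coefficients by least squares
      X = np.column_stack([np.ones(len(runs)), runs])
      coef, *_ = np.linalg.lstsq(X, y, rcond=None)
      print("intercept and main effects (L, E, P):", np.round(coef, 3))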

  4. Computational procedure of optimal inventory model involving controllable backorder rate and variable lead time with defective units

    Science.gov (United States)

    Lee, Wen-Chuan; Wu, Jong-Wuu; Tsou, Hsin-Hui; Lei, Chia-Ling

    2012-10-01

    This article considers that the number of defective units in an arrival order is a binomial random variable. We derive a modified mixture inventory model with backorders and lost sales, in which the order quantity and lead time are decision variables. In our studies, we also assume that the backorder rate is dependent on the length of lead time through the amount of shortages and let the backorder rate be a control variable. In addition, we assume that the lead time demand follows a mixture of normal distributions, and then relax the assumption about the form of the mixture of distribution functions of the lead time demand and apply the minimax distribution free procedure to solve the problem. Furthermore, we develop an algorithm procedure to obtain the optimal ordering strategy for each case. Finally, three numerical examples are also given to illustrate the results.
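
    The distribution-free element of such models typically rests on a Scarf-type bound that needs only the mean and standard deviation of lead-time demand, E[(X - r)^+] <= ( sqrt(sigma^2 + (r - mu)^2) - (r - mu) ) / 2; the helper below evaluates that bound for illustrative numbers and is not the article's full algorithm for jointly optimizing order quantity, lead time and backorder rate.

      import math

      def worst_case_expected_shortage(mu, sigma, r):
          """Scarf-type distribution-free upper bound on E[(X - r)^+], given only the
          mean and standard deviation of lead-time demand X."""
          return 0.5 * (math.sqrt(sigma**2 + (r - mu)**2) - (r - mu))

      # Illustrative numbers only: lead-time demand mean 100, std 20, reorder point 120
      print(worst_case_expected_shortage(mu=100.0, sigma=20.0, r=120.0))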

  5. The augmented lagrange multipliers method for matrix completion from corrupted samplings with application to mixed Gaussian-impulse noise removal.

    Directory of Open Access Journals (Sweden)

    Fan Meng

    Full Text Available This paper studies the problem of the restoration of images corrupted by mixed Gaussian-impulse noise. In recent years, low-rank matrix reconstruction has become a research hotspot in many scientific and engineering domains such as machine learning, image processing, computer vision and bioinformatics. It mainly involves the problems of matrix completion and robust principal component analysis, namely recovering a low-rank matrix from an incomplete but accurate sampling subset of its entries and from an observed data matrix with an unknown fraction of its entries arbitrarily corrupted, respectively. Inspired by these ideas, we consider the problem of recovering a low-rank matrix from an incomplete sampling subset of its entries with an unknown fraction of the samplings contaminated by arbitrary errors, which is defined as the problem of matrix completion from corrupted samplings and modeled as a convex optimization problem that minimizes a combination of the nuclear norm and the ℓ1-norm. Meanwhile, we put forward a novel and effective algorithm, called augmented Lagrange multipliers, to exactly solve the problem. For mixed Gaussian-impulse noise removal, we regard it as a problem of matrix completion from corrupted samplings, and restore the noisy image following an impulse-detecting procedure. Compared with some existing methods for mixed noise removal, the recovery quality of our method is dominant if images possess low-rank features such as geometrically regular textures and similar structured contents; especially when the density of impulse noise is relatively high and the variance of Gaussian noise is small, our method can significantly outperform the traditional methods, not only in the simultaneous removal of Gaussian and impulse noise and in the ability to restore a low-rank image matrix, but also in the preservation of textures and details in the image.
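
    The workhorse operations inside augmented-Lagrange schemes of this kind are the two proximal maps, singular-value shrinkage for the nuclear norm and entrywise soft thresholding for the ℓ1 term; a minimal version of each is sketched below, independent of the paper's full Gaussian-impulse pipeline (the data and thresholds are arbitrary).

      import numpy as np

      def svt(M, tau):
          """Singular value thresholding: proximal map of tau * nuclear norm at M."""
          U, s, Vt = np.linalg.svd(M, full_matrices=False)
          return (U * np.maximum(s - tau, 0.0)) @ Vt

      def soft(M, tau):
          """Entrywise soft threshold: proximal map of tau * l1 norm at M."""
          return np.sign(M) * np.maximum(np.abs(M) - tau, 0.0)

      rng = np.random.default_rng(0)
      low_rank = rng.normal(size=(50, 5)) @ rng.normal(size=(5, 50))
      noisy = low_rank + 5.0 * (rng.random((50, 50)) < 0.05) * rng.normal(size=(50, 50))

      # One shrinkage step each, just to show the maps an ALM solver alternates between
      # (this is not a full matrix-completion / RPCA solver).
      L = svt(noisy, tau=1.0)
      S = soft(noisy - L, tau=0.5)
      print("rank(L):", np.linalg.matrix_rank(L), "| nonzeros in S:", np.count_nonzero(S))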

  6. Structural optimization procedure of a composite wind turbine blade for reducing both material cost and blade weight

    Science.gov (United States)

    Hu, Weifei; Park, Dohyun; Choi, DongHoon

    2013-12-01

    A composite blade structure for a 2 MW horizontal axis wind turbine is optimally designed. Design requirements are simultaneously minimizing material cost and blade weight while satisfying the constraints on stress ratio, tip deflection, fatigue life and laminate layup requirements. The stress ratio and tip deflection under extreme gust loads and the fatigue life under a stochastic normal wind load are evaluated. A blade element wind load model is proposed to explain the wind pressure difference due to blade height change during rotor rotation. For fatigue life evaluation, the stress result of an implicit nonlinear dynamic analysis under a time-varying fluctuating wind is converted to histograms of the mean and amplitude of the maximum stress ratio using the rainflow counting algorithm. Miner's rule is employed to predict the fatigue life. After integrating and automating the whole analysis procedure, an evolutionary algorithm is used to solve the discrete optimization problem.
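
    Once the rainflow histogram of stress cycles is available, the Palmgren-Miner damage sum is a short calculation; the Basquin S-N parameters and cycle counts below are placeholders for illustration, not values from the blade study.

      import numpy as np

      def miner_damage(stress_amplitudes, cycle_counts, sn_C=1e12, sn_m=3.0):
          """Palmgren-Miner damage sum D = sum(n_i / N_i) with a Basquin-type S-N curve
          N(S) = C * S**(-m); the C and m values here are placeholders."""
          S = np.asarray(stress_amplitudes, dtype=float)
          n = np.asarray(cycle_counts, dtype=float)
          N_allow = sn_C * S**(-sn_m)
          return float(np.sum(n / N_allow))

      # Hypothetical rainflow output: (stress amplitude in MPa, counted cycles)
      amplitudes = [20.0, 35.0, 50.0]
      counts = [1.0e6, 2.0e5, 1.0e4]
      print("Miner damage fraction:", miner_damage(amplitudes, counts))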

  7. The sintered microsphere matrix for bone tissue engineering: in vitro osteoconductivity studies.

    Science.gov (United States)

    Borden, Mark; Attawia, Mohamed; Laurencin, Cato T

    2002-09-05

    A tissue engineering approach has been used to design three-dimensional synthetic matrices for bone repair. The osteoconductivity and degradation profile of a novel polymeric bone-graft substitute were evaluated in an in vitro setting. Using the copolymer poly(lactide-co-glycolide) [PLAGA], a sintering technique based on microsphere technology was used to fabricate three-dimensional porous scaffolds for bone regeneration. Osteoblasts and fibroblasts were seeded onto a 50:50 PLAGA scaffold. Morphologic evaluation through scanning electron microscopy demonstrated that both cell types attached and spread over the scaffold. Cells migrated through the matrix using cytoplasmic extensions to bridge the structure. Cross-sectional images indicated that cellular proliferation had penetrated into the matrix approximately 700 µm from the surface. Examination of the surfaces of cell/matrix constructs demonstrated that cellular proliferation had encompassed the pores of the matrix by 14 days of cell culture. With the aim of optimizing polymer composition and polymer molecular weight, a degradation study was conducted utilizing the matrix. The results demonstrate that degradation of the sintered matrix is dependent on molecular weight, copolymer ratio, and pore volume. From this data, it was determined that 75:25 PLAGA with an initial molecular weight of 100,000 has an optimal degradation profile. These studies show that the sintered microsphere matrix has an osteoconductive structure capable of functioning as a cellular scaffold with a degradation profile suitable for bone regeneration. Copyright 2002 Wiley Periodicals, Inc.

  8. Two-step algorithm of generalized PAPA method applied to linear programming solution of dynamic matrix control

    International Nuclear Information System (INIS)

    Shimizu, Yoshiaki

    1991-01-01

    In recent complicated nuclear systems, there are increasing demands for highly advanced procedures for various problem-solving tasks. Among them, keen interest has been paid to man-machine communication to improve both safety and economy. Many optimization methods are well suited to elaborating on these points. In this preliminary note, we are concerned with the application of linear programming (LP) for this purpose. First we present a new, superior version of the generalized PAPA method (GEPAPA) to solve LP problems. We then examine its effectiveness when applied to derive dynamic matrix control (DMC) as the LP solution. The approach aims at the above goal through quality control of processes that appear in the system. (author)

  9. A biotin-drug extraction and acid dissociation (BEAD) procedure to eliminate matrix and drug interference in a protein complex anti-drug antibody (ADA) isotype specific assay.

    Science.gov (United States)

    Niu, Hongmei; Klem, Thomas; Yang, Jinsong; Qiu, Yongchang; Pan, Luying

    2017-07-01

    Monitoring anti-drug antibody (ADA) responses in patients receiving protein therapeutics treatment is an important safety assessment for regulatory agencies, drug manufacturers, clinicians and patients. Recombinant human IGF-1/IGFBP-3 (rhIGF-1/rhIGFBP-3) is a 1:1 formulation of naturally occurring protein complex. The individual IGF-1 and IGFBP-3 proteins have multiple binding partners in serum matrix with high binding affinity to each other, which presents challenges in ADA assay development. We have developed a biotin-drug extraction with acid dissociation (BEAD) procedure followed by an electrochemiluminescence (ECL) direct assay to overcome matrix and drug interference. The method utilizes two step acid dissociation and excess biotin-drug to extract total ADA, which are further captured by soluble biotin-drug and detected in an ECL semi-homogeneous direct assay format. The pre-treatment method effectively eliminates interference by serum matrix and free drug, and enhances assay sensitivity. The assays passed acceptance criteria for all validation parameters, and have been used for clinical sample Ab testing. This method principle exemplifies a new approach for anti-isotype ADA assays, and could be an effective strategy for neutralizing antibody (NAb), pharmacokinetic (PK) and biomarker analysis in need of overcoming interference factors. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Fuel shuffling optimization for the Delft research reactor

    Energy Technology Data Exchange (ETDEWEB)

    Geemert, R. van; Hoogenboom, J.E.; Gibcus, H.P.M. [Delft Univ. of Technology, Interfaculty Reactor Inst., Delft (Netherlands); Quist, A.J. [Delft Univ., Fac. of Applied Mathematics and Informatics, Delft (Netherlands)

    1997-07-01

    A fuel shuffling optimization procedure is proposed for the Hoger Onderwijs Reactor (HOR) in Delft, the Netherlands, a 2 MWth swimming-pool type research reactor. In order to cope with the fluctuatory behaviour of objective functions in loading pattern optimization, the proposed cyclic permutation optimization procedure features a gradual transition from global to local search behaviour via the introduction of stochastic tests for the number of fuel assemblies involved in a cyclic permutation. The possible objectives and the safety and operation constraints, as well as the optimization procedure, are discussed, followed by some optimization results for the HOR. (author) 5 figs., 4 refs.

  11. Fuel shuffling optimization for the Delft research reactor

    International Nuclear Information System (INIS)

    Geemert, R. van; Hoogenboom, J.E.; Gibcus, H.P.M.; Quist, A.J.

    1997-01-01

    A fuel shuffling optimization procedure is proposed for the Hoger Onderwijs Reactor (HOR) in Delft, the Netherlands, a 2 MWth swimming-pool type research reactor. In order to cope with the fluctuatory behaviour of objective functions in loading pattern optimization, the proposed cyclic permutation optimization procedure features a gradual transition from global to local search behaviour via the introduction of stochastic tests for the number of fuel assemblies involved in a cyclic permutation. The possible objectives and the safety and operation constraints, as well as the optimization procedure, are discussed, followed by some optimization results for the HOR. (author)
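
    A minimal sketch of the global-to-local search idea described in these records: candidate loading patterns are generated by random cyclic permutations whose length is drawn from a distribution that shrinks as the search proceeds. The objective function and core size are stand-ins, not the HOR neutronics model.

      import random

      random.seed(0)
      N_POSITIONS = 20                       # core positions (illustrative)
      pattern = list(range(N_POSITIONS))     # assembly id placed at each position

      def objective(p):
          # Stand-in for the reactor-physics objective (e.g. cycle length or k_eff);
          # here: a simple deterministic function of the arrangement.
          centre = N_POSITIONS / 2.0
          return sum(a * abs(i - centre) for i, a in enumerate(p))

      def cyclic_permutation(p, k):
          # Apply a random k-cycle: rotate the assemblies in k randomly chosen positions.
          q = p[:]
          idx = random.sample(range(len(p)), k)
          vals = [q[i] for i in idx]
          for i, v in zip(idx, vals[-1:] + vals[:-1]):
              q[i] = v
          return q

      best, best_f = pattern, objective(pattern)
      n_iter = 5000
      for it in range(n_iter):
          # Gradual transition from global to local search: the probability of long
          # cycles decreases as the iteration count grows.
          frac = it / n_iter
          k_max = max(2, int(round((1.0 - frac) * 6 + 2)))
          k = random.randint(2, k_max)
          cand = cyclic_permutation(best, k)
          f = objective(cand)
          if f < best_f:                     # simple greedy acceptance
              best, best_f = cand, f

      print("best objective:", best_f)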

  13. Efficient Tridiagonal Preconditioner for the Matrix-Free Truncated Newton Method

    Czech Academy of Sciences Publication Activity Database

    Lukšan, Ladislav; Vlček, Jan

    2014-01-01

    Vol. 235, 25 May (2014), pp. 394-407. ISSN 0096-3003. R&D Projects: GA ČR GA13-06684S. Institutional support: RVO:67985807. Keywords: unconstrained optimization * large scale optimization * matrix-free truncated Newton method * preconditioned conjugate gradient method * preconditioners obtained by the directional differentiation * numerical algorithms. Subject RIV: BA - General Mathematics. Impact factor: 1.551, year: 2014

  14. Structured Matrix Completion with Applications to Genomic Data Integration.

    Science.gov (United States)

    Cai, Tianxi; Cai, T Tony; Zhang, Anru

    2016-01-01

    Matrix completion has attracted significant recent attention in many fields including statistics, applied mathematics and electrical engineering. Current literature on matrix completion focuses primarily on independent sampling models under which the individual observed entries are sampled independently. Motivated by applications in genomic data integration, we propose a new framework of structured matrix completion (SMC) to treat structured missingness by design. Specifically, our proposed method aims at efficient matrix recovery when a subset of the rows and columns of an approximately low-rank matrix are observed. We provide theoretical justification for the proposed SMC method and derive lower bound for the estimation errors, which together establish the optimal rate of recovery over certain classes of approximately low-rank matrices. Simulation studies show that the method performs well in finite sample under a variety of configurations. The method is applied to integrate several ovarian cancer genomic studies with different extent of genomic measurements, which enables us to construct more accurate prediction rules for ovarian cancer survival.
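
    A minimal numerical sketch of the block-observation setting: when a full block of rows and columns of an approximately rank-r matrix is observed, the missing block can be recovered from the observed blocks through a rank-truncated pseudoinverse. This conveys the flavour of structured matrix completion but is not the authors' exact SMC estimator.

      import numpy as np

      rng = np.random.default_rng(0)
      n1, n2, r = 60, 50, 3
      # Synthetic approximately rank-r matrix A = [[A11, A12], [A21, A22]].
      U, V = rng.standard_normal((n1, r)), rng.standard_normal((n2, r))
      A = U @ V.T + 1e-3 * rng.standard_normal((n1, n2))

      m1, m2 = 30, 25                      # observed rows / columns
      A11, A12, A21 = A[:m1, :m2], A[:m1, m2:], A[m1:, :m2]
      A22_true = A[m1:, m2:]

      # Recover the missing block from the observed blocks via a rank-r truncated
      # pseudoinverse of A11: A22 ~ A21 A11^+_r A12.
      u, s, vt = np.linalg.svd(A11, full_matrices=False)
      A11_pinv_r = vt[:r].T @ np.diag(1.0 / s[:r]) @ u[:, :r].T
      A22_hat = A21 @ A11_pinv_r @ A12

      err = np.linalg.norm(A22_hat - A22_true) / np.linalg.norm(A22_true)
      print(f"relative recovery error: {err:.2e}")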

  15. NMR structural refinement of a tandem G·A mismatched decamer d(CCAAGATTGG)2 via the hybrid matrix procedure

    International Nuclear Information System (INIS)

    Nikonowicz, E.P.; Meadows, R.P.; Fagan, P.; Gorenstein, D.G.

    1991-01-01

    A complete relaxation matrix approach employing a matrix eigenvalue/eigenvector solution to the Bloch equations is used to evaluate the NMR solution structure of a tandemly positioned G·A double mismatch decamer oligodeoxyribonucleotide duplex, d(CCAAGATTGG) 2 . An iterative refinement method using a hybrid relaxation matrix combined with restrained molecular dynamics calculations is shown to provide structures having good agreement with the experimentally derived structures. Distances incorporated into the MD simulations have been calculated from the relaxation rate matrix evaluated from a hybrid NOESY volume matrix whose elements are obtained from the merging of experimental and calculated NOESY intensities. Starting from both A- and B-DNA and mismatch syn and anti models, it is possible to calculate structures that are in good atomic RMS agreement with each other ( 3.6 angstrom). Importantly, the hybrid matrix derived structures are in excellent agreement with the experimental solution conformation as determined by comparison of the 200-ms simulated and experimental NOESY spectra, while the crystallographic data provide spectra that are grossly different

  16. Robust electromagnetically guided endoscopic procedure using enhanced particle swarm optimization for multimodal information fusion

    International Nuclear Information System (INIS)

    Luo, Xiongbiao; Wan, Ying; He, Xiangjian

    2015-01-01

    Purpose: Electromagnetically guided endoscopic procedures, which aim at accurately and robustly localizing the endoscope, involve multimodal sensory information during interventions. However, it remains challenging to integrate this information for precise and stable endoscopic guidance. To tackle this challenge, this paper proposes a new framework on the basis of an enhanced particle swarm optimization method to effectively fuse this information for accurate and continuous endoscope localization. Methods: The authors use the particle swarm optimization method, which is one of the stochastic evolutionary computation algorithms, to effectively fuse the multimodal information including preoperative information (i.e., computed tomography images) as a frame of reference, endoscopic camera videos, and positional sensor measurements (i.e., electromagnetic sensor outputs). Since the evolutionary computation method is usually limited by possible premature convergence and by its evolutionary factors, the authors introduce the current (endoscopic camera and electromagnetic sensor’s) observation to boost the particle swarm optimization and also adaptively update the evolutionary parameters in accordance with spatial constraints and the current observation, resulting in advantageous performance of the enhanced algorithm. Results: The experimental results demonstrate that the authors’ proposed method provides a more accurate and robust endoscopic guidance framework than state-of-the-art methods. The average guidance accuracy of the authors’ framework was about 3.0 mm and 5.6° while the previous methods show at least 3.9 mm and 7.0°. The average position and orientation smoothness of their method was 1.0 mm and 1.6°, which is significantly better than the other methods (at least 2.0 mm and 2.6°). Additionally, the average visual quality of the endoscopic guidance was improved to 0.29. Conclusions: A robust electromagnetically guided endoscopy framework was proposed on the basis of an enhanced particle swarm optimization method for multimodal information fusion.

  17. Robust electromagnetically guided endoscopic procedure using enhanced particle swarm optimization for multimodal information fusion.

    Science.gov (United States)

    Luo, Xiongbiao; Wan, Ying; He, Xiangjian

    2015-04-01

    Electromagnetically guided endoscopic procedures, which aim at accurately and robustly localizing the endoscope, involve multimodal sensory information during interventions. However, it remains challenging to integrate this information for precise and stable endoscopic guidance. To tackle this challenge, this paper proposes a new framework on the basis of an enhanced particle swarm optimization method to effectively fuse this information for accurate and continuous endoscope localization. The authors use the particle swarm optimization method, which is one of the stochastic evolutionary computation algorithms, to effectively fuse the multimodal information including preoperative information (i.e., computed tomography images) as a frame of reference, endoscopic camera videos, and positional sensor measurements (i.e., electromagnetic sensor outputs). Since the evolutionary computation method is usually limited by possible premature convergence and by its evolutionary factors, the authors introduce the current (endoscopic camera and electromagnetic sensor's) observation to boost the particle swarm optimization and also adaptively update the evolutionary parameters in accordance with spatial constraints and the current observation, resulting in advantageous performance of the enhanced algorithm. The experimental results demonstrate that the authors' proposed method provides a more accurate and robust endoscopic guidance framework than state-of-the-art methods. The average guidance accuracy of the authors' framework was about 3.0 mm and 5.6° while the previous methods show at least 3.9 mm and 7.0°. The average position and orientation smoothness of their method was 1.0 mm and 1.6°, which is significantly better than the other methods (at least 2.0 mm and 2.6°). Additionally, the average visual quality of the endoscopic guidance was improved to 0.29. A robust electromagnetically guided endoscopy framework was proposed on the basis of an enhanced particle swarm optimization method for multimodal information fusion.

  18. Robust electromagnetically guided endoscopic procedure using enhanced particle swarm optimization for multimodal information fusion

    Energy Technology Data Exchange (ETDEWEB)

    Luo, Xiongbiao, E-mail: xluo@robarts.ca, E-mail: Ying.Wan@student.uts.edu.au [Robarts Research Institute, Western University, London, Ontario N6A 5K8 (Canada); Wan, Ying, E-mail: xluo@robarts.ca, E-mail: Ying.Wan@student.uts.edu.au; He, Xiangjian [School of Computing and Communications, University of Technology, Sydney, New South Wales 2007 (Australia)

    2015-04-15

    Purpose: Electromagnetically guided endoscopic procedures, which aim at accurately and robustly localizing the endoscope, involve multimodal sensory information during interventions. However, it remains challenging to integrate this information for precise and stable endoscopic guidance. To tackle this challenge, this paper proposes a new framework on the basis of an enhanced particle swarm optimization method to effectively fuse this information for accurate and continuous endoscope localization. Methods: The authors use the particle swarm optimization method, which is one of the stochastic evolutionary computation algorithms, to effectively fuse the multimodal information including preoperative information (i.e., computed tomography images) as a frame of reference, endoscopic camera videos, and positional sensor measurements (i.e., electromagnetic sensor outputs). Since the evolutionary computation method is usually limited by possible premature convergence and by its evolutionary factors, the authors introduce the current (endoscopic camera and electromagnetic sensor’s) observation to boost the particle swarm optimization and also adaptively update the evolutionary parameters in accordance with spatial constraints and the current observation, resulting in advantageous performance of the enhanced algorithm. Results: The experimental results demonstrate that the authors’ proposed method provides a more accurate and robust endoscopic guidance framework than state-of-the-art methods. The average guidance accuracy of the authors’ framework was about 3.0 mm and 5.6° while the previous methods show at least 3.9 mm and 7.0°. The average position and orientation smoothness of their method was 1.0 mm and 1.6°, which is significantly better than the other methods (at least 2.0 mm and 2.6°). Additionally, the average visual quality of the endoscopic guidance was improved to 0.29. Conclusions: A robust electromagnetically guided endoscopy framework was proposed on the basis of an enhanced particle swarm optimization method for multimodal information fusion.
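
    For reference, a bare-bones particle swarm optimization loop on a toy two-dimensional cost function is sketched below; the authors' enhancements (observation-boosted updates and adaptive evolutionary parameters) and the actual image/sensor fitness function are not reproduced.

      import numpy as np

      rng = np.random.default_rng(1)

      def cost(x):
          # Toy cost standing in for the (image + sensor) fitness of a pose hypothesis.
          return np.sum((x - np.array([2.0, -1.0]))**2, axis=-1)

      n_particles, dim, n_iter = 30, 2, 100
      w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration coefficients

      x = rng.uniform(-5, 5, size=(n_particles, dim))    # positions
      v = np.zeros_like(x)                               # velocities
      pbest, pbest_f = x.copy(), cost(x)
      g = pbest[np.argmin(pbest_f)].copy()               # global best

      for _ in range(n_iter):
          r1, r2 = rng.random((n_particles, dim)), rng.random((n_particles, dim))
          v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
          x = x + v
          f = cost(x)
          improved = f < pbest_f
          pbest[improved], pbest_f[improved] = x[improved], f[improved]
          g = pbest[np.argmin(pbest_f)].copy()

      print("estimated optimum:", g, "cost:", cost(g))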

  19. Optimization of matrix solid-phase dispersion for the rapid determination of salicylate and benzophenone-type UV absorbing substances in marketed fish.

    Science.gov (United States)

    Tsai, Dung-Ying; Chen, Chien-Liang; Ding, Wang-Hsien

    2014-07-01

    A simple and effective method for the rapid determination of five salicylate and benzophenone-type UV absorbing substances in marketed fish is described. The method involves the use of matrix solid-phase dispersion (MSPD) prior to their determination by on-line silylation gas chromatography tandem mass spectrometry (GC-MS/MS). The parameters that affect the extraction efficiency were optimized using a Box-Behnken design method. The optimal extraction conditions involved dispersing 0.5 g of freeze-dried powdered fish with 1.0 g of Florisil using a mortar and pestle. This blend was then transferred to a solid-phase extraction (SPE) cartridge containing 1.0 g of octadecyl bonded silica (C18), as the clean-up co-sorbent. The target analytes were then eluted with 7 mL of acetonitrile. The extract was derivatized on-line in the GC injection-port by reaction with a trimethylsilylating (TMS) reagent. The TMS-derivatives were then identified and quantitated by GC-MS/MS. The limits of quantitation (LOQs) were less than 0.1 ng/g. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Estimating the two-particle K-matrix for multiple partial waves and decay channels from finite-volume energies

    DEFF Research Database (Denmark)

    Morningstar, Colin; Bulava, John; Singha, Bijit

    2017-01-01

    An implementation of estimating the two-to-two K-matrix from finite-volume energies based on the Lüscher formalism and involving a Hermitian matrix known as the "box matrix" is described. The method includes higher partial waves and multiple decay channels. Two fitting procedures for estimating...

  1. Orbifolds and Exact Solutions of Strongly-Coupled Matrix Models

    Science.gov (United States)

    Córdova, Clay; Heidenreich, Ben; Popolitov, Alexandr; Shakirov, Shamil

    2018-02-01

    We find an exact solution to strongly-coupled matrix models with a single-trace monomial potential. Our solution yields closed form expressions for the partition function as well as averages of Schur functions. The results are fully factorized into a product of terms linear in the rank of the matrix and the parameters of the model. We extend our formulas to include both logarithmic and finite-difference deformations, thereby generalizing the celebrated Selberg and Kadell integrals. We conjecture a formula for correlators of two Schur functions in these models, and explain how our results follow from a general orbifold-like procedure that can be applied to any one-matrix model with a single-trace potential.

  2. MODELING IN MAPLE AS THE RESEARCHING MEANS OF FUNDAMENTAL CONCEPTS AND PROCEDURES IN LINEAR ALGEBRA

    Directory of Open Access Journals (Sweden)

    Vasil Kushnir

    2016-05-01

    Full Text Available The article is devoted to binary technology and "fundamental training technology." Binary training refers to the simultaneous teaching of mathematics and computer science, for example differential equations and Maple, or linear algebra and Maple; a separate traditional course on Maple is not given. The use of Maple technology in teaching mathematics builds on such fundamental computer science concepts as algorithm, program, linear program, cycle, branching, relational operators, etc. For that reason only a certain system of command operators in Maple is considered, namely those needed to study the fundamental concepts of linear algebra and differential equations in the Maple environment. The term "technology of fundamental training" refers to the study of fundamental mathematical concepts and of the procedures that express the properties of these concepts in the Maple environment. The article deals with the study of complex fundamental concepts of linear algebra (the determinant of a matrix and the algorithm for its calculation, the characteristic polynomial and the eigenvalues of a matrix, the canonical form of the characteristic matrix, the eigenvectors of a matrix, the elementary divisors of the characteristic matrix, etc.), which are discussed only briefly in the corresponding courses, and sometimes not at all, even though they are important for linear systems of differential equations, asymptotic methods for solving differential equations, and systems of linear equations. The complex and voluminous procedures for finding these linear algebra objects that are built into Maple can be carried out with a single command operator. An especially important issue is the reduction of a matrix to canonical form: matrix functions are effectively reduced to functions of a diagonal matrix or of a matrix in Jordan canonical form, and these matrices are used to raise a square matrix to a power, to extract the roots of the n...

  3. Optimization of somatic embryogenesis procedure for commercial ...

    African Journals Online (AJOL)

    The first objective of this study was to assess and optimize somatic embryo production in a genetically diverse range of cacao genotypes. The primary and secondary somatic embryogenesis response of eight promising cacao clones and a positive control was evaluated using modified versions of standard protocols.

  4. N-representability-driven reconstruction of the two-electron reduced-density matrix for a real-time time-dependent electronic structure method

    International Nuclear Information System (INIS)

    Jeffcoat, David B.; DePrince, A. Eugene

    2014-01-01

    Propagating the equations of motion (EOM) for the one-electron reduced-density matrix (1-RDM) requires knowledge of the corresponding two-electron RDM (2-RDM). We show that the indeterminacy of this expression can be removed through a constrained optimization that resembles the variational optimization of the ground-state 2-RDM subject to a set of known N-representability conditions. Electronic excitation energies can then be obtained by propagating the EOM for the 1-RDM and following the dipole moment after the system interacts with an oscillating external electric field. For simple systems with well-separated excited states whose symmetry differs from that of the ground state, excitation energies obtained from this method are comparable to those obtained from full configuration interaction computations. Although the optimized 2-RDM satisfies necessary N-representability conditions, the procedure cannot guarantee a unique mapping from the 1-RDM to the 2-RDM. This deficiency is evident in the mean-field-quality description of transitions to states of the same symmetry as the ground state, as well as in the inability of the method to describe Rabi oscillations

  5. N-representability-driven reconstruction of the two-electron reduced-density matrix for a real-time time-dependent electronic structure method

    Science.gov (United States)

    Jeffcoat, David B.; DePrince, A. Eugene

    2014-12-01

    Propagating the equations of motion (EOM) for the one-electron reduced-density matrix (1-RDM) requires knowledge of the corresponding two-electron RDM (2-RDM). We show that the indeterminacy of this expression can be removed through a constrained optimization that resembles the variational optimization of the ground-state 2-RDM subject to a set of known N-representability conditions. Electronic excitation energies can then be obtained by propagating the EOM for the 1-RDM and following the dipole moment after the system interacts with an oscillating external electric field. For simple systems with well-separated excited states whose symmetry differs from that of the ground state, excitation energies obtained from this method are comparable to those obtained from full configuration interaction computations. Although the optimized 2-RDM satisfies necessary N-representability conditions, the procedure cannot guarantee a unique mapping from the 1-RDM to the 2-RDM. This deficiency is evident in the mean-field-quality description of transitions to states of the same symmetry as the ground state, as well as in the inability of the method to describe Rabi oscillations.

  6. Strategy BMT Al-Ittihad Using Matrix IE, Matrix SWOT 8K, Matrix SPACE and Matrix TWOS

    Directory of Open Access Journals (Sweden)

    Nofrizal Nofrizal

    2018-03-01

    Full Text Available This research aims to formulate and select strategies for BMT Al-Ittihad Rumbai to face changes in its business environment, both internal (organizational resources, finance, members) and external (competitors, the economy, politics and others). The research method used analysis of EFAS, IFAS, the IE Matrix, the SWOT-8K Matrix, the SPACE Matrix and the TWOS Matrix. We hope this research can assist BMT Al-Ittihad in formulating and selecting strategies for its sustainability in the future. The sample in this research was obtained using a purposive sampling technique, namely the manager and leader of BMT Al-Ittihad Rumbai, Pekanbaru. The results show that the position of BMT Al-Ittihad according to the IE Matrix, SWOT-8K Matrix and SPACE Matrix is growth, stabilization and aggressive, respectively. The strategies chosen after applying the TWOS Matrix are market penetration, market development, vertical integration, horizontal integration, and stabilization (careful).

  7. Hierarchical matrix approximation of large covariance matrices

    KAUST Repository

    Litvinenko, Alexander; Genton, Marc G.; Sun, Ying

    2015-01-01

    We approximate large non-structured Matérn covariance matrices of size n×n in the H-matrix format with a log-linear computational cost and storage O(kn log n), where rank k ≪ n is a small integer. Applications are: spatial statistics, machine learning and image analysis, kriging and optimal design.

  8. Hierarchical matrix approximation of large covariance matrices

    KAUST Repository

    Litvinenko, Alexander

    2015-11-30

    We approximate large non-structured Matérn covariance matrices of size n×n in the H-matrix format with a log-linear computational cost and storage O(kn log n), where rank k ≪ n is a small integer. Applications are: spatial statistics, machine learning and image analysis, kriging and optimal design.
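
    The sketch below illustrates why such covariance matrices compress well: an off-diagonal block of an exponential (Matérn, smoothness 1/2) covariance matrix coupling two well-separated point clusters is replaced by a truncated SVD of much lower rank. A real H-matrix code uses a cluster tree and algebraic compression (e.g. ACA) instead of dense SVDs; the point set, correlation length and tolerance here are arbitrary.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 400
      pts = rng.random((n, 2))                        # random locations in the unit square

      # Exponential covariance (Matérn with smoothness 1/2), correlation length 0.2.
      d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
      C = np.exp(-d / 0.2)

      # Off-diagonal block coupling two separated clusters (an "admissible" block).
      left = np.where(pts[:, 0] < 0.4)[0]
      right = np.where(pts[:, 0] > 0.6)[0]
      B = C[np.ix_(left, right)]

      # Truncated SVD: keep the smallest rank k meeting a relative accuracy of 1e-6.
      u, s, vt = np.linalg.svd(B, full_matrices=False)
      k = int(np.sum(s > 1e-6 * s[0]))
      B_k = (u[:, :k] * s[:k]) @ vt[:k]

      print(f"block size {B.shape}, numerical rank {k}")
      print(f"storage: dense {B.size} entries vs low-rank {k * sum(B.shape)}")
      print(f"relative error {np.linalg.norm(B - B_k) / np.linalg.norm(B):.2e}")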

  9. Recursive Principal Components Analysis Using Eigenvector Matrix Perturbation

    Directory of Open Access Journals (Sweden)

    Deniz Erdogmus

    2004-10-01

    Full Text Available Principal components analysis is an important and well-studied subject in statistics and signal processing. The literature has an abundance of algorithms for solving this problem, where most of these algorithms could be grouped into one of the following three approaches: adaptation based on Hebbian updates and deflation, optimization of a second-order statistical criterion (like reconstruction error or output variance), and fixed point update rules with deflation. In this paper, we take a completely different approach that avoids deflation and the optimization of a cost function using gradients. The proposed method updates the eigenvector and eigenvalue matrices simultaneously with every new sample such that the estimates approximately track their true values as would be calculated from the current sample estimate of the data covariance matrix. The performance of this algorithm is compared with that of traditional methods like Sanger's rule and APEX, as well as a structurally similar matrix perturbation-based method.
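
    A naive reference implementation of the tracking problem is sketched below: the sample covariance is updated recursively and its eigen-decomposition is refreshed at every step. The paper's contribution is to replace that full eigendecomposition with a first-order perturbation update of the eigenvector and eigenvalue matrices, which this sketch does not implement; the data model is synthetic.

      import numpy as np

      rng = np.random.default_rng(0)
      dim, n_samples = 5, 2000

      # Ground-truth covariance with a clear dominant direction (illustrative).
      Q, _ = np.linalg.qr(rng.standard_normal((dim, dim)))
      scales = np.array([3.0, 2.0, 1.0, 0.5, 0.2])

      C = np.zeros((dim, dim))             # recursive covariance estimate
      eigvals, eigvecs = np.zeros(dim), np.eye(dim)

      for t in range(1, n_samples + 1):
          x = Q @ (scales * rng.standard_normal(dim))
          # Rank-one recursive covariance update (sample mean assumed zero).
          C += (np.outer(x, x) - C) / t
          # Refresh the eigenpair estimates from the running covariance; the RPCA
          # algorithm of the paper instead perturbs the previous eigenvector matrix.
          eigvals, eigvecs = np.linalg.eigh(C)

      # Compare the leading estimated direction with the true dominant direction.
      est, true = eigvecs[:, -1], Q[:, 0]
      print("alignment |<est, true>| =", abs(est @ true))
      print("leading eigenvalue estimate:", eigvals[-1], "(true value 9.0)")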

  10. EVALUATION OF A BUFFERED SOLID PHASE DISPERSION PROCEDURE ADAPTED FOR PESTICIDE ANALYSES IN THE SOIL MATRIX

    Directory of Open Access Journals (Sweden)

    Ana María Domínguez

    2015-08-01

    Full Text Available An evaluation of the pesticides extracted from the soil matrix was conducted using a citrate-buffered solid phase dispersion sample preparation method (QuEChERS). The identification and quantitation of pesticide compounds was performed using gas chromatography-mass spectrometry. Because of the occurrence of the matrix effect in 87% of the analyzed pesticides, the quantification was performed using matrix-matched calibration. The method's quantification limits were between 0.01 and 0.5 mg kg⁻¹. Repeatability and intermediate precision, expressed as a relative standard deviation percentage, were less than 20%. The recoveries in general ranged between 62% and 99%, with a relative standard deviation < 20%. All the responses were linear, with a correlation coefficient (r) ≥ 0.99.

  11. U matrix construction for Quantum Chromodynamics through Dirac brackets

    International Nuclear Information System (INIS)

    Santos, M.A. dos.

    1987-09-01

    A procedure for obtaining the U matrix using Dirac brackets, recently developed by Kiefer and Rothe, is applied to Quantum Chromodynamics. The corresponding interaction Lagrangian is the same as that obtained by Schwinger, Christ and Lee using independent methods. (L.C.J.A.)

  12. Optimal Brain Surgeon on Artificial Neural Networks in

    DEFF Research Database (Denmark)

    Christiansen, Niels Hørbye; Job, Jonas Hultmann; Klyver, Katrine

    2012-01-01

    It is shown how the procedure known as optimal brain surgeon can be used to trim and optimize artificial neural networks in nonlinear structural dynamics. Besides optimizing the neural network, and thereby minimizing computational cost in simulation, the surgery procedure can also serve as a quick...

  13. Multiparameter optimization of mammography: an update

    Science.gov (United States)

    Jafroudi, Hamid; Muntz, E. P.; Jennings, Robert J.

    1994-05-01

    Previously in this forum we have reported the application of multiparameter optimization techniques to the design of a minimum dose mammography system. The approach used a reference system to define the physical imaging performance required and the dose to which the dose for the optimized system should be compared. During the course of implementing the resulting design in hardware suitable for laboratory testing, the state of the art in mammographic imaging changed, so that the original reference system, which did not have a grid, was no longer appropriate. A reference system with a grid was selected in response to this change, and at the same time the optimization procedure was modified, to make it more general and to facilitate study of the optimized design under a variety of conditions. We report the changes in the procedure, and the results obtained using the revised procedure and the up-to-date reference system. Our results, which are supported by laboratory measurements, indicate that the optimized design can image small objects as well as the reference system using only about 30% of the dose required by the reference system. Hardware meeting the specification produced by the optimization procedure and suitable for clinical use is currently under evaluation in the Diagnostic Radiology Department at the Clinical Center, NIH.

  14. Inherited complex I deficiency is associated with faster protein diffusion in the matrix of moving mitochondria

    NARCIS (Netherlands)

    Koopman, W.J.H.; Distelmaier, F.; Hink, M.A.; Verkaart, S.; Wijers, M.; Fransen, J.; Smeitink, J.A.M.; Willems, P.H.G.M.

    2008-01-01

    Mitochondria continuously change shape, position, and matrix configuration for optimal metabolite exchange. It is well established that changes in mitochondrial metabolism influence mitochondrial shape and matrix configuration. We demonstrated previously that inhibition of mitochondrial complex I

  15. Novel matrix for REEs recovery from waste disposal

    International Nuclear Information System (INIS)

    Hareendran, K.; Singha, Mousumi; Roy, S.B.; Pal, Sangita

    2014-01-01

    Sorption of lanthanides (98%-99%) onto a novel matrix (polyacrylamide-carboxylate hydroxamate, PAMCHO) not only removes REEs before effluent disposal but also reduces the chance of contamination of potable water by effluents from nuclear plant shutdown, gadolinium-containing effluents from controlled fission reactions, pharmaceutical diagnostics (MRI) and many other useful processes. With such a sorbent, 88% of the lanthanides can be recovered from the laden matrix using HCl solution below pH 1 and can be concentrated more than 5 times. However, sorption into the interlayers and diffusion of the REEs during leaching depend on the cross-linked structure of the gel matrix and the tortuous path of the porous micro-channels (studied by scanning electron microscopy, SEM). The sequestration of REEs by the matrix has been well established by FT-IR and by a gadolinium (lanthanide cation) exchange method. To understand the interaction of REEs with the sorbent, matrices were prepared with varying amounts of cross-linker, namely 85:15, 90:10, 95:05 and 98:02 (matrix:cross-linker). A detailed sorption study of the cross-linked matrix with gadolinium in the feed solution (184 ppm), filtrate, leachate and laden sorbent establishes the mass balance (ICP-AES was used for quantitative determination). The optimized sorbent (PAMCHO) recovers valuable REEs with an elution factor of more than 0.9 when HCl solution of pH 1.5 is used. (author)

  16. Measurement configuration optimization for dynamic metrology using Stokes polarimetry

    Science.gov (United States)

    Liu, Jiamin; Zhang, Chuanwei; Zhong, Zhicheng; Gu, Honggang; Chen, Xiuguo; Jiang, Hao; Liu, Shiyuan

    2018-05-01

    As dynamic loading experiments such as a shock compression test are usually characterized by short duration, unrepeatability and high costs, high temporal resolution and precise accuracy of the measurements is required. Due to high temporal resolution up to a ten-nanosecond-scale, a Stokes polarimeter with six parallel channels has been developed to capture such instantaneous changes in optical properties in this paper. Since the measurement accuracy heavily depends on the configuration of the probing beam incident angle and the polarizer azimuth angle, it is important to select an optimal combination from the numerous options. In this paper, a systematic error propagation-based measurement configuration optimization method corresponding to the Stokes polarimeter was proposed. The maximal Frobenius norm of the combinatorial matrix of the configuration error propagating matrix and the intrinsic error propagating matrix is introduced to assess the measurement accuracy. The optimal configuration for thickness measurement of a SiO2 thin film deposited on a Si substrate has been achieved by minimizing the merit function. Simulation and experimental results show a good agreement between the optimal measurement configuration achieved experimentally using the polarimeter and the theoretical prediction. In particular, the experimental result shows that the relative error in the thickness measurement can be reduced from 6% to 1% by using the optimal polarizer azimuth angle when the incident angle is 45°. Furthermore, the optimal configuration for the dynamic metrology of a nickel foil under quasi-dynamic loading is investigated using the proposed optimization method.

  17. An application of the Proper Orthogonal Decomposition method to the thermo-economic optimization of a dual pressure, combined cycle powerplant

    International Nuclear Information System (INIS)

    Melli, Roberto; Sciubba, Enrico; Toro, Claudia

    2014-01-01

    Highlights: • The CCGT is modelled and simulated in CAMEL-Pro. • Economic costs of the system product are computed. • The POD–RBF procedure is applied to the thermoeconomic optimization of a CCGT power plant. • Economic optimal configuration is identified with POD–RBF procedure. - Abstract: This paper presents a thermo-economic optimization of a combined cycle power plant obtained via the Proper Orthogonal Decomposition–Radial Basis Functions (POD–RBF) procedure. POD, also known as “Karhunen–Loewe decomposition” or as “Method of Snapshots” is a powerful mathematical method for the low-order approximation of highly dimensional processes for which a set of initial data is known in the form of a discrete and finite set of experimental (or simulated) data: the procedure consists in constructing an approximated representation of a matricial operator that optimally “represents” the original data set on the basis of the eigenvalues and eigenvectors of the properly re-assembled data set. By combining POD and RBF it is possible to construct, by interpolation, a functional (parametric) approximation of such a representation. In this paper the set of starting data for the POD–RBF procedure has been obtained by the CAMEL-Pro™ process simulator. The proposed procedure does not require the generation of a complete simulated set of results at each iteration step of the optimization, because POD constructs a very accurate approximation to the function described by a relatively small number of initial simulations, and thus “new” points in design space can be extrapolated without recurring to additional and expensive process simulations. Thus, the often taxing computational effort needed to iteratively generate numerical process simulations of incrementally different configurations is substantially reduced by replacing much of it by easy-to-perform matrix operations. The object of the study was a fossil-fuelled, combined cycle powerplant of
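
    A compact sketch of the POD-RBF surrogate idea: snapshots from a few expensive simulations are compressed into a POD basis by SVD, and the modal coefficients are interpolated over the design parameter with a Gaussian RBF so that new configurations can be evaluated without a new simulation. The snapshot generator is a toy field, not the CAMEL-Pro plant model, and the energy tolerance and RBF shape parameter are arbitrary choices.

      import numpy as np

      def simulate(p):
          # Toy "process simulation": a spatial field depending on a design parameter p.
          x = np.linspace(0.0, 1.0, 200)
          return np.sin(2 * np.pi * x * p) * np.exp(-p * x)

      # Snapshot matrix from a handful of expensive simulations.
      params = np.linspace(0.5, 2.0, 8)
      S = np.column_stack([simulate(p) for p in params])

      # POD basis: left singular vectors; keep modes capturing ~99.99% of the energy.
      U, s, _ = np.linalg.svd(S, full_matrices=False)
      k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.9999)) + 1
      Phi = U[:, :k]
      coeffs = Phi.T @ S                   # modal coefficients of each snapshot (k x n_snap)

      # Gaussian RBF interpolation of each modal coefficient over the parameter space.
      eps = 2.0
      def rbf(a, b):
          return np.exp(-(eps * (a[:, None] - b[None, :]))**2)
      W = np.linalg.solve(rbf(params, params), coeffs.T)   # interpolation weights

      def predict(p_new):
          c = rbf(np.atleast_1d(p_new), params) @ W         # interpolated coefficients
          return Phi @ c.ravel()

      p_test = 1.13
      err = np.linalg.norm(predict(p_test) - simulate(p_test)) / np.linalg.norm(simulate(p_test))
      print(f"POD-RBF surrogate relative error at p={p_test}: {err:.2e}")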

  18. Optimization of Training Signal Transmission for Estimating MIMO Channel under Antenna Mutual Coupling Conditions

    Directory of Open Access Journals (Sweden)

    Xia Liu

    2010-01-01

    Full Text Available This paper reports investigations on the effect of antenna mutual coupling on the performance of training-based Multiple-Input Multiple-Output (MIMO) channel estimation. The influence of mutual coupling is assessed for two training-based channel estimation methods, Scaled Least Square (SLS) and Minimum Mean Square Error (MMSE). It is shown that the accuracy of MIMO channel estimation is governed by the sum of eigenvalues of the channel correlation matrix, which in turn is influenced by the mutual coupling in the transmitting and receiving array antennas. A water-filling-based procedure is proposed to optimize the training signal transmission to minimize the MIMO channel estimation errors.
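
    The water-filling building block referred to above can be sketched as follows: training power is allocated over the eigenvalues of a (synthetic) transmit correlation matrix, filling the strongest eigen-directions first. The correlation matrix, power budget and noise level are illustrative, and the paper's exact estimation-error criterion is not reproduced.

      import numpy as np

      def water_filling(gains, total_power, noise=1.0):
          """Allocate total_power over channels with eigenvalues 'gains' (classic water-filling).

          Returns p with p_i = max(0, mu - noise/gains_i) and sum(p) = total_power.
          """
          gains = np.asarray(gains, dtype=float)
          inv = noise / gains
          order = np.argsort(inv)            # strongest eigen-directions first
          inv_sorted = inv[order]
          p = np.zeros_like(gains)
          for k in range(len(gains), 0, -1):
              # Try the k best directions; water level set by the power budget.
              mu = (total_power + inv_sorted[:k].sum()) / k
              if mu > inv_sorted[k - 1]:     # all k allocations positive -> feasible
                  p[order[:k]] = mu - inv_sorted[:k]
                  break
          return p

      rng = np.random.default_rng(0)
      # Synthetic transmit correlation matrix including coupling effects (illustrative).
      A = rng.standard_normal((4, 4))
      R = A @ A.T
      eigvals = np.linalg.eigvalsh(R)[::-1]

      p = water_filling(eigvals, total_power=10.0)
      print("eigenvalues:", np.round(eigvals, 3))
      print("training power allocation:", np.round(p, 3), "sum =", p.sum())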

  19. Topology optimization of microwave waveguide filters

    DEFF Research Database (Denmark)

    Aage, Niels; Johansen, Villads Egede

    2017-01-01

    We present a density based topology optimization approach for the design of metallic microwave insert filters. A two-phase optimization procedure is proposed in which we, starting from a uniform design, first optimize to obtain a set of spectrally varying resonators followed by a band gap optimization... The optimized designs bear little resemblance to standard filter layouts and hence the proposed design method offers a new design tool in microwave engineering.

  20. Exploiting Data Sparsity for Large-Scale Matrix Computations

    KAUST Repository

    Akbudak, Kadir

    2018-02-24

    Exploiting data sparsity in dense matrices is an algorithmic bridge between architectures that are increasingly memory-austere on a per-core basis and extreme-scale applications. The Hierarchical matrix Computations on Manycore Architectures (HiCMA) library tackles this challenging problem by achieving significant reductions in time to solution and memory footprint, while preserving a specified accuracy requirement of the application. HiCMA provides a high-performance implementation on distributed-memory systems of one of the most widely used matrix factorization in large-scale scientific applications, i.e., the Cholesky factorization. It employs the tile low-rank data format to compress the dense data-sparse off-diagonal tiles of the matrix. It then decomposes the matrix computations into interdependent tasks and relies on the dynamic runtime system StarPU for asynchronous out-of-order scheduling, while allowing high user-productivity. Performance comparisons and memory footprint on matrix dimensions up to eleven million show a performance gain and memory saving of more than an order of magnitude for both metrics on thousands of cores, against state-of-the-art open-source and vendor optimized numerical libraries. This represents an important milestone in enabling large-scale matrix computations toward solving big data problems in geospatial statistics for climate/weather forecasting applications.

  1. Exploiting Data Sparsity for Large-Scale Matrix Computations

    KAUST Repository

    Akbudak, Kadir; Ltaief, Hatem; Mikhalev, Aleksandr; Charara, Ali; Keyes, David E.

    2018-01-01

    Exploiting data sparsity in dense matrices is an algorithmic bridge between architectures that are increasingly memory-austere on a per-core basis and extreme-scale applications. The Hierarchical matrix Computations on Manycore Architectures (HiCMA) library tackles this challenging problem by achieving significant reductions in time to solution and memory footprint, while preserving a specified accuracy requirement of the application. HiCMA provides a high-performance implementation on distributed-memory systems of one of the most widely used matrix factorization in large-scale scientific applications, i.e., the Cholesky factorization. It employs the tile low-rank data format to compress the dense data-sparse off-diagonal tiles of the matrix. It then decomposes the matrix computations into interdependent tasks and relies on the dynamic runtime system StarPU for asynchronous out-of-order scheduling, while allowing high user-productivity. Performance comparisons and memory footprint on matrix dimensions up to eleven million show a performance gain and memory saving of more than an order of magnitude for both metrics on thousands of cores, against state-of-the-art open-source and vendor optimized numerical libraries. This represents an important milestone in enabling large-scale matrix computations toward solving big data problems in geospatial statistics for climate/weather forecasting applications.

  2. Neutron-deuteron scattering calculations with W-matrix representation of the two-body input

    International Nuclear Information System (INIS)

    Bartnik, E.A.; Haberzettl, H.; Januschke, T.; Kerwath, U.; Sandhas, W.

    1987-05-01

    Employing the W-matrix representation of the partial-wave T matrix introduced by Bartnik, Haberzettl, and Sandhas, we show for the example of the Malfliet-Tjon potentials I and III that the single-term separable part of the W-matrix representation, when used as input in three-nucleon neutron-deuteron scattering calculations, is fully capable of reproducing the exact results obtained by Kloet and Tjon. This approximate two-body input not only satisfies the two-body off-shell unitarity relation but, moreover, it also contains a parameter which may be used in optimizing the three-body data. We present numerical evidence that there exists a variational (minimum) principle for the determination of the three-body binding energy which allows one to choose this parameter also in the absence of an exact reference calculation. Our results for neutron-deuteron scattering show that it is precisely this choice of the parameter which provides optimal scattering data. We conclude that the W-matrix approach, despite its simplicity, is a remarkably efficient tool for high-quality three-nucleon calculations. (orig.)

  3. Strong synergistic effects in PLA/PCL blends: Impact of PLA matrix viscosity.

    Science.gov (United States)

    Ostafinska, Aleksandra; Fortelný, Ivan; Hodan, Jiří; Krejčíková, Sabina; Nevoralová, Martina; Kredatusová, Jana; Kruliš, Zdeněk; Kotek, Jiří; Šlouf, Miroslav

    2017-05-01

    Blends of two biodegradable polymers, poly(lactic acid) (PLA) and poly(ϵ-caprolactone) (PCL), with a strong synergistic improvement in mechanical performance were prepared by melt-mixing using the optimized composition (80/20) and the optimized preparation procedure (melt-mixing followed by compression molding) according to our previous study. Three different PLA polymers were employed, whose viscosity decreased in the following order: PCL ≈ PLA1 > PLA2 > PLA3. The blends with the highest viscosity matrix (PLA1/PCL) exhibited the smallest PCL particles (d ∼ 0.6 μm), an elastic-plastic stable fracture (as determined from instrumented impact testing) and the strongest synergistic improvement in toughness (>16× with respect to pure PLA, exceeding even the toughness of pure PCL). According to the available literature, this was the highest toughness improvement in non-compatibilized PLA/PCL blends ever achieved. The decrease in matrix viscosity resulted in an increase in the average PCL particle size and a dramatic decrease in the overall toughness: the completely stable fracture (for PLA1/PCL) changed to stable fracture followed by unstable crack propagation (for PLA2/PCL) and finally to completely brittle fracture (for PLA3/PCL). The stiffness of all blends remained at a well acceptable level, slightly above the theoretical predictions based on the equivalent box model. Despite several previous studies, the results confirmed that PLA and PCL can behave as compatible polymers, but the final PLA/PCL toughness is extremely sensitive to the PCL particle size distribution, which is influenced by both the processing conditions and the PLA viscosity. PLA/PCL blends with high stiffness (due to PLA) and toughness (due to PCL) are very promising materials for medical applications, namely for bone tissue engineering. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Interactive Reliability-Based Optimal Design

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard; Thoft-Christensen, Palle; Siemaszko, A.

    1994-01-01

    Interactive design/optimization of large, complex structural systems is considered. The objective function is assumed to model the expected costs. The constraints are reliability-based and/or related to deterministic code requirements. Solution of this optimization problem is divided into four main tasks, namely finite element analyses, sensitivity analyses, reliability analyses and application of an optimization algorithm. In the paper it is shown how these four tasks can be linked effectively and how existing information on design variables, Lagrange multipliers and the Hessian matrix can...

  5. Hybrid transfer-matrix FDTD method for layered periodic structures.

    Science.gov (United States)

    Deinega, Alexei; Belousov, Sergei; Valuev, Ilya

    2009-03-15

    A hybrid transfer-matrix finite-difference time-domain (FDTD) method is proposed for modeling the optical properties of finite-width planar periodic structures. This method can also be applied for calculation of the photonic bands in infinite photonic crystals. We describe the procedure of evaluating the transfer-matrix elements by a special numerical FDTD simulation. The accuracy of the new method is tested by comparing computed transmission spectra of a 32-layered photonic crystal composed of spherical or ellipsoidal scatterers with the results of direct FDTD and layer-multiple-scattering calculations.
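
    For comparison, the purely analytic 2×2 characteristic-matrix method for a lossless dielectric multilayer at normal incidence is sketched below (the hybrid method of the paper instead extracts transfer-matrix elements from FDTD runs). The quarter-wave stack, refractive indices and wavelengths are illustrative values.

      import numpy as np

      def layer_matrix(n, d, lam):
          """Characteristic matrix of a homogeneous layer at normal incidence."""
          delta = 2 * np.pi * n * d / lam
          return np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                           [1j * n * np.sin(delta), np.cos(delta)]])

      def reflectance(layers, n_in, n_out, lam):
          """Reflectance of a layer stack [(n, d), ...] between media n_in and n_out."""
          M = np.eye(2, dtype=complex)
          for n, d in layers:
              M = M @ layer_matrix(n, d, lam)
          B, C = M @ np.array([1.0, n_out])
          r = (n_in * B - C) / (n_in * B + C)
          return abs(r)**2

      # Quarter-wave Bragg mirror designed at 600 nm (illustrative indices).
      lam0, n_hi, n_lo = 600.0, 2.3, 1.45
      pair = [(n_hi, lam0 / (4 * n_hi)), (n_lo, lam0 / (4 * n_lo))]
      stack = pair * 8                                   # 8 high/low index pairs

      for lam in (500.0, 600.0, 700.0):
          R = reflectance(stack, n_in=1.0, n_out=1.5, lam=lam)
          print(f"lambda = {lam:5.1f} nm:  R = {R:.4f},  T = {1 - R:.4f} (lossless stack)")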

  6. Comparison of matrix methods for elastic wave scattering problems

    International Nuclear Information System (INIS)

    Tsao, S.J.; Varadan, V.K.; Varadan, V.V.

    1983-01-01

    This article briefly describes the T-matrix method and the MOOT (method of optimal truncation) of elastic wave scattering as they apply to 2-D SH-wave problems as well as 3-D elastic wave problems. The two methods are compared for scattering by elliptical cylinders as well as oblate spheroids of various eccentricity as a function of frequency. Convergence and symmetry of the scattering cross section are also compared for ellipses and spheroidal cavities of different aspect ratios. Both the T-matrix approach and the MOOT were programmed on an Amdahl 470 computer using double precision arithmetic. Although the T-matrix method and MOOT are not always in agreement, it is in no way implied that any of the published results using MOOT are in error.

  7. Fumed silica nanoparticle mediated biomimicry for optimal cell-material interactions for artificial organ development.

    Science.gov (United States)

    de Mel, Achala; Ramesh, Bala; Scurr, David J; Alexander, Morgan R; Hamilton, George; Birchall, Martin; Seifalian, Alexander M

    2014-03-01

    Replacement of organs irreversibly damaged by chronic disease with suitable tissue engineered implants is now a familiar area of interest to clinicians and multidisciplinary scientists. Ideal tissue engineering approaches require scaffolds to be tailor made to mimic the physiological environments of interest, with specific surface topographical and biological properties for optimal cell-material interactions. This study demonstrates a single-step procedure for inducing biomimicry in a novel nanocomposite base material scaffold, to re-create the extracellular matrix required for stem cell integration and differentiation into mature cells. The fumed silica nanoparticle mediated procedure of scaffold functionalization can potentially be adapted with multiple bioactive molecules to induce cellular biomimicry in the development of human organs. The proposed nanocomposite material is already in patients in a number of implants, including the world's first synthetic trachea, tear ducts and a vascular bypass graft. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Minimally invasive esthetic ridge preservation with growth-factor enhanced bone matrix.

    Science.gov (United States)

    Nevins, Marc L; Said, Sherif

    2017-12-28

    Extraction socket preservation procedures are critical to successful esthetic implant therapy. Conventional surgical approaches are technique sensitive and often result in alteration of the soft tissue architecture, which then requires additional corrective surgical procedures. This case series report presents the ability of flapless surgical techniques combined with a growth factor-enhanced bone matrix to provide esthetic ridge preservation at the time of extraction for compromised sockets. When considering esthetic dental implant therapy, preservation, or further enhancement of the available tissue support at the time of tooth extraction may provide an improved esthetic outcome with reduced postoperative sequelae and decreased treatment duration. Advances in minimally invasive surgical techniques combined with recombinant growth factor technology offer an alternative for bone reconstruction while maintaining the gingival architecture for enhanced esthetic outcome. The combination of freeze-dried bone allograft (FDBA) and rhPDGF-BB (platelet-derived growth factor-BB) provides a growth-factor enhanced matrix to induce bone and soft tissue healing. The use of a growth-factor enhanced matrix is an option for minimally invasive ridge preservation procedures for sites with advanced bone loss. Further studies including randomized clinical trials are needed to better understand the extent and limits of these procedures. The use of minimally invasive techniques with growth factors for esthetic ridge preservation reduces patient morbidity associated with more invasive approaches and increases the predictability for enhanced patient outcomes. By reducing the need for autogenous bone grafts the use of this technology is favorable for patient acceptance and ease of treatment process for esthetic dental implant therapy. © 2017 Wiley Periodicals, Inc.

  9. Linear systems optimal and robust control

    CERN Document Server

    Sinha, Alok

    2007-01-01

    Introduction Overview Contents of the Book State Space Description of a Linear System Transfer Function of a Single Input/Single Output (SISO) System State Space Realizations of a SISO System SISO Transfer Function from a State Space Realization Solution of State Space Equations Observability and Controllability of a SISO System Some Important Similarity Transformations Simultaneous Controllability and Observability Multiinput/Multioutput (MIMO) Systems State Space Realizations of a Transfer Function Matrix Controllability and Observability of a MIMO System Matrix-Fraction Description (MFD) MFD of a Transfer Function Matrix for the Minimal Order of a State Space Realization Controller Form Realization from a Right MFD Poles and Zeros of a MIMO Transfer Function Matrix Stability Analysis State Feedback Control and Optimization State Variable Feedback for a Single Input System Computation of State Feedback Gain Matrix for a Multiinput System State Feedback Gain Matrix for a Multi...

  10. Determination of As, Cd, and Pb in Tap Water and Bottled Water Samples by Using Optimized GFAAS System with Pd-Mg and Ni as Matrix Modifiers

    Directory of Open Access Journals (Sweden)

    Sezgin Bakırdere

    2013-01-01

    Full Text Available Arsenic, lead, and cadmium were determined at trace levels in tap and bottled water samples consumed in the western part of Turkey. Graphite furnace atomic absorption spectrometry (GFAAS) was used in all detections. All of the system parameters for each element were optimized to increase sensitivity. A Pd-Mg mixture was selected as the best matrix modifier for As, while the highest signals for Pb and Cd were obtained when Ni was used as the matrix modifier. Detection limits for As, Cd, and Pb were found to be 2.0, 0.036, and 0.25 ng/mL, respectively. 78 tap water samples and 17 different brands of bottled water were analyzed for their As, Cd, and Pb contents under the optimized conditions. In all water samples, the concentration of cadmium was found to be lower than the detection limit. Lead concentrations in the samples analyzed varied between N.D. and 12.66 ± 0.68 ng/mL. The highest concentration of arsenic was determined to be 11.54 ± 2.79 ng/mL. Accuracy of the methods was verified by using a certified reference material, namely Trace Elements in Water, 1643e. Results found for As, Cd, and Pb in the reference material were in satisfactory agreement with the certified values.

  11. Influence of model errors in optimal sensor placement

    Science.gov (United States)

    Vincenzi, Loris; Simonini, Laura

    2017-02-01

    The paper investigates the role of model errors and parametric uncertainties in optimal or near optimal sensor placements for structural health monitoring (SHM) and modal testing. The near optimal set of measurement locations is obtained by the Information Entropy theory; the results of placement process considerably depend on the so-called covariance matrix of prediction error as well as on the definition of the correlation function. A constant and an exponential correlation function depending on the distance between sensors are firstly assumed; then a proposal depending on both distance and modal vectors is presented. With reference to a simple case-study, the effect of model uncertainties on results is described and the reliability and the robustness of the proposed correlation function in the case of model errors are tested with reference to 2D and 3D benchmark case studies. A measure of the quality of the obtained sensor configuration is considered through the use of independent assessment criteria. In conclusion, the results obtained by applying the proposed procedure on a real 5-spans steel footbridge are described. The proposed method also allows to better estimate higher modes when the number of sensors is greater than the number of modes of interest. In addition, the results show a smaller variation in the sensor position when uncertainties occur.
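
    A common simplification of the entropy-based criterion, when the prediction error is Gaussian with uncorrelated components, is to maximize the determinant of the Fisher information of the selected mode-shape rows. The sketch below performs greedy forward selection on synthetic beam mode shapes with a small ridge term; it does not include the spatially correlated error models studied in the paper, and the mode shapes and sensor count are assumptions.

      import numpy as np

      n_dof, n_modes, n_sensors = 40, 4, 6

      # Synthetic mode shapes of a simply supported beam sampled at n_dof points.
      x = np.linspace(0.0, 1.0, n_dof)
      Phi = np.column_stack([np.sin((m + 1) * np.pi * x) for m in range(n_modes)])

      def logdet_fim(rows, eps=1e-6):
          """log det of the Fisher information Phi_s^T Phi_s for candidate sensor rows."""
          Ps = Phi[rows]
          # Small ridge keeps the determinant defined while fewer rows than modes are selected.
          return np.linalg.slogdet(Ps.T @ Ps + eps * np.eye(n_modes))[1]

      selected, remaining = [], list(range(n_dof))
      for _ in range(n_sensors):
          # Greedy forward selection: add the DOF giving the largest information gain.
          best = max(remaining, key=lambda i: logdet_fim(selected + [i]))
          selected.append(best)
          remaining.remove(best)

      print("selected sensor DOFs:", sorted(selected))
      print("final log det(FIM):", logdet_fim(selected))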

  12. Fracture Toughness of Carbon Nanotube-Reinforced Metal- and Ceramic-Matrix Composites

    International Nuclear Information System (INIS)

    Chen, Y.L.; Liu, B.; Hwang, K.C.; Chen, Y.L.; Huang, Y.

    2011-01-01

    Hierarchical analysis of the fracture toughness enhancement of carbon nanotube- (CNT-) reinforced hard matrix composites is carried out on the basis of shear-lag theory and fracture mechanics. It is found that stronger CNT/matrix interfaces do not necessarily lead to better fracture toughness of these composites, and the optimal interfacial chemical bond density is the one that places the failure mode just at the transition from CNT pull-out to CNT break. For hard matrix composites, the fracture toughness of composites with weak interfaces can be improved effectively by increasing the CNT length. However, for soft matrix composites, the fracture toughness improvement due to the reinforcing CNTs quickly becomes saturated with an increase in CNT length. The proposed theoretical model is also applicable to short fiber-reinforced composites.

  13. Compliance matrix for the mixed waste disposal facilities, Trenches 31 & 34, burial ground 218-W-5

    International Nuclear Information System (INIS)

    Carlyle, D.W.

    1994-01-01

    The purpose of the Trench 31 & 34 Mixed Waste Disposal Facility Compliance Matrix is to provide objective evidence of implementation of all regulatory and procedural-institutional requirements for the disposal facilities. This matrix provides a listing of the individual regulatory and procedural-institutional requirements that were addressed. Subject matter experts reviewed pertinent documents that had direct or indirect impact on the facility. Those found to be applicable were so noted and listed in Appendix A. Subject matter experts then extracted individual requirements from the documents deemed applicable and listed them in the matrix tables. The results of this effort are documented in Appendix B.

  14. On-matrix derivatization extraction of chemical weapons convention relevant alcohols from soil.

    Science.gov (United States)

    Chinthakindi, Sridhar; Purohit, Ajay; Singh, Varoon; Dubey, D K; Pardasani, Deepak

    2013-10-11

    The present study deals with the on-matrix derivatization-extraction of aminoalcohols and thiodiglycols, which are important precursors and/or degradation products of the VX-analogue and vesicant classes of chemical warfare agents (CWAs). The method involved hexamethyldisilazane (HMDS) mediated in situ silylation of analytes on the soil. Subsequent extraction and gas chromatography-mass spectrometry analysis of derivatized analytes offered better recoveries in comparison to the procedure recommended by the Organization for the Prohibition of Chemical Weapons (OPCW). Various experimental conditions such as extraction solvent, reagent and catalyst amount, reaction time and temperature were optimized. The best recoveries of analytes, ranging from 45% to 103%, were obtained with DCM solvent containing 5% v/v HMDS and 0.01% w/v iodine as catalyst. The limits of detection (LOD) and limits of quantification (LOQ) for the selected analytes ranged from 8 to 277 and 21 to 665 ng mL(-1), respectively, in selected ion monitoring mode. Copyright © 2013 Elsevier B.V. All rights reserved.

  15. An Optimization Method for Condition Based Maintenance of Aircraft Fleet Considering Prognostics Uncertainty

    Directory of Open Access Journals (Sweden)

    Qiang Feng

    2014-01-01

    Full Text Available An optimization method for condition based maintenance (CBM) of an aircraft fleet considering prognostics uncertainty is proposed. The CBM and dispatch process of the aircraft fleet is analyzed first, and the alternative strategy sets for a single aircraft are given. Then, the optimization problem of fleet CBM with lower maintenance cost and dispatch risk is translated into the combinatorial optimization problem of single-aircraft strategies. The remaining useful life (RUL) distribution of the key line replaceable module (LRM) has been transformed into the failure probability of the aircraft, and the fleet health status matrix is established. The calculation method of the costs and risks for a mission, based on the health status matrix and the maintenance matrix, is given. Further, an optimization method for fleet dispatch and CBM under acceptable risk is proposed based on an improved genetic algorithm. Finally, a fleet of 10 aircraft is studied to verify the proposed method. The results show that it can realize optimization and control of the aircraft fleet oriented to mission success.

  16. Vibration analysis of pipes conveying fluid by transfer matrix method

    International Nuclear Information System (INIS)

    Li, Shuai-jun; Liu, Gong-min; Kong, Wei-tao

    2014-01-01

    Highlights: • A theoretical study on vibration analysis of pipes with FSI is presented. • Pipelines with high fluid pressure and velocity can be solved by the developed method. • Several pipeline schemes are discussed to illustrate the application of the method. • The proposed method is easier to apply compared to most existing procedures. • Influence laws of structural and fluid parameters on FSI of pipes are analyzed. -- Abstract: Considering the effects of pipe wall thickness, fluid pressure and velocity, a developed 14-equation model is presented, which describes the fluid–structure interaction behavior of pipelines. The transfer matrix method has been used for numerical modeling of both the hydraulic and structural equations. Based on these models and algorithms, several pipeline schemes are presented to illustrate the application of the proposed method. Furthermore, the influence laws of supports, structural properties and fluid parameters on the dynamic response and natural frequencies of pipelines are analyzed, which shows that using optimal supports and structural properties is beneficial for reducing the vibration of pipelines.
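
    The 14-equation FSI model of the record is beyond a short example, but the transfer matrix idea itself can be shown on the simplest possible pipe-like member: axial vibration of a uniform rod. The sketch below chains segment transfer matrices, imposes fixed-free boundary conditions and searches for the natural frequencies; all material data are illustrative, and the analytical result f_n = (2n-1)c/(4L) is quoted only as a check.

```python
import numpy as np
from scipy.optimize import brentq

E, rho, A, L = 210e9, 7850.0, 1e-4, 2.0       # illustrative steel rod: modulus, density, area, length
c = np.sqrt(E / rho)                          # axial wave speed

def segment_matrix(omega, length):
    """Field transfer matrix of a uniform segment relating the state [u, N]
    (axial displacement, axial force) at its two ends."""
    k = omega / c
    return np.array([[np.cos(k * length), np.sin(k * length) / (E * A * k)],
                     [-E * A * k * np.sin(k * length), np.cos(k * length)]])

def frequency_function(omega, n_seg=4):
    """Chain the segment matrices over the whole rod and impose fixed-free
    boundaries, u(0) = 0 and N(L) = 0: natural frequencies satisfy T[1, 1] = 0."""
    T = np.eye(2)
    for _ in range(n_seg):
        T = segment_matrix(omega, L / n_seg) @ T
    return T[1, 1]

# Bracket sign changes of the frequency function and refine the roots.
omegas = np.linspace(10.0, 22000.0, 4000)
vals = [frequency_function(w) for w in omegas]
roots = [brentq(frequency_function, omegas[i], omegas[i + 1])
         for i in range(len(omegas) - 1) if vals[i] * vals[i + 1] < 0]
print("natural frequencies [Hz]:", [round(r / (2 * np.pi), 1) for r in roots])
print("analytical fixed-free rod [Hz]:", [round((2 * n - 1) * c / (4 * L), 1) for n in (1, 2, 3)])
```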

  17. Fast matrix multiplication and its algebraic neighbourhood

    Science.gov (United States)

    Pan, V. Ya.

    2017-11-01

    Matrix multiplication is among the most fundamental operations of modern computations. By 1969 it was still commonly believed that the classical algorithm was optimal, although the experts already knew that this was not so. Worldwide interest in matrix multiplication instantly exploded in 1969, when Strassen decreased the exponent 3 of cubic time to 2.807. Then everyone expected to see matrix multiplication performed in quadratic or nearly quadratic time very soon. Further progress, however, turned out to be capricious. It was at a stalemate for almost a decade, then a combination of surprising techniques (completely independent of Strassen's original ones and much more advanced) enabled a new decrease of the exponent in 1978-1981 and then again in 1986, to 2.376. By 2017 the exponent had still not passed through the barrier of 2.373, but most disturbing was the curse of recursion — even the decrease of exponents below 2.7733 required numerous recursive steps, and each of them squared the problem size. As a result, all algorithms supporting such exponents supersede the classical algorithm only for inputs of immense sizes, far beyond any potential interest for the user. We survey the long study of fast matrix multiplication, focusing on neglected algorithms for feasible matrix multiplication. We comment on their design, the techniques involved, implementation issues, the impact of their study on the modern theory and practice of Algebraic Computations, and perspectives for fast matrix multiplication. Bibliography: 163 titles.
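
    As a concrete reminder of what the 1969 breakthrough looks like in practice, the sketch below implements the textbook Strassen scheme (seven recursive block products instead of eight) with a cutoff to ordinary multiplication. It illustrates only the classical Strassen algorithm, not the later, more advanced exponent records discussed in the survey; sizes and the cutoff are arbitrary choices.

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Strassen recursion with a cutoff to plain numpy matmul.
    Assumes square matrices whose size is a power of two (pad otherwise)."""
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # The seven Strassen products (instead of eight block products).
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

A = np.random.rand(256, 256)
B = np.random.rand(256, 256)
print(np.allclose(strassen(A, B), A @ B))   # True, up to floating-point round-off
```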

  18. Optimization of simulated moving bed (SMB) chromatography: a multi-level optimization procedure

    DEFF Research Database (Denmark)

    Jørgensen, Sten Bay; Lim, Young-il

    2004-01-01

    objective functions (productivity and desorbent consumption), employing the standing wave analysis, the true moving bed (TMB) model and the simulated moving bed (SMB) model. The procedure is constructed on a non-worse solution property advancing level by level and its solution does not mean a global optimum...

  19. Diagonalizing sensing matrix of broadband RSE

    International Nuclear Information System (INIS)

    Sato, Shuichi; Kokeyama, Keiko; Kawazoe, Fumiko; Somiya, Kentaro; Kawamura, Seiji

    2006-01-01

    For a broadband-operated RSE interferometer, a simple and smart length sensing and control scheme was newly proposed. The sensing matrix could be diagonal, owing to a simple allocation of two RF modulations and to a macroscopic displacement of cavity mirrors, which cause a detuning of the RF modulation sidebands. In this article, the idea of the sensing scheme and an optimization of the relevant parameters will be described

  20. GRAVITATIONAL LENS MODELING WITH GENETIC ALGORITHMS AND PARTICLE SWARM OPTIMIZERS

    International Nuclear Information System (INIS)

    Rogers, Adam; Fiege, Jason D.

    2011-01-01

    Strong gravitational lensing of an extended object is described by a mapping from source to image coordinates that is nonlinear and cannot generally be inverted analytically. Determining the structure of the source intensity distribution also requires a description of the blurring effect due to a point-spread function. This initial study uses an iterative gravitational lens modeling scheme based on the semilinear method to determine the linear parameters (source intensity profile) of a strongly lensed system. Our 'matrix-free' approach avoids construction of the lens and blurring operators while retaining the least-squares formulation of the problem. The parameters of an analytical lens model are found through nonlinear optimization by an advanced genetic algorithm (GA) and particle swarm optimizer (PSO). These global optimization routines are designed to explore the parameter space thoroughly, mapping model degeneracies in detail. We develop a novel method that determines the L-curve for each solution automatically, which represents the trade-off between the image χ² and regularization effects, and allows an estimate of the optimally regularized solution for each lens parameter set. In the final step of the optimization procedure, the lens model with the lowest χ² is used while the global optimizer solves for the source intensity distribution directly. This allows us to accurately determine the number of degrees of freedom in the problem to facilitate comparison between lens models and enforce positivity on the source profile. In practice, we find that the GA conducts a more thorough search of the parameter space than the PSO.

  1. Use of gradient dilution to flag and overcome matrix interferences in axial-viewing inductively coupled plasma-atomic emission spectrometry

    International Nuclear Information System (INIS)

    Cheung, Yan; Schwartz, Andrew J.; Hieftje, Gary M.

    2014-01-01

    Despite the undisputed power of inductively coupled plasma-atomic emission spectrometry (ICP-AES), its users still face serious challenges in obtaining accurate analytical results. Matrix interference is perhaps the most important challenge. Dilution of a matrix-containing sample is a common practice to reduce matrix interference. However, determining the optimal dilution factor requires tedious and time-consuming offline sample preparation, since emission lines and the effect of matrix interferences are affected differently by the dilution. The current study exploits this difference by employing a high-performance liquid chromatography gradient pump prior to the nebulizer to perform on-line mixing of a sample solution and diluent. Linear gradient dilution is performed on both the calibration standard and the matrix-containing sample. By ratioing the signals from two emission lines (from the same or different elements) as a function of dilution factor, the analyst can not only identify the presence of a matrix interference, but also determine the optimal dilution factor needed to overcome the interference. A ratio that does not change with dilution signals the absence of a matrix interference, whereas a changing ratio indicates the presence of an interference. The point on the dilution profile where the ratio stabilizes indicates the optimal dilution factor to correct the interference. The current study was performed on axial-viewing ICP-AES with o-xylene as the solvent

  2. Cucheb: A GPU implementation of the filtered Lanczos procedure

    Science.gov (United States)

    Aurentz, Jared L.; Kalantzis, Vassilis; Saad, Yousef

    2017-11-01

    This paper describes the software package Cucheb, a GPU implementation of the filtered Lanczos procedure for the solution of large sparse symmetric eigenvalue problems. The filtered Lanczos procedure uses a carefully chosen polynomial spectral transformation to accelerate convergence of the Lanczos method when computing eigenvalues within a desired interval. This method has proven particularly effective for eigenvalue problems that arise in electronic structure calculations and density functional theory. We compare our implementation against an equivalent CPU implementation and show that using the GPU can reduce the computation time by more than a factor of 10. Program summary: Program title: Cucheb. Program files doi: http://dx.doi.org/10.17632/rjr9tzchmh.1. Licensing provisions: MIT. Programming language: CUDA C/C++. Nature of problem: Electronic structure calculations require the computation of all eigenvalue-eigenvector pairs of a symmetric matrix that lie inside a user-defined real interval. Solution method: To compute all the eigenvalues within a given interval, a polynomial spectral transformation is constructed that maps the desired eigenvalues of the original matrix to the exterior of the spectrum of the transformed matrix. The Lanczos method is then used to compute the desired eigenvectors of the transformed matrix, which are then used to recover the desired eigenvalues of the original matrix. The bulk of the operations are executed in parallel using a graphics processing unit (GPU). Runtime: Variable, depending on the number of eigenvalues sought and the size and sparsity of the matrix. Additional comments: Cucheb is compatible with CUDA Toolkit v7.0 or greater.
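
    Cucheb itself is CUDA C/C++, so the following is only a CPU sketch of the filtered-Lanczos idea it implements: a Chebyshev polynomial filter is built for a target interval and the Lanczos iteration (here SciPy's ARPACK wrapper) is run on the filtered operator. The test matrix, interval, filter degree and number of requested pairs are all illustrative choices, not values from the paper.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")  # 1D Laplacian
a, b = 0.5, 0.7                      # eigenvalues wanted inside this interval
lmin, lmax = 0.0, 4.0                # known spectral bounds of this particular matrix
c, e = (lmax + lmin) / 2.0, (lmax - lmin) / 2.0
degree = 80                          # illustrative filter degree

# Chebyshev expansion coefficients of the indicator function of [a, b].
ta, tb = np.arccos((a - c) / e), np.arccos((b - c) / e)
k = np.arange(1, degree + 1)
coef = np.empty(degree + 1)
coef[0] = (ta - tb) / np.pi
coef[1:] = 2.0 * (np.sin(k * ta) - np.sin(k * tb)) / (k * np.pi)

def filtered_matvec(x):
    """Apply p(A) x, where p approximates the indicator of [a, b] on the spectrum."""
    y_prev = x
    y_curr = (A @ x - c * x) / e
    acc = coef[0] * y_prev + coef[1] * y_curr
    for j in range(2, degree + 1):
        y_next = 2.0 * (A @ y_curr - c * y_curr) / e - y_prev
        acc += coef[j] * y_next
        y_prev, y_curr = y_curr, y_next
    return acc

P = spla.LinearOperator((n, n), matvec=filtered_matvec, dtype=float)
_, V = spla.eigsh(P, k=30, which="LA")          # Lanczos on the filtered operator

# Rayleigh-Ritz step on the original A recovers the wanted eigenvalues.
ritz = np.linalg.eigvalsh(V.T @ (A @ V))
print(np.sort(ritz[(ritz >= a) & (ritz <= b)]))
```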

  3. Local System Matrix Compression for Efficient Reconstruction in Magnetic Particle Imaging

    Directory of Open Access Journals (Sweden)

    T. Knopp

    2015-01-01

    Full Text Available Magnetic particle imaging (MPI) is a quantitative method for determining the spatial distribution of magnetic nanoparticles, which can be used as tracers for cardiovascular imaging. For reconstructing a spatial map of the particle distribution, the system matrix describing the magnetic particle imaging equation has to be known. Due to the complex dynamic behavior of the magnetic particles, the system matrix is commonly measured in a calibration procedure. In order to speed up the reconstruction process, recently, a matrix compression technique has been proposed that makes use of a basis transformation in order to compress the MPI system matrix. By thresholding the resulting matrix and storing the remaining entries in compressed row storage format, only a fraction of the data has to be processed when reconstructing the particle distribution. In the present work, it is shown that the image quality of the algorithm can be considerably improved by using a local threshold for each matrix row instead of a global threshold for the entire system matrix.
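
    The record's compression pipeline combines a basis transformation with thresholding; the fragment below illustrates only the step it emphasizes, row-wise (local) thresholding followed by compressed-sparse-row storage, using SciPy and an invented dense matrix in place of a measured MPI system matrix.

```python
import numpy as np
from scipy.sparse import csr_matrix

def compress_system_matrix(S, keep_fraction=0.05):
    """Row-wise (local) thresholding of a dense system matrix S:
    in every row only the largest-magnitude entries are kept, and the
    result is stored in compressed sparse row (CSR) format."""
    S = np.asarray(S)
    mask = np.zeros_like(S, dtype=bool)
    k = max(1, int(keep_fraction * S.shape[1]))
    for r in range(S.shape[0]):
        # local threshold: the k-th largest magnitude of this particular row
        idx = np.argpartition(np.abs(S[r]), -k)[-k:]
        mask[r, idx] = True
    return csr_matrix(np.where(mask, S, 0.0))

# Toy "system matrix" whose rows have very different dynamic ranges.
rng = np.random.default_rng(1)
S = rng.standard_normal((200, 400)) * rng.uniform(0.01, 10.0, size=(200, 1))
S_c = compress_system_matrix(S, keep_fraction=0.05)
print(S_c.nnz, "of", S.size, "entries kept")
```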

  4. Optimization the machining parameters by using VIKOR and Entropy Weight method during EDM process of Al–18% SiCp Metal matrix composite

    Directory of Open Access Journals (Sweden)

    Rajesh Kumar Bhuyan

    2016-06-01

    Full Text Available The objective of this paper is to optimize the process parameters by a combined approach of VIKOR and the Entropy weight measurement method during the electrical discharge machining (EDM) process of Al-18wt.%SiCp metal matrix composite (MMC). The central composite design (CCD) method is considered to evaluate the effect of three process parameters, namely pulse on time (Ton), peak current (Ip) and flushing pressure (Fp), on responses like material removal rate (MRR), tool wear rate (TWR), radial over cut (ROC) and surface roughness (Ra). The Entropy weight measurement method evaluates the individual weights of each response and, using the VIKOR method, the multi-objective responses are optimized to get a single numerical index known as the VIKOR index. Then the analysis of variance (ANOVA) technique is used to determine the significance of the process parameters on the VIKOR index. Finally, the result of the VIKOR index is validated by a confirmation test using the linear mathematical model equation developed by response surface methodology to identify the effectiveness of the proposed method.
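
    A minimal NumPy sketch of the two numerical ingredients named in the record, entropy weights and the VIKOR index, is given below. The decision matrix of EDM responses (MRR to be maximized; TWR, ROC and Ra to be minimized) is invented for the example and is not the authors' experimental data.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weights for a decision matrix X (alternatives x criteria), entries > 0."""
    m = X.shape[0]
    P = X / X.sum(axis=0)                        # column-normalized proportions
    E = -(P * np.log(P + 1e-12)).sum(axis=0) / np.log(m)
    d = 1.0 - E                                  # degree of diversification
    return d / d.sum()

def vikor_index(X, w, benefit, v=0.5):
    """VIKOR index Q for each alternative (smaller is better)."""
    f_best = np.where(benefit, X.max(axis=0), X.min(axis=0))
    f_worst = np.where(benefit, X.min(axis=0), X.max(axis=0))
    norm = (f_best - X) / (f_best - f_worst)     # normalized regret per criterion
    S = (w * norm).sum(axis=1)                   # group utility
    R = (w * norm).max(axis=1)                   # individual regret
    return v * (S - S.min()) / (S.max() - S.min()) + \
        (1 - v) * (R - R.min()) / (R.max() - R.min())

# Hypothetical responses for 5 parameter settings: MRR (maximize), TWR, ROC, Ra (minimize).
X = np.array([[12.1, 0.8, 0.12, 3.1],
              [15.4, 1.1, 0.15, 3.8],
              [ 9.7, 0.5, 0.09, 2.6],
              [18.2, 1.6, 0.21, 4.4],
              [14.0, 0.9, 0.13, 3.3]])
benefit = np.array([True, False, False, False])
w = entropy_weights(X)
print("weights:", w.round(3))
print("VIKOR index:", vikor_index(X, w, benefit).round(3))
```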

  5. Methodological approach to strategic performance optimization

    OpenAIRE

    Hell, Marko; Vidačić, Stjepan; Garača, Željko

    2009-01-01

    This paper presents a matrix approach to the measuring and optimization of organizational strategic performance. The proposed model is based on the matrix presentation of strategic performance, which follows the theoretical notions of the balanced scorecard (BSC) and strategy map methodologies, initially developed by Kaplan and Norton. Development of a quantitative record of strategic objectives provides an arena for the application of linear programming (LP), which is a mathematical tech...

  6. Optimal learning with Bernstein online aggregation

    DEFF Research Database (Denmark)

    Wintenberger, Olivier

    2017-01-01

    batch version achieves the fast rate of convergence log (M) / n in deviation. The BOA procedure is the first online algorithm that satisfies this optimal fast rate. The second order refinement is required to achieve the optimality in deviation as the classical exponential weights cannot be optimal, see...... is shown to be sufficiently small to assert the fast rate in the iid setting when the loss is Lipschitz and strongly convex. We also introduce a multiple learning rates version of BOA. This fully adaptive BOA procedure is also optimal, up to a log log (n) factor....

  7. Comparison of VFA titration procedures used for monitoring the biogas process.

    Science.gov (United States)

    Lützhøft, Hans-Christian Holten; Boe, Kanokwan; Fang, Cheng; Angelidaki, Irini

    2014-05-01

    Titrimetric determination of volatile fatty acids (VFAs) contents is a common way to monitor a biogas process. However, digested manure from co-digestion biogas plants has a complex matrix with high concentrations of interfering components, resulting in varying results when using different titration procedures. Currently, no standardized procedure is used and it is therefore difficult to compare the performance among plants. The aim of this study was to evaluate four titration procedures (for determination of VFA levels of digested manure samples) and compare results with gas chromatographic (GC) analysis. Two of the procedures are commonly used in biogas plants and two are discussed in literature. The results showed that the optimal titration results were obtained when 40 mL of four times diluted digested manure was gently stirred (200 rpm). Results from samples with different VFA concentrations (1-11 g/L) showed linear correlation between titration results and GC measurements. However, determination of VFA by titration generally overestimated the VFA contents compared with GC measurements when samples had low VFA concentrations, i.e. around 1 g/L. The accuracy of titration increased when samples had high VFA concentrations, i.e. around 5 g/L. It was further found that the studied ionisable interfering components had the lowest effect on titration when the sample had a high VFA concentration. In contrast, bicarbonate, phosphate and lactate had a significant effect on titration accuracy at low VFA concentration. An extended 5-point titration procedure with pH correction was best to handle interferences from bicarbonate, phosphate and lactate at low VFA concentrations. Conversely, the simplest titration procedure with only two pH end-points showed the highest accuracy among all titration procedures at high VFA concentrations. All in all, if the composition of the digested manure sample is not known, the procedure with only two pH end-points should be the procedure of

  8. Optimal truss and frame design from projected homogenization-based topology optimization

    DEFF Research Database (Denmark)

    Larsen, S. D.; Sigmund, O.; Groen, J. P.

    2018-01-01

    In this article, we propose a novel method to obtain a near-optimal frame structure, based on the solution of a homogenization-based topology optimization model. The presented approach exploits the equivalence between Michell’s problem of least-weight trusses and a compliance minimization problem...... using optimal rank-2 laminates in the low volume fraction limit. In a fully automated procedure, a discrete structure is extracted from the homogenization-based continuum model. This near-optimal structure is post-optimized as a frame, where the bending stiffness is continuously decreased, to allow...

  9. Structured decomposition design of partial Mueller matrix polarimeters.

    Science.gov (United States)

    Alenin, Andrey S; Scott Tyo, J

    2015-07-01

    Partial Mueller matrix polarimeters (pMMPs) are active sensing instruments that probe a scattering process with a set of polarization states and analyze the scattered light with a second set of polarization states. Unlike conventional Mueller matrix polarimeters, pMMPs do not attempt to reconstruct the entire Mueller matrix. With proper choice of generator and analyzer states, a subset of the Mueller matrix space can be reconstructed with fewer measurements than that of the full Mueller matrix polarimeter. In this paper we consider the structure of the Mueller matrix and our ability to probe it using a reduced number of measurements. We develop analysis tools that allow us to relate the particular choice of generator and analyzer polarization states to the portion of Mueller matrix space that the instrument measures, as well as develop an optimization method that is based on balancing the signal-to-noise ratio of the resulting instrument with the ability of that instrument to accurately measure a particular set of desired polarization components with as few measurements as possible. In the process, we identify 10 classes of pMMP systems, for which the space coverage is immediately known. We demonstrate the theory with a numerical example that designs partial polarimeters for the task of monitoring the damage state of a material as presented earlier by Hoover and Tyo [Appl. Opt. 46, 8364 (2007), doi:10.1364/AO.46.008364]. We show that we can reduce the polarimeter to making eight measurements while still covering the Mueller matrix subspace spanned by the objects.

  10. Design Optimization and Evaluation of Gastric Floating Matrix Tablet ...

    African Journals Online (AJOL)

    HP

    Abstract. Purpose: To formulate an optimized gastric floating drug delivery system (GFDDS) containing glipizide ... Sodium bicarbonate by geometric mixing then ... order polynomial equation (Eq 4) with added ...

  11. Abdominoplasty for Ladd's procedure: optimizing access and esthetics

    African Journals Online (AJOL)

    often-displeasing incision and visible scar. ... inception and is traditionally performed by making a ... procedure, these highly visible surgical approaches can ... reported that she remained symptom-free and was pleased with her nearly invisible ...

  12. SU-E-T-395: Multi-GPU-Based VMAT Treatment Plan Optimization Using a Column-Generation Approach

    International Nuclear Information System (INIS)

    Tian, Z; Shi, F; Jia, X; Jiang, S; Peng, F

    2014-01-01

    Purpose: GPU has been employed to speed up VMAT optimizations from hours to minutes. However, its limited memory capacity makes it difficult to handle cases with a huge dose-deposition-coefficient (DDC) matrix, e.g. those with a large target size, multiple arcs, small beam angle intervals and/or small beamlet size. We propose multi-GPU-based VMAT optimization to solve this memory issue and make GPU-based VMAT more practical for clinical use. Methods: Our column-generation-based method generates apertures sequentially by iteratively searching for an optimal feasible aperture (referred to as the pricing problem, PP) and optimizing aperture intensities (referred to as the master problem, MP). The PP requires access to the large DDC matrix, which is implemented on a multi-GPU system. Each GPU stores a DDC sub-matrix corresponding to one fraction of the beam angles and is only responsible for calculations related to those angles. Broadcast and parallel reduction schemes are adopted for inter-GPU data transfer. MP is a relatively small-scale problem and is implemented on one GPU. One head-and-neck cancer case was used for testing. Three different strategies for VMAT optimization on a single GPU were also implemented for comparison: (S1) truncating the DDC matrix to ignore its small-value entries during optimization; (S2) transferring the DDC matrix part by part to the GPU during optimization whenever needed; (S3) moving DDC-matrix-related calculations onto the CPU. Results: Our multi-GPU-based implementation reaches a good plan within 1 minute. Although S1 was 10 seconds faster than our method, the obtained plan quality is worse. Both S2 and S3 handle the full DDC matrix and hence yield the same plan as in our method. However, the computation time is longer, namely 4 minutes and 30 minutes, respectively. Conclusion: Our multi-GPU-based VMAT optimization can effectively solve the limited memory issue with good plan quality and high efficiency, making GPU-based ultra-fast VMAT planning practical for real clinical use.

  13. SU-E-T-395: Multi-GPU-Based VMAT Treatment Plan Optimization Using a Column-Generation Approach

    Energy Technology Data Exchange (ETDEWEB)

    Tian, Z; Shi, F; Jia, X; Jiang, S [UT Southwestern Medical Ctr at Dallas, Dallas, TX (United States); Peng, F [Carnegie Mellon University, Pittsburgh, PA (United States)

    2014-06-01

    Purpose: GPU has been employed to speed up VMAT optimizations from hours to minutes. However, its limited memory capacity makes it difficult to handle cases with a huge dose-deposition-coefficient (DDC) matrix, e.g. those with a large target size, multiple arcs, small beam angle intervals and/or small beamlet size. We propose multi-GPU-based VMAT optimization to solve this memory issue and make GPU-based VMAT more practical for clinical use. Methods: Our column-generation-based method generates apertures sequentially by iteratively searching for an optimal feasible aperture (referred to as the pricing problem, PP) and optimizing aperture intensities (referred to as the master problem, MP). The PP requires access to the large DDC matrix, which is implemented on a multi-GPU system. Each GPU stores a DDC sub-matrix corresponding to one fraction of the beam angles and is only responsible for calculations related to those angles. Broadcast and parallel reduction schemes are adopted for inter-GPU data transfer. MP is a relatively small-scale problem and is implemented on one GPU. One head-and-neck cancer case was used for testing. Three different strategies for VMAT optimization on a single GPU were also implemented for comparison: (S1) truncating the DDC matrix to ignore its small-value entries during optimization; (S2) transferring the DDC matrix part by part to the GPU during optimization whenever needed; (S3) moving DDC-matrix-related calculations onto the CPU. Results: Our multi-GPU-based implementation reaches a good plan within 1 minute. Although S1 was 10 seconds faster than our method, the obtained plan quality is worse. Both S2 and S3 handle the full DDC matrix and hence yield the same plan as in our method. However, the computation time is longer, namely 4 minutes and 30 minutes, respectively. Conclusion: Our multi-GPU-based VMAT optimization can effectively solve the limited memory issue with good plan quality and high efficiency, making GPU-based ultra-fast VMAT planning practical for real clinical use.

  14. Robust estimation of the correlation matrix of longitudinal data

    KAUST Repository

    Maadooliat, Mehdi

    2011-09-23

    We propose a double-robust procedure for modeling the correlation matrix of a longitudinal dataset. It is based on an alternative Cholesky decomposition of the form Σ = DLL⊤D where D is a diagonal matrix proportional to the square roots of the diagonal entries of Σ and L is a unit lower-triangular matrix determining solely the correlation matrix. The first robustness is with respect to model misspecification for the innovation variances in D, and the second is robustness to outliers in the data. The latter is handled using heavy-tailed multivariate t-distributions with unknown degrees of freedom. We develop a Fisher scoring algorithm for computing the maximum likelihood estimator of the parameters when the nonredundant and unconstrained entries of (L, D) are modeled parsimoniously using covariates. We compare our results with those based on the modified Cholesky decomposition of the form LD²L⊤ using simulations and a real dataset. © 2011 Springer Science+Business Media, LLC.
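
    The stated factorization Σ = DLL⊤D with L unit lower-triangular can be computed numerically from the ordinary Cholesky factor by pulling out its diagonal. The sketch below does only that bookkeeping (none of the paper's robust estimation) and takes D as the diagonal of the Cholesky factor, which is one concrete way to realize the stated form.

```python
import numpy as np

def dll_t_d(sigma):
    """Factor a covariance matrix as Sigma = D L L^T D with L unit lower-triangular.
    Derived from the ordinary Cholesky factor C (Sigma = C C^T) by taking D as its
    diagonal and L = D^{-1} C, so L carries the dependence structure and D the scales."""
    C = np.linalg.cholesky(sigma)            # lower triangular, C C^T = Sigma
    D = np.diag(np.diag(C))
    L = C / np.diag(C)[:, None]              # row-rescale: unit diagonal
    return D, L

# Quick check on a random symmetric positive-definite matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
Sigma = A @ A.T + 5 * np.eye(5)
D, L = dll_t_d(Sigma)
print(np.allclose(D @ L @ L.T @ D, Sigma))   # True
print(np.allclose(np.diag(L), 1.0))          # L is unit lower-triangular
```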

  15. Determination of optimal ultrasound planes for the initialisation of image registration during endoscopic ultrasound-guided procedures.

    Science.gov (United States)

    Bonmati, Ester; Hu, Yipeng; Gibson, Eli; Uribarri, Laura; Keane, Geri; Gurusami, Kurinchi; Davidson, Brian; Pereira, Stephen P; Clarkson, Matthew J; Barratt, Dean C

    2018-06-01

    Navigation of endoscopic ultrasound (EUS)-guided procedures of the upper gastrointestinal (GI) system can be technically challenging due to the small fields-of-view of ultrasound and optical devices, as well as the anatomical variability and limited number of orienting landmarks during navigation. Co-registration of an EUS device and a pre-procedure 3D image can enhance the ability to navigate. However, the fidelity of this contextual information depends on the accuracy of registration. The purpose of this study was to develop and test the feasibility of a simulation-based planning method for pre-selecting patient-specific EUS-visible anatomical landmark locations to maximise the accuracy and robustness of a feature-based multimodality registration method. A registration approach was adopted in which landmarks are registered to anatomical structures segmented from the pre-procedure volume. The predicted target registration errors (TREs) of EUS-CT registration were estimated using simulated visible anatomical landmarks and a Monte Carlo simulation of landmark localisation error. The optimal planes were selected based on the 90th percentile of TREs, which provide a robust and more accurate EUS-CT registration initialisation. The method was evaluated by comparing the accuracy and robustness of registrations initialised using optimised planes versus non-optimised planes using manually segmented CT images and simulated ([Formula: see text]) or retrospective clinical ([Formula: see text]) EUS landmarks. The results show a lower 90th percentile TRE when registration is initialised using the optimised planes compared with a non-optimised initialisation approach (p value [Formula: see text]). The proposed simulation-based method to find optimised EUS planes and landmarks for EUS-guided procedures may have the potential to improve registration accuracy. Further work will investigate applying the technique in a clinical setting.

  16. On Attainability of Optimal Solutions for Linear Elliptic Equations with Unbounded Coefficients

    Directory of Open Access Journals (Sweden)

    P. I. Kogut

    2011-12-01

    Full Text Available We study an optimal boundary control problem (OCP) associated to a linear elliptic equation −div(∇y + A(x)∇y) = f describing diffusion in a turbulent flow. The characteristic feature of this equation is the fact that, in applications, the stream matrix A(x) = [a_ij(x)], i,j = 1,...,N, is skew-symmetric, a_ij(x) = −a_ji(x), measurable, and belongs to an L²-space (rather than L∞). An optimal solution to such a problem can inherit a singular character of the original stream matrix A. We show that optimal solutions are attainable by solutions of special optimal boundary control problems.

  17. A Higher Harmonic Optimal Controller to Optimise Rotorcraft Aeromechanical Behaviour

    Science.gov (United States)

    Leyland, Jane Anne

    1996-01-01

    Three methods to optimize rotorcraft aeromechanical behavior for those cases where the rotorcraft plant can be adequately represented by a linear model system matrix were identified and implemented in a stand-alone code. These methods determine the optimal control vector which minimizes the vibration metric subject to constraints at discrete time points, and differ from the commonly used non-optimal constraint penalty methods such as those employed by conventional controllers in that the constraints are handled as actual constraints to an optimization problem rather than as just additional terms in the performance index. The first method is to use a Non-linear Programming algorithm to solve the problem directly. The second method is to solve the full set of non-linear equations which define the necessary conditions for optimality. The third method is to solve each of the possible reduced sets of equations defining the necessary conditions for optimality when the constraints are pre-selected to be either active or inactive, and then to simply select the best solution. The effects of maneuvers and aeroelasticity on the systems matrix are modelled by using a pseudo-random pseudo-row-dependency scheme to define the systems matrix. Cases run to date indicate that the first method of solution is reliable, robust, and easiest to use, and that it was superior to the conventional controllers which were considered.

  18. Engineering applications of heuristic multilevel optimization methods

    Science.gov (United States)

    Barthelemy, Jean-Francois M.

    1989-01-01

    Some engineering applications of heuristic multilevel optimization methods are presented and the discussion focuses on the dependency matrix that indicates the relationship between problem functions and variables. Coordination of the subproblem optimizations is shown to be typically achieved through the use of exact or approximate sensitivity analysis. Areas for further development are identified.

  19. METMET fuel with Zirconium matrix alloys

    International Nuclear Information System (INIS)

    Savchenko, A.; Konovalov, I.; Totev, T.

    2008-01-01

    The novel type of WWER-1000 fuel has been designed at the A.A. Bochvar Institute. Instead of the WWER-1000 UO₂ pelletized fuel rod we apply a dispersion-type fuel element with uniformly distributed high-uranium-content granules of U9Mo, U5Nb5Zr and U3Si alloys, metallurgically bonded between themselves and to the cladding by a specially developed Zr-base matrix alloy. The fuel meat retains a controllable porosity to accommodate fuel swelling. The optimal volume ratios between the components are: 64% fuel, 18% matrix, 18% pores. Properties of the novel materials as well as fuel compositions on their base have been investigated. A method of fuel element fabrication by capillary impregnation has been developed. The primary advantages of the novel fuel are high uranium content (more than 15% in comparison with the standard UO₂ pelletized fuel rod), low temperature of fuel ( * d/tU) and serviceability under transient conditions. The use of the novel fuel might lead to natural uranium saving and reduced amounts of spent fuel as well as to optimization of Nuclear Plant operation conditions and improvements of their operation reliability and safety. As a result the economic efficiency shall increase and the cost of electric power shall decrease. (authors)

  20. The time-dependent density matrix renormalisation group method

    Science.gov (United States)

    Ma, Haibo; Luo, Zhen; Yao, Yao

    2018-04-01

    Substantial progress of the time-dependent density matrix renormalisation group (t-DMRG) method in the recent 15 years is reviewed in this paper. By integrating the time evolution with the sweep procedures in density matrix renormalisation group (DMRG), t-DMRG provides an efficient tool for real-time simulations of the quantum dynamics for one-dimensional (1D) or quasi-1D strongly correlated systems with a large number of degrees of freedom. In the illustrative applications, the t-DMRG approach is applied to investigate the nonadiabatic processes in realistic chemical systems, including exciton dissociation and triplet fission in polymers and molecular aggregates as well as internal conversion in pyrazine molecule.

  1. Strain redistribution around holes and notches in fiber-reinforced cross-woven brittle matrix composites

    DEFF Research Database (Denmark)

    Jacobsen, Torben Krogsdal; Brøndsted, Povl

    1997-01-01

    Mechanics, and an identification procedure based on a uni-axial tensile test and a shear test the strain redistribution around a hole or a notch due to matrix cracking can be predicted. Damage due to fiber breakage is not included in the model. Initial matrix damage in the C-f/SiCm material has...

  2. Metal matrix composite fabrication processes for high performance aerospace structures

    Science.gov (United States)

    Ponzi, C.

    A survey is conducted of extant methods of metal matrix composite (MMC) production in order to serve as a basis for prospective MMC users' selection of a matrix/reinforcement combination, cost-effective primary fabrication methods, and secondary fabrication techniques for the achievement of desired performance levels. Attention is given to the illustrative cases of structural fittings, control-surface connecting rods, hypersonic aircraft air inlet ramps, helicopter swash plates, and turbine rotor disks. Methods for technical and cost analysis modeling useful in process optimization are noted.

  3. Reliability Based Optimization of Structural Systems

    DEFF Research Database (Denmark)

    Sørensen, John Dalsgaard

    1987-01-01

    The optimization problem to design structural systems such that the reliability is satisfactory during the whole lifetime of the structure is considered in this paper. Some of the quantities modelling the loads and the strength of the structure are modelled as random variables. The reliability...... is estimated using first-order reliability methods (FORM). The design problem is formulated as the optimization problem to minimize a given cost function such that the reliability of the single elements satisfies given requirements or such that the systems reliability satisfies a given requirement.... For these optimization problems it is described how a sensitivity analysis can be performed. Next, new optimization procedures to solve the optimization problems are presented. Two of these procedures solve the system reliability based optimization problem sequentially using quasi-analytical derivatives. Finally...

  4. Protein structure estimation from NMR data by matrix completion.

    Science.gov (United States)

    Li, Zhicheng; Li, Yang; Lei, Qiang; Zhao, Qing

    2017-09-01

    Knowledge of protein structures is very important to understand their corresponding physical and chemical properties. Nuclear Magnetic Resonance (NMR) spectroscopy is one of the main methods to measure protein structure. In this paper, we propose a two-stage approach to calculate the structure of a protein from a highly incomplete distance matrix, where most data are obtained from NMR. We first randomly "guess" a small part of unobservable distances by utilizing the triangle inequality, which is crucial for the second stage. Then we use matrix completion to calculate the protein structure from the obtained incomplete distance matrix. We apply the accelerated proximal gradient algorithm to solve the corresponding optimization problem. Furthermore, the recovery error of our method is analyzed, and its efficiency is demonstrated by several practical examples.
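
    The paper's two-stage approach (triangle-inequality guessing plus matrix completion via an accelerated proximal gradient method) is not reproduced here; the sketch below illustrates only the final geometric step, turning a completed distance matrix into 3-D coordinates with classical multidimensional scaling, using synthetic points rather than NMR data.

```python
import numpy as np

def coords_from_distance_matrix(D, dim=3):
    """Classical multidimensional scaling: recover coordinates (up to a rigid
    motion and reflection) from a complete matrix of pairwise distances D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    G = -0.5 * J @ (D ** 2) @ J                 # Gram matrix via double centering
    w, V = np.linalg.eigh(G)
    idx = np.argsort(w)[::-1][:dim]             # keep the `dim` largest eigenvalues
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Synthetic "protein": random 3-D points, full distance matrix, then reconstruction.
rng = np.random.default_rng(2)
X = rng.standard_normal((30, 3)) * 5.0
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
X_rec = coords_from_distance_matrix(D)
D_rec = np.linalg.norm(X_rec[:, None, :] - X_rec[None, :, :], axis=-1)
print("max distance error:", np.abs(D - D_rec).max())   # close to machine precision here
```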

  5. Fracture Toughness of Carbon Nanotube-Reinforced Metal- and Ceramic-Matrix Composites

    Directory of Open Access Journals (Sweden)

    Y. L. Chen

    2011-01-01

    Full Text Available Hierarchical analysis of the fracture toughness enhancement of carbon nanotube- (CNT-) reinforced hard matrix composites is carried out on the basis of shear-lag theory and fracture mechanics. It is found that stronger CNT/matrix interfaces do not necessarily lead to better fracture toughness of these composites, and the optimal interfacial chemical bond density is that making the failure mode just in the transition from CNT pull-out to CNT break. For hard matrix composites, the fracture toughness of composites with weak interfaces can be improved effectively by increasing the CNT length. However, for soft matrix composites, the fracture toughness improvement due to the reinforcing CNTs quickly becomes saturated with an increase in CNT length. The proposed theoretical model is also applicable to short fiber-reinforced composites.

  6. Finding a Hadamard matrix by simulated annealing of spin vectors

    Science.gov (United States)

    Bayu Suksmono, Andriyan

    2017-05-01

    Reformulation of a combinatorial problem into optimization of a statistical-mechanics system enables finding a better solution using heuristics derived from a physical process, such as simulated annealing (SA). In this paper, we present a Hadamard matrix (H-matrix) searching method based on SA on an Ising model. By equivalence, an H-matrix can be converted into a seminormalized Hadamard (SH) matrix, whose first column is a unit vector and whose remaining columns are vectors with equal numbers of -1 and +1, called SH-vectors. We define SH spin vectors as representations of the SH vectors, which play a role similar to that of the spins in the Ising model. The topology of the lattice is generalized into a graph, whose edges represent the orthogonality relationship among the SH spin vectors. Starting from a randomly generated quasi H-matrix Q, which is a matrix similar to the SH-matrix without imposing orthogonality, we perform the SA. The transitions of Q are conducted by random exchange of {+, -} spin pairs within the SH spin vectors, following the Metropolis update rule. Upon transition toward zeroth energy, the Q-matrix evolves following a Markov chain toward an orthogonal matrix, at which point the H-matrix is said to be found. We demonstrate the capability of the proposed method to find some low-order H-matrices, including ones that cannot trivially be constructed by the Sylvester method.
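
    A small NumPy rendering of the described search is given below: columns other than the first are kept balanced, a move swaps a (+1, -1) pair inside one column, and moves are accepted with the Metropolis rule; zero energy (all columns mutually orthogonal) certifies that an H-matrix has been found. The order, temperature schedule and iteration budget are illustrative and may need tuning or restarts; they are not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 12                                   # Hadamard orders are 1, 2 or multiples of 4

def energy(H):
    """Sum of squared off-diagonal entries of H^T H; zero iff all columns are orthogonal."""
    G = H.T @ H
    return float(np.sum(G * G) - np.sum(np.diag(G) ** 2))

# Semi-normalized start: first column all +1, other columns balanced (n/2 of each sign).
H = np.ones((n, n), dtype=int)
for j in range(1, n):
    H[rng.permutation(n)[: n // 2], j] = -1

T, cooling = 24.0, 0.99996               # illustrative annealing schedule
e_cur = energy(H)
for _ in range(200_000):
    j = rng.integers(1, n)                            # never touch the all-ones column
    a = rng.choice(np.flatnonzero(H[:, j] == 1))
    b = rng.choice(np.flatnonzero(H[:, j] == -1))
    H[a, j], H[b, j] = -1, 1                          # swap keeps the column balanced
    e_new = energy(H)
    if e_new <= e_cur or rng.random() < np.exp((e_cur - e_new) / T):
        e_cur = e_new                                 # Metropolis acceptance
    else:
        H[a, j], H[b, j] = 1, -1                      # reject: undo the swap
    T *= cooling
    if e_cur == 0:
        break

print("final energy:", e_cur)            # 0 means an order-12 Hadamard matrix was found
```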

  7. A generalization of random matrix theory and its application to statistical physics.

    Science.gov (United States)

    Wang, Duan; Zhang, Xin; Horvatic, Davor; Podobnik, Boris; Eugene Stanley, H

    2017-02-01

    To study the statistical structure of cross-correlations in empirical data, we generalize random matrix theory and propose a new method of cross-correlation analysis, known as autoregressive random matrix theory (ARRMT). ARRMT takes into account the influence of auto-correlations in the study of cross-correlations in multiple time series. We first analytically and numerically determine how auto-correlations affect the eigenvalue distribution of the correlation matrix. Then we introduce ARRMT with a detailed procedure of how to implement the method. Finally, we illustrate the method using two examples taken from inflation rates and from air pressure data for 95 US cities.
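
    The effect that motivates ARRMT can be reproduced in a few lines: for mutually independent series, auto-correlation alone pushes the largest eigenvalue of the sample cross-correlation matrix beyond the Marchenko-Pastur edge predicted for i.i.d. data. The sketch below uses invented AR(1) surrogates and does not implement the generalized theory of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n_series, n_obs = 100, 500               # 100 mutually independent series, 500 observations

def largest_corr_eig(ar_coeff):
    """Largest eigenvalue of the sample correlation matrix of independent series,
    each an AR(1) process with the given coefficient (0 = white noise)."""
    x = np.zeros((n_obs, n_series))
    eps = rng.standard_normal((n_obs, n_series))
    for t in range(1, n_obs):
        x[t] = ar_coeff * x[t - 1] + eps[t]
    c = np.corrcoef(x, rowvar=False)
    return np.linalg.eigvalsh(c)[-1]

# Marchenko-Pastur upper edge for i.i.d. data with aspect ratio q = n_series / n_obs.
q = n_series / n_obs
lambda_plus = (1 + np.sqrt(q)) ** 2
print("MP upper edge (iid prediction):", round(lambda_plus, 3))
print("largest eigenvalue, white noise:", round(largest_corr_eig(0.0), 3))
print("largest eigenvalue, AR(1) phi=0.6:", round(largest_corr_eig(0.6), 3))
```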

  8. The covariance matrix of derived quantities and their combination

    International Nuclear Information System (INIS)

    Zhao, Z.; Perey, F.G.

    1992-06-01

    The covariance matrix of quantities derived from measured data via nonlinear relations is only approximate, since it is a function of the measured data taken as estimates for the true values of the measured quantities. The evaluation of such derived quantities entails new estimates for the true values of the measured quantities and consequently implies a modification of the covariance matrix of the derived quantities that was used in the evaluation process. Failure to recognize such an implication can lead to inconsistencies between the results of different evaluation strategies. In this report we show that an iterative procedure can eliminate such inconsistencies.

  9. S-AMP: Approximate Message Passing for General Matrix Ensembles

    DEFF Research Database (Denmark)

    Cakmak, Burak; Winther, Ole; Fleury, Bernard H.

    2014-01-01

    the approximate message-passing (AMP) algorithm to general matrix ensembles with a well-defined large system size limit. The generalization is based on the S-transform (in free probability) of the spectrum of the measurement matrix. Furthermore, we show that the optimality of S-AMP follows directly from its......We propose a novel iterative estimation algorithm for linear observation models called S-AMP. The fixed points of S-AMP are the stationary points of the exact Gibbs free energy under a set of (first- and second-) moment consistency constraints in the large system limit. S-AMP extends...

  10. Optimization of Radiological Protection in Pediatric Patients Undergoing Common Conventional Radiological Procedures: Effectiveness of Increasing the Film to Focus Distance (FFD

    Directory of Open Access Journals (Sweden)

    Vahid Karami

    2017-04-01

    Full Text Available Background: Increasing the x-ray film to focus distance (FFD) has been recommended as a practical dose optimization tool for patients undergoing conventional radiological procedures. In a previous study, we demonstrated that a 32% reduction in absorbed dose is achievable by increasing the FFD from 100 to 130 cm during pediatric chest radiography. The aim of this study was to examine whether increasing the FFD from 100 to 130 cm is equally effective for other common radiological procedures, and to perform a literature review of published studies to address the feasibility and probable limitations of implementing this optimization tool in clinical practice. Materials and Methods: Radiographic examinations of the pelvis (AP view), abdomen (AP view), skull (AP and lateral views), and spine (AP and lateral views) were taken of pediatric patients. The radiation dose and image quality of each radiological procedure were measured at FFDs of 100 cm (reference FFD) and 130 cm (increased FFD). Thermoluminescent dosimeters (TLDs) were used for radiation dose measurements and visual grading analysis (VGA) for image quality assessments. Results: A statistically significant reduction in the ESD, ranging from 21.91% for the lateral skull projection to 35.24% for the lateral spine projection, was obtained when the FFD was increased from 100 to 130 cm (P < 0.05). Conclusion: Increasing the FFD from 100 to 130 cm significantly reduced radiation exposure without affecting image quality. Our findings are commensurate with the literature and emphasize that radiographers should learn to use an updated reference FFD of 130 cm in clinical practice.

  11. Direct integration of the S-matrix applied to rigorous diffraction

    International Nuclear Information System (INIS)

    Iff, W; Lindlein, N; Tishchenko, A V

    2014-01-01

    A novel Fourier method for rigorous diffraction computation at periodic structures is presented. The procedure is based on a differential equation for the S-matrix, which allows direct integration of the S-matrix blocks. This results in a new method in Fourier space, which can be considered as a numerically stable and well-parallelizable alternative to the conventional differential method based on T-matrix integration and subsequent conversions from the T-matrices to S-matrix blocks. Integration of the novel differential equation in an implicit manner is expounded. The applicability of the new method is shown on the basis of 1D periodic structures. It is clear, however, that the new technique can also be applied to arbitrary 2D periodic or periodized structures. The complexity of the new method is O(N³), similar to the conventional differential method, with N being the number of diffraction orders. (fast track communication)

  12. Hierarchical matrix approximation of large covariance matrices

    KAUST Repository

    Litvinenko, Alexander

    2015-01-07

    We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matern covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design

  13. Hierarchical matrix approximation of large covariance matrices

    KAUST Repository

    Litvinenko, Alexander; Genton, Marc G.; Sun, Ying; Tempone, Raul

    2015-01-01

    We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matern covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design

  14. Virtual reality simulation for the optimization of endovascular procedures: current perspectives

    Directory of Open Access Journals (Sweden)

    Rudarakanchana N

    2015-03-01

    Full Text Available Nung Rudarakanchana,1 Isabelle Van Herzeele,2 Liesbeth Desender,2 Nicholas JW Cheshire1 1Department of Surgery, Imperial College London, London, UK; 2Department of Thoracic and Vascular Surgery, Ghent University Hospital, Ghent, Belgium. On behalf of EVEREST (European Virtual reality Endovascular RESearch Team). Abstract: Endovascular technologies are rapidly evolving, often requiring coordination and cooperation between clinicians and technicians from diverse specialties. These multidisciplinary interactions lead to challenges that are reflected in the high rate of errors occurring during endovascular procedures. Endovascular virtual reality (VR) simulation has evolved from simple benchtop devices to full physics simulators with advanced haptics and dynamic imaging and physiological controls. The latest developments in this field include the use of fully immersive simulated hybrid angiosuites to train whole endovascular teams in crisis resource management and novel technologies that enable practitioners to build VR simulations based on patient-specific anatomy. As our understanding of the skills, both technical and nontechnical, required for optimal endovascular performance improves, the requisite tools for objective assessment of these skills are being developed and will further enable the use of VR simulation in the training and assessment of endovascular interventionalists and their entire teams. Simulation training that allows deliberate practice without danger to patients may be key to bridging the gap between new endovascular technology and improved patient outcomes. Keywords: virtual reality, simulation, endovascular, aneurysm

  15. Direct determination of scattering time delays using the R-matrix propagation method

    International Nuclear Information System (INIS)

    Walker, R.B.; Hayes, E.F.

    1989-01-01

    A direct method for determining time delays for scattering processes is developed using the R-matrix propagation method. The procedure involves the simultaneous generation of the global R matrix and its energy derivative. The necessary expressions to obtain the energy derivative of the S matrix are relatively simple and involve many of the same matrix elements required for the R-matrix propagation method. This method is applied to a simple model for a chemical reaction that displays sharp resonance features. The test results of the direct method are shown to be in excellent agreement with the traditional numerical differentiation method for scattering energies near the resonance energy. However, for sharp resonances the numerical differentiation method requires calculation of the S-matrix elements at many closely spaced energies. Since the direct method presented here involves calculations at only a single energy, one is able to generate accurate energy derivatives and time delays much more efficiently and reliably
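
    For orientation (this relation is standard scattering theory rather than something stated in the record), the time delays extracted from the energy derivative of the S matrix are the eigenvalues of Smith's lifetime matrix, which in one common convention reads:

```latex
Q(E) = -\,i\hbar\, S^{\dagger}(E)\,\frac{\mathrm{d}S(E)}{\mathrm{d}E},
\qquad
S = e^{2i\delta(E)} \;\Rightarrow\; \tau(E) = 2\hbar\,\frac{\mathrm{d}\delta}{\mathrm{d}E}
\quad \text{(single channel, Wigner time delay)}.
```

    A direct computation of dS/dE, as described in the record, therefore yields the time delays without resorting to numerical differentiation of S over closely spaced energies.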

  16. Simultaneous determination of fumonisins B1 and B2 in different types of maize by matrix solid phase dispersion and HPLC-MS/MS.

    Science.gov (United States)

    de Oliveira, Gabriel Barros; de Castro Gomes Vieira, Carolyne Menezes; Orlando, Ricardo Mathias; Faria, Adriana Ferreira

    2017-10-15

    This work involved the optimization and validation of a method, according to Directive 2002/657/EC and the Analytical Quality Assurance Manual of Ministério da Agricultura, Pecuária e Abastecimento, Brazil, for simultaneous extraction and determination of fumonisins B1 and B2 in maize. The extraction procedure was based on a matrix solid phase dispersion approach, the optimization of which employed a sequence of different factorial designs. A liquid chromatography-tandem mass spectrometry method was developed for determining these analytes using the selected reaction monitoring mode. The optimized method employed only 1 g of silica gel for dispersion and elution with 70% ammonium formate aqueous buffer (50 mmol L⁻¹, pH 9), representing a simple, cheap and chemically friendly sample preparation method. Trueness (recoveries: 86-106%), precision (RSD ≤ 19%), decision limits, detection capabilities and measurement uncertainties were calculated for the validated method. The method scope was expanded to popcorn kernels, white maize kernels and yellow maize grits. Copyright © 2017 Elsevier Ltd. All rights reserved.

  17. Optimal robust stabilizer design based on UPFC for interconnected power systems considering time delay

    Directory of Open Access Journals (Sweden)

    Koofigar Hamid Reza

    2017-09-01

    Full Text Available A robust auxiliary wide area damping controller is proposed for a unified power flow controller (UPFC. The mixed H2 / H∞ problem with regional pole placement, resolved by linear matrix inequality (LMI, is applied for controller design. Based on modal analysis, the optimal wide area input signals for the controller are selected. The time delay of input signals, due to electrical distance from the UPFC location is taken into account in the design procedure. The proposed controller is applied to a multi-machine interconnected power system from the IRAN power grid. It is shown that the both transient and dynamic stability are significantly improved despite different disturbances and loading conditions.

  18. Integrable Floquet dynamics, generalized exclusion processes and "fused" matrix ansatz

    Science.gov (United States)

    Vanicat, Matthieu

    2018-04-01

    We present a general method for constructing integrable stochastic processes, with two-step discrete time Floquet dynamics, from the transfer matrix formalism. The models can be interpreted as a discrete time parallel update. The method can be applied for both periodic and open boundary conditions. We also show how the stationary distribution can be built as a matrix product state. As an illustration we construct parallel discrete time dynamics associated with the R-matrix of the SSEP and of the ASEP, and provide the associated stationary distributions in a matrix product form. We use this general framework to introduce new integrable generalized exclusion processes, where a fixed number of particles is allowed on each lattice site in opposition to the (single particle) exclusion process models. They are constructed using the fusion procedure of R-matrices (and K-matrices for open boundary conditions) for the SSEP and ASEP. We develop a new method, that we named "fused" matrix ansatz, to build explicitly the stationary distribution in a matrix product form. We use this algebraic structure to compute physical observables such as the correlation functions and the mean particle current.

  19. Exact and Optimal Quantum Mechanics/Molecular Mechanics Boundaries.

    Science.gov (United States)

    Sun, Qiming; Chan, Garnet Kin-Lic

    2014-09-09

    Motivated by recent work in density matrix embedding theory, we define exact link orbitals that capture all quantum mechanical (QM) effects across arbitrary quantum mechanics/molecular mechanics (QM/MM) boundaries. Exact link orbitals are rigorously defined from the full QM solution, and their number is equal to the number of orbitals in the primary QM region. Truncating the exact set yields a smaller set of link orbitals optimal with respect to reproducing the primary region density matrix. We use the optimal link orbitals to obtain insight into the limits of QM/MM boundary treatments. We further analyze the popular general hybrid orbital (GHO) QM/MM boundary across a test suite of molecules. We find that GHOs are often good proxies for the most important optimal link orbital, although there is little detailed correlation between the detailed GHO composition and optimal link orbital valence weights. The optimal theory shows that anions and cations cannot be described by a single link orbital. However, expanding to include the second most important optimal link orbital in the boundary recovers an accurate description. The second optimal link orbital takes the chemically intuitive form of a donor or acceptor orbital for charge redistribution, suggesting that optimal link orbitals can be used as interpretative tools for electron transfer. We further find that two optimal link orbitals are also sufficient for boundaries that cut across double bonds. Finally, we suggest how to construct "approximately" optimal link orbitals for practical QM/MM calculations.

  20. Interactions between tungsten carbide (WC) particulates and metal matrix in WC-reinforced composites

    International Nuclear Information System (INIS)

    Lou, D.; Hellman, J.; Luhulima, D.; Liimatainen, J.; Lindroos, V.K.

    2003-01-01

    A variety of experimental techniques have been used to investigate the interactions between tungsten carbide (WC-Co 88/12) particulates and the matrix in some new wear resistant cobalt-based superalloy and steel matrix composites produced by hot isostatic pressing. The results show that the chemical composition of the matrix has a strong influence on the interface reaction between WC and matrix and the structural stability of the WC particulates in the composite. Some characteristics of the interaction between matrix and reinforcement are explained by the calculation of diffusion kinetics. The three-body abrasion wear resistance of the composites has been examined based on the ASTM G65-91 standard procedure. The wear behavior of the best composites of this study shows great potential for wear protection applications

  1. Efficient network-matrix architecture for general flow transport inspired by natural pinnate leaves.

    Science.gov (United States)

    Hu, Liguo; Zhou, Han; Zhu, Hanxing; Fan, Tongxiang; Zhang, Di

    2014-11-14

    Networks embedded in three dimensional matrices are beneficial to deliver physical flows to the matrices. Leaf architectures, pervasive natural network-matrix architectures, endow leaves with high transpiration rates and low water pressure drops, providing inspiration for efficient network-matrix architectures. In this study, the network-matrix model for general flow transport inspired by natural pinnate leaves is investigated analytically. The results indicate that the optimal network structure inspired by natural pinnate leaves can greatly reduce the maximum potential drop and the total potential drop caused by the flow through the network while maximizing the total flow rate through the matrix. These results can be used to design efficient networks in network-matrix architectures for a variety of practical applications, such as tissue engineering, cell culture, photovoltaic devices and heat transfer.

  2. Redundant interferometric calibration as a complex optimization problem

    Science.gov (United States)

    Grobler, T. L.; Bernardi, G.; Kenyon, J. S.; Parsons, A. R.; Smirnov, O. M.

    2018-05-01

    Observations of the redshifted 21 cm line from the epoch of reionization have recently motivated the construction of low-frequency radio arrays with highly redundant configurations. These configurations provide an alternative calibration strategy - `redundant calibration' - and boost sensitivity on specific spatial scales. In this paper, we formulate calibration of redundant interferometric arrays as a complex optimization problem. We solve this optimization problem via the Levenberg-Marquardt algorithm. This calibration approach is more robust to initial conditions than current algorithms and, by leveraging an approximate matrix inversion, allows for further optimization and an efficient implementation (`redundant STEFCAL'). We also investigate using the preconditioned conjugate gradient method as an alternative to the approximate matrix inverse, but find that its computational performance is not competitive with that of `redundant STEFCAL'. The efficient implementation of this new algorithm is made publicly available.
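
    An alternating-least-squares flavour of redundant calibration is sketched below to make the structure of the problem concrete: one model visibility is solved per redundant baseline group, and the per-antenna gains are then updated StEFCal-style. The loop is illustrative only and is not the `redundant STEFCAL' implementation of the record; the usual amplitude and phase degeneracies of redundant calibration still have to be fixed afterwards.

        import numpy as np

        def redundant_cal(V, groups, n_ant, n_iter=50):
            """Illustrative redundant-calibration solver (not `redundant STEFCAL').
            V[(i, j)] : measured visibility of baseline (i, j)
            groups    : list of lists of (i, j) pairs, one list per redundant group
            Model: V_ij ~ g_i * conj(g_j) * y_group."""
            g = np.ones(n_ant, dtype=complex)
            y = np.array([np.mean([V[b] for b in grp]) for grp in groups])
            for _ in range(n_iter):
                # Least-squares update of each redundant group's model visibility.
                for a, grp in enumerate(groups):
                    num = sum(np.conj(g[i]) * g[j] * V[(i, j)] for i, j in grp)
                    den = sum(abs(g[i] * np.conj(g[j])) ** 2 for i, j in grp)
                    y[a] = num / den
                # StEFCal-style per-antenna gain update with the model held fixed.
                g_new = g.copy()
                for p in range(n_ant):
                    num, den = 0.0 + 0.0j, 0.0
                    for a, grp in enumerate(groups):
                        for i, j in grp:
                            if i == p:                   # V_pj ~ g_p * (conj(g_j) y_a)
                                z = np.conj(g[j]) * y[a]
                                num += np.conj(z) * V[(i, j)]
                                den += abs(z) ** 2
                            elif j == p:                 # conj(V_ip) ~ g_p * conj(g_i y_a)
                                z = np.conj(g[i] * y[a])
                                num += np.conj(z) * np.conj(V[(i, j)])
                                den += abs(z) ** 2
                    if den > 0:
                        g_new[p] = num / den
                g = 0.5 * (g + g_new)                    # damping aids convergence
            return g, y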

  3. Numerical modeling and optimization of the Iguassu gas centrifuge

    Science.gov (United States)

    Bogovalov, S. V.; Borman, V. D.; Borisevich, V. D.; Tronin, V. N.; Tronin, I. V.

    2017-07-01

    The full procedure for the numerical calculation of the optimized parameters of the Iguassu gas centrifuge (GC) is discussed. The procedure consists of a few steps. In the first step, the problem of the hydrodynamical flow of the gas in the rotating rotor of the GC is solved numerically. In the second step, the problem of diffusion of the binary mixture of isotopes is solved, after which the separation power of the gas centrifuge is calculated. In the last step, the time-consuming optimization of the GC is performed, yielding the maximum of the separation power. The optimization is based on the BOBYQA method and explores the results of the numerical simulations of the hydrodynamics and of the diffusion of the isotope mixture. Fast convergence is achieved thanks to the use of a direct solver for the hydrodynamical and diffusion parts of the problem. The optimized separative power and the optimal internal parameters of the Iguassu GC with a 1 m rotor were calculated using the developed approach. The optimization procedure converges in 45 iterations, taking 811 minutes.
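
    The outer optimization loop can be sketched as follows; `separative_power` is a toy stand-in for the coupled hydrodynamics and diffusion calculation, and SciPy's derivative-free Powell routine is used here merely in the role that BOBYQA plays in the record.

        import numpy as np
        from scipy.optimize import minimize

        def separative_power(x):
            """Toy stand-in for the coupled hydrodynamics + isotope-diffusion solvers:
            a smooth function of the internal GC parameters with one interior maximum."""
            return 10.0 - np.sum((x - np.array([0.3, 1.2, 0.7])) ** 2)

        x0 = np.array([0.0, 1.0, 0.5])                   # initial guess for the internal parameters
        res = minimize(lambda x: -separative_power(x), x0,
                       method="Powell",                  # derivative-free, plays the BOBYQA role here
                       options={"xtol": 1e-4, "maxiter": 200})
        print("optimal parameters:", np.round(res.x, 3), " max dU:", -res.fun)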

  4. Optimization of the Brillouin operator on the KNL architecture

    Science.gov (United States)

    Dürr, Stephan

    2018-03-01

    Experiences with optimizing the matrix-times-vector application of the Brillouin operator on the Intel KNL processor are reported. Without adjustments to the memory layout, performance figures of 360 Gflop/s in single and 270 Gflop/s in double precision are observed. This is with Nc = 3 colors, Nv = 12 right-hand sides, Nthr = 256 threads, on lattices of size 32³ × 64, using exclusively OMP pragmas. Interestingly, the same routine performs quite well on Intel Core i7 architectures, too. Some observations on the much harder Wilson fermion matrix-times-vector optimization problem are added.

  5. Optimization of a particle optical system in a multiprocessor environment

    International Nuclear Information System (INIS)

    Wei Lei; Yin Hanchun; Wang Baoping; Tong Linsu

    2002-01-01

    In the design of a charged particle optical system, many geometrical and electric parameters have to be optimized to improve the performance characteristics. In every optimization cycle, the electromagnetic field and particle trajectories have to be calculated, so the optimization of a charged particle optical system is seriously limited by computer resources. In addition, numerical errors of the calculation may also influence the convergence of the merit function. This article studies how to improve the optimization of charged particle optical systems. A new method is used to determine the gradient matrix, with which the accuracy of the Jacobian matrix can be improved. The charged particle optical system is optimized using the Message Passing Interface (MPI): the electromagnetic field, the particle trajectories and the gradients of the optimization variables are calculated on networks of workstations, so the speed of optimization is greatly increased. It is thus possible to design a complicated charged particle optical system with optimum quality in an MPI environment. Finally, an electron gun for a cathode ray tube has been optimized in an MPI environment to verify the method proposed in this paper
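
    A sketch of the kind of parallel gradient evaluation described above, using mpi4py to spread finite-difference Jacobian columns over processes; `merit` is a hypothetical stand-in for the field-and-trajectory computation, and the whole snippet is illustrative rather than the authors' code.

        import numpy as np
        from mpi4py import MPI

        def merit(x):
            """Hypothetical stand-in: solve the fields, trace the trajectories for the
            design vector x, and return the merit-function components."""
            return np.array([np.sum(x ** 2), np.sum(np.cos(x))])

        def parallel_jacobian(x, h=1e-6):
            comm = MPI.COMM_WORLD
            rank, size = comm.Get_rank(), comm.Get_size()
            f0 = merit(x)
            cols = {}
            for k in range(rank, x.size, size):          # each rank perturbs its own parameters
                xp = x.copy()
                xp[k] += h
                cols[k] = (merit(xp) - f0) / h           # forward difference of the merit function
            J = np.empty((f0.size, x.size))
            for d in comm.allgather(cols):               # every rank assembles the full Jacobian
                for k, col in d.items():
                    J[:, k] = col
            return J

        if __name__ == "__main__":
            J = parallel_jacobian(np.linspace(0.0, 1.0, 8))
            if MPI.COMM_WORLD.Get_rank() == 0:
                print("Jacobian shape:", J.shape)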

  6. Robot-assisted procedures in pediatric neurosurgery.

    Science.gov (United States)

    De Benedictis, Alessandro; Trezza, Andrea; Carai, Andrea; Genovese, Elisabetta; Procaccini, Emidio; Messina, Raffaella; Randi, Franco; Cossu, Silvia; Esposito, Giacomo; Palma, Paolo; Amante, Paolina; Rizzi, Michele; Marras, Carlo Efisio

    2017-05-01

    OBJECTIVE During the last 3 decades, robotic technology has rapidly spread across several surgical fields due to the continuous evolution of its versatility, stability, dexterity, and haptic properties. Neurosurgery pioneered the development of robotics, with the aim of improving the quality of several procedures requiring a high degree of accuracy and safety. Moreover, robot-guided approaches are of special interest in pediatric patients, who often have altered anatomy and challenging relationships between the diseased and eloquent structures. Nevertheless, the use of robots has been rarely reported in children. In this work, the authors describe their experience using the ROSA device (Robotized Stereotactic Assistant) in the neurosurgical management of a pediatric population. METHODS Between 2011 and 2016, 116 children underwent ROSA-assisted procedures for a variety of diseases (epilepsy, brain tumors, intra- or extraventricular and tumor cysts, obstructive hydrocephalus, and movement and behavioral disorders). Each patient received accurate preoperative planning of optimal trajectories, intraoperative frameless registration, surgical treatment using specific instruments held by the robotic arm, and postoperative CT or MR imaging. RESULTS The authors performed 128 consecutive surgeries, including implantation of 386 electrodes for stereo-electroencephalography (36 procedures), neuroendoscopy (42 procedures), stereotactic biopsy (26 procedures), pallidotomy (12 procedures), shunt placement (6 procedures), deep brain stimulation procedures (3 procedures), and stereotactic cyst aspiration (3 procedures). For each procedure, the authors analyzed and discussed accuracy, timing, and complications. CONCLUSIONS To the best of their knowledge, the authors present the largest reported series of pediatric neurosurgical cases assisted by robotic support. The ROSA system provided improved safety and feasibility of minimally invasive approaches, thus optimizing the surgical

  7. Topology optimized electrothermal polysilicon microgrippers

    DEFF Research Database (Denmark)

    Sardan Sukas, Özlem; Petersen, Dirch Hjorth; Mølhave, Kristian

    2008-01-01

    This paper presents the topology optimized design procedure and fabrication of electrothermal polysilicon microgrippers for nanomanipulation purposes. Performance of the optimized microactuators is compared with a conventional three-beam microactuator design through finite element analysis...

  8. Exploring multicollinearity using a random matrix theory approach.

    Science.gov (United States)

    Feher, Kristen; Whelan, James; Müller, Samuel

    2012-01-01

    Clustering of gene expression data is often done with the latent aim of dimension reduction, by finding groups of genes that have a common response to potentially unknown stimuli. However, what is poorly understood to date is the behaviour of a low dimensional signal embedded in high dimensions. This paper introduces a multicollinear model which is based on random matrix theory results, and shows potential for the characterisation of a gene cluster's correlation matrix. This model projects a one dimensional signal into many dimensions and is based on the spiked covariance model, but rather characterises the behaviour of the corresponding correlation matrix. The eigenspectrum of the correlation matrix is empirically examined by simulation, under the addition of noise to the original signal. The simulation results are then used to propose a dimension estimation procedure of clusters from data. Moreover, the simulation results warn against considering pairwise correlations in isolation, as the model provides a mechanism whereby a pair of genes with `low' correlation may simply be due to the interaction of high dimension and noise. Instead, collective information about all the variables is given by the eigenspectrum.
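
    The simulation described above is easy to reproduce in outline: project a one-dimensional signal into many dimensions, add noise, and inspect the eigenspectrum of the sample correlation matrix. The snippet below is illustrative, with arbitrary parameter choices.

        import numpy as np

        rng = np.random.default_rng(0)
        n, p, sigma = 200, 50, 0.5                         # samples, dimensions, noise level

        signal = rng.normal(size=(n, 1))                   # one-dimensional latent signal
        loadings = rng.normal(size=(1, p))                 # projection into p dimensions
        X = signal @ loadings + sigma * rng.normal(size=(n, p))

        R = np.corrcoef(X, rowvar=False)                   # sample correlation matrix of the "cluster"
        eigvals = np.linalg.eigvalsh(R)[::-1]              # eigenspectrum, largest first
        print("top eigenvalues:", np.round(eigvals[:5], 2))
        # One dominant eigenvalue above a noise bulk is the signature of the
        # one-dimensional multicollinear model; its size relative to p feeds the
        # dimension-estimation procedure described above.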

  9. Estimation of covariance matrix on the experimental data for nuclear data evaluation

    International Nuclear Information System (INIS)

    Murata, T.

    1985-01-01

    In order to evaluate fission and capture cross sections of some U and Pu isotopes for JENDL-3, we have a plan for evaluating them simultaneously with a least-squares method. For the simultaneous evaluation, the covariance matrix is required for each experimental data set. In the present work, we have studied the procedures for deriving the covariance matrix from the error data given in the experimental papers. The covariance matrices were obtained using the partial errors and estimated correlation coefficients between partial errors of the same type at different neutron energies. Some examples of the covariance matrix estimation are explained and the preliminary results of the simultaneous evaluation are presented. (author)
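
    A common way to assemble such a covariance matrix is as a sum over partial-error components, each with its own correlation across neutron energies. The sketch below uses the two simplest cases (fully correlated and uncorrelated components) and is an illustrative simplification of the procedure in this record.

        import numpy as np

        def covariance_from_partial_errors(partial, fully_correlated):
            """partial[k]          : k-th partial error at each energy point (1-D array)
            fully_correlated[k] : True for long-range components (e.g. normalization),
                                  False for purely statistical components."""
            n = len(partial[0])
            cov = np.zeros((n, n))
            for e_k, is_corr in zip(partial, fully_correlated):
                e_k = np.asarray(e_k, dtype=float)
                if is_corr:
                    cov += np.outer(e_k, e_k)      # correlation coefficient 1 between energies
                else:
                    cov += np.diag(e_k ** 2)       # correlation coefficient 0 off the diagonal
            return cov

        # Example: statistical errors plus a 2% fully correlated normalization error.
        stat = [0.030, 0.020, 0.025]
        norm = [0.020, 0.020, 0.020]
        print(covariance_from_partial_errors([stat, norm], [False, True]))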

  10. Statistical analysis of latent generalized correlation matrix estimation in transelliptical distribution.

    Science.gov (United States)

    Han, Fang; Liu, Han

    2017-02-01

    Correlation matrix plays a key role in many multivariate methods (e.g., graphical model estimation and factor analysis). The current state-of-the-art in estimating large correlation matrices focuses on the use of Pearson's sample correlation matrix. Although Pearson's sample correlation matrix enjoys various good properties under Gaussian models, it is not an effective estimator when facing heavy-tail distributions with possible outliers. As a robust alternative, Han and Liu (2013b) advocated the use of a transformed version of the Kendall's tau sample correlation matrix in estimating high dimensional latent generalized correlation matrix under the transelliptical distribution family (or elliptical copula). The transelliptical family assumes that after unspecified marginal monotone transformations, the data follow an elliptical distribution. In this paper, we study the theoretical properties of the Kendall's tau sample correlation matrix and its transformed version proposed in Han and Liu (2013b) for estimating the population Kendall's tau correlation matrix and the latent Pearson's correlation matrix under both spectral and restricted spectral norms. With regard to the spectral norm, we highlight the role of "effective rank" in quantifying the rate of convergence. With regard to the restricted spectral norm, we for the first time present a "sign subgaussian condition" which is sufficient to guarantee that the rank-based correlation matrix estimator attains the optimal rate of convergence. In both cases, we do not need any moment condition.
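
    The rank-based estimator discussed above is straightforward to compute: the pairwise Kendall's tau matrix is passed through the sine transformation to estimate the latent Pearson correlation matrix. A minimal sketch:

        import numpy as np
        from scipy.stats import kendalltau

        def transformed_kendall_corr(X):
            """Latent Pearson correlation estimate from data X (n x p) via the
            transformed Kendall's tau, R_jk = sin(pi/2 * tau_jk)."""
            p = X.shape[1]
            R = np.eye(p)
            for j in range(p):
                for k in range(j + 1, p):
                    tau, _ = kendalltau(X[:, j], X[:, k])
                    R[j, k] = R[k, j] = np.sin(0.5 * np.pi * tau)
            return R

        X = np.random.default_rng(1).standard_t(df=3, size=(300, 5))   # heavy-tailed sample
        print(np.round(transformed_kendall_corr(X), 2))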

  11. Optimal unit sizing for small-scale integrated energy systems using multi-objective interval optimization and evidential reasoning approach

    International Nuclear Information System (INIS)

    Wei, F.; Wu, Q.H.; Jing, Z.X.; Chen, J.J.; Zhou, X.X.

    2016-01-01

    This paper proposes a comprehensive framework including a multi-objective interval optimization model and evidential reasoning (ER) approach to solve the unit sizing problem of small-scale integrated energy systems, with uncertain wind and solar energies integrated. In the multi-objective interval optimization model, interval variables are introduced to tackle the uncertainties of the optimization problem. Aiming at simultaneously considering the cost and risk of a business investment, the average and deviation of life cycle cost (LCC) of the integrated energy system are formulated. In order to solve the problem, a novel multi-objective optimization algorithm, MGSOACC (multi-objective group search optimizer with adaptive covariance matrix and chaotic search), is developed, employing adaptive covariance matrix to make the search strategy adaptive and applying chaotic search to maintain the diversity of the group. Furthermore, ER approach is applied to deal with multiple interests of an investor at the business decision making stage and to determine the final unit sizing solution from the Pareto-optimal solutions. This paper reports on the simulation results obtained using a small-scale direct district heating system (DH) and a small-scale district heating and cooling system (DHC) optimized by the proposed framework. The results demonstrate the superiority of the multi-objective interval optimization model and ER approach in tackling the unit sizing problem of integrated energy systems considering the integration of uncertain wind and solar energies. - Highlights: • Cost and risk of investment in small-scale integrated energy systems are considered. • A multi-objective interval optimization model is presented. • A novel multi-objective optimization algorithm (MGSOACC) is proposed. • The evidential reasoning (ER) approach is used to obtain the final optimal solution. • The MGSOACC and ER can tackle the unit sizing problem efficiently.

  12. Printing three-dimensional tissue analogues with decellularized extracellular matrix bioink

    Science.gov (United States)

    Pati, Falguni; Jang, Jinah; Ha, Dong-Heon; Won Kim, Sung; Rhie, Jong-Won; Shim, Jin-Hyung; Kim, Deok-Ho; Cho, Dong-Woo

    2014-06-01

    The ability to print and pattern all the components that make up a tissue (cells and matrix materials) in three dimensions to generate structures similar to tissues is an exciting prospect of bioprinting. However, the majority of the matrix materials used so far for bioprinting cannot represent the complexity of natural extracellular matrix (ECM) and thus are unable to reconstitute the intrinsic cellular morphologies and functions. Here, we develop a method for the bioprinting of cell-laden constructs with novel decellularized extracellular matrix (dECM) bioink capable of providing an optimized microenvironment conducive to the growth of three-dimensional structured tissue. We show the versatility and flexibility of the developed bioprinting process using tissue-specific dECM bioinks, including adipose, cartilage and heart tissues, capable of providing crucial cues for cells engraftment, survival and long-term function. We achieve high cell viability and functionality of the printed dECM structures using our bioprinting method.

  13. Matrix Training of Receptive Language Skills with a Toddler with Autism Spectrum Disorder: A Case Study

    Science.gov (United States)

    Curiel, Emily S. L.; Sainato, Diane M.; Goldstein, Howard

    2016-01-01

    Matrix training is a systematic teaching approach that can facilitate generalized language. Specific responses are taught that result in the emergence of untrained responses. This type of training facilitates the use of generalized language in children with autism spectrum disorder (ASD). This study used a matrix training procedure with a toddler…

  14. Optimal transformation for correcting partial volume averaging effects in magnetic resonance imaging

    International Nuclear Information System (INIS)

    Soltanian-Zadeh, H.; Windham, J.P.; Yagle, A.E.

    1993-01-01

    Segmentation of a feature of interest while correcting for partial volume averaging effects is a major tool for identification of hidden abnormalities, fast and accurate volume calculation, and three-dimensional visualization in the field of magnetic resonance imaging (MRI). The authors present the optimal transformation for simultaneous segmentation of a desired feature and correction of partial volume averaging effects, while maximizing the signal-to-noise ratio (SNR) of the desired feature. It is proved that correction of partial volume averaging effects requires the removal of the interfering features from the scene. It is also proved that correction of partial volume averaging effects can be achieved merely by a linear transformation. It is finally shown that the optimal transformation matrix is easily obtained using the Gram-Schmidt orthogonalization procedure, which is numerically stable. Applications of the technique to MRI simulation, phantom, and brain images are shown. They show that in all cases the desired feature is segmented from the interfering features and partial volume information is visualized in the resulting transformed images
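
    The flavour of such a linear transformation can be sketched with NumPy: the signature vectors of the interfering features are orthonormalized (Gram-Schmidt via QR), and the weight vector for the desired feature is taken in their orthogonal complement, so that interfering tissues are nulled. This is an illustrative reconstruction under those assumptions, not the authors' exact derivation, and the example numbers are hypothetical.

        import numpy as np

        def feature_weights(desired, interfering):
            """desired     : signature vector of the desired feature across the MR channels
            interfering : (m x n_channels) signatures of the interfering features."""
            # Orthonormal basis of the interfering subspace (Gram-Schmidt via QR).
            Q, _ = np.linalg.qr(np.atleast_2d(interfering).T)
            w = desired - Q @ (Q.T @ desired)       # project out the interfering subspace
            return w / np.linalg.norm(w)            # unit-norm weights applied to the channel images

        # Example with three acquisition channels: one desired tissue signature and two
        # interfering ones (all numbers hypothetical).
        desired = np.array([1.0, 0.6, 0.3])
        interf = np.vstack([[0.8, 1.0, 0.5], [0.2, 0.9, 1.0]])
        w = feature_weights(desired, interf)
        print(np.round(interf @ w, 6))              # ~0 for both interfering features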

  15. OD Matrix Acquisition Based on Mobile Phone Positioning Data

    Directory of Open Access Journals (Sweden)

    Xiaoqing ZUO

    2014-06-01

    Full Text Available The dynamic OD (origin-destination) matrix is basic data for traffic travel guidance, traffic control, traffic management and traffic planning, and reflects the basic travel demand on the traffic network. With the rising popularity of positioning and communication technology and the huge number of mobile phone users, mining mobile phone positioning data makes it possible to obtain traffic data for many more intersections and entry/exit points, and integrating these data to derive a regional OD matrix brings clear practical benefits. In this article, mobile phone positioning data are applied to data acquisition for intelligent transportation systems, and a method for acquiring a regional dynamic OD matrix based on mobile phone positioning data is studied. The method is based on trip purpose and uses a time series similarity classification algorithm based on a piecewise linear representation of corner points (CP-PLR) to map each base station cell to a traffic zone with distinct traffic characteristics; through a series of optimizations of the mapping from base station cells to traffic zones, a division of the city into traffic zones based on mobile phone traffic data is realized. On this basis, the adjacency matrix is chosen as the physical data structure for OD matrix storage, the principle of obtaining the regional dynamic OD matrix from mobile phone positioning data is expounded, and the algorithm for obtaining the regional dynamic OD matrix from mobile phone positioning data is designed and verified.

  16. Randomized comparison of operator radiation exposure comparing transradial and transfemoral approach for percutaneous coronary procedures: rationale and design of the minimizing adverse haemorrhagic events by TRansradial access site and systemic implementation of angioX - RAdiation Dose study (RAD-MATRIX).

    Science.gov (United States)

    Sciahbasi, Alessandro; Calabrò, Paolo; Sarandrea, Alessandro; Rigattieri, Stefano; Tomassini, Francesco; Sardella, Gennaro; Zavalloni, Dennis; Cortese, Bernardo; Limbruno, Ugo; Tebaldi, Matteo; Gagnor, Andrea; Rubartelli, Paolo; Zingarelli, Antonio; Valgimigli, Marco

    2014-06-01

    Radiation absorbed by interventional cardiologists is a frequently under-evaluated important issue. Aim is to compare radiation dose absorbed by interventional cardiologists during percutaneous coronary procedures for acute coronary syndromes comparing transradial and transfemoral access. The randomized multicentre MATRIX (Minimizing Adverse Haemorrhagic Events by TRansradial Access Site and Systemic Implementation of angioX) trial has been designed to compare the clinical outcome of patients with acute coronary syndromes treated invasively according to the access site (transfemoral vs. transradial) and to the anticoagulant therapy (bivalirudin vs. heparin). Selected experienced interventional cardiologists involved in this study have been equipped with dedicated thermoluminescent dosimeters to evaluate the radiation dose absorbed during transfemoral or right transradial or left transradial access. For each access we evaluate the radiation dose absorbed at wrist, at thorax and at eye level. Consequently the operator is equipped with three sets (transfemoral, right transradial or left transradial access) of three different dosimeters (wrist, thorax and eye dosimeter). Primary end-point of the study is the procedural radiation dose absorbed by operators at thorax. An important secondary end-point is the procedural radiation dose absorbed by operators comparing the right or left radial approach. Patient randomization is performed according to the MATRIX protocol for the femoral or radial approach. A further randomization for the radial approach is performed to compare right and left transradial access. The RAD-MATRIX study will probably consent to clarify the radiation issue for interventional cardiologist comparing transradial and transfemoral access in the setting of acute coronary syndromes. Copyright © 2014 Elsevier Inc. All rights reserved.

  17. Optimized molecular reconstruction procedure combining hybrid reverse Monte Carlo and molecular dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Bousige, Colin; Boţan, Alexandru; Coasne, Benoît, E-mail: coasne@mit.edu [Department of Civil and Environmental Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139 (United States); UMI 3466 CNRS-MIT, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139 (United States); Ulm, Franz-Josef [Department of Civil and Environmental Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139 (United States); Pellenq, Roland J.-M. [Department of Civil and Environmental Engineering, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139 (United States); UMI 3466 CNRS-MIT, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139 (United States); CINaM, CNRS/Aix Marseille Université, Campus de Luminy, 13288 Marseille Cedex 09 (France)

    2015-03-21

    We report an efficient atom-scale reconstruction method that consists of combining the Hybrid Reverse Monte Carlo algorithm (HRMC) with Molecular Dynamics (MD) in the framework of a simulated annealing technique. In the spirit of the experimentally constrained molecular relaxation technique [Biswas et al., Phys. Rev. B 69, 195207 (2004)], this modified procedure offers a refined strategy in the field of reconstruction techniques, with special interest for heterogeneous and disordered solids such as amorphous porous materials. While the HRMC method generates physical structures, thanks to the use of energy penalties, the combination with MD makes the method at least one order of magnitude faster than HRMC simulations to obtain structures of similar quality. Furthermore, in order to ensure the transferability of this technique, we provide rational arguments to select the various input parameters such as the relative weight ω of the energy penalty with respect to the structure optimization. By applying the method to disordered porous carbons, we show that adsorption properties provide data to test the global texture of the reconstructed sample but are only weakly sensitive to the presence of defects. In contrast, the vibrational properties such as the phonon density of states are found to be very sensitive to the local structure of the sample.
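
    The role of the relative weight ω can be made concrete with a generic Metropolis-type acceptance rule in which the data misfit and the energy penalty are traded off; the form below is an assumption about a typical HRMC step within simulated annealing, not the authors' implementation.

        import numpy as np

        def hrmc_accept(d_chi2, d_energy, omega, temperature, rng=np.random.default_rng()):
            """Accept/reject a trial move from the change in data misfit (chi^2),
            the change in energy, the weight omega and the annealing temperature."""
            d_cost = d_chi2 + omega * d_energy
            return d_cost <= 0.0 or rng.random() < np.exp(-d_cost / temperature)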

  18. Optimal Substrate Preheating Model for Thermal Spray Deposition of Thermosets onto Polymer Matrix Composites

    Science.gov (United States)

    Ivosevic, M.; Knight, R.; Kalidindi, S. R.; Palmese, G. R.; Tsurikov, A.; Sutter, J. K.

    2003-01-01

    High velocity oxy-fuel (HVOF) sprayed, functionally graded polyimide/WC-Co composite coatings on polymer matrix composites (PMC's) are being investigated for applications in turbine engine technologies. This requires that the polyimide, used as the matrix material, be fully crosslinked during deposition in order to maximize its engineering properties. The rapid heating and cooling nature of the HVOF spray process and the high heat flux through the coating into the substrate typically do not allow sufficient time at temperature for curing of the thermoset. It was hypothesized that external substrate preheating might enhance the deposition behavior and curing reaction during the thermal spraying of polyimide thermosets. A simple analytical process model for the deposition of thermosetting polyimide onto polymer matrix composites by HVOF thermal spray technology has been developed. The model incorporates various heat transfer mechanisms and enables surface temperature profiles of the coating to be simulated, primarily as a function of substrate preheating temperature. Four cases were modeled: (i) no substrate preheating; (ii) substrates electrically preheated from the rear; (iii) substrates preheated by hot air from the front face; and (iv) substrates electrically preheated from the rear and by hot air from the front.

  19. Matrix-induced autologous chondrocyte implantation for a large chondral defect in a professional football player: a case report

    Directory of Open Access Journals (Sweden)

    Beyzadeoglu Tahsin

    2012-06-01

    Full Text Available Introduction: Matrix-assisted autologous chondrocyte implantation is a well-known procedure for the treatment of cartilage defects, which aims to establish a regenerative milieu and restore hyaline cartilage. However, much less is known about third-generation autologous chondrocyte implantation application in high-level athletes. We report on the two-year follow-up outcome after matrix-assisted autologous chondrocyte implantation to treat a large cartilage lesion of the lateral femoral condyle in a male Caucasian professional football player. Case presentation: A 27-year-old male Caucasian professional football player was previously treated for cartilage problems of his left knee with two failed microfracture procedures resulting in a 9 cm² Outerbridge Grade 4 chondral lesion at his lateral femoral condyle. Preoperative Tegner-Lysholm and Brittberg-Peterson scores were 64 and 58, and by the second year they were 91 and 6. An evaluation with magnetic resonance imaging demonstrated filling of the defect with the signal intensity of the repair tissue resembling healthy cartilage. Second-look arthroscopy revealed robust, smooth cartilage covering his lateral femoral condyle. He returned to his former competitive level without restrictions or complaints one year after the procedure. Conclusions: This case illustrates that robust cartilage tissue can be obtained with a matrix-assisted autologous chondrocyte implantation procedure even after two failed microfracture procedures in a large (9 cm²) cartilage defect. To the best of our knowledge, this is the first case report on the application of the third-generation cell therapy treatment technique, matrix-assisted autologous chondrocyte implantation, in a professional football player.

  20. Optimal Tikhonov Regularization in Finite-Frequency Tomography

    Science.gov (United States)

    Fang, Y.; Yao, Z.; Zhou, Y.

    2017-12-01

    The last decade has witnessed a progressive transition in seismic tomography from ray theory to finite-frequency theory which overcomes the resolution limit of the high-frequency approximation in ray theory. In addition to approximations in wave propagation physics, a main difference between ray-theoretical tomography and finite-frequency tomography is the sparseness of the associated sensitivity matrix. It is well known that seismic tomographic problems are ill-posed and regularizations such as damping and smoothing are often applied to analyze the tradeoff between data misfit and model uncertainty. The regularizations depend on the structure of the matrix as well as the noise level of the data. Cross-validation has been used to constrain data uncertainties in body-wave finite-frequency inversions when measurements at multiple frequencies are available to invert for a common structure. In this study, we explore an optimal Tikhonov regularization in surface-wave phase-velocity tomography based on minimization of an empirical Bayes risk function using theoretical training datasets. We exploit the structure of the sensitivity matrix in the framework of singular value decomposition (SVD) which also allows for the calculation of the complete resolution matrix. We compare the optimal Tikhonov regularization in finite-frequency tomography with traditional trade-off analysis using surface wave dispersion measurements from global as well as regional studies.
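
    Within the SVD framework mentioned above, the Tikhonov solution and the model resolution matrix have closed forms through filter factors; the generic sketch below (random test matrices, not the surface-wave kernels of the record) shows both.

        import numpy as np

        def tikhonov_svd(G, d, lam):
            """Tikhonov-regularized solution of G m = d and the corresponding
            model resolution matrix, via the reduced SVD of G."""
            U, s, Vt = np.linalg.svd(G, full_matrices=False)
            f = s ** 2 / (s ** 2 + lam ** 2)           # Tikhonov filter factors
            m = Vt.T @ ((f / s) * (U.T @ d))           # m = V diag(s/(s^2+lam^2)) U^T d
            R = (Vt.T * f) @ Vt                        # R = V diag(f) V^T
            return m, R

        rng = np.random.default_rng(2)
        G = rng.normal(size=(60, 40))
        m_true = np.zeros(40); m_true[10] = 1.0
        d = G @ m_true + 0.01 * rng.normal(size=60)
        m_est, R = tikhonov_svd(G, d, lam=0.5)
        print("resolution of parameter 10:", round(R[10, 10], 3))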

  1. Study of structure modification influence of polymer matrix on the impurity centers luminescence

    International Nuclear Information System (INIS)

    Akylbaev, Zh.S.; Karitskay, S.G.; Nikitina, L.A.; Kobzev, G.I.

    2002-01-01

    Data on the influence of changes in polymer matrix structure on the fluorescence of impurity centers are presented. Polyvinyl butyral (PVB) serves as the polymer matrix, and crystal violet (CV) dye molecules serve as the fluorescence centers. Structuring of the PVB matrix under the action of atmospheric oxygen, produced by annealing the film during thermal treatment of the liquid polymer, is simulated computationally. The calculation of the CV spectral line wavelengths, with optimization of the dye-polymer system, is carried out first by the MM+ molecular mechanics method and then by the semi-empirical ZINDO1 method

  2. Determination of ibuprofen enantiomers in breast milk using vortex-assisted matrix solid-phase dispersion and direct chiral liquid chromatography.

    Science.gov (United States)

    León-González, M E; Rosales-Conrado, N

    2017-09-08

    A mixture of β-cyclodextrin (β-CD) and primary and secondary amine (PSA) sorbents was employed for the extraction and quantification of ibuprofen enantiomers from human breast milk, combining a vortex-assisted matrix solid-phase dispersion method (MSPD) and direct chiral liquid chromatography (CLC) with ultraviolet detection (UV). The MSPD sample preparation procedure was optimized focusing on both the type and amount of dispersion/sorption sorbents and the nature of the elution solvent, in order to obtain acceptable recoveries and avoiding enantiomer conversion. These MSPD parameters were optimized with the aid of an experimental design approach. Hence, a factorial design was used for identification of the main variables affecting the extraction process of ibuprofen enantiomers. Under optimum selected conditions, MSPD combined with direct CLC-UV was successfully applied for ibuprofen enantiomeric determination in breast milk at enantiomer levels between 0.15 and 6.0 μg g⁻¹. The proposed analytical method also provided good repeatability, with relative standard deviations of 6.4% and 8.3% for the intra-day and inter-day precision, respectively. Copyright © 2017 Elsevier B.V. All rights reserved.

  3. A Novel Method to Implement the Matrix Pencil Super Resolution Algorithm for Indoor Positioning

    Directory of Open Access Journals (Sweden)

    Tariq Jamil Saifullah Khanzada

    2011-10-01

    Full Text Available This article presents the results of algorithms implemented to estimate delays and distances for an indoor positioning system. Data sets for the transmitted and received signals were captured in typical outdoor and indoor areas, and super resolution estimation algorithms were applied. Different state-of-the-art and super resolution techniques are applied to obtain optimal estimates of the delays and distances between the transmitted and received signals, and a novel method for the Matrix Pencil algorithm is devised. The algorithms perform differently for different scenarios of transmitter and receiver positions. Two scenarios were examined: in the single antenna scenario, super resolution techniques such as ESPRIT (Estimation of Signal Parameters via Rotational Invariance Technique) and the Matrix Pencil algorithm give optimal performance compared to the conventional techniques. In the two antenna scenario, RootMUSIC and the Matrix Pencil algorithm performed better than the other algorithms for distance estimation; however, the accuracy of all the algorithms is worse than in the single antenna scenario. In all cases the devised Matrix Pencil algorithm achieved the best estimation results.
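
    The core of the Matrix Pencil method is compact: build a Hankel matrix from the sampled channel data, form the shifted pencil, and read the path delays off the dominant generalized eigenvalues. The snippet below is a generic textbook version (not the novel variant devised in the record), with a toy frequency-domain two-path example.

        import numpy as np

        def matrix_pencil_delays(y, n_paths, step):
            """Estimate n_paths delays from uniform samples y[n] ~ sum_k a_k z_k**n,
            where z_k = exp(-j*2*pi*tau_k*step) for frequency-domain data."""
            N = len(y)
            L = N // 2                                             # pencil parameter (N/3..N/2 typical)
            Y = np.array([y[i:i + L + 1] for i in range(N - L)])   # Hankel data matrix
            Y1, Y2 = Y[:, :-1], Y[:, 1:]                           # the pencil Y2 - z*Y1
            z = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)
            z = z[np.argsort(-np.abs(z))][:n_paths]                # keep the dominant modes
            return np.sort(-np.angle(z) / (2.0 * np.pi * step))

        # Toy example: two propagation paths with 10 ns and 25 ns delay, sampled in frequency.
        df, taus, n = 1e6, np.array([10e-9, 25e-9]), np.arange(128)
        y = sum(np.exp(-2j * np.pi * tau * df * n) for tau in taus)
        y = y + 0.01 * np.random.default_rng(4).normal(size=n.size)
        print(np.round(matrix_pencil_delays(y, 2, df) * 1e9, 1), "ns")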

  4. Optimal pole shifting controller for interconnected power system

    International Nuclear Information System (INIS)

    Yousef, Ali M.; Kassem, Ahmed M.

    2011-01-01

    Research highlights: → A mathematical model represents a power system consisting of a synchronous machine connected to an infinite bus through a transmission line. → A power system stabilizer was designed based on an optimal pole shifting controller. → The system performance was tested through load disturbances at different operating conditions. → The system performance with the proposed optimal pole shifting controller is compared with a conventional pole placement controller. → The digital simulation results indicated that the proposed controller has a superior performance. -- Abstract: A power system stabilizer based on optimal pole shifting is proposed. An approach for shifting the real parts of the open-loop poles to any desired positions while preserving the imaginary parts is presented. In each step of this approach, it is required to solve a first-order or a second-order linear matrix Lyapunov equation for shifting one real pole or two complex conjugate poles, respectively. The presented method yields a solution which is optimal with respect to a quadratic performance index. The attractive feature of this method is that it enables solutions of the complex problem to be found easily without solving any non-linear algebraic Riccati equation. The present power system stabilizer is based on the Riccati equation approach. The control law depends on finding the feedback gain matrix, and the control signal is then synthesized by multiplying the state variables of the power system with the determined gain matrix. The gain matrix is calculated only once, and it works over a wide range of operating conditions. To validate the power of the proposed PSS, a linearized model of a simple power system consisting of a single synchronous machine connected to an infinite bus bar through a transmission line is simulated. The studied power system is subjected to various operating points and power system parameter changes.

  5. Optimal pole shifting controller for interconnected power system

    Energy Technology Data Exchange (ETDEWEB)

    Yousef, Ali M., E-mail: drali_yousef@yahoo.co [Electrical Eng. Dept., Faculty of Engineering, Assiut University (Egypt); Kassem, Ahmed M., E-mail: kassem_ahmed53@hotmail.co [Control Technology Dep., Industrial Education College, Beni-Suef University (Egypt)

    2011-05-15

    Research highlights: → A mathematical model represents a power system consisting of a synchronous machine connected to an infinite bus through a transmission line. → A power system stabilizer was designed based on an optimal pole shifting controller. → The system performance was tested through load disturbances at different operating conditions. → The system performance with the proposed optimal pole shifting controller is compared with a conventional pole placement controller. → The digital simulation results indicated that the proposed controller has a superior performance. -- Abstract: A power system stabilizer based on optimal pole shifting is proposed. An approach for shifting the real parts of the open-loop poles to any desired positions while preserving the imaginary parts is presented. In each step of this approach, it is required to solve a first-order or a second-order linear matrix Lyapunov equation for shifting one real pole or two complex conjugate poles, respectively. The presented method yields a solution which is optimal with respect to a quadratic performance index. The attractive feature of this method is that it enables solutions of the complex problem to be found easily without solving any non-linear algebraic Riccati equation. The present power system stabilizer is based on the Riccati equation approach. The control law depends on finding the feedback gain matrix, and the control signal is then synthesized by multiplying the state variables of the power system with the determined gain matrix. The gain matrix is calculated only once, and it works over a wide range of operating conditions. To validate the power of the proposed PSS, a linearized model of a simple power system consisting of a single synchronous machine connected to an infinite bus bar through a transmission line is simulated. The studied power system is subjected to various operating points and power system parameter changes.
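
    The pole-shifting idea in the two records above can be illustrated with SciPy's pole-placement routine: the open-loop real parts are shifted leftward while the imaginary parts are preserved, and a constant state-feedback gain is computed once. The sketch uses a hypothetical third-order linearized model and ordinary pole placement as a stand-in, so it reproduces the pole shifting but not the quadratic-optimality property of the Lyapunov-based method described in the records.

        import numpy as np
        from scipy.signal import place_poles

        # Hypothetical linearized single-machine model (numbers purely illustrative).
        A = np.array([[ 0.0, 377.0,   0.0],
                      [-0.1,  -0.05, -0.2],
                      [ 0.0,   0.0,  -5.0]])
        B = np.array([[0.0], [0.0], [5.0]])

        open_poles = np.linalg.eigvals(A)
        desired = open_poles.real - 1.0 + 1j * open_poles.imag   # shift real parts only
        K = place_poles(A, B, desired).gain_matrix               # u = -K x, computed once offline
        print("closed-loop poles:", np.round(np.linalg.eigvals(A - B @ K), 3))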

  6. Aerodynamic shape optimization using preconditioned conjugate gradient methods

    Science.gov (United States)

    Burgreen, Greg W.; Baysal, Oktay

    1993-01-01

    In an effort to further improve upon the latest advancements made in aerodynamic shape optimization procedures, a systematic study is performed to examine several current solution methodologies as applied to various aspects of the optimization procedure. It is demonstrated that preconditioned conjugate gradient-like methodologies dramatically decrease the computational efforts required for such procedures. The design problem investigated is the shape optimization of the upper and lower surfaces of an initially symmetric (NACA-012) airfoil in inviscid transonic flow and at zero degree angle-of-attack. The complete surface shape is represented using a Bezier-Bernstein polynomial. The present optimization method then automatically obtains supercritical airfoil shapes over a variety of freestream Mach numbers. Furthermore, the best optimization strategy examined resulted in a factor of 8 decrease in computational time as well as a factor of 4 decrease in memory over the most efficient strategies in current use.
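
    For reference, the preconditioned conjugate gradient iteration itself is short; the sketch below solves a generic symmetric positive-definite system with a Jacobi (diagonal) preconditioner and is not tied to the flow solver or the design variables of the record.

        import numpy as np

        def pcg(A, b, M_inv, tol=1e-8, max_iter=500):
            """Preconditioned conjugate gradient for A x = b (A symmetric positive
            definite); M_inv applies the preconditioner inverse to a vector."""
            x = np.zeros_like(b)
            r = b - A @ x
            z = M_inv(r)
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:
                    break
                z = M_inv(r)
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        n = 100
        A = np.diag(np.arange(1.0, n + 1)) + 0.1 * np.ones((n, n))   # SPD test matrix
        b = np.ones(n)
        x = pcg(A, b, M_inv=lambda r: r / np.diag(A))                # Jacobi preconditioner
        print("residual:", np.linalg.norm(A @ x - b))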

  7. Improving the ensemble optimization method through covariance matrix adaptation (CMA-EnOpt)

    NARCIS (Netherlands)

    Fonseca, R.M.; Leeuwenburgh, O.; Hof, P.M.J. van den; Jansen, J.D.

    2013-01-01

    Ensemble Optimization (EnOpt) is a rapidly emerging method for reservoir model based production optimization. EnOpt uses an ensemble of controls to approximate the gradient of the objective function with respect to the controls. Current implementations of EnOpt use a Gaussian ensemble with a

  8. Improving the efficiency of aerodynamic shape optimization

    Science.gov (United States)

    Burgreen, Greg W.; Baysal, Oktay; Eleshaky, Mohamed E.

    1994-01-01

    The computational efficiency of an aerodynamic shape optimization procedure that is based on discrete sensitivity analysis is increased through the implementation of two improvements. The first improvement involves replacing a grid-point-based approach for surface representation with a Bezier-Bernstein polynomial parameterization of the surface. Explicit analytical expressions for the grid sensitivity terms are developed for both approaches. The second improvement proposes the use of Newton's method in lieu of an alternating direction implicit methodology to calculate the highly converged flow solutions that are required to compute the sensitivity coefficients. The modified design procedure is demonstrated by optimizing the shape of an internal-external nozzle configuration. Practically identical optimization results are obtained that are independent of the method used to represent the surface. A substantial factor of 8 decrease in computational time for the optimization process is achieved by implementing both of the design procedure improvements.
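
    The Bezier-Bernstein parameterization replaces grid-point design variables with a handful of control points, and the grid sensitivities with respect to those control points are analytic. A minimal sketch of evaluating such a curve, for illustration only:

        import numpy as np
        from scipy.special import comb

        def bernstein_basis(n, t):
            """Bernstein polynomials B_{k,n}(t), k = 0..n, at the points t."""
            t = np.asarray(t)[:, None]
            k = np.arange(n + 1)[None, :]
            return comb(n, k) * t ** k * (1.0 - t) ** (n - k)

        def bezier_ordinates(control_y, t):
            """Surface ordinates y(t) = sum_k B_{k,n}(t) P_k; the grid sensitivity
            dy/dP_k is simply the Bernstein basis itself (analytic, no differencing)."""
            B = bernstein_basis(len(control_y) - 1, t)
            return B @ np.asarray(control_y), B

        t = np.linspace(0.0, 1.0, 11)
        y, dy_dP = bezier_ordinates([0.0, 0.08, 0.05, 0.0], t)    # cubic: 4 control points
        print(np.round(y, 4))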

  9. Optimization and characterization of woven kevlar reinforced epoxy matrix composite materials

    International Nuclear Information System (INIS)

    Imran, A.; Aslam, S.

    2007-01-01

    Composite materials are well established materials that have demonstrated promising advantages among the lightweight structural materials used for aerospace and advanced applications. An effort is now being made to develop and characterize Kevlar-epoxy composite materials by changing the volume fraction of Kevlar in the epoxy matrix. The optimum characteristics were observed with 37% fiber in the resin, using the hand lay-up process. The composites produced were subjected to mechanical testing to evaluate their mechanical characteristics. (author)

  10. Global quantum discord and matrix product density operators

    Science.gov (United States)

    Huang, Hai-Lin; Cheng, Hong-Guang; Guo, Xiao; Zhang, Duo; Wu, Yuyin; Xu, Jian; Sun, Zhao-Yu

    2018-06-01

    In a previous study, we have proposed a procedure to study global quantum discord in 1D chains whose ground states are described by matrix product states [Z.-Y. Sun et al., Ann. Phys. 359, 115 (2015)]. In this paper, we show that with a very simple generalization, the procedure can be used to investigate quantum mixed states described by matrix product density operators, such as quantum chains at finite temperatures and 1D subchains in high-dimensional lattices. As an example, we study the global discord in the ground state of a 2D transverse-field Ising lattice, and pay our attention to the scaling behavior of global discord in 1D sub-chains of the lattice. We find that, for any strength of the magnetic field, global discord always shows a linear scaling behavior as the increase of the length of the sub-chains. In addition, global discord and the so-called "discord density" can be used to indicate the quantum phase transition in the model. Furthermore, based upon our numerical results, we make some reliable predictions about the scaling of global discord defined on the n × n sub-squares in the lattice.

  11. Research on Operating Procedure Development in View of RCM Theory

    International Nuclear Information System (INIS)

    Shi, J.

    2015-01-01

    The operation of NPPs (nuclear power plants) is closely related to SSC (Structure, System and Component) function implementations and failure recoveries, and strictly follows operating procedures. The philosophy of RCM (Reliability Centered Maintenance), a widely used systematic engineering approach in industry that focuses on facility functions and the effectiveness of maintenance, is adopted in this paper for the analysis of NPP operation. Based on RCM theory, the paper discusses the general logic of operating procedure development and framework optimization in combination with NPP engineering design. Since the quality of operating procedures has a significant impact on the safe and reliable operation of NPPs, the paper provides a proposed operating procedure development logic diagram as a reference for the procedure optimization task ahead. (author)

  12. An innovative procedure of genome-wide association analysis fits studies on germplasm population and plant breeding.

    Science.gov (United States)

    He, Jianbo; Meng, Shan; Zhao, Tuanjie; Xing, Guangnan; Yang, Shouping; Li, Yan; Guan, Rongzhan; Lu, Jiangjie; Wang, Yufeng; Xia, Qiuju; Yang, Bing; Gai, Junyi

    2017-11-01

    The innovative RTM-GWAS procedure provides a relatively thorough detection of QTL and their multiple alleles for germplasm population characterization, gene network identification, and genomic selection strategy innovation in plant breeding. The previous genome-wide association studies (GWAS) have been concentrated on finding a handful of major quantitative trait loci (QTL), but plant breeders are interested in revealing the whole-genome QTL-allele constitution in breeding materials/germplasm (in which tremendous historical allelic variation has been accumulated) for genome-wide improvement. To match this requirement, two innovations were suggested for GWAS: first grouping tightly linked sequential SNPs into linkage disequilibrium blocks (SNPLDBs) to form markers with multi-allelic haplotypes, and second utilizing two-stage association analysis for QTL identification, where the markers were preselected by single-locus model followed by multi-locus multi-allele model stepwise regression. Our proposed GWAS procedure is characterized as a novel restricted two-stage multi-locus multi-allele GWAS (RTM-GWAS, https://github.com/njau-sri/rtm-gwas ). The Chinese soybean germplasm population (CSGP) composed of 1024 accessions with 36,952 SNPLDBs (generated from 145,558 SNPs, with reduced linkage disequilibrium decay distance) was used to demonstrate the power and efficiency of RTM-GWAS. Using the CSGP marker information, simulation studies demonstrated that RTM-GWAS achieved the highest QTL detection power and efficiency compared with the previous procedures, especially under large sample size and high trait heritability conditions. A relatively thorough detection of QTL with their multiple alleles was achieved by RTM-GWAS compared with the linear mixed model method on 100-seed weight in CSGP. A QTL-allele matrix (402 alleles of 139 QTL × 1024 accessions) was established as a compact form of the population genetic constitution. The 100-seed weight QTL-allele matrix was

  13. A genetic meta-algorithm-assisted inversion approach: hydrogeological study for the determination of volumetric rock properties and matrix and fluid parameters in unsaturated formations

    Science.gov (United States)

    Szabó, Norbert Péter

    2018-03-01

    An evolutionary inversion approach is suggested for the interpretation of nuclear and resistivity logs measured by direct-push tools in shallow unsaturated sediments. The efficiency of formation evaluation is improved by estimating simultaneously (1) the petrophysical properties that vary rapidly along a drill hole with depth and (2) the zone parameters that can be treated as constant, in one inversion procedure. In the workflow, the fractional volumes of water, air, matrix and clay are estimated in adjacent depths by linearized inversion, whereas the clay and matrix properties are updated using a float-encoded genetic meta-algorithm. The proposed inversion method provides an objective estimate of the zone parameters that appear in the tool response equations applied to solve the forward problem, which can significantly increase the reliability of the petrophysical model as opposed to setting these parameters arbitrarily. The global optimization meta-algorithm not only assures the best fit between the measured and calculated data but also gives a reliable solution, practically independent of the initial model, as laboratory data are unnecessary in the inversion procedure. The feasibility test uses engineering geophysical sounding logs observed in an unsaturated loessy-sandy formation in Hungary. The multi-borehole extension of the inversion technique is developed to determine the petrophysical properties and their estimation errors along a profile of drill holes. The genetic meta-algorithmic inversion method is recommended for hydrogeophysical logging applications of various kinds to automatically extract the volumetric ratios of rock and fluid constituents as well as the most important zone parameters in a reliable inversion procedure.

  14. Low emittance lattice optimization using a multi-objective evolutionary algorithm

    International Nuclear Information System (INIS)

    Gao Weiwei; Wang Lin; Li Weimin; He Duohui

    2011-01-01

    A low emittance lattice design and optimization procedure are systematically studied with a non-dominated sorting-based multi-objective evolutionary algorithm which not only globally searches the low emittance lattice, but also optimizes some beam quantities such as betatron tunes, momentum compaction factor and dispersion function simultaneously. In this paper the detailed algorithm and lattice design procedure are presented. The Hefei light source upgrade project storage ring lattice, with fixed magnet layout, is designed to illustrate this optimization procedure. (authors)

  15. The detection of influential subsets in linear regression using an influence matrix

    OpenAIRE

    Peña, Daniel; Yohai, Víctor J.

    1991-01-01

    This paper presents a new method to identify influential subsets in linear regression problems. The procedure uses the eigenstructure of an influence matrix which is defined as the matrix of uncentered covariance of the effect on the whole data set of deleting each observation, normalized to include the univariate Cook's statistics in the diagonal. It is shown that points in an influential subset will appear with large weight in at least one of the eigenvector linked to the largest eigenvalue...

  16. Direct numerical methods of mathematical modeling in mechanical structural design

    International Nuclear Information System (INIS)

    Sahili, Jihad; Verchery, Georges; Ghaddar, Ahmad; Zoaeter, Mohamed

    2002-01-01

    Full text. Structural design and numerical methods are generally interactive, requiring optimization procedures as the structure is analyzed. This analysis leads to the definition of certain mathematical terms, such as the stiffness matrix, which result from the modeling and are then used in numerical techniques during the dimensioning procedure. These techniques, and many others, involve the calculation of the generalized inverse of the stiffness matrix, also called the 'compliance matrix'. The aim of this paper is first to introduce different existing mathematical procedures used to calculate the compliance matrix from the stiffness matrix, then to apply direct numerical methods to solve the obtained system with the lowest computational time, and to compare the obtained results. The results show a big difference in computational time between the different procedures
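
    As an elementary illustration of the two routes discussed above (an explicit generalized inverse versus a direct solver applied to the same system), consider a small stiffness matrix in NumPy; the numbers are arbitrary and the snippet is only meant to fix ideas.

        import numpy as np

        K = np.array([[ 4.0, -2.0,  0.0],
                      [-2.0,  4.0, -2.0],
                      [ 0.0, -2.0,  4.0]])       # small example stiffness matrix
        f = np.array([1.0, 0.0, 0.0])            # load vector

        C = np.linalg.pinv(K)                    # compliance matrix (generalized inverse)
        u_direct = np.linalg.solve(K, f)         # direct solver, preferred when K is non-singular
        print(np.allclose(C @ f, u_direct))      # both yield the same displacements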

  17. Randomized comparison of operator radiation exposure comparing transradial and transfemoral approach for percutaneous coronary procedures: rationale and design of the minimizing adverse haemorrhagic events by TRansradial access site and systemic implementation of angioX – RAdiation Dose study (RAD-MATRIX)

    International Nuclear Information System (INIS)

    Sciahbasi, Alessandro; Calabrò, Paolo; Sarandrea, Alessandro; Rigattieri, Stefano; Tomassini, Francesco; Sardella, Gennaro; Zavalloni, Dennis; Cortese, Bernardo; Limbruno, Ugo; Tebaldi, Matteo; Gagnor, Andrea; Rubartelli, Paolo; Zingarelli, Antonio; Valgimigli, Marco

    2014-01-01

    Background: Radiation absorbed by interventional cardiologists is a frequently under-evaluated important issue. Aim is to compare radiation dose absorbed by interventional cardiologists during percutaneous coronary procedures for acute coronary syndromes comparing transradial and transfemoral access. Methods: The randomized multicentre MATRIX (Minimizing Adverse Haemorrhagic Events by TRansradial Access Site and Systemic Implementation of angioX) trial has been designed to compare the clinical outcome of patients with acute coronary syndromes treated invasively according to the access site (transfemoral vs. transradial) and to the anticoagulant therapy (bivalirudin vs. heparin). Selected experienced interventional cardiologists involved in this study have been equipped with dedicated thermoluminescent dosimeters to evaluate the radiation dose absorbed during transfemoral or right transradial or left transradial access. For each access we evaluate the radiation dose absorbed at wrist, at thorax and at eye level. Consequently the operator is equipped with three sets (transfemoral, right transradial or left transradial access) of three different dosimeters (wrist, thorax and eye dosimeter). Primary end-point of the study is the procedural radiation dose absorbed by operators at thorax. An important secondary end-point is the procedural radiation dose absorbed by operators comparing the right or left radial approach. Patient randomization is performed according to the MATRIX protocol for the femoral or radial approach. A further randomization for the radial approach is performed to compare right and left transradial access. Conclusions: The RAD-MATRIX study will probably consent to clarify the radiation issue for interventional cardiologist comparing transradial and transfemoral access in the setting of acute coronary syndromes

  18. Randomized comparison of operator radiation exposure comparing transradial and transfemoral approach for percutaneous coronary procedures: rationale and design of the minimizing adverse haemorrhagic events by TRansradial access site and systemic implementation of angioX – RAdiation Dose study (RAD-MATRIX)

    Energy Technology Data Exchange (ETDEWEB)

    Sciahbasi, Alessandro, E-mail: alessandro.sciahbasi@fastwebnet.it [Interventional Cardiology, Sandro Pertini Hospital – ASL RMB, Rome (Italy); Calabrò, Paolo [Division of Cardiology - Department of Cardio-Thoracic Sciences - Second University of Naples (Italy); Sarandrea, Alessandro [HSE Management, Rome (Italy); Rigattieri, Stefano [Interventional Cardiology, Sandro Pertini Hospital – ASL RMB, Rome (Italy); Tomassini, Francesco [Department of Cardiology, Infermi Hospital, Rivoli (Italy); Sardella, Gennaro [La Sapienza University, Rome (Italy); Zavalloni, Dennis [UO Emodinamica e Cardiologia Invasiva, IRCCS, Istituto Clinico Humanitas, Rozzano (Italy); Cortese, Bernardo [Interventional Cardiology, Fatebenefratelli Hospital, Milan (Italy); Limbruno, Ugo [Cardiology Unit, Misericordia Hospital, Grosseto (Italy); Tebaldi, Matteo [Cardiology Department, University of Ferrara, Department of Cardiology (Italy); Gagnor, Andrea [Department of Cardiology, Infermi Hospital, Rivoli (Italy); Rubartelli, Paolo [Villa Scassi Hospital, Genova (Italy); Zingarelli, Antonio [San Martino Hospital, Genova (Italy); Valgimigli, Marco [Thoraxcenter, Rotterdam (Netherlands)

    2014-06-15

    Background: Radiation absorbed by interventional cardiologists is a frequently under-evaluated important issue. Aim is to compare radiation dose absorbed by interventional cardiologists during percutaneous coronary procedures for acute coronary syndromes comparing transradial and transfemoral access. Methods: The randomized multicentre MATRIX (Minimizing Adverse Haemorrhagic Events by TRansradial Access Site and Systemic Implementation of angioX) trial has been designed to compare the clinical outcome of patients with acute coronary syndromes treated invasively according to the access site (transfemoral vs. transradial) and to the anticoagulant therapy (bivalirudin vs. heparin). Selected experienced interventional cardiologists involved in this study have been equipped with dedicated thermoluminescent dosimeters to evaluate the radiation dose absorbed during transfemoral or right transradial or left transradial access. For each access we evaluate the radiation dose absorbed at wrist, at thorax and at eye level. Consequently the operator is equipped with three sets (transfemoral, right transradial or left transradial access) of three different dosimeters (wrist, thorax and eye dosimeter). Primary end-point of the study is the procedural radiation dose absorbed by operators at thorax. An important secondary end-point is the procedural radiation dose absorbed by operators comparing the right or left radial approach. Patient randomization is performed according to the MATRIX protocol for the femoral or radial approach. A further randomization for the radial approach is performed to compare right and left transradial access. Conclusions: The RAD-MATRIX study will probably consent to clarify the radiation issue for interventional cardiologist comparing transradial and transfemoral access in the setting of acute coronary syndromes.

  19. Biaxial testing for fabrics and foils optimizing devices and procedures

    CERN Document Server

    Beccarelli, Paolo

    2015-01-01

    This book offers a well-structured, critical review of current design practice for tensioned membrane structures, including a detailed analysis of the experimental data required and critical issues relating to the lack of a set of design codes and testing procedures. The technical requirements for biaxial testing equipment are analyzed in detail, and aspects that need to be considered when developing biaxial testing procedures are emphasized. The analysis is supported by the results of a round-robin exercise comparing biaxial testing machines that involved four of the main research laboratories in the field. The biaxial testing devices and procedures presently used in Europe are extensively discussed, and information is provided on the design and implementation of a biaxial testing rig for architectural fabrics at Politecnico di Milano, which represents a benchmark in the field. The significance of the most recent developments in biaxial testing is also explored.

  20. Topology optimization of radio frequency and microwave structures

    DEFF Research Database (Denmark)

    Aage, Niels

    in this thesis, concerns the optimization of devices for wireless energy transfer via strongly coupled magnetic resonators. A single design problem is considered to demonstrate proof of concept. The resulting design illustrates the possibilities of the optimization method, but also reveals its numerical...... of efficient antennas and power supplies. A topology optimization methodology is proposed based on a design parameterization which incorporates the skin effect. The numerical optimization procedure is implemented in Matlab, for 2D problems, and in a parallel C++ optimization framework, for 3D design problems...... formalism, a two step optimization procedure is presented. This scheme is applied to the design and optimization of a hemispherical sub-wavelength antenna. The optimized antenna configuration displayed a ratio of radiated power to input power in excess of 99 %. The third, and last, design problem considered...

  1. Development of ISA procedure for uranium fuel fabrication and enrichment facilities

    International Nuclear Information System (INIS)

    Yamate, Kazuki; Arakawa, Tomoyuki; Yamashita, Masahiro; Sasaki, Noriaki; Hirano, Mitsumasa

    2011-01-01

    The integrated safety analysis (ISA) procedure has been developed to apply risk-informed regulation to uranium fuel fabrication and enrichment facilities. The major development efforts are as follows: (a) preparing the risk level matrix as an index for items-relied-on-for-safety (IROFS) identification, (b) defining requirements of IROFS, and (c) determining methods of IROFS importance based on the results of risk- and scenario-based analyses. For the risk level matrix, the consequence and likelihood categories have been defined by taking into account the Japanese regulatory laws, rules, and safety standards. The trial analyses using the developed procedure have been performed for several representative processes of the reference uranium fuel fabrication and enrichment facilities. This paper presents the results of the ISA for the sintering process of the reference fabrication facility. The results of the trial analyses have demonstrated the applicability of the procedure to the risk-informed regulation of these facilities. (author)

  2. A CFD-based aerodynamic design procedure for hypersonic wind-tunnel nozzles

    Science.gov (United States)

    Korte, John J.

    1993-01-01

    A new procedure which unifies the best of current classical design practices, computational fluid dynamics (CFD), and optimization procedures is demonstrated for designing the aerodynamic lines of hypersonic wind-tunnel nozzles. The new procedure can be used to design hypersonic wind tunnel nozzles with thick boundary layers where the classical design procedure has been shown to break down. An efficient CFD code, which solves the parabolized Navier-Stokes (PNS) equations using an explicit upwind algorithm, is coupled to a least-squares (LS) optimization procedure. A LS problem is formulated to minimize the difference between the computed flow field and the objective function, consisting of the centerline Mach number distribution and the exit Mach number and flow angle profiles. The aerodynamic lines of the nozzle are defined using a cubic spline, the slopes of which are optimized with the design procedure. The advantages of the new procedure are that it allows full use of powerful CFD codes in the design process, solves an optimization problem to determine the new contour, can be used to design new nozzles or improve sections of existing nozzles, and automatically compensates the nozzle contour for viscous effects as part of the unified design procedure. The new procedure is demonstrated by designing two Mach 15, a Mach 12, and a Mach 18 helium nozzles. The flexibility of the procedure is demonstrated by designing the two Mach 15 nozzles using different constraints, the first nozzle for a fixed length and exit diameter and the second nozzle for a fixed length and throat diameter. The computed flow field for the Mach 15 least squares parabolized Navier-Stokes (LS/PNS) designed nozzle is compared with the classically designed nozzle and demonstrates a significant improvement in the flow expansion process and uniform core region.

  3. Robust buckling optimization of laminated composite structures using discrete material optimization considering “worst” shape imperfections

    DEFF Research Database (Denmark)

    Henrichsen, Søren Randrup; Lindgaard, Esben; Lund, Erik

    2015-01-01

    Robust buckling optimal design of laminated composite structures is conducted in this work. Optimal designs are obtained by considering geometric imperfections in the optimization procedure. Discrete Material Optimization is applied to obtain optimal laminate designs. The optimal geometric...... imperfection is represented by the “worst” shape imperfection. The two optimization problems are combined through the recurrence optimization. Hereby the imperfection sensitivity of the considered structures can be studied. The recurrence optimization is demonstrated through a U-profile and a cylindrical panel...... example. The imperfection sensitivity of the optimized structure decreases during the recurrence optimization for both examples, hence robust buckling optimal structures are designed....

  4. Protein crystallization with microseed matrix screening: application to human germline antibody Fabs

    International Nuclear Information System (INIS)

    Obmolova, Galina; Malia, Thomas J.; Teplyakov, Alexey; Sweet, Raymond W.; Gilliland, Gary L.

    2014-01-01

    The power of microseed matrix screening is demonstrated in the crystallization of a panel of antibody Fab fragments. The crystallization of 16 human antibody Fab fragments constructed from all pairs of four different heavy chains and four different light chains was enabled by employing microseed matrix screening (MMS). In initial screening, diffraction-quality crystals were obtained for only three Fabs, while many Fabs produced hits that required optimization. Application of MMS, using the initial screens and/or refinement screens, resulted in diffraction-quality crystals of these Fabs. Five Fabs that failed to give hits in the initial screen were crystallized by cross-seeding MMS followed by MMS optimization. The crystallization protocols and strategies that resulted in structure determination of all 16 Fabs are presented. These results illustrate the power of MMS and provide a basis for developing future strategies for macromolecular crystallization

  5. A Class of Weighted Low Rank Approximation of the Positive Semidefinite Hankel Matrix

    Directory of Open Access Journals (Sweden)

    Jianchao Bai

    2015-01-01

    Full Text Available We consider the weighted low rank approximation of the positive semidefinite Hankel matrix problem arising in signal processing. By using the Vandermonde representation, we firstly transform the problem into an unconstrained optimization problem and then use the nonlinear conjugate gradient algorithm with the Armijo line search to solve the equivalent unconstrained optimization problem. Numerical examples illustrate that the new method is feasible and effective.
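
    The optimization machinery named above (a nonlinear conjugate gradient method with an Armijo backtracking line search) is easy to prototype. The following Python sketch implements Fletcher-Reeves conjugate gradients with Armijo backtracking for a generic smooth objective; the quadratic test function, step parameters and tolerances are illustrative assumptions and not the paper's Vandermonde-parameterized Hankel objective.

```python
import numpy as np

def armijo_step(f, x, d, g, alpha0=1.0, shrink=0.5, c1=1e-4):
    """Backtracking (Armijo) line search along the direction d."""
    alpha = alpha0
    fx = f(x)
    while f(x + alpha * d) > fx + c1 * alpha * g.dot(d):
        alpha *= shrink
    return alpha

def nonlinear_cg(f, grad, x0, tol=1e-8, max_iter=500):
    """Fletcher-Reeves nonlinear conjugate gradient with Armijo line search."""
    x = x0.copy()
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = armijo_step(f, x, d, g)
        x = x + alpha * d
        g_new = grad(x)
        beta_fr = g_new.dot(g_new) / g.dot(g)     # Fletcher-Reeves coefficient
        d = -g_new + beta_fr * d
        if d.dot(g_new) >= 0:                     # restart if not a descent direction
            d = -g_new
        g = g_new
    return x

# Toy smooth objective standing in for the unconstrained approximation problem
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
f = lambda x: 0.5 * x @ A @ x - b @ x
grad = lambda x: A @ x - b

x_opt = nonlinear_cg(f, grad, np.zeros(2))
print(x_opt, np.linalg.solve(A, b))   # the two vectors should agree
```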

  6. Approximate Reanalysis in Topology Optimization

    DEFF Research Database (Denmark)

    Amir, Oded; Bendsøe, Martin P.; Sigmund, Ole

    2009-01-01

    In the nested approach to structural optimization, most of the computational effort is invested in the solution of the finite element analysis equations. In this study, the integration of an approximate reanalysis procedure into the framework of topology optimization of continuum structures...

  7. Matrix pentagons

    Science.gov (United States)

    Belitsky, A. V.

    2017-10-01

    The Operator Product Expansion for null polygonal Wilson loop in planar maximally supersymmetric Yang-Mills theory runs systematically in terms of multi-particle pentagon transitions which encode the physics of excitations propagating on the color flux tube ending on the sides of the four-dimensional contour. Their dynamics was unraveled in the past several years and culminated in a complete description of pentagons as an exact function of the 't Hooft coupling. In this paper we provide a solution for the last building block in this program, the SU(4) matrix structure arising from internal symmetry indices of scalars and fermions. This is achieved by a recursive solution of the Mirror and Watson equations obeyed by the so-called singlet pentagons and fixing the form of the twisted component in their tensor decomposition. The non-singlet, or charged, pentagons are deduced from these by a limiting procedure.

  8. Matrix pentagons

    Directory of Open Access Journals (Sweden)

    A.V. Belitsky

    2017-10-01

    Full Text Available The Operator Product Expansion for null polygonal Wilson loop in planar maximally supersymmetric Yang–Mills theory runs systematically in terms of multi-particle pentagon transitions which encode the physics of excitations propagating on the color flux tube ending on the sides of the four-dimensional contour. Their dynamics was unraveled in the past several years and culminated in a complete description of pentagons as an exact function of the 't Hooft coupling. In this paper we provide a solution for the last building block in this program, the SU(4) matrix structure arising from internal symmetry indices of scalars and fermions. This is achieved by a recursive solution of the Mirror and Watson equations obeyed by the so-called singlet pentagons and fixing the form of the twisted component in their tensor decomposition. The non-singlet, or charged, pentagons are deduced from these by a limiting procedure.

  9. A Current Control Approach for an Abnormal Grid Supplied Ultra Sparse Z-Source Matrix Converter with a Particle Swarm Optimization Proportional-Integral Induction Motor Drive Controller

    Directory of Open Access Journals (Sweden)

    Seyed Sina Sebtahmadi

    2016-11-01

    Full Text Available A rotational d-q current control scheme based on a Particle Swarm Optimization-Proportional-Integral (PSO-PI) controller is used to drive an induction motor (IM) through an Ultra Sparse Z-source Matrix Converter (USZSMC). To minimize the overall size of the system, the lowest feasible values of the Z-source elements are calculated by considering both the timing and circuit aspects. A meta-heuristic method is integrated into the control system in order to find optimal coefficient values in a single multimodal problem. Hence, the effect of all coefficients in minimizing the total harmonic distortion (THD) and balancing the stator current is considered simultaneously. Through changing the reference point of magnitude or frequency, the modulation index can be automatically adjusted and respond to changes without heavy computational cost. The focus of this research is on a reliable and lightweight system with low computational resources. The proposed scheme is validated through both simulation and experimental results.
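
    As a rough illustration of the PSO-PI idea (not the authors' converter model), the sketch below uses a plain particle swarm to tune the proportional and integral gains of a PI controller for a toy first-order plant integrated by forward Euler; the plant, the squared-error cost and the swarm settings are all assumptions made for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def pi_cost(gains, dt=0.01, t_end=2.0):
    """Integral of squared error for a PI-controlled first-order plant dy/dt = -y + u."""
    kp, ki = gains
    y, integ, cost = 0.0, 0.0, 0.0
    for _ in range(int(t_end / dt)):
        e = 1.0 - y                      # unit-step reference
        integ += e * dt
        u = kp * e + ki * integ
        y += dt * (-y + u)               # forward-Euler plant update
        cost += e * e * dt
    return cost

def pso(cost, bounds, n_particles=20, iters=60, w=0.7, c1=1.5, c2=1.5):
    lo = np.array([b[0] for b in bounds]); hi = np.array([b[1] for b in bounds])
    dim = len(bounds)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([cost(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([cost(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

gains, best = pso(pi_cost, bounds=[(0.0, 20.0), (0.0, 20.0)])
print("Kp, Ki =", gains, "cost =", best)
```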

  10. Silica Modified with Polyaniline as a Potential Sorbent for Matrix Solid Phase Dispersion (MSPD) and Dispersive Solid Phase Extraction (d-SPE) of Plant Samples

    Science.gov (United States)

    Sowa, Ireneusz; Wójciak-Kosior, Magdalena; Strzemski, Maciej; Sawicki, Jan; Staniak, Michał; Dresler, Sławomir; Szwerc, Wojciech; Mołdoch, Jarosław; Latalski, Michał

    2018-01-01

    Polyaniline (PANI) is one of the best known conductive polymers with multiple applications. Recently, it was also used in separation techniques, mostly as a component of composites for solid-phase microextraction (SPME). In the present paper, sorbent obtained by in situ polymerization of aniline directly on silica gel particles (Si-PANI) was used for dispersive solid phase extraction (d-SPE) and matrix solid-phase dispersion (MSPD). The efficiency of both techniques was evaluated with the use of high performance liquid chromatography with diode array detection (HPLC-DAD) quantitative analysis. The quality of the sorbent was verified by Raman spectroscopy and microscopy combined with an automated procedure using computer image analysis. For extraction experiments, triterpenes were chosen as model compounds. The optimal conditions were as follows: protonated Si-PANI impregnated with water, 160/1 sorbent/analyte ratio, 3 min of extraction time, 4 min of desorption time and methanolic solution of ammonia for elution of analytes. The proposed procedure was successfully used for pretreatment of plant samples. PMID:29565297

  11. Enhanced selectivity in mixed matrix membranes for CO2 capture through efficient dispersion of amine-functionalized MOF nanoparticles

    Science.gov (United States)

    Ghalei, Behnam; Sakurai, Kento; Kinoshita, Yosuke; Wakimoto, Kazuki; Isfahani, Ali Pournaghshband; Song, Qilei; Doitomi, Kazuki; Furukawa, Shuhei; Hirao, Hajime; Kusuda, Hiromu; Kitagawa, Susumu; Sivaniah, Easan

    2017-07-01

    Mixed matrix membranes (MMMs) for gas separation applications have enhanced selectivity when compared with the pure polymer matrix, but are commonly reported with low intrinsic permeability, which has major cost implications for implementation of membrane technologies in large-scale carbon capture projects. High-permeability polymers rarely generate sufficient selectivity for energy-efficient CO2 capture. Here we report substantial selectivity enhancements within high-permeability polymers as a result of the efficient dispersion of amine-functionalized, nanosized metal-organic framework (MOF) additives. The enhancement effects under optimal mixing conditions occur with minimal loss in overall permeability. Nanosizing of the MOF enhances its dispersion within the polymer matrix to minimize non-selective microvoid formation around the particles. Amination of such MOFs increases their interaction with the polymer matrix, resulting in a measured rigidification and enhanced selectivity of the overall composite. The optimal MOF MMM performance was verified in three different polymer systems, and also over pressure and temperature ranges suitable for carbon capture.

  12. Analytical Tools to Improve Optimization Procedures for Lateral Flow Assays

    Directory of Open Access Journals (Sweden)

    Helen V. Hsieh

    2017-05-01

    Full Text Available Immunochromatographic or lateral flow assays (LFAs) are inexpensive, easy to use, point-of-care medical diagnostic tests that are found in arenas ranging from a doctor's office in Manhattan to a rural medical clinic in low resource settings. The simplicity in the LFA itself belies the complex task of optimization required to make the test sensitive, rapid and easy to use. Currently, the manufacturers develop LFAs by empirical optimization of material components (e.g., analytical membranes, conjugate pads and sample pads), biological reagents (e.g., antibodies, blocking reagents and buffers) and the design of delivery geometry. In this paper, we will review conventional optimization and then focus on the latter and outline analytical tools, such as dynamic light scattering and optical biosensors, as well as methods, such as microfluidic flow design and mechanistic models. We are applying these tools to find non-obvious optima of lateral flow assays for improved sensitivity, specificity and manufacturing robustness.

  13. Optimization of labelling procedure of 188Re-DMSA(V)

    International Nuclear Information System (INIS)

    Dantas, Danielle M.; Brambilla, Tania P.; Reis, Nicoli F.; Osso Junior, Joao A.

    2011-01-01

    Radionuclide therapy (RNT) is emerging as an important tool of nuclear medicine. Apart from the well established 131I, several other promising radionuclides have been identified, among them 188Re, 90Y and 177Lu. 188Re has received a lot of attention in the past decade due to its favourable nuclear characteristics [t1/2 = 16.9 h, Eβmax = 2.12 MeV and Eγ = 155 keV (15%), suitable for imaging], including the fact that it is carrier-free and can be obtained cost-effectively through the 188W/188Re generator. Biodistribution studies of 188Re-DMSA(V) have shown that its general pharmacokinetic properties are similar to those of 99mTc-DMSA(V), so this agent could be used for targeted radiotherapy of medullary thyroid carcinoma, bone metastases, soft tissue and other tumours. The aim of this work is to evaluate two labelling procedures for the preparation of 188Re-DMSA(V). The first preparation used a commercial kit of DMSA(III) for labelling with 99mTc at high temperature (100 deg. C). The second was prepared in a vial containing 2.5 mg of DMSA, 1.00 mg of SnCl2.2H2O, 10 mg of sodium oxalate and 10 mg of cyclodextrin, in a total volume of 2.0 mL. The pH was adjusted to 3 with 37% HCl. After labelling, the solution was stirred and incubated for 30 min at room temperature. The radiochemical purity was determined using TLC-SG developed with two different solvent systems: acetone and glycine. Preliminary results for both labelling methods showed that the labelling yield was >95%. Further experiments are necessary to optimize the labelling methodology of 188Re-DMSA(V). (author)

  14. Optimal Computing Budget Allocation for Particle Swarm Optimization in Stochastic Optimization.

    Science.gov (United States)

    Zhang, Si; Xu, Jie; Lee, Loo Hay; Chew, Ek Peng; Wong, Wai Peng; Chen, Chun-Hung

    2017-04-01

    Particle Swarm Optimization (PSO) is a popular metaheuristic for deterministic optimization. Originating from interpretations of the movement of individuals in a bird flock or fish school, PSO introduces the concepts of personal best and global best to simulate the pattern of searching for food by flocking, and successfully translates these natural phenomena into the optimization of complex functions. Many real-life applications of PSO cope with stochastic problems. To solve a stochastic problem using PSO, a straightforward approach is to equally allocate computational effort among all particles and obtain the same number of samples of fitness values. This is not an efficient use of the computational budget and leaves considerable room for improvement. This paper proposes a seamless integration of the concept of optimal computing budget allocation (OCBA) into PSO to improve the computational efficiency of PSO for stochastic optimization problems. We derive an asymptotically optimal allocation rule to intelligently determine the number of samples for all particles such that the PSO algorithm can efficiently select the personal best and global best when there is stochastic estimation noise in fitness values. We also propose an easy-to-implement sequential procedure. Numerical tests show that our new approach can obtain much better results using the same amount of computational effort.
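
    For orientation, the classical OCBA allocation rule that this work builds on can be written in a few lines; the sketch below is the standard asymptotic rule for picking the best of k noisy alternatives and is shown only as a generic illustration, not the exact PSO-integrated allocation derived in the paper (the means, standard deviations and budget are made-up numbers).

```python
import numpy as np

def ocba_allocation(means, stds, total_budget, minimize=True):
    """Classical OCBA ratios: split a simulation budget across k noisy alternatives.

    means, stds: current sample means and standard deviations per alternative.
    Returns integer sample counts that sum approximately to total_budget.
    """
    means = np.asarray(means, float)
    stds = np.asarray(stds, float)
    b = means.argmin() if minimize else means.argmax()
    delta = means - means[b]                       # gaps to the current best
    ratios = np.ones_like(means)
    nonbest = np.arange(len(means)) != b
    # N_i proportional to (sigma_i / delta_i)^2 for the non-best designs
    ratios[nonbest] = (stds[nonbest] / delta[nonbest]) ** 2
    # N_b = sigma_b * sqrt(sum_{i != b} N_i^2 / sigma_i^2)
    ratios[b] = stds[b] * np.sqrt(np.sum((ratios[nonbest] / stds[nonbest]) ** 2))
    alloc = ratios / ratios.sum() * total_budget
    return np.maximum(1, np.round(alloc).astype(int))

print(ocba_allocation(means=[1.0, 1.2, 2.0, 2.5],
                      stds=[0.8, 1.0, 0.6, 0.9],
                      total_budget=200))
```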

  15. Syrio. A program for the calculation of the inverse of a matrix

    International Nuclear Information System (INIS)

    Garcia de Viedma Alonso, L.

    1963-01-01

    SYRIO is a code for the inversion of a non-singular square matrix whose order is not higher than 40, written for the UNIVAC-UCT (SS-90). The treatment starts from the inversion formula of Sherman and Morrison and, following Herbert S. Wilf's method for special matrices, generalizes the procedure to any non-singular square matrix. The limitation on the matrix order is not inherent to the program itself but is imposed by the storage capacity of the computer for which it was coded. (Author)
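
    The Sherman-Morrison scheme behind SYRIO can be reproduced compactly: start from the identity matrix (whose inverse is known) and bring in the columns of A one at a time as rank-one updates, updating the inverse after each step. The sketch below is a modern re-implementation of that classical idea, not the original UNIVAC code, and it assumes every intermediate matrix stays non-singular (the general case needs a column-ordering fix-up).

```python
import numpy as np

def sherman_morrison_inverse(A):
    """Invert a non-singular square matrix by successive rank-one updates.

    Column k of the working matrix (initially the identity) is replaced by
    column k of A; each replacement is a rank-one change handled with the
    Sherman-Morrison formula. Assumes all intermediate matrices are non-singular.
    """
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    B_inv = np.eye(n)                       # inverse of the current working matrix
    for k in range(n):
        e_k = np.zeros(n); e_k[k] = 1.0
        u = A[:, k] - e_k                   # the working matrix changes by u e_k^T
        Bu = B_inv @ u
        denom = 1.0 + Bu[k]                 # 1 + e_k^T B_inv u
        if abs(denom) < 1e-12:
            raise np.linalg.LinAlgError("intermediate matrix is (near-)singular")
        B_inv -= np.outer(Bu, B_inv[k, :]) / denom   # Sherman-Morrison update
    return B_inv

A = np.array([[4.0, 2.0, 1.0],
              [2.0, 5.0, 3.0],
              [1.0, 3.0, 6.0]])
print(np.allclose(sherman_morrison_inverse(A) @ A, np.eye(3)))   # True
```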

  16. Complete removal of uranyl nitrate from tissue matrix using supercritical fluid extraction

    International Nuclear Information System (INIS)

    Kumar, R.; Sivaraman, N.; Senthil Vadivu, E.; Srinivasan, T.G.; Vasudeva Rao, P.R.

    2003-01-01

    The removal of uranyl nitrate from tissue matrix has been studied with supercritical carbon dioxide modified with methanol alone as well as complexing reagents dissolved in methanol. A systematic study of various complexing agents led to the development of an extraction procedure for the quantitative recovery of uranium from tissue matrix with supercritical carbon dioxide modified with methanol containing small quantities of acetylacetone. The drying time and temperature employed in loading of uranyl nitrate onto tissue paper were found to influence the extraction efficiency significantly

  17. Solution of quadratic matrix equations for free vibration analysis of structures.

    Science.gov (United States)

    Gupta, K. K.

    1973-01-01

    An efficient digital computer procedure and the related numerical algorithm are presented herein for the solution of quadratic matrix equations associated with free vibration analysis of structures. Such a procedure enables accurate and economical analysis of natural frequencies and associated modes of discretized structures. The numerically stable algorithm is based on the Sturm sequence method, which fully exploits the banded form of associated stiffness and mass matrices. The related computer program written in FORTRAN V for the JPL UNIVAC 1108 computer proves to be substantially more accurate and economical than other existing procedures of such analysis. Numerical examples are presented for two structures - a cantilever beam and a semicircular arch.
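
    The heart of the Sturm sequence approach is an eigenvalue count: for symmetric K and positive definite M, the number of eigenvalues of the pencil (K, M) below a trial shift equals the number of negative pivots of K - sigma*M, and each natural frequency can then be bracketed by bisection. The sketch below illustrates that counting-plus-bisection idea on a small dense toy model; it ignores the banded storage the original FORTRAN code exploits, and the stiffness and mass matrices are assumptions.

```python
import numpy as np
from scipy.linalg import ldl, eigh

def count_eigs_below(K, M, sigma):
    """Number of generalized eigenvalues of (K, M) strictly below sigma
    (= number of negative eigenvalues of K - sigma*M, by Sylvester's law of inertia)."""
    _, D, _ = ldl(K - sigma * M)            # block-diagonal D from an LDL^T factorization
    return int(np.sum(np.linalg.eigvalsh(D) < 0))

def bisect_eigenvalue(K, M, index, lo=0.0, hi=1.0e6, tol=1e-10):
    """Isolate the (index+1)-th smallest generalized eigenvalue by Sturm-type bisection."""
    while hi - lo > tol * max(1.0, hi):
        mid = 0.5 * (lo + hi)
        if count_eigs_below(K, M, mid) > index:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Toy "structure": tridiagonal stiffness, diagonal mass (assumed for illustration)
n = 6
K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
M = np.diag(np.linspace(1.0, 2.0, n))

lam0 = bisect_eigenvalue(K, M, index=0)
print(lam0, eigh(K, M, eigvals_only=True)[0])   # the two values should agree
```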

  18. Maximal imaginary eigenvalues in optimal systems

    Directory of Open Access Journals (Sweden)

    David Di Ruscio

    1991-07-01

    Full Text Available In this note we present equations that uniquely determine the maximum possible imaginary value of the closed loop eigenvalues in an LQ-optimal system, irrespective of how the state weight matrix is chosen, provided a real symmetric solution of the algebraic Riccati equation exists. In addition, the corresponding state weight matrix and the solution to the algebraic Riccati equation are derived for a class of linear systems. A fundamental lemma for the existence of a real symmetric solution to the algebraic Riccati equation is derived for this class of linear systems.
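
    For readers who want to experiment with this result numerically, the closed-loop eigenvalues of an LQ-optimal design are straightforward to compute: solve the algebraic Riccati equation, form the optimal gain, and inspect the imaginary parts of the closed-loop poles as the state weight is varied. The system matrices and weights below are assumed for illustration and are not those analyzed in the note.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Assumed second-order example (an undamped oscillator), not taken from the paper
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])
R = np.array([[1.0]])

for q in (0.1, 1.0, 10.0, 100.0):
    Q = q * np.eye(2)                        # state weight matrix
    P = solve_continuous_are(A, B, Q, R)     # real symmetric Riccati solution
    K = np.linalg.solve(R, B.T @ P)          # LQ-optimal feedback gain
    poles = np.linalg.eigvals(A - B @ K)     # closed-loop eigenvalues
    print(f"q = {q:6.1f}   max |Im(pole)| = {np.max(np.abs(poles.imag)):.4f}")
```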

  19. PROCEDURE FOR ANALYSIS AND EVALUATION OF MARKET POSITION PRODUCTION ORGANIZATION

    Directory of Open Access Journals (Sweden)

    A. N. Polozova

    2014-01-01

    Full Text Available Summary. Methodical procedures for economic monitoring of the market position of an industrial organization, particularly one engaged in food production, are presented. They comprise five elements: the matrix «components of business processes», the matrix «materiality – efficiency», the matrix «materiality – relevance», the matrix of promoting and hindering factors, and the matrix of operation scenarios. The components for assessing the strengths and weaknesses of the business activities of organizations are substantiated; they characterize the state of the internal business environment in terms of production, organization, personnel, finance and marketing. The advantages of the matrix «materiality – relevance», consisting of two materiality levels (high and low) and three relevance directions («no change», «gain importance in the future», «lose importance in the future»), are described. The contents of the matrix «scenarios of the organization's functioning» are presented, involving 6 attribute levels, 10 classes of scenarios and 19 activities, including optimistic and pessimistic ones. An evaluation is given of the primary classes of scenarios, characterized by the properties «development», «dynamic equilibrium», «quality improvement», «competitiveness», «favorable realization of opportunities» and «competition resistance».

  20. Quantitative proteomics reveals altered expression of extracellular matrix related proteins of human primary dermal fibroblasts in response to sulfated hyaluronan and collagen applied as artificial extracellular matrix.

    Science.gov (United States)

    Müller, Stephan A; van der Smissen, Anja; von Feilitzsch, Margarete; Anderegg, Ulf; Kalkhof, Stefan; von Bergen, Martin

    2012-12-01

    Fibroblasts are the main matrix producing cells of the dermis and are also strongly regulated by their matrix environment which can be used to improve and guide skin wound healing processes. Here, we systematically investigated the molecular effects on primary dermal fibroblasts in response to high-sulfated hyaluronan [HA] (hsHA) by quantitative proteomics. The comparison of non- and high-sulfated HA revealed regulation of 84 of more than 1,200 quantified proteins. Based on gene enrichment we found that sulfation of HA alters extracellular matrix remodeling. The collagen degrading enzymes cathepsin K, matrix metalloproteinases-2 and -14 were found to be down-regulated on hsHA. Additionally protein expression of thrombospondin-1, decorin, collagen types I and XII were reduced, whereas the expression of trophoblast glycoprotein and collagen type VI were slightly increased. This study demonstrates that global proteomics provides a valuable tool for revealing proteins involved in molecular effects of growth substrates for further material optimization.

  1. Non-negative matrix factorization by maximizing correntropy for cancer clustering

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Xiaolei; Gao, Xin

    2013-01-01

    Background: Non-negative matrix factorization (NMF) has been shown to be a powerful tool for clustering gene expression data, which are widely used to classify cancers. NMF aims to find two non-negative matrices whose product closely approximates the original matrix. Traditional NMF methods minimize either the l2 norm or the Kullback-Leibler distance between the product of the two matrices and the original matrix. Correntropy was recently shown to be an effective similarity measurement due to its stability to outliers or noise.Results: We propose a maximum correntropy criterion (MCC)-based NMF method (NMF-MCC) for gene expression data-based cancer clustering. Instead of minimizing the l2 norm or the Kullback-Leibler distance, NMF-MCC maximizes the correntropy between the product of the two matrices and the original matrix. The optimization problem can be solved by an expectation conditional maximization algorithm.Conclusions: Extensive experiments on six cancer benchmark sets demonstrate that the proposed method is significantly more accurate than the state-of-the-art methods in cancer clustering. 2013 Wang et al.; licensee BioMed Central Ltd.
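
    For context, the classical baseline that NMF-MCC generalizes is the Frobenius-norm NMF solved with Lee-Seung multiplicative updates; the sketch below shows that baseline (not the correntropy-based method itself) on a made-up non-negative matrix, with samples clustered by their dominant factor.

```python
import numpy as np

def nmf_multiplicative(V, rank, n_iter=500, eps=1e-10, seed=0):
    """Standard l2 (Frobenius-norm) NMF via Lee-Seung multiplicative updates.

    This is the classical baseline, not the maximum-correntropy variant (NMF-MCC).
    """
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Made-up non-negative "expression" matrix: genes x samples
V = np.abs(np.random.default_rng(1).normal(size=(50, 12)))
W, H = nmf_multiplicative(V, rank=3)
clusters = H.argmax(axis=0)                 # assign each sample to its dominant factor
print("relative reconstruction error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))
print("cluster labels:", clusters)
```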

  2. Non-negative matrix factorization by maximizing correntropy for cancer clustering

    KAUST Repository

    Wang, Jim Jing-Yan

    2013-03-24

    Background: Non-negative matrix factorization (NMF) has been shown to be a powerful tool for clustering gene expression data, which are widely used to classify cancers. NMF aims to find two non-negative matrices whose product closely approximates the original matrix. Traditional NMF methods minimize either the l2 norm or the Kullback-Leibler distance between the product of the two matrices and the original matrix. Correntropy was recently shown to be an effective similarity measurement due to its stability to outliers or noise.Results: We propose a maximum correntropy criterion (MCC)-based NMF method (NMF-MCC) for gene expression data-based cancer clustering. Instead of minimizing the l2 norm or the Kullback-Leibler distance, NMF-MCC maximizes the correntropy between the product of the two matrices and the original matrix. The optimization problem can be solved by an expectation conditional maximization algorithm.Conclusions: Extensive experiments on six cancer benchmark sets demonstrate that the proposed method is significantly more accurate than the state-of-the-art methods in cancer clustering. 2013 Wang et al.; licensee BioMed Central Ltd.

  3. Isolation and quantification of Quillaja saponaria Molina saponins and lipids in iscom-matrix and iscoms.

    Science.gov (United States)

    Behboudi, S; Morein, B; Rönnberg, B

    1995-12-01

    In the iscom, multiple copies of antigen are attached by hydrophobic interaction to a matrix which is built up by Quillaja triterpenoid saponins and lipids. Thus, the iscom presents antigen in multimeric form in a small particle with a built-in adjuvant resulting in a highly immunogenic antigen formulation. We have designed a chloroform-methanol-water extraction procedure to isolate the triterpenoid saponins and lipids incorporated into iscom-matrix and iscoms. The triterpenoids in the triterpenoid phase were quantitated using orcinol sulfuric acid detecting their carbohydrate chains and by HPLC. The cholesterol and phosphatidylcholine in the lipid phase were quantitated by HPLC and a commercial colorimetric method for the cholesterol. The quantitative methods showed an almost total separation and recovery of triterpenoids and lipids in their respective phases, while protein was detected in all phases after extraction. The protein content was determined by the method of Lowry and by amino acid analysis. Amino acid analysis was shown to be the reliable method of the two to quantitate proteins in iscoms. In conclusion, simple, reproducible and efficient procedures have been designed to isolate and quantitate the triterpenoids and lipids added for preparation of iscom-matrix and iscoms. The procedures described should also be useful to adequately define constituents in prospective vaccines.

  4. Analysis of impurities in silver matrix by atomic absorption spectrophotometry

    International Nuclear Information System (INIS)

    Hussain, R.; Ishaque, M.; Mohammad, D.

    1999-01-01

    A procedure for the analysis of aluminium, chromium, copper, lead, mercury, nickel and zinc, mainly using flameless atomic absorption spectrophotometry, has been described. The results show that the presence of silver does not introduce any significant interference when standards are prepared in matching silver matrix solutions. The calibration curves obey straight-line equations passing through the origin, so separation of the silver matrix from the analyte solutions is not necessary. The method has successfully been applied to the analysis of silver foils, wires, battery grade silver oxides and silver nitrate samples containing the analyte elements in the concentration range 2 to 40 ppm. (author)

  5. Q-Matrix Optimization Based on the Linear Logistic Test Model.

    Science.gov (United States)

    Ma, Lin; Green, Kelly E

    This study explored optimization of item-attribute matrices with the linear logistic test model (Fischer, 1973), with optimal models explaining more variance in item difficulty due to identified item attributes. Data were 8th-grade mathematics test item responses of two TIMSS 2007 booklets. The study investigated three categories of attributes (content, cognitive process, and comprehensive cognitive process) at two grain levels (larger, smaller) and also compared results with random attribute matrices. The proposed attributes accounted for most of the variance in item difficulty for two assessment booklets (81% and 65%). The variance explained by the content attributes was very small (13% to 31%), less than variance explained by the comprehensive cognitive process attributes which explained much more variance than the content and cognitive process attributes. The variances explained by the grain level were similar to each other. However, the attributes did not predict the item difficulties of two assessment booklets equally.

  6. H∞ Filtering for Dynamic Compensation of Self-Powered Neutron Detectors - A Linear Matrix Inequality Based Method -

    Energy Technology Data Exchange (ETDEWEB)

    Park, M.G.; Kim, Y.H.; Cha, K.H.; Kim, M.K. [Korea Electric Power Research Institute, Taejon (Korea)

    1999-07-01

    A method is described to develop an H∞ filtering approach for the dynamic compensation of self-powered neutron detectors normally used as fixed in-core instruments. An H∞ norm of the filter transfer matrix is used as the optimization criterion in the worst-case estimation error sense. Filter modeling is performed for both continuous- and discrete-time models. The filter gains are optimized in the sense of the H∞ noise attenuation level. By introducing the Bounded Real Lemma, the conventional algebraic Riccati inequalities are converted into Linear Matrix Inequalities (LMIs). Finally, the filter design problem is solved via the convex optimization framework using LMIs. The simulation results show that remarkable improvements are achieved in terms of the filter response time and the filter design efficiency. (author). 15 refs., 4 figs., 3 tabs.
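
    The Bounded Real Lemma step can be prototyped with an off-the-shelf SDP solver. The sketch below is a hedged illustration: it minimizes gamma subject to the standard Bounded Real Lemma LMI to bound the H-infinity norm of an assumed stable toy system (it is not the detector-compensation filter synthesis of the paper), using CVXPY with the bundled SCS solver.

```python
import numpy as np
import cvxpy as cp

# Assumed stable toy system x' = Ax + Bw, z = Cx + Dw (not the detector model)
A = np.array([[-1.0, 2.0], [0.0, -3.0]])
B = np.array([[1.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])
n, m, p = 2, 1, 1

P = cp.Variable((n, n), symmetric=True)
gamma = cp.Variable(nonneg=True)

# Bounded Real Lemma: the H-infinity norm is below gamma iff this LMI is feasible
lmi = cp.bmat([
    [A.T @ P + P @ A, P @ B,              C.T],
    [B.T @ P,         -gamma * np.eye(m), D.T],
    [C,               D,                  -gamma * np.eye(p)],
])
lmi = 0.5 * (lmi + lmi.T)          # numerically symmetric because P is symmetric
eps = 1e-6
constraints = [P >> eps * np.eye(n), lmi << -eps * np.eye(n + m + p)]
prob = cp.Problem(cp.Minimize(gamma), constraints)
prob.solve(solver=cp.SCS)
print("gamma (approx. H-infinity norm):", gamma.value)
```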

  7. Mathematical calibration procedure of a capacitive sensor-based indexed metrology platform

    International Nuclear Information System (INIS)

    Brau-Avila, A; Valenzuela-Galvan, M; Herrera-Jimenez, V M; Santolaria, J; Aguilar, J J; Acero, R

    2017-01-01

    The demand for faster and more reliable measuring tasks for the control and quality assurance of modern production systems has created new challenges for the field of coordinate metrology. Thus, the search for new solutions in coordinate metrology systems and the need for the development of existing ones still persists. One example of such a system is the portable coordinate measuring machine (PCMM), the use of which in industry has considerably increased in recent years, mostly due to its flexibility for accomplishing in-line measuring tasks as well as its reduced cost and operational advantages compared to traditional coordinate measuring machines. Nevertheless, PCMMs have a significant drawback derived from the techniques applied in the verification and optimization procedures of their kinematic parameters. These techniques are based on the capture of data with the measuring instrument from a calibrated gauge object, fixed successively in various positions so that most of the instrument measuring volume is covered, which results in time-consuming, tedious and expensive verification and optimization procedures. In this work the mathematical calibration procedure of a capacitive sensor-based indexed metrology platform (IMP) is presented. This calibration procedure is based on the readings and geometric features of six capacitive sensors and their targets with nanometer resolution. The final goal of the IMP calibration procedure is to optimize the geometric features of the capacitive sensors and their targets in order to use the optimized data in the verification procedures of PCMMs. (paper)

  8. Mathematical calibration procedure of a capacitive sensor-based indexed metrology platform

    Science.gov (United States)

    Brau-Avila, A.; Santolaria, J.; Acero, R.; Valenzuela-Galvan, M.; Herrera-Jimenez, V. M.; Aguilar, J. J.

    2017-03-01

    The demand for faster and more reliable measuring tasks for the control and quality assurance of modern production systems has created new challenges for the field of coordinate metrology. Thus, the search for new solutions in coordinate metrology systems and the need for the development of existing ones still persists. One example of such a system is the portable coordinate measuring machine (PCMM), the use of which in industry has considerably increased in recent years, mostly due to its flexibility for accomplishing in-line measuring tasks as well as its reduced cost and operational advantages compared to traditional coordinate measuring machines. Nevertheless, PCMMs have a significant drawback derived from the techniques applied in the verification and optimization procedures of their kinematic parameters. These techniques are based on the capture of data with the measuring instrument from a calibrated gauge object, fixed successively in various positions so that most of the instrument measuring volume is covered, which results in time-consuming, tedious and expensive verification and optimization procedures. In this work the mathematical calibration procedure of a capacitive sensor-based indexed metrology platform (IMP) is presented. This calibration procedure is based on the readings and geometric features of six capacitive sensors and their targets with nanometer resolution. The final goal of the IMP calibration procedure is to optimize the geometric features of the capacitive sensors and their targets in order to use the optimized data in the verification procedures of PCMMs.

  9. Radiation safety in nuclear medicine procedures

    International Nuclear Information System (INIS)

    Cho, Sang Geon; Kim, Ja Hae; Song, Ho Chun

    2017-01-01

    Since the nuclear disaster at the Fukushima Daiichi Nuclear Power Plant in 2011, radiation safety has become an important issue in nuclear medicine. Many structured guidelines or recommendations of various academic societies or international campaigns demonstrate important issues of radiation safety in nuclear medicine procedures. There are ongoing efforts to fulfill the basic principles of radiation protection in daily nuclear medicine practice. This article reviews important principles of radiation protection in nuclear medicine procedures. Useful references, important issues, future perspectives of the optimization of nuclear medicine procedures, and diagnostic reference level are also discussed

  10. Radiation safety in nuclear medicine procedures

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Sang Geon; Kim, Ja Hae; Song, Ho Chun [Dept. of Nuclear Medicine, Medical Radiation Safety Research Center, Chonnam National University Hospital, Gwangju (Korea, Republic of)

    2017-03-15

    Since the nuclear disaster at the Fukushima Daiichi Nuclear Power Plant in 2011, radiation safety has become an important issue in nuclear medicine. Many structured guidelines or recommendations of various academic societies or international campaigns demonstrate important issues of radiation safety in nuclear medicine procedures. There are ongoing efforts to fulfill the basic principles of radiation protection in daily nuclear medicine practice. This article reviews important principles of radiation protection in nuclear medicine procedures. Useful references, important issues, future perspectives of the optimization of nuclear medicine procedures, and diagnostic reference level are also discussed.

  11. DLCQ and plane wave matrix Big Bang models

    Science.gov (United States)

    Blau, Matthias; O'Loughlin, Martin

    2008-09-01

    We study the generalisations of the Craps-Sethi-Verlinde matrix big bang model to curved, in particular plane wave, space-times, beginning with a careful discussion of the DLCQ procedure. Singular homogeneous plane waves are ideal toy-models of realistic space-time singularities since they have been shown to arise universally as their Penrose limits, and we emphasise the role played by the symmetries of these plane waves in implementing the flat space Seiberg-Sen DLCQ prescription for these curved backgrounds. We then analyse various aspects of the resulting matrix string Yang-Mills theories, such as the relation between strong coupling space-time singularities and world-sheet tachyonic mass terms. In order to have concrete examples at hand, in an appendix we determine and analyse the IIA singular homogeneous plane wave - null dilaton backgrounds.

  12. DLCQ and plane wave matrix Big Bang models

    International Nuclear Information System (INIS)

    Blau, Matthias; O'Loughlin, Martin

    2008-01-01

    We study the generalisations of the Craps-Sethi-Verlinde matrix big bang model to curved, in particular plane wave, space-times, beginning with a careful discussion of the DLCQ procedure. Singular homogeneous plane waves are ideal toy-models of realistic space-time singularities since they have been shown to arise universally as their Penrose limits, and we emphasise the role played by the symmetries of these plane waves in implementing the flat space Seiberg-Sen DLCQ prescription for these curved backgrounds. We then analyse various aspects of the resulting matrix string Yang-Mills theories, such as the relation between strong coupling space-time singularities and world-sheet tachyonic mass terms. In order to have concrete examples at hand, in an appendix we determine and analyse the IIA singular homogeneous plane wave - null dilaton backgrounds.

  13. NERI FINAL TECHNICAL REPORT, DE-FC07-O5ID14647. OPTIMIZATION OF OXIDE COMPOUNDS FOR ADVANCED INERT MATRIX MATERIALS

    International Nuclear Information System (INIS)

    Nino, Juan C.

    2009-01-01

    In order to reduce the current excesses of plutonium (both weapon grade and reactor grade) and other transuranium elements, the concept of inert matrix fuel (IMF) has been proposed for a uranium-free transmutation of fissile actinides which excludes continuous uranium-plutonium conversion in thermal reactors and advanced systems. Magnesium oxide (MgO) is a promising candidate for inert matrix (IM) materials due to its high melting point (2827 °C), high thermal conductivity (13 W/(m·K) at 1000 °C), good neutronic properties, and irradiation stability. However, MgO reacts with water and hydrates easily, which prevents it from being used in light water reactors (LWRs) as an IM. To improve the hydration resistance of MgO-based inert matrix materials, Medvedev and coworkers have recently investigated the introduction of a secondary phase that acts as a hydration barrier. An MgO-ZrO2 composite was specifically studied and the results showed that the composite exhibited better hydration resistance than pure MgO. However, ZrO2 is insoluble in most acids except HF, which is undesirable for fuel reprocessing. Moreover, the thermal conductivity of ZrO2 is low, typically less than 3 W/(m·K) at 1000 °C. In search of an alternative composite strategy, Nd2Zr2O7, an oxide compound with pyrochlore structure, has been proposed recently as a corrosion resistant phase, and MgO-Nd2Zr2O7 composites have been investigated as potential IM materials. An adequate thermal conductivity of 6 W/(m·K) at 1000 °C for the MgO-Nd2Zr2O7 composite with 90 vol% MgO was recently calculated and reported. Other simulations proposed that the MgO-pyrochlore composites could exhibit higher radiation stability than previously reported. Final optimization of the composite microstructure was performed on the 70 vol% MgO-Nd2Zr2O7 composite that burnup calculations had shown to have the closest profile to that of MOX fuel. Theoretical calculations also indicated that

  14. Matrix product operators, matrix product states, and ab initio density matrix renormalization group algorithms

    Science.gov (United States)

    Chan, Garnet Kin-Lic; Keselman, Anna; Nakatani, Naoki; Li, Zhendong; White, Steven R.

    2016-07-01

    Current descriptions of the ab initio density matrix renormalization group (DMRG) algorithm use two superficially different languages: an older language of the renormalization group and renormalized operators, and a more recent language of matrix product states and matrix product operators. The same algorithm can appear dramatically different when written in the two different vocabularies. In this work, we carefully describe the translation between the two languages in several contexts. First, we describe how to efficiently implement the ab initio DMRG sweep using a matrix product operator based code, and the equivalence to the original renormalized operator implementation. Next we describe how to implement the general matrix product operator/matrix product state algebra within a pure renormalized operator-based DMRG code. Finally, we discuss two improvements of the ab initio DMRG sweep algorithm motivated by matrix product operator language: Hamiltonian compression, and a sum over operators representation that allows for perfect computational parallelism. The connections and correspondences described here serve to link the future developments with the past and are important in the efficient implementation of continuing advances in ab initio DMRG and related algorithms.

  15. Efficiency of performing pulmonary procedures in a shared endoscopy unit: procedure time, turnaround time, delays, and procedure waiting time.

    Science.gov (United States)

    Verma, Akash; Lee, Mui Yok; Wang, Chunhong; Hussein, Nurmalah B M; Selvi, Kalai; Tee, Augustine

    2014-04-01

    The purpose of this study was to assess the efficiency of performing pulmonary procedures in the endoscopy unit of a large teaching hospital. A prospective study from May 20 to July 19, 2013, was designed. The main outcome measures were procedure delays and their reasons, the duration of procedural steps starting from the patient's arrival in the endoscopy unit, turnaround time, total case durations, and procedure wait time. A total of 65 procedures were observed. The most common procedure was BAL (61%), followed by TBLB (31%). Overall, procedures for 35 (53.8%) of 65 patients were delayed by ≥ 30 minutes, 21/35 (60%) because of "spillover" of gastrointestinal and surgical cases into the time block of the pulmonary procedure. The time elapsed between the end of a pulmonary procedure and the start of the next procedure was ≥ 30 minutes in 8/51 (16%) of cases. In 18/51 (35%) patients there was no next case in the room after completion of the pulmonary procedure. The average idle time of the room between the end of the pulmonary procedure and the start of the next case (or the end of the shift at 5:00 PM if there was no next case) was 58 ± 53 minutes. In 17/51 (33%) patients the room's idle time was >60 minutes. A total of 52.3% of patients had a wait time >2 days and 11% had a wait time ≥ 6 days, the reason in 15/21 (71%) being unavailability of a slot. Most pulmonary procedures were delayed due to spillover of gastrointestinal and surgical cases into the block time allocated to pulmonary procedures. The most common reason for difficulty in scheduling a pulmonary procedure was slot unavailability, which increased procedure waiting time. Strategies to reduce procedure delays and turnaround times, along with improved scheduling methods, may have a favorable impact on the volume of procedures performed in the unit, thereby optimizing existing resources.

  16. Development of ISA procedure for uranium fuel fabrication and enrichment facilities: overview of ISA procedure and its application

    International Nuclear Information System (INIS)

    Yamate, Kazuki; Yamada, Takashi; Takanashi, Mitsuhiro; Sasaki, Noriaki

    2013-01-01

    An Integrated Safety Analysis (ISA) procedure for uranium fuel fabrication and enrichment facilities has been developed with the aim of applying risk-informed regulation to these uranium facilities. The development was carried out with reference to the ISA approach (NUREG-1520) of the Nuclear Regulatory Commission (NRC). The paper presents the purpose, principles and activities of the development of the ISA procedure, including the Risk Level (RL) matrix and the grading evaluation method for IROFS (Items Relied On For Safety), as well as a general description and the features of the procedure. Also described in the paper is the current status of the application of risk information from the ISA. The four Japanese licensees of uranium facilities have been conducting ISA for their representative processes using the developed procedure as part of their voluntary safety activities. They have been accumulating experience and knowledge of the ISA procedure and risk information through these field activities. NISA (Nuclear and Industrial Safety Agency) and JNES (Japan Nuclear Energy Safety Organization) are studying how to use such risk information for the safety regulation of uranium facilities, taking into account the licensees' experience and knowledge. (authors)

  17. Optimization of the determinant of the Vandermonde matrix and related matrices

    Energy Technology Data Exchange (ETDEWEB)

    Lundengård, Karl; Österberg, Jonas; Silvestrov, Sergei [Division of Applied Mathematics, School of Education, Culture and Communication, Mälardalen University, Box 883, SE-721 23 Västerås (Sweden)

    2014-12-10

    Various techniques for interpolation of data, moment matching in stochastic applications and various methods in numerical analysis can be described using Vandermonde matrices. For this reason the properties of the determinant of the Vandermonde matrix and related matrices are interesting. Here the extreme points of the Vandermonde determinant, and related determinants, on some simple surfaces such as the unit sphere are analyzed, both numerically and analytically. Some results are also visualized in various dimensions. The extreme points of the Vandermonde determinant are also related to the roots of certain orthogonal polynomials such as the Hermite polynomials.
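
    A quick way to reproduce this kind of experiment is to evaluate the Vandermonde determinant as the product of pairwise differences and maximize it over the unit sphere with a general-purpose constrained optimizer. The sketch below does exactly that for a small dimension; the optimizer choice, starting point and dimension are assumptions, and a careful study would of course rely on the analytical results discussed in the paper.

```python
import numpy as np
from scipy.optimize import minimize

def vandermonde_det(x):
    """Determinant of the Vandermonde matrix built from points x_1..x_n,
    computed as the product of pairwise differences prod_{i<j} (x_j - x_i)."""
    x = np.asarray(x)
    n = len(x)
    return np.prod([x[j] - x[i] for i in range(n) for j in range(i + 1, n)])

n = 5
x0 = np.sort(np.random.default_rng(1).normal(size=n))
x0 /= np.linalg.norm(x0)                      # start on the unit sphere

res = minimize(lambda x: -vandermonde_det(x), x0, method="SLSQP",
               constraints=[{"type": "eq", "fun": lambda x: x @ x - 1.0}])

print("extreme point (sorted):", np.sort(res.x))
print("determinant value:", vandermonde_det(res.x))
```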

  18. Optimal Control for a Class of Chaotic Systems

    Directory of Open Access Journals (Sweden)

    Jianxiong Zhang

    2012-01-01

    Full Text Available This paper proposes optimal control methods for a class of chaotic systems via state feedback. By converting the chaotic systems to the form of uncertain piecewise linear systems, we can obtain the optimal controller minimizing the upper bound on the cost function by virtue of the robust optimal control method for piecewise linear systems, which is cast as an optimization problem under constraints of bilinear matrix inequalities (BMIs). In addition, the lower bound on the cost function can be obtained by solving a semidefinite program (SDP). Finally, numerical examples are given to illustrate the results.

  19. Optimal configuration of microstructure in ferroelectric materials by stochastic optimization

    Science.gov (United States)

    Jayachandran, K. P.; Guedes, J. M.; Rodrigues, H. C.

    2010-07-01

    An optimization procedure determining the ideal configuration at the microstructural level of ferroelectric (FE) materials is applied to maximize piezoelectricity. Piezoelectricity in ceramic FEs differs significantly from that of single crystals because of the presence of crystallites (grains) whose crystallographic axes are imperfectly aligned. The piezoelectric properties of a polycrystalline (ceramic) FE are inextricably related to the grain orientation distribution (texture). The set of combinations of variables, known as the solution space, which dictates the texture of a ceramic is unlimited, and hence the choice of the optimal solution which maximizes the piezoelectricity is complicated. Thus, a stochastic global optimization combined with homogenization is employed for the identification of the optimal granular configuration of the FE ceramic microstructure with optimum piezoelectric properties. The macroscopic equilibrium piezoelectric properties of the polycrystalline FE are calculated using mathematical homogenization at each iteration step. The configuration of grains, characterized by its orientations at each iteration, is generated using a randomly selected set of orientation distribution parameters. The optimization procedure applied to the single crystalline phase compares well with the experimental data. An apparent enhancement of the piezoelectric coefficient d33 is observed in an optimally oriented BaTiO3 single crystal. Based on the good agreement of the results with published data for single crystals, we proceed to apply the methodology to polycrystals. A configuration of crystallites, simultaneously constraining the orientation distribution of the c-axis (polar axis) while incorporating ab-plane randomness, which would multiply the overall piezoelectricity in ceramic BaTiO3, is also identified. The orientation distribution of the c-axes is found to be a narrow Gaussian distribution centered around 45°. The piezoelectric coefficient in such a ceramic is found to

  20. New method based on combining ultrasonic assisted miniaturized matrix solid-phase dispersion and homogeneous liquid-liquid extraction for the determination of some organochlorinated pesticides in fish

    International Nuclear Information System (INIS)

    Rezaei, Farahnaz; Hosseini, Mohammad-Reza Milani

    2011-01-01

    Highlights: → Ultrasonic assisted miniaturized matrix solid-phase dispersion combined with HLLE was developed as a new method for the extraction of OCPs in fish. → The goal of this combination was to enhance the selectivity of the HLLE procedure and to extend its application to biological samples. → This method offers the advantages of good detection limits and lower consumption of reagents, and does not need any special instrumentation. - Abstract: In this study, ultrasonic assisted miniaturized matrix solid-phase dispersion (US-MMSPD) combined with homogeneous liquid-liquid extraction (HLLE) has been developed as a new method for the extraction of organochlorinated pesticides (OCPs) in fish prior to gas chromatography with electron capture detector (GC-ECD). In the proposed method, OCPs (heptachlor, aldrin, DDE, DDD, lindane and endrin) were first extracted from the fish sample into acetonitrile by the US-MMSPD procedure, and the extract was then used as consolute solvent in the HLLE process. The optimal conditions for the US-MMSPD step were as follows: volume of acetonitrile, 1.5 mL; temperature of ultrasound, 40 deg. C; time of ultrasound, 10 min. For the HLLE step, optimal results were obtained at the following conditions: volume of chloroform, 35 μL; volume of aqueous phase, 1.5 mL; volume of double distilled water, 0.5 mL; time of centrifuge, 10 min. Under the optimum conditions, the enrichment factors for the studied compounds were obtained in the range of 185-240, and the overall recoveries ranged from 39.1% to 81.5%. The limits of detection were 0.4-1.2 ng g-1, and the relative standard deviations for 20 ng g-1 of the OCPs varied from 3.2% to 8% (n = 4). Finally, the proposed method has been successfully applied to the analysis of the OCPs in a real fish sample, and satisfactory results were obtained.

  1. New method based on combining ultrasonic assisted miniaturized matrix solid-phase dispersion and homogeneous liquid-liquid extraction for the determination of some organochlorinated pesticides in fish

    Energy Technology Data Exchange (ETDEWEB)

    Rezaei, Farahnaz [Department of Analytical Chemistry, Faculty of Chemistry, Iran University of Science and Technology, Narmak, Tehran 16846 (Iran, Islamic Republic of); Hosseini, Mohammad-Reza Milani, E-mail: drmilani@iust.ac.ir [Department of Analytical Chemistry, Faculty of Chemistry, Iran University of Science and Technology, Narmak, Tehran 16846 (Iran, Islamic Republic of); Electroanalytical Chemistry Research Center, Iran University of Science and Technology, Narmak, Tehran 16846 (Iran, Islamic Republic of)

    2011-09-30

    Highlights: → Ultrasonic assisted miniaturized matrix solid-phase dispersion combined with HLLE was developed as a new method for the extraction of OCPs in fish. → The goal of this combination was to enhance the selectivity of the HLLE procedure and to extend its application to biological samples. → This method offers the advantages of good detection limits and lower consumption of reagents, and does not need any special instrumentation. - Abstract: In this study, ultrasonic assisted miniaturized matrix solid-phase dispersion (US-MMSPD) combined with homogeneous liquid-liquid extraction (HLLE) has been developed as a new method for the extraction of organochlorinated pesticides (OCPs) in fish prior to gas chromatography with electron capture detector (GC-ECD). In the proposed method, OCPs (heptachlor, aldrin, DDE, DDD, lindane and endrin) were first extracted from the fish sample into acetonitrile by the US-MMSPD procedure, and the extract was then used as consolute solvent in the HLLE process. The optimal conditions for the US-MMSPD step were as follows: volume of acetonitrile, 1.5 mL; temperature of ultrasound, 40 deg. C; time of ultrasound, 10 min. For the HLLE step, optimal results were obtained at the following conditions: volume of chloroform, 35 μL; volume of aqueous phase, 1.5 mL; volume of double distilled water, 0.5 mL; time of centrifuge, 10 min. Under the optimum conditions, the enrichment factors for the studied compounds were obtained in the range of 185-240, and the overall recoveries ranged from 39.1% to 81.5%. The limits of detection were 0.4-1.2 ng g-1, and the relative standard deviations for 20 ng g-1 of the OCPs varied from 3.2% to 8% (n = 4). Finally, the proposed method has been successfully applied to the analysis of the OCPs in a real fish sample, and satisfactory results were obtained.

  2. Genetic algorithm based separation cascade optimization

    International Nuclear Information System (INIS)

    Mahendra, A.K.; Sanyal, A.; Gouthaman, G.; Bera, T.K.

    2008-01-01

    The conventional separation cascade design procedure does not yield an optimum design because of squaring-off and the variation of flow rates and element separation factor with stage location. Multi-component isotope separation further complicates the design procedure. Cascade design can therefore be stated as a constrained multi-objective optimization. The cascade's requirements on the separating element are themselves multi-objective, i.e. overall separation factor, cut, optimum feed and separative power. The decision maker may aspire to more comprehensive multi-objective goals in which cascade optimization is coupled with exploration of the separating element's optimization vector space. In practice there are many issues that make it important to understand the decision maker's perception of the cost-quality-speed trade-off and the consistency of preferences. The genetic algorithm (GA) is one evolutionary technique that can be used for cascade design optimization. This paper addresses the various issues involved in GA-based multi-objective optimization of the separation cascade. A reference-point-based optimization methodology with a GA-based Pareto optimality concept was found pragmatic and promising for separation cascades. This method should be explored, tested, examined and further developed for binary as well as multi-component separations. (author)
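
    To make the GA-based Pareto optimality concept mentioned above concrete, the sketch below evolves a population toward the Pareto front of a toy two-objective problem (both objectives minimized). The objective functions, parameter bounds and GA settings are illustrative assumptions only; they do not reproduce the paper's cascade model.

```python
# Hedged sketch: GA with Pareto (non-dominated) selection on a toy two-objective
# problem. Objectives, bounds and GA settings are illustrative assumptions only.
import random

BOUNDS = ((0.1, 5.0), (0.0, 1.0))

def objectives(x):
    # Hypothetical stand-ins, both expressed for minimization:
    # f1 ~ negated "separative power", f2 ~ "total flow".
    return (-(x[0] * (1.0 - x[1])), x[0] + 3.0 * x[1])

def dominates(a, b):
    return all(ai <= bi for ai, bi in zip(a, b)) and any(ai < bi for ai, bi in zip(a, b))

def pareto_front(pop, objs):
    return [p for p, o in zip(pop, objs)
            if not any(dominates(q, o) for q in objs)]

def evolve(pop_size=60, gens=100):
    pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(pop_size)]
    for _ in range(gens):
        objs = [objectives(x) for x in pop]
        # Binary tournament selection favouring non-dominated individuals.
        parents = []
        for _ in range(pop_size):
            i, j = random.randrange(pop_size), random.randrange(pop_size)
            parents.append(pop[i] if dominates(objs[i], objs[j]) else pop[j])
        # Blend crossover + Gaussian mutation, clipped to the bounds.
        children = []
        while len(children) < pop_size:
            p1, p2 = random.sample(parents, 2)
            c = [0.5 * (a + b) + random.gauss(0.0, 0.1 * (hi - lo))
                 for a, b, (lo, hi) in zip(p1, p2, BOUNDS)]
            children.append([min(max(v, lo), hi) for v, (lo, hi) in zip(c, BOUNDS)])
        pop = children
    return pareto_front(pop, [objectives(x) for x in pop])

if __name__ == "__main__":
    for x in evolve()[:5]:
        print([round(v, 3) for v in x], tuple(round(o, 3) for o in objectives(x)))
```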

  3. Measurement and Optimization of Local Coupling from RHIC BPM Data

    CERN Document Server

    Calaga, Rama; Bai, Mei; Fischer, Wolfram; Franchi, Andrea; Tomas, Rogelio

    2005-01-01

    Global coupling in RHIC is routinely corrected by using three skew quadrupole families to minimize the tune split. In this paper we aim to re-optimize the coupling at top energy by minimizing resonance driving terms and the C-matrix in two steps: 1. Find the best configuration of the three skew quadrupole families and 2. Identify locations with coupling sources by inspection of the driving terms and the C-matrix around the ring. The measurements of resonance terms and C-matrix are presented.
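
    A minimal sketch of the first step, under the assumption of a linear coupling response: the three skew quadrupole family strengths are chosen to minimize a set of measured coupling observables in the least-squares sense. The response matrix and measured values below are made-up placeholders, not RHIC data.

```python
# Hedged sketch: least-squares setting of three skew-quadrupole family strengths.
# The response matrix R and the measured coupling vector c0 are made-up numbers;
# in practice they would come from measured driving terms / C-matrix data.
import numpy as np

# Coupling observables (e.g. real/imag parts of a driving term and C-matrix entries).
c0 = np.array([0.012, -0.008, 0.005, 0.003])

# Linear response of each observable to a unit strength change of each family (assumed).
R = np.array([
    [ 0.9, -0.2,  0.4],
    [ 0.1,  0.8, -0.3],
    [-0.5,  0.3,  0.6],
    [ 0.2, -0.6,  0.1],
])

# Minimize || c0 + R k ||^2 over the three family strengths k.
k, residual, rank, _ = np.linalg.lstsq(R, -c0, rcond=None)
print("skew family settings:", k)
print("residual coupling   :", np.linalg.norm(c0 + R @ k))
```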

  4. A Simple Sonication Improves Protein Signal in Matrix-Assisted Laser Desorption Ionization Imaging

    Science.gov (United States)

    Lin, Li-En; Su, Pin-Rui; Wu, Hsin-Yi; Hsu, Cheng-Chih

    2018-02-01

    Proper matrix application is crucial for obtaining high quality matrix-assisted laser desorption ionization (MALDI) mass spectrometry imaging (MSI). Solvent-free sublimation was introduced essentially as a homogeneous coating approach that gives small crystal sizes of the organic matrix. However, sublimation extracts analytes less efficiently. Here, we show that a simple sonication step after the hydration step of the standard sublimation protocol significantly enhances the sensitivity of MALDI MSI. This modified procedure uses a common laboratory ultrasonicator to immobilize the analytes from tissue sections without noticeable delocalization. Improved imaging quality, with additional peaks above 10 kDa in the spectra, was thus obtained upon sonication treatment.

  5. Spin-Projected Matrix Product States: Versatile Tool for Strongly Correlated Systems.

    Science.gov (United States)

    Li, Zhendong; Chan, Garnet Kin-Lic

    2017-06-13

    We present a new wave function ansatz that combines the strengths of spin projection with the language of matrix product states (MPS) and matrix product operators (MPO) as used in the density matrix renormalization group (DMRG). Specifically, spin-projected matrix product states (SP-MPS) are constructed as P̂_S|Ψ_MPS^(N,M)⟩, where P̂_S is the spin projector for total spin S and |Ψ_MPS^(N,M)⟩ is an MPS wave function with a given particle number N and spin projection M. This new ansatz possesses several attractive features: (1) It provides a much simpler route to achieve spin adaptation (i.e., to create eigenfunctions of Ŝ²) compared to explicitly incorporating the non-Abelian SU(2) symmetry into the MPS. In particular, since the underlying state |Ψ_MPS^(N,M)⟩ in the SP-MPS uses only Abelian symmetries, one does not need the singlet embedding scheme for nonsinglet states, as normally employed in spin-adapted DMRG, to achieve a single consistent variationally optimized state. (2) Due to the use of |Ψ_MPS^(N,M)⟩ as its underlying state, the SP-MPS can be closely connected to broken-symmetry mean-field states. This allows one to straightforwardly generate the large number of broken-symmetry guesses needed to explore complex electronic landscapes in magnetic systems. Further, this connection can be exploited in the future development of quantum embedding theories for open-shell systems. (3) The sum-of-MPOs representation of the Hamiltonian and of the spin projector P̂_S naturally leads to an embarrassingly parallel algorithm for computing expectation values and optimizing SP-MPS. (4) Optimizing SP-MPS belongs to the variation-after-projection (VAP) class of spin-projected theories. Unlike usual spin-projected theories based on determinants, the SP-MPS ansatz can be made essentially exact simply by increasing the bond dimensions in |Ψ_MPS^(N,M)⟩. Computing excited states is also simple, by imposing orthogonality constraints.

  6. Geometrical Optimization Approach to Isomerization: Models and Limitations.

    Science.gov (United States)

    Chang, Bo Y; Shin, Seokmin; Engel, Volker; Sola, Ignacio R

    2017-11-02

    We study laser-driven isomerization reactions through an excited electronic state using the recently developed Geometrical Optimization procedure. Our goal is to analyze whether an initial wave packet in the ground state, with optimized amplitudes and phases, can be used to enhance the yield of the reaction at faster rates, driven by a single picosecond pulse or a pair of femtosecond pulses resonant with the electronic transition. We show that the symmetry of the system imposes limitations on the optimization procedure, such that the method rediscovers the pump-dump mechanism.

  7. Combinatorial theory of the semiclassical evaluation of transport moments. I. Equivalence with the random matrix approach

    Energy Technology Data Exchange (ETDEWEB)

    Berkolaiko, G., E-mail: berko@math.tamu.edu [Department of Mathematics, Texas A and M University, College Station, Texas 77843-3368 (United States); Kuipers, J., E-mail: Jack.Kuipers@physik.uni-regensburg.de [Institut für Theoretische Physik, Universität Regensburg, D-93040 Regensburg (Germany)

    2013-11-15

    To study electronic transport through chaotic quantum dots, there are two main theoretical approaches. One involves replacing the quantum system with a random scattering matrix and performing appropriate ensemble averaging. The other treats the transport in the semiclassical approximation and studies correlations among sets of classical trajectories. There are established evaluation procedures within the semiclassical approach that, for several linear and nonlinear transport moments to which they were applied, have always resulted in agreement with random matrix predictions. We prove that this agreement is universal: any semiclassical evaluation within the accepted procedures is equivalent to the evaluation within random matrix theory. The equivalence is shown by developing a combinatorial interpretation of the trajectory sets as ribbon graphs (maps) with certain properties and exhibiting systematic cancellations among their contributions. The remaining trajectory sets can be identified with primitive (palindromic) factorisations whose number gives the coefficients in the corresponding expansion of the moments of random matrices. The equivalence is proved for systems with and without time reversal symmetry.

  8. A universal optimization strategy for ant colony optimization algorithms based on the Physarum-inspired mathematical model

    International Nuclear Information System (INIS)

    Zhang, Zili; Gao, Chao; Liu, Yuxin; Qian, Tao

    2014-01-01

    Ant colony optimization (ACO) algorithms often fall into local optimal solutions and have low search efficiency when solving the travelling salesman problem (TSP). To address these shortcomings, this paper proposes a universal optimization strategy for updating the pheromone matrix in ACO algorithms. The new optimization strategy takes advantage of a unique feature of the Physarum-inspired mathematical model (PMM): critical paths are preserved as its adaptive network evolves. The optimized algorithms, denoted PMACO algorithms, enhance the amount of pheromone on the critical paths and promote exploitation of the optimal solution. Experimental results on synthetic and real networks show that the PMACO algorithms are more efficient and robust than traditional ACO algorithms and can be adapted to solve the TSP with single or multiple objectives. Meanwhile, we further analyse the influence of parameters on the performance of the PMACO algorithms. Based on these analyses, the best values of these parameters are worked out for the TSP. (paper)
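
    For reference, the sketch below shows the pheromone-matrix update that such a strategy modifies: the standard ACO evaporation/deposit step plus an extra deposit on 'critical' edges, here crudely approximated by the edges of the best tour found so far, since the actual Physarum-inspired flow model is not reproduced. All parameter values are illustrative assumptions.

```python
# Hedged sketch of an ACO pheromone update for the TSP with an extra reinforcement
# of "critical" edges, standing in for the Physarum-inspired term of the PMACO idea.
# Parameter values and the critical-edge criterion are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 20
coords = rng.random((n, 2))
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1) + np.eye(n)

tau = np.ones((n, n))          # pheromone matrix
eta = 1.0 / dist               # heuristic visibility (diagonal never used)
alpha, beta, rho, Q = 1.0, 3.0, 0.1, 1.0
epsilon = 0.5                  # weight of the extra "critical path" deposit (assumed)

def build_tour():
    tour = [rng.integers(n)]
    while len(tour) < n:
        i = tour[-1]
        mask = np.ones(n, bool); mask[tour] = False
        w = (tau[i] ** alpha) * (eta[i] ** beta) * mask
        tour.append(rng.choice(n, p=w / w.sum()))
    return tour

def tour_length(tour):
    return sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))

best_tour, best_len = None, np.inf
for it in range(200):
    tours = [build_tour() for _ in range(25)]
    lengths = [tour_length(t) for t in tours]
    if min(lengths) < best_len:
        best_len = min(lengths); best_tour = tours[int(np.argmin(lengths))]
    tau *= (1.0 - rho)                         # evaporation
    for t, L in zip(tours, lengths):           # standard ant deposits
        for k in range(n):
            i, j = t[k], t[(k + 1) % n]
            tau[i, j] += Q / L; tau[j, i] += Q / L
    for k in range(n):                         # extra deposit on "critical" edges
        i, j = best_tour[k], best_tour[(k + 1) % n]
        tau[i, j] += epsilon * Q / best_len; tau[j, i] += epsilon * Q / best_len

print("best tour length:", round(best_len, 3))
```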

  9. Optimal Design of Gradient Materials and Bi-Level Optimization of Topology Using Targets (BOTT)

    Science.gov (United States)

    Garland, Anthony

    The objective of this research is to understand the fundamental relationships necessary to develop a method to optimize both the topology and the internal gradient material distribution of a single object while meeting constraints and conflicting objectives. Functionally gradient material (FGM) objects possess continuously varying material properties throughout the object, and they allow an engineer to tailor individual regions of an object to have specific mechanical properties by locally modifying the internal material composition. A variety of techniques exists for topology optimization, and several methods exist for FGM optimization, but combining the two is difficult. Understanding the relationship between topology and material gradient optimization enables the selection of an appropriate model and the development of algorithms that allow engineers to design high-performance parts that better meet design objectives than optimized homogeneous material objects. For this research effort, topology optimization means finding the optimal connected structure with an optimal shape. FGM optimization means finding the optimal macroscopic material properties within an object. Tailoring the material constitutive matrix as a function of position results in gradient properties. Once the target macroscopic properties are known, a mesostructure or a particular material nanostructure can be found which gives the target material properties at each macroscopic point. This research demonstrates that topology and gradient materials can both be optimized together for a single part. The algorithms use a discretized model of the domain and gradient-based optimization algorithms. In addition, when considering two conflicting objectives, the algorithms in this research generate clear 'features' within a single part. This tailoring of material properties within different areas of a single part (automated design of 'features') using computational design tools is a novel benefit.

  10. Optimal Design of Gravity Pipeline Systems Using Genetic Algorithm and Mathematical Optimization

    Directory of Open Access Journals (Sweden)

    maryam rohani

    2015-03-01

    In recent years, the optimal design of pipeline systems has become increasingly important in the water industry. In this study, the two methods of genetic algorithm and mathematical optimization were employed for the optimal design of pipeline systems with the objective of avoiding the water hammer effect caused by valve closure. The optimal design of a pipeline system is a constrained problem which, in the mathematical programming method, is converted to an unconstrained optimization problem using an external penalty function approach. The quality of the optimal solution depends strongly on the value of the penalty factor, which is calculated iteratively during the optimization procedure such that the computational effort is simultaneously minimized. The results obtained were used to compare the efficiency and capabilities of the GA and mathematical optimization methods for the problem under consideration. It was found that the mathematical optimization method exhibited a slightly better performance than the GA method.
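
    The external (exterior) penalty idea referred to above replaces a constrained problem, min f(x) subject to g(x) ≤ 0, by a sequence of unconstrained problems min f(x) + r·max(0, g(x))², with the penalty factor r increased iteratively. A toy sketch follows, with an assumed objective and constraint rather than the pipeline/water-hammer model of the study.

```python
# Hedged sketch of the exterior penalty approach: a toy constrained problem is
# converted to a sequence of unconstrained ones with an increasing penalty factor.
# The objective and constraint are illustrative, not the pipeline design model.
import numpy as np
from scipy.optimize import minimize

def f(x):                      # toy "cost" (e.g. pipe material cost)
    return (x[0] - 3.0) ** 2 + (x[1] - 2.0) ** 2

def g(x):                      # toy constraint g(x) <= 0 (e.g. a pressure limit)
    return x[0] + x[1] - 4.0

def penalized(x, r):
    return f(x) + r * max(0.0, g(x)) ** 2

x = np.array([0.0, 0.0])
for r in [1.0, 10.0, 100.0, 1000.0]:   # penalty factor increased iteratively
    res = minimize(penalized, x, args=(r,), method="Nelder-Mead")
    x = res.x
    print(f"r={r:7.1f}  x={x.round(4)}  violation={max(0.0, g(x)):.4f}")
# As r grows, x approaches the constrained optimum (2.5, 1.5) of this toy problem.
```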

  11. Development and optimization of manufacture process for heat resistant fibre reinforced ceramic matrix composites

    Czech Academy of Sciences Publication Activity Database

    Glogar, Petr; Hron, P.; Burian, M.; Balík, Karel; Černý, Martin; Sucharda, Zbyněk; Vymazalová, Z.; Červencl, J.; Pivoňka, M.

    -, č. 14 (2005), 25-32 ISSN 1214-9691 R&D Projects: GA ČR(CZ) GA106/02/0177 Institutional research plan: CEZ:AV0Z30460519 Keywords : polysiloxane resin * pyrolysis * ceramic matrix composite Subject RIV: JI - Composite Materials

  12. Optimal dynamic detection of explosives

    Energy Technology Data Exchange (ETDEWEB)

    Moore, David Steven [Los Alamos National Laboratory; Mcgrane, Shawn D [Los Alamos National Laboratory; Greenfield, Margo T [Los Alamos National Laboratory; Scharff, R J [Los Alamos National Laboratory; Rabitz, Herschel A [PRINCETON UNIV; Roslund, J [PRINCETON UNIV

    2009-01-01

    The detection of explosives is a notoriously difficult problem, especially at stand-off distances, due to their (generally) low vapor pressure, environmental and matrix interferences, and packaging. We are exploring optimal dynamic detection of explosives (ODD-Ex), which exploits recent advances in laser technology and in the optimal shaping of laser pulses for the control of molecular processes, to significantly enhance the standoff detection of explosives. The core of the ODD-Ex technique is the introduction of optimally shaped laser pulses that simultaneously enhance the sensitivity of explosives signatures while reducing the influence of noise and of signals from background interferents in the field (increased selectivity). These goals are addressed by operating in an optimal nonlinear fashion, typically with a single shaped laser pulse inherently containing within it coherently locked control and probe sub-pulses. With sufficient bandwidth, the technique is capable of intrinsically providing orthogonal broad spectral information for data fusion, all from a single optimal pulse.

  13. k-Means: Random Sampling Procedure

    Indian Academy of Sciences (India)

    The optimal 1-mean can be approximated by the centroid of a random sample (Inaba et al.): if S is a random sample of size O(1/ε), then the centroid of S is a (1+ε)-approximate centroid of P with constant probability.
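
    A small numerical illustration of this sampling statement (the data set, sample size and random seed are arbitrary choices): the 1-mean cost of the centroid of a small random sample is compared with the optimal cost obtained from the centroid of all points.

```python
# Hedged sketch: the 1-mean (sum of squared distances) cost of the centroid of a
# small random sample is compared with the optimal cost (centroid of all points).
# Data and sample size are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(1)
P = rng.normal(size=(100_000, 2)) * [3.0, 1.0] + [10.0, -5.0]

def one_mean_cost(points, c):
    return np.sum((points - c) ** 2)

opt_centroid = P.mean(axis=0)                    # optimal 1-mean
opt_cost = one_mean_cost(P, opt_centroid)

eps = 0.05
m = int(np.ceil(1.0 / eps))                      # sample of size O(1/eps)
S = P[rng.choice(len(P), size=m, replace=False)]
approx_cost = one_mean_cost(P, S.mean(axis=0))

print(f"optimal cost : {opt_cost:.1f}")
print(f"sample cost  : {approx_cost:.1f}  (ratio {approx_cost / opt_cost:.4f})")
```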

  14. Generation of the covariance matrix for a set of nuclear data produced by collapsing a larger parent set through the weighted averaging of equivalent data points

    International Nuclear Information System (INIS)

    Smith, D.L.

    1987-01-01

    A method is described for generating the covariance matrix of a set of experimental nuclear data which has been collapsed in size by the averaging of equivalent data points belonging to a larger parent data set. It is assumed that the data values and covariance matrix for the parent set are provided. The collapsed set is obtained by a proper weighted-averaging procedure based on the method of least squares. It is then shown by means of the law of error propagation that the elements of the covariance matrix for the collapsed set are linear combinations of elements from the parent set covariance matrix. The coefficients appearing in these combinations are binary products of the same coefficients which appear as weighting factors in the data collapsing procedure. As an example, the procedure is applied to a collection of recently-measured integral neutron-fission cross-section ratios. (orig.)
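
    In matrix form, the statement above amounts to C' = W C Wᵀ, where W holds the weighted-averaging coefficients that map the parent data to the collapsed set. A short sketch with made-up data follows; the inverse-variance weights used here are one simple least-squares choice and may differ from the weighting in the paper.

```python
# Hedged sketch: collapsing a parent data set by weighted averaging of equivalent
# points, and propagating the parent covariance matrix as C' = W C W^T.
# The parent data, covariance, grouping and weights are made-up illustrations.
import numpy as np

y = np.array([1.02, 0.98, 1.05, 2.01, 1.97])          # parent data values
C = np.diag([0.02, 0.03, 0.05, 0.04, 0.04]) ** 2      # parent covariance (variances)
C[0, 1] = C[1, 0] = 0.5 * 0.02 * 0.03                 # some correlation between points 0 and 1

groups = [[0, 1, 2], [3, 4]]                          # groups of equivalent data points

W = np.zeros((len(groups), len(y)))
for r, idx in enumerate(groups):
    w = 1.0 / np.diag(C)[idx]                         # inverse-variance weights (one choice)
    W[r, idx] = w / w.sum()

y_collapsed = W @ y
C_collapsed = W @ C @ W.T                             # law of error propagation

print("collapsed values     :", y_collapsed.round(4))
print("collapsed covariance :\n", C_collapsed.round(6))
```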

  15. ReplacementMatrix: a web server for maximum-likelihood estimation of amino acid replacement rate matrices.

    Science.gov (United States)

    Dang, Cuong Cao; Lefort, Vincent; Le, Vinh Sy; Le, Quang Si; Gascuel, Olivier

    2011-10-01

    Amino acid replacement rate matrices are an essential basis of protein studies (e.g. in phylogenetics and alignment). A number of general purpose matrices have been proposed (e.g. JTT, WAG, LG) since the seminal work of Margaret Dayhoff and co-workers. However, it has been shown that matrices specific to certain protein groups (e.g. mitochondrial) or life domains (e.g. viruses) differ significantly from general average matrices, and thus perform better when applied to the data to which they are dedicated. This Web server implements the maximum-likelihood estimation procedure that was used to estimate LG, and provides a number of tools and facilities. Users upload a set of multiple protein alignments from their domain of interest and receive the resulting matrix by email, along with statistics and comparisons with other matrices. A non-parametric bootstrap is performed optionally to assess the variability of replacement rate estimates. Maximum-likelihood trees, inferred using the estimated rate matrix, are also computed optionally for each input alignment. Finely tuned procedures and up-to-date ML software (PhyML 3.0, XRATE) are combined to perform all these heavy calculations on our clusters. Availability: http://www.atgc-montpellier.fr/ReplacementMatrix/ Contact: olivier.gascuel@lirmm.fr Supplementary data are available at http://www.atgc-montpellier.fr/ReplacementMatrix/

  16. COGNITIVE FATIGUE FACILITATES PROCEDURAL SEQUENCE LEARNING

    Directory of Open Access Journals (Sweden)

    Guillermo eBorragán

    2016-03-01

    Enhanced procedural learning has been evidenced in conditions where cognitive control is diminished, including hypnosis, disruption of prefrontal activity and non-optimal times of the day. Another condition depleting the availability of controlled resources is cognitive fatigue. We tested the hypothesis that cognitive fatigue, eventually leading to diminished cognitive control, facilitates procedural sequence learning. In a two-day experiment, twenty-three young healthy adults were administered a serial reaction time task (SRTT) following the induction of high or low levels of cognitive fatigue, in a counterbalanced order. Cognitive fatigue was induced using the Time load Dual-back (TloadDback) paradigm, a dual working memory task that allows tailoring cognitive load levels to the individual's optimal performance capacity. In line with our hypothesis, reaction times in the SRTT were faster in the high- than in the low-level fatigue condition, and performance improvement benefited more from the sequential components than from the motor components. Altogether, our results suggest a paradoxical, facilitating impact of cognitive fatigue on procedural motor sequence learning. We propose that the facilitated learning in the high-level fatigue condition stems from a reduction in the cognitive resources devoted to cognitive control processes that normally oppose automatic procedural acquisition mechanisms.

  17. Chemometrics Optimized Extraction Procedures, Phytosynergistic Blending and in vitro Screening of Natural Enzyme Inhibitors Amongst Leaves of Tulsi, Banyan and Jamun.

    Science.gov (United States)

    De, Baishakhi; Bhandari, Koushik; Singla, Rajeev K; Katakam, Prakash; Samanta, Tanmoy; Kushwaha, Dilip Kumar; Gundamaraju, Rohit; Mitra, Analava

    2015-10-01

    Tulsi, Banyan, and Jamun are popular Indian medicinal plants with notable hypoglycemic potential. This work reports chemo-profiling of the three species together with an in vitro screening approach for natural enzyme inhibitors (NEIs) of enzymes implicated in type 2 diabetes. In addition to the chemometrically optimized extraction process technology, phyto-synergistic studies of composite polyherbal blends are also reported. The objectives were chemometrically optimized extraction procedures, ratios of polyherbal composites that achieve phyto-synergistic actions, and in vitro screening of NEIs among the leaves of Tulsi, Banyan, and Jamun. The extraction process parameters for the leaves of the three plant species (Ficus benghalensis, Syzigium cumini and Ocimum sanctum) were optimized by a rotatable central composite design so as to obtain the maximal yield of bio-actives. Phyto-blends of the three species were prepared to achieve synergistic antidiabetic and antioxidant potentials, and their ratios were optimized chemometrically. For in vitro screening of natural enzyme inhibitors, the individual leaf extracts as well as the composite blends were subjected to assay procedures to assess their inhibitory potential against the enzymes pathogenic in type 2 diabetes. The antioxidant potentials were also estimated by DPPH radical scavenging, ABTS, FRAP and dot blot assays. From the response surface methodology studies and the solutions obtained using the desirability function, it was found that a hydro-ethanolic or methanolic solvent ratio of 52.46 ± 1.6 at a temperature of 20.17 ± 0.6 gave an optimum yield of polyphenols with minimal chlorophyll leaching. The species also showed the presence of glycosides, alkaloids, and saponins. Composites in the ratios of 1:1:1 and 1:1:2 gave synergistic effects in terms of polyphenol yield and antioxidant potential. All composites (1:1:1, 1:2:1, 2:1:1, 1:1:2) showed synergistic antioxidant actions. Inhibitory activities against the

  18. Distributed Cooperative Optimal Control for Multiagent Systems on Directed Graphs: An Inverse Optimal Approach.

    Science.gov (United States)

    Zhang, Huaguang; Feng, Tao; Yang, Guang-Hong; Liang, Hongjing

    2015-07-01

    In this paper, the inverse optimal approach is employed to design distributed consensus protocols that guarantee consensus and global optimality with respect to some quadratic performance indexes for identical linear systems on a directed graph. The inverse optimal theory is developed by introducing the notion of partial stability. As a result, the necessary and sufficient conditions for inverse optimality are proposed. By means of the developed inverse optimal theory, the necessary and sufficient conditions are established for globally optimal cooperative control problems on directed graphs. Basic optimal cooperative design procedures are given based on asymptotic properties of the resulting optimal distributed consensus protocols, and the multiagent systems can reach desired consensus performance (convergence rate and damping rate) asymptotically. Finally, two examples are given to illustrate the effectiveness of the proposed methods.
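
    As a purely illustrative companion to the abstract, the sketch below simulates a distributed consensus protocol of the form u_i = c Σ_j a_ij (x_j − x_i) for identical single-integrator agents on a directed graph. The graph, gain and initial states are arbitrary assumptions; the inverse-optimality analysis itself is not reproduced.

```python
# Hedged sketch: distributed consensus of identical single-integrator agents on a
# directed graph, x_i' = c * sum_j a_ij (x_j - x_i). Graph and gain are arbitrary;
# this illustrates only the protocol form, not the paper's inverse-optimal design.
import numpy as np

# Directed adjacency matrix: A[i, j] = 1 means agent i listens to agent j.
A = np.array([
    [0, 1, 0, 0],
    [0, 0, 1, 0],
    [0, 0, 0, 1],
    [1, 0, 0, 0],
], dtype=float)

L = np.diag(A.sum(axis=1)) - A          # directed graph Laplacian
c = 1.5                                  # coupling gain (assumed)
x = np.array([4.0, -2.0, 1.0, 7.0])      # initial agent states
dt = 0.01

for step in range(2000):
    x = x + dt * (-c * L @ x)            # Euler step of the consensus dynamics

print("final states:", x.round(4))       # all agents close to a common value
```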

  19. A convex optimization approach for solving large scale linear systems

    Directory of Open Access Journals (Sweden)

    Debora Cores

    2017-01-01

    The well-known Conjugate Gradient (CG) method minimizes a strictly convex quadratic function and is used to solve large-scale linear systems of equations when the coefficient matrix is symmetric and positive definite. In this work we present and analyze a non-quadratic convex function for solving any large-scale linear system of equations, regardless of the characteristics of the coefficient matrix. To find the global minimizers of this new convex function, any low-cost iterative optimization technique can be applied. In particular, we propose to use the low-cost, globally convergent Spectral Projected Gradient (SPG) method, which allows us to extend this optimization approach to solving consistent square and rectangular linear systems, as well as linear feasibility problems, with and without convex constraints and with and without preconditioning strategies. Our numerical results indicate that the new scheme outperforms state-of-the-art iterative techniques for solving linear systems when the symmetric part of the coefficient matrix is indefinite, and also for solving linear feasibility problems.
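
    As a point of reference for this optimization view of linear systems (using the generic least-squares objective rather than the authors' non-quadratic function, which is not reproduced here), the sketch below minimizes f(x) = ½‖Ax − b‖² with a Barzilai-Borwein (spectral) gradient iteration, the unconstrained core of an SPG-type scheme.

```python
# Hedged sketch: solving a rectangular, non-symmetric linear system Ax = b by
# minimizing the convex function f(x) = 0.5*||Ax - b||^2 with a Barzilai-Borwein
# (spectral) step gradient method, i.e. the unconstrained core of an SPG-type scheme.
# This uses the generic least-squares objective, not the paper's convex function.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(120, 80))           # rectangular, not symmetric
x_true = rng.normal(size=80)
b = A @ x_true                           # consistent system by construction

def grad(x):
    return A.T @ (A @ x - b)

x = np.zeros(80)
g = grad(x)
step = 1.0 / np.linalg.norm(g)           # initial step length
for k in range(500):
    x_new = x - step * g
    g_new = grad(x_new)
    s, y = x_new - x, g_new - g
    if s @ y > 1e-12:
        step = (s @ s) / (s @ y)         # BB1 spectral step length
    x, g = x_new, g_new
    if np.linalg.norm(g) < 1e-8:
        break

print("iterations:", k + 1, " residual:", np.linalg.norm(A @ x - b))
```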

  20. Using Chemicals to Optimize Conformance Control in Fractured Reservoirs; TOPICAL

    International Nuclear Information System (INIS)

    Seright, Randall S.; Liang, Jenn-Tai; Schrader, Richard; Hagstrom II, John; Wang, Ying; Kumar, Ananad; Wavrik, Kathryn

    2001-01-01

    This report describes work performed during the third and final year of the project, Using Chemicals to Optimize Conformance Control in Fractured Reservoirs. This research project had three objectives. The first objective was to develop a capability to predict and optimize the ability of gels to reduce permeability to water more than that to oil or gas. The second objective was to develop procedures for optimizing blocking agent placement in wells where hydraulic fractures cause channeling problems. The third objective was to develop procedures to optimize blocking agent placement in naturally fractured reservoirs.