WorldWideScience

Sample records for alignment algorithm determined

  1. Neurient: An Algorithm for Automatic Tracing of Confluent Neuronal Images to Determine Alignment

    Science.gov (United States)

    Mitchel, J.A.; Martin, I.S.

    2013-01-01

    A goal of neural tissue engineering is the development and evaluation of materials that guide neuronal growth and alignment. However, the methods available to quantitatively evaluate the response of neurons to guidance materials are limited and/or expensive, and may require manual tracing to be performed by the researcher. We have developed an open source, automated Matlab-based algorithm, building on previously published methods, to trace and quantify alignment of fluorescent images of neurons in culture. The algorithm is divided into three phases, including computation of a lookup table which contains directional information for each image, location of a set of seed points which may lie along neurite centerlines, and tracing neurites starting with each seed point and indexing into the lookup table. This method was used to obtain quantitative alignment data for complex images of densely cultured neurons. Complete automation of tracing allows for unsupervised processing of large numbers of images. Following image processing with our algorithm, available metrics to quantify neurite alignment include angular histograms, percent of neurite segments in a given direction, and mean neurite angle. The alignment information obtained from traced images can be used to compare the response of neurons to a range of conditions. This tracing algorithm is freely available to the scientific community under the name Neurient, and its implementation in Matlab allows a wide range of researchers to use a standardized, open source method to quantitatively evaluate the alignment of dense neuronal cultures. PMID:23384629
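The metrics stage described above (angular histograms, percent of segments in a given direction, mean neurite angle) can be sketched as follows. This is not the Neurient code: it assumes the tracing phase has already reduced each neurite segment to an orientation in degrees, and it treats orientations as axial data (0° and 180° are equivalent).

```python
import numpy as np

def alignment_metrics(seg_angles_deg, bin_width=10, target_deg=0, tol_deg=10):
    """Toy alignment metrics for traced neurite segments.

    Orientations are axial (0 and 180 degrees are equivalent), so all
    angles are folded into [0, 180) before analysis.
    Returns (angular histogram, percent near target, mean orientation).
    """
    a = np.asarray(seg_angles_deg, dtype=float) % 180.0
    # Angular histogram over [0, 180) in bins of bin_width degrees
    hist, _ = np.histogram(a, bins=np.arange(0, 180 + bin_width, bin_width))
    # Percent of segments within tol_deg of the target direction (axial distance)
    d = np.minimum(np.abs(a - target_deg), 180.0 - np.abs(a - target_deg))
    pct = 100.0 * np.mean(d <= tol_deg)
    # Mean orientation via the doubled-angle trick for axial data
    s = np.sin(np.radians(2.0 * a)).mean()
    c = np.cos(np.radians(2.0 * a)).mean()
    mean_deg = (0.5 * np.degrees(np.arctan2(s, c))) % 180.0
    return hist, pct, mean_deg
```

The doubled-angle averaging avoids the wrap-around problem of naively averaging orientations near 0°/180°.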

  2. A generalized global alignment algorithm.

    Science.gov (United States)

    Huang, Xiaoqiu; Chao, Kun-Mao

    2003-01-22

    Homologous sequences are sometimes similar over some regions but different over other regions. Homologous sequences have a much lower global similarity if the different regions are much longer than the similar regions. We present a generalized global alignment algorithm for comparing sequences with intermittent similarities, an ordered list of similar regions separated by different regions. A generalized global alignment model is defined to handle sequences with intermittent similarities. A dynamic programming algorithm is designed to compute an optimal general alignment in time proportional to the product of sequence lengths and in space proportional to the sum of sequence lengths. The algorithm is implemented as a computer program named GAP3 (Global Alignment Program Version 3). The generalized global alignment model is validated by experimental results produced with GAP3 on both DNA and protein sequences. The GAP3 program extends the ability of standard global alignment programs to recognize homologous sequences of lower similarity. The GAP3 program is freely available for academic use at http://bioinformatics.iastate.edu/aat/align/align.html.

  3. Global alignment algorithms implementations | Fatumo ...

    African Journals Online (AJOL)

In this paper, we implemented the two standard routes for sequence comparison, that is, the dotplot and the Needleman-Wunsch algorithm for global sequence alignment. Our algorithms were implemented in the Python programming language and were tested on a 1.60 GHz Linux platform with 512 MB of RAM, running SUSE versions 9.2 and 10.1.
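A minimal, score-only version of the Needleman-Wunsch route can be sketched as below. This is an illustration rather than the authors' implementation; the unit match/mismatch/gap scores are assumptions.

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Minimal Needleman-Wunsch global alignment (score only, linear gaps)."""
    n, m = len(a), len(b)
    # F[i][j] = best score aligning a[:i] with b[:j]
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap
    for j in range(1, m + 1):
        F[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,   # (mis)match
                          F[i - 1][j] + gap,     # gap in b
                          F[i][j - 1] + gap)     # gap in a
    return F[n][m]
```

A traceback over F would recover the alignment itself; the dotplot route simply marks every (i, j) with a[i] == b[j].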

  4. A Clustal Alignment Improver Using Evolutionary Algorithms

    DEFF Research Database (Denmark)

    Thomsen, Rene; Fogel, Gary B.; Krink, Thimo

    2002-01-01

Multiple sequence alignment (MSA) is a crucial task in bioinformatics. In this paper we extended previous work with evolutionary algorithms (EA) by using MSA solutions obtained from the well-known Clustal V algorithm as a candidate solution seed of the initial EA population. Our results clearly show...

  5. SPA: a probabilistic algorithm for spliced alignment.

    Directory of Open Access Journals (Sweden)

    2006-04-01

    Full Text Available Recent large-scale cDNA sequencing efforts show that elaborate patterns of splice variation are responsible for much of the proteome diversity in higher eukaryotes. To obtain an accurate account of the repertoire of splice variants, and to gain insight into the mechanisms of alternative splicing, it is essential that cDNAs are very accurately mapped to their respective genomes. Currently available algorithms for cDNA-to-genome alignment do not reach the necessary level of accuracy because they use ad hoc scoring models that cannot correctly trade off the likelihoods of various sequencing errors against the probabilities of different gene structures. Here we develop a Bayesian probabilistic approach to cDNA-to-genome alignment. Gene structures are assigned prior probabilities based on the lengths of their introns and exons, and based on the sequences at their splice boundaries. A likelihood model for sequencing errors takes into account the rates at which misincorporation, as well as insertions and deletions of different lengths, occurs during sequencing. The parameters of both the prior and likelihood model can be automatically estimated from a set of cDNAs, thus enabling our method to adapt itself to different organisms and experimental procedures. We implemented our method in a fast cDNA-to-genome alignment program, SPA, and applied it to the FANTOM3 dataset of over 100,000 full-length mouse cDNAs and a dataset of over 20,000 full-length human cDNAs. Comparison with the results of four other mapping programs shows that SPA produces alignments of significantly higher quality. In particular, the quality of the SPA alignments near splice boundaries and SPA's mapping of the 5' and 3' ends of the cDNAs are highly improved, allowing for more accurate identification of transcript starts and ends, and accurate identification of subtle splice variations. Finally, our splice boundary analysis on the human dataset suggests the existence of a novel non
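The trade-off SPA formalizes (prior over gene structures versus likelihood of sequencing errors) can be illustrated with a toy log-posterior score. This is not the SPA model: the log-normal intron prior, its parameters, and the mismatch-only error model are all invented for illustration, and indels are ignored.

```python
import math

def log_likelihood_errors(n_bases, n_mismatch, eps=0.01):
    """Log-likelihood of an alignment under a toy sequencing-error model:
    each base is misread with probability eps (uniform over the three
    wrong bases); insertions and deletions are ignored for brevity."""
    return (n_mismatch * math.log(eps / 3.0)
            + (n_bases - n_mismatch) * math.log(1.0 - eps))

def log_prior_intron(length, mu=7.0, sigma=1.0):
    """Toy log-normal prior over intron lengths (mu, sigma are made up)."""
    x = math.log(length)
    return (-0.5 * ((x - mu) / sigma) ** 2 - x
            - 0.5 * math.log(2.0 * math.pi * sigma ** 2))

def log_posterior(structure):
    """Unnormalized log posterior of a candidate gene structure:
    intron-length prior plus sequencing-error likelihood."""
    return (sum(log_prior_intron(l) for l in structure["introns"])
            + log_likelihood_errors(structure["aligned_bases"],
                                    structure["mismatches"]))
```

Ranking candidate structures by such a score is what lets a probabilistic aligner prefer a plausible gene structure with a few sequencing errors over an implausible one that matches perfectly.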

  6. MEANS FOR DETERMINING CENTRIFUGE ALIGNMENT

    Science.gov (United States)

    Smith, W.Q.

    1958-08-26

An apparatus is presented for remotely determining the alignment of a centrifuge. The centrifuge shaft is provided with a shoulder upon which two followers ride: one for detecting radial movements, and one upon the shoulder face for determining axial motion. The followers are attached to separate liquid-filled bellows, and a tube connects each bellows to its respective indicating gage at a remote location. Vibrations produced by misalignment of the centrifuge shaft are transmitted to the bellows, and thence through the tubing to the indicator gage. This apparatus is particularly useful for operation in a hot cell where the materials handled are dangerous to the operating personnel.

  7. Cover song identification by sequence alignment algorithms

    Science.gov (United States)

    Wang, Chih-Li; Zhong, Qian; Wang, Szu-Ying; Roychowdhury, Vwani

    2011-10-01

Content-based music analysis has drawn much attention due to the rapidly growing digital music market. This paper describes a method that can be used to effectively identify cover songs. A cover song preserves only the crucial melody of its reference song but differs in other acoustic properties. Hence, the beat/chroma-synchronous chromagram, which is insensitive to variations in the timbre or rhythm of songs but sensitive to the melody, is chosen. Key transposition is achieved by cyclically shifting the chromatic domain of the chromagram. By using a Hidden Markov Model (HMM) to obtain the time sequences of songs, the system is made even more robust. Because the Smith-Waterman alignment algorithm is used, a cover song and its reference need not have similar structure or length.
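The two key ingredients above (local alignment plus a search over the 12 cyclic key shifts) can be sketched as follows. This is a simplification, not the paper's system: each frame is assumed already reduced to a single dominant pitch class (0-11), and the match/mismatch/gap scores are assumptions.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Minimal Smith-Waterman local alignment (score only)."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,                       # local: never go negative
                          H[i - 1][j - 1] + s,     # (mis)match
                          H[i - 1][j] + gap,       # gap
                          H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

def cover_similarity(query, reference):
    """Best local-alignment score over all 12 key transpositions,
    implemented as cyclic shifts of the pitch classes."""
    return max(smith_waterman([(p + k) % 12 for p in query], reference)
               for k in range(12))
```

Because Smith-Waterman scores the best local region, a cover that shares only one verse with its reference can still obtain a high score.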

  8. A study of Hough Transform-based fingerprint alignment algorithms

    CSIR Research Space (South Africa)

    Mlambo, CS

    2014-10-01

Full Text Available the implementation of each algorithm. The comparison is performed by considering the alignment results computed using each group of algorithms when varying the number of minutiae points, the rotation angle, and the translation. In addition, the memory usage, computing time...

  9. CSA: An efficient algorithm to improve circular DNA multiple alignment

    Directory of Open Access Journals (Sweden)

    Pereira Luísa

    2009-07-01

Full Text Available Abstract Background The comparison of homologous sequences from different species is an essential approach to reconstruct the evolutionary history of species and of the genes they harbour in their genomes. Several complete mitochondrial and nuclear genomes are now available, increasing the importance of using multiple sequence alignment algorithms in comparative genomics. MtDNA has long been used in phylogenetic analysis and errors in the alignments can lead to errors in the interpretation of evolutionary information. Although a large number of multiple sequence alignment algorithms have been proposed to date, they all deal with linear DNA and cannot directly handle circular DNA. Researchers interested in aligning circular DNA sequences must first rotate them to the "right" place using an essentially manual process, before they can use multiple sequence alignment tools. Results In this paper we propose an efficient algorithm that identifies the most interesting region to cut circular genomes in order to improve phylogenetic analysis when using standard multiple sequence alignment algorithms. This algorithm identifies the largest chain of non-repeated longest subsequences common to a set of circular mitochondrial DNA sequences. All the sequences are then rotated and made linear for multiple alignment purposes. To evaluate the effectiveness of this new tool, three different sets of mitochondrial DNA sequences were considered. Other tests considering randomly rotated sequences were also performed. The software package Arlequin was used to evaluate the standard genetic measures of the alignments obtained with and without the use of the CSA algorithm with two well-known multiple alignment algorithms, the CLUSTALW and the MAVID tools, and also the visualization tool SinicView. Conclusion The results show that a circularization and rotation pre-processing step significantly improves the efficiency of publicly available multiple sequence alignment
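The pre-processing idea (rotate each circular sequence to a consistent cut point, then hand the linearized strings to a standard aligner) can be sketched with a much simpler canonicalization than CSA's common-subsequence chain. Rotating every sequence to its lexicographically smallest rotation is only a stand-in criterion, but it shows the principle: identical circular sequences cut at different points become identical linear strings.

```python
def linearize(circular_seqs):
    """Rotate each circular sequence to a canonical start before feeding a
    standard (linear) multiple-alignment tool.  As a simple stand-in for
    CSA's longest-common-chain criterion, each sequence is rotated to its
    lexicographically smallest rotation."""
    def min_rotation(s):
        # O(n^2) but adequate for illustration
        return min(s[i:] + s[:i] for i in range(len(s)))
    return [min_rotation(s) for s in circular_seqs]
```

CSA instead chooses the cut so that a region common to all input sequences stays contiguous, which is what matters for downstream alignment quality.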

  10. Genomic multiple sequence alignments: refinement using a genetic algorithm

    Directory of Open Access Journals (Sweden)

    Lefkowitz Elliot J

    2005-08-01

Full Text Available Abstract Background Genomic sequence data cannot be fully appreciated in isolation. Comparative genomics – the practice of comparing genomic sequences from different species – plays an increasingly important role in understanding the genotypic differences between species that result in phenotypic differences as well as in revealing patterns of evolutionary relationships. One of the major challenges in comparative genomics is producing a high-quality alignment between two or more related genomic sequences. In recent years, a number of tools have been developed for aligning large genomic sequences. Most utilize heuristic strategies to identify a series of strong sequence similarities, which are then used as anchors to align the regions between the anchor points. The resulting alignment is globally correct, but in many cases is suboptimal locally. We describe a new program, GenAlignRefine, which improves the overall quality of global multiple alignments by using a genetic algorithm to improve local regions of alignment. Regions of low quality are identified, realigned using the program T-Coffee, and then refined using a genetic algorithm. Because a better COFFEE (Consistency based Objective Function For alignmEnt Evaluation) score generally reflects greater alignment quality, the algorithm searches for an alignment that yields a better COFFEE score. To mitigate the intrinsic slowness of the genetic algorithm, GenAlignRefine was implemented as a parallel, cluster-based program. Results We tested the GenAlignRefine algorithm by running it on a Linux cluster to refine sequences from a simulation, as well as to refine a multiple alignment of 15 Orthopoxvirus genomic sequences approximately 260,000 nucleotides in length that initially had been aligned by Multi-LAGAN. It took approximately 150 minutes for a 40-processor Linux cluster to optimize some 200 fuzzy (poorly aligned) regions of the orthopoxvirus alignment. Overall sequence identity increased only
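The refinement loop above follows the standard genetic-algorithm pattern: score candidates, keep the best, and generate mutants of the survivors. The sketch below is a generic elitist loop, not GenAlignRefine: the `score` argument stands in for the COFFEE objective and `mutate` for a local realignment move, both of which are far more elaborate in the real program.

```python
import random

def genetic_refine(initial, score, mutate, pop_size=20, generations=200, seed=0):
    """Generic elitist genetic-refinement loop: keep the best half of the
    population, refill with mutants of the survivors, and return the best
    candidate found."""
    rng = random.Random(seed)
    pop = [initial] + [mutate(initial, rng) for _ in range(pop_size - 1)]
    for _ in range(generations):
        pop.sort(key=score, reverse=True)          # rank by objective
        parents = pop[: pop_size // 2]             # elitist selection
        pop = parents + [mutate(rng.choice(parents), rng) for _ in parents]
    return max(pop, key=score)
```

Because the best candidate is always retained, the objective is non-decreasing across generations, which is the property that makes such a loop safe to use as a refinement (rather than from-scratch) step.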

  11. Quality measures for HRR alignment based ISAR imaging algorithms

    CSIR Research Space (South Africa)

    Janse van Rensburg, V

    2013-05-01

    Full Text Available Some Inverse Synthetic Aperture Radar (ISAR) algorithms form the image in a two-step process of range alignment and phase conjugation. This paper discusses a comprehensive set of measures used to quantify the quality of range alignment, with the aim...

  12. Stochastic split determinant algorithms

    International Nuclear Information System (INIS)

    Horvatha, Ivan

    2000-01-01

    I propose a large class of stochastic Markov processes associated with probability distributions analogous to that of lattice gauge theory with dynamical fermions. The construction incorporates the idea of approximate spectral split of the determinant through local loop action, and the idea of treating the infrared part of the split through explicit diagonalizations. I suggest that exact algorithms of practical relevance might be based on Markov processes so constructed

  13. Validation of Kalman Filter alignment algorithm with cosmic-ray data using a CMS silicon strip tracker endcap

    CERN Document Server

    Sprenger, D; Adolphi, R; Brauer, R; Feld, L; Klein, K; Ostaptchuk, A; Schael, S; Wittmer, B

    2010-01-01

    A Kalman Filter alignment algorithm has been applied to cosmic-ray data. We discuss the alignment algorithm and an experiment-independent implementation including outlier rejection and treatment of weakly determined parameters. Using this implementation, the algorithm has been applied to data recorded with one CMS silicon tracker endcap. Results are compared to both photogrammetry measurements and data obtained from a dedicated hardware alignment system, and good agreement is observed.

  14. Enhanced Dynamic Algorithm of Genome Sequence Alignments

    OpenAIRE

    Arabi E. keshk

    2014-01-01

The merging of biology and computer science has created a new field called computational biology, which explores the capacity of computers to gain knowledge from biological data (bioinformatics). Computational biology is rooted in the life sciences as well as in computer and information sciences and technologies. The main problem in computational biology is sequence alignment, a way of arranging the sequences of DNA, RNA or protein to identify regions of similarity and the relationship between se...

  15. Distributed interference alignment iterative algorithms in symmetric wireless network

    Directory of Open Access Journals (Sweden)

    YANG Jingwen

    2015-02-01

Full Text Available Interference alignment is a novel interference management technique that has attracted wide attention. It overlaps interference in the same signal subspace at the receiving terminal by precoding, so as to thoroughly eliminate the influence of interference on the expected signals, thus allowing the desired user to achieve the maximum degrees of freedom. In this paper we study three typical algorithms for realizing interference alignment: minimizing the leakage interference, maximizing the signal-to-interference-plus-noise ratio (SINR), and minimizing the mean square error (MSE). All of these algorithms utilize the reciprocity of the wireless network and iterate the precoders between the original network and the reverse network so as to achieve interference alignment. We use the uplink transmit rate to analyze the performance of the three algorithms, and numerical simulation results show their advantages, providing a foundation for further study. The feasibility and future of interference alignment are also discussed.
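The first of the three approaches (leakage minimization with forward/reverse iterations) can be sketched for a small MIMO interference channel. This follows the spirit of the Gomadam-Cadambe-Jafar alternating-minimization algorithm with one stream per user; the channel model and dimensions are assumptions for illustration.

```python
import numpy as np

def min_leakage_ia(H, iters=500, seed=0):
    """Alternating interference-leakage minimization for a K-user MIMO
    interference channel, one stream per user.  H[k][l] is the channel
    matrix from transmitter l to receiver k."""
    rng = np.random.default_rng(seed)
    K = len(H)
    M = H[0][0].shape[1]
    # Random unit-norm precoders to start
    V = [rng.standard_normal((M, 1)) + 1j * rng.standard_normal((M, 1))
         for _ in range(K)]
    V = [v / np.linalg.norm(v) for v in V]
    U = [None] * K
    for _ in range(iters):
        # Forward pass: each receiver takes the least-interfered direction
        for k in range(K):
            Q = sum(H[k][l] @ V[l] @ V[l].conj().T @ H[k][l].conj().T
                    for l in range(K) if l != k)
            _, vecs = np.linalg.eigh(Q)
            U[k] = vecs[:, [0]]      # eigenvector of the smallest eigenvalue
        # Reverse pass: reciprocity swaps the roles of precoders and decoders
        for k in range(K):
            Q = sum(H[l][k].conj().T @ U[l] @ U[l].conj().T @ H[l][k]
                    for l in range(K) if l != k)
            _, vecs = np.linalg.eigh(Q)
            V[k] = vecs[:, [0]]
    return U, V

def leakage(H, U, V):
    """Total interference power leaking into the desired signal subspaces."""
    K = len(H)
    return sum(float(np.linalg.norm(U[k].conj().T @ H[k][l] @ V[l]) ** 2)
               for k in range(K) for l in range(K) if l != k)
```

For a feasible configuration such as three users with two antennas each and one stream per user, the leakage decreases monotonically toward zero as the iterations proceed.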

  16. Alignment of Custom Standards by Machine Learning Algorithms

    Directory of Open Access Journals (Sweden)

    Adela Sirbu

    2010-09-01

Full Text Available Building an efficient model for automatic alignment of terminologies would bring a significant improvement to the information retrieval process. We have developed and compared two machine learning based algorithms whose aim is to align two custom standards built on a three-level taxonomy, using kNN and SVM classifiers that work on a vector representation consisting of several similarity measures. The weights used by the kNN classifier were optimized with an evolutionary algorithm, while the SVM classifier's hyper-parameters were optimized with a grid search algorithm. The training database was obtained semi-automatically using the Coma++ tool. The performance of our aligners is shown by the results obtained on the test set.
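The kNN side of this setup can be sketched as below. This is not the authors' aligner: the feature vector is assumed to hold several similarity measures between two entries of the standards, label 1 means "these entries should be aligned", and the per-feature weights stand in for the evolutionary-optimized weights of the paper.

```python
import numpy as np

def knn_predict(X_train, y_train, x, k=3, weights=None):
    """Weighted-feature kNN for binary alignment decisions."""
    w = np.ones(X_train.shape[1]) if weights is None else np.asarray(weights)
    # Weighted Euclidean distance in similarity-measure space
    d = np.sqrt(((w * (X_train - x)) ** 2).sum(axis=1))
    nearest = np.argsort(d)[:k]
    # Majority vote over the k nearest labels (use odd k to avoid ties)
    return int(round(float(y_train[nearest].mean())))
```

Optimizing `weights` amounts to learning which similarity measures are most informative for the alignment decision.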

  17. Coarse Alignment Technology on Moving base for SINS Based on the Improved Quaternion Filter Algorithm.

    Science.gov (United States)

    Zhang, Tao; Zhu, Yongyun; Zhou, Feng; Yan, Yaxiong; Tong, Jinwu

    2017-06-17

Initial alignment of the strapdown inertial navigation system (SINS) is intended to determine the initial attitude matrix in a short time with certain accuracy. The alignment accuracy of the quaternion filter algorithm is remarkable, but the convergence rate is slow. To solve this problem, this paper proposes an improved quaternion filter algorithm for faster initial alignment based on the error model of the quaternion filter algorithm. The improved quaternion filter algorithm constructs the K matrix based on the principle of the optimal quaternion algorithm, and rebuilds the measurement model to contain acceleration and velocity errors so as to make the convergence rate faster. A Doppler velocity log (DVL) provides the reference velocity for the improved quaternion filter alignment algorithm. In order to demonstrate the performance of the improved quaternion filter algorithm in the field, a turntable experiment and a vehicle test are carried out. The results of the experiments show that the convergence rate of the proposed improved quaternion filter is faster than that of the traditional quaternion filter algorithm. In addition, the improved quaternion filter algorithm also demonstrates advantages in terms of correctness, effectiveness, and practicability.
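The "K matrix" built from the optimal-quaternion principle refers to the classical q-method construction: accumulate weighted vector observations into a 4x4 matrix whose dominant eigenvector is the optimal attitude quaternion. The sketch below is Davenport's q-method in its textbook form, not the paper's improved filter; sign and ordering conventions for the quaternion vary between references.

```python
import numpy as np

def davenport_q(body, ref, weights):
    """Davenport's q-method: build the 4x4 K matrix from weighted vector
    observations and take the eigenvector of its largest eigenvalue as
    the optimal attitude quaternion (vector part first, scalar last).
    body[i] ~ A @ ref[i] for the attitude matrix A being sought."""
    B = sum(w * np.outer(b, r) for w, b, r in zip(weights, body, ref))
    S = B + B.T
    z = np.array([B[1, 2] - B[2, 1], B[2, 0] - B[0, 2], B[0, 1] - B[1, 0]])
    sigma = np.trace(B)
    K = np.zeros((4, 4))
    K[:3, :3] = S - sigma * np.eye(3)
    K[:3, 3] = z
    K[3, :3] = z
    K[3, 3] = sigma
    vals, vecs = np.linalg.eigh(K)   # ascending eigenvalues
    # For noiseless measurements the largest eigenvalue equals sum(weights)
    return vals[-1], vecs[:, -1]
```

The gap between the largest eigenvalue and the total weight is exactly the residual of Wahba's loss, which gives a built-in quality check on the alignment.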

  18. Trigger Algorithms for Alignment and Calibration at the CMS Experiment

    CERN Document Server

    Fernandez Perez Tomei, Thiago Rafael

    2017-01-01

The data needs of the Alignment and Calibration group at the CMS experiment differ considerably from those of the physics studies groups. Data are taken at CMS through the online event selection system, which is implemented in two steps. The Level-1 Trigger is implemented on custom-made electronics and dedicated to analysing the detector information at a coarse-grained scale, while the High Level Trigger (HLT) is implemented as a series of software algorithms, running in a computing farm, that have access to the full detector information. In this paper we describe the set of trigger algorithms that is deployed to address the needs of the Alignment and Calibration group, how it fits in the general infrastructure of the HLT, and how it feeds the Prompt Calibration Loop (PCL), allowing for a fast turnaround for the alignment and calibration constants.

  19. Efficient GPS Position Determination Algorithms

    National Research Council Canada - National Science Library

    Nguyen, Thao Q

    2007-01-01

    ... differential GPS algorithm for a network of users. The stand-alone user GPS algorithm is a direct, closed-form, and efficient new position determination algorithm that exploits the closed-form solution of the GPS trilateration equations and works...
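The closed-form flavor of position determination mentioned above can be illustrated with the standard linearization of the trilateration equations. This is a 2-D sketch under simplifying assumptions (known ranges, no receiver clock bias), not the algorithm of the report, which also handles the GPS-specific unknowns.

```python
import numpy as np

def trilaterate(anchors, ranges):
    """Closed-form 2-D position fix from ranges to known anchors.
    Subtracting the first range equation from the others cancels the
    quadratic terms, leaving a linear least-squares problem."""
    P = np.asarray(anchors, dtype=float)
    d = np.asarray(ranges, dtype=float)
    # From |x - P_i|^2 = d_i^2 minus the i = 0 equation:
    #   2 (P_i - P_0) . x = d_0^2 - d_i^2 + |P_i|^2 - |P_0|^2
    A = 2.0 * (P[1:] - P[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + (P[1:] ** 2).sum(axis=1) - (P[0] ** 2).sum())
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

With more anchors than unknowns the least-squares solve also averages out range noise, which is why the same formulation extends naturally to networks of users.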

  20. ChromAlign: A two-step algorithmic procedure for time alignment of three-dimensional LC-MS chromatographic surfaces.

    Science.gov (United States)

    Sadygov, Rovshan G; Maroto, Fernando Martin; Hühmer, Andreas F R

    2006-12-15

    We present an algorithmic approach to align three-dimensional chromatographic surfaces of LC-MS data of complex mixture samples. The approach consists of two steps. In the first step, we prealign chromatographic profiles: two-dimensional projections of chromatographic surfaces. This is accomplished by correlation analysis using fast Fourier transforms. In this step, a temporal offset that maximizes the overlap and dot product between two chromatographic profiles is determined. In the second step, the algorithm generates correlation matrix elements between full mass scans of the reference and sample chromatographic surfaces. The temporal offset from the first step indicates a range of the mass scans that are possibly correlated, then the correlation matrix is calculated only for these mass scans. The correlation matrix carries information on highly correlated scans, but it does not itself determine the scan or time alignment. Alignment is determined as a path in the correlation matrix that maximizes the sum of the correlation matrix elements. The computational complexity of the optimal path generation problem is reduced by the use of dynamic programming. The program produces time-aligned surfaces. The use of the temporal offset from the first step in the second step reduces the computation time for generating the correlation matrix and speeds up the process. The algorithm has been implemented in a program, ChromAlign, developed in C++ language for the .NET2 environment in WINDOWS XP. In this work, we demonstrate the applications of ChromAlign to alignment of LC-MS surfaces of several datasets: a mixture of known proteins, samples from digests of surface proteins of T-cells, and samples prepared from digests of cerebrospinal fluid. ChromAlign accurately aligns the LC-MS surfaces we studied. In these examples, we discuss various aspects of the alignment by ChromAlign, such as constant time axis shifts and warping of chromatographic surfaces.
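The first, FFT-based step (finding the temporal offset that maximizes the overlap of two chromatographic profiles) can be sketched in a few lines. This is an illustration of the correlation trick, not ChromAlign itself, and it uses circular correlation for simplicity.

```python
import numpy as np

def circular_offset(reference, sample):
    """Temporal offset between two profiles via FFT cross-correlation:
    returns the lag that maximizes the circular correlation, i.e. the
    shift that best overlays the sample profile on the reference."""
    R = np.fft.fft(reference)
    S = np.fft.fft(sample)
    # Cross-correlation theorem: corr = ifft(fft(sample) * conj(fft(reference)))
    corr = np.fft.ifft(S * np.conj(R)).real
    return int(np.argmax(corr))
```

In the second step, this offset restricts which pairs of mass scans need correlation-matrix entries at all, which is what keeps the dynamic-programming path search tractable.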

  1. A Rapid Convergent Low Complexity Interference Alignment Algorithm for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Lihui Jiang

    2015-07-01

Full Text Available Interference alignment (IA) is a novel technique that can effectively eliminate the interference and approach the sum capacity of wireless sensor networks (WSNs) when the signal-to-noise ratio (SNR) is high, by casting the desired signal and interference into different signal subspaces. The traditional alternating minimization interference leakage (AMIL) algorithm for IA shows good performance in high SNR regimes; however, the complexity of the AMIL algorithm increases dramatically as the number of users and antennas increases, posing limits to its applications in practical systems. In this paper, a novel IA algorithm, called the directional quartic optimal (DQO) algorithm, is proposed to minimize the interference leakage with rapid convergence and low complexity. The properties of the AMIL algorithm are investigated, and it is discovered that the difference between two consecutive iteration results of the AMIL algorithm will approximately point to the convergence solution when the precoding and decoding matrices obtained from the intermediate iterations are sufficiently close to their convergence values. Based on this important property, the proposed DQO algorithm employs a line search procedure so that it can converge to the destination directly. In addition, the optimal step size can be determined analytically by optimizing a quartic function. Numerical results show that the proposed DQO algorithm can suppress the interference leakage more rapidly than the traditional AMIL algorithm, and can achieve the same level of sum rate as the AMIL algorithm with far fewer iterations and far less execution time.

  2. A Rapid Convergent Low Complexity Interference Alignment Algorithm for Wireless Sensor Networks.

    Science.gov (United States)

    Jiang, Lihui; Wu, Zhilu; Ren, Guanghui; Wang, Gangyi; Zhao, Nan

    2015-07-29

Interference alignment (IA) is a novel technique that can effectively eliminate the interference and approach the sum capacity of wireless sensor networks (WSNs) when the signal-to-noise ratio (SNR) is high, by casting the desired signal and interference into different signal subspaces. The traditional alternating minimization interference leakage (AMIL) algorithm for IA shows good performance in high SNR regimes; however, the complexity of the AMIL algorithm increases dramatically as the number of users and antennas increases, posing limits to its applications in practical systems. In this paper, a novel IA algorithm, called the directional quartic optimal (DQO) algorithm, is proposed to minimize the interference leakage with rapid convergence and low complexity. The properties of the AMIL algorithm are investigated, and it is discovered that the difference between two consecutive iteration results of the AMIL algorithm will approximately point to the convergence solution when the precoding and decoding matrices obtained from the intermediate iterations are sufficiently close to their convergence values. Based on this important property, the proposed DQO algorithm employs a line search procedure so that it can converge to the destination directly. In addition, the optimal step size can be determined analytically by optimizing a quartic function. Numerical results show that the proposed DQO algorithm can suppress the interference leakage more rapidly than the traditional AMIL algorithm, and can achieve the same level of sum rate as the AMIL algorithm with far fewer iterations and far less execution time.

  3. NN-align. An artificial neural network-based alignment algorithm for MHC class II peptide binding prediction

    DEFF Research Database (Denmark)

    Nielsen, Morten; Lund, Ole

    2009-01-01

    this binding event. RESULTS: Here, we present a novel artificial neural network-based method, NN-align that allows for simultaneous identification of the MHC class II binding core and binding affinity. NN-align is trained using a novel training algorithm that allows for correction of bias in the training data...

  4. A large-scale application of the Kalman alignment algorithm to the CMS tracker

    International Nuclear Information System (INIS)

    Widl, E; Fruehwirth, R

    2008-01-01

The Kalman alignment algorithm has been specifically developed to cope with the demands that arise from the specifications of the CMS Tracker. The algorithmic concept is based on the Kalman filter formalism and is designed to avoid the inversion of large matrices. Most notably, the algorithm strikes a balance between conventional global and local track-based alignment algorithms, by restricting the computation of alignment parameters not only to alignable objects hit by the same track, but also to all other alignable objects that are significantly correlated. Nevertheless, this feature also comes with various trade-offs: mechanisms are needed that decide which alignable objects are significantly correlated and keep track of these correlations. Due to the large number of alignable objects involved at each update (at least compared to local alignment algorithms), the time spent for retrieving and writing alignment parameters as well as the required user memory becomes a significant factor. The large-scale test presented here applies the Kalman alignment algorithm to the (misaligned) CMS Tracker barrel, and demonstrates the feasibility of the algorithm in a realistic scenario. It is shown that both the computation time and the amount of required user memory are within reasonable bounds, given the available computing resources, and that the obtained results are satisfactory.
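The core reason the Kalman formalism avoids large matrix inversions is visible in a single measurement update: only the innovation covariance, whose dimension is that of the measurement rather than of the full parameter vector, must be inverted. The sketch below is the textbook update, not the CMS implementation, with the state x standing in for a set of alignment parameters.

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One Kalman-filter measurement update for a parameter state x with
    covariance P, given measurement z with model z = H x + noise(R)."""
    S = H @ P @ H.T + R                 # innovation covariance (small!)
    K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
    x = x + K @ (z - H @ x)             # corrected parameters
    P = (np.eye(len(x)) - K @ H) @ P    # reduced uncertainty
    return x, P
```

Repeated updates shrink the covariance of well-measured parameters while leaving weakly determined ones (those the tracks do not constrain) essentially untouched, which is why such parameters need the special treatment mentioned in record 13.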

  5. Analysis of computational complexity for HT-based fingerprint alignment algorithms on java card environment

    CSIR Research Space (South Africa)

    Mlambo, CS

    2015-01-01

Full Text Available In this paper, implementations of three Hough Transform based fingerprint alignment algorithms are analyzed with respect to time complexity on the Java Card environment. The three algorithms are: the Local Match Based Approach (LMBA), the Discretized Rotation Based...

  6. Phase Retrieval Using a Genetic Algorithm on the Systematic Image-Based Optical Alignment Testbed

    Science.gov (United States)

    Taylor, Jaime R.

    2003-01-01

NASA's Marshall Space Flight Center's Systematic Image-Based Optical Alignment (SIBOA) Testbed was developed to test phase retrieval algorithms and hardware techniques. Individuals working with the facility developed the idea of implementing phase retrieval by separating the determination of the tip/tilt of each mirror from the piston motion (or translation) of each mirror. Presented in this report is an algorithm that determines the optimal phase correction associated only with the piston motion of the mirrors. A description of the phase retrieval problem is first presented. The Systematic Image-Based Optical Alignment (SIBOA) Testbed is then described. A Discrete Fourier Transform (DFT) is necessary to transfer the incoming wavefront (or estimate of phase error) into the spatial frequency domain to compare it with the image. A method for reducing the DFT to seven scalar/matrix multiplications is presented. A genetic algorithm is then used to search for the phase error. The results of this new algorithm on a test problem are presented.

  7. Method to evaluate steering and alignment algorithms for controlling emittance growth

    International Nuclear Information System (INIS)

    Adolphsen, C.; Raubenheimer, T.

    1993-04-01

    Future linear colliders will likely use sophisticated beam-based alignment and/or steering algorithms to control the growth of the beam emittance in the linac. In this paper, a mathematical framework is presented which simplifies the evaluation of the effectiveness of these algorithms. As an application, a quad alignment that uses beam data taken with the nominal linac optics, and with a scaled optics, is evaluated in terms of the dispersive emittance growth remaining after alignment

  8. Robust precision alignment algorithm for micro tube laser forming

    NARCIS (Netherlands)

    Folkersma, Ger; Brouwer, Dannis Michel; Römer, Gerardus Richardus, Bernardus, Engelina; Herder, Justus Laurens

    2016-01-01

Tube laser forming on a small-diameter tube can be used as a high-precision actuator to permanently align small (optical) components. Applications such as the alignment of optical fibers to photonic integrated circuits often require sub-micron alignment accuracy. Although the process causes

  9. A comprehensive evaluation of alignment algorithms in the context of RNA-seq.

    Directory of Open Access Journals (Sweden)

    Robert Lindner

    Full Text Available Transcriptome sequencing (RNA-Seq overcomes limitations of previously used RNA quantification methods and provides one experimental framework for both high-throughput characterization and quantification of transcripts at the nucleotide level. The first step and a major challenge in the analysis of such experiments is the mapping of sequencing reads to a transcriptomic origin including the identification of splicing events. In recent years, a large number of such mapping algorithms have been developed, all of which have in common that they require algorithms for aligning a vast number of reads to genomic or transcriptomic sequences. Although the FM-index based aligner Bowtie has become a de facto standard within mapping pipelines, a much larger number of possible alignment algorithms have been developed also including other variants of FM-index based aligners. Accordingly, developers and users of RNA-seq mapping pipelines have the choice among a large number of available alignment algorithms. To provide guidance in the choice of alignment algorithms for these purposes, we evaluated the performance of 14 widely used alignment programs from three different algorithmic classes: algorithms using either hashing of the reference transcriptome, hashing of reads, or a compressed FM-index representation of the genome. Here, special emphasis was placed on both precision and recall and the performance for different read lengths and numbers of mismatches and indels in a read. Our results clearly showed the significant reduction in memory footprint and runtime provided by FM-index based aligners at a precision and recall comparable to the best hash table based aligners. Furthermore, the recently developed Bowtie 2 alignment algorithm shows a remarkable tolerance to both sequencing errors and indels, thus, essentially making hash-based aligners obsolete.

  10. Iterative local Chi2 alignment algorithm for the ATLAS Pixel detector

    CERN Document Server

    Göttfert, Tobias

    The existing local chi2 alignment approach for the ATLAS SCT detector was extended to the alignment of the ATLAS Pixel detector. This approach is linear, aligns modules separately, and uses distance of closest approach residuals and iterations. The derivation and underlying concepts of the approach are presented. To show the feasibility of the approach for Pixel modules, a simplified, stand-alone track simulation, together with the alignment algorithm, was developed with the ROOT analysis software package. The Pixel alignment software was integrated into Athena, the ATLAS software framework. First results and the achievable accuracy for this approach with a simulated dataset are presented.
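
    The core idea, correcting each module separately from its track residuals and iterating, can be caricatured as follows (a toy sketch under the strong assumption that a module's residual is just its true offset minus the current estimate plus Gaussian noise; real residuals couple modules through the track fit, which is what makes the iterations necessary):

```python
import random

def iterate_local_alignment(true_offsets, n_tracks=200, n_iter=5, seed=1):
    """Toy local-chi2 iteration: correct each module independently by the
    mean of its residuals, then recompute residuals and repeat."""
    rng = random.Random(seed)
    est = [0.0] * len(true_offsets)
    for _ in range(n_iter):
        for m, true_off in enumerate(true_offsets):
            residuals = [(true_off - est[m]) + rng.gauss(0.0, 0.05)
                         for _ in range(n_tracks)]
            est[m] += sum(residuals) / n_tracks  # per-module correction
    return est
```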

  11. CMS silicon tracker alignment strategy with the Millepede II algorithm

    International Nuclear Information System (INIS)

    Flucke, G; Schleper, P; Steinbrueck, G; Stoye, M

    2008-01-01

    The positions of the silicon modules of the CMS tracker will be known to O(100 μm) from survey measurements, mounting precision and the hardware alignment system. However, in order to fully exploit the capabilities of the tracker, these positions need to be known to a precision of a few μm. Only a track-based alignment procedure can reach this required precision. Such an alignment procedure is a major challenge, given that about 50000 geometry constants need to be measured. Making use of the novel χ² minimization program Millepede II, an alignment strategy has been developed in which all detector components are aligned simultaneously and all correlations between their position parameters are taken into account. Different simulated data, such as Z⁰ decays and muons originating in air showers, were used for the study. Additionally, information about the mechanical structure of the tracker and initial position uncertainties was used as input for the alignment procedure. A proof of concept of this alignment strategy is demonstrated using simulated data.

  12. Concept of AHRS Algorithm Designed for Platform Independent Imu Attitude Alignment

    Science.gov (United States)

    Tomaszewski, Dariusz; Rapiński, Jacek; Pelc-Mieczkowska, Renata

    2017-12-01

    Nowadays, along with the advancement of technology, one can observe the rapid development of various types of navigation systems. Satellite navigation, so far the most popular, is now commonly supported by positioning results calculated from other measurement systems. The method and manner of integration depend directly on the purpose of the system being developed. To increase the frequency of readings and improve the operation of outdoor navigation systems, satellite navigation systems (GPS, GLONASS, etc.) can be supported with inertial navigation. Such a method of navigation consists of several steps. The first stage is the determination of the initial orientation of the inertial measurement unit, called INS alignment. During this process, on the basis of acceleration and angular velocity readings, values of the Euler angles (pitch, roll, yaw) are calculated, allowing for unambiguous orientation of the sensor coordinate system relative to an external coordinate system. The following study presents the concept of an AHRS (attitude and heading reference system) algorithm that determines the Euler angles. The study was conducted with readings from low-cost MEMS cell phone sensors. The results were then analyzed to determine the accuracy of the featured algorithm. On the basis of the performed experiments, the validity of the developed algorithm was confirmed.
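
    For the accelerometer part of such an alignment, pitch and roll follow directly from the measured gravity vector when the sensor is static (a minimal sketch; yaw is not observable from the accelerometer alone and would come from magnetometer readings or gyro integration):

```python
import math

def accel_to_pitch_roll(ax, ay, az):
    """Estimate pitch and roll (radians) from one static accelerometer
    sample, assuming gravity is the only sensed acceleration and the
    axes follow the common x-forward, y-right, z-down convention."""
    pitch = math.atan2(-ax, math.hypot(ay, az))
    roll = math.atan2(ay, az)
    return pitch, roll
```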

  13. NN-align. An artificial neural network-based alignment algorithm for MHC class II peptide binding prediction

    Directory of Open Access Journals (Sweden)

    Lund Ole

    2009-09-01

    Background: The major histocompatibility complex (MHC) molecule plays a central role in controlling the adaptive immune response to infections. MHC class I molecules present peptides derived from intracellular proteins to cytotoxic T cells, whereas MHC class II molecules stimulate cellular and humoral immunity through presentation of extracellularly derived peptides to helper T cells. Identifying which peptides will bind a given MHC molecule is thus of great importance for understanding host-pathogen interactions, and considerable effort has gone into developing algorithms capable of predicting this binding event. Results: Here, we present a novel artificial neural network-based method, NN-align, that allows for simultaneous identification of the MHC class II binding core and binding affinity. NN-align is trained using a novel training algorithm that corrects for bias in the training data due to redundant binding core representation. Incorporating information about the residues flanking the peptide-binding core is shown to significantly improve the prediction accuracy. The method is evaluated on a large-scale benchmark consisting of six independent data sets covering 14 human MHC class II alleles, and is demonstrated to outperform other state-of-the-art MHC class II prediction methods. Conclusion: The NN-align method is competitive with the state-of-the-art MHC class II peptide binding prediction algorithms. The method is publicly available at http://www.cbs.dtu.dk/services/NetMHCII-2.0.

  14. MSuPDA: A Memory Efficient Algorithm for Sequence Alignment.

    Science.gov (United States)

    Khan, Mohammad Ibrahim; Kamal, Md Sarwar; Chowdhury, Linkon

    2016-03-01

    Space complexity is a central question in DNA sequence alignment, and memory saving with pushdown automata can reduce the space occupied in computer memory. In our proposed process, an anchor seed (AS) is selected from a given data set of nucleotide base pairs for local sequence alignment. A quick splitting technique separates the AS from the DNA genome segments. The selected AS is placed in the input unit of a pushdown automaton (PDA), while the whole set of DNA genome segments is placed on the PDA's stack. The AS from the input unit is then matched against the DNA genome segments popped from the stack: matches, mismatches, and indels of nucleotides are popped under the PDA's control unit. Each POP operation on the stack frees the memory cell occupied by the corresponding nucleotide base pair.
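
    The stack discipline described above can be caricatured in a few lines (a loose sketch with invented names, not the MSuPDA implementation): the genome segment sits on a stack, the anchor seed arrives at the input, and every comparison pops, and thereby frees, one stack cell:

```python
def pda_seed_match(anchor_seed, genome_segment):
    """Compare an anchor seed against a genome segment held on a stack.
    Each POP discards the stack cell after use, mimicking the
    memory-freeing behavior described for the pushdown automaton."""
    stack = list(reversed(genome_segment))  # stack top = first base
    matches = 0
    for base in anchor_seed:
        if not stack:
            break
        if stack.pop() == base:  # POP frees the cell holding this base
            matches += 1
    return matches
```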

  15. An efficient genetic algorithm for structural RNA pairwise alignment and its application to non-coding RNA discovery in yeast

    Directory of Open Access Journals (Sweden)

    Taneda Akito

    2008-12-01

    Background: Aligning RNA sequences with low sequence identity has been a challenging problem, since such a computation essentially needs an algorithm of high complexity to take structural conservation into account. Although many sophisticated algorithms for this purpose have been proposed to date, further improvement in efficiency is necessary to accelerate large-scale applications, including non-coding RNA (ncRNA) discovery. Results: We developed a new genetic algorithm, Cofolga2, for simultaneously computing pairwise RNA sequence alignment and consensus folding, and benchmarked it using BRAliBase 2.1. The benchmark results showed that our new algorithm is accurate and efficient in both time and memory usage. Then, combining it with an originally trained SVM, we applied the new algorithm to novel ncRNA discovery, comparing the S. cerevisiae genome with six related genomes in a pairwise manner. By focusing our search on relatively short regions (50 bp to 2,000 bp) sandwiched by conserved sequences, we predicted 714 intergenic and 1,311 sense or antisense ncRNA candidates, which were found in pairwise alignments with stable consensus secondary structure and low sequence identity (≤ 50%). Comparison with previous predictions showed that over 92% of the candidates are novel. The estimated rate of false positives among the predicted candidates is 51%. Twenty-five percent of the intergenic candidates have support for expression in the cell, i.e., their genomic positions overlap those of experimentally determined transcripts in the literature. By manual inspection of the results, moreover, we obtained four multiple alignments with low sequence identity which reveal consensus structures shared by three species/sequences. Conclusion: The present method gives an efficient tool complementary to sequence-alignment-based ncRNA finders.

  16. Protein alignment algorithms with an efficient backtracking routine on multiple GPUs

    Directory of Open Access Journals (Sweden)

    Kierzynka Michal

    2011-05-01

    Background: Pairwise sequence alignment methods are widely used in biological research. The increasing number of sequences is perceived as one of the upcoming challenges for sequence alignment methods in the near future. To overcome this challenge, several GPU (Graphics Processing Unit) computing approaches have been proposed lately. These solutions show the great potential of the GPU platform, but in most cases address only the problem of sequence database scanning and computing the alignment score, whereas the alignment itself is omitted. Thus, the need arose to implement the global and semiglobal Needleman-Wunsch and Smith-Waterman algorithms with a backtracking procedure, which is needed to construct the alignment. Results: In this paper we present a solution that performs the alignment of every given sequence pair, a required step for progressive multiple sequence alignment methods as well as for DNA recognition at the DNA assembly stage. Performed tests show that the implementation, with performance up to 6.3 GCUPS on a single GPU for affine gap penalties, is very efficient in comparison to other CPU- and GPU-based solutions. Moreover, multiple-GPU support with load balancing makes the application very scalable. Conclusions: The article shows that the backtracking procedure of the sequence alignment algorithms may be designed to fit in with the GPU architecture. Therefore, our algorithm, apart from scores, is able to compute pairwise alignments. This opens a wide range of new possibilities, allowing other methods from the area of molecular biology to take advantage of the new computational architecture. Performed tests show that the efficiency of the implementation is excellent. Moreover, the speed of our GPU-based algorithms can be almost linearly increased when using more than one graphics card.
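
    The backtracking step that distinguishes this work from score-only GPU kernels can be illustrated with a plain serial Needleman-Wunsch reference (linear gap penalties for brevity; the paper's GPU version also covers semiglobal and Smith-Waterman variants with affine gaps):

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Global alignment with a full DP matrix plus backtracking,
    returning (score, aligned_a, aligned_b)."""
    n, m = len(a), len(b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            score[i][j] = max(score[i - 1][j - 1] + s,
                              score[i - 1][j] + gap,
                              score[i][j - 1] + gap)
    # Backtracking from the bottom-right corner reconstructs the alignment.
    ai, bi = [], []
    i, j = n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + (
                match if a[i - 1] == b[j - 1] else mismatch):
            ai.append(a[i - 1]); bi.append(b[j - 1]); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i - 1][j] + gap:
            ai.append(a[i - 1]); bi.append('-'); i -= 1
        else:
            ai.append('-'); bi.append(b[j - 1]); j -= 1
    return score[n][m], ''.join(reversed(ai)), ''.join(reversed(bi))
```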

  17. Design of multiple sequence alignment algorithms on parallel, distributed memory supercomputers.

    Science.gov (United States)

    Church, Philip C; Goscinski, Andrzej; Holt, Kathryn; Inouye, Michael; Ghoting, Amol; Makarychev, Konstantin; Reumann, Matthias

    2011-01-01

    The challenge of comparing two or more genomes that have undergone recombination and substantial amounts of segmental loss and gain has recently been addressed for small numbers of genomes. However, datasets of hundreds of genomes are now common and their sizes will only increase in the future. Multiple sequence alignment of hundreds of genomes remains an intractable problem due to quadratic increases in compute time and memory footprint. To date, most alignment algorithms are designed for commodity clusters without parallelism. Hence, we propose the design of a multiple sequence alignment algorithm on massively parallel, distributed memory supercomputers to enable research into comparative genomics on large data sets. Following the methodology of the sequential progressiveMauve algorithm, we design data structures including sequences and sorted k-mer lists on the IBM Blue Gene/P supercomputer (BG/P). Preliminary results show that we can reduce the memory footprint so that we can potentially align over 250 bacterial genomes on a single BG/P compute node. We verify our results on a dataset of E. coli, Shigella and S. pneumoniae genomes. Our implementation returns results matching those of the original algorithm, but in half the time and with a quarter of the memory footprint for scaffold building. In this study, we have laid the basis for multiple sequence alignment of large-scale datasets on a massively parallel, distributed memory supercomputer, thus enabling comparison of hundreds instead of a few genome sequences within reasonable time.

  18. Fitting-free algorithm for efficient quantification of collagen fiber alignment in SHG imaging applications.

    Science.gov (United States)

    Hall, Gunnsteinn; Liang, Wenxuan; Li, Xingde

    2017-10-01

    Collagen fiber alignment derived from second harmonic generation (SHG) microscopy images can be important for disease diagnostics. Image processing algorithms are needed to robustly quantify the alignment in images with high sensitivity and reliability. The Fourier transform (FT) magnitude, 2D power spectrum, and image autocorrelation have previously been used to extract fiber information from images by assuming a certain mathematical model (e.g., a Gaussian distribution of the fiber-related parameters) and fitting. The fitting process is slow and fails to converge when the data are not Gaussian. Herein we present an efficient constant-time deterministic algorithm which characterizes the symmetry of the FT magnitude image in terms of a single parameter, named the fiber alignment anisotropy R, ranging from 0 (randomly oriented fibers) to 1 (perfect alignment). This represents an important improvement of the technology and may bring us one step closer to utilizing it for various applications in real time. In addition, we present a digital image phantom-based framework for characterizing and validating the algorithm, as well as assessing its robustness against different perturbations.
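
    The paper's exact definition of R is not reproduced in the abstract; as a generic fitting-free sketch in the same spirit, one can measure how strongly the FT magnitude concentrates around one orientation via the length of the power-weighted mean doubled-angle vector:

```python
import numpy as np

def fiber_alignment_anisotropy(image):
    """Anisotropy of the FT magnitude as the length of the power-weighted
    mean doubled-angle orientation vector (doubling makes orientations
    pi-periodic).  Returns ~0 for an isotropic spectrum and ~1 when the
    spectral power lies along a single orientation."""
    f = np.fft.fftshift(np.abs(np.fft.fft2(image)))
    h, w = f.shape
    y, x = np.mgrid[0:h, 0:w]
    y = y - h // 2
    x = x - w // 2
    theta = np.arctan2(y, x)
    mask = (x != 0) | (y != 0)          # drop the DC component
    wgt = f[mask]
    c = np.sum(wgt * np.cos(2.0 * theta[mask]))
    s = np.sum(wgt * np.sin(2.0 * theta[mask]))
    return float(np.hypot(c, s) / np.sum(wgt))
```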

  19. BitPAl: a bit-parallel, general integer-scoring sequence alignment algorithm.

    Science.gov (United States)

    Loving, Joshua; Hernandez, Yozen; Benson, Gary

    2014-11-15

    Mapping of high-throughput sequencing data and other bulk sequence comparison applications have motivated a search for high-efficiency sequence alignment algorithms. The bit-parallel approach represents individual cells in an alignment scoring matrix as bits in computer words and emulates the calculation of scores by a series of logic operations composed of AND, OR, XOR, complement, shift and addition. Bit-parallelism has been successfully applied to the longest common subsequence (LCS) and edit-distance problems, producing fast algorithms in practice. We have developed BitPAl, a bit-parallel algorithm for general, integer-scoring global alignment. Integer-scoring schemes assign integer weights for match, mismatch and insertion/deletion. The BitPAl method uses structural properties in the relationship between adjacent scores in the scoring matrix to construct classes of efficient algorithms, each designed for a particular set of weights. In timed tests, we show that BitPAl runs 7-25 times faster than a standard iterative algorithm. Source code is freely available for download at http://lobstah.bu.edu/BitPAl/BitPAl.html. BitPAl is implemented in C and runs on all major operating systems. Supplementary data are available at Bioinformatics online. © The Author 2014. Published by Oxford University Press.
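
    BitPAl generalizes bit-parallel alignment to arbitrary integer weights; the flavor of the approach is easiest to see in the classic unit-cost special case, Myers' bit-vector algorithm for edit distance, sketched here in Python (whose unbounded integers stand in for multi-word bit vectors):

```python
def myers_edit_distance(pattern, text):
    """Myers' bit-vector algorithm: unit-cost edit distance computed in
    O(len(text)) word operations.  Pv/Mv encode, as bit vectors, where
    the current DP column increases/decreases relative to the previous
    row; score tracks the bottom cell of the column."""
    m = len(pattern)
    if m == 0:
        return len(text)
    mask = (1 << m) - 1
    peq = {}
    for i, ch in enumerate(pattern):
        peq[ch] = peq.get(ch, 0) | (1 << i)
    pv, mv, score = mask, 0, m
    high = 1 << (m - 1)
    for ch in text:
        eq = peq.get(ch, 0)
        xv = eq | mv
        xh = (((eq & pv) + pv) ^ pv) | eq
        ph = mv | (~(xh | pv) & mask)
        mh = pv & xh
        if ph & high:
            score += 1
        elif mh & high:
            score -= 1
        ph = ((ph << 1) | 1) & mask
        mh = (mh << 1) & mask
        pv = mh | (~(xv | ph) & mask)
        mv = ph & xv
    return score
```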

  20. An extensive assessment of network alignment algorithms for comparison of brain connectomes.

    Science.gov (United States)

    Milano, Marianna; Guzzi, Pietro Hiram; Tymofieva, Olga; Xu, Duan; Hess, Christofer; Veltri, Pierangelo; Cannataro, Mario

    2017-06-06

    Recently the study of the complex system of connections in neural systems, i.e. the connectome, has gained a central role in neurosciences. The modeling and analysis of connectomes are therefore a growing area. Here we focus on the representation of connectomes using graph theory formalisms. Macroscopic human brain connectomes are usually derived from neuroimages; the analyzed brains are co-registered in the image domain and brought to a common anatomical space. An atlas is then applied in order to define anatomically meaningful regions that will serve as the nodes of the network - a process referred to as parcellation. Atlas-based parcellations present some known limitations in cases of early brain development and abnormal anatomy. Consequently, it has recently been proposed to perform atlas-free random brain parcellation into nodes and align brains in the network space instead of the anatomical image space, as a way to deal with the unknown correspondences of the parcels. Such a process requires modeling of the brain using graph theory and the subsequent comparison of the structure of graphs. The latter step may be modeled as a network alignment (NA) problem. In this work, we first define the problem formally, then we test six existing state-of-the-art network aligners on diffusion MRI-derived brain networks. We compare the performance of the algorithms by assessing six topological measures, and we also evaluate the robustness of the algorithms to alterations of the dataset. The results confirm that NA algorithms may be applied in cases of atlas-free parcellation for a fully network-driven comparison of connectomes, and the analysis shows that MAGNA++ is the best global alignment algorithm. The paper presents a new analysis methodology that uses network alignment for validating atlas-free parcellation brain connectomes; the methodology has been tested on several brain datasets.

  1. Mirror position determination for the alignment of Cherenkov Telescopes

    Energy Technology Data Exchange (ETDEWEB)

    Adam, J. [TU Dortmund, Experimental Physics 5 Otto-Hahn-Str. 4, 44221 Dortmund (Germany); Ahnen, M.L. [ETH Zurich, Institute for Particle Physics Otto-Stern-Weg 5, 8093 Zurich (Switzerland); Baack, D. [TU Dortmund, Experimental Physics 5 Otto-Hahn-Str. 4, 44221 Dortmund (Germany); Balbo, M. [University of Geneva, ISDC Data Center for Astrophysics Chemin Ecogia 16, 1290 Versoix (Switzerland); Bergmann, M. [Universität Würzburg, Institute for Theoretical Physics and Astrophysics Emil-Fischer-Str. 31, 97074 Würzburg (Germany); Biland, A. [ETH Zurich, Institute for Particle Physics Otto-Stern-Weg 5, 8093 Zurich (Switzerland); Blank, M. [Universität Würzburg, Institute for Theoretical Physics and Astrophysics Emil-Fischer-Str. 31, 97074 Würzburg (Germany); Bretz, T. [ETH Zurich, Institute for Particle Physics Otto-Stern-Weg 5, 8093 Zurich (Switzerland); RWTH Aachen (Germany); Bruegge, K.A.; Buss, J. [TU Dortmund, Experimental Physics 5 Otto-Hahn-Str. 4, 44221 Dortmund (Germany); Dmytriiev, A. [University of Geneva, ISDC Data Center for Astrophysics Chemin Ecogia 16, 1290 Versoix (Switzerland); Domke, M. [TU Dortmund, Experimental Physics 5 Otto-Hahn-Str. 4, 44221 Dortmund (Germany); Dorner, D. [Universität Würzburg, Institute for Theoretical Physics and Astrophysics Emil-Fischer-Str. 31, 97074 Würzburg (Germany); FAU Erlangen (Germany); Einecke, S. [TU Dortmund, Experimental Physics 5 Otto-Hahn-Str. 4, 44221 Dortmund (Germany); Hempfling, C. [Universität Würzburg, Institute for Theoretical Physics and Astrophysics Emil-Fischer-Str. 31, 97074 Würzburg (Germany); and others

    2017-07-11

    Imaging Atmospheric Cherenkov Telescopes (IACTs) need imaging optics with large apertures to map the faint Cherenkov light emitted in extensive air showers onto their image sensors. Segmented reflectors fulfill these needs using mass-produced and lightweight mirror facets. However, as the overall image is the sum of the individual mirror facet images, alignment is important. Here we present a method to determine the mirror facet positions on a segmented reflector in a very direct way. Our method reconstructs the mirror facet positions from photographs and a laser distance meter measurement taken from the center of the image sensor plane to the center of each mirror facet. We use our method both to align the mirror facet positions and to feed the measured positions into our IACT simulation. We demonstrate our implementation on the 4 m First Geiger-mode Avalanche Cherenkov Telescope (FACT).

  2. Cellular and Nuclear Alignment Analysis for Determining Epithelial Cell Chirality

    Science.gov (United States)

    Raymond, Michael J.; Ray, Poulomi; Kaur, Gurleen; Singh, Ajay V.; Wan, Leo Q.

    2015-01-01

    Left-right (LR) asymmetry is a biologically conserved property in living organisms that can be observed in the asymmetrical arrangement of organs and tissues and in tissue morphogenesis, such as the directional looping of the gastrointestinal tract and heart. The expression of LR asymmetry in embryonic tissues can be appreciated in biased cell alignment. Previously, an in vitro chirality assay was reported in which multiple cells were patterned on microscale defined geometries and the cell phenotype-dependent LR asymmetry, or cell chirality, was quantified. However, the morphology and chirality of individual cells on micropatterned surfaces have not been well characterized. Here, a Python-based algorithm was developed to identify and quantify immunofluorescence-stained individual epithelial cells on multicellular patterns. This approach not only produces results similar to the image intensity gradient-based method reported previously, but also can capture properties of single cells such as area and aspect ratio. We also found that cell nuclei exhibited biased alignment. Around 35% of cells were misaligned; these were typically smaller and less elongated. This new imaging analysis approach is an effective tool for measuring single cell chirality inside multicellular structures and can potentially help unveil biophysical mechanisms underlying cellular chiral bias both in vitro and in vivo. PMID:26294010

  3. Application of a clustering-based peak alignment algorithm to analyze various DNA fingerprinting data.

    Science.gov (United States)

    Ishii, Satoshi; Kadota, Koji; Senoo, Keishi

    2009-09-01

    DNA fingerprinting analysis such as amplified ribosomal DNA restriction analysis (ARDRA), repetitive extragenic palindromic PCR (rep-PCR), ribosomal intergenic spacer analysis (RISA), and denaturing gradient gel electrophoresis (DGGE) are frequently used in various fields of microbiology. The major difficulty in DNA fingerprinting data analysis is the alignment of multiple peak sets. We report here an R program for a clustering-based peak alignment algorithm, and its application to analyze various DNA fingerprinting data, such as ARDRA, rep-PCR, RISA, and DGGE data. The results obtained by our clustering algorithm and by BioNumerics software showed high similarity. Since several R packages have been established to statistically analyze various biological data, the distance matrix obtained by our R program can be used for subsequent statistical analyses, some of which were not previously performed but are useful in DNA fingerprinting studies.
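
    The reported R program is not reproduced here; as a toy illustration of clustering-based peak alignment, pooled peak positions can be single-linkage clustered along the migration axis and converted into the presence/absence matrix used for downstream statistics:

```python
def align_peaks(samples, tol=1.0):
    """Toy clustering-based peak alignment: pool peak positions from all
    samples, single-linkage cluster along the 1D axis with gap threshold
    `tol`, and report cluster centers plus a presence/absence matrix."""
    pooled = sorted((pos, idx) for idx, peaks in enumerate(samples)
                    for pos in peaks)
    if not pooled:
        return [], [[] for _ in samples]
    clusters, current = [], [pooled[0]]
    for item in pooled[1:]:
        if item[0] - current[-1][0] <= tol:
            current.append(item)      # within tolerance: same aligned peak
        else:
            clusters.append(current)  # gap: start a new aligned peak
            current = [item]
    clusters.append(current)
    table = [[0] * len(clusters) for _ in samples]
    for k, cluster in enumerate(clusters):
        for _, idx in cluster:
            table[idx][k] = 1
    return [sum(p for p, _ in c) / len(c) for c in clusters], table
```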

  4. Evaluation of GMI and PMI diffeomorphic‐based demons algorithms for aligning PET and CT Images

    Science.gov (United States)

    Yang, Juan; Zhang, You; Yin, Yong

    2015-01-01

    Fusion of anatomic information in computed tomography (CT) and functional information in 18F-FDG positron emission tomography (PET) is crucial for accurate differentiation of tumor from benign masses, designing radiotherapy treatment plans and staging of cancer. Although current PET and CT images can be acquired from a combined 18F-FDG PET/CT scanner, the two acquisitions are scanned separately and take a long time, which may induce potential global and local positional errors caused by respiratory motion or organ peristalsis. Thus, registration (alignment) of whole-body PET and CT images is a prerequisite for their meaningful fusion. The purpose of this study was to assess the performance of two multimodal registration algorithms for aligning PET and CT images. The proposed gradient of mutual information (GMI)-based demons algorithm, which incorporated the GMI between two images as an external force to facilitate the alignment, was compared with the point-wise mutual information (PMI) diffeomorphic-based demons algorithm, whose external force was modified by replacing the image intensity difference in the diffeomorphic demons algorithm with the PMI to make it appropriate for multimodal image registration. Eight patients with esophageal cancer(s) were enrolled in this IRB-approved study. Whole-body PET and CT images were acquired from a combined 18F-FDG PET/CT scanner for each patient. The modified Hausdorff distance (dMH) was used to evaluate the registration accuracy of the two algorithms. Across all patients, the mean values and standard deviations (SDs) of dMH were 6.65 (± 1.90) voxels and 6.01 (± 1.90) voxels after the GMI-based demons and the PMI diffeomorphic-based demons registration algorithms, respectively. Preliminary results on oncological patients showed that the respiratory motion and organ peristalsis in PET/CT esophageal images could not be neglected, although a combined 18F-FDG PET/CT scanner was used for image acquisition. The PMI

  5. Evaluation of GMI and PMI diffeomorphic-based demons algorithms for aligning PET and CT Images.

    Science.gov (United States)

    Yang, Juan; Wang, Hongjun; Zhang, You; Yin, Yong

    2015-07-08

    Fusion of anatomic information in computed tomography (CT) and functional information in 18F-FDG positron emission tomography (PET) is crucial for accurate differentiation of tumor from benign masses, designing radiotherapy treatment plans and staging of cancer. Although current PET and CT images can be acquired from a combined 18F-FDG PET/CT scanner, the two acquisitions are scanned separately and take a long time, which may induce potential global and local positional errors caused by respiratory motion or organ peristalsis. Thus, registration (alignment) of whole-body PET and CT images is a prerequisite for their meaningful fusion. The purpose of this study was to assess the performance of two multimodal registration algorithms for aligning PET and CT images. The proposed gradient of mutual information (GMI)-based demons algorithm, which incorporated the GMI between two images as an external force to facilitate the alignment, was compared with the point-wise mutual information (PMI) diffeomorphic-based demons algorithm, whose external force was modified by replacing the image intensity difference in the diffeomorphic demons algorithm with the PMI to make it appropriate for multimodal image registration. Eight patients with esophageal cancer(s) were enrolled in this IRB-approved study. Whole-body PET and CT images were acquired from a combined 18F-FDG PET/CT scanner for each patient. The modified Hausdorff distance (dMH) was used to evaluate the registration accuracy of the two algorithms. Across all patients, the mean values and standard deviations (SDs) of dMH were 6.65 (± 1.90) voxels and 6.01 (± 1.90) voxels after the GMI-based demons and the PMI diffeomorphic-based demons registration algorithms, respectively. Preliminary results on oncological patients showed that the respiratory motion and organ peristalsis in PET/CT esophageal images could not be neglected, although a combined 18F-FDG PET/CT scanner was used for image acquisition. The PMI diffeomorphic-based demons

  6. A Secure Alignment Algorithm for Mapping Short Reads to Human Genome.

    Science.gov (United States)

    Zhao, Yongan; Wang, Xiaofeng; Tang, Haixu

    2018-05-09

    Elastic and inexpensive computing resources such as clouds have been recognized as a useful solution for analyzing massive human genomic data (e.g., acquired by using next-generation sequencers) in biomedical research. However, outsourcing human genome computation to public or commercial clouds has been hindered by privacy concerns: even a small number of human genome sequences contain sufficient information for identifying the donor of the genomic data. This issue cannot be directly addressed by existing security and cryptographic techniques (such as homomorphic encryption), because they are too heavyweight to carry out practical genome computation tasks on massive data. In this article, we present a secure algorithm to accomplish read mapping, one of the most basic tasks in human genomic data analysis, based on a hybrid cloud computing model. Compared with existing approaches, our algorithm delegates most computation to the public cloud, while performing only encryption and decryption on the private cloud, and thus makes maximum use of the computing resources of the public cloud. Furthermore, our algorithm reports results similar to those of nonsecure read mapping algorithms, including the alignment between reads and the reference genome, which can be directly used in downstream analysis such as the inference of genomic variations. We implemented the algorithm in C++ and Python on a hybrid cloud system, in which the public cloud uses an Apache Spark system.
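
    As a highly simplified sketch of the hybrid-cloud division of labor (invented helper names; the actual algorithm's encoding is more involved), the private side can replace each k-mer seed with a keyed hash so that the public side matches seeds, and votes for mapping positions, without ever seeing plaintext bases:

```python
import hashlib
import hmac

def hashed_seeds(sequence, k, key):
    """Private-cloud step: replace every k-mer by a keyed hash, hiding
    the underlying bases from the public cloud."""
    return [hmac.new(key, sequence[i:i + k].encode(), hashlib.sha256).hexdigest()
            for i in range(len(sequence) - k + 1)]

def match_position(read_seeds, ref_index):
    """Public-cloud step: look up each hashed seed in a hashed reference
    index (hash -> list of positions) and vote for the read's most
    supported mapping position."""
    votes = {}
    for offset, h in enumerate(read_seeds):
        for pos in ref_index.get(h, []):
            votes[pos - offset] = votes.get(pos - offset, 0) + 1
    return max(votes, key=votes.get) if votes else None
```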

  7. Development and Beam Tests of an Automatic Algorithm for Alignment of LHC Collimators with Embedded BPMs

    CERN Document Server

    Valentino, G; Gasior, M; Mirarchi, D; Nosych, A A; Redaelli, S; Salvachua, B; Assmann, R W; Sammut, N

    2013-01-01

    Collimators with embedded Beam Position Monitor (BPM) buttons will be installed in the LHC during the upcoming long shutdown period. During the subsequent operation, the BPMs will allow the collimator jaws to be kept centered around the beam trajectory. In this manner, the best possible beam cleaning efficiency and machine protection can be provided at unprecedented higher beam energies and intensities. A collimator alignment algorithm is proposed to center the jaws automatically around the beam. The algorithm is based on successive approximation, as the BPM measurements are affected by non-linearities, which vary with the distance between opposite buttons, as well as the difference between the beam and the jaw centers. The successful test results, as well as some considerations for eventual operation in the LHC are also presented.

  8. Successive approximation algorithm for beam-position-monitor-based LHC collimator alignment

    Directory of Open Access Journals (Sweden)

    Gianluca Valentino

    2014-02-01

    Collimators with embedded beam position monitor (BPM) button electrodes will be installed in the Large Hadron Collider (LHC) during the current long shutdown period. For the subsequent operation, BPMs will allow the collimator jaws to be kept centered around the beam orbit. In this manner, a better beam cleaning efficiency and machine protection can be provided at unprecedented higher beam energies and intensities. A collimator alignment algorithm is proposed to center the jaws automatically around the beam. The algorithm is based on successive approximation and takes into account a correction of the nonlinear BPM sensitivity to beam displacement and an asymmetry of the electronic channels processing the BPM electrode signals. A software implementation was tested with a prototype collimator in the Super Proton Synchrotron. This paper presents results of the tests along with some considerations for eventual operation in the LHC.

  9. Successive approximation algorithm for beam-position-monitor-based LHC collimator alignment

    Science.gov (United States)

    Valentino, Gianluca; Nosych, Andriy A.; Bruce, Roderik; Gasior, Marek; Mirarchi, Daniele; Redaelli, Stefano; Salvachua, Belen; Wollmann, Daniel

    2014-02-01

    Collimators with embedded beam position monitor (BPM) button electrodes will be installed in the Large Hadron Collider (LHC) during the current long shutdown period. For the subsequent operation, BPMs will allow the collimator jaws to be kept centered around the beam orbit. In this manner, a better beam cleaning efficiency and machine protection can be provided at unprecedented higher beam energies and intensities. A collimator alignment algorithm is proposed to center the jaws automatically around the beam. The algorithm is based on successive approximation and takes into account a correction of the nonlinear BPM sensitivity to beam displacement and an asymmetry of the electronic channels processing the BPM electrode signals. A software implementation was tested with a prototype collimator in the Super Proton Synchrotron. This paper presents results of the tests along with some considerations for eventual operation in the LHC.
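
    The successive-approximation idea can be sketched as a measure-move loop (a toy model: the simulated BPM under-reads the true offset, standing in for the nonlinear sensitivity, so one correction is not enough and the loop repeats):

```python
def center_jaws(measure_offset, move_jaws, tol=0.01, max_iter=10):
    """Successive approximation: read the BPM-derived offset, move the
    jaws by that (only approximately correct) amount, and repeat until
    the reading falls below `tol` or the iteration budget runs out."""
    for _ in range(max_iter):
        offset = measure_offset()
        if abs(offset) < tol:
            return offset
        move_jaws(offset)
    return measure_offset()
```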

  10. Parallel algorithms for large-scale biological sequence alignment on Xeon-Phi based clusters.

    Science.gov (United States)

    Lan, Haidong; Chan, Yuandong; Xu, Kai; Schmidt, Bertil; Peng, Shaoliang; Liu, Weiguo

    2016-07-19

    Computing alignments between two or more sequences is a common operation frequently performed in computational molecular biology. The continuing growth of biological sequence databases establishes the need for their efficient parallel implementation on modern accelerators. This paper presents new approaches to high performance biological sequence database scanning with the Smith-Waterman algorithm and the first stage of progressive multiple sequence alignment based on the ClustalW heuristic on a Xeon Phi-based compute cluster. Our approach uses a three-level parallelization scheme to take full advantage of the compute power available on this type of architecture; i.e. cluster-level data parallelism, thread-level coarse-grained parallelism, and vector-level fine-grained parallelism. Furthermore, we re-organize the sequence datasets and use Xeon Phi shuffle operations to improve I/O efficiency. Evaluations show that our method achieves a peak overall performance up to 220 GCUPS for scanning real protein sequence databanks on a single node consisting of two Intel E5-2620 CPUs and two Intel Xeon Phi 7110P cards. It also exhibits good scalability in terms of sequence length and size, and number of compute nodes for both database scanning and multiple sequence alignment. Furthermore, the achieved performance is highly competitive in comparison to optimized Xeon Phi and GPU implementations. Our implementation is available at https://github.com/turbo0628/LSDBS-mpi.
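As a reference point for the vectorized kernels described above, the scalar O(mn) Smith-Waterman dynamic programme that the paper parallelizes looks like this (scoring parameters are illustrative, not the paper's):

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Plain quadratic-time Smith-Waterman local alignment score.

    H[i][j] is the best score of a local alignment ending at a[i-1], b[j-1];
    the 0 in the max() lets alignments restart anywhere (locality).
    """
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best
```

The three-level parallel scheme distributes database sequences across nodes and threads, then vectorizes the inner anti-diagonal recurrence of exactly this matrix.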

  11. Optical derotator alignment using image-processing algorithm for tracking laser vibrometer measurements of rotating objects.

    Science.gov (United States)

    Khalil, Hossam; Kim, Dongkyu; Jo, Youngjoon; Park, Kyihwan

    2017-06-01

    An optical component called a Dove prism is used to rotate the laser beam of a laser-scanning vibrometer (LSV). This is called a derotator and is used for measuring the vibration of rotating objects. The main advantage of a derotator is that it works independently from an LSV. However, this device requires very specific alignment, in which the axis of the Dove prism must coincide with the rotational axis of the object. If the derotator is misaligned with the rotating object, the results of the vibration measurement are imprecise, owing to the alteration of the laser beam on the surface of the rotating object. In this study, a method is proposed for aligning a derotator with a rotating object through an image-processing algorithm that obtains the trajectory of a landmark attached to the object. After the trajectory of the landmark is mathematically modeled, the amount of derotator misalignment with respect to the object is calculated. The accuracy of the proposed method for aligning the derotator with the rotating object is experimentally tested.
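One way to recover the misalignment from the landmark trajectory, assuming the trajectory is circular, is an algebraic least-squares circle fit; the offset of the fitted centre from the derotator axis (taken as the image origin here) gives the misalignment. This is a sketch of the idea, not the authors' exact mathematical model:

```python
import numpy as np

def fit_circle(xs, ys):
    """Algebraic (Kasa) least-squares circle fit to landmark positions.

    Solves 2*cx*x + 2*cy*y + c = x^2 + y^2 in the least-squares sense,
    with radius r = sqrt(c + cx^2 + cy^2).
    """
    A = np.column_stack([2 * xs, 2 * ys, np.ones(len(xs))])
    b = xs ** 2 + ys ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    r = np.sqrt(c + cx ** 2 + cy ** 2)
    return cx, cy, r

# synthetic trajectory of a landmark on a rotating object whose rotational
# axis is offset by (0.8, -0.5) from the derotator axis at the origin
t = np.linspace(0, 2 * np.pi, 60, endpoint=False)
xs, ys = 0.8 + 3.0 * np.cos(t), -0.5 + 3.0 * np.sin(t)
cx, cy, r = fit_circle(xs, ys)
misalignment = np.hypot(cx, cy)
```

The fitted centre directly quantifies how far the Dove-prism axis sits from the object's rotational axis.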

  12. Computational complexity of algorithms for sequence comparison, short-read assembly and genome alignment.

    Science.gov (United States)

    Baichoo, Shakuntala; Ouzounis, Christos A

    A multitude of algorithms for sequence comparison, short-read assembly and whole-genome alignment have been developed in the general context of molecular biology, to support technology development for high-throughput sequencing, numerous applications in genome biology and fundamental research on comparative genomics. The computational complexity of these algorithms has been previously reported in original research papers, yet this often neglected property has not been reviewed previously in a systematic manner and for a wider audience. We provide a review of space and time complexity of key sequence analysis algorithms and highlight their properties in a comprehensive manner, in order to identify potential opportunities for further research in algorithm or data structure optimization. The complexity aspect is poised to become pivotal as we will be facing challenges related to the continuous increase of genomic data on unprecedented scales and complexity in the foreseeable future, when robust biological simulation at the cell level and above becomes a reality. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Optimal image alignment with random projections of manifolds: algorithm and geometric analysis.

    Science.gov (United States)

    Kokiopoulou, Effrosyni; Kressner, Daniel; Frossard, Pascal

    2011-06-01

    This paper addresses the problem of image alignment based on random measurements. Image alignment consists of estimating the relative transformation between a query image and a reference image. We consider the specific problem where the query image is provided in compressed form in terms of linear measurements captured by a vision sensor. We cast the alignment problem as a manifold distance minimization problem in the linear subspace defined by the measurements. The transformation manifold that represents synthesis of shift, rotation, and isotropic scaling of the reference image can be given in closed form when the reference pattern is sparsely represented over a parametric dictionary. We show that the objective function can then be decomposed as the difference of two convex functions (DC) in the particular case where the dictionary is built on Gaussian functions. Thus, the optimization problem becomes a DC program, which in turn can be solved globally by a cutting plane method. The quality of the solution is typically affected by the number of random measurements and the condition number of the manifold that describes the transformations of the reference image. We show that the curvature, which is closely related to the condition number, remains bounded in our image alignment problem, which means that the relative transformation between two images can be determined optimally in a reduced subspace.
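In generic difference-of-convex (DC) notation, the decomposition described above takes the following schematic form (a sketch only; the paper's g and h are built from the Gaussian-dictionary representation of the manifold distance):

```latex
% Objective split into a difference of two convex functions g and h:
\min_{\theta \in \Theta} \; f(\theta) \;=\; g(\theta) - h(\theta),
\qquad g,\, h \ \text{convex}.
% Cutting-plane (DC) iteration: linearise h at the current iterate \theta_k,
%   h(\theta) \;\ge\; h(\theta_k) + \langle \nabla h(\theta_k),\, \theta - \theta_k \rangle,
% and obtain \theta_{k+1} by solving the resulting convex subproblem
%   \theta_{k+1} = \arg\min_{\theta \in \Theta}\;
%     g(\theta) - h(\theta_k) - \langle \nabla h(\theta_k),\, \theta - \theta_k \rangle .
```

Each subproblem is convex, and accumulating the affine minorants of h is what makes the cutting-plane method globally convergent for DC programs.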

  14. SkyAlign: a portable, work-efficient skyline algorithm for multicore and GPU architectures

    DEFF Research Database (Denmark)

    Bøgh, Kenneth Sejdenfaden; Chester, Sean; Assent, Ira

    2016-01-01

    The skyline operator determines points in a multidimensional dataset that offer some optimal trade-off. State-of-the-art CPU skyline algorithms exploit quad-tree partitioning with complex branching to minimise the number of point-to-point comparisons. Branch-phobic GPU skyline algorithms rely on ...... native multicore state of the art on challenging workloads by an increasing margin as more cores and sockets are utilised....
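For reference, the skyline operator itself admits a short quadratic implementation (here with a minimising convention, e.g. price and distance); the point-to-point comparisons below are exactly what the partitioning schemes in the paper try to avoid:

```python
def dominates(p, q):
    """p dominates q if p is no worse in every dimension and strictly
    better in at least one (smaller is better here)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    """Quadratic reference implementation: keep the points that no
    other point dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

pts = [(1, 9), (3, 3), (2, 8), (5, 1), (4, 4), (6, 6)]
sky = skyline(pts)
```

(4, 4) and (6, 6) are dominated by (3, 3); the four remaining points form the optimal trade-off frontier.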

  15. Genetic Algorithm Phase Retrieval for the Systematic Image-Based Optical Alignment Testbed

    Science.gov (United States)

    Taylor, Jaime; Rakoczy, John; Steincamp, James

    2003-01-01

    Phase retrieval requires calculation of the real-valued phase of the pupil function from the image intensity distribution and characteristics of an optical system. Genetic algorithms were used to solve two one-dimensional phase retrieval problems. A GA successfully estimated the coefficients of a polynomial expansion of the phase when the number of coefficients was correctly specified. A GA also successfully estimated the multiple phases of a segmented optical system analogous to the seven-mirror Systematic Image-Based Optical Alignment (SIBOA) testbed located at NASA's Marshall Space Flight Center. The SIBOA testbed was developed to investigate phase retrieval techniques. Tip/tilt and piston motions of the mirrors accomplish phase corrections. A constant phase over each mirror can be achieved by an independent tip/tilt correction: the phase correction term can then be factored out of the Discrete Fourier Transform (DFT), greatly reducing computations.
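A toy GA for the first problem type, recovering polynomial phase coefficients. Note the simplification: the fitness here compares candidate phases to phase samples directly, whereas the paper's fitness is image-based (comparing simulated and measured intensity), so this is only a schematic stand-in for the GA machinery itself; all population settings are made up.

```python
import random

def fitness(coeffs, xs, observed):
    """Sum of squared errors between a candidate polynomial phase and data."""
    err = 0.0
    for x, y in zip(xs, observed):
        model = sum(c * x ** k for k, c in enumerate(coeffs))
        err += (model - y) ** 2
    return err

def ga_estimate(xs, observed, ncoef=3, pop=60, gens=150, seed=1):
    rng = random.Random(seed)
    population = [[rng.uniform(-2, 2) for _ in range(ncoef)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda c: fitness(c, xs, observed))
        elite = population[: pop // 4]                  # elitist selection
        children = []
        while len(children) < pop - len(elite):
            p, q = rng.sample(elite, 2)
            cut = rng.randrange(ncoef)                  # one-point crossover
            child = p[:cut] + q[cut:]
            i = rng.randrange(ncoef)                    # gaussian mutation
            child[i] += rng.gauss(0, 0.05)
            children.append(child)
        population = elite + children
    return min(population, key=lambda c: fitness(c, xs, observed))

xs = [i / 10 for i in range(11)]
true = [0.5, -1.0, 0.8]            # phase = 0.5 - 1.0 x + 0.8 x^2
obs = [sum(c * x ** k for k, c in enumerate(true)) for x in xs]
est = ga_estimate(xs, obs)
```

As the abstract notes, this only works when the number of coefficients is correctly specified; with `ncoef` wrong, the residual error stays large.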

  16. Fast global sequence alignment technique

    KAUST Repository

    Bonny, Mohamed Talal; Salama, Khaled N.

    2011-01-01

    fast alignment algorithm, called 'Alignment By Scanning' (ABS), to provide an approximate alignment of two DNA sequences. We compare our algorithm with the wellknown sequence alignment algorithms, the 'GAP' (which is heuristic) and the 'Needleman

  17. Determination of atomic cluster structure with cluster fusion algorithm

    DEFF Research Database (Denmark)

    Obolensky, Oleg I.; Solov'yov, Ilia; Solov'yov, Andrey V.

    2005-01-01

    We report an efficient scheme of global optimization, called cluster fusion algorithm, which has proved its reliability and high efficiency in determination of the structure of various atomic clusters.

  18. Determining OBS Instrument Orientations: A Comparison of Algorithms

    Science.gov (United States)

    Doran, A. K.; Laske, G.

    2015-12-01

    The alignment of the orientation of the horizontal seismometer components with the geographical coordinate system is critical for a wide variety of seismic analyses, but the traditional deployment method of ocean bottom seismometers (OBS) precludes knowledge of this parameter. Current techniques for determining the orientation predominantly rely on body and surface wave data recorded from teleseismic events with sufficiently large magnitudes. Both wave types experience lateral refraction between the source and receiver as a result of heterogeneity and anisotropy, and therefore the arrival angle of any one phase can significantly deviate from the great circle minor arc. We systematically compare the results and uncertainties obtained through current determination methods, as well as describe a new algorithm that uses body wave, surface wave, and differential pressure gauge data (where available) to invert for horizontal orientation. To start with, our method is based on the easily transportable computer code of Stachnik et al. (2012) that is publicly available through IRIS. A major addition is that we utilize updated global dispersion maps to account for lateral refraction, as was done by Laske (1995). We also make measurements in a wide range of frequencies, and analyze surface wave trains of repeat orbits. Our method has the advantage of requiring fewer total events to achieve high precision estimates, which is beneficial for OBS deployments that can be as short as weeks. Although the program is designed for the purpose of use with OBS instruments, it also works with standard land installations. We intend to provide the community with a program that is easy to use, requires minimal user input, and is optimized to work with data cataloged at the IRIS DMC.
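A sketch of the orientation search under strong simplifying assumptions: a single, linearly polarized synthetic arrival with a known backazimuth, and a reference trace standing in for the Hilbert-transformed vertical used with Rayleigh waves. The real algorithm combines many events, body and surface waves, dispersion corrections and DPG data; none of that is modelled here.

```python
import numpy as np

def estimate_orientation(n, e, reference, az_deg):
    """Grid search for the sensor misorientation angle (degrees).

    The correct rotation minimises energy on the transverse component;
    the sign of the radial-reference correlation resolves the
    180-degree ambiguity of the energy criterion.
    """
    best_theta, best_energy = 0.0, np.inf
    for theta in np.arange(0.0, 360.0, 0.5):
        a = np.radians(az_deg - theta)
        radial = n * np.cos(a) + e * np.sin(a)
        transverse = -n * np.sin(a) + e * np.cos(a)
        energy = transverse @ transverse
        if energy < best_energy and radial @ reference > 0:
            best_theta, best_energy = theta, energy
    return best_theta

# synthetic arrival from azimuth 120 deg recorded on a sensor that is
# misoriented by 40 deg, with a little white noise added
rng = np.random.default_rng(0)
t = np.linspace(0, 10, 500)
sig = np.sin(2 * np.pi * 0.5 * t) * np.exp(-0.2 * t)
true_theta, az = 40.0, 120.0
a0 = np.radians(az - true_theta)
n = sig * np.cos(a0) + rng.normal(0, 0.05, t.size)
e = sig * np.sin(a0) + rng.normal(0, 0.05, t.size)
est_theta = estimate_orientation(n, e, sig, az)
```

Averaging such estimates over many events, with dispersion-map corrections to each event's arrival angle, is what drives the precision quoted in the abstract.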

  19. An algorithm to determine backscattering ratio and single scattering albedo

    Digital Repository Service at National Institute of Oceanography (India)

    Suresh, T.; Desa, E.; Matondkar, S.G.P.; Mascarenhas, A.A.M.Q.; Nayak, S.R.; Naik, P.

    Algorithms to determine the inherent optical properties of water, backscattering probability and single scattering albedo at 490 and 676 nm from the apparent optical property, remote sensing reflectance are presented here. The measured scattering...

  20. Optimizing multiple sequence alignments using a genetic algorithm based on three objectives: structural information, non-gaps percentage and totally conserved columns.

    Science.gov (United States)

    Ortuño, Francisco M; Valenzuela, Olga; Rojas, Fernando; Pomares, Hector; Florido, Javier P; Urquiza, Jose M; Rojas, Ignacio

    2013-09-01

    Multiple sequence alignments (MSAs) are widely used approaches in bioinformatics to carry out other tasks such as structure predictions, biological function analyses or phylogenetic modeling. However, current tools usually provide partially optimal alignments, as each one is focused on specific biological features. Thus, the same set of sequences can produce different alignments, above all when sequences are less similar. Consequently, researchers and biologists do not agree about which is the most suitable way to evaluate MSAs. Recent evaluations tend to use more complex scores including further biological features. Among them, 3D structures are increasingly being used to evaluate alignments. Because structures are more conserved in proteins than sequences, scores with structural information are better suited to evaluate more distant relationships between sequences. The proposed multiobjective algorithm, based on the non-dominated sorting genetic algorithm, aims to jointly optimize three objectives: STRIKE score, non-gaps percentage and totally conserved columns. It was significantly assessed on the BAliBASE benchmark according to the Kruskal-Wallis test (P < 0.05). The algorithm also outperforms other aligners, such as ClustalW, Multiple Sequence Alignment Genetic Algorithm (MSA-GA), PRRP, DIALIGN, Hidden Markov Model Training (HMMT), Pattern-Induced Multi-sequence Alignment (PIMA), MULTIALIGN, Sequence Alignment Genetic Algorithm (SAGA), PILEUP, Rubber Band Technique Genetic Algorithm (RBT-GA) and Vertical Decomposition Genetic Algorithm (VDGA), according to the Wilcoxon signed-rank test (P < 0.05), with the advantage of being able to use fewer structures. Structural information is included within the objective function to evaluate more accurately the obtained alignments. The source code is available at http://www.ugr.es/~fortuno/MOSAStrE/MO-SAStrE.zip.
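Two of the three objectives named above are simple column statistics and can be stated exactly; a sketch follows (the STRIKE score requires structural data and is omitted):

```python
def non_gaps_percentage(alignment):
    """Percentage of alignment cells that are not gap characters."""
    cells = sum(len(row) for row in alignment)
    gaps = sum(row.count('-') for row in alignment)
    return 100.0 * (cells - gaps) / cells

def totally_conserved_columns(alignment):
    """Number of columns in which every sequence has the same residue
    and no sequence has a gap."""
    return sum(
        1 for col in zip(*alignment)
        if '-' not in col and len(set(col)) == 1
    )

aln = ["MK-LV", "MKALV", "MK-IV"]   # toy 3-sequence alignment
```

For `aln`, columns 1, 2 and 5 are totally conserved (3 columns), and 13 of 15 cells are non-gaps. The multiobjective GA searches for alignments that are Pareto-optimal across these objectives plus STRIKE.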

  1. Determination of the three-dimensional structure for weakly aligned biomolecules by NMR spectroscopy

    International Nuclear Information System (INIS)

    Shahkhatuni, Astghik A; Shahkhatuni, Aleksan G

    2002-01-01

    The key achievements and the potential of NMR spectroscopy for weakly aligned biomolecules are considered. Due to weak alignment, it becomes possible to determine a number of NMR parameters dependent on the orientation of biomolecules, which are averaged to zero in usual isotropic media. The addition of new orientational constraints to standard procedures of 3D structure determination markedly increases the achievable accuracy. The possibility of structure determination for biomolecules using only orientation-dependent parameters without invoking other NMR data is discussed. The methods of orientation, experimental techniques, and calculation methods are systematised. The main results obtained and the prospects of using NMR spectroscopy of weakly aligned systems to study different classes of biomolecules and to solve various problems of molecular biology are analysed. Examples of biomolecules whose structures have been determined using orientation-dependent parameters are given. The bibliography includes 508 references.

  2. Parameter selection for peak alignment in chromatographic sample profiling: Objective quality indicators and use of control samples

    NARCIS (Netherlands)

    Peters, S.; van Velzen, E.; Janssen, H.-G.

    2009-01-01

    In chromatographic profiling applications, peak alignment is often essential as most chromatographic systems exhibit small peak shifts over time. When using currently available alignment algorithms, there are several parameters that determine the outcome of the alignment process. Selecting the

  3. Simulation of beamline alignment operations

    International Nuclear Information System (INIS)

    Annese, C; Miller, M G.

    1999-01-01

    The CORBA-based Simulator was a Laboratory Directed Research and Development (LDRD) project that applied simulation techniques to explore critical questions about distributed control systems. The simulator project used a three-prong approach that studied object-oriented distribution tools, computer network modeling, and simulation of key control system scenarios. The National Ignition Facility's (NIF) optical alignment system was modeled to study control system operations. The alignment of NIF's 192 beamlines is a large complex operation involving more than 100 computer systems and 8000 mechanized devices. The alignment process is defined by a detailed set of procedures; however, many of the steps are not deterministic. The alignment steps for a poorly aligned component are similar to those of a nearly aligned component; however, additional operations/iterations are required to complete the process. Thus, the same alignment operations will require variable amounts of time to perform depending on the current alignment condition as well as other factors. Simulation of the alignment process is necessary to understand beamline alignment time requirements and how shared resources such as the Output Sensor and Target Alignment Sensor affect alignment efficiency. The simulation has provided alignment time estimates and other results based on documented alignment procedures and alignment experience gained in the laboratory. Computer communication time, mechanical hardware actuation times, image processing algorithm execution times, etc. have been experimentally determined and incorporated into the model. Previous analysis of alignment operations utilized average implementation times for all alignment operations. Resource sharing becomes rather simple to model when only average values are used. The time required to actually implement the many individual alignment operations will be quite dynamic. The simulation model estimates the time to complete an operation using

  4. Prediction of Antimicrobial Peptides Based on Sequence Alignment and Support Vector Machine-Pairwise Algorithm Utilizing LZ-Complexity

    Directory of Open Access Journals (Sweden)

    Xin Yi Ng

    2015-01-01

    Full Text Available This study concerns an attempt to establish a new method for predicting antimicrobial peptides (AMPs), which are important to the immune system. Recently, researchers have become interested in designing alternative drugs based on AMPs because they have found that a large number of bacterial strains have become resistant to available antibiotics. However, researchers have encountered obstacles in the AMP design process, as experiments to extract AMPs from protein sequences are costly and require a long set-up time. Therefore, a computational tool for AMP prediction is needed to resolve this problem. In this study, an integrated algorithm is newly introduced to predict AMPs by integrating sequence alignment and the support vector machine (SVM) pairwise algorithm utilizing LZ complexity. It was observed that, when all sequences in the training set are used, the sensitivity of the proposed algorithm is 95.28% in the jackknife test and 87.59% in the independent test, while the sensitivity obtained for the jackknife test and the independent test is 88.74% and 78.70%, respectively, when only the sequences that have less than 70% similarity are used. Applying the proposed algorithm may allow researchers to effectively predict AMPs from unknown protein peptide sequences with higher sensitivity.
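The LZ complexity at the core of the SVM-pairwise kernel can be sketched with the classic Lempel-Ziv (1976) exhaustive parsing, which counts the number of distinct phrases needed to build up a string. This toy counts phrases over a binary string; the exact normalisation used in the paper's similarity measure may differ.

```python
def lz_complexity(s):
    """Number of phrases in the Lempel-Ziv (1976) parsing of s.

    Each new phrase is the shortest substring starting at position i
    that has not already occurred (overlaps allowed) in the prefix
    preceding its last character.
    """
    i, c, n = 0, 0, len(s)
    while i < n:
        l = 1
        # extend the phrase while s[i:i+l] occurs in s[:i+l-1]
        while i + l <= n and s[i:i + l] in s[:i + l - 1]:
            l += 1
        c += 1
        i += l
    return c
```

For example, "0001101001000101" parses as 0|001|10|100|1000|101, giving a complexity of 6, while a constant string parses into just 2 phrases.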

  5. Agent Based Model in SAS Environment for Rail Transit System Alignment Determination

    Directory of Open Access Journals (Sweden)

    I Made Indradjaja Brunner

    2018-04-01

    Full Text Available A transit system had been proposed for the urban area of Honolulu. One consideration to be determined is the alignment of the transit system. The decision on the transit alignment will influence which areas will be served, who will benefit, and who will be impacted. Inputs for the decision are usually gathered through public meetings, where community members are shown a number of maps with pre-set routes. That approach could lead to a rather subjective decision by the community members. This paper attempts to discuss the utilization of a grid map in determining the best alignment for a rail transit system in Honolulu, Hawaii. It tries to use a more objective approach using various data derived from thematic maps. Overlaid maps are aggregated into a uniform 0.1-square mile vector based grid map system in GIS environment. The large dataset in the GIS environment is analyzed and manipulated using SAS software. The SAS procedure is applied to select the location of the alignment using a rational and deterministic approach. Grid cells that are superior compared to the others are selected based on several predefined criteria. The location of the dominant cells indicates a possible transit alignment. The SAS procedure is designed to allow a transient vector called the GUIDE (Grid Unit with Intelligent Directional Expertise) agent to analyze several cells in its vicinity and to move towards the cell with the highest value. Each time the agent lands on a cell, it leaves a mark. The chain of those marks shows the location of the transit alignment. This study shows that the combination of ArcGIS and SAS allows a robust analysis of spatial data and manipulation of its datasets, which can be used to run a simulation mimicking agent-based modelling. This study also opens up further study possibilities by increasing the number of factors analyzed by the agent, as well as creating a composite value of multiple factors.
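The GUIDE agent's greedy movement rule can be sketched on a toy grid. The 8-connected neighbourhood, the unvisited-cell restriction and the stopping rule are assumptions inferred from the description above, not details of the actual SAS procedure:

```python
def guide_path(grid, start):
    """Greedy agent: from start, repeatedly step to the highest-valued
    unvisited neighbour (8-connected); stop when no neighbour improves
    on the current cell. The visited trail is the candidate alignment."""
    rows, cols = len(grid), len(grid[0])
    r, c = start
    path = [start]
    visited = {start}
    while True:
        neighbours = [
            (r + dr, c + dc)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)
            and 0 <= r + dr < rows and 0 <= c + dc < cols
            and (r + dr, c + dc) not in visited
        ]
        if not neighbours:
            break
        nr, nc = max(neighbours, key=lambda p: grid[p[0]][p[1]])
        if grid[nr][nc] <= grid[r][c]:
            break            # no better cell nearby: alignment ends here
        r, c = nr, nc
        visited.add((r, c))
        path.append((r, c))
    return path

# toy composite-score grid; higher values are better alignment cells
scores = [
    [1, 2, 1, 0],
    [2, 5, 3, 1],
    [1, 4, 7, 2],
    [0, 2, 3, 9],
]
path = guide_path(scores, (0, 0))
```

On this grid the agent's trail runs along the diagonal of dominant cells, illustrating how the chain of marks traces a candidate alignment.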

  6. Application of genetic algorithm in radio ecological models parameter determination

    Energy Technology Data Exchange (ETDEWEB)

    Pantelic, G. [Institute of Occupatioanl Health and Radiological Protection ' Dr Dragomir Karajovic' , Belgrade (Serbia)

    2006-07-01

    The method of genetic algorithms was used to determine the biological half-life of 137 Cs in cow milk after the accident in Chernobyl. Methodologically genetic algorithms are based on the fact that natural processes tend to optimize themselves and therefore this method should be more efficient in providing optimal solutions in the modeling of radio ecological and environmental events. The calculated biological half-life of 137 Cs in milk is (32 ± 3) days and transfer coefficient from grass to milk is (0.019 ± 0.005). (authors)

  7. Application of genetic algorithm in radio ecological models parameter determination

    International Nuclear Information System (INIS)

    Pantelic, G.

    2006-01-01

    The method of genetic algorithms was used to determine the biological half-life of 137 Cs in cow milk after the accident in Chernobyl. Methodologically genetic algorithms are based on the fact that natural processes tend to optimize themselves and therefore this method should be more efficient in providing optimal solutions in the modeling of radio ecological and environmental events. The calculated biological half-life of 137 Cs in milk is (32 ± 3) days and transfer coefficient from grass to milk is (0.019 ± 0.005). (authors)
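A toy version of the genetic-algorithm fit, recovering a 32-day half-life from synthetic decay data. The population size, operators and single-parameter model are illustrative assumptions, not the study's actual setup (which also fits the grass-to-milk transfer coefficient):

```python
import random, math

def sse(half_life, times, activity, a0):
    """Sum of squared errors of an exponential-decay model."""
    lam = math.log(2) / half_life
    return sum((a0 * math.exp(-lam * t) - a) ** 2 for t, a in zip(times, activity))

def ga_half_life(times, activity, a0, pop=40, gens=100, seed=7):
    rng = random.Random(seed)
    population = [rng.uniform(1, 100) for _ in range(pop)]   # half-life in days
    for _ in range(gens):
        population.sort(key=lambda h: sse(h, times, activity, a0))
        elite = population[:10]                              # elitist selection
        population = elite + [
            max(1.0, rng.choice(elite) + rng.gauss(0, 1.0))  # gaussian mutation
            for _ in range(pop - 10)
        ]
    return min(population, key=lambda h: sse(h, times, activity, a0))

# synthetic milk-activity measurements generated with T1/2 = 32 days
times = list(range(0, 120, 10))
activity = [100 * math.exp(-math.log(2) * t / 32) for t in times]
est = ga_half_life(times, activity, 100)
```

The GA converges to the half-life that minimises the model-data misfit, mirroring the "natural processes tend to optimize themselves" rationale quoted in the abstract.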

  8. An algorithm for determining the rotation count of pulsars

    Science.gov (United States)

    Freire, Paulo C. C.; Ridolfi, Alessandro

    2018-06-01

    We present here a simple, systematic method for determining the correct global rotation count of a radio pulsar; an essential step for the derivation of an accurate phase-coherent ephemeris. We then build on this method by developing a new algorithm for determining the global rotational count for pulsars with sparse timing data sets. This makes it possible to obtain phase-coherent ephemerides for pulsars for which this has been impossible until now. As an example, we do this for PSR J0024-7205aa, an extremely faint Millisecond pulsar (MSP) recently discovered in the globular cluster 47 Tucanae. This algorithm has the potential to significantly reduce the number of observations and the amount of telescope time needed to follow up on new pulsar discoveries.
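A toy illustration of the rotation-count bookkeeping that phase-coherent timing relies on: with a trial spin frequency, each time of arrival (TOA) is assigned a nearest-integer rotation number, and small fractional-phase residuals indicate a consistent global count. The numbers are invented, and this is not the authors' algorithm for sparse data sets, only the underlying idea:

```python
def rotation_counts(toas, f0, t0=0.0):
    """Nearest-integer rotation counts and fractional-phase residuals
    for TOAs (seconds) under a trial spin frequency f0 (Hz).

    A correct global count keeps every residual well away from +-0.5
    turns; a wrong frequency scatters them across the full range."""
    counts = [round(f0 * (t - t0)) for t in toas]
    residuals = [f0 * (t - t0) - n for t, n in zip(toas, counts)]
    return counts, residuals

# pulses from a 250 Hz millisecond pulsar observed at sparse epochs
toas = [0.0, 1.20004, 7.40008, 33.00012]
counts, residuals = rotation_counts(toas, 250.0)
```

The slow drift of the residuals (0.01 turns per epoch here) is exactly the kind of trend a phase-coherent ephemeris then absorbs into refined spin parameters.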

  9. Tree Alignment Based on Needleman-Wunsch Algorithm for Sensor Selection in Smart Homes.

    Science.gov (United States)

    Chua, Sook-Ling; Foo, Lee Kien

    2017-08-18

    Activity recognition in smart homes aims to infer the particular activities of the inhabitant, the aim being to monitor their activities and identify any abnormalities, especially for those living alone. In order for a smart home to support its inhabitant, the recognition system needs to learn from observations acquired through sensors. One question that often arises is which sensors are useful and how many sensors are required to accurately recognise the inhabitant's activities? Many wrapper methods have been proposed and remain one of the popular evaluators for sensor selection due to its superior accuracy performance. However, they are prohibitively slow during the evaluation process and may run into the risk of overfitting due to the extent of the search. Motivated by this characteristic, this paper attempts to reduce the cost of the evaluation process and overfitting through tree alignment. The performance of our method is evaluated on two public datasets obtained in two distinct smart home environments.
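The Needleman-Wunsch dynamic programme that underlies the tree-alignment step can be sketched as follows; the scoring parameters are illustrative, not those used for sensor sequences in the paper:

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment score via the classic dynamic programme.

    F[i][j] is the best score aligning a[:i] with b[:j]; the first row
    and column encode leading gaps."""
    rows, cols = len(a) + 1, len(b) + 1
    F = [[0] * cols for _ in range(rows)]
    for i in range(rows):
        F[i][0] = i * gap
    for j in range(cols):
        F[0][j] = j * gap
    for i in range(1, rows):
        for j in range(1, cols):
            F[i][j] = max(
                F[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch),
                F[i - 1][j] + gap,
                F[i][j - 1] + gap,
            )
    return F[-1][-1]
```

Unlike local alignment, every symbol of both sequences must be accounted for, which is what makes the score usable as a whole-sequence similarity for comparing sensor event streams.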

  10. Can a semi-automated surface matching and principal axis-based algorithm accurately quantify femoral shaft fracture alignment in six degrees of freedom?

    Science.gov (United States)

    Crookshank, Meghan C; Beek, Maarten; Singh, Devin; Schemitsch, Emil H; Whyne, Cari M

    2013-07-01

    Accurate alignment of femoral shaft fractures treated with intramedullary nailing remains a challenge for orthopaedic surgeons. The aim of this study is to develop and validate a cone-beam CT-based, semi-automated algorithm to quantify the malalignment in six degrees of freedom (6DOF) using a surface matching and principal axes-based approach. Complex comminuted diaphyseal fractures were created in nine cadaveric femora and cone-beam CT images were acquired (27 cases total). Scans were cropped and segmented using intensity-based thresholding, producing superior, inferior and comminution volumes. Cylinders were fit to estimate the long axes of the superior and inferior fragments. The angle and distance between the two cylindrical axes were calculated to determine flexion/extension and varus/valgus angulation and medial/lateral and anterior/posterior translations, respectively. Both surfaces were unwrapped about the cylindrical axes. Three methods of matching the unwrapped surface for determination of periaxial rotation were compared based on minimizing the distance between features. The calculated corrections were compared to the input malalignment conditions. All 6DOF were calculated to within current clinical tolerances for all but two cases. This algorithm yielded accurate quantification of malalignment of femoral shaft fractures for fracture gaps up to 60 mm, based on a single CBCT image of the fractured limb. Copyright © 2012 IPEM. Published by Elsevier Ltd. All rights reserved.
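Once the two cylinder axes are fitted, the angulation and translation steps described above reduce to vector geometry. A minimal sketch with a made-up 10-degree example (not the paper's data, and without the surface-unwrapping step used for periaxial rotation):

```python
import numpy as np

def axis_alignment(p1, d1, p2, d2):
    """Angle (degrees) between two cylinder axes, plus the offset of a
    point on axis 2 resolved perpendicular to axis 1 (translation)."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    # abs() makes the result independent of axis direction convention
    angle = np.degrees(np.arccos(np.clip(abs(d1 @ d2), -1.0, 1.0)))
    v = np.asarray(p2, float) - np.asarray(p1, float)
    offset = v - (v @ d1) * d1        # component perpendicular to axis 1
    return angle, np.linalg.norm(offset)

# superior fragment along z; inferior fragment tilted 10 deg in the
# x-z plane and translated 5 mm laterally
ang = np.radians(10)
angle, offset = axis_alignment(
    [0, 0, 0], np.array([0.0, 0.0, 1.0]),
    [5, 0, 0], np.array([np.sin(ang), 0.0, np.cos(ang)]),
)
```

Decomposing the perpendicular offset into anatomical directions then yields the medial/lateral and anterior/posterior translations separately.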

  11. Location and Position Determination Algorithm For Humanoid Soccer Robot

    Directory of Open Access Journals (Sweden)

    Oei Kurniawan Utomo

    2016-03-01

    Full Text Available The algorithm of location and position determination was designed for a humanoid soccer robot. The robots have to be able to control the ball effectively on the field of the Indonesian Robot Soccer Competition, which has a size of 900 cm x 600 cm. The algorithm of location and position determination uses parameters such as the goalpost's thickness, the compass value, and the robot's head servo value. The goalpost's thickness is detected using the centre-of-gravity method. The width of the detected goalpost is analyzed using the principles of camera geometry to determine the distance between the robot and the goalpost. The tangent value of the head servo's tilt angle is used to determine the distance between the robot and the ball. The robot-goalpost and robot-ball distances are processed with the difference of the head servo's pan angle and the compass value using trigonometric formulas to determine the coordinates of the robot and the ball in the Cartesian coordinate system.
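The camera-geometry and tilt-angle steps described above can be sketched as follows. The focal length, post width, camera height and goalpost coordinates are all made-up values for illustration, not the robot's calibration:

```python
import math

def goal_distance(real_width_cm, focal_px, width_px):
    """Pinhole-camera relation: distance = f * W / w, where w is the
    apparent width of the goalpost in pixels."""
    return focal_px * real_width_cm / width_px

def ball_distance(camera_height_cm, tilt_deg):
    """Ground distance to the ball from the head servo's tilt angle
    (tangent relation with the known camera height)."""
    return camera_height_cm / math.tan(math.radians(tilt_deg))

def robot_position(goal_xy, dist, bearing_deg):
    """Robot coordinates from a known goalpost position, the measured
    distance and the absolute bearing (compass + head pan) to the post."""
    b = math.radians(bearing_deg)
    return goal_xy[0] - dist * math.cos(b), goal_xy[1] - dist * math.sin(b)

d_goal = goal_distance(10.0, 600.0, 20.0)   # 10 cm post seen as 20 px
d_ball = ball_distance(45.0, 45.0)          # camera 45 cm up, 45 deg tilt
x, y = robot_position((450.0, 300.0), d_goal, 0.0)
```

Combining two such bearings (or a bearing plus the field geometry) pins down the robot on the 900 cm x 600 cm pitch.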

  12. Band Alignment Determination of Two-Dimensional Heterojunctions and Their Electronic Applications

    KAUST Repository

    Chiu, Ming-Hui

    2018-05-09

    Two-dimensional (2D) layered materials such as MoS2 have been recognized as high on-off ratio semiconductors which are promising candidates for electronic and optoelectronic devices. In addition to the use of individual 2D materials, the accelerated field of 2D heterostructures enables even greater functionalities. Device designs differ, and they are strongly controlled by the electronic band alignment. For example, photovoltaic cells require type II heterostructures for light harvesting, and light-emitting diodes benefit from multiple quantum wells with the type I band alignment for high emission efficiency. The vertical tunneling field-effect transistor for next-generation electronics depends on nearly broken-gap band alignment for boosting its performance. To tailor these 2D layered materials toward possible future applications, the understanding of 2D heterostructure band alignment becomes critically important. In the first part of this thesis, we discuss the band alignment of 2D heterostructures. To do so, we firstly study the interlayer coupling between two dissimilar 2D materials. We conclude that a post-anneal process could enhance the interlayer coupling of as-transferred 2D heterostructures, and heterostructural stacking imposes similar symmetry changes as homostructural stacking. Later, we precisely determine the quasiparticle bandgap and band alignment of the MoS2/WSe2 heterostructure by using scanning tunneling microscopy/spectroscopy (STM/S) and micron-beam X-ray photoelectron spectroscopy (μ-XPS) techniques. Lastly, we prove that the band alignment of 2D heterojunctions can be accurately predicted by Anderson's model, which has previously failed to predict conventional bulk heterostructures. In the second part of this thesis, we develop a new Chemical Vapor Deposition (CVD) method capable of precisely controlling the growth area of p- and n-type transition metal dichalcogenides (TMDCs) and further form lateral or vertical 2D heterostructures. This
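Anderson's model mentioned above predicts band offsets directly from vacuum-referenced electron affinities and band gaps. A sketch with approximate, illustrative literature values (the thesis determines the actual alignments experimentally with STM/S and μ-XPS):

```python
def band_alignment(ea1, gap1, ea2, gap2):
    """Classify a heterojunction with Anderson's rule.

    Energies in eV relative to vacuum: CBM = -EA, VBM = -EA - Eg.
    Type I: one gap nested inside the other (straddling).
    Type II: both band edges shift the same way (staggered).
    Type III: the gaps do not overlap at all (broken gap)."""
    cbm1, vbm1 = -ea1, -ea1 - gap1
    cbm2, vbm2 = -ea2, -ea2 - gap2
    if vbm1 > cbm2 or vbm2 > cbm1:
        return "type III (broken gap)"
    if (cbm2 < cbm1) == (vbm2 < vbm1) and cbm1 != cbm2:
        return "type II (staggered)"
    return "type I (straddling)"

# rough monolayer values (EA, gap) for MoS2 and WSe2 - illustrative only
kind = band_alignment(4.2, 2.1, 3.5, 2.0)
```

With these inputs the MoS2/WSe2 pair comes out type II (staggered), the alignment that makes it attractive for light harvesting, consistent with the discussion above.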

  13. SHARAKU: an algorithm for aligning and clustering read mapping profiles of deep sequencing in non-coding RNA processing.

    Science.gov (United States)

    Tsuchiya, Mariko; Amano, Kojiro; Abe, Masaya; Seki, Misato; Hase, Sumitaka; Sato, Kengo; Sakakibara, Yasubumi

    2016-06-15

    Deep sequencing of the transcripts of regulatory non-coding RNA generates footprints of post-transcriptional processes. After obtaining sequence reads, the short reads are mapped to a reference genome, and specific mapping patterns can be detected called read mapping profiles, which are distinct from random non-functional degradation patterns. These patterns reflect the maturation processes that lead to the production of shorter RNA sequences. Recent next-generation sequencing studies have revealed not only the typical maturation process of miRNAs but also the various processing mechanisms of small RNAs derived from tRNAs and snoRNAs. We developed an algorithm termed SHARAKU to align two read mapping profiles of next-generation sequencing outputs for non-coding RNAs. In contrast with previous work, SHARAKU incorporates the primary and secondary sequence structures into an alignment of read mapping profiles to allow for the detection of common processing patterns. Using a benchmark simulated dataset, SHARAKU exhibited superior performance to previous methods for correctly clustering the read mapping profiles with respect to 5'-end processing and 3'-end processing from degradation patterns and in detecting similar processing patterns in deriving the shorter RNAs. Further, using experimental data of small RNA sequencing for the common marmoset brain, SHARAKU succeeded in identifying the significant clusters of read mapping profiles for similar processing patterns of small derived RNA families expressed in the brain. The source code of our program SHARAKU is available at http://www.dna.bio.keio.ac.jp/sharaku/, and the simulated dataset used in this work is available at the same link. Accession code: The sequence data from the whole RNA transcripts in the hippocampus of the left brain used in this work is available from the DNA DataBank of Japan (DDBJ) Sequence Read Archive (DRA) under the accession number DRA004502. 
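SHARAKU's actual method incorporates primary sequence and secondary structure into the alignment; as a much simpler illustration of the core idea of aligning two read mapping profiles while allowing local stretching, a dynamic-time-warping sketch (not the authors' algorithm; `dtw_profiles` is a hypothetical name) could look like:

```python
import numpy as np

def dtw_profiles(p, q):
    """Dynamic-time-warping distance between two read-coverage profiles.

    Allows local stretching along the sequence coordinate, so profiles of
    slightly different lengths with the same processing shape score low.
    """
    n, m = len(p), len(q)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(p[i - 1] - q[j - 1])
            # extend the warping path by a match, insertion, or deletion
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Profiles with the same shape but stretched ends (e.g. `[0, 1, 0]` vs `[0, 1, 1, 0]`) receive distance zero under this measure.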

  14. Internal alignment and position resolution of the silicon tracker of DAMPE determined with orbit data

    Science.gov (United States)

    Tykhonov, A.; Ambrosi, G.; Asfandiyarov, R.; Azzarello, P.; Bernardini, P.; Bertucci, B.; Bolognini, A.; Cadoux, F.; D'Amone, A.; De Benedittis, A.; De Mitri, I.; Di Santo, M.; Dong, Y. F.; Duranti, M.; D'Urso, D.; Fan, R. R.; Fusco, P.; Gallo, V.; Gao, M.; Gargano, F.; Garrappa, S.; Gong, K.; Ionica, M.; La Marra, D.; Lei, S. J.; Li, X.; Loparco, F.; Marsella, G.; Mazziotta, M. N.; Peng, W. X.; Qiao, R.; Salinas, M. M.; Surdo, A.; Vagelli, V.; Vitillo, S.; Wang, H. Y.; Wang, J. Z.; Wang, Z. M.; Wu, D.; Wu, X.; Zhang, F.; Zhang, J. Y.; Zhao, H.; Zimmer, S.

    2018-06-01

    The DArk Matter Particle Explorer (DAMPE) is a space-borne particle detector designed to probe electrons and gamma-rays in the few GeV to 10 TeV energy range, as well as cosmic-ray proton and nuclei components between 10 GeV and 100 TeV. The silicon-tungsten tracker-converter is a crucial component of DAMPE. It allows the direction of incoming photons converting into electron-positron pairs to be estimated, and the trajectory and charge (Z) of cosmic-ray particles to be identified. It consists of 768 silicon micro-strip sensors assembled in 6 double layers with a total active area of 6.6 m2. Silicon planes are interleaved with three layers of tungsten plates, resulting in about one radiation length of material in the tracker. Internal alignment parameters of the tracker have been determined on orbit, with non-showering protons and helium nuclei. We describe the alignment procedure and present the position resolution and alignment stability measurements.

  15. Classical Methods and Calculation Algorithms for Determining Lime Requirements

    Directory of Open Access Journals (Sweden)

    André Guarçoni

    The methods developed for determination of lime requirements (LR) are based on widely accepted principles. However, the formulas used for calculation have evolved little over recent decades, and in some cases there are indications of their inadequacy. The aim of this study was to compare the lime requirements calculated by three classic formulas and three algorithms, identifying those most appropriate for supplying Ca and Mg to coffee plants with the least risk of causing overliming. The database used contained 600 soil samples collected in coffee plantings. The LR was estimated by the methods of base saturation, neutralization of Al3+, and elevation of Ca2+ and Mg2+ contents (two formulas), and by the three calculation algorithms. Averages of the lime requirements were compared, determining the frequency distribution of the 600 lime requirements (LR) estimated through each calculation method. In soils with low cation exchange capacity at pH 7, the base saturation method may fail to adequately supply the plants with Ca and Mg in many situations, while the method of Al3+ neutralization and elevation of Ca2+ and Mg2+ contents can result in the calculation of application rates that will increase the pH above the suitable range. Among the methods studied for calculating lime requirements, the algorithm that predicts reaching a defined base saturation, with adequate Ca and Mg supply and the maximum application rate limited to the H+Al value, proved to be the most efficient calculation method, and it can be recommended for use in numerous crop conditions.
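The base saturation method mentioned above can be sketched as a short calculation (a hedged illustration, not the paper's code; the function name and the assumption of the standard 0-20 cm incorporation layer with 100% relative neutralizing power are ours):

```python
def lime_requirement_base_saturation(V1, V2, T, prnt=100.0):
    """Lime requirement (t/ha) by the base-saturation method.

    V1: current base saturation (%), V2: target base saturation (%),
    T: cation exchange capacity at pH 7 (cmolc/dm3),
    prnt: relative neutralizing power of the liming material (%).
    Assumes incorporation into the standard 0-20 cm soil layer.
    """
    nc = (V2 - V1) * T / 100.0          # CaCO3-equivalent needed
    return max(nc, 0.0) * (100.0 / prnt)  # never recommend negative lime
```

For example, raising saturation from 40% to 60% on a soil with T = 8 cmolc/dm3 gives 1.6 t/ha; a soil already above target gives zero, which reflects the overliming concern raised in the abstract.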

  16. Energy band alignment at ferroelectric/electrode interface determined by photoelectron spectroscopy

    International Nuclear Information System (INIS)

    Chen Feng; Wu Wen-Bin; Li Shun-Yi; Klein Andreas

    2014-01-01

    The most important interface-related quantities determined by band alignment are the barrier heights for charge transport, given by the Fermi level position at the interface. Taking Pb(Zr,Ti)O3 (PZT) as a typical ferroelectric material and applying X-ray photoelectron spectroscopy (XPS), we briefly review the interface formation and barrier heights at the interfaces between PZT and electrodes made of various metals or conductive oxides. Polarization dependence of the Schottky barrier height at a ferroelectric/electrode interface is also directly observed using XPS. (topical review - magnetism, magnetic materials, and interdisciplinary research)

  17. Determination of band alignment in the single-layer MoS2/WSe2 heterojunction

    KAUST Repository

    Chiu, Ming-Hui

    2015-07-16

    The emergence of two-dimensional electronic materials has stimulated proposals of novel electronic and photonic devices based on the heterostructures of transition metal dichalcogenides. Here we report the determination of band offsets in the heterostructures of transition metal dichalcogenides by using microbeam X-ray photoelectron spectroscopy and scanning tunnelling microscopy/spectroscopy. We determine a type-II alignment between MoS2 and WSe2 with a valence band offset value of 0.83 eV and a conduction band offset of 0.76 eV. First-principles calculations show that in this heterostructure with dissimilar chalcogen atoms, the electronic structures of WSe2 and MoS2 are well retained in their respective layers due to a weak interlayer coupling. Moreover, a valence band offset of 0.94 eV is obtained from density functional theory, consistent with the experimental determination.

  18. Determination of band alignment in the single-layer MoS2/WSe2 heterojunction

    KAUST Repository

    Chiu, Ming-Hui; Zhang, Chendong; Shiu, Hung-Wei; Chuu, Chih-Piao; Chen, Chang-Hsiao; Chang, Chih-Yuan S.; Chen, Chia-Hao; Chou, Mei-Yin; Shih, Chih-Kang; Li, Lain-Jong

    2015-01-01

    The emergence of two-dimensional electronic materials has stimulated proposals of novel electronic and photonic devices based on the heterostructures of transition metal dichalcogenides. Here we report the determination of band offsets in the heterostructures of transition metal dichalcogenides by using microbeam X-ray photoelectron spectroscopy and scanning tunnelling microscopy/spectroscopy. We determine a type-II alignment between MoS2 and WSe2 with a valence band offset value of 0.83 eV and a conduction band offset of 0.76 eV. First-principles calculations show that in this heterostructure with dissimilar chalcogen atoms, the electronic structures of WSe2 and MoS2 are well retained in their respective layers due to a weak interlayer coupling. Moreover, a valence band offset of 0.94 eV is obtained from density functional theory, consistent with the experimental determination.

  19. ABS: Sequence alignment by scanning

    KAUST Repository

    Bonny, Mohamed Talal

    2011-08-01

    Sequence alignment is an essential tool in almost any computational biology research. It processes large database sequences and is a heavy consumer of computation time. Heuristic algorithms are used to get approximate but fast results. We introduce a fast alignment algorithm, called Alignment By Scanning (ABS), to provide an approximate alignment of two DNA sequences. We compare our algorithm with two well-known alignment algorithms, FASTA (which is heuristic) and Needleman-Wunsch (which is optimal). The proposed algorithm achieves up to 76% enhancement in alignment score when compared with the FASTA algorithm. The evaluations are conducted using different lengths of DNA sequences. © 2011 IEEE.
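For reference, the optimal baseline the abstract compares against, Needleman-Wunsch global alignment, can be written in a few lines. This is a generic textbook sketch with illustrative scoring parameters, not the ABS algorithm itself:

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Optimal global alignment score via dynamic programming.

    O(len(a) * len(b)) time, O(len(b)) memory (only two DP rows kept).
    """
    n, m = len(a), len(b)
    prev = [j * gap for j in range(m + 1)]  # DP row for the empty prefix of a
    for i in range(1, n + 1):
        curr = [i * gap] + [0] * m
        for j in range(1, m + 1):
            diag = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            curr[j] = max(diag,            # align a[i-1] with b[j-1]
                          prev[j] + gap,   # gap in b
                          curr[j - 1] + gap)  # gap in a
        prev = curr
    return prev[m]
```

With this scoring, `needleman_wunsch("ACGT", "AGT")` scores three matches minus one gap, i.e. 1; heuristics such as FASTA and ABS trade away this guaranteed optimum for speed.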

  20. ABS: Sequence alignment by scanning

    KAUST Repository

    Bonny, Mohamed Talal; Salama, Khaled N.

    2011-01-01

    Sequence alignment is an essential tool in almost any computational biology research. It processes large database sequences and is a heavy consumer of computation time. Heuristic algorithms are used to get approximate but fast results. We introduce a fast alignment algorithm, called Alignment By Scanning (ABS), to provide an approximate alignment of two DNA sequences. We compare our algorithm with two well-known alignment algorithms, FASTA (which is heuristic) and Needleman-Wunsch (which is optimal). The proposed algorithm achieves up to 76% enhancement in alignment score when compared with the FASTA algorithm. The evaluations are conducted using different lengths of DNA sequences. © 2011 IEEE.

  1. Thunderstorm Algorithm for Determining Unit Commitment in Power System Operation

    Directory of Open Access Journals (Sweden)

    Arif Nur Afandi

    2016-12-01

    Solving the unit commitment problem is an important task in power system operation for deciding a balanced power production between various types of generating units under technical constraints and environmental limitations. This paper presents a new intelligent computation method, called the Thunderstorm Algorithm (TA), for searching for the optimal solution of the integrated economic and emission dispatch (IEED) problem as the operational assessment for determining unit commitment. A simulation using the IEEE-62 bus system showed that TA has smooth convergence and is applicable for solving the IEED problem. The IEED’s solution is associated with the total fuel consumption and pollutant emission. The proposed TA method seems to be a viable new approach for finding the optimal solution of the IEED problem.

  2. Antares beam-alignment-system performance

    International Nuclear Information System (INIS)

    Appert, Q.D.; Bender, S.C.

    1983-01-01

    The beam alignment system for the 24-beam-sector Antares CO2 fusion laser automatically aligns more than 200 optical elements. A visible-wavelength alignment technique is employed which uses a telescope/TV system to view point-light sources appropriately located down the beamline. The centroids of the light spots are determined by a video tracker, which generates error signals used by the computer control system to move appropriate mirrors in a closed-loop system. Final touch-up alignment is accomplished by projecting a CO2 alignment laser beam through the system and sensing its position at the target location. The techniques and control algorithms employed have resulted in alignment accuracies exceeding design requirements. By employing video processing to determine the centroids of diffraction images and by averaging over multiple TV frames, we achieve alignment accuracies better than 0.1 times system diffraction limits in the presence of air turbulence.
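The centroid computation at the heart of such a video tracker can be illustrated with a minimal intensity-weighted estimate (an illustrative sketch under our own assumptions, not the Antares implementation; `spot_centroid` and the thresholding scheme are hypothetical). Averaging the result over multiple frames reduces turbulence-induced jitter, as the abstract describes:

```python
import numpy as np

def spot_centroid(frame, threshold=0.0):
    """Intensity-weighted centroid (x, y) of a bright spot in a 2-D frame.

    Pixels at or below `threshold` are zeroed to suppress background
    before computing the first moments of the intensity distribution.
    """
    img = np.where(frame > threshold, frame, 0.0)
    total = img.sum()
    if total == 0:
        raise ValueError("no signal above threshold")
    ys, xs = np.indices(img.shape)
    return (xs * img).sum() / total, (ys * img).sum() / total
```

Because the centroid is a weighted average over many pixels, its precision can be a small fraction of a pixel, which is how sub-diffraction-limit alignment accuracy becomes achievable.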

  3. Global search in photoelectron diffraction structure determination using genetic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Viana, M L [Departamento de Fisica, Icex, UFMG, Belo Horizonte, Minas Gerais (Brazil); Muino, R Diez [Donostia International Physics Center DIPC, Paseo Manuel de Lardizabal 4, 20018 San Sebastian (Spain); Soares, E A [Departamento de Fisica, Icex, UFMG, Belo Horizonte, Minas Gerais (Brazil); Hove, M A Van [Department of Physics and Materials Science, City University of Hong Kong, Hong Kong (China); Carvalho, V E de [Departamento de Fisica, Icex, UFMG, Belo Horizonte, Minas Gerais (Brazil)

    2007-11-07

    Photoelectron diffraction (PED) is an experimental technique widely used to perform structural determinations of solid surfaces. Similarly to low-energy electron diffraction (LEED), structural determination by PED requires a fitting procedure between the experimental intensities and theoretical results obtained through simulations. Multiple scattering has been shown to be an effective approach for making such simulations. The quality of the fit can be quantified through the so-called R-factor. Therefore, the fitting procedure is, indeed, an R-factor minimization problem. However, the topography of the R-factor as a function of the structural and non-structural surface parameters to be determined is complex, and the task of finding the global minimum becomes tough, particularly for complex structures in which many parameters have to be adjusted. In this work we investigate the applicability of the genetic algorithm (GA) global optimization method to this problem. The GA is based on the evolution of species, and makes use of concepts such as crossover, elitism and mutation to perform the search. We show results of its application in the structural determination of three different systems: the Cu(111) surface through the use of energy-scanned experimental curves; the Ag(110)-c(2 x 2)-Sb system, in which a theory-theory fit was performed; and the Ag(111) surface for which angle-scanned experimental curves were used. We conclude that the GA is a highly efficient method to search for global minima in the optimization of the parameters that best fit the experimental photoelectron diffraction intensities to the theoretical ones.
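A real-coded genetic algorithm of the kind described, combining selection, crossover, mutation, and elitism, can be sketched generically. This is not the authors' implementation: `ga_minimize`, the operators, and all parameters are illustrative, with the R-factor playing the role of the objective `f`:

```python
import random

def ga_minimize(f, bounds, pop=30, gens=60, elite=2, mut=0.05, seed=1):
    """Minimize f over the box `bounds` with a toy genetic algorithm.

    Uses tournament selection, uniform crossover, Gaussian mutation
    (sigma proportional to each parameter's range), and elitism.
    """
    rng = random.Random(seed)
    P = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=f)
        nxt = P[:elite]  # elitism: the best individuals always survive
        while len(nxt) < pop:
            # two parents, each the winner of a 3-way tournament
            a, b = (min(rng.sample(P, 3), key=f) for _ in range(2))
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            child = [min(max(x + rng.gauss(0, mut * (hi - lo)), lo), hi)
                     for x, (lo, hi) in zip(child, bounds)]
            nxt.append(child)
        P = nxt
    return min(P, key=f)
```

In the PED setting, each individual would encode the structural and non-structural surface parameters, and `f` would run the multiple-scattering simulation and return the R-factor against experiment.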

  4. Note: Non-invasive optical method for rapid determination of alignment degree of oriented nanofibrous layers

    Energy Technology Data Exchange (ETDEWEB)

    Pokorny, M.; Rebicek, J. [R&D Department, Contipro Biotech s.r.o., 561 02 Dolni Dobrouc (Czech Republic); Klemes, J. [R&D Department, Contipro Pharma a.s., 561 02 Dolni Dobrouc (Czech Republic); Kotzianova, A. [R&D Department, Contipro Pharma a.s., 561 02 Dolni Dobrouc (Czech Republic); Department of Chemistry, Faculty of Science, Masaryk University, Kamenice 5, CZ-62500 Brno (Czech Republic); Velebny, V. [R&D Department, Contipro Biotech s.r.o., 561 02 Dolni Dobrouc (Czech Republic); R&D Department, Contipro Pharma a.s., 561 02 Dolni Dobrouc (Czech Republic)

    2015-10-15

    This paper presents a rapid non-destructive method that provides information on the anisotropic internal structure of nanofibrous layers. A laser beam of a wavelength of 632.8 nm is directed at and passes through a nanofibrous layer prepared by electrostatic spinning. Information about the structural arrangement of nanofibers in the layer is directly visible in the form of a diffraction image formed on a projection screen or obtained from measured intensities of the laser beam passing through the sample which are determined by the dependency of the angle of the main direction of polarization of the laser beam on the axis of alignment of nanofibers in the sample. Both optical methods were verified on Polyvinyl alcohol (PVA) nanofibrous layers (fiber diameter of 470 nm) with random, single-axis aligned and crossed structures. The obtained results match the results of commonly used methods which apply the analysis of electron microscope images. The presented simple method not only allows samples to be analysed much more rapidly and without damaging them but it also makes possible the analysis of much larger areas, up to several square millimetres, at the same time.

  5. Note: Non-invasive optical method for rapid determination of alignment degree of oriented nanofibrous layers

    International Nuclear Information System (INIS)

    Pokorny, M.; Rebicek, J.; Klemes, J.; Kotzianova, A.; Velebny, V.

    2015-01-01

    This paper presents a rapid non-destructive method that provides information on the anisotropic internal structure of nanofibrous layers. A laser beam of a wavelength of 632.8 nm is directed at and passes through a nanofibrous layer prepared by electrostatic spinning. Information about the structural arrangement of nanofibers in the layer is directly visible in the form of a diffraction image formed on a projection screen or obtained from measured intensities of the laser beam passing through the sample which are determined by the dependency of the angle of the main direction of polarization of the laser beam on the axis of alignment of nanofibers in the sample. Both optical methods were verified on Polyvinyl alcohol (PVA) nanofibrous layers (fiber diameter of 470 nm) with random, single-axis aligned and crossed structures. The obtained results match the results of commonly used methods which apply the analysis of electron microscope images. The presented simple method not only allows samples to be analysed much more rapidly and without damaging them but it also makes possible the analysis of much larger areas, up to several square millimetres, at the same time.

  6. Fast global sequence alignment technique

    KAUST Repository

    Bonny, Mohamed Talal

    2011-11-01

    Bioinformatics databases are growing exponentially in size. Processing these large amounts of data may take hours even when supercomputers are used. One of the most important processing tools in bioinformatics is sequence alignment. We introduce a fast alignment algorithm, called 'Alignment By Scanning' (ABS), to provide an approximate alignment of two DNA sequences. We compare our algorithm with the well-known sequence alignment algorithms 'GAP' (which is heuristic) and 'Needleman-Wunsch' (which is optimal). The proposed algorithm achieves up to 51% enhancement in alignment score when compared with the GAP algorithm. The evaluations are conducted using different lengths of DNA sequences. © 2011 IEEE.

  7. Algorithms

    Indian Academy of Sciences (India)

    polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.

  8. Algorithms for a Precise Determination of the Betatron Tune

    CERN Document Server

    Bartolini, R; Giovannozzi, Massimo; Todesco, Ezio; Scandale, Walter

    1996-01-01

    In circular accelerators the precise knowledge of the betatron tune is of paramount importance both for routine operation and for theoretical investigations. The tune is measured by sampling the transverse position of the beam for N turns and by performing the FFT of the stored data. One can also evaluate it by computing the Average Phase Advance (APA) over N turns. These approaches have an intrinsic error proportional to 1/N. However, there are special cases where either a better precision or a faster measurement is desired. More efficient algorithms can be used, as those suggested by E. Asseo [1] and recently by J. Laskar [2]. They provide tune estimates by far more precise than those of a plain FFT, as discussed in Ref. [3]. Another important issue is the effect of the finite resolution of the instrumentation used to measure the beam position. This introduces a noise and the frequency response of the beam is modified [4,5], thus reducing the precision by which the tune is determined. In Section 2 we recall ...
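The refinement beyond the plain-FFT 1/N limit can be illustrated with a common interpolated-FFT tune estimate. This is a generic sketch using parabolic interpolation of the windowed log-magnitude spectrum, not the specific algorithms of Asseo or Laskar cited above:

```python
import numpy as np

def tune_fft(x):
    """Estimate the betatron tune from N turns of beam position data.

    The plain FFT peak is accurate only to ~1/N; interpolating a parabola
    through the log-magnitudes around the peak of the Hann-windowed
    spectrum refines the estimate well below one frequency bin.
    """
    N = len(x)
    X = np.abs(np.fft.rfft((x - x.mean()) * np.hanning(N)))
    k = int(np.argmax(X))
    k = min(max(k, 1), len(X) - 2)  # keep both neighbours in range
    lm, l0, lp = np.log(X[k - 1]), np.log(X[k]), np.log(X[k + 1])
    d = 0.5 * (lm - lp) / (lm - 2.0 * l0 + lp)  # peak offset in bins
    return (k + d) / N
```

For a clean signal at tune 0.31 sampled over 512 turns, the plain FFT bin is only good to about 0.002, while the interpolated estimate recovers the tune to well within that.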

  9. Determination of Pavement Rehabilitation Activities through a Permutation Algorithm

    Directory of Open Access Journals (Sweden)

    Sangyum Lee

    2013-01-01

    This paper presents a mathematical programming model for optimal pavement rehabilitation planning. The model maximized the rehabilitation area through a newly developed permutation algorithm, based on the procedures outlined in the harmony search (HS) algorithm. Additionally, the proposed algorithm was based on an optimal solution method for the problem of multilocation rehabilitation activities on pavement structure, using empirical deterioration and rehabilitation effectiveness models, according to a limited maintenance budget. Thus, nonlinear pavement performance and rehabilitation activity decision models were used to maximize the objective functions of the rehabilitation area within a limited budget, through the permutation algorithm. Our results showed that the heuristic permutation algorithm provided a good optimum in terms of maximizing the rehabilitation area, compared with the worst-first maintenance method currently used in Seoul.

  10. A Comparison of Plain Radiography with Computer Tomography in Determining Coronal and Sagittal Alignments following Total Knee Arthroplasty

    Directory of Open Access Journals (Sweden)

    Solayar GN

    2017-07-01

    INTRODUCTION: Optimal coronal and sagittal component positioning is important in achieving a successful outcome following total knee arthroplasty (TKA). Modalities to determine post-operative alignment include plain radiography and computer tomography (CT) imaging. This study aims to determine the accuracy and reliability of plain radiographs in measuring coronal and sagittal alignment following TKA. MATERIALS AND METHODS: A prospective, consecutive study of 58 patients undergoing TKA was performed comparing alignment data from plain radiographs and CT imaging. Hip-knee angle (HKA), sagittal femoral angle (SFA), and sagittal tibial angle (STA) measurements were taken by two observers from plain radiographs and compared with CT alignment. Intra- and inter-observer correlation was calculated for each measurement. RESULTS: Intra-observer correlation was excellent for HKA (r>0.89, with a mean difference of 0.95) and STA (r>0.8) compared to SFA (r=0.5). When comparing modalities (radiographs vs CT), HKA estimations for both observers showed the least maximum and mean differences, while SFA observations were the least accurate. CONCLUSION: Radiographic estimation of HKA showed excellent intra- and inter-observer correlation and corresponds well with CT imaging. However, radiographic estimation of sagittal plane alignment was less reliably measured and correlated less with CT imaging. Plain radiography was found to be inferior to CT for estimation of biplanar prosthetic alignment following TKA.

  11. Determination of the electrical resistivity of vertically aligned carbon nanotubes by scanning probe microscopy

    Science.gov (United States)

    Ageev, O. A.; Il'in, O. I.; Rubashkina, M. V.; Smirnov, V. A.; Fedotov, A. A.; Tsukanova, O. G.

    2015-07-01

    Techniques are developed to determine the resistance per unit length and the electrical resistivity of vertically aligned carbon nanotubes (VA CNTs) using atomic force microscopy (AFM) and scanning tunneling microscopy (STM). These techniques are used to study the resistance of VA CNTs. The resistance of an individual VA CNT calculated with the AFM-based technique is shown to be higher than the resistance of VA CNTs determined by the STM-based technique by a factor of 200, which is related to the influence of the resistance of the contact of an AFM probe to VA CNTs. The resistance per unit length and the electrical resistivity of an individual VA CNT 118 ± 39 nm in diameter and 2.23 ± 0.37 μm in height that are determined by the STM-based technique are 19.28 ± 3.08 kΩ/μm and 8.32 ± 3.18 × 10^-4 Ω m, respectively. The STM-based technique developed to determine the resistance per unit length and the electrical resistivity of VA CNTs can be used to diagnose the electrical parameters of VA CNTs and to create VA CNT-based nanoelectronic elements.

  12. Determination of Optimal Double Sampling Plan using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Sampath Sundaram

    2012-03-01

    Designing a double sampling plan requires identification of sample sizes and acceptance numbers. In this paper a genetic algorithm has been designed for the selection of optimal acceptance numbers and sample sizes for the specified producer’s risk and consumer’s risk. Implementation of the algorithm has been illustrated numerically for different choices of quantities involved in a double sampling plan.
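A genetic algorithm searching over candidate plans (n1, c1, n2, c2) needs to evaluate each candidate against the producer's and consumer's risks. The acceptance probability of a double sampling plan at lot fraction defective p is a standard textbook calculation (this evaluator, with an illustrative first-sample rejection number r1, is our sketch, not the paper's code):

```python
from math import comb

def accept_prob(p, n1, c1, r1, n2, c2):
    """Probability of acceptance of a double sampling plan at quality p.

    Accept on the first sample of n1 if defects d1 <= c1; reject if
    d1 >= r1; otherwise draw n2 more and accept if d1 + d2 <= c2.
    """
    def binom(k, n):
        return comb(n, k) * p**k * (1 - p)**(n - k)
    # accepted outright on the first sample
    pa = sum(binom(d1, n1) for d1 in range(c1 + 1))
    # undecided on the first sample: resolve with the second sample
    for d1 in range(c1 + 1, r1):
        pa += binom(d1, n1) * sum(binom(d2, n2) for d2 in range(c2 - d1 + 1))
    return pa
```

A GA fitness function would then penalize plans whose acceptance probability falls below 1 minus the producer's risk at the acceptable quality level, or above the consumer's risk at the limiting quality level.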

  1. Algorithms

    Indian Academy of Sciences (India)

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  2. Pairagon: a highly accurate, HMM-based cDNA-to-genome aligner

    DEFF Research Database (Denmark)

    Lu, David V; Brown, Randall H; Arumugam, Manimozhiyan

    2009-01-01

    MOTIVATION: The most accurate way to determine the intron-exon structures in a genome is to align spliced cDNA sequences to the genome. Thus, cDNA-to-genome alignment programs are a key component of most annotation pipelines. The scoring system used to choose the best alignment is a primary determinant of alignment accuracy, while heuristics that prevent consideration of certain alignments are a primary determinant of runtime and memory usage. Both accuracy and speed are important considerations in choosing an alignment algorithm, but scoring systems have received much less attention than...

  3. A New Polar Transfer Alignment Algorithm with the Aid of a Star Sensor and Based on an Adaptive Unscented Kalman Filter

    Directory of Open Access Journals (Sweden)

    Jianhua Cheng

    2017-10-01

    Because of the harsh polar environment, the master strapdown inertial navigation system (SINS) has low accuracy and the system model information becomes abnormal. In this case, existing polar transfer alignment (TA) algorithms which use the measurement information provided by the master SINS would lose their effectiveness. In this paper, a new polar TA algorithm with the aid of a star sensor and based on an adaptive unscented Kalman filter (AUKF) is proposed to deal with these problems. Since the measurement information provided by the master SINS is inaccurate, the accurate information provided by the star sensor is chosen as the measurement. With the compensation of the lever-arm effect and the model of the star sensor, the nonlinear navigation equations are derived. Combined with the attitude matching method, the filter models for polar TA are designed. An AUKF is introduced to handle the abnormal information of the system model. Then, the AUKF is used to estimate the states of TA. Results have demonstrated that the performance of the new polar TA algorithm is better than that of state-of-the-art polar TA algorithms. Therefore, the new polar TA algorithm proposed in this paper is effective in ensuring and improving the accuracy of TA in the harsh polar environment.

  4. A New Polar Transfer Alignment Algorithm with the Aid of a Star Sensor and Based on an Adaptive Unscented Kalman Filter.

    Science.gov (United States)

    Cheng, Jianhua; Wang, Tongda; Wang, Lu; Wang, Zhenmin

    2017-10-23

    Because of the harsh polar environment, the master strapdown inertial navigation system (SINS) has low accuracy and the system model information becomes abnormal. In this case, existing polar transfer alignment (TA) algorithms which use the measurement information provided by the master SINS would lose their effectiveness. In this paper, a new polar TA algorithm with the aid of a star sensor and based on an adaptive unscented Kalman filter (AUKF) is proposed to deal with these problems. Since the measurement information provided by the master SINS is inaccurate, the accurate information provided by the star sensor is chosen as the measurement. With the compensation of the lever-arm effect and the model of the star sensor, the nonlinear navigation equations are derived. Combined with the attitude matching method, the filter models for polar TA are designed. An AUKF is introduced to handle the abnormal information of the system model. Then, the AUKF is used to estimate the states of TA. Results have demonstrated that the performance of the new polar TA algorithm is better than that of state-of-the-art polar TA algorithms. Therefore, the new polar TA algorithm proposed in this paper is effective in ensuring and improving the accuracy of TA in the harsh polar environment.

  5. Modified compensation algorithm of lever-arm effect and flexural deformation for polar shipborne transfer alignment based on improved adaptive Kalman filter

    International Nuclear Information System (INIS)

    Wang, Tongda; Cheng, Jianhua; Guan, Dongxue; Kang, Yingyao; Zhang, Wei

    2017-01-01

    Due to the lever-arm effect and flexural deformation in the practical application of transfer alignment (TA), the TA performance is decreased. The existing polar TA algorithm only compensates a fixed lever-arm without considering the dynamic lever-arm caused by flexural deformation; traditional non-polar TA algorithms also have some limitations. Thus, the performance of existing compensation algorithms is unsatisfactory. In this paper, a modified compensation algorithm of the lever-arm effect and flexural deformation is proposed to promote the accuracy and speed of the polar TA. On the basis of a dynamic lever-arm model and a noise compensation method for flexural deformation, polar TA equations are derived in grid frames. Based on the velocity-plus-attitude matching method, the filter models of polar TA are designed. An adaptive Kalman filter (AKF) is improved to promote the robustness and accuracy of the system, and then applied to the estimation of the misalignment angles. Simulation and experiment results have demonstrated that the modified compensation algorithm based on the improved AKF for polar TA can effectively compensate the lever-arm effect and flexural deformation, and then improve the accuracy and speed of TA in the polar region. (paper)

  6. Eigenvalues calculation algorithms for λ-modes determination. Parallelization approach

    Energy Technology Data Exchange (ETDEWEB)

    Vidal, V. [Universidad Politecnica de Valencia (Spain). Departamento de Sistemas Informaticos y Computacion; Verdu, G.; Munoz-Cobo, J.L. [Universidad Politecnica de Valencia (Spain). Departamento de Ingenieria Quimica y Nuclear; Ginestart, D. [Universidad Politecnica de Valencia (Spain). Departamento de Matematica Aplicada

    1997-03-01

    In this paper, we review two methods to obtain the λ-modes of a nuclear reactor, Subspace Iteration method and Arnoldi's method, which are popular methods to solve the partial eigenvalue problem for a given matrix. In the developed application for the neutron diffusion equation we include improved acceleration techniques for both methods. Also, we propose two parallelization approaches for these methods, a coarse grain parallelization and a fine grain one. We have tested the developed algorithms with two realistic problems, focusing on the efficiency of the methods according to the CPU times. (author).

  7. Importance of the alignment of polar π conjugated molecules inside carbon nanotubes in determining second-order non-linear optical properties.

    Science.gov (United States)

    Yumura, Takashi; Yamamoto, Wataru

    2017-09-20

    We employed density functional theory (DFT) calculations with dispersion corrections to investigate energetically preferred alignments of certain p,p'-dimethylaminonitrostilbene (DANS) molecules inside an armchair (m,m) carbon nanotube (n × DANS@(m,m)), where the number of inner molecules (n) is no greater than 3. Here, three types of alignments of DANS are considered: a linear alignment in a parallel fashion and stacking alignments in parallel and antiparallel fashions. According to DFT calculations, a threshold tube diameter for containing DANS molecules in linear or stacking alignments was found to be approximately 1.0 nm. Nanotubes with diameters smaller than 1.0 nm result in the selective formation of linearly aligned DANS molecules due to strong confinement effects within the nanotubes. By contrast, larger diameter nanotubes allow DANS molecules to align in a stacking and linear fashion. The type of alignment adopted by the DANS molecules inside a nanotube is responsible for their second-order non-linear optical properties represented by their static hyperpolarizability (β0 values). In fact, we computed β0 values of DANS assemblies taken from optimized n × DANS@(m,m) structures, and their values were compared with those of a single DANS molecule. DFT calculations showed that β0 values of DANS molecules depend on their alignment, which decrease in the following order: linear alignment > parallel stacking alignment > antiparallel stacking alignment. In particular, a linear alignment has a β0 value more significant than that of the same number of isolated molecules. Therefore, the linear alignment of DANS molecules, which is only allowed inside smaller diameter nanotubes, can strongly enhance their second-order non-linear optical properties. Since the nanotube confinement determines the alignment of DANS molecules, a restricted nanospace can be utilized to control their second-order non-linear optical properties. These DFT findings can assist in the

  8. Noninvasive Biosensor Algorithms for Continuous Metabolic Rate Determination

    Data.gov (United States)

    National Aeronautics and Space Administration — Our collaborators in the JSC Cardiovascular lab implemented a technique to determine stroke volume during exercise using ultrasound imaging. Data collection using...

  9. Business alignment in the procurement domain: a study of antecedents and determinants of supply chain performance

    Directory of Open Access Journals (Sweden)

    Patrick Mikalef

    2014-01-01

    With organizations now placing increasing attention on the management of their supply chain activities, the role of Information Technology (IT) in supporting these operations has been put in the spotlight. In spite of extensive research examining how IT can be employed in various activities of supply chain management, the majority of studies are limited to identifying enablers and inhibitors of adoption. Empirical studies examining post-adoption conditions that facilitate performance improvement remain scarce. In this study we focus on procurement as part of supply chain management. We apply the business-IT alignment perspective to the domain of procurement and examine how certain organizational factors impact the attainment of this state. Additionally, we examine the effect that procurement alignment has on supply chain management performance. To do so, we apply Partial Least Squares (PLS) analysis on a sample of 172 European companies. We find that firms that opt for a centralized governance structure, as well as larger firms, are more likely to attain a state of procurement alignment. Furthermore, our results empirically support the statement that procurement alignment is positively correlated with operational efficiency and competitive performance of the supply chain.

  10. Determination of the effective Young's modulus of vertically aligned carbon nanotube arrays: a simple nanotube-based varactor

    International Nuclear Information System (INIS)

    Olofsson, Niklas; Eriksson, Anders; Ek-Weis, Johan; Campbell, Eleanor E B; Idda, Tonio

    2009-01-01

    The electromechanical properties of arrays of vertically aligned multiwalled carbon nanotubes were studied in a parallel plate capacitor geometry. The electrostatic actuation was visualized using both optical microscopy and scanning electron microscopy, and highly reproducible behaviour was achieved for actuation voltages below the pull-in voltage. The walls of vertically aligned carbon nanotubes behave as solid cohesive units. The effective Young's modulus for the carbon nanotube arrays was determined by comparing the actuation results with the results of electrostatic simulations and was found to be exceptionally low, of the order of 1-10 MPa. The capacitance change and Q-factor were determined by measuring the frequency dependence of the radio-frequency transmission. Capacitance changes of over 20% and Q-factors in the range 100-10 were achieved for a frequency range of 0.2-1.5 GHz.

  11. Algorithms

    Indian Academy of Sciences (India)

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...

  12. Alignment of Short Reads: A Crucial Step for Application of Next-Generation Sequencing Data in Precision Medicine

    Directory of Open Access Journals (Sweden)

    Hao Ye

    2015-11-01

    Precision medicine, or personalized medicine, has been proposed as a modernized and promising medical strategy. Genetic variants of patients are the key information for the implementation of precision medicine. Next-generation sequencing (NGS) is an emerging technology for deciphering genetic variants. Alignment of raw reads to a reference genome is one of the key steps in NGS data analysis. Many algorithms have been developed for the alignment of short read sequences since 2008, and users have to decide which one to use in their studies. Selecting the right aligner involves choosing not only the algorithm itself but also a set of suitable parameters for it. Understanding these algorithms helps in selecting the appropriate alignment algorithm for different applications in precision medicine. Here, we review currently available algorithms and their major strategies, such as seed-and-extend and the q-gram filter. We also discuss the challenges in current alignment algorithms, including alignment in multiple repeated regions, long-read alignment, and alignment facilitated with known genetic variants.
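The seed-and-extend strategy mentioned in the review can be sketched in a few lines (a toy illustration, not any production aligner; the k-mer size, mismatch budget, and sequences below are made up):

```python
def build_index(ref, k):
    """Map every k-mer of the reference to its start positions."""
    index = {}
    for i in range(len(ref) - k + 1):
        index.setdefault(ref[i:i + k], []).append(i)
    return index

def seed_and_extend(read, ref, k=4, max_mismatches=1):
    """Candidate alignment start positions of `read` in `ref`: exact k-mer
    seeds are looked up in the index, then each hit is verified by counting
    mismatches over the full read (the 'extend' step, ungapped here)."""
    index = build_index(ref, k)
    hits = set()
    for offset in range(len(read) - k + 1):
        for pos in index.get(read[offset:offset + k], []):
            start = pos - offset  # alignment start implied by this seed
            if start < 0 or start + len(read) > len(ref):
                continue
            mismatches = sum(x != y for x, y in zip(read, ref[start:start + len(read)]))
            if mismatches <= max_mismatches:
                hits.add(start)
    return sorted(hits)

print(seed_and_extend("CGTACG", "ACGTTGCAACGTACGT"))  # → [9]
```

Real short-read aligners replace the hash index with compressed structures such as the FM-index and use banded dynamic programming for the extension step.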

  13. Determination of Selection Method in Genetic Algorithm for Land Suitability

    Directory of Open Access Journals (Sweden)

    Irfianti Asti Dwi

    2016-01-01

    Genetic Algorithms are one alternative solution in the fields of modeling optimization, automatic programming, and machine learning. The purpose of the study was to compare several selection methods in a Genetic Algorithm for land suitability. The contribution of this research is to apply the best method to develop region-based horticultural commodities. Testing is done by comparing three selection methods: Roulette Wheel, Tournament Selection, and Stochastic Universal Sampling. Location parameters used in the first test scenario include Temperature = 27°C, Rainfall = 1200 mm, Humidity = 30%, Cluster fruit = 4, Crossover Probability (Pc) = 0.6, Mutation Probability (Pm) = 0.2, and Epoch = 10. The second test scenario includes Temperature = 30°C, Rainfall = 2000 mm, Humidity = 35%, Cluster fruit = 5, Crossover Probability (Pc) = 0.7, Mutation Probability (Pm) = 0.3, and Epoch = 10. The conclusion of this study is that the Roulette Wheel is the best method because it produces more stable and higher fitness values than the other two methods.
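Roulette-wheel (fitness-proportionate) selection, the method the study found best, can be sketched as follows (a generic illustration; the population and fitness values are hypothetical and unrelated to the land-suitability data):

```python
import random

def roulette_wheel_select(population, fitnesses, rng):
    """Fitness-proportionate selection: the chance of picking an individual
    equals its share of the total fitness (its slice of the 'wheel')."""
    spin = rng.uniform(0, sum(fitnesses))
    cumulative = 0.0
    for individual, fitness in zip(population, fitnesses):
        cumulative += fitness
        if spin <= cumulative:
            return individual
    return population[-1]  # guard against floating-point rounding

population = ["A", "B", "C"]
fitnesses = [1.0, 3.0, 6.0]  # "C" holds 60% of the wheel
rng = random.Random(0)
picks = [roulette_wheel_select(population, fitnesses, rng) for _ in range(10000)]
print(round(picks.count("C") / len(picks), 2))  # ≈ 0.60
```

Tournament selection and stochastic universal sampling differ only in how the spin is performed: the former compares a small random subset, the latter uses several evenly spaced pointers on the same wheel.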

  14. Robust Floor Determination Algorithm for Indoor Wireless Localization Systems under Reference Node Failure

    Directory of Open Access Journals (Sweden)

    Kriangkrai Maneerat

    2016-01-01

    One of the challenging problems for indoor wireless multifloor positioning systems is the presence of reference node (RN) failures, which cause values of received signal strength (RSS) to be missing during the online positioning phase of the location fingerprinting technique. This leads to performance degradation in terms of floor accuracy, which in turn affects other localization procedures. This paper presents a robust floor determination algorithm called Robust Mean of Sum-RSS (RMoS), which can accurately determine the floor on which mobile objects are located and can work under either fault-free or RN-failure scenarios. The proposed fault-tolerant floor algorithm is based on the mean of the summation of the strongest RSSs obtained from IEEE 802.15.4 Wireless Sensor Networks (WSNs) during the online phase. The performance of the proposed algorithm is compared with those of different floor determination algorithms in the literature. The experimental results show that the proposed robust floor determination algorithm outperformed the other floor algorithms and achieved the highest percentage of floor determination accuracy in all scenarios tested. Specifically, the proposed algorithm can achieve greater than 95% correct floor determination under a scenario in which 40% of RNs failed.
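The idea behind RMoS, choosing the floor whose strongest received signals have the highest mean, can be sketched as follows (a simplified illustration based on the abstract's description; the RSS values, floor layout, and choice of k are hypothetical):

```python
def rmos_floor(rss_by_floor, k=3):
    """Pick the floor whose k strongest RSS readings have the highest mean.
    `rss_by_floor` maps floor id -> list of RSS values (dBm) heard from that
    floor's reference nodes; failed nodes simply contribute fewer readings,
    which is what makes the scheme tolerant to RN failures."""
    best_floor, best_score = None, float("-inf")
    for floor, readings in rss_by_floor.items():
        strongest = sorted(readings, reverse=True)[:k]
        if not strongest:
            continue  # every RN on this floor has failed
        score = sum(strongest) / len(strongest)
        if score > best_score:
            best_floor, best_score = floor, score
    return best_floor

readings = {
    1: [-70, -75, -80, -90],
    2: [-55, -60, -62],        # mobile node is actually on floor 2
    3: [-72, -85],
}
print(rmos_floor(readings))  # → 2
```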

  15. The Alignment of the CMS Silicon Tracker

    CERN Document Server

    Lampen, Pekka Tapio

    2013-01-01

    The CMS all-silicon tracker consists of 16588 modules, embedded in a solenoidal magnet providing a field of B = 3.8 T. The targeted performance requires that the alignment determines the module positions with a precision of a few micrometers. Ultimate local precision is reached by the determination of sensor curvatures, challenging the algorithms to determine about 200k parameters simultaneously, as is feasible with the Millepede II program. The main remaining challenges are global distortions that systematically bias the track parameters and thus physics measurements. They are controlled by adding further information into the alignment workflow, e.g. the mass of decaying resonances or track data taken with B = 0 T. To make use of the latter and also to integrate the determination of the Lorentz angle into the alignment procedure, the alignment framework has been extended to treat position-sensitive calibration parameters. This is relevant since due to the increased LHC luminosity in 2012, the Lorentz angle ex...

  16. Nanoparticle amount, and not size, determines chain alignment and nonlinear hardening in polymer nanocomposites

    Science.gov (United States)

    Varol, H. Samet; Meng, Fanlong; Hosseinkhani, Babak; Malm, Christian; Bonn, Daniel; Bonn, Mischa; Zaccone, Alessio

    2017-01-01

    Polymer nanocomposites—materials in which a polymer matrix is blended with nanoparticles (or fillers)—strengthen under sufficiently large strains. Such strain hardening is critical to their function, especially for materials that bear large cyclic loads such as car tires or bearing sealants. Although the reinforcement (i.e., the increase in the linear elasticity) by the addition of filler particles is phenomenologically understood, considerably less is known about strain hardening (the nonlinear elasticity). Here, we elucidate the molecular origin of strain hardening using uniaxial tensile loading, microspectroscopy of polymer chain alignment, and theory. The strain-hardening behavior and chain alignment are found to depend on the volume fraction, but not on the size of nanofillers. This contrasts with reinforcement, which depends on both volume fraction and size of nanofillers, potentially allowing linear and nonlinear elasticity of nanocomposites to be tuned independently. PMID:28377517

  17. Two-wavelength Lidar inversion algorithm for determining planetary boundary layer height

    Science.gov (United States)

    Liu, Boming; Ma, Yingying; Gong, Wei; Jian, Yang; Ming, Zhang

    2018-02-01

    This study proposes a two-wavelength Lidar inversion algorithm to determine the boundary layer height (BLH) based on particle clustering. Color ratio and depolarization ratio are used to analyze the particle distribution, based on which the proposed algorithm can overcome the effects of complex aerosol layers to calculate the BLH. The algorithm is used to determine the top of the boundary layer under different mixing states. Experimental results demonstrate that the proposed algorithm can determine the top of the boundary layer even in complex cases, and it better handles weak convection conditions. Finally, experimental data from June 2015 to December 2015 were used to verify the reliability of the proposed algorithm. The correlation between the results of the proposed algorithm and the manual method is R2 = 0.89, with a RMSE of 131 m and a mean bias of 49 m; the correlation between the results of the ideal profile fitting method and the manual method is R2 = 0.64, with a RMSE of 270 m and a mean bias of 165 m; and the correlation between the results of the wavelet covariance transform method and the manual method is R2 = 0.76, with a RMSE of 196 m and a mean bias of 23 m. These findings indicate that the proposed algorithm has better reliability and stability than traditional algorithms.

  18. MULTIFREQUENCY ALGORITHMS FOR DETERMINING THE MOISTURE CONTENT OF LIQUID EMULSIONS BY THE METHOD OF RESONANCE DIELCOMETRY

    Directory of Open Access Journals (Sweden)

    A. A. Korobko

    2017-06-01

    Purpose. The main attention is paid to the development and investigation of multifrequency algorithms for realizing the method of resonance dielcometric measurement of the humidity of emulsions of the type «nonpolar liquid dielectric-water». Multifrequency algorithms take into account the problem of «uncertainty of varieties» and increase the sensitivity of the dielcometric method. Methodology. Multifrequency algorithms are proposed to solve the problem of «uncertainty of varieties» and improve the metrological characteristics of the resonance dielcometric method. The essence of the algorithms is to use a mathematical model of the emulsion and to determine the permittivity of the dehydrated liquid and of the emulsion. The task in developing the algorithms is to determine and take into account the influence of the parasitic electrical capacitance of the measuring oscillator and the measuring transducer. The essence of the method consists in alternately determining the resonance frequency of the oscillatory circuit in various configurations, which allows errors from parasitic parameters to be taken into account. The problem of «uncertainty of varieties» is formulated and solved, and the metrological characteristics of the resonance dielcometric method are determined using the algorithms. Results. Frequency domains of application of the mathematical model of an emulsion are defined. An algorithm in general form with four frequencies, suitable for practical implementation in dielcometric resonance measurements, is developed, along with partial algorithms with three and two frequencies. The systematic simulation errors for the emulsion in the microwave range are determined. Generalized metrological characteristics are obtained, and ways of increasing the sensitivity of the dielcometric method are identified. Experimental data on the determination of humidity for the developed algorithms are

  19. Algorithms

    Indian Academy of Sciences (India)

    algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).

  20. Algorithms

    Indian Academy of Sciences (India)

    algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = 3/2n -2 is the solution to the above ...

  1. Mechanism in determining pretilt angle of liquid crystals aligned on fluorinated copolymer films

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Hsin-Ying; Wang, Chih-Yu; Lin, Chia-Jen; Pan, Ru-Pin [Department of Electrophysics, National Chiao Tung University, Hsinchu, Taiwan 30010 (China); Lin, Song-Shiang; Lee, Chein-Dhau [Material and Chemical Research Laboratories, Industrial Technology Research Institute, Hsinchu, Taiwan 31040 (China); Kou, Chwung-Shan, E-mail: rpchao@mail.nctu.edu.t [Department of Physics, National Tsing Hua University, Hsinchu, Taiwan 30013 (China)

    2009-08-07

    This work explores the surface treatment of copolymer materials with fluorinated carbonyl groups in various mole fractions by ultraviolet irradiation and ion-beam (IB) bombardment and its effect on liquid crystal (LC) surface alignments. X-ray photoemission spectroscopic analysis confirms that the content of the grafted CF2 side chains dominates the pretilt angle. A significant increase in oxygen content is responsible for the increase in the polar surface energy during IB treatment. Finally, the polar component of the surface energy dominates the pretilt angle of the LCs.

  2. Mechanism in determining pretilt angle of liquid crystals aligned on fluorinated copolymer films

    International Nuclear Information System (INIS)

    Wu, Hsin-Ying; Wang, Chih-Yu; Lin, Chia-Jen; Pan, Ru-Pin; Lin, Song-Shiang; Lee, Chein-Dhau; Kou, Chwung-Shan

    2009-01-01

    This work explores the surface treatment of copolymer materials with fluorinated carbonyl groups in various mole fractions by ultraviolet irradiation and ion-beam (IB) bombardment and its effect on liquid crystal (LC) surface alignments. X-ray photoemission spectroscopic analysis confirms that the content of the grafted CF 2 side chains dominates the pretilt angle. A significant increase in oxygen content is responsible for the increase in the polar surface energy during IB treatment. Finally, the polar component of the surface energy dominates the pretilt angle of the LCs.

  3. An optimisation algorithm for determination of treatment margins around moving and deformable targets

    International Nuclear Information System (INIS)

    Redpath, Anthony Thomas; Muren, Ludvig Paul

    2005-01-01

    Purpose: Determining treatment margins for inter-fractional motion of moving and deformable clinical target volumes (CTVs) remains a major challenge. This paper describes and applies an optimisation algorithm designed to derive such margins. Material and methods: The algorithm works by expanding the CTV, as determined from a pre-treatment or planning scan, to enclose the CTV positions observed during treatment. CTV positions during treatment may be obtained using, for example, repeat CT scanning and/or repeat electronic portal imaging (EPI). The algorithm can be applied both to individual patients and to a set of patients. The margins derived minimise the excess volume outside the envelope that encloses all observed CTV positions (the CTV envelope). Initially, margins are set such that the envelope is more than adequately covered when the planning CTV is expanded. The algorithm then uses an iterative method in which the margins are sampled randomly and are either increased or decreased randomly. The algorithm is tested on a set of 19 bladder cancer patients that underwent weekly repeat CT scanning and EPI throughout their treatment course. Results: From repeated runs on individual patients, the algorithm produces margins within a range of ±2 mm that lie among the best results found with an exhaustive search approach, and that agree within 3 mm with margins determined by a manual approach on the same data. The algorithm could be used to determine margins to cover any specified geometrical uncertainty, and allows for the determination of reduced margins by relaxing the coverage criteria, for example disregarding extreme CTV positions, an arbitrarily selected volume fraction of the CTV envelope, and/or patients with extreme geometrical uncertainties. Conclusion: An optimisation approach to margin determination is found to give reproducible results within the accuracy required.
The major advantage with this algorithm is that it is completely empirical, and it is
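The iterative scheme described above, starting from generous margins and randomly shrinking them while keeping all observed CTV positions covered, can be sketched with point displacements standing in for the CTV envelope (a simplified one-margin-per-axis illustration; the displacement data, step sizes, and iteration count are hypothetical):

```python
import random

def optimise_margins(displacements, iterations=5000, seed=1):
    """Random-search margin optimisation: start with more-than-adequate
    per-axis margins, then randomly perturb one axis at a time, accepting a
    change only if every observed CTV displacement stays covered and the
    total margin shrinks."""
    rng = random.Random(seed)
    best = [max(abs(d[i]) for d in displacements) + 5.0 for i in range(3)]

    def covered(m):
        return all(abs(d[i]) <= m[i] for d in displacements for i in range(3))

    for _ in range(iterations):
        trial = best[:]
        axis = rng.randrange(3)
        trial[axis] += rng.uniform(-1.0, 1.0)  # grow or shrink at random
        if trial[axis] >= 0.0 and covered(trial) and sum(trial) < sum(best):
            best = trial
    return [round(m, 1) for m in best]

# Observed CTV centre displacements (mm) over repeat scans, made up:
shifts = [(2.1, -3.0, 0.5), (-1.5, 2.2, -0.8), (0.9, -1.1, 1.4)]
print(optimise_margins(shifts))  # converges toward [2.1, 3.0, 1.4]
```

Relaxed coverage criteria (e.g. ignoring extreme CTV positions) would simply change the `covered` test.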

  4. Algorithms

    Indian Academy of Sciences (India)

    will become clear in the next article when we discuss a simple logo like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... No disks are moved from A to Busing C as auxiliary rod. • move _disk (A, C);. (No + l)th disk is moved from A to C directly ...

  5. Performance comparison of attitude determination, attitude estimation, and nonlinear observers algorithms

    Science.gov (United States)

    MOHAMMED, M. A. SI; BOUSSADIA, H.; BELLAR, A.; ADNANE, A.

    2017-01-01

    This paper presents a brief synthesis and useful performance analysis of different attitude filtering algorithms (attitude determination algorithms, attitude estimation algorithms, and nonlinear observers) applied to a Low Earth Orbit satellite in terms of accuracy, convergence time, amount of memory, and computation time. The latter is calculated in two ways: using a personal computer and using the On-Board Computer 750 (OBC 750) employed in many SSTL Earth observation missions. This comparative study can serve as a design aid when choosing among attitude determination, attitude estimation, and attitude observer algorithms. The simulation results clearly indicate that the nonlinear observer is the most logical choice.

  6. An Iterative Algorithm to Determine the Dynamic User Equilibrium in a Traffic Simulation Model

    Science.gov (United States)

    Gawron, C.

    An iterative algorithm to determine the dynamic user equilibrium with respect to link costs defined by a traffic simulation model is presented. Each driver's route choice is modeled by a discrete probability distribution which is used to select a route in the simulation. After each simulation run, the probability distribution is adapted to minimize the travel costs. Although the algorithm does not depend on the simulation model, a queuing model is used for performance reasons. The stability of the algorithm is analyzed for a simple example network. As an application example, a dynamic version of Braess's paradox is studied.
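The adaptation loop can be sketched for a two-route toy network, with a single shared route-choice probability standing in for the per-driver distributions (the cost functions, step size, and network are hypothetical):

```python
import random

def route_costs(flows):
    """Toy link-cost model: travel time on each route grows with its flow."""
    return [10 + 0.10 * flows[0], 12 + 0.05 * flows[1]]

def iterate_equilibrium(n_drivers=100, rounds=200, step=0.05, seed=42):
    """Iterate toward a user equilibrium: simulate stochastic route choices,
    then shift the choice probability toward whichever route turned out
    cheaper in this round's 'simulation'."""
    rng = random.Random(seed)
    p = 0.5  # probability of choosing route 0
    for _ in range(rounds):
        flow0 = sum(1 for _ in range(n_drivers) if rng.random() < p)
        c = route_costs([flow0, n_drivers - flow0])
        if c[0] < c[1]:
            p = min(1.0, p + step)
        elif c[1] < c[0]:
            p = max(0.0, p - step)
    return p

p = iterate_equilibrium()
print(round(p, 2))  # oscillates around the analytic equilibrium p* ≈ 0.47
```

At equilibrium the two routes cost the same (here 10 + 0.10·f0 = 12 + 0.05·(100 − f0), giving f0 ≈ 47), which is why the probability settles near 0.47 rather than converging to 0 or 1.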

  7. Determining the Cost-Savings Threshold and Alignment Accuracy of Patient-Specific Instrumentation in Total Ankle Replacements.

    Science.gov (United States)

    Hamid, Kamran S; Matson, Andrew P; Nwachukwu, Benedict U; Scott, Daniel J; Mather, Richard C; DeOrio, James K

    2017-01-01

    Traditional intraoperative referencing for total ankle replacements (TARs) involves multiple steps and fluoroscopic guidance to determine mechanical alignment. Recent adoption of patient-specific instrumentation (PSI) allows referencing to be determined preoperatively, resulting in fewer steps and potentially decreased operative time. We hypothesized that the use of PSI would decrease operating room time enough to offset the additional cost of PSI compared with standard referencing (SR). In addition, we aimed to compare postoperative radiographic alignment between PSI and SR. Between August 2014 and September 2015, 87 patients undergoing TAR were enrolled in a prospectively collected TAR database. Patients were divided into cohorts based on PSI vs SR, and operative times were reviewed. Radiographic alignment parameters were retrospectively measured at 6 weeks postoperatively. Time-driven activity-based costing (TDABC) was used to derive direct costs. Cost vs operative time-savings were examined via 2-way sensitivity analysis to determine cost-saving thresholds for PSI applicable to a range of institution types; cost-saving thresholds define the price of PSI below which PSI would be cost-saving. A total of 35 PSI and 52 SR cases were evaluated, with no significant differences identified in patient characteristics. Operative time from incision to completion of casting in cases without adjunct procedures was 127 minutes with PSI and 161 minutes with SR, yielding a cost-savings threshold at our institution of $863 below which PSI pricing would provide net cost-savings. Two-way sensitivity analysis generated a globally applicable cost-savings threshold model based on institution-specific costs and surgeon-specific time-savings. This study demonstrated equivalent postoperative TAR alignment with PSI and SR referencing systems but with a significant decrease in operative time with PSI.
Based on TDABC and associated sensitivity analysis, a cost-savings threshold
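The core of the threshold logic is simple: PSI is cost-saving whenever its per-case price is below the value of the operating-room time it frees up (a sketch with a hypothetical per-minute OR cost; only the 34-minute time-saving comes from the figures reported above):

```python
def psi_cost_saving_threshold(or_cost_per_minute, minutes_saved):
    """Price below which PSI yields net savings: the value of OR time freed."""
    return or_cost_per_minute * minutes_saved

minutes_saved = 161 - 127  # operative-time difference reported in the study
print(psi_cost_saving_threshold(25.0, minutes_saved))  # hypothetical $25/min → 850.0
```

The two-way sensitivity analysis in the study amounts to evaluating this product over a grid of institution-specific costs and surgeon-specific time-savings.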

  8. Determination of the Main Influencing Factors on Road Fatalities Using an Integrated Neuro-Fuzzy Algorithm

    Directory of Open Access Journals (Sweden)

    Amir Masoud Rahimi

    This paper proposes an integrated algorithm of neuro-fuzzy techniques to examine the complex impact of socio-technical influencing factors on road fatalities. The proposed algorithm can handle complexity, non-linearity, and fuzziness in the modeling environment due to its mechanism. The neuro-fuzzy algorithm for determining the potential influencing factors on road fatalities consists of two phases. In the first phase, intelligent techniques are compared for their accuracy in predicting the fatality rate with respect to several socio-technical influencing factors. In the second phase, sensitivity analysis is performed to calculate the pure effect of the potential influencing factors on the fatality rate. The applicability and usefulness of the proposed algorithm are illustrated using data from Iran's provincial road transportation system for the period 2012-2014. Results show that road design improvement, number of trips, and number of passengers are the factors with the greatest influence on the provincial road fatality rate.

  9. Algorithm to determine electrical submersible pump performance considering temperature changes for viscous crude oils

    Energy Technology Data Exchange (ETDEWEB)

    Valderrama, A. [Petroleos de Venezuela, S.A., Distrito Socialista Tecnologico (Venezuela); Valencia, F. [Petroleos de Venezuela, S.A., Instituto de Tecnologia Venezolana para el Petroleo (Venezuela)

    2011-07-01

    In the heavy oil industry, electrical submersible pumps (ESPs) are used to transfer energy to fluids through stages made up of one impeller and one diffuser. Since liquid temperature increases through the different stages, viscosity might change between the inlet and outlet of the pump, thus affecting performance. The aim of this research was to create an algorithm to determine ESPs' performance curves considering temperature changes through the stages. A computational algorithm was developed and then compared with data collected in a laboratory with a CG2900 ESP. Results confirmed that when the fluid's viscosity is affected by the temperature changes, the stages of multistage pump systems do not have the same performance. Thus the developed algorithm could help production engineers to take viscosity changes into account and optimize the ESP design. This study developed an algorithm to take into account the fluid viscosity changes through pump stages.
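The stage-by-stage idea can be sketched as follows: march through the pump stages, letting the fluid temperature rise and the viscosity fall, and derate each stage's head accordingly (all correlations and numbers here are illustrative placeholders, not the authors' model):

```python
import math

def esp_performance(stages, t_in, head_water_per_stage, dt_per_stage):
    """March through ESP stages: temperature rises stage by stage, viscosity
    falls with temperature, and each stage's water head is corrected for the
    local viscosity. Returns (total head, per-stage heads)."""
    def viscosity(t_c):       # toy crude-oil viscosity vs temperature, cP
        return 5000.0 * math.exp(-0.04 * t_c)

    def head_correction(mu):  # toy derating: thicker fluid, less head
        return 1.0 / (1.0 + mu / 1000.0)

    t, total_head, per_stage = t_in, 0.0, []
    for _ in range(stages):
        mu = viscosity(t)
        h = head_water_per_stage * head_correction(mu)
        per_stage.append(round(h, 2))
        total_head += h
        t += dt_per_stage     # fluid heats up across the stage
    return round(total_head, 1), per_stage

total, heads = esp_performance(stages=5, t_in=40.0,
                               head_water_per_stage=10.0, dt_per_stage=3.0)
print(heads)  # later stages deliver more head as the oil thins
```

The key point the study makes is visible in the output: the stages of a multistage pump do not perform identically once temperature-dependent viscosity is accounted for.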

  10. BAND ALIGNMENT OF ULTRATHIN GIZO/SiO2/Si HETEROSTRUCTURE DETERMINED BY ELECTRON SPECTROSCOPY

    Directory of Open Access Journals (Sweden)

    Hee Jae Kang

    2011-11-01

    Amorphous GaInZnO (GIZO) thin films are grown on SiO2/Si substrates by the RF magnetron sputtering method. By combining band gaps measured from reflection electron energy loss spectroscopy (REELS) spectra with valence bands from X-ray photoelectron spectroscopy (XPS) spectra, we have determined the energy band alignment of the GIZO thin films. The band gap values are 3.2 eV, 3.2 eV, 3.4 eV, and 3.6 eV for Ga:In:Zn concentration ratios in the GIZO thin films of 1:1:1, 2:2:1, 3:2:1, and 4:2:1, respectively. These values are attributed to the larger band gap energy of Ga2O3 compared with In2O3 and ZnO. The valence band offsets (ΔEv) decrease from 2.18 to 1.68 eV with increasing amounts of Ga in the GIZO thin films, from GIZO1 to GIZO4, respectively. These experimental values of band gap and valence band offset provide further understanding of the fundamental properties of the GIZO/SiO2/Si heterostructure, which will be useful in the design, modeling, and analysis of device applications.

  11. An Adaptive Hybrid Multiprocessor technique for bioinformatics sequence alignment

    KAUST Repository

    Bonny, Talal; Salama, Khaled N.; Zidan, Mohammed A.

    2012-01-01

    Sequence alignment algorithms such as the Smith-Waterman algorithm are among the most important applications in the development of bioinformatics. Sequence alignment algorithms must process large amounts of data which may take a long time. Here, we

  12. Low-cost attitude determination system using an extended Kalman filter (EKF) algorithm

    Science.gov (United States)

    Esteves, Fernando M.; Nehmetallah, Georges; Abot, Jandro L.

    2016-05-01

    Attitude determination is one of the most important subsystems in spacecraft, satellite, or scientific balloon missions, since it can be combined with actuators to provide rate stabilization and pointing accuracy for payloads. In this paper, a low-cost attitude determination system with arc-second-level precision that uses low-cost commercial sensors is presented, comprising a set of uncorrelated MEMS gyroscopes, two clinometers, and a magnetometer arranged in a hierarchical manner. The faster and less precise sensors are updated by the slower but more precise ones through an Extended Kalman Filter (EKF)-based data fusion algorithm. A review of the EKF algorithm fundamentals and its implementation for the current application are presented, along with an analysis of sensor noise. Finally, the results from the data fusion algorithm implementation are discussed in detail.
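The hierarchical fusion can be illustrated with a scalar Kalman filter on a single angle: the fast, drifting gyro drives the prediction, and the slow, precise sensor supplies occasional absolute corrections (a simplified sketch; a full EKF adds state Jacobians, and all noise values here are made up):

```python
def kalman_fuse(gyro_rates, dt, measurements, q=1e-4, r=1e-2):
    """Scalar Kalman filter: propagate the angle with the fast (noisy) gyro,
    correct it whenever the slow (precise) sensor supplies an absolute angle.
    `measurements` maps time-step index -> absolute angle reading."""
    angle, p = 0.0, 1.0
    history = []
    for k, rate in enumerate(gyro_rates):
        angle += rate * dt   # predict: integrate the gyro
        p += q               # ... and grow the uncertainty
        z = measurements.get(k)
        if z is not None:    # update: blend in the absolute measurement
            gain = p / (p + r)
            angle += gain * (z - angle)
            p *= 1.0 - gain
        history.append(angle)
    return history

# A biased gyro (true rate is zero) drifts; periodic clinometer-style fixes
# at the true angle pull the estimate back.
rates = [0.1] * 50
fixes = {9: 0.0, 19: 0.0, 29: 0.0, 39: 0.0, 49: 0.0}
est = kalman_fuse(rates, dt=0.1, measurements=fixes)
print(abs(est[-1]) < 0.1 * 0.1 * 50)  # → True: fused error beats raw drift of 0.5
```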

  13. Fast centroid algorithm for determining the surface plasmon resonance angle using the fixed-boundary method

    International Nuclear Information System (INIS)

    Zhan, Shuyue; Wang, Xiaoping; Liu, Yuling

    2011-01-01

    To simplify the algorithm for determining the surface plasmon resonance (SPR) angle for special applications and development trends, a fast method for determining the SPR angle, called the fixed-boundary centroid algorithm, has been proposed. Two experiments were conducted to compare three centroid algorithms in terms of operation time, sensitivity to shot noise, signal-to-noise ratio (SNR), resolution, and measurement range. Although the measurement range of this method was narrower, the other performance indices were all better than those of the other two centroid methods. This method has outstanding performance: high speed, good conformity, low error, and high SNR and resolution. It thus has the potential to be widely adopted.
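A centroid over a fixed window can be sketched as follows: weight each pixel by its depth below a fixed baseline so that the flat flanks outside the dip contribute nothing (a toy illustration with a synthetic reflectance profile; the window and baseline values are arbitrary):

```python
import math

def fixed_boundary_centroid(intensity, left, right, baseline):
    """Centroid of an SPR reflectance dip inside a fixed window [left, right):
    each pixel is weighted by its depth below a fixed baseline, making the
    result insensitive to where the dip's flanks cross the window edges."""
    num = den = 0.0
    for i in range(left, right):
        depth = max(baseline - intensity[i], 0.0)
        num += i * depth
        den += depth
    return num / den if den else None

# Synthetic, symmetric reflectance dip centred at pixel 10.
profile = [1.0 - 0.8 * math.exp(-((i - 10) ** 2) / 8.0) for i in range(21)]
angle_px = fixed_boundary_centroid(profile, left=4, right=17, baseline=0.95)
print(round(angle_px, 6))  # → 10.0
```

In a real instrument the pixel-space centroid is then mapped to an angle via the optics' calibration.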

  14. Experimental image alignment system

    Science.gov (United States)

    Moyer, A. L.; Kowel, S. T.; Kornreich, P. G.

    1980-01-01

    A microcomputer-based instrument for image alignment with respect to a reference image is described which uses the DEFT sensor (Direct Electronic Fourier Transform) for image sensing and preprocessing. The instrument alignment algorithm which uses the two-dimensional Fourier transform as input is also described. It generates signals used to steer the stage carrying the test image into the correct orientation. This algorithm has computational advantages over algorithms which use image intensity data as input and is suitable for a microcomputer-based instrument since the two-dimensional Fourier transform is provided by the DEFT sensor.

  15. MUON DETECTORS: ALIGNMENT

    CERN Multimedia

    G.Gomez.

    Since June of 2009, the muon alignment group has focused on providing new alignment constants and on finalizing the hardware alignment reconstruction. Alignment constants for DTs and CSCs were provided for CRAFT09 data reprocessing. For DT chambers, the track-based alignment was repeated using CRAFT09 cosmic ray muons and validated using segment extrapolation and split cosmic tools. One difference with respect to the previous alignment is that only five degrees of freedom were aligned, leaving the rotation around the local x-axis to be better determined by the hardware system. Similarly, DT chambers poorly aligned by tracks (due to limited statistics) were aligned by a combination of photogrammetry and hardware-based alignment. For the CSC chambers, the hardware system provided alignment in global z and rotations about local x. Entire muon endcap rings were further corrected in the transverse plane (global x and y) by the track-based alignment. Single chamber track-based alignment suffers from poor statistic...

  16. First Attempt of Orbit Determination of SLR Satellites and Space Debris Using Genetic Algorithms

    Science.gov (United States)

    Deleflie, F.; Coulot, D.; Descosta, R.; Fernier, A.; Richard, P.

    2013-08-01

    We present an orbit determination method based on genetic algorithms. Contrary to usual estimation methods, which are mainly based on least squares, these algorithms do not require any a priori knowledge of the initial state vector to be estimated. They can be applied when a new satellite is launched or for uncatalogued objects that appear in images obtained from robotic telescopes such as the TAROT ones. We show in this paper preliminary results obtained for an SLR satellite, for which tracking data acquired by the ILRS network enable accurate orbital arcs to be built at the few-centimeter level and used as a reference orbit; in this case, the basic observations are time series of ranges obtained from various tracking stations. We also show results obtained from observations acquired by the two TAROT telescopes on the Telecom-2D satellite operated by CNES; in that case, the observations are time series of azimuths and elevations as seen from the two TAROT telescopes. The method is carried out in several steps: (i) an analytical propagation of the equations of motion, and (ii) an estimation kernel based on genetic algorithms, which follows the usual steps of such approaches: initialization and evolution of a selected population, so as to determine the best parameters. Each parameter to be estimated, namely each initial Keplerian element, is searched within an interval chosen beforehand. The algorithm is expected to converge towards an optimum within a reasonable computational time.

  17. THE ATLAS INNER DETECTOR TRACK BASED ALIGNMENT

    CERN Document Server

    Marti i Garcia, Salvador; The ATLAS collaboration

    2018-01-01

    The alignment of the ATLAS Inner Detector is performed with a track-based alignment algorithm. Its goal is to provide an accurate description of the detector geometry such that track parameters are accurately determined and free from biases. Its software implementation is modular and configurable, with a clear separation of the alignment algorithm from the detector-system specifics and the database handling. The alignment must cope with the rapid movements of the detector as well as with the slow drift of the different mechanical units. Prompt alignment constants are derived for every run at the calibration stage. These sets of constants are then dynamically split from the beginning of the run into many chunks, allowing the tracker geometry to be described as it evolves with time. The alignment of the Inner Detector is validated and improved by studying resonance decays (Z and J/psi to mu+mu-), as well as by using information from the calorimeter system with the E/p method with electrons. A detailed study of these res...

  18. Pairwise Sequence Alignment Library

    Energy Technology Data Exchange (ETDEWEB)

    2015-05-20

    Vector extensions, such as SSE, have been part of the x86 CPU since the 1990s, with applications in graphics, signal processing, and scientific computing. Although many algorithms and applications can naturally benefit from automatic vectorization techniques, many others remain difficult to vectorize due to their dependence on irregular data structures, dense branch operations, or data dependencies. Sequence alignment, one of the most widely used operations in bioinformatics workflows, has a computational footprint that features complex data dependencies. The trend of widening vector registers adversely affects the state-of-the-art sequence alignment algorithm based on striped data layouts. Therefore, a novel SIMD implementation of a parallel scan-based sequence alignment algorithm that can better exploit wider SIMD units was implemented as part of the Parallel Sequence Alignment Library (parasail). Parasail features: reference implementations of all known vectorized sequence alignment approaches; implementations of the Smith-Waterman (SW), semi-global (SG), and Needleman-Wunsch (NW) alignment algorithms; implementations across all modern CPU instruction sets, including AVX2 and KNC; and language interfaces for C/C++ and Python.
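    For readers unfamiliar with the underlying recurrence, here is a minimal scalar Needleman-Wunsch score computation; parasail's actual implementations are vectorized C with striped or scan-based layouts, and the scoring parameters below are arbitrary illustrative choices:

    ```python
    def needleman_wunsch(a, b, match=2, mismatch=-1, gap=-2):
        """Textbook dynamic-programming global alignment score (scalar, not SIMD)."""
        n, m = len(a), len(b)
        # DP matrix with gap-initialised first row and column
        H = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            H[i][0] = i * gap
        for j in range(1, m + 1):
            H[0][j] = j * gap
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                H[i][j] = max(diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
        return H[n][m]

    score = needleman_wunsch("GATTACA", "GCATGCU")
    ```

    The data dependency the abstract refers to is visible in the inner loop: each cell needs its left, upper and diagonal neighbours, which is what makes naive vectorization hard.
    
    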

  19. Improved Monkey-King Genetic Algorithm for Solving Large Winner Determination in Combinatorial Auction

    Science.gov (United States)

    Li, Yuzhong

    Solving the winner determination problem (WDP) with a genetic algorithm becomes difficult when the numbers of bids and items are large: under different bid distributions the search space is vast, the constraints are complex, and infeasible solutions are easily produced, all of which degrade the efficiency and solution quality of the algorithm. This paper presents an improved Monkey-King Genetic Algorithm (MKGA) comprising three operators (preprocessing, bid insertion, and exchange recombination) together with a Monkey-King elite-preservation strategy. Experimental results show that the improved MKGA outperforms a standard GA in required population size and computation, and that it can solve, with better results, instances that a traditional branch-and-bound algorithm struggles with.

  20. Application of genetic algorithms for determination biological half-life of 137 Cs in milk

    International Nuclear Information System (INIS)

    Pantelic, G.

    1998-01-01

    A genetic algorithm, an optimization method based on natural-selection mechanisms, was used to determine the biological half-life of ¹³⁷Cs in milk after the Chernobyl accident, based on a two-compartment linear system model. Genetic algorithms operate on populations of strings; reproduction, crossover and mutation are applied to successive string populations to create new ones. The model parameters are estimated by minimizing the squared differences between the fitting function and the experimental data. The calculated biological half-life of ¹³⁷Cs in milk is (32 ± …) days (author)
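    To make the estimation step concrete, the sketch below recovers a half-life from synthetic activity data; note it substitutes a plain log-linear least-squares fit for the paper's genetic algorithm, and a one-compartment decay model for its two-compartment system, so it only illustrates the objective being minimized:

    ```python
    import math

    # synthetic 137Cs activity readings (arbitrary units) sampled every 10 days,
    # generated from an assumed half-life of 32 days with no noise
    true_half_life = 32.0
    days = [0, 10, 20, 30, 40, 50, 60]
    activity = [100.0 * 0.5 ** (t / true_half_life) for t in days]

    # one-compartment model A(t) = A0 * exp(-lambda t); a log-linear
    # least-squares fit recovers lambda, hence T_1/2 = ln 2 / lambda
    xs = days
    ys = [math.log(a) for a in activity]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) / \
            sum((x - mean_x) ** 2 for x in xs)
    half_life = math.log(2) / -slope
    ```

    In the paper's setting the fitted function is the two-compartment response and the search over its parameters is done by the genetic algorithm rather than this closed-form regression.
    
    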

  1. Aligning method with theory: a comparison of two approaches to modeling the social determinants of health.

    Science.gov (United States)

    O'Campo, Patricia; Urquia, Marcelo

    2012-12-01

    There is increasing interest in the study of the social determinants of maternal and child health. While there has been growth in the theory and empirical evidence about social determinants, less attention has been paid to the kind of modeling that should be used to understand the impact of social exposures on well-being. We analyzed data from the nationwide 2006 Canadian Maternity Experiences Survey to compare the pervasive disease-specific model to a model that captures the generalized health impact (GHI) of social exposures, namely low socioeconomic position. The GHI model uses a composite of adverse conditions that stem from low socioeconomic position: adverse birth outcomes, postpartum depression, severe abuse, stressful life events, and hospitalization during pregnancy. Adjusted prevalence ratios and 95% confidence intervals from disease-specific models for low income (social determinants of health.

  2. A Self-Adaptive Fuzzy c-Means Algorithm for Determining the Optimal Number of Clusters

    Science.gov (United States)

    Wang, Zhihao; Yi, Jing

    2016-01-01

    To address the shortcoming that the fuzzy c-means algorithm (FCM) needs to know the number of clusters in advance, this paper proposed a new self-adaptive method to determine the optimal number of clusters. Firstly, a density-based algorithm was put forward. The algorithm, according to the characteristics of the dataset, automatically determines the possible maximum number of clusters instead of using the empirical rule √n, and obtains the optimal initial cluster centroids, improving on the limitation of FCM that randomly selected cluster centroids lead the convergence result to a local minimum. Secondly, by introducing a penalty function, this paper proposed a new fuzzy clustering validity index based on fuzzy compactness and separation, which ensures that, as the number of clusters approaches the number of objects in the dataset, the value of the clustering validity index does not monotonically decrease towards zero, so the optimal number of clusters does not lose robustness and decisiveness. Then, based on these studies, a self-adaptive FCM algorithm was put forward to estimate the optimal number of clusters by an iterative trial-and-error process. Finally, experiments were done on the UCI, KDD Cup 1999, and synthetic datasets, which showed that the method not only effectively determines the optimal number of clusters, but also reduces the iterations of FCM while giving a stable clustering result. PMID:28042291
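    The core FCM iteration that the self-adaptive method builds on (alternating membership and centroid updates for a fixed cluster count c) can be sketched with NumPy; the dataset and the hand-picked c=2 are illustrative, whereas the paper's contribution is estimating c automatically:

    ```python
    import numpy as np

    def fuzzy_c_means(X, c, m=2.0, iters=150, seed=0):
        """Plain FCM with fixed c: alternate fuzzy-membership and centroid updates."""
        rng = np.random.default_rng(seed)
        n = len(X)
        U = rng.random((n, c))
        U /= U.sum(axis=1, keepdims=True)          # each row of memberships sums to 1
        for _ in range(iters):
            Um = U ** m
            centroids = (Um.T @ X) / Um.sum(axis=0)[:, None]
            # distances from every point to every centroid, shape (n, c)
            d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2) + 1e-12
            # standard update: u_ik = 1 / sum_j (d_ik / d_ij)^(2/(m-1))
            U = 1.0 / ((d[:, :, None] / d[:, None, :]) ** (2.0 / (m - 1))).sum(axis=2)
        return centroids, U

    # two well-separated 2-D blobs around (0,0) and (5,5)
    X = np.vstack([np.random.default_rng(1).normal(0.0, 0.2, (30, 2)),
                   np.random.default_rng(2).normal(5.0, 0.2, (30, 2))])
    centroids, U = fuzzy_c_means(X, c=2)
    ```

    The self-adaptive method would wrap this loop in a trial-and-error search over c, scoring each run with the proposed validity index.
    
    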

  3. MUON DETECTORS: ALIGNMENT

    CERN Multimedia

    G. Gomez

    Since December, the muon alignment community has focused on analyzing the data recorded so far in order to produce new DT and CSC Alignment Records for the second reprocessing of CRAFT data. Two independent algorithms were developed which align the DT chambers using global tracks, thus providing, for the first time, a relative alignment of the barrel with respect to the tracker. These results are an important ingredient for the second CRAFT reprocessing and allow, for example, a more detailed study of any possible mis-modelling of the magnetic field in the muon spectrometer. Both algorithms are constructed in such a way that the resulting alignment constants are not affected, to first order, by any such mis-modelling. The CSC chambers have not yet been included in this global track-based alignment due to a lack of statistics, since only a few cosmics go through the tracker and the CSCs. A strategy exists to align the CSCs using the barrel as a reference until collision tracks become available. Aligning the ...

  4. Black hole algorithm for determining model parameter in self-potential data

    Science.gov (United States)

    Sungkono; Warnana, Dwa Desa

    2018-01-01

    Analysis of self-potential (SP) data is increasingly popular among geophysical methods due to its relevance in many cases. However, the inversion of SP data is often highly nonlinear. Consequently, local search algorithms, commonly based on gradient approaches, have often failed to find the global optimum solution in nonlinear problems. The black hole algorithm (BHA) was proposed as a solution to such problems. As the name suggests, the algorithm is constructed by analogy with black hole phenomena. This paper investigates the application of BHA to the inversion of field and synthetic self-potential (SP) data. The inversion results show that BHA accurately determines model parameters and model uncertainty, indicating that BHA has high potential as an innovative approach to SP data inversion.
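    A minimal sketch of the black-hole metaphor (the best candidate becomes the black hole, the other stars drift toward it, and stars crossing the event horizon are replaced by fresh random ones) might look as follows; the event-horizon radius formula and the toy cost function are common textbook choices, not necessarily those used in the paper:

    ```python
    import random

    def black_hole_optimize(cost, bounds, n_stars=30, iters=300, seed=3):
        """Minimal black hole algorithm: stars move toward the current best
        candidate; stars that cross the event horizon are re-initialized."""
        rng = random.Random(seed)
        stars = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_stars)]
        for _ in range(iters):
            fitness = [cost(s) for s in stars]
            bh = stars[fitness.index(min(fitness))][:]       # best star is the black hole
            radius = min(fitness) / (sum(fitness) + 1e-12)   # event-horizon radius
            for i, s in enumerate(stars):
                # drift each coordinate a random fraction of the way toward the hole
                s[:] = [x + rng.random() * (b - x) for x, b in zip(s, bh)]
                dist = sum((x - b) ** 2 for x, b in zip(s, bh)) ** 0.5
                if dist < radius and s != bh:
                    stars[i] = [rng.uniform(lo, hi) for lo, hi in bounds]
        fitness = [cost(s) for s in stars]
        return stars[fitness.index(min(fitness))]

    # toy stand-in for an SP misfit: a quadratic with minimum at (1, 2)
    best = black_hole_optimize(lambda p: (p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2,
                               bounds=[(-5, 5), (-5, 5)])
    ```

    For SP inversion, `cost` would be the misfit between observed and forward-modeled self-potential anomalies over the model parameters (depth, polarization angle, and so on).
    
    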

  5. Mobile and replicated alignment of arrays in data-parallel programs

    Science.gov (United States)

    Chatterjee, Siddhartha; Gilbert, John R.; Schreiber, Robert

    1993-01-01

    When a data-parallel language like FORTRAN 90 is compiled for a distributed-memory machine, aggregate data objects (such as arrays) are distributed across the processor memories. The mapping determines the amount of residual communication needed to bring operands of parallel operations into alignment with each other. A common approach is to break the mapping into two stages: first, an alignment that maps all the objects to an abstract template, and then a distribution that maps the template to the processors. We solve two facets of the problem of finding alignments that reduce residual communication: we determine alignments that vary in loops, and objects that should have replicated alignments. We show that loop-dependent mobile alignment is sometimes necessary for optimum performance, and we provide algorithms with which a compiler can determine good mobile alignments for objects within do loops. We also identify situations in which replicated alignment is either required by the program itself (via spread operations) or can be used to improve performance. We propose an algorithm based on network flow that determines which objects to replicate so as to minimize the total amount of broadcast communication in replication. This work on mobile and replicated alignment extends our earlier work on determining static alignment.

  6. Thickness determination in textile material design: dynamic modeling and numerical algorithms

    International Nuclear Information System (INIS)

    Xu, Dinghua; Ge, Meibao

    2012-01-01

    Textile material design is of paramount importance in the study of functional clothing design. It is therefore important to determine the dynamic heat and moisture transfer characteristics in the human body–clothing–environment system, which directly determine the heat–moisture comfort level of the human body. Based on a model of dynamic heat and moisture transfer with condensation in porous fabric at low temperature, this paper presents a new inverse problem of textile thickness determination (IPTTD). Adopting the idea of the least-squares method, we formulate the IPTTD into a function minimization problem. By means of the finite-difference method, quasi-solution method and direct search method for one-dimensional minimization problems, we construct iterative algorithms of the approximated solution for the IPTTD. Numerical simulation results validate the formulation of the IPTTD and demonstrate the effectiveness of the proposed numerical algorithms. (paper)

  7. Determining the Effectiveness of Incorporating Geographic Information Into Vehicle Performance Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Sera White

    2012-04-01

    This thesis presents a research study using one year of driving data obtained from plug-in hybrid electric vehicles (PHEV) located in Sacramento and San Francisco, California to determine the effectiveness of incorporating geographic information into vehicle performance algorithms. Sacramento and San Francisco were chosen because of the availability of high resolution (1/9 arc second) digital elevation data. First, I present a method for obtaining instantaneous road slope, given a latitude and longitude, and introduce its use into common driving intensity algorithms. I show that for trips characterized by >40m of net elevation change (from key on to key off), the use of instantaneous road slope significantly changes the results of driving intensity calculations. For trips exhibiting elevation loss, algorithms ignoring road slope overestimated driving intensity by as much as 211 Wh/mile, while for trips exhibiting elevation gain these algorithms underestimated driving intensity by as much as 333 Wh/mile. Second, I describe and test an algorithm that incorporates vehicle route type into computations of city and highway fuel economy. Route type was determined by intersecting trip GPS points with ESRI StreetMap road types and assigning each trip as either city or highway route type according to whichever road type comprised the largest distance traveled. The fuel economy results produced by the geographic classification were compared to the fuel economy results produced by algorithms that assign route type based on average speed or driving style. Most results were within 1 mile per gallon (≈3%) of one another; the largest difference was 1.4 miles per gallon for charge depleting highway trips. The methods for acquiring and using geographic data introduced in this thesis will enable other vehicle technology researchers to incorporate geographic data into their research problems.

  8. Parallel algorithm for determining motion vectors in ice floe images by matching edge features

    Science.gov (United States)

    Manohar, M.; Ramapriyan, H. K.; Strong, J. P.

    1988-01-01

    A parallel algorithm is described to determine motion vectors of ice floes using time sequences of images of the Arctic ocean obtained from the Synthetic Aperture Radar (SAR) instrument flown on-board the SEASAT spacecraft. Researchers describe a parallel algorithm which is implemented on the MPP for locating corresponding objects based on their translationally and rotationally invariant features. The algorithm first approximates the edges in the images by polygons or sets of connected straight-line segments. Each such edge structure is then reduced to a seed point. Associated with each seed point are the descriptions (lengths, orientations and sequence numbers) of the lines constituting the corresponding edge structure. A parallel matching algorithm is used to match packed arrays of such descriptions to identify corresponding seed points in the two images. The matching algorithm is designed such that fragmentation and merging of ice floes are taken into account by accepting partial matches. The technique has been demonstrated to work on synthetic test patterns and real image pairs from SEASAT in times ranging from 0.5 to 0.7 seconds for 128 x 128 images.

  9. A New Algorithm for Determining Ultimate Pit Limits Based on Network Optimization

    Directory of Open Access Journals (Sweden)

    Ali Asghar Khodayari

    2013-12-01

    One of the main concerns of the mining industry is to determine ultimate pit limits. The final pit is a collection of blocks which can be removed with maximum profit while respecting restrictions on the slope of the mine's walls. The size, location and final shape of an open pit are very important in siting waste dumps, stockpiles, processing plants, access roads and other surface facilities, as well as in developing a production program. There are numerous methods for designing ultimate pit limits. Some of these methods, such as the floating cone algorithm, are heuristic and do not guarantee optimum pit limits. Other methods, like the Lerchs–Grossmann algorithm, are rigorous and always generate the true optimum pit limits. In this paper, a new rigorous algorithm is introduced. The main logic in this method is that only positive blocks which can pay the costs of their overlying non-positive blocks are able to appear in the final pit. Those costs may be paid either by the positive block itself or jointly with other positive blocks which share the same overlying negative blocks. This logic is formulated using a network model as a Linear Programming (LP) problem. The algorithm can be applied to two- and three-dimensional block models. Since many commercial programs are available for solving LP problems, pit limits in large block models can be determined easily by this method.
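    To make the contrast concrete, here is a sketch of the heuristic floating-cone idea the abstract mentions as the non-rigorous baseline (not the paper's LP-based algorithm), on a toy 2D block model with a 45-degree slope constraint; the block values are invented for illustration:

    ```python
    def floating_cone_pit(model):
        """Heuristic floating cone on a 2D block model: for each positive block,
        if its value covers the cost of the 45-degree cone of overlying blocks,
        mine the whole cone. Unlike the LP formulation, this can miss the optimum
        when several positive blocks must jointly pay for shared waste."""
        rows, cols = len(model), len(model[0])
        mined = [[False] * cols for _ in range(rows)]
        for r in range(rows):
            for c in range(cols):
                if model[r][c] <= 0:
                    continue
                # 45-degree cone above block (r, c), including the block itself
                cone = [(i, j) for i in range(r + 1)
                        for j in range(max(0, c - (r - i)),
                                       min(cols, c + (r - i) + 1))]
                value = sum(model[i][j] for i, j in cone if not mined[i][j])
                if value > 0:
                    for i, j in cone:
                        mined[i][j] = True
        return mined

    # top row is waste (-1 each); one ore block (+10) sits below
    model = [[-1, -1, -1, -1],
             [0, 10, 0, 0]]
    pit = floating_cone_pit(model)
    ```

    Here the ore block pays for its three overlying waste blocks (10 - 3 > 0), so they enter the pit, while the fourth waste block stays out; the paper's LP formulation reaches the same logic rigorously for cases where cones overlap.
    
    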

  10. Direct determination of resonance energy transfer in photolyase: structural alignment for the functional state.

    Science.gov (United States)

    Tan, Chuang; Guo, Lijun; Ai, Yuejie; Li, Jiang; Wang, Lijuan; Sancar, Aziz; Luo, Yi; Zhong, Dongping

    2014-11-13

    The photoantenna is essential to energy transduction in photoinduced biological machinery. A photoenzyme, photolyase, has a light-harvesting pigment of methenyltetrahydrofolate (MTHF) that transfers its excitation energy to the catalytic flavin cofactor FADH¯ to enhance DNA-repair efficiency. Here we report our systematic characterization and direct determination of the ultrafast dynamics of resonance energy transfer from excited MTHF to three flavin redox states in E. coli photolyase by capturing the intermediates formed through the energy transfer, thus excluding the electron-transfer quenching pathway. We observed 170 ps for excitation energy transferring to the fully reduced hydroquinone FADH¯, 20 ps to the fully oxidized FAD, and 18 ps to the neutral semiquinone FADH•, and the corresponding orientation factors (κ²) were determined to be 2.84, 1.53 and 1.26, respectively, matching our calculated theoretical values. Thus, under physiological conditions and over the course of evolution, photolyase has adopted the optimized orientation of its photopigment to efficiently convert solar energy for repair of damaged DNA.

  11. THE ALGORITHM OF DETERMINATION OF EYE FUNDUS VESSELS BLOOD FLOW CHARACTERISTICS ON VIDEOSEQUENCE

    Directory of Open Access Journals (Sweden)

    O. V. Nedzvedz

    2018-01-01

    A method is presented for determining dynamic characteristics, such as changes in vessel diameter and the linear and volume blood velocities, in the vessels of the eye fundus. Such characteristics make it possible to detect blood-flow changes in the microvasculature that affect the blood flow in the brain, kidneys and coronary vessels. The developed algorithm includes four stages: stabilization of the video sequence, segmentation of the vessels with the help of a neural network, determination of the instantaneous velocity in the vessels based on the optical flow, and analysis of the results.

  12. Automatic gender determination from 3D digital maxillary tooth plaster models based on the random forest algorithm and discrete cosine transform.

    Science.gov (United States)

    Akkoç, Betül; Arslan, Ahmet; Kök, Hatice

    2017-05-01

    One of the first stages in the identification of an individual is gender determination, through which the search spectrum can be reduced. In disasters such as accidents or fires, which can render identification difficult, durable teeth are an important source for identification. This study proposes a smart system that can automatically determine gender using 3D digital maxillary tooth plaster models. The study group was composed of 40 Turkish individuals (20 female, 20 male) between the ages of 21 and 24. Using the iterative closest point (ICP) algorithm, tooth models were aligned, and after the segmentation process, models were transformed into depth images. The local discrete cosine transform (DCT) was used for feature extraction, and the random forest (RF) algorithm for classification. Classification was performed using 30 different seeds for random-generator values and 10-fold cross-validation. An average classification accuracy (CA) of 85.166% was obtained, with an area under the ROC curve (AUC) of 91.75%. This multi-disciplinary study spans computer science, medicine and dentistry, and has the capacity to extend the field of gender determination from teeth. Copyright © 2017 Elsevier B.V. All rights reserved.

  13. Performance Analysis of Combined Methods of Genetic Algorithm and K-Means Clustering in Determining the Value of Centroid

    Science.gov (United States)

    Adya Zizwan, Putra; Zarlis, Muhammad; Budhiarti Nababan, Erna

    2017-12-01

    The determination of the centroid in the K-Means algorithm directly affects the quality of the clustering results, and determining centroids with random numbers has many weaknesses. The GenClust algorithm combines genetic algorithms with K-Means, using the genetic algorithm to determine the centroid of each cluster: 50% of the chromosomes are obtained through deterministic calculation and 50% from random-number generation. This study modifies GenClust so that 100% of the chromosomes are obtained through deterministic calculation. The results compare the performance, expressed as Mean Square Error, of centroid determination in K-Means using the original GenClust method, the modified GenClust method, and classic K-Means.

  14. Software alignment of the LHCb inner tracker sensors

    International Nuclear Information System (INIS)

    Maciuc, Florin

    2009-01-01

    This work uses the Millepede linear alignment method, which is essentially a χ² minimization algorithm, to determine simultaneously between 76 and 476 alignment parameters and several million track parameters. For the case of non-linear alignment models, Millepede is embedded in a Newton-Raphson iterative procedure. If needed, a more robust approach is provided by adding quasi-Newton steps which minimize the approximate χ² model function. The alignment apparatus is applied to locally align the LHCb Inner Tracker sensors in an a priori fixed coordinate system. An analytic measurement model was derived as a function of track parameters and alignment parameters for two cases: null and non-null magnetic field. The alignment problem is equivalent to solving a linear system of equations, and usually a matrix inversion is required. In general, as a consequence of global degrees of freedom or poorly constrained modes, the alignment matrix is singular or near-singular. The global degrees of freedom are obtained directly from χ²-function invariant transformations, and in parallel by an alignment-matrix diagonalization followed by extraction of the least constrained modes. The procedure allows the local alignment of the Inner Tracker to be properly defined. Using Monte Carlo data, the outlined procedure reconstructs the position of the IT sensors to micrometer precision or better. For rotations, equivalent precision was obtained. (orig.)
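    The core idea (a single χ² fit over both track and alignment parameters, with weak global modes removed by fixing a reference) can be illustrated with a toy one-dimensional telescope. The geometry, the single free shift, and the dense least-squares solve are simplifications for illustration; Millepede eliminates the track parameters analytically precisely so that millions of them remain tractable:

    ```python
    import numpy as np

    # Toy global alignment fit: straight tracks y = a + b*z cross three sensor
    # planes; the middle plane carries an unknown shift. The outer planes are
    # fixed to remove the global (weak) modes, mirroring the treatment of
    # unconstrained degrees of freedom described in the abstract.
    rng = np.random.default_rng(0)
    z = np.array([0.0, 1.0, 2.0])         # sensor plane positions
    true_shift = 0.3                      # misalignment of the middle plane
    n_tracks = 40

    # unknowns: [shift, a_0, b_0, a_1, b_1, ...]
    A = np.zeros((3 * n_tracks, 1 + 2 * n_tracks))
    y = np.zeros(3 * n_tracks)
    for t in range(n_tracks):
        a, b = rng.normal(0.0, 1.0, 2)
        for j, zj in enumerate(z):
            r = 3 * t + j
            y[r] = a + b * zj + (true_shift if j == 1 else 0.0)  # measured hit
            if j == 1:
                A[r, 0] = 1.0              # alignment parameter column
            A[r, 1 + 2 * t] = 1.0          # track intercept a_t
            A[r, 2 + 2 * t] = zj           # track slope b_t
    params, *_ = np.linalg.lstsq(A, y, rcond=None)
    fitted_shift = params[0]
    ```

    With noiseless hits the shift is recovered exactly; real alignments add hit resolution, correlations and the near-singular modes the abstract discusses.
    
    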

  15. Software alignment of the LHCb inner tracker sensors

    Energy Technology Data Exchange (ETDEWEB)

    Maciuc, Florin

    2009-04-20

    This work uses the Millepede linear alignment method, which is essentially a χ² minimization algorithm, to determine simultaneously between 76 and 476 alignment parameters and several million track parameters. For the case of non-linear alignment models, Millepede is embedded in a Newton-Raphson iterative procedure. If needed, a more robust approach is provided by adding quasi-Newton steps which minimize the approximate χ² model function. The alignment apparatus is applied to locally align the LHCb Inner Tracker sensors in an a priori fixed coordinate system. An analytic measurement model was derived as a function of track parameters and alignment parameters for two cases: null and non-null magnetic field. The alignment problem is equivalent to solving a linear system of equations, and usually a matrix inversion is required. In general, as a consequence of global degrees of freedom or poorly constrained modes, the alignment matrix is singular or near-singular. The global degrees of freedom are obtained directly from χ²-function invariant transformations, and in parallel by an alignment-matrix diagonalization followed by extraction of the least constrained modes. The procedure allows the local alignment of the Inner Tracker to be properly defined. Using Monte Carlo data, the outlined procedure reconstructs the position of the IT sensors to micrometer precision or better. For rotations, equivalent precision was obtained. (orig.)

  17. MUON DETECTORS: ALIGNMENT

    CERN Multimedia

    Z. Szillasi and G. Gomez.

    2013-01-01

    When CMS is opened up, major components of the Link and Barrel Alignment systems will be removed. This operation, besides allowing for maintenance of the detector underneath, is needed for making interventions that will reinforce the alignment measurements and make the operation of the alignment system more reliable. For that purpose and also for their general maintenance and recalibration, the alignment components will be transferred to the Alignment Lab situated in the ISR area. For the track-based alignment, attention is focused on the determination of systematic uncertainties, which have become dominant, since now there is a large statistics of muon tracks. This will allow for an improved Monte Carlo misalignment scenario and updated alignment position errors, crucial for high-momentum muon analysis such as Z′ searches.

  18. MIDAS. An algorithm for the extraction of modal information from experimentally determined transfer functions

    International Nuclear Information System (INIS)

    Durrans, R.F.

    1978-12-01

    In order to design reactor structures to withstand the large flow and acoustic forces present it is necessary to know something of their dynamic properties. In many cases these properties cannot be predicted theoretically and it is necessary to determine them experimentally. The algorithm MIDAS (Modal Identification for the Dynamic Analysis of Structures) which has been developed at B.N.L. for extracting these structural properties from experimental data is described. (author)

  19. Determining Optimal Replacement Policy with an Availability Constraint via Genetic Algorithms

    OpenAIRE

    Zong, Shengliang; Chai, Guorong; Su, Yana

    2017-01-01

    We develop a model and a genetic algorithm for determining an optimal replacement policy for power equipment subject to Poisson shocks. If the time interval between two consecutive shocks is less than a threshold value, the failed equipment can be repaired. We assume that the operating time after repair is stochastically non-increasing and that the repair time is exponentially distributed with a geometrically increasing mean. Our objective is to minimize the expected average cost under an availability requi...

  20. DETERMINATION OF STEERING WHEEL ANGLES DURING CAR ALIGNMENT BY IMAGE ANALYSIS METHODS

    Directory of Open Access Journals (Sweden)

    M. Mueller

    2016-06-01

    Optical systems for automatic visual inspection are of increasing importance in industrial automation. A new application is the determination of steering-wheel angles during wheel-track setting in the final inspection of car manufacturing. The camera has to be positioned outside the car to avoid interrupting the process, so oblique images of the steering wheel must be acquired. Three different computer-vision approaches are considered in this paper: a 2D shape-based matching (by means of a plane-to-plane rectification of the oblique images and detection of a shape model with a particular rotation), a 3D shape-based matching approach (by means of a series of different perspectives of the spatial shape of the steering wheel derived from a CAD design model), and a point-to-point matching (by means of the extraction of significant elements, e.g. multifunctional buttons, of a steering wheel and a pairwise connection of these points to straight lines). The HALCON system (HALCON, 2016) was used for all software developments and necessary adaptations. As a reference, a mechanical balance with an accuracy of 0.1° was used. The quality assessment was based on two different approaches, a laboratory test and a test during the production process. In the laboratory, standard deviations of ±0.035° (2D shape-based matching), ±0.12° (3D approach) and ±0.029° (point-to-point matching) were obtained. A field test of 291 measurements (27 cars with varying poses and angles of the steering wheel) resulted in a detection rate of 100%, with ±0.48° (2D matching) and ±0.24° (point-to-point matching). Both methods also fulfil the requirement of real-time processing (three measurements per second).

  1. Software alignment of the LHCb Outer Tracker chambers

    Energy Technology Data Exchange (ETDEWEB)

    Deissenroth, Marc

    2010-04-21

    This work presents an alignment algorithm that was developed to precisely determine the positions of the LHCb Outer Tracker detector elements. The algorithm is based on the reconstruction of tracks and exploits the fact that misalignments of the detector change the residual between a measured hit and the reconstructed track. It considers different levels of granularity of the Outer Tracker geometry and fully accounts for the correlations of all elements which are imposed by particle trajectories. In extensive tests, simulated shifts and rotations for different levels of detector granularity have been used as input to the track reconstruction and alignment procedure. With about 260 000 tracks, the misalignments are recovered with a statistical precision of O(10-100 μm) for the translational degrees of freedom and of O(10⁻²-10⁻¹ mrad) for rotations. A study has been performed to determine the impact of Outer Tracker misalignments on the performance of the track reconstruction algorithms. It shows that the achieved statistical precision does not decrease the track reconstruction performance in a significant way. During the commissioning of the LHCb detector, cosmic-ray muon events have been collected, analysed and used for the first alignment of the 216 Outer Tracker modules. The module positions have been determined to within ≈ 90 μm. The developed track-based alignment algorithm has demonstrated its reliability and is one of the core algorithms used for the precise determination of the positions of the LHCb Outer Tracker elements. (orig.)

  2. Software alignment of the LHCb Outer Tracker chambers

    International Nuclear Information System (INIS)

    Deissenroth, Marc

    2010-01-01

    This work presents an alignment algorithm that was developed to precisely determine the positions of the LHCb Outer Tracker detector elements. The algorithm is based on the reconstruction of tracks and exploits that misalignments of the detector change the residual between a measured hit and the reconstructed track. It considers different levels of granularities of the Outer Tracker geometry and fully accounts for correlations of all elements which are imposed by particle trajectories. In extensive tests, simulated shifts and rotations for different levels of the detector granularity have been used as input to the track reconstruction and alignment procedure. With about 260 000 tracks the misalignments are recovered with a statistical precision of O(10 - 100 μm) for the translational degrees of freedom and of O(10⁻² - 10⁻¹ mrad) for rotations. A study has been performed to determine the impact of Outer Tracker misalignments on the performance of the track reconstruction algorithms. It shows that the achieved statistical precision does not decrease the track reconstruction performance in a significant way. During the commissioning of the LHCb detector, cosmic ray muon events have been collected. The events have been analysed and used for the first alignment of the 216 Outer Tracker modules. The module positions have been determined within ∼ 90 μm. The developed track based alignment algorithm has demonstrated its reliability and is one of the core algorithms which are used for the precise determination of the positions of the LHCb Outer Tracker elements. (orig.)

  3. Algorithm for determining two-periodic steady-states in AC machines directly in time domain

    Directory of Open Access Journals (Sweden)

    Sobczyk Tadeusz J.

    2016-09-01

    This paper describes an algorithm for finding steady states in AC machines for cases in which they are two-periodic in nature. The algorithm enables the steady-state solution to be identified directly in the time domain, despite the fact that two-periodic waveforms do not repeat in any finite time interval. The basis of the algorithm is a discrete differential operator that specifies the instantaneous values of the derivative of a two-periodic function at a selected set of points on the basis of the values of that function at the same set of points. This makes it possible to develop algebraic equations defining the steady-state solution at the chosen point set for the nonlinear differential equations describing AC machines, in which the electrical and mechanical equations must be solved together. That set of values allows the steady-state solution to be determined at any time instant up to infinity. The algorithm described in this paper is competitive with the approach known in the literature, which is based on the harmonic balance method operating in the frequency domain.
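A discrete operator that returns derivative values at a set of points from the function values at the same points can be illustrated, for the simpler single-period case with equally spaced samples, by the classical Fourier (spectral) differentiation matrix. This is only a sketch of the underlying idea; the paper's operator is built for two-periodic waveforms, which is more general.

```python
import math

def periodic_diff_matrix(n):
    """Spectral differentiation matrix for n equally spaced points on [0, 2*pi).

    Row j expresses the derivative at x_j = 2*pi*j/n as a linear combination
    of the function values at all n sample points (n even).
    """
    h = 2.0 * math.pi / n
    d = [[0.0] * n for _ in range(n)]
    for j in range(n):
        for k in range(n):
            if j != k:
                d[j][k] = 0.5 * (-1) ** (j - k) / math.tan((j - k) * h / 2.0)
    return d

def apply(matrix, vec):
    return [sum(m * v for m, v in zip(row, vec)) for row in matrix]

n = 16
xs = [2.0 * math.pi * j / n for j in range(n)]
d = periodic_diff_matrix(n)
f = [math.sin(x) for x in xs]   # sample a periodic function...
df = apply(d, f)                # ...and differentiate it discretely
err = max(abs(v - math.cos(x)) for v, x in zip(df, xs))
```

For band-limited functions such as sin the discrete derivative is exact to machine precision, which is what makes such operators usable as algebraic replacements for the time derivative in steady-state equations.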

  4. Floyd-warshall algorithm to determine the shortest path based on android

    Science.gov (United States)

    Ramadiani; Bukhori, D.; Azainil; Dengen, N.

    2018-04-01

    The development of technology has made all areas of life easier, one of which is the ease of obtaining geographic information. Geographic information is used in many ways according to need, for example in digital map learning, navigation systems, area observation, and much more. With the support of adequate infrastructure, almost no one need ever get lost on the way to a destination, even in foreign places or places never visited before. For this reason, many institutions and business entities use the technology to improve services to consumers and to streamline their production processes. Speaking of efficiency, there are many elements related to efficiency in navigation systems, and one of them is efficiency in terms of distance. The shortest-distance determination algorithm used in this research is the Floyd-Warshall algorithm. The Floyd-Warshall algorithm finds the fastest path and the shortest distance between two nodes, while the program built here is intended to find the path through more than two nodes.
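The all-pairs behaviour that makes Floyd-Warshall suitable for multi-node routing can be sketched in a few lines. This is a generic textbook implementation with path reconstruction, not the authors' Android code:

```python
INF = float("inf")

def floyd_warshall(weights):
    """All-pairs shortest paths.  `weights[i][j]` is the edge cost from i to j
    (INF if there is no edge).  Returns (dist, nxt); `nxt` supports path
    reconstruction."""
    n = len(weights)
    dist = [row[:] for row in weights]
    nxt = [[j if weights[i][j] != INF else None for j in range(n)]
           for i in range(n)]
    for i in range(n):
        dist[i][i] = 0.0
        nxt[i][i] = i
    for k in range(n):                       # allow k as an intermediate node
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
                    nxt[i][j] = nxt[i][k]
    return dist, nxt

def path(nxt, u, v):
    """Reconstruct the vertex sequence of the shortest u -> v path."""
    if nxt[u][v] is None:
        return []
    p = [u]
    while u != v:
        u = nxt[u][v]
        p.append(u)
    return p
```

Because `dist` and `nxt` cover every vertex pair after a single O(n³) pass, routes through any number of intermediate stops can be read off without re-running the search.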

  5. MUON DETECTORS: ALIGNMENT

    CERN Multimedia

    Gervasio Gomez

    The main progress of the muon alignment group since March has been in the refinement of both the track-based alignment for the DTs and the hardware-based alignment for the CSCs. For DT track-based alignment, there has been significant improvement in the internal alignment of the superlayers inside the DTs. In particular, the distance between superlayers is now corrected, eliminating the residual dependence on track impact angles, and good agreement is found between survey and track-based corrections. The new internal geometry has been approved to be included in the forthcoming reprocessing of CRAFT samples. The alignment of DTs with respect to the tracker using global tracks has also improved significantly, since the algorithms use the latest B-field mapping, better run selection criteria, optimized momentum cuts, and an alignment is now obtained for all six degrees of freedom (three spatial coordinates and three rotations) of the aligned DTs. This work is ongoing and at a stage where we are trying to unders...

  6. Determining Optimal Replacement Policy with an Availability Constraint via Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Shengliang Zong

    2017-01-01

    We develop a model and a genetic algorithm for determining an optimal replacement policy for power equipment subject to Poisson shocks. If the time interval between two consecutive shocks is less than a threshold value, the failed equipment can be repaired. We assume that the operating time after repair is stochastically nonincreasing and that the repair time is exponentially distributed with a geometrically increasing mean. Our objective is to minimize the expected average cost under an availability requirement. Based on this average cost function, we propose a genetic algorithm to locate the optimal replacement policy N that minimizes the average cost rate. The results show that the GA is effective and efficient in finding optimal solutions. The availability of equipment has a significant effect on the optimal replacement policy. Many practical systems fit the model developed in this paper.

  7. Algorithm based on the Thomson problem for determination of equilibrium structures of metal nanoclusters

    Science.gov (United States)

    Arias, E.; Florez, E.; Pérez-Torres, J. F.

    2017-06-01

    A new algorithm for the determination of equilibrium structures suitable for metal nanoclusters is proposed. The algorithm performs a stochastic search of the minima associated with the nuclear potential energy function restricted to a sphere (similar to the Thomson problem), in order to guess configurations of the nuclear positions. Subsequently, the guessed configurations are further optimized driven by the total energy function using the conventional gradient descent method. This methodology is equivalent to using the valence shell electron pair repulsion model in guessing initial configurations in the traditional molecular quantum chemistry. The framework is illustrated in several clusters of increasing complexity: Cu7, Cu9, and Cu11 as benchmark systems, and Cu38 and Ni9 as novel systems. New equilibrium structures for Cu9, Cu11, Cu38, and Ni9 are reported.
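The two-stage idea in this abstract — guess configurations by minimizing a repulsive potential restricted to a sphere, then refine — can be illustrated for the pure Thomson problem itself. The sketch below does random restarts followed by projected gradient descent on the unit sphere; it is a toy illustration of stage one only, not the authors' method, whose second stage optimizes the full total-energy function of the cluster.

```python
import math, random

def energy(pts):
    """Coulomb repulsion energy sum(1/r_ij) for unit charges on the sphere."""
    e = 0.0
    for i in range(len(pts)):
        for j in range(i + 1, len(pts)):
            e += 1.0 / math.dist(pts[i], pts[j])
    return e

def normalize(p):
    r = math.sqrt(sum(c * c for c in p))
    return [c / r for c in p]

def thomson_minimum(n, restarts=5, steps=2000, lr=0.05, seed=0):
    """Stochastic search for the minimum-energy configuration of n points on
    the unit sphere: random starts, then projected gradient descent."""
    rng = random.Random(seed)
    best = None
    for _ in range(restarts):
        pts = [normalize([rng.gauss(0, 1) for _ in range(3)]) for _ in range(n)]
        for _ in range(steps):
            grads = [[0.0, 0.0, 0.0] for _ in range(n)]
            for i in range(n):
                for j in range(n):
                    if i == j:
                        continue
                    diff = [a - b for a, b in zip(pts[i], pts[j])]
                    d = math.sqrt(sum(c * c for c in diff))
                    for k in range(3):       # gradient of 1/d pushes charges apart
                        grads[i][k] -= diff[k] / d ** 3
            # descend, then project back onto the sphere
            pts = [normalize([p - lr * g for p, g in zip(pt, gr)])
                   for pt, gr in zip(pts, grads)]
        e = energy(pts)
        if best is None or e < best:
            best = e
    return best
```

For small n the known optima are recovered (two antipodal points give energy 1/2; three points form an equilateral triangle with energy √3), which is why such configurations make reasonable starting guesses for a subsequent full-energy optimization.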

  8. Band alignment of TiO{sub 2}/FTO interface determined by X-ray photoelectron spectroscopy: Effect of annealing

    Energy Technology Data Exchange (ETDEWEB)

    Fan, Haibo, E-mail: hbfan@nwu.edu.cn, E-mail: liusz@snnu.edu.cn [Key Laboratory of Applied Surface and Colloid Chemistry, National Ministry of Education, Shaanxi Engineering Lab for Advanced Energy Technology, School of Materials Science and Engineering, Shaanxi Normal University, Xi’an 710119 (China); School of Physics, Northwest University, Xi’an 710069 (China); Yang, Zhou; Ren, Xianpei; Gao, Fei [Key Laboratory of Applied Surface and Colloid Chemistry, National Ministry of Education, Shaanxi Engineering Lab for Advanced Energy Technology, School of Materials Science and Engineering, Shaanxi Normal University, Xi’an 710119 (China); Yin, Mingli [Key Laboratory of Applied Surface and Colloid Chemistry, National Ministry of Education, Shaanxi Engineering Lab for Advanced Energy Technology, School of Materials Science and Engineering, Shaanxi Normal University, Xi’an 710119 (China); School of Science, Xi’an Technological University, Xi’an, Shaanxi 710062 (China); Liu, Shengzhong, E-mail: hbfan@nwu.edu.cn, E-mail: liusz@snnu.edu.cn [Key Laboratory of Applied Surface and Colloid Chemistry, National Ministry of Education, Shaanxi Engineering Lab for Advanced Energy Technology, School of Materials Science and Engineering, Shaanxi Normal University, Xi’an 710119 (China); Dalian Institute of Chemical Physics, Dalian National Laboratory for Clean Energy, Chinese Academy of Sciences, Dalian, 116023 (China)

    2016-01-15

    The energy band alignment between pulsed-laser-deposited TiO2 and FTO was characterized for the first time using high-resolution X-ray photoelectron spectroscopy. A valence band offset (VBO) of 0.61 eV and a conduction band offset (CBO) of 0.29 eV were obtained across the TiO2/FTO heterointerface. After annealing, the VBO and CBO across the heterointerface were found to be -0.16 eV and 1.06 eV, respectively, with the alignment transforming from type-I to type-II. The difference in the band alignment is believed to be dominated by the core-level down-shift of the FTO substrate, which is a result of the oxidation of Sn. Current-voltage testing has verified that the band alignment has a significant effect on the current transport of the heterojunction.

  9. Band alignment of TiO2/FTO interface determined by X-ray photoelectron spectroscopy: Effect of annealing

    Directory of Open Access Journals (Sweden)

    Haibo Fan

    2016-01-01

    The energy band alignment between pulsed-laser-deposited TiO2 and FTO was characterized for the first time using high-resolution X-ray photoelectron spectroscopy. A valence band offset (VBO) of 0.61 eV and a conduction band offset (CBO) of 0.29 eV were obtained across the TiO2/FTO heterointerface. After annealing, the VBO and CBO across the heterointerface were found to be -0.16 eV and 1.06 eV, respectively, with the alignment transforming from type-I to type-II. The difference in the band alignment is believed to be dominated by the core-level down-shift of the FTO substrate, which is a result of the oxidation of Sn. Current-voltage testing has verified that the band alignment has a significant effect on the current transport of the heterojunction.
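Band offsets of this kind are conventionally extracted from XPS core-level spectra via Kraut's method; a sketch of the relations is given below. The choice of reference core levels and the sign convention are assumptions for illustration and are not stated in the abstract.

```latex
\Delta E_V \;=\;
  \left(E_{\mathrm{CL}} - E_{\mathrm{VBM}}\right)_{\mathrm{TiO_2}}
- \left(E_{\mathrm{CL}} - E_{\mathrm{VBM}}\right)_{\mathrm{FTO}}
+ \Delta E_{\mathrm{CL}},
\qquad
\Delta E_C \;=\;
  \left(E_g^{\mathrm{FTO}} - E_g^{\mathrm{TiO_2}}\right) - \Delta E_V .
```

Here $E_{\mathrm{CL}} - E_{\mathrm{VBM}}$ is measured on each material separately, $\Delta E_{\mathrm{CL}}$ is the core-level separation measured at the interface, and $E_g$ are the optical band gaps; the annealing-induced core-level down-shift of FTO described in the abstract enters through $\Delta E_{\mathrm{CL}}$.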

  10. An orbit determination algorithm for small satellites based on the magnitude of the earth magnetic field

    Science.gov (United States)

    Zagorski, P.; Gallina, A.; Rachucki, J.; Moczala, B.; Zietek, S.; Uhl, T.

    2018-06-01

    Autonomous attitude determination systems based on simple measurements of vector quantities such as the magnetic field and the Sun direction are commonly used in very small satellites. However, those systems always require knowledge of the satellite position. This information can be either propagated from orbital elements periodically uplinked from the ground station or measured onboard by a dedicated global positioning system (GPS) receiver. The former solution sacrifices satellite autonomy, while the latter requires additional sensors which may represent a significant part of the mass, volume, and power budget of pico- or nanosatellites. Hence, a system for onboard satellite position determination without resorting to GPS receivers would be useful. In this paper, a novel algorithm for determining the satellite orbit semimajor axis is presented. The method exploits only the magnitude of the Earth magnetic field recorded onboard by magnetometers. This represents the first step toward an extended algorithm that can determine all orbital elements of the satellite. The method is validated by numerical analysis and real magnetic field measurements.
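The physical principle — the magnitude of the geomagnetic field encodes the distance from the Earth's centre — can be shown with a centred-dipole toy model: for a near-circular orbit the minimum of |B| over one revolution occurs at the magnetic equator crossing, where the dipole magnitude depends only on the radius and can be inverted. This is a minimal sketch under strong assumptions (centred dipole, circular orbit), far simpler than the paper's algorithm.

```python
import math

B0 = 3.12e-5    # T, mean equatorial field at Earth's surface (dipole model)
RE = 6371.2e3   # m, Earth reference radius

def dipole_magnitude(r, lat):
    """|B| of a centred dipole at geocentric distance r, magnetic latitude lat."""
    return B0 * (RE / r) ** 3 * math.sqrt(1.0 + 3.0 * math.sin(lat) ** 2)

def semimajor_axis_from_bmin(b_min):
    """For a near-circular orbit the minimum |B| over a revolution occurs at
    the magnetic equator, where |B| = B0 * (RE/r)^3; invert for r ~ a."""
    return RE * (B0 / b_min) ** (1.0 / 3.0)
```

Sampling the dipole magnitude along a simulated circular orbit and inverting its minimum recovers the orbit radius exactly; in practice magnetometer noise and the non-dipole field make the estimation problem the paper addresses much harder.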

  11. Algorithm for selection of optimized EPR distance restraints for de novo protein structure determination

    Science.gov (United States)

    Kazmier, Kelli; Alexander, Nathan S.; Meiler, Jens; Mchaourab, Hassane S.

    2010-01-01

    A hybrid protein structure determination approach combining sparse Electron Paramagnetic Resonance (EPR) distance restraints and Rosetta de novo protein folding has been previously demonstrated to yield high quality models (Alexander et al., 2008). However, widespread application of this methodology to proteins of unknown structures is hindered by the lack of a general strategy to place spin label pairs in the primary sequence. In this work, we report the development of an algorithm that optimally selects spin labeling positions for the purpose of distance measurements by EPR. For the α-helical subdomain of T4 lysozyme (T4L), simulated restraints that maximize sequence separation between the two spin labels while simultaneously ensuring pairwise connectivity of secondary structure elements yielded vastly improved models by Rosetta folding. 50% of all these models have the correct fold compared to only 21% and 8% correctly folded models when randomly placed restraints or no restraints are used, respectively. Moreover, the improvements in model quality require a limited number of optimized restraints, the number of which is determined by the pairwise connectivities of T4L α-helices. The predicted improvement in Rosetta model quality was verified by experimental determination of distances between spin label pairs selected by the algorithm. Overall, our results reinforce the rationale for the combined use of sparse EPR distance restraints and de novo folding. By alleviating the experimental bottleneck associated with restraint selection, this algorithm sets the stage for extending computational structure determination to larger, traditionally elusive protein topologies of critical structural and biochemical importance. PMID:21074624

  12. ACCURACY COMPARISON OF ALGORITHMS FOR DETERMINATION OF IMAGE CENTER COORDINATES IN OPTOELECTRONIC DEVICES

    Directory of Open Access Journals (Sweden)

    N. A. Starasotnikau

    2018-01-01

    Accuracy in the determination of coordinates for images of simple shape is one of the important parameters in metrological optoelectronic systems such as autocollimators, star sensors, Shack-Hartmann sensors, schemes for geometric calibration of digital cameras for aerial and space imagery, and various tracking systems. The paper describes a mathematical model of a measuring stand based on a collimator which projects a test object onto the photodetector of an optoelectronic device. The mathematical model takes into account the characteristic noises of photodetectors: the shot noise of the desired signal (photon noise), the shot noise of the dark signal, readout noise, and the spatial inhomogeneity of the CCD (charge-coupled device) matrix elements. In order to reduce the noise effect it is proposed to apply a Wiener filter for smoothing an image and its unambiguous identification, and also to introduce a threshold on the brightness level. The paper compares two algorithms for determining coordinates: by energy gravity center and by contour. Sobel, Prewitt, Roberts, Laplacian-of-Gaussian, and Canny detectors have been used for determining the test-object contour. The essence of the contour algorithm lies in searching for an image contour in the form of a circle, with its subsequent approximation and determination of the image center. Errors have been calculated in determining the coordinates of the gravity center for test objects of various diameters (5, 10, 20, 30, 40, 50 pixels of the photodetector) and signal-to-noise ratio values (200, 100, 70, 20, 10). The signal-to-noise ratio has been calculated as the difference between the maximum image intensity of the test object and the background, divided by the mean-square deviation of the background. The accuracy of coordinate determination improved by 0.5-1 orders of magnitude when the signal-to-noise ratio increased.
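The energy-gravity-center algorithm mentioned above amounts to an intensity-weighted centroid over the pixels exceeding a brightness threshold. A minimal sketch (generic, not the paper's exact implementation, which additionally applies Wiener filtering):

```python
def centroid(image, threshold):
    """Energy (intensity) centre of gravity of a spot image.

    `image` is a list of rows of pixel intensities; pixels at or below
    `threshold` are treated as background and ignored.  Returns (cx, cy)
    in pixel coordinates.
    """
    sx = sy = s = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            w = v - threshold          # background-subtracted weight
            if w > 0:
                sx += w * x
                sy += w * y
                s += w
    if s == 0:
        raise ValueError("no pixel above threshold")
    return sx / s, sy / s
```

Because every above-threshold pixel contributes in proportion to its brightness, the estimate reaches sub-pixel accuracy for symmetric spots, and raising the threshold suppresses the background-noise terms that dominate at low signal-to-noise ratio.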

  13. Control rod housing alignment

    International Nuclear Information System (INIS)

    Dixon, R.C.; Deaver, G.A.; Punches, J.R.; Singleton, G.E.; Erbes, J.G.; Offer, H.P.

    1990-01-01

    This patent describes a process for measuring the vertical alignment between a hole in a core plate and the top of a corresponding control rod drive housing within a boiling water reactor. It comprises: providing an alignment apparatus, the alignment apparatus including a lower end for fitting to the top of the control rod drive housing, an upper end for fitting to the aperture in the core plate, a leveling means attached to the alignment apparatus to read out the difference in angularity with respect to gravity, and alignment pin registering means for registering to the alignment pin on the core plate; lowering the alignment device on a depending support through a lattice position in the top guide, through the hole in the core plate, down into registered contact with the top of the control rod drive housing; registering the upper end to the sides of the hole in the core plate; registering the alignment pin registering means to an alignment pin on the core plate to impart to the alignment device the required angularity; and reading out the angle of the control rod drive housing with respect to the hole in the core plate through the leveling devices, whereby the angularity of the top of the control rod drive housing with respect to the hole in the core plate can be determined.

  14. Pairagon: a highly accurate, HMM-based cDNA-to-genome aligner.

    Science.gov (United States)

    Lu, David V; Brown, Randall H; Arumugam, Manimozhiyan; Brent, Michael R

    2009-07-01

    The most accurate way to determine the intron-exon structures in a genome is to align spliced cDNA sequences to the genome. Thus, cDNA-to-genome alignment programs are a key component of most annotation pipelines. The scoring system used to choose the best alignment is a primary determinant of alignment accuracy, while heuristics that prevent consideration of certain alignments are a primary determinant of runtime and memory usage. Both accuracy and speed are important considerations in choosing an alignment algorithm, but scoring systems have received much less attention than heuristics. We present Pairagon, a pair hidden Markov model based cDNA-to-genome alignment program, as the most accurate aligner for sequences with high- and low-identity levels. We conducted a series of experiments testing alignment accuracy with varying sequence identity. We first created 'perfect' simulated cDNA sequences by splicing the sequences of exons in the reference genome sequences of fly and human. The complete reference genome sequences were then mutated to various degrees using a realistic mutation simulator and the perfect cDNAs were aligned to them using Pairagon and 12 other aligners. To validate these results with natural sequences, we performed cross-species alignment using orthologous transcripts from human, mouse and rat. We found that aligner accuracy is heavily dependent on sequence identity. For sequences with 100% identity, Pairagon achieved accuracy levels of >99.6%, with one quarter of the errors of any other aligner. Furthermore, for human/mouse alignments, which are only 85% identical, Pairagon achieved 87% accuracy, higher than any other aligner. Pairagon source and executables are freely available at http://mblab.wustl.edu/software/pairagon/

  15. An efficient iterative grand canonical Monte Carlo algorithm to determine individual ionic chemical potentials in electrolytes.

    Science.gov (United States)

    Malasics, Attila; Boda, Dezso

    2010-06-28

    Two iterative procedures have been proposed recently to calculate the chemical potentials corresponding to prescribed concentrations from grand canonical Monte Carlo (GCMC) simulations. Both are based on repeated GCMC simulations with updated excess chemical potentials until the desired concentrations are established. In this paper, we propose combining our robust and fast converging iteration algorithm [Malasics, Gillespie, and Boda, J. Chem. Phys. 128, 124102 (2008)] with the suggestion of Lamperski [Mol. Simul. 33, 1193 (2007)] to average the chemical potentials in the iterations (instead of just using the chemical potentials obtained in the last iteration). We apply the unified method for various electrolyte solutions and show that our algorithm is more efficient if we use the averaging procedure. We discuss the convergence problems arising from violation of charge neutrality when inserting/deleting individual ions instead of neutral groups of ions (salts). We suggest a correction term to the iteration procedure that makes the algorithm efficient to determine the chemical potentials of individual ions too.
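The iteration-with-averaging idea described above can be sketched with a mock simulator standing in for the GCMC run. The `mock_gcmc` response model and all numeric values below are assumptions chosen so the sketch is self-contained; the update rule follows the form "shift the chemical potential by kT times the log of the target-to-measured concentration ratio", and the averaging of iterates is the Lamperski-style refinement discussed in the abstract.

```python
import math, random

def mock_gcmc(mu, rng, mu_excess=0.7, kT=1.0):
    """Stand-in for a real GCMC simulation (hypothetical model): the measured
    concentration responds as c = exp((mu - mu_excess)/kT), with
    multiplicative sampling noise mimicking finite simulation length."""
    return math.exp((mu - mu_excess) / kT) * math.exp(rng.gauss(0.0, 0.05))

def iterate_chemical_potential(c_target, simulate, n_iter=50, kT=1.0, seed=0):
    """Adjust the chemical potential until `simulate` reproduces c_target,
    then average the iterates instead of keeping only the last one."""
    rng = random.Random(seed)
    mu = 0.0
    history = []
    for _ in range(n_iter):
        c = simulate(mu, rng)
        mu += kT * math.log(c_target / c)   # push toward the target concentration
        history.append(mu)
    tail = history[len(history) // 2:]      # drop the transient, average the rest
    return sum(tail) / len(tail)
```

In this toy model each iterate scatters around the true excess chemical potential with the simulation noise, so averaging the tail of the sequence reduces the statistical error roughly as the square root of the number of retained iterates, which is the benefit the unified method claims.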

  16. Determination of point of maximum likelihood in failure domain using genetic algorithms

    International Nuclear Information System (INIS)

    Obadage, A.S.; Harnpornchai, N.

    2006-01-01

    The point of maximum likelihood in a failure domain yields the highest value of the probability density function within the failure domain. The maximum-likelihood point thus represents the worst combination of random variables contributing to the failure event. In this work Genetic Algorithms (GAs) with an adaptive penalty scheme are proposed as a tool for the determination of the maximum-likelihood point. Because the GA operations use only numerical values, the algorithm is applicable to cases of non-linear and implicit single and multiple limit state function(s). The algorithmic simplicity readily extends its application to higher-dimensional problems. When combined with Monte Carlo simulation, the proposed methodology reduces the computational complexity and at the same time enhances the possibility of rare-event analysis under limited computational resources. Since no approximation is made in the procedure, the solution obtained is considered accurate. Consequently, GAs can be used as a tool for increasing the computational efficiency in element and system reliability analyses
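The setup can be sketched for standard normal variables and a linear limit state, where the maximum-likelihood (design) point is known analytically: for the failure domain {x : 4 - x1 - x2 <= 0} it is (2, 2). The GA below, with a growing penalty on infeasibility, is a simple illustrative sketch, not the authors' adaptive penalty scheme; all parameter values are assumptions.

```python
import math, random

def ga_max_likelihood(limit_state, dim=2, pop_size=80, generations=300, seed=3):
    """Genetic algorithm locating the maximum-likelihood point of a standard
    normal density inside the failure domain {x : limit_state(x) <= 0}.
    Infeasible candidates (limit_state > 0) are penalised in the fitness."""
    rng = random.Random(seed)

    def fitness(x, penalty):
        logpdf = -0.5 * sum(v * v for v in x)       # log N(0, I) up to a constant
        g = limit_state(x)
        return logpdf - penalty * max(0.0, g) ** 2  # penalise the safe domain

    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for gen in range(generations):
        penalty = 10.0 * (1 + gen)                  # tighten the penalty over time
        pop.sort(key=lambda x: fitness(x, penalty), reverse=True)
        elite = pop[: pop_size // 4]
        children = list(elite)                      # elitism: keep the best quarter
        while len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(dim + 1)            # one-point crossover
            child = a[:cut] + b[cut:]
            sigma = 0.5 * 0.99 ** gen               # decaying mutation strength
            children.append([v + rng.gauss(0.0, sigma) for v in child])
        pop = children
    penalty = 10.0 * generations
    return max(pop, key=lambda x: fitness(x, penalty))
```

Only numerical evaluations of `limit_state` are needed, which is the property the abstract highlights: the same search works for implicit or non-linear limit states where gradient-based design-point searches struggle.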

  17. Algorithm of Data Reduce in Determination of Aerosol Particle Size Distribution at Damps/C

    International Nuclear Information System (INIS)

    Muhammad-Priyatna; Otto-Pribadi-Ruslanto

    2001-01-01

    An analysis was carried out of the data-reduction algorithm of the Damps/C (Differential Mobility Particle Sizer with Condensation Particle Counter) system, which determines the aerosol particle size distribution in the diameter range 0.01 μm to 1 μm. The Damps/C system consists of software and hardware. The hardware is used to determine the mobilities of the aerosol particles, and the software then determines the aerosol particle size distribution in diameter. Particle mobility and diameter are related through the electric field. This relation is the basis of the program for data reduction and particle size conversion from particle mobility to particle diameter. The analysis yields a transfer function value, Ω, of 0.5. The data-reduction program performs the conversion from a mobility basis to a diameter basis with corrections for counting efficiency, the transfer function value, and multiply charged particles. (author)

  18. A Novel Algorithm for Determining Contact Area Between a Respirator and a Headform

    OpenAIRE

    Lei, Zhipeng; Yang, James; Zhuang, Ziqing

    2014-01-01

    The contact area, as well as the contact pressure, is created when a respiratory protection device (a respirator or surgical mask) contacts a human face. A computer-based algorithm for determining the contact area between a headform and N95 filtering facepiece respirator (FFR) was proposed. Six N95 FFRs were applied to five sizes of standard headforms (large, medium, small, long/narrow, and short/wide) to simulate respirator donning. After the contact simulation between a headform and an N95 ...

  19. Application of modified Martinez-Silva algorithm in determination of net cover

    Science.gov (United States)

    Stefanowicz, Łukasz; Grobelna, Iwona

    2016-12-01

    In this article we present modifications of the Martinez-Silva algorithm, which allows the place invariants (p-invariants) of a Petri net to be determined. Their generation time is important in the parallel decomposition of discrete systems described by Petri nets. The decomposition process is essential from the point of view of discrete system design, as it allows for the separation of smaller sequential parts. The proposed modifications of the Martinez-Silva method concern the net cover by p-invariants and are focused on two important issues: cyclic reduction of the invariant matrix and cyclic checking of the net cover.
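The baseline p-invariant computation that the Martinez-Silva method builds on can be sketched as a Farkas-style elimination on the incidence matrix extended with an identity block; the authors' modifications (cyclic reduction of the invariant matrix and cyclic cover checking) are not reproduced here.

```python
from math import gcd
from functools import reduce

def p_invariants(incidence):
    """Farkas-style computation of Petri net place invariants: non-negative
    integer vectors y with y^T C = 0, where C is the (places x transitions)
    incidence matrix, given row-wise."""
    n_places = len(incidence)
    n_trans = len(incidence[0])
    # working rows: [row of C | row of the identity matrix]
    rows = [list(incidence[p]) + [1 if q == p else 0 for q in range(n_places)]
            for p in range(n_places)]
    for t in range(n_trans):                      # annul column t of the C block
        new_rows = [r for r in rows if r[t] == 0]
        pos = [r for r in rows if r[t] > 0]
        neg = [r for r in rows if r[t] < 0]
        for rp in pos:
            for rn in neg:
                # non-negative integer combination that cancels column t
                comb = [abs(rn[t]) * a + rp[t] * b for a, b in zip(rp, rn)]
                g = reduce(gcd, (abs(c) for c in comb if c != 0), 0)
                if g > 1:
                    comb = [c // g for c in comb]
                new_rows.append(comb)
        rows = new_rows
    # the identity part of each surviving row is a p-invariant
    return [r[n_trans:] for r in rows]
```

For the two-place cycle p1 -t1-> p2 -t2-> p1 (incidence rows [-1, 1] and [1, -1]) the procedure returns the single invariant (1, 1), expressing token conservation between the two places. The combinatorial growth of the candidate rows in this baseline is exactly what makes reductions of the kind the article proposes worthwhile.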

  20. RNA Structural Alignments, Part I

    DEFF Research Database (Denmark)

    Havgaard, Jakob Hull; Gorodkin, Jan

    2014-01-01

    Simultaneous alignment and secondary structure prediction of RNA sequences is often referred to as "RNA structural alignment." A class of the methods for structural alignment is based on the principles proposed by Sankoff more than 25 years ago. The Sankoff algorithm simultaneously folds and aligns...... is so high that it took more than a decade before the first implementation of a Sankoff style algorithm was published. However, with the faster computers available today and the improved heuristics used in the implementations the Sankoff-based methods have become practical. This chapter describes...... the methods based on the Sankoff algorithm. All the practical implementations of the algorithm use heuristics to make them run in reasonable time and memory. These heuristics are also described in this chapter....

  1. Procedure, algorithm and criteria for determining the burnup of irradiated nuclear fuel during refuelling

    International Nuclear Information System (INIS)

    Bilej, D.V.; Fridman, N.A.; Maslov, O.V.; Maksimov, M.V.

    2001-01-01

    The procedure, algorithm, and criteria for determining the burnup of irradiated nuclear fuel during refuelling are described. A distinctive feature of the procedure, algorithm, and criteria is that they take into account the initial enrichment and the cooling time of the nuclear fuel after irradiation

  2. Determination of the band alignment of a-IGZO/a-IGMO heterojunction for high-electron mobility transistor application

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Yi-Yu; Qian, Ling-Xuan; Liu, Xing-Zhao [School of Microelectronics and Solid-State Electronics, University of Electronic Science and Technology of China, Chengdu (China); State Key Laboratory of Electronic Thin Films and Integrated Devices, Chengdu (China)

    2017-10-15

    In the past decade, amorphous InGaZnO thin film transistors (a-IGZO TFTs) have become a very promising candidate for application in flat panel displays (FPDs). However, it is difficult to break through the mobility bottleneck of a-IGZO TFTs to obtain mobilities higher than 100 cm² V⁻¹ s⁻¹, thus limiting their use in more advanced applications. Construction of a high-electron mobility transistor (HEMT) based on a heterojunction structure could provide a solution for this problem. In this work, the band alignment of a-IGZO and amorphous InGaMgO (a-IGMO) heterojunction has been investigated using X-ray photoelectron spectroscopy (XPS) and transmission spectra measurements. The valence band (ΔE_V) and conduction band offsets (ΔE_C) were determined as 0.09 and 0.83 eV, respectively. The ΔE_C was large enough to construct a potential well that could favor the appearance of a two-dimensional electron gas (2DEG). Hence, the achievement of an HEMT based on a-IGZO/a-IGMO heterojunction can be expected. Moreover, band bending contributed greatly to such a large ΔE_C, and thus to the formation of electrical confinement structure. Our findings suggest that a-IGZO/a-IGMO heterojunction is a potential candidate for constructing a HEMT and thus breaking through the mobility bottleneck of a-IGZO-based TFTs for the applications in next-generation electronic products. (copyright 2017 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  3. Determination of the band alignment of a-IGZO/a-IGMO heterojunction for high-electron mobility transistor application

    International Nuclear Information System (INIS)

    Zhang, Yi-Yu; Qian, Ling-Xuan; Liu, Xing-Zhao

    2017-01-01

    In the past decade, amorphous InGaZnO thin film transistors (a-IGZO TFTs) have become a very promising candidate for application in flat panel displays (FPDs). However, it is difficult to break through the mobility bottleneck of a-IGZO TFTs to obtain mobilities higher than 100 cm² V⁻¹ s⁻¹, thus limiting their use in more advanced applications. Construction of a high-electron mobility transistor (HEMT) based on a heterojunction structure could provide a solution for this problem. In this work, the band alignment of a-IGZO and amorphous InGaMgO (a-IGMO) heterojunction has been investigated using X-ray photoelectron spectroscopy (XPS) and transmission spectra measurements. The valence band (ΔE_V) and conduction band offsets (ΔE_C) were determined as 0.09 and 0.83 eV, respectively. The ΔE_C was large enough to construct a potential well that could favor the appearance of a two-dimensional electron gas (2DEG). Hence, the achievement of an HEMT based on a-IGZO/a-IGMO heterojunction can be expected. Moreover, band bending contributed greatly to such a large ΔE_C, and thus to the formation of electrical confinement structure. Our findings suggest that a-IGZO/a-IGMO heterojunction is a potential candidate for constructing a HEMT and thus breaking through the mobility bottleneck of a-IGZO-based TFTs for the applications in next-generation electronic products. (copyright 2017 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  4. Experimental determination and verification of the parameters used in a proton pencil beam algorithm

    International Nuclear Information System (INIS)

    Szymanowski, H.; Mazal, A.; Nauraye, C.; Biensan, S.; Ferrand, R.; Murillo, M.C.; Caneva, S.; Gaboriaud, G.; Rosenwald, J.C.

    2001-01-01

    We present an experimental procedure for the determination and the verification under practical conditions of physical and computational parameters used in our proton pencil beam algorithm. The calculation of the dose delivered by a single pencil beam relies on a measured spread-out Bragg peak, and the description of its radial spread at depth features simple specific parameters accounting individually for the influence of the beam line as a whole, the beam energy modulation, the compensator, and the patient medium. For determining the experimental values of the physical parameters related to proton scattering, we utilized a simple relation between Gaussian radial spreads and the width of lateral penumbras. The contribution from the beam line has been extracted from lateral penumbra measurements in air: a linear variation with the distance collimator-point has been observed. Analytically predicted radial spreads within the patient were in good agreement with experimental values in water under various reference conditions. Results indicated no significant influence of the beam energy modulation. Using measurements in presence of Plexiglas slabs, a simple assumption on the effective source of scattering due to the compensator has been stated, leading to accurate radial spread calculations. Dose measurements in presence of complexly shaped compensators have been used to assess the performances of the algorithm supplied with the adequate physical parameters. One of these compensators has also been used, together with a reference configuration, for investigating a set of computational parameters decreasing the calculation time while maintaining a high level of accuracy. Faster dose computations have been performed for algorithm evaluation in the presence of geometrical and patient compensators, and have shown good agreement with the measured dose distributions

  5. The CMS Muon System Alignment

    CERN Document Server

    Martinez Ruiz-Del-Arbol, P

    2009-01-01

    The alignment of the muon system of CMS is performed using different techniques: photogrammetry measurements, optical alignment, and alignment with tracks. For track-based alignment, several methods are employed, ranging from a hit and impact point (HIP) algorithm and a procedure exploiting chamber overlaps to a global fit method based on the Millepede approach. For start-up alignment, as long as the available integrated luminosity still significantly limits the size of the muon sample from collisions, cosmic-muon and beam-halo signatures play a very strong role. During the last commissioning runs in 2008 the first aligned geometries were produced and validated with data. The CMS offline computing infrastructure has been used to perform improved reconstructions. We present the computational aspects related to the calculation of alignment constants at the CERN Analysis Facility (CAF), the production and population of databases, and the validation and performance in the official reconstruction. Also...

  6. GraphAlignment: Bayesian pairwise alignment of biological networks

    Directory of Open Access Journals (Sweden)

    Kolář Michal

    2012-11-01

    Background With increased experimental availability and accuracy of bio-molecular networks, tools for their comparative and evolutionary analysis are needed. A key component of such studies is the alignment of networks. Results We introduce the Bioconductor package GraphAlignment for pairwise alignment of bio-molecular networks. The alignment incorporates information from both network vertices and network edges and is based on an explicit evolutionary model, allowing inference of all scoring parameters directly from empirical data. We compare the performance of our algorithm to an alternative algorithm, Græmlin 2.0. On simulated data, GraphAlignment outperforms Græmlin 2.0 in several benchmarks except for computational complexity. When there is little or no noise in the data, GraphAlignment is slower than Græmlin 2.0. It is faster than Græmlin 2.0 when processing noisy data containing spurious vertex associations. Its typical-case complexity grows approximately as O(N^2.6). On empirical bacterial protein-protein interaction networks (PIN) and gene co-expression networks, GraphAlignment outperforms Græmlin 2.0 with respect to coverage and specificity, albeit by a small margin. On large eukaryotic PIN, Græmlin 2.0 outperforms GraphAlignment. Conclusions The GraphAlignment algorithm is robust to spurious vertex associations, correctly resolves paralogs, and shows very good performance in identification of homologous vertices defined by high vertex and/or interaction similarity. The simplicity and generality of GraphAlignment edge scoring makes the algorithm an appropriate choice for global alignment of networks.

  7. Use artificial neural network to align biological ontologies.

    Science.gov (United States)

    Huang, Jingshan; Dang, Jiangbo; Huhns, Michael N; Zheng, W Jim

    2008-09-16

    Being formal, declarative knowledge representation models, ontologies help to address the problem of imprecise terminologies in biological and biomedical research. However, ontologies constructed under the auspices of the Open Biomedical Ontologies (OBO) group have exhibited a great deal of variety, because different parties can design ontologies according to their own conceptual views of the world. It is therefore becoming critical to align ontologies from different parties. During automated/semi-automated alignment across biological ontologies, different semantic aspects, i.e., concept name, concept properties, and concept relationships, contribute in different degrees to alignment results. Therefore, a vector of weights must be assigned to these semantic aspects. It is not trivial to determine what those weights should be, and current methodologies depend a lot on human heuristics. In this paper, we take an artificial neural network approach to learn and adjust these weights, and thereby support a new ontology alignment algorithm, customized for biological ontologies, with the purpose of avoiding some disadvantages in both rule-based and learning-based aligning algorithms. This approach has been evaluated by aligning two real-world biological ontologies, whose features include huge file size, very few instances, concept names in numerical strings, and others. The promising experiment results verify our proposed hypothesis, i.e., three weights for semantic aspects learned from a subset of concepts are representative of all concepts in the same ontology. Therefore, our method represents a large leap forward towards automating biological ontology alignment.

  8. Band alignment of ZnO/multilayer MoS{sub 2} interface determined by x-ray photoelectron spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Xinke, E-mail: xkliu@szu.edu.cn, E-mail: liuwj@szu.edu.cn; He, Jiazhu; Chen, Le; Li, Kuilong; Jia, Fang; Zeng, Yuxiang; Lu, Youming; Zhu, Deliang; Liu, Wenjun, E-mail: xkliu@szu.edu.cn, E-mail: liuwj@szu.edu.cn [College of Materials Science and Engineering, Nanshan District Key Lab for Biopolymer and Safety Evaluation, Shenzhen University, 3688 Nanhai Ave, Shenzhen 518060 (China); Zhang, Yuan [School of Physics and Electronic Information, Hua Bei Normal University, 100 Dongshan Road, Huai Bei 235000 (China); Liu, Qiang; Yu, Wenjie [State Key Laboratory of Functional Materials for Informatics, Shanghai Institute of Microsystem and Information Technology, CAS, 865 Chang Ning Road, Shanghai 200050 (China); Wu, Jing [Institute of Materials research and Engineering (IMRE), 2 Fusionopolis Way, Innovis, #08-03, 138634 Singapore (Singapore); He, Zhubing [Department of Materials Science and Engineering, South University of Science and Technology of China, 1088 Xueyuan Road, Shenzhen 518055 (China); Ang, Kah-Wee [Department of Electrical and Computer Engineering, National University of Singapore, 4 Engineering Drive 3, 117583 Singapore (Singapore)

    2016-08-15

    The energy band alignment between ZnO and multilayer (ML)-MoS{sub 2} was characterized using high-resolution x-ray photoelectron spectroscopy. The ZnO film was deposited using an atomic layer deposition tool, and ML-MoS{sub 2} was grown by chemical vapor deposition. A valence band offset (VBO) of 3.32 eV and a conduction band offset (CBO) of 1.12 eV were obtained for the ZnO/ML-MoS{sub 2} interface without any treatment. With CHF{sub 3} plasma treatment, a VBO and a CBO across the ZnO/ML-MoS{sub 2} interface were found to be 3.54 eV and 1.34 eV, respectively. With the CHF{sub 3} plasma treatment, the band alignment of the ZnO/ML-MoS{sub 2} interface has been changed from type II or staggered band alignment to type III or misaligned one, which favors the electron-hole pair separation. The band alignment difference is believed to be dominated by the down-shift in the core level of Zn 2p or the interface dipoles, which is caused by the interfacial layer rich in F.

  9. Determination of edge plasma parameters by a genetic algorithm analysis of spectral line shapes

    International Nuclear Information System (INIS)

    Marandet, Y.; Genesio, P.; Godbert-Mouret, L.; Koubiti, M.; Stamm, R.; Capes, H.; Guirlet, R.

    2003-01-01

    Comparing an experimental and a theoretical line shape can be achieved by a genetic algorithm (GA) based on an analogy to the mechanisms of natural selection. Such an algorithm is able to deal with complex non-linear models and can avoid local minima. We have used this optimization tool in the context of edge plasma spectroscopy, for a determination of the temperatures and fractions of the various populations of neutral deuterium emitting the Dα line in two configurations of Tore-Supra: ergodic divertor and toroidal pumped limiter. Using the GA fit, the neutral emitters are separated into up to four populations, which can be identified as resulting from molecular dissociation reactions, charge exchange, or reflection. In all the edge plasmas studied, a significant fraction of neutrals emit in the line wings, corresponding to neutrals with temperatures up to a few hundred eV if a Gaussian line shape is assumed. This conclusion could be modified if the line wing exhibits a non-Gaussian behavior.

  10. Determination of edge plasma parameters by a genetic algorithm analysis of spectral line shapes

    Energy Technology Data Exchange (ETDEWEB)

    Marandet, Y.; Genesio, P.; Godbert-Mouret, L.; Koubiti, M.; Stamm, R. [Universite de Provence (PIIM), Centre de Saint-Jerome, 13 - Marseille (France); Capes, H.; Guirlet, R. [Association Euratom-CEA Cadarache, 13 - Saint-Paul-lez-Durance (France). Dept. de Recherches sur la Fusion Controlee

    2003-07-01

    Comparing an experimental and a theoretical line shape can be achieved by a genetic algorithm (GA) based on an analogy to the mechanisms of natural selection. Such an algorithm is able to deal with complex non-linear models and can avoid local minima. We have used this optimization tool in the context of edge plasma spectroscopy, for a determination of the temperatures and fractions of the various populations of neutral deuterium emitting the D{sub {alpha}} line in two configurations of Tore-Supra: ergodic divertor and toroidal pumped limiter. Using the GA fit, the neutral emitters are separated into up to four populations, which can be identified as resulting from molecular dissociation reactions, charge exchange, or reflection. In all the edge plasmas studied, a significant fraction of neutrals emit in the line wings, corresponding to neutrals with temperatures up to a few hundred eV if a Gaussian line shape is assumed. This conclusion could be modified if the line wing exhibits a non-Gaussian behavior.
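
The GA fit described in these two records can be sketched as follows: a toy line profile is modeled as a mixture of two Gaussian components (a fraction and two widths standing in for population fractions and temperatures), and a small genetic algorithm with tournament selection, uniform crossover, mutation, and elitism minimizes the squared residual. The model, bounds, and GA settings are illustrative assumptions, not the paper's:

```python
import math, random

random.seed(42)

def gaussian(x, w):
    """Normalized Gaussian of width w."""
    return math.exp(-0.5 * (x / w) ** 2) / (w * math.sqrt(2 * math.pi))

X = [i * 0.2 - 6.0 for i in range(61)]   # wavelength offsets from line center
TRUE = (0.7, 0.6, 2.5)                   # fraction, "cold" width, "hot" width
TARGET = [TRUE[0] * gaussian(x, TRUE[1]) + (1 - TRUE[0]) * gaussian(x, TRUE[2])
          for x in X]

BOUNDS = [(0.05, 0.95), (0.1, 5.0), (0.1, 5.0)]

def sse(p):
    """Sum of squared residuals between model and target profile."""
    f, w1, w2 = p
    return sum((f * gaussian(x, w1) + (1 - f) * gaussian(x, w2) - t) ** 2
               for x, t in zip(X, TARGET))

def clamp(v, lo, hi):
    return min(max(v, lo), hi)

def mutate(p):
    return [clamp(v + random.gauss(0, 0.1 * (hi - lo)), lo, hi)
            for v, (lo, hi) in zip(p, BOUNDS)]

def crossover(a, b):
    return [random.choice([x, y]) for x, y in zip(a, b)]

def tournament(pop):
    return min(random.sample(pop, 3), key=sse)

pop = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(60)]
best = min(pop, key=sse)
init_sse = sse(best)
for _ in range(100):
    nxt = [best]                          # elitism: sse(best) never increases
    while len(nxt) < len(pop):
        nxt.append(mutate(crossover(tournament(pop), tournament(pop))))
    pop = nxt
    best = min(pop, key=sse)

print(best, sse(best))
```

With real data the fitness would compare against a measured profile and the widths would map to population temperatures through the Doppler relation; the GA machinery itself is unchanged.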

  11. Final Progress Report: Isotope Identification Algorithm for Rapid and Accurate Determination of Radioisotopes Feasibility Study

    International Nuclear Information System (INIS)

    Rawool-Sullivan, Mohini; Bounds, John Alan; Brumby, Steven P.; Prasad, Lakshman; Sullivan, John P.

    2012-01-01

    This is the final report of the project titled, 'Isotope Identification Algorithm for Rapid and Accurate Determination of Radioisotopes,' PMIS project number LA10-HUMANID-PD03. It summarizes work performed over the FY10 time period. The goal of the work was to demonstrate principles of emulating a human analysis approach towards the data collected using radiation isotope identification devices (RIIDs). Human analysts begin analyzing a spectrum based on features in the spectrum: lines and shapes that are present in a given spectrum. The proposed work was to carry out a feasibility study that picks out all gamma-ray peaks and other features such as Compton edges, bremsstrahlung, the presence or absence of shielding, the presence of neutrons, and escape peaks. Ultimately, the success of this feasibility study will allow identified features to be explained collectively, forming a realistic scenario for how a given spectrum was produced. We wanted to develop and demonstrate machine learning algorithms that will qualitatively enhance the automated identification capabilities of portable radiological sensors that are currently being used in the field.

  12. Optimization of sequence alignment for simple sequence repeat regions

    Directory of Open Access Journals (Sweden)

    Ogbonnaya Francis C

    2011-07-01

    Background Microsatellites, or simple sequence repeats (SSRs), are tandemly repeated DNA sequences, consisting of tandem copies of specific sequences no longer than six bases, that are distributed throughout the genome. SSRs have been used as molecular markers because they are easy to detect and serve a range of applications, including genetic diversity, genome mapping, and marker-assisted selection. They are also very mutable because of slipping of the DNA polymerase during DNA replication. This unique mutation mechanism raises the insertion/deletion (INDEL) mutation frequency to a ratio higher than that of other types of molecular markers such as single nucleotide polymorphisms (SNPs). SNPs are more frequent than INDELs. Therefore, all algorithms designed for sequence alignment fit the vast majority of the genomic sequence without considering microsatellite regions as unique sequences that require special consideration. The older algorithms are limited in their application because there are many overlaps between different repeat units, which result in false evolutionary relationships. Findings To overcome the limitations of alignment algorithms when dealing with SSR loci, a new algorithm was developed using a PERL script with a Tk graphical interface. This program is based on aligning sequences after first determining the repeated units and the positions of the last SSR nucleotides, resulting in a shifting process according to the type of repeated unit inserted. When studying the phylogenetic relations before and after applying the new algorithm, many differences in the trees were obtained as SSR length and complexity increased; however, smaller distances between different lineages were observed after applying the new algorithm. Conclusions The new algorithm produces better estimates for aligning SSR loci because it reflects more reliable evolutionary relations between different lineages. It reduces overlapping during SSR alignment, which results in a more realistic
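
The "determining the repeated units first" step can be sketched with a minimal-period finder; the function name and example tracts are illustrative, not taken from the paper's PERL implementation:

```python
def repeat_unit(ssr: str) -> str:
    """Smallest unit whose tandem repetition reproduces the SSR tract.

    Tries each candidate period k from 1 upward; the first k that both
    divides the tract length and tiles the tract exactly is the unit.
    """
    n = len(ssr)
    for k in range(1, n + 1):
        if n % k == 0 and ssr == ssr[:k] * (n // k):
            return ssr[:k]
    return ssr  # no internal repetition: the tract is its own unit

print(repeat_unit("ATATATAT"))   # AT
print(repeat_unit("GATGATGAT"))  # GAT
print(repeat_unit("ACGT"))       # ACGT
```

Knowing the unit and the tract boundaries lets an aligner shift gaps in whole-unit steps, which is exactly what prevents the spurious partial-unit overlaps the abstract describes.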

  13. Analyzing the determinants of the voting behavior using a genetic algorithm

    Directory of Open Access Journals (Sweden)

    Marcos Vizcaíno-González

    2016-09-01

    Using data about votes cast by funds in meetings held by United States banks from 2003 to 2013, we apply a genetic algorithm to a set of financial variables in order to detect the determinants of the vote direction. Our findings indicate that there are three main explanatory factors: the market value of the firm, shareholder activism measured as the total number of funds voting, and the temporal context, which reflects the influence of recent critical events affecting the banking industry, including bankruptcies, reputational failures, and mergers and acquisitions. As a result, considering that voting behavior has been empirically linked to reputational harm, these findings offer useful insight into the keys that should be taken into account in order to achieve an effective reputational risk management strategy.

  14. A Robust Algorithm to Determine the Topology of Space from the Cosmic Microwave Background Radiation

    OpenAIRE

    Weeks, Jeffrey R.

    2001-01-01

    Satellite measurements of the cosmic microwave background radiation will soon provide an opportunity to test whether the universe is multiply connected. This paper presents a new algorithm for deducing the topology of the universe from the microwave background data. Unlike an older algorithm, the new algorithm gives the curvature of space and the radius of the last scattering surface as outputs, rather than requiring them as inputs. The new algorithm is also more tolerant of erro...

  15. Determination of Critical Conditions for Puncturing Almonds Using Coupled Response Surface Methodology and Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Mahmood Mahmoodi-Eshkaftaki

    2013-01-01

    In this study, the effect of seed moisture content, probe diameter, and loading velocity (puncture conditions) on some mechanical properties of almond kernel and peeled almond kernel is considered in order to model a relationship between the puncture conditions and rupture energy; furthermore, the distribution of the mechanical properties is determined. The main objective is to determine the critical values of mechanical properties significant for peeling machines. Response surface methodology was used to find the relationship between the input parameters and the output responses, and the fitness function was applied to measure the optimal values using the genetic algorithm. A two-parameter Weibull function was used to describe the distribution of mechanical properties. Based on the Weibull parameter values, i.e. the shape parameter (β) and scale parameter (η) calculated for each property, the variations of the mechanical distributions were completely described, and it was confirmed that the mechanical properties are rule governed, which makes the Weibull function suitable for estimating their distributions. The energy model estimated using response surface methodology shows that the mechanical properties relate exponentially to the moisture, and polynomially to the loading velocity and probe diameter, which enabled successful estimation of the rupture energy (R² = 0.94). The genetic algorithm calculated the critical values of seed moisture, probe diameter, and loading velocity to be 18.11 % on a dry-mass basis, 0.79 mm, and 0.15 mm/min, respectively, with an optimum rupture energy of 1.97·10⁻³ J. These conditions were used for comparison with new samples, where the rupture energy was experimentally measured to be 2.68·10⁻³ and 2.21·10⁻³ J for kernel and peeled kernel, respectively, in close agreement with our model results.
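
The two-parameter Weibull description used above can be sketched with a common estimation trick: linearizing the CDF, ln(−ln(1−F)) = β ln x − β ln η, and fitting a straight line through median-rank plotting positions. This is a generic illustration with arbitrary parameter values, not the study's actual fitting procedure:

```python
import math

def weibull_quantile(F, beta, eta):
    """Inverse CDF of the two-parameter Weibull: F(x) = 1 - exp(-(x/eta)**beta)."""
    return eta * (-math.log(1.0 - F)) ** (1.0 / beta)

def fit_weibull(samples):
    """Least-squares fit of (beta, eta) on the linearized Weibull CDF,
    using median-rank plotting positions F_i = (i - 0.3) / (n + 0.4)."""
    xs = sorted(samples)
    n = len(xs)
    pts = [(math.log(x), math.log(-math.log(1.0 - (i + 0.7) / (n + 0.4))))
           for i, x in enumerate(xs)]
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    beta = (sum((x - mx) * (y - my) for x, y in pts)
            / sum((x - mx) ** 2 for x, y in pts))     # slope = shape beta
    eta = math.exp(mx - my / beta)                    # intercept gives -beta*ln(eta)
    return beta, eta

# Synthetic rupture-energy-like sample drawn exactly at the plotting positions
beta_true, eta_true = 2.0, 1.5
data = [weibull_quantile((i + 0.7) / (50 + 0.4), beta_true, eta_true)
        for i in range(50)]
beta_hat, eta_hat = fit_weibull(data)
print(beta_hat, eta_hat)  # recovers (2.0, 1.5) up to rounding
```

Because the synthetic sample sits exactly on the linearized line, the regression recovers β and η essentially exactly; with real measurements the scatter of the points around that line indicates how well the Weibull model describes the property.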

  16. MUON DETECTORS: ALIGNMENT

    CERN Multimedia

    S. Szillasi

    2013-01-01

    The CMS detector has been gradually opened, and whenever a wheel became exposed the first operation was the removal of the MABs, the sensor structures of the Hardware Barrel Alignment System. By the last days of June all 36 MABs had arrived at the Alignment Lab at the ISR where, as part of the Alignment Upgrade Project, they are being refurbished with new Survey target holders. Their electronic checkout is underway, and finally they will be recalibrated. During LS1 the alignment system will be upgraded in order to allow more precise reconstruction of the MB4 chambers in Sector 10 and Sector 4. This requires new sensor components, so-called MiniMABs (pictured below), that have already been assembled and calibrated. Image 6: Calibrated MiniMABs are ready for installation For the track-based alignment, the systematic uncertainties of the algorithm are under scrutiny: this study will enable the production of an improved Monte Carlo misalignment scenario and eventually an update of the alignment position errors, crucial...

  17. MUON DETECTORS: ALIGNMENT

    CERN Multimedia

    G. Gomez

    2012-01-01

      A new muon alignment has been produced for 2012 A+B data reconstruction. It uses the latest Tracker alignment and single-muon data samples to align both DTs and CSCs. Physics validation has been performed and shows a modest improvement in stand-alone muon momentum resolution in the barrel, where the alignment is essentially unchanged from the previous version. The reference-target track-based algorithm using only collision muons is employed for the first time to align the CSCs, and a substantial improvement in resolution is observed in the endcap and overlap regions for stand-alone muons. This new alignment is undergoing the approval process and is expected to be deployed as part of a new global tag in the beginning of December. The pT dependence of the φ-bias in curvature observed in Monte Carlo was traced to a relative vertical misalignment between the Tracker and barrel muon systems. Moving the barrel as a whole to match the Tracker cures this pT dependence, leaving only the φ...

  18. The AUDANA algorithm for automated protein 3D structure determination from NMR NOE data

    International Nuclear Information System (INIS)

    Lee, Woonghee; Petit, Chad M.; Cornilescu, Gabriel; Stark, Jaime L.; Markley, John L.

    2016-01-01

    We introduce AUDANA (Automated Database-Assisted NOE Assignment), an algorithm for determining three-dimensional structures of proteins from NMR data that automates the assignment of 3D-NOE spectra, generates distance constraints, and conducts iterative high temperature molecular dynamics and simulated annealing. The protein sequence, chemical shift assignments, and NOE spectra are the only required inputs. Distance constraints generated automatically from ambiguously assigned NOE peaks are validated during the structure calculation against information from an enlarged version of the freely available PACSY database that incorporates information on protein structures deposited in the Protein Data Bank (PDB). This approach yields robust sets of distance constraints and 3D structures. We evaluated the performance of AUDANA with input data for 14 proteins ranging in size from 6 to 25 kDa that had 27–98 % sequence identity to proteins in the database. In all cases, the automatically calculated 3D structures passed stringent validation tests. Structures were determined with and without database support. In 9/14 cases, database support improved the agreement with manually determined structures in the PDB and in 11/14 cases, database support lowered the r.m.s.d. of the family of 20 structural models.

  19. The AUDANA algorithm for automated protein 3D structure determination from NMR NOE data

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Woonghee, E-mail: whlee@nmrfam.wisc.edu [University of Wisconsin-Madison, National Magnetic Resonance Facility at Madison and Biochemistry Department (United States); Petit, Chad M. [University of Alabama at Birmingham, Department of Biochemistry and Molecular Genetics (United States); Cornilescu, Gabriel; Stark, Jaime L.; Markley, John L., E-mail: markley@nmrfam.wisc.edu [University of Wisconsin-Madison, National Magnetic Resonance Facility at Madison and Biochemistry Department (United States)

    2016-06-15

    We introduce AUDANA (Automated Database-Assisted NOE Assignment), an algorithm for determining three-dimensional structures of proteins from NMR data that automates the assignment of 3D-NOE spectra, generates distance constraints, and conducts iterative high temperature molecular dynamics and simulated annealing. The protein sequence, chemical shift assignments, and NOE spectra are the only required inputs. Distance constraints generated automatically from ambiguously assigned NOE peaks are validated during the structure calculation against information from an enlarged version of the freely available PACSY database that incorporates information on protein structures deposited in the Protein Data Bank (PDB). This approach yields robust sets of distance constraints and 3D structures. We evaluated the performance of AUDANA with input data for 14 proteins ranging in size from 6 to 25 kDa that had 27–98 % sequence identity to proteins in the database. In all cases, the automatically calculated 3D structures passed stringent validation tests. Structures were determined with and without database support. In 9/14 cases, database support improved the agreement with manually determined structures in the PDB and in 11/14 cases, database support lowered the r.m.s.d. of the family of 20 structural models.

  20. Implementation of a Parallel Protein Structure Alignment Service on Cloud

    Directory of Open Access Journals (Sweden)

    Che-Lun Hung

    2013-01-01

    Protein structure alignment has become an important strategy by which to identify evolutionary relationships between protein sequences. Several alignment tools are currently available for online comparison of protein structures. In this paper, we propose a parallel protein structure alignment service based on the Hadoop distribution framework. This service includes a protein structure alignment algorithm, a refinement algorithm, and a MapReduce programming model. The refinement algorithm refines the result of alignment. To process vast numbers of protein structures in parallel, the alignment and refinement algorithms are implemented using MapReduce. We analyzed and compared the structure alignments produced by different methods using a dataset randomly selected from the PDB database. The experimental results verify that the proposed algorithm refines the resulting alignments more accurately than existing algorithms. Meanwhile, the computational performance of the proposed service is proportional to the number of processors used in our cloud platform.

  1. Cloud-Coffee: implementation of a parallel consistency-based multiple alignment algorithm in the T-Coffee package and its benchmarking on the Amazon Elastic-Cloud.

    Science.gov (United States)

    Di Tommaso, Paolo; Orobitg, Miquel; Guirado, Fernando; Cores, Fernado; Espinosa, Toni; Notredame, Cedric

    2010-08-01

    We present the first parallel implementation of the T-Coffee consistency-based multiple aligner. We benchmark it on the Amazon Elastic Cloud (EC2) and show that the parallelization procedure is reasonably effective. We also conclude that for a web server with moderate usage (10K hits/month) the cloud provides a cost-effective alternative to in-house deployment. T-Coffee is a freeware open source package available from http://www.tcoffee.org/homepage.html

  2. An Alignment-Free Algorithm in Comparing the Similarity of Protein Sequences Based on Pseudo-Markov Transition Probabilities among Amino Acids.

    Science.gov (United States)

    Li, Yushuang; Song, Tian; Yang, Jiasheng; Zhang, Yi; Yang, Jialiang

    2016-01-01

    In this paper, we have proposed a novel alignment-free method for comparing the similarity of protein sequences. We first encode a protein sequence into a 440-dimensional feature vector consisting of a 400-dimensional pseudo-Markov transition probability vector among the 20 amino acids, a 20-dimensional content ratio vector, and a 20-dimensional position ratio vector of the amino acids in the sequence. By evaluating the Euclidean distances among the representing vectors, we compare the similarity of protein sequences. We then apply this method to the ND5 dataset, consisting of the ND5 protein sequences of 9 species, and to the F10 and G11 datasets, representing two of the xylanase-containing glycoside hydrolase families, i.e., families 10 and 11. As a result, our method achieves a correlation coefficient of 0.962 with the canonical protein sequence aligner ClustalW on the ND5 dataset, much higher than those of five other popular alignment-free methods. In addition, we successfully separate the xylanase sequences in the F10 family and the G11 family and illustrate that the F10 family is more heat stable than the G11 family, consistent with a few previous studies. Moreover, we prove mathematically an identity equation involving the pseudo-Markov transition probability vector and the amino acid content ratio vector.
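
The 440-dimensional encoding described above can be sketched as follows; the exact definitions of the transition, content-ratio, and position-ratio components here are a simplified reading of the abstract, not the paper's reference implementation:

```python
import math

AA = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def feature_vector(seq):
    """440-dim descriptor: 400 ordered-pair ('pseudo-Markov') transition
    frequencies, 20 content ratios, 20 normalized mean positions."""
    n = len(seq)
    # 400 transition frequencies among ordered amino-acid pairs
    trans = {(a, b): 0 for a in AA for b in AA}
    for x, y in zip(seq, seq[1:]):
        trans[(x, y)] += 1
    tvec = [trans[(a, b)] / max(n - 1, 1) for a in AA for b in AA]
    # 20 content ratios
    cvec = [seq.count(a) / n for a in AA]
    # 20 normalized mean positions (0 if a residue is absent)
    pvec = []
    for a in AA:
        pos = [i + 1 for i, c in enumerate(seq) if c == a]
        pvec.append(sum(pos) / (len(pos) * n) if pos else 0.0)
    return tvec + cvec + pvec

def distance(s1, s2):
    """Euclidean distance between the two sequences' feature vectors."""
    v1, v2 = feature_vector(s1), feature_vector(s2)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))

print(distance("MKTAYIAKQR", "MKTAYIAKQR"))  # 0.0
```

Because the descriptor is a fixed-length vector, whole-dataset comparison reduces to pairwise distances with no alignment step, which is the point of the method.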

  3. Development and investigation of an inverse problem solution algorithm for determination of Ap stars magnetic field geometry

    International Nuclear Information System (INIS)

    Piskunov, N.E.

    1985-01-01

    A mathematical formulation of the inverse problem of determining magnetic field geometry from the polarization profiles of spectral lines is given, and a solving algorithm is proposed. A set of model calculations has shown the effectiveness of the algorithm and the high precision of the magnetic star model parameters obtained, as well as the advantages of the inverse problem method over the commonly used method of interpreting effective field curves.

  4. Determining gestational age for public health care users in Brazil: comparison of methods and algorithm creation

    Directory of Open Access Journals (Sweden)

    Pereira Ana Paula Esteves

    2013-02-01

    Background A valid, accurate method for determining gestational age (GA) is crucial in classifying early and late prematurity, and it is a relevant issue in perinatology. This study aimed at assessing the validity of different measures for approximating GA, and it provides an insight into the development of algorithms that can be adopted in places with characteristics similar to Brazil. A follow-up study was carried out in two cities in southeast Brazil. Participants were interviewed in the first trimester of pregnancy and in the postpartum period, with a final sample of 1483 participants after exclusions. The distributions of GA estimates at birth using ultrasound (US) at 21–28 weeks, US at 29+ weeks, the last menstrual period (LMP), and the Capurro method were compared with GA estimates at birth using the reference US (at 7–20 weeks of gestation). Kappa, sensitivity, and specificity were calculated for preterm (<37 weeks) and post-term (≥42 weeks) birth rates. The differences in days between the GA estimates from the reference US and the LMP, and between the reference US and the Capurro method, were evaluated in terms of maternal and infant characteristics, respectively. Results For prematurity, US at 21–28 weeks had the highest sensitivity (0.84) and the Capurro method the highest specificity (0.97). For postmaturity, US at 21–28 weeks and the Capurro method had a very high sensitivity (0.98). All methods of GA estimation had a very low specificity (≤0.50) for postmaturity. GA estimates at birth with the algorithm and the reference US produced very similar results, with a preterm birth rate of 12.5%. Conclusions In countries such as Brazil, where there is less accurate information about the LMP and lower coverage of early obstetric US examinations, we recommend the development of algorithms that enable the use of available information, with methodological strategies to reduce the chance of errors in GA. Thus, this study calls attention to the care needed
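
The sensitivity, specificity, and kappa comparisons reported above follow the standard 2×2 contingency-table formulas, sketched here; the labels and toy data are illustrative:

```python
def diagnostics(truth, pred):
    """Sensitivity, specificity, and Cohen's kappa for binary labels
    (e.g. 1 = preterm by the reference US, 0 = term).
    Assumes both classes occur in `truth` and in `pred`."""
    tp = sum(t and p for t, p in zip(truth, pred))
    tn = sum((not t) and (not p) for t, p in zip(truth, pred))
    fp = sum((not t) and p for t, p in zip(truth, pred))
    fn = sum(t and (not p) for t, p in zip(truth, pred))
    n = tp + tn + fp + fn
    po = (tp + tn) / n                       # observed agreement
    pe = ((tp + fp) * (tp + fn)              # chance agreement
          + (tn + fn) * (tn + fp)) / n ** 2
    return tp / (tp + fn), tn / (tn + fp), (po - pe) / (1 - pe)

print(diagnostics([1, 1, 0, 0], [1, 0, 0, 0]))  # (0.5, 1.0, 0.5)
```

In the study's terms, `truth` would come from the reference US classification and `pred` from each alternative method (later US, LMP, Capurro), one comparison per method.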

  5. Getting Your Peaks in Line: A Review of Alignment Methods for NMR Spectral Data

    Directory of Open Access Journals (Sweden)

    Trung Nghia Vu

    2013-04-01

    One of the most significant challenges in the comparative analysis of Nuclear Magnetic Resonance (NMR) metabolome profiles is the occurrence of shifts between peaks across different spectra, caused for example by fluctuations in pH, temperature, instrument factors, and ion content. Proper alignment of spectral peaks is therefore often a crucial preprocessing step prior to downstream quantitative analysis. Various alignment methods have been developed specifically for this purpose. Other methods were originally developed to align other data types (GC, LC, SELDI-MS, etc.), but can also be applied to NMR data. This review discusses the available methods, as well as related problems such as reference determination or the evaluation of alignment quality. We present a generic alignment framework that allows for comparison and classification of different alignment approaches according to their algorithmic principles, and we discuss their performance.
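
A minimal version of the shift-correction step these methods share is to slide one spectrum against a reference and keep the lag with the highest correlation (the idea behind correlation-based segment alignment); this sketch is generic and not one of the reviewed algorithms:

```python
import math

def best_shift(reference, spectrum, max_lag=20):
    """Integer lag that best aligns `spectrum` to `reference`:
    the returned lag maximizes sum(reference[i] * spectrum[i - lag]),
    i.e. a positive lag moves the spectrum to the right."""
    n = len(reference)
    def score(lag):
        return sum(reference[i] * spectrum[i - lag]
                   for i in range(n) if 0 <= i - lag < n)
    return max(range(-max_lag, max_lag + 1), key=score)

# A peak at position 50 vs the same peak drifted to position 55:
reference = [math.exp(-((i - 50) / 3.0) ** 2) for i in range(100)]
drifted   = [math.exp(-((i - 55) / 3.0) ** 2) for i in range(100)]
print(best_shift(reference, drifted))  # -5: move the drifted peak 5 points left
```

Real NMR aligners apply this per segment or per peak rather than globally, precisely because different peaks in one spectrum can drift by different amounts.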

  6. A novel algorithm for determining contact area between a respirator and a headform.

    Science.gov (United States)

    Lei, Zhipeng; Yang, James; Zhuang, Ziqing

    2014-01-01

    The contact area, as well as the contact pressure, is created when a respiratory protection device (a respirator or surgical mask) contacts a human face. A computer-based algorithm for determining the contact area between a headform and an N95 filtering facepiece respirator (FFR) was proposed. Six N95 FFRs were applied to five sizes of standard headforms (large, medium, small, long/narrow, and short/wide) to simulate respirator donning. After the contact simulation between a headform and an N95 FFR was conducted, a contact area was determined by extracting the intersection surfaces of the headform and the N95 FFR. Using computer-aided design tools, a superimposed contact area and an average contact area, which are non-uniform rational basis spline (NURBS) surfaces, were developed for each headform. Experiments that directly measured the dimensions of the contact areas between headform prototypes and N95 FFRs were used to validate the simulation results. Headform sizes influenced all contact area dimensions (P < 0.05), and N95 FFR models influenced some contact area dimensions (P < 0.05). The medium headform produced the largest contact area, while the large and small headforms produced the smallest.

  7. Measurement methods and interpretation algorithms for the determination of the remaining lifetime of the electrical insulation

    Directory of Open Access Journals (Sweden)

    Engster F.

    2005-12-01

    Full Text Available The paper presents a set of on-line and off-line measuring methods for the dielectric parameters of electrical insulation, together with a method for interpreting the results aimed at detecting the occurrence of damage and determining its speed of evolution. These results ultimately lead to the determination of the remaining lifetime under certain imposed safety conditions. The interpretation of the measurement results is based on analytical algorithms that also allow calculation of the index of correlation between the measured results and the mathematical interpolation. A comparative analysis of the different measuring and interpretation methods is performed, and certain events that occurred during the measurements, including their causes, are considered. The analytical methods have been refined over about 25 years of dielectric measurements at some 140 turbo and hydro power plants. Finally, a measurement program is proposed that will allow the correlation of on-line and off-line dielectric measurements, yielding a reliable, high-accuracy technology for estimating the remaining lifetime of electrical insulation.

  8. Delayed Slater determinant update algorithms for high efficiency quantum Monte Carlo

    Science.gov (United States)

    McDaniel, T.; D'Azevedo, E. F.; Li, Y. W.; Wong, K.; Kent, P. R. C.

    2017-11-01

    Within ab initio Quantum Monte Carlo simulations, the leading numerical cost for large systems is the computation of the values of the Slater determinants in the trial wavefunction. Each Monte Carlo step requires finding the determinant of a dense matrix. This is most commonly iteratively evaluated using a rank-1 Sherman-Morrison updating scheme to avoid repeated explicit calculation of the inverse. The overall computational cost is, therefore, formally cubic in the number of electrons or matrix size. To improve the numerical efficiency of this procedure, we propose a novel multiple rank delayed update scheme. This strategy enables probability evaluation with an application of accepted moves to the matrices delayed until after a predetermined number of moves, K. The accepted events are then applied to the matrices en bloc with enhanced arithmetic intensity and computational efficiency via matrix-matrix operations instead of matrix-vector operations. This procedure does not change the underlying Monte Carlo sampling or its statistical efficiency. For calculations on large systems and algorithms such as diffusion Monte Carlo, where the acceptance ratio is high, order of magnitude improvements in the update time can be obtained on both multi-core central processing units and graphical processing units.
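
    The baseline rank-1 scheme that the delayed update generalizes can be sketched as follows (illustrative Python, not the paper's production code): replacing one row of the Slater matrix updates the stored inverse in O(n²) via the Sherman-Morrison formula, instead of a fresh O(n³) inversion.

```python
import numpy as np

def sherman_morrison_row_update(Ainv, A, r, new_row):
    """Replace row r of A and update its inverse in O(n^2) via the
    Sherman-Morrison formula (rank-1 update: A' = A + e_r v^T)."""
    v = new_row - A[r]
    Ainv_er = Ainv[:, r]                  # A^-1 e_r (column r of the inverse)
    vT_Ainv = v @ Ainv                    # v^T A^-1
    denom = 1.0 + vT_Ainv[r]              # 1 + v^T A^-1 e_r
    Ainv_new = Ainv - np.outer(Ainv_er, vT_Ainv) / denom
    A_new = A.copy()
    A_new[r] = new_row
    return A_new, Ainv_new

rng = np.random.default_rng(0)
n = 6
A = rng.normal(size=(n, n))
Ainv = np.linalg.inv(A)
A2, Ainv2 = sherman_morrison_row_update(Ainv, A, 2, rng.normal(size=n))
print(np.allclose(Ainv2, np.linalg.inv(A2)))  # True
```

    The delayed variant described in the abstract accumulates K accepted row replacements and applies them en bloc with matrix-matrix products, which raises arithmetic intensity without changing the sampled distribution.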

  9. Alignment of whole genomes.

    Science.gov (United States)

    Delcher, A L; Kasif, S; Fleischmann, R D; Peterson, J; White, O; Salzberg, S L

    1999-01-01

    A new system for aligning whole genome sequences is described. Using an efficient data structure called a suffix tree, the system is able to rapidly align sequences containing millions of nucleotides. Its use is demonstrated on two strains of Mycobacterium tuberculosis, on two less similar species of Mycoplasma bacteria and on two syntenic sequences from human chromosome 12 and mouse chromosome 6. In each case it found an alignment of the input sequences, using between 30 s and 2 min of computation time. From the system output, information on single nucleotide changes, translocations and homologous genes can easily be extracted. Use of the algorithm should facilitate analysis of syntenic chromosomal regions, strain-to-strain comparisons, evolutionary comparisons and genomic duplications. PMID:10325427
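
    The suffix-tree machinery itself is involved, but the anchoring idea it enables can be illustrated with a much simpler (and slower) stand-in: index k-mers that occur exactly once in each sequence and use matching pairs as alignment anchors. Everything below is illustrative, not the published system.

```python
from collections import defaultdict

def kmer_anchors(ref, qry, k=8):
    """Toy stand-in for suffix-tree matching: report positions where a
    k-mer unique in the reference also occurs uniquely in the query."""
    def unique_kmers(s):
        pos = defaultdict(list)
        for i in range(len(s) - k + 1):
            pos[s[i:i + k]].append(i)
        return {km: p[0] for km, p in pos.items() if len(p) == 1}
    ru, qu = unique_kmers(ref), unique_kmers(qry)
    return sorted((ru[km], qu[km]) for km in ru.keys() & qu.keys())

ref = "ACGTACGTTGCAGGCTTACGATCG"
qry = "TTACGATCGACGTACGTTGCAGGC"
print(kmer_anchors(ref, qry)[:3])
```

    In the real system, maximal unique matches found in the suffix tree are then chained into a global alignment backbone; the toy anchors above would play the same role.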

  10. Methods in ALFA Alignment

    CERN Document Server

    Melendez, Jordan

    2014-01-01

    This note presents two model-independent methods for use in the alignment of the ALFA forward detectors. Using a Monte Carlo simulated LHC run at \beta = 90 m and \sqrt{s} = 7 TeV, the Kinematic Peak alignment method is utilized to reconstruct the Mandelstam momentum transfer variable t for single-diffractive protons. The Hot Spot method uses fluctuations in the hitmap density to pinpoint particular regions in the detector that could signal a misalignment. Another method uses an error function fit to find the detector edge. With this information, the vertical alignment can be determined.

  11. Pareto optimal pairwise sequence alignment.

    Science.gov (United States)

    DeRonne, Kevin W; Karypis, George

    2013-01-01

    Sequence alignment using evolutionary profiles is a commonly employed tool when investigating a protein. Many profile-profile scoring functions have been developed for use in such alignments, but there has not yet been a comprehensive study of Pareto optimal pairwise alignments for combining multiple such functions. We show that the problem of generating Pareto optimal pairwise alignments has an optimal substructure property, and develop an efficient algorithm for generating Pareto optimal frontiers of pairwise alignments. All possible sets of two, three, and four profile scoring functions are used from a pool of 11 functions and applied to 588 pairs of proteins in the ce_ref data set. The performance of the best objective combinations on ce_ref is also evaluated on an independent set of 913 protein pairs extracted from the BAliBASE RV11 data set. Our dynamic-programming-based heuristic approach produces approximated Pareto optimal frontiers of pairwise alignments that contain comparable alignments to those on the exact frontier, but on average in less than 1/58th the time in the case of four objectives. Our results show that the Pareto frontiers contain alignments whose quality is better than the alignments obtained by single objectives. However, the task of identifying a single high-quality alignment among those in the Pareto frontier remains challenging.
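
    The non-dominated filtering at the heart of a Pareto frontier is easy to state for two objectives. A sketch with made-up scores (the paper's algorithm builds frontiers inside the dynamic program rather than filtering after the fact):

```python
def pareto_frontier(points):
    """Return the non-dominated subset of (score1, score2) pairs,
    where larger is better in both objectives."""
    pts = sorted(set(points), key=lambda p: (-p[0], -p[1]))
    front, best_y = [], float("-inf")
    for x, y in pts:
        if y > best_y:          # not dominated by any point with larger x
            front.append((x, y))
            best_y = y
    return front

# Hypothetical (objective1, objective2) scores for candidate alignments
alignments = [(10, 3), (8, 8), (9, 5), (7, 9), (10, 2), (6, 6)]
print(pareto_frontier(alignments))  # [(10, 3), (9, 5), (8, 8), (7, 9)]
```

    Every alignment on the returned frontier is better than any discarded one in at least one objective, which is exactly the trade-off set the study examines.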

  12. Confocal Microscopy for Process Monitoring and Wide-Area Height Determination of Vertically-Aligned Carbon Nanotube Forests

    Directory of Open Access Journals (Sweden)

    Markus Piwko

    2015-08-01

    Full Text Available Confocal microscopy is introduced as a new and generally applicable method for characterizing the height of vertically-aligned carbon nanotube (VACNT) forests. This technique significantly strengthens process control. The topography of the substrate and the VACNT can be mapped with a height resolution down to 15 nm. The advantages of confocal microscopy, compared to scanning electron microscopy (SEM), are demonstrated by investigating the growth kinetics of VACNT using Al2O3 buffer layers with varying thicknesses. A process optimization using confocal microscopy for fast VACNT forest height evaluation is presented.

  13. Comparison of algorithms for determination of rotation measure and Faraday structure. I. 1100–1400 MHz

    International Nuclear Information System (INIS)

    Sun, X. H.; Akahori, Takuya; Anderson, C. S.; Farnes, J. S.; O’Sullivan, S. P.; Rudnick, L.; O’Brien, T.; Bell, M. R.; Bray, J. D.; Scaife, A. M. M.; Ideguchi, S.; Kumazaki, K.; Stepanov, R.; Stil, J.; Wolleben, M.; Takahashi, K.; Weeren, R. J. van

    2015-01-01

    Faraday rotation measures (RMs) and more general Faraday structures are key parameters for studying cosmic magnetism and are also sensitive probes of faint ionized thermal gas. A definition of which derived quantities are required for various scientific studies is needed, as well as addressing the challenges in determining Faraday structures. A wide variety of algorithms has been proposed to reconstruct these structures. In preparation for the Polarization Sky Survey of the Universe's Magnetism (POSSUM) to be conducted with the Australian Square Kilometre Array Pathfinder and the ongoing Galactic Arecibo L-band Feeds Array Continuum Transit Survey (GALFACTS), we run a Faraday structure determination data challenge to benchmark the currently available algorithms, including Faraday synthesis (previously called RM synthesis in the literature), wavelet, compressive sampling, and QU-fitting. The input models include sources with one Faraday thin component, two Faraday thin components, and one Faraday thick component. The frequency set is similar to POSSUM/GALFACTS with a 300 MHz bandwidth from 1.1 to 1.4 GHz. We define three figures of merit motivated by the underlying science: (1) an average RM weighted by polarized intensity, RM_wtd, (2) the separation Δϕ of two Faraday components, and (3) the reduced chi-squared χr². Based on the current test data with a signal-to-noise ratio of about 32, we find the following. (1) When only one Faraday thin component is present, most methods perform as expected, with occasional failures where two components are incorrectly found. (2) For two Faraday thin components, QU-fitting routines perform the best, with errors close to the theoretical ones for RM_wtd but with significantly higher errors for Δϕ. All other methods, including standard Faraday synthesis, frequently identify only one component when Δϕ is below or near the width of the Faraday point-spread function. (3) No methods as currently implemented work well
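
    The Faraday synthesis baseline in this comparison is essentially a Fourier-like transform of the complex polarization from λ² space to Faraday depth φ. A minimal numpy sketch with one Faraday-thin component (all values illustrative; the band roughly matches the 1.1-1.4 GHz setup described above):

```python
import numpy as np

c = 2.998e8
freqs = np.linspace(1.1e9, 1.4e9, 300)   # Hz, POSSUM/GALFACTS-like band
lam2 = (c / freqs) ** 2                  # wavelength squared, m^2
RM_true = 40.0                           # rad m^-2 (illustrative value)
P = np.exp(2j * RM_true * lam2)          # Faraday-thin complex polarization

# Evaluate the Faraday dispersion function on a grid of trial depths phi
phi = np.linspace(-200, 200, 2001)
l0 = lam2.mean()
F = np.array([np.mean(P * np.exp(-2j * p * (lam2 - l0))) for p in phi])
print(phi[np.argmax(np.abs(F))])         # ~40.0: peak recovers RM_true
```

    For a single thin component the peak of |F(φ)| sits at the true RM; the challenge cases above probe what happens when two components sit closer than the width of this transform's point-spread function.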

  14. Lord-Wingersky Algorithm Version 2.0 for Hierarchical Item Factor Models with Applications in Test Scoring, Scale Alignment, and Model Fit Testing.

    Science.gov (United States)

    Cai, Li

    2015-06-01

    Lord and Wingersky's (Appl Psychol Meas 8:453-461, 1984) recursive algorithm for creating summed score based likelihoods and posteriors has a proven track record in unidimensional item response theory (IRT) applications. Extending the recursive algorithm to handle multidimensionality is relatively simple, especially with fixed quadrature because the recursions can be defined on a grid formed by direct products of quadrature points. However, the increase in computational burden remains exponential in the number of dimensions, making the implementation of the recursive algorithm cumbersome for truly high-dimensional models. In this paper, a dimension reduction method that is specific to the Lord-Wingersky recursions is developed. This method can take advantage of the restrictions implied by hierarchical item factor models, e.g., the bifactor model, the testlet model, or the two-tier model, such that a version of the Lord-Wingersky recursive algorithm can operate on a dramatically reduced set of quadrature points. For instance, in a bifactor model, the dimension of integration is always equal to 2, regardless of the number of factors. The new algorithm not only provides an effective mechanism to produce summed score to IRT scaled score translation tables properly adjusted for residual dependence, but leads to new applications in test scoring, linking, and model fit checking as well. Simulated and empirical examples are used to illustrate the new applications.
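
    The original Lord-Wingersky recursion that the paper extends is compact enough to state directly: convolve item response probabilities one item at a time to obtain the likelihood of each summed score. A sketch for dichotomous items at a fixed θ (probabilities invented for illustration):

```python
def summed_score_likelihoods(probs):
    """Lord-Wingersky recursion: given correct-response probabilities
    p_i(theta) for dichotomous items at a fixed theta, return
    L[s] = P(summed score = s | theta) for s = 0..n."""
    L = [1.0]
    for p in probs:
        new = [0.0] * (len(L) + 1)
        for s, ls in enumerate(L):
            new[s] += ls * (1.0 - p)   # item answered incorrectly
            new[s + 1] += ls * p       # item answered correctly
        L = new
    return L

L = summed_score_likelihoods([0.7, 0.5, 0.9])
print([round(v, 3) for v in L])  # [0.015, 0.185, 0.485, 0.315]
```

    The dimension reduction in the paper keeps this same recursion but runs it on a quadrature grid whose size stays fixed (e.g., two dimensions for a bifactor model) regardless of the number of specific factors.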

  15. Formatt: Correcting protein multiple structural alignments by incorporating sequence alignment

    Directory of Open Access Journals (Sweden)

    Daniels Noah M

    2012-10-01

    Full Text Available Abstract Background The quality of multiple protein structure alignments is usually computed and assessed based on geometric functions of the coordinates of the backbone atoms from the protein chains. These purely geometric methods do not directly utilize protein sequence similarity, and in fact, determining the proper way to incorporate sequence similarity measures into the construction and assessment of protein multiple structure alignments has proved surprisingly difficult. Results We present Formatt, a multiple structure alignment based on the Matt purely geometric multiple structure alignment program, that also takes into account sequence similarity when constructing alignments. We show that Formatt outperforms Matt and other popular structure alignment programs on the popular HOMSTRAD benchmark. For the SABMark twilight zone benchmark set that captures more remote homology, Formatt and Matt outperform other programs; depending on choice of embedded sequence aligner, Formatt produces either better sequence and structural alignments with a smaller core size than Matt, or similarly sized alignments with better sequence similarity, for a small cost in average RMSD. Conclusions Considering sequence information as well as purely geometric information seems to improve quality of multiple structure alignments, though defining what constitutes the best alignment when sequence and structural measures would suggest different alignments remains a difficult open question.

  16. Ancestral sequence alignment under optimal conditions

    Directory of Open Access Journals (Sweden)

    Brown Daniel G

    2005-11-01

    Full Text Available Abstract Background Multiple genome alignment is an important problem in bioinformatics. An important subproblem used by many multiple alignment approaches is that of aligning two multiple alignments. Many popular alignment algorithms for DNA use the sum-of-pairs heuristic, where the score of a multiple alignment is the sum of its induced pairwise alignment scores. However, the biological meaning of the sum-of-pairs heuristic is not obvious. Additionally, many algorithms based on the sum-of-pairs heuristic are complicated and slow, compared to pairwise alignment algorithms. An alternative approach to aligning alignments is to first infer ancestral sequences for each alignment, and then align the two ancestral sequences. In addition to being fast, this method has a clear biological basis that takes into account the evolution implied by an underlying phylogenetic tree. In this study we explore the accuracy of aligning alignments by ancestral sequence alignment. We examine the use of both maximum likelihood and parsimony to infer ancestral sequences. Additionally, we investigate the effect on accuracy of allowing ambiguity in our ancestral sequences. Results We use synthetic sequence data that we generate by simulating evolution on a phylogenetic tree. We use two different types of phylogenetic trees: trees with a period of rapid growth followed by a period of slow growth, and trees with a period of slow growth followed by a period of rapid growth. We examine the alignment accuracy of four ancestral sequence reconstruction and alignment methods: parsimony, maximum likelihood, ambiguous parsimony, and ambiguous maximum likelihood. Additionally, we compare against the alignment accuracy of two sum-of-pairs algorithms: ClustalW and the heuristic of Ma, Zhang, and Wang. Conclusion We find that allowing ambiguity in ancestral sequences does not lead to better multiple alignments. 
Regardless of whether we use parsimony or maximum likelihood, the

  17. Quasiparticle Level Alignment for Photocatalytic Interfaces.

    Science.gov (United States)

    Migani, Annapaola; Mowbray, Duncan J; Zhao, Jin; Petek, Hrvoje; Rubio, Angel

    2014-05-13

    Electronic level alignment at the interface between an adsorbed molecular layer and a semiconducting substrate determines the activity and efficiency of many photocatalytic materials. Standard density functional theory (DFT)-based methods have proven unable to provide a quantitative description of this level alignment. This requires a proper treatment of the anisotropic screening, necessitating the use of quasiparticle (QP) techniques. However, the computational complexity of QP algorithms has meant a quantitative description of interfacial levels has remained elusive. We provide a systematic study of a prototypical interface, bare and methanol-covered rutile TiO2(110) surfaces, to determine the type of many-body theory required to obtain an accurate description of the level alignment. This is accomplished via a direct comparison with metastable impact electron spectroscopy (MIES), ultraviolet photoelectron spectroscopy (UPS), and two-photon photoemission (2PP) spectroscopy. We consider GGA DFT, hybrid DFT, and G0W0, scQPGW1, scQPGW0, and scQPGW QP calculations. Our results demonstrate that G0W0, or our recently introduced scQPGW1 approach, are required to obtain the correct alignment of both the highest occupied and lowest unoccupied interfacial molecular levels (HOMO/LUMO). These calculations set a new standard in the interpretation of electronic structure probe experiments of complex organic molecule/semiconductor interfaces.

  18. Mineral crystal alignment in mineralized fracture callus determined by 3D small-angle X-ray scattering

    Energy Technology Data Exchange (ETDEWEB)

    Liu Yifei; Manjubala, Inderchand; Fratzl, Peter [Department of Biomaterials, Max Planck Institute of Colloids and Interfaces, 14424 Potsdam (Germany); Roschger, Paul [4th Medical Department, Ludwig Boltzmann Institute of Osteology at Hanusch Hospital of WGKK and AUVA Trauma Centre Meidling, 1140 Vienna (Austria); Schell, Hanna; Duda, Georg N, E-mail: fratzl@mpikg.mpg.d [Julius Wolff Institut and Center for Musculoskeletal Surgery, Charite- University Medicine Berlin, Augustenburger Platz 1, 13353 Berlin (Germany)

    2010-10-01

    Callus tissue formed during bone fracture healing is a mixture of different tissue types as revealed by histological analysis. But the structural characteristics of mineral crystals within the healing callus are not well known. Since two-dimensional (2D) scanning small-angle X-ray scattering (sSAXS) patterns showed that the size and orientation of callus crystals vary both spatially and temporally [1] and 2D electron microscopic analysis implies an anisotropic property of the callus morphology, the mineral crystals within the callus are also expected to vary in size and orientation in 3D. Three-dimensional small-angle X-ray scattering (3D SAXS), which combines 2D SAXS patterns collected at different angles of sample tilting, has been previously applied to investigate bone minerals in horse radius [2] and oim/oim mouse femur/tibia [3]. We implement a similar 3D SAXS method but with a different way of data analysis to gather information on the mineral alignment in fracture callus. With the proposed accurate yet fast assessment of 3D SAXS information, it was shown that the plate shaped mineral particles in the healing callus were aligned in groups with their predominant orientations occurring as a fiber texture.

  19. Mineral crystal alignment in mineralized fracture callus determined by 3D small-angle X-ray scattering

    International Nuclear Information System (INIS)

    Liu Yifei; Manjubala, Inderchand; Fratzl, Peter; Roschger, Paul; Schell, Hanna; Duda, Georg N

    2010-01-01

    Callus tissue formed during bone fracture healing is a mixture of different tissue types as revealed by histological analysis. But the structural characteristics of mineral crystals within the healing callus are not well known. Since two-dimensional (2D) scanning small-angle X-ray scattering (sSAXS) patterns showed that the size and orientation of callus crystals vary both spatially and temporally [1] and 2D electron microscopic analysis implies an anisotropic property of the callus morphology, the mineral crystals within the callus are also expected to vary in size and orientation in 3D. Three-dimensional small-angle X-ray scattering (3D SAXS), which combines 2D SAXS patterns collected at different angles of sample tilting, has been previously applied to investigate bone minerals in horse radius [2] and oim/oim mouse femur/tibia [3]. We implement a similar 3D SAXS method but with a different way of data analysis to gather information on the mineral alignment in fracture callus. With the proposed accurate yet fast assessment of 3D SAXS information, it was shown that the plate shaped mineral particles in the healing callus were aligned in groups with their predominant orientations occurring as a fiber texture.

  20. Beyond Alignment

    DEFF Research Database (Denmark)

    Beyond Alignment: Applying Systems Thinking to Architecting Enterprises is a comprehensive reader about how enterprises can apply systems thinking in their enterprise architecture practice, for business transformation and for strategic execution. The book's contributors find that systems thinking...

  1. MUON DETECTORS: ALIGNMENT

    CERN Multimedia

    M. Dallavalle

    2013-01-01

    A new Muon misalignment scenario for 2011 (7 TeV) Monte Carlo re-processing was released. The scenario is based on running the standard track-based reference-target algorithm (exactly as in data) using a single-muon simulated sample (with the transverse-momentum spectrum matching data). It used statistics similar to what was used for alignment with 2011 data, starting from an initially misaligned Muon geometry reflecting the uncertainties of hardware measurements and using the latest Tracker misalignment geometry. Validation of the scenario (with muons from Z decay and high-pT simulated muons) shows that it describes data well. The study of systematic uncertainties (now dominant, given the huge amount of data collected by CMS and used for muon alignment) is finalised. Realistic alignment position errors are being obtained from the estimated uncertainties and are expected to improve the muon reconstruction performance. Concerning the Hardware Alignment System, the upgrade of the Barrel Alignment is in progress. By now, d...

  2. Supervised chaos genetic algorithm based state of charge determination for LiFePO4 batteries in electric vehicles

    Science.gov (United States)

    Shen, Yanqing

    2018-04-01

    The LiFePO4 battery is developing rapidly in electric vehicles, where safety and functional capability are greatly influenced by the evaluation of available cell capacity. Adding an adaptive switch mechanism, this paper advances a supervised chaos genetic algorithm based state-of-charge determination method, in which a combined state space model is employed to simulate battery dynamics. The method is validated with experimental data collected from a battery test system. Results indicate that the supervised chaos genetic algorithm based state-of-charge determination method performs well, with low computational complexity, and is little influenced by the unknown initial cell state.

  3. Determining clinical photon beam spectra from measured depth dose with the Cimmino algorithm

    International Nuclear Information System (INIS)

    Bloch, P.; Altschuler, M.D.; Bjaerngard, B.E.; Kassaee, A.; McDonough, J.

    2000-01-01

    A method to determine the spectrum of a clinical photon beam from measured depth-dose data is described. At shallow depths, where the range of Compton-generated electrons increases rapidly with photon energy, the depth dose provides the information to discriminate the spectral contributions. To minimize the influence of contaminating electrons, small (6×6 cm²) fields were used. The measured depth dose is represented as a linear combination of basis functions, namely the depth doses of monoenergetic photon beams derived by Monte Carlo simulations. The weights of the basis functions were obtained with the Cimmino feasibility algorithm, which examines in each iteration the discrepancy between predicted and measured depth dose. For 6 and 15 MV photon beams of a clinical accelerator, the depth dose obtained from the derived spectral weights was within about 1% of the measured depth dose at all depths. Because the problem is ill-conditioned, solutions for the spectrum can fluctuate with energy. Physically realistic smooth spectra for these photon beams appeared when a small margin (about ±1%) was attributed to the measured depth dose. The maximum energy of both derived spectra agreed with the measured energy of the electrons striking the target to within 1 MeV. The use of a feasibility method on minimally relaxed constraints provides realistic spectra quickly and interactively. (author)
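
    Cimmino's method itself is a simultaneous-projection scheme: each iteration averages the projections of the current iterate onto all row hyperplanes of the linear system. A toy sketch with two synthetic exponential "depth dose" basis functions (all curves and constants invented for illustration; the nonnegativity clip stands in for the feasibility constraints on spectral weights):

```python
import numpy as np

def cimmino(A, b, iters=20000, lam=1.0):
    """Cimmino's simultaneous projection method for A x ~= b, with a
    nonnegativity clip after each sweep (spectral weights cannot be < 0)."""
    m, n = A.shape
    x = np.zeros(n)
    row_norm2 = (A * A).sum(axis=1)
    for _ in range(iters):
        resid = b - A @ x
        x = x + lam * (A.T @ (resid / row_norm2)) / m
        x = np.clip(x, 0.0, None)   # project onto the feasible set x >= 0
    return x

# Toy "measured depth dose": a 60/40 mix of two exponential basis curves
depths = np.linspace(0.5, 20, 40)
A = np.column_stack([np.exp(-0.08 * depths), np.exp(-0.04 * depths)])
b = A @ np.array([0.6, 0.4])
w = cimmino(A, b)
print(np.round(w, 3))  # ~[0.6 0.4]
```

    With the many nearly monoenergetic basis curves of the real problem, the system is far more ill-conditioned, which is why the paper relaxes the constraints by a small margin to obtain smooth spectra.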

  4. A hybrid algorithm of BSC and QFD to determine the criteria affecting implementation of successful outsourcing

    Directory of Open Access Journals (Sweden)

    Mohammad Hemati

    2012-04-01

    Full Text Available Successful organizations share some identical factors that pave the way for their success. Among these factors, strategic management is the key to success for organizations to contribute more to the competitive world market of today. In this respect, the pivotal role of outsourcing cannot be denied. This research maps the criteria affecting outsourcing success, as presented in the Elmuti model, onto the balanced scorecard method at Tose'e Ta'avon Bank. Questionnaires and interviews with experts helped determine the strategic goals for the four perspectives of the balanced scorecard (BSC), and the relative weights were computed for each perspective using the AHP method. As the next step, the indexes were prioritized by applying the quality function deployment (QFD) technique, considering the strategic goals of the four perspectives in the "WHAT" section and the outsourcing success criteria of the Elmuti model in the "HOW" section. At the end of the algorithm, the results are compared with the Elmuti method. Based on the results, the proposed hybrid technique seems to perform better than Elmuti's.

  5. Determination of optimum allocation and pricing of distributed generation using genetic algorithm methodology

    Science.gov (United States)

    Mwakabuta, Ndaga Stanslaus

    Electric power distribution systems play a significant role in providing continuous and "quality" electrical energy to different classes of customers. In the context of the present restrictions on transmission system expansions and the new paradigm of "open and shared" infrastructure, new approaches to distribution system analyses, economic and operational decision-making need investigation. This dissertation includes three layers of distribution system investigations. In the basic level, improved linear models are shown to offer significant advantages over previous models for advanced analysis. In the intermediate level, the improved model is applied to solve the traditional problem of operating cost minimization using capacitors and voltage regulators. In the advanced level, an artificial intelligence technique is applied to minimize cost under Distributed Generation injection from private vendors. Soft computing techniques are finding increasing applications in solving optimization problems in large and complex practical systems. The dissertation focuses on Genetic Algorithm for investigating the economic aspects of distributed generation penetration without compromising the operational security of the distribution system. The work presents a methodology for determining the optimal pricing of distributed generation that would help utilities make a decision on how to operate their system economically. This would enable modular and flexible investments that have real benefits to the electric distribution system. Improved reliability for both customers and the distribution system in general, reduced environmental impacts, increased efficiency of energy use, and reduced costs of energy services are some advantages.

  6. Cognitive Development Optimization Algorithm Based Support Vector Machines for Determining Diabetes

    Directory of Open Access Journals (Sweden)

    Utku Kose

    2016-03-01

    Full Text Available The definition, diagnosis and classification of Diabetes Mellitus and its complications are very important. The World Health Organization (WHO) and other societies, as well as scientists, have conducted many studies on this subject. One of the most important research interests within it is computer-supported decision systems for diagnosing diabetes. In such systems, Artificial Intelligence techniques are often used for several disease diagnostics to streamline the diagnostic process in daily routine and avoid misdiagnosis. In this study, a diabetes diagnosis system formed from both Support Vector Machines (SVM) and the Cognitive Development Optimization Algorithm (CoDOA) is proposed. During SVM training, CoDOA was used to determine the sigma parameter of the Gaussian (RBF) kernel function, and eventually a classification process was carried out on the Pima Indians diabetes data set. The proposed approach offers an alternative solution in the field of Artificial Intelligence-based diabetes diagnosis and contributes to the related literature on diagnosis processes.

  7. An algorithm for determining the K-best solutions of the one-dimensional Knapsack problem

    Directory of Open Access Journals (Sweden)

    Horacio Hideki Yanasse

    2000-06-01

    Full Text Available In this work we present an enumerative scheme for determining the K-best solutions (K > 1) of the one-dimensional knapsack problem. If n is the total number of different items and b is the knapsack's capacity, the computational complexity of the proposed scheme is bounded by O(Knb), with memory requirements bounded by O(nb). The algorithm was implemented on a workstation and computational tests for varying values of the parameters were performed.
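
    The flavor of such a scheme can be illustrated with a dynamic program that keeps, for every capacity, a list of the K best attainable values (a simplified sketch matching the stated O(Knb) time and related memory bounds; the paper's enumerative method is not necessarily this exact recurrence):

```python
def k_best_knapsack(values, weights, b, K):
    """Return the K best total values achievable with capacity b
    (0/1 knapsack), in descending order."""
    # dp[c] = descending list of up to K best values using capacity <= c
    dp = [[0] for _ in range(b + 1)]
    for v, w in zip(values, weights):
        for c in range(b, w - 1, -1):   # iterate downward: 0/1, not unbounded
            merged = sorted(dp[c] + [x + v for x in dp[c - w]], reverse=True)
            dp[c] = merged[:K]          # keep only the K best per capacity
    return dp[b]

print(k_best_knapsack([60, 100, 120], [10, 20, 30], 50, 3))  # [220, 180, 160]
```

    For the classic three-item instance above, the feasible subsets have values {0, 60, 100, 120, 160, 180, 220}, so the three best are 220, 180 and 160.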

  8. An objective algorithm for the determination of bone mineral content using dichromatic absorptiometry

    International Nuclear Information System (INIS)

    Appledorn, C.R.; Witt, R.M.; Wellman, H.N.; Johnston, C.C.

    1985-01-01

    The determination of vertebral column bone mineral content by dual photon absorptiometric methods is a problem of continued clinical interest. The more successful methods suffer from the frequent need for operator interaction in order to maintain good precision. The authors have introduced a new objective algorithm that eliminates the subjectiveness of operator interaction without sacrificing reproducibility. The authors' system consists of a modified rectilinear scanner interfaced to a CAMAC acquisition device coupled to a PDP-11V03 minicomputer. The subject is scanned in the supine position with legs elevated to minimize lordosis. The source (Gd-153) and detector are collimated, defining an area of 10 mm x 10 mm at the level of the spine. The transverse scan width is usually 120 mm. Scanning from the iliac crests toward the head, 50 transverse passes at 3 mm y-increments are acquired, sampling at approximately 1 mm increments. The data analysis begins with the calculation of the R-value for each pixel in the scan. The calculations for bone mineral content are then performed and various quantities accumulated. In a reproducibility study of 116 patient studies, the authors achieved a bone mineral/bone area ratio precision (std dev/mean) of 1.37% without operator interaction or vertebral body selection

  9. An Image Segmentation Based on a Genetic Algorithm for Determining Soil Coverage by Crop Residues

    Science.gov (United States)

    Ribeiro, Angela; Ranz, Juan; Burgos-Artizzu, Xavier P.; Pajares, Gonzalo; Sanchez del Arco, Maria J.; Navarrete, Luis

    2011-01-01

    Determination of the soil coverage by crop residues after ploughing is a fundamental element of Conservation Agriculture. This paper presents the application of genetic algorithms employed during the fine tuning of the segmentation process of a digital image with the aim of automatically quantifying the residue coverage. In other words, the objective is to achieve a segmentation that would permit the discrimination of the texture of the residue so that the output of the segmentation process is a binary image in which residue zones are isolated from the rest. The RGB images used come from a sample of images in which sections of terrain were photographed with a conventional camera positioned in zenith orientation atop a tripod. The images were taken outdoors under uncontrolled lighting conditions. Up to 92% similarity was achieved between the images obtained by the segmentation process proposed in this paper and the templates made by an elaborate manual tracing process. In addition to the proposed segmentation procedure and the fine tuning procedure that was developed, a global quantification of the soil coverage by residues for the sampled area was achieved that differed by only 0.85% from the quantification obtained using template images. Moreover, the proposed method does not depend on the type of residue present in the image. The study was conducted at the experimental farm “El Encín” in Alcalá de Henares (Madrid, Spain). PMID:22163966
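
    The role the genetic algorithm plays in such a pipeline can be sketched with a deliberately tiny example: evolve a single grayscale threshold so that the resulting binary segmentation best matches a manually traced template (synthetic 1-D "image"; the paper tunes a full segmentation process, not just one threshold):

```python
import random

def ga_tune_threshold(pixels, template, pop=20, gens=40, seed=1):
    """Toy genetic algorithm: evolve a grayscale threshold so that
    (pixel > t) agrees with a traced binary template as often as possible."""
    rng = random.Random(seed)
    def fitness(t):
        return sum((p > t) == m for p, m in zip(pixels, template)) / len(pixels)
    population = [rng.uniform(0, 255) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop // 2]          # elitist selection
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2 + rng.gauss(0, 5)  # crossover + mutation
            children.append(min(255.0, max(0.0, child)))
        population = parents + children
    return max(population, key=fitness)

# Synthetic scene: residue pixels are bright (~180), soil is dark (~60)
rng = random.Random(0)
template = [rng.random() < 0.3 for _ in range(500)]
pixels = [rng.gauss(180 if m else 60, 15) for m in template]
t = ga_tune_threshold(pixels, template)
print(f"threshold = {t:.1f}")
```

    On this well-separated synthetic data the evolved threshold lands between the two intensity modes and reproduces the template almost perfectly; the paper's fitness instead measures agreement between segmented images and manually traced templates of real field photographs.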

  10. Alignment of the Measurement Scale Mark during Immersion Hydrometer Calibration Using an Image Processing System

    OpenAIRE

    Peña-Perez, Luis Manuel; Pedraza-Ortega, Jesus Carlos; Ramos-Arreguin, Juan Manuel; Arriaga, Saul Tovar; Fernandez, Marco Antonio Aceves; Becerra, Luis Omar; Hurtado, Efren Gorrostieta; Vargas-Soto, Jose Emilio

    2013-01-01

    The present work presents an improved method to align the measurement scale mark in an immersion hydrometer calibration system of CENAM, the National Metrology Institute (NMI) of Mexico. The proposed method uses a vision system to align the scale mark of the hydrometer to the surface of the liquid where it is immersed by implementing image processing algorithms. This approach reduces the variability in the apparent mass determination during the hydrostatic weighing in the calibration process,...

  11. Alignment of the measurement scale mark during immersion hydrometer calibration using an image processing system.

    Science.gov (United States)

    Peña-Perez, Luis Manuel; Pedraza-Ortega, Jesus Carlos; Ramos-Arreguin, Juan Manuel; Arriaga, Saul Tovar; Fernandez, Marco Antonio Aceves; Becerra, Luis Omar; Hurtado, Efren Gorrostieta; Vargas-Soto, Jose Emilio

    2013-10-24

    The present work presents an improved method to align the measurement scale mark in an immersion hydrometer calibration system of CENAM, the National Metrology Institute (NMI) of Mexico. The proposed method uses a vision system to align the scale mark of the hydrometer to the surface of the liquid where it is immersed by implementing image processing algorithms. This approach reduces the variability in the apparent mass determination during the hydrostatic weighing in the calibration process, therefore decreasing the relative uncertainty of calibration.

  12. Application of fuzzy C-Means Algorithm for Determining Field of Interest in Information System Study STTH Medan

    Science.gov (United States)

    Rahman Syahputra, Edy; Agustina Dalimunthe, Yulia; Irvan

    2017-12-01

    Many students are confused in choosing their field of specialization, and ultimately choose one that does not suit them, for reasons such as simply following a friend or picking from the many available options without knowing whether they have the competencies required in the chosen field. This research aims to apply a clustering method with the Fuzzy C-Means algorithm to classify students into their chosen field of interest. The Fuzzy C-Means algorithm is one of the easiest and most frequently used algorithms in data clustering because it makes efficient estimates and does not require many parameters. Several studies have concluded that the Fuzzy C-Means algorithm can be used to group data based on certain attributes. In this research, the Fuzzy C-Means algorithm is used to classify student data based on grades in core subjects for the selection of the specialization field. This study also tests the accuracy of the Fuzzy C-Means algorithm in determining the area of interest. The study was conducted on the STT-Harapan Medan Information System Study program, and the data are the grades of all 2012 students of the STT-Harapan Medan Information System Study Program. The research is expected to yield specialization fields matched to the students' abilities, based on their grades in the prerequisite core subjects.
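
    The grouping step can be sketched with a minimal fuzzy C-means implementation on one-dimensional grade averages. The cluster count, the fuzzifier m = 2, and the sample grades are illustrative assumptions, not values taken from the study:

```python
import random

def fcm(data, c=2, m=2.0, iters=50, seed=1):
    """Minimal fuzzy C-means for 1-D data (e.g. core-subject grade averages)."""
    rng = random.Random(seed)
    centers = rng.sample(data, c)
    for _ in range(iters):
        # Membership update: closer centers receive higher membership degrees.
        u = []
        for x in data:
            d = [abs(x - ck) + 1e-9 for ck in centers]   # guard zero distance
            u.append([1.0 / sum((d[i] / d[j]) ** (2 / (m - 1)) for j in range(c))
                      for i in range(c)])
        # Center update: fuzzy weighted mean of the data.
        centers = [sum((u[k][i] ** m) * data[k] for k in range(len(data))) /
                   sum(u[k][i] ** m for k in range(len(data)))
                   for i in range(c)]
    return centers, u
```

    Each student ends up with a degree of membership in every cluster, which is precisely what lets the method rank how well a specialization fits rather than forcing a hard assignment.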

  13. Quantitative x-ray photoelectron spectroscopy: Simple algorithm to determine the amount of atoms in the outermost few nanometers

    International Nuclear Information System (INIS)

    Tougaard, Sven

    2003-01-01

    It is well known that due to inelastic electron scattering, the measured x-ray photoelectron spectroscopy peak intensity depends strongly on the in-depth atom distribution. Quantification based only on the peak intensity can therefore give large errors. The problem was basically solved by developing algorithms for the detailed analysis of the energy distribution of emitted electrons. These algorithms have been extensively tested experimentally and found to be able to determine the depth distribution of atoms with nanometer resolution. Practical application of these algorithms has increased after ready-to-use software packages were made available and they are now being used in laboratories worldwide. These software packages are easy to use but they need operator interaction. They are not well suited for automatic data processing and there is an additional need for simplified quantification strategies that can be automated. In this article we report on a very simple algorithm. It is a slightly more accurate version of our previous algorithm. The algorithm gives the amount of atoms within the outermost three inelastic mean free paths and it also gives a rough estimate of the in-depth distribution. An experimental example of its application is also presented.

  14. The CMS Silicon Tracker Alignment

    CERN Document Server

    Castello, R

    2008-01-01

    The alignment of the Strip and Pixel Tracker of the Compact Muon Solenoid experiment, with its large number of independent silicon sensors and its excellent spatial resolution, is a complex and challenging task. Besides high precision mounting, survey measurements and the Laser Alignment System, track-based alignment is needed to reach the envisaged precision. Three different algorithms for track-based alignment were successfully tested on a sample of cosmic-ray data collected at the Tracker Integration Facility, where 15% of the Tracker was tested. These results, together with those coming from the CMS global run, will provide the basis for the full-scale alignment of the Tracker, which will be carried out with the first p-p collisions.

  15. Preliminary results of algorithms to determine horizontal and vertical underwater visibilities of coastal waters

    Digital Repository Service at National Institute of Oceanography (India)

    Suresh, T.; Joshi, Shreya; Talaulikar, M.; Desa, E.J.

    the underwater average cosine. These algorithms for vertical and horizontal visibilities have been validated for the coastal waters of Goa against measured values and those derived from the ocean color data of OCM-2 and MODIS...

  16. An algorithm for determination of geodetic path for application in long-range acoustic propagation

    Digital Repository Service at National Institute of Oceanography (India)

    Murty, T.V.R.; Sivakholundu, K.M.; Navelkar, G.S.; Somayajulu, Y.K.; Murty, C.S.

    the distance of interest subject to initial conditions and an azimuth (the forward problem). The inverse problem has been solved iteratively based on the spheroidal geometry to supplement the initial conditions to the forward problem. The algorithm has been test...

  17. Noninvasive Biosensor Algorithms for Continuous Metabolic Rate Determination--SMS01302

    Data.gov (United States)

    National Aeronautics and Space Administration — This is the final year of the project. During 2012 we completed the development of an algorithm for calculating VO2 during cycling using data from the Near Infrared...

  18. Automatic Algorithm for the Determination of the Anderson-wilkins Acuteness Score In Patients With St Elevation Myocardial Infarction

    DEFF Research Database (Denmark)

    Fakhri, Yama; Sejersten, Maria; Schoos, Mikkel Malby

    2016-01-01

    using 50 ECGs. Each ECG lead (except aVR) was manually scored according to AW-score by two independent experts (Exp1 and Exp2) and automatically by our designed algorithm (auto-score). An adjudicated manual score (Adj-score) was determined between Exp1 and Exp2. The inter-rater reliabilities (IRRs...

  19. Computer-Based Algorithmic Determination of Muscle Movement Onset Using M-Mode Ultrasonography

    Science.gov (United States)

    2017-05-01

    statistical analysis. Given the large number of MO algorithms, poorly performing algorithms were systematically eliminated from further evaluation. First... very large data sets (Kaufman and Rousseeuw 2005). The three algorithms with the closest proximity (i.e., highest similarity) to the gold-standard... but lowering the thresholds will likely increase the chances of premature onset detection. [Fig. 4. Forest plot of mean...]

  20. A new algorithm to determine the total radiated power at ASDEX upgrade

    Energy Technology Data Exchange (ETDEWEB)

    Gloeggler, Stephan; Bernert, Matthias; Eich, Thomas [Max Planck Institute for Plasma Physics, Boltzmannstr. 2, 85748 Garching (Germany); Collaboration: The ASDEX Upgrade Team

    2016-07-01

    Radiation is an essential part of the power balance in a fusion plasma. In future fusion devices about 90% of the power will have to be dissipated, mainly by radiation. For the development of an appropriate operational scenario, information about the absolute level of plasma radiation (P{sub rad,tot}) is crucial. Bolometers are used to measure the radiated power, however, an algorithm is required to derive the absolute power out of many line-integrated measurements. The currently used algorithm (BPD) was developed for the main chamber radiation. It underestimates the divertor radiation as its basic assumptions are not satisfied in this region. Therefore, a new P{sub rad,tot} algorithm is presented. It applies an Abel inversion on the main chamber and uses empirically based assumptions for poloidal asymmetries and the divertor radiation. To benchmark the new algorithm, synthetic emissivity profiles are used. On average, the new Abel inversion based algorithm deviates by only 10% from the nominal synthetic value while BPD is about 25% too low. With both codes time traces of ASDEX Upgrade discharges are calculated. The analysis of these time traces shows that the underestimation of the divertor radiation can have significant consequences on the accuracy of BPD while the new algorithm is shown to be stable.
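
    The record does not give the inversion details; a common discrete sketch of an Abel-type inversion of line-integrated bolometer chords is "onion peeling", shown here under the simplifying assumptions of circular symmetry and one chord tangent to each shell boundary. The geometry and names are illustrative, not the ASDEX Upgrade implementation:

```python
import math

def chord_matrix(bounds):
    # bounds[j] are shell boundary radii, ascending from 0; chord i has
    # impact parameter bounds[i], so L is upper triangular: L[i][j] is the
    # path length of chord i through shell j.
    n = len(bounds) - 1
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        p = bounds[i]
        for j in range(i, n):                # chord only crosses outer shells
            outer = math.sqrt(bounds[j + 1] ** 2 - p ** 2)
            inner = math.sqrt(max(bounds[j] ** 2 - p ** 2, 0.0))
            L[i][j] = 2.0 * (outer - inner)
    return L

def onion_peel(brightness, bounds):
    """Recover shell emissivities from line-integrated chord brightnesses
    by back-substitution, peeling from the outermost shell inwards."""
    L = chord_matrix(bounds)
    n = len(brightness)
    e = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(L[i][j] * e[j] for j in range(i + 1, n))
        e[i] = (brightness[i] - s) / L[i][i]
    return e
```

    The total radiated power then follows by integrating the recovered emissivity profile over the plasma volume; the paper's algorithm additionally corrects for poloidal asymmetries and the divertor contribution, which this sketch ignores.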

  1. Research on the Effectiveness of Different Estimation Algorithm on the Autonomous Orbit Determination of Lagrangian Navigation Constellation

    Directory of Open Access Journals (Sweden)

    Youtao Gao

    2016-01-01

    Full Text Available The accuracy of autonomous orbit determination of a Lagrangian navigation constellation will affect the navigation accuracy for deep space probes. Because of the special dynamical characteristics of Lagrangian navigation satellites, different estimation algorithms lead to very different autonomous orbit determination accuracies. We apply the extended Kalman filter and the fading-memory filter to determine the orbits of Lagrangian navigation satellites, and compare the autonomous orbit determination errors. The accuracy of autonomous orbit determination using the fading-memory filter improves by 50% over that obtained with the extended Kalman filter. We propose an integrated Kalman fading filter to smooth the autonomous orbit determination process and improve its accuracy. The square root extended Kalman filter is introduced to deal with the case of an inaccurate initial error variance matrix. The simulations show that the choice of estimation method greatly affects the accuracy of autonomous orbit determination.
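
    The fading-memory idea can be illustrated in one dimension: the predicted covariance is inflated by a factor alpha**2 >= 1, so older data are progressively discounted (alpha = 1 recovers the standard Kalman filter). The tuning values below are illustrative, not the paper's:

```python
def fading_kf(zs, q, r, alpha=1.05, x0=0.0, p0=1.0):
    """Scalar fading-memory Kalman filter for a constant state observed
    in noise. alpha > 1 keeps the gain from collapsing, so the filter
    tracks slow model errors that a standard KF would average away."""
    x, p = x0, p0
    est = []
    for z in zs:
        p = alpha ** 2 * p + q      # time update with memory fading
        k = p / (p + r)             # Kalman gain
        x = x + k * (z - x)         # measurement update
        p = (1 - k) * p
        est.append(x)
    return est
```

    In the orbit-determination setting the state is the satellite state vector and the dynamics are the restricted three-body equations, but the covariance inflation plays the same role: it limits how much the poorly modelled Lagrangian dynamics can bias the estimate through old measurements.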

  2. Climatic zonation and land suitability determination for saffron in Khorasan-Razavi province using data mining algorithms

    Directory of Open Access Journals (Sweden)

    mehdi Bashiri

    2017-12-01

    Full Text Available Yield prediction for agricultural crops plays an important role in export-import planning, purchase guarantees, pricing, secure profits and increasing agricultural productivity. Crop yield is affected by several parameters, especially climate. In this study, the saffron yield in the Khorasan-Razavi province was evaluated by different classification algorithms including artificial neural networks, regression models, local linear trees, decision trees, discriminant analysis, random forest, support vector machines and nearest neighbor analysis. These algorithms analyzed data for 20 years (1989-2009) including 11 climatological parameters. The results showed that only a few climatological parameters affect the saffron yield. The minimum, mean and maximum temperatures had the highest positive correlations, and the relative humidity at 6.5 h, sunny hours, relative humidity at 18.5 h, evaporation, relative humidity at 12.5 h and absolute humidity had the highest negative correlations with saffron cultivation areas, respectively. In addition, in the classification of saffron cultivation areas, discriminant analysis and the support vector machine had higher accuracies. The correlation between saffron cultivation area and saffron yield values was relatively high (r=0.38). The nearest neighbor analysis had the best prediction accuracy for classification of cultivation areas. For this algorithm the coefficients of determination were 1 and 0.944 for the training and testing stages, respectively. However, the algorithms' accuracy for prediction of crop yield from climatological parameters was low (average coefficients of determination of 0.48 and 0.05 for the training and testing stages). The best algorithm, i.e. nearest neighbor analysis, had coefficients of determination of 1 and 0.177 for saffron yield prediction. The results showed that climatological parameters and data mining algorithms can be used to classify cultivation areas. By this way it is possible

  3. Progressive multiple sequence alignments from triplets

    Directory of Open Access Journals (Sweden)

    Stadler Peter F

    2007-07-01

    Full Text Available Abstract Background The quality of progressive sequence alignments strongly depends on the accuracy of the individual pairwise alignment steps, since gaps that are introduced at one step cannot be removed at later aggregation steps. Adjacent insertions and deletions necessarily appear in arbitrary order in pairwise alignments and hence form an unavoidable source of errors. Results Here we present a modified variant of progressive sequence alignments that addresses both issues. Instead of pairwise alignments we use exact dynamic programming to align sequence or profile triples. This avoids a large fraction of the ambiguities arising in pairwise alignments. In the subsequent aggregation steps we follow the logic of the Neighbor-Net algorithm, which constructs a phylogenetic network by stepwise replacing triples by pairs instead of combining pairs to singletons. To this end the three-way alignments are subdivided into two partial alignments, at which stage all-gap columns are naturally removed. This alleviates the "once a gap, always a gap" problem of progressive alignment procedures. Conclusion The three-way Neighbor-Net based alignment program aln3nn is shown to compare favorably on both protein sequences and nucleic acid sequences to other progressive alignment tools. In the latter case one can easily include scoring terms that consider secondary structure features. Overall, the quality of the resulting alignments in general exceeds that of clustalw or other multiple alignment tools, even though our software does not include heuristics for context-dependent (mis)match scores.

  4. Multiple alignment analysis on phylogenetic tree of the spread of SARS epidemic using distance method

    Science.gov (United States)

    Amiroch, S.; Pradana, M. S.; Irawan, M. I.; Mukhlash, I.

    2017-09-01

    Multiple Alignment (MA) is a particularly important tool for studying the viral genome and determining the evolutionary process of a specific virus. Application of MA to the spread of the Severe Acute Respiratory Syndrome (SARS) epidemic is of particular interest because this virus epidemic spread so quickly a few years ago that it attracted medical attention in many countries. Although there has been a lot of software to process multiple sequences, the pairwise alignment used to build the MA is very important to consider. Previous research processed the MA with the Super Pairwise Alignment algorithm; in this study the Needleman-Wunsch dynamic programming algorithm, simulated in Matlab, is used instead. From the MA analysis we obtained the stable and unstable regions, which indicate the positions where mutations occur, the phylogenetic tree of the spread of the SARS epidemic constructed by the distance method, and the network topology of the mutations.
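
    The Needleman-Wunsch recurrence mentioned above fills a score table and traces back one optimal global alignment; the sketch below is in Python rather than the study's Matlab, and the scoring parameters are illustrative:

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Global alignment of sequences a and b by dynamic programming."""
    n, m = len(a), len(b)
    # F[i][j] = best score aligning a[:i] with b[:j].
    F = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        F[i][0] = i * gap
    for j in range(1, m + 1):
        F[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,   # (mis)match
                          F[i - 1][j] + gap,     # gap in b
                          F[i][j - 1] + gap)     # gap in a
    # Traceback recovers one optimal alignment.
    ai, bi, i, j = [], [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and F[i][j] == F[i - 1][j - 1] + \
                (match if a[i - 1] == b[j - 1] else mismatch):
            ai.append(a[i - 1]); bi.append(b[j - 1]); i -= 1; j -= 1
        elif i > 0 and F[i][j] == F[i - 1][j] + gap:
            ai.append(a[i - 1]); bi.append('-'); i -= 1
        else:
            ai.append('-'); bi.append(b[j - 1]); j -= 1
    return ''.join(reversed(ai)), ''.join(reversed(bi)), F[n][m]
```

    A progressive MA is then built by repeatedly applying this pairwise step, which is why the quality of the pairwise algorithm matters so much for the final phylogenetic analysis.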

  5. RNA structure alignment by a unit-vector approach.

    Science.gov (United States)

    Capriotti, Emidio; Marti-Renom, Marc A

    2008-08-15

    The recent discovery of tiny RNA molecules such as microRNAs and small interfering RNAs is transforming the view of RNA as a simple information transfer molecule. Similar to proteins, the native three-dimensional structure of RNA determines its biological activity. Therefore, classifying the current structural space is paramount for functionally annotating RNA molecules. The increasing number of RNA structures deposited in the PDB requires more accurate, automatic and benchmarked methods for RNA structure comparison. In this article, we introduce a new algorithm for RNA structure alignment based on a unit-vector approach. The algorithm has been implemented in the SARA program, which produces RNA structure pairwise alignments and their statistical significance. The SARA program has been implemented to be of general applicability, even when no secondary structure can be calculated from the RNA structures. A benchmark against the ARTS program using a set of 1275 non-redundant pairwise structure alignments results in approximately 6% extra alignments with at least 50% structurally superposed nucleotides and base pairs. A first attempt to perform RNA automatic functional annotation based on structure alignments indicates that SARA can correctly assign the deepest SCOR classification to >60% of the query structures. The SARA program is freely available through a World Wide Web server http://sgu.bioinfo.cipf.es/services/SARA/. Supplementary data are available at Bioinformatics online.
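
    SARA's actual scoring is not reproduced here; the following is a generic sketch of the unit-vector idea, comparing the directions of successive inter-residue vectors of two equally long chains. The measure is translation- and scale-invariant but, unlike a full structure aligner, assumes the chains are already rotationally superposed:

```python
import math

def unit_vectors(coords):
    # Directions between consecutive residues, normalized to length 1.
    vs = []
    for (x1, y1, z1), (x2, y2, z2) in zip(coords, coords[1:]):
        d = (x2 - x1, y2 - y1, z2 - z1)
        n = math.sqrt(sum(c * c for c in d))
        vs.append(tuple(c / n for c in d))
    return vs

def urms(coords_a, coords_b):
    """Unit-vector RMS distance between two equally long chains."""
    va, vb = unit_vectors(coords_a), unit_vectors(coords_b)
    return math.sqrt(sum(sum((p - q) ** 2 for p, q in zip(u, w))
                         for u, w in zip(va, vb)) / len(va))
```

    An identical or uniformly scaled copy scores 0, while chains that bend differently score higher, which is the property that makes unit-vector comparison useful for shape classification.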

  6. Attitude Determination Method by Fusing Single Antenna GPS and Low Cost MEMS Sensors Using Intelligent Kalman Filter Algorithm

    Directory of Open Access Journals (Sweden)

    Lei Wang

    2017-01-01

    Full Text Available To meet the cost and size demands of a micronavigation system, a combined attitude determination approach is proposed, based on a sensor fusion algorithm and an intelligent Kalman filter (IKF) using a low cost Micro-Electro-Mechanical System (MEMS) gyroscope, accelerometer, and magnetometer and a single antenna Global Positioning System (GPS). An effective calibration method is performed to compensate the effect of errors in the low cost MEMS Inertial Measurement Unit (IMU). Different control strategies for fusing the MEMS multisensors are designed. The yaw angle is estimated accurately by fusing gyroscope, accelerometer, and magnetometer data, even under GPS failure and sideslip situations. To achieve robustness against uncertain noise statistics, the gain scale of the IKF is adjusted by a fuzzy controller in the transition process and steady state to achieve faster convergence and accurate estimation. Experiments comparing different MEMS sensors and fusion algorithms are implemented to verify the validity of the proposed approach.
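
    The accelerometer/magnetometer part of such a fusion scheme commonly reduces to the static tilt-and-heading computation sketched below. These are standard textbook formulas, not necessarily the exact axis convention or filter structure used in the paper:

```python
import math

def accel_tilt(ax, ay, az):
    """Roll and pitch from a static accelerometer reading, using gravity
    as the vertical reference (device at rest, z-axis up gives (0, 0, 1))."""
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    return roll, pitch

def mag_yaw(mx, my, mz, roll, pitch):
    """Tilt-compensated yaw: rotate the magnetometer reading into the
    horizontal plane, then take the heading angle."""
    xh = mx * math.cos(pitch) + mz * math.sin(pitch)
    yh = (mx * math.sin(roll) * math.sin(pitch) + my * math.cos(roll)
          - mz * math.sin(roll) * math.cos(pitch))
    return math.atan2(-yh, xh)
```

    In the paper this yaw channel backs up the GPS-derived heading when GPS fails, and the gyroscope propagates attitude between these absolute fixes inside the Kalman filter.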

  7. Proposed algorithm for determining the delta intercept of a thermocouple psychrometer curve

    International Nuclear Information System (INIS)

    Kurzmack, M.A.

    1993-01-01

    The USGS Hydrologic Investigations Program is currently developing instrumentation to study the unsaturated zone at Yucca Mountain in Nevada. Surface-based boreholes up to 2,500 feet in depth will be drilled, and then instrumented in order to define the water potential field within the unsaturated zone. Thermocouple psychrometers will be used to monitor the in-situ water potential. An algorithm is proposed for simply and efficiently reducing a six-wire thermocouple psychrometer voltage output curve to a single value, the delta intercept. The algorithm identifies a plateau region in the psychrometer curve and extrapolates a linear regression back to the initial start of relaxation. When properly conditioned for the measurements being made, the algorithm yields reasonable results even with incomplete or noisy psychrometer curves over a 1 to 60 bar range.
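
    A minimal sketch of the plateau-plus-extrapolation idea: find the first near-flat window of the curve, regress over the plateau tail, and evaluate the fitted line at the start of relaxation. The window size, tolerance, and synthetic curve shape are illustrative assumptions, not the USGS conditioning values:

```python
def linfit(xs, ys):
    # Ordinary least-squares line fit; returns (intercept, slope).
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
             sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

def delta_intercept(t, v, window=4, tol=0.05):
    """Locate the first run of `window` samples whose spread is below tol
    (the plateau), fit a line over the plateau tail, and extrapolate it
    back to t = 0, taken here as the start of relaxation."""
    for i in range(len(v) - window + 1):
        seg = v[i:i + window]
        if max(seg) - min(seg) < tol:
            intercept, _ = linfit(t[i:], v[i:])
            return intercept
    raise ValueError("no plateau found")
```

    Reducing each six-wire curve to this single number is what makes unattended logging of many boreholes practical.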

  8. An Algorithm for Determining Minimal Reduced—Coverings of Acyclic Database Schemes

    Institute of Scientific and Technical Information of China (English)

    刘铁英; 叶新铭

    1996-01-01

    This paper reports an algorithm (DTV) for determining the minimal reduced covering of an acyclic database scheme over a specified subset of attributes. The output of this algorithm contains not only the minimum number of attributes but also the minimum number of partial relation schemes. The algorithm has complexity O(|N|·|E|²), where |N| is the number of attributes and |E| the number of relation schemes. It is also proved that for Berge, γ or β acyclic database schemes, the output of algorithm DTV maintains the acyclicity correspondence.

  9. Momentum bias determination in the tracker alignment and first differential t anti t cross section measurement at CMS

    Energy Technology Data Exchange (ETDEWEB)

    Enderle, Holger

    2012-01-15

    This thesis is prepared within the framework of the CMS experiment at the Large Hadron Collider. It is divided into a technical topic and an analysis. In the technical part, a method is developed to validate the alignment of the tracker geometry concerning biases in the momentum measurement. The method is based on the comparison of the measured momentum of isolated tracks and the corresponding energy deposited in the calorimeter. Comparing positively and negatively charged hadrons, the twist of the tracker is constrained with a precision of ({delta}{phi})/({delta}z)=12 ({mu}rad)/(m). The analysis deals with cross section measurements in events containing an isolated muon and jets. The complete dataset of proton-proton collisions at a centre-of-mass energy of 7 TeV taken in 2010 is investigated. This corresponds to an integrated luminosity of 35.9 pb{sup -1}. Cross sections including different physics processes with an isolated muon and jets in the final state are measured for different jet multiplicities (N{sub jets} {>=}1;2;3;4). With increasing jet multiplicity, the transition from a W {yields} l{nu} dominated to a strongly t anti t enriched phase space becomes evident. The inclusive cross section for t anti t production derived from the four jet sample is measured to be {sigma}=172{+-}15(stat.){+-}41(syst.){+-}7(lumi.) pb. Cross sections differentially in kinematic quantities of the muon, (d{sigma})/(d{sub PT}), (d{sigma})/(d{eta}) are measured as well and compared to theoretical predictions.

  10. Exact Solutions for Internuclear Vectors and Backbone Dihedral Angles from NH Residual Dipolar Couplings in Two Media, and their Application in a Systematic Search Algorithm for Determining Protein Backbone Structure

    International Nuclear Information System (INIS)

    Wang Lincong; Donald, Bruce Randall

    2004-01-01

    We have derived a quartic equation for computing the direction of an internuclear vector from residual dipolar couplings (RDCs) measured in two aligning media, and two simple trigonometric equations for computing the backbone (φ,ψ) angles from two backbone vectors in consecutive peptide planes. These equations make it possible to compute, exactly and in constant time, the backbone (φ,ψ) angles for a residue from RDCs in two media on any single backbone vector type. Building upon these exact solutions we have designed a novel algorithm for determining a protein backbone substructure consisting of α-helices and β-sheets. Our algorithm employs a systematic search technique to refine the conformation of both α-helices and β-sheets and to determine their orientations using exclusively the angular restraints from RDCs. The algorithm computes the backbone substructure employing very sparse distance restraints between pairs of α-helices and β-sheets refined by the systematic search. The algorithm has been demonstrated on the protein human ubiquitin using only backbone NH RDCs, plus twelve hydrogen bonds and four NOE distance restraints. Further, our results show that both the global orientations and the conformations of α-helices and β-strands can be determined with high accuracy using only two RDCs per residue. The algorithm requires, as its input, backbone resonance assignments, the identification of α-helices and β-sheets as well as sparse NOE distance and hydrogen bond restraints. Abbreviations: NMR - nuclear magnetic resonance; RDC - residual dipolar coupling; NOE - nuclear Overhauser effect; SVD - singular value decomposition; DFS - depth-first search; RMSD - root mean square deviation; POF - principal order frame; PDB - protein data bank; SA - simulated annealing; MD - molecular dynamics

  11. A localization algorithm of adaptively determining the ROI of the reference circle in image

    Science.gov (United States)

    Xu, Zeen; Zhang, Jun; Zhang, Daimeng; Liu, Xiaomao; Tian, Jinwen

    2018-03-01

    Aiming at accurately positioning detection probes underwater, this paper proposes a method based on computer vision which can effectively solve this problem. The method works as follows: first, because the shape of a heat tube appears as a circle in the image, we find a circle in the image whose physical location is well known and set it as the reference circle. Second, we calculate the pixel offset between the reference circle and the probes in the picture, and adjust the steering gear according to this offset. As a result, we can accurately measure the physical distance between the probes and the heat tubes under test, and thus know the precise location of the probes underwater. However, choosing the reference circle in the image is a difficult problem. In this paper, we propose an algorithm that can adaptively confirm the area of the reference circle. In this area there will be only one circle, and that circle is the reference circle. The test results show that the accuracy of extracting the reference circle from the whole picture without using an ROI (region of interest) for the reference circle is only 58.76%, while that of the proposed algorithm is 95.88%. The experimental results indicate that the proposed algorithm can effectively improve the efficiency of tube detection.
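
    Once the reference circle is found, the offset computation reduces to a scale calibration: a circle of known physical size yields a millimetres-per-pixel factor, which converts the pixel offset between circle centre and probe into a physical distance. The coordinates and sizes below are hypothetical, purely for illustration:

```python
def mm_per_pixel(known_diameter_mm, diameter_px):
    # Image scale from a reference circle of known physical diameter.
    return known_diameter_mm / diameter_px

def probe_offset_mm(ref_center_px, probe_px, scale):
    """Physical (x, y) offset of the probe from the reference circle
    centre, given the pixel positions of both and the image scale."""
    return ((probe_px[0] - ref_center_px[0]) * scale,
            (probe_px[1] - ref_center_px[1]) * scale)
```

    The steering gear is then commanded to null this offset, which is why a reliable, unique reference circle (the point of the paper's adaptive ROI algorithm) matters so much.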

  12. A firefly algorithm approach for determining the parameters characteristics of solar cell

    Directory of Open Access Journals (Sweden)

    Mohamed LOUZAZNI

    2017-12-01

    Full Text Available A metaheuristic algorithm is proposed to describe the characteristics of a solar cell. The I-V characteristic of a solar cell is doubly nonlinear, through the exponential term and the five parameters. Since these parameters are unknown, it is important to estimate them for accurate modelling of the I-V and P-V curves of the solar cell. The firefly algorithm has attracted attention for optimizing nonlinear and complex systems; it is based on the flashing patterns and behaviour of a firefly swarm. The proposed constrained objective function is derived from the current-voltage curve. The experimental current and voltage of a commercial RTC France mono-crystalline silicon solar cell (single-diode model, at 33°C and 1000 W/m²) are used to estimate the unknown parameters. Statistical errors are calculated to verify the accuracy of the results. The obtained results are compared with experimental data and other reported metaheuristic optimization algorithms. In the end, the theoretical results confirm the validity and reliability of the firefly algorithm in estimating the optimal parameters of the solar cell.
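
    A minimal firefly optimizer of the kind described, here minimizing a simple quadratic cost in place of the solar-cell objective; the population size, γ, and damping schedule are illustrative assumptions, not the paper's settings:

```python
import math
import random

def firefly_minimize(f, dim, bounds, n=15, iters=60,
                     beta0=1.0, gamma=0.05, alpha=0.2, seed=3):
    """Minimal firefly algorithm: dimmer fireflies move toward brighter
    (lower-cost) ones, with attractiveness decaying as exp(-gamma * r^2)."""
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    for _ in range(iters):
        cost = [f(x) for x in X]
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:        # j is brighter: i moves toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    X[i] = [min(hi, max(lo,
                            xi + beta * (xj - xi) + alpha * (rng.random() - 0.5)))
                            for xi, xj in zip(X[i], X[j])]
                    cost[i] = f(X[i])
        alpha *= 0.97                        # gradually damp the random walk
    best = min(X, key=f)
    return best, f(best)
```

    For the solar-cell problem, f would be the sum of squared residuals between the measured I-V points and the single-diode model current, with the five model parameters as the search dimensions.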

  13. Triangular Alignment (TAME). A Tensor-based Approach for Higher-order Network Alignment

    Energy Technology Data Exchange (ETDEWEB)

    Mohammadi, Shahin [Purdue Univ., West Lafayette, IN (United States); Gleich, David F. [Purdue Univ., West Lafayette, IN (United States); Kolda, Tamara G. [Sandia National Laboratories (SNL-CA), Livermore, CA (United States); Grama, Ananth [Purdue Univ., West Lafayette, IN (United States)

    2015-11-01

    Network alignment is an important tool with extensive applications in comparative interactomics. Traditional approaches aim to simultaneously maximize the number of conserved edges and the underlying similarity of aligned entities. We propose a novel formulation of the network alignment problem that extends topological similarity to higher-order structures and provide a new objective function that maximizes the number of aligned substructures. This objective function corresponds to an integer programming problem, which is NP-hard. Consequently, we approximate this objective function as a surrogate function whose maximization results in a tensor eigenvalue problem. Based on this formulation, we present an algorithm called Triangular AlignMEnt (TAME), which attempts to maximize the number of aligned triangles across networks. We focus on alignment of triangles because of their enrichment in complex networks; however, our formulation and resulting algorithms can be applied to general motifs. Using a case study on the NAPABench dataset, we show that TAME is capable of producing alignments with up to 99% accuracy in terms of aligned nodes. We further evaluate our method by aligning yeast and human interactomes. Our results indicate that TAME outperforms the state-of-art alignment methods both in terms of biological and topological quality of the alignments.

  14. A cross-species alignment tool (CAT)

    DEFF Research Database (Denmark)

    Li, Heng; Guan, Liang; Liu, Tao

    2007-01-01

    BACKGROUND: The two main sorts of automatic gene annotation frameworks are ab initio and alignment-based, the latter splitting into two sub-groups. The first group is used for intra-species alignments, among which are successful ones with high specificity and speed. The other group contains more sensitive methods which are usually applied in aligning inter-species sequences. RESULTS: Here we present a new algorithm called CAT (for Cross-species Alignment Tool). It is designed to align mRNA sequences to mammalian-sized genomes. CAT is implemented using C scripts and is freely available on the web at http://xat.sourceforge.net/. CONCLUSIONS: Examined from different angles, CAT outperforms other extant alignment tools. Tested against all available mouse-human and zebrafish-human orthologs, we demonstrate that CAT combines the specificity and speed of the best intra-species algorithms, like BLAT...

  15. BinAligner: a heuristic method to align biological networks.

    Science.gov (United States)

    Yang, Jialiang; Li, Jun; Grünewald, Stefan; Wan, Xiu-Feng

    2013-01-01

The advances in high-throughput omics technologies have made it possible to characterize molecular interactions within and across various species. Alignment and comparison of molecular networks across species help detect orthologs and conserved functional modules and provide insights into the evolutionary relationships of the compared species. However, such analyses are not trivial due to the complexity of networks and the high computational cost. Here we develop a mixed global and local algorithm, BinAligner, for network alignment. Based on the hypotheses that the similarity between two vertices across networks is context dependent and that information from the edges and the structures of subnetworks can be more informative than vertices alone, two scoring schemes, 1-neighborhood subnetwork and graphlet, were introduced to derive the scoring matrices between networks, in addition to the commonly used vertex-based scoring scheme. The alignment problem is then formulated as an assignment problem, which is solved by a combinatorial optimization algorithm such as the Hungarian method. The proposed algorithm was applied and validated by aligning the protein-protein interaction network of Kaposi's sarcoma associated herpesvirus (KSHV) with that of varicella zoster virus (VZV). Interestingly, we identified several putative functional orthologous proteins with similar functions but very low sequence similarity between the two viruses. For example, KSHV open reading frame 56 (ORF56) and VZV ORF55 are helicase-primase subunits with sequence identity 14.6%, and KSHV ORF75 and VZV ORF44 are tegument proteins with sequence identity 15.3%. These functional pairs cannot be identified if one restricts the alignment to orthologous protein pairs. In addition, BinAligner identified a conserved pathway between the two viruses, which consists of 7 orthologous protein pairs connected by conserved links. This pathway might be crucial for virus packing and
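
The assignment step above can be sketched with a toy score matrix. This is a minimal illustration (hypothetical similarity values, not data from the paper): for tiny matrices, a brute-force search over permutations finds the same optimal one-to-one mapping that the Hungarian method computes in polynomial time.

```python
from itertools import permutations

def best_assignment(score):
    """Exhaustively find the vertex mapping that maximizes total similarity,
    i.e. the optimum the Hungarian method finds in polynomial time
    (brute force is acceptable only for tiny matrices)."""
    n = len(score)
    best, best_perm = float("-inf"), None
    for perm in permutations(range(n)):
        total = sum(score[i][perm[i]] for i in range(n))
        if total > best:
            best, best_perm = total, perm
    return best, best_perm

# Hypothetical 3x3 similarity matrix between vertices of two small networks
S = [[0.9, 0.1, 0.2],
     [0.2, 0.8, 0.3],
     [0.1, 0.4, 0.7]]
score, mapping = best_assignment(S)   # diagonal mapping wins here
```

In practice one would use a polynomial-time solver (e.g. an existing Hungarian-method implementation) on the combined vertex/subnetwork/graphlet scoring matrix.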

  16. Attitude Determination Algorithm based on Relative Quaternion Geometry of Velocity Incremental Vectors for Cost Efficient AHRS Design

    Science.gov (United States)

    Lee, Byungjin; Lee, Young Jae; Sung, Sangkyung

    2018-05-01

A novel attitude determination method is investigated that is computationally efficient and implementable on low-cost sensor and embedded platforms. A recent result on attitude reference system design is adapted to develop a three-dimensional attitude determination algorithm based on relative velocity incremental measurements. For this, velocity incremental vectors, computed respectively from the INS and GPS at different update rates, are compared to generate the filter measurement for attitude estimation. In the quaternion-based Kalman filter configuration, an Euler-like attitude perturbation angle is introduced to reduce the number of filter states. Furthermore, assuming a small-angle approximation between attitude update periods, it is shown that the reduced-order filter greatly simplifies the propagation processes. For performance verification, both simulation and experimental studies were completed. A low-cost MEMS IMU and GPS receiver were employed for system integration, and comparison with the true trajectory or a high-grade navigation system demonstrates the performance of the proposed algorithm.
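
The small-angle quaternion correction underlying such filters can be sketched as follows. This is a generic illustration of the technique, not the authors' filter: a small rotation vector dtheta (e.g. an estimated perturbation angle) is folded into the attitude quaternion via the first-order approximation dq ≈ [1, dtheta/2].

```python
import math

def quat_mul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def small_angle_update(q, dtheta):
    """Apply a small rotation vector (rad) to attitude quaternion q using
    the approximation dq ~ [1, dtheta/2], then renormalize."""
    dq = (1.0, 0.5 * dtheta[0], 0.5 * dtheta[1], 0.5 * dtheta[2])
    w, x, y, z = quat_mul(q, dq)
    n = math.sqrt(w*w + x*x + y*y + z*z)
    return (w/n, x/n, y/n, z/n)

q0 = (1.0, 0.0, 0.0, 0.0)                        # level attitude
q1 = small_angle_update(q0, (0.0, 0.0, 0.001))   # 1 mrad yaw correction
```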

  17. Nova laser alignment control system

    International Nuclear Information System (INIS)

    Van Arsdall, P.J.; Holloway, F.W.; McGuigan, D.L.; Shelton, R.T.

    1984-01-01

Alignment of the Nova laser requires control of hundreds of optical components in the ten beam paths. Extensive application of computer technology makes daily alignment practical. The control system is designed in a manner which provides both centralized and local manual operator controls integrated with automatic closed-loop alignment. Menu-driven operator consoles using high-resolution color graphics displays overlaid with transparent touch panels allow laser personnel to interact efficiently with the computer system. Automatic alignment is accomplished by using image analysis techniques to determine beam reference points from video images acquired along the laser chain. A major goal of the design is to contribute substantially to rapid experimental turnaround and consistent alignment results. This paper describes the computer-based control structure and the software methods developed for aligning this large laser system.

  18. An algorithm to include the bremsstrahlung component in the determination of the absorbed dose in electron beams

    Energy Technology Data Exchange (ETDEWEB)

    Klevenhagen, S C [The Royal London Hospital, London (United Kingdom). Medical Physics Dept.

    1996-08-01

    Currently used dosimetry protocols for absolute dose determination of electron beams from accelerators in radiation therapy do not account for the effect of the bremsstrahlung contamination of the beam. This results in slightly erroneous doses calculated from ionization chamber measurements. In this report the deviation is calculated and an improved algorithm, which accounts for the effect of the bremsstrahlung component of the beam, is suggested. (author). 14 refs, 2 figs, 1 tab.

  19. Towards a robust algorithm to determine topological domains from colocalization data

    Directory of Open Access Journals (Sweden)

    Alexander P. Moscalets

    2015-09-01

Full Text Available One of the most important tasks in understanding the complex spatial organization of the genome consists in extracting information about this spatial organization and about the function and structure of chromatin topological domains from existing experimental data, in particular from genome colocalization (Hi-C) matrices. Here we present an algorithm that reveals the underlying hierarchical domain structure of a polymer conformation by analyzing the modularity of colocalization matrices. We also test this algorithm on several model polymer structures: equilibrium globules, random fractal globules and regular fractal (Peano) conformations. We define what we call a spectrum of cluster borders, and show that these spectra behave strikingly differently for equilibrium and fractal conformations, allowing us to suggest an additional criterion to identify fractal polymer conformations.
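
A minimal sketch of the border-spectrum idea (toy contact matrix, not real Hi-C data): score every candidate split of the colocalization matrix by the fraction of contacts crossing it; a deep minimum marks a domain border.

```python
def border_spectrum(M):
    """For each split position k of a symmetric contact matrix, return the
    fraction of contacts crossing the split; low values suggest a domain
    border between the two blocks."""
    n = len(M)
    scores = []
    for k in range(1, n):
        cross = sum(M[i][j] for i in range(k) for j in range(k, n))
        within = (sum(M[i][j] for i in range(k) for j in range(k))
                  + sum(M[i][j] for i in range(k, n) for j in range(k, n)))
        scores.append(cross / (within + cross))
    return scores

# Hypothetical 4x4 matrix with two clear 2x2 "domains"
M = [[5, 4, 0, 0],
     [4, 5, 1, 0],
     [0, 1, 5, 4],
     [0, 0, 4, 5]]
spec = border_spectrum(M)
best = min(range(len(spec)), key=spec.__getitem__) + 1  # border after row `best`
```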

  20. Study of high speed complex number algorithms. [for determining antenna for field radiation patterns

    Science.gov (United States)

    Heisler, R.

    1981-01-01

A method of evaluating the radiation integral on the curved surface of a reflecting antenna is presented. A three-dimensional Fourier transform approach is used to generate a two-dimensional radiation cross-section along a planar cut at any angle phi through the far-field pattern. Salient to the method is an algorithm for evaluating a subset of the total three-dimensional discrete Fourier transform results. The subset elements are selectively evaluated to yield data along a geometric plane of constant phi. The algorithm is extremely efficient, so that computation of the induced surface currents via the physical optics approximation dominates the computer time required to compute a radiation pattern. Application to paraboloid reflectors with off-focus feeds is presented, but the method is easily extended to offset antenna systems and reflectors of arbitrary shape. Numerical results were computed for both gain and phase and are compared with other published work.

  1. A robust Hough transform algorithm for determining the radiation centers of circular and rectangular fields with subpixel accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Du Weiliang; Yang, James [Department of Radiation Physics, University of Texas M D Anderson Cancer Center, 1515 Holcombe Blvd, Unit 94, Houston, TX 77030 (United States)], E-mail: wdu@mdanderson.org

    2009-02-07

Uncertainty in localizing the radiation field center is among the major components that contribute to the overall positional error and thus must be minimized. In this study, we developed a Hough transform (HT)-based computer algorithm to localize the radiation center of a circular or rectangular field with subpixel accuracy. We found that the HT method detected the centers of the test circular fields with an absolute error of 0.037 ± 0.019 pixels. On a typical electronic portal imager with 0.5 mm image resolution, this mean detection error was translated to 0.02 mm, which was much finer than the image resolution. It is worth noting that the subpixel accuracy described here does not include experimental uncertainties such as linac mechanical instability or room laser inaccuracy. The HT method was more accurate and more robust to image noise and artifacts than the traditional center-of-mass method. Application of the HT method in Winston-Lutz tests was demonstrated to measure the ball-radiation center alignment with subpixel accuracy. Finally, the method was applied to quantitative evaluation of the radiation center wobble during collimator rotation.
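
A simplified illustration of Hough-style center localization (synthetic edge points and an assumed known radius; the paper's actual algorithm differs in detail): candidate centers vote on a coarse pixel grid, and a centroid step refines the estimate to subpixel precision.

```python
import math

def hough_circle_center(points, r, grid, tol=0.5):
    """Coarse Hough vote: each candidate center scores the number of edge
    points lying within `tol` of radius r. The centroid of the edge points
    then refines the estimate to subpixel precision (valid for a full
    circle, whose centroid coincides with its center)."""
    best, best_c = -1, None
    for cx in grid:
        for cy in grid:
            votes = sum(abs(math.hypot(x - cx, y - cy) - r) < tol
                        for x, y in points)
            if votes > best:
                best, best_c = votes, (cx, cy)
    mx = sum(p[0] for p in points) / len(points)   # subpixel refinement
    my = sum(p[1] for p in points) / len(points)
    return best_c, (mx, my)

# Synthetic field edge: circle of radius 20 centered at (50.3, 47.8)
pts = [(50.3 + 20 * math.cos(t / 100 * 2 * math.pi),
        47.8 + 20 * math.sin(t / 100 * 2 * math.pi)) for t in range(100)]
coarse, fine = hough_circle_center(pts, 20, range(40, 61))
```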

  2. Selection and determination of beam weights based on genetic algorithms for conformal radiotherapy treatment planning

    International Nuclear Information System (INIS)

    Xingen Wu; Zunliang Wang

    2000-01-01

A genetic algorithm has been used to optimize the selection of beam weights for external beam three-dimensional conformal radiotherapy treatment planning. A fitness function is defined which includes a difference term, achieving a least-squares fit to doses at preselected points in the planning target volume, and a penalty term constraining the maximum allowable doses delivered to critical organs. The balance between dose uniformity within the target volume and the dose constraints on the critical structures can be adjusted by varying the beam weight variables in the fitness function. A floating-point encoding scheme and several operators, such as uniform crossover, arithmetical crossover, geometrical crossover, Gaussian mutation and uniform mutation, have been used to evolve the population. Three different cases were used to verify the correctness of the algorithm, and quality assessments based on dose-volume histograms and three-dimensional dose distributions are given. The results indicate that the genetic algorithm presented here has considerable potential. (author)
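
A minimal sketch of such a fitness function and GA loop (toy dose-contribution matrix, invented limits, and only uniform crossover plus Gaussian mutation; the paper also uses arithmetical and geometrical crossover):

```python
import random

random.seed(1)

# Hypothetical per-beam dose contributions (rows: 3 beams, cols: 4 points).
# The first three points lie in the target, the last in a critical organ.
D = [[0.6, 0.5, 0.4, 0.30],
     [0.4, 0.5, 0.6, 0.10],
     [0.5, 0.6, 0.5, 0.05]]
target = [1.0, 1.0, 1.0]     # prescribed dose at target points
organ_limit = 0.4            # max allowed dose at the organ point

def fitness(w):
    """Least-squares misfit at target points plus a penalty for
    exceeding the critical-organ dose limit (lower is better)."""
    doses = [sum(w[b] * D[b][p] for b in range(3)) for p in range(4)]
    sq = sum((doses[p] - target[p]) ** 2 for p in range(3))
    penalty = max(0.0, doses[3] - organ_limit) ** 2
    return sq + 10.0 * penalty

def evolve(pop=30, gens=200):
    P = [[random.uniform(0, 2) for _ in range(3)] for _ in range(pop)]
    for _ in range(gens):
        P.sort(key=fitness)
        P = P[:pop // 2]                                  # truncation selection
        while len(P) < pop:
            a, b = random.sample(P[:5], 2)
            child = [random.choice(g) for g in zip(a, b)]  # uniform crossover
            i = random.randrange(3)
            child[i] = max(0.0, child[i] + random.gauss(0, 0.05))  # mutation
            P.append(child)
    return min(P, key=fitness)

w = evolve()   # should approach the exact optimum w = (1, 1, 0)
```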

  3. DETERMINATION ALGORITHM OF OPTIMAL GEOMETRICAL PARAMETERS FOR COMPONENTS OF FREIGHT CARS ON THE BASIS OF GENERALIZED MATHEMATICAL MODELS

    Directory of Open Access Journals (Sweden)

    O. V. Fomin

    2013-10-01

Full Text Available Purpose. Presentation of the features, and an example of the use, of the proposed algorithm for determining optimal geometrical parameters of freight car components on the basis of generalized mathematical models, realized using a computer. Methodology. The developed approach to the search for optimal geometrical parameters can be described as the determination of the optimal decision from a selected set of possible variants. Findings. The presented application example of the proposed algorithm proved its operational capacity and efficiency of use. Originality. The determination procedure of optimal geometrical parameters for freight car components on the basis of generalized mathematical models was formalized in the paper. Practical value. Practical introduction of the research results for universal open cars allows one to reduce the mass of their design and accordingly to increase the carrying capacity by almost 100 kg, with improved strength characteristics. Taking into account the size of the car fleet, this will provide a considerable economic effect in production and operation. The proposed approach relies on widely distributed software packages (for example, Microsoft Excel) that are used by the technical services of most enterprises, and does not require additional capital investment (acquisition of specialized programs and corresponding staff training). This confirms the correctness of the research direction. The proposed algorithm can be used for the solution of other optimization tasks on the basis of generalized mathematical models.
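
The "optimal decision from a selected set of possible variants" search can be illustrated with a toy exhaustive search, the kind of computation a spreadsheet model performs. All formulas and numbers below are illustrative, not the paper's models: minimize a mass proxy over discrete design variants subject to a strength constraint.

```python
from itertools import product

def best_variant(thicknesses, widths, strength_req=25.0):
    """Exhaustively evaluate discrete design variants and keep the lightest
    one that meets the strength requirement. The mass and strength formulas
    are hypothetical placeholders for the generalized models."""
    best = None
    for t, w in product(thicknesses, widths):
        mass = 7.85 * t * w              # toy mass proxy per unit length
        strength = 900.0 * t * w ** 0.5  # toy strength proxy
        if strength >= strength_req and (best is None or mass < best[0]):
            best = (mass, t, w)
    return best

mass, t, w = best_variant([0.004, 0.005, 0.006], [50, 60, 70])
```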

  4. Robust and Efficient Parametric Face Alignment

    NARCIS (Netherlands)

    Tzimiropoulos, Georgios; Zafeiriou, Stefanos; Pantic, Maja

    2011-01-01

    We propose a correlation-based approach to parametric object alignment particularly suitable for face analysis applications which require efficiency and robustness against occlusions and illumination changes. Our algorithm registers two images by iteratively maximizing their correlation coefficient
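
A 1-D analogue of correlation-coefficient alignment (illustrative signal values, not the paper's parametric image model): slide the template over the signal and keep the shift with the highest Pearson correlation.

```python
def corrcoef(u, v):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    du = sum((a - mu) ** 2 for a in u) ** 0.5
    dv = sum((b - mv) ** 2 for b in v) ** 0.5
    return num / (du * dv)

def register_1d(signal, template, max_shift):
    """Return the shift of `template` inside `signal` that maximizes the
    correlation coefficient (a 1-D analogue of correlation-based
    parametric alignment)."""
    return max(range(max_shift + 1),
               key=lambda s: corrcoef(signal[s:s + len(template)], template))

sig = [0, 0, 0, 1, 3, 7, 3, 1, 0, 0, 0, 0]
tmpl = [1, 3, 7, 3, 1]
shift = register_1d(sig, tmpl, 7)   # template matches best at offset 3
```

Because the correlation coefficient is invariant to gain and offset, this criterion is robust to global illumination changes, which motivates its use for face alignment.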

  5. Alignment of Sexuality Education with Self Determination for People with Significant Disabilities: A Review of Research and Future Directions

    Science.gov (United States)

    Travers, Jason; Tincani, Matt; Whitby, Peggy Schaefer; Boutot, E. Amanda

    2014-01-01

    Sexual development is a complex but vital part of the human experience. People with significant disabilities are not excluded from this principle, but often may be prevented from receiving high-quality and comprehensive instruction necessary for a healthy sexual life. The functional model of self-determination emphasizes increasing knowledge,…

  6. Automated and Adaptable Quantification of Cellular Alignment from Microscopic Images for Tissue Engineering Applications

    Science.gov (United States)

    Xu, Feng; Beyazoglu, Turker; Hefner, Evan; Gurkan, Umut Atakan

    2011-01-01

    Cellular alignment plays a critical role in functional, physical, and biological characteristics of many tissue types, such as muscle, tendon, nerve, and cornea. Current efforts toward regeneration of these tissues include replicating the cellular microenvironment by developing biomaterials that facilitate cellular alignment. To assess the functional effectiveness of the engineered microenvironments, one essential criterion is quantification of cellular alignment. Therefore, there is a need for rapid, accurate, and adaptable methodologies to quantify cellular alignment for tissue engineering applications. To address this need, we developed an automated method, binarization-based extraction of alignment score (BEAS), to determine cell orientation distribution in a wide variety of microscopic images. This method combines a sequenced application of median and band-pass filters, locally adaptive thresholding approaches and image processing techniques. Cellular alignment score is obtained by applying a robust scoring algorithm to the orientation distribution. We validated the BEAS method by comparing the results with the existing approaches reported in literature (i.e., manual, radial fast Fourier transform-radial sum, and gradient based approaches). Validation results indicated that the BEAS method resulted in statistically comparable alignment scores with the manual method (coefficient of determination R2=0.92). Therefore, the BEAS method introduced in this study could enable accurate, convenient, and adaptable evaluation of engineered tissue constructs and biomaterials in terms of cellular alignment and organization. PMID:21370940
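
One common way to reduce an orientation distribution to a single alignment score (a generic circular-statistics sketch, not the exact BEAS scoring algorithm) is the mean resultant length of the doubled angles; doubling makes 0° and 180° equivalent, as cell orientations are axial.

```python
import math

def alignment_score(angles_deg):
    """Mean resultant length of doubled orientation angles: 1.0 for
    perfectly aligned orientations, near 0.0 for an isotropic
    distribution."""
    c = sum(math.cos(2 * math.radians(a)) for a in angles_deg)
    s = sum(math.sin(2 * math.radians(a)) for a in angles_deg)
    return math.hypot(c, s) / len(angles_deg)

aligned = [88, 90, 92, 89, 91]   # tightly clustered orientations
random_ = [0, 45, 90, 135]       # evenly spread orientations
```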

  7. SATe-II: very fast and accurate simultaneous estimation of multiple sequence alignments and phylogenetic trees.

    Science.gov (United States)

    Liu, Kevin; Warnow, Tandy J; Holder, Mark T; Nelesen, Serita M; Yu, Jiaye; Stamatakis, Alexandros P; Linder, C Randal

    2012-01-01

    Highly accurate estimation of phylogenetic trees for large data sets is difficult, in part because multiple sequence alignments must be accurate for phylogeny estimation methods to be accurate. Coestimation of alignments and trees has been attempted but currently only SATé estimates reasonably accurate trees and alignments for large data sets in practical time frames (Liu K., Raghavan S., Nelesen S., Linder C.R., Warnow T. 2009b. Rapid and accurate large-scale coestimation of sequence alignments and phylogenetic trees. Science. 324:1561-1564). Here, we present a modification to the original SATé algorithm that improves upon SATé (which we now call SATé-I) in terms of speed and of phylogenetic and alignment accuracy. SATé-II uses a different divide-and-conquer strategy than SATé-I and so produces smaller more closely related subsets than SATé-I; as a result, SATé-II produces more accurate alignments and trees, can analyze larger data sets, and runs more efficiently than SATé-I. Generally, SATé is a metamethod that takes an existing multiple sequence alignment method as an input parameter and boosts the quality of that alignment method. SATé-II-boosted alignment methods are significantly more accurate than their unboosted versions, and trees based upon these improved alignments are more accurate than trees based upon the original alignments. Because SATé-I used maximum likelihood (ML) methods that treat gaps as missing data to estimate trees and because we found a correlation between the quality of tree/alignment pairs and ML scores, we explored the degree to which SATé's performance depends on using ML with gaps treated as missing data to determine the best tree/alignment pair. We present two lines of evidence that using ML with gaps treated as missing data to optimize the alignment and tree produces very poor results. First, we show that the optimization problem where a set of unaligned DNA sequences is given and the output is the tree and alignment of

  8. Tailoring Care to Vulnerable Populations by Incorporating Social Determinants of Health: the Veterans Health Administration’s “Homeless Patient Aligned Care Team” Program

    Science.gov (United States)

    Johnson, Erin E.; Aiello, Riccardo; Kane, Vincent; Pape, Lisa

    2016-01-01

Introduction Although the clinical consequences of homelessness are well described, less is known about the role of health care systems in improving clinical and social outcomes for the homeless. We described the national implementation of a “homeless medical home” initiative in the Veterans Health Administration (VHA) and correlated patient health outcomes with characteristics of high-performing sites. Methods We conducted an observational study of 33 VHA facilities with homeless medical homes and patient-aligned care teams that served more than 14,000 patients. We correlated site-specific health care performance data for the 3,543 homeless veterans enrolled in the program from October 2013 through March 2014, including those receiving ambulatory or acute health care services during the 6 months prior to enrollment in our study and 6 months post-enrollment, with corresponding survey data on the Homeless Patient Aligned Care Team (H-PACT) program implementation. We defined high performance as high rates of ambulatory care and reduced use of acute care services. Results More than 96% of VHA patients enrolled in these programs were concurrently receiving VHA homeless services. Of the 33 sites studied, 82% provided hygiene care (on-site showers, hygiene kits, and laundry), 76% provided transportation, and 55% had an on-site clothes pantry; 42% had a food pantry and provided on-site meals or other food assistance. Six-month patterns of acute-care use pre-enrollment and post-enrollment for 3,543 consecutively enrolled patients showed a 19.0% reduction in emergency department use and a 34.7% reduction in hospitalizations. Three features were significantly associated with high performance: 1) higher staffing ratios than other sites, 2) integration of social supports and social services into clinical care, and 3) outreach to and integration with community agencies. Conclusion Integrating social determinants of health into clinical care can be effective for high

  9. Aligned Layers of Silver Nano-Fibers

    Directory of Open Access Journals (Sweden)

    Andrii B. Golovin

    2012-02-01

Full Text Available We describe new dichroic polarizers made by ordering silver nano-fibers into aligned layers. The aligned layers consist of nano-fibers and self-assembled molecular aggregates of lyotropic liquid crystals. Unidirectional alignment of the layers is achieved by means of mechanical shearing. Aligned layers of silver nano-fibers are partially transparent to linearly polarized electromagnetic radiation. The unidirectional alignment and density of the silver nano-fibers determine the degree of polarization of the transmitted light. Aligned layers of silver nano-fibers might be used in optics, microwave applications, and organic electronics.
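
The degree of polarization of a dichroic polarizer is conventionally computed from the two principal transmittances; the one-liner below uses hypothetical transmittance values, not measurements from the paper.

```python
def polarization_degree(t_parallel, t_perp):
    """Degree of polarization of transmitted light from the transmittances
    measured along and across the nano-fiber alignment direction:
    P = (T_par - T_perp) / (T_par + T_perp)."""
    return (t_parallel - t_perp) / (t_parallel + t_perp)

dop = polarization_degree(0.45, 0.05)   # hypothetical transmittances
```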

  10. Multiple Whole Genome Alignments Without a Reference Organism

    Energy Technology Data Exchange (ETDEWEB)

    Dubchak, Inna; Poliakov, Alexander; Kislyuk, Andrey; Brudno, Michael

    2009-01-16

Multiple sequence alignments have become one of the most commonly used resources in genomics research. Most algorithms for multiple alignment of whole genomes rely either on a reference genome, against which all of the other sequences are laid out, or require a one-to-one mapping between the nucleotides of the genomes, preventing the alignment of recently duplicated regions. Both approaches have drawbacks for whole-genome comparisons. In this paper we present a novel symmetric alignment algorithm. The resulting alignments not only represent all of the genomes equally well, but also include all relevant duplications that occurred since the divergence from the last common ancestor. Our algorithm, implemented as a part of the VISTA Genome Pipeline (VGP), was used to align seven vertebrate and six Drosophila genomes. The resulting whole-genome alignments demonstrate a higher sensitivity and specificity than the pairwise alignments previously available through the VGP and have higher exon alignment accuracy than comparable public whole-genome alignments. Of the multiple alignment methods tested, ours performed the best at aligning genes from multigene families, perhaps the most challenging test for whole-genome alignments. Our whole-genome multiple alignments are available through the VISTA Browser at http://genome.lbl.gov/vista/index.shtml.

  11. Differential pulse adsorptive stripping voltammetric determination of nanomolar levels of atorvastatin calcium in pharmaceutical and biological samples using a vertically aligned carbon nanotube/graphene oxide electrode.

    Science.gov (United States)

    Silva, Tiago Almeida; Zanin, Hudson; Vicentini, Fernando Campanhã; Corat, Evaldo José; Fatibello-Filho, Orlando

    2014-06-07

A novel vertically aligned carbon nanotube/graphene oxide (VACNT-GO) electrode is proposed, and its ability to determine atorvastatin calcium (ATOR) in pharmaceutical and biological samples by differential pulse adsorptive stripping voltammetry (DPAdSV) is evaluated. VACNT films were prepared on a Ti substrate by a microwave plasma chemical vapour deposition method and then treated with oxygen plasma to produce the VACNT-GO electrode. The oxygen plasma treatment exfoliates the carbon nanotube tips, exposing graphene foils and inserting oxygen functional groups; these effects improved the VACNT wettability (super-hydrophobic), which is crucial for its electrochemical application. The electrochemical behaviour of ATOR on the VACNT-GO electrode was studied by cyclic voltammetry, which showed that it underwent an irreversible oxidation process at a potential of +1.08 V at pH 2.0 (0.2 mol L(-1) phosphate buffer solution). By applying DPAdSV under optimized experimental conditions, the analytical curve was found to be linear in the ATOR concentration range of 90 to 3.81 × 10(3) nmol L(-1) with a limit of detection of 9.4 nmol L(-1). The proposed DPAdSV method was successfully applied to the determination of ATOR in pharmaceutical and biological samples, and the results were in close agreement with those obtained by a comparative spectrophotometric method at a confidence level of 95%.
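
The calibration arithmetic behind such a linear range and detection limit can be sketched as follows. The calibration points and blank noise below are hypothetical (chosen so the limit of detection lands near the reported 9.4 nmol L(-1)); they are not the paper's data.

```python
def linfit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

# Hypothetical DPAdSV calibration: concentration (nmol/L) vs peak current (uA)
conc = [100, 500, 1000, 2000, 3000]
curr = [0.21, 1.02, 2.05, 4.01, 6.02]
slope, intercept = linfit(conc, curr)
sd_blank = 0.0063            # assumed standard deviation of the blank signal
lod = 3 * sd_blank / slope   # limit of detection (3-sigma criterion)
```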

  12. Tantalum electrodes modified with well-aligned carbon nanotube-Au nanoparticles: application to the highly sensitive electrochemical determination of cefazolin.

    Science.gov (United States)

    Fayazfar, Haniyeh; Afshar, Abdollah; Dolati, Abolghasem

    2014-07-01

Carbon nanotube/nanoparticle hybrid materials have been proven to exhibit high electrocatalytic activity, suggesting broad potential applications in the field of electroanalysis. For the first time, modification of a Ta electrode with aligned multi-walled carbon nanotubes/Au nanoparticles is introduced for the sensitive determination of the antibiotic drug cefazolin (CFZ). The electrochemical response characteristics of the modified electrode toward CFZ were investigated by means of cyclic and linear sweep voltammetry. The modified electrode showed efficient catalytic activity for the reduction of CFZ, leading to a remarkable decrease in reduction overpotential and a significant increase in peak current. Under optimum conditions, the highly sensitive modified electrode showed a wide linear range from 50 pM to 50 μM with a sufficiently low detection limit of 1 ± 0.01 pM (S/N = 3). The results indicated that the prepared electrode presents suitable characteristics in terms of sensitivity (458.2 ± 2.6 μAcm(-2)/μM), accuracy, repeatability (RSD of 1.8%), reproducibility (RSD of 2.9%), stability (14 days), and good catalytic activity under physiological conditions. The method was successfully applied to the accurate determination of trace amounts of CFZ in pharmaceutical and clinical preparations without the need for sample pretreatment or any time-consuming extraction or evaporation steps prior to analysis.

  13. Designing and assessment of accuracy of an algorithm for determining the accuracy of radiographic film density by changing exposure time

    Directory of Open Access Journals (Sweden)

    Hoorieh Bashizadeh Fakhar

    2014-06-01

Full Text Available   Background and Aims: Bone density is frequently used in medical diagnosis and research. The current methods for determining bone density are expensive and not easily available in dental clinics. The aim of this study was to design and evaluate the accuracy of a digital method for hard-tissue densitometry which could be applied on personal computers.   Materials and Methods: An aluminum step wedge was constructed and 50 E-speed Kodak films were exposed. Exposure time varied from 0.05 s to 0.5 s at 0.05 s intervals. Films were developed with an automatic processor and digitized with an Epson 1240U Photo scanner. Images were cropped at 10 × 10 mm size with Microsoft Office Picture Manager. By running the algorithm designed in MATLAB software, the mean pixel value of the images was calculated.   Results: The findings of this study showed that by increasing the exposure time, the mean pixel value decreased, and at step 12 a significant discrimination was seen between the two subsequent times (P<0.001). By increasing the thickness of the object, the algorithm could detect density changes from step 4 at 0.3 s and step 5 at 0.5 s, and it could determine the differences in the mean pixel value between the same steps of 0.3 s and 0.5 s from step 4.   Conclusion: By increasing the object thickness and exposure time, the accuracy of the algorithm in recognizing changes in density increased. The software was able to determine the radiographic density changes of an aluminum step wedge of at least 4 mm thickness at an exposure time of 0.3 s and 5 mm at 0.5 s.

  14. Low-Carbon Energy Development in Indonesia in Alignment with Intended Nationally Determined Contribution (INDC) by 2030

    Directory of Open Access Journals (Sweden)

    Ucok W.R. Siagian

    2017-01-01

Full Text Available This study analyzed the role of low-carbon energy technologies in reducing the greenhouse gas emissions of Indonesia’s energy sector by 2030. The aim of this study was to provide insights into the Indonesian government’s approach to developing a strategy and plan for mitigating emissions and achieving Indonesia’s emission reduction targets by 2030, as pledged in the country’s Intended Nationally Determined Contribution. The Asia-Pacific Integrated Model/Computable General Equilibrium (AIM/CGE) model was used to quantify three scenarios that had the same socioeconomic assumptions: baseline, countermeasure (CM1), and CM2, which had a higher emission reduction target than CM1. Results of the study showed that an Indonesian low-carbon energy system could be achieved with two pillars, namely, energy efficiency measures and deployment of less carbon-intensive energy systems (i.e., the use of renewable energy in the power and transport sectors, and the use of natural gas in the power sector and in transport). Emission reductions would also be achieved through the electrification of end-user consumption, with the electricity supply decarbonized by deploying renewables for power generation. Under CM1, Indonesia could achieve a 15.5% emission reduction target (compared to the baseline scenario). This reduction could be achieved using efficiency measures that reduce final energy demand by 4%; this would require the deployment of geothermal power plants at a rate six times greater than in the baseline scenario and the use of four times more hydropower than in the baseline scenario. Greater carbon reductions (CM2; i.e., a 27% reduction) could be achieved with similar measures to CM1 but with more intensive penetration: final energy demand would need to be cut by 13%, deployment of geothermal power plants would need to be seven times greater than at baseline, and hydropower use would need to be five times greater than in the baseline case.

  15. Analytical and Algorithmic Approaches to Determine the Number of Sensor Nodes for Minimum Power Consumption in LWSNs

    Directory of Open Access Journals (Sweden)

    Ali Soner Kilinc

    2017-08-01

Full Text Available A Linear Wireless Sensor Network (LWSN) is a kind of wireless sensor network in which the nodes are deployed along a line. Since the sensor nodes are energy restricted, energy efficiency becomes one of the most significant design issues for LWSNs, as for wireless sensor networks in general. With proper deployment, the power consumption can be minimized by adjusting the distance between the sensor nodes, known as the hop length. In this paper, analytical and algorithmic approaches are presented to determine the number of hops and sensor nodes for minimum power consumption in a linear wireless sensor network consisting of equidistantly placed sensor nodes.
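
A common energy model for this hop-length trade-off (illustrative constants, not the paper's model): each extra hop adds a fixed electronics cost but shortens the per-hop distance, so the total per-message energy has a well-defined minimum in the number of hops.

```python
def total_power(n, d, alpha=2.0, amp=1e-9, elec=50e-9):
    """Per-message energy for n equidistant hops over a line of length d:
    each hop costs fixed electronics energy `elec` plus amplifier energy
    that grows as (d/n)**alpha (alpha = 2 is the free-space exponent).
    All constants are illustrative."""
    hop = d / n
    return n * (elec + amp * hop ** alpha)

def optimal_hops(d, n_max=300):
    """Smallest-energy hop count found by direct search."""
    return min(range(1, n_max + 1), key=lambda n: total_power(n, d))

n_star = optimal_hops(1000.0)   # line network 1000 m long
```

For alpha = 2 the continuous optimum is d * sqrt(amp / elec), here about 141 hops, which the direct search recovers.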

  16. Determination of the Cascade Reservoir Operation for Optimal Firm-Energy Using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Azmeri

    2013-08-01

Full Text Available Indonesia today faces a new paradigm in water management, in which applying integrated water resources management has become an unavoidable task for achieving greater effectiveness and efficiency. One of the most interesting case studies is the Citarum river, one of the most important rivers for water supply in West Java, Indonesia. Along the river, the Saguling, Cirata and Djuanda reservoirs have been constructed in series (cascade). Saguling and Cirata are operated primarily for hydroelectric power, while Djuanda is a multipurpose reservoir operated mainly for irrigation, which also contributes to the domestic water supply of Jakarta (the capital city of Indonesia). Since all of the reservoirs rely on the same resource, this situation poses management and operational problems. Therefore, a new management and operation approach is urgently required in order to achieve effective and efficient output and to avoid conflicts over water use. This study aims to obtain the energy production of the Citarum cascade reservoir system using genetic algorithm (GA) optimization, with an objective function that maximizes firm energy. Firm energy is the minimum energy that must be available in a given time period. The result obtained with the GA is then compared to the conventional search technique of non-linear programming (NLP). The GA-derived operating curves yield higher energy and firm energy than the NLP model.

  17. Constructing Aligned Assessments Using Automated Test Construction

    Science.gov (United States)

    Porter, Andrew; Polikoff, Morgan S.; Barghaus, Katherine M.; Yang, Rui

    2013-01-01

    We describe an innovative automated test construction algorithm for building aligned achievement tests. By incorporating the algorithm into the test construction process, along with other test construction procedures for building reliable and unbiased assessments, the result is much more valid tests than result from current test construction…

  18. Determining the boundary of inclusions with known conductivities using a Levenberg–Marquardt algorithm by electrical resistance tomography

    International Nuclear Information System (INIS)

    Tan, Chao; Xu, Yaoyuan; Dong, Feng

    2011-01-01

Electrical resistance tomography (ERT) is a non-intrusive technique for imaging the electrical conductivity distribution of a closed vessel by injecting an exciting current into the vessel and measuring the boundary voltages induced. ERT image reconstruction is characterized as a severely nonlinear and ill-posed inverse problem with many unknowns. In recent years, a growing number of papers have been published which aim to determine the locations and shapes of inclusions by assuming that their conductivities are piecewise constant and isotropic. In this work, the boundary of inclusions is reconstructed by ERT with a boundary element method. The Jacobian matrix of the forward problem is first calculated with a direct linearization method based on the boundary elements, and validated through comparison with that determined by the finite element method and an analytical method. A boundary reconstruction algorithm is then presented based on the Levenberg–Marquardt (L-M) method. Several numerical simulations and static experiments were conducted to study the reconstruction quality, where much importance was given to the smoothness of boundaries in the reconstruction; thus, a restriction on the curve radius is introduced to adjust the damping parameter of the L-M algorithm. Analytical results on the stability and precision of the boundary reconstruction demonstrate that stable reconstruction can be achieved when the conductivity of the objects differs substantially from that of the background medium, and convex boundaries can also be precisely reconstructed. In contrast, reconstructions for inclusions with conductivities similar to the background medium are not stable. The situation of an incorrect initial estimate of the number of inclusions is numerically studied, and the results show that the boundary of inclusions can be correctly reconstructed with a splitting/merging function under the aforementioned proper operating conditions of the present algorithm.
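
The L-M update used in such reconstructions can be illustrated on a tiny nonlinear least-squares problem, here a two-parameter exponential fit rather than the ERT forward model: solve the damped normal equations (JᵀJ + λI)δ = −Jᵀr, increasing the damping parameter λ when a step fails and decreasing it when it succeeds.

```python
import math

def lm_fit(xs, ys, a, b, lam=1e-3, iters=50):
    """Minimal Levenberg-Marquardt for the model y = a*exp(b*x)."""
    def residuals(a, b):
        return [y - a * math.exp(b * x) for x, y in zip(xs, ys)]
    cost = sum(r * r for r in residuals(a, b))
    for _ in range(iters):
        # Jacobian of the residuals: dr/da = -exp(b*x), dr/db = -a*x*exp(b*x)
        J = [(-math.exp(b * x), -a * x * math.exp(b * x)) for x in xs]
        r = residuals(a, b)
        g = [sum(Ji[k] * ri for Ji, ri in zip(J, r)) for k in range(2)]
        H = [[sum(Ji[i] * Ji[j] for Ji in J) for j in range(2)] for i in range(2)]
        H[0][0] += lam; H[1][1] += lam          # damping
        det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
        da = (-g[0] * H[1][1] + g[1] * H[0][1]) / det   # solve H*delta = -g
        db = (-g[1] * H[0][0] + g[0] * H[1][0]) / det
        new_cost = sum(t * t for t in residuals(a + da, b + db))
        if new_cost < cost:                      # accept step, relax damping
            a, b, cost, lam = a + da, b + db, new_cost, lam * 0.3
        else:                                    # reject step, raise damping
            lam *= 10.0
    return a, b

xs = [0.0, 1.0, 2.0, 3.0]
ys = [2.0 * math.exp(0.5 * x) for x in xs]   # exact data from a=2, b=0.5
a, b = lm_fit(xs, ys, 1.0, 0.3)
```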

  19. Dose determination algorithms for a nearly tissue equivalent multi-element thermoluminescent dosimeter

    International Nuclear Information System (INIS)

    Moscovitch, M.; Chamberlain, J.; Velbeck, K.J.

    1988-01-01

    In a continuing effort to develop dosimetric systems that enable reliable interpretation of dosimeter readings in terms of the absorbed dose or dose equivalent, a new multi-element TL dosimeter assembly for beta and gamma dose monitoring has been designed. The radiation-sensitive volumes are four LiF-TLD elements, each covered by its own unique filter. For media matching, care has been taken to employ nearly tissue equivalent filters with thicknesses of 1000 mg/cm2 and 300 mg/cm2 for deep dose and lens-of-the-eye dose measurements, respectively. Only one metal filter (Cu) is employed, to provide low energy photon discrimination. A thin TL element (0.09 mm thick) is located behind an open window, designed to improve the energy under-response to low energy beta rays and to provide a closer estimate of the shallow dose equivalent. The deep and shallow dose equivalents are derived from the correlation of the response of the various TL elements to the above quantities, through computations based on previously defined relationships obtained from experimental results. The theoretical formalization of the dose calculation algorithms is described in detail, and provides a useful methodology which can be applied to different tissue-equivalent dosimeter assemblies. Experimental data have been obtained by performing irradiations according to the specifications established by DOELAP, using 27 types of pure and mixed radiation fields including Cs-137 gamma rays, low energy photons down to 20 keV, Sr/Y-90, Uranium, and Tl-204 beta particles

  20. Determining the number of clusters for kernelized fuzzy C-means algorithms for automatic medical image segmentation

    Directory of Open Access Journals (Sweden)

    E.A. Zanaty

    2012-03-01

    Full Text Available In this paper, we determine suitable validity criteria for kernelized fuzzy C-means, with and without spatial constraints, for automatic segmentation of magnetic resonance imaging (MRI) data. To that end, the original Euclidean distance in FCM is replaced by a Gaussian radial basis function (GRBF) kernel, and the corresponding FCM algorithms are derived. The derived algorithms are called kernelized fuzzy C-means (KFCM) and kernelized fuzzy C-means with spatial constraints (SKFCM). These methods are evaluated against eighteen validity indexes to determine which indexes are capable of identifying the optimal number of clusters. Segmentation performance is estimated by applying the methods independently to several datasets, to establish which method gives good results and with which indexes. Our tests span various indexes, covering the classical ones and the rather more recent indexes that have enjoyed noticeable success in the field. These indexes are evaluated and compared by applying them to various test images, including synthetic images corrupted with noise of varying levels, and simulated volumetric MRI datasets. A comparative analysis is also presented to show whether each validity index indicates the optimal clustering for our datasets.
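    The kernel substitution described above can be sketched in a few lines. With a Gaussian RBF kernel, K(x, x) = 1, so the kernel-induced squared distance reduces to 2(1 - K(x, v)). The parameter defaults and the simple deterministic initialization below are illustrative, not taken from the paper:

```python
import numpy as np

def kfcm(X, c, m=2.0, sigma=1.0, n_iter=100):
    """Minimal kernelized fuzzy C-means (KFCM) sketch with a Gaussian RBF
    kernel. Returns memberships U (c, n) and prototypes V (c, d).
    Initialization is a simple deterministic spread over the data."""
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    V = X[np.linspace(0, n - 1, c).astype(int)].copy()  # deterministic seed
    for _ in range(n_iter):
        # Gaussian kernel between every point and every prototype
        K = np.exp(-((X[None, :, :] - V[:, None, :]) ** 2).sum(-1) / sigma ** 2)
        d2 = np.maximum(2.0 * (1.0 - K), 1e-12)         # kernel-induced distance
        U = d2 ** (-1.0 / (m - 1.0))
        U /= U.sum(axis=0, keepdims=True)               # memberships sum to 1
        W = (U ** m) * K                                # kernel-weighted update
        V = (W @ X) / W.sum(axis=1, keepdims=True)
    return U, V
```

    A validity index would then be computed from `U` and `V` for each candidate number of clusters `c`, as in the evaluation above.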

  1. Computer realization of an algorithm for determining the optimal arrangement of a fast power reactor core with hexagonal assemblies

    International Nuclear Information System (INIS)

    Karpov, V.A.; Rybnikov, A.F.

    1983-01-01

    An algorithm for solving problems associated with the computer-aided design of fast nuclear reactors is suggested. The discrete optimization problem is formulated: choosing the first loading arrangement, determining the functional purpose of the control elements and the order of their rearrangement during reactor operation, and choosing the operations for core reloading. An algorithm for computerized solution of this optimization problem, based on variational methods and realized as the DESIGN program complex written in FORTRAN for the BESM-6 computer, is proposed. To carry out the necessary neutron-physical calculations for a reactor in hexagonal geometry, a fast program for solving the diffusion equations of a two-dimensional reactor was developed, which permits the optimization problem to be solved in a reasonable time. The DESIGN program can be included in a computer-aided design system to automate the procedure of determining the core arrangement of a fast power reactor. Application of the DESIGN program avoids routine calculations substantiating the neutron-physical and thermal-hydraulic characteristics of the reactor core, which relieves operators of a substantial waste of time and increases the efficiency of their work

  2. Clark and Reza-Latif-Shabgahi Algorithm for the Determination of ...

    African Journals Online (AJOL)

    Shabgahi bottom-up method in determining the minimal cut-sets of the modified fault tree of pipeline failure in the Niger Delta region of Nigeria occasioned by third-party activity. It employs the bottom-up technique, producing a table containing the ...

  3. An efficient algorithmic approach for mass spectrometry-based disulfide connectivity determination using multi-ion analysis

    Directory of Open Access Journals (Sweden)

    Yen Ten-Yang

    2011-02-01

    Full Text Available Abstract Background Determining the disulfide (S-S) bond pattern in a protein is often crucial for understanding its structure and function. In recent research, mass spectrometry (MS) based analysis has been applied to this problem following protein digestion under both partial reduction and non-reduction conditions. However, this paradigm still awaits solutions to certain algorithmic problems, fundamental amongst which is the efficient matching of an exponentially growing set of putative S-S bonded structural alternatives to the large amounts of experimental spectrometric data. Current methods circumvent this challenge primarily through simplifications, such as by assuming only the occurrence of certain ion types (b-ions and y-ions) that predominate in the more popular dissociation methods, such as collision-induced dissociation (CID). Unfortunately, this can adversely impact the quality of results. Method We present an algorithmic approach to this problem that can, with high computational efficiency, analyze multiple ion types (a, b, bo, b*, c, x, y, yo, y*, and z) and deal with complex bonding topologies, such as inter/intra bonding involving more than two peptides. The proposed approach combines an approximation algorithm-based search formulation with data driven parameter estimation. This formulation considers only those regions of the search space where the correct solution resides with a high likelihood. Putative disulfide bonds thus obtained are finally combined in a globally consistent pattern to yield the overall disulfide bonding topology of the molecule. Additionally, each bond is associated with a confidence score, which aids in interpretation and assimilation of the results. Results The method was tested on nine different eukaryotic Glycosyltransferases possessing disulfide bonding topologies of varying complexity. Its performance was found to be characterized by high efficiency (in terms of time and the fraction of search space

  4. LHCb: Experience with LHCb alignment software on first data

    CERN Multimedia

    Deissenroth, M

    2009-01-01

    We report results obtained with different track-based algorithms for the alignment of the LHCb detector with first data. The large-area Muon Detector and Outer Tracker have been aligned with a large sample of tracks from cosmic rays. The three silicon detectors --- VELO, TT-station and Inner Tracker --- have been aligned with beam-induced events from the LHC injection line. We compare the results from the track-based alignment with expectations from detector survey.

  5. Evolving attractive faces using morphing technology and a genetic algorithm: a new approach to determining ideal facial aesthetics.

    Science.gov (United States)

    Wong, Brian J F; Karimi, Koohyar; Devcic, Zlatko; McLaren, Christine E; Chen, Wen-Pin

    2008-06-01

    The objectives of this study were to: 1) determine if a genetic algorithm in combination with morphing software can be used to evolve more attractive faces; and 2) evaluate whether this approach can be used as a tool to define or identify the attributes of the ideal attractive face. Basic research study incorporating focus group evaluations. Digital images were acquired of 250 female volunteers (18-25 y). Randomly selected images were used to produce a parent generation (P) of 30 synthetic faces using morphing software. Then, a focus group of 17 trained volunteers (18-25 y) scored each face on an attractiveness scale ranging from 1 (unattractive) to 10 (attractive). A genetic algorithm was used to select 30 new pairs from the parent generation, and these were morphed using software to produce a new first generation (F1) of faces. The F1 faces were scored by the focus group, and the process was repeated for a total of four iterations of the algorithm. The algorithm mimics natural selection by using the attractiveness score as the selection pressure; the more attractive faces are more likely to morph. All five generations (P-F4) were then scored by three focus groups: a) surgeons (n = 12), b) cosmetology students (n = 44), and c) undergraduate students (n = 44). Morphometric measurements were made of 33 specific features on each of the 150 synthetic faces, and correlated with attractiveness scores using univariate and multivariate analysis. The average facial attractiveness scores increased with each generation and were 3.66 (+/-0.60), 4.59 (+/-0.73), 5.50 (+/-0.62), 6.23 (+/-0.31), and 6.39 (+/-0.24) for P and F1-F4 generations, respectively. Histograms of attractiveness score distributions show a significant shift in the skew of each curve toward more attractive faces with each generation. Univariate analysis identified nasal width, eyebrow arch height, and lip thickness as being significantly correlated with attractiveness scores. 
Multivariate analysis identified a

  6. Detecting chaos, determining the dimensions of tori and predicting slow diffusion in Fermi-Pasta-Ulam lattices by the Generalized Alignment Index method

    Science.gov (United States)

    Skokos, C.; Bountis, T.; Antonopoulos, C.

    2008-12-01

    The recently introduced GALI method is used for rapidly detecting chaos, determining the dimensionality of regular motion and predicting slow diffusion in multi-dimensional Hamiltonian systems. We propose an efficient computation of the GALI_k indices, which represent volume elements of k randomly chosen deviation vectors from a given orbit, based on the Singular Value Decomposition (SVD) algorithm. We obtain theoretically and verify numerically asymptotic estimates of the GALIs' long-time behavior in the case of regular orbits lying on low-dimensional tori. The GALI_k indices are applied to rapidly detect chaotic oscillations, identify low-dimensional tori of Fermi-Pasta-Ulam (FPU) lattices at low energies and predict weak diffusion away from quasiperiodic motion, long before it is actually observed in the oscillations.
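    The SVD-based computation of GALI_k proposed above can be sketched in a few lines, assuming the k deviation vectors are supplied as the rows of an array:

```python
import numpy as np

def gali(deviation_vectors):
    """GALI_k from k deviation vectors (rows of a k x n array): normalize
    each vector, then GALI_k equals the product of the singular values of
    the resulting matrix, i.e. the volume of the parallelepiped spanned by
    the unit deviation vectors."""
    A = np.asarray(deviation_vectors, dtype=float)
    A = A / np.linalg.norm(A, axis=1, keepdims=True)   # unit deviation vectors
    s = np.linalg.svd(A, compute_uv=False)
    return float(np.prod(s))
```

    Orthogonal deviation vectors give GALI_k = 1, while vectors that become aligned (as they do for chaotic orbits) drive GALI_k exponentially toward zero.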

  7. The GLAS Algorithm Theoretical Basis Document for Precision Attitude Determination (PAD)

    Science.gov (United States)

    Bae, Sungkoo; Smith, Noah; Schutz, Bob E.

    2013-01-01

    The Geoscience Laser Altimeter System (GLAS) was the sole instrument for NASA's Ice, Cloud and land Elevation Satellite (ICESat) laser altimetry mission. The primary purpose of the ICESat mission was to make ice sheet elevation measurements of the polar regions. Additional goals were to measure the global distribution of clouds and aerosols and to map sea ice, land topography and vegetation. ICESat was the benchmark Earth Observing System (EOS) mission to be used to determine the mass balance of the ice sheets, as well as for providing cloud property information, especially for stratospheric clouds common over polar areas.

  8. A New Algorithm for Determining Ultimate Pit Limits Based on Network Optimization

    OpenAIRE

    Ali Asghar Khodayari

    2013-01-01

    One of the main concerns of the mining industry is to determine ultimate pit limits. Final pit is a collection of blocks, which can be removed with maximum profit while following restrictions on the slope of the mine’s walls. The size, location and final shape of an open-pit are very important in designing the location of waste dumps, stockpiles, processing plants, access roads and other surface facilities as well as in developing a production program. There are numerous methods for designing...

  9. Sparse alignment for robust tensor learning.

    Science.gov (United States)

    Lai, Zhihui; Wong, Wai Keung; Xu, Yong; Zhao, Cairong; Sun, Mingming

    2014-10-01

    Multilinear/tensor extensions of manifold learning based algorithms have been widely used in computer vision and pattern recognition. This paper first provides a systematic analysis of the multilinear extensions for the most popular methods by using alignment techniques, thereby obtaining a general tensor alignment framework. From this framework, it is easy to show that the manifold learning based tensor learning methods are intrinsically different from the alignment techniques. Based on the alignment framework, a robust tensor learning method called sparse tensor alignment (STA) is then proposed for unsupervised tensor feature extraction. Different from the existing tensor learning methods, L1- and L2-norms are introduced to enhance the robustness in the alignment step of the STA. The advantage of the proposed technique is that the difficulty in selecting the size of the local neighborhood can be avoided in the manifold learning based tensor feature extraction algorithms. Although STA is an unsupervised learning method, the sparsity encodes the discriminative information in the alignment step and provides the robustness of STA. Extensive experiments on the well-known image databases as well as action and hand gesture databases by encoding object images as tensors demonstrate that the proposed STA algorithm gives the most competitive performance when compared with the tensor-based unsupervised learning methods.

  10. Aligning Biomolecular Networks Using Modular Graph Kernels

    Science.gov (United States)

    Towfic, Fadi; Greenlee, M. Heather West; Honavar, Vasant

    Comparative analysis of biomolecular networks constructed using measurements from different conditions, tissues, and organisms offer a powerful approach to understanding the structure, function, dynamics, and evolution of complex biological systems. We explore a class of algorithms for aligning large biomolecular networks by breaking down such networks into subgraphs and computing the alignment of the networks based on the alignment of their subgraphs. The resulting subnetworks are compared using graph kernels as scoring functions. We provide implementations of the resulting algorithms as part of BiNA, an open source biomolecular network alignment toolkit. Our experiments using Drosophila melanogaster, Saccharomyces cerevisiae, Mus musculus and Homo sapiens protein-protein interaction networks extracted from the DIP repository of protein-protein interaction data demonstrate that the performance of the proposed algorithms (as measured by % GO term enrichment of subnetworks identified by the alignment) is competitive with some of the state-of-the-art algorithms for pair-wise alignment of large protein-protein interaction networks. Our results also show that the inter-species similarity scores computed based on graph kernels can be used to cluster the species into a species tree that is consistent with the known phylogenetic relationships among the species.

  11. The Decision Support System (DSS) Application to Determination of Diabetes Mellitus Patient Menu Using a Genetic Algorithm Method

    Science.gov (United States)

    Zuliyana, Nia; Suseno, Jatmiko Endro; Adi, Kusworo

    2018-02-01

    The composition of sugar-containing foods for people with Diabetes Mellitus should be balanced, so an application is needed to help the public and nutritionists determine a food menu appropriate to a diabetes patient's calorie requirement. This research recommends food variations using a Genetic Algorithm. The data used are the nutrient contents of foods obtained from Tabel Komposisi Pangan Indonesia (TKPI). The patient's caloric requirement is computed with the PERKENI 2015 method. The data are then processed to determine the best food menu satisfying the energy (E), carbohydrate (K), fat (L) and protein (P) requirements. The system is compared across variations of the Genetic Algorithm parameters: the number of chromosomes, the probability of crossover (Pc) and the probability of mutation (Pm). The larger the crossover and mutation probabilities, the more food variations are produced. For example, for a female patient aged 61 years, height 160 cm and weight 55 kg, the computed requirement is E=1621.4, K=243.21, P=60.80, L=45.04; with genes=4, chromosomes=3, generations=3, Pc=0.2 and Pm=0.2, three variants are obtained: (E=1607.25, K=198.877, P=95.385, L=47.508), (E=1633.25, K=196.677, P=85.885, L=55.758) and (E=1630.90, K=177.455, P=85.245, L=64.335).
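    The selection, crossover (Pc) and mutation (Pm) loop described above can be sketched with a toy binary-encoded genetic algorithm. The calorie values, fitness function and parameter defaults below are illustrative, not taken from TKPI or from the paper:

```python
import random

def ga_menu(foods_kcal, target, pop=20, gens=60, pc=0.8, pm=0.1, seed=0):
    """Toy GA: choose a subset of foods whose total energy approaches the
    target calorie value. Chromosome = binary inclusion vector over the
    food list; fitness = negative absolute deviation from the target."""
    rnd = random.Random(seed)
    n = len(foods_kcal)
    total = lambda ch: sum(k for k, g in zip(foods_kcal, ch) if g)
    fitness = lambda ch: -abs(total(ch) - target)
    popu = [[rnd.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popu.sort(key=fitness, reverse=True)
        nxt = popu[:2]                        # elitism: keep the two best
        while len(nxt) < pop:
            a, b = rnd.sample(popu[:10], 2)   # mate within the better half
            if rnd.random() < pc:             # one-point crossover, prob. Pc
                cut = rnd.randrange(1, n)
                a = a[:cut] + b[cut:]
            nxt.append([1 - g if rnd.random() < pm else g for g in a])  # mutation, prob. Pm
        popu = nxt
    best = max(popu, key=fitness)
    return best, total(best)
```

    A real menu planner would use a multi-component fitness covering E, K, P and L simultaneously, as the abstract describes.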

  12. Optimization, Monotonicity and the Determination of Nash Equilibria — An Algorithmic Analysis

    Science.gov (United States)

    Lozovanu, D.; Pickl, S. W.; Weber, G.-W.

    2004-08-01

    This paper is concerned with the optimization of a nonlinear time-discrete model exploiting the special structure of the underlying cost game and the property of inverse matrices. The costs are interlinked by a system of linear inequalities. It is shown that, if the players cooperate, i.e., minimize the sum of all the costs, they achieve a Nash equilibrium. In order to determine Nash equilibria, the simplex method can be applied with respect to the dual problem. An introduction into the TEM model and its relationship to an economic Joint Implementation program is given. The equivalence problem is presented. The construction of the emission cost game and the allocation problem is explained. The assumption of inverse monotony for the matrices leads to a new result in the area of such allocation problems. A generalization of such problems is presented.

  13. ALGORITHM OF DETERMINATION OF POWER AND ENERGY INDEXES OF SCREW INTENSIFIER ON THE BULLDOZER WORKING EQUIPMENT AT TRENCH REFILLINGS

    Directory of Open Access Journals (Sweden)

    KROL R. N.

    2016-03-01

    Full Text Available Raising of problem. Trench refilling with a bulldozer is carried out by cyclic shuttle motions of the machine, which increases the right-of-way and the expenditure of time, fuel and labour compared with the continuous refilling method. Besides these defects, the quality of the refilling also suffers: the uneven discharge of soil into the trench in large portions damages the pipe insulation and forms voids, with consequent settling and washout of soil. A bulldozer with a screw intensifier (SI) is free of the shortcomings of an ordinary bulldozer: moving along the trench, it shifts the loose soil so that it does not fall onto the pipeline but rolls over it. The circumferential speed of the cutting edge of the SI exceeds the travel speed of the base machine, which provides thorough treatment (dispersion) of the soil before it is discharged into the trench. Purpose. To develop an algorithm for determining the rotational moment on the SI drive shaft, the energy consumed, the energy intensity and the productivity of the reverse trench-refilling process, depending on the physical and mechanical properties of the soil, the geometrical parameters of the SI and the optimal speed of the bulldozer. Conclusion. The developed algorithm shows that, at a fixed rotational speed, the rotational moment and the indicated efficiency of the SI vary linearly with increasing optimal speed of the base machine; over the considered range of rotational speeds, changing the optimal speed of the base machine has practically no influence on the energy intensity.

  14. Improving a maximum horizontal gradient algorithm to determine geological body boundaries and fault systems based on gravity data

    Science.gov (United States)

    Van Kha, Tran; Van Vuong, Hoang; Thanh, Do Duc; Hung, Duong Quoc; Anh, Le Duc

    2018-05-01

    The maximum horizontal gradient method was first proposed by Blakely and Simpson (1986) for determining the boundaries between geological bodies with different densities. The method involves the comparison of a center point with its eight nearest neighbors in four directions within each 3 × 3 calculation grid. The horizontal location and magnitude of the maximum values are found by interpolating a second-order polynomial through the trio of points provided that the magnitude of the middle point is greater than its two nearest neighbors in one direction. In theoretical models of multiple sources, however, the above condition does not allow the maximum horizontal locations to be fully located, and it could be difficult to correlate the edges of complicated sources. In this paper, the authors propose an additional condition to identify more maximum horizontal locations within the calculation grid. This additional condition will improve the method algorithm for interpreting the boundaries of magnetic and/or gravity sources. The improved algorithm was tested on gravity models and applied to gravity data for the Phu Khanh basin on the continental shelf of the East Vietnam Sea. The results show that the additional locations of the maximum horizontal gradient could be helpful for connecting the edges of complicated source bodies.
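    The 3 x 3 scan with second-order polynomial interpolation described above can be sketched as follows. The grid, the four scan directions and the return format are illustrative; the paper's additional condition for multi-source models is not reproduced:

```python
import numpy as np

def max_horizontal_gradient_peaks(g):
    """Blakely-Simpson-style scan of a horizontal gradient magnitude grid g:
    within each 3x3 window, compare the centre with its neighbours along the
    four directions (row, column, both diagonals). Where the centre exceeds
    both neighbours, the parabola through the trio gives the sub-grid offset
    and interpolated peak magnitude. Returns (i, j, offset, value) tuples."""
    peaks = []
    dirs = [(0, 1), (1, 0), (1, 1), (1, -1)]
    for i in range(1, g.shape[0] - 1):
        for j in range(1, g.shape[1] - 1):
            for di, dj in dirs:
                a, b, c = g[i - di, j - dj], g[i, j], g[i + di, j + dj]
                if b > a and b > c:
                    # vertex of the parabola through (-1, a), (0, b), (1, c);
                    # denom < 0 is guaranteed by the maximum condition
                    denom = a - 2.0 * b + c
                    x0 = 0.5 * (a - c) / denom
                    peak = b - 0.25 * (a - c) * x0
                    peaks.append((i, j, x0, peak))
    return peaks
```

    Linking the detected peak locations into curves then traces the boundaries of the source bodies, which is where the proposed additional condition helps with complicated sources.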

  15. Inverting Image Data For Optical Testing And Alignment

    Science.gov (United States)

    Shao, Michael; Redding, David; Yu, Jeffrey W.; Dumont, Philip J.

    1993-01-01

    Data from images produced by slightly incorrectly figured concave primary mirror in telescope processed into estimate of spherical aberration of mirror, by use of algorithm finding nonlinear least-squares best fit between actual images and synthetic images produced by multiparameter mathematical model of telescope optical system. Estimated spherical aberration, in turn, converted into estimate of deviation of reflector surface from nominal precise shape. Algorithm devised as part of effort to determine error in surface figure of primary mirror of Hubble space telescope, so corrective lens designed. Modified versions of algorithm also used to find optical errors in other components of telescope or of other optical systems, for purposes of testing, alignment, and/or correction.

  16. K-Means Algorithm Performance Analysis With Determining The Value Of Starting Centroid With Random And KD-Tree Method

    Science.gov (United States)

    Sirait, Kamson; Tulus; Budhiarti Nababan, Erna

    2017-12-01

    Clustering methods with high accuracy and time efficiency are necessary for the filtering process. One method that has been widely known and applied in clustering is K-Means Clustering. In its application, the determination of the beginning value of the cluster center greatly affects the results of the K-Means algorithm. This research discusses the results of K-Means Clustering with the starting centroid determined by a random method and by a KD-Tree method. With random initial centroids on a data set of 1000 student academic records, used to classify students at risk of dropping out, the SSE value is 952972 for the quality variable and 232.48 for the GPA variable, whereas initial centroid determination by KD-Tree gives an SSE value of 504302 for the quality variable and 214.37 for the GPA variable. The smaller SSE values indicate that K-Means Clustering with initial KD-Tree centroid selection has better accuracy than the K-Means Clustering method with random initial centroid selection.
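    Seeding sensitivity of the kind measured above can be explored with a plain k-means sketch that accepts arbitrary starting centroids and reports the final SSE. The KD-Tree seeding itself is not reproduced; the data and names below are illustrative:

```python
import numpy as np

def kmeans_sse(X, init_centroids, n_iter=50):
    """Plain k-means returning the final sum of squared errors (SSE), the
    quality measure used to compare seeding strategies. Any (k, d) array of
    starting centroids can be passed in."""
    X = np.asarray(X, dtype=float)
    C = np.array(init_centroids, dtype=float)
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)  # point-centroid distances
        labels = d2.argmin(1)
        for k in range(len(C)):                              # move centroids to cluster means
            if np.any(labels == k):
                C[k] = X[labels == k].mean(0)
    labels = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1).argmin(1)
    sse = float(((X - C[labels]) ** 2).sum())
    return sse, labels
```

    Running this with different `init_centroids` (random draws vs. a density-aware seeding) makes the SSE differences reported in the study directly observable.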

  17. Evolutionary rates at codon sites may be used to align sequences and infer protein domain function

    Directory of Open Access Journals (Sweden)

    Hazelhurst Scott

    2010-03-01

    Full Text Available Abstract Background Sequence alignments form part of many investigations in molecular biology, including the determination of phylogenetic relationships, the prediction of protein structure and function, and the measurement of evolutionary rates. However, to obtain meaningful results, a significant degree of sequence similarity is required to ensure that the alignments are accurate and the inferences correct. Limitations arise when sequence similarity is low, which is particularly problematic when working with fast-evolving genes, evolutionarily distant taxa, genomes with nucleotide biases, and cases of convergent evolution. Results A novel approach was conceptualized to address the "low sequence similarity" alignment problem. We developed an alignment algorithm termed FIRE (Functional Inference using the Rates of Evolution), which aligns sequences using the evolutionary rate at codon sites, as measured by the dN/dS ratio, rather than nucleotide or amino acid residues. FIRE was used to test the hypotheses that evolutionary rates can be used to align sequences and that the alignments may be used to infer protein domain function. Using a range of test data, we found that aligning domains based on evolutionary rates was possible even when sequence similarity was very low (for example, antibody variable regions). Furthermore, the alignment has the potential to infer protein domain function, indicating that domains with similar functions are subject to similar evolutionary constraints. These data suggest that an evolutionary rate-based approach to sequence analysis (particularly when combined with structural data) may be used to study cases of convergent evolution or when sequences have very low similarity. However, when aligning homologous gene sets with sequence similarity, FIRE did not perform as well as the best traditional alignment algorithms, indicating that the conventional approach of aligning residues as opposed to evolutionary rates remains the

  18. Determining decoupling points in a supply chain networks using NSGA II algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Ebrahimiarjestan, M.; Wang, G.

    2017-07-01

    Purpose: In the model, we use the concepts of Lee and Amaral (2002) and Tang and Zhou (2009) and offer a multi-criteria decision-making model that identifies decoupling points so as to minimize production costs, minimize product delivery time to the customer, and maximize customer satisfaction. Design/methodology/approach: We encounter a triple-objective model; a meta-heuristic method (NSGA II) is used to solve the model and to identify the Pareto optimal points. The max (min) method was used. Findings: Our results of using NSGA II to find Pareto optimal solutions demonstrate its good performance in extracting Pareto solutions for the proposed model, which considers the determination of decoupling points in a supply network. Originality/value: So far, several approaches to modeling this problem have been proposed, each of them covering a part of this concept. The concept is treated more generally in the model defined here, in which we face a multi-criteria decision problem that includes minimization of production costs and product delivery time to customers as well as maximization of customer satisfaction.
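    The first ingredient of NSGA II, the non-dominated sorting of objective vectors into Pareto fronts, can be sketched as follows. This is a plain quadratic-time version assuming all objectives are minimized:

```python
def nondominated_sort(points):
    """Split objective vectors (minimization) into successive Pareto fronts.
    A point dominates another if it is no worse in every objective and
    strictly better in at least one."""
    def dominates(a, b):
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    fronts, remaining = [], list(range(len(points)))
    while remaining:
        # current front: points not dominated by any other remaining point
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts
```

    NSGA II combines this sorting with crowding-distance selection; for the triple-objective model above, each point would be a (cost, delivery time, negated satisfaction) vector.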

  19. Determining decoupling points in a supply chain networks using NSGA II algorithm

    International Nuclear Information System (INIS)

    Ebrahimiarjestan, M.; Wang, G.

    2017-01-01

    Purpose: In the model, we used the concepts of Lee and Amaral (2002) and Tang and Zhou (2009) and offer a multi-criteria decision-making model that identify the decoupling points to aim to minimize production costs, minimize the product delivery time to customer and maximize their satisfaction. Design/methodology/approach: We encounter with a triple-objective model that meta-heuristic method (NSGA II) is used to solve the model and to identify the Pareto optimal points. The max (min) method was used. Findings: Our results of using NSGA II to find Pareto optimal solutions demonstrate good performance of NSGA II to extract Pareto solutions in proposed model that considers determining of decoupling point in a supply network. Originality/value: So far, several approaches to model the future have been proposed, of course, each of them modeled a part of this concept. This concept has been considered more general in the model that defined in follow. In this model, we face with a multi-criteria decision problem that includes minimization of the production costs and product delivery time to customers as well as customer consistency maximization.

  20. Interfacial chemistry and energy band alignment of TiAlO on 4H-SiC determined by X-ray photoelectron spectroscopy

    International Nuclear Information System (INIS)

    Wang, Qian; Cheng, Xinhong; Zheng, Li; Ye, Peiyi; Li, Menglu; Shen, Lingyan; Li, Jingjie; Zhang, Dongliang; Gu, Ziyue; Yu, Yuehui

    2017-01-01

    Highlights: • Composite TiAlO rather than TiO_2-Al_2O_3 laminations is deposited on 4H-SiC by PEALD. • An interfacial layer composed of Ti, Si, O and C forms between TiAlO and 4H-SiC. • TiAlO offers competitive barrier heights (>1 eV) for both electrons and holes. - Abstract: Intermixing of TiO_2 with Al_2O_3 to form TiAlO films on 4H-SiC is expected to simultaneously boost the dielectric constant and achieve sufficient conduction/valence band offsets (CBO/VBO) between dielectrics and 4H-SiC. In this work, a composite TiAlO film rather than TiO_2-Al_2O_3 laminations is deposited on 4H-SiC by plasma enhanced atomic layer deposition (PEALD). X-ray photoelectron spectroscopy (XPS) is performed to systematically analyze the interfacial chemistry and energy band alignment between TiAlO and 4H-SiC. An interfacial layer composed of Ti, Si, O and C forms between TiAlO and 4H-SiC during PEALD process. The VBO and CBO between TiAlO and 4H-SiC are determined to be 1.45 eV and 1.10 eV, respectively, which offer competitive barrier heights (>1 eV) for both electrons and holes and make it suitable for the fabrication of 4H-SiC metal-oxide-semiconductor field effect transistors (MOSFETs).

  1. Interfacial chemistry and energy band alignment of TiAlO on 4H-SiC determined by X-ray photoelectron spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Qian [State Key Laboratory of Functional Materials for Informatics, Shanghai Institute of Micro-System & Information Technology, Chinese Academy of Sciences, Changning Road 865, Shanghai 200050 (China); University of Chinese Academy of Sciences, Beijing 100049 (China); Cheng, Xinhong, E-mail: xh_cheng@mail.sim.ac.cn [State Key Laboratory of Functional Materials for Informatics, Shanghai Institute of Micro-System & Information Technology, Chinese Academy of Sciences, Changning Road 865, Shanghai 200050 (China); Zheng, Li, E-mail: zhengli@mail.sim.ac.cn [State Key Laboratory of Functional Materials for Informatics, Shanghai Institute of Micro-System & Information Technology, Chinese Academy of Sciences, Changning Road 865, Shanghai 200050 (China); University of Chinese Academy of Sciences, Beijing 100049 (China); Ye, Peiyi; Li, Menglu [Department of Materials Science and Engineering, University of California, Los Angeles, CA, 90095 (United States); Shen, Lingyan; Li, Jingjie; Zhang, Dongliang; Gu, Ziyue [State Key Laboratory of Functional Materials for Informatics, Shanghai Institute of Micro-System & Information Technology, Chinese Academy of Sciences, Changning Road 865, Shanghai 200050 (China); University of Chinese Academy of Sciences, Beijing 100049 (China); Yu, Yuehui [State Key Laboratory of Functional Materials for Informatics, Shanghai Institute of Micro-System & Information Technology, Chinese Academy of Sciences, Changning Road 865, Shanghai 200050 (China)

    2017-07-01

    Highlights: • Composite TiAlO rather than TiO_2-Al_2O_3 laminations is deposited on 4H-SiC by PEALD. • An interfacial layer composed of Ti, Si, O and C forms between TiAlO and 4H-SiC. • TiAlO offers competitive barrier heights (>1 eV) for both electrons and holes. - Abstract: Intermixing of TiO_2 with Al_2O_3 to form TiAlO films on 4H-SiC is expected to simultaneously boost the dielectric constant and achieve sufficient conduction/valence band offsets (CBO/VBO) between dielectrics and 4H-SiC. In this work, a composite TiAlO film rather than TiO_2-Al_2O_3 laminations is deposited on 4H-SiC by plasma enhanced atomic layer deposition (PEALD). X-ray photoelectron spectroscopy (XPS) is performed to systematically analyze the interfacial chemistry and energy band alignment between TiAlO and 4H-SiC. An interfacial layer composed of Ti, Si, O and C forms between TiAlO and 4H-SiC during the PEALD process. The VBO and CBO between TiAlO and 4H-SiC are determined to be 1.45 eV and 1.10 eV, respectively, which offer competitive barrier heights (>1 eV) for both electrons and holes and make TiAlO suitable for the fabrication of 4H-SiC metal-oxide-semiconductor field effect transistors (MOSFETs).

  2. Heuristics for multiobjective multiple sequence alignment.

    Science.gov (United States)

    Abbasi, Maryam; Paquete, Luís; Pereira, Francisco B

    2016-07-15

    Aligning multiple sequences arises in many tasks in Bioinformatics. However, the alignments produced by the current software packages are highly dependent on the parameter setting, such as the relative importance of opening gaps with respect to the increase of similarity. Choosing only one parameter setting may introduce an undesirable bias in further steps of the analysis and give too simplistic interpretations. In this work, we reformulate multiple sequence alignment from a multiobjective point of view. The goal is to generate several sequence alignments that represent a trade-off between maximizing the substitution score and minimizing the number of indels/gaps in the sum-of-pairs score function. This trade-off gives the practitioner further information about the similarity of the sequences, from which she could analyse and choose the most plausible alignment. We introduce several heuristic approaches, based on local search procedures, that compute a set of sequence alignments which are representative of the trade-off between the two objectives (substitution score and indels). Several algorithm design options are discussed and analysed, with particular emphasis on the influence of the starting alignment and neighborhood search definitions on the overall performance. A perturbation technique is proposed to improve the local search, which provides a wide range of high-quality alignments. The proposed approach is tested experimentally on a wide range of instances: we performed several experiments with sequences obtained from the benchmark database BAliBASE 3.0. To evaluate the quality of the results, we calculate the hypervolume indicator of the set of score vectors returned by the algorithms. The results obtained allow us to identify reasonably good choices of parameters for our approach. Further, we compared our method in terms of correctly aligned pairs ratio and columns correctly aligned ratio with respect to reference alignments.
Experimental results show
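The hypervolume indicator used in this record to score sets of bi-objective alignments reduces, in the two-objective case, to a simple sweep. A minimal sketch (assuming both objectives are cast as maximizations, e.g. substitution score and the negated gap count, and a reference point dominated by every alignment; this is a generic implementation, not the authors' evaluation code):

```python
def hypervolume_2d(points, ref):
    """Hypervolume (dominated area) of 2-D score vectors, both
    objectives maximized, measured against a reference point `ref`
    that every point dominates."""
    hv, best_y = 0.0, ref[1]
    for x, y in sorted(points, reverse=True):   # sweep by descending x
        if y > best_y:                          # non-dominated point
            hv += (x - ref[0]) * (y - best_y)   # add its new strip
            best_y = y
    return hv

# Example: three mutually non-dominated alignments plus one dominated one.
scores = [(3, 1), (2, 2), (1, 3), (1, 1)]
hv = hypervolume_2d(scores, ref=(0, 0))   # area of the union of rectangles
```

For a minimized objective such as the gap count, negating its values before the sweep keeps the maximization convention.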

  3. Statistical distributions of optimal global alignment scores of random protein sequences

    Directory of Open Access Journals (Sweden)

    Tang Jiaowei

    2005-10-01

    Background: The inference of homology from statistically significant sequence similarity is a central issue in sequence alignments. So far the statistical distribution function underlying the optimal global alignments has not been completely determined. Results: In this study, random and real but unrelated sequences prepared in six different ways were selected as reference datasets to obtain their respective statistical distributions of global alignment scores. All alignments were carried out with the Needleman-Wunsch algorithm and optimal scores were fitted to the Gumbel, normal and gamma distributions respectively. The three-parameter gamma distribution performs the best as the theoretical distribution function of global alignment scores, as it agrees perfectly well with the distribution of alignment scores. The normal distribution also agrees well with the score distribution frequencies when the shape parameter of the gamma distribution is sufficiently large, for this is the scenario when the normal distribution can be viewed as an approximation of the gamma distribution. Conclusion: We have shown that the optimal global alignment scores of random protein sequences fit the three-parameter gamma distribution function. This would be useful for the inference of homology between sequences whose relationship is unknown, through the evaluation of gamma distribution significance between sequences.
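The optimal global alignment scores fitted above come from the Needleman-Wunsch recurrence. A generic, memory-light sketch of the score computation (illustrative match/mismatch/gap parameters, not the study's scoring scheme):

```python
def needleman_wunsch_score(a, b, match=1, mismatch=-1, gap=-2):
    """Optimal global alignment score (Needleman-Wunsch recurrence,
    linear gap cost), keeping only two DP rows in memory."""
    prev = [j * gap for j in range(len(b) + 1)]   # aligning "" to b[:j]
    for i, ca in enumerate(a, 1):
        curr = [i * gap]                          # aligning a[:i] to ""
        for j, cb in enumerate(b, 1):
            diag = prev[j - 1] + (match if ca == cb else mismatch)
            curr.append(max(diag, prev[j] + gap, curr[j - 1] + gap))
        prev = curr
    return prev[-1]
```

Fitting a gamma distribution to scores of many shuffled sequence pairs, as the study does, would then only require calling this function in a loop.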

  4. Spectrophotometric determination of fluoxetine by molecularly imprinted polypyrrole and optimization by experimental design, artificial neural network and genetic algorithm

    Science.gov (United States)

    Nezhadali, Azizollah; Motlagh, Maryam Omidvar; Sadeghzadeh, Samira

    2018-02-01

    A selective method based on molecularly imprinted polymer (MIP) solid-phase extraction (SPE) using UV-Vis spectrophotometry as a detection technique was developed for the determination of fluoxetine (FLU) in pharmaceutical and human serum samples. The MIPs were synthesized using pyrrole as a functional monomer in the presence of FLU as a template molecule. The factors affecting the preparation and extraction ability of the MIP, such as amount of sorbent, initiator concentration, monomer-to-template ratio, uptake shaking rate, uptake time, washing buffer pH, take shaking rate, taking time and polymerization time, were considered for optimization. First, a Plackett-Burman design (PBD) consisting of 12 randomized runs was applied to determine the influence of each factor. The other optimization processes were performed using central composite design (CCD), artificial neural network (ANN) and genetic algorithm (GA). Under optimal conditions the calibration curve showed linearity over a concentration range of 10^-7-10^-8 M with a correlation coefficient (R^2) of 0.9970. The limit of detection (LOD) for FLU was 6.56 × 10^-9 M. The repeatability of the method was 1.61%. The synthesized MIP sorbent showed good selectivity and sensitivity toward FLU. The MIP/SPE method was successfully used for the determination of FLU in pharmaceutical, serum and plasma samples.

  5. Spectrophotometric determination of fluoxetine by molecularly imprinted polypyrrole and optimization by experimental design, artificial neural network and genetic algorithm.

    Science.gov (United States)

    Nezhadali, Azizollah; Motlagh, Maryam Omidvar; Sadeghzadeh, Samira

    2018-02-05

    A selective method based on molecularly imprinted polymer (MIP) solid-phase extraction (SPE) using UV-Vis spectrophotometry as a detection technique was developed for the determination of fluoxetine (FLU) in pharmaceutical and human serum samples. The MIPs were synthesized using pyrrole as a functional monomer in the presence of FLU as a template molecule. The factors affecting the preparation and extraction ability of the MIP, such as amount of sorbent, initiator concentration, monomer-to-template ratio, uptake shaking rate, uptake time, washing buffer pH, take shaking rate, taking time and polymerization time, were considered for optimization. First, a Plackett-Burman design (PBD) consisting of 12 randomized runs was applied to determine the influence of each factor. The other optimization processes were performed using central composite design (CCD), artificial neural network (ANN) and genetic algorithm (GA). Under optimal conditions the calibration curve showed linearity over a concentration range of 10^-7-10^-8 M with a correlation coefficient (R^2) of 0.9970. The limit of detection (LOD) for FLU was 6.56 × 10^-9 M. The repeatability of the method was 1.61%. The synthesized MIP sorbent showed good selectivity and sensitivity toward FLU. The MIP/SPE method was successfully used for the determination of FLU in pharmaceutical, serum and plasma samples. Copyright © 2017 Elsevier B.V. All rights reserved.
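The calibration figures of merit quoted in this record (slope, R^2, LOD) follow from ordinary least squares. A generic sketch with synthetic data, taking the LOD as 3.3·σ/slope in the common ICH-style convention (the record does not state the authors' exact LOD formula):

```python
def calibrate(conc, signal):
    """Least-squares calibration line signal = m*conc + b, with R^2.
    A generic sketch on illustrative data, not the study's dataset."""
    n = len(conc)
    mx, my = sum(conc) / n, sum(signal) / n
    sxx = sum((x - mx) ** 2 for x in conc)
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, signal))
    m = sxy / sxx
    b = my - m * mx
    ss_res = sum((y - (m * x + b)) ** 2 for x, y in zip(conc, signal))
    ss_tot = sum((y - my) ** 2 for y in signal)
    return m, b, 1.0 - ss_res / ss_tot

def lod(sd_blank, slope):
    """Limit of detection as 3.3 * sigma(blank) / slope (ICH-style)."""
    return 3.3 * sd_blank / slope
```

With a perfect synthetic line the fit returns slope 2, intercept 0 and R^2 = 1, which is the pattern behind the reported R^2 of 0.9970.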

  6. A fast algorithm for determining bounds and accurate approximate p-values of the rank product statistic for replicate experiments.

    Science.gov (United States)

    Heskes, Tom; Eisinga, Rob; Breitling, Rainer

    2014-11-21

    The rank product method is a powerful statistical technique for identifying differentially expressed molecules in replicated experiments. A critical issue in molecule selection is accurate calculation of the p-value of the rank product statistic to adequately address multiple testing. Exact calculation as well as permutation and gamma approximations have been proposed to determine molecule-level significance. These current approaches have serious drawbacks, as they are either computationally burdensome or provide inaccurate estimates in the tail of the p-value distribution. We derive strict lower and upper bounds to the exact p-value along with an accurate approximation that can be used to assess the significance of the rank product statistic in a computationally fast manner. The bounds and the proposed approximation are shown to provide far better accuracy than existing approximate methods in determining tail probabilities, with the slightly conservative upper bound protecting against false positives. We illustrate the proposed method in the context of a recently published analysis on transcriptomic profiling performed in blood. We provide a method to determine upper bounds and accurate approximate p-values of the rank product statistic. The proposed algorithm provides an order of magnitude increase in throughput as compared with current approaches and offers the opportunity to explore new application domains with even larger multiple testing issues. The R code is published in one of the Additional files and is available at http://www.ru.nl/publish/pages/726696/rankprodbounds.zip.
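The rank product statistic itself is cheap; it is the p-value that is expensive. A sketch of the statistic together with the naive Monte-Carlo permutation p-value that the proposed bounds replace (uniform null ranks and illustrative parameters; not the paper's R code):

```python
import random

def rank_product(ranks):
    """Rank product (geometric mean of ranks) across k replicates."""
    rp = 1.0
    for r in ranks:
        rp *= r
    return rp ** (1.0 / len(ranks))

def permutation_pvalue(ranks, n, trials=20000, seed=0):
    """Naive Monte-Carlo p-value: probability that k uniform ranks
    in 1..n give a rank product <= the observed one. This is the
    expensive baseline that the derived bounds make unnecessary."""
    rng = random.Random(seed)
    obs = rank_product(ranks)
    k = len(ranks)
    hits = sum(
        rank_product([rng.randint(1, n) for _ in range(k)]) <= obs
        for _ in range(trials)
    )
    return hits / trials
```

The accuracy problem in the tail is visible here: a molecule ranked first in both of 100-gene replicates needs ~10^4 permutations before a single null sample hits, which is exactly where exact bounds pay off.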

  7. Antares automatic beam alignment system

    International Nuclear Information System (INIS)

    Appert, Q.; Swann, T.; Sweatt, W.; Saxman, A.

    1980-01-01

    Antares is a 24-beam-line CO2 laser system for controlled fusion research, under construction at Los Alamos Scientific Laboratory (LASL). Rapid automatic alignment of this system is required prior to each experiment shot. The alignment requirements, operational constraints, and a developed prototype system are discussed. A visible-wavelength alignment technique is employed that uses a telescope/TV system to view point light sources appropriately located down the beamline. Auto-alignment is accomplished by means of a video centroid tracker, which determines the off-axis error of the point sources. The error is nulled by computer-driven, movable mirrors in a closed-loop system. The light sources are fiber-optic terminations located at key points in the optics path, primarily at the center of large copper mirrors, and remotely illuminated to reduce heating effects.
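The video centroid tracker described above reduces to an intensity-weighted centroid whose offset from the optical axis is the error signal to null. A minimal sketch on a plain 2-D pixel array (a hypothetical data layout, not the Antares implementation):

```python
def centroid(image):
    """Intensity-weighted centroid (row, col) of a 2-D image given as
    a list of rows of non-negative pixel values."""
    total = r_sum = c_sum = 0.0
    for r, row in enumerate(image):
        for c, v in enumerate(row):
            total += v
            r_sum += r * v
            c_sum += c * v
    return r_sum / total, c_sum / total

def off_axis_error(image):
    """Centroid offset from the image centre: the signal a closed
    loop would null by steering the movable mirrors."""
    rows, cols = len(image), len(image[0])
    cr, cc = centroid(image)
    return cr - (rows - 1) / 2, cc - (cols - 1) / 2
```

A spot imaged above-right of centre yields a signed (row, col) error, which the control loop drives to (0, 0).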

  8. Optimal Nonlinear Filter for INS Alignment

    Institute of Scientific and Technical Information of China (English)

    赵瑞; 顾启泰

    2002-01-01

    In the past, all methods for handling inertial navigation system (INS) alignment were sub-optimal. In this paper, particle filtering (PF) is used as an optimal method for solving the INS alignment problem. A sub-optimal two-step filtering algorithm is presented to improve the real-time performance of PF. The approach combines particle filtering with Kalman filtering (KF). Simulation results illustrate the superior performance of these approaches when compared with extended Kalman filtering (EKF).
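A generic bootstrap particle filter illustrates the PF building block referred to above (a scalar random-walk state observed in Gaussian noise; a textbook sketch with illustrative noise parameters, not the paper's two-step PF/KF scheme):

```python
import math
import random

def bootstrap_pf(observations, n_particles=500, q=0.1, r=0.5, seed=1):
    """Generic bootstrap particle filter for a scalar random-walk
    state observed in Gaussian noise; returns the filtered means."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for z in observations:
        # Propagate each particle through the random-walk dynamics.
        particles = [p + rng.gauss(0.0, q) for p in particles]
        # Weight by the Gaussian measurement likelihood.
        weights = [math.exp(-0.5 * ((z - p) / r) ** 2) for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        means.append(sum(w * p for w, p in zip(weights, particles)))
        # Multinomial resampling to fight weight degeneracy.
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return means
```

In an alignment setting the state would be the misalignment angles and the measurement model the velocity/attitude error observation; the filter structure is unchanged.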

  9. Coordination Analysis Using Global Structural Constraints and Alignment-based Local Features

    Science.gov (United States)

    Hara, Kazuo; Shimbo, Masashi; Matsumoto, Yuji

    We propose a hybrid approach to coordinate structure analysis that combines a simple grammar to ensure consistent global structure of coordinations in a sentence, and features based on sequence alignment to capture local symmetry of conjuncts. The weight of the alignment-based features, which in turn determines the score of coordinate structures, is optimized by perceptron training on a given corpus. A bottom-up chart parsing algorithm efficiently finds the best scoring structure, taking both nested and non-overlapping flat coordinations into account. We demonstrate that our approach outperforms existing parsers in coordination scope detection on the Genia corpus.

  10. Alignment of the Measurement Scale Mark during Immersion Hydrometer Calibration Using an Image Processing System

    Directory of Open Access Journals (Sweden)

    Jose Emilio Vargas-Soto

    2013-10-01

    The present work presents an improved method to align the measurement scale mark in an immersion hydrometer calibration system of CENAM, the National Metrology Institute (NMI) of Mexico. The proposed method uses a vision system to align the scale mark of the hydrometer to the surface of the liquid where it is immersed by implementing image processing algorithms. This approach reduces the variability in the apparent mass determination during the hydrostatic weighing in the calibration process, therefore decreasing the relative uncertainty of calibration.

  11. Alignment of the Measurement Scale Mark during Immersion Hydrometer Calibration Using an Image Processing System

    Science.gov (United States)

    Peña-Perez, Luis Manuel; Pedraza-Ortega, Jesus Carlos; Ramos-Arreguin, Juan Manuel; Arriaga, Saul Tovar; Fernandez, Marco Antonio Aceves; Becerra, Luis Omar; Hurtado, Efren Gorrostieta; Vargas-Soto, Jose Emilio

    2013-01-01

    The present work presents an improved method to align the measurement scale mark in an immersion hydrometer calibration system of CENAM, the National Metrology Institute (NMI) of Mexico. The proposed method uses a vision system to align the scale mark of the hydrometer to the surface of the liquid where it is immersed by implementing image processing algorithms. This approach reduces the variability in the apparent mass determination during the hydrostatic weighing in the calibration process, therefore decreasing the relative uncertainty of calibration. PMID:24284770

  12. Aligning the unalignable: bacteriophage whole genome alignments.

    Science.gov (United States)

    Bérard, Sèverine; Chateau, Annie; Pompidor, Nicolas; Guertin, Paul; Bergeron, Anne; Swenson, Krister M

    2016-01-13

    In recent years, many studies focused on the description and comparison of large sets of related bacteriophage genomes. Due to the peculiar mosaic structure of these genomes, few informative approaches for comparing whole genomes exist: dot plots diagrams give a mostly qualitative assessment of the similarity/dissimilarity between two or more genomes, and clustering techniques are used to classify genomes. Multiple alignments are conspicuously absent from this scene. Indeed, whole genome aligners interpret lack of similarity between sequences as an indication of rearrangements, insertions, or losses. This behavior makes them ill-prepared to align bacteriophage genomes, where even closely related strains can accomplish the same biological function with highly dissimilar sequences. In this paper, we propose a multiple alignment strategy that exploits functional collinearity shared by related strains of bacteriophages, and uses partial orders to capture mosaicism of sets of genomes. As classical alignments do, the computed alignments can be used to predict that genes have the same biological function, even in the absence of detectable similarity. The Alpha aligner implements these ideas in visual interactive displays, and is used to compute several examples of alignments of Staphylococcus aureus and Mycobacterium bacteriophages, involving up to 29 genomes. Using these datasets, we prove that Alpha alignments are at least as good as those computed by standard aligners. Comparison with the progressive Mauve aligner - which implements a partial order strategy, but whose alignments are linearized - shows a greatly improved interactive graphic display, while avoiding misalignments. Multiple alignments of whole bacteriophage genomes work, and will become an important conceptual and visual tool in comparative genomics of sets of related strains. 
A Python implementation of Alpha, along with installation instructions for Ubuntu and OSX, is available on Bitbucket (https://bitbucket.org/thekswenson/alpha).

  13. Longitudinal trends of recent HIV-1 infections in Slovenia (1986-2012) determined using an incidence algorithm.

    Science.gov (United States)

    Lunar, Maja M; Matković, Ivana; Tomažič, Janez; Vovko, Tomaž D; Pečavar, Blaž; Poljak, Mario

    2015-09-01

    Resolving the dilemma of whether the rise in the number of HIV diagnoses represents an actual increase in HIV transmissions or is a result of improved HIV surveillance is crucial before implementing national HIV prevention strategies. Annual proportions of recent infections (RI) among newly diagnosed persons infected with HIV-1 in Slovenia during 27 years (1986-2012) were determined using an algorithm consisting of routine baseline CD4 and HIV viral load measurements and the Aware BED EIA HIV-1 Incidence Test (BED test). The study included the highest coverage of persons diagnosed with HIV during the entire duration of an HIV epidemic in a given country/region (71%). Out of 416 patients, 170 (40.9%) had a baseline CD4 cell count less than 200 cells/mm^3 and/or HIV-1 viral load less than 400 copies/ml and were characterized as having a long-standing infection (LSI). The remaining 246 patients were additionally tested using the BED test. Overall, 23% (97/416) of the patients were labeled RI. The characteristics significantly associated with RI were as follows: younger age, acute retroviral syndrome, CDC class A and other than C, no AIDS defining illnesses, HIV test performed in the past, a higher viral load, and a higher CD4 cell count. An interesting trend in the proportion of RI was observed, with a peak in 2005 (47% of RI) and the lowest point in 2008 (12%) in parallel with a rise in the numbers of new HIV diagnoses. This study could help promote the idea of introducing periodic HIV incidence monitoring using a simple and affordable algorithm. © 2015 Wiley Periodicals, Inc.
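The two-stage incidence algorithm described in the abstract can be sketched as a simple rule: baseline CD4/viral-load thresholds first, then the BED test result for the remainder (thresholds as stated in the record; the function itself is an illustrative sketch):

```python
def classify_infection(cd4, viral_load, bed_recent=None):
    """Two-stage classification sketch: baseline CD4 < 200 cells/mm3
    or viral load < 400 copies/ml marks a long-standing infection
    (LSI); the remaining patients are resolved with the BED test
    (True = recent infection, RI)."""
    if cd4 < 200 or viral_load < 400:
        return "LSI"
    if bed_recent is None:
        raise ValueError("BED test result required for this patient")
    return "RI" if bed_recent else "LSI"
```

Applied to a cohort, the first stage filters out the 170/416 patients described above and the BED test decides the rest, yielding the annual RI proportions.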

  14. Vertically aligned CNT growth on a microfabricated silicon heater with integrated temperature control—determination of the activation energy from a continuous thermal gradient

    DEFF Research Database (Denmark)

    Engstrøm, Daniel Southcott; Rupesinghe, Nalin L; Teo, Kenneth B K

    2011-01-01

    Silicon microheaters for local growth of a vertically aligned carbon nanotube (VACNT) were fabricated. The microheaters had a four-point-probe structure that measured the silicon conductivity variations in the heated region which is a measure of the temperature. Through FEM simulations the temper...

  15. Automatic Angular alignment of LHC Collimators

    CERN Document Server

    Azzopardi, Gabriella; Salvachua Ferrando, Belen Maria; Mereghetti, Alessio; Bruce, Roderik; Redaelli, Stefano; CERN. Geneva. ATS Department

    2017-01-01

    The LHC is equipped with a complex collimation system to protect sensitive equipment from unavoidable beam losses. Collimators are positioned close to the beam using an alignment procedure. Until now they have always been aligned assuming no tilt between the collimator and the beam, however, tank misalignments or beam envelope angles at large-divergence locations could introduce a tilt limiting the collimation performance. Three different algorithms were implemented to automatically align a chosen collimator at various angles. The implementation was tested on a number of collimators during this MD and no human intervention was required.

  16. Inner Detector Track Reconstruction and Alignment at the ATLAS Experiment

    CERN Document Server

    Danninger, Matthias; The ATLAS collaboration

    2017-01-01

    The Inner Detector of the ATLAS experiment at the LHC is responsible for reconstructing the trajectories of charged particles (‘tracks’) with high efficiency and accuracy. It consists of three subdetectors, each using a different technology to provide measurement points. An overview of the use of each of these subdetectors in track reconstruction, as well as the algorithmic approaches taken to the specific tasks of pattern recognition and track fitting, is given. The performance of the Inner Detector tracking will be summarised. Of crucial importance for optimal tracking performance is precise knowledge of the relative positions of the detector elements. ATLAS uses a sophisticated, highly granular software alignment procedure to determine and correct for the positions of the sensors, including time-dependent effects appearing within single data runs. This alignment procedure will be discussed in detail, and its effect on Inner Detector tracking for LHC Run 2 proton-proton collision data highlighted.

  17. Prediction of molecular alignment of nucleic acids in aligned media

    International Nuclear Information System (INIS)

    Wu Bin; Petersen, Michael; Girard, Frederic; Tessari, Marco; Wijmenga, Sybren S.

    2006-01-01

    We demonstrate - using the database of all deposited DNA and RNA structures aligned in Pf1 medium and refined with RDCs - that for nucleic acids in a Pf1 medium the electrostatic alignment tensor can be predicted reliably and accurately via a simple and fast calculation based on the gyration tensor spanned out by the phosphodiester atoms. The rhombicity is well predicted over its full range from 0 to 0.66, while the alignment tensor orientation is predicted correctly for rhombicities up to ca. 0.4; for larger rhombicities it appears to deviate somewhat more than expected based on structural noise and measurement error. This simple analytical approach is based on the Debye-Hückel approximation for the electrostatic interaction potential, valid at distances sufficiently far away from a poly-ionic charged surface, a condition naturally enforced when the charges of the alignment medium and the solute are of equal sign, as for nucleic acids in a Pf1-phage medium. For the usual salt strengths and nucleic acid sizes, the Debye-Hückel screening length is smaller than the nucleic acid size, but large enough for the collective of Debye-Hückel spheres to encompass the whole molecule. The molecular alignment is then purely electrostatic, but its functional form under these conditions is similar to that for steric alignment. The proposed analytical expression allows for very fast calculation of the alignment tensor, and hence RDCs, from the conformation of the nucleic acid molecule. This information provides opportunities for improved structure determination of nucleic acids, including better assessment of dynamics in (multi-domain) nucleic acids and the possibility to incorporate alignment tensor prediction from shape directly into the structure calculation process. The procedures are incorporated into MATLAB scripts, which are available on request
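The gyration-tensor shortcut described above can be sketched as follows. For simplicity the example assumes the principal axes already lie along x, y, z so no diagonalization is needed, and uses the common rhombicity convention R = (2/3)·|(Axx - Ayy)/Azz| with |Azz| >= |Ayy| >= |Axx|, which gives the 0-0.66 range quoted in the abstract (a sketch of the idea, not the authors' MATLAB procedure):

```python
def gyration_tensor(coords):
    """3x3 gyration tensor of (x, y, z) points, e.g. the phosphorus
    atoms of a nucleic acid backbone."""
    n = len(coords)
    mean = [sum(p[i] for p in coords) / n for i in range(3)]
    S = [[0.0] * 3 for _ in range(3)]
    for p in coords:
        d = [p[i] - mean[i] for i in range(3)]
        for i in range(3):
            for j in range(3):
                S[i][j] += d[i] * d[j] / n
    return S

def rhombicity(S):
    """Rhombicity of the traceless part of a *diagonal* gyration
    tensor (principal axes assumed along x, y, z; a full treatment
    would diagonalize S first). Ranges from 0 to 2/3."""
    iso = (S[0][0] + S[1][1] + S[2][2]) / 3.0
    axx, ayy, azz = sorted((S[i][i] - iso for i in range(3)), key=abs)
    return (2.0 / 3.0) * abs((axx - ayy) / azz)

# Four pseudo-phosphate positions whose principal axes are x, y, z.
phosphates = [(1, 0, 2), (1, 0, -2), (-1, 0, 2), (-1, 0, -2)]
S = gyration_tensor(phosphates)
R = rhombicity(S)
```

A real pipeline would run this over the phosphodiester atoms of a PDB structure and scale the resulting tensor to predict RDCs.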

  18. CMS Tracker Alignment Performance Results Start-Up 2017

    CERN Document Server

    CMS Collaboration

    2017-01-01

    During the LHC shutdown in Winter 2016/17, the CMS pixel detector, the inner component of the CMS Tracker, was replaced by the Phase-1 upgrade detector. Among other improvements, the new pixel detector consists of four instead of three layers in the central barrel region (BPIX) and three instead of two disks in the endcap regions (FPIX). In this report, performance plots of the first pixel-detector alignment results are presented, which were obtained with cosmic-ray data taken prior to the start of the 2017 LHC pp operation. Alignment constants have been derived using the data collected initially at 0 T and later at 3.8 T magnetic field, down to the level of single module positions in the pixel detector, while keeping the alignment parameters of the strip detector fixed at the values determined at the end of 2016. The complete understanding of the alignment and its biases was derived by using two algorithms, Millepede-II and HipPy. The results confirm each other.

  19. The Analysis of Alpha Beta Pruning and MTD(f) Algorithm to Determine the Best Algorithm to be Implemented at Connect Four Prototype

    Science.gov (United States)

    Tommy, Lukas; Hardjianto, Mardi; Agani, Nazori

    2017-04-01

    Connect Four is a two-player game in which the players take turns dropping discs into a grid, trying to connect four of their own discs next to each other vertically, horizontally, or diagonally. At Connect Four, the computer requires artificial intelligence (AI) in order to play properly like a human. There are many AI algorithms that can be implemented for Connect Four, but the suitable algorithms are unknown. A suitable algorithm is one that is optimal in choosing moves and whose execution time is not slow at a search depth that is deep enough. In this research, analysis and comparison between standard alpha beta (AB) pruning and MTD(f) is carried out on a prototype of Connect Four in terms of optimality (win percentage) and speed (execution time and the number of leaf nodes). Experiments are carried out by running computer versus computer mode with 12 different conditions, i.e. varied search depth (5 through 10) and who moves first. The percentages achieved by MTD(f) based on the experiments are win 45.83%, lose 37.5% and draw 16.67%. In the experiments with search depth 8, MTD(f) execution time is 35.19% faster and it evaluates 56.27% fewer leaf nodes than AB pruning. The results of this research are that MTD(f) is as optimal as AB pruning on the Connect Four prototype, but MTD(f) on average is faster and evaluates fewer leaf nodes than AB pruning. The execution time of MTD(f) is not slow and is much faster than AB pruning at a search depth that is deep enough.
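MTD(f) drives repeated zero-window alpha-beta searches through a transposition table that stores value bounds. A compact sketch on a toy game tree (a stand-in for Connect Four positions, not the authors' prototype):

```python
import math

# Toy game tree: internal nodes map to child lists, leaves map to
# heuristic values. "root" is a maximizing node, "a"/"b" minimize.
TREE = {
    "root": ["a", "b"],
    "a": ["a1", "a2"], "b": ["b1", "b2"],
    "a1": 3, "a2": 5, "b1": 2, "b2": 9,
}

def alphabeta(node, alpha, beta, maximizing, table):
    """Alpha-beta with memory: `table` caches (lower, upper) bounds."""
    lo, hi = table.get(node, (-math.inf, math.inf))
    if lo >= beta:
        return lo
    if hi <= alpha:
        return hi
    alpha, beta = max(alpha, lo), min(beta, hi)
    kids = TREE[node]
    if not isinstance(kids, list):          # leaf node
        return kids
    if maximizing:
        g, a = -math.inf, alpha
        for c in kids:
            g = max(g, alphabeta(c, a, beta, False, table))
            a = max(a, g)
            if g >= beta:
                break                       # beta cutoff
    else:
        g, b = math.inf, beta
        for c in kids:
            g = min(g, alphabeta(c, alpha, b, True, table))
            b = min(b, g)
            if g <= alpha:
                break                       # alpha cutoff
    if g <= alpha:
        table[node] = (lo, g)               # fail-low: new upper bound
    elif g >= beta:
        table[node] = (g, hi)               # fail-high: new lower bound
    else:
        table[node] = (g, g)                # exact value
    return g

def mtdf(root, first_guess=0):
    """MTD(f): converge on the minimax value with zero-window probes."""
    g, lb, ub, table = first_guess, -math.inf, math.inf, {}
    while lb < ub:
        beta = g + 1 if g == lb else g
        g = alphabeta(root, beta - 1, beta, True, table)
        if g < beta:
            ub = g
        else:
            lb = g
    return g
```

The closer `first_guess` is to the true minimax value, the fewer zero-window probes MTD(f) needs, which is the source of the speedup reported above.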

  20. SWAMP+: multiple subsequence alignment using associative massive parallelism

    Energy Technology Data Exchange (ETDEWEB)

    Steinfadt, Shannon Irene [Los Alamos National Laboratory; Baker, Johnnie W [KENT STATE UNIV.

    2010-10-18

    A new parallel algorithm SWAMP+ incorporates the Smith-Waterman sequence alignment on an associative parallel model known as ASC. It is a highly sensitive parallel approach that expands traditional pairwise sequence alignment. This is the first parallel algorithm to provide multiple non-overlapping, non-intersecting subsequence alignments with the accuracy of Smith-Waterman. The efficient algorithm provides multiple alignments similar to BLAST while creating a better workflow for the end users. The parallel portions of the code run in O(m+n) time using m processors. When m = n, the algorithmic analysis becomes O(n) with a coefficient of two, yielding a linear speedup. Implementation of the algorithm on the SIMD ClearSpeed CSX620 confirms this theoretical linear speedup with real timings.
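The Smith-Waterman recurrence that SWAMP+ parallelizes can be stated serially in a few lines (illustrative scoring parameters; the parallel version evaluates the same cells along anti-diagonals to reach the O(m+n) step count quoted above):

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Best local alignment score (serial Smith-Waterman, linear
    gap cost, two DP rows)."""
    prev = [0] * (len(b) + 1)
    best = 0
    for ca in a:
        curr = [0]
        for j, cb in enumerate(b, 1):
            score = max(
                0,                                              # restart
                prev[j - 1] + (match if ca == cb else mismatch),  # diag
                prev[j] + gap,                                  # gap in b
                curr[j - 1] + gap,                              # gap in a
            )
            curr.append(score)
            best = max(best, score)
        prev = curr
    return best
```

SWAMP+ additionally masks out the cells of each reported alignment and re-runs the recurrence to obtain multiple non-overlapping subsequence alignments.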

  1. Automated Ontology Alignment with Fuselets for Community of Interest (COI) Integration

    National Research Council Canada - National Science Library

    Starz, James; Roberts, Joe

    2008-01-01

    Discusses the ontology alignment problem by presenting a tool called Ontrapro (the Ontology Translation Protocol), which allows users to apply a myriad of ontology alignment algorithms in an iterative fashion...

  2. Multiple sequence alignment accuracy and phylogenetic inference.

    Science.gov (United States)

    Ogden, T Heath; Rosenberg, Michael S

    2006-04-01

    Phylogenies are often thought to be more dependent upon the specifics of the sequence alignment rather than on the method of reconstruction. Simulation of sequences containing insertion and deletion events was performed in order to determine the role that alignment accuracy plays during phylogenetic inference. Data sets were simulated for pectinate, balanced, and random tree shapes under different conditions (ultrametric equal branch length, ultrametric random branch length, nonultrametric random branch length). Comparisons between hypothesized alignments and true alignments enabled determination of two measures of alignment accuracy, that of the total data set and that of individual branches. In general, our results indicate that as alignment error increases, topological accuracy decreases. This trend was much more pronounced for data sets derived from more pectinate topologies. In contrast, for balanced, ultrametric, equal branch length tree shapes, alignment inaccuracy had little average effect on tree reconstruction. These conclusions are based on average trends of many analyses under different conditions, and any one specific analysis, independent of the alignment accuracy, may recover very accurate or inaccurate topologies. Maximum likelihood and Bayesian, in general, outperformed neighbor joining and maximum parsimony in terms of tree reconstruction accuracy. Results also indicated that as the length of the branch and of the neighboring branches increase, alignment accuracy decreases, and the length of the neighboring branches is the major factor in topological accuracy. Thus, multiple-sequence alignment can be an important factor in downstream effects on topological reconstruction.

  3. Ontology Alignment Repair through Modularization and Confidence-Based Heuristics.

    Directory of Open Access Journals (Sweden)

    Emanuel Santos

    Ontology Matching aims at identifying a set of semantic correspondences, called an alignment, between related ontologies. In recent years, there has been a growing interest in efficient and effective matching methods for large ontologies. However, alignments produced for large ontologies are often logically incoherent. It was only recently that the use of repair techniques to improve the coherence of ontology alignments began to be explored. This paper presents a novel modularization technique for ontology alignment repair which extracts fragments of the input ontologies that only contain the necessary classes and relations to resolve all detectable incoherences. The paper also presents an alignment repair algorithm that uses a global repair strategy to minimize both the degree of incoherence and the number of mappings removed from the alignment, while overcoming the scalability problem by employing the proposed modularization technique. Our evaluation shows that our modularization technique produces significantly smaller fragments of the ontologies and that our repair algorithm produces more complete alignments than other current alignment repair systems, while obtaining an equivalent degree of incoherence. Additionally, we also present a variant of our repair algorithm that makes use of the confidence values of the mappings to improve alignment repair. Our repair algorithm was implemented as part of AgreementMakerLight, a free and open-source ontology matching system.

  4. Ontology Alignment Repair through Modularization and Confidence-Based Heuristics.

    Science.gov (United States)

    Santos, Emanuel; Faria, Daniel; Pesquita, Catia; Couto, Francisco M

    2015-01-01

    Ontology Matching aims at identifying a set of semantic correspondences, called an alignment, between related ontologies. In recent years, there has been a growing interest in efficient and effective matching methods for large ontologies. However, alignments produced for large ontologies are often logically incoherent. It was only recently that the use of repair techniques to improve the coherence of ontology alignments began to be explored. This paper presents a novel modularization technique for ontology alignment repair which extracts fragments of the input ontologies that only contain the necessary classes and relations to resolve all detectable incoherences. The paper also presents an alignment repair algorithm that uses a global repair strategy to minimize both the degree of incoherence and the number of mappings removed from the alignment, while overcoming the scalability problem by employing the proposed modularization technique. Our evaluation shows that our modularization technique produces significantly smaller fragments of the ontologies and that our repair algorithm produces more complete alignments than other current alignment repair systems, while obtaining an equivalent degree of incoherence. Additionally, we also present a variant of our repair algorithm that makes use of the confidence values of the mappings to improve alignment repair. Our repair algorithm was implemented as part of AgreementMakerLight, a free and open-source ontology matching system.
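The confidence-based repair idea can be caricatured as a greedy loop: while conflicts remain, drop the lowest-confidence mapping involved. A deliberately simplified sketch in which incoherence detection (normally done by logical reasoning over the extracted ontology modules) is assumed to have already produced the conflict sets:

```python
def repair_alignment(mappings, conflicts):
    """Greedy confidence-based repair sketch. `mappings` maps a
    mapping id to its confidence; `conflicts` is a list of sets of
    mapping ids that cannot all be kept. For each unresolved
    conflict, drop its lowest-confidence mapping."""
    removed = set()
    for clash in conflicts:
        if removed.isdisjoint(clash):                  # still unresolved
            removed.add(min(clash, key=mappings.get))  # drop the weakest
    return {m: c for m, c in mappings.items() if m not in removed}

mappings = {"m1": 0.9, "m2": 0.4, "m3": 0.7}
conflicts = [{"m1", "m2"}, {"m2", "m3"}]
repaired = repair_alignment(mappings, conflicts)
```

A global strategy such as the one in the paper would instead search over removal sets to jointly minimize incoherence and the number of removed mappings; the greedy pass only illustrates how confidence values break ties.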

  5. A MEMORY EFFICIENT HARDWARE BASED PATTERN MATCHING AND PROTEIN ALIGNMENT SCHEMES FOR HIGHLY COMPLEX DATABASES

    OpenAIRE

    Bennet, M.Anto; Sankaranarayanan, S.; Deepika, M.; Nanthini, N.; Bhuvaneshwari, S.; Priyanka, M.

    2017-01-01

    Protein sequence alignment, used to find correlations between different species, genetic mutations, etc., is the most computationally intensive task when performing protein comparison. To speed up the alignment, Systolic Arrays (SAs) have been used. In order to avoid the internal-loop problem, which reduces performance, a pipeline interleaving strategy has been presented. This strategy is applied to an SA for the Smith-Waterman (SW) algorithm, which is an alignment algorithm to locally align two proteins...
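The Smith-Waterman recurrence that such systolic arrays accelerate can be sketched in software as a plain dynamic-programming scorer. This is a minimal illustration of the algorithm itself, not of the hardware scheme; the scoring values are assumptions for illustration.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Return the best local alignment score between sequences a and b."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]   # DP matrix, first row/col stay 0
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,                      # local: never go negative
                          H[i - 1][j - 1] + s,    # match/mismatch
                          H[i - 1][j] + gap,      # gap in b
                          H[i][j - 1] + gap)      # gap in a
            best = max(best, H[i][j])
    return best

score = smith_waterman("ACGT", "ACGT")   # 4 matches at +2 each -> 8
```

The inner cell update is exactly the computation one processing element of the systolic array performs per clock cycle; the pipeline-interleaving strategy in the paper addresses the dependency of each cell on its diagonal neighbour.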

  6. Numerical optimization of alignment reproducibility for customizable surgical guides.

    Science.gov (United States)

    Kroes, Thomas; Valstar, Edward; Eisemann, Elmar

    2015-10-01

    Computer-assisted orthopedic surgery aims at minimizing invasiveness, postoperative pain, and morbidity with computer-assisted preoperative planning and intra-operative guidance techniques, of which camera-based navigation and patient-specific templates (PST) are the most common. PSTs are one-time templates that guide the surgeon initially in cutting slits or drilling holes. This method can be extended to reusable and customizable surgical guides (CSG), which can be adapted to the patients' bone. Determining the right set of CSG input parameters by hand is a challenging task, given the vast amount of input parameter combinations and the complex physical interaction between the PST/CSG and the bone. This paper introduces a novel algorithm to solve the problem of choosing the right set of input parameters. Our approach predicts how well a CSG instance is able to reproduce the planned alignment based on a physical simulation and uses a genetic optimization algorithm to determine optimal configurations. We validate our technique with a prototype of a pin-based CSG and nine rapid prototyped distal femora. The proposed optimization technique has been compared to manual optimization by experts, as well as participants with domain experience. Using the optimization technique, the alignment errors remained within practical boundaries of 1.2 mm translation and [Formula: see text] rotation error. In all cases, the proposed method outperformed manual optimization. Manually optimizing CSG parameters turns out to be a counterintuitive task. Even after training, subjects with and without anatomical background fail in choosing appropriate CSG configurations. Our optimization algorithm ensures that the CSG is configured correctly, and we could demonstrate that the intended alignment of the CSG is accurately reproduced on all tested bone geometries.
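The genetic optimization step described above can be sketched generically as follows. The paper couples such a loop to a physical simulation of the CSG/bone interaction; that simulation is replaced here by a stand-in sphere fitness function, and all parameter values are illustrative assumptions.

```python
import random

def genetic_optimize(fitness, dim, pop_size=20, generations=100,
                     mutation=0.2, seed=7):
    """Minimize `fitness` over dim-dimensional vectors (lower is better)."""
    random.seed(seed)
    pop = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)                      # best candidates first
        parents = pop[:pop_size // 2]              # selection: keep best half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, dim) if dim > 1 else 0
            child = a[:cut] + b[cut:]              # one-point crossover
            if random.random() < mutation:         # occasional mutation
                i = random.randrange(dim)
                child[i] += random.gauss(0.0, 0.1)
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

# Stand-in fitness: distance of the parameter vector from the optimum at 0.
best = genetic_optimize(lambda v: sum(x * x for x in v), dim=3)
```

In the paper's setting, the fitness of a candidate CSG configuration would instead be the predicted alignment error from the physical simulation.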

  7. STELLAR: fast and exact local alignments

    Directory of Open Access Journals (Sweden)

    Weese David

    2011-10-01

    Full Text Available Abstract Background Large-scale comparison of genomic sequences requires reliable tools for the search of local alignments. Practical local aligners are in general fast, but heuristic, and hence sometimes miss significant matches. Results We present here the local pairwise aligner STELLAR that has full sensitivity for ε-alignments, i.e. guarantees to report all local alignments of a given minimal length and maximal error rate. The aligner is composed of two steps, filtering and verification. We apply the SWIFT algorithm for lossless filtering, and have developed a new verification strategy that we prove to be exact. Our results on simulated and real genomic data confirm and quantify the conjecture that heuristic tools like BLAST or BLAT miss a large percentage of significant local alignments. Conclusions STELLAR is very practical and fast on very long sequences which makes it a suitable new tool for finding local alignments between genomic sequences under the edit distance model. Binaries are freely available for Linux, Windows, and Mac OS X at http://www.seqan.de/projects/stellar. The source code is freely distributed with the SeqAn C++ library version 1.3 and later at http://www.seqan.de.
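The ε-alignment guarantee stated above can be expressed as a simple predicate: an alignment qualifies if it reaches a minimal length and its error rate does not exceed ε. The numeric defaults here are illustrative assumptions, not STELLAR's actual defaults.

```python
def is_epsilon_alignment(length, errors, min_length=30, epsilon=0.05):
    """True if a local alignment meets the minimal length and the
    maximal error rate (errors per alignment column) that STELLAR
    guarantees to report exhaustively."""
    return length >= min_length and errors / length <= epsilon
```

STELLAR's claim of full sensitivity means every alignment satisfying this predicate is reported, which heuristic tools like BLAST or BLAT do not guarantee.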

  8. Adaptive Processing for Sequence Alignment

    KAUST Repository

    Zidan, Mohammed A.; Bonny, Talal; Salama, Khaled N.

    2012-01-01

    Disclosed are various embodiments for adaptive processing for sequence alignment. In one embodiment, among others, a method includes obtaining a query sequence and a plurality of database sequences. A first portion of the plurality of database sequences is distributed to a central processing unit (CPU) and a second portion of the plurality of database sequences is distributed to a graphical processing unit (GPU) based upon a predetermined splitting ratio associated with the plurality of database sequences, where the database sequences of the first portion are shorter than the database sequences of the second portion. A first alignment score for the query sequence is determined with the CPU based upon the first portion of the plurality of database sequences and a second alignment score for the query sequence is determined with the GPU based upon the second portion of the plurality of database sequences.
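The length-based CPU/GPU work split described in the claim can be sketched as follows: database sequences are ordered by length and divided at a predetermined splitting ratio, with the shorter sequences going to the CPU. The ratio value here is an illustrative assumption.

```python
def split_database(sequences, ratio=0.5):
    """Partition database sequences by length at a predetermined
    splitting ratio; returns (cpu_portion, gpu_portion)."""
    ordered = sorted(sequences, key=len)        # shortest sequences first
    cut = int(len(ordered) * ratio)             # predetermined splitting ratio
    return ordered[:cut], ordered[cut:]         # CPU gets the shorter ones

cpu, gpu = split_database(["AC", "ACGTACGT", "ACG", "A"], ratio=0.5)
```

Each portion would then be scored against the query on its respective processor, and the per-device alignment scores merged afterwards.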

  9. Adaptive Processing for Sequence Alignment

    KAUST Repository

    Zidan, Mohammed A.

    2012-01-26

    Disclosed are various embodiments for adaptive processing for sequence alignment. In one embodiment, among others, a method includes obtaining a query sequence and a plurality of database sequences. A first portion of the plurality of database sequences is distributed to a central processing unit (CPU) and a second portion of the plurality of database sequences is distributed to a graphical processing unit (GPU) based upon a predetermined splitting ratio associated with the plurality of database sequences, where the database sequences of the first portion are shorter than the database sequences of the second portion. A first alignment score for the query sequence is determined with the CPU based upon the first portion of the plurality of database sequences and a second alignment score for the query sequence is determined with the GPU based upon the second portion of the plurality of database sequences.

  10. MUON DETECTORS: ALIGNMENT

    CERN Multimedia

    G. Gomez and J. Pivarski

    2011-01-01

    Alignment efforts in the first few months of 2011 have shifted away from providing alignment constants (now a well established procedure) and focussed on some critical remaining issues. The single most important task left was to understand the systematic differences observed between the track-based (TB) and hardware-based (HW) barrel alignments: a systematic difference in r-φ and in z, which grew as a function of z, and which amounted to ~4-5 mm differences going from one end of the barrel to the other. This difference is now understood to be caused by the tracker alignment. The systematic differences disappear when the track-based barrel alignment is performed using the new “twist-free” tracker alignment. This removes the largest remaining source of systematic uncertainty. Since the barrel alignment is based on hardware, it does not suffer from the tracker twist. However, untwisting the tracker causes endcap disks (which are aligned ...

  11. Cryo-EM image alignment based on nonuniform fast Fourier transform.

    Science.gov (United States)

    Yang, Zhengfan; Penczek, Pawel A

    2008-08-01

    In single particle analysis, two-dimensional (2-D) alignment is a fundamental step intended to put into register various particle projections of biological macromolecules collected at the electron microscope. The efficiency and quality of three-dimensional (3-D) structure reconstruction largely depends on the computational speed and alignment accuracy of this crucial step. In order to improve the performance of alignment, we introduce a new method that takes advantage of the highly accurate interpolation scheme based on the gridding method, a version of the nonuniform fast Fourier transform, and utilizes a multi-dimensional optimization algorithm for the refinement of the orientation parameters. Using simulated data, we demonstrate that by using less than half of the sample points and taking twice the runtime, our new 2-D alignment method achieves dramatically better alignment accuracy than that based on quadratic interpolation. We also apply our method to image to volume registration, the key step in the single particle EM structure refinement protocol. We find that in this case the accuracy of the method not only surpasses the accuracy of the commonly used real-space implementation, but results are achieved in much shorter time, making gridding-based alignment a perfect candidate for efficient structure determination in single particle analysis.

  12. Cryo-EM image alignment based on nonuniform fast Fourier transform

    International Nuclear Information System (INIS)

    Yang Zhengfan; Penczek, Pawel A.

    2008-01-01

    In single particle analysis, two-dimensional (2-D) alignment is a fundamental step intended to put into register various particle projections of biological macromolecules collected at the electron microscope. The efficiency and quality of three-dimensional (3-D) structure reconstruction largely depends on the computational speed and alignment accuracy of this crucial step. In order to improve the performance of alignment, we introduce a new method that takes advantage of the highly accurate interpolation scheme based on the gridding method, a version of the nonuniform fast Fourier transform, and utilizes a multi-dimensional optimization algorithm for the refinement of the orientation parameters. Using simulated data, we demonstrate that by using less than half of the sample points and taking twice the runtime, our new 2-D alignment method achieves dramatically better alignment accuracy than that based on quadratic interpolation. We also apply our method to image to volume registration, the key step in the single particle EM structure refinement protocol. We find that in this case the accuracy of the method not only surpasses the accuracy of the commonly used real-space implementation, but results are achieved in much shorter time, making gridding-based alignment a perfect candidate for efficient structure determination in single particle analysis

  13. Multiple network alignment on quantum computers

    Science.gov (United States)

    Daskin, Anmer; Grama, Ananth; Kais, Sabre

    2014-12-01

    Comparative analyses of graph-structured datasets underlie diverse problems. Examples of these problems include identification of conserved functional components (biochemical interactions) across species, structural similarity of large biomolecules, and recurring patterns of interactions in social networks. A large class of such analysis methods quantifies the topological similarity of nodes across networks. The resulting correspondence of nodes across networks, also called node alignment, can be used to identify invariant subgraphs across the input graphs. Given k graphs as input, alignment algorithms use topological information to assign a similarity score to each k-tuple of nodes, with one element (node) drawn from each of the input graphs. Nodes are considered similar if their neighbors are also similar. An alternate, equivalent view of these network alignment algorithms is to consider the Kronecker product of the input graphs and to identify high-ranked nodes in the Kronecker product graph. Conventional methods such as PageRank and HITS (Hypertext-Induced Topic Selection) can be used for this purpose. These methods typically require computation of the principal eigenvector of a suitably modified Kronecker product matrix of the input graphs. We adopt this alternate view of the problem to address the problem of multiple network alignment. Using the phase estimation algorithm, we show that the multiple network alignment problem can be efficiently solved on quantum computers. We characterize the accuracy and performance of our method and show that it can deliver exponential speedups over conventional (non-quantum) methods.
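The conventional (non-quantum) baseline described above, computing the principal eigenvector of the Kronecker product of two adjacency matrices, can be sketched for the pairwise case with power iteration. The identity (A ⊗ B) vec(S) = vec(A S Bᵀ) lets us avoid forming the Kronecker product explicitly; this is a generic illustration, not the paper's quantum method.

```python
def kronecker_power_iteration(A, B, iters=50):
    """Node-pair similarity scores for adjacency matrices A (n x n) and
    B (m x m) via power iteration on their Kronecker product, computed
    implicitly as T = A S B^T per step."""
    n, m = len(A), len(B)
    S = [[1.0] * m for _ in range(n)]           # uniform initial scores
    for _ in range(iters):
        # neighbours similar -> nodes similar: T[i][j] = (A S B^T)[i][j]
        T = [[sum(A[i][k] * S[k][l] * B[j][l]
                  for k in range(n) for l in range(m))
              for j in range(m)] for i in range(n)]
        norm = sum(abs(v) for row in T for v in row) or 1.0
        S = [[v / norm for v in row] for row in T]   # normalize each step
    return S

# Two isomorphic 2-node graphs: every pairing ends up equally ranked.
S = kronecker_power_iteration([[0, 1], [1, 0]], [[0, 1], [1, 0]])
```

High-scoring entries S[i][j] indicate candidate node correspondences; the paper's contribution is performing this eigenvector computation with quantum phase estimation instead.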

  14. Adaptive step-size algorithm for Fourier beam-propagation method with absorbing boundary layer of auto-determined width.

    Science.gov (United States)

    Learn, R; Feigenbaum, E

    2016-06-01

    Two algorithms that enhance the utility of the absorbing boundary layer are presented, mainly in the framework of the Fourier beam-propagation method. One is an automated boundary layer width selector that chooses a near-optimal boundary size based on the initial beam shape. The second algorithm adjusts the propagation step sizes based on the beam shape at the beginning of each step in order to reduce aliasing artifacts.

  15. Band alignment of atomic layer deposited MgO/Zn0.8Al0.2O heterointerface determined by charge corrected X-ray photoelectron spectroscopy

    Science.gov (United States)

    Yan, Baojun; Liu, Shulin; Yang, Yuzhen; Heng, Yuekun

    2016-05-01

    Pure magnesium oxide (MgO) and zinc oxide doped with aluminum oxide (Zn0.8Al0.2O) were prepared via atomic layer deposition. We studied the structure and band gap of the bulk Zn0.8Al0.2O material by X-ray diffraction (XRD) and the Tauc method, and the band offsets and alignment of the atomic layer deposited MgO/Zn0.8Al0.2O heterointerface were investigated systematically using X-ray photoelectron spectroscopy (XPS). Different methodologies, such as a neutralizing electron gun, recalibration to the C 1s peak, and the zero charging method, were applied to recover the actual positions of the core levels in the insulating materials, which are easily influenced by differential charging phenomena. A schematic band alignment diagram, the valence band offset (ΔEV), and the conduction band offset (ΔEC) for the interface of the MgO/Zn0.8Al0.2O heterostructure have been constructed. An accurate value of ΔEV = 0.72 ± 0.11 eV was obtained from various combinations of core levels of heterojunctions with varied MgO thickness. Given the experimental band gaps of 7.83 eV for MgO and 5.29 eV for Zn0.8Al0.2O, a type-II heterojunction with a ΔEC of 3.26 ± 0.11 eV was found. Band offset and alignment studies of these heterojunctions are important for the design of various optoelectronic devices based on such heterointerfaces.
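The reported conduction-band offset follows arithmetically from the two band gaps and the measured valence-band offset quoted above: for this type-II (staggered) alignment the offsets satisfy ΔEC = (Eg,MgO − Eg,ZAO) + ΔEV.

```python
# Numbers taken directly from the abstract above.
Eg_MgO = 7.83   # eV, experimental band gap of MgO
Eg_ZAO = 5.29   # eV, experimental band gap of Zn0.8Al0.2O
dEv    = 0.72   # eV, measured valence-band offset

# Type-II (staggered) alignment: the gap difference and the VBO add up.
dEc = (Eg_MgO - Eg_ZAO) + dEv
print(round(dEc, 2))   # 3.26 eV, matching the reported conduction-band offset
```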

  16. Real-time driver fatigue detection based on face alignment

    Science.gov (United States)

    Tao, Huanhuan; Zhang, Guiying; Zhao, Yong; Zhou, Yi

    2017-07-01

    The performance and robustness of fatigue detection largely decrease if the driver wears glasses. To address this issue, this paper proposes a practical driver fatigue detection method based on the face alignment at 3000 FPS algorithm. Firstly, the eye regions of the driver are localized by exploiting six landmarks surrounding each eye. Secondly, HOG features of the extracted eye regions are calculated and fed into an SVM classifier to recognize the eye state. Finally, the value of PERCLOS is calculated to determine whether the driver is drowsy. An alarm is generated if the eyes remain closed for a specified period of time. The accuracy and real-time performance on test videos with different drivers demonstrate that the proposed algorithm is robust and achieves better accuracy for driver fatigue detection than some previous methods.
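The PERCLOS decision step described above reduces to computing the fraction of recent frames in which the eyes were classified as closed and comparing it to a drowsiness threshold. The threshold value here is an illustrative assumption, not a value from the paper.

```python
def perclos(eye_states, threshold=0.4):
    """PERCLOS over a window of frames.

    eye_states: sequence of booleans, True = eye classified closed
                (e.g. by the SVM on HOG features of the eye region).
    Returns (perclos_value, is_drowsy).
    """
    if not eye_states:
        return 0.0, False
    value = sum(eye_states) / len(eye_states)   # fraction of closed frames
    return value, value >= threshold
```

In the full pipeline this window would slide over the video stream, and the alarm would fire when is_drowsy stays True for the specified period.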

  17. Cascaded face alignment via intimacy definition feature

    Science.gov (United States)

    Li, Hailiang; Lam, Kin-Man; Chiu, Man-Yau; Wu, Kangheng; Lei, Zhibin

    2017-09-01

    Recent years have witnessed the emerging popularity of regression-based face aligners, which directly learn mappings between facial appearance and shape-increment manifolds. We propose a random-forest based, cascaded regression model for face alignment using a locally lightweight feature, namely the intimacy definition feature. This feature is more discriminative than the pose-indexed feature, more efficient than the histogram of oriented gradients and scale-invariant feature transform features, and more compact than the local binary feature (LBF). Experimental validation of our algorithm shows that our approach achieves state-of-the-art performance when tested on some challenging datasets. Compared with the LBF-based algorithm, our method achieves about twice the speed and a 20% improvement in alignment accuracy, while reducing the memory requirement by an order of magnitude.

  18. Anatomically Plausible Surface Alignment and Reconstruction

    DEFF Research Database (Denmark)

    Paulsen, Rasmus R.; Larsen, Rasmus

    2010-01-01

    With the increasing clinical use of 3D surface scanners, there is a need for accurate and reliable algorithms that can produce anatomically plausible surfaces. In this paper, a combined method for surface alignment and reconstruction is proposed. It is based on an implicit surface representation...

  19. Face Alignment Using Boosting and Evolutionary Search

    NARCIS (Netherlands)

    Zhang, Hua; Liu, Duanduan; Poel, Mannes; Nijholt, Antinus; Zha, H.; Taniguchi, R.-I.; Maybank, S.

    2010-01-01

    In this paper, we present a face alignment approach using granular features, boosting, and an evolutionary search algorithm. Active Appearance Models (AAM) integrate a shape-texture-combined morphable face model into an efficient fitting strategy, then Boosting Appearance Models (BAM) consider the

  20. Alignment methods: strategies, challenges, benchmarking, and comparative overview.

    Science.gov (United States)

    Löytynoja, Ari

    2012-01-01

    Comparative evolutionary analyses of molecular sequences are solely based on the identities and differences detected between homologous characters. Errors in this homology statement, that is errors in the alignment of the sequences, are likely to lead to errors in the downstream analyses. Sequence alignment and phylogenetic inference are tightly connected and many popular alignment programs use the phylogeny to divide the alignment problem into smaller tasks. They then neglect the phylogenetic tree, however, and produce alignments that are not evolutionarily meaningful. The use of phylogeny-aware methods reduces the error but the resulting alignments, with evolutionarily correct representation of homology, can challenge the existing practices and methods for viewing and visualising the sequences. The inter-dependency of alignment and phylogeny can be resolved by joint estimation of the two; methods based on statistical models allow for inferring the alignment parameters from the data and correctly take into account the uncertainty of the solution but remain computationally challenging. Widely used alignment methods are based on heuristic algorithms and unlikely to find globally optimal solutions. The whole concept of one correct alignment for the sequences is questionable, however, as there typically exist vast numbers of alternative, roughly equally good alignments that should also be considered. This uncertainty is hidden by many popular alignment programs and is rarely correctly taken into account in the downstream analyses. The quest for finding and improving the alignment solution is complicated by the lack of suitable measures of alignment goodness. The difficulty of comparing alternative solutions also affects benchmarks of alignment methods and the results strongly depend on the measure used. As the effects of alignment error cannot be predicted, comparing the alignments' performance in downstream analyses is recommended.

  1. Node fingerprinting: an efficient heuristic for aligning biological networks.

    Science.gov (United States)

    Radu, Alex; Charleston, Michael

    2014-10-01

    With the continuing increase in availability of biological data and improvements to biological models, biological network analysis has become a promising area of research. An emerging technique for the analysis of biological networks is through network alignment. Network alignment has been used to calculate genetic distance, similarities between regulatory structures, and the effect of external forces on gene expression, and to depict conditional activity of expression modules in cancer. Network alignment is algorithmically complex, and therefore we must rely on heuristics, ideally as efficient and accurate as possible. The majority of current techniques for network alignment rely on precomputed information, such as with protein sequence alignment, or on tunable network alignment parameters, which may introduce an increased computational overhead. Our presented algorithm, which we call Node Fingerprinting (NF), is appropriate for performing global pairwise network alignment without precomputation or tuning, can be fully parallelized, and is able to quickly compute an accurate alignment between two biological networks. It has performed as well as or better than existing algorithms on biological and simulated data, and with fewer computational resources. The algorithmic validation performed demonstrates the low computational resource requirements of NF.

  2. Automatic Determination of the Need for Intravenous Contrast in Musculoskeletal MRI Examinations Using IBM Watson's Natural Language Processing Algorithm.

    Science.gov (United States)

    Trivedi, Hari; Mesterhazy, Joseph; Laguna, Benjamin; Vu, Thienkhai; Sohn, Jae Ho

    2018-04-01

    Magnetic resonance imaging (MRI) protocoling can be time- and resource-intensive, and protocols can often be suboptimal dependent upon the expertise or preferences of the protocoling radiologist. Providing a best-practice recommendation for an MRI protocol has the potential to improve efficiency and decrease the likelihood of a suboptimal or erroneous study. The goal of this study was to develop and validate a machine learning-based natural language classifier that can automatically assign the use of intravenous contrast for musculoskeletal MRI protocols based upon the free-text clinical indication of the study, thereby improving efficiency of the protocoling radiologist and potentially decreasing errors. We utilized a deep learning-based natural language classification system from IBM Watson, a question-answering supercomputer that gained fame after challenging the best human players on Jeopardy! in 2011. We compared this solution to a series of traditional machine learning-based natural language processing techniques that utilize a term-document frequency matrix. Each classifier was trained with 1240 MRI protocols plus their respective clinical indications and validated with a test set of 280. Ground truth of contrast assignment was obtained from the clinical record. For evaluation of inter-reader agreement, a blinded second reader radiologist analyzed all cases and determined contrast assignment based on only the free-text clinical indication. In the test set, Watson demonstrated overall accuracy of 83.2% when compared to the original protocol. This was similar to the overall accuracy of 80.2% achieved by an ensemble of eight traditional machine learning algorithms based on a term-document matrix. When compared to the second reader's contrast assignment, Watson achieved 88.6% agreement. When evaluating only the subset of cases where the original protocol and second reader were concordant (n = 251), agreement climbed further to 90.0%. The classifier was

  3. MUON DETECTORS: ALIGNMENT

    CERN Multimedia

    G.Gomez

    2010-01-01

    The main developments in muon alignment since March 2010 have been the production, approval and deployment of alignment constants for the ICHEP data reprocessing. In the barrel, a new geometry, combining information from both hardware and track-based alignment systems, has been developed for the first time. The hardware alignment provides an initial DT geometry, which is then anchored as a rigid solid, using the link alignment system, to a reference frame common to the tracker. The “GlobalPositionRecords” for both the Tracker and Muon systems are being used for the first time, and the initial tracker-muon relative positioning, based on the link alignment, yields good results within the photogrammetry uncertainties of the Tracker and alignment ring positions. For the first time, the optical and track-based alignments show good agreement between them; the optical alignment being refined by the track-based alignment. The resulting geometry is the most complete to date, aligning all 250 DTs, ...

  4. STAR/SVT alignment within a finite magnetic field

    International Nuclear Information System (INIS)

    Barannikova, O.Yu.; Belaga, V.V.; Ososkov, G.A.; Panebrattsev, Yu.A.; Bellweid, R.K.; Pruneau, C.A.; Wilson, W.K.

    1999-01-01

    We report on the development of SVT (Silicon Vertex Tracker) software for the purpose of the SVT and TPC (Time Projection Chamber) relative alignment as well as the internal alignment of the SVT components. The alignment procedure described complements the internal SVT alignment procedure discussed in Star Note 356. It involves track reconstruction in both the Star TPC and SVT for the calibration of the SVT geometry in the presence of a finite magnetic field. This new software has been integrated under the package SAL already running under STAR. Both the implementation and the performance of the alignment algorithm are described. We find that the current software implementation in SAL should enable a very satisfactory internal SVT alignment as well as an excellent SVT to TPC relative alignment

  5. Simultaneous determination of aquifer parameters and zone structures with fuzzy c-means clustering and meta-heuristic harmony search algorithm

    Science.gov (United States)

    Ayvaz, M. Tamer

    2007-11-01

    This study proposes an inverse solution algorithm through which both the aquifer parameters and the zone structure of these parameters can be determined based on a given set of observations of piezometric heads. In the zone structure identification problem, the fuzzy c-means (FCM) clustering method is used. The association of the zone structure with the transmissivity distribution is accomplished through an optimization model. The meta-heuristic harmony search (HS) algorithm, which is conceptualized using the musical process of searching for a perfect state of harmony, is used as the optimization technique. The optimum parameter zone structure is identified based on three criteria: the residual error, parameter uncertainty, and structure discrimination. A numerical example from the literature is solved to demonstrate the performance of the proposed algorithm. A sensitivity analysis is also performed to test the performance of the HS algorithm for different sets of solution parameters. Results indicate that the proposed algorithm is effective in the simultaneous identification of aquifer parameters and their corresponding zone structures.
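A generic harmony search loop of the kind described above can be sketched as follows, here minimizing a one-dimensional test function rather than the study's aquifer objective. All parameter values (harmony memory size, HMCR, PAR, bandwidth) are assumptions for illustration.

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.1,
                   iters=500, seed=1):
    """Minimize f over box `bounds` with a basic harmony search."""
    random.seed(seed)
    dim = len(bounds)
    # Harmony memory: hms random candidate solutions.
    memory = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:             # memory consideration
                x = random.choice(memory)[d]
                if random.random() < par:          # pitch adjustment
                    x = min(hi, max(lo, x + random.uniform(-bw, bw)))
            else:                                  # random consideration
                x = random.uniform(lo, hi)
            new.append(x)
        worst = max(memory, key=f)
        if f(new) < f(worst):                      # keep the better harmony
            memory[memory.index(worst)] = new
    return min(memory, key=f)

# Toy objective with its minimum at x = 1.0.
best = harmony_search(lambda v: (v[0] - 1.0) ** 2, [(-5.0, 5.0)])
```

In the study, the objective would instead combine the residual error, parameter uncertainty, and structure discrimination criteria over candidate zone structures.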

  6. Differential evolution-simulated annealing for multiple sequence alignment

    Science.gov (United States)

    Addawe, R. C.; Addawe, J. M.; Sueño, M. R. K.; Magadia, J. C.

    2017-10-01

    Multiple sequence alignments (MSAs) are used in the analysis of molecular evolution and sequence-structure relationships. In this paper, a hybrid algorithm, Differential Evolution - Simulated Annealing (DESA), is applied to optimizing multiple sequence alignments based on structural information, non-gap percentage, and totally conserved columns. DESA is a robust algorithm characterized by self-organization, mutation, crossover, and an SA-like selection scheme for the strategy parameters. Here, the MSA problem is treated as a multi-objective optimization problem for the hybrid evolutionary algorithm DESA; we therefore name the algorithm DESA-MSA. Simulated sequences and alignments were generated to evaluate the accuracy and efficiency of DESA-MSA using different indel sizes, sequence lengths, deletion rates, and insertion rates. The proposed hybrid algorithm obtained acceptable solutions, particularly for the MSA problem evaluated on the three objectives.

  7. New algorithm to determine true colocalization in combination with image restoration and time-lapse confocal microscopy to MAP kinases in mitochondria.

    Directory of Open Access Journals (Sweden)

    Jorge Ignacio Villalta

    Full Text Available The subcellular localization and physiological functions of biomolecules are closely related, and thus it is crucial to precisely determine the distribution of different molecules inside intracellular structures. This is frequently accomplished by fluorescence microscopy with well-characterized markers and subsequent evaluation of signal colocalization. Rigorous study of colocalization requires statistical analysis of the data, yet no single technique has been established as a standard method. Indeed, the few methods currently available are only accurate in images with particular characteristics. Here, we introduce a new algorithm to automatically obtain the true colocalization between images that is suitable for a wide variety of biological situations. To proceed, the algorithm considers the individual contribution of each pixel's fluorescence intensity in a pair of images to the overall Pearson's correlation and Manders' overlap coefficients. The accuracy and reliability of the algorithm were validated on both simulated and real images that reflected the characteristics of a range of biological samples. We used this algorithm in combination with image restoration by deconvolution and time-lapse confocal microscopy to address the localization of MEK1 in the mitochondria of different cell lines. Appraising the previously described behavior of Akt1 corroborated the reliability of the combined use of these techniques. Together, the present work provides a novel statistical approach to accurately and reliably determine colocalization in a variety of biological images.
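The two colocalization measures named above have standard per-pixel definitions, sketched here in plain Python over flattened intensity channels (the algorithm's per-pixel contribution analysis is not reproduced, only the base coefficients it builds on):

```python
def pearson(xs, ys):
    """Pearson's correlation coefficient between two intensity channels."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def manders_overlap(xs, ys):
    """Manders' overlap coefficient: like Pearson but without
    mean-centering, so it stays in [0, 1] for non-negative intensities."""
    num = sum(x * y for x, y in zip(xs, ys))
    den = (sum(x * x for x in xs) * sum(y * y for y in ys)) ** 0.5
    return num / den
```

Pearson's coefficient is sensitive to proportional covariation of the two signals, while Manders' overlap reflects co-occurrence of signal regardless of intensity proportionality, which is why the two are usually reported together.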

  8. libgapmis: extending short-read alignments.

    Science.gov (United States)

    Alachiotis, Nikolaos; Berger, Simon; Flouri, Tomáš; Pissis, Solon P; Stamatakis, Alexandros

    2013-01-01

    A wide variety of short-read alignment programmes have been published recently to tackle the problem of mapping millions of short reads to a reference genome, focusing on different aspects of the procedure such as time and memory efficiency, sensitivity, and accuracy. These tools allow for a small number of mismatches in the alignment; however, their ability to allow for gaps varies greatly, with many performing poorly or not allowing them at all. The seed-and-extend strategy is applied in most short-read alignment programmes. After aligning a substring of the reference sequence against the high-quality prefix of a short read--the seed--an important problem is to find the best possible alignment between a substring of the reference sequence that follows the seed and the remaining low-quality suffix of the read--the extend step. The fact that the reads are rather short and that the gap occurrence frequency observed in various studies is rather low suggests that aligning (parts of) those reads with a single gap is in fact desirable. In this article, we present libgapmis, a library for extending pairwise short-read alignments. Apart from the standard CPU version, it includes ultrafast SSE- and GPU-based implementations. libgapmis is based on an algorithm computing a modified version of the traditional dynamic-programming matrix for sequence alignment. Extensive experimental results demonstrate that the functions of the CPU version provided in this library accelerate the computations by a factor of 20 compared to other programmes. The analogous SSE- and GPU-based implementations accelerate the computations by a factor of 6 and 11, respectively, compared to the CPU version. The library also provides the user the flexibility to split the read into fragments, based on the observed gap occurrence frequency and the length of the read, thereby allowing for a variable, but bounded, number of gaps in the alignment.
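The single-gap extension idea can be illustrated with a brute-force scorer that tries every position and length for one gap in the read. This is a simplified stand-in for intuition, not the libgapmis algorithm or API, and the cost model is an assumption; it also only considers a gap in the read, not in the reference.

```python
def single_gap_align(read, ref, max_gap=3, gap_cost=1):
    """Best cost (mismatches + gap penalty) of aligning `read` against
    `ref` with at most one gap, of length up to max_gap, in the read."""
    def mismatches(a, b):
        return sum(x != y for x, y in zip(a, b)) + abs(len(a) - len(b))

    best = mismatches(read, ref)                     # gapless baseline
    for g in range(1, max_gap + 1):                  # try every gap length...
        for pos in range(len(read) + 1):             # ...at every position
            gapped = read[:pos] + "-" * g + read[pos:]
            cost = g * gap_cost + sum(
                x != y for x, y in zip(gapped, ref) if x != "-")
            cost += abs(len(gapped) - len(ref))      # penalize overhang
            best = min(best, cost)
    return best
```

For example, a read missing one base relative to the reference is rescued by a single gap: `single_gap_align("ACGT", "ACGGT")` costs one gap penalty instead of the gapless cost of two. libgapmis achieves the same effect in linear space via a modified dynamic-programming matrix rather than enumeration.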
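The single-gap extension the abstract argues for can be illustrated with a naive sketch (hypothetical code, far simpler than the dynamic-programming formulation libgapmis actually uses):

```python
# Hypothetical sketch (not the libgapmis code): extend a seed by aligning the
# remaining read suffix against the reference with at most ONE gap of bounded
# length, scoring +1 for a match and -1 for a mismatch or gapped base.
# For brevity only a gap in the read (skipped reference bases) is modelled.
def extend_single_gap(ref, read, max_gap=3):
    best = sum(1 if a == b else -1 for a, b in zip(ref, read))  # no-gap score
    for pos in range(len(read) + 1):           # gap opening position
        for k in range(1, max_gap + 1):        # gap length
            shifted = ref[:pos] + ref[pos + k:]  # skip k reference bases
            score = sum(1 if a == b else -1
                        for a, b in zip(shifted, read)) - k
            best = max(best, score)
    return best

print(extend_single_gap("ACGTTACG", "ACGACG"))  # 4: six matches minus a 2-gap
```

A real implementation would track the gap position and report the alignment itself, not just the score.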

  9. Evaluation of elastix-based propagated align algorithm for VOI- and voxel-based analysis of longitudinal (18)F-FDG PET/CT data from patients with non-small cell lung cancer (NSCLC).

    Science.gov (United States)

    Kerner, Gerald Sma; Fischer, Alexander; Koole, Michel Jb; Pruim, Jan; Groen, Harry Jm

    2015-01-01

Deformable image registration allows volume of interest (VOI)- and voxel-based analysis of longitudinal changes in fluorodeoxyglucose (FDG) tumor uptake in patients with non-small cell lung cancer (NSCLC). This study evaluates the performance of the elastix toolbox deformable image registration algorithm for VOI and voxel-wise assessment of longitudinal variations in FDG tumor uptake in NSCLC patients. Evaluation of the elastix toolbox was performed using (18)F-FDG PET/CT data acquired at baseline and after 2 cycles of therapy (follow-up) in advanced NSCLC patients. The elastix toolbox, an integrated part of the IMALYTICS workstation, was used to apply a CT-based non-linear image registration of follow-up PET/CT data using the baseline PET/CT data as reference. Lesion statistics were compared to assess the impact on therapy response assessment. Next, CT-based deformable image registration was performed anew on the deformed follow-up PET/CT data using the original follow-up PET/CT data as reference, yielding a realigned follow-up PET dataset. Performance was evaluated by determining the correlation coefficient between the original and realigned follow-up PET datasets. The intra- and extra-thoracic tumors were automatically delineated on the original PET using a 41% of maximum standardized uptake value (SUVmax) adaptive threshold. Equivalence between reference and realigned images was tested by determining the 95% range of the difference and estimating the percentage of voxel values that fell within that range. Thirty-nine patients with 191 tumor lesions were included. In 37/39 and 12/39 patients, respectively, thoracic and non-thoracic lesions were evaluable for response assessment. Using the EORTC/SUVmax-based criteria, 5/37 patients had a discordant response of thoracic lesions, and 2/12 a discordant response of non-thoracic lesions, between the reference and the realigned image.
FDG uptake values of corresponding tumor voxels in the original and realigned reference PET correlated well (R

  10. MUON DETECTORS: ALIGNMENT

    CERN Multimedia

    G.Gomez

    2010-01-01

    Most of the work in muon alignment since December 2009 has focused on the geometry reconstruction from the optical systems and improvements in the internal alignment of the DT chambers. The barrel optical alignment system has progressively evolved from reconstruction of single active planes to super-planes (December 09) to a new, full barrel reconstruction. Initial validation studies comparing this full barrel alignment at 0T with photogrammetry provide promising results. In addition, the method has been applied to CRAFT09 data, and the resulting alignment at 3.8T yields residuals from tracks (extrapolated from the tracker) which look smooth, suggesting a good internal barrel alignment with a small overall offset with respect to the tracker. This is a significant improvement, which should allow the optical system to provide a start-up alignment for 2010. The end-cap optical alignment has made considerable progress in the analysis of transfer line data. The next set of alignment constants for CSCs will there...

  11. Tidal alignment of galaxies

    Energy Technology Data Exchange (ETDEWEB)

    Blazek, Jonathan; Vlah, Zvonimir; Seljak, Uroš

    2015-08-01

    We develop an analytic model for galaxy intrinsic alignments (IA) based on the theory of tidal alignment. We calculate all relevant nonlinear corrections at one-loop order, including effects from nonlinear density evolution, galaxy biasing, and source density weighting. Contributions from density weighting are found to be particularly important and lead to bias dependence of the IA amplitude, even on large scales. This effect may be responsible for much of the luminosity dependence in IA observations. The increase in IA amplitude for more highly biased galaxies reflects their locations in regions with large tidal fields. We also consider the impact of smoothing the tidal field on halo scales. We compare the performance of this consistent nonlinear model in describing the observed alignment of luminous red galaxies with the linear model as well as the frequently used "nonlinear alignment model," finding a significant improvement on small and intermediate scales. We also show that the cross-correlation between density and IA (the "GI" term) can be effectively separated into source alignment and source clustering, and we accurately model the observed alignment down to the one-halo regime using the tidal field from the fully nonlinear halo-matter cross correlation. Inside the one-halo regime, the average alignment of galaxies with density tracers no longer follows the tidal alignment prediction, likely reflecting nonlinear processes that must be considered when modeling IA on these scales. Finally, we discuss tidal alignment in the context of cosmic shear measurements.

  12. A fast fiducial marker tracking model for fully automatic alignment in electron tomography

    KAUST Repository

    Han, Renmin; Zhang, Fa; Gao, Xin

    2017-01-01

Automatic alignment, especially fiducial marker-based alignment, has become increasingly important due to the high demand of subtomogram averaging and the rapid development of large-field electron microscopy. Among the alignment steps, fiducial marker tracking is a crucial one that determines the quality of the final alignment. Yet, it is still a challenging problem to track the fiducial markers accurately and effectively in a fully automatic manner. In this paper, we propose a robust and efficient scheme for fiducial marker tracking. Firstly, we theoretically prove the upper bound of the transformation deviation of aligning the positions of fiducial markers on two micrographs by affine transformation. Secondly, we design an automatic algorithm based on the Gaussian mixture model to accelerate the procedure of fiducial marker tracking. Thirdly, we propose a divide-and-conquer strategy against lens distortions to ensure the reliability of our scheme. To our knowledge, this is the first attempt that theoretically relates the projection model with the tracking model. The real-world experimental results further support our theoretical bound and demonstrate the effectiveness of our algorithm. This work facilitates the fully automatic tracking for datasets with a massive number of fiducial markers. The C/C++ source code that implements the fast fiducial marker tracking is available at https://github.com/icthrm/gmm-marker-tracking. Markerauto 1.6 version or later (also integrated in the AuTom platform at http://ear.ict.ac.cn/) offers a complete implementation for fast alignment, in which fast fiducial marker tracking is available by the
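One building block named in the abstract -- fitting the affine transformation between matched marker positions on two micrographs -- reduces to linear least squares (illustrative sketch; the paper's GMM-based correspondence search is omitted):

```python
import numpy as np

# Illustrative sketch (not the Markerauto code): given matched fiducial-marker
# positions on two micrographs, fit the affine transform src -> dst by linear
# least squares in homogeneous coordinates.
def fit_affine(src, dst):
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    A = np.hstack([src, np.ones((len(src), 1))])   # columns: x, y, 1
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return params    # 3x2: first two rows linear part, last row translation

src = [[0, 0], [1, 0], [0, 1], [1, 1]]
dst = [[2, 3], [3, 3], [2, 4], [3, 4]]   # pure translation by (2, 3)
M = fit_affine(src, dst)
print(np.round(M, 6))
```

With exact correspondences the recovered matrix is the identity linear part plus the (2, 3) translation; in practice the residual of this fit is what the paper's deviation bound constrains.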

  13. A fast fiducial marker tracking model for fully automatic alignment in electron tomography

    KAUST Repository

    Han, Renmin

    2017-10-20

Automatic alignment, especially fiducial marker-based alignment, has become increasingly important due to the high demand of subtomogram averaging and the rapid development of large-field electron microscopy. Among the alignment steps, fiducial marker tracking is a crucial one that determines the quality of the final alignment. Yet, it is still a challenging problem to track the fiducial markers accurately and effectively in a fully automatic manner. In this paper, we propose a robust and efficient scheme for fiducial marker tracking. Firstly, we theoretically prove the upper bound of the transformation deviation of aligning the positions of fiducial markers on two micrographs by affine transformation. Secondly, we design an automatic algorithm based on the Gaussian mixture model to accelerate the procedure of fiducial marker tracking. Thirdly, we propose a divide-and-conquer strategy against lens distortions to ensure the reliability of our scheme. To our knowledge, this is the first attempt that theoretically relates the projection model with the tracking model. The real-world experimental results further support our theoretical bound and demonstrate the effectiveness of our algorithm. This work facilitates the fully automatic tracking for datasets with a massive number of fiducial markers. The C/C++ source code that implements the fast fiducial marker tracking is available at https://github.com/icthrm/gmm-marker-tracking. Markerauto 1.6 version or later (also integrated in the AuTom platform at http://ear.ict.ac.cn/) offers a complete implementation for fast alignment, in which fast fiducial marker tracking is available by the

  14. Mango: multiple alignment with N gapped oligos.

    Science.gov (United States)

    Zhang, Zefeng; Lin, Hao; Li, Ming

    2008-06-01

    Multiple sequence alignment is a classical and challenging task. The problem is NP-hard. The full dynamic programming takes too much time. The progressive alignment heuristics adopted by most state-of-the-art works suffer from the "once a gap, always a gap" phenomenon. Is there a radically new way to do multiple sequence alignment? In this paper, we introduce a novel and orthogonal multiple sequence alignment method, using both multiple optimized spaced seeds and new algorithms to handle these seeds efficiently. Our new algorithm processes information of all sequences as a whole and tries to build the alignment vertically, avoiding problems caused by the popular progressive approaches. Because the optimized spaced seeds have proved significantly more sensitive than the consecutive k-mers, the new approach promises to be more accurate and reliable. To validate our new approach, we have implemented MANGO: Multiple Alignment with N Gapped Oligos. Experiments were carried out on large 16S RNA benchmarks, showing that MANGO compares favorably, in both accuracy and speed, against state-of-the-art multiple sequence alignment methods, including ClustalW 1.83, MUSCLE 3.6, MAFFT 5.861, ProbConsRNA 1.11, Dialign 2.2.1, DIALIGN-T 0.2.1, T-Coffee 4.85, POA 2.0, and Kalign 2.0. We have further demonstrated the scalability of MANGO on very large datasets of repeat elements. MANGO can be downloaded at http://www.bioinfo.org.cn/mango/ and is free for academic usage.
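The spaced-seed idea MANGO builds on can be illustrated in a few lines (hypothetical sketch, not MANGO code): positions marked '0' in the seed are ignored, so isolated mismatches there do not destroy the match.

```python
# Toy illustration of spaced seeds: the seed "110101" reads only the '1'
# positions of each window, so a mismatch falling on a '0' position still
# yields a shared key between the two sequences.
def spaced_keys(seq, seed="110101"):
    span = len(seed)
    care = [i for i, c in enumerate(seed) if c == "1"]
    return {"".join(seq[p + i] for i in care): p
            for p in range(len(seq) - span + 1)}

a = "ACGTACGA"
b = "ACCTACGA"   # differs from a at index 2, a '0' position of the first window
hits = set(spaced_keys(a)) & set(spaced_keys(b))
print(sorted(hits))  # ['ACTC']
```

A full aligner would index every optimized seed over all sequences and chain the shared keys; this sketch only shows why spaced seeds tolerate mismatches that contiguous k-mers do not.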

  15. Advanced Alignment of the ATLAS Inner Detector

    CERN Document Server

    Stahlman, JM; The ATLAS collaboration

    2012-01-01

    The primary goal of the ATLAS Inner Detector (ID) is to measure the trajectories of charged particles in the high particle density environment of the Large Hadron Collider (LHC) collisions. This is achieved using a combination of different technologies, including silicon pixels, silicon microstrips, and gaseous drift-tubes, all immersed in a 2 Tesla magnetic field. With over one million alignable degrees of freedom, it is crucial that an accurate model of the detector positions be produced using an automated and robust algorithm in order to achieve good tracking performance. This has been accomplished using a variety of alignment techniques resulting in near optimal hit and momentum resolutions.

  16. Hardware Accelerated Sequence Alignment with Traceback

    Directory of Open Access Journals (Sweden)

    Scott Lloyd

    2009-01-01

    in a timely manner. Known methods to accelerate alignment on reconfigurable hardware only address sequence comparison, limit the sequence length, or exhibit memory and I/O bottlenecks. A space-efficient, global sequence alignment algorithm and architecture is presented that accelerates the forward scan and traceback in hardware without memory and I/O limitations. With 256 processing elements in FPGA technology, a performance gain over 300 times that of a desktop computer is demonstrated on sequence lengths of 16000. For greater performance, the architecture is scalable to more processing elements.
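As a software reference point for what the hardware accelerates, a textbook global (Needleman-Wunsch) alignment with traceback looks like this (illustrative scoring, not the paper's space-efficient architecture):

```python
# Plain Needleman-Wunsch global alignment with traceback; unit
# match/mismatch/gap scores are illustrative.
def nw_align(a, b, match=1, mismatch=-1, gap=-1):
    n, m = len(a), len(b)
    S = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1): S[i][0] = i * gap
    for j in range(1, m + 1): S[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = match if a[i-1] == b[j-1] else mismatch
            S[i][j] = max(S[i-1][j-1] + d, S[i-1][j] + gap, S[i][j-1] + gap)
    # traceback from the bottom-right corner
    out_a, out_b, i, j = [], [], n, m
    while i > 0 or j > 0:
        if i and j and S[i][j] == S[i-1][j-1] + (match if a[i-1] == b[j-1] else mismatch):
            out_a.append(a[i-1]); out_b.append(b[j-1]); i -= 1; j -= 1
        elif i and S[i][j] == S[i-1][j] + gap:
            out_a.append(a[i-1]); out_b.append("-"); i -= 1
        else:
            out_a.append("-"); out_b.append(b[j-1]); j -= 1
    return "".join(reversed(out_a)), "".join(reversed(out_b)), S[n][m]

print(nw_align("GATTACA", "GCATGCU"))
```

The hardware design parallelizes the anti-diagonals of the matrix S across processing elements; the recurrence itself is unchanged.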

  17. Band alignment of atomic layer deposited MgO/Zn{sub 0.8}Al{sub 0.2}O heterointerface determined by charge corrected X-ray photoelectron spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Yan, Baojun, E-mail: yanbj@ihep.ac.cn [State Key Laboratory of Particle Detection and Electronics, Institute of High Energy Physics of Chinese Academy of Sciences, Beijing P. O. Box 100049 (China); Liu, Shulin [State Key Laboratory of Particle Detection and Electronics, Institute of High Energy Physics of Chinese Academy of Sciences, Beijing P. O. Box 100049 (China); Yang, Yuzhen [State Key Laboratory of Particle Detection and Electronics, Institute of High Energy Physics of Chinese Academy of Sciences, Beijing P. O. Box 100049 (China); Department of Physics, Nanjing University, Nanjing P. O. Box 210093 (China); Heng, Yuekun [State Key Laboratory of Particle Detection and Electronics, Institute of High Energy Physics of Chinese Academy of Sciences, Beijing P. O. Box 100049 (China)

    2016-05-15

Highlights: • The band alignment of the MgO/Zn{sub 0.8}Al{sub 0.2}O heterojunction was investigated systematically using charge-corrected X-ray photoelectron spectroscopy. • A differential charging phenomenon is observed in the determination of VBOs of insulator/semiconductor heterojunctions. • Valence and conduction band offsets have been determined to be 0.72 ± 0.11 eV and 3.26 ± 0.11 eV, respectively, with a type-II band line-up. - Abstract: Pure magnesium oxide (MgO) and zinc oxide doped with aluminum oxide (Zn{sub 0.8}Al{sub 0.2}O) were prepared via atomic layer deposition. We studied the structure and band gap of the bulk Zn{sub 0.8}Al{sub 0.2}O material by X-ray diffraction (XRD) and the Tauc method, and the band offsets and alignment of the atomic layer deposited MgO/Zn{sub 0.8}Al{sub 0.2}O heterointerface were investigated systematically using X-ray photoelectron spectroscopy (XPS). Different methodologies, such as a neutralizing electron gun, recalibration to the C 1s peak and the zero charging method, were applied to recover the actual position of the core levels in insulator materials, which are easily influenced by differential charging phenomena. A schematic band alignment diagram, valence band offset (ΔE{sub V}) and conduction band offset (ΔE{sub C}) for the interface of the MgO/Zn{sub 0.8}Al{sub 0.2}O heterostructure have been constructed. An accurate value of ΔE{sub V} = 0.72 ± 0.11 eV was obtained from various combinations of core levels of heterojunctions with varied MgO thickness. Given the experimental band gaps of 7.83 eV for MgO and 5.29 eV for Zn{sub 0.8}Al{sub 0.2}O, a type-II heterojunction with a ΔE{sub C} of 3.26 ± 0.11 eV was found. Band offset and alignment studies of these heterojunctions are important for the design of various optoelectronic devices based on such heterointerfaces.
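The reported numbers are internally consistent: for this staggered (type-II) line-up the conduction-band offset equals the band-gap difference plus the valence-band offset, which a two-line check confirms:

```python
# Consistency check of the offsets reported in the abstract:
# for a type-II line-up, dEc = (Eg_MgO - Eg_ZnAlO) + dEv.
Eg_MgO, Eg_ZnAlO, dEv = 7.83, 5.29, 0.72   # eV
dEc = (Eg_MgO - Eg_ZnAlO) + dEv
print(round(dEc, 2))  # 3.26, matching the reported conduction band offset
```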

  18. A computerized traffic control algorithm to determine optimal traffic signal settings. Ph.D. Thesis - Toledo Univ.

    Science.gov (United States)

    Seldner, K.

    1977-01-01

An algorithm was developed to optimally control the traffic signals at each intersection using a discrete time traffic model applicable to heavy or peak traffic. Off-line optimization procedures were applied to compute the cycle splits required to minimize the lengths of the vehicle queues and the delay at each intersection. The method was applied to an extensive traffic network in Toledo, Ohio. Results obtained with the derived optimal settings are compared with the control settings presently in use.

19. An improved algorithm for the determination of the system parameters of a visual binary by least squares

    Science.gov (United States)

    Xu, Yu-Lin

The problem of computing the orbit of a visual binary from a set of observed positions is reconsidered. It is a least squares adjustment problem, if the observational errors follow a bias-free multivariate Gaussian distribution and the covariance matrix of the observations is assumed to be known. The condition equations are constructed to satisfy both the conic section equation and the area theorem, which are nonlinear in both the observations and the adjustment parameters. The traditional least squares algorithm, which employs condition equations that are solved with respect to the uncorrelated observations and either linear in the adjustment parameters or linearized by developing them in Taylor series by first-order approximation, is inadequate in our orbit problem. D. C. Brown proposed an algorithm solving a more general least squares adjustment problem in which the scalar residual function, however, is still constructed by first-order approximation. Not long ago, a completely general solution was published by W. H. Jefferys, who proposed a rigorous adjustment algorithm for models in which the observations appear nonlinearly in the condition equations and may be correlated, and in which construction of the normal equations and the residual function involves no approximation. This method was successfully applied in our problem. The normal equations were first solved by Newton's scheme. Practical examples show that this converges fast if the observational errors are sufficiently small and the initial approximate solution is sufficiently accurate, and that it fails otherwise. Newton's method was modified to yield a definitive solution in cases where the normal approach fails, by combination with the method of steepest descent and other sophisticated algorithms. Practical examples show that the modified Newton scheme can always lead to a final solution.
The weighting of observations, the orthogonal parameters and the efficiency of a set of adjustment parameters are also considered.
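In the same spirit as the abstract's modified Newton scheme, a Gauss-Newton iteration with a crude step-halving fallback can be sketched (illustrative only; the paper's algorithm additionally handles correlated observations following Jefferys):

```python
import numpy as np

# Illustrative Gauss-Newton for nonlinear least squares with a backtracking
# safeguard, echoing the abstract's "modified Newton" idea.  NOT the paper's
# exact algorithm (no correlated observations, no steepest-descent phase).
def gauss_newton(residual, jacobian, x0, tol=1e-12, max_iter=100):
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        r, J = residual(x), jacobian(x)
        step = np.linalg.solve(J.T @ J, J.T @ r)   # normal equations
        t = 1.0
        while t > 1e-6 and np.linalg.norm(residual(x - t * step)) > np.linalg.norm(r):
            t *= 0.5                               # shorten a diverging step
        x = x - t * step
        if np.linalg.norm(t * step) < tol:
            break
    return x

# Toy zero-residual problem: recover (a, b) = (2, -1) from y = a*exp(b*t).
tt = np.linspace(0.0, 2.0, 20)
y = 2.0 * np.exp(-tt)
res = lambda p: p[0] * np.exp(p[1] * tt) - y
jac = lambda p: np.column_stack([np.exp(p[1] * tt), p[0] * tt * np.exp(p[1] * tt)])
print(np.round(gauss_newton(res, jac, [1.0, 0.0]), 6))
```

When the full Newton step increases the residual norm, the halved step moves roughly along the descent direction, which is the practical point of the abstract's modification.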

  20. An improved algorithm for the determination of the system parameters of a visual binary by least squares

    International Nuclear Information System (INIS)

    Xu, Yu-Lin.

    1988-01-01

    The problem of computing the orbit of a visual binary from a set of observed positions is reconsidered. It is a least squares adjustment problem, if the observational errors follow a bias-free multivariate Gaussian distribution and the covariance matrix of the observations is assumed to be known. The condition equations are constructed to satisfy both the conic section equation and the area theorem, which are nonlinear in both the observations and the adjustment parameters. The traditional least squares algorithm, which employs condition equations that are solved with respect to the uncorrelated observations and either linear in the adjustment parameters or linearized by developing them in Taylor series by first-order approximation, is inadequate in the orbit problem. Not long ago, a completely general solution was published by W. H. Jefferys, who proposed a rigorous adjustment algorithm for models in which the observations appear nonlinearly in the condition equations and may be correlated, and in which construction of the normal equations and the residual function involves no approximation. This method was successfully applied in this problem. The normal equations were first solved by Newton's scheme. Newton's method was modified to yield a definitive solution in the case the normal approach fails, by combination with the method of steepest descent and other sophisticated algorithms. Practical examples show that the modified Newton scheme can always lead to a final solution. The weighting of observations, the orthogonal parameters and the efficiency of a set of adjustment parameters are also considered

  1. Using genetic algorithms to determine near-optimal pricing, investment and operating strategies in the electric power industry

    Science.gov (United States)

    Wu, Dongjun

Network industries have technologies characterized by a spatial hierarchy, the "network," with capital-intensive interconnections and time-dependent, capacity-limited flows of products and services through the network to customers. This dissertation studies service pricing, investment and business operating strategies for the electric power network. First-best solutions for a variety of pricing and investment problems have been studied. The evaluation of genetic algorithms (GAs, methods based on the idea of natural evolution) as a primary means of solving complicated network problems, with respect to pricing as well as investment and other operating decisions, has been conducted. New constraint-handling techniques in GAs have been studied and tested. The actual application of such constraint-handling techniques in solving practical non-linear optimization problems has been tested on several complex network design problems with encouraging initial results. Genetic algorithms provide solutions that are feasible and close to optimal when the optimal solution is known; in some instances, the near-optimal solutions for small problems found by the proposed GA approach can only be tested by pushing the limits of currently available non-linear optimization software. The performance is far better than that of several commercially available GA programs, which are generally inadequate in solving any of the problems studied in this dissertation, primarily because of their poor handling of constraints. Genetic algorithms, if carefully designed, seem very promising in solving difficult problems which are intractable by traditional analytic methods.
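The abstract does not spell out its constraint-handling techniques; a common baseline any such technique is compared against is a simple penalty formulation (hypothetical sketch, not from the dissertation):

```python
# Penalty-based constraint handling for a GA (illustrative): fold constraint
# violations into the objective so infeasible candidates are dominated by
# feasible ones during selection.
def penalized_fitness(objective, constraints, x, weight=1e3):
    # constraints: list of functions g with the convention g(x) <= 0
    violation = sum(max(0.0, g(x)) for g in constraints)
    return objective(x) + weight * violation

# Toy: minimize x^2 subject to x >= 1 (i.e. 1 - x <= 0); optimum at x = 1.
f = lambda x: x * x
g = [lambda x: 1.0 - x]
print(penalized_fitness(f, g, 0.5))   # 0.25 + 1000*0.5 = 500.25 (infeasible)
print(penalized_fitness(f, g, 1.5))   # 2.25, feasible so no penalty
```

The weakness the dissertation alludes to is visible here: the penalty weight must be tuned, which is one reason specialized constraint-handling schemes outperform off-the-shelf GA packages.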

  2. AlignNemo: a local network alignment method to integrate homology and topology.

    Directory of Open Access Journals (Sweden)

    Giovanni Ciriello

Full Text Available Local network alignment is an important component of the analysis of protein-protein interaction networks that may lead to the identification of evolutionarily related complexes. We present AlignNemo, a new algorithm that, given the networks of two organisms, uncovers subnetworks of proteins that relate in biological function and topology of interactions. The discovered conserved subnetworks have a general topology and need not correspond to specific interaction patterns, so that they more closely fit the models of functional complexes proposed in the literature. The algorithm is able to handle sparse interaction data with an expansion process that at each step explores the local topology of the networks beyond the proteins directly interacting with the current solution. To assess the performance of AlignNemo, we ran a series of benchmarks using statistical measures as well as biological knowledge. Based on reference datasets of protein complexes, AlignNemo shows better performance than other methods in terms of both precision and recall. We show our solutions to be biologically sound using the concept of semantic similarity applied to Gene Ontology vocabularies. The binaries of AlignNemo and supplementary details about the algorithms and the experiments are available at: sourceforge.net/p/alignnemo.

  3. Genetic algorithms for protein threading.

    Science.gov (United States)

    Yadgari, J; Amir, A; Unger, R

    1998-01-01

Despite many years of efforts, a direct prediction of protein structure from sequence is still not possible. As a result, in the last few years researchers have started to address the "inverse folding problem": identifying and aligning a sequence to the fold with which it is most compatible, a process known as "threading". In two meetings in which protein folding predictions were objectively evaluated, it became clear that threading as a concept promises a real breakthrough, but that much improvement is still needed in the technique itself. Threading is an NP-hard problem, and thus no general polynomial solution can be expected. Still, a practical approach with demonstrated ability to find optimal solutions in many cases, and acceptable solutions in other cases, is needed. We applied the technique of Genetic Algorithms in order to significantly improve the ability of threading algorithms to find the optimal alignment of a sequence to a structure, i.e. the alignment with the minimum free energy. A major progress reported here is the design of a representation of the threading alignment as a string of fixed length. With this representation, validation of alignments and genetic operators are effectively implemented. Appropriate data structures and parameters have been selected. It is shown that Genetic Algorithm threading is effective and is able to find the optimal alignment in a few test cases. Furthermore, the described algorithm is shown to perform well even without pre-definition of core elements. Existing threading methods are dependent on such constraints to make their calculations feasible. But the concept of core elements is inherently arbitrary and should be avoided if possible. While a rigorous proof cannot yet be given, we present indications that Genetic Algorithm threading is capable of finding consistently good solutions of full alignments in search spaces of size up to 10^70.
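The fixed-length-string encoding can be illustrated with a toy GA (hypothetical sketch; the real fitness is a threading free energy, and the paper's operator-validity checks are elided -- a bit-counting stand-in is used instead):

```python
import random

# Toy GA skeleton illustrating a fixed-length-string encoding with elitism,
# tournament selection, one-point crossover and per-bit mutation.  The fitness
# (count of ones) is a stand-in for the negated threading free energy.
def genetic_algorithm(n=12, pop_size=40, generations=300, seed=0):
    rng = random.Random(seed)
    fitness = lambda s: sum(s)                      # stand-in objective
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        nxt = [pop[0][:], pop[1][:]]                # elitism: keep the best two
        while len(nxt) < pop_size:
            a = max(rng.sample(pop, 3), key=fitness)   # tournament selection
            b = max(rng.sample(pop, 3), key=fitness)
            cut = rng.randrange(1, n)                  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < 1.0 / n) for bit in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = genetic_algorithm()
print(sum(best))
```

In the threading setting each position of the string would encode the structural slot assigned to a residue, and crossover/mutation would be followed by the validity repair the paper describes.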

  4. K2 and K2*: efficient alignment-free sequence similarity measurement based on Kendall statistics.

    Science.gov (United States)

    Lin, Jie; Adjeroh, Donald A; Jiang, Bing-Hua; Jiang, Yue

    2018-05-15

Alignment-free sequence comparison methods can compute the pairwise similarity between a huge number of sequences much faster than sequence-alignment based methods. We propose a new non-parametric alignment-free sequence comparison method, called K2, based on the Kendall statistics. Compared to other state-of-the-art alignment-free comparison methods, K2 demonstrates competitive performance in generating the phylogenetic tree, in evaluating functionally related regulatory sequences, and in computing the edit distance (similarity/dissimilarity) between sequences. Furthermore, the K2 approach is much faster than the other methods. An improved method, K2*, is also proposed, which is able to determine the appropriate algorithmic parameter (length) automatically, without first considering different values. Comparative analysis with the state-of-the-art alignment-free sequence similarity methods demonstrates the superiority of the proposed approaches, especially with increasing sequence length, or increasing dataset sizes. The K2 and K2* approaches are implemented in the R language as a package and are freely available for open access (http://community.wvu.edu/daadjeroh/projects/K2/K2_1.0.tar.gz). yueljiang@163.com. Supplementary data are available at Bioinformatics online.
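The Kendall-statistics idea can be sketched as follows (illustrative only, not the authors' R implementation; tied pairs are simply excluded here, which makes this the Goodman-Kruskal form of the statistic):

```python
from itertools import product

# Alignment-free comparison in the spirit of K2: represent each sequence by its
# k-mer count vector and score rank concordance between the two vectors.
def kmer_counts(seq, k=2):
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    return [sum(seq[i:i+k] == w for i in range(len(seq) - k + 1)) for w in kmers]

def kendall_concordance(x, y):
    conc = disc = 0
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            s = (x[i] - x[j]) * (y[i] - y[j])
            conc += s > 0
            disc += s < 0
    return (conc - disc) / (conc + disc) if conc + disc else 0.0

a, b = "ACGTACGTACGT", "TACGTACGTACG"
print(round(kendall_concordance(kmer_counts(a), kmer_counts(b)), 3))  # 0.959
```

Because only ranks of k-mer counts matter, the score is robust to overall composition differences, which is the non-parametric appeal the abstract claims.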

  5. The GEM Detector projective alignment simulation system

    International Nuclear Information System (INIS)

    Wuest, C.R.; Belser, F.C.; Holdener, F.R.; Roeben, M.D.; Paradiso, J.A.; Mitselmakher, G.; Ostapchuk, A.; Pier-Amory, J.

    1993-01-01

    Precision position knowledge (< 25 microns RMS) of the GEM Detector muon system at the Superconducting Super Collider Laboratory (SSCL) is an important physics requirement necessary to minimize sagitta error in detecting and tracking high energy muons that are deflected by the magnetic field within the GEM Detector. To validate the concept of the sagitta correction function determined by projective alignment of the muon detectors (Cathode Strip Chambers or CSCs), the basis of the proposed GEM alignment scheme, a facility, called the ''Alignment Test Stand'' (ATS), is being constructed. This system simulates the environment that the CSCs and chamber alignment systems are expected to experience in the GEM Detector, albeit without the 0.8 T magnetic field and radiation environment. The ATS experimental program will allow systematic study and characterization of the projective alignment approach, as well as general mechanical engineering of muon chamber mounting concepts, positioning systems and study of the mechanical behavior of the proposed 6 layer CSCs. The ATS will consist of a stable local coordinate system in which mock-ups of muon chambers (i.e., non-working mechanical analogs, representing the three superlayers of a selected barrel and endcap alignment tower) are implemented, together with a sufficient number of alignment monitors to overdetermine the sagitta correction function, providing a self-consistency check. This paper describes the approach to be used for the alignment of the GEM muon system, the design of the ATS, and the experiments to be conducted using the ATS
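To see why 25-micron position knowledge matters, a toy sagitta calculation (illustrative numbers, not GEM geometry) shows a chamber offset feeding directly into the measured track sagitta:

```python
# Sagitta of a track measured at three superlayers: the middle hit's deviation
# from the straight line through the outer two.  Coordinates are (z, x) pairs
# in metres; all numbers below are made up for illustration.
def sagitta(p1, p2, p3):
    (z1, x1), (z2, x2), (z3, x3) = p1, p2, p3
    x_line = x1 + (x3 - x1) * (z2 - z1) / (z3 - z1)  # straight-line prediction
    return x2 - x_line

# A 25-micron misplacement of the middle chamber shifts the sagitta one-to-one:
s_true = sagitta((0.0, 0.0), (1.0, 0.004), (2.0, 0.0))
s_meas = sagitta((0.0, 0.0), (1.0, 0.004 + 25e-6), (2.0, 0.0))
print(round((s_meas - s_true) * 1e6, 1))  # 25.0 microns of sagitta error
```

Since the momentum estimate scales inversely with the sagitta, this is the error the projective alignment and its correction function are designed to bound.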

  6. Heuristic for Solving the Multiple Alignment Sequence Problem

    Directory of Open Access Journals (Sweden)

    Roman Anselmo Mora Gutiérrez

    2011-03-01

Full Text Available In this paper we developed a new algorithm for solving the problem of multiple sequence alignment (MSA), which is a hybrid metaheuristic based on harmony search and simulated annealing. The hybrid was validated with the methodology of Julie Thompson. This is a basic algorithm, and the results obtained during this stage are encouraging.

  7. MUON DETECTORS: ALIGNMENT

    CERN Multimedia

    G.Gomez

    2011-01-01

    The Muon Alignment work now focuses on producing a new track-based alignment with higher track statistics, making systematic studies between the results of the hardware and track-based alignment methods and aligning the barrel using standalone muon tracks. Currently, the muon track reconstruction software uses a hardware-based alignment in the barrel (DT) and a track-based alignment in the endcaps (CSC). An important task is to assess the muon momentum resolution that can be achieved using the current muon alignment, especially for highly energetic muons. For this purpose, cosmic ray muons are used, since the rate of high-energy muons from collisions is very low and the event statistics are still limited. Cosmics have the advantage of higher statistics in the pT region above 100 GeV/c, but they have the disadvantage of having a mostly vertical topology, resulting in a very few global endcap muons. Only the barrel alignment has therefore been tested so far. Cosmic muons traversing CMS from top to bottom are s...

  8. Alignment in double capture processes

    International Nuclear Information System (INIS)

    Moretto-Capelle, P.; Benhenni, M.; Bordenave-Montesquieu, A.; Benoit-Cattin, P.; Gleizes, A.

    1993-01-01

The electron spectra emitted when a double capture occurs in N^7+ + He and Ne^8+ + He systems at 10 qkeV collisional energy allow us to determine the angular distributions of the 3ℓ3ℓ′ lines through a special spectra-fitting procedure which includes interferences between neighbouring states. It is found that the doubly excited states populated in double capture processes are generally aligned.

  9. Alignment in double capture processes

    Energy Technology Data Exchange (ETDEWEB)

    Moretto-Capelle, P.; Benhenni, M.; Bordenave-Montesquieu, A.; Benoit-Cattin, P.; Gleizes, A. (IRSAMC, URA CNRS 770, Univ. Paul Sabatier, 118 rte de Narbonne, 31062 Toulouse Cedex (France))

    1993-06-05

The electron spectra emitted when a double capture occurs in N^7+ + He and Ne^8+ + He systems at 10 qkeV collisional energy allow us to determine the angular distributions of the 3ℓ3ℓ′ lines through a special spectra fitting procedure which includes interferences between neighbouring states. It is found that the doubly excited states populated in double capture processes are generally aligned.

  10. Interest alignment and competitive advantage

    OpenAIRE

    Gottschalg, Oliver; Zollo, Mauricio

    2006-01-01

This paper articulates a theory of the conditions under which the alignment between individual and collective interests generates sustainable competitive advantage. The theory is based on the influence of tacitness, context-specificity and causal ambiguity in the determinants of different types of motivation (extrinsic, normative intrinsic and hedonic intrinsic), under varying conditions of environmental dynamism. The analysis indicates the need to consider motivational processes as a complem...

  11. MUON DETECTORS: ALIGNMENT

    CERN Multimedia

    G. Gomez

    2011-01-01

    A new set of muon alignment constants was approved in August. The relative position between muon chambers is essentially unchanged, indicating good detector stability. The main changes concern the global positioning of the barrel and of the endcap rings to match the new Tracker geometry. Detailed studies of the differences between track-based and optical alignment of DTs have proven to be a valuable tool for constraining Tracker alignment weak modes, and this information is now being used as part of the alignment procedure. In addition to the “split-cosmic” analysis used to investigate the muon momentum resolution at high momentum, a new procedure based on reconstructing the invariant mass of di-muons from boosted Zs is under development. Both procedures show an improvement in the momentum precision of Global Muons with respect to Tracker-only Muons. Recent developments in track-based alignment include a better treatment of the tails of residual distributions and accounting for correla...

  12. Determination of Optimal Initial Weights of an Artificial Neural Network by Using the Harmony Search Algorithm: Application to Breakwater Armor Stones

    Directory of Open Access Journals (Sweden)

    Anzy Lee

    2016-05-01

    Full Text Available In this study, an artificial neural network (ANN) model is developed to predict the stability number of breakwater armor stones based on the experimental data reported by Van der Meer in 1988. The harmony search (HS) algorithm is used to determine the near-global optimal initial weights in the training of the model. Stratified sampling is used to select the training data. A total of 25 HS-ANN hybrid models are tested with different combinations of HS algorithm parameters. The HS-ANN models are compared with the conventional ANN model, which uses a Monte Carlo simulation to determine the initial weights. Each model is run 50 times and the statistical analyses are conducted for the model results. The present models using stratified sampling are shown to be more accurate than those of previous studies. The statistical analyses for the model results show that the HS-ANN model with proper values of HS algorithm parameters can give much better and more stable prediction than the conventional ANN model.
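
    The harmony search step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the HMS/HMCR/PAR values and the quadratic stand-in objective are assumptions; a real use would plug in the ANN training error as the objective to be minimized.

```python
import random

def harmony_search(f, dim, bounds, hms=10, hmcr=0.9, par=0.3, iters=5000, seed=1):
    """Minimal harmony search minimizing objective f over [lo, hi]^dim.
    hms = harmony memory size, hmcr = harmony memory considering rate,
    par = pitch adjusting rate (illustrative values, not the paper's)."""
    rng = random.Random(seed)
    lo, hi = bounds
    memory = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:                  # reuse a value from memory
                x = memory[rng.randrange(hms)][d]
                if rng.random() < par:               # small pitch adjustment
                    x += rng.uniform(-0.05, 0.05) * (hi - lo)
            else:                                    # fresh random value
                x = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, x)))
        worst = max(range(hms), key=lambda i: f(memory[i]))
        if f(new) < f(memory[worst]):                # keep improvisation if better
            memory[worst] = new
    return min(memory, key=f)

# Stand-in objective: sum of squares (a real use would evaluate ANN training error).
best = harmony_search(lambda w: sum(x * x for x in w), dim=4, bounds=(-5.0, 5.0))
```

    The returned vector would then serve as the network's initial weights, after which training proceeds with a conventional optimizer.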

  13. Accelerating large-scale protein structure alignments with graphics processing units

    Directory of Open Access Journals (Sweden)

    Pang Bin

    2012-02-01

    Full Text Available Abstract Background Large-scale protein structure alignment, an indispensable tool in structural bioinformatics, poses a tremendous challenge to computational resources. To ensure structure alignment accuracy and efficiency, efforts have been made to parallelize traditional alignment algorithms in grid environments. However, these solutions are costly and of limited accessibility. Others trade alignment quality for speedup by using high-level characteristics of structure fragments for structure comparisons. Findings We present ppsAlign, a parallel protein structure Alignment framework designed and optimized to exploit the parallelism of Graphics Processing Units (GPUs). As a general-purpose GPU platform, ppsAlign could take many concurrent methods, such as TM-align and Fr-TM-align, into the parallelized algorithm design. We evaluated ppsAlign on an NVIDIA Tesla C2050 GPU card, and compared it with existing software solutions running on an AMD dual-core CPU. We observed a 36-fold speedup over TM-align, a 65-fold speedup over Fr-TM-align, and a 40-fold speedup over MAMMOTH. Conclusions ppsAlign is a high-performance protein structure alignment tool designed to tackle the computational complexity issues from protein structural data. The solution presented in this paper allows large-scale structure comparisons to be performed using the massive parallel computing power of GPUs.

  14. Background suppression for a top quark mass measurement in the lepton+jets t anti t decay channel and alignment of the ATLAS silicon detectors with cosmic rays

    International Nuclear Information System (INIS)

    Goettfert, Tobias

    2010-01-01

    The investigation of top quark properties will be amongst the first measurements of observables of the Standard Model of particle physics at the Large Hadron Collider. This thesis deals with the suppression of background sources contributing to the event sample used for the determination of the top quark mass. Several techniques to reduce the contamination of the selected sample with events from W+jets production and combinatorial background from wrong jet associations are evaluated. The usage of the jet merging scales of a k_T jet algorithm as event shapes is laid out and a multivariate technique (Fisher discriminant) is applied to discriminate signal from physics background. Several kinematic variables are reviewed for their capability to suppress wrong jet associations. The second part presents the achievements on the alignment of the silicon part of the Inner Detector of the ATLAS experiment. A well-aligned tracking detector will be crucial for measurements that involve particle trajectories, e.g. for reliably identifying b-quark jets. Around 700,000 tracks from cosmic ray muons are used to infer the alignment of all silicon modules of ATLAS using the track-based local χ² alignment algorithm. Various additions to the method that deal with the peculiarities of alignment with cosmic rays are developed and presented. The achieved alignment precision is evaluated and compared to previous results. (orig.)

  15. Background suppression for a top quark mass measurement in the lepton+jets t anti t decay channel and alignment of the ATLAS silicon detectors with cosmic rays

    Energy Technology Data Exchange (ETDEWEB)

    Goettfert, Tobias

    2010-01-21

    The investigation of top quark properties will be amongst the first measurements of observables of the Standard Model of particle physics at the Large Hadron Collider. This thesis deals with the suppression of background sources contributing to the event sample used for the determination of the top quark mass. Several techniques to reduce the contamination of the selected sample with events from W+jets production and combinatorial background from wrong jet associations are evaluated. The usage of the jet merging scales of a k_T jet algorithm as event shapes is laid out and a multivariate technique (Fisher discriminant) is applied to discriminate signal from physics background. Several kinematic variables are reviewed for their capability to suppress wrong jet associations. The second part presents the achievements on the alignment of the silicon part of the Inner Detector of the ATLAS experiment. A well-aligned tracking detector will be crucial for measurements that involve particle trajectories, e.g. for reliably identifying b-quark jets. Around 700,000 tracks from cosmic ray muons are used to infer the alignment of all silicon modules of ATLAS using the track-based local χ² alignment algorithm. Various additions to the method that deal with the peculiarities of alignment with cosmic rays are developed and presented. The achieved alignment precision is evaluated and compared to previous results. (orig.)

  16. Vis-NIR spectrometric determination of Brix and sucrose in sugar production samples using kernel partial least squares with interval selection based on the successive projections algorithm.

    Science.gov (United States)

    de Almeida, Valber Elias; de Araújo Gomes, Adriano; de Sousa Fernandes, David Douglas; Goicoechea, Héctor Casimiro; Galvão, Roberto Kawakami Harrop; Araújo, Mario Cesar Ugulino

    2018-05-01

    This paper proposes a new variable selection method for nonlinear multivariate calibration, combining the Successive Projections Algorithm for interval selection (iSPA) with the Kernel Partial Least Squares (Kernel-PLS) modelling technique. The proposed iSPA-Kernel-PLS algorithm is employed in a case study involving a Vis-NIR spectrometric dataset with complex nonlinear features. The analytical problem consists of determining Brix and sucrose content in samples from a sugar production system, on the basis of transflectance spectra. As compared to full-spectrum Kernel-PLS, the iSPA-Kernel-PLS models involve a smaller number of variables and display statistically significant superiority in terms of accuracy and/or bias in the predictions. Published by Elsevier B.V.

  17. Tracker Alignment Performance Plots after Commissioning

    CERN Document Server

    CMS Collaboration

    2017-01-01

    During the LHC shutdown in Winter 2016/17, the CMS pixel detector, the inner component of the CMS Tracker, was replaced by the Phase-1 upgrade detector. Among other improvements, the new pixel detector consists of four instead of three layers in the central barrel region (BPIX) and three instead of two disks in the endcap regions (FPIX). In this report, performance plots of pixel detector alignment results are presented, which were obtained with both cosmic-ray and pp collision data acquired at the beginning of the 2017 LHC operation. Alignment constants have been derived for each data-taking period to the level of single module positions in both the pixel and the strip detectors. A complete understanding of the alignment and biases was obtained by using two algorithms, Millepede-II and HipPy. The results confirm each other.

  18. Tracing magnetic fields with aligned grains

    International Nuclear Information System (INIS)

    Lazarian, A.

    2007-01-01

    Magnetic fields play a crucial role in various astrophysical processes, including star formation, accretion of matter, transport processes (e.g., transport of heat), and cosmic rays. One of the easiest ways to determine the magnetic field direction is via polarization of radiation resulting from extinction and/or emission by aligned dust grains. Reliability of interpretation of the polarization maps in terms of magnetic fields depends on how well we understand the grain-alignment theory. Explaining what makes grains aligned has been one of the big issues of modern astronomy. Numerous exciting physical effects have been discovered in the course of research undertaken in this field. As both the theory and observations matured, it became clear that the grain-alignment phenomenon is inherent not only in the diffuse interstellar medium or molecular clouds but is also a generic property of the dust in circumstellar regions, interplanetary space and cometary comae. Currently the grain-alignment theory is a predictive one, and its results nicely match observations. Among its predictions is a subtle phenomenon of radiative torques. This phenomenon, after having stayed in oblivion for many years after its discovery, is currently viewed as the most powerful means of alignment. In this article, I shall review the basic physical processes involved in grain alignment, and the currently known mechanisms of alignment. I shall also discuss possible niches for different alignment mechanisms. I shall dwell on the importance of the concept of grain helicity for understanding of many properties of grain alignment, and shall demonstrate that rather arbitrarily shaped grains exhibit helicity when they interact with gaseous and radiative flows.

  19. An auroral westward flow channel (AWFC and its relationship to field-aligned current, ring current, and plasmapause location determined using multiple spacecraft observations

    Directory of Open Access Journals (Sweden)

    M. L. Parkinson

    2007-02-01

    Full Text Available An auroral westward flow channel (AWFC) is a latitudinally narrow channel of unstable F-region plasma with intense westward drift in the dusk-to-midnight sector ionosphere. AWFCs tend to overlap the equatorward edge of the auroral oval, and their life cycle is often synchronised to that of substorms: they commence close to substorm expansion phase onset, intensify during the expansion phase, and then decay during the recovery phase. Here we define for the first time the relationship between an AWFC, large-scale field-aligned current (FAC), the ring current, and plasmapause location. The Tasman International Geospace Environment Radar (TIGER), a Southern Hemisphere HF SuperDARN radar, observed a jet-like AWFC during ~08:35 to 13:28 UT on 7 April 2001. The initiation of the AWFC was preceded by a band of equatorward expanding ionospheric scatter (BEES), which conveyed an intense poleward electric field through the inner plasma sheet. Unlike previous AWFCs, this event was not associated with a distinct substorm surge; rather it occurred during an interval of persistent, moderate magnetic activity characterised by AL~−200 nT. The four Cluster spacecraft had perigees within the dusk sector plasmasphere, and their trajectories were magnetically conjugate to the radar observations. The Waves of High frequency and Sounder for Probing Electron density by Relaxation (WHISPER) instruments on board Cluster were used to identify the plasmapause location. The Imager for Magnetopause-to-Aurora Global Exploration (IMAGE) EUV experiment also provided global-scale observations of the plasmapause. The Cluster fluxgate magnetometers (FGM) provided successive measurements specifying the relative location of the ring current and filamentary plasma sheet current. An analysis of Iridium spacecraft magnetometer measurements provided estimates of large-scale ionospheric FAC in relation to the AWFC evolution. Peak flows in the AWFC were located close to the peak of a Region 2

  20. Angular momentum alignment in molecular beam scattering

    International Nuclear Information System (INIS)

    Treffers, M.A.

    1985-01-01

    It is shown how the angular momentum alignment in a molecular beam can be determined using laser-induced fluorescence in combination with precession of the angular momenta in a magnetic field. After a general analysis of the method, some results are presented to illustrate the possibilities of the method. Experimental data are presented on the alignment production for Na₂ molecules that made a collision induced angular momentum transition. Magnitude as well as direction of the alignment have been determined for scattering with several scattering partners and for a large number of scattering angles and transitions. The last chapter deals with the total alignment production in a final J-state, i.e. without state selection of the initial rotational state. (orig.)

  1. Evaluation of elastix-based propagated align algorithm for VOI- and voxel-based analysis of longitudinal F-18-FDG PET/CT data from patients with non-small cell lung cancer (NSCLC)

    OpenAIRE

    Kerner, Gerald S. M. A.; Fischer, Alexander; Koole, Michel J. B.; Pruim, Jan; Groen, Harry J. M.

    2015-01-01

    Background: Deformable image registration allows volume of interest (VOI)- and voxel-based analysis of longitudinal changes in fluorodeoxyglucose (FDG) tumor uptake in patients with non-small cell lung cancer (NSCLC). This study evaluates the performance of the elastix toolbox deformable image registration algorithm for VOI and voxel-wise assessment of longitudinal variations in FDG tumor uptake in NSCLC patients. Methods: Evaluation of the elastix toolbox was performed using F-18-FDG PET/CT ...

  2. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you’ve hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight...... changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

  3. Belt Aligning Revisited

    Directory of Open Access Journals (Sweden)

    Yurchenko Vadim

    2017-01-01

    parts of the conveyor, the sides of the belt wear intensively. This results in reducing the life of the belt. The reasons for this phenomenon are well investigated, but the difficulty lies in the fact that they all act simultaneously. The belt misalignment prevention can be carried out in two ways: by minimizing the effect of causes and by aligning the belt. The construction of aligning devices and errors encountered in practice are considered in this paper. Self-aligning roller supports rotational in plan view are recommended as a means of combating the belt misalignment.

  4. Hybrid vehicle motor alignment

    Science.gov (United States)

    Levin, Michael Benjamin

    2001-07-03

    A rotor of an electric motor is aligned to the axis of rotation of the crankshaft in a motor vehicle having an internal combustion engine and an electric motor. A first locator is provided on the crankshaft, and a piloting tool is located radially on the crankshaft by this first locator. A stator of the electric motor is aligned to a second locator provided on the piloting tool. The stator is secured to the engine block. The rotor is aligned to the crankshaft and secured thereto.

  5. Precision alignment device

    Science.gov (United States)

    Jones, N.E.

    1988-03-10

    Apparatus for providing automatic alignment of beam devices having an associated structure for directing, collimating, focusing, reflecting, or otherwise modifying the main beam. A reference laser is attached to the structure enclosing the main beam producing apparatus and produces a reference beam substantially parallel to the main beam. Detector modules containing optical switching devices and optical detectors are positioned in the path of the reference beam and are effective to produce an electrical output indicative of the alignment of the main beam. This electrical output drives servomotor operated adjustment screws to adjust the position of elements of the structure associated with the main beam to maintain alignment of the main beam. 5 figs.

  6. Alignment for CSR

    International Nuclear Information System (INIS)

    Wang Shoujin; Man Kaidi; Guo Yizhen; Cai Guozhu; Guo Yuhui

    2002-01-01

    The Cooled Storage Ring of the Heavy Ion Research Facility in Lanzhou (HIRFL-CSR) is one of China's major scientific projects. Its alignment is very difficult because of the very large area and the very high accuracy required. For the special case of HIRFL-CSR, some new methods and new instruments are used, including the construction of a survey control network, the use of a laser tracker, and a CSR alignment database system with applications developed to store and analyze data. The author describes the whole procedure of CSR alignment.

  7. Application of a Dynamic Fuzzy Search Algorithm to Determine Optimal Wind Plant Sizes and Locations in Iowa

    International Nuclear Information System (INIS)

    Milligan, M. R.; Factor, T.

    2001-01-01

    This paper illustrates a method for choosing the optimal mix of wind capacity at several geographically dispersed locations. The method is based on a dynamic fuzzy search algorithm that can be applied to different optimization targets. We illustrate the method using two objective functions for the optimization: maximum economic benefit and maximum reliability. We also illustrate the sensitivity of the fuzzy economic benefit solutions to small perturbations of the capacity selections at each wind site. We find that small changes in site capacity and/or location have small effects on the economic benefit provided by wind power plants. We use electric load and generator data from Iowa, along with high-quality wind-speed data collected by the Iowa Wind Energy Institute

  8. Comparison of K-means and fuzzy c-means algorithm performance for automated determination of the arterial input function.

    Science.gov (United States)

    Yin, Jiandong; Sun, Hongzan; Yang, Jiawen; Guo, Qiyong

    2014-01-01

    The arterial input function (AIF) plays a crucial role in the quantification of cerebral perfusion parameters. The traditional method for AIF detection is based on manual operation, which is time-consuming and subjective. Two automatic methods have been reported that are based on two frequently used clustering algorithms: fuzzy c-means (FCM) and K-means. However, it is still not clear which is better for AIF detection. Hence, we compared the performance of these two clustering methods using both simulated and clinical data. The results demonstrate that K-means analysis can yield more accurate and robust AIF results, although it takes longer to execute than the FCM method. We consider that this longer execution time is trivial relative to the total time required for image manipulation in a PACS setting, and is acceptable if an ideal AIF is obtained. Therefore, the K-means method is preferable to FCM in AIF detection.
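
    The clustering step behind such automatic AIF detection can be sketched with plain k-means. This is a toy illustration under stated assumptions, not the paper's implementation: the synthetic curves and the deterministic initialization are made up, and the AIF cluster is simply taken to be the one whose mean curve peaks highest.

```python
def kmeans(curves, k=2, iters=20):
    """Plain k-means over concentration-time curves (equal-length lists),
    with deterministic initialization from evenly spaced curves."""
    n = len(curves)
    centers = [list(curves[i * (n - 1) // (k - 1)]) for i in range(k)]
    labels = [0] * n
    for _ in range(iters):
        # Assignment step: nearest center by squared Euclidean distance.
        for i, c in enumerate(curves):
            labels[i] = min(range(k), key=lambda j: sum(
                (a - b) ** 2 for a, b in zip(c, centers[j])))
        # Update step: each center becomes the mean of its member curves.
        for j in range(k):
            members = [curves[i] for i in range(n) if labels[i] == j]
            if members:
                centers[j] = [sum(col) / len(members) for col in zip(*members)]
    return labels, centers

# Hypothetical curves: arterial-like signals peak higher than tissue-like ones.
arterial = [[0.0, 8.0 + d, 3.0, 1.0] for d in range(3)]
tissue = [[0.0, 2.0, 3.0 + 0.1 * d, 2.0] for d in range(5)]
labels, centers = kmeans(arterial + tissue, k=2)
aif_cluster = max(range(2), key=lambda j: max(centers[j]))  # high-peak cluster
```

    On real perfusion data the feature vectors would be the measured concentration-time curves per voxel, and the high-peak, early-arrival cluster would be taken as the arterial candidate.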

  9. Application of a Dynamic Fuzzy Search Algorithm to Determine Optimal Wind Plant Sizes and Locations in Iowa

    Energy Technology Data Exchange (ETDEWEB)

    Milligan, M. R., National Renewable Energy Laboratory; Factor, T., Iowa Wind Energy Institute

    2001-09-21

    This paper illustrates a method for choosing the optimal mix of wind capacity at several geographically dispersed locations. The method is based on a dynamic fuzzy search algorithm that can be applied to different optimization targets. We illustrate the method using two objective functions for the optimization: maximum economic benefit and maximum reliability. We also illustrate the sensitivity of the fuzzy economic benefit solutions to small perturbations of the capacity selections at each wind site. We find that small changes in site capacity and/or location have small effects on the economic benefit provided by wind power plants. We use electric load and generator data from Iowa, along with high-quality wind-speed data collected by the Iowa Wind Energy Institute.

  10. Alignment of the VISA Undulator

    International Nuclear Information System (INIS)

    Ruland, Robert E.

    2000-01-01

    As part of the R and D program towards a fourth generation light source, a Self-Amplified Spontaneous Emission (SASE) demonstration is being prepared. The Visible-Infrared SASE Amplifier (VISA) undulator is being installed at Brookhaven National Laboratory. The VISA undulator is an in-vacuum, 4-meter long, 1.8 cm period, pure-permanent magnet device, with a novel, strong focusing, permanent magnet FODO array included within the fixed, 6 mm undulator gap. The undulator is constructed of 99 cm long segments. To attain maximum SASE gain requires establishing overlap of electron and photon beams to within 50 µm rms. This imposes challenging tolerances on mechanical fabrication and magnetic field quality, and necessitates use of laser straightness interferometry for calibration and alignment of the magnetic axes of the undulator segments. This paper describes the magnetic centerline determination, and the fiducialization and alignment processes, which were performed to meet the tolerance goal

  11. Self-adapting denoising, alignment and reconstruction in electron tomography in materials science

    Energy Technology Data Exchange (ETDEWEB)

    Printemps, Tony, E-mail: tony.printemps@cea.fr [Université Grenoble Alpes, F-38000 Grenoble (France); CEA, LETI, MINATEC Campus, F-38054 Grenoble (France); Mula, Guido [Dipartimento di Fisica, Università di Cagliari, Cittadella Universitaria, S.P. 8km 0.700, 09042 Monserrato (Italy); Sette, Daniele; Bleuet, Pierre; Delaye, Vincent; Bernier, Nicolas; Grenier, Adeline; Audoit, Guillaume; Gambacorti, Narciso; Hervé, Lionel [Université Grenoble Alpes, F-38000 Grenoble (France); CEA, LETI, MINATEC Campus, F-38054 Grenoble (France)

    2016-01-15

    An automatic procedure for electron tomography is presented. This procedure is adapted for specimens that can be fashioned into a needle-shaped sample and has been evaluated on inorganic samples. It consists of self-adapting denoising, automatic and accurate alignment including detection and correction of tilt axis, and 3D reconstruction. We propose the exploitation of a large amount of information of an electron tomography acquisition to achieve robust and automatic mixed Poisson–Gaussian noise parameter estimation and denoising using undecimated wavelet transforms. The alignment is made by mixing three techniques, namely (i) cross-correlations between neighboring projections, (ii) common line algorithm to get a precise shift correction in the direction of the tilt axis and (iii) intermediate reconstructions to precisely determine the tilt axis and shift correction in the direction perpendicular to that axis. Mixing alignment techniques turns out to be very efficient and fast. Significant improvements are highlighted in both simulations and real data reconstructions of porous silicon in high angle annular dark field mode and agglomerated silver nanoparticles in incoherent bright field mode. 3D reconstructions obtained with minimal user-intervention present fewer artefacts and less noise, which permits easier and more reliable segmentation and quantitative analysis. After careful sample preparation and data acquisition, the denoising procedure, alignment and reconstruction can be achieved within an hour for a 3D volume of about a hundred million voxels, which is a step toward a more routine use of electron tomography. - Highlights: • Goal: perform a reliable and user-independent 3D electron tomography reconstruction. • Proposed method: self-adapting denoising and alignment prior to 3D reconstruction. • Noise estimation and denoising are performed using wavelet transform. • Tilt axis determination is done automatically as well as projection alignment.
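
    Step (i) above, cross-correlation between neighboring projections, can be illustrated in one dimension. This is a hedged sketch with made-up profiles, not the authors' code: the estimated shift is the one maximizing the overlap cross-correlation between the reference and the moving signal.

```python
def best_shift(ref, moving, max_shift=10):
    """Integer shift that best aligns `moving` to `ref`, found by maximizing
    the overlap cross-correlation (a 1-D toy version of projection alignment)."""
    def score(s):
        return sum(ref[i] * moving[i - s] for i in range(len(ref))
                   if 0 <= i - s < len(moving))
    return max(range(-max_shift, max_shift + 1), key=score)

# Hypothetical projection profiles: the second is the first moved by 3 samples.
signal = [0, 0, 1, 4, 9, 4, 1, 0, 0, 0]
shifted = signal[3:] + [0, 0, 0]
```

    In the full 2-D procedure the same idea applies per projection pair, and the remaining drift along and perpendicular to the tilt axis is then refined with the common-line and intermediate-reconstruction steps.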

  12. Introducing difference recurrence relations for faster semi-global alignment of long sequences.

    Science.gov (United States)

    Suzuki, Hajime; Kasahara, Masahiro

    2018-02-19

    The read length of single-molecule DNA sequencers is reaching 1 Mb. Popular alignment software tools widely used for analyzing such long reads often take advantage of single-instruction multiple-data (SIMD) operations to accelerate calculation of dynamic programming (DP) matrices in the Smith-Waterman-Gotoh (SWG) algorithm with a fixed alignment start position at the origin. Nonetheless, 16-bit or 32-bit integers are necessary for storing the values in a DP matrix when sequences to be aligned are long; this situation hampers the use of the full SIMD width of modern processors. We proposed a faster semi-global alignment algorithm, "difference recurrence relations," that runs more rapidly than the state-of-the-art algorithm by a factor of 2.1. Instead of calculating and storing all the values in a DP matrix directly, our algorithm computes and stores mainly the differences between the values of adjacent cells in the matrix. Although the SWG algorithm and our algorithm can output exactly the same result, our algorithm mainly involves 8-bit integer operations, enabling us to exploit the full width of SIMD operations (e.g., 32) on modern processors. We also developed a library, libgaba, so that developers can easily integrate our algorithm into alignment programs. Our novel algorithm and optimized library implementation will facilitate accelerating nucleotide long-read analysis algorithms that use pairwise alignment stages. The library is implemented in the C programming language and available at https://github.com/ocxtal/libgaba.
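
    The core idea, storing differences between adjacent DP cells instead of absolute scores, can be illustrated with unit-cost edit distance. This is a simplification: the paper targets the affine-gap SWG recurrences, not Levenshtein distance. For unit costs, horizontally adjacent cells differ by at most 1, so the differences fit comfortably in 8-bit integers while any absolute score remains recoverable by a prefix sum.

```python
def edit_dp(a, b):
    """Full DP matrix for unit-cost edit distance (Levenshtein)."""
    m = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(len(a) + 1):
        m[i][0] = i
    for j in range(len(b) + 1):
        m[0][j] = j
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            m[i][j] = min(m[i - 1][j] + 1,          # deletion
                          m[i][j - 1] + 1,          # insertion
                          m[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # (mis)match
    return m

a, b = "GATTACA", "GCATGCU"
m = edit_dp(a, b)
# Differences between horizontally adjacent cells always lie in {-1, 0, 1},
# so rows can be stored as tiny signed integers instead of wide scores.
hdiffs = [[m[i][j] - m[i][j - 1] for j in range(1, len(b) + 1)]
          for i in range(len(a) + 1)]
small = all(d in (-1, 0, 1) for row in hdiffs for d in row)
# Any absolute score is recoverable by prefix-summing a row of differences.
recovered = m[3][0] + sum(hdiffs[3])  # telescopes to m[3][len(b)]
```

    The production algorithm applies the same principle to the SWG recurrences with vectorized 8-bit arithmetic; this sketch only demonstrates why small integers suffice.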

  13. MUON DETECTORS: ALIGNMENT

    CERN Multimedia

    G.Gomez

    Since September, the muon alignment system shifted from a mode of hardware installation and commissioning to operation and data taking. All three optical subsystems (Barrel, Endcap and Link alignment) have recorded data before, during and after CRAFT, at different magnetic fields and during ramps of the magnet. This first data taking experience has several interesting goals: •    study detector deformations and movements under the influence of the huge magnetic forces; •    study the stability of detector structures and of the alignment system over long periods, •    study geometry reproducibility at equal fields (specially at 0T and 3.8T); •    reconstruct B=0T geometry and compare to nominal/survey geometries; •    reconstruct B=3.8T geometry and provide DT and CSC alignment records for CMSSW. However, the main goal is to recons...

  14. Alignment of CEBAF cryomodules

    International Nuclear Information System (INIS)

    Schneider, W.J.; Bisognano, J.J.; Fischer, J.

    1993-06-01

    CEBAF, the Continuous Electron Beam Accelerator Facility, when completed, will house a 4 GeV recirculating accelerator. Each of the accelerator's two linacs contains 160 superconducting radio frequency (SRF) 1497 MHz niobium cavities in 20 cryomodules. Alignment of the cavities within the cryomodule with respect to the beam axis is critical to achieving optimum accelerator performance. This paper discusses the rationale for the current specification on cavity mechanical alignment: 2 mrad (rms) applied to the 0.5 m active length cavities. We describe the tooling that was developed to achieve the tolerance at the time of cavity pair assembly, to preserve and integrate alignment during cryomodule assembly, and to translate alignment to appropriate installation in the beam line.

  15. Biaxial magnetic grain alignment

    International Nuclear Information System (INIS)

    Staines, M.; Genoud, J.-Y.; Mawdsley, A.; Manojlovic, V.

    2000-01-01

    Full text: We describe a dynamic magnetic grain alignment technique which can be used to produce YBCO thick films with a high degree of biaxial texture. The technique is, however, generally applicable to preparing ceramics or composite materials from granular materials with orthorhombic or lower crystal symmetry and is therefore not restricted to superconducting applications. Because magnetic alignment is a bulk effect, textured substrates are not required, unlike epitaxial coated tape processes such as RABiTS. We have used the technique to produce thick films of Y-247 on untextured silver substrates. After processing to Y-123 the films show a clear enhancement of critical current density relative to identically prepared untextured or uniaxially textured samples. We describe procedures for preparing materials using magnetic biaxial grain alignment with the emphasis on alignment in epoxy, which can give extremely high texture. X-ray rocking curves with FWHM of as little as 1-2 degrees have been measured

  16. Simultaneous and Direct Determination of Vancomycin and Cephalexin in Human Plasma by Using HPLC-DAD Coupled with Second-Order Calibration Algorithms

    Directory of Open Access Journals (Sweden)

    Le-Qian Hu

    2012-01-01

    Full Text Available A simple, rapid, and sensitive method for the simultaneous determination of vancomycin and cephalexin in human plasma was developed by using HPLC-DAD with second-order calibration algorithms. Instead of a complete chromatographic separation, mathematical separation was performed by using two trilinear decomposition algorithms, parallel factor analysis-alternating least squares (PARAFAC-ALS) and self-weighted alternating trilinear decomposition (SWATLD), coupled with high-performance liquid chromatography with DAD detection. The average recoveries attained from PARAFAC-ALS and SWATLD with a factor number of 4 (N=4) were 101±5% and 102±4% for vancomycin, and 96±3% and 97±3% for cephalexin in real human samples, respectively. The results from PARAFAC-ALS and SWATLD were statistically similar. The results indicated that the combination of HPLC-DAD detection with second-order calibration algorithms is a powerful tool to quantify the analytes of interest from overlapped chromatographic profiles for complex analysis of drugs in plasma.

  17. How accurate is anatomic limb alignment in predicting mechanical limb alignment after total knee arthroplasty?

    Science.gov (United States)

    Lee, Seung Ah; Choi, Sang-Hee; Chang, Moon Jong

    2015-10-27

    Anatomic limb alignment often differs from mechanical limb alignment after total knee arthroplasty (TKA). We sought to assess the accuracy, specificity, and sensitivity for each of three commonly used ranges for anatomic limb alignment (3-9°, 5-10° and 2-10°) in predicting an acceptable range (neutral ± 3°) for mechanical limb alignment after TKA. We also assessed whether the accuracy of anatomic limb alignment was affected by anatomic variation. This retrospective study included 314 primary TKAs. The alignment of the limb was measured with both anatomic and mechanical methods of measurement. We also measured anatomic variation, including the femoral bowing angle, tibial bowing angle, and neck-shaft angle of the femur. All angles were measured on the same full-length standing anteroposterior radiographs. The accuracy, specificity, and sensitivity for each range of anatomic limb alignment were calculated and compared using mechanical limb alignment as the reference standard. The associations between the accuracy of anatomic limb alignment and anatomic variation were also determined. The range of 2-10° for anatomic limb alignment showed the highest accuracy, but it was only 73 % (3-9°, 65 %; 5-10°, 67 %). The specificity of the 2-10° range was 81 %, which was higher than that of the other ranges (3-9°, 69 %; 5-10°, 67 %). However, the sensitivity of the 2-10° range to predict varus malalignment was only 16 % (3-9°, 35 %; 5-10°, 68 %). In addition, the sensitivity of the 2-10° range to predict valgus malalignment was only 43 % (3-9°, 71 %; 5-10°, 43 %). The accuracy of anatomic limb alignment was lower for knees with greater femoral (odds ratio = 1.2) and tibial (odds ratio = 1.2) bowing. Anatomic limb alignment did not accurately predict mechanical limb alignment after TKA, and its accuracy was affected by anatomic variation. Thus, alignment after TKA should be assessed by measuring mechanical alignment rather than anatomic alignment.

  18. Genetic algorithm-based wavelength selection in multicomponent spectrophotometric determination by PLS: Application on sulfamethoxazole and trimethoprim mixture in bovine milk

    Directory of Open Access Journals (Sweden)

    Givianrad Hadi Mohammad

    2013-01-01

    Full Text Available The simultaneous determination of sulfamethoxazole (SMX) and trimethoprim (TMP) mixtures in bovine milk by spectrophotometric methods is a difficult problem in analytical chemistry, due to spectral interferences. By means of multivariate calibration methods, such as partial least squares (PLS) regression, it is possible to obtain a model adjusted to the concentration values of the mixtures used in the calibration range. The genetic algorithm (GA) is a suitable method for selecting wavelengths for PLS calibration of mixtures with almost identical spectra, without loss of prediction capacity, using the spectrophotometric method. In this study, the calibration model was based on absorption spectra in the 200-400 nm range for 25 different mixtures of SMX and TMP. Calibration matrices were formed from samples containing 0.25-20 and 0.3-21 μg mL-1 for SMX and TMP, at pH=10, respectively. The root mean squared errors of deviation (RMSED) for SMX and TMP with PLS and genetic algorithm partial least squares (GA-PLS) were 0.242, 0.066 μg mL-1 and 0.074, 0.027 μg mL-1, respectively. This procedure allowed the simultaneous determination of SMX and TMP in synthetic and real samples, and the good reliability of the determination was proved.
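
    The wavelength-selection idea in this record can be sketched in a few lines: a genetic algorithm evolves boolean masks over wavelengths and keeps the subsets that best predict concentration. The sketch below is illustrative only; it substitutes an ordinary least-squares fit plus a parsimony penalty for the paper's PLS regression, and the toy spectra, seeds, and penalty weight are invented.

```python
import random
import numpy as np

def ga_select_wavelengths(X, y, n_pop=30, n_gen=40, penalty=0.02, seed=1):
    """Tiny genetic algorithm for wavelength (column) selection. Individuals
    are boolean masks over wavelengths; fitness is the training RMSE of an
    ordinary least-squares fit on the selected columns (a simple stand-in
    for PLS) plus a parsimony penalty per selected wavelength."""
    rng = random.Random(seed)
    n_wl = X.shape[1]

    def fitness(mask):
        cols = [i for i in range(n_wl) if mask[i]]
        if not cols:
            return float("inf")
        coef, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
        rmse = float(np.sqrt(np.mean((X[:, cols] @ coef - y) ** 2)))
        return rmse + penalty * len(cols)

    pop = [[rng.random() < 0.5 for _ in range(n_wl)] for _ in range(n_pop)]
    for _ in range(n_gen):
        pop.sort(key=fitness)
        survivors = pop[: n_pop // 2]            # elitist selection
        children = []
        while len(children) < n_pop - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_wl)         # one-point crossover
            child = a[:cut] + b[cut:]
            j = rng.randrange(n_wl)              # one-point mutation
            child[j] = not child[j]
            children.append(child)
        pop = survivors + children
    best = min(pop, key=fitness)
    return best, fitness(best)

# Toy "spectra": 25 mixtures, 10 wavelengths, only columns 0 and 3 informative.
gen = np.random.default_rng(0)
X = gen.normal(size=(25, 10))
y = 2.0 * X[:, 0] - X[:, 3] + 0.05 * gen.normal(size=25)
best, fit = ga_select_wavelengths(X, y)
print(sorted(i for i, keep in enumerate(best) if keep))
```

    The parsimony penalty matters: training RMSE alone never increases when a column is added, so without it the GA would simply select every wavelength.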

  19. A Novel Algorithm for Determining the Contextual Characteristics of Movement Behaviors by Combining Accelerometer Features and Wireless Beacons: Development and Implementation.

    Science.gov (United States)

    Magistro, Daniele; Sessa, Salvatore; Kingsnorth, Andrew P; Loveday, Adam; Simeone, Alessandro; Zecca, Massimiliano; Esliger, Dale W

    2018-04-20

    Unfortunately, global efforts to promote "how much" physical activity people should be undertaking have been largely unsuccessful. Given the difficulty of achieving a sustained lifestyle behavior change, many scientists are reexamining their approaches. One such approach is to focus on understanding the context of the lifestyle behavior (ie, where, when, and with whom) with a view to identifying promising intervention targets. The aim of this study was to develop and implement an innovative algorithm to determine "where" physical activity occurs using proximity sensors coupled with a widely used physical activity monitor. A total of 19 Bluetooth beacons were placed in fixed locations within a multilevel, mixed-use building. In addition, 4 receiver-mode sensors were fitted to the wrists of a roving technician who moved throughout the building. The experiment was divided into 4 trials with different walking speeds and dwelling times. The data were analyzed using an original and innovative algorithm based on graph generation and Bayesian filters. Linear regression models revealed significant correlations between beacon-derived location and ground-truth tracking time, with intraclass correlations suggesting a high goodness of fit (R2 = .9780). The algorithm reliably predicted indoor location, and the robustness of the algorithm improved with a longer dwelling time (>100 s; error […] location of an individual within an indoor environment. This novel implementation of "context sensing" will facilitate a wealth of new research questions on promoting healthy behavior change, the optimization of patient care, and efficient health care planning (eg, patient-clinician flow, patient-clinician interaction). ©Daniele Magistro, Salvatore Sessa, Andrew P Kingsnorth, Adam Loveday, Alessandro Simeone, Massimiliano Zecca, Dale W Esliger. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 20.04.2018.
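
    The record's combination of graph generation and Bayesian filtering can be illustrated with a minimal discrete (histogram) Bayes filter over the rooms of a building graph. The three-room layout, transition matrix, and beacon likelihoods below are invented for illustration and are not the paper's model.

```python
import numpy as np

def bayes_filter_step(belief, transition, likelihood):
    """One predict/update cycle of a discrete Bayes filter over graph nodes:
    propagate the belief along the room graph, then reweight each room by
    the likelihood of the observed beacon signal."""
    predicted = transition.T @ belief        # motion model (predict)
    posterior = predicted * likelihood       # observation model (update)
    return posterior / posterior.sum()

# Three rooms in a line, 0 -- 1 -- 2; the walker stays or moves to a neighbour.
T = np.array([[0.6, 0.4, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.4, 0.6]])
belief = np.array([1/3, 1/3, 1/3])           # unknown starting room
L = np.array([0.05, 0.25, 0.70])             # assumed RSSI likelihood per room
for _ in range(3):                           # beacon in room 2 heard three times
    belief = bayes_filter_step(belief, T, L)
print(int(belief.argmax()))                  # 2: the filter localizes to room 2
```

    Longer dwelling times correspond to more update cycles against the same beacon, which is why the record sees the belief (and hence the prediction) sharpen as dwelling time grows.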

  20. Eigenvectors determination of the ribosome dynamics model during mRNA translation using the Kleene Star algorithm

    Science.gov (United States)

    Ernawati; Carnia, E.; Supriatna, A. K.

    2018-03-01

    Eigenvalues and eigenvectors in max-plus algebra have the same important role as eigenvalues and eigenvectors in conventional algebra. In max-plus algebra, eigenvalues and eigenvectors are useful for knowing dynamics of the system such as in train system scheduling, scheduling production systems and scheduling learning activities in moving classes. In the translation of proteins in which the ribosome move uni-directionally along the mRNA strand to recruit the amino acids that make up the protein, eigenvalues and eigenvectors are used to calculate protein production rates and density of ribosomes on the mRNA. Based on this, it is important to examine the eigenvalues and eigenvectors in the process of protein translation. In this paper an eigenvector formula is given for a ribosome dynamics during mRNA translation by using the Kleene star algorithm in which the resulting eigenvector formula is simpler and easier to apply to the system than that introduced elsewhere. This paper also discusses the properties of the matrix B_λ^{⊗n} of the model. Among the important properties, it always has the same elements in the first column for n = 1, 2,… if the eigenvalue is the time of initiation, λ = τ_in, and the column is the eigenvector of the model corresponding to λ.
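
    The Kleene-star route to max-plus eigenvectors mentioned in this record can be sketched numerically: subtract the eigenvalue λ from the matrix entrywise, form the star B* = E ⊕ B ⊕ B² ⊕ …, and read an eigenvector off a column belonging to a critical node. The 2x2 matrix below is a toy example, not the ribosome model.

```python
import numpy as np

def mp_mul(A, B):
    """Max-plus matrix product: (A (x) B)[i, j] = max_k (A[i, k] + B[k, j])."""
    C = np.full((A.shape[0], B.shape[1]), -np.inf)
    for i in range(A.shape[0]):
        for j in range(B.shape[1]):
            C[i, j] = np.max(A[i, :] + B[:, j])
    return C

def kleene_star(B):
    """B* = E (+) B (+) B^2 (+) ..., where E is the max-plus identity.
    The series stabilizes when all circuit weights of B are <= 0, which is
    the case after subtracting the eigenvalue from the matrix."""
    n = B.shape[0]
    E = np.full((n, n), -np.inf)
    np.fill_diagonal(E, 0.0)
    star, power = E.copy(), E.copy()
    for _ in range(n):                      # powers beyond n-1 add nothing new
        power = mp_mul(power, B)
        star = np.maximum(star, power)
    return star

# Toy 2x2 matrix; the max-plus eigenvalue is the maximum cycle mean:
# cycles 1->1 (weight 0), 2->2 (1), 1->2->1 ((3+2)/2 = 2.5), so lam = 2.5.
A = np.array([[0.0, 3.0],
              [2.0, 1.0]])
lam = 2.5
star = kleene_star(A - lam)                 # B_lam: subtract lam entrywise
v = star[:, 0]                              # column of a critical node
print(v)                                    # eigenvector of A for lam
print(mp_mul(A, v.reshape(-1, 1)).ravel())  # A (x) v equals lam + v
```
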

  1. A pull-back algorithm to determine the unloaded vascular geometry in anisotropic hyperelastic AAA passive mechanics.

    Science.gov (United States)

    Riveros, Fabián; Chandra, Santanu; Finol, Ender A; Gasser, T Christian; Rodriguez, Jose F

    2013-04-01

    Biomechanical studies on abdominal aortic aneurysms (AAA) seek to provide for better decision criteria to undergo surgical intervention for AAA repair. More accurate results can be obtained by using appropriate material models for the tissues along with accurate geometric models and more realistic boundary conditions for the lesion. However, patient-specific AAA models are generated from gated medical images in which the artery is under pressure. Therefore, identification of the AAA zero pressure geometry would allow for a more realistic estimate of the aneurysmal wall mechanics. This study proposes a novel iterative algorithm to find the zero pressure geometry of patient-specific AAA models. The methodology considers the anisotropic hyperelastic behavior of the aortic wall and its thickness, and accounts for the presence of the intraluminal thrombus. Results on 12 patient-specific AAA geometric models indicate that the procedure is computationally tractable and efficient, and preserves the global volume of the model. In addition, a comparison of the peak wall stress computed with the zero pressure and CT-based geometries during systole indicates that computations using CT-based geometric models underestimate the peak wall stress by 59 ± 64 and 47 ± 64 kPa for the isotropic and anisotropic material models of the arterial wall, respectively.
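
    The iterative idea behind such zero-pressure (pull-back) algorithms can be sketched as a backward-displacement fixed point: inflate the current guess with the forward model and correct it by the residual against the imaged geometry. The scalar `inflate` model below is a hypothetical stand-in for a full anisotropic hyperelastic finite-element solve.

```python
def find_unloaded_geometry(x_image, inflate, n_iter=50):
    """Backward-displacement fixed-point iteration: start from the imaged
    (pressurized) coordinates, apply the forward model to the current guess
    of the unloaded geometry, and correct the guess by the residual until
    inflate(X) reproduces the imaged geometry."""
    X = list(x_image)                        # initial guess: imaged geometry
    for _ in range(n_iter):
        x = inflate(X)                       # forward solve on current guess
        X = [Xi - (xi - ti) for Xi, xi, ti in zip(X, x, x_image)]
    return X

def inflate(X):
    """Hypothetical forward model: nonlinear, stiffening radial inflation.
    In the paper this role is played by a pressurized FE simulation."""
    return [Xi + 0.15 * Xi / (1.0 + 0.1 * Xi) for Xi in X]

x_image = [10.0, 12.5, 15.0]                 # toy "CT-based" radial coordinates
X0 = find_unloaded_geometry(x_image, inflate)
residual = max(abs(a - b) for a, b in zip(inflate(X0), x_image))
print(residual < 1e-8)                       # True: inflating X0 recovers the image
```

    The iteration converges here because the correction map is a contraction for this mild forward model; for real FE models, convergence is checked on the residual between the inflated guess and the imaged geometry.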

  2. Determination of foodborne pathogenic bacteria by multiplex PCR-microchip capillary electrophoresis with genetic algorithm-support vector regression optimization.

    Science.gov (United States)

    Li, Yongxin; Li, Yuanqian; Zheng, Bo; Qu, Lingli; Li, Can

    2009-06-08

    A rapid and sensitive method based on microchip capillary electrophoresis with condition optimization by genetic algorithm-support vector regression (GA-SVR) was developed and applied to simultaneous analysis of multiplex PCR products of four foodborne pathogenic bacteria. Four pairs of oligonucleotide primers were designed to exclusively amplify the targeted genes of Vibrio parahemolyticus, Salmonella, Escherichia coli (E. coli) O157:H7, and Shigella, and the quadruplex PCR parameters were optimized. At the same time, GA-SVR was employed to optimize the separation conditions of DNA fragments in microchip capillary electrophoresis. The proposed method was applied to simultaneously detect the multiplex PCR products of the four foodborne pathogenic bacteria under the optimal conditions within 8 min. The levels of detection were as low as 1.2 x 10(2) CFU mL(-1) of Vibrio parahemolyticus, 2.9 x 10(2) CFU mL(-1) of Salmonella, 8.7 x 10(1) CFU mL(-1) of E. coli O157:H7 and 5.2 x 10(1) CFU mL(-1) of Shigella, respectively. The relative standard deviation of migration time was in the range of 0.74-2.09%. The results demonstrated that good resolution and shorter analysis time were achieved owing to the application of the multivariate strategy. This study offers an efficient alternative to routine foodborne pathogenic bacteria detection in a fast, reliable, and sensitive way.

  3. Validation of an Arab name algorithm in the determination of Arab ancestry for use in health research.

    Science.gov (United States)

    El-Sayed, Abdulrahman M; Lauderdale, Diane S; Galea, Sandro

    2010-12-01

    Data about Arab-Americans, a growing ethnic minority, are not routinely collected in vital statistics, registry, or administrative data in the USA. The difficulty in identifying Arab-Americans using publicly available data sources is a barrier to health research about this group. Here, we validate an empirically based probabilistic Arab name algorithm (ANA) for identifying Arab-Americans in health research. We used data from all Michigan birth certificates between 2000 and 2005. Fathers' surnames and mothers' maiden names were coded as Arab or non-Arab according to the ANA. We calculated sensitivity, specificity, and positive (PPV) and negative predictive values (NPV) of Arab ethnicity inferred using the ANA as compared to self-reported Arab ancestry. Statewide, the ANA had a specificity of 98.9%, a sensitivity of 50.3%, a PPV of 57.0%, and an NPV of 98.6%. Both the false-positive and false-negative rates were higher among men than among women. As the concentration of Arab-Americans in a study locality increased, the ANA false-positive rate increased and false-negative rate decreased. The ANA is highly specific but only moderately sensitive as a means of detecting Arab ancestry. Future research should compare health characteristics among Arab-American populations defined by Arab ancestry and those defined by the ANA.
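
    The four validity measures reported here come straight from a 2x2 confusion matrix against the self-reported reference. The sketch below uses illustrative counts only, chosen so that sensitivity and specificity match the reported 50.3% and 98.9%; with two equal-sized groups, the resulting PPV and NPV necessarily differ from the statewide 57.0% and 98.6%, which reflect the low prevalence of Arab ancestry in the full birth-certificate data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion matrix,
    treating self-reported ancestry as the reference standard."""
    return {
        "sensitivity": tp / (tp + fn),   # flagged among true Arab ancestry
        "specificity": tn / (tn + fp),   # unflagged among non-Arab ancestry
        "ppv": tp / (tp + fp),           # true Arab ancestry among flagged
        "npv": tn / (tn + fn),           # true non-Arab among unflagged
    }

# Illustrative counts: 1000 names per group, tuned to the reported
# sensitivity (503/1000) and specificity (989/1000).
m = diagnostic_metrics(tp=503, fp=11, fn=497, tn=989)
print(m["sensitivity"], m["specificity"])    # 0.503 0.989
```
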

  4. Validation of an Arab names algorithm in the determination of Arab ancestry for use in health research

    Science.gov (United States)

    El-Sayed, Abdulrahman M.; Lauderdale, Diane S.; Galea, Sandro

    2010-01-01

    Objective Data about Arab-Americans, a growing ethnic minority, are not routinely collected in vital statistics, registry, or administrative data in the US. The difficulty in identifying Arab-Americans using publicly available data sources is a barrier to health research about this group. Here, we validate an empirically based, probabilistic Arab name algorithm (ANA) for identifying Arab-Americans in health research. Design We used data from all Michigan birth certificates between 2000-2005. Fathers’ surnames and mothers’ maiden names were coded as Arab or non-Arab according to the ANA. We calculated sensitivity, specificity, and positive (PPV) and negative predictive values (NPV) of Arab ethnicity inferred using the ANA as compared to self-reported Arab ancestry. Results State-wide, the ANA had a specificity of 98.9%, a sensitivity of 50.3%, a PPV of 57.0%, and an NPV of 98.6%. Both the false-positive and false-negative rates were higher among men than among women. As the concentration of Arab-Americans in a study locality increased, the ANA false-positive rate increased and false-negative rate decreased. Conclusion The ANA is highly specific but only moderately sensitive as a means of detecting Arab ancestry. Future research should compare health characteristics among Arab-American populations defined by Arab ancestry and those defined by the ANA. PMID:20845117

  5. Direct determination of energy level alignment and charge transport at metal-Alq3 interfaces via ballistic-electron-emission spectroscopy.

    Science.gov (United States)

    Jiang, J S; Pearson, J E; Bader, S D

    2011-04-15

    Using ballistic-electron-emission spectroscopy (BEES), we directly determined the energy barrier for electron injection at clean interfaces of Alq3 with Al and Fe to be 2.1 and 2.2 eV, respectively. We quantitatively modeled the sub-barrier BEES spectra with an accumulated space charge layer, and found that the transport of nonballistic electrons is consistent with random hopping over the injection barrier.

  6. Desktop aligner for fabrication of multilayer microfluidic devices.

    Science.gov (United States)

    Li, Xiang; Yu, Zeta Tak For; Geraldo, Dalton; Weng, Shinuo; Alve, Nitesh; Dun, Wu; Kini, Akshay; Patel, Karan; Shu, Roberto; Zhang, Feng; Li, Gang; Jin, Qinghui; Fu, Jianping

    2015-07-01

    Multilayer assembly is a commonly used technique to construct multilayer polydimethylsiloxane (PDMS)-based microfluidic devices with complex 3D architecture and connectivity for large-scale microfluidic integration. Accurate alignment of structure features on different PDMS layers before their permanent bonding is critical in determining the yield and quality of assembled multilayer microfluidic devices. Herein, we report a custom-built desktop aligner capable of both local and global alignments of PDMS layers covering a broad size range. Two digital microscopes were incorporated into the aligner design to allow accurate global alignment of PDMS structures up to 4 in. in diameter. Both local and global alignment accuracies of the desktop aligner were determined to be about 20 μm cm-1. To demonstrate its utility for fabrication of integrated multilayer PDMS microfluidic devices, we applied the desktop aligner to achieve accurate alignment of different functional PDMS layers in multilayer microfluidics including an organs-on-chips device as well as a microfluidic device integrated with vertical passages connecting channels located in different PDMS layers. Owing to its convenient operation, high accuracy, low cost, light weight, and portability, the desktop aligner is useful for microfluidic researchers to achieve rapid and accurate alignment for generating multilayer PDMS microfluidic devices.

  7. Alignment for new Subaru ring

    International Nuclear Information System (INIS)

    Zhang, Ch.; Matsui, S.; Hashimoto, S.

    1999-01-01

    The New SUBARU is a synchrotron light source being constructed at the SPring-8 site. The main facility is a 1.5 GeV electron storage ring that provides a light beam in the region from VUV to soft X-ray using SPring-8's 1 GeV linac as an injector. The ring, with a circumference of about 119 meters, is composed of six bending cells. Each bending cell has two normal dipoles of 34 degrees and one inverse dipole of -8 degrees. The ring has six straight sections: two very long straight sections for an 11-m long undulator and an optical klystron, and four short straight sections for a 2.3-m undulator, a super-conducting wiggler, rf cavity and injection, etc. The magnets of the storage ring are composed of 12 dipoles (BMs), 6 inverse dipoles (BIs), 56 quadrupoles and 44 sextupoles, etc. For the magnet alignment, positions of the dipoles (the BMs and BIs) are determined by a network survey method. The multipoles, which are mounted on girders between the dipoles, are aligned with a laser-CCD camera system. This article presents the methodology used to position the different components and particularly to assure the precise alignment of the multipoles. (authors)

  8. Understanding the critical challenges of self-aligned octuple patterning

    Science.gov (United States)

    Yu, Ji; Xiao, Wei; Kang, Weiling; Chen, Yijian

    2014-03-01

    In this paper, we present a thorough investigation of self-aligned octuple patterning (SAOP) process characteristics, cost structure, integration challenges, and layout decomposition. The statistical characteristics of SAOP CD variations such as multi-modality are analyzed and contributions from various features to CDU and MTT (mean-to-target) budgets are estimated. The gap space is found to have the worst CDU+MTT performance and is used to determine the required overlay accuracy to ensure a satisfactory edge-placement yield of a cut process. Moreover, we propose a 5-mask positive-tone SAOP (pSAOP) process for memory FEOL patterning and a 3-mask negative-tone SAOP (nSAOP) process for logic BEOL patterning. The potential challenges of 2-D SAOP layout decomposition for BEOL applications are identified. Possible decomposition approaches are explored and the functionality of several developed algorithms is verified using 2-D layout examples from an Open Cell Library.

  9. Spike Pattern Recognition for Automatic Collimation Alignment

    CERN Document Server

    Azzopardi, Gabriella; Salvachua Ferrando, Belen Maria; Mereghetti, Alessio; Redaelli, Stefano; CERN. Geneva. ATS Department

    2017-01-01

    The LHC makes use of a collimation system to protect its sensitive equipment by intercepting potentially dangerous beam halo particles. The appropriate collimator settings to protect the machine against beam losses rely on a very precise alignment of all the collimators with respect to the beam. The beam center at each collimator is then found by touching the beam halo using an alignment procedure. Until now, in order to determine whether a collimator is aligned with the beam or not, a user has been required to follow the collimator’s BLM loss data and detect spikes. A machine learning (ML) model was trained to automatically recognize spikes when a collimator is aligned. The model was loosely integrated with the alignment implementation to determine the classification performance and reliability, without affecting the alignment process itself. The model was tested on a number of collimators during this MD, and it was able to output the classifications in real time.

  10. International Business And Aligning CSR

    Directory of Open Access Journals (Sweden)

    Daniel Miret

    2017-11-01

    Full Text Available The labor relationship between the employer and the workers is evaluated and directed by labor rights, a group of legal rights derived from human rights. Labor rights are more precisely defined than CSR, as CSR is based on the perspective and point of view of a given corporation. From this perspective, implementing workers' labor rights is more difficult than implementing CSR. If an international corporation is able to align CSR with labor laws, the friction between the corporation and its employees is likely to be reduced. There is a need to explore whether multinational corporations can align CSR with labor rights and employee initiatives in the global market. In this case, the analysis focuses on China, Brazil, and India as the reference countries, with cross-sectional secondary data obtained from a survey of existing sources on the internet. The pertinent question is whether multinational corporations can be successful while aligning CSR (Corporate Social Responsibility) with labor rights and employee initiatives in a competitive global market, based on that cross-sectional data. The findings reveal that upholding labor rights largely determines the morale of employees and their will to participate in the growth and development of a given business, both locally and internationally. Notably, the continued change of CSR has resulted in the replacement of management- and government-dominated trade unions with more democratic workers' unions that pay attention to the initiatives of the workers. The combination of an internal code of conduct with workers' associations, labor associations, and movements is one of the credible routes showing that CSR can be aligned with labor rights.

  11. A Practical Guide to Multi-image Alignment

    OpenAIRE

    Aguerrebere, Cecilia; Delbracio, Mauricio; Bartesaghi, Alberto; Sapiro, Guillermo

    2018-01-01

    Multi-image alignment, bringing a group of images into common register, is a ubiquitous problem and the first step of many applications in a wide variety of domains. As a result, a great amount of effort is being invested in developing efficient multi-image alignment algorithms. Little has been done, however, to answer fundamental practical questions such as: what is the comparative performance of existing methods? is there still room for improvement? under which conditions should one techni...

  12. Incremental Yield of Including Determine-TB LAM Assay in Diagnostic Algorithms for Hospitalized and Ambulatory HIV-Positive Patients in Kenya.

    Science.gov (United States)

    Huerga, Helena; Ferlazzo, Gabriella; Bevilacqua, Paolo; Kirubi, Beatrice; Ardizzoni, Elisa; Wanjala, Stephen; Sitienei, Joseph; Bonnet, Maryline

    2017-01-01

    The Determine-TB LAM assay is a urine point-of-care test useful for TB diagnosis in HIV-positive patients. We assessed the incremental diagnostic yield of adding LAM to algorithms based on clinical signs, sputum smear-microscopy, chest X-ray and Xpert MTB/RIF in HIV-positive patients with symptoms of pulmonary TB (PTB). This was a prospective observational cohort of ambulatory (either severely ill, or with CD4 <200 cells/μl, or with a body mass index <17 kg/m2) and hospitalized symptomatic HIV-positive adults in Kenya. The incremental diagnostic yield of adding LAM was the difference in the proportion of confirmed TB patients (positive Xpert or MTB culture) diagnosed by the algorithm with LAM compared to the algorithm without LAM. The multivariable mortality model was adjusted for age, sex, clinical severity, BMI, CD4, ART initiation, LAM result and TB confirmation. Among 474 patients included, 44.1% were severely ill, 69.6% had CD4 <200 cells/μl, 59.9% had initiated ART, and 23.2% could not produce sputum. LAM, smear-microscopy, Xpert and culture in sputum were positive in 39.0% (185/474), 21.6% (76/352), 29.1% (102/350) and 39.7% (92/232) of the patients tested, respectively. Of 156 patients with confirmed TB, 65.4% were LAM positive. Of those classified as non-TB, 84.0% were LAM negative. Adding LAM increased the diagnostic yield of the algorithms by 36.6%, from 47.4% (95%CI:39.4-55.6) to 84.0% (95%CI:77.3-89.4), when using clinical signs and X-ray; by 19.9%, from 62.2% (95%CI:54.1-69.8) to 82.1% (95%CI:75.1-87.7), when using clinical signs and microscopy; and by 13.4%, from 74.4% (95%CI:66.8-81.0) to 87.8% (95%CI:81.6-92.5), when using clinical signs and Xpert. LAM-positive patients had an increased risk of 2-month mortality (aOR:2.7; 95%CI:1.5-4.9). LAM should be included in TB diagnostic algorithms in parallel to microscopy or Xpert request for HIV-positive patients either ambulatory (severely ill or CD4 <200 cells/μl) or hospitalized. LAM allows same-day treatment initiation in patients at
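
    The incremental yield figures can be reproduced directly from the definition given in the record, i.e. the difference between the proportions of confirmed TB patients diagnosed with and without LAM:

```python
def incremental_yield(p_without, p_with):
    """Incremental diagnostic yield of adding LAM: the difference in the
    proportion of confirmed TB patients diagnosed by the algorithm with
    LAM versus the algorithm without LAM."""
    return p_with - p_without

# The three algorithm arms reported in the record (proportions of the
# 156 confirmed TB patients diagnosed without vs with LAM).
arms = {
    "clinical signs + X-ray":      (0.474, 0.840),
    "clinical signs + microscopy": (0.622, 0.821),
    "clinical signs + Xpert":      (0.744, 0.878),
}
for name, (p0, p1) in arms.items():
    print(f"{name}: +{incremental_yield(p0, p1):.1%}")  # +36.6%, +19.9%, +13.4%
```
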

  13. DNA motif alignment by evolving a population of Markov chains.

    Science.gov (United States)

    Bi, Chengpeng

    2009-01-30

    Deciphering cis-regulatory elements or de novo motif-finding in genomes still remains elusive although much algorithmic effort has been expended. Markov chain Monte Carlo (MCMC) methods such as Gibbs motif samplers have been widely employed to solve the de novo motif-finding problem through sequence local alignment. Nonetheless, the MCMC-based motif samplers still suffer from local maxima, like EM. Therefore, as a prerequisite for finding good local alignments, these motif algorithms are often independently run a multitude of times, but without information exchange between different chains. Hence a new algorithm design enabling such information exchange would be worthwhile. This paper presents a novel motif-finding algorithm that evolves a population of Markov chains with information exchange (PMC), each of which is initialized as a random alignment and run by the Metropolis-Hastings sampler (MHS). Each chain is progressively updated through a series of stochastically sampled local alignments. Explicitly, the PMC motif algorithm performs stochastic sampling as specified by a population-based proposal distribution rather than individual ones, and adaptively evolves the population as a whole towards a global maximum. The alignment information exchange is accomplished by taking advantage of the pooled motif site distributions. A distinct method for running multiple independent Markov chains (IMC) without information exchange, dubbed the IMC motif algorithm, was also devised for comparison with its PMC counterpart. Experimental studies demonstrate that the performance could be improved if pooled information were used to run a population of motif samplers. The new PMC algorithm was able to improve the convergence and outperformed other popular algorithms tested using simulated and biological motif sequences.
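
    The Metropolis-Hastings step that each chain in such a population performs can be sketched for a single motif occurrence: propose a new start position and accept it with probability min(1, likelihood ratio). The PWM, sequence, and planted site below are toy assumptions, and this is a single chain without the population-level information exchange that PMC adds.

```python
import math
import random

def mh_motif_positions(seq, pwm, w, n_steps=2000, seed=0):
    """Metropolis-Hastings over the start position of one motif occurrence:
    propose a uniform random start and accept it with probability
    min(1, likelihood ratio) of the window under the PWM versus a uniform
    background. Returns the visit counts per position."""
    rng = random.Random(seed)
    n_pos = len(seq) - w + 1

    def loglik(pos):
        return sum(math.log(pwm[i][seq[pos + i]]) - math.log(0.25)
                   for i in range(w))

    pos = rng.randrange(n_pos)
    counts = [0] * n_pos
    for _ in range(n_steps):
        prop = rng.randrange(n_pos)          # symmetric uniform proposal
        if math.log(rng.random()) < loglik(prop) - loglik(pos):
            pos = prop                       # accept the proposed position
        counts[pos] += 1
    return counts

# Toy: a strong 4-mer motif "TATA" planted at position 5 of the sequence.
pwm = [{b: (0.91 if b == m else 0.03) for b in "ACGT"} for m in "TATA"]
seq = "GCGCGTATACGCGC"
counts = mh_motif_positions(seq, pwm, w=4)
print(counts.index(max(counts)))             # 5: the chain settles on the site
```

    In a full motif sampler the PWM itself is re-estimated from the current alignment; PMC's contribution is to pool the site distributions across chains into the proposal rather than proposing uniformly per chain.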

  14. Design of practical alignment device in KSTAR Thomson diagnostic

    Energy Technology Data Exchange (ETDEWEB)

    Lee, J. H., E-mail: jhlee@nfri.re.kr [National Fusion Research Institute, Daejeon (Korea, Republic of); University of Science and Technology (UST), Daejeon (Korea, Republic of); Lee, S. H. [National Fusion Research Institute, Daejeon (Korea, Republic of); Yamada, I. [National Institute for Fusion Science, Toki (Japan)

    2016-11-15

    The precise alignment of the laser path and collection optics in Thomson scattering measurements is essential for accurately determining electron temperature and density in tokamak experiments. For the last five years, during the development stage, the KSTAR tokamak’s Thomson diagnostic system has had alignment fibers installed in its optical collection modules, but these lacked a proper alignment detection system. In order to address these difficulties, an alignment-verifying detection device between the lasers and the object field of the collection optics was developed. The alignment detection device utilizes two types of filters: a narrow-band filter at the laser wavelength, and a broad-band filter for the Thomson scattering signal. Four such alignment detection devices have been successfully developed for the KSTAR Thomson scattering system this year, and these will be tested in KSTAR experiments in 2016. In this paper, we present the newly developed alignment detection device for KSTAR’s Thomson scattering diagnostics.

  15. MUON DETECTORS: ALIGNMENT

    CERN Multimedia

    Gervasio Gomez

    2012-01-01

      The new alignment for the DT chambers has been successfully used in physics analysis starting with the 52X Global Tag. The remaining main areas of development over the next few months will be preparing a new track-based CSC alignment and producing realistic APEs (alignment position errors) and MC misalignment scenarios to match the latest muon alignment constants. Work on these items has been delayed from the intended timeline, mostly due to a large involvement of the muon alignment man-power in physics analyses over the first half of this year. As CMS keeps probing higher and higher energies, special attention must be paid to the reconstruction of very-high-energy muons. Recent muon POG reports from mid-June show a φ-dependence in curvature bias in Monte Carlo samples. This bias is observed already at the tracker level, where it is constant with muon pT, while it grows with pT as muon chamber information is added to the tracks. Similar studies show a much smaller effect in data, at le...

  16. MUON DETECTORS: ALIGNMENT

    CERN Multimedia

    G. Gomez

    2010-01-01

    For the last three months, the Muon Alignment group has focussed on providing a new, improved set of alignment constants for the end-of-year data reprocessing. These constants were delivered on time and approved by the CMS physics validation team on November 17. The new alignment incorporates several improvements over the previous one from March for nearly all sub-systems. Motivated by the loss of information from a hardware failure in May (an entire MAB was lost), the optical barrel alignment has moved from a modular, super-plane reconstruction, to a full, single loop calculation of the entire geometry for all DTs in stations 1, 2 and 3. This makes better use of the system redundancy, mitigating the effect of the information loss. Station 4 is factorised and added afterwards to make the system smaller (and therefore faster to run), and also because the MAB calibration at the MB4 zone is less precise. This new alignment procedure was tested at 0 T against photogrammetry resulting in precisions of the order...

  17. Probabilistic biological network alignment.

    Science.gov (United States)

    Todor, Andrei; Dobra, Alin; Kahveci, Tamer

    2013-01-01

    Interactions between molecules are probabilistic events. An interaction may or may not happen with some probability, depending on a variety of factors such as the size, abundance, or proximity of the interacting molecules. In this paper, we consider the problem of aligning two biological networks. Unlike existing methods, we allow one of the two networks to contain probabilistic interactions. Allowing interaction probabilities makes the alignment more biologically relevant at the expense of explosive growth in the number of alternative topologies that may arise from different subsets of interactions that take place. We develop a novel method that efficiently and precisely characterizes this massive search space. We represent the topological similarity between pairs of aligned molecules (i.e., proteins) with the help of random variables and compute their expected values. We validate our method by showing that, without sacrificing running time performance, it can produce novel alignments. Our results also demonstrate that our method identifies biologically meaningful mappings under a comprehensive set of criteria used in the literature, as well as the statistical coherence measure that we developed to analyze the statistical significance of the similarity of the functions of the aligned protein pairs.
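
    The expectation computation at the heart of such probabilistic alignment scoring follows from linearity of expectation: the expected number of conserved interactions is the sum of the probabilities of the probabilistic edges that the mapping places onto deterministic edges, with no need to enumerate the exponentially many alternative topologies. The three-node networks below are invented for illustration and are not the paper's full method.

```python
def expected_conserved_edges(mapping, det_edges, prob_edges):
    """Expected number of conserved interactions under an alignment when one
    network is probabilistic: by linearity of expectation, sum the
    probabilities of the probabilistic edges whose mapped endpoints form an
    edge in the deterministic network."""
    total = 0.0
    for (u, v), p in prob_edges.items():
        mu, mv = mapping[u], mapping[v]
        if (mu, mv) in det_edges or (mv, mu) in det_edges:
            total += p
    return total

# Toy alignment of a 3-node probabilistic network onto a deterministic one.
prob_edges = {("a", "b"): 0.9, ("b", "c"): 0.4, ("a", "c"): 0.2}
det_edges = {("A", "B"), ("B", "C")}
mapping = {"a": "A", "b": "B", "c": "C"}
total = expected_conserved_edges(mapping, det_edges, prob_edges)
print(round(total, 2))                       # 1.3: edges a-b and b-c conserved
```
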

  18. Predicting RNA hyper-editing with a novel tool when unambiguous alignment is impossible.

    Science.gov (United States)

    McKerrow, Wilson H; Savva, Yiannis A; Rezaei, Ali; Reenan, Robert A; Lawrence, Charles E

    2017-07-10

    Repetitive elements are now known to have relevant cellular functions, including self-complementary sequences that form double-stranded (ds) RNA. There are numerous pathways that determine the fate of endogenous dsRNA, and misregulation of endogenous dsRNA is a driver of autoimmune disease, particularly in the brain. Unfortunately, the alignment of high-throughput, short-read sequences to repeat elements poses a dilemma: such sequences may align equally well to multiple genomic locations. In order to differentiate repeat elements, current alignment methods depend on sequence variation in the reference genome. Reads are discarded when no such variations are present. However, RNA hyper-editing, a possible fate for dsRNA, introduces enough variation to distinguish between repeats that are otherwise identical. To take advantage of this variation, we developed a new algorithm, RepProfile, that simultaneously aligns reads and predicts novel variations. RepProfile accurately aligns hyper-edited reads that other methods discard. In particular, we predict hyper-editing of Drosophila melanogaster repeat elements in vivo at levels previously described only in vitro, and provide validation by Sanger sequencing of sixty-two individual cloned sequences. We find that hyper-editing is concentrated in genes involved in cell-cell communication at the synapse, including some that are associated with neurodegeneration. We also find that hyper-editing tends to occur in short runs. Previous studies of RNA hyper-editing discarded ambiguously aligned reads, ignoring hyper-editing in long, perfect dsRNA, the perfect substrate for hyper-editing. We provide a method that simulation and Sanger validation show accurately predicts such RNA editing, yielding a superior picture of hyper-editing.
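The key scoring idea — treating reference-A-to-read-G mismatches (the signature of A-to-I hyper-editing) differently from ordinary mismatches — can be sketched for an ungapped placement. This is a simplified illustration of the principle, not the RepProfile algorithm itself, and the penalties are arbitrary:

```python
def hyper_editing_score(ref, read, edit_penalty=0.0, mismatch_penalty=1.0):
    """Score an ungapped read placement, tolerating A->G substitutions
    (reference A observed as G) as candidate editing sites rather than
    penalizing them like ordinary sequencing mismatches."""
    penalty, edits = 0.0, 0
    for r, q in zip(ref, read):
        if r == q:
            continue                      # exact match
        elif r == "A" and q == "G":
            penalty += edit_penalty       # candidate hyper-editing site
            edits += 1
        else:
            penalty += mismatch_penalty   # genuine mismatch
    return penalty, edits

# two A->G sites (positions 0 and 2), no other mismatches
penalty, edits = hyper_editing_score("ACATGA", "GCGTGA")
```

Under a conventional aligner both A/G differences would count as mismatches and the read might be discarded; here they instead become evidence for distinguishing otherwise-identical repeat copies.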

  19. Automated Registration of Multimodal Optic Disc Images: Clinical Assessment of Alignment Accuracy.

    Science.gov (United States)

    Ng, Wai Siene; Legg, Phil; Avadhanam, Venkat; Aye, Kyaw; Evans, Steffan H P; North, Rachel V; Marshall, Andrew D; Rosin, Paul; Morgan, James E

    2016-04-01

    To determine the accuracy of automated alignment algorithms for the registration of optic disc images obtained by 2 different modalities: fundus photography and scanning laser tomography. Images obtained with the Heidelberg Retina Tomograph II and paired photographic optic disc images of 135 eyes were analyzed. Three state-of-the-art automated registration techniques, Regional Mutual Information, rigid Feature Neighbourhood Mutual Information (FNMI), and nonrigid FNMI (NRFNMI), were used to align these image pairs. Alignment of each composite picture was assessed on a 5-point grading scale: "Fail" (no alignment of vessels, with no vessel contact), "Weak" (vessels have slight contact), "Good" (vessels with 50% contact), and "Excellent" (complete alignment). Custom software generated an image mosaic in which the modalities were interleaved as a series of alternate 5×5-pixel blocks. These were graded independently by 3 clinically experienced observers. A total of 810 image pairs were assessed. All 3 registration techniques achieved a score of "Good" or better in >95% of the image sets. NRFNMI had the highest percentage of "Excellent" (mean: 99.6%; range, 95.2% to 99.6%), followed by Regional Mutual Information (mean: 81.6%; range, 86.3% to 78.5%) and FNMI (mean: 73.1%; range, 85.2% to 54.4%). Automated registration of optic disc images by different modalities is a feasible option for clinical application. All 3 methods provided useful levels of alignment, but the NRFNMI technique consistently outperformed the others and is recommended as a practical approach to the automated registration of multimodal disc images.

  20. FEAST: sensitive local alignment with multiple rates of evolution.

    Science.gov (United States)

    Hudek, Alexander K; Brown, Daniel G

    2011-01-01

    We present a pairwise local aligner, FEAST, which uses two new techniques: a sensitive extension algorithm for identifying homologous subsequences, and a descriptive probabilistic alignment model. We also present a new procedure for training alignment parameters and apply it to the human and mouse genomes, producing a better parameter set for these sequences. Our extension algorithm identifies homologous subsequences by considering all evolutionary histories. It has higher maximum sensitivity than Viterbi extensions, and better balances specificity. We model alignments with several submodels, each with unique statistical properties, describing strongly similar and weakly similar regions of homologous DNA. Training parameters using two submodels produces superior alignments, even when we align with only the parameters from the weaker submodel. Our extension algorithm combined with our new parameter set achieves sensitivity 0.59 on synthetic tests. In contrast, LASTZ with default settings achieves sensitivity 0.35 with the same false positive rate. Using the weak submodel as parameters for LASTZ increases its sensitivity to 0.59 with high error. FEAST is available at http://monod.uwaterloo.ca/feast/.

  1. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. They present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  2. Genetic algorithms

    Science.gov (United States)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem-solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
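The selection/crossover/mutation loop described above can be sketched in a few lines. This is a generic minimal implementation (tournament selection, one-point crossover, bit-flip mutation), not the software tool from the abstract; the parameters are arbitrary:

```python
import random

def genetic_maximize(fitness, n_bits, pop_size=30, generations=60,
                     p_mut=0.02, seed=0):
    """Minimal generational GA over fixed-length bit strings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def tournament():
        # "survival of the fittest": the better of two random individuals
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            cut = rng.randrange(1, n_bits)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < p_mut) for b in child]  # mutation
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# one-max toy problem: fitness is simply the number of 1 bits
best = genetic_maximize(sum, n_bits=20)
```

On this trivial landscape the population converges to (nearly) all-ones; real applications replace `sum` with a domain-specific fitness function.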

  3. Algorithm for determining individual features of athletes’ fitness structure with the help of multidimensional analysis (on the example of basketball)

    Directory of Open Access Journals (Sweden)

    Zh.L. Kozina

    2017-10-01

    Purpose: to determine the main patterns in the individual structure of athletes’ fitness with the help of multidimensional analysis (on the example of basketball). Material: elite basketball players (n=54) participated in the research. Pedagogic testing included 12 tests used by the national teams of Ukraine and Russia. Three attempts were given for every test and the best result was registered. The tests were administered over 2-3 training sessions. Results: a general scheme for individualizing athletes’ training was worked out. For every athlete, the groups of leading and secondary factors in the individual structure of fitness were determined. The training process contains a basic and a variable component. The basic component accounted for 70% of the means in the overall training system; the variable component accounted for 30% and involves individual training means. The percentage of means in individual programs varies depending on the leading factors in the individual fitness structure and on the period of the individual dynamic of competition efficiency. In every micro-cycle, 30% of the time is assigned to athletes’ individual training: athletes received individual tasks, and groups were formed on the basis of cluster analysis where necessary. Conclusions: when working out individual training programs, the development of leading factors in the individual factorial structure of athletes’ fitness should be accented. Individual programs, combined with universal individualization methods, create preconditions for raising the efficiency of competition activities.

  4. Iterative Stable Alignment and Clustering of 2D Transmission Electron Microscope Images

    Science.gov (United States)

    Yang, Zhengfan; Fang, Jia; Chittuluru, Johnathan; Asturias, Francisco J.; Penczek, Pawel A.

    2012-01-01

    Identification of homogeneous subsets of images in a macromolecular electron microscopy (EM) image data set is a critical step in single-particle analysis. The task is handled by iterative algorithms, whose performance is compromised by the compounded limitations of image alignment and K-means clustering. Here we describe an approach, iterative stable alignment and clustering (ISAC), that, relying on a new clustering method and on the concepts of stability and reproducibility, can extract validated, homogeneous subsets of images. ISAC requires only a small number of simple parameters and, with minimal human intervention, can eliminate bias from two-dimensional image clustering and maximize the quality of group averages that can be used for ab initio three-dimensional structural determination and analysis of macromolecular conformational variability. Repeated testing of the stability and reproducibility of a solution within ISAC eliminates heterogeneous or incorrect classes and introduces critical validation to the process of EM image clustering. PMID:22325773
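The stability-and-reproducibility idea — keeping only groupings that survive repeated clustering from different random starts — can be sketched on 1-D data. This is a toy illustration of the principle, not the ISAC algorithm (which couples clustering with image alignment); all names are illustrative:

```python
import random

def kmeans_1d(xs, k, rng, iters=25):
    """Plain 1-D k-means with random initial centers; returns a label per point."""
    centers = rng.sample(xs, k)
    labels = [0] * len(xs)
    for _ in range(iters):
        labels = [min(range(k), key=lambda j: (x - centers[j]) ** 2) for x in xs]
        for j in range(k):
            members = [x for x, l in zip(xs, labels) if l == j]
            if members:
                centers[j] = sum(members) / len(members)
    return labels

def stable_pairs(xs, k, runs=10, seed=0):
    """Pairs of points that co-cluster in EVERY run; repetition filters
    out partitions that merely reflect the random initialization."""
    rng = random.Random(seed)
    together = None
    for _ in range(runs):
        labels = kmeans_1d(xs, k, rng)
        pairs = {(i, j) for i in range(len(xs)) for j in range(i + 1, len(xs))
                 if labels[i] == labels[j]}
        together = pairs if together is None else together & pairs
    return together

xs = [0.0, 0.1, 0.2, 5.0, 5.1, 5.2]   # two well-separated groups
stable = stable_pairs(xs, k=2)
```

Only pairs inside the two genuine groups survive the intersection; unstable assignments would be dropped, which mirrors how ISAC eliminates heterogeneous classes.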

  5. MANGO: a new approach to multiple sequence alignment.

    Science.gov (United States)

    Zhang, Zefeng; Lin, Hao; Li, Ming

    2007-01-01

    Multiple sequence alignment is a classical and challenging task for biological sequence analysis. The problem is NP-hard. The full dynamic programming takes too much time. The progressive alignment heuristics adopted by most state-of-the-art multiple sequence alignment programs suffer from the 'once a gap, always a gap' phenomenon. Is there a radically new way to do multiple sequence alignment? This paper introduces a novel and orthogonal multiple sequence alignment method, using multiple optimized spaced seeds and new algorithms to handle these seeds efficiently. Our new algorithm processes information of all sequences as a whole, avoiding problems caused by the popular progressive approaches. Because the optimized spaced seeds are provably significantly more sensitive than the consecutive k-mers, the new approach promises to be more accurate and reliable. To validate our new approach, we have implemented MANGO: Multiple Alignment with N Gapped Oligos. Experiments were carried out on large 16S RNA benchmarks showing that MANGO compares favorably, in both accuracy and speed, against state-of-the-art multiple sequence alignment methods, including ClustalW 1.83, MUSCLE 3.6, MAFFT 5.861, ProbConsRNA 1.11, Dialign 2.2.1, DIALIGN-T 0.2.1, T-Coffee 4.85, POA 2.0 and Kalign 2.0.
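A spaced seed is a 0/1 pattern: matches are required only at '1' positions, while '0' positions are wildcards, which is why spaced seeds tolerate substitutions that would break a contiguous k-mer of the same weight. A minimal sketch (the seed pattern and sequences are illustrative, not MANGO's optimized seeds):

```python
def seed_hits(seed, s1, s2):
    """All position pairs (i, j) where s1 and s2 agree at every '1'
    position of the seed; '0' positions are don't-care wildcards."""
    care = [k for k, c in enumerate(seed) if c == "1"]
    span = len(seed)
    hits = []
    for i in range(len(s1) - span + 1):
        for j in range(len(s2) - span + 1):
            if all(s1[i + k] == s2[j + k] for k in care):
                hits.append((i, j))
    return hits

# a substitution at the wildcard position still yields a hit...
spaced = seed_hits("1101", "ACGT", "ACTT")
# ...whereas the contiguous seed of the same span finds nothing
contiguous = seed_hits("1111", "ACGT", "ACTT")
```

This sensitivity gap is exactly the property the abstract appeals to when arguing that spaced seeds outperform consecutive k-mers.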

  6. A Review of State Licensing Regulations to Determine Alignment with Best Practices to Prevent Human Norovirus Infections in Child-Care Centers.

    Science.gov (United States)

    Leone, Cortney M; Jaykus, Lee-Ann; Cates, Sheryl M; Fraser, Angela M

    2016-01-01

    Close, frequent contact between children and care providers in child-care centers presents many opportunities to spread human noroviruses. We compared state licensing regulations for child-care centers with national guidelines written to prevent human noroviruses. We reviewed child-care licensing regulations for all 50 U.S. states and the District of Columbia in effect in June 2015 to determine if these regulations fully, partially, or did not address 14 prevention practices in four topic areas: (1) hand hygiene, (2) exclusion of ill people, (3) environmental sanitation, and (4) diapering. Approximately two-thirds (8.9) of the 14 practices across all state regulations were partially or fully addressed, with few (2.6) fully addressed. Practices related to exclusion of ill people and diapering were fully addressed most often, while practices related to hand hygiene and environmental sanitation were fully addressed least often. Regulations based on guidelines for best practices are one way to prevent the spread of human noroviruses in child-care facilities, if the regulations are enforced. Our findings show that, in mid-2015, many state child-care regulations did not fully address these guidelines, suggesting the need to review these regulations to be sure they are based on best practices.

  7. Aligning Responsible Business Practices

    DEFF Research Database (Denmark)

    Weller, Angeli E.

    2017-01-01

    This article offers an in-depth case study of a global high tech manufacturer that aligned its ethics and compliance, corporate social responsibility, and sustainability practices. Few large companies organize their responsible business practices this way, despite conceptual relevance and calls to manage them comprehensively. A communities of practice theoretical lens suggests that intentional effort would be needed to bridge meaning between the relevant managers and practices in order to achieve alignment. The findings call attention to the important role played by employees who broker between practices, and will be of interest to managers interested in understanding how responsible business practices may be collectively organized.

  8. FMIT alignment cart

    International Nuclear Information System (INIS)

    Potter, R.C.; Dauelsberg, L.B.; Clark, D.C.; Grieggs, R.J.

    1981-01-01

    The Fusion Materials Irradiation Test (FMIT) Facility alignment cart must perform several functions. It must serve as a fixture to receive the drift-tube girder assembly when it is removed from the linac tank. It must transport the girder assembly from the linac vault to the area where alignment or disassembly is to take place. It must serve as a disassembly fixture to hold the girder while individual drift tubes are removed for repair. It must align the drift tube bores in a straight line parallel to the girder, using an optical system. These functions must be performed without violating any clearances found within the building. The bore tubes of the drift tubes will be irradiated, and shielding will be included in the system for easier maintenance.

  9. Track based alignment of the CMS silicon tracker and its implication on physics performance

    International Nuclear Information System (INIS)

    Draeger, Jula

    2011-08-01

    In order to fully exploit the discovery potential of the CMS detector for new physics beyond the Standard Model at the high luminosity and centre-of-mass energy provided by the Large Hadron Collider, a careful calibration of the detector and profound understanding of its impact on physics performance are necessary to provide realistic uncertainties for the measurements of physics processes. This thesis describes the track-based alignment of the inner tracking system of CMS with the Millepede II algorithm. Using the combined information of tracks from cosmic rays and collisions taken in 2010, a remarkable local alignment precision has been reached that meets the design specification for most regions of the detector and takes into account instabilities of the detector geometry over time. In addition, the impact of the alignment of b tagging or the Z boson resonance are investigated. The latter is studied to investigate the impact of correlated detector distortions which hardly influence the overall solution of the minimisation problem but introduce biases in the track parameters and thus the derived physics quantities. The determination and constraint of these weak modes present the future challenge of the alignment task at CMS. (orig.)

  10. Track based alignment of the CMS silicon tracker and its implication on physics performance

    Energy Technology Data Exchange (ETDEWEB)

    Draeger, Jula

    2011-08-15

    In order to fully exploit the discovery potential of the CMS detector for new physics beyond the Standard Model at the high luminosity and centre-of-mass energy provided by the Large Hadron Collider, a careful calibration of the detector and profound understanding of its impact on physics performance are necessary to provide realistic uncertainties for the measurements of physics processes. This thesis describes the track-based alignment of the inner tracking system of CMS with the Millepede II algorithm. Using the combined information of tracks from cosmic rays and collisions taken in 2010, a remarkable local alignment precision has been reached that meets the design specification for most regions of the detector and takes into account instabilities of the detector geometry over time. In addition, the impact of the alignment of b tagging or the Z boson resonance are investigated. The latter is studied to investigate the impact of correlated detector distortions which hardly influence the overall solution of the minimisation problem but introduce biases in the track parameters and thus the derived physics quantities. The determination and constraint of these weak modes present the future challenge of the alignment task at CMS. (orig.)

  11. A genetic meta-algorithm-assisted inversion approach: hydrogeological study for the determination of volumetric rock properties and matrix and fluid parameters in unsaturated formations

    Science.gov (United States)

    Szabó, Norbert Péter

    2018-03-01

    An evolutionary inversion approach is suggested for the interpretation of nuclear and resistivity logs measured by direct-push tools in shallow unsaturated sediments. The efficiency of formation evaluation is improved by estimating simultaneously (1) the petrophysical properties that vary rapidly along a drill hole with depth and (2) the zone parameters that can be treated as constant, in one inversion procedure. In the workflow, the fractional volumes of water, air, matrix and clay are estimated in adjacent depths by linearized inversion, whereas the clay and matrix properties are updated using a float-encoded genetic meta-algorithm. The proposed inversion method provides an objective estimate of the zone parameters that appear in the tool response equations applied to solve the forward problem, which can significantly increase the reliability of the petrophysical model as opposed to setting these parameters arbitrarily. The global optimization meta-algorithm not only assures the best fit between the measured and calculated data but also gives a reliable solution, practically independent of the initial model, as laboratory data are unnecessary in the inversion procedure. The feasibility test uses engineering geophysical sounding logs observed in an unsaturated loessy-sandy formation in Hungary. The multi-borehole extension of the inversion technique is developed to determine the petrophysical properties and their estimation errors along a profile of drill holes. The genetic meta-algorithmic inversion method is recommended for hydrogeophysical logging applications of various kinds to automatically extract the volumetric ratios of rock and fluid constituents as well as the most important zone parameters in a reliable inversion procedure.

  12. Least Squares Approach to the Alignment of the Generic High Precision Tracking System

    Science.gov (United States)

    de Renstrom, Pawel Brückman; Haywood, Stephen

    2006-04-01

    A least squares method to solve a generic alignment problem of a high granularity tracking system is presented. The algorithm is based on an analytical linear expansion and allows for multiple nested fits; e.g. imposing a common vertex for groups of particle tracks is of particular interest. We present a consistent and complete recipe to impose constraints on either implicit or explicit parameters. The method has been applied to the full simulation of a subset of the ATLAS silicon tracking system. The ultimate goal is to determine ≈35,000 degrees of freedom (DoFs). We present a limited-scale exercise exploring various aspects of the solution.
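After the analytical linear expansion, track-based alignment reduces to a linear least squares problem: residuals are modeled as a design matrix times the unknown alignment corrections. A minimal sketch using the normal equations (the toy design matrix and residuals are illustrative; production systems like Millepede use sparse solvers for the tens of thousands of DoFs mentioned above):

```python
def solve_normal_equations(A, y):
    """x minimizing ||A x - y||^2 via the normal equations A^T A x = A^T y,
    solved by Gaussian elimination with partial pivoting (fine for tiny
    systems; real tracker alignment needs sparse/iterative solvers)."""
    rows, n = len(A), len(A[0])
    M = [[sum(A[r][i] * A[r][j] for r in range(rows)) for j in range(n)]
         for i in range(n)]
    b = [sum(A[r][i] * y[r] for r in range(rows)) for i in range(n)]
    for c in range(n):                       # forward elimination
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        b[c], b[p] = b[p], b[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
            b[r] -= f * b[c]
    x = [0.0] * n
    for c in reversed(range(n)):             # back substitution
        x[c] = (b[c] - sum(M[c][k] * x[k] for k in range(c + 1, n))) / M[c][c]
    return x

# toy problem: two module offsets, each seen by two track residuals
A = [[1.0, 0.0], [1.0, 0.0], [0.0, 1.0], [0.0, 1.0]]
y = [0.3, 0.3, -0.1, -0.1]
offsets = solve_normal_equations(A, y)
```

Constraints (e.g. a common vertex for a group of tracks) enter the same machinery as extra rows or Lagrange-multiplier terms in the normal equations.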

  13. Generation and Detection of Alignments in Gabor Patterns

    Directory of Open Access Journals (Sweden)

    Samy Blusseau

    2016-11-01

    This paper presents a method to be used in psychophysical experiments to directly compare visual perception with an a contrario algorithm on a straight-pattern detection task. The method is composed of two parts. The first part consists in building a stimulus, namely an array of oriented elements, in which an alignment is present with variable salience. The second part focuses on a detection algorithm, based on the a contrario theory, which is designed to predict which alignment will be considered the most salient in a given stimulus.
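A contrario detection rests on the Number of False Alarms (NFA): the expected number of candidate events at least as structured as the observed one under a pure-noise model; a candidate is declared meaningful when its NFA falls below 1. A minimal binomial-tail sketch (the counts and the chance-agreement probability below are illustrative, not the paper's stimulus parameters):

```python
from math import comb

def nfa(n_tests, n, k, p):
    """Number of False Alarms for a candidate with k of n elements
    aligned, each aligning by chance with probability p, among n_tests
    candidates. NFA < 1 flags a detection under the a contrario rule."""
    # binomial tail: probability of k or more chance agreements
    tail = sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))
    return n_tests * tail

# 100000 candidate segments; orientations agree by chance with prob 1/8
salient = nfa(100000, n=10, k=9, p=1.0 / 8.0)   # 9 of 10 aligned: rare
weak = nfa(100000, n=10, k=2, p=1.0 / 8.0)      # 2 of 10 aligned: common
```

The same machinery lets the algorithm rank alignments in a stimulus and predict which one an observer should find most salient.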

  14. Test procedure for calibration, grooming and alignment of the LDUA Optical Alignment Scope

    International Nuclear Information System (INIS)

    Potter, J.D.

    1995-01-01

    The Light Duty Utility Arm (LDUA) is a remotely operated manipulator used to enter into underground waste tanks through one of the tank risers. The LDUA must be carefully aligned with the tank riser during the installation process. The Optical Alignment Scope (OAS) is used to determine when optimum alignment has been achieved between the LDUA and the riser. This procedure is used to assure that the instrumentation and equipment comprising the OAS is properly adjusted in order to achieve its intended functions successfully.

  15. A UNIFIED MODEL OF GRAIN ALIGNMENT: RADIATIVE ALIGNMENT OF INTERSTELLAR GRAINS WITH MAGNETIC INCLUSIONS

    Energy Technology Data Exchange (ETDEWEB)

    Hoang, Thiem [Canadian Institute for Theoretical Astrophysics, University of Toronto, 60 St. George Street, Toronto, ON M5S 3H8 (Canada); Lazarian, A. [Department of Astronomy, University of Wisconsin-Madison (United States)

    2016-11-10

    The radiative torque (RAT) alignment of interstellar grains with ordinary paramagnetic susceptibilities has been supported by earlier studies. The alignment of such grains depends on the so-called RAT parameter q^max, which is determined by the grain shape. In this paper, we elaborate on our model of RAT alignment for grains with enhanced magnetic susceptibility due to iron inclusions, such that RAT alignment is magnetically enhanced, which we term the MRAT mechanism. Such grains can be aligned with high angular momentum at the so-called high-J attractor points, achieving a high degree of alignment. Using our analytical model of RATs, we derive the critical value of the magnetic relaxation parameter δ_m to produce high-J attractor points as functions of q^max and the anisotropic radiation angle relative to the magnetic field ψ. We find that if about 10% of the total iron abundance present in silicate grains is forming iron clusters, this is sufficient to produce high-J attractor points for all reasonable values of q^max. To calculate the degree of grain alignment, we carry out numerical simulations of MRAT alignment by including stochastic excitations from gas collisions and magnetic fluctuations. We show that large grains can achieve perfect alignment when the high-J attractor point is present, regardless of the values of q^max. Our obtained results pave the way for the physical modeling of polarized thermal dust emission as well as magnetic dipole emission. We also find that millimeter-sized grains in accretion disks may be aligned with the magnetic field if they are incorporated with iron nanoparticles.

  16. Control rod housing alignment and repair method

    International Nuclear Information System (INIS)

    Dixon, R.C.; Deaver, G.A.; Punches, J.R.; Singleton, G.E.; Erbes, J.G.; Offer, H.P.

    1992-01-01

    This patent describes a method for underwater welding of a control rod drive housing inserted through a stub tube to maintain requisite alignment and elevation of the top of the control rod drive housing to an overlying and corresponding aperture in a core plate, as measured by an alignment device which determines the relative elevation and angularity with respect to the aperture. It comprises providing a welding cylinder dependent from the alignment device such that the elevation of the top of the welding cylinder is in a fixed relationship to the alignment device and is gas-proof; pressurizing the welding cylinder with inert welding gas sufficient to maintain the interior of the welding cylinder dry; lowering the welding cylinder through the aperture in the core plate by depending the cylinder with respect to the alignment device, the lowering including lowering through and adjusting the elevation relationship of the welding cylinder to the alignment device such that when the alignment device is in position to measure the elevation and angularity of the new control rod drive housing, the lower distal end of the welding cylinder extends below the upper periphery of the stub tube where welding is to occur; inserting a new control rod drive housing through the stub tube and positioning the control rod drive housing in a predetermined relationship to the anticipated final position of the control rod drive housing; providing welding implements transversely rotatably mounted interior of the welding cylinder relative to the alignment device such that the welding implements may be accurately positioned for dispensing weldment around the periphery of the top of the stub tube and at the side of the control rod drive housing; measuring the elevation and angularity of the control rod drive housing; and dispensing weldment along the top of the stub tube and at the side of the control rod drive housing.

  17. Determination of Hydrodynamic Parameters on Two--Phase Flow Gas - Liquid in Pipes with Different Inclination Angles Using Image Processing Algorithm

    Science.gov (United States)

    Montoya, Gustavo; Valecillos, María; Romero, Carlos; Gonzáles, Dosinda

    2009-11-01

    In the present research, a digital image processing-based automated algorithm was developed to determine the phases' heights, hold-up, and statistical distribution of drop size in a two-phase water-air system, using pipes with 0°, 10°, and 90° of inclination. Digital images were acquired with a high-speed camera (up to 4500 fps), using an apparatus that consists of three acrylic pipes with diameters of 1.905, 3.175, and 4.445 cm. Each pipe is arranged in two sections of 8 m in length. Various flow patterns were visualized for different superficial velocities of water and air. Finally, using the image processing program designed in Matlab/Simulink, the captured images were processed to establish the parameters previously mentioned. The image processing algorithm is based on frequency-domain analysis of the source pictures, which locates the phase boundary as the edge between water and air through a Sobel filter that extracts the high-frequency components of the image. Drop size was found by calculating the Feret diameter. Three flow patterns were observed: annular, ST, and ST&MI.
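The Sobel-based interface detection described above can be sketched: convolving with the vertical-gradient Sobel kernel gives a strong response at horizontal edges such as a stratified liquid-gas interface, and the row of maximum response estimates the phase height. This is an illustrative reimplementation of the idea (the abstract's tool was built in Matlab/Simulink), with a synthetic frame in place of camera data:

```python
SOBEL_Y = [[-1, -2, -1],
           [ 0,  0,  0],
           [ 1,  2,  1]]

def sobel_y(img):
    """Vertical-gradient Sobel response over the image interior;
    large magnitude marks horizontal edges."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            out[r][c] = sum(SOBEL_Y[i][j] * img[r - 1 + i][c - 1 + j]
                            for i in range(3) for j in range(3))
    return out

def interface_row(img):
    """Row with the strongest summed |gradient|: a crude estimate of
    the liquid-gas interface height in a stratified-flow frame."""
    g = sobel_y(img)
    return max(range(len(img)), key=lambda r: sum(abs(v) for v in g[r]))

# synthetic 6x6 frame: dark gas in the top rows, bright liquid below
img = [[10] * 6] * 3 + [[200] * 6] * 3
row = interface_row(img)
```

In the real pipeline this per-frame height would feed the hold-up statistics, while drop size comes from a separate Feret-diameter measurement on segmented drops.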

  18. Using Elman recurrent neural networks with conjugate gradient algorithm in determining the anesthetic the amount of anesthetic medicine to be applied.

    Science.gov (United States)

    Güntürkün, Rüştü

    2010-08-01

    In this study, Elman recurrent neural networks trained with the conjugate gradient algorithm are used to determine the depth of anesthesia during the continuation stage of anesthesia and to estimate the amount of anesthetic medicine to be applied at that moment. Feed-forward neural networks are also used for comparison, and the conjugate gradient algorithm is compared with back propagation (BP) for training the networks. The applied artificial neural network is composed of three layers: the input layer, the hidden layer and the output layer. The nonlinear sigmoid activation function has been used in the hidden layer and the output layer. EEG data were recorded with a Nihon Kohden 9200 22-channel EEG device. The international 8-channel bipolar 10-20 montage system (8 TB-b system) has been used in assembling the recording electrodes, and EEG data were sampled once every 2 milliseconds. The artificial neural network has been designed with 60 neurons in the input layer, 30 neurons in the hidden layer and 1 neuron in the output layer. The network inputs are the power spectral density (PSD) values of 10-second EEG segments in the 1-50 Hz frequency range, together with the ratio of the total PSD power of the current EEG segment in that range to the total PSD power of an EEG segment taken prior to anesthesia.
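What distinguishes an Elman network from the feed-forward baseline is the context loop: the previous hidden state is fed back as an extra input, giving the model memory of earlier EEG windows. A minimal forward-pass sketch with sigmoid units as in the abstract (the tiny 2-3-1 dimensions and zero weights are illustrative, not the study's 60-30-1 trained network):

```python
import math

def elman_step(x, h_prev, W_xh, W_hh, b_h, W_hy, b_y):
    """One time step of an Elman network: hidden state depends on the
    current input x AND the previous hidden state h_prev (context units);
    sigmoid activations in both hidden and output layers."""
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    h = [sig(sum(W_xh[j][i] * x[i] for i in range(len(x)))
             + sum(W_hh[j][k] * h_prev[k] for k in range(len(h_prev)))
             + b_h[j])
         for j in range(len(b_h))]
    y = [sig(sum(W_hy[o][j] * h[j] for j in range(len(h))) + b_y[o])
         for o in range(len(b_y))]
    return h, y

# tiny 2-input, 3-hidden, 1-output net; with all-zero weights every
# unit outputs sigmoid(0) = 0.5
W_xh = [[0.0] * 2 for _ in range(3)]
W_hh = [[0.0] * 3 for _ in range(3)]
W_hy = [[0.0] * 3]
h, y = elman_step([1.0, -1.0], [0.0, 0.0, 0.0], W_xh, W_hh, [0.0] * 3, W_hy, [0.0])
```

Training (conjugate gradient or BP, as compared in the study) then amounts to optimizing the weight matrices over recorded PSD inputs and target anesthetic doses.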

  19. AlignMe—a membrane protein sequence alignment web server

    Science.gov (United States)

    Stamm, Marcus; Staritzbichler, René; Khafizov, Kamil; Forrest, Lucy R.

    2014-01-01

    We present a web server for pair-wise alignment of membrane protein sequences, using the program AlignMe. The server makes available two operational modes of AlignMe: (i) sequence to sequence alignment, taking two sequences in fasta format as input, combining information about each sequence from multiple sources and producing a pair-wise alignment (PW mode); and (ii) alignment of two multiple sequence alignments to create family-averaged hydropathy profile alignments (HP mode). For the PW sequence alignment mode, four different optimized parameter sets are provided, each suited to pairs of sequences with a specific similarity level. These settings utilize different types of inputs: (position-specific) substitution matrices, secondary structure predictions and transmembrane propensities from transmembrane predictions or hydrophobicity scales. In the second (HP) mode, each input multiple sequence alignment is converted into a hydrophobicity profile averaged over the provided set of sequence homologs; the two profiles are then aligned. The HP mode enables qualitative comparison of transmembrane topologies (and therefore potentially of 3D folds) of two membrane proteins, which can be useful if the proteins have low sequence similarity. In summary, the AlignMe web server provides user-friendly access to a set of tools for analysis and comparison of membrane protein sequences. Access is available at http://www.bioinfo.mpg.de/AlignMe PMID:24753425
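The HP-mode input described above is a family-averaged hydropathy profile: each alignment column is mapped to the mean hydrophobicity of its (non-gap) residues. A minimal sketch using the standard Kyte-Doolittle scale (AlignMe supports several scales; the toy MSA below is illustrative):

```python
# Kyte-Doolittle hydropathy values for the 20 standard amino acids
KD = {"A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "E": -3.5,
      "Q": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
      "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
      "Y": -1.3, "V": 4.2}

def family_profile(msa):
    """Column-wise mean hydropathy of a multiple sequence alignment,
    skipping gap characters: the family-averaged hydropathy profile
    that HP mode aligns between two protein families."""
    profile = []
    for col in zip(*msa):
        vals = [KD[a] for a in col if a != "-"]
        profile.append(sum(vals) / len(vals) if vals else 0.0)
    return profile

msa = ["ILV-",
       "ILA-",
       "VLVK"]
prof = family_profile(msa)
```

Runs of strongly positive columns correspond to candidate transmembrane segments, which is why aligning two such profiles lets topologies be compared even at low sequence similarity.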

  20. Dynamic Programming Used to Align Protein Structures with a Spectrum Is Robust

    Directory of Open Access Journals (Sweden)

    Allen Holder

    2013-11-01

    Several efficient algorithms to conduct pairwise comparisons among large databases of protein structures have emerged in the recent literature. The central theme is the design of a measure between the Cα atoms of two protein chains, from which dynamic programming is used to compute an alignment. The efficiency and efficacy of these algorithms allow large-scale computational studies that would have been previously impractical. The computational study herein shows that the structural alignment algorithm eigen-decomposition alignment with the spectrum (EIGAs) is robust against both parametric and structural variation.
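The shared scheme — a pairwise measure between residues fed into global dynamic programming — is classic Needleman-Wunsch. A minimal sketch with an arbitrary similarity function standing in for EIGAs' spectral measure (the descriptors, measure, and gap penalty below are illustrative):

```python
def align_score(a, b, sim, gap=-1.0):
    """Global dynamic-programming alignment score of two chains a, b
    under an arbitrary pairwise measure sim(x, y) and linear gap cost."""
    n, m = len(a), len(b)
    D = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = i * gap                  # leading gaps in b
    for j in range(1, m + 1):
        D[0][j] = j * gap                  # leading gaps in a
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i][j] = max(D[i - 1][j - 1] + sim(a[i - 1], b[j - 1]),  # pair
                          D[i - 1][j] + gap,                          # gap in b
                          D[i][j - 1] + gap)                          # gap in a
    return D[n][m]

# toy residue descriptors with a +1/-1 match measure; the best alignment
# pairs 1-1 and 3-3 and gaps the middle residue: 1 + 1 - 1 = 1
score = align_score([1.0, 2.0, 3.0], [1.0, 3.0],
                    lambda x, y: 1.0 if x == y else -1.0)
```

Swapping in a per-residue spectral descriptor for the raw values recovers the structure-alignment setting the abstract studies; the DP recursion itself is unchanged.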

  1. Aligning Mental Representations

    DEFF Research Database (Denmark)

    Kano Glückstad, Fumiko

    2013-01-01

    This work introduces a framework that implements asymmetric communication theory proposed by Sperber and Wilson [1]. The framework applies a generalization model known as the Bayesian model of generalization (BMG) [2] for aligning knowledge possessed by two communicating parties. The work focuses...

  2. MUON DETECTORS: ALIGNMENT

    CERN Multimedia

    G. Gomez and Y. Pakhotin

    2012-01-01

    A new track-based alignment for the DT chambers is ready for deployment: an offline tag has already been produced which will become part of the 52X Global Tag. This alignment was validated within the muon alignment group both at low and high momentum using a W/Z skim sample. It shows an improved mass resolution for pairs of stand-alone muons, improved curvature resolution at high momentum, and improved DT segment extrapolation residuals. The validation workflow for high-momentum muons used to depend solely on the “split cosmics” method, looking at the curvature difference between muon tracks reconstructed in the upper or lower half of CMS. The validation has now been extended to include energetic muons from decays of heavily boosted Zs: the di-muon invariant mass for global and stand-alone muons is reconstructed, and the invariant mass resolution is compared for different alignments. The main areas of development over the next few months will be preparing a new track-based C...

  3. Community Alignment ANADP

    OpenAIRE

    Halbert, Martin; Bicarregui, Juan; Anglada, Lluis; Duranti, Luciana

    2014-01-01

    Aligning National Approaches to Digital Preservation: An Action Assembly Biblioteca de Catalunya (National Library of Catalonia) November 18-20, 2013, Barcelona, Spain Auburn University Council on Library and Information Resources (CLIR) Digital Curation Centre (DCC) Digital Preservation Network (DPN) Joint Information Systems Committee (JISC) University of North Texas Virginia Tech Interuniversity Consortium for Political and Social Research (ICPSR) Innovative Inte...

  4. Discriminative Shape Alignment

    DEFF Research Database (Denmark)

    Loog, M.; de Bruijne, M.

    2009-01-01

    , not taking into account that eventually the shapes are to be assigned to two or more different classes. This work introduces a discriminative variation to well-known Procrustes alignment and demonstrates its benefit over this classical method in shape classification tasks. The focus is on two...

  5. Resource Alignment ANADP

    OpenAIRE

    Grindley, Neil; Cramer, Tom; Schrimpf, Sabine; Wilson, Tom

    2014-01-01

    Aligning National Approaches to Digital Preservation: An Action Assembly Biblioteca de Catalunya (National Library of Catalonia) November 18-20, 2013, Barcelona, Spain Auburn University Council on Library and Information Resources (CLIR) Digital Curation Centre (DCC) Digital Preservation Network (DPN) Joint Information Systems Committee (JISC) University of North Texas Virginia Tech Interuniversity Consortium for Political and Social Research (ICPSR) Innovative Inte...

  6. Capacity Alignment ANADP

    OpenAIRE

    Davidson, Joy; Whitehead, Martha; Molloy, Laura; Molinaro, Mary

    2014-01-01

    Aligning National Approaches to Digital Preservation: An Action Assembly Biblioteca de Catalunya (National Library of Catalonia) November 18-20, 2013, Barcelona, Spain Auburn University Council on Library and Information Resources (CLIR) Digital Curation Centre (DCC) Digital Preservation Network (DPN) Joint Information Systems Committee (JISC) University of North Texas Virginia Tech Interuniversity Consortium for Political and Social Research (ICPSR) Innovative Inte...

  7. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

  8. Using genetic algorithm to determine the optimal order quantities for multi-item multi-period under warehouse capacity constraints in kitchenware manufacturing

    Science.gov (United States)

    Saraswati, D.; Sari, D. K.; Johan, V.

    2017-11-01

    The study was conducted on a manufacturer that produced various kinds of kitchenware with the kitchen sink as the main product. There were four types of steel sheets selected as the raw materials of the kitchen sink. The problem was that the manufacturer wanted to determine how many steel sheets to order from a single supplier to meet the production requirements so as to minimize the total inventory cost. In this case, the economic order quantity (EOQ) model was developed using an all-unit discount as the price of steel sheets, and the warehouse capacity was limited. A genetic algorithm (GA) was used to find the minimum of the total inventory cost as a sum of purchasing cost, ordering cost, holding cost and penalty cost.
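
The GA approach described here can be sketched with a toy version of the problem: chromosomes encode order quantities per item, and the fitness is the total cost (purchase, ordering, holding) plus penalties for shortages and for exceeding warehouse capacity. All numbers below (demands, costs, capacity) are made up for illustration and are not the paper's data.

```python
import random

random.seed(42)

# Made-up problem data (illustrative only, not from the paper)
DEMAND = [120, 80, 60, 100]        # units required per item
UNIT_COST = [5.0, 8.0, 6.5, 4.0]   # purchase cost per unit
ORDER_COST = 50.0                  # fixed cost per order placed
HOLD_RATE = 0.2                    # holding cost as a fraction of value
CAPACITY = 400                     # warehouse capacity (total units)
PENALTY = 100.0                    # cost per unit of shortage or overflow

def total_cost(q):
    purchase = sum(c * x for c, x in zip(UNIT_COST, q))
    orders = ORDER_COST * sum(1 for x in q if x > 0)
    holding = HOLD_RATE * purchase / 2
    overflow = max(0, sum(q) - CAPACITY)
    shortage = sum(max(0, d - x) for d, x in zip(DEMAND, q))
    return purchase + orders + holding + PENALTY * (overflow + shortage)

def evolve(pop_size=40, generations=200):
    pop = [[random.randint(0, 150) for _ in DEMAND] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=total_cost)
        survivors = pop[:pop_size // 2]            # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(DEMAND))  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.3:               # mutation
                i = random.randrange(len(DEMAND))
                child[i] = max(0, child[i] + random.randint(-10, 10))
            children.append(child)
        pop = survivors + children
    return min(pop, key=total_cost)

best = evolve()
print(best, round(total_cost(best), 2))
```

The penalty term is what lets a plain GA handle the capacity constraint without any specialized constrained-optimization machinery.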

  9. Geometrical determinations of IMRT photon pencil-beam path in radiotherapy wedges and limit divergence angle with the Anisotropic Analytic Algorithm (AAA)

    Directory of Open Access Journals (Sweden)

    Francisco Casesnoves

    2014-08-01

    Full Text Available Purpose: Static wedge filters (WF) are commonly used in radiation therapy, in forward and/or inverse planning. We calculated the exact 2D/3D geometrical pathway of the photon beam through the usual alloy WF, in order to better relate the dose to the beam intensity attenuation factor(s) after the beam has passed through the WF. The objective was to provide a general formulation in the Anisotropic Analytical Algorithm (AAA) model coordinate system (depending on collimator/wedge angles) that can also be applied to other models. A second purpose of this study was to develop an integral formulation for the 3D wedge exponential factor with statistical approximations, with an introduction of the limit angle/conformal wedge. Methods: The radiotherapy model used to develop this mathematical task is the classical superposition-convolution algorithm, AAA (developed by Ulmer and Harder). We worked with optimal geometrical approximations to make the computational IMRT calculations quicker and reduce planning-system time. Analytic geometry and computational techniques to carry out simulations (for standard wedges) are developed in detail. Integral developments and integral-statistical approximations are explained. The beam-divergence limit angle for optimal wedge filtration formulas is calculated and sketched with geometrical approximations; fundamental trigonometry is used for this purpose. Results: Extensive simulation tables for WF of 15º, 30º, 45º, and 60º are shown with errors. As a result, it is possible to determine the best individual treatment dose distribution for each patient. We present these basic simulations and numerical examples for standard manufactured WF of straight sloping surface, to check the accuracy/errors of the calculations. Simulation results give low RMS/relative error values for WF of 15º, 30º, 45º, and 60º. Conclusion: We obtained a series of formulas of analytic geometry for WF that can be applied for any particular dose

  10. Determination of zinc oxide content of mineral medicine calamine using near-infrared spectroscopy based on MIV and BP-ANN algorithm

    Science.gov (United States)

    Zhang, Xiaodong; Chen, Long; Sun, Yangbo; Bai, Yu; Huang, Bisheng; Chen, Keli

    2018-03-01

    Near-infrared (NIR) spectroscopy has been widely used in the analysis of traditional Chinese medicine. It has the advantages of fast analysis, no damage to samples and no pollution. In this research, a fast quantitative model for zinc oxide (ZnO) content in the mineral medicine calamine was explored based on NIR spectroscopy. NIR spectra of 57 batches of calamine samples were collected and the first derivative (FD) method was adopted for spectral pretreatment. The content of ZnO in each calamine sample was determined using ethylenediaminetetraacetic acid (EDTA) titration and taken as the reference value for NIR spectroscopy. The 57 batches of calamine samples were categorized into calibration and prediction sets using the Kennard-Stone (K-S) algorithm. First, in the calibration set, the correlation coefficient (r) between the absorbance value and the ZnO content of the corresponding samples was calculated at each wave number. Next, the 50 wave numbers with the highest squared correlation coefficients (r2) were selected to compose the characteristic spectral bands (4081.8-4096.3, 4188.9-4274.7, 4335.4, 4763.6, 4794.4-4802.1, 4809.9, 4817.6-4875.4 cm-1), which were used to establish the quantitative model of ZnO content using the back propagation artificial neural network (BP-ANN) algorithm. Then, the mean impact value (MIV) algorithm was applied to these 50 wave numbers to retain those whose absolute MIV was greater than or equal to 25, giving the optimal characteristic spectral bands (4875.4-4836.9, 4223.6-4080.9 cm-1). Both internal cross-validation and external validation were then used to screen the number of hidden-layer nodes of the BP-ANN, and four hidden-layer nodes were finally chosen as optimal. The resulting BP-ANN model showed high accuracy and strong forecasting capacity for analyzing ZnO content in calamine samples ranging within 42.05-69.98%, with a relative mean square error of cross validation (RMSECV) of 1.66% and a coefficient of

  11. An Adaptive Hybrid Multiprocessor technique for bioinformatics sequence alignment

    KAUST Repository

    Bonny, Talal

    2012-07-28

    Sequence alignment algorithms such as the Smith-Waterman algorithm are among the most important applications in the development of bioinformatics. Sequence alignment algorithms must process large amounts of data, which may take a long time. Here, we introduce our Adaptive Hybrid Multiprocessor technique to accelerate the implementation of the Smith-Waterman algorithm. Our technique utilizes both the graphics processing unit (GPU) and the central processing unit (CPU). It adapts the implementation according to the number of CPUs given as input by efficiently distributing the workload between the processing units. Using existing resources (GPU and CPU) in an efficient way is a novel approach. The peak performance achieved for the platforms GPU + CPU, GPU + 2CPUs, and GPU + 3CPUs is 10.4 GCUPS, 13.7 GCUPS, and 18.6 GCUPS, respectively (with a query length of 511 amino acids). © 2010 IEEE.
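
The computation being accelerated in this record is the standard Smith-Waterman recursion; a minimal serial reference version is sketched below (the GPU/CPU workload distribution itself is not reproduced, and the +2/-1/-2 scoring is an arbitrary choice).

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Minimal serial Smith-Waterman local-alignment score: the per-cell
    recursion that GPU implementations parallelize along anti-diagonals."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i-1] == b[j-1] else mismatch
            H[i][j] = max(0,                    # local alignment can restart
                          H[i-1][j-1] + s,      # match/mismatch
                          H[i-1][j] + gap,      # gap in b
                          H[i][j-1] + gap)      # gap in a
            best = max(best, H[i][j])
    return best

print(smith_waterman("GATTACA", "ATTAC"))  # 10
```

Each cell depends only on its upper, left and upper-left neighbors, which is why cells on the same anti-diagonal are independent and can be computed in parallel.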

  12. Determination of the Three-Dimensional Rate of Cancer Cell Rotation in an Optically-Induced Electrokinetics Chip Using an Optical Flow Algorithm

    Directory of Open Access Journals (Sweden)

    Yuliang Zhao

    2018-03-01

    Full Text Available Our group has reported that Melan-A cells and lymphocytes undergo self-rotation in a homogeneous AC electric field, and found that the rotation velocity of these cells is a key indicator to characterize their physical properties. However, the determination of the rotation properties of a cell by human eyes is tedious and time consuming, and not always accurate. In this paper, a method is presented to more accurately determine the 3D cell rotation velocity and axis from a 2D image sequence captured by a single camera. Using the optical flow method, we obtained the 2D motion field data from the image sequence and back-projected it onto a 3D sphere model, and then the rotation axis and velocity of the cell were calculated. After testing the algorithm on animated image sequences, experiments were also performed on image sequences of real rotating cells. All of these results indicate that this method is accurate, practical, and useful. Furthermore, the method presented here can also be used to determine the 3D rotation velocity of other types of spherical objects that are commonly used in microfluidic applications, such as beads and microparticles.
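
The final step of the pipeline described above, recovering a rotation axis and speed from a velocity field on a sphere, can be sketched as a linear least-squares problem: for a rigid rotation, v = ω × r, so stacking the equations for many surface points lets ω be solved directly. This is a simplified stand-in for the paper's method; the sampled points and angular velocity below are synthetic.

```python
import numpy as np

def rotation_from_flow(points, velocities):
    """Estimate the angular-velocity vector w from surface points r_i and
    their linear velocities v_i, using v = w x r  =>  v = (-[r]_x) w and
    solving the stacked 3N x 3 system in the least-squares sense."""
    rows, rhs = [], []
    for r, v in zip(points, velocities):
        rx, ry, rz = r
        # -[r]_x : negated skew-symmetric cross-product matrix of r
        rows.append(np.array([[0.0,  rz, -ry],
                              [-rz, 0.0,  rx],
                              [ ry, -rx, 0.0]]))
        rhs.append(v)
    A = np.vstack(rows)
    b = np.concatenate(rhs)
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w

# synthetic check: points on the unit sphere rotating with a known w
rng = np.random.default_rng(0)
pts = rng.normal(size=(20, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
w_true = np.array([0.1, -0.3, 0.8])
vel = np.cross(w_true, pts)
print(rotation_from_flow(pts, vel))  # ~ [0.1, -0.3, 0.8]
```

The magnitude of the recovered ω gives the rotation speed and its direction the axis, which is exactly the pair of quantities the paper extracts from the back-projected flow.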

  13. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  14. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  15. Reducing beam shaper alignment complexity: diagnostic techniques for alignment and tuning

    Science.gov (United States)

    Lizotte, Todd E.

    2011-10-01

    Safe and efficient optical alignment is a critical requirement for industrial laser systems used in a high volume manufacturing environment. Of specific interest is the development of techniques to align beam shaping optics within a beam line; having the ability to instantly verify by qualitative means that each element is in its proper position as the beam shaper module is being aligned. There is a need to simplify these alignment techniques to a level where even a newcomer to optical alignment is able to complete the task. Couple this alignment need with the fact that most laser system manufacturers ship their products worldwide, which introduces a new set of variables including cultural and language barriers, and this becomes a top priority for manufacturers. Tools and methodologies for alignment of complex optical systems need to be able to cross these barriers to ensure the highest degree of up time and reduce the cost of maintenance on the production floor. Customers worldwide who purchase production laser equipment understand that system maintenance accounts for the majority of costs to a manufacturing facility and is typically the largest single controllable expenditure in a production plant. This desire to reduce costs is driving the current trend towards predictive and proactive, rather than reactive, maintenance of laser based optical beam delivery systems [10]. With proper diagnostic tools, laser system developers can develop proactive approaches to reduce system down time, safeguard operational performance and reduce premature or catastrophic optics failures. Obviously analytical data will provide quantifiable performance standards which are more precise than qualitative standards, but each has a role in determining overall optical system performance [10]. This paper will discuss the use of film and fluorescent mirror devices as diagnostic tools for beam shaper module alignment off line or in-situ.
The paper will also provide an overview

  16. Long Read Alignment with Parallel MapReduce Cloud Platform

    Science.gov (United States)

    Al-Absi, Ahmed Abdulhakim; Kang, Dae-Ki

    2015-01-01

    Genomic sequence alignment is an important technique to decode genome sequences in bioinformatics. Next-Generation Sequencing technologies produce genomic data of longer reads. Cloud platforms are adopted to address the problems arising from storage and analysis of large genomic data. Existing gene sequencing tools for cloud platforms predominantly consider short read gene sequences and adopt the Hadoop MapReduce framework for computation. However, serial execution of map and reduce phases is a problem in such systems. Therefore, in this paper, we introduce Burrows-Wheeler Aligner's Smith-Waterman Alignment on Parallel MapReduce (BWASW-PMR) cloud platform for long sequence alignment. The proposed cloud platform adopts the widely accepted and accurate BWA-SW algorithm for long sequence alignment. A custom MapReduce platform is developed to overcome the drawbacks of the Hadoop framework. A parallel execution strategy of the MapReduce phases and optimization of the Smith-Waterman algorithm are considered. Performance evaluation results exhibit an average speed-up of 6.7 for BWASW-PMR compared with the state-of-the-art Bwasw-Cloud. An average reduction of 30% in the map phase makespan is reported across all experiments comparing BWASW-PMR with Bwasw-Cloud. Optimization of Smith-Waterman results in reducing the execution time by 91.8%. The experimental study proves the efficiency of BWASW-PMR for aligning long genomic sequences on cloud platforms. PMID:26839887

  17. Alignment of the ATLAS Inner Detector Tracking System

    CERN Document Server

    Lacuesta, V; The ATLAS collaboration

    2010-01-01

    ATLAS is a multipurpose experiment that records the LHC collisions. To reconstruct the trajectories of charged particles produced in these collisions, the ATLAS tracking system is equipped with silicon planar sensors and drift-tube based detectors. They constitute the ATLAS Inner Detector. In order to achieve its scientific goals, the alignment of the ATLAS tracking system requires accurately determining its almost 36000 degrees of freedom. The demanded precision for the alignment of the silicon sensors is below 10 micrometers. This implies using a large sample of high momentum and isolated charged particle tracks. The high level trigger selects those tracks online. Then the raw data with the hit information of the triggered tracks is stored in a calibration stream. Tracks from the cosmic trigger during empty LHC bunches are also used as input for the alignment. The implementation of the track-based alignment within the ATLAS software framework unifies different alignment approaches and allows the alignment of ...

  18. Alignment of concerns

    DEFF Research Database (Denmark)

    Andersen, Tariq Osman; Bansler, Jørgen P.; Kensing, Finn

    E-health promises to enable and support active patient participation in chronic care. However, these fairly recent innovations are complicated matters and emphasize significant challenges, such as patients’ and clinicians’ different ways of conceptualizing disease and illness. Informed by insight...... from medical phenomenology and our own empirical work in telemonitoring and medical care of heart patients, we propose a design rationale for e-health systems conceptualized as the ‘alignment of concerns’....

  19. Aligning Technology Education Teaching with Brain Development

    Science.gov (United States)

    Katsioloudis, Petros

    2015-01-01

    This exploratory study was designed to determine if there is a level of alignment between technology education curriculum and theories of intellectual development. The researcher compared Epstein's Brain Growth Theory and Piaget's Status of Intellectual Development with technology education curriculum from Australia, England, and the United…

  20. Image correlation method for DNA sequence alignment.

    Science.gov (United States)

    Curilem Saldías, Millaray; Villarroel Sassarini, Felipe; Muñoz Poblete, Carlos; Vargas Vásquez, Asticio; Maureira Butler, Iván

    2012-01-01

    The complexity of searches and the volume of genomic data make sequence alignment one of bioinformatics most active research areas. New alignment approaches have incorporated digital signal processing techniques. Among these, correlation methods are highly sensitive. This paper proposes a novel sequence alignment method based on 2-dimensional images, where each nucleic acid base is represented as a fixed gray intensity pixel. Query and known database sequences are coded to their pixel representation and sequence alignment is handled as object recognition in a scene problem. Query and database become object and scene, respectively. An image correlation process is carried out in order to search for the best match between them. Given that this procedure can be implemented in an optical correlator, the correlation could eventually be accomplished at light speed. This paper shows an initial research stage where results were "digitally" obtained by simulating an optical correlation of DNA sequences represented as images. A total of 303 queries (variable lengths from 50 to 4500 base pairs) and 100 scenes represented by 100 x 100 images each (in total, one million base pair database) were considered for the image correlation analysis. The results showed that correlations reached very high sensitivity (99.01%), specificity (98.99%) and outperformed BLAST when mutation numbers increased. However, digital correlation processes were hundred times slower than BLAST. We are currently starting an initiative to evaluate the correlation speed process of a real experimental optical correlator. By doing this, we expect to fully exploit optical correlation light properties. As the optical correlator works jointly with the computer, digital algorithms should also be optimized. The results presented in this paper are encouraging and support the study of image correlation methods on sequence alignment.
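
The encoding-and-correlation idea in this record can be sketched in one dimension: each base is mapped to a fixed gray intensity, and a zero-mean template is slid along the database signal, with the best correlation peak marking the match position. This is a simplified digital analogue of the paper's 2D optical correlation; the intensity values are arbitrary choices.

```python
import numpy as np

# Each base mapped to a fixed gray intensity, as in the paper's image
# encoding; the specific values below are arbitrary for illustration.
GRAY = {'A': 0.25, 'C': 0.50, 'G': 0.75, 'T': 1.00}

def encode(seq):
    return np.array([GRAY[b] for b in seq])

def best_match_offset(query, database):
    """Offset in `database` where the query template correlates best
    (zero-mean template so constant regions score zero)."""
    q = encode(query) - np.mean(encode(query))
    d = encode(database)
    scores = [float(np.dot(q, d[i:i + len(q)]))
              for i in range(len(d) - len(q) + 1)]
    return int(np.argmax(scores))

db = "GGGGACGTACGTGGGG"
print(best_match_offset("ACGTACGT", db))  # 4
```

In an optical correlator the sliding dot-product is performed in parallel by a lens system, which is the source of the potential speed advantage the paper discusses.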

  1. Alignment at the ESRF

    International Nuclear Information System (INIS)

    Martin, D.; Levet, N.; Gatta, G.

    1999-01-01

    The ESRF Survey and Alignment group is responsible for the installation, control and periodic realignment of the accelerators and experiments which produce high quality x-rays used by scientists from Europe and around the world. Alignment tolerances are typically less than one millimetre and often in the order of several micrometers. The group is composed of one engineer, five highly trained survey technicians, one electronic and one computer technician. This team is reinforced during peak periods by technicians from an external survey company. First, an overview and comparative study of the main large-scale survey instrumentation and methods used by the group is given. Secondly, a discussion of long-term deformation on the ESRF site is presented. This is followed by a presentation of the methods used in the realignment of the various machines. Two important aspects of our work, beamline and front-end alignment, and the so-called machine exotic devices are briefly discussed. Finally, the ESRF calibration bench is presented. (authors)

  2. Seeking the perfect alignment

    CERN Multimedia

    2002-01-01

    The first full-scale tests of the ATLAS Muon Spectrometer are about to begin in Prévessin. The set-up includes several layers of Monitored Drift Tubes Chambers (MDTs) and will allow tests of the performance of the detectors and of their highly accurate alignment system.   Monitored Drift Chambers in Building 887 in Prévessin, where they are just about to be tested. Muon chambers are keeping the ATLAS Muon Spectrometer team quite busy this summer. Now that most people go on holiday, the beam and alignment tests for these chambers are just starting. These chambers will measure with high accuracy the momentum of high-energy muons, and this implies very demanding requirements for their alignment. The MDT chambers consist of drift tubes, which are gas-filled metal tubes, 3 cm in diameter, with wires running down their axes. With high voltage between the wire and the tube wall, the ionisation due to traversing muons is detected as electrical pulses. With careful timing of the pulses, the position of the muon t...

  3. Aligning IT and Business Strategy: An Australian University Case Study

    Science.gov (United States)

    Dent, Alan

    2015-01-01

    Alignment with business objectives is considered to be an essential outcome of information technology (IT) strategic planning. This case study examines the process of creating an IT strategy for an Australian university using an industry standard methodology. The degree of alignment is determined by comparing the strategic priorities supported by…

  4. Trace determination of safranin O dye using ultrasound assisted dispersive solid-phase micro extraction: Artificial neural network-genetic algorithm and response surface methodology.

    Science.gov (United States)

    Dil, Ebrahim Alipanahpour; Ghaedi, Mehrorang; Asfaram, Arash; Mehrabi, Fatemeh; Bazrafshan, Ali Akbar; Ghaedi, Abdol Mohammad

    2016-11-01

    In this study, an ultrasound assisted dispersive solid-phase micro extraction combined with spectrophotometry (USA-DSPME-UV) method based on activated carbon modified with Fe2O3 nanoparticles (Fe2O3-NPs-AC) was developed for pre-concentration and determination of safranin O (SO). The efficiency of the USA-DSPME-UV method may be affected by pH, amount of adsorbent, ultrasound time and eluent volume, and the extent and magnitude of their contribution to the response (in terms of main and interaction effects) was studied using central composite design (CCD) and artificial neural network-genetic algorithms (ANN-GA). Adjustment of the experimental conditions suggested by ANN-GA (pH 6.5, 1.1 mg of adsorbent, 10 min of ultrasound and 150 μL of eluent volume) led to the best operational performance, with a low LOD (6.3 ng mL-1) and LOQ (17.5 ng mL-1) in the range of 25-3500 ng mL-1. In the following stage, the SO content in real water and wastewater samples was successfully determined, with recoveries between 93.27 and 99.41% and RSD lower than 3%. Copyright © 2016 Elsevier B.V. All rights reserved.

  5. Detector Alignment Studies for the CMS Experiment

    CERN Document Server

    Lampén, Tapio

    2007-01-01

    This thesis presents studies related to track-based alignment for the future CMS experiment at CERN. Excellent geometric alignment is crucial to fully benefit from the outstanding resolution of individual sensors. The large number of sensors makes it difficult in CMS to utilize computationally demanding alignment algorithms. A computationally light alignment algorithm, called the Hits and Impact Points algorithm (HIP), is developed and studied. It is based on minimization of the hit residuals. It can be applied to individual sensors or to composite objects. All six alignment parameters (three translations and three rotations), or a subgroup of them, can be considered. The algorithm is expected to be particularly suitable for the alignment of the innermost part of CMS, the pixel detector, during its early operation, but can easily be utilized to align other parts of CMS as well. The HIP algorithm is applied to simulated CMS data and real data measured with a test-beam setup. The simulation studies dem...
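
The residual-minimization idea behind algorithms like HIP can be sketched in a reduced form: given predicted track crossing points and measured hit positions on a sensor plane, a small in-plane misalignment (two translations and one rotation) is estimated by linear least squares on the residuals. This is an illustrative stand-in, not the actual HIP code, and the misalignment values are synthetic.

```python
import numpy as np

def fit_alignment(predicted, measured):
    """Least-squares estimate of a small in-plane misalignment
    (tx, ty, theta) from track-hit residuals, in the spirit of
    residual-minimization alignment (not the actual HIP implementation)."""
    rows, rhs = [], []
    for (px, py), (mx, my) in zip(predicted, measured):
        # small-angle model: measured = predicted + t + theta * (-py, px)
        rows.append([1.0, 0.0, -py]); rhs.append(mx - px)
        rows.append([0.0, 1.0,  px]); rhs.append(my - py)
    params, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return params  # tx, ty, theta

# synthetic check: apply a known small misalignment and recover it
rng = np.random.default_rng(1)
pred = rng.uniform(-5, 5, size=(30, 2))
t, theta = np.array([0.02, -0.01]), 1e-3
meas = pred + t + theta * np.stack([-pred[:, 1], pred[:, 0]], axis=1)
print(fit_alignment(pred, meas))  # ~ [0.02, -0.01, 0.001]
```

In a real detector this fit is iterated, since correcting the sensor positions changes the track fits and therefore the residuals.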

  6. Simulator for beam-based LHC collimator alignment

    Science.gov (United States)

    Valentino, Gianluca; Aßmann, Ralph; Redaelli, Stefano; Sammut, Nicholas

    2014-02-01

    In the CERN Large Hadron Collider, collimators need to be set up to form a multistage hierarchy to ensure efficient multiturn cleaning of halo particles. Automatic algorithms were introduced during the first run to reduce the beam time required for beam-based setup, improve the alignment accuracy, and reduce the risk of human errors. Simulating the alignment procedure would allow for off-line tests of alignment policies and algorithms. A simulator was developed based on a diffusion beam model to generate the characteristic beam loss signal spike and decay produced when a collimator jaw touches the beam, which is observed in a beam loss monitor (BLM). Empirical models derived from the available measurement data are used to simulate the steady-state beam loss and crosstalk between multiple BLMs. The simulator design is presented, together with simulation results and comparison to measurement data.

  7. Uncertainty evaluation in the self-alignment test of the upper plate of a press

    International Nuclear Information System (INIS)

    Lourenço, Alexandre S; E Sousa, J Alves

    2015-01-01

    This paper describes a method to evaluate the uncertainty of the self-alignment test of the upper plate of a press according to EN 12390-4:2000. The method, the algorithms and the sources of uncertainty are described.

  8. OXBench: A benchmark for evaluation of protein multiple sequence alignment accuracy

    Directory of Open Access Journals (Sweden)

    Searle Stephen MJ

    2003-10-01

    Full Text Available Abstract Background The alignment of two or more protein sequences provides a powerful guide in the prediction of the protein structure and in identifying key functional residues, however, the utility of any prediction is completely dependent on the accuracy of the alignment. In this paper we describe a suite of reference alignments derived from the comparison of protein three-dimensional structures together with evaluation measures and software that allow automatically generated alignments to be benchmarked. We test the OXBench benchmark suite on alignments generated by the AMPS multiple alignment method, then apply the suite to compare eight different multiple alignment algorithms. The benchmark shows the current state of the art for alignment accuracy and provides a baseline against which new alignment algorithms may be judged. Results The simple hierarchical multiple alignment algorithm, AMPS, performed as well as or better than more modern methods such as CLUSTALW once the PAM250 pair-score matrix was replaced by a BLOSUM series matrix. AMPS gave an accuracy in Structurally Conserved Regions (SCRs of 89.9% over a set of 672 alignments. The T-COFFEE method on a data set of families with http://www.compbio.dundee.ac.uk. Conclusions The OXBench suite of reference alignments, evaluation software and results database provide a convenient method to assess progress in sequence alignment techniques. Evaluation measures that were dependent on comparison to a reference alignment were found to give good discrimination between methods. The STAMP Sc Score, which is independent of a reference alignment, also gave good discrimination. Application of OXBench in this paper shows that, with the exception of T-COFFEE, the majority of the improvement in alignment accuracy seen since 1985 stems from improved pair-score matrices rather than algorithmic refinements. The maximum theoretical alignment accuracy obtained by pooling results over all methods was 94

  9. Attitude sensor alignment calibration for the solar maximum mission

    Science.gov (United States)

    Pitone, Daniel S.; Shuster, Malcolm D.

    1990-01-01

    An earlier heuristic study of the fine attitude sensors for the Solar Maximum Mission (SMM) revealed a temperature dependence of the alignment about the yaw axis of the pair of fixed-head star trackers relative to the fine pointing Sun sensor. Here, new sensor alignment algorithms which better quantify the dependence of the alignments on the temperature are developed and applied to the SMM data. Comparison with the results from the previous study reveals the limitations of the heuristic approach. In addition, some of the basic assumptions made in the prelaunch analysis of the alignments of the SMM are examined. The results of this work have important consequences for future missions with stringent attitude requirements and where misalignment variations due to variations in the temperature will be significant.

  10. MUSCLE: multiple sequence alignment with high accuracy and high throughput.

    Science.gov (United States)

    Edgar, Robert C

    2004-01-01

    We describe MUSCLE, a new computer program for creating multiple alignments of protein sequences. Elements of the algorithm include fast distance estimation using kmer counting, progressive alignment using a new profile function we call the log-expectation score, and refinement using tree-dependent restricted partitioning. The speed and accuracy of MUSCLE are compared with T-Coffee, MAFFT and CLUSTALW on four test sets of reference alignments: BAliBASE, SABmark, SMART and a new benchmark, PREFAB. MUSCLE achieves the highest, or joint highest, rank in accuracy on each of these sets. Without refinement, MUSCLE achieves average accuracy statistically indistinguishable from T-Coffee and MAFFT, and is the fastest of the tested methods for large numbers of sequences, aligning 5000 sequences of average length 350 in 7 min on a current desktop computer. The MUSCLE program, source code and PREFAB test data are freely available at http://www.drive5.com/muscle.
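
The kmer-counting distance mentioned above can be sketched with a simple set-based variant: two sequences that share many k-mers are likely closely related, so the shared-k-mer fraction serves as a fast distance estimate without computing any alignment. This is a simplified illustration in the spirit of MUSCLE's first stage, not MUSCLE's exact formula.

```python
def kmer_set(seq, k=3):
    """All overlapping k-mers of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def kmer_distance(a, b, k=3):
    """Distance estimate from the shared k-mer fraction (set-based
    simplification of kmer-counting distances; 0 = identical k-mer sets)."""
    sa, sb = kmer_set(a, k), kmer_set(b, k)
    return 1.0 - len(sa & sb) / min(len(sa), len(sb))

# one terminal substitution changes only the final k-mer
print(kmer_distance("MKVLITGA", "MKVLITGS"))
```

Because no dynamic programming is involved, such distances can be computed for all sequence pairs quickly, which is what makes the initial guide-tree construction fast.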

  11. SOAP2: an improved ultrafast tool for short read alignment

    DEFF Research Database (Denmark)

    Li, Ruiqiang; Yu, Chang; Li, Yingrui

    2009-01-01

    SUMMARY: SOAP2 is a significantly improved version of the short oligonucleotide alignment program that both reduces computer memory usage and increases alignment speed at an unprecedented rate. We used a Burrows-Wheeler Transformation (BWT) compression index to substitute the seed strategy for indexing the reference sequence in the main memory. We tested it on the whole human genome and found that this new algorithm reduced memory usage from 14.7 to 5.4 GB and improved alignment speed by 20-30 times. SOAP2 is compatible with both single- and paired-end reads. Additionally, this tool now supports multiple text and compressed file formats. A consensus builder has also been developed for consensus assembly and SNP detection from alignment of short reads on a reference genome. AVAILABILITY: http://soap.genomics.org.cn
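
The index at the core of SOAP2 rests on the Burrows-Wheeler transform. A minimal sketch, assuming the naive sorted-rotations construction (production aligners derive the transform from a suffix array to handle genome-scale inputs):

```python
def bwt(text, sentinel="$"):
    """Burrows-Wheeler transform via sorted cyclic rotations.

    O(n^2 log n) and only for illustration: the transform is the last
    column of the sorted rotation matrix of text + sentinel.
    """
    s = text + sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)
```

The transform groups identical characters into runs, which is what makes the compressed FM-index style of backward search possible.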

  12. Evolution of shiva laser alignment systems

    International Nuclear Information System (INIS)

    Boyd, R.D.

    1980-07-01

    The Shiva oscillator pulse is preamplified and divided into twenty beams. Each beam is then amplified, spatially filtered, directed, and focused onto a target a few hundred micrometers in size, producing optical intensities up to 10¹⁶ W/cm². The laser was designed and built with three automatic alignment systems: the oscillator alignment system, which aligns each of the laser's three oscillators to a reference beamline; the chain input pointing system, which points each beam into its respective chain; and the chain output pointing, focusing and centering system, which points, centers and focuses the beam onto the target. Recently the alignment of the laser's one hundred twenty spatial filter pinholes was also automated. This system uses digitized video images of back-illuminated pinholes and computer analysis to determine current positions. The offset of each current position from a desired center point is then translated into stepper motor commands and the pinhole is moved the proper distance. While motors for one pinhole are moving, the system can digitize, analyze, and send commands to other motors, allowing the system to efficiently align several pinholes in parallel.
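
The offset-to-motor-command step described above can be sketched as follows. The scale factors and function name are hypothetical, since the abstract does not give the system's calibration:

```python
def centering_commands(current, target, um_per_pixel=5.0, um_per_step=2.5):
    """Translate a measured pinhole offset (in image pixels) into
    stepper-motor step counts for the x and y axes.

    um_per_pixel and um_per_step are illustrative calibration values,
    not the Shiva system's actual figures.
    """
    steps = []
    for axis, c, t in zip("xy", current, target):
        offset_um = (t - c) * um_per_pixel   # pixel offset -> micrometers
        steps.append((axis, round(offset_um / um_per_step)))
    return steps
```

While these commands are executing for one pinhole, the image-analysis stage is free to process the next, which is the parallelism the abstract describes.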

  13. Precision lens assembly with alignment turning system

    Science.gov (United States)

    Ho, Cheng-Fang; Huang, Chien-Yao; Lin, Yi-Hao; Kuo, Hui-Jean; Kuo, Ching-Hsiang; Hsu, Wei-Yao; Chen, Fong-Zhi

    2017-10-01

    The poker chip assembly with high precision lens barrels is widely applied to ultra-high performance optical systems. ITRC applies the poker chip assembly technology to high numerical aperture objective lenses and lithography projection lenses because of its high efficiency assembly process. In order to achieve a high precision lens cell for poker chip assembly, an alignment turning system (ATS) is developed. The ATS includes measurement, alignment and turning modules. The measurement module is equipped with a non-contact displacement sensor (NCDS) and an autocollimator (ACM). The NCDS and ACM are used to measure the centration errors of the top and the bottom surface of a lens, respectively; the amount of displacement and tilt adjustment with respect to the rotational axis of the turning machine can then be determined for the alignment module. After the measurement, alignment and turning processes on the ATS, the centration error of a lens cell 200 mm in diameter can be controlled within 10 arcsec. Furthermore, a poker chip assembly lens cell with three sub-cells is demonstrated; each sub-cell is measured and finished with the alignment and turning processes. The lens assembly was tested five times by each of three technicians; the average transmission centration error of the assembled lens is 12.45 arcsec. The results show that the ATS can achieve high assembly efficiency for precision optical systems.

  14. Clear aligners in orthodontic treatment.

    Science.gov (United States)

    Weir, T

    2017-03-01

    Since the introduction of the Tooth Positioner (TP Orthodontics) in 1944, removable appliances analogous to clear aligners have been employed for mild to moderate orthodontic tooth movements. Clear aligner therapy has been a part of orthodontic practice for decades, but has, particularly since the introduction of Invisalign appliances (Align Technology) in 1998, become an increasingly common addition to the orthodontic armamentarium. An internet search reveals at least 27 different clear aligner products currently on offer for orthodontic treatment. The present paper will highlight the increasing popularity of clear aligner appliances, as well as the clinical scope and the limitations of aligner therapy in general. Further, the paper will outline the differences between the various types of clear aligner products currently available. © 2017 Australian Dental Association.

  15. Markov random field based automatic image alignment for electron tomography.

    Science.gov (United States)

    Amat, Fernando; Moussavi, Farshid; Comolli, Luis R; Elidan, Gal; Downing, Kenneth H; Horowitz, Mark

    2008-03-01

    We present a method for automatic full-precision alignment of the images in a tomographic tilt series. Full-precision automatic alignment of cryo electron microscopy images has remained a difficult challenge to date, due to the limited electron dose and low image contrast. These facts lead to poor signal to noise ratio (SNR) in the images, which causes automatic feature trackers to generate errors, even with high contrast gold particles as fiducial features. To enable fully automatic alignment for full-precision reconstructions, we frame the problem probabilistically as finding the most likely particle tracks given a set of noisy images, using contextual information to make the solution more robust to the noise in each image. To solve this maximum likelihood problem, we use Markov Random Fields (MRF) to establish the correspondence of features in alignment and robust optimization for projection model estimation. The resulting algorithm, called Robust Alignment and Projection Estimation for Tomographic Reconstruction, or RAPTOR, has not needed any manual intervention for the difficult datasets we have tried, and has provided sub-pixel alignment that is as good as the manual approach by an expert user. We are able to automatically map complete and partial marker trajectories and thus obtain highly accurate image alignment. Our method has been applied to challenging cryo electron tomographic datasets with low SNR from intact bacterial cells, as well as several plastic section and X-ray datasets.

  16. Using structure to explore the sequence alignment space of remote homologs.

    Directory of Open Access Journals (Sweden)

    Andrew Kuziemko

    2011-10-01

    Protein structure modeling by homology requires an accurate sequence alignment between the query protein and its structural template. However, sequence alignment methods based on dynamic programming (DP) are typically unable to generate accurate alignments for remote sequence homologs, thus limiting the applicability of modeling methods. A central problem is that the alignment that is "optimal" in terms of the DP score does not necessarily correspond to the alignment that produces the most accurate structural model. That is, the correct alignment based on structural superposition will generally have a lower score than the optimal alignment obtained from sequence. Variations of the DP algorithm have been developed that generate alternative alignments that are "suboptimal" in terms of the DP score, but these still encounter difficulties in detecting the correct structural alignment. We present here a new alternative sequence alignment method that relies heavily on the structure of the template. By initially aligning the query sequence to individual fragments in secondary structure elements and combining high-scoring fragments that pass basic tests for "modelability", we can generate accurate alignments within a small ensemble. Our results suggest that the set of sequences that can currently be modeled by homology can be greatly extended.

  17. Using structure to explore the sequence alignment space of remote homologs.

    Science.gov (United States)

    Kuziemko, Andrew; Honig, Barry; Petrey, Donald

    2011-10-01

    Protein structure modeling by homology requires an accurate sequence alignment between the query protein and its structural template. However, sequence alignment methods based on dynamic programming (DP) are typically unable to generate accurate alignments for remote sequence homologs, thus limiting the applicability of modeling methods. A central problem is that the alignment that is "optimal" in terms of the DP score does not necessarily correspond to the alignment that produces the most accurate structural model. That is, the correct alignment based on structural superposition will generally have a lower score than the optimal alignment obtained from sequence. Variations of the DP algorithm have been developed that generate alternative alignments that are "suboptimal" in terms of the DP score, but these still encounter difficulties in detecting the correct structural alignment. We present here a new alternative sequence alignment method that relies heavily on the structure of the template. By initially aligning the query sequence to individual fragments in secondary structure elements and combining high-scoring fragments that pass basic tests for "modelability", we can generate accurate alignments within a small ensemble. Our results suggest that the set of sequences that can currently be modeled by homology can be greatly extended.
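
The dynamic-programming recurrence underlying the alignments discussed above is the standard Needleman-Wunsch table fill. The scoring values below are illustrative; real methods use substitution matrices (e.g. BLOSUM) and affine gap penalties:

```python
def nw_score(a, b, match=1, mismatch=-1, gap=-2):
    """Needleman-Wunsch global alignment score via a DP table.

    F[i][j] holds the best score aligning a[:i] with b[:j]; each cell
    chooses among a (mis)match, a gap in b, or a gap in a.
    """
    m, n = len(a), len(b)
    F = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        F[i][0] = i * gap
    for j in range(1, n + 1):
        F[0][j] = j * gap
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            F[i][j] = max(F[i - 1][j - 1] + s,
                          F[i - 1][j] + gap,
                          F[i][j - 1] + gap)
    return F[m][n]
```

The "suboptimal alignment" variants mentioned above enumerate near-maximal paths through this same table rather than only the single best one.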

  18. Halo Intrinsic Alignment: Dependence on Mass, Formation Time, and Environment

    Energy Technology Data Exchange (ETDEWEB)

    Xia, Qianli; Kang, Xi; Wang, Peng; Luo, Yu [Purple Mountain Observatory, the Partner Group of MPI für Astronomie, 2 West Beijing Road, Nanjing 210008 (China); Yang, Xiaohu; Jing, Yipeng [Center for Astronomy and Astrophysics, Shanghai Jiao Tong University, Shanghai 200240 (China); Wang, Huiyuan [Key Laboratory for Research in Galaxies and Cosmology, Department of Astronomy, University of Science and Technology of China, Hefei, Anhui 230026 (China); Mo, Houjun, E-mail: kangxi@pmo.ac.cn [Astronomy Department and Center for Astrophysics, Tsinghua University, Beijing 10084 (China)

    2017-10-10

    In this paper we use high-resolution cosmological simulations to study halo intrinsic alignment and its dependence on mass, formation time, and large-scale environment. In agreement with previous studies using N-body simulations, it is found that massive halos have stronger alignment. For the first time, we find that for a given halo mass older halos have stronger alignment and halos in cluster regions also have stronger alignment than those in filaments. To model these dependencies, we extend the linear alignment model with inclusion of halo bias and find that the halo alignment with its mass and formation time dependence can be explained by halo bias. However, the model cannot account for the environment dependence, as it is found that halo bias is lower in clusters and higher in filaments. Our results suggest that halo bias and environment are independent factors in determining halo alignment. We also study the halo alignment correlation function and find that halos are strongly clustered along their major axes and less clustered along the minor axes. The correlated halo alignment can extend to scales as large as 100 h⁻¹ Mpc, where its feature is mainly driven by the baryon acoustic oscillation effect.

  19. Development and application of a modified dynamic time warping algorithm (DTW-S) to analyses of primate brain expression time series

    Directory of Open Access Journals (Sweden)

    Vingron Martin

    2011-08-01

    Background: Comparing biological time series data across different conditions, or different specimens, is a common but still challenging task. Algorithms aligning two time series represent a valuable tool for such comparisons. While many powerful computation tools for time series alignment have been developed, they do not provide significance estimates for time shift measurements. Results: Here, we present an extended version of the original DTW algorithm that allows us to determine the significance of time shift estimates in time series alignments, the DTW-Significance (DTW-S) algorithm. The DTW-S combines important properties of the original algorithm and other published time series alignment tools: DTW-S calculates the optimal alignment for each time point of each gene, it uses interpolated time points for time shift estimation, and it does not require alignment of the time-series end points. As a new feature, we implement a simulation procedure based on parameters estimated from real time series data, on a series-by-series basis, allowing us to determine the false positive rate (FPR) and the significance of the estimated time shift values. We assess the performance of our method using simulation data and real expression time series from two published primate brain expression datasets. Our results show that this method can provide accurate and robust time shift estimates for each time point on a gene-by-gene basis. Using these estimates, we are able to uncover novel features of the biological processes underlying human brain development and maturation. Conclusions: The DTW-S provides a convenient tool for calculating accurate and robust time shift estimates at each time point for each gene, based on time series data. The estimates can be used to uncover novel biological features of the system being studied. The DTW-S is freely available as an R package TimeShift at http://www.picb.ac.cn/Comparative/data.html.

  20. Development and application of a modified dynamic time warping algorithm (DTW-S) to analyses of primate brain expression time series.

    Science.gov (United States)

    Yuan, Yuan; Chen, Yi-Ping Phoebe; Ni, Shengyu; Xu, Augix Guohua; Tang, Lin; Vingron, Martin; Somel, Mehmet; Khaitovich, Philipp

    2011-08-18

    Comparing biological time series data across different conditions, or different specimens, is a common but still challenging task. Algorithms aligning two time series represent a valuable tool for such comparisons. While many powerful computation tools for time series alignment have been developed, they do not provide significance estimates for time shift measurements. Here, we present an extended version of the original DTW algorithm that allows us to determine the significance of time shift estimates in time series alignments, the DTW-Significance (DTW-S) algorithm. The DTW-S combines important properties of the original algorithm and other published time series alignment tools: DTW-S calculates the optimal alignment for each time point of each gene, it uses interpolated time points for time shift estimation, and it does not require alignment of the time-series end points. As a new feature, we implement a simulation procedure based on parameters estimated from real time series data, on a series-by-series basis, allowing us to determine the false positive rate (FPR) and the significance of the estimated time shift values. We assess the performance of our method using simulation data and real expression time series from two published primate brain expression datasets. Our results show that this method can provide accurate and robust time shift estimates for each time point on a gene-by-gene basis. Using these estimates, we are able to uncover novel features of the biological processes underlying human brain development and maturation. The DTW-S provides a convenient tool for calculating accurate and robust time shift estimates at each time point for each gene, based on time series data. The estimates can be used to uncover novel biological features of the system being studied. The DTW-S is freely available as an R package TimeShift at http://www.picb.ac.cn/Comparative/data.html.
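
The core DTW recursion that DTW-S extends can be sketched as follows; the interpolation and significance-testing machinery specific to DTW-S is not shown:

```python
def dtw(x, y, dist=lambda a, b: abs(a - b)):
    """Classic dynamic time warping cost between two series.

    D[i][j] is the minimal accumulated cost aligning x[:i] with y[:j];
    each cell extends the cheapest of the three predecessor moves
    (stretch x, stretch y, or advance both).
    """
    n, m = len(x), len(y)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            c = dist(x[i - 1], y[j - 1])
            D[i][j] = c + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]
```

Because the warping path may repeat points, a series that lingers on a value (e.g. a delayed expression peak) can still align at zero cost, which is exactly what makes DTW suitable for time-shift estimation.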

  1. An algorithm of improving speech emotional perception for hearing aid

    Science.gov (United States)

    Xi, Ji; Liang, Ruiyu; Fei, Xianju

    2017-07-01

    In this paper, a speech emotion recognition (SER) algorithm was proposed to improve the emotional perception of hearing-impaired people. The algorithm utilizes multiple kernel technology to overcome a drawback of SVM: slow training speed. Firstly, in order to improve the adaptive performance of the Gaussian Radial Basis Function (RBF), the parameter determining the nonlinear mapping was optimized on the basis of kernel target alignment. Then, the obtained kernel function was used as the basis kernel of Multiple Kernel Learning (MKL) with a slack variable that could solve the over-fitting problem. However, the slack variable also introduces error into the result. Therefore, a soft-margin MKL was proposed to balance the margin against the error. Moreover, an iterative algorithm was used to solve for the combination coefficients and hyperplane equations. Experimental results show that the proposed algorithm can acquire an accuracy of 90% for five kinds of emotions including happiness, sadness, anger, fear and neutral. Compared with KPCA+CCA and PIM-FSVM, the proposed algorithm has the highest accuracy.
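
Kernel target alignment, used above to tune the RBF parameter, measures how well a kernel matrix matches the ideal label kernel yyᵀ. A minimal sketch with plain Python lists (variable names are illustrative):

```python
from math import sqrt

def kernel_target_alignment(K, y):
    """Alignment between kernel matrix K and the target kernel y y^T:
    <K, yy^T>_F / (||K||_F * ||yy^T||_F), which is 1 for a kernel
    perfectly matching the label structure."""
    n = len(y)
    Kyy = [[y[i] * y[j] for j in range(n)] for i in range(n)]
    dot = sum(K[i][j] * Kyy[i][j] for i in range(n) for j in range(n))
    nK = sqrt(sum(K[i][j] ** 2 for i in range(n) for j in range(n)))
    nY = sqrt(sum(Kyy[i][j] ** 2 for i in range(n) for j in range(n)))
    return dot / (nK * nY)
```

Maximizing this quantity over the RBF width selects the mapping whose induced similarities best agree with the emotion labels.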

  2. Determination of fat content in chicken hamburgers using NIR spectroscopy and the Successive Projections Algorithm for interval selection in PLS regression (iSPA-PLS)

    Science.gov (United States)

    Krepper, Gabriela; Romeo, Florencia; Fernandes, David Douglas de Sousa; Diniz, Paulo Henrique Gonçalves Dias; de Araújo, Mário César Ugulino; Di Nezio, María Susana; Pistonesi, Marcelo Fabián; Centurión, María Eugenia

    2018-01-01

    Determining fat content in hamburgers is very important to minimize or control the negative effects of fat on human health, effects such as cardiovascular diseases and obesity, which are caused by the high consumption of saturated fatty acids and cholesterol. This study proposed an alternative analytical method based on Near Infrared Spectroscopy (NIR) and the Successive Projections Algorithm for interval selection in Partial Least Squares regression (iSPA-PLS) for fat content determination in commercial chicken hamburgers. For this, 70 hamburger samples with a fat content ranging from 14.27 to 32.12 mg kg⁻¹ were prepared based on the upper limit recommended by the Argentinean Food Codex, which is 20% (w w⁻¹). NIR spectra were recorded and preprocessed by applying different approaches: baseline correction, SNV, MSC, and Savitzky-Golay smoothing. For comparison, full-spectrum PLS and interval PLS were also used. The best performance for the prediction set was obtained for the first derivative Savitzky-Golay smoothing with a second-order polynomial and window size of 19 points, achieving a coefficient of correlation of 0.94, RMSEP of 1.59 mg kg⁻¹, REP of 7.69% and RPD of 3.02. The proposed methodology represents an excellent alternative to the conventional Soxhlet extraction method, since waste generation is avoided and neither chemical reagents nor solvents are used, which follows the primary principles of Green Chemistry. The new method was successfully applied to chicken hamburger analysis, and the results agreed with the reference values at a 95% confidence level, making it very attractive for routine analysis.
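
The figures of merit quoted above (RMSEP, REP, RPD) follow from standard chemometric definitions; a minimal sketch, with hypothetical data:

```python
from math import sqrt

def prediction_metrics(y_true, y_pred):
    """RMSEP (root mean square error of prediction), REP (% relative
    error of prediction) and RPD (ratio of the reference standard
    deviation to RMSEP) for a prediction set."""
    n = len(y_true)
    mean_y = sum(y_true) / n
    rmsep = sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)
    sd = sqrt(sum((t - mean_y) ** 2 for t in y_true) / (n - 1))
    rep = 100.0 * rmsep / mean_y
    rpd = sd / rmsep
    return rmsep, rep, rpd
```

An RPD of about 3, as reported above, indicates a calibration whose prediction error is small relative to the natural spread of the reference values.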

  3. Implementation of a computationally efficient least-squares algorithm for highly under-determined three-dimensional diffuse optical tomography problems.

    Science.gov (United States)

    Yalavarthy, Phaneendra K; Lynch, Daniel R; Pogue, Brian W; Dehghani, Hamid; Paulsen, Keith D

    2008-05-01

    Three-dimensional (3D) diffuse optical tomography is known to be a nonlinear, ill-posed and sometimes under-determined problem, where regularization is added to the minimization to allow convergence to a unique solution. In this work, a generalized least-squares (GLS) minimization method was implemented, which employs weight matrices for both data-model misfit and optical properties to include their variances and covariances, using a computationally efficient scheme. This allows inversion of a matrix that is of a dimension dictated by the number of measurements, instead of by the number of imaging parameters. This increases the computation speed up to four times per iteration in most of the under-determined 3D imaging problems. An analytic derivation, using the Sherman-Morrison-Woodbury identity, is shown for this efficient alternative form and it is proven to be equivalent, not only analytically, but also numerically. Equivalent alternative forms for other minimization methods, like Levenberg-Marquardt (LM) and Tikhonov, are also derived. Three-dimensional reconstruction results indicate that the poor recovery of quantitatively accurate values in 3D optical images can also be a characteristic of the reconstruction algorithm, along with the target size. Interestingly, usage of GLS reconstruction methods reduces error in the periphery of the image, as expected, and improves by 20% the ability to quantify local interior regions in terms of the recovered optical contrast, as compared to LM methods. Characterization of detector photomultiplier tube noise has enabled the use of the GLS method for reconstructing experimental data and showed promise for better quantification of the target in 3D optical imaging. Use of these new alternative forms becomes effective when the number of imaging property parameters exceeds the number of measurements by a factor greater than 2.
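
The efficient alternative form above rests on the Sherman-Morrison-Woodbury identity. Its rank-one special case, (A + uvᵀ)⁻¹ = A⁻¹ − (A⁻¹u)(vᵀA⁻¹)/(1 + vᵀA⁻¹u), can be verified numerically in a few lines; the full GLS derivation uses the general matrix form of the identity:

```python
def inv2(M):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def sherman_morrison(Ainv, u, v):
    """(A + u v^T)^-1 computed from A^-1 via the rank-one identity."""
    Au = matvec(Ainv, u)                              # A^-1 u
    AinvT = [[Ainv[0][0], Ainv[1][0]],
             [Ainv[0][1], Ainv[1][1]]]
    vA = matvec(AinvT, v)                             # (v^T A^-1) as a vector
    denom = 1.0 + v[0] * Au[0] + v[1] * Au[1]
    return [[Ainv[i][j] - Au[i] * vA[j] / denom for j in range(2)]
            for i in range(2)]

# Numerical check: invert A + u v^T directly and via the identity.
A = [[4.0, 1.0], [2.0, 3.0]]
u, v = [1.0, 2.0], [3.0, 1.0]
updated = [[A[i][j] + u[i] * v[j] for j in range(2)] for i in range(2)]
left = inv2(updated)
right = sherman_morrison(inv2(A), u, v)
```

The computational payoff is the same as in the abstract: the inner inverse has the (small) dimension of the update, not of the full system.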

  4. Fast geometric algorithms

    International Nuclear Information System (INIS)

    Noga, M.T.

    1984-01-01

    This thesis addresses a number of important problems that fall within the framework of the new discipline of Computational Geometry. The list of topics covered includes sorting and selection, convex hull algorithms, the L₁ hull, determination of the minimum encasing rectangle of a set of points, the Euclidean and L₁ diameter of a set of points, the metric traveling salesman problem, and finding the superrange of star-shaped and monotone polygons. The main theme of all the work was to develop a set of very fast state-of-the-art algorithms that supersede any rivals in terms of speed and ease of implementation. In some cases existing algorithms were refined; for others, new techniques were developed that add to the present database of fast adaptive geometric algorithms. What emerges is a collection of techniques that is successful at merging modern tools developed in the analysis of algorithms with those of classical geometry.
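
As an example of the flavor of algorithm surveyed, Andrew's monotone-chain convex hull runs in O(n log n) after sorting. This is a textbook method, not necessarily the thesis's own variant:

```python
def convex_hull(points):
    """Andrew's monotone-chain convex hull: build the lower and upper
    chains separately over the x-sorted points, discarding points that
    would create a clockwise (non-left) turn."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # Concatenate chains, dropping each chain's duplicated endpoint.
    return lower[:-1] + upper[:-1]
```

Problems such as the minimum encasing rectangle and the diameter of a point set are typically solved on top of a hull computation like this one.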

  5. All about alignment

    CERN Multimedia

    2006-01-01

    The ALICE absorbers, iron wall and superstructure have been installed with great precision. The ALICE front absorber, positioned in the centre of the detector, has been installed and aligned. Weighing more than 400 tonnes, the ALICE absorbers and the surrounding support structures have been installed and aligned with a precision of 1-2 mm, hardly an easy task but a very important one. The ALICE absorbers are made of three parts: the front absorber, a 35-tonne cone-shaped structure, and two small-angle absorbers, long straight cylinder sections weighing 18 and 40 tonnes. The three pieces lined up have a total length of about 17 m. In addition to these, ALICE technicians have installed a 300-tonne iron filter wall made of blocks that fit together like large Lego pieces and a surrounding metal support structure to hold the tracking and trigger chambers. The absorbers house the vacuum chamber and are also the reference surface for the positioning of the tracking and trigger chambers. For this reason, the ab...

  6. Long Read Alignment with Parallel MapReduce Cloud Platform

    Directory of Open Access Journals (Sweden)

    Ahmed Abdulhakim Al-Absi

    2015-01-01

    Genomic sequence alignment is an important technique to decode genome sequences in bioinformatics. Next-Generation Sequencing technologies produce genomic data of longer reads. Cloud platforms are adopted to address the problems arising from storage and analysis of large genomic data. Existing gene sequencing tools for cloud platforms predominantly consider short read gene sequences and adopt the Hadoop MapReduce framework for computation. However, serial execution of the map and reduce phases is a problem in such systems. Therefore, in this paper, we introduce the Burrows-Wheeler Aligner’s Smith-Waterman Alignment on Parallel MapReduce (BWASW-PMR) cloud platform for long sequence alignment. The proposed cloud platform adopts the widely accepted and accurate BWA-SW algorithm for long sequence alignment. A custom MapReduce platform is developed to overcome the drawbacks of the Hadoop framework. A parallel execution strategy for the MapReduce phases and optimization of the Smith-Waterman algorithm are considered. Performance evaluation results exhibit an average speed-up of 6.7 for BWASW-PMR compared with the state-of-the-art Bwasw-Cloud. An average reduction of 30% in the map phase makespan is reported across all experiments comparing BWASW-PMR with Bwasw-Cloud. Optimization of Smith-Waterman results in reducing the execution time by 91.8%. The experimental study proves the efficiency of BWASW-PMR for aligning long genomic sequences on cloud platforms.
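
The Smith-Waterman kernel that the map tasks execute can be sketched as follows (scoring values are illustrative; BWA-SW couples this recursion with heuristics over a BWT index):

```python
def sw_score(a, b, match=2, mismatch=-1, gap=-2):
    """Smith-Waterman local alignment score.

    Like Needleman-Wunsch, but cells are clamped at 0 so an alignment
    may start and end anywhere; the answer is the best cell seen.
    """
    m, n = len(a), len(b)
    H = [[0] * (n + 1) for _ in range(m + 1)]
    best = 0
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,
                          H[i - 1][j - 1] + s,
                          H[i - 1][j] + gap,
                          H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best
```

Each (read, reference-region) pair is an independent instance of this recursion, which is why the workload parallelizes naturally across map tasks.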

  7. Inner detector alignment and top-quark mass measurement with the ATLAS detector

    CERN Document Server

    Moles-Valls, Regina

    This thesis is divided into two parts: one on the alignment of the ATLAS Inner Detector tracking system, the other on the measurement of the top-quark mass. Both topics are connected by the Global χ² fitting method. In order to measure the properties of particles with high accuracy, the ID detector is composed of devices with high intrinsic resolution. If the positions of the modules in the detector are known with worse precision than their intrinsic resolution, this may distort the reconstructed trajectories of the particles or at least degrade the tracking resolution. The alignment procedure is responsible for determining the location of each module with high precision, thereby avoiding any bias in the physics results. During the commissioning of the detector, different alignment exercises were performed to prepare the Global χ² algorithm (the CSC, the FDR, weak-mode studies, …). At the same time, the ATLAS detector was collecting millions of cosmic rays, which were...

  8. Sequence comparison alignment-free approach based on suffix tree and L-words frequency.

    Science.gov (United States)

    Soares, Inês; Goios, Ana; Amorim, António

    2012-01-01

    The vast majority of methods available for sequence comparison rely on a first sequence alignment step, which requires a number of assumptions on evolutionary history and is sometimes very difficult or impossible to perform due to the abundance of gaps (insertions/deletions). In such cases, an alternative alignment-free method would prove valuable. Our method starts with the computation of a generalized suffix tree of all sequences, which is completed in linear time. Using this tree, the frequency of all possible words with a preset length L (L-words) in each sequence is rapidly calculated. Based on the L-words frequency profile of each sequence, a pairwise standard Euclidean distance is then computed, producing a symmetric genetic distance matrix, which can be used to generate a neighbor joining dendrogram or a multidimensional scaling graph. We present an improvement to word-counting alignment-free approaches for sequence comparison, obtained by determining a single optimal word length and combining suffix tree structures with the word-counting tasks. Our approach is, thus, a fast and simple application that proved to be efficient and powerful when applied to mitochondrial genomes. The algorithm was implemented in the Python language and is freely available on the web.
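
The word-counting and distance steps can be sketched directly. The paper obtains the same counts in linear time from a generalized suffix tree; this illustration uses a simple sliding window instead:

```python
from itertools import product
from math import sqrt

def lword_profile(seq, L=2, alphabet="ACGT"):
    """Frequency of every possible L-word in a sequence, in a fixed
    alphabet order so profiles from different sequences line up."""
    counts = {"".join(w): 0 for w in product(alphabet, repeat=L)}
    total = len(seq) - L + 1
    for i in range(total):
        counts[seq[i:i + L]] += 1
    return [c / total for c in counts.values()]

def lword_distance(a, b, L=2):
    """Pairwise Euclidean distance between L-word frequency profiles."""
    pa, pb = lword_profile(a, L), lword_profile(b, L)
    return sqrt(sum((x - y) ** 2 for x, y in zip(pa, pb)))
```

Applied over all pairs, these distances fill the symmetric matrix from which the neighbor-joining dendrogram is built.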

  9. Sequence Comparison Alignment-Free Approach Based on Suffix Tree and L-Words Frequency

    Directory of Open Access Journals (Sweden)

    Inês Soares

    2012-01-01

    The vast majority of methods available for sequence comparison rely on a first sequence alignment step, which requires a number of assumptions on evolutionary history and is sometimes very difficult or impossible to perform due to the abundance of gaps (insertions/deletions). In such cases, an alternative alignment-free method would prove valuable. Our method starts with the computation of a generalized suffix tree of all sequences, which is completed in linear time. Using this tree, the frequency of all possible words with a preset length L (L-words) in each sequence is rapidly calculated. Based on the L-words frequency profile of each sequence, a pairwise standard Euclidean distance is then computed, producing a symmetric genetic distance matrix, which can be used to generate a neighbor joining dendrogram or a multidimensional scaling graph. We present an improvement to word-counting alignment-free approaches for sequence comparison, obtained by determining a single optimal word length and combining suffix tree structures with the word-counting tasks. Our approach is, thus, a fast and simple application that proved to be efficient and powerful when applied to mitochondrial genomes. The algorithm was implemented in the Python language and is freely available on the web.

  10. SPHINX--an algorithm for taxonomic binning of metagenomic sequences.

    Science.gov (United States)

    Mohammed, Monzoorul Haque; Ghosh, Tarini Shankar; Singh, Nitin Kumar; Mande, Sharmila S

    2011-01-01

    Compared with composition-based binning algorithms, the binning accuracy and specificity of alignment-based binning algorithms is significantly higher. However, being alignment-based, the latter class of algorithms require enormous amount of time and computing resources for binning huge metagenomic datasets. The motivation was to develop a binning approach that can analyze metagenomic datasets as rapidly as composition-based approaches, but nevertheless has the accuracy and specificity of alignment-based algorithms. This article describes a hybrid binning approach (SPHINX) that achieves high binning efficiency by utilizing the principles of both 'composition'- and 'alignment'-based binning algorithms. Validation results with simulated sequence datasets indicate that SPHINX is able to analyze metagenomic sequences as rapidly as composition-based algorithms. Furthermore, the binning efficiency (in terms of accuracy and specificity of assignments) of SPHINX is observed to be comparable with results obtained using alignment-based algorithms. A web server for the SPHINX algorithm is available at http://metagenomics.atc.tcs.com/SPHINX/.

  11. The FOLDALIGN web server for pairwise structural RNA alignment and mutual motif search

    DEFF Research Database (Denmark)

    Havgaard, Jakob Hull; Lyngsø, Rune B.; Gorodkin, Jan

    2005-01-01

    FOLDALIGN is a Sankoff-based algorithm for making structural alignments of RNA sequences. Here, we present a web server for making pairwise alignments between two RNA sequences, using the recently updated version of FOLDALIGN. The server can be used to scan two sequences for a common structural RNA motif of limited size, or the entire sequences can be aligned locally or globally. The web server offers a graphical interface, which makes it simple to make alignments and manually browse the results. The web server can be accessed at http://foldalign.kvl.dk

  12. A direct method for computing extreme value (Gumbel) parameters for gapped biological sequence alignments.

    Science.gov (United States)

    Quinn, Terrance; Sinkala, Zachariah

    2014-01-01

    We develop a general method for computing extreme value distribution (Gumbel, 1958) parameters for gapped alignments. Our approach uses mixture distribution theory to obtain associated BLOSUM matrices for gapped alignments, which in turn are used for determining significance of gapped alignment scores for pairs of biological sequences. We compare our results with parameters already obtained in the literature.
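    The significance calculation that such Gumbel parameters feed into is the standard Karlin-Altschul formula: for sequences of lengths m and n, the p-value of an alignment score x is P(S >= x) = 1 - exp(-K*m*n*exp(-lambda*x)). The snippet below sketches that final step, assuming K and lambda have already been estimated (the numeric values in the test are illustrative placeholders, not parameters from this paper).

```python
import math

def gapped_alignment_pvalue(score, m, n, K, lam):
    """P-value of an alignment score under the extreme value (Gumbel)
    distribution: P(S >= x) = 1 - exp(-K * m * n * exp(-lambda * x))."""
    e_value = K * m * n * math.exp(-lam * score)   # expected chance hits
    return 1.0 - math.exp(-e_value)
```

    The interesting part of methods like the one above is estimating K and lambda for *gapped* scoring schemes, where no closed form exists; once estimated, evaluating significance is this one-liner.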

  13. Tests of Alignment among Assessment, Standards, and Instruction Using Generalized Linear Model Regression

    Science.gov (United States)

    Fulmer, Gavin W.; Polikoff, Morgan S.

    2014-01-01

    An essential component in school accountability efforts is for assessments to be well-aligned with the standards or curriculum they are intended to measure. However, relatively little prior research has explored methods to determine statistical significance of alignment or misalignment. This study explores analyses of alignment as a special case…

  14. Mask alignment system for semiconductor processing

    Science.gov (United States)

    Webb, Aaron P.; Carlson, Charles T.; Weaver, William T.; Grant, Christopher N.

    2017-02-14

    A mask alignment system for providing precise and repeatable alignment between ion implantation masks and workpieces. The system includes a mask frame having a plurality of ion implantation masks loosely connected thereto. The mask frame is provided with a plurality of frame alignment cavities, and each mask is provided with a plurality of mask alignment cavities. The system further includes a platen for holding workpieces. The platen may be provided with a plurality of mask alignment pins and frame alignment pins configured to engage the mask alignment cavities and frame alignment cavities, respectively. The mask frame can be lowered onto the platen, with the frame alignment cavities moving into registration with the frame alignment pins to provide rough alignment between the masks and workpieces. The mask alignment cavities are then moved into registration with the mask alignment pins, thereby shifting each individual mask into precise alignment with a respective workpiece.

  15. CAMPways: constrained alignment framework for the comparative analysis of a pair of metabolic pathways.

    Science.gov (United States)

    Abaka, Gamze; Bıyıkoğlu, Türker; Erten, Cesim

    2013-07-01

    Given a pair of metabolic pathways, an alignment of the pathways corresponds to a mapping between similar substructures of the pair. Successful alignments may provide useful applications in phylogenetic tree reconstruction, drug design and overall may enhance our understanding of cellular metabolism. We consider the problem of providing one-to-many alignments of reactions in a pair of metabolic pathways. We first provide a constrained alignment framework applicable to the problem. We show that the constrained alignment problem even in a primitive setting is computationally intractable, which justifies efforts for designing efficient heuristics. We present our Constrained Alignment of Metabolic Pathways (CAMPways) algorithm designed for this purpose. Through extensive experiments involving a large pathway database, we demonstrate that when compared with a state-of-the-art alternative, the CAMPways algorithm provides better alignment results on metabolic networks as far as measures based on same-pathway inclusion and biochemical significance are concerned. The execution speed of our algorithm constitutes yet another important improvement over alternative algorithms. Open source codes, executable binary, useful scripts, all the experimental data and the results are freely available as part of the Supplementary Material at http://code.google.com/p/campways/. Supplementary data are available at Bioinformatics online.
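    To make the one-to-many alignment problem concrete: each reaction in one pathway may be mapped to several similar reactions in the other. The sketch below is a naive greedy baseline under an assumed precomputed reaction-similarity score; it is emphatically not the CAMPways heuristic, which additionally enforces topological constraints between the pathways.

```python
def greedy_one_to_many(similarity, threshold=0.5):
    """Greedily build a one-to-many mapping from reactions of pathway A to
    reactions of pathway B. `similarity` maps (reaction_a, reaction_b) pairs
    to a score in [0, 1]; pairs scoring below `threshold` are left unmapped."""
    mapping = {}
    # Visit candidate pairs from most to least similar.
    for (a, b), s in sorted(similarity.items(), key=lambda kv: -kv[1]):
        if s >= threshold:
            mapping.setdefault(a, []).append(b)
    return mapping
```

    Because the constrained version of the problem is computationally intractable (as the abstract notes), any practical algorithm trades optimality for speed; a greedy pass like this is the simplest point in that trade-off space.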

  16. MaxAlign: maximizing usable data in an alignment

    DEFF Research Database (Denmark)

    Oliveira, Rodrigo Gouveia; Sackett, Peter Wad; Pedersen, Anders Gorm

    2007-01-01

    Align. In this paper we also introduce a new simple measure of tree similarity, Normalized Symmetric Similarity (NSS) that we consider useful for comparing tree topologies. CONCLUSION: We demonstrate how MaxAlign is helpful in detecting misaligned or defective sequences without requiring manual inspection. We also...

  17. Reverse alignment "mirror image" visualization as a laparoscopic training tool improves task performance.

    Science.gov (United States)

    Dunnican, Ward J; Singh, T Paul; Ata, Ashar; Bendana, Emma E; Conlee, Thomas D; Dolce, Charles J; Ramakrishnan, Rakesh

    2010-06-01

    Reverse alignment (mirror image) visualization is a disconcerting situation occasionally faced during laparoscopic operations. This occurs when the camera faces back at the surgeon in the opposite direction from which the surgeon's body and instruments are facing. Most surgeons will attempt to optimize trocar and camera placement to avoid this situation. The authors' objective was to determine whether the intentional use of reverse alignment visualization during laparoscopic training would improve performance. A standard box trainer was configured for reverse alignment, and 34 medical students and junior surgical residents were randomized to train with either forward alignment (DIRECT) or reverse alignment (MIRROR) visualization. Enrollees were tested on both modalities before and after a 4-week structured training program specific to their modality. Student's t test was used to determine differences in task performance between the 2 groups. Twenty-one participants completed the study (10 DIRECT, 11 MIRROR). There were no significant differences in performance time between DIRECT or MIRROR participants during forward or reverse alignment initial testing. At final testing, DIRECT participants had improved times only in forward alignment performance; they demonstrated no significant improvement in reverse alignment performance. MIRROR participants had significant time improvement in both forward and reverse alignment performance at final testing. Reverse alignment imaging for laparoscopic training improves task performance for both reverse alignment and forward alignment tasks. This may be translated into improved performance in the operating room when faced with reverse alignment situations. Minimal lab training can account for drastic adaptation to this environment.
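    The group comparison reported above uses Student's t test on task-completion times. As a reminder of what that computes, here is a minimal equal-variance two-sample t statistic (a generic sketch with synthetic data, not the authors' analysis code; a full test would also need the t distribution's CDF for a p-value):

```python
import math
from statistics import mean, variance

def two_sample_t(x, y):
    """Equal-variance two-sample Student's t statistic and degrees of freedom."""
    nx, ny = len(x), len(y)
    # Pooled sample variance.
    sp2 = ((nx - 1) * variance(x) + (ny - 1) * variance(y)) / (nx + ny - 2)
    t = (mean(x) - mean(y)) / math.sqrt(sp2 * (1 / nx + 1 / ny))
    return t, nx + ny - 2
```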

  18. Computer vision applications for coronagraphic optical alignment and image processing.

    Science.gov (United States)

    Savransky, Dmitry; Thomas, Sandrine J; Poyneer, Lisa A; Macintosh, Bruce A

    2013-05-10

    Modern coronagraphic systems require very precise alignment between optical components and can benefit greatly from automated image processing. We discuss three techniques commonly employed in the fields of computer vision and image analysis as applied to the Gemini Planet Imager, a new facility instrument for the Gemini South Observatory. We describe how feature extraction and clustering methods can be used to aid in automated system alignment tasks, and also present a search algorithm for finding regular features in science images used for calibration and data processing. Along with discussions of each technique, we present our specific implementation and show results of each one in operation.
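    A recurring primitive in such alignment tasks is grouping detected feature coordinates (e.g. spot centroids) into clusters. The sketch below is a generic proximity-based clustering pass (single-linkage with a distance cutoff), offered only as an illustration of the clustering step; it is not the Gemini Planet Imager pipeline, and the `eps` cutoff is an assumed parameter.

```python
def cluster_points(points, eps=2.0):
    """Group 2-D feature coordinates into clusters: two points end up in the
    same cluster if they are within `eps` of each other, transitively."""
    clusters = []
    for p in points:
        # Find every existing cluster that this point touches.
        hits = [c for c in clusters
                if any((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2 <= eps ** 2
                       for q in c)]
        merged = [p]
        for c in hits:            # merge all touched clusters into one
            merged.extend(c)
            clusters.remove(c)
        clusters.append(merged)
    return clusters
```

    Cluster centroids computed from such groups can then be matched against the expected positions of alignment fiducials to solve for offsets.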

  19. CMS Muon Alignment: System Description and first results

    CERN Document Server

    Sobron, M

    2008-01-01

    The CMS detector has been instrumented with a precise and complex opto-mechanical alignment subsystem that provides a common reference frame between the Tracker and Muon detection systems by means of a net of laser beams. The system allows continuous and accurate monitoring of the muon chamber positions with respect to the Tracker body. Preliminary results of operation during the test of the CMS 4T solenoid magnet, performed in 2006, are presented. These measurements complement the information provided by the use of survey techniques and the results of alignment algorithms based on muon tracks crossing the detector.

  20. A Novel Pairwise Comparison-Based Method to Determine Radiation Dose Reduction Potentials of Iterative Reconstruction Algorithms, Exemplified Through Circle of Willis Computed Tomography Angiography.

    Science.gov (United States)

    Ellmann, Stephan; Kammerer, Ferdinand; Brand, Michael; Allmendinger, Thomas; May, Matthias S; Uder, Michael; Lell, Michael M; Kramer, Manuel

    2016-05-01

    The aim of this study was to determine the dose reduction potential of iterative reconstruction (IR) algorithms in computed tomography angiography (CTA) of the circle of Willis using a novel method of evaluating the quality of radiation dose-reduced images. This study relied on ReconCT, proprietary reconstruction software that allows simulating CT scans acquired with reduced radiation dose based on the raw data of true scans. To evaluate the performance of ReconCT in this regard, a phantom study was performed to compare the image noise of true and simulated scans within simulated vessels of a head phantom. Following that, 10 patients scheduled for CTA of the circle of Willis were scanned according to our institute's standard protocol (100 kV, 145 reference mAs). Subsequently, CTA images of these patients were reconstructed either with full-dose weighted filtered back projection or with radiation dose reductions down to 10% of the full-dose level and Sinogram-Affirmed Iterative Reconstruction (SAFIRE) at either strength 3 or 5. Images were marked with arrows pointing at vessels of different sizes, and image pairs were presented to observers. Five readers assessed image quality with 2-alternative forced choice comparisons. In the phantom study, no significant differences were observed between the noise levels of simulated and true scans in filtered back projection, SAFIRE 3, and SAFIRE 5 reconstructions. The dose reduction potential for patient scans showed a strong dependence on IR strength as well as on the size of the vessel of interest. Thus, the potential radiation dose reductions ranged from 84.4% for the evaluation of great vessels reconstructed with SAFIRE 5 to 40.9% for the evaluation of small vessels reconstructed with SAFIRE 3. This study provides a novel image quality evaluation method based on 2-alternative forced choice comparisons. In CTA of the circle of Willis, higher IR strengths and greater vessel sizes allowed higher degrees of radiation dose
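    The 2-alternative forced choice design above reduces each trial to a binary preference between a dose-reduced image and its full-dose counterpart. One simple way to turn such tallies into a dose-reduction figure is to take, per simulated dose level, the fraction of trials in which the reduced image was judged at least as good, and report the lowest dose level still meeting a preset criterion. The sketch below illustrates that aggregation step only (the function, the data layout, and the 50% criterion are illustrative assumptions, not the study's analysis):

```python
def max_dose_reduction(choices, criterion=0.5):
    """choices: dict mapping dose fraction (e.g. 0.5 = half dose) to a list of
    booleans, True meaning the reduced-dose image was judged at least as good
    in a 2-AFC trial. Returns the lowest dose fraction whose preference rate
    still meets `criterion`, i.e. the largest acceptable dose reduction."""
    acceptable = [dose for dose, picks in choices.items()
                  if sum(picks) / len(picks) >= criterion]
    return min(acceptable) if acceptable else None
```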