WorldWideScience

Sample records for length em algorithm

  1. The Orthogonally Partitioned EM Algorithm: Extending the EM Algorithm for Algorithmic Stability and Bias Correction Due to Imperfect Data.

    Science.gov (United States)

    Regier, Michael D; Moodie, Erica E M

    2016-05-01

    We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when a standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite-sample properties of the proposed extension when there are missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm accessible to a wider and more general audience.
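
The record describes EM only at a high level; as a concrete reference point, here is a minimal EM iteration for a two-component 1-D Gaussian mixture (a standard textbook instance, not the paper's partitioned model):

```python
import numpy as np

def em_gaussian_mixture(x, n_iter=200):
    """EM for a two-component 1-D Gaussian mixture (illustrative sketch only)."""
    # Deterministic initialization: spread the two means across the data range
    w = np.array([0.5, 0.5])
    mu = np.array([x.min(), x.max()])
    var = np.array([x.var(), x.var()])
    for _ in range(n_iter):
        # E-step: responsibility of each component for each observation
        pdf = np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        resp = w * pdf
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: weighted maximum-likelihood re-estimates
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

# Two well-separated components; EM should recover means near 0 and 5
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(5.0, 1.0, 500)])
w, mu, var = em_gaussian_mixture(x)
```

The paper's extension partitions such a problem into smaller self-contained EM runs; the sketch above is only the classical single-block iteration it builds on.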

  2. A study of reconstruction artifacts in cone beam tomography using filtered backprojection and iterative EM algorithms

    International Nuclear Information System (INIS)

    Zeng, G.L.; Gullberg, G.T.

    1990-01-01

    Reconstruction artifacts in cone beam tomography are studied for filtered backprojection (Feldkamp) and iterative EM algorithms. The filtered backprojection algorithm uses a voxel-driven, interpolated backprojection to reconstruct the cone beam data, whereas the iterative EM algorithm performs ray-driven projection and backprojection operations at each iteration. Two weighting schemes for the projection and backprojection operations in the EM algorithm are studied. One weights each voxel by the length of the ray through the voxel; the other equates the value of a voxel to the functional value of the midpoint of the line intersecting the voxel, obtained by interpolating between eight neighboring voxels. Cone beam reconstruction artifacts such as rings, bright vertical extremities, and slice-to-slice crosstalk are not found with parallel beam and fan beam geometries.
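
The first weighting scheme (each voxel weighted by the length of the ray through it) can be sketched in 2-D. The following is a simplified, assumption-laden illustration with unit-sized pixels and a single ray, not the authors' 3-D implementation:

```python
import numpy as np

def ray_pixel_lengths(p0, p1, nx, ny):
    """Length of a ray segment inside each pixel of an nx-by-ny unit grid.

    Siddon-style traversal: collect the parametric positions where the segment
    crosses grid lines; each interval between crossings lies in one pixel.
    Returns a dict {(ix, iy): length}. Simplified 2-D illustration only.
    """
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    d = p1 - p0
    ts = [0.0, 1.0]
    # Parametric crossing values with vertical (axis 0) and horizontal (axis 1) lines
    for axis, n in ((0, nx), (1, ny)):
        if d[axis] != 0:
            for g in range(n + 1):
                t = (g - p0[axis]) / d[axis]
                if 0.0 < t < 1.0:
                    ts.append(t)
    ts = sorted(set(ts))
    seg_len = np.hypot(*d)
    lengths = {}
    for a, b in zip(ts[:-1], ts[1:]):
        mid = p0 + 0.5 * (a + b) * d          # midpoint identifies the pixel
        if 0.0 <= mid[0] < nx and 0.0 <= mid[1] < ny:
            key = (int(mid[0]), int(mid[1]))
            lengths[key] = lengths.get(key, 0.0) + (b - a) * seg_len
    return lengths

# Diagonal ray across a 2x2 grid: the in-grid length is the full diagonal
weights = ray_pixel_lengths((0, 0), (2, 2), 2, 2)
```

In the EM projector these per-voxel lengths play the role of the system-matrix weights for one ray.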

  3. Application of the EM algorithm to radiographic images.

    Science.gov (United States)

    Brailean, J C; Little, D; Giger, M L; Chen, C T; Sullivan, B J

    1992-01-01

    The expectation maximization (EM) algorithm has received considerable attention in the area of positron emission tomography (PET) as a restoration and reconstruction technique. In this paper, the restoration capabilities of the EM algorithm when applied to radiographic images are investigated. This application does not involve reconstruction. The performance of the EM algorithm is quantitatively evaluated using a "perceived" signal-to-noise ratio (SNR) as the image quality metric. This perceived SNR is based on statistical decision theory and includes both the observer's visual response function and a noise component internal to the eye-brain system. For a variety of processing parameters, the relative SNR (ratio of the processed SNR to the original SNR) is calculated and used as a metric to compare quantitatively the effects of the EM algorithm with two other image enhancement techniques: global contrast enhancement (windowing) and unsharp mask filtering. The results suggest that the EM algorithm's performance is superior to unsharp mask filtering and global contrast enhancement for radiographic images containing objects smaller than 4 mm.

  4. High-speed computation of the EM algorithm for PET image reconstruction

    International Nuclear Information System (INIS)

    Rajan, K.; Patnaik, L.M.; Ramakrishna, J.

    1994-01-01

    PET image reconstruction based on the EM algorithm has several attractive advantages over conventional convolution backprojection algorithms. However, two major drawbacks have impeded the routine use of the EM algorithm: the long computation time due to slow convergence, and the large memory required for storage of the image, the projection data, and the probability matrix. In this study, the authors attempt to solve these two problems by parallelizing the EM algorithm on a multiprocessor system. The authors have implemented an extended hypercube (EH) architecture for high-speed computation of the EM algorithm, using commercially available fast floating-point digital signal processor (DSP) chips as the processing elements (PEs). The authors discuss and compare the performance of the EM algorithm on a 386/387 machine, a CD 4360 mainframe, and the EH system. The results show that the computational speed of an EH system using DSP chips as PEs executing the EM image reconstruction algorithm is about 130 times better than that of the CD 4360 mainframe. The EH topology is expandable to a larger number of PEs.
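
The per-iteration cost the authors parallelize comes from the classic ML-EM update (forward projection, ratio with the measured counts, backprojection). A toy-scale sketch with a made-up 3x3 system matrix:

```python
import numpy as np

def ml_em(A, y, n_iter=5000):
    """Classic ML-EM update for emission tomography (toy-scale sketch).

    A: system (probability) matrix, shape (n_rays, n_voxels)
    y: measured counts per ray, shape (n_rays,)
    Each iteration forward-projects, compares with the data, and
    backprojects the ratio, normalized by the sensitivity image A^T 1.
    """
    x = np.ones(A.shape[1])              # uniform nonnegative start
    sens = A.sum(axis=0)
    for _ in range(n_iter):
        proj = A @ x                     # forward projection
        ratio = y / np.maximum(proj, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Tiny consistent problem: with noise-free data, ML-EM approaches the truth
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([2.0, 1.0, 3.0])
y = A @ x_true
x_hat = ml_em(A, y)
```

The slow convergence mentioned in the abstract is visible here too: many multiplicative updates are needed, which is why parallel hardware (or the ordered-subsets variants in later records) matters at clinical scale.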

  5. A Trust Region Aggressive Space Mapping Algorithm for EM

    DEFF Research Database (Denmark)

    Bakr, M.; Bandler, J. W.; Biernacki, R.

    1998-01-01

    A robust new algorithm for electromagnetic (EM) optimization of microwave circuits is presented. The algorithm (TRASM) integrates a trust region methodology with the aggressive space mapping (ASM). The trust region ensures that each iteration results in improved alignment between the coarse....... This suggested step exploits all the available EM simulations for improving the uniqueness of parameter extraction. The new algorithm was successfully used to design a number of microwave circuits. Examples include the EM optimization of a double-folded stub filter and of a high-temperature superconducting (HTS...

  6. Linear array implementation of the EM algorithm for PET image reconstruction

    International Nuclear Information System (INIS)

    Rajan, K.; Patnaik, L.M.; Ramakrishna, J.

    1995-01-01

    PET image reconstruction based on the EM algorithm has several attractive advantages over conventional convolution backprojection algorithms. However, PET image reconstruction based on the EM algorithm is computationally burdensome for today's single-processor systems. In addition, a large memory is required for storage of the image, the projection data, and the probability matrix. Since the computations are easily divided into tasks executable in parallel, multiprocessor configurations are the ideal choice for fast execution of the EM algorithms. In this study, the authors attempt to overcome these two problems by parallelizing the EM algorithm on a multiprocessor system. The parallel EM algorithm has been implemented on a linear array topology using commercially available fast floating-point digital signal processor (DSP) chips as the processing elements (PEs). The performance of the EM algorithm on a 386/387 machine, an IBM 6000 RISC workstation, and the linear array system is discussed and compared. The results show that the computational speed of a linear array using 8 DSP chips as PEs executing the EM image reconstruction algorithm is about 15.5 times better than that of the IBM 6000 RISC workstation. The novelty of the scheme is its simplicity. The linear array topology is expandable to a larger number of PEs. The architecture is not dependent on the DSP chip chosen, and substitution of the latest DSP chip is straightforward and could yield better speed performance.

  7. Unsupervised Cryo-EM Data Clustering through Adaptively Constrained K-Means Algorithm.

    Science.gov (United States)

    Xu, Yaofang; Wu, Jiayi; Yin, Chang-Cheng; Mao, Youdong

    2016-01-01

    In single-particle cryo-electron microscopy (cryo-EM), the K-means clustering algorithm is widely used in unsupervised 2D classification of projection images of biological macromolecules. 3D ab initio reconstruction requires accurate unsupervised classification in order to separate molecular projections of distinct orientations. Due to background noise in single-particle images and uncertainty of molecular orientations, the traditional K-means clustering algorithm may classify images into wrong classes and produce classes with a large variation in membership. Overcoming these limitations requires further development of clustering algorithms for cryo-EM data analysis. We propose a novel unsupervised data clustering method building upon the traditional K-means algorithm. By introducing an adaptive constraint term in the objective function, our algorithm not only avoids a large variation in class sizes but also produces more accurate data clustering. Applications of this approach to both simulated and experimental cryo-EM data demonstrate that our algorithm is a significantly improved alternative to the traditional K-means algorithm in single-particle cryo-EM analysis.
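
The adaptive size constraint can be imitated, very roughly, by penalizing the running size of each class during assignment. This sketch is our own simplification (a size-penalized assignment rule), not the paper's objective function:

```python
import numpy as np

def size_balanced_kmeans(X, mu_init, alpha=0.1, n_iter=50, seed=0):
    """K-means with a soft penalty on cluster size (illustrative sketch).

    Each point is assigned to argmin_k ||x - mu_k||^2 + alpha * count_k,
    where count_k grows as points join cluster k during the pass; this
    discourages classes with a large variation in membership. Not the
    paper's exact objective, only the general idea of an adaptively
    constrained assignment.
    """
    rng = np.random.default_rng(seed)
    mu = np.array(mu_init, dtype=float)
    k = len(mu)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        counts = np.zeros(k)
        for i in rng.permutation(len(X)):     # visit points in random order
            cost = ((X[i] - mu) ** 2).sum(axis=1) + alpha * counts
            labels[i] = int(np.argmin(cost))
            counts[labels[i]] += 1
        for j in range(k):                    # usual centroid update
            if (labels == j).any():
                mu[j] = X[labels == j].mean(axis=0)
    return labels, mu

# Three synthetic "projection classes" of 100 points each
rng = np.random.default_rng(42)
centers = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0]])
X = np.vstack([c + rng.normal(0.0, 1.0, (100, 2)) for c in centers])
labels, mu = size_balanced_kmeans(X, mu_init=X[[0, 100, 200]])
```

With well-separated classes the penalty leaves assignments essentially unchanged; its effect is to keep class sizes from drifting far apart when clusters overlap, which is the failure mode the abstract describes for noisy cryo-EM images.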

  8. Performance evaluation of the EM algorithm applied to radiographic images

    International Nuclear Information System (INIS)

    Brailean, J.C.; Giger, M.L.; Chen, C.T.; Sullivan, B.J.

    1990-01-01

    In this paper the authors evaluate the expectation maximization (EM) algorithm, both qualitatively and quantitatively, as a technique for enhancing radiographic images. Previous studies have qualitatively shown the usefulness of the EM algorithm but have failed to quantify and compare its performance with those of other image processing techniques. Recent studies by Loo et al., Ishida et al., and Giger et al. have explained improvements in image quality quantitatively in terms of a signal-to-noise ratio (SNR) derived from signal detection theory. In this study, we take a similar approach in quantifying the effect of the EM algorithm on detection of simulated low-contrast square objects superimposed on radiographic mottle. The SNRs of the original and processed images are calculated taking into account both the human visual system response and the screen-film transfer function, as well as a noise component internal to the eye-brain system. The EM algorithm was also implemented on digital screen-film images of test patterns and clinical mammograms.

  9. EM algorithm for one-shot device testing with competing risks under exponential distribution

    International Nuclear Information System (INIS)

    Balakrishnan, N.; So, H.Y.; Ling, M.H.

    2015-01-01

    This paper provides an extension of the work of Balakrishnan and Ling [1] by introducing a competing risks model into a one-shot device testing analysis under an accelerated life test setting. An Expectation Maximization (EM) algorithm is then developed for the estimation of the model parameters. An extensive Monte Carlo simulation study is carried out to assess the performance of the EM algorithm, and the obtained results are compared with the initial estimates obtained by the Inequality Constrained Least Squares (ICLS) method of estimation. Finally, we apply the EM algorithm to a clinical data set, ED01, to illustrate the method of inference developed here. - Highlights: • ALT data analysis for one-shot devices with competing risks is considered. • An EM algorithm is developed for the determination of the MLEs. • Estimates of lifetime under normal operating conditions are presented. • The EM algorithm improves the convergence rate.

  10. A Receiver for Differential Space-Time π/2-Shifted BPSK Modulation Based on Scalar-MSDD and the EM Algorithm

    Directory of Open Access Journals (Sweden)

    Kim Jae H

    2005-01-01

    In this paper, we consider the issue of blind detection of Alamouti-type differential space-time (ST) modulation in static Rayleigh fading channels. We focus our attention on a π/2-shifted BPSK constellation, introducing a novel transformation of the received signal such that this binary ST modulation, which has second-order transmit diversity, is equivalent to QPSK modulation with second-order receive diversity. This equivalent representation allows us to apply a low-complexity detection technique specifically designed for receive diversity, namely, scalar multiple-symbol differential detection (MSDD). To further increase receiver performance, we apply an iterative expectation-maximization (EM) algorithm which performs joint channel estimation and sequence detection. This algorithm uses minimum mean square estimation to obtain channel estimates and the maximum-likelihood principle to detect the transmitted sequence, followed by differential decoding. With receiver complexity proportional to the observation window length, our receiver can achieve the performance of a coherent maximal ratio combining receiver (with differential decoding) in as few as a single EM receiver iteration, provided that the window size of the initial MSDD is sufficiently long. To further demonstrate that the MSDD is a vital part of this receiver setup, we show that an initial ST conventional differential detector would lead to strange convergence behavior in the EM algorithm.

  11. A novel gene network inference algorithm using predictive minimum description length approach.

    Science.gov (United States)

    Chaitankar, Vijender; Ghosh, Preetam; Perkins, Edward J; Gong, Ping; Deng, Youping; Zhang, Chaoyang

    2010-05-28

    Reverse engineering of gene regulatory networks using information theory models has received much attention due to its simplicity, low computational cost, and capability of inferring large networks. One of the major problems with information theory models is determining the threshold that defines the regulatory relationships between genes. The minimum description length (MDL) principle has been implemented to overcome this problem. The description length of the MDL principle is the sum of the model length and the data encoding length. A user-specified fine-tuning parameter is used as a control mechanism between model and data encoding, but it is difficult to find the optimal parameter. In this work, we propose a new inference algorithm which incorporates mutual information (MI), conditional mutual information (CMI) and the predictive minimum description length (PMDL) principle to infer gene regulatory networks from DNA microarray data. In this algorithm, the information-theoretic quantities MI and CMI determine the regulatory relationships between genes, and the PMDL principle attempts to determine the best MI threshold without the need for a user-specified fine-tuning parameter. The performance of the proposed algorithm was evaluated using both synthetic time series data sets and a biological time series data set for the yeast Saccharomyces cerevisiae. The benchmark quantities precision and recall were used as performance measures. The results show that the proposed algorithm produced fewer false edges and significantly improved the precision compared to the existing algorithm. For further analysis, the performance of the algorithms was observed over different sizes of data. We have proposed a new algorithm that implements the PMDL principle for inferring gene regulatory networks from time series DNA microarray data and that eliminates the need for a fine-tuning parameter. The evaluation results obtained from both synthetic and actual biological data sets show that the
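
The MI-thresholding step (though not the PMDL threshold selection itself) can be sketched as follows, with a fixed, hand-picked threshold standing in for the PMDL-derived one:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Plug-in MI estimate between two discretized expression profiles (nats)."""
    c_xy, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = c_xy / c_xy.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float((p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz])).sum())

def infer_edges(expr, threshold):
    """Keep gene pairs whose MI exceeds the threshold (genes are rows)."""
    n = expr.shape[0]
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if mutual_information(expr[i], expr[j]) > threshold]

# Three "genes": g1 drives g2, g3 is independent noise
rng = np.random.default_rng(0)
g1 = rng.normal(size=2000)
g2 = g1 + 0.1 * rng.normal(size=2000)     # strongly dependent on g1
g3 = rng.normal(size=2000)
edges = infer_edges(np.vstack([g1, g2, g3]), threshold=0.5)
```

The paper's contribution is precisely that the threshold need not be hand-picked: PMDL trades model length against data encoding length to select it, and CMI is used to prune indirect relationships; neither refinement is reproduced in this sketch.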

  12. A quantitative performance evaluation of the EM algorithm applied to radiographic images

    International Nuclear Information System (INIS)

    Brailean, J.C.; Sullivan, B.J.; Giger, M.L.; Chen, C.T.

    1991-01-01

    In this paper, the authors quantitatively evaluate the performance of the Expectation Maximization (EM) algorithm as a restoration technique for radiographic images. The perceived signal-to-noise ratios (SNRs) of simple radiographic patterns processed by the EM algorithm are calculated on the basis of a statistical decision theory model that includes both the observer's visual response function and a noise component internal to the eye-brain system. The relative SNR (ratio of the processed SNR to the original SNR) is calculated and used as a metric to quantitatively compare the effects of the EM algorithm with two popular image enhancement techniques: contrast enhancement (windowing) and unsharp mask filtering.
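
A stripped-down version of the relative SNR experiment, with a plain contrast-to-noise ratio standing in for the perceived SNR (the observer model and screen-film transfer function are omitted):

```python
import numpy as np

# Simulate a low-contrast square on a noisy background and measure how a
# processing step changes a simple contrast-to-noise ratio (CNR). This is a
# crude stand-in for the paper's perceived SNR, which additionally models the
# observer's visual response and eye-brain noise.

def cnr(img, obj, bg):
    """(mean object - mean background) / background standard deviation."""
    return (img[obj].mean() - img[bg].mean()) / img[bg].std()

rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, (128, 128))     # radiographic mottle (white noise)
img[60:68, 60:68] += 1.0                   # low-contrast 8x8 square object

obj = np.zeros(img.shape, dtype=bool)
obj[60:68, 60:68] = True
bg = ~obj

# 3x3 box smoothing as the "processing" step under comparison
pad = np.pad(img, 1, mode="edge")
smooth = sum(pad[i:i + 128, j:j + 128] for i in range(3) for j in range(3)) / 9.0

relative_snr = cnr(smooth, obj, bg) / cnr(img, obj, bg)
```

A relative SNR above 1 means the processing improved detectability of the object under this metric; the paper computes the analogous ratio for EM restoration, windowing, and unsharp masking.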

  13. On the use of successive data in the ML-EM algorithm in Positron Emission Tomography

    Energy Technology Data Exchange (ETDEWEB)

    Desmedt, P; Lemahieu, I [University of Ghent, ELIS Department, Sint-Pietersnieuwstraat 41, B-9000 Gent (Belgium)]

    1994-12-31

    The Maximum Likelihood-Expectation Maximization (ML-EM) algorithm is the most popular statistical reconstruction technique for Positron Emission Tomography (PET). The ML-EM algorithm is, however, also renowned for its long reconstruction times. An acceleration technique for this algorithm is studied in this paper. The proposed technique starts the ML-EM algorithm before the measurement process is completed. Since the reconstruction is initiated during the scan of the patient, the time that elapses before a reconstruction becomes available is reduced. Experiments with software phantoms indicate that the quality of the image reconstructed using successive data is comparable to the quality of the reconstruction with the normal ML-EM algorithm. (authors). 7 refs, 3 figs.

  14. A leaf sequencing algorithm to enlarge treatment field length in IMRT

    International Nuclear Information System (INIS)

    Xia Ping; Hwang, Andrew B.; Verhey, Lynn J.

    2002-01-01

    With MLC-based IMRT, the maximum usable field size is often smaller than the maximum field size for conventional treatments. This is due to the constraints of the overtravel distances of MLC leaves and/or jaws. Using a new leaf sequencing algorithm, the usable IMRT field length (perpendicular to the MLC motion) can in most cases be made equal to the full length of the MLC field without violating the upper jaw overtravel limit. For any given intensity pattern, a criterion was proposed to assess whether the pattern can be delivered without violating the jaw position constraints. If the criterion is met, the new algorithm considers the jaw position constraints during segmentation for the step-and-shoot delivery method. The strategy employed by the algorithm is to connect the intensity elements outside the jaw overtravel limits with those inside the jaw overtravel limits. Several methods were used to establish these connections during segmentation by modifying a previously published algorithm (the areal algorithm), including changing the intensity level, alternating the leaf-sequencing direction, or limiting the segment field size. The algorithm was tested with 1000 random intensity patterns with dimensions of 21×27 cm², 800 intensity patterns with higher intensity outside the jaw overtravel limit, and three different types of clinical treatment plans that were undeliverable using a segmentation method from a commercial treatment planning system. The new algorithm achieved a success rate of 100% with these test patterns. For the 1000 random patterns, the new algorithm yields a similar average number of segments, 36.9±2.9, in comparison to 36.6±1.3 when using the areal algorithm. For the 800 patterns with higher intensities outside the jaw overtravel limits, the new algorithm results in a 25% increase in the average number of segments compared to the areal algorithm. However, the areal algorithm fails to create deliverable segments for 90% of these patterns.

  15. The relationship between randomness and power-law distributed move lengths in random walk algorithms

    Science.gov (United States)

    Sakiyama, Tomoko; Gunji, Yukio-Pegio

    2014-05-01

    Recently, we proposed a new random walk algorithm, termed the REV algorithm, in which the agent alters its directional rule using the most recent four random numbers. Here, we examined how an unbounded random number, i.e., "randomness" in the move direction, is important for optimal searching and for power-law distributed step lengths under rule changes. We compared two algorithms: the REV and REV-bounded algorithms. In the REV algorithm, one of the four random numbers used to change the rule is unbounded. In contrast, all four random numbers in the REV-bounded algorithm are bounded. We showed that the REV algorithm exhibits more consistent power-law distributed step lengths and more flexible searching behavior.
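
The REV rule-change mechanism is not fully specified by the abstract, but the distinction it draws (bounded step lengths versus heavy-tailed, power-law distributed ones) can be illustrated by sampling step lengths directly:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Power-law (Pareto) step lengths, P(L > x) ~ x^(-mu) with mu = 1.5: the kind
# of heavy-tailed distribution associated with Levy-like optimal search.
mu = 1.5
levy_steps = (1.0 - rng.random(n)) ** (-1.0 / mu)   # inverse-CDF sampling, L >= 1

# Bounded (uniform) step lengths for comparison
bounded_steps = rng.uniform(1.0, 2.0, n)

# Heavy tail: occasional very long relocations occur only in the power-law walk
frac_long_levy = (levy_steps > 10).mean()
frac_long_bounded = (bounded_steps > 10).mean()
```

For the Pareto case the fraction of steps longer than 10 should be close to 10^(-1.5) ≈ 0.032, while the bounded walker never makes such a relocation; in the paper these long relocations emerge from the rule-change dynamics rather than from direct sampling as here.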

  16. Statistical trajectory of an approximate EM algorithm for probabilistic image processing

    International Nuclear Information System (INIS)

    Tanaka, Kazuyuki; Titterington, D M

    2007-01-01

    We calculate analytically a statistical average of trajectories of an approximate expectation-maximization (EM) algorithm with generalized belief propagation (GBP) and a Gaussian graphical model for the estimation of hyperparameters from observable data in probabilistic image processing. A statistical average with respect to observed data corresponds to a configuration average for the random-field Ising model in spin glass theory. In the present paper, hyperparameters which correspond to interactions and external fields of spin systems are estimated by an approximate EM algorithm. A practical algorithm is described for gray-level image restoration based on a Gaussian graphical model and GBP. The GBP approach corresponds to the cluster variation method in statistical mechanics. Our main result in the present paper is to obtain the statistical average of the trajectory in the approximate EM algorithm by using loopy belief propagation and GBP with respect to degraded images generated from a probability density function with true values of hyperparameters. The statistical average of the trajectory can be expressed in terms of recursion formulas derived from some analytical calculations

  17. Conditional probability distribution associated to the E-M image reconstruction algorithm for neutron stimulated emission tomography

    International Nuclear Information System (INIS)

    Viana, R.S.; Yoriyaz, H.; Santos, A.

    2011-01-01

    The Expectation-Maximization (E-M) algorithm is an iterative computational method for maximum likelihood (M-L) estimation, useful in a variety of incomplete-data problems. Due to its stochastic nature, one of the most relevant applications of the E-M algorithm is the reconstruction of emission tomography images. In this paper, the statistical formulation of the E-M algorithm was applied to the in vivo spectrographic imaging of stable isotopes known as Neutron Stimulated Emission Computed Tomography (NSECT). In the E-M iteration process, the conditional probability distribution plays a very important role in achieving high-quality images. The present work proposes an alternative methodology for generating the conditional probability distribution associated with the E-M reconstruction algorithm, using the Monte Carlo code MCNP5 and applying the reciprocity theorem. (author)

  18. Conditional probability distribution associated to the E-M image reconstruction algorithm for neutron stimulated emission tomography

    Energy Technology Data Exchange (ETDEWEB)

    Viana, R.S.; Yoriyaz, H.; Santos, A., E-mail: rodrigossviana@gmail.com, E-mail: hyoriyaz@ipen.br, E-mail: asantos@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2011-07-01

    The Expectation-Maximization (E-M) algorithm is an iterative computational method for maximum likelihood (M-L) estimation, useful in a variety of incomplete-data problems. Due to its stochastic nature, one of the most relevant applications of the E-M algorithm is the reconstruction of emission tomography images. In this paper, the statistical formulation of the E-M algorithm was applied to the in vivo spectrographic imaging of stable isotopes known as Neutron Stimulated Emission Computed Tomography (NSECT). In the E-M iteration process, the conditional probability distribution plays a very important role in achieving high-quality images. The present work proposes an alternative methodology for generating the conditional probability distribution associated with the E-M reconstruction algorithm, using the Monte Carlo code MCNP5 and applying the reciprocity theorem. (author)

  19. Continuous Analog of Accelerated OS-EM Algorithm for Computed Tomography

    Directory of Open Access Journals (Sweden)

    Kiyoko Tateishi

    2017-01-01

    The maximum-likelihood expectation-maximization (ML-EM) algorithm is used for iterative image reconstruction (IIR) and performs well with respect to the inverse problem as cross-entropy minimization in computed tomography. For accelerating the convergence rate of ML-EM, the ordered-subsets expectation-maximization (OS-EM) algorithm with a power factor is effective. In this paper, we propose a continuous analog of the power-based accelerated OS-EM algorithm. The continuous-time image reconstruction (CIR) system is described by nonlinear differential equations with piecewise smooth vector fields arising from a cyclic switching process. A numerical discretization of the differential equation using the geometric multiplicative first-order expansion of the nonlinear vector field leads to an iterative formula exactly equivalent to the power-based OS-EM. The convergence of nonnegatively constrained solutions to a globally stable equilibrium is guaranteed by the Lyapunov theorem for consistent inverse problems. We illustrate through numerical experiments that the convergence characteristics of the continuous system are superior to those of the discretization methods. We clarify how closely the discretization method must approximate the solution of the CIR in order to design a better IIR method.

  20. Convergence and resolution recovery of block-iterative EM algorithms modeling 3D detector response in SPECT

    International Nuclear Information System (INIS)

    Lalush, D.S.; Tsui, B.M.W.; Karimi, S.S.

    1996-01-01

    We evaluate fast reconstruction algorithms, including ordered-subsets EM (OS-EM) and rescaled block-iterative EM (RBI-EM), in fully 3D SPECT applications on the basis of their convergence and resolution recovery properties as iterations proceed. Using a 3D computer-simulated phantom consisting of 3D Gaussian objects, we simulated projection data that include only the effects of sampling and the detector response of a parallel-hole collimator. Reconstructions were performed using each of the three algorithms (ML-EM, OS-EM, and RBI-EM), modeling the 3D detector response in the projection function. Resolution recovery was evaluated by fitting Gaussians to each of the four objects in the iterated image estimates at selected intervals. Results show that OS-EM and RBI-EM behave identically in this case; their resolution recovery results are virtually indistinguishable. Their resolution behavior appears to be very similar to that of ML-EM, but accelerated by a factor of twenty. For all three algorithms, smaller objects take more iterations to converge. Next, we consider the effect noise has on convergence. For both noise-free and noisy data, we evaluate the log-likelihood function at each subiteration of OS-EM and RBI-EM, and at each iteration of ML-EM. With noisy data, both OS-EM and RBI-EM give results for which the log-likelihood function oscillates. Especially for 180-degree acquisitions, RBI-EM oscillates less than OS-EM. Both OS-EM and RBI-EM appear to converge to solutions, but not to the ML solution. We conclude that both OS-EM and RBI-EM can be effective algorithms for fully 3D SPECT reconstruction. Both recover resolution similarly to ML-EM, only more quickly.

  1. Global Convergence of the EM Algorithm for Unconstrained Latent Variable Models with Categorical Indicators

    Science.gov (United States)

    Weissman, Alexander

    2013-01-01

    Convergence of the expectation-maximization (EM) algorithm to a global optimum of the marginal log likelihood function for unconstrained latent variable models with categorical indicators is presented. The sufficient conditions under which global convergence of the EM algorithm is attainable are provided in an information-theoretic context by…

  2. Noise properties of the EM algorithm. Pt. 1

    International Nuclear Information System (INIS)

    Barrett, H.H.; Wilson, D.W.; Tsui, B.M.W.

    1994-01-01

    The expectation-maximisation (EM) algorithm is an important tool for maximum-likelihood (ML) estimation and image reconstruction, especially in medical imaging. It is a non-linear iterative algorithm that attempts to find the ML estimate of the object that produced a data set. The convergence of the algorithm and other deterministic properties are well established, but relatively little is known about how noise in the data influences noise in the final reconstructed image. In this paper we present a detailed treatment of these statistical properties. The specific application we have in mind is image reconstruction in emission tomography, but the results are valid for any application of the EM algorithm in which the data set can be described by Poisson statistics. We show that the probability density function for the grey level at a pixel in the image is well approximated by a log-normal law. An expression is derived for the variance of the grey level and for pixel-to-pixel covariance. The variance increases rapidly with iteration number at first, but eventually saturates as the ML estimate is approached. Moreover, the variance at any iteration number has a factor proportional to the square of the mean image (though other factors may also depend on the mean image), so a map of the standard deviation resembles the object itself. Thus low-intensity regions of the image tend to have low noise. (author)

  3. On Data and Parameter Estimation Using the Variational Bayesian EM-algorithm for Block-fading Frequency-selective MIMO Channels

    DEFF Research Database (Denmark)

    Christensen, Lars P.B.; Larsen, Jan

    2006-01-01

    A general Variational Bayesian framework for iterative data and parameter estimation for coherent detection is introduced as a generalization of the EM-algorithm. Explicit solutions are given for MIMO channel estimation with Gaussian prior and noise covariance estimation with inverse-Wishart prior....... Simulation of a GSM-like system provides empirical proof that the VBEM-algorithm is able to provide better performance than the EM-algorithm. However, if the posterior distribution is highly peaked, the VBEM-algorithm approaches the EM-algorithm and the gain disappears. The potential gain is therefore...

  4. Application of the Region-Time-Length algorithm to study of ...

    Indian Academy of Sciences (India)

    analyzed using the Region-Time-Length (RTL) algorithm-based statistical technique. The earthquake data used were obtained from the International Seismological Centre. Thereafter, the homogeneity and completeness of the catalogue were improved. After performing iterative tests with different values of the r0 and t0 ...

  5. A computational algorithm addressing how vessel length might depend on vessel diameter

    Science.gov (United States)

    Jing Cai; Shuoxin Zhang; Melvin T. Tyree

    2010-01-01

    The objective of this method paper was to examine a computational algorithm that may reveal how vessel length might depend on vessel diameter within any given stem or species. The computational method requires the assumption that vessels remain approximately constant in diameter over their entire length. When this method is applied to three species or hybrids in the...

  6. Automatic Derivation of Statistical Algorithms: The EM Family and Beyond

    OpenAIRE

    Gray, Alexander G.; Fischer, Bernd; Schumann, Johann; Buntine, Wray

    2003-01-01

    Machine learning has reached a point where many probabilistic methods can be understood as variations, extensions and combinations of a much smaller set of abstract themes, e.g., as different instances of the EM algorithm. This enables the systematic derivation of algorithms customized for different models. Here, we describe the AUTOBAYES system which takes a high-level statistical model specification, uses powerful symbolic techniques based on schema-based program synthesis and computer alge...

  7. The algorithm of random length sequences synthesis for frame synchronization of digital television systems

    Directory of Open Access Journals (Sweden)

    Аndriy V. Sadchenko

    2015-12-01

    Full Text Available Digital television systems need to ensure that all digital signal processing operations are performed simultaneously and consistently. Frame synchronization is dictated by the need to match the phases of the transmitter and receiver so that the start of a frame can be identified. Long binary sequences with good aperiodic autocorrelation functions are often used as frame synchronization signals. Aim: This work is dedicated to the development of an algorithm for synthesizing synchronization sequences of arbitrary length. Materials and Methods: The paper provides a comparative analysis of the known sequences that can currently be used for synchronization, revealing their advantages and disadvantages. This work proposes an algorithm for the synthesis of binary synchronization sequences of arbitrary length with good autocorrelation properties, based on a noise generator with a uniform probability distribution. A semiconductor "white noise" generator is proposed as the source material for the synthesis of binary sequences with the desired properties. Results: A statistical analysis of the initial "white noise" realizations and of the synthesized sequences for frame synchronization of digital television is conducted. The synthesized sequences were compared with known ones, and the results show the benefits of the obtained sequences; the performed simulations confirm these results. Conclusions: Thus, a search algorithm for binary synchronization sequences with desired autocorrelation properties is obtained. Under this algorithm, sequences of any length can be produced, without length limitations. The resulting sync sequences can be used for frame synchronization in modern digital communication systems, increasing their efficiency and noise immunity.
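
The synthesis procedure described above can be sketched as a draw-and-test loop: draw candidate binary sequences from a noise source and keep the one whose peak aperiodic autocorrelation sidelobe is lowest. The sequence length and trial count below are hypothetical; the paper's statistical screening is more elaborate:

```python
import numpy as np

def aperiodic_sidelobe(seq):
    """Peak aperiodic autocorrelation sidelobe of a +/-1 sequence."""
    n = len(seq)
    corr = np.correlate(seq, seq, mode="full")  # lags -(n-1)..(n-1)
    sidelobes = np.delete(corr, n - 1)          # drop the zero-lag peak
    return np.abs(sidelobes).max()

def synthesize(length, n_trials=2000, seed=0):
    """Keep the random 'white noise' candidate with the lowest peak sidelobe."""
    rng = np.random.default_rng(seed)
    best, best_s = None, np.inf
    for _ in range(n_trials):
        cand = rng.choice([-1.0, 1.0], size=length)
        s = aperiodic_sidelobe(cand)
        if s < best_s:
            best, best_s = cand, s
    return best, best_s

seq, sidelobe = synthesize(31)
```

The zero-lag peak equals the sequence length, so a low `sidelobe` relative to it is the "good autocorrelation" property sought for frame detection.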

  8. Optimal data replication: A new approach to optimizing parallel EM algorithms on a mesh-connected multiprocessor for 3D PET image reconstruction

    International Nuclear Information System (INIS)

    Chen, C.M.; Lee, S.Y.

    1995-01-01

    The EM algorithm promises an estimated image with the maximal likelihood for 3D PET image reconstruction. However, due to its long computation time, the EM algorithm has not been widely used in practice. While several parallel implementations of the EM algorithm have been developed to make the EM algorithm feasible, they do not guarantee an optimal parallelization efficiency. In this paper, the authors propose a new parallel EM algorithm which maximizes the performance by optimizing data replication on a mesh-connected message-passing multiprocessor. To optimize data replication, the authors have formally derived the optimal allocation of shared data, group sizes, integration and broadcasting of replicated data as well as the scheduling of shared data accesses. The proposed parallel EM algorithm has been implemented on an iPSC/860 with 16 PEs. The experimental and theoretical results, which are consistent with each other, have shown that the proposed parallel EM algorithm could improve performance substantially over those using unoptimized data replication

  9. A Study on GPU-based Iterative ML-EM Reconstruction Algorithm for Emission Computed Tomographic Imaging Systems

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Woo Seok; Kim, Soo Mee; Park, Min Jae; Lee, Dong Soo; Lee, Jae Sung [Seoul National University, Seoul (Korea, Republic of)]

    2009-10-15

    Maximum likelihood expectation maximization (ML-EM) is a statistical reconstruction algorithm derived from a probabilistic model of the emission and detection processes. Although ML-EM has many advantages in accuracy and utility, its use is limited by the computational burden of iterative processing on a CPU (central processing unit). In this study, we developed a parallel computing technique on a GPU (graphics processing unit) for the ML-EM algorithm. Using a GeForce 9800 GTX+ graphics card and CUDA (compute unified device architecture), the projection and backprojection in the ML-EM algorithm were parallelized with NVIDIA's technology. The computation times per iteration for projection, for the errors between measured and estimated data, and for backprojection were measured; total time included the latency of data transmission between RAM and GPU memory. The total computation times of the CPU- and GPU-based ML-EM with 32 iterations were 3.83 and 0.26 sec, respectively, a roughly 15-fold speed-up on the GPU. When the number of iterations increased to 1024, the CPU- and GPU-based computations took 18 min and 8 sec in total, respectively. This roughly 135-fold improvement reflects a slowdown in the CPU-based computation after a certain number of iterations, whereas the GPU-based computation showed very little variation in time per iteration owing to the use of shared memory. GPU-based parallel computation significantly improved the computing speed and stability of ML-EM, and the developed GPU-based ML-EM algorithm could easily be modified for other imaging geometries

  11. Optimal solution for travelling salesman problem using heuristic shortest path algorithm with imprecise arc length

    Science.gov (United States)

    Bakar, Sumarni Abu; Ibrahim, Milbah

    2017-08-01

    The shortest path problem is a popular problem in graph theory: finding a path of minimum length between a specified pair of vertices. In a network, the weight of each edge is usually represented as a crisp real number, and these weights are then used in deterministic shortest path algorithms. In practice, however, uncertainty is often encountered: the weights of the network's edges may be uncertain and imprecise. In this paper, a modified algorithm combining a heuristic shortest path method with a fuzzy approach is proposed for solving networks with imprecise arc lengths. Interval numbers and triangular fuzzy numbers are considered for representing the arc lengths of the network. The modified algorithm is then applied to a specific example of the Travelling Salesman Problem (TSP), and the total shortest distance it obtains is compared with the total distance obtained by the traditional nearest neighbour heuristic. The results show that the modified algorithm not only produces a sequence of visited cities similar to the traditional approach, but also yields a smaller total distance than that calculated using the traditional approach. Hence, this research could contribute to the enrichment of methods used in solving the TSP.
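
The traditional nearest neighbour heuristic used as the baseline can be sketched as follows, with interval arc lengths reduced to their midpoints. The midpoint ranking and the 4-city distance matrix are illustrative assumptions, not the paper's fuzzy method:

```python
def nearest_neighbour_tour(dist, start=0):
    """Greedy TSP tour: repeatedly visit the closest unvisited city."""
    n = len(dist)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        cur = tour[-1]
        nxt = min(unvisited, key=lambda j: dist[cur][j])
        tour.append(nxt)
        unvisited.remove(nxt)
    tour.append(start)                      # return to the start city
    return tour

def midpoint(interval):
    """Defuzzify an interval arc length by its midpoint (one simple ranking choice)."""
    lo, hi = interval
    return (lo + hi) / 2.0

# Interval-valued distance matrix for 4 cities (hypothetical data)
intervals = [
    [(0, 0), (2, 4), (8, 10), (5, 7)],
    [(2, 4), (0, 0), (3, 5), (6, 8)],
    [(8, 10), (3, 5), (0, 0), (1, 3)],
    [(5, 7), (6, 8), (1, 3), (0, 0)],
]
dist = [[midpoint(c) for c in row] for row in intervals]
tour = nearest_neighbour_tour(dist)
length = sum(dist[a][b] for a, b in zip(tour, tour[1:]))
```

A fuzzy variant would replace `midpoint` with a ranking function over interval or triangular fuzzy numbers while leaving the greedy tour construction unchanged.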

  12. A multicenter evaluation of seven commercial ML-EM algorithms for SPECT image reconstruction using simulation data

    International Nuclear Information System (INIS)

    Matsumoto, Keiichi; Ohnishi, Hideo; Niida, Hideharu; Nishimura, Yoshihiro; Wada, Yasuhiro; Kida, Tetsuo

    2003-01-01

    The maximum likelihood expectation maximization (ML-EM) algorithm has become available as an alternative to filtered back projection in SPECT. Actual physical performance may differ by manufacturer and model because of differences in computational details. The purpose of this study was to investigate the characteristics of seven different ML-EM algorithms using simple simulation data. Seven ML-EM programs were used: Genie (GE), e.soft (Siemens), HARP-III (Hitachi), GMS-5500UI (Toshiba), Pegasys (ADAC), ODYSSEY-FX (Marconi), and Windows-PC (original software). Projection data of a 2-pixel-wide line source in the center of the field of view were simulated without attenuation or scatter. Images were reconstructed with ML-EM for each algorithm, varying the number of iterations from 1 to 45. Image quality was evaluated after reconstruction using the full width at half maximum (FWHM), the full width at tenth maximum (FWTM), and the total counts of the reconstructed images. At the maximum number of iterations, the difference in FWHM values was up to 1.5 pixels, and that in FWTM values no less than 2.0 pixels. The total counts of the reconstructed images in the first few iterations were larger or smaller than the converged value, depending on the initial values. Our results for even the simplest simulation data suggest that each ML-EM implementation produces its own characteristic image. We should keep in mind which algorithm is being used, and its computational details, when physical and clinical usefulness are compared. (author)
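
The FWHM and FWTM figures of merit used in this comparison can be computed from a reconstructed line profile as below. The profile values are hypothetical, and linear interpolation between samples is assumed:

```python
import numpy as np

def width_at_fraction(profile, frac):
    """Width of a single-peaked 1-D profile at frac * max, via linear interpolation."""
    y = np.asarray(profile, dtype=float)
    level = frac * y.max()
    above = np.where(y >= level)[0]
    left, right = float(above[0]), float(above[-1])
    # interpolate the level crossing on each side of the peak
    i = int(left)
    if i > 0:
        left = i - (y[i] - level) / (y[i] - y[i - 1])
    j = int(right)
    if j < len(y) - 1:
        right = j + (y[j] - level) / (y[j] - y[j + 1])
    return right - left

profile = [0, 1, 4, 10, 4, 1, 0]        # blurred image of a narrow line source
fwhm = width_at_fraction(profile, 0.5)  # full width at half maximum
fwtm = width_at_fraction(profile, 0.1)  # full width at tenth maximum
```

Differences of 1.5-2 pixels between implementations, as reported here, would show up directly in these two numbers for the same input profile.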

  13. An algorithm for the design and tuning of RF accelerating structures with variable cell lengths

    Science.gov (United States)

    Lal, Shankar; Pant, K. K.

    2018-05-01

    An algorithm is proposed for the design of a π-mode standing wave buncher structure with variable cell lengths. It employs a two-parameter, multi-step approach to design a structure with the desired resonant frequency and field flatness. The algorithm, along with analytical scaling laws for the design of the RF power coupling slot, makes it possible to accurately design the structure using a freely available electromagnetic code such as SUPERFISH. To compensate for machining errors, a tuning method has been devised to achieve the desired RF parameters; it has been qualified by the successful tuning of a 7-cell buncher to a π-mode frequency of 2856 MHz with the desired field flatness. The algorithm and tuning method have demonstrated the feasibility of developing an S-band accelerating structure with desired RF parameters at a relatively relaxed machining tolerance of ∼25 μm. This paper discusses the algorithm for the design and tuning of an RF accelerating structure with variable cell lengths.

  14. Factor Analysis with EM Algorithm Never Gives Improper Solutions when Sample Covariance and Initial Parameter Matrices Are Proper

    Science.gov (United States)

    Adachi, Kohei

    2013-01-01

    Rubin and Thayer ("Psychometrika," 47:69-76, 1982) proposed the EM algorithm for exploratory and confirmatory maximum likelihood factor analysis. In this paper, we prove the following fact: the EM algorithm always gives a proper solution with positive unique variances and factor correlations with absolute values that do not exceed one,…

  15. Word-length algorithm for language identification of under-resourced languages

    Directory of Open Access Journals (Sweden)

    Ali Selamat

    2016-10-01

    Full Text Available Language identification is widely used in machine learning, text mining, information retrieval, and speech processing. Available techniques for language identification require large amounts of training text that are not available for under-resourced languages, which form the bulk of the world's languages. The primary objective of this study is to propose a lexicon-based algorithm that can perform language identification with minimal training data. Because language identification is often the first step in many natural language processing tasks, it is also necessary to explore techniques that perform it in the shortest possible time; hence, the second objective of this research is to study the effect of the proposed algorithm on the run-time performance of language identification. Precision, recall, and F1 measures were used to determine the effectiveness of the proposed word-length algorithm using datasets drawn from the Universal Declaration of Human Rights in 15 languages. The experimental results show good language-identification accuracy at both the document level and the sentence level on the available dataset. The improved algorithm also showed a significant improvement in run-time performance compared with the spelling-checker approach.
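
A minimal word-length profile classifier in the spirit of this approach might look like the following. The training texts and the L1 profile distance are toy assumptions; the paper's lexicon-based algorithm is more refined:

```python
from collections import Counter

def length_profile(text):
    """Relative frequency of each word length in a text."""
    lengths = [len(w) for w in text.split()]
    counts = Counter(lengths)
    total = len(lengths)
    return {k: v / total for k, v in counts.items()}

def identify(text, profiles):
    """Pick the training language whose length profile is closest (L1 distance)."""
    target = length_profile(text)
    def dist(p):
        keys = set(p) | set(target)
        return sum(abs(p.get(k, 0) - target.get(k, 0)) for k in keys)
    return min(profiles, key=lambda lang: dist(profiles[lang]))

# Toy training data (hypothetical labels; real use needs far more text per language)
profiles = {
    "short-words": length_profile("a an the of to in it is on at"),
    "long-words": length_profile("extraordinary considerations implementation"),
}
guess = identify("the cat sat on the mat", profiles)
```

Because only word-length statistics are stored, the model per language is tiny, which is the appeal for under-resourced languages with little training text.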

  16. Effects of Varying Epoch Lengths, Wear Time Algorithms, and Activity Cut-Points on Estimates of Child Sedentary Behavior and Physical Activity from Accelerometer Data.

    Science.gov (United States)

    Banda, Jorge A; Haydel, K Farish; Davila, Tania; Desai, Manisha; Bryson, Susan; Haskell, William L; Matheson, Donna; Robinson, Thomas N

    2016-01-01

    To examine the effects of accelerometer epoch lengths, wear time (WT) algorithms, and activity cut-points on estimates of WT, sedentary behavior (SB), and physical activity (PA), 268 7-11 year-olds with BMI ≥ 85th percentile for age and sex wore accelerometers on their right hips for 4-7 days. Data were processed and analyzed at epoch lengths of 1, 5, 10, 15, 30, and 60 seconds. For each epoch length, WT minutes/day was determined using three common WT algorithms, and minutes/day and percent time spent in SB, light (LPA), moderate (MPA), and vigorous (VPA) PA were determined using five common activity cut-points. ANOVA tested differences in WT, SB, LPA, MPA, VPA, and MVPA across the different epoch lengths, WT algorithms, and activity cut-points. WT minutes/day varied significantly by epoch length when using the NHANES WT algorithm, but not when using the other two WT algorithms. Minutes/day and percent time spent in SB, LPA, MPA, VPA, and MVPA varied significantly by epoch length for all sets of activity cut-points tested with all three WT algorithms. Adjusting WT algorithms and activity cut-point definitions to match different epoch lengths may introduce significant errors. Estimates of SB and PA from studies that process and analyze data using different epoch lengths, WT algorithms, and/or activity cut-points are not comparable, potentially leading to very different results, interpretations, and conclusions, misleading research and public policy.
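
Why epoch length changes the estimates can be illustrated by re-integrating 1-second counts into longer epochs before applying a per-minute cut-point. The cut-point value and activity burst below are hypothetical:

```python
def reintegrate(counts, factor):
    """Sum 1-s epoch counts into epochs that are `factor` seconds long."""
    return [sum(counts[i:i + factor]) for i in range(0, len(counts), factor)]

def minutes_in_mvpa(counts_per_epoch, epoch_len_s, cutpoint_per_min):
    """Minutes classified at/above a per-minute cut-point, scaled to the epoch length."""
    threshold = cutpoint_per_min * epoch_len_s / 60.0
    n_epochs = sum(1 for c in counts_per_epoch if c >= threshold)
    return n_epochs * epoch_len_s / 60.0

# One minute of 1-s counts: a 20-s activity burst inside an otherwise quiet minute
one_sec = [0] * 20 + [60] * 20 + [0] * 20
cut = 2000                      # hypothetical counts/min MVPA cut-point

mvpa_1s = minutes_in_mvpa(one_sec, 1, cut)
mvpa_60s = minutes_in_mvpa(reintegrate(one_sec, 60), 60, cut)
```

The 1-s analysis credits the 20-s burst as MVPA time, while the 60-s analysis dilutes the burst across the whole minute and records none, which is exactly the kind of non-comparability the study documents.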

  17. An Efficient Forward-Reverse EM Algorithm for Statistical Inference in Stochastic Reaction Networks

    KAUST Repository

    Bayer, Christian

    2016-01-06

    In this work [1], we present an extension of the forward-reverse algorithm by Bayer and Schoenmakers [2] to the context of stochastic reaction networks (SRNs). We then apply this bridge-generation technique to the statistical inference problem of approximating the reaction coefficients based on discretely observed data. To this end, we introduce an efficient two-phase algorithm in which the first phase is deterministic and it is intended to provide a starting point for the second phase which is the Monte Carlo EM Algorithm.

  18. Mean field theory of EM algorithm for Bayesian grey scale image restoration

    International Nuclear Information System (INIS)

    Inoue, Jun-ichi; Tanaka, Kazuyuki

    2003-01-01

    The EM algorithm for the Bayesian grey scale image restoration is investigated in the framework of the mean field theory. Our model system is identical to the infinite range random field Q-Ising model. The maximum marginal likelihood method is applied to the determination of hyper-parameters. We calculate both the data-averaged mean square error between the original image and its maximizer of posterior marginal estimate, and the data-averaged marginal likelihood function exactly. After evaluating the hyper-parameter dependence of the data-averaged marginal likelihood function, we derive the EM algorithm which updates the hyper-parameters to obtain the maximum likelihood estimate analytically. The time evolutions of the hyper-parameters and so-called Q function are obtained. The relation between the speed of convergence of the hyper-parameters and the shape of the Q function is explained from the viewpoint of dynamics

  19. Minimum decoding trellis length and truncation depth of wrap-around Viterbi algorithm for TBCC in mobile WiMAX

    Directory of Open Access Journals (Sweden)

    Liu Yu-Sun

    2011-01-01

    Full Text Available The performance of the wrap-around Viterbi decoding algorithm with finite truncation depth and fixed decoding trellis length is investigated for tail-biting convolutional codes in the mobile WiMAX standard. Upper bounds on the error probabilities induced by finite truncation depth and the uncertainty of the initial state are derived for the AWGN channel. The truncation depth and the decoding trellis length that yield negligible performance loss are obtained for all transmission rates over the Rayleigh channel using computer simulations. The results show that the circular decoding algorithm with an appropriately chosen truncation depth and a decoding trellis just a fraction longer than the original received code words can achieve almost the same performance as the optimal maximum likelihood decoding algorithm in mobile WiMAX. A rule of thumb for the values of the truncation depth and the trellis tail length is also proposed.

  20. Length-Bounded Hybrid CPU/GPU Pattern Matching Algorithm for Deep Packet Inspection

    Directory of Open Access Journals (Sweden)

    Yi-Shan Lin

    2017-01-01

    Full Text Available Since frequent communication between applications takes place in high speed networks, deep packet inspection (DPI plays an important role in the network application awareness. The signature-based network intrusion detection system (NIDS contains a DPI technique that examines the incoming packet payloads by employing a pattern matching algorithm that dominates the overall inspection performance. Existing studies focused on implementing efficient pattern matching algorithms by parallel programming on software platforms because of the advantages of lower cost and higher scalability. Either the central processing unit (CPU or the graphic processing unit (GPU were involved. Our studies focused on designing a pattern matching algorithm based on the cooperation between both CPU and GPU. In this paper, we present an enhanced design for our previous work, a length-bounded hybrid CPU/GPU pattern matching algorithm (LHPMA. In the preliminary experiment, the performance and comparison with the previous work are displayed, and the experimental results show that the LHPMA can achieve not only effective CPU/GPU cooperation but also higher throughput than the previous method.

  1. A Simple FDTD Algorithm for Simulating EM-Wave Propagation in General Dispersive Anisotropic Material

    KAUST Repository

    Al-Jabr, Ahmad Ali; Alsunaidi, Mohammad A.; Ng, Tien Khee; Ooi, Boon S.

    2013-01-01

    In this paper, a finite-difference time-domain (FDTD) algorithm for simulating the propagation of EM waves in anisotropic material is presented. The algorithm is based on the auxiliary differential equation and the general polarization formulation. In anisotropic materials, electric fields are coupled and elements of the permittivity tensor are, in general, multiterm dispersive. The presented algorithm resolves the field coupling using a formulation based on electric polarizations and also offers a simple procedure for the treatment of multiterm dispersion in the FDTD scheme. The algorithm is tested by simulating wave propagation in 1-D magnetized plasma, showing excellent agreement with analytical solutions. Extension of the algorithm to multidimensional structures is straightforward. The presented algorithm is efficient and simple compared to other algorithms found in the literature. © 2012 IEEE.

  3. Application and performance of an ML-EM algorithm in NEXT

    Science.gov (United States)

    Simón, A.; Lerche, C.; Monrabal, F.; Gómez-Cadenas, J. J.; Álvarez, V.; Azevedo, C. D. R.; Benlloch-Rodríguez, J. M.; Borges, F. I. G. M.; Botas, A.; Cárcel, S.; Carrión, J. V.; Cebrián, S.; Conde, C. A. N.; Díaz, J.; Diesburg, M.; Escada, J.; Esteve, R.; Felkai, R.; Fernandes, L. M. P.; Ferrario, P.; Ferreira, A. L.; Freitas, E. D. C.; Goldschmidt, A.; González-Díaz, D.; Gutiérrez, R. M.; Hauptman, J.; Henriques, C. A. O.; Hernandez, A. I.; Hernando Morata, J. A.; Herrero, V.; Jones, B. J. P.; Labarga, L.; Laing, A.; Lebrun, P.; Liubarsky, I.; López-March, N.; Losada, M.; Martín-Albo, J.; Martínez-Lema, G.; Martínez, A.; McDonald, A. D.; Monteiro, C. M. B.; Mora, F. J.; Moutinho, L. M.; Muñoz Vidal, J.; Musti, M.; Nebot-Guinot, M.; Novella, P.; Nygren, D. R.; Palmeiro, B.; Para, A.; Pérez, J.; Querol, M.; Renner, J.; Ripoll, L.; Rodríguez, J.; Rogers, L.; Santos, F. P.; dos Santos, J. M. F.; Sofka, C.; Sorel, M.; Stiegler, T.; Toledo, J. F.; Torrent, J.; Tsamalaidze, Z.; Veloso, J. F. C. A.; Webb, R.; White, J. T.; Yahlali, N.

    2017-08-01

    The goal of the NEXT experiment is the observation of neutrinoless double beta decay in 136Xe using a gaseous xenon TPC with electroluminescent amplification and specialized photodetector arrays for calorimetry and tracking. The NEXT Collaboration is exploring a number of reconstruction algorithms to exploit the full potential of the detector. This paper describes one of them: the Maximum Likelihood Expectation Maximization (ML-EM) method, a generic iterative algorithm to find maximum-likelihood estimates of parameters that has been applied to solve many different types of complex inverse problems. In particular, we discuss a bi-dimensional version of the method in which the photosensor signals integrated over time are used to reconstruct a transverse projection of the event. First results show that, when applied to detector simulation data, the algorithm achieves nearly optimal energy resolution (better than 0.5% FWHM at the Q value of 136Xe) for events distributed over the full active volume of the TPC.

  4. Use of the AIC with the EM algorithm: A demonstration of a probability model selection technique

    Energy Technology Data Exchange (ETDEWEB)

    Glosup, J.G.; Axelrod M.C. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    The problem of discriminating between two potential probability models, a Gaussian distribution and a mixture of Gaussian distributions, is considered. The focus of our interest is a case where the models are potentially non-nested and the parameters of the mixture model are estimated through the EM algorithm. The AIC, which is frequently used as a criterion for discriminating between non-nested models, is modified to work with the EM algorithm and is shown to provide a model selection tool for this situation. A particular problem involving an infinite mixture distribution known as Middleton's Class A model is used to demonstrate the effectiveness and limitations of this method.
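
The model-selection idea can be sketched by fitting both candidates by maximum likelihood (EM for the mixture) and comparing AIC = 2k - 2 ln L. The data are synthetic, and a two-component Gaussian mixture stands in for Middleton's infinite Class A mixture:

```python
import math, random

def loglik_gauss(xs, mu, var):
    return sum(-0.5 * (math.log(2 * math.pi * var) + (x - mu) ** 2 / var) for x in xs)

def fit_gauss(xs):
    """ML fit of a single Gaussian: 2 free parameters."""
    mu = sum(xs) / len(xs)
    var = sum((x - mu) ** 2 for x in xs) / len(xs)
    return loglik_gauss(xs, mu, var), 2

def fit_mixture2(xs, n_iter=200):
    """EM for a 2-component 1-D Gaussian mixture: 5 free parameters."""
    mu, var, w = [min(xs), max(xs)], [1.0, 1.0], [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each point
        resp = []
        for x in xs:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-((x - mu[k]) ** 2) / (2 * var[k])) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: responsibility-weighted updates
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk, 1e-6)
    ll = sum(math.log(sum(w[k] / math.sqrt(2 * math.pi * var[k])
                          * math.exp(-((x - mu[k]) ** 2) / (2 * var[k]))
                          for k in range(2))) for x in xs)
    return ll, 5

def aic(ll, k):
    return 2 * k - 2 * ll

random.seed(1)
xs = [random.gauss(-3, 1) for _ in range(200)] + [random.gauss(3, 1) for _ in range(200)]
aic_1 = aic(*fit_gauss(xs))
aic_2 = aic(*fit_mixture2(xs))
# With two well-separated modes, the mixture should attain the lower AIC
```

The paper's modification addresses the subtlety that the EM-based likelihood is a local maximum and the models are non-nested; the sketch above applies the plain AIC formula.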

  5. Estimation of tool wear length in finish milling using a fuzzy inference algorithm

    Science.gov (United States)

    Ko, Tae Jo; Cho, Dong Woo

    1993-10-01

    The geometric accuracy and surface roughness are mainly affected by the flank wear at the minor cutting edge in finish machining. A fuzzy estimator obtained by a fuzzy inference algorithm with a max-min composition rule to evaluate the minor flank wear length in finish milling is introduced. The features sensitive to minor flank wear are extracted from the dispersion analysis of a time series AR model of the feed directional acceleration of the spindle housing. Linguistic rules for fuzzy estimation are constructed using these features, and then fuzzy inferences are carried out with test data sets under various cutting conditions. The proposed system turns out to be effective for estimating minor flank wear length, and its mean error is less than 12%.

  6. Maximum likelihood estimation and EM algorithm of Copas-like selection model for publication bias correction.

    Science.gov (United States)

    Ning, Jing; Chen, Yong; Piao, Jin

    2017-07-01

    Publication bias occurs when published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into the publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure perform well and avoid the non-convergence problem encountered when maximizing the observed likelihood.

  7. A system for the 3D reconstruction of retracted-septa PET data using the EM algorithm

    International Nuclear Information System (INIS)

    Johnson, C.A.; Yan, Y.; Carson, R.E.; Martino, R.L.; Daube-Witherspoon, M.E.

    1995-01-01

    The authors have implemented the EM reconstruction algorithm for volume acquisition from current generation retracted-septa PET scanners. Although the software was designed for a GE Advance scanner, it is easily adaptable to other 3D scanners. The reconstruction software was written for an Intel iPSC/860 parallel computer with 128 compute nodes. Running on 32 processors, the algorithm requires approximately 55 minutes per iteration to reconstruct a 128 x 128 x 35 image. No projection data compression schemes or other approximations were used in the implementation. Extensive use of EM system matrix (C_ij) symmetries (including the 8-fold in-plane symmetries, 2-fold axial symmetries, and axial parallel-line redundancies) reduces the storage cost by a factor of 188. The parallel algorithm operates on distributed projection data which are decomposed by base-symmetry angles. Symmetry operators copy and index the C_ij chord to the form required for the particular symmetry. The use of asynchronous reads, lookup tables, and optimized image indexing improves computational performance

  8. Non-tables look-up search algorithm for efficient H.264/AVC context-based adaptive variable length coding decoding

    Science.gov (United States)

    Han, Yishi; Luo, Zhixiao; Wang, Jianhua; Min, Zhixuan; Qin, Xinyu; Sun, Yunlong

    2014-09-01

    In general, context-based adaptive variable length coding (CAVLC) decoding in H.264/AVC standard requires frequent access to the unstructured variable length coding tables (VLCTs) and significant memory accesses are consumed. Heavy memory accesses will cause high power consumption and time delays, which are serious problems for applications in portable multimedia devices. We propose a method for high-efficiency CAVLC decoding by using a program instead of all the VLCTs. The decoded codeword from VLCTs can be obtained without any table look-up and memory access. The experimental results show that the proposed algorithm achieves 100% memory access saving and 40% decoding time saving without degrading video quality. Additionally, the proposed algorithm shows a better performance compared with conventional CAVLC decoding, such as table look-up by sequential search, table look-up by binary search, Moon's method, and Kim's method.

  9. Allometric and isometric variations in the Italian Apodemus sylvaticus and Apodemus flavicollis with respect to the conditions of allopatry and sympatry

    Directory of Open Access Journals (Sweden)

    Giovanni Amori

    1986-12-01

    Full Text Available In Italy there are two species of Apodemus (Sylvaemus): Apodemus sylvaticus on the mainland and the main island, and Apodemus flavicollis only on the mainland. The trend of some morphometric characters of the skull (incisive foramen length = FI; interorbital breadth = IO; length of palatal bridge = PP; upper alveolar length = M1-M3) was analyzed and some theoretical models verified for A. sylvaticus. If one considers the sympatric populations of A. sylvaticus and A. flavicollis simultaneously, the characters PP, IO and M1-M3 appear significantly isometric, being directly correlated (P ≤ 0.01), while the FI character is allometric with respect to the previous ones, as expected. If one considers the sympatric populations of each species separately, the scenario is different. For A. sylvaticus only PP and M1-M3 are isometric (P ≤ 0.05). For A. flavicollis only M1-M3 and FI appear to be correlated, although not as significantly as for A. sylvaticus (P ≤ 0.05; one tail). The insular populations of A. sylvaticus do not show significant correlations, except for FI and M1-M3 (P ≤ 0.05). On the contrary, considering all populations, sympatric and allopatric, of A. sylvaticus at the same time, there are significant correlations (P ≤ 0.05) in all combinations of characters, except for those involving IO. We suggest that the isometric relations in sympatric assemblages are confined within a morphological range available to the genus Apodemus. In such a space, the two species are split into two different and innerly homogeneous distributions. We found no evidence to confirm the niche variation hypothesis. On the contrary, the variability, expressed as SD or CVs, appears higher in the sympatric populations than in the allopatric ones for three of the four characters, confirming previous results

  10. Efficient sequential and parallel algorithms for finding edit distance based motifs.

    Science.gov (United States)

    Pal, Soumitra; Xiao, Peng; Rajasekaran, Sanguthevar

    2016-08-18

    Motif search is an important step in extracting meaningful patterns from biological data. The general problem of motif search is intractable and there is a pressing need to develop efficient, exact and approximation algorithms to solve this problem. In this paper, we present several novel, exact, sequential and parallel algorithms for solving the (l,d) Edit-distance-based Motif Search (EMS) problem: given two integers l,d and n biological strings, find all strings of length l that appear in each input string with at most d errors of types substitution, insertion and deletion. One popular technique to solve the problem is to explore, for each input string, the set of all possible l-mers that belong to the d-neighborhood of any substring of the input string, and output those which are common to all input strings. We introduce a novel and provably efficient neighborhood exploration technique. We show that it is enough to consider the candidates in the neighborhood which are at a distance exactly d. We compactly represent these candidate motifs using wildcard characters and efficiently explore them with very few repetitions. Our sequential algorithm uses a trie-based data structure to efficiently store and sort the candidate motifs. Our parallel algorithm, in a multi-core shared memory setting, uses arrays for storing and a novel modification of radix sort for sorting the candidate motifs. Algorithms for EMS are customarily evaluated on several challenging instances such as (8,1), (12,2), (16,3), (20,4), and so on. The best previously known algorithm, EMS1, is sequential and solves instances up to (16,3) in an estimated 3 days. Our sequential algorithms are more than 20 times faster on (16,3). On other hard instances such as (9,2), (11,3), (13,4), our algorithms are much faster. Our parallel algorithm has more than 600% scaling performance while using 16 threads.
Our algorithms have pushed up the state-of-the-art of EMS solvers and we believe that the techniques introduced in
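The EMS criterion above can be illustrated with a brute-force sketch, for exposition only; the paper's trie and radix-sort machinery is what makes realistic instances tractable. The function names and the semi-global dynamic program are illustrative choices, not the authors' code.

```python
from itertools import product

def occurs_within_d(m, s, d):
    # Semi-global edit distance: does motif m occur somewhere inside s with
    # at most d substitutions, insertions or deletions?  Start and end
    # positions in s are free; m must be matched in full.
    prev = [0] * (len(s) + 1)                  # empty prefix of m costs 0
    for i in range(1, len(m) + 1):
        cur = [i] + [0] * len(s)
        for j in range(1, len(s) + 1):
            cur[j] = min(prev[j] + 1,                        # delete m[i-1]
                         cur[j - 1] + 1,                     # insert s[j-1]
                         prev[j - 1] + (m[i - 1] != s[j - 1]))  # sub/match
        prev = cur
    return min(prev) <= d

def ems_motifs(strings, l, d, alphabet="ACGT"):
    # Enumerate all l-mers (feasible only for tiny l) and keep those that
    # occur in every input string with at most d errors.
    return [''.join(p) for p in product(alphabet, repeat=l)
            if all(occurs_within_d(''.join(p), s, d) for s in strings)]
```

For example, `ems_motifs(["ACGT", "TACG"], 3, 0)` reports the single exact common 3-mer `ACG`.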

  11. Four Novel Cellulose Synthase (CESA) Genes from <em>Birch</em> (<em>Betula platyphylla</em> Suk.) Involved in Primary and Secondary Cell Wall Biosynthesis

    Directory of Open Access Journals (Sweden)

    Xuemei Liu

    2012-09-01

    Full Text Available Cellulose synthase (CESA), which is an essential catalyst for the generation of plant cell wall biomass, is mainly encoded by the <em>CesA</em> gene family that contains ten or more members. In this study, four full-length cDNAs encoding CESA were isolated from <em>Betula platyphylla</em> Suk., which is an important timber species, using RT-PCR combined with the RACE method, and were named <em>BplCesA3</em>, <em>−4</em>, <em>−7</em> and <em>−8</em>. These deduced CESAs contained the same typical domains and regions as their <em>Arabidopsis</em> homologs. The cDNA lengths differed among these four genes, as did the locations of the various protein domains inferred from the deduced amino acid sequences, which shared amino acid sequence identities ranging from only 63.8% to 70.5%. Real-time RT-PCR showed that all four <em>BplCesAs</em> were expressed at different levels in diverse tissues. Results indicated that BplCESA8 might be involved in secondary cell wall biosynthesis and floral development. BplCESA3 appeared in a unique expression pattern and was possibly involved in primary cell wall biosynthesis and seed development; it might also be related to homogalacturonan synthesis. BplCESA7 and BplCESA4 may be related to the formation of a cellulose synthase complex and participate mainly in secondary cell wall biosynthesis. The extremely low expression abundance of the four BplCESAs in mature pollen suggested very little involvement of them in mature pollen formation in <em>Betula</em>. The distinct expression patterns of the four <em>BplCesAs</em> suggested they might participate in the development of various tissues and are possibly controlled by distinct mechanisms in <em>Betula</em>.

  12. Length and coverage of inhibitory decision rules

    KAUST Repository

    Alsolami, Fawaz

    2012-01-01

    The authors present algorithms for optimizing inhibitory rules relative to length and coverage. Inhibitory rules have a relation "attribute ≠ value" on the right-hand side. The considered algorithms are based on extensions of dynamic programming. The paper also contains a comparison of the length and coverage of inhibitory rules constructed by a greedy algorithm and by the dynamic programming algorithm. © 2012 Springer-Verlag.
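As a toy illustration of the two optimization criteria, the sketch below computes the length and coverage of an inhibitory rule over a small decision table. The table layout and rule encoding are invented for the example and are not from the paper.

```python
def rule_length(conditions):
    # Length of an inhibitory rule = number of conditions on its left-hand side.
    return len(conditions)

def rule_coverage(table, conditions, forbidden_decision):
    # conditions: dict attribute_index -> required value.
    # A row is covered if it matches every condition and its decision
    # (last column) satisfies the right-hand side "decision != value".
    covered = [row for row in table
               if all(row[a] == v for a, v in conditions.items())
               and row[-1] != forbidden_decision]
    return len(covered)

table = [  # three attributes + decision column
    (0, 1, 0, 'yes'),
    (0, 1, 1, 'no'),
    (1, 0, 1, 'no'),
    (0, 0, 0, 'yes'),
]
rule = {0: 0, 1: 1}   # "if a0 = 0 and a1 = 1 then decision != 'maybe'"
```

The dynamic-programming algorithms in the paper search over all rules realizable from the table, trading length against coverage; this snippet only evaluates one fixed rule.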

  13. A QoS-Based Dynamic Queue Length Scheduling Algorithm in Multiantenna Heterogeneous Systems

    Directory of Open Access Journals (Sweden)

    Verikoukis Christos

    2010-01-01

    Full Text Available The use of real-time delay-sensitive applications in wireless systems has grown significantly in recent years. The designers of wireless systems therefore face the challenge of guaranteeing the required Quality of Service (QoS). On the other hand, recent advances in multiple antennas have already been included in several commercial standards, where multibeam opportunistic transmission beamforming strategies have been proposed to improve the performance of wireless systems. A cross-layer-based dynamically tuned queue length scheduler is presented in this paper for the downlink of multiuser and multiantenna WLAN systems with heterogeneous traffic requirements. To align with modern wireless transmission strategies, an opportunistic scheduling algorithm is employed, while a priority for the different traffic classes is applied. A tradeoff between maximizing the throughput of the system and guaranteeing the maximum allowed delay is obtained: the length of the queue is dynamically adjusted to select the appropriate conditions based on the operator requirements.
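The core idea of tuning an admissible queue length to a per-class delay bound can be caricatured in a few lines. The FIFO delay model, rates and bounds below are illustrative assumptions, not the scheduler proposed in the paper.

```python
# Toy delay-to-queue-length mapping.  With FIFO service at `rate_pps`
# packets/s, a backlog of L packets implies a worst-case queuing delay of
# L / rate_pps, so the admissible queue length for a traffic class with
# delay bound D seconds is rate_pps * D.

def max_queue_length(rate_pps, delay_bound_s):
    return int(rate_pps * delay_bound_s)

def tune_queues(classes, rate_pps):
    # classes: dict name -> delay bound in seconds
    return {name: max_queue_length(rate_pps, d) for name, d in classes.items()}

limits = tune_queues({"voice": 0.02, "video": 0.1, "best_effort": 1.0}, 500)
```

A real scheduler would additionally react to channel state and class priorities, as the abstract describes.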

  14. Finite sample performance of the E-M algorithm for ranks data modelling

    Directory of Open Access Journals (Sweden)

    Angela D'Elia

    2007-10-01

    Full Text Available We check the finite sample performance of the maximum likelihood estimators of the parameters of a mixture distribution recently introduced for modelling ranks/preference data. The estimates are derived by the E-M algorithm and the performance is evaluated from both univariate and bivariate points of view. While the results are generally acceptable as far as bias is concerned, the Monte Carlo experiment shows a different behaviour of the estimators' efficiency for the two parameters of the mixture, mainly depending upon their location in the admissible parametric space. Some operative suggestions conclude the paper.
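A minimal sketch of an E-M fit for a mixture of this kind: a shifted-binomial component plus a discrete-uniform component, as in CUB-type models for ranks. The parameterization, starting values and closed-form M-step shown here are standard textbook choices, assumed for illustration rather than taken from the paper.

```python
from math import comb

def shifted_binom(r, m, xi):
    # P(R = r) for r = 1..m under the shifted binomial component:
    # R - 1 ~ Binomial(m - 1, 1 - xi), so E[R] = 1 + (m - 1)(1 - xi).
    return comb(m - 1, r - 1) * (1 - xi) ** (r - 1) * xi ** (m - r)

def em_fit(data, m, pi=0.5, xi=0.5, iters=200):
    for _ in range(iters):
        # E-step: posterior probability each rank came from the binomial part.
        tau = []
        for r in data:
            b = pi * shifted_binom(r, m, xi)
            tau.append(b / (b + (1 - pi) / m))
        # M-step: closed-form updates for the mixing weight and xi.
        pi = sum(tau) / len(data)
        rbar = sum(t * r for t, r in zip(tau, data)) / sum(tau)
        xi = (m - rbar) / (m - 1)   # MLE from the weighted mean rank
    return pi, xi
```

With data concentrated on the low ranks of a 1..5 scale, the fitted xi moves toward 1, reflecting a strong "feeling" component.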

  15. Método para classificação de tipos de erros humanos: estudo de caso em acidentes em canteiros de obras An algorithm for classifying error types of front-line workers: a case study in accidents in construction sites

    Directory of Open Access Journals (Sweden)

    Tarcisio Abreu Saurin

    2012-04-01

    Full Text Available Este trabalho tem como objetivo principal desenvolver melhorias em um método de classificação de tipos de erros humanos de operadores de linha de frente. Tais melhorias foram desenvolvidas com base no teste do método em canteiros de obras, um ambiente no qual ele ainda não havia sido aplicado. Assim, foram investigados 19 acidentes de trabalho ocorridos em uma construtora de pequeno porte, sendo classificados os tipos de erros dos trabalhadores lesionados e de colegas de equipe que se encontravam no cenário do acidente. Os resultados indicaram que não houve nenhum erro em 70,5% das 34 vezes em que o método foi aplicado, evidenciando que as causas dos acidentes estavam fortemente associadas a fatores organizacionais. O estudo apresenta ainda recomendações para a interpretação das perguntas que constituem o método, bem como modificações em algumas dessas perguntas em comparação às versões anteriores. The objective of this study is to propose improvements to an algorithm for classifying the error types of front-line workers. The improvements were identified by testing the algorithm in construction sites, an environment in which it had not previously been applied. To this end, 19 occupational accidents that occurred in a small construction company were investigated, and the error types of both the injured workers and their team members were classified. The results indicated that there was no error in 70.5% of the 34 times the algorithm was applied, providing evidence that the causes were strongly linked to organizational factors. Moreover, the study presents recommendations to facilitate the interpretation of the questions that constitute the algorithm, as well as changes to some questions in comparison with previous versions of the tool.

  16. A Novel Apoptosis Correlated Molecule: Expression and Characterization of Protein Latcripin-1 from <em>Lentinula edodes</em> C91–3

    Directory of Open Access Journals (Sweden)

    Min Huang

    2012-05-01

    Full Text Available An apoptosis correlated molecule—protein Latcripin-1 of <em>Lentinula edodes</em> C91-3—was expressed and characterized in <em>Pichia pastoris</em> GS115. The total RNA was obtained from <em>Lentinula edodes</em> C91–3. According to the transcriptome, the full-length gene of Latcripin-1 was isolated with the 3'-Full Rapid Amplification of cDNA Ends (RACE) and 5'-Full RACE methods. The full-length gene was inserted into the secretory expression vector pPIC9K. The protein Latcripin-1 was expressed in <em>Pichia pastoris</em> GS115 and analyzed by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and Western blot. The Western blot showed that the protein was expressed successfully. The biological function of protein Latcripin-1 on A549 cells was studied with flow cytometry and the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) method. The toxic effect of protein Latcripin-1 was detected with the MTT method by co-culturing the characterized protein with chick embryo fibroblasts. The MTT assay results showed that there was a great difference between the protein Latcripin-1 groups and the control group (<em>p</em> < 0.05). There was no toxic effect of the characterized protein on chick embryo fibroblasts. The flow cytometry showed that there was a significant difference between the protein groups of interest and the control group according to apoptosis function (<em>p</em> < 0.05). At the same time, cell ultrastructure observed by transmission electron microscopy supported the results of flow cytometry. The work demonstrates that protein Latcripin-1 can induce apoptosis of human lung cancer A549 cells and brings new insights into and advantages to finding anti-tumor proteins.

  17. Tracking of Multiple Moving Sources Using Recursive EM Algorithm

    Directory of Open Access Journals (Sweden)

    Böhme Johann F

    2005-01-01

    Full Text Available We deal with recursive direction-of-arrival (DOA) estimation of multiple moving sources. Based on the recursive EM algorithm, we develop two recursive procedures to estimate the time-varying DOA parameter for narrowband signals. The first procedure requires no prior knowledge about the source movement. The second procedure assumes that the motion of the moving sources is described by a linear polynomial model. The proposed recursion updates the polynomial coefficients when new data arrive. The suggested approaches have two major advantages: simple implementation and easy extension to wideband signals. Numerical experiments show that both procedures provide excellent results in a slowly changing environment. When the DOA parameter changes fast or two source directions cross each other, the procedure designed for the linear polynomial model performs better than the general procedure. Compared to the beamforming technique based on the same parameterization, our approach is computationally favorable and has a wider range of applications.

  18. An efficient algorithm for computing fixed length attractors based on bounded model checking in synchronous Boolean networks with biochemical applications.

    Science.gov (United States)

    Li, X Y; Yang, G W; Zheng, D S; Guo, W S; Hung, W N N

    2015-04-28

    Genetic regulatory networks are the key to understanding biochemical systems. One condition of a genetic regulatory network under different living environments can be modeled as a synchronous Boolean network. The attractors of these Boolean networks help biologists to identify determinant and stable factors. Existing methods identify attractors based on a random initial state or the entire state space simultaneously; they cannot identify fixed length attractors directly, and their time complexity increases exponentially with respect to the number and length of attractors. This study used bounded model checking to quickly locate fixed length attractors. Based on a SAT solver, we propose a new algorithm for efficiently computing fixed length attractors, which is more suitable for large Boolean networks and networks with numerous attractors. After comparison with the tool BooleNet, empirical experiments involving biochemical systems demonstrated the feasibility and efficiency of our approach.
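For intuition, fixed length attractors of a tiny synchronous Boolean network can be found by brute force; the paper replaces this exhaustive search with bounded model checking on a SAT solver, which scales far beyond this toy approach. The two-gene example network is invented.

```python
from itertools import product

def attractors_of_length(update_fns, n, p):
    # Enumerate all 2^n states, iterate the synchronous update until an
    # attractor is reached, then report cycles of exactly length p.
    found = set()
    for state in product((0, 1), repeat=n):
        s = state
        for _ in range(2 ** n):                  # transient <= 2^n steps
            s = tuple(f(s) for f in update_fns)
        cycle = [s]                              # walk once around the cycle
        t = tuple(f(s) for f in update_fns)
        while t != s:
            cycle.append(t)
            t = tuple(f(t) for f in update_fns)
        if len(cycle) == p:
            found.add(min(cycle))                # canonical representative
    return found

# Toy 2-gene network: x1' = x2, x2' = x1 (two fixed points and one 2-cycle)
fns = (lambda s: s[1], lambda s: s[0])
```

The SAT formulation instead unrolls the transition relation p steps and asserts state repetition, so it never enumerates states explicitly.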

  19. Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction

    International Nuclear Information System (INIS)

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-01-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV norm and the constraint involved in the problem. This characterization of the solution via proximity operators, which define two projection operators, naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce into the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We theoretically prove the convergence of PAPA. In numerical experiments, the performance of our algorithm, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with a TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images and image quality. It also outperforms nested EM-TV in convergence speed while providing comparable image quality.
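For reference, the classical multiplicative MLEM update that MAP-EM and the EM-preconditioner build on is x ← (x / Aᵀ1) · Aᵀ(y / Ax). A tiny dense sketch follows; the 3×2 system is synthetic, since real ECT system matrices are huge and sparse.

```python
def mlem(A, y, iters=500):
    # A: list of rows (m x n system matrix), y: measured data.
    m, n = len(A), len(A[0])
    x = [1.0] * n                                             # flat start image
    sens = [sum(A[i][j] for i in range(m)) for j in range(n)]  # A^T 1
    for _ in range(iters):
        proj = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]  # A x
        ratio = [y[i] / proj[i] for i in range(m)]                        # y / Ax
        back = [sum(A[i][j] * ratio[i] for i in range(m)) for j in range(n)]
        x = [x[j] / sens[j] * back[j] for j in range(n)]       # multiplicative update
    return x

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
x_true = [2.0, 3.0]
y = [sum(a * b for a, b in zip(row, x_true)) for row in A]     # noiseless data
x_hat = mlem(A, y)
```

On consistent noiseless data the iterates converge to the true image, which is exactly the slow baseline that preconditioning accelerates.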

  20. Accelerated EM-based clustering of large data sets

    NARCIS (Netherlands)

    Verbeek, J.J.; Nunnink, J.R.J.; Vlassis, N.

    2006-01-01

    Motivated by the poor performance (linear complexity) of the EM algorithm in clustering large data sets, and inspired by the successful accelerated versions of related algorithms like k-means, we derive an accelerated variant of the EM algorithm for Gaussian mixtures that: (1) offers speedups that
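The baseline being accelerated looks like this: each EM iteration for a Gaussian mixture touches every point-component pair. A minimal 1-D, two-component sketch, where equal weights and unit variances are simplifying assumptions and the starting means are invented:

```python
import math, random

def em_gmm_1d(xs, mu, sigma=1.0, iters=50):
    # Fit the means of two equal-weight, fixed-variance Gaussians by EM.
    mu = list(mu)
    for _ in range(iters):
        resp = []                                   # E-step: responsibilities
        for x in xs:
            w = [math.exp(-0.5 * ((x - m) / sigma) ** 2) for m in mu]
            z = sum(w)
            resp.append([wi / z for wi in w])
        for k in range(2):                          # M-step: weighted means
            wk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / wk
        # (an accelerated variant would replace the per-point E-step with
        #  cached sufficient statistics over blocks of data)
    return mu

random.seed(0)
xs = [random.gauss(-3, 1) for _ in range(200)] + [random.gauss(3, 1) for _ in range(200)]
mu = em_gmm_1d(xs, (-1.0, 1.0))
```

With two well-separated clusters the means converge close to -3 and 3.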

  1. Development of regularized expectation maximization algorithms for fan-beam SPECT data

    International Nuclear Information System (INIS)

    Kim, Soo Mee; Lee, Jae Sung; Lee, Dong Soo; Lee, Soo Jin; Kim, Kyeong Min

    2005-01-01

    SPECT using a fan-beam collimator improves spatial resolution and sensitivity. For the reconstruction from fan-beam projections, it is necessary to implement direct fan-beam reconstruction methods without transforming the data into the parallel geometry. In this study, various fan-beam reconstruction algorithms were implemented and their performances were compared. The projector for fan-beam SPECT was implemented using a ray-tracing method. The direct reconstruction algorithms implemented for fan-beam projection data were FBP (filtered backprojection), EM (expectation maximization), OS-EM (ordered subsets EM) and MAP-EM OSL (maximum a posteriori EM using the one-step late method) with membrane and thin-plate models as priors. For comparison, the fan-beam projection data were also rebinned into the parallel data using various interpolation methods, such as the nearest neighbor, bilinear and bicubic interpolations, and reconstructed using the conventional EM algorithm for parallel data. Noiseless and noisy projection data from the digital Hoffman brain and Shepp/Logan phantoms were reconstructed using the above algorithms. The reconstructed images were compared in terms of a percent error metric. For the fan-beam data with Poisson noise, the MAP-EM OSL algorithm with the thin-plate prior showed the best result in both percent error and stability. Bilinear interpolation was the most effective method for rebinning from the fan-beam to parallel geometry when the accuracy and computation load were considered. Direct fan-beam EM reconstructions were more accurate than the standard EM reconstructions obtained from rebinned parallel data. Direct fan-beam reconstruction algorithms were implemented, which provided significantly improved reconstructions

  2. String matching with variable length gaps

    DEFF Research Database (Denmark)

    Bille, Philip; Gørtz, Inge Li; Vildhøj, Hjalte Wedel

    2012-01-01

    primitive in computational biology applications. Let m and n be the lengths of P and T, respectively, and let k be the number of strings in P. We present a new algorithm achieving time O(n log k + m + α) and space O(m + A), where A is the sum of the lower bounds of the lengths of the gaps in P and α is the total...... number of occurrences of the strings in P within T. Compared to the previous results this bound essentially achieves the best known time and space complexities simultaneously. Consequently, our algorithm obtains the best known bounds for almost all combinations of m, n, k, A, and α. Our algorithm
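A pattern with variable length gaps can be matched naively with a regular expression, which is the kind of baseline the algorithm above improves on for large inputs. Note that regex matching is non-overlapping and greedy, so its occurrence count can differ from the α counted by the algorithm; the helper name below is an invented illustration.

```python
import re

def gap_pattern(strings, gaps):
    # strings: k sub-patterns; gaps: k-1 pairs (lo, hi) bounding gap lengths.
    parts = [re.escape(strings[0])]
    for (lo, hi), s in zip(gaps, strings[1:]):
        parts.append(".{%d,%d}" % (lo, hi))   # lo..hi arbitrary characters
        parts.append(re.escape(s))
    return re.compile("".join(parts))

# "AB", then a gap of 1..3 arbitrary characters, then "CD"
pat = gap_pattern(["AB", "CD"], [(1, 3)])
hits = [m.start() for m in pat.finditer("ABxCDyyABxxxxCD")]  # second gap is too long
```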

  3. A Linear Time Algorithm for the <em>k</em> Maximal Sums Problem

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Jørgensen, Allan Grønlund

    2007-01-01

    Finding the sub-vector with the largest sum in a sequence of n numbers is known as the maximum sum problem. Finding the k sub-vectors with the largest sums is a natural extension of this, and is known as the k maximal sums problem. In this paper we design an optimal O(n + k) time algorithm for the k maximal sums problem. We use this algorithm to obtain algorithms solving the two-dimensional k maximal sums problem in O(m²·n + k) time, where the input is an m × n matrix with m ≤ n. We generalize this algorithm to solve the d-dimensional problem in O(n^(2d−1) + k) time. The space usage of all the algorithms can be reduced to O(n^(d−1) + k). This leads to the first algorithm for the k maximal sums problem in one dimension using O(n + k) time and O(k) space.
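The k = 1 case is the classic maximum sum problem, solvable by Kadane's O(n) scan; a brute-force k-maximal-sums routine is included for cross-checking on tiny inputs. Neither is the paper's heap-based O(n + k) algorithm.

```python
from heapq import nlargest

def max_sum(a):
    # Kadane's scan: best sub-vector sum ending here vs. best seen so far.
    best = cur = a[0]
    for x in a[1:]:
        cur = max(x, cur + x)
        best = max(best, cur)
    return best

def k_maximal_sums_bruteforce(a, k):
    # O(n^2) enumeration of all contiguous sums; fine only for tiny n.
    sums = [sum(a[i:j]) for i in range(len(a)) for j in range(i + 1, len(a) + 1)]
    return nlargest(k, sums)

a = [2, -3, 4, -1, 2]
```

For this vector the best sub-vector is [4, -1, 2] with sum 5, and the three largest sums are 5, 4 and 4.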

  4. Acetylcholinesterase-Inhibiting Activity of Salicylanilide <em>N</em>-Alkylcarbamates and Their Molecular Docking

    Directory of Open Access Journals (Sweden)

    Josef Jampilek

    2012-08-01

    Full Text Available A series of twenty-five novel salicylanilide <em>N</em>-alkylcarbamates were investigated as potential acetylcholinesterase inhibitors. The compounds were tested for their ability to inhibit acetylcholinesterase (AChE) from electric eel (<em>Electrophorus electricus</em> L.). Experimental lipophilicity was determined, and the structure-activity relationships are discussed. The mode of binding in the active site of AChE was investigated by molecular docking. All the discussed compounds expressed significantly higher AChE inhibitory activity than rivastigmine and slightly lower than galanthamine. Disubstitution by chlorine at C'(3,4) of the aniline ring and an optimal length of hexyl-undecyl alkyl chains in the carbamate moiety provided the most active AChE inhibitors. Monochlorination at C'(4) yielded slightly more effective AChE inhibitors than at C'(3). Generally it can be stated that compounds with higher lipophilicity showed higher inhibition, and the activity of the compounds is strongly dependent on the length of the <em>N</em>-alkyl chain.

  5. Echolocation calls and morphology in the Mehely's (<em>Rhinolophus mehelyi</em>) and Mediterranean (<em>R. euryale</em>) horseshoe bats: implications for resource partitioning

    Directory of Open Access Journals (Sweden)

    Egoitz Salsamendi

    2006-03-01

    Full Text Available Abstract <em>Rhinolophus euryale</em> and <em>R. mehelyi</em> are morphologically very similar species and their distributions overlap extensively in the Mediterranean basin. We modelled their foraging behaviour using echolocation calls and wing morphology and, assuming niche segregation occurs between the two species, we explored how it is shaped by these factors. The resting frequency of echolocation calls was recorded, and weight, forearm length, wing loading, aspect ratio and wing tip shape index were measured. <em>R. mehelyi</em> showed a significantly higher resting frequency than <em>R. euryale</em>, but the differences are deemed insufficient for dietary niche segregation. Weight and forearm length were significantly larger in <em>R. mehelyi</em>. The higher values of aspect ratio and wing loading and the lower value of wing tip shape index in <em>R. mehelyi</em> restrict its flight manoeuvrability and agility. Therefore, the flight ability of <em>R. mehelyi</em> may decrease as habitat complexity increases. Thus, the principal mechanism for resource partitioning seems to be based on differing habitat use arising from differences in wing morphology. Riassunto Ecolocalizzazione e morfologia nei rinolofi di Mehely (<em>Rhinolophus mehelyi</em>) e euriale (<em>R. euryale</em>): implicazioni nella segregazione delle risorse trofiche. <em>Rhinolophus euryale</em> e <em>R. mehelyi</em> sono specie morfologicamente molto simili, la cui distribuzione risulta largamente coincidente in area mediterranea. Il comportamento di foraggiamento delle due specie è stato analizzato in funzione delle caratteristiche dei segnali di ecolocalizzazione e della morfologia alare, ed è stata valutata l’incidenza di questi fattori nell’ipotesi di una segregazione delle nicchie. È stata rilevata la frequenza a riposo dei segnali ultrasonori, così come il peso, la lunghezza dell’avambraccio, il carico alare, e due

  6. Weighted expectation maximization reconstruction algorithms with application to gated megavoltage tomography

    International Nuclear Information System (INIS)

    Zhang Jin; Shi Daxin; Anastasio, Mark A; Sillanpaa, Jussi; Chang Jenghwa

    2005-01-01

    We propose and investigate weighted expectation maximization (EM) algorithms for image reconstruction in x-ray tomography. The development of the algorithms is motivated by the respiratory-gated megavoltage tomography problem, in which the acquired asymmetric cone-beam projections are limited in number and unevenly sampled over view angle. In these cases, images reconstructed by use of the conventional EM algorithm can contain ring- and streak-like artefacts that are attributable to a combination of data inconsistencies and truncation of the projection data. By use of computer-simulated and clinical gated fan-beam megavoltage projection data, we demonstrate that the proposed weighted EM algorithms effectively mitigate such image artefacts. (note)

  7. PEG Enhancement for EM1 and EM2+ Missions

    Science.gov (United States)

    Von der Porten, Paul; Ahmad, Naeem; Hawkins, Matt

    2018-01-01

    NASA is currently building the Space Launch System (SLS) Block-1 launch vehicle for the Exploration Mission 1 (EM-1) test flight. The next evolution of SLS, the Block-1B Exploration Mission 2 (EM-2), is currently being designed. The Block-1 and Block-1B vehicles will use the Powered Explicit Guidance (PEG) algorithm. Due to the relatively low thrust-to-weight ratio of the Exploration Upper Stage (EUS), certain enhancements to the Block-1 PEG algorithm are needed to perform Block-1B missions. In order to accommodate mission design for EM-2 and beyond, PEG has been significantly improved since its use on the Space Shuttle program. The current version of PEG has the ability to switch to different targets during Core Stage (CS) or EUS flight, and can automatically reconfigure for a single Engine Out (EO) scenario, loss of communication with the Launch Abort System (LAS), and Inertial Navigation System (INS) failure. The Thrust Factor (TF) algorithm uses measured state information in addition to a priori parameters, providing PEG with an improved estimate of propulsion information. This provides robustness against unknown or undetected engine failures. A loft parameter input allows LAS jettison while maximizing payload mass. The current PEG algorithm is now able to handle various classes of missions with burn arcs much longer than were seen in the shuttle program. These missions include targeting a circular LEO orbit with a low-thrust, long-burn-duration upper stage, targeting a highly eccentric Trans-Lunar Injection (TLI) orbit, targeting a disposal orbit using the low-thrust Reaction Control System (RCS), and targeting a hyperbolic orbit. This paper will describe the design and implementation of the TF algorithm, the strategy to handle EO in various flight regimes, algorithms to cover off-nominal conditions, and other enhancements to the Block-1 PEG algorithm. This paper illustrates challenges posed by the Block-1B vehicle, and results show that the improved PEG

  8. Controle genético do comprimento do pedúnculo em feijão-caupi Genetic control of peduncle length in cowpea

    Directory of Open Access Journals (Sweden)

    Maurisrael de Moura Rocha

    2009-03-01

    Full Text Available O objetivo deste trabalho foi estudar o controle genético do caráter comprimento do pedúnculo em feijão-caupi (Vigna unguiculata). Para isso, foi realizado um cruzamento entre os parentais TVx-5058-09C, de pedúnculo curto, e TE96-282-22G, de pedúnculo longo. Os parentais e as gerações F1, F2, RC1 (P1xF1) e RC2 (P2xF1) foram avaliados em delineamento de blocos ao acaso, com quatro repetições. Foram estimados: variâncias fenotípica, genotípica, ambiental, aditiva e de dominância; herdabilidades no sentido amplo e restrito; grau médio de dominância e número mínimo de genes que determinam o caráter. O modelo aditivo-dominante foi adequado para explicar a variação observada. O efeito gênico aditivo foi o mais importante no controle do comprimento do pedúnculo, que é, aparentemente, controlado por cinco genes. The objective of this work was to investigate the genetic control of peduncle length in cowpea (Vigna unguiculata L.). A short peduncle cowpea line (TVx-5058-09C) was crossed with a long peduncle line (TE 96-282-22G). The parents and the F1, F2, RC1 (P1xF1), and RC2 (P2xF1) generations were evaluated in a randomized block design with four replications. Genotypic, phenotypic, environmental, additive, and dominance variances for peduncle length were determined. Narrow and broad sense heritability, the degree of dominance, and the minimum number of genes determining peduncle length were estimated. The additive-dominant model was adequate to explain the observed variation. The additive gene effect was the most important in controlling peduncle length, which appeared to be controlled by five genes.
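The generation-variance bookkeeping described above can be sketched with the textbook estimators: environmental variance from the non-segregating generations, Warner's backcross formula for additive variance, and the Castle-Wright estimator for the minimum gene number. The numeric values below are invented for illustration and are not the paper's data.

```python
def genetic_analysis(v_p1, v_p2, v_f1, v_f2, v_bc1, v_bc2, mean_p1, mean_p2):
    v_e = (v_p1 + v_p2 + v_f1) / 3.0       # environmental variance (non-segregating generations)
    v_g = v_f2 - v_e                        # genotypic variance in F2
    v_a = 2 * v_f2 - (v_bc1 + v_bc2)        # additive variance, Warner's backcross formula
    h2_broad = v_g / v_f2                   # broad sense heritability
    h2_narrow = v_a / v_f2                  # narrow sense heritability
    n_genes = (mean_p1 - mean_p2) ** 2 / (8 * v_a)  # Castle-Wright minimum gene number
    return v_a, h2_broad, h2_narrow, n_genes

# invented example: parental/F1 variances 2.0, F2 variance 10.0,
# backcross variances 7.0, parental means 5 and 25
result = genetic_analysis(2.0, 2.0, 2.0, 10.0, 7.0, 7.0, 5.0, 25.0)
```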

  9. emMAW: computing minimal absent words in external memory.

    Science.gov (United States)

    Héliou, Alice; Pissis, Solon P; Puglisi, Simon J

    2017-09-01

    The biological significance of minimal absent words has been investigated in genomes of organisms from all domains of life. For instance, three minimal absent words of the human genome were found in Ebola virus genomes. There exists an O(n)-time and O(n)-space algorithm for computing all minimal absent words of a sequence of length n on a fixed-sized alphabet based on suffix arrays. A standard implementation of this algorithm, when applied to a large sequence of length n, requires more than 20n bytes of RAM. Such memory requirements are a significant hurdle to the computation of minimal absent words in large datasets. We present emMAW, the first external-memory algorithm for computing minimal absent words. A free open-source implementation of our algorithm is made available. This allows for computation of minimal absent words on far bigger data sets than was previously possible. Our implementation requires less than 3 h on a standard workstation to process the full human genome when as little as 1 GB of RAM is made available. We stress that our implementation, despite making use of external memory, is fast; indeed, even on relatively smaller datasets when enough RAM is available to hold all necessary data structures, it is less than two times slower than state-of-the-art internal-memory implementations. Availability: https://github.com/solonas13/maw (free software under the terms of the GNU GPL). Contact: alice.heliou@lix.polytechnique.fr or solon.pissis@kcl.ac.uk. Supplementary data are available at Bioinformatics online.
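A brute-force reference for what emMAW computes: a word is a minimal absent word of y if it does not occur in y but both its longest proper prefix and its longest proper suffix do. This quadratic-space toy is only for checking tiny inputs; the suffix-array and external-memory algorithms handle genome-scale data.

```python
from itertools import product

def minimal_absent_words(y, alphabet, max_len):
    # Collect every factor (substring) of y, then test each candidate word.
    factors = {y[i:j] for i in range(len(y)) for j in range(i + 1, len(y) + 1)}
    maws = []
    for l in range(2, max_len + 1):
        for t in product(alphabet, repeat=l):
            w = ''.join(t)
            if w not in factors and w[:-1] in factors and w[1:] in factors:
                maws.append(w)
    return sorted(maws)
```

For example, the minimal absent words of "AB" over {A, B} up to length 2 are AA, BA and BB.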

  10. Predictive minimum description length principle approach to inferring gene regulatory networks.

    Science.gov (United States)

    Chaitankar, Vijender; Zhang, Chaoyang; Ghosh, Preetam; Gong, Ping; Perkins, Edward J; Deng, Youping

    2011-01-01

    Reverse engineering of gene regulatory networks using information theory models has received much attention due to its simplicity, low computational cost, and capability of inferring large networks. One of the major problems with information theory models is determining the threshold that defines the regulatory relationships between genes. The minimum description length (MDL) principle has been implemented to overcome this problem. The description length of the MDL principle is the sum of the model length and the data encoding length. A user-specified fine-tuning parameter is used as a control mechanism between model and data encoding, but it is difficult to find the optimal parameter. In this work, we propose a new inference algorithm that incorporates mutual information (MI), conditional mutual information (CMI), and the predictive minimum description length (PMDL) principle to infer gene regulatory networks from DNA microarray data. In this algorithm, the information theoretic quantities MI and CMI determine the regulatory relationships between genes, and the PMDL principle method attempts to determine the best MI threshold without the need for a user-specified fine-tuning parameter. The performance of the proposed algorithm is evaluated using both synthetic time series data sets and a biological time series data set (Saccharomyces cerevisiae). The results show that the proposed algorithm produced fewer false edges and significantly improved the precision when compared to the existing MDL algorithm.
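The information-theoretic core of such methods, mutual information between two discretized expression profiles, fits in a few lines. Binning choices are assumptions here, and the paper combines MI with CMI and a PMDL-derived threshold rather than this bare computation.

```python
from math import log2
from collections import Counter

def mutual_information(xs, ys):
    # MI in bits between two equal-length sequences of discrete symbols,
    # estimated from empirical joint and marginal frequencies.
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum(c / n * log2((c / n) / ((px[a] / n) * (py[b] / n)))
               for (a, b), c in pxy.items())
```

Two perfectly dependent binary profiles give 1 bit of MI, while independent ones give 0; an edge is inferred when MI exceeds the chosen threshold.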

  11. A fast EM algorithm for BayesA-like prediction of genomic breeding values.

    Directory of Open Access Journals (Sweden)

    Xiaochen Sun

    Full Text Available Prediction accuracies of estimated breeding values for economically important traits are expected to benefit from genomic information. Single nucleotide polymorphism (SNP) panels used in genomic prediction are increasing in density, but the Markov chain Monte Carlo (MCMC) estimation of SNP effects can be quite time consuming or slow to converge when a large number of SNPs are fitted simultaneously in a linear mixed model. Here we present an EM algorithm (termed "fastBayesA") without MCMC. This fastBayesA approach treats the variances of SNP effects as missing data and uses a joint posterior mode of effects, in contrast to the commonly used BayesA, which bases predictions on posterior means of effects. In each EM iteration, SNP effects are predicted as a linear combination of best linear unbiased predictions of breeding values from a mixed linear animal model that incorporates a weighted marker-based realized relationship matrix. Method fastBayesA converges after a few iterations to a joint posterior mode of SNP effects under the BayesA model. When applied to simulated quantitative traits with a range of genetic architectures, fastBayesA is shown to predict genomic estimated breeding values (GEBV) as accurately as BayesA but with less computing effort per SNP than BayesA. Method fastBayesA can be used as a computationally efficient substitute for BayesA, especially when an increasing number of markers brings unreasonable computational burden or slow convergence to MCMC approaches.

  12. Decoding Interleaved Gabidulin Codes using Alekhnovich's Algorithm

    DEFF Research Database (Denmark)

    Puchinger, Sven; Müelich, Sven; Mödinger, David

    2017-01-01

    We prove that Alekhnovich's algorithm can be used for row reduction of skew polynomial matrices. This yields an O(ℓ³ n^((ω+1)/2) log(n)) decoding algorithm for ℓ-Interleaved Gabidulin codes of length n, where ω is the matrix multiplication exponent.

  13. <em>DCAF4</em>, a novel gene associated with leucocyte telomere length

    DEFF Research Database (Denmark)

    Mangino, Massimo; Christiansen, Lene; Stone, Rivka

    2015-01-01

    BACKGROUND: Leucocyte telomere length (LTL), which is fashioned by multiple genes, has been linked to a host of human diseases, including sporadic melanoma. A number of genes associated with LTL have already been identified through genome-wide association studies. The main aim of this study was t...

  14. Simulating Evolution of <em>Drosophila melanogaster Ebony</em> Mutants Using a Genetic Algorithm

    DEFF Research Database (Denmark)

    Helles, Glennie

    2009-01-01

    Genetic algorithms are generally quite easy to understand and work with, and they are a popular choice in many cases. One area in which genetic algorithms are widely and successfully used is artificial life, where they are used to simulate evolution of artificial creatures. However, despite their suggestive name, simplicity and popularity in artificial life, they do not seem to have gained a footing within the field of population genetics to simulate evolution of real organisms, possibly because genetic algorithms are based on a rather crude simplification of the evolutionary mechanisms known...

  15. Blind sequence-length estimation of low-SNR cyclostationary sequences

    CSIR Research Space (South Africa)

    Vlok, JD

    2014-06-01

    Full Text Available Several existing direct-sequence spread spectrum (DSSS) detection and estimation algorithms assume prior knowledge of the symbol period or sequence length, although very few sequence-length estimation techniques are available in the literature...

  16. An Expectation-Maximization Algorithm for Amplitude Estimation of Saturated Optical Transient Signals.

    Energy Technology Data Exchange (ETDEWEB)

    Kagie, Matthew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lanterman, Aaron D. [Georgia Inst. of Technology, Atlanta, GA (United States)

    2017-12-01

    This paper addresses parameter estimation for an optical transient signal when the received data has been right-censored. We develop an expectation-maximization (EM) algorithm to estimate the amplitude of a Poisson intensity with a known shape in the presence of additive background counts, where the measurements are subject to saturation effects. We compare the results of our algorithm with those of an EM algorithm that is unaware of the censoring.
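    The E-step for right-censored Poisson data replaces each saturated count with its conditional expectation given that it reached the saturation level. The sketch below illustrates that idea only; the amplitude update is a simplified moment-matching step standing in for the paper's exact M-step, and `shape`, `bkg`, and `sat` are hypothetical inputs:

```python
import math

def pois_pmf(k, lam):
    return math.exp(-lam) * lam ** k / math.factorial(k)

def pois_sf(c, lam):
    # P(X >= c) for X ~ Poisson(lam)
    return 1.0 - sum(pois_pmf(k, lam) for k in range(max(c, 0)))

def censored_mean(c, lam):
    # E[X | X >= c], using the identity E[X * 1{X>=c}] = lam * P(X >= c-1)
    return lam * pois_sf(c - 1, lam) / pois_sf(c, lam)

def em_amplitude(counts, shape, bkg, sat, a0=1.0, iters=50):
    # counts equal to (or above) sat are treated as right-censored observations
    a = a0
    for _ in range(iters):
        total = 0.0
        for y, s in zip(counts, shape):
            lam = a * s + bkg
            total += censored_mean(sat, lam) if y >= sat else y  # E-step imputation
        # moment matching: E[sum y] = a * sum(shape) + N * bkg
        a = max((total - len(counts) * bkg) / sum(shape), 0.0)
    return a
```

With no saturated samples the update reduces to the ordinary method-of-moments amplitude estimate in a single iteration.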

  17. A Scalable Gaussian Process Analysis Algorithm for Biomass Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Chandola, Varun [ORNL; Vatsavai, Raju [ORNL

    2011-01-01

    Biomass monitoring is vital for studying the carbon cycle of earth's ecosystem and has several significant implications, especially in the context of understanding climate change and its impacts. Recently, several change detection methods have been proposed to identify land cover changes in temporal profiles (time series) of vegetation collected using remote sensing instruments, but they do not satisfy one or both of the two requirements of the biomass monitoring problem, i.e., operating in online mode and handling periodic time series. In this paper, we adapt Gaussian process regression to detect changes in such time series in an online fashion. While Gaussian processes (GPs) have been widely used as a kernel based learning method for regression and classification, their applicability to massive spatio-temporal data sets, such as remote sensing data, has been limited owing to the high computational costs involved. We focus on addressing the scalability issues associated with the proposed GP based change detection algorithm. This paper makes several significant contributions. First, we propose a GP based online time series change detection algorithm and demonstrate its effectiveness in detecting different types of changes in Normalized Difference Vegetation Index (NDVI) data obtained from a study area in Iowa, USA. Second, we propose an efficient Toeplitz matrix based solution which significantly improves the computational complexity and memory requirements of the proposed GP based method. Specifically, the proposed solution can analyze a time series of length t in O(t^2) time while maintaining an O(t) memory footprint, compared to the O(t^3) time and O(t^2) memory requirement of standard matrix manipulation based methods. Third, we describe a parallel version of the proposed solution which can be used to simultaneously analyze a large number of time series. We study three different parallel implementations: using threads, MPI, and a...
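    The Toeplitz structure mentioned above is what enables the sub-cubic GP computations: the covariance matrix of a stationary process on a regular time grid is constant along its diagonals, so a matrix-vector product can be computed in O(t log t) via circulant embedding and the FFT. A small self-contained illustration of that trick (not the paper's full solver):

```python
import numpy as np

def toeplitz_matvec(t, x):
    # Multiply a symmetric Toeplitz matrix (first column t, length n) by x
    # in O(n log n) time and O(n) memory, via circulant embedding + FFT.
    n = len(t)
    c = np.concatenate([t, t[-2:0:-1]])          # circulant first column, length 2n-2
    xp = np.concatenate([x, np.zeros(n - 2)])    # zero-pad x to the embedding size
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(xp)).real
    return y[:n]                                 # top block of the circulant product
```

Only the first column of the covariance matrix is ever stored, which is the source of the O(t) memory footprint.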

  18. Models and Algorithms for Tracking Target with Coordinated Turn Motion

    Directory of Open Access Journals (Sweden)

    Xianghui Yuan

    2014-01-01

    Full Text Available Tracking target with coordinated turn (CT motion is highly dependent on the models and algorithms. First, the widely used models are compared in this paper—coordinated turn (CT model with known turn rate, augmented coordinated turn (ACT model with Cartesian velocity, ACT model with polar velocity, CT model using a kinematic constraint, and maneuver centered circular motion model. Then, in the single model tracking framework, the tracking algorithms for the last four models are compared and the suggestions on the choice of models for different practical target tracking problems are given. Finally, in the multiple models (MM framework, the algorithm based on expectation maximization (EM algorithm is derived, including both the batch form and the recursive form. Compared with the widely used interacting multiple model (IMM algorithm, the EM algorithm shows its effectiveness.
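    For reference, the CT model with known turn rate ω has a closed-form state transition over a sampling interval T. A minimal sketch for the state [x, vx, y, vy] in the standard textbook form (not tied to the paper's code):

```python
import math

def ct_transition(omega, T):
    # Coordinated-turn transition matrix for state [x, vx, y, vy],
    # with known turn rate omega (rad/s) and sampling interval T (s).
    s, c = math.sin(omega * T), math.cos(omega * T)
    return [[1, s / omega,       0, -(1 - c) / omega],
            [0, c,               0, -s],
            [0, (1 - c) / omega, 1, s / omega],
            [0, s,               0, c]]

def apply(F, state):
    # Plain matrix-vector product.
    return [sum(f * xi for f, xi in zip(row, state)) for row in F]
```

As ω → 0 the matrix tends to the constant-velocity model, and propagating over a full period 2π/ω returns the state to its starting point.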

  19. Does the use of bedside pelvic ultrasound decrease length of stay in the emergency department?

    Science.gov (United States)

    Thamburaj, Ravi; Sivitz, Adam

    2013-01-01

    Diagnostic ultrasounds by emergency medicine (EM) and pediatric emergency medicine (PEM) physicians have increased because of ultrasonography training during residency and fellowship. The availability of ultrasound in radiology departments is limited or difficult to obtain, especially during nighttime hours. Studies have shown that EM physicians can accurately perform goal-directed ultrasound after appropriate training. The goal of this study was to compare the length of stay for patients receiving an ultrasound to confirm intrauterine pregnancies. The hypothesis of this study is that a bedside ultrasound by a trained EM/PEM physician can reduce length of stay in the emergency department (ED) by 1 hour. This was a case cohort retrospective review for patients aged 13 to 21 years who received pelvic ultrasounds in the ED during 2007. Each patient was placed into 1 of 2 groups. Group 1 received bedside ultrasounds done by institutionally credentialed EM/PEM attending physicians. Group 2 received radiology department ultrasound only. Each group had subanalysis done including chief complaint, time of presentation, time to completion of ultrasound, length of stay, diagnosis, and disposition. Daytime was defined as presentation between 7 AM and 9 PM, when radiology ultrasound technologists were routinely available. We studied 330 patients, with 244 patients (74%) in the bedside ultrasound group. The demographics of both groups showed no difference in age, presenting complaints, discharge diagnoses, and ultimate disposition. Group 1 had a significantly shorter time to completion of ultrasound compared with group 2 (mean, 82 minutes [range, 1-901 minutes] vs 149 minutes [range, 7-506 minutes]) and a shorter length of stay (142 [16-2268] vs 230 [16-844] minutes). Of those presenting during the day (66%), group 1 also showed a significant reduction in length of stay. Bedside ultrasound by trained EM/PEM physicians produced a significant reduction in length of stay in the ED, regardless of radiology ultrasound technologist availability.

  20. Expectation-maximization algorithms for learning a finite mixture of univariate survival time distributions from partially specified class values

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Youngrok [Iowa State Univ., Ames, IA (United States)

    2013-05-15

    Heterogeneity exists in a data set when samples from different classes are merged into the data set. Finite mixture models can be used to represent a survival time distribution on a heterogeneous patient group by the proportions of each class and by the survival time distribution within each class as well. The heterogeneous data set cannot be explicitly decomposed into homogeneous subgroups unless all the samples are precisely labeled by their origin classes; such impossibility of decomposition is a barrier to overcome for estimating finite mixture models. The expectation-maximization (EM) algorithm has been used to obtain maximum likelihood estimates of finite mixture models by soft-decomposition of heterogeneous samples without labels for a subset or the entire set of data. In medical surveillance databases we can find partially labeled data, that is, while not completely unlabeled there is only imprecise information about class values. In this study we propose new EM algorithms that take advantage of such partial labels, and thus incorporate more information than traditional EM algorithms. We particularly propose four variants of the EM algorithm named EM-OCML, EM-PCML, EM-HCML and EM-CPCML, each of which assumes a specific mechanism of missing class values. We conducted a simulation study on exponential survival trees with five classes and showed that the advantages of incorporating a substantial amount of partially labeled data can be highly significant. We also showed that model selection based on AIC values fairly works to select the best proposed algorithm on each specific data set. A case study on a real-world data set of gastric cancer provided by the Surveillance, Epidemiology and End Results (SEER) program showed a superiority of EM-CPCML to not only the other proposed EM algorithms but also conventional supervised, unsupervised and semi-supervised learning algorithms.
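    The idea of mixing hard labels with soft EM responsibilities can be sketched for a two-component exponential mixture: labelled observations keep their class, while unlabelled ones receive posterior weights. This is an illustrative simplification, not one of the paper's four variants, and the `labels` convention (0, 1, or None) is hypothetical:

```python
import math

def em_exp_mixture(x, labels=None, iters=100):
    # EM for a mixture pi*Exp(r0) + (1-pi)*Exp(r1); labels[i] in {0, 1, None}.
    if labels is None:
        labels = [None] * len(x)
    pi, r0, r1 = 0.5, 1.0 / min(x), 1.0 / max(x)   # crude deterministic init
    for _ in range(iters):
        w = []                                      # E-step: responsibilities for class 0
        for xi, lab in zip(x, labels):
            if lab is not None:
                w.append(1.0 if lab == 0 else 0.0)  # hard label overrides the posterior
            else:
                f0 = pi * r0 * math.exp(-r0 * xi)
                f1 = (1 - pi) * r1 * math.exp(-r1 * xi)
                w.append(f0 / (f0 + f1))
        pi = sum(w) / len(w)                        # M-step: weighted MLEs
        r0 = sum(w) / sum(wi * xi for wi, xi in zip(w, x))
        r1 = sum(1 - wi for wi in w) / sum((1 - wi) * xi for wi, xi in zip(w, x))
    return pi, r0, r1
```

With every observation labelled, the procedure collapses to per-class maximum likelihood in a single pass.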

  1. Faster Algorithms for Computing Longest Common Increasing Subsequences

    DEFF Research Database (Denmark)

    Kutz, Martin; Brodal, Gerth Stølting; Kaligosi, Kanela

    2011-01-01

    We present algorithms for finding a longest common increasing subsequence of two or more input sequences. For two sequences of lengths n and m, where m⩾n, we present an algorithm with an output-dependent expected running time of … and O(m) space, where ℓ is the length of an LCIS, σ is the size of the alphabet, and Sort is the time to sort each input sequence. For k⩾3 length-n sequences we present an algorithm which improves the previous best bound by more than a factor k for many inputs. In both cases, our algorithms are conceptually quite simple but rely on existing sophisticated data structures. Finally, we introduce the problem of longest common weakly-increasing (or non-decreasing) subsequences (LCWIS), for which we present an …-time algorithm for the 3-letter alphabet case. For the extensively studied longest common subsequence problem, comparable speedups have not been achieved for small...
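    For orientation, the classical O(nm)-time, O(m)-space dynamic program for the LCIS of two sequences, the baseline such output-dependent algorithms improve on, looks like this:

```python
def lcis_length(a, b):
    # best[j] = length of the longest common increasing subsequence
    # ending exactly with b[j], updated one element of a at a time.
    best = [0] * len(b)
    for x in a:
        cur = 0  # best LCIS ending with a common value < x, seen so far in b
        for j, y in enumerate(b):
            if x == y and cur + 1 > best[j]:
                best[j] = cur + 1
            elif y < x and best[j] > cur:
                cur = best[j]
    return max(best, default=0)
```

For example, [1,2,3,4] and [2,4,3,4] share the increasing subsequence [2,3,4], so the answer is 3.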

  2. A very fast implementation of 2D iterative reconstruction algorithms

    DEFF Research Database (Denmark)

    Toft, Peter Aundal; Jensen, Peter James

    1996-01-01

    It is demonstrated that iterative reconstruction algorithms can be implemented and run almost as fast as direct reconstruction algorithms. The method has been implemented in a software package that is available for free, providing reconstruction algorithms using ART, EM, and the Least Squares Conjugate Gradient Method...

  3. An EM Algorithm for Double-Pareto-Lognormal Generalized Linear Model Applied to Heavy-Tailed Insurance Claims

    Directory of Open Access Journals (Sweden)

    Enrique Calderín-Ojeda

    2017-11-01

    Full Text Available Generalized linear models might not be appropriate when the probability of extreme events is higher than that implied by the normal distribution. Extending the method for estimating the parameters of a double Pareto lognormal distribution (DPLN) in Reed and Jorgensen (2004), we develop an EM algorithm for the heavy-tailed Double-Pareto-lognormal generalized linear model. The DPLN distribution is obtained as a mixture of a lognormal distribution with a double Pareto distribution. In this paper the associated generalized linear model has the location parameter equal to a linear predictor, which is used to model insurance claim amounts for various data sets. The performance is compared with those of the generalized beta (of the second kind) and lognormal distributions.

  4. An Efficient Algorithm for the Discrete Gabor Transform using full length Windows

    DEFF Research Database (Denmark)

    Søndergaard, Peter Lempel

    2007-01-01

    This paper extends the efficient factorization of the Gabor frame operator developed by Strohmer in [1] to the Gabor analysis/synthesis operator. This provides a fast method for computing the discrete Gabor transform (DGT) and several algorithms associated with it. The algorithm is used...

  5. GPS 2.1: enhanced prediction of kinase-specific phosphorylation sites with an algorithm of motif length selection.

    Science.gov (United States)

    Xue, Yu; Liu, Zexian; Cao, Jun; Ma, Qian; Gao, Xinjiao; Wang, Qingqi; Jin, Changjiang; Zhou, Yanhong; Wen, Longping; Ren, Jian

    2011-03-01

    As the most important post-translational modification of proteins, phosphorylation plays essential roles in all aspects of biological processes. Besides experimental approaches, computational prediction of phosphorylated proteins with their kinase-specific phosphorylation sites has also emerged as a popular strategy, for its low cost, high speed and convenience. In this work, we developed a kinase-specific phosphorylation site predictor, GPS 2.1 (Group-based Prediction System), with a novel but simple approach of motif length selection (MLS). By this approach, the robustness of the prediction system was greatly improved. All algorithms from older versions of GPS were also retained and integrated in GPS 2.1. The online service and local packages of GPS 2.1 were implemented in JAVA 1.5 (J2SE 5.0) and are freely available for academic research at: http://gps.biocuckoo.org.

  6. Effective calculation algorithm for nuclear chains of arbitrary length and branching

    International Nuclear Information System (INIS)

    Chirkov, V.A.; Mishanin, B.V.

    1994-01-01

    An effective algorithm for calculating the isotope concentrations in spent nuclear fuel during storage is presented. Using the superposition principle and representing the transfer function in a rather compact form, it becomes possible to achieve high calculation speed and a moderate computer code size. The algorithm is applied to the calculation of activity, energy release and toxicity of heavy nuclides and their decay products during fuel storage. (authors). 1 ref., 4 tabs

  7. Improved algorithms for approximate string matching (extended abstract)

    Directory of Open Access Journals (Sweden)

    Papamichail Georgios

    2009-01-01

    Full Text Available Abstract. Background: The problem of approximate string matching is important in many different areas, such as computational biology, text processing and pattern recognition. A great effort has been made to design efficient algorithms addressing several variants of the problem, including comparison of two strings, approximate pattern identification in a string, or calculation of the longest common subsequence that two strings share. Results: We designed an output-sensitive algorithm solving the edit distance problem between two strings of lengths n and m respectively in time O((s − |n − m|)·min(m, n, s) + m + n) and linear space, where s is the edit distance between the two strings. This worst-case time bound sets the quadratic factor of the algorithm independent of the longest string length and improves existing theoretical bounds for this problem. The implementation of our algorithm also excels in practice, especially in cases where the two strings compared differ significantly in length. Conclusion: We have provided the design, analysis and implementation of a new algorithm for calculating the edit distance of two strings, with both theoretical and practical implications. Source code of our algorithm is available online.
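    As a baseline for comparison, the textbook O(nm) edit-distance dynamic program in linear space, the quadratic algorithm whose dependence on the longest string the record above improves, can be written as:

```python
def edit_distance(a, b):
    # Levenshtein distance with two rolling DP rows: O(nm) time, O(m) space.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,        # deletion
                           cur[j - 1] + 1,     # insertion
                           prev[j - 1] + (ca != cb)))  # substitution / match
        prev = cur
    return prev[-1]
```

For instance, "kitten" and "sitting" are at edit distance 3.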

  8. A fast fractional difference algorithm

    DEFF Research Database (Denmark)

    Jensen, Andreas Noack; Nielsen, Morten Ørregaard

    2014-01-01

    We provide a fast algorithm for calculating the fractional difference of a time series. In standard implementations, the calculation speed (number of arithmetic operations) is of order T 2, where T is the length of the time series. Our algorithm allows calculation speed of order T log...

  9. A Fast Fractional Difference Algorithm

    DEFF Research Database (Denmark)

    Jensen, Andreas Noack; Nielsen, Morten Ørregaard

    We provide a fast algorithm for calculating the fractional difference of a time series. In standard implementations, the calculation speed (number of arithmetic operations) is of order T 2, where T is the length of the time series. Our algorithm allows calculation speed of order T log...
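    The speedup described in these two records comes from replacing the O(T²) convolution with an FFT-based one. A sketch under the usual truncated-binomial definition of the fractional difference (an illustration of the idea, not the authors' code):

```python
import numpy as np

def frac_diff_coeffs(T, d):
    # Coefficients of (1 - L)^d: b_0 = 1, b_k = b_{k-1} * (k - 1 - d) / k.
    b = np.zeros(T)
    b[0] = 1.0
    for k in range(1, T):
        b[k] = b[k - 1] * (k - 1 - d) / k
    return b

def frac_diff_direct(x, d):
    # O(T^2) reference implementation: y_t = sum_k b_k * x_{t-k}.
    x = np.asarray(x, dtype=float)
    b = frac_diff_coeffs(len(x), d)
    return np.array([b[:t + 1][::-1] @ x[:t + 1] for t in range(len(x))])

def frac_diff_fft(x, d):
    # O(T log T): the same linear convolution computed with the FFT.
    x = np.asarray(x, dtype=float)
    T = len(x)
    b = frac_diff_coeffs(T, d)
    n = 2 * T  # zero-padding avoids circular wrap-around
    return np.fft.irfft(np.fft.rfft(b, n) * np.fft.rfft(x, n), n)[:T]
```

Both routines produce identical output; only the operation count differs.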

  10. A structural dynamic factor model for the effects of monetary policy estimated by the EM algorithm

    DEFF Research Database (Denmark)

    Bork, Lasse

    This paper applies the maximum likelihood based EM algorithm to a large-dimensional factor analysis of US monetary policy. Specifically, economy-wide effects of shocks to the US federal funds rate are estimated in a structural dynamic factor model in which 100+ US macroeconomic and financial time series are driven by the joint dynamics of the federal funds rate and a few correlated dynamic factors. This paper contains a number of methodological contributions to the existing literature on data-rich monetary policy analysis. Firstly, the identification scheme allows for correlated factor dynamics, as opposed to the orthogonal factors resulting from the popular principal component approach to structural factor models. Correlated factors are economically more sensible and important for a richer monetary policy transmission mechanism. Secondly, I consider both static factor loadings as well as dynamic...

  11. Hybrid Cryptosystem Using Tiny Encryption Algorithm and LUC Algorithm

    Science.gov (United States)

    Rachmawati, Dian; Sharif, Amer; Jaysilen; Andri Budiman, Mohammad

    2018-01-01

    Security is a very important issue in data transmission, and there are many methods to make files more secure. One of these methods is cryptography. Cryptography secures a file by replacing it with hidden code that covers the original content, so that anyone not involved in the cryptography cannot decrypt the hidden code to read the original file. Many methods are used in cryptography; one of them is the hybrid cryptosystem. A hybrid cryptosystem is a method that uses a symmetric algorithm to secure the file and an asymmetric algorithm to secure the symmetric key. In this research, the TEA algorithm is used as the symmetric algorithm and the LUC algorithm as the asymmetric algorithm. The system is tested by encrypting and decrypting a file with the TEA algorithm and using the LUC algorithm to encrypt and decrypt the TEA key. The result of this research is that, with the TEA algorithm encrypting the file, the ciphertext consists of characters from the ASCII (American Standard Code for Information Interchange) table in the form of hexadecimal numbers, and the ciphertext size increases by sixteen bytes for every eight characters added to the plaintext.
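    For concreteness, the symmetric half of such a hybrid scheme can be illustrated with the standard TEA block cipher (the well-known reference algorithm: 64-bit blocks, 128-bit key, 32 rounds); key exchange via LUC is omitted here:

```python
def tea_encrypt(v, key):
    # TEA encryption of one 64-bit block v = (v0, v1) with 128-bit key = 4 words.
    v0, v1 = v
    delta, s, mask = 0x9E3779B9, 0, 0xFFFFFFFF
    for _ in range(32):
        s = (s + delta) & mask
        v0 = (v0 + (((v1 << 4) + key[0]) ^ (v1 + s) ^ ((v1 >> 5) + key[1]))) & mask
        v1 = (v1 + (((v0 << 4) + key[2]) ^ (v0 + s) ^ ((v0 >> 5) + key[3]))) & mask
    return v0, v1

def tea_decrypt(v, key):
    # Inverse of tea_encrypt: run the rounds backwards.
    v0, v1 = v
    delta, mask = 0x9E3779B9, 0xFFFFFFFF
    s = (delta * 32) & mask
    for _ in range(32):
        v1 = (v1 - (((v0 << 4) + key[2]) ^ (v0 + s) ^ ((v0 >> 5) + key[3]))) & mask
        v0 = (v0 - (((v1 << 4) + key[0]) ^ (v1 + s) ^ ((v1 >> 5) + key[1]))) & mask
        s = (s - delta) & mask
    return v0, v1
```

In the hybrid setting, the 128-bit `key` is what the asymmetric (LUC) half would encrypt for transport.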

  12. Tap-length optimization of adaptive filters used in stereophonic acoustic echo cancellation

    DEFF Research Database (Denmark)

    Kar, Asutosh; Swamy, M.N.S.

    2017-01-01

    An adaptive filter with a large number of weights or taps is necessary for stereophonic acoustic echo cancellation (SAEC), depending on the room impulse response and acoustic path where the cancellation is performed. However, a large tap-length results in slow convergence and increases the complexity of the tapped delay line structure for FIR adaptive filters. To overcome this problem, there is a need for an optimum tap-length-estimation algorithm that provides better convergence for the adaptive filters used in SAEC. This paper presents a solution to the problem of balancing convergence and steady-state performance of long length adaptive filters used for SAEC by proposing a new tap-length-optimization algorithm. The optimum tap length and step size of the adaptive filter are derived considering an impulse response with an exponentially-decaying envelope, which models a wide range...

  13. A space-efficient algorithm for local similarities.

    Science.gov (United States)

    Huang, X Q; Hardison, R C; Miller, W

    1990-10-01

    Existing dynamic-programming algorithms for identifying similar regions of two sequences require time and space proportional to the product of the sequence lengths. Often this space requirement is more limiting than the time requirement. We describe a dynamic-programming local-similarity algorithm that needs only space proportional to the sum of the sequence lengths. The method can also find repeats within a single long sequence. To illustrate the algorithm's potential, we discuss comparison of a 73,360 nucleotide sequence containing the human beta-like globin gene cluster and a corresponding 44,594 nucleotide sequence for rabbit, a problem well beyond the capabilities of other dynamic-programming software.
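    The space observation above is easiest to see for the score alone: the best local-alignment score needs only two DP rows. A compact Smith-Waterman-style sketch with illustrative unit scores (the published algorithm additionally recovers the aligned regions themselves in linear space):

```python
def local_similarity_score(a, b, match=1, mismatch=-1, gap=-1):
    # Best local-alignment score in O(len(b)) space using two rolling rows.
    prev = [0] * (len(b) + 1)
    best = 0
    for ca in a:
        cur = [0]
        for j, cb in enumerate(b, 1):
            s = max(0,                                   # restart: local alignment
                    prev[j - 1] + (match if ca == cb else mismatch),
                    prev[j] + gap,
                    cur[j - 1] + gap)
            cur.append(s)
            best = max(best, s)
        prev = cur
    return best
```

Memory stays proportional to one sequence length, which is what makes comparisons like the 73,360 vs 44,594 nucleotide example feasible.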

  14. Persistence length of wormlike micelles composed of ionic surfactants: self-consistent-field predictions

    NARCIS (Netherlands)

    Lauw, Y.; Leermakers, F.A.M.; Cohen Stuart, M.A.

    2007-01-01

    The persistence length of a wormlike micelle composed of ionic surfactants CnEmXk in an aqueous solvent is predicted by means of the self-consistent-field theory, where CnEm is the conventional nonionic surfactant and Xk is an additional sequence of k weakly charged (pH-dependent) segments. By...

  15. Clustering performance comparison using K-means and expectation maximization algorithms.

    Science.gov (United States)

    Jung, Yong Gyu; Kang, Min Soo; Heo, Jun

    2014-11-14

    Clustering is an important means of data mining based on separating data categories by similar features. Unlike classification algorithms, clustering belongs to the unsupervised type of algorithms. Two representatives of the clustering algorithms are the K-means and the expectation maximization (EM) algorithm. Linear regression analysis was extended to the category-type dependent variable, while logistic regression was achieved using a linear combination of independent variables. To predict the possibility of occurrence of an event, a statistical approach is used. However, the classification of all data by means of logistic regression analysis cannot guarantee the accuracy of the results. In this paper, logistic regression analysis is applied to EM clusters and the K-means clustering method for quality assessment of red wine, and a method is proposed for ensuring the accuracy of the classification results.
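    The contrast between the two algorithms is easiest to see in one dimension: K-means makes hard assignments to the nearest centroid, whereas EM would weight each point by a soft responsibility. A tiny K = 2 sketch (illustrative only, not the paper's wine-quality setup):

```python
def kmeans_1d(xs, iters=20):
    # Hard-assignment K-means with K = 2 on scalars; deterministic
    # initialisation at the extremes of the data.
    c = [min(xs), max(xs)]
    for _ in range(iters):
        groups = ([], [])
        for x in xs:
            # True -> index 1: assign x to whichever centroid is closer
            groups[abs(x - c[1]) < abs(x - c[0])].append(x)
        c = [sum(g) / len(g) if g else c[i] for i, g in enumerate(groups)]
    return sorted(c)
```

Replacing the hard 0/1 assignment with posterior probabilities under two Gaussians would turn this loop into the EM algorithm for a two-component mixture.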

  16. The ocular components in anisometropia (Componentes oculares em anisometropia)

    Directory of Open Access Journals (Sweden)

    David Tayah

    2007-06-01

    Full Text Available PURPOSE: To compare the correlations of the ocular components (axial length, anterior segment length, mean corneal power, vitreous chamber depth and equivalent refractive power) with the total refractive error in the eye with the lower and the eye with the higher ametropia in anisometropic subjects. METHODS: An analytical survey was carried out in a population of 68 anisometropes of two or more diopters seen at the Ophthalmology Clinic of Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo. The anisometropes underwent objective and subjective static refraction, keratometry and ultrasonic biometry. RESULTS: There was no significant difference between the measured ocular components of the eyes with the lower and the higher ametropia. The eyes with the lower ametropia showed the same significant correlations observed in emmetropic eyes, namely correlation of refraction with anterior segment length and axial length, and correlation of axial length with corneal power and vitreous chamber depth. The eyes with the higher ametropia showed a significant correlation of refraction with axial length and of axial length with vitreous chamber depth. In both eyes, a significant correlation between lens power and anterior chamber depth was also observed. CONCLUSION: The eyes with the lower ametropia developed the correlations most frequently observed in emmetropic eyes; the eyes with the higher ametropia did not develop the same correlations as emmetropic eyes.

  17. Effect of Calcium and Potassium on Antioxidant System of <em>Vicia faba</em> L. Under Cadmium Stress

    Directory of Open Access Journals (Sweden)

    Hayssam M. Ali

    2012-05-01

    Full Text Available Cadmium (Cd) in soil poses a major threat to plant growth and productivity. In the present experiment, we studied the effect of calcium (Ca2+) and/or potassium (K+) on the antioxidant system, accumulation of proline (Pro) and malondialdehyde (MDA), and content of photosynthetic pigments, cadmium (Cd) and nutrients, <em>i.e.</em>, Ca2+ and K+, in leaf of <em>Vicia faba</em> L. (cv. TARA) under Cd stress. Plants grown in the presence of Cd exhibited reduced growth traits [root length (RL) plant−1, shoot length (SL) plant−1, root fresh weight (RFW) plant−1, shoot fresh weight (SFW) plant−1, root dry weight (RDW) plant−1 and shoot dry weight (SDW) plant−1] and concentration of Ca2+, K+, chlorophyll (Chl) <em>a</em> and Chl <em>b</em> content, except content of MDA, Cd and Pro. The antioxidant enzymes [peroxidase (POD) and superoxide dismutase (SOD)] slightly increased as compared to control under Cd stress. However, a significant improvement was observed in all growth traits and content of Ca2+, K+, Chl <em>a</em>, Chl <em>b</em>, Pro and activity of the antioxidant enzymes catalase (CAT), POD and SOD in plants subjected to Ca2+ and/or K+. The maximum alleviating effect was recorded in the plants grown in medium containing Ca2+ and K+ together. This study indicates that the application of Ca2+ and/or K+ had a significant and synergistic effect on plant growth. Also, application of Ca2+ and/or K+ was highly effective against the toxicity of Cd by improving the activity of antioxidant enzymes and solutes, which led to enhanced growth of faba bean plants.

  18. A Heuristic T-S Fuzzy Model for the Pumped-Storage Generator-Motor Using Variable-Length Tree-Seed Algorithm-Based Competitive Agglomeration

    Directory of Open Access Journals (Sweden)

    Jianzhong Zhou

    2018-04-01

    Full Text Available With the fast development of artificial intelligence techniques, data-driven modeling approaches are becoming hotspots in both academic research and engineering practice. This paper proposes a novel data-driven T-S fuzzy model to precisely describe the complicated dynamic behaviors of the pumped storage generator motor (PSGM). In the premise fuzzy partition of the proposed T-S fuzzy model, a novel variable-length tree-seed algorithm based competitive agglomeration (VTSA-CA) algorithm is presented to determine the optimal number of clusters automatically and improve the fuzzy clustering performance. Besides, in order to improve the modeling accuracy for PSGM, the input and output formats in the T-S fuzzy model are selected by an economical parameter controlled auto-regressive (CAR) model derived from a high-order transfer function of PSGM considering the distributed components in the water diversion system of the power plant. The effectiveness and superiority of the T-S fuzzy model for PSGM under different working conditions are validated by performing comparative studies with both practical data and the conventional mechanistic model.

  19. Fast implementation of length-adaptive privacy amplification in quantum key distribution

    International Nuclear Information System (INIS)

    Zhang Chun-Mei; Li Mo; Huang Jing-Zheng; Li Hong-Wei; Li Fang-Yi; Wang Chuan; Yin Zhen-Qiang; Chen Wei; Han Zhen-Fu; Treeviriyanupab Patcharapong; Sripimanwat Keattisak

    2014-01-01

    Post-processing is indispensable in quantum key distribution (QKD), which is aimed at sharing secret keys between two distant parties. It mainly consists of key reconciliation and privacy amplification, which is used for sharing the same keys and for distilling unconditional secret keys. In this paper, we focus on speeding up the privacy amplification process by choosing a simple multiplicative universal class of hash functions. By constructing an optimal multiplication algorithm based on four basic multiplication algorithms, we give a fast software implementation of length-adaptive privacy amplification. “Length-adaptive” indicates that the implementation of privacy amplification automatically adapts to different lengths of input blocks. When the lengths of the input blocks are 1 Mbit and 10 Mbit, the speed of privacy amplification can be as fast as 14.86 Mbps and 10.88 Mbps, respectively. Thus, it is practical for GHz or even higher repetition frequency QKD systems. (general)

  20. Graph run-length matrices for histopathological image segmentation.

    Science.gov (United States)

    Tosun, Akif Burak; Gunduz-Demir, Cigdem

    2011-03-01

    The histopathological examination of tissue specimens is essential for cancer diagnosis and grading. However, this examination is subject to a considerable amount of observer variability as it mainly relies on visual interpretation of pathologists. To alleviate this problem, it is very important to develop computational quantitative tools, for which image segmentation constitutes the core step. In this paper, we introduce an effective and robust algorithm for the segmentation of histopathological tissue images. This algorithm incorporates the background knowledge of the tissue organization into segmentation. For this purpose, it quantifies spatial relations of cytological tissue components by constructing a graph and uses this graph to define new texture features for image segmentation. This new texture definition makes use of the idea of gray-level run-length matrices. However, it considers the runs of cytological components on a graph to form a matrix, instead of considering the runs of pixel intensities. Working with colon tissue images, our experiments demonstrate that the texture features extracted from "graph run-length matrices" lead to high segmentation accuracies, also providing a reasonable number of segmented regions. Compared with four other segmentation algorithms, the results show that the proposed algorithm is more effective in histopathological image segmentation.
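    The underlying gray-level idea, counting runs of identical values along a scan line, can be sketched in a few lines; the paper's contribution is to replace pixel-intensity runs with runs of cytological components along paths of a graph:

```python
from collections import Counter

def run_length_matrix(row):
    # Count (value, run-length) pairs along a 1-D scan line,
    # the building block of a gray-level run-length matrix.
    runs = Counter()
    i = 0
    while i < len(row):
        j = i
        while j < len(row) and row[j] == row[i]:
            j += 1                      # extend the current run
        runs[(row[i], j - i)] += 1      # record (gray level, run length)
        i = j
    return runs
```

For a 2-D image, the same counting is repeated per row (or per direction) and the counts are accumulated into one matrix.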

  1. Classification of Ultrasonic NDE Signals Using the Expectation Maximization (EM) and Least Mean Square (LMS) Algorithms

    International Nuclear Information System (INIS)

    Kim, Dae Won

    2005-01-01

    Ultrasonic inspection methods are widely used for detecting flaws in materials. The signal analysis step plays a crucial part in the data interpretation process. A number of signal processing methods have been proposed to classify ultrasonic flaw signals. One of the more popular methods involves the extraction of an appropriate set of features followed by the use of a neural network for the classification of the signals in the feature space. This paper describes an alternative approach which uses the least mean square (LMS) method and expectation maximization (EM) algorithm with model-based deconvolution, employed for classifying nondestructive evaluation (NDE) signals from steam generator tubes in a nuclear power plant. The signals due to cracks and deposits are not significantly different, yet they must be discriminated to prevent disasters such as contamination of water or explosion. A model-based deconvolution has been described to facilitate comparison of classification results. The method uses the space alternating generalized expectation maximization (SAGE) algorithm in conjunction with the Newton-Raphson method, which uses the Hessian parameter resulting in fast convergence, to estimate the time of flight and the distance between the tube wall and the ultrasonic sensor. Results using these schemes for the classification of ultrasonic signals from cracks and deposits within steam generator tubes are presented and show reasonable performance

  2. Similarity-regulation of OS-EM for accelerated SPECT reconstruction

    Science.gov (United States)

    Vaissier, P. E. B.; Beekman, F. J.; Goorden, M. C.

    2016-06-01

    Ordered subsets expectation maximization (OS-EM) is widely used to accelerate image reconstruction in single photon emission computed tomography (SPECT). Speedup of OS-EM over maximum likelihood expectation maximization (ML-EM) is close to the number of subsets used. Although a high number of subsets can shorten reconstruction times significantly, it can also cause severe image artifacts such as improper erasure of reconstructed activity if projections contain few counts. We recently showed that such artifacts can be prevented by using a count-regulated OS-EM (CR-OS-EM) algorithm which automatically adapts the number of subsets for each voxel based on the estimated number of counts that the voxel contributed to the projections. While CR-OS-EM reached high speed-up over ML-EM in high-activity regions of images, speed in low-activity regions could still be very slow. In this work we propose similarity-regulated OS-EM (SR-OS-EM) as a much faster alternative to CR-OS-EM. SR-OS-EM also automatically and locally adapts the number of subsets, but it uses a different criterion for subset regulation: the number of subsets that is used for updating an individual voxel depends on how similar the reconstruction algorithm would update the estimated activity in that voxel with different subsets. Reconstructions of an image quality phantom and in vivo scans show that SR-OS-EM retains all of the favorable properties of CR-OS-EM, while reconstruction speed can be up to an order of magnitude higher in low-activity regions. Moreover our results suggest that SR-OS-EM can be operated with identical reconstruction parameters (including the number of iterations) for a wide range of count levels, which can be an additional advantage from a user perspective since users would only have to post-filter an image to present it at an appropriate noise level.
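The plain OS-EM update that SR-OS-EM regulates can be sketched for a generic linear Poisson model as below; this is the textbook algorithm, not the authors' similarity-regulated variant, and the toy system matrix setup is an assumption.

```python
import numpy as np

def os_em(A, y, subsets, n_iter=10):
    """Ordered-subsets EM for y ~ Poisson(Ax): each sub-iteration uses
    only one block of projection rows, so one pass over all subsets
    performs roughly len(subsets) ML-EM-like updates (the speedup
    the abstract refers to)."""
    x = np.ones(A.shape[1])
    for _ in range(n_iter):
        for rows in subsets:
            As, ys = A[rows], y[rows]
            ratio = ys / np.maximum(As @ x, 1e-12)  # measured / predicted
            sens = As.T @ np.ones(len(rows))        # subset sensitivity
            nz = sens > 0                           # skip unseen voxels
            x[nz] *= (As.T @ ratio)[nz] / sens[nz]  # multiplicative update
    return x
```

With a single subset containing all rows this reduces exactly to ML-EM, which is why artifacts from too many subsets can be avoided by locally lowering the subset count, as in CR-OS-EM and SR-OS-EM.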

  3. A new simple iterative reconstruction algorithm for SPECT transmission measurement

    International Nuclear Information System (INIS)

    Hwang, D.S.; Zeng, G.L.

    2005-01-01

    This paper proposes a new iterative reconstruction algorithm for transmission tomography and compares this algorithm with several other methods. The new algorithm is simple and resembles the emission ML-EM algorithm in form. Due to its simplicity, it is easy to implement and fast to compute a new update at each iteration. The algorithm also always guarantees non-negative solutions. Evaluations are performed using simulation studies and real phantom data. Comparisons with other algorithms such as convex, gradient, and logMLEM show that the proposed algorithm is as good as others and performs better in some cases

  4. Convolutional Encoder and Viterbi Decoder Using SOPC For Variable Constraint Length

    DEFF Research Database (Denmark)

    Kulkarni, Anuradha; Dnyaneshwar, Mantri; Prasad, Neeli R.

    2013-01-01

    Convolution encoder and Viterbi decoder are the basic and important blocks in any Code Division Multiple Access (CDMA) system. They are widely used in communication systems due to their error-correcting capability, but the performance degrades with variable constraint length. In this context, to have...... detailed analysis, this paper deals with the implementation of a convolution encoder and Viterbi decoder using a system on programmable chip (SOPC). It uses variable constraint lengths of 7, 8 and 9 bits for 1/2 and 1/3 code rates. By analyzing the Viterbi algorithm it is seen that our algorithm has a better...

  5. On factoring RSA modulus using random-restart hill-climbing algorithm and Pollard’s rho algorithm

    Science.gov (United States)

    Budiman, M. A.; Rachmawati, D.

    2017-12-01

    The security of the widely-used RSA public key cryptography algorithm depends on the difficulty of factoring a big integer into two large prime numbers. For many years, the integer factorization problem has been intensively and extensively studied in the field of number theory. As a result, a lot of deterministic algorithms such as Euler’s algorithm, Kraitchik’s, and variants of Pollard’s algorithms have been researched comprehensively. Our study takes a rather uncommon approach: rather than making use of intensive number theory, we attempt to factorize the RSA modulus n by using the random-restart hill-climbing algorithm, which belongs to the class of metaheuristic algorithms. The factorization time of RSA moduli with different lengths is recorded and compared with the factorization time of Pollard’s rho algorithm, which is a deterministic algorithm. Our experimental results indicate that while the random-restart hill-climbing algorithm is an acceptable candidate to factorize smaller RSA moduli, its factorization speed is much slower than that of Pollard’s rho algorithm.
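The Pollard's rho baseline used in the comparison can be sketched with Floyd cycle-finding on the iteration x → x² + c (mod n); the retry-with-a-new-constant policy on failure is our simplification.

```python
from math import gcd

def pollard_rho(n, c=1):
    """Return a non-trivial factor of composite n by detecting a cycle
    in the pseudo-random sequence x -> (x*x + c) mod n."""
    x, y, d = 2, 2, 1
    while d == 1:
        x = (x * x + c) % n                 # tortoise: one step
        y = ((y * y + c) ** 2 + c) % n      # hare: two steps
        d = gcd(abs(x - y), n)
    return d if d != n else pollard_rho(n, c + 1)  # retry on failure
```

The expected running time is roughly O(n^(1/4)) group operations, which is why it so clearly outpaces a metaheuristic search over candidate factors.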

  6. Hardware modules of the RSA algorithm

    Directory of Open Access Journals (Sweden)

    Škobić Velibor

    2014-01-01

    Full Text Available This paper describes basic principles of data protection using the RSA algorithm, as well as algorithms for its calculation. The RSA algorithm is implemented on the FPGA integrated circuit EP4CE115F29C7, family Cyclone IV, Altera. Four modules of the Montgomery algorithm are designed using VHDL. Synthesis and simulation are done using Quartus II software and ModelSim. The modules are analyzed for different key lengths (16 to 1024) in terms of the number of logic elements, the maximum frequency and speed.
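The core RSA operation that the Montgomery-multiplier modules accelerate is modular exponentiation; a software sketch of the square-and-multiply loop (with toy parameters, not the FPGA design) is:

```python
def mod_exp(base, exp, mod):
    """Right-to-left square-and-multiply: computes base**exp % mod with
    one modular squaring per exponent bit, plus a modular multiply for
    each set bit. Hardware replaces these multiplies with Montgomery
    multiplications to avoid costly trial division."""
    result = 1
    base %= mod
    while exp:
        if exp & 1:
            result = (result * base) % mod  # multiply on a set bit
        base = (base * base) % mod          # square every iteration
        exp >>= 1
    return result
```

For a toy RSA key n = 33, e = 3, d = 7, encryption and decryption are just two calls to this function.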

  7. Online learning algorithm for ensemble of decision rules

    KAUST Repository

    Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2011-01-01

    We describe an online learning algorithm that builds a system of decision rules for a classification problem. Rules are constructed according to the minimum description length principle by a greedy algorithm or using the dynamic programming approach

  8. Optimization of inhibitory decision rules relative to length and coverage

    KAUST Repository

    Alsolami, Fawaz

    2012-01-01

    The paper is devoted to the study of algorithms for optimization of inhibitory rules relative to the length and coverage. In contrast with usual rules that have on the right-hand side a relation "attribute ≠ value", inhibitory rules have a relation "attribute = value" on the right-hand side. The considered algorithms are based on extensions of dynamic programming. © 2012 Springer-Verlag.

  9. Vector Network Coding Algorithms

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...

  10. Upper Bound for Queue length in Regulated Burst Service Scheduling

    Directory of Open Access Journals (Sweden)

    Mahmood Daneshvar Farzanegan

    2016-01-01

    Full Text Available Quality of Service (QoS) provisioning is very important in next-generation computer/communication networks because of increasing multimedia services. Hence, many investigations have been performed in this area. Scheduling algorithms affect QoS provisioning. Lately, a scheduling algorithm called Regulated Burst Service Scheduling (RBSS) was suggested by the author in [1] to provide a better service to bursty and delay-sensitive services such as video. One of the most significant features of RBSS is considering the burstiness of arrival traffic in the scheduling algorithm. In this paper, an upper bound on queue length (or buffer size) and a service curve are calculated by Network Calculus analysis for RBSS. Because in RBSS the queue length is a parameter considered by the scheduling arbitrator, the analysis yields a differential inequality from which the service curve is obtained. To simplify, arrival traffic is assumed to be linear, as defined clearly in the paper. This paper helps to analyze delay in RBSS for traffic with different specifications. Therefore, QoS provisioning can be evaluated.

  11. Optimized Min-Sum Decoding Algorithm for Low Density Parity Check Codes

    OpenAIRE

    Mohammad Rakibul Islam; Dewan Siam Shafiullah; Muhammad Mostafa Amir Faisal; Imran Rahman

    2011-01-01

    Low Density Parity Check (LDPC) code approaches Shannon–limit performance for binary field and long code lengths. However, performance of binary LDPC code is degraded when the code word length is small. An optimized min-sum algorithm for LDPC code is proposed in this paper. In this algorithm unlike other decoding methods, an optimization factor has been introduced in both check node and bit node of the Min-sum algorithm. The optimization factor is obtained before decoding program, and the sam...

  12. Phase retrieval via incremental truncated amplitude flow algorithm

    Science.gov (United States)

    Zhang, Quanbing; Wang, Zhifa; Wang, Linjie; Cheng, Shichao

    2017-10-01

    This paper considers the phase retrieval problem of recovering the unknown signal from the given quadratic measurements. A phase retrieval algorithm based on Incremental Truncated Amplitude Flow (ITAF) which combines the ITWF algorithm and the TAF algorithm is proposed. The proposed ITAF algorithm enhances the initialization by performing both of the truncation methods used in ITWF and TAF respectively, and improves the performance in the gradient stage by applying the incremental method proposed in ITWF to the loop stage of TAF. Moreover, the original sampling vector and measurements are preprocessed before initialization according to the variance of the sensing matrix. Simulation experiments verified the feasibility and validity of the proposed ITAF algorithm. The experimental results show that it can obtain higher success rate and faster convergence speed compared with other algorithms. Especially, for the noiseless random Gaussian signals, ITAF can recover any real-valued signal accurately from the magnitude measurements whose number is about 2.5 times of the signal length, which is close to the theoretic limit (about 2 times of the signal length). And it usually converges to the optimal solution within 20 iterations which is much less than the state-of-the-art algorithms.

  13. A transport-based condensed history algorithm

    International Nuclear Information System (INIS)

    Tolar, D. R. Jr.

    1999-01-01

    Condensed history algorithms are approximate electron transport Monte Carlo methods in which the cumulative effects of multiple collisions are modeled in a single step of (user-specified) path length s0. This path length is the distance each Monte Carlo electron travels between collisions. Current condensed history techniques utilize a splitting routine over the range 0 ≤ s ≤ s0. For example, the PENELOPE method splits each step into two substeps; one with length ξs0 and one with length (1 − ξ)s0, where ξ is a random number from (0, 1). Because s0 is fixed (not sampled from an exponential distribution), conventional condensed history schemes are not transport processes. Here the authors describe a new condensed history algorithm that is a transport process. The method simulates a transport equation that approximates the exact Boltzmann equation. The new transport equation has a larger mean free path than, and preserves two angular moments of, the Boltzmann equation. Thus, the new process is solved more efficiently by Monte Carlo, and it conserves both particles and scattering power

  14. Comparison study of reconstruction algorithms for prototype digital breast tomosynthesis using various breast phantoms.

    Science.gov (United States)

    Kim, Ye-seul; Park, Hye-suk; Lee, Haeng-Hwa; Choi, Young-Wook; Choi, Jae-Gu; Kim, Hak Hee; Kim, Hee-Joung

    2016-02-01

    Digital breast tomosynthesis (DBT) is a recently developed system for three-dimensional imaging that offers the potential to reduce the false positives of mammography by preventing tissue overlap. Many qualitative evaluations of digital breast tomosynthesis were previously performed by using a phantom with an unrealistic model and with heterogeneous background and noise, which is not representative of real breasts. The purpose of the present work was to compare reconstruction algorithms for DBT by using various breast phantoms; validation was also performed by using patient images. DBT was performed by using a prototype unit that was optimized for very low exposures and rapid readout. Three algorithms were compared: a back-projection (BP) algorithm, a filtered BP (FBP) algorithm, and an iterative expectation maximization (EM) algorithm. To compare the algorithms, three types of breast phantoms (homogeneous background phantom, heterogeneous background phantom, and anthropomorphic breast phantom) were evaluated, and clinical images were also reconstructed by using the different reconstruction algorithms. The in-plane image quality was evaluated based on the line profile and the contrast-to-noise ratio (CNR), and out-of-plane artifacts were evaluated by means of the artifact spread function (ASF). Parenchymal texture features of contrast and homogeneity were computed based on reconstructed images of an anthropomorphic breast phantom. The clinical images were studied to validate the effect of reconstruction algorithms. The results showed that the CNRs of masses reconstructed by using the EM algorithm were slightly higher than those obtained by using the BP algorithm, whereas the FBP algorithm yielded much lower CNR due to its high fluctuations of background noise. The FBP algorithm provides the best conspicuity for larger calcifications by enhancing their contrast and sharpness more than the other algorithms; however, in the case of small-size and low

  15. Application of the region-time-length algorithm to study of earthquake precursors in the Thailand-Laos-Myanmar borders

    Science.gov (United States)

    Puangjaktha, P.; Pailoplee, S.

    2018-04-01

    In order to examine the precursory seismic quiescence of upcoming hazardous earthquakes, the seismicity data available in the vicinity of the Thailand-Laos-Myanmar borders was analyzed using the Region-Time-Length (RTL) algorithm, a statistics-based technique. The utilized earthquake data were obtained from the International Seismological Centre, after which the homogeneity and completeness of the catalogue were improved. After performing iterative tests with different values of the r0 and t0 parameters, the values r0 = 120 km and t0 = 2 yr yielded reasonable estimates of the anomalous RTL scores, in both temporal variation and spatial distribution, a few years prior to five out of eight recognized strong-to-major earthquakes. Statistical evaluation of both the correlation coefficient and the stochastic process for the RTL was performed and revealed that the RTL scores obtained here are not artificial or random phenomena. Therefore, the prospective earthquake sources mentioned here should be recognized and effective mitigation plans should be provided.
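A simplified form of the RTL score can be sketched as below. This is a toy illustration only: published RTL analyses detrend each factor against background seismicity and normalize by its standard deviation, which this sketch omits, and the magnitude-to-rupture-length relation used here is just one common empirical choice. All names and the event-array layout are ours.

```python
import numpy as np

def rtl_score(events, x, y, t, r0, t0, n_events=10):
    """Toy RTL score at map position (x, y) and time t. `events` rows are
    (xi, yi, ti, Mi). The score multiplies distance- (R), elapsed-time- (T)
    and rupture-length- (L) weighted sums over the last n_events
    earthquakes before t; low values relative to background indicate
    seismic quiescence."""
    past = events[events[:, 2] < t][-n_events:]
    r = np.hypot(past[:, 0] - x, past[:, 1] - y)   # epicentral distances
    rup_len = 10 ** (0.5 * past[:, 3] - 1.8)       # empirical length from magnitude
    R = np.sum(np.exp(-r / r0))
    T = np.sum(np.exp(-(t - past[:, 2]) / t0))
    L = np.sum(rup_len / r)
    return R * T * L
```

The characteristic radius r0 and timescale t0 play exactly the role of the tuning parameters the abstract reports iterating over.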

  16. Mean-variance analysis of block-iterative reconstruction algorithms modeling 3D detector response in SPECT

    Science.gov (United States)

    Lalush, D. S.; Tsui, B. M. W.

    1998-06-01

    We study the statistical convergence properties of two fast iterative reconstruction algorithms, the rescaled block-iterative (RBI) and ordered subset (OS) EM algorithms, in the context of cardiac SPECT with 3D detector response modeling. The Monte Carlo method was used to generate nearly noise-free projection data modeling the effects of attenuation, detector response, and scatter from the MCAT phantom. One thousand noise realizations were generated with an average count level approximating a typical Tl-201 cardiac study. Each noise realization was reconstructed using the RBI and OS algorithms for cases with and without detector response modeling. For each iteration up to twenty, we generated mean and variance images, as well as covariance images for six specific locations. Both OS and RBI converged in the mean to results that were close to the noise-free ML-EM result using the same projection model. When detector response was not modeled in the reconstruction, RBI exhibited considerably lower noise variance than OS for the same resolution. When 3D detector response was modeled, the RBI-EM provided a small improvement in the tradeoff between noise level and resolution recovery, primarily in the axial direction, while OS required about half the number of iterations of RBI to reach the same resolution. We conclude that OS is faster than RBI, but may be sensitive to errors in the projection model. Both OS-EM and RBI-EM are effective alternatives to the ML-EM algorithm, but noise level and speed of convergence depend on the projection model used.

  17. Quantitation of PET data with the EM reconstruction technique

    International Nuclear Information System (INIS)

    Rosenqvist, G.; Dahlbom, M.; Erikson, L.; Bohm, C.; Blomqvist, G.

    1989-01-01

    The expectation maximization (EM) algorithm offers high spatial resolution and excellent noise reduction with low-statistics PET data, since it incorporates the Poisson nature of the data. The main difficulties are long computation times, finding appropriate criteria to terminate the reconstruction, and quantifying the resulting image data. In the present work a modified EM algorithm has been implemented on a VAX 11/780. Its capability to quantify image data has been tested in phantom studies and in two clinical cases: cerebral blood flow studies and dopamine D2-receptor studies. Data from phantom studies indicate the superiority of images reconstructed with the EM technique compared to images reconstructed with the conventional filtered back-projection (FB) technique in areas with low statistics. At higher statistics the noise characteristics of the two techniques coincide. Clinical data support these findings

  18. Online learning algorithm for ensemble of decision rules

    KAUST Repository

    Chikalov, Igor

    2011-01-01

    We describe an online learning algorithm that builds a system of decision rules for a classification problem. Rules are constructed according to the minimum description length principle by a greedy algorithm or using the dynamic programming approach. © 2011 Springer-Verlag.

  19. Morphological identification of the Soprano Pipistrelle (Pipistrellus pygmaeus Leach, 1825) in Croatia.

    Directory of Open Access Journals (Sweden)

    Igor Pavlinić

    2008-07-01

    Full Text Available Abstract After the discovery of two different phonic types within the common pipistrelle (Pipistrellus pipistrellus), mtDNA analysis confirmed the existence of two separate species, named the common pipistrelle (P. pipistrellus) and the soprano pipistrelle (P. pygmaeus). The discrimination of these two cryptic species using external characters and measures has proved to be somewhat problematic. We examined two colonies of soprano pipistrelle from Donji Miholjac, Croatia. As a result, only two characters proved to be of help for field identification: wing venation (89% of cases) and penis morphology and colour for males. The difference in length between the 2nd and 3rd phalanges of the 3rd finger should be discarded as a diagnostic trait between P. pipistrellus and P. pygmaeus in Croatia. Abstract (translated from Italian): Morphological identification of the soprano pipistrelle (Pipistrellus pygmaeus, Leach, 1825) in Croatia. Following the description of two different "phonic types" within the common pipistrelle (Pipistrellus pipistrellus) and the subsequent genetic confirmation of the existence of two distinct species, designated as the common pipistrelle (P. pipistrellus) and the soprano pipistrelle (P. pygmaeus), distinguishing the two species on the basis of external morphological characteristics has proved a difficult problem. On the basis of the distinguishing characters and biometric differences proposed by other authors, two colonies of soprano pipistrelle were examined at Donji Miholjac, Croatia. The results obtained show that, among all the potential characters proposed so far, only two are useful for direct identification in the field: wing venation, which allowed discrimination in 89% of the specimens analysed, and the colouration and morphology of the penis in males. The

  20. A generalized global alignment algorithm.

    Science.gov (United States)

    Huang, Xiaoqiu; Chao, Kun-Mao

    2003-01-22

    Homologous sequences are sometimes similar over some regions but different over other regions. Homologous sequences have a much lower global similarity if the different regions are much longer than the similar regions. We present a generalized global alignment algorithm for comparing sequences with intermittent similarities, an ordered list of similar regions separated by different regions. A generalized global alignment model is defined to handle sequences with intermittent similarities. A dynamic programming algorithm is designed to compute an optimal general alignment in time proportional to the product of sequence lengths and in space proportional to the sum of sequence lengths. The algorithm is implemented as a computer program named GAP3 (Global Alignment Program Version 3). The generalized global alignment model is validated by experimental results produced with GAP3 on both DNA and protein sequences. The GAP3 program extends the ability of standard global alignment programs to recognize homologous sequences of lower similarity. The GAP3 program is freely available for academic use at http://bioinformatics.iastate.edu/aat/align/align.html.
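The standard global alignment model that GAP3 generalizes can be sketched with the classic dynamic program (score only, linear gap penalty, two-row space optimization); the scoring values here are arbitrary illustrations, not GAP3's defaults.

```python
def global_align_score(s, t, match=1, mismatch=-1, gap=-2):
    """Needleman-Wunsch global alignment score in O(len(s)*len(t)) time
    and O(len(t)) space, keeping only the previous DP row."""
    prev = [j * gap for j in range(len(t) + 1)]   # aligning "" against t[:j]
    for i, a in enumerate(s, 1):
        curr = [i * gap]                          # aligning s[:i] against ""
        for j, b in enumerate(t, 1):
            diag = prev[j - 1] + (match if a == b else mismatch)
            curr.append(max(diag,                 # substitute/match
                            prev[j] + gap,        # gap in t
                            curr[j - 1] + gap))   # gap in s
        prev = curr
    return prev[-1]
```

GAP3 extends this model so that long weakly-similar regions between the conserved blocks are not charged at the full per-column penalty, which is what lets it recognize homologs with intermittent similarity.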

  1. On algorithm for building of optimal α-decision trees

    KAUST Repository

    Alkhalid, Abdulaziz; Chikalov, Igor; Moshkov, Mikhail

    2010-01-01

    The paper describes an algorithm that constructs approximate decision trees (α-decision trees), which are optimal relatively to one of the following complexity measures: depth, total path length or number of nodes. The algorithm uses dynamic

  2. a Threshold-Free Filtering Algorithm for Airborne LIDAR Point Clouds Based on Expectation-Maximization

    Science.gov (United States)

    Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.

    2018-04-01

    Filtering is a key step for most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most of them suffer from parameter setting or threshold adjusting, which is time-consuming and reduces the degree of automation of the algorithm. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization. The proposed algorithm is developed on the assumption that the point cloud can be seen as a mixture of Gaussian models. The separation of ground points and non-ground points can then be recast as the separation of the components of a mixed Gaussian model. Expectation-maximization (EM) is applied to realize the separation: EM is used to calculate maximum likelihood estimates of the mixture parameters. Using the estimated parameters, the likelihood of each point belonging to ground or object can be computed. After several iterations, each point is labelled with the component of larger likelihood. Furthermore, intensity information is utilized to optimize the filtering results acquired using the EM method. The proposed algorithm was tested using two different datasets used in practice. Experimental results showed that the proposed method can filter non-ground points effectively. To quantitatively evaluate the proposed method, this paper adopted the dataset provided by the ISPRS for the test. The proposed algorithm obtains a 4.48% total error, which is much lower than most of the eight classical filtering algorithms reported by the ISPRS.
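The E-step/M-step cycle described above is the classic EM fit of a two-component Gaussian mixture; a minimal 1-D sketch (e.g. with point elevations as input) is given below. The initialization and the two-component restriction are our simplifications.

```python
import numpy as np

def em_gmm_1d(z, n_iter=50):
    """EM for a two-component 1-D Gaussian mixture. E-step: compute each
    point's responsibility for each component; M-step: re-estimate the
    weights, means and variances as responsibility-weighted ML estimates."""
    mu = np.array([z.min(), z.max()], dtype=float)  # crude initialization
    var = np.array([z.var(), z.var()]) + 1e-6
    w = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior probability of each component per point
        pdf = np.exp(-0.5 * (z[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r = w * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted maximum likelihood updates
        nk = r.sum(axis=0)
        w = nk / len(z)
        mu = (r * z[:, None]).sum(axis=0) / nk
        var = (r * (z[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return w, mu, var
```

After fitting, each point is assigned to the component with the larger responsibility, which is the labelling step the abstract describes.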

  3. Research on Adaptive Optics Image Restoration Algorithm by Improved Expectation Maximization Method

    Directory of Open Access Journals (Sweden)

    Lijuan Zhang

    2014-01-01

    Full Text Available To improve the restoration of adaptive optics images, we put forward a deconvolution algorithm, improved by the EM algorithm, which jointly processes multiframe adaptive optics images based on expectation-maximization theory. Firstly, we build a mathematical model for the degraded multiframe adaptive optics images. The model of the point spread function varying with time is deduced based on the phase error. The AO images are denoised using the image power spectral density and a support constraint. Secondly, the EM algorithm is improved by combining the AO imaging system parameters and a regularization technique. A cost function for the joint deconvolution of multiframe AO images is given, and the optimization model for its parameter estimation is built. Lastly, image-restoration experiments on both simulated and real AO images are performed to verify the recovery effect of our algorithm. The experimental results show that, compared with the Wiener-IBD and RL-IBD algorithms, our algorithm needs 14.3% fewer iterations and clearly improves the estimation accuracy. The model distinguishes the PSF of the AO images and recovers the observed target images clearly.

  4. Sequential optimization of approximate inhibitory rules relative to the length, coverage and number of misclassifications

    KAUST Repository

    Alsolami, Fawaz; Chikalov, Igor; Moshkov, Mikhail

    2013-01-01

    This paper is devoted to the study of algorithms for sequential optimization of approximate inhibitory rules relative to the length, coverage and number of misclassifications. These algorithms are based on extensions of the dynamic programming approach

  5. Time Series Modeling of Nano-Gold Immunochromatographic Assay via Expectation Maximization Algorithm.

    Science.gov (United States)

    Zeng, Nianyin; Wang, Zidong; Li, Yurong; Du, Min; Cao, Jie; Liu, Xiaohui

    2013-12-01

    In this paper, the expectation maximization (EM) algorithm is applied to the modeling of the nano-gold immunochromatographic assay (nano-GICA) via available time series of the measured signal intensities of the test and control lines. The model for the nano-GICA is developed as the stochastic dynamic model that consists of a first-order autoregressive stochastic dynamic process and a noisy measurement. By using the EM algorithm, the model parameters, the actual signal intensities of the test and control lines, as well as the noise intensity can be identified simultaneously. Three different time series data sets concerning the target concentrations are employed to demonstrate the effectiveness of the introduced algorithm. Several indices are also proposed to evaluate the inferred models. It is shown that the model fits the data very well.

  6. A Markov chain Monte Carlo Expectation Maximization Algorithm for Statistical Analysis of DNA Sequence Evolution with Neighbor-Dependent Substitution Rates

    DEFF Research Database (Denmark)

    Hobolth, Asger

    2008-01-01

    The evolution of DNA sequences can be described by discrete state continuous time Markov processes on a phylogenetic tree. We consider neighbor-dependent evolutionary models where the instantaneous rate of substitution at a site depends on the states of the neighboring sites. Neighbor-dependent substitution models are analytically intractable and must be analyzed using either approximate or simulation-based methods. We describe statistical inference of neighbor-dependent models using a Markov chain Monte Carlo expectation maximization (MCMC-EM) algorithm. In the MCMC-EM algorithm, the high-dimensional integrals required in the EM algorithm are estimated using MCMC sampling. The MCMC sampler requires simulation of sample paths from a continuous time Markov process, conditional on the beginning and ending states and the paths of the neighboring sites. An exact path sampling algorithm is developed......

  7. An empirical study on SAJQ (Sorting Algorithm for Join Queries

    Directory of Open Access Journals (Sweden)

    Hassan I. Mathkour

    2010-06-01

    Full Text Available Most queries applied to database management systems (DBMS) depend heavily on the performance of the sorting algorithm used. In addition to having an efficient sorting algorithm as a primary feature, stability of such algorithms is a major feature needed in performing DBMS queries. In this paper, we study a new Sorting Algorithm for Join Queries (SAJQ) that has both advantages of being efficient and stable. The proposed algorithm takes advantage of the m-way-merge algorithm to enhance its time complexity. SAJQ performs the sorting operation in a time complexity of O(n log m), where n is the length of the input array and m is the number of sub-arrays used in sorting. An unsorted input array of length n is arranged into m sorted sub-arrays. The m-way-merge algorithm merges the m sorted sub-arrays into the final sorted output array. The proposed algorithm keeps the stability of the keys intact. An analytical proof has been conducted to prove that, in the worst case, the proposed algorithm has a complexity of O(n log m). Also, a set of experiments has been performed to investigate the performance of the proposed algorithm. The experimental results have shown that the proposed algorithm outperforms other stable sorting algorithms that are designed for join-based queries.
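The m-way-merge step at the heart of SAJQ can be sketched as below; this is a generic illustration, not the SAJQ implementation. The heap-based merge of m sorted runs performs O(n log m) comparisons, which is the merge-phase cost the abstract refers to, and CPython's heapq.merge preserves the relative order of equal keys across runs.

```python
import heapq

def m_way_merge_sort(arr, m=4):
    """Split arr into m contiguous runs, sort each run, then merge all
    runs with a heap of size m (O(n log m) comparisons in the merge)."""
    if len(arr) <= 1:
        return list(arr)
    step = -(-len(arr) // m)  # ceil(len(arr) / m) elements per run
    runs = [sorted(arr[i:i + step]) for i in range(0, len(arr), step)]
    return list(heapq.merge(*runs))
```

Because sorted() is stable within each run and the merge breaks ties in run order, records with equal join keys keep their original relative order, the stability property SAJQ requires.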

  8. Temperature-dependence of Threshold Current Density-Length Product in Metallization Lines: A Revisit

    International Nuclear Information System (INIS)

    Duryat, Rahmat Saptono; Kim, Choong-Un

    2016-01-01

    One of the important phenomena in Electromigration (EM) is the Blech Effect. The existence of a Threshold Current Density-Length Product, or EM Threshold, has fundamental and technological consequences in the design, manufacture, and testing of electronics. The temperature-dependence of the Blech Product has been thermodynamically established, and the real behavior of such interconnect materials has been extensively studied. The present paper reviews the temperature-dependence of the EM threshold in metallization lines of different materials and structures as found in relevant published articles. It is expected that the reader can see a big picture from the compiled data, which might be overlooked when examined in pieces. (paper)

  9. Multi-sources model and control algorithm of an energy management system for light electric vehicles

    International Nuclear Information System (INIS)

    Hannan, M.A.; Azidin, F.A.; Mohamed, A.

    2012-01-01

    Highlights: ► An energy management system (EMS) is developed for a scooter under normal and heavy power load conditions. ► The battery, FC, SC, EMS, DC machine and vehicle dynamics are modeled and designed for the system. ► State-based logic control algorithms provide an efficient and feasible multi-source EMS for light electric vehicles. ► Vehicle’s speed and power are closely matched with the ECE-47 driving cycle under normal and heavy load conditions. ► Sources of energy changeover occurred at 50% of the battery state of charge level in heavy load conditions. - Abstract: This paper presents the multi-source energy models and rule-based feedback control algorithm of an energy management system (EMS) for light electric vehicles (LEVs), i.e., scooters. The multiple sources of energy, such as a battery, fuel cell (FC) and super-capacitor (SC), the EMS and power controller, the DC machine and the vehicle dynamics are designed and modeled using MATLAB/SIMULINK. The developed control strategies continuously support the EMS of the multiple sources of energy for a scooter under normal and heavy power load conditions. The performance of the proposed system is analyzed and compared with that of the ECE-47 test drive cycle in terms of vehicle speed and load power. The results show that the designed vehicle's speed and load power closely match those of the ECE-47 test driving cycle under normal and heavy load conditions. This study's results suggest that the proposed control algorithm provides an efficient and feasible EMS for LEVs.

  10. Extending electronic length frequency analysis in R

    DEFF Research Database (Denmark)

    Taylor, M. H.; Mildenberger, Tobias K.

    2017-01-01

    Electronic length frequency analysis (ELEFAN) is a system of stock assessment methods using length-frequency (LFQ) data. One step is the estimation of growth from the progression of LFQ modes through time using the von Bertalanffy growth function (VBGF). The option to fit a seasonally oscillating VBGF (soVBGF) requires a more intensive search due to two additional parameters. This work describes the implementation of two optimisation approaches ("simulated annealing" and "genetic algorithm") for growth function fitting using the open-source software "R." Using a generated LFQ data set ... of the asymptotic length parameter (L-infinity) are found to have significant effects on parameter estimation error. An outlook provides context as to the significance of the R-based implementation for further testing and development, as well as the general relevance of the method for data-limited stock assessment.
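
    For reference, the growth functions being fitted can be written down directly. This sketch assumes the common Somers-type parameterization of the seasonally oscillating VBGF (the abstract does not state which variant is used); the amplitude C and phase ts are the two additional parameters that make the soVBGF search harder:

```python
import math

def vbgf(t, Linf, K, t0=0.0):
    """Standard von Bertalanffy growth function: length at age t."""
    return Linf * (1.0 - math.exp(-K * (t - t0)))

def so_vbgf(t, Linf, K, t0=0.0, C=0.0, ts=0.0):
    """Seasonally oscillating VBGF (assumed Somers-type form).
    C is the oscillation amplitude, ts the phase; with C = 0 this
    reduces exactly to the standard VBGF."""
    def S(x):
        return (C * K / (2 * math.pi)) * math.sin(2 * math.pi * (x - ts))
    return Linf * (1.0 - math.exp(-K * (t - t0) - S(t) + S(t0)))
```
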

  11. Computational performance of a projection and rescaling algorithm

    OpenAIRE

    Pena, Javier; Soheili, Negar

    2018-01-01

    This paper documents a computational implementation of a projection and rescaling algorithm for finding most interior solutions to the pair of feasibility problems: find x ∈ L ∩ R^n_+ and find x̂ ∈ L^⊥ ∩ R^n_+, where L denotes a linear subspace in R^n and L^⊥ denotes its orthogonal complement. The projection and rescaling algorithm is a recently developed method that combines a ...

  12. A motion algorithm to extract physical and motion parameters of mobile targets from cone-beam computed tomographic images.

    Science.gov (United States)

    Alsbou, Nesreen; Ahmad, Salahuddin; Ali, Imad

    2016-05-17

    A motion algorithm has been developed to extract the length, CT number level and motion amplitude of a mobile target from cone-beam CT (CBCT) images. The algorithm uses three measurable parameters obtained from CBCT images, the apparent length and the blurred CT number distribution of a mobile target, to determine the length, the CT number value of the stationary target, and the motion amplitude. The predictions of this algorithm are tested with mobile targets of different well-known sizes made from tissue-equivalent gel inserted into a thorax phantom. The phantom moves sinusoidally in one direction to simulate respiratory motion, using eight amplitudes ranging from 0 to 20 mm. Using this motion algorithm, three unknown parameters are extracted for the mobile targets from CBCT images: the length of the target, the CT number level, and the speed or motion amplitude. The motion algorithm solves for the three unknown parameters using the measured length, CT number level and gradient of a well-defined mobile target obtained from CBCT images. The motion model agrees with the measured lengths, which depend on the target length and motion amplitude. The gradient of the CT number distribution of the mobile target depends on the stationary CT number level, the target length and the motion amplitude. Motion frequency and phase do not affect the elongation and CT number distribution of the mobile target and could not be determined. In summary, a motion algorithm has been developed to extract three parameters, the length, CT number level and motion amplitude or speed of mobile targets, directly from reconstructed CBCT images without prior knowledge of the stationary target parameters. This algorithm provides an alternative to 4D-CBCT without the requirement of motion tracking and sorting of the images into different breathing phases. The motion model developed here works well for tumors that have simple shapes, high contrast relative to surrounding tissues, and a nearly regular motion pattern.

  13. The HSBQ Algorithm with Triple-play Services for Broadband Hybrid Satellite Constellation Communication System

    Directory of Open Access Journals (Sweden)

    Anupon Boriboon

    2016-07-01

    Full Text Available The HSBQ algorithm is one of the active queue management (AQM) algorithms, which aim to avoid high packet loss rates and keep the stream queue stable. The core problem is the calculation of the drop probability for both queue-length stability and bandwidth fairness. This paper proposes HSBQ, which drops packets before the queues overflow at the gateways, so that the end nodes can respond to the congestion before queue overflow. The algorithm uses the change of the average queue length to adjust the amount by which the mark (or drop) probability is changed. Moreover, it adjusts the queue weight, which is used to estimate the average queue length, based on the rate. The results show that the HSBQ algorithm maintains a stable stream queue better than congestion-metric algorithms without flow information as the rate of the hybrid satellite network changes dramatically; the presented empirical evidence also demonstrates that HSBQ offers a better quality of service than the queue control mechanisms traditionally used in hybrid satellite networks.
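
    The mechanism the abstract describes, an exponentially weighted average of the queue length driving a drop probability, can be illustrated with a RED-style sketch. This is a simplified stand-in, not HSBQ's actual rule (which also accounts for bandwidth fairness and rate-adaptive queue weights); the thresholds and max_p are illustrative:

```python
def ewma_queue(avg, q, w=0.002):
    """Exponentially weighted moving average of the instantaneous
    queue length q; w is the queue weight the abstract refers to."""
    return (1.0 - w) * avg + w * q

def drop_probability(avg, min_th, max_th, max_p=0.1):
    """RED-style drop probability from the average queue length:
    zero below min_th, rising linearly to max_p at max_th, and
    certain drop beyond max_th."""
    if avg < min_th:
        return 0.0
    if avg >= max_th:
        return 1.0
    return max_p * (avg - min_th) / (max_th - min_th)
```
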

  14. The Combination of RSA And Block Chiper Algorithms To Maintain Message Authentication

    Science.gov (United States)

    Yanti Tarigan, Sepri; Sartika Ginting, Dewi; Lumban Gaol, Melva; Lorensi Sitompul, Kristin

    2017-12-01

    RSA is a public key algorithm using prime numbers and is still used today. The strength of this algorithm lies in the exponentiation process and in the factoring of a number into 2 prime numbers, which until now has remained difficult. The RSA scheme itself adopts the block cipher scheme: prior to encryption, the plaintext is divided into several blocks of the same length, where the plaintext and ciphertext blocks are integers between 1 and n, n is typically 1024 bits, and the block length itself is smaller than or equal to log(n)+1 in base 2. With the combination of the RSA algorithm and a block cipher it is expected that the authentication of the plaintext is secure. The message will first be encrypted with the RSA algorithm and then encrypted again using the block cipher. Conversely, the ciphertext will be decrypted with the block cipher first and then decrypted again with the RSA algorithm. This paper suggests a combination of the RSA algorithm and a block cipher to secure data.
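
    A textbook-RSA sketch illustrates the modular-exponentiation core the abstract relies on. The primes here are toys and there is no padding, so this is insecure by design and is not the paper's exact construction, only the underlying arithmetic:

```python
def egcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = egcd(b, a % b)
    return g, y, x - (a // b) * y

def make_keys(p, q, e=65537):
    """Textbook RSA key pair from two primes (toy sizes here; the
    abstract's modulus n is typically 1024 bits)."""
    n, phi = p * q, (p - 1) * (q - 1)
    g, d, _ = egcd(e, phi)
    assert g == 1, "e must be coprime to phi(n)"
    return (e, n), (d % phi, n)

def crypt(m, key):
    """Encrypt or decrypt one integer block (0 <= m < n) by modular
    exponentiation; RSA treats each fixed-length block this way."""
    k, n = key
    return pow(m, k, n)
```
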

  15. Algorithmic detectability threshold of the stochastic block model

    Science.gov (United States)

    Kawamoto, Tatsuro

    2018-03-01

    The assumption that the values of model parameters are known or correctly learned, i.e., the Nishimori condition, is one of the requirements for the detectability analysis of the stochastic block model in statistical inference. In practice, however, there is no example demonstrating that we can know the model parameters beforehand, and there is no guarantee that the model parameters can be learned accurately. In this study, we consider the expectation-maximization (EM) algorithm with belief propagation (BP) and derive its algorithmic detectability threshold. Our analysis is not restricted to the community structure but includes general modular structures. Because the algorithm cannot always learn the planted model parameters correctly, the algorithmic detectability threshold is qualitatively different from the one with the Nishimori condition.

  16. An improved affine projection algorithm for active noise cancellation

    Science.gov (United States)

    Zhang, Congyan; Wang, Mingjiang; Han, Yufei; Sun, Yunzhuo

    2017-08-01

    The affine projection algorithm is a signal-reuse algorithm with a good convergence rate compared to other traditional adaptive filtering algorithms. Two factors affect the performance of the algorithm: the step factor and the projection length. In this paper, we propose a new variable step size affine projection algorithm (VSS-APA). It dynamically changes the step size according to certain rules, so that it achieves smaller steady-state error and faster convergence. Simulation results show that its performance is superior to the traditional affine projection algorithm and that, in active noise control (ANC) applications, the new algorithm achieves very good results.
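
    A single affine projection update, the baseline that VSS-APA modifies by varying mu, can be sketched as follows; the regularization delta and the default step size are illustrative choices, not values from the paper:

```python
import numpy as np

def apa_update(w, X, d, mu=0.5, delta=1e-6):
    """One affine projection update of filter weights w.
    X: (L, P) matrix whose P columns are the last P input vectors
    (P is the projection length/order); d: the P desired samples."""
    e = d - X.T @ w                            # a-priori errors, shape (P,)
    G = X.T @ X + delta * np.eye(X.shape[1])   # regularized Gram matrix
    w = w + mu * X @ np.linalg.solve(G, e)     # correction within span(X)
    return w, e
```

    Reusing the last P input vectors is what distinguishes APA from NLMS (the P = 1 special case) and gives the faster convergence the abstract mentions.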

  17. Adaptive subdivision and the length and energy of Bézier curves

    DEFF Research Database (Denmark)

    Gravesen, Jens

    1997-01-01

    It is an often used fact that the control polygon of a Bézier curve approximates the curve and that the approximation gets better when the curve is subdivided. In particular, if a Bézier curve is subdivided into some number of pieces, then the arc-length of the original curve is greater than the sum of the chord-lengths of the pieces, and less than the sum of the polygon-lengths of the pieces. Under repeated subdivisions, the difference between this lower and upper bound gets arbitrarily small. If $L_c$ denotes the total chord-length of the pieces and $L_p$ denotes the total polygon-length ... combination, and it forms the basis for a fast adaptive algorithm, which determines the arc-length of a Bézier curve. The energy of a curve is half the square of the curvature integrated with respect to arc-length. Like in the case of the arc-length, it is possible to use the chord-length and polygon-length ...
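
    The subdivision bounds can be turned into an adaptive arc-length routine. The sketch below assumes the combination of the bounds is (2·Lc + (n−1)·Lp)/(n+1) for a degree-n curve, which the truncated abstract does not state explicitly:

```python
import math

def _dist(a, b):
    return math.hypot(b[0] - a[0], b[1] - a[1])

def _subdivide(P):
    """de Casteljau subdivision at t = 1/2: one Bézier becomes two."""
    left, right, Q = [P[0]], [P[-1]], list(P)
    while len(Q) > 1:
        Q = [((a[0] + b[0]) / 2, (a[1] + b[1]) / 2) for a, b in zip(Q, Q[1:])]
        left.append(Q[0])
        right.append(Q[-1])
    return left, right[::-1]

def bezier_length(P, tol=1e-9):
    """Adaptive arc length of a Bézier curve with control points P:
    the chord is a lower bound, the control polygon an upper bound;
    subdivide until the two agree to within tol on each piece."""
    chord = _dist(P[0], P[-1])
    poly = sum(_dist(a, b) for a, b in zip(P, P[1:]))
    if poly - chord < tol:
        n = len(P) - 1
        # Assumed form of the combination of the two bounds.
        return (2 * chord + (n - 1) * poly) / (n + 1)
    L, R = _subdivide(P)
    return bezier_length(L, tol) + bezier_length(R, tol)
```
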

  18. Metal-induced streak artifact reduction using iterative reconstruction algorithms in x-ray computed tomography image of the dentoalveolar region.

    Science.gov (United States)

    Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia

    2013-02-01

    The objective of this study was to reduce metal-induced streak artifacts in oral and maxillofacial x-ray computed tomography (CT) images by developing a fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. So, first, images were reconstructed using the same projection data as an artifact-free image. Second, images were processed by the successive iterative restoration method, where projection data were generated from the reconstructed images in sequence. Besides the maximum likelihood-expectation maximization (ML-EM) algorithm, the ordered subset-expectation maximization algorithm (OS-EM) was examined. Also, a small region of interest (ROI) setting and reverse processing were applied to improve performance. Both algorithms reduced artifacts while only slightly decreasing gray levels. The OS-EM and small ROI reduced the processing duration without apparent detriments. Sequential and reverse processing did not show apparent effects. The two alternative iterative reconstruction methods were effective for artifact reduction. The OS-EM algorithm and the small ROI setting improved the performance. Copyright © 2012 Elsevier Inc. All rights reserved.
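
    The ML-EM iteration underlying both algorithms has a compact form. Below is a minimal dense-matrix sketch (real CT systems use sparse projectors; an OS-EM variant applies the same update over subsets of the rays in each sub-iteration, which is where the speed-up comes from):

```python
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Maximum likelihood-expectation maximization for y ≈ A @ x.
    A: (n_rays, n_voxels) system matrix, y: measured counts.
    The multiplicative update keeps the image x nonnegative."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                 # column sums (sensitivity image)
    for _ in range(n_iter):
        proj = A @ x                     # forward projection
        ratio = y / np.maximum(proj, eps)
        x = x * (A.T @ ratio) / np.maximum(sens, eps)
    return x
```
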

  19. [Orthogonal Vector Projection Algorithm for Spectral Unmixing].

    Science.gov (United States)

    Song, Mei-ping; Xu, Xing-wei; Chang, Chein-I; An, Ju-bai; Yao, Li

    2015-12-01

    Spectral unmixing is an important part of hyperspectral technology and is essential for material quantity analysis in hyperspectral imagery. Most linear unmixing algorithms require computations of matrix multiplication and matrix inversion or matrix determinants. These are difficult to program and especially hard to realize on hardware. At the same time, the computational cost of these algorithms increases significantly as the number of endmembers grows. Here, based on the traditional Orthogonal Subspace Projection algorithm, a new method called Orthogonal Vector Projection is proposed using the orthogonality principle. It simplifies the process by avoiding matrix multiplication and inversion. It first computes the final orthogonal vector via the Gram-Schmidt process for each endmember spectrum. These orthogonal vectors are then used as projection vectors for the pixel signature. The unconstrained abundance can be obtained directly by projecting the signature onto the projection vectors and computing the ratio of the projected vector length to the orthogonal vector length. Compared to the Orthogonal Subspace Projection and Least Squares Error algorithms, this method needs no matrix inversion, which is computationally costly and hard to implement on hardware. It completes the orthogonalization process by repeated vector operations, making it easy to apply in both parallel computation and hardware. The reasonability of the algorithm is proved by its relationship with the Orthogonal Subspace Projection and Least Squares Error algorithms, and its computational complexity, the lowest of the three, is also compared with that of the other two algorithms. Finally, experimental results on synthetic and real images are provided, giving further evidence for the effectiveness of the method.
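
    The projection rule described above can be sketched directly: the Gram-Schmidt residual v_i of endmember e_i is orthogonal to all other endmembers, so a pixel x = Σ a_j e_j satisfies x·v_i = a_i (e_i·v_i) = a_i (v_i·v_i), and the abundance falls out as a ratio of projections. This sketch uses QR for the orthogonalization, a convenience the hardware-oriented method would replace with repeated vector operations:

```python
import numpy as np

def ovp_abundances(x, E):
    """Unconstrained abundances of pixel x for endmember matrix E
    (columns are endmember spectra) via orthogonal vector projection."""
    k = E.shape[1]
    a = np.empty(k)
    for i in range(k):
        v = E[:, i].astype(float).copy()
        others = np.delete(E, i, axis=1).astype(float)
        q, _ = np.linalg.qr(others)        # orthonormal basis of the others
        v -= q @ (q.T @ v)                 # Gram-Schmidt residual of e_i
        a[i] = (x @ v) / (v @ v)           # ratio of projections
    return a
```
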

  20. Calculating Graph Algorithms for Dominance and Shortest Path

    DEFF Research Database (Denmark)

    Sergey, Ilya; Midtgaard, Jan; Clarke, Dave

    2012-01-01

    We calculate two iterative, polynomial-time graph algorithms from the literature: a dominance algorithm and an algorithm for the single-source shortest path problem. Both algorithms are calculated directly from the definition of the properties by fixed-point fusion of (1) a least fixed point expressing all finite paths through a directed graph and (2) Galois connections that capture dominance and path length. The approach illustrates that reasoning in the style of fixed-point calculus extends gracefully to the domain of graph algorithms. We thereby bridge common practice from the school of program calculation with common practice from the school of static program analysis, and build a novel view on iterative graph algorithms as instances of abstract interpretation.
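
    The fixed-point view of single-source shortest paths can be made concrete: start from the bottom element (all distances infinite except the source) and apply the edge-relaxation functional until nothing changes, i.e., Bellman-Ford as a least-fixed-point computation. This is an illustration of the general idea, not the calculated algorithm of the paper:

```python
def shortest_paths(edges, source, n):
    """Single-source shortest paths as a least-fixed-point iteration.
    edges: list of (u, v, weight); n: number of vertices.
    Terminates when no edge can be relaxed (assumes no negative cycles)."""
    INF = float('inf')
    dist = [INF] * n
    dist[source] = 0
    changed = True
    while changed:                      # iterate the functional to a fixed point
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
    return dist
```
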

  1. On algorithm for building of optimal α-decision trees

    KAUST Repository

    Alkhalid, Abdulaziz

    2010-01-01

    The paper describes an algorithm that constructs approximate decision trees (α-decision trees) that are optimal relative to one of the following complexity measures: depth, total path length or number of nodes. The algorithm uses dynamic programming and extends methods described in [4] to constructing approximate decision trees. An adjustable approximation rate allows controlling algorithm complexity. The algorithm is applied to build optimal α-decision trees for two data sets from the UCI Machine Learning Repository [1]. © 2010 Springer-Verlag Berlin Heidelberg.

  2. Comparison of reconfigurable structures for flexible word-length multiplication

    Directory of Open Access Journals (Sweden)

    O. A. Pfänder

    2008-05-01

    Full Text Available Binary multiplication continues to be one of the essential arithmetic operations in digital circuits. Even though field-programmable gate arrays (FPGAs) are becoming more and more powerful these days, vendors cannot avoid implementing multiplications with high word-lengths using embedded blocks instead of configurable logic. On the other hand, the circuit's efficiency decreases if the provided word-length of the hard-wired multipliers exceeds the precision requirements of the algorithm mapped into the FPGA. Thus it is beneficial to use multiplier blocks with configurable word-length, optimized for area, speed and power dissipation, e.g. regarding digital signal processing (DSP) applications.

    In this contribution, we present different approaches and structures for the realization of multiplication with variable precision and perform an objective comparison. This includes one approach based on a modified Baugh-Wooley algorithm and three structures using Booth's arithmetic operand recoding with different array structures. All modules have the option to compute signed two's complement fixed-point numbers, either as an individual computing unit or interconnected to a superior array. Therefore, a high throughput at low precision through parallelism, or a high precision through concatenation, can be achieved.
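
    Booth's operand recoding, which three of the compared structures use, can be modeled in software at the bit level. This is a textbook register-level sketch, not the paper's hardware; the most negative operand (e.g. −8 at 4 bits) is the usual unhandled edge case:

```python
def booth_multiply(m, r, bits=8):
    """Booth's algorithm: multiply two `bits`-wide two's complement
    integers by scanning overlapping bit pairs of the multiplier."""
    mask = (1 << bits) - 1
    W = 2 * bits + 1                      # width of the working register
    A = (m & mask) << (bits + 1)          # multiplicand, left-aligned
    S = ((-m) & mask) << (bits + 1)       # its two's complement negation
    P = (r & mask) << 1                   # multiplier with an appended 0 bit
    for _ in range(bits):
        pair = P & 0b11                   # current bit and the previous one
        if pair == 0b01:                  # end of a run of 1s: add A
            P = (P + A) & ((1 << W) - 1)
        elif pair == 0b10:                # start of a run of 1s: add -m
            P = (P + S) & ((1 << W) - 1)
        sign = P >> (W - 1)               # arithmetic right shift by one
        P = (P >> 1) | (sign << (W - 1))
    res = P >> 1                          # drop the appended extra bit
    if res >> (2 * bits - 1):             # reinterpret as signed 2*bits
        res -= 1 << (2 * bits)
    return res
```
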

  3. Development of Microsatellite Markers for the Korean Mussel, Mytilus coruscus (Mytilidae) Using Next-Generation Sequencing

    Directory of Open Access Journals (Sweden)

    Hye Suck An

    2012-08-01

    Full Text Available Mytilus coruscus (family Mytilidae) is one of the most important marine shellfish species in Korea. During the past few decades, this species has become endangered due to the loss of habitats and overfishing. Despite this species' importance, information on its genetic background is scarce. In this study, we developed microsatellite markers for M. coruscus using next-generation sequencing. A total of 263,900 raw reads were obtained from a quarter-plate run on the 454 GS-FLX titanium platform, and 176,327 unique sequences were generated with an average length of 381 bp; 2569 (1.45%) sequences contained a minimum of five di- to tetra-nucleotide repeat motifs. Of the 51 loci screened, 46 were amplified successfully, and 22 were polymorphic among 30 individuals, seven with trinucleotide repeats and three with tetranucleotide repeats. All loci exhibited high genetic variability, with an average of 17.32 alleles per locus, and the mean observed and expected heterozygosities were 0.67 and 0.90, respectively. In addition, cross-amplification was tested for all 22 loci in another congener species, M. galloprovincialis. None of the primer pairs resulted in effective amplification, which might be due to their high mutation rates. Our work demonstrated the utility of next-generation 454 sequencing as a method for the rapid and cost-effective identification of microsatellites. The high degree of polymorphism exhibited by the 22 newly developed microsatellites will be useful in future conservation genetic studies of this species.

  4. Land-cover classification with an expert classification algorithm using digital aerial photographs

    Directory of Open Access Journals (Sweden)

    José L. de la Cruz

    2010-05-01

    Full Text Available The purpose of this study was to evaluate the usefulness of the spectral information of digital aerial sensors in determining land-cover classification using new digital techniques. The land covers that have been evaluated are the following: (1) bare soil; (2) cereals, including maize (Zea mays L.), oats (Avena sativa L.), rye (Secale cereale L.), wheat (Triticum aestivum L.) and barley (Hordeum vulgare L.); (3) high protein crops, such as peas (Pisum sativum L.) and beans (Vicia faba L.); (4) alfalfa (Medicago sativa L.); (5) woodlands and scrublands, including holly oak (Quercus ilex L.) and common retama (Retama sphaerocarpa L.); (6) urban soil; (7) olive groves (Olea europaea L.); and (8) burnt crop stubble. The best result was obtained using an expert classification algorithm, achieving a reliability rate of 95%. This result showed that the images of digital airborne sensors hold considerable promise for the future in the field of digital classification because these images contain valuable information that takes advantage of the geometric viewpoint. Moreover, new classification techniques reduce problems encountered using high-resolution images, while reliabilities are achieved that are better than those achieved with traditional methods.

  5. Implementation and evaluation of an ordered subsets reconstruction algorithm for transmission PET studies using median root prior and inter-update median filtering

    International Nuclear Information System (INIS)

    Bettinardi, V.; Gilardi, M.C.; Fazio, F.; Alenius, S.; Ruotsalainen, U.; Numminen, P.; Teraes, M.

    2003-01-01

    An ordered subsets (OS) reconstruction algorithm based on the median root prior (MRP) and inter-update median filtering was implemented for the reconstruction of low count statistics transmission (TR) scans. The OS-MRP-TR algorithm was evaluated using an experimental phantom, simulating positron emission tomography (PET) whole-body (WB) studies, as well as patient data. Various experimental conditions, in terms of TR scan time (from 1 h to 1 min), covering a wide range of TR count statistics, were evaluated. The performance of the algorithm was assessed by comparing the mean value of the attenuation coefficient (MVAC) of known tissue types and the coefficient of variation (CV) for low-count TR images, reconstructed with the OS-MRP-TR algorithm, with reference values obtained from high-count TR images reconstructed with a filtered back-projection (FBP) algorithm. The reconstructed OS-MRP-TR images were then used for attenuation correction of the corresponding emission (EM) data. EM images reconstructed with attenuation correction generated by low count statistics OS-MRP-TR images were compared with the EM images corrected for attenuation using reference (high statistics) TR data. In all the experimental situations considered, the OS-MRP-TR algorithm showed: (1) a tendency towards a stable solution in terms of MVAC; (2) a difference in the MVAC within 5% between a TR scan of 1 min reconstructed with OS-MRP-TR and a TR scan of 1 h reconstructed with the FBP algorithm; (3) effectiveness in noise reduction, particularly for low count statistics data [using a specific parameter configuration the TR images reconstructed with OS-MRP-TR(1 min) had a lower CV than the corresponding TR images of a 1-h scan reconstructed with the FBP algorithm]; (4) a difference within 3% between the mean counts in the EM images attenuation corrected using the OS-MRP-TR images of 1 min and the mean counts in the EM images attenuation corrected using the OS-MRP-TR images of 1 h; (5

  6. Optimization of inhibitory decision rules relative to length and coverage

    KAUST Repository

    Alsolami, Fawaz; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata

    2012-01-01

    The paper is devoted to the study of algorithms for optimization of inhibitory rules relative to length and coverage. In contrast with usual rules, which have on the right-hand side a relation "attribute = value", inhibitory rules have a relation "attribute ≠ value" ...

  7. A novel fair active queue management algorithm based on traffic delay jitter

    Science.gov (United States)

    Wang, Xue-Shun; Yu, Shao-Hua; Dai, Jin-You; Luo, Ting

    2009-11-01

    In order to guarantee the quantity of data traffic delivered in the network, congestion control strategies are adopted. Based on a study of many active queue management (AQM) algorithms, this paper proposes a novel AQM algorithm named JFED. JFED can stabilize the queue length at a desirable level by adjusting the output traffic rate and adopting a reasonable calculation of packet drop probability based on buffer queue length and traffic jitter, and it supports burst packet traffic through the packet delay jitter. JFED imposes effective punishment upon non-responsive flows with a fully stateless method. To verify the performance of JFED, it is implemented in NS2 and compared with RED and CHOKe with respect to different performance metrics. Simulation results show that the proposed JFED algorithm outperforms RED and CHOKe in stabilizing the instantaneous queue length and in fairness. It is also shown that JFED enables the link capacity to be fully utilized by stabilizing the queue length at a desirable level, while not incurring an excessive packet loss ratio.

  8. Estimation of Missing Data with the Yates Method and the EM Algorithm in a Balanced Lattice Design

    Directory of Open Access Journals (Sweden)

    MADE SUSILAWATI

    2015-06-01

    Full Text Available Missing data often occur in agriculture and animal husbandry experiments. Missing data in an experimental design make the information obtained less complete. In this research, missing data were estimated with the Yates method and the Expectation-Maximization (EM) algorithm. The basic concept of the Yates method is to minimize the sum of squared errors (SSE), while the basic concept of the EM algorithm is to maximize the likelihood function. This research applied a balanced lattice design with 9 treatments, 4 replications and 3 groups in each replication. The estimation results showed that the Yates method was better for two missing values positioned in one treatment, one column, or at random, while the EM algorithm was better for estimating one missing value and two missing values positioned in one group or one replication. The comparison of the ANOVA results showed that the SSE of the incomplete data is larger than the SSE of the incomplete data completed with the estimates. This suggests that we need to estimate the missing data.
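
    The two estimation ideas can be illustrated with one generic sketch: iteratively replace each missing response by its fitted value and refit by least squares. For a single missing cell this reproduces the classical minimum-SSE (Yates-type) estimate, and it is the normal-model EM iteration in spirit; the design matrix X here is a generic stand-in, not the balanced-lattice design of the paper:

```python
import numpy as np

def impute_missing(X, y, miss, n_iter=100):
    """EM-style iterative imputation for a linear model y ≈ X b.
    E-step: fill each missing response with its current fitted value.
    M-step: refit b by least squares on the completed data.
    miss: boolean mask of missing entries of y."""
    y = y.astype(float).copy()
    y[miss] = np.mean(y[~miss])           # crude starting values
    for _ in range(n_iter):
        b, *_ = np.linalg.lstsq(X, y, rcond=None)
        y[miss] = X[miss] @ b             # fitted values for missing cells
    return y[miss], b
```
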

  9. Beam-induced motion correction for sub-megadalton cryo-EM particles.

    Science.gov (United States)

    Scheres, Sjors Hw

    2014-08-13

    In electron cryo-microscopy (cryo-EM), the electron beam that is used for imaging also causes the sample to move. This motion blurs the images and limits the resolution attainable by single-particle analysis. In a previous Research article (Bai et al., 2013) we showed that correcting for this motion by processing movies from fast direct-electron detectors allowed structure determination to near-atomic resolution from 35,000 ribosome particles. In this Research advance article, we show that an improved movie processing algorithm is applicable to a much wider range of specimens. The new algorithm estimates straight movement tracks by considering multiple particles that are close to each other in the field of view, and models the fall-off of high-resolution information content by radiation damage in a dose-dependent manner. Application of the new algorithm to four data sets illustrates its potential for significantly improving cryo-EM structures, even for particles that are smaller than 200 kDa. Copyright © 2014, Scheres.

  10. Optimal Solution for VLSI Physical Design Automation Using Hybrid Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    I. Hameem Shanavas

    2014-01-01

    Full Text Available In the optimization of VLSI physical design, area minimization and interconnect length minimization are important objectives in the physical design automation of very large scale integration chips, since minimizing the area and interconnect length scales down the size of integrated chips. To meet this objective, it is necessary to find optimal solutions for physical design components such as partitioning, floorplanning, placement, and routing. This work performs the optimization of benchmark circuits on these components of physical design using a hierarchical approach of evolutionary algorithms. The goals of minimizing delay in partitioning, silicon area in floorplanning, layout area in placement, and wirelength in routing have an indefinite influence on other criteria such as power, clock, speed, and cost. A hybrid evolutionary algorithm, which includes one or more local search steps within its evolutionary cycles, is applied in each phase to achieve these objectives. This approach combines a genetic algorithm with simulated annealing in a hierarchical design and can quickly produce optimal solutions for the popular benchmarks.

  11. High-dimensional cluster analysis with the Masked EM Algorithm

    Science.gov (United States)

    Kadir, Shabnam N.; Goodman, Dan F. M.; Harris, Kenneth D.

    2014-01-01

    Cluster analysis faces two problems in high dimensions: first, the “curse of dimensionality” that can lead to overfitting and poor generalization performance; and second, the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. We describe a solution to these problems, designed for the application of “spike sorting” for next-generation high channel-count neural probes. In this problem, only a small subset of features provide information about the cluster membership of any one data vector, but this informative feature subset is not the same for all data points, rendering classical feature selection ineffective. We introduce a “Masked EM” algorithm that allows accurate and time-efficient clustering of up to millions of points in thousands of dimensions. We demonstrate its applicability to synthetic data, and to real-world high-channel-count spike sorting data. PMID:25149694

  12. Electromagnetic corrections to ππ scattering lengths: some lessons for the construction of effective hadronic field theories

    International Nuclear Information System (INIS)

    Maltman, K.

    1998-01-01

    Using the framework of effective chiral Lagrangians, we show that, in order to correctly implement electromagnetism (EM), as generated from the Standard Model, into effective hadronic theories (such as meson-exchange models) it is insufficient to consider only graphs in the low-energy effective theory containing explicit photon lines. The Standard Model requires the presence of contact interactions in the effective theory which are electromagnetic in origin, but which involve no photons in the effective theory. We illustrate the problems which can result from a ''standard'' EM subtraction: i.e., from assuming that removing all contributions in the effective theory generated by graphs with explicit photon lines fully removes EM effects, by considering the case of the s-wave ππ scattering lengths. In this case it is shown that such a subtraction procedure would lead to the incorrect conclusion that the strong interaction isospin-breaking contributions to these quantities were large when, in fact, they are known to vanish at leading order in m d -m u . The leading EM contact corrections for the channels employed in the extraction of the I=0,2 s-wave ππ scattering lengths from experiment are also evaluated. (orig.)

  13. Improving the resolution for Lamb wave testing via a smoothed Capon algorithm

    Science.gov (United States)

    Cao, Xuwei; Zeng, Liang; Lin, Jing; Hua, Jiadong

    2018-04-01

    Lamb wave testing is promising for damage detection and evaluation in large-area structures. The dispersion of Lamb waves is often unavoidable, restricting testing resolution and making the signal hard to interpret. A smoothed Capon algorithm is proposed in this paper to estimate the accurate path length of each wave packet. In the algorithm, frequency domain whitening is firstly used to obtain the transfer function in the bandwidth of the excitation pulse. Subsequently, wavenumber domain smoothing is employed to reduce the correlation between wave packets. Finally, the path lengths are determined by distance domain searching based on the Capon algorithm. Simulations are applied to optimize the number of smoothing iterations. Experiments are performed on an aluminum plate containing two simulated defects. The results demonstrate that spatial resolution is improved significantly by the proposed algorithm.

  14. Evaluation of HER2 Gene Amplification in Breast Cancer Using Nuclei Microarray in situ Hybridization

    Directory of Open Access Journals (Sweden)

    Xuefeng Zhang

    2012-05-01

    Full Text Available Fluorescence in situ hybridization (FISH) assay is considered the “gold standard” in evaluating HER2/neu (HER2) gene status. However, FISH detection is costly and time consuming. Thus, we established nuclei microarray with extracted intact nuclei from paraffin embedded breast cancer tissues for FISH detection. The nuclei microarray FISH (NMFISH) technology serves as a useful platform for analyzing HER2 gene/chromosome 17 centromere ratio. We examined HER2 gene status in 152 cases of invasive ductal carcinomas of the breast that were resected surgically with FISH and NMFISH. HER2 gene amplification status was classified according to the guidelines of the American Society of Clinical Oncology and College of American Pathologists (ASCO/CAP). Comparison of the cut-off values for HER2/chromosome 17 centromere copy number ratio obtained by NMFISH and FISH showed that there was almost perfect agreement between the two methods (κ coefficient 0.920). The results of the two methods were almost consistent for the evaluation of HER2 gene counts. The present study proved that NMFISH is comparable with FISH for evaluating HER2 gene status. The use of nuclei microarray technology is highly efficient, time and reagent conserving and inexpensive.

  15. Neonatal Phosphate Nutrition Alters in Vivo and in Vitro Satellite Cell Activity in Pigs

    Directory of Open Access Journals (Sweden)

    Chad H. Stahl

    2012-05-01

    Full Text Available Satellite cell activity is necessary for postnatal skeletal muscle growth. Severe phosphate (PO4) deficiency can alter satellite cell activity, however the role of neonatal PO4 nutrition on satellite cell biology remains obscure. Twenty-one piglets (1 day of age, 1.8 ± 0.2 kg BW) were pair-fed liquid diets that were either PO4 adequate (0.9% total P), supra-adequate (1.2% total P) in PO4 requirement or deficient (0.7% total P) in PO4 content for 12 days. Body weight was recorded daily and blood samples collected every 6 days. At day 12, pigs were orally dosed with BrdU and 12 h later, satellite cells were isolated. Satellite cells were also cultured in vitro for 7 days to determine if PO4 nutrition alters their ability to proceed through their myogenic lineage. Dietary PO4 deficiency resulted in reduced (P < 0.05) sera PO4 and parathyroid hormone (PTH) concentrations, while supra-adequate dietary PO4 improved (P < 0.05) feed conversion efficiency as compared to the PO4 adequate group. In vivo satellite cell proliferation was reduced (P < 0.05) among the PO4 deficient pigs, and these cells had altered in vitro expression of markers of myogenic progression. Further work to better understand early nutritional programming of satellite cells and the potential benefits of emphasizing early PO4 nutrition for future lean growth potential is warranted.

  16. Fatigue Crack Length Sizing Using a Novel Flexible Eddy Current Sensor Array

    Directory of Open Access Journals (Sweden)

    Ruifang Xie

    2015-12-01

    Full Text Available A flexible, array-type eddy current probe that is highly sensitive and capable of quantitative inspection is a practical requirement in nondestructive testing and a current research focus. A novel flexible planar eddy current sensor array for the inspection of microcrack presentation in critical parts of airplanes is developed in this paper. Both exciting and sensing coils are etched on polyimide films using a flexible printed circuit board technique, thus conforming the sensor to complex geometric structures. In order to serve the needs of condition-based maintenance (CBM), the proposed sensor array is comprised of 64 elements. Its spatial resolution is only 0.8 mm, and it is not only sensitive to shallow microcracks, but also capable of sizing the length of fatigue cracks. The details and advantages of our sensor design are introduced. The working principle and the crack responses are analyzed by finite element simulation, with which a crack length sizing algorithm is proposed. Experiments based on standard specimens are implemented to verify the validity of our simulation and the efficiency of the crack length sizing algorithm. Experimental results show that the sensor array is sensitive to microcracks, and is capable of crack length sizing with an accuracy within ±0.2 mm.

  17. Memetic algorithms for de novo motif-finding in biomedical sequences.

    Science.gov (United States)

    Bi, Chengpeng

    2012-09-01

    The objectives of this study are to design and implement a new memetic algorithm for de novo motif discovery, which is then applied to detect important signals hidden in various biomedical molecular sequences. In this paper, memetic algorithms are developed and tested in de novo motif-finding problems. Several strategies are employed in the algorithm design, not only to efficiently explore the multiple sequence local alignment space, but also to effectively uncover the molecular signals. As a result, there are a number of key features in the implementation of the memetic motif-finding algorithm (MaMotif), including a chromosome replacement operator, a chromosome alteration-aware local search operator, a truncated local search strategy, and a stochastic operation of local search imposed on individual learning. To test the new algorithm, we compare MaMotif with a few other similar algorithms using simulated and experimental data including genomic DNA, primary microRNA sequences (let-7 family), and transmembrane protein sequences. The new memetic motif-finding algorithm is successfully implemented in C++, and exhaustively tested with various simulated and real biological sequences. In the simulation, it shows that MaMotif is the most time-efficient algorithm compared with the others, that is, it runs 2 times faster than the expectation maximization (EM) method and 16 times faster than the genetic algorithm-based EM hybrid. In both simulated and experimental testing, results show that the new algorithm compares favorably with or is superior to the other algorithms. Notably, MaMotif is able to successfully discover the transcription factors' binding sites in chromatin immunoprecipitation followed by massively parallel sequencing (ChIP-Seq) data, correctly uncover the RNA splicing signals in gene expression, and precisely find the highly conserved helix motif in the transmembrane protein sequences, as well as rightly detect the palindromic segments in the primary microRNA sequences.

  18. Sequential optimization of approximate inhibitory rules relative to the length, coverage and number of misclassifications

    KAUST Repository

    Alsolami, Fawaz

    2013-01-01

    This paper is devoted to the study of algorithms for sequential optimization of approximate inhibitory rules relative to the length, coverage and number of misclassifications. These algorithms are based on extensions of the dynamic programming approach. The results of experiments for decision tables from the UCI Machine Learning Repository are discussed. © 2013 Springer-Verlag.

  19. Rice (Oryza sativa L.) Allogamy and Relationship with Agronomic Traits

    Directory of Open Access Journals (Sweden)

    Péricles de Carvalho Ferreira Neves

    2007-09-01

    Hybrid rice seed production, following the Chinese technique, requires a great amount of hand labor and is expensive. Alternatives to increase the outcrossing rate may help to reduce cost. Embrapa Rice and Beans' hybrid rice breeding program transferred allogamic traits (anther and stigma lengths) from Oryza longistaminata A. Chev. to the cultivated species Oryza sativa L. The objective of this study was to correlate allogamic and agronomic characters. O. longistaminata was crossed to O. sativa and then backcrossed twice to the cultivated line. Twenty-five F3:6-derived lines were produced and correlation studies between allogamic (stigma, anther, and spikelet length) and agronomic traits (panicle length, sterility, shattering, awn length, plant height, tillers per plant, and panicle exsertion) were performed. The experimental design was a randomized complete block with four replications. The trials were sown in two environments within the Embrapa Rice and Beans' experimental station. In general, there were poor genotypic and phenotypic correlations between allogamic and agronomic traits. Highly significant associations were found between stigma and anther length, stigma and awn length, anther and awn length, and panicle length and plant height.

    KEY-WORDS: Oryza longistaminata; outcrossing rate; hybrid rice; seed production.

  20. Optimal synthesis of four-bar steering mechanism using AIS and genetic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Ettefagh, Mir Mohammad; Javash, Morteza Saeidi [University of Tabriz, Tabriz (Iran, Islamic Republic of)

    2014-06-15

    Synthesis of a four-bar Ackermann steering mechanism was considered as an optimization problem for generating the best function between input and output links. The steering mechanism was designed through two heuristic optimization methods, namely, artificial immune system (AIS) algorithm and genetic algorithm (GA). The optimization was implemented using two methods: link length was selected as the optimization parameter in the first method, whereas precision point distribution was considered in the second method. Two of the links in the first method had the same length to achieve a symmetric mechanism; one of these lengths was considered as the optimization parameter. Five precision points were considered in the precision point distribution method, one of which was in the straight line condition, whereas the others were symmetric. The obtained results showed that the AIS algorithm can generate the closest function to the desired function in the first method. By contrast, GA can generate the closest function to the desired function with the least error in the second method.

  1. A fast ergodic algorithm for generating ensembles of equilateral random polygons

    Science.gov (United States)

    Varela, R.; Hinson, K.; Arsuaga, J.; Diao, Y.

    2009-03-01

    Knotted structures are commonly found in circular DNA and along the backbone of certain proteins. In order to properly estimate properties of these three-dimensional structures it is often necessary to generate large ensembles of simulated closed chains (i.e. polygons) of equal edge lengths (such polygons are called equilateral random polygons). However, finding efficient algorithms that properly sample the space of equilateral random polygons is a difficult problem. Currently there are no proven algorithms that generate equilateral random polygons according to their theoretical distribution. In this paper we propose a method that generates equilateral random polygons in a 'step-wise uniform' way. We prove that this method is ergodic in the sense that any given equilateral random polygon can be generated by this method, and we show that the time needed to generate an equilateral random polygon of length n is linear in terms of n. These two properties make this algorithm a big improvement over the existing generating methods. Detailed numerical comparisons of our algorithm with other widely used algorithms are provided.
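The abstract does not give the authors' "step-wise uniform" construction itself; as a hedged illustration of the kind of move used when sampling equilateral polygons, the classical crankshaft rotation below rotates a sub-chain about the axis through two chosen vertices. Being an isometry that fixes both endpoints, it preserves closure and every edge length. Function names are assumptions.

```python
import numpy as np

def regular_polygon(n):
    """A planar regular n-gon rescaled to unit edge length (a valid
    equilateral polygon to start a sampling chain from)."""
    ang = 2 * np.pi * np.arange(n) / n
    pts = np.stack([np.cos(ang), np.sin(ang), np.zeros(n)], axis=1)
    return pts / np.linalg.norm(pts[1] - pts[0])

def crankshaft(poly, i, j, theta):
    """Rotate the vertices strictly between indices i and j (i < j) by angle
    theta about the axis through poly[i] and poly[j]."""
    p, axis = poly[i], poly[j] - poly[i]
    axis = axis / np.linalg.norm(axis)
    # Rodrigues' rotation formula: R = I + sin(t) K + (1 - cos(t)) K^2
    K = np.array([[0.0, -axis[2], axis[1]],
                  [axis[2], 0.0, -axis[0]],
                  [-axis[1], axis[0], 0.0]])
    R = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
    out = poly.copy()
    out[i + 1:j] = p + (poly[i + 1:j] - p) @ R.T
    return out
```

Repeated crankshaft moves with random (i, j, theta) explore the space of equilateral polygons while keeping every edge at unit length.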

  2. Vision-based algorithms for high-accuracy measurements in an industrial bakery

    Science.gov (United States)

    Heleno, Paulo; Davies, Roger; Correia, Bento A. B.; Dinis, Joao

    2002-02-01

    This paper describes the machine vision algorithms developed for VIP3D, a measuring system used in an industrial bakery to monitor the dimensions and weight of loaves of bread (baguettes). The length and perimeter of more than 70 different varieties of baguette are measured with 1-mm accuracy, quickly, reliably and automatically. VIP3D uses a laser triangulation technique to measure the perimeter. The shape of the loaves is approximately cylindrical and the perimeter is defined as the convex hull of a cross-section perpendicular to the baguette axis at mid-length. A camera, mounted obliquely to the measuring plane, captures an image of a laser line projected onto the upper surface of the baguette. Three cameras are used to measure the baguette length, a solution adopted in order to minimize perspective-induced measurement errors. The paper describes in detail the machine vision algorithms developed to perform segmentation of the laser line and subsequent calculation of the perimeter of the baguette. The algorithms used to segment and measure the position of the ends of the baguette, to sub-pixel accuracy, are also described, as are the algorithms used to calibrate the measuring system and compensate for camera-induced image distortion.

  3. A quasi-Newton algorithm for large-scale nonlinear equations

    Directory of Open Access Journals (Sweden)

    Linghua Huang

    2017-02-01

    Full Text Available Abstract In this paper, the algorithm for large-scale nonlinear equations is designed by the following steps: (i) a conjugate gradient (CG) algorithm is designed as a sub-algorithm to obtain the initial points of the main algorithm, where the sub-algorithm's initial point does not have any restrictions; (ii) a quasi-Newton algorithm with the initial points given by the sub-algorithm is defined as the main algorithm, where a new nonmonotone line search technique is presented to get the step length α_k. The given nonmonotone line search technique can avoid computing the Jacobian matrix. The global convergence and the (1+q)-order convergence rate of the main algorithm are established under suitable conditions. Numerical results show that the proposed method is competitive with a similar method for large-scale problems.
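The paper's CG-initialized quasi-Newton method with its nonmonotone line search is not reproduced here; as a hedged baseline, the plain Broyden ("good") quasi-Newton iteration below solves F(x) = 0 while avoiding explicit Jacobian computation, which is the property the abstract emphasizes. The test system is a standard textbook example, not one from the paper.

```python
import numpy as np

def broyden(F, x0, tol=1e-10, max_iter=200):
    """Broyden's 'good' method: solve F(x) = 0 using a rank-one secant
    update of the Jacobian approximation B instead of derivatives."""
    x = np.asarray(x0, dtype=float)
    B = np.eye(len(x))                 # initial Jacobian approximation
    fx = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(fx) < tol:
            break
        s = np.linalg.solve(B, -fx)    # quasi-Newton step (unit step length)
        x_new = x + s
        f_new = F(x_new)
        y = f_new - fx
        # rank-one update enforcing the secant condition B_{k+1} s = y
        B += np.outer(y - B @ s, s) / (s @ s)
        x, fx = x_new, f_new
    return x
```

A production variant would add a (nonmonotone) line search to globalize convergence, as the paper proposes; the bare step above suffices for well-behaved problems.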

  4. A quantum algorithm for Viterbi decoding of classical convolutional codes

    OpenAIRE

    Grice, Jon R.; Meyer, David A.

    2014-01-01

    We present a quantum Viterbi algorithm (QVA) with better than classical performance under certain conditions: for instance, large constraint length Q and short decode frames N. In this paper the proposed algorithm is applied to decoding classical convolutional codes. Other applications of the classical Viterbi algorithm where Q is large (e.g. speech processing) could experience significant speedup with the QVA. The QVA exploits the fact that the decoding trellis is similar to the butterfly diagram of the fast Fourier transform.

  5. Synchronization Algorithm for SDN-controlled All-Optical TDM Switching in a Random Length Ring Network

    DEFF Research Database (Denmark)

    Kamchevska, Valerija; Cristofori, Valentina; Da Ros, Francesco

    2016-01-01

    We propose and demonstrate an algorithm that allows for automatic synchronization of SDN-controlled all-optical TDM switching nodes connected in a ring network. We experimentally show successful WDM-SDM transmission of data bursts between all ring nodes.

  6. Faster algorithms for RNA-folding using the Four-Russians method.

    Science.gov (United States)

    Venkatachalam, Balaji; Gusfield, Dan; Frid, Yelena

    2014-03-06

    The secondary structure that maximizes the number of non-crossing matchings between complementary bases of an RNA sequence of length n can be computed in O(n^3) time using Nussinov's dynamic programming algorithm. The Four-Russians method is a technique that reduces the running time for certain dynamic programming algorithms by a multiplicative factor after a preprocessing step where solutions to all smaller subproblems of a fixed size are exhaustively enumerated and solved. Frid and Gusfield designed an O(n^3/log n) algorithm for RNA folding using the Four-Russians technique. In their algorithm the preprocessing is interleaved with the algorithm computation. We simplify the algorithm and the analysis by doing the preprocessing once prior to the algorithm computation. We call this the two-vector method. We also show variants where instead of exhaustive preprocessing, we only solve the subproblems encountered in the main algorithm once and memoize the results. We give a simple proof of correctness and explore the practical advantages over the earlier method. The Nussinov algorithm admits an O(n^2) time parallel algorithm. We show a parallel algorithm using the two-vector idea that improves the time bound to O(n^2/log n). We have implemented the parallel algorithm on graphics processing units using the CUDA platform. We discuss the organization of the data structures to exploit coalesced memory access for fast running times. The ideas to organize the data structures also help in improving the running time of the serial algorithms. For sequences of length up to 6000 bases the parallel algorithm takes only about 2.5 seconds and the two-vector serial method takes about 57 seconds on a desktop and 15 seconds on a server. Among the serial algorithms, the two-vector and memoized versions are faster than the Frid-Gusfield algorithm by a factor of 3, and are faster than Nussinov by up to a factor of 20.
The source-code for the algorithms is available at http://github.com/ijalabv/FourRussiansRNAFolding.
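The O(n^3) Nussinov recurrence mentioned above can be sketched directly; the Watson-Crick pairing set and the minimal-loop parameter below are simplifying assumptions, not details of the paper's implementation.

```python
def nussinov(seq, min_loop=0):
    """Nussinov DP: maximum number of non-crossing complementary base pairs.

    dp[i][j] holds the best score for the subsequence seq[i..j]; filling the
    table over all spans and split points costs O(n^3) time.
    """
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G")}
    n = len(seq)
    if n == 0:
        return 0
    dp = [[0] * n for _ in range(n)]
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i + 1][j]                      # leave base i unpaired
            for k in range(i + 1, j + 1):            # pair base i with base k
                if (seq[i], seq[k]) in pairs and k - i > min_loop:
                    left = dp[i + 1][k - 1] if k - 1 > i else 0
                    right = dp[k + 1][j] if k + 1 <= j else 0
                    best = max(best, 1 + left + right)
            dp[i][j] = best
    return dp[0][n - 1]
```

The Four-Russians speedup replaces the innermost scan over k with table lookups on precomputed blocks; the plain version above is the baseline it accelerates.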

  7. Unsupervised Idealization of Ion Channel Recordings by Minimum Description Length

    DEFF Research Database (Denmark)

    Gnanasambandam, Radhakrishnan; Nielsen, Morten S; Nicolai, Christopher

    2017-01-01

    We describe and characterize an idealization algorithm based on Rissanen's Minimum Description Length (MDL) Principle. This method uses minimal assumptions and idealizes ion channel recordings without requiring a detailed user input or a priori assumptions about channel conductance and kinetics. Furthermore, we demonstrate that correlation analysis of conductance steps can resolve properties of single ion channels in recordings contaminated by signals from multiple channels. We first validated our methods on simulated data defined with a range of different signal-to-noise levels, and then showed that our algorithm can recover channel currents and their substates from recordings with multiple channels, even under conditions of high noise. We then tested the MDL algorithm on real experimental data from human PIEZO1 channels and found that our method revealed the presence of substates with alternate conductances.

  8. Making the error-controlling algorithm of observable operator models constructive.

    Science.gov (United States)

    Zhao, Ming-Jie; Jaeger, Herbert; Thon, Michael

    2009-12-01

    Observable operator models (OOMs) are a class of models for stochastic processes that properly subsumes the class that can be modeled by finite-dimensional hidden Markov models (HMMs). One of the main advantages of OOMs over HMMs is that they admit asymptotically correct learning algorithms. A series of learning algorithms has been developed, with increasing computational and statistical efficiency, whose recent culmination was the error-controlling (EC) algorithm developed by the first author. The EC algorithm is an iterative, asymptotically correct algorithm that yields (and minimizes) an assured upper bound on the modeling error. The run time is faster by at least one order of magnitude than EM-based HMM learning algorithms and yields significantly more accurate models than the latter. Here we present a significant improvement of the EC algorithm: the constructive error-controlling (CEC) algorithm. CEC inherits from EC the main idea of minimizing an upper bound on the modeling error but is constructive where EC needs iterations. As a consequence, we obtain further gains in learning speed without loss in modeling accuracy.

  9. Employing a Monte Carlo algorithm in Newton-type methods for restricted maximum likelihood estimation of genetic parameters.

    Directory of Open Access Journals (Sweden)

    Kaarina Matilainen

    Full Text Available Estimation of variance components by Monte Carlo (MC) expectation maximization (EM) restricted maximum likelihood (REML) is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the need for a large number of iterations of the EM algorithm. To decrease the computing time we explored the use of faster converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR), where the information matrix was generated via sampling; MC average information (AI), where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was searched using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with corresponding analytical ones. MC NR REML and MC AI REML enhanced convergence compared to MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration demanded extra solving of mixed model equations by the number of parameters to be estimated. MC Broyden's method required the largest number of MC samples with our small data and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing a MC algorithm. Overall, use of a MC algorithm with Newton-type methods proved feasible and the results encourage testing of these methods with different kinds of large-scale problem settings.

  10. The logic of logistics: theory, algorithms and applications for logistics management

    Directory of Open Access Journals (Sweden)

    Claudio Barbieri da Cunha

    2010-04-01

    Full Text Available

    In this text the author presents a review of the book "The Logic of Logistics: Theory, Algorithms and Applications for Logistics Management", by Julien Bramel and David Simchi-Levi, published by Springer-Verlag in 1997.

  11. Reference Gene Selection in the Desert Plant Eremosparton songoricum

    Directory of Open Access Journals (Sweden)

    Dao-Yuan Zhang

    2012-06-01

    Full Text Available Eremosparton songoricum (Litv.) Vass. (E. songoricum) is a rare and extremely drought-tolerant desert plant that holds promise as a model organism for the identification of genes associated with water deficit stress. Here, we cloned and evaluated the expression of eight candidate reference genes using quantitative real-time reverse transcriptase polymerase chain reactions. The expression of these candidate reference genes was analyzed in a diverse set of 20 samples including various E. songoricum plant tissues exposed to multiple environmental stresses. GeNorm analysis indicated that expression stability varied between the reference genes in the different experimental conditions, but the two most stable reference genes were sufficient for normalization in most conditions. EsEF and Esα-TUB were sufficient for various stress conditions, EsEF and EsACT were suitable for samples of differing germination stages, and EsGAPDH and EsUBQ were most stable across multiple adult tissue samples. The Es18S gene was unsuitable as a reference gene in our analysis. In addition, the expression level of the drought-stress related transcription factor EsDREB2 verified the utility of E. songoricum reference genes and indicated that no single gene was adequate for normalization on its own. This is the first systematic report on the selection of reference genes in E. songoricum, and these data will facilitate future work on gene expression in this species.

  12. Influence on dose calculation by difference of dose calculation algorithms in stereotactic lung irradiation. Comparison of pencil beam convolution (inhomogeneity correction: Batho power law) and analytical anisotropic algorithm

    International Nuclear Information System (INIS)

    Tachibana, Masayuki; Noguchi, Yoshitaka; Fukunaga, Jyunichi; Hirano, Naomi; Yoshidome, Satoshi; Hirose, Takaaki

    2009-01-01

    The monitor unit (MU) was calculated by pencil beam convolution (inhomogeneity correction algorithm: Batho power law) [PBC (BPL)], which is the measurement-based dose calculation algorithm used previously in stereotactic lung irradiation studies. The recalculation was done by the analytical anisotropic algorithm (AAA), which is a dose calculation algorithm based on theoretical data. The MU calculated by PBC (BPL) and AAA was compared for each field. In the comparison of 1031 fields in 136 cases, the MU calculated by PBC (BPL) was about 2% smaller than that calculated by AAA. This depends on whether the calculation accounts for the range of secondary electrons. In particular, the difference in MU is influenced by the X-ray energy. With the same X-ray energy, when the irradiation field size is small, the lung path length is long, the lung path length percentage is large, and the CT value of the lung is low, the difference in MU increases. (author)

  13. Detecting Scareware by Mining Variable Length Instruction Sequences

    OpenAIRE

    Shahzad, Raja Khurram; Lavesson, Niklas

    2011-01-01

    Scareware is a recent type of malicious software that may pose financial and privacy-related threats to novice users. Traditional countermeasures, such as anti-virus software, require regular updates and often lack the capability of detecting novel (unseen) instances. This paper presents a scareware detection method that is based on the application of machine learning algorithms to learn patterns in extracted variable length opcode sequences derived from instruction sequences of binary files.

  14. Optimum design for rotor-bearing system using advanced genetic algorithm

    International Nuclear Information System (INIS)

    Kim, Young Chan; Choi, Seong Pil; Yang, Bo Suk

    2001-01-01

    This paper describes a combinational method to compute the global and local solutions of optimization problems. The present hybrid algorithm uses both a genetic algorithm and a local concentrated search algorithm (e.g. the simplex method). The hybrid algorithm is not only faster than the standard genetic algorithm but also supplies a more accurate solution. In addition, this algorithm can find the global and local optimum solutions. The present algorithm can be applied to minimize the resonance response (Q factor) and to yield critical speeds as far from the operating speed as possible. These factors play very important roles in designing a rotor-bearing system under the dynamic behavior constraint. In the present work, the shaft diameter, the bearing length, and the clearance are used as the design variables.

  15. The impact of signal normalization on seizure detection using line length features.

    Science.gov (United States)

    Logesparan, Lojini; Rodriguez-Villegas, Esther; Casson, Alexander J

    2015-10-01

    Accurate automated seizure detection remains a desirable but elusive target for many neural monitoring systems. While much attention has been given to the different feature extractions that can be used to highlight seizure activity in the EEG, very little formal attention has been given to the normalization that these features are routinely paired with. This normalization is essential in patient-independent algorithms to correct for broad-level differences in the EEG amplitude between people, and in patient-dependent algorithms to correct for amplitude variations over time. It is crucial, however, that the normalization used does not have a detrimental effect on the seizure detection process. This paper presents the first formal investigation into the impact of signal normalization techniques on seizure discrimination performance when using the line length feature to emphasize seizure activity. Comparing five normalization methods, based upon the mean, median, standard deviation, signal peak and signal range, we demonstrate differences in seizure detection accuracy (assessed as the area under a sensitivity-specificity ROC curve) of up to 52 %. This is despite the same analysis feature being used in all cases. Further, changes in performance of up to 22 % are present depending on whether the normalization is applied to the raw EEG itself or directly to the line length feature. Our results highlight the median decaying memory as the best current approach for providing normalization when using line length features, and they quantify the under-appreciated challenge of providing signal normalization that does not impair seizure detection algorithm performance.
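The line length feature itself is simple (the sum of absolute sample-to-sample differences per window), and a per-record median normalization can be sketched as below. The paper's preferred "median decaying memory" updates the median online; this static sketch omits that, and the window size and test signal are illustrative assumptions.

```python
import numpy as np

def line_length(x, win):
    """Line length of consecutive non-overlapping windows:
    the sum of |x[n] - x[n-1]| within each window."""
    d = np.abs(np.diff(x))
    n = len(d) // win
    return d[: n * win].reshape(n, win).sum(axis=1)

def median_normalized_line_length(x, win):
    """Divide the feature by its median so the broad amplitude scale of a
    recording cancels out (a static stand-in for a decaying-memory median)."""
    feat = line_length(x, win)
    return feat / np.median(feat)
```

Because the normalization is a single positive scale factor, a global gain change in the EEG leaves the normalized feature untouched, which is exactly why normalization choice matters less than feature shape here.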

  16. PRESEE: an MDL/MML algorithm to time-series stream segmenting.

    Science.gov (United States)

    Xu, Kaikuo; Jiang, Yexi; Tang, Mingjie; Yuan, Changan; Tang, Changjie

    2013-01-01

    Time-series stream is one of the most common data types in the data mining field. It is prevalent in fields such as stock market, ecology, and medical care. Segmentation is a key step to accelerate the processing speed of time-series stream mining. Previous algorithms for segmenting mainly focused on the issue of ameliorating precision instead of paying much attention to the efficiency. Moreover, the performance of these algorithms depends heavily on parameters, which are hard for the users to set. In this paper, we propose PRESEE (parameter-free, real-time, and scalable time-series stream segmenting algorithm), which greatly improves the efficiency of time-series stream segmenting. PRESEE is based on both MDL (minimum description length) and MML (minimum message length) methods, which could segment the data automatically. To evaluate the performance of PRESEE, we conduct several experiments on time-series streams of different types and compare it with the state-of-the-art algorithm. The empirical results show that PRESEE is very efficient for real-time stream datasets by improving segmenting speed nearly ten times. The novelty of this algorithm is further demonstrated by the application of PRESEE in segmenting real-time stream datasets from ChinaFLUX sensor networks data stream.

  17. Optimization of the Critical Diameter and Average Path Length of Social Networks

    Directory of Open Access Journals (Sweden)

    Haifeng Du

    2017-01-01

Full Text Available Optimizing average path length (APL) by adding shortcut edges has been widely discussed in connection with social networks, but the relationship between network diameter and APL is generally ignored in the dynamic optimization of APL. In this paper, we analyze this relationship and transform the problem of optimizing APL into the problem of decreasing the diameter to 2. We propose a mathematical model based on a memetic algorithm. Experimental results show that our algorithm can efficiently solve this problem as well as optimize APL.
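The reduction described above, optimizing APL by driving the diameter down to 2, can be illustrated with a plain BFS-based sketch. The paper's memetic algorithm is not reproduced here; this greedy variant simply connects a currently most-distant pair of nodes until the diameter target is met.

```python
from collections import deque

def distances_from(adj, src):
    # BFS shortest-path lengths from src in an unweighted graph.
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def apl_and_diameter(adj):
    # Average path length and diameter of a connected graph.
    total, pairs, diam = 0, 0, 0
    for u in adj:
        for v, dv in distances_from(adj, u).items():
            if v != u:
                total += dv
                pairs += 1
                diam = max(diam, dv)
    return total / pairs, diam

def add_shortcuts_until_diameter(adj, target=2):
    """Greedy sketch: repeatedly join a most-distant pair until the
    diameter drops to `target` (assumes a connected graph)."""
    while True:
        _, diam = apl_and_diameter(adj)
        if diam <= target:
            return adj
        for u in adj:
            far = [v for v, dv in distances_from(adj, u).items() if dv == diam]
            if far:  # u realizes the diameter; connect it to a farthest node
                v = far[0]
                adj[u].add(v)
                adj[v].add(u)
                break
```

Each added shortcut shortens every path that previously had to traverse the joined pair, so APL falls as a side effect of the diameter reduction, which is exactly the transformation the abstract proposes.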

18. Stroke in a neurology ward: etiologies, complications and length of stay

    Directory of Open Access Journals (Sweden)

    Rodrigo Bomeny de Paulo

    2009-01-01

Full Text Available PURPOSES: The purposes of this study were to evaluate the complications and length of stay of patients admitted with a diagnosis of ischemic stroke (IS) in the acute or subacute phase to a general neurology ward in São Paulo, Brazil, and to investigate the influence of age, risk factors for vascular disease, arterial territory, and etiology on complications and length of stay. METHODS: Data from 191 IS patients were collected prospectively. RESULTS: Fifty-one patients (26.7%) presented at least one clinical complication during the stay; pneumonia was the most frequent complication. Mean length of stay was 16.8±13.8 days. Multivariate analysis revealed a correlation between younger age and lower complication rates (OR=0.92-0.97, p<0.001). Presence of complications was the only factor that independently influenced length of stay (OR=4.20; CI=1.92-8.84; p<0.0001). CONCLUSION: These results should be considered in the planning and organization of IS care in Brazil.

  19. Linac design algorithm with symmetric segments

    International Nuclear Information System (INIS)

    Takeda, Harunori; Young, L.M.; Nath, S.; Billen, J.H.; Stovall, J.E.

    1996-01-01

The cell lengths in linacs of traditional design are typically graded as a function of particle velocity. By making groups of cells and individual cells symmetric in both the CCDTL and CCL, the cavity design as well as the mechanical design and fabrication are simplified without compromising performance. We have implemented a design algorithm in the PARMILA code in which cells and multi-cavity segments are made symmetric, significantly reducing the number of unique components. Using the symmetric algorithm, a sample linac design was generated and its performance compared with that of a similar conventional design.

  20. A note on a perfect simulation algorithm for marked Hawkes processes

    DEFF Research Database (Denmark)

    Møller, Jesper; Rasmussen, Jakob Gulddahl

    2004-01-01

The usual straightforward simulation algorithm for (marked or unmarked) Hawkes processes suffers from edge effects. In this note we describe a perfect simulation algorithm which is partly derived as in Brix and Kendall (2002) and partly uses upper and lower processes as in the Propp-Wilson algorithm (1996), or rather as in the dominated CFTP algorithm by Kendall and Møller (2000). Various monotonicity properties and approximations of the cumulative distribution function for the length of a so-called cluster in a marked Hawkes process play an important role.

  1. A Learning Algorithm for Multimodal Grammar Inference.

    Science.gov (United States)

    D'Ulizia, A; Ferri, F; Grifoni, P

    2011-12-01

The high costs of development and maintenance of multimodal grammars in integrating and understanding input in multimodal interfaces lead to the investigation of novel algorithmic solutions for automating grammar generation and updating processes. Many algorithms for context-free grammar inference have been developed in the natural language processing literature. An extension of these algorithms toward the inference of multimodal grammars is necessary for multimodal input processing. In this paper, we propose a novel grammar inference mechanism that allows us to learn a multimodal grammar from positive samples of multimodal sentences. The algorithm first generates the multimodal grammar that is able to parse the positive sample sentences and, afterward, makes use of two learning operators and the minimum description length metric to improve the grammar description and to avoid the over-generalization problem. The experimental results highlight the acceptable performance of the proposed algorithm, which has a very high probability of parsing valid sentences.

  2. Track length estimation applied to point detectors

    International Nuclear Information System (INIS)

    Rief, H.; Dubi, A.; Elperin, T.

    1984-01-01

    The concept of the track length estimator is applied to the uncollided point flux estimator (UCF) leading to a new algorithm of calculating fluxes at a point. It consists essentially of a line integral of the UCF, and although its variance is unbounded, the convergence rate is that of a bounded variance estimator. In certain applications, involving detector points in the vicinity of collimated beam sources, it has a lower variance than the once-more-collided point flux estimator, and its application is more straightforward

  3. Solving the SAT problem using Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Arunava Bhattacharjee

    2017-08-01

Full Text Available In this paper we propose our genetic algorithm for solving the SAT problem. We introduce various crossover and mutation techniques and then make a comparative analysis between them in order to find out which techniques are best suited for solving a SAT instance. Before the genetic algorithm is applied to an instance, it is better to search for unit and pure literals in the given formula and eliminate them. This can considerably reduce the search space, and to demonstrate this we tested our algorithm on some random SAT instances. However, to analyse the various crossover and mutation techniques and also to evaluate the optimality of our algorithm we performed extensive experiments on benchmark instances of the SAT problem. We also estimated the ideal crossover length that would maximise the chances of solving a given SAT instance.
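A minimal genetic algorithm for SAT in the spirit described above (single-point crossover, bit-flip mutation, fitness = number of satisfied clauses) can be sketched as follows. The operators and parameters here are illustrative, not the paper's tuned set, and the unit/pure-literal preprocessing is omitted.

```python
import random

def satisfied(clauses, assign):
    # Count clauses with at least one true literal. Literal k > 0 means
    # variable k is true; k < 0 means variable |k| is false (1-indexed).
    return sum(any((lit > 0) == assign[abs(lit) - 1] for lit in cl)
               for cl in clauses)

def ga_sat(clauses, n_vars, pop_size=40, gens=200, p_mut=0.05, seed=0):
    """Toy GA for SAT: truncation selection, single-point crossover,
    independent bit-flip mutation. Returns the best assignment found."""
    rng = random.Random(seed)
    pop = [[rng.random() < 0.5 for _ in range(n_vars)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda a: -satisfied(clauses, a))
        if satisfied(clauses, pop[0]) == len(clauses):
            return pop[0]                      # all clauses satisfied
        survivors = pop[:pop_size // 2]        # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_vars)     # single-point crossover
            children.append([bit != (rng.random() < p_mut)   # bit flip
                             for bit in a[:cut] + b[cut:]])
        pop = survivors + children
    pop.sort(key=lambda a: -satisfied(clauses, a))
    return pop[0]
```

The crossover cut position is the "crossover length" knob the abstract refers to; here it is drawn uniformly rather than tuned.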

  4. The K tree score: quantification of differences in the relative branch length and topology of phylogenetic trees.

    Science.gov (United States)

    Soria-Carrasco, Víctor; Talavera, Gerard; Igea, Javier; Castresana, Jose

    2007-11-01

    We introduce a new phylogenetic comparison method that measures overall differences in the relative branch length and topology of two phylogenetic trees. To do this, the algorithm first scales one of the trees to have a global divergence as similar as possible to the other tree. Then, the branch length distance, which takes differences in topology and branch lengths into account, is applied to the two trees. We thus obtain the minimum branch length distance or K tree score. Two trees with very different relative branch lengths get a high K score whereas two trees that follow a similar among-lineage rate variation get a low score, regardless of the overall rates in both trees. There are several applications of the K tree score, two of which are explained here in more detail. First, this score allows the evaluation of the performance of phylogenetic algorithms, not only with respect to their topological accuracy, but also with respect to the reproduction of a given branch length variation. In a second example, we show how the K score allows the selection of orthologous genes by choosing those that better follow the overall shape of a given reference tree. http://molevol.ibmb.csic.es/Ktreedist.html
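The two-step procedure above (scale one tree, then apply the branch length distance) can be sketched for the simplified case of two trees with identical topology, represented as dicts mapping a branch identifier to its length. The least-squares scaling factor is an assumption consistent with, but not quoted from, the abstract.

```python
import math

def k_tree_score(ref_lengths, test_lengths):
    """Sketch of the K tree score for trees given as {branch_id: length}.

    The test tree is scaled by the least-squares factor K that best matches
    the reference, then the branch length distance of the scaled tree is
    returned. Branches present in only one tree contribute their full length.
    """
    shared = set(ref_lengths) & set(test_lengths)
    num = sum(test_lengths[b] * ref_lengths[b] for b in shared)
    den = sum(test_lengths[b] ** 2 for b in shared)
    K = num / den if den else 1.0  # optimal scaling of the test tree
    sq = 0.0
    for b in set(ref_lengths) | set(test_lengths):
        r = ref_lengths.get(b, 0.0)
        t = K * test_lengths.get(b, 0.0)
        sq += (r - t) ** 2
    return K, math.sqrt(sq)
```

Two trees that differ only by a global rate factor thus score zero, while differing among-lineage rate variation or topology (unmatched branches) raises the score, matching the behaviour the abstract describes.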

  5. Faster exact algorithms for computing Steiner trees in higher dimensional Euclidean spaces

    DEFF Research Database (Denmark)

    Fonseca, Rasmus; Brazil, Marcus; Winter, Pawel

    The Euclidean Steiner tree problem asks for a network of minimum total length interconnecting a finite set of points in d-dimensional space. For d ≥ 3, only one practical algorithmic approach exists for this problem --- proposed by Smith in 1992. A number of refinements of Smith's algorithm have...

  6. An Imaging System for Automated Characteristic Length Measurement of Debrisat Fragments

    Science.gov (United States)

    Moraguez, Mathew; Patankar, Kunal; Fitz-Coy, Norman; Liou, J.-C.; Sorge, Marlon; Cowardin, Heather; Opiela, John; Krisko, Paula H.

    2015-01-01

    The debris fragments generated by DebriSat's hypervelocity impact test are currently being processed and characterized through an effort of NASA and USAF. The debris characteristics will be used to update satellite breakup models. In particular, the physical dimensions of the debris fragments must be measured to provide characteristic lengths for use in these models. Calipers and commercial 3D scanners were considered as measurement options, but an automated imaging system was ultimately developed to measure debris fragments. By automating the entire process, the measurement results are made repeatable and the human factor associated with calipers and 3D scanning is eliminated. Unlike using calipers to measure, the imaging system obtains non-contact measurements to avoid damaging delicate fragments. Furthermore, this fully automated measurement system minimizes fragment handling, which reduces the potential for fragment damage during the characterization process. In addition, the imaging system reduces the time required to determine the characteristic length of the debris fragment. In this way, the imaging system can measure the tens of thousands of DebriSat fragments at a rate of about six minutes per fragment, compared to hours per fragment in NASA's current 3D scanning measurement approach. The imaging system utilizes a space carving algorithm to generate a 3D point cloud of the article being measured and a custom developed algorithm then extracts the characteristic length from the point cloud. This paper describes the measurement process, results, challenges, and future work of the imaging system used for automated characteristic length measurement of DebriSat fragments.
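As a highly simplified illustration of the final step above, extracting a characteristic length from a carved point cloud, the sketch below averages the extents along three orthogonal axes. The real system derives its axes from the fragment's measured geometry; using the coordinate axes here is an assumption for brevity.

```python
def characteristic_length(points):
    """Mean of the extents along three orthogonal (here coordinate) axes.

    `points` is an iterable of (x, y, z) tuples from a carved point cloud.
    """
    xs, ys, zs = zip(*points)
    extents = (max(xs) - min(xs), max(ys) - min(ys), max(zs) - min(zs))
    return sum(extents) / 3.0
```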

7. A specimen of Sorex cfr. samniticus (Insectivora, Soricidae) in Barn Owl's pellets from the Murge plateau (Apulia, Italy)

    Directory of Open Access Journals (Sweden)

    Giovanni Ferrara

    1992-07-01

Full Text Available Abstract In a lot of Barn Owl's pellets from the Murge plateau a specimen of Sorex sp. was detected. Thanks to some morphological and morphometrical features, the cranial bones can be tentatively attributed to Sorex samniticus Altobello, 1926. The genus Sorex had not previously been recorded in the Apulian fauna south of the Gargano district; the origin and significance of the above record is briefly discussed, the actual presence of a natural population of Sorex in the Murge being not yet proved.

8. Effect of plant extracts of Polygonum hydropiperoides, Solanum nigrum and Calliandra pittieri on the fall armyworm (Spodoptera frugiperda)

    Directory of Open Access Journals (Sweden)

    Lizarazo H. Karol

    2008-12-01

    Full Text Available

The fall armyworm Spodoptera frugiperda is one of the pests that most affect crops in the Sumapaz region (Cundinamarca, Colombia). It is currently controlled mainly by applying synthetic chemical products; however, the application of plant extracts has emerged as an alternative with lower environmental impact. This control approach exploits the secondary metabolites contained in plants, which can inhibit insect development. The present study therefore evaluated the insecticidal and antifeedant effect of plant extracts of barbasco Polygonum hydropiperoides (Polygonaceae), carbonero Calliandra pittieri (Mimosaceae) and black nightshade Solanum nigrum (Solanaceae) on S. frugiperda larvae of the maize biotype. A mass rearing of the insect was established in the laboratory using a natural diet of maize leaves. Plant extracts were then obtained using solvents of high polarity (water and ethanol) and medium polarity (dichloromethane) and applied to second-instar larvae. The most notable results were obtained with the dichloromethane extracts of P. hydropiperoides, which at the different doses tested achieved 100% mortality 12 days after application and an antifeedant effect reflected in maize foliage consumption below 4%, effects similar to those of the commercial control (chlorpyrifos).

  9. Sensor Data Quality and Angular Rate Down-Selection Algorithms on SLS EM-1

    Science.gov (United States)

    Park, Thomas; Smith, Austin; Oliver, T. Emerson

    2018-01-01

The NASA Space Launch System Block 1 launch vehicle is equipped with an Inertial Navigation System (INS) and multiple Rate Gyro Assemblies (RGA) that are used in the Guidance, Navigation, and Control (GN&C) algorithms. The INS provides the inertial position, velocity, and attitude of the vehicle along with both angular rate and specific force measurements. Additionally, multiple sets of co-located rate gyros supply angular rate data. The collection of angular rate data, taken along the launch vehicle, is used to separate out vehicle motion from flexible body dynamics. Since the system architecture uses redundant sensors, the capability was developed to evaluate the health (or validity) of the independent measurements. A suite of Sensor Data Quality (SDQ) algorithms is responsible for assessing the angular rate data from the redundant sensors. When failures are detected, SDQ will take the appropriate action and disqualify or remove faulted sensors from forward processing. Additionally, the SDQ algorithms contain logic for down-selecting the angular rate data used by the GN&C software from the set of healthy measurements. This paper explores the trades and analyses that were performed in selecting a set of robust fault-detection algorithms included in the GN&C flight software. These trades included both an assessment of hardware-provided health and status data as well as an evaluation of different algorithms based on time-to-detection, type of failures detected, and probability of detecting false positives. We then provide an overview of the algorithms used for both fault detection and measurement down-selection. We next discuss the role of trajectory design, flexible-body models, and vehicle response to off-nominal conditions in setting the detection thresholds. Lastly, we present lessons learned from software integration and hardware-in-the-loop testing.

10. Rapid Development of Microsatellite Markers with 454 Pyrosequencing in a Vulnerable Fish, the Mottled Skate, Raja pulchra

    Directory of Open Access Journals (Sweden)

    Jung-Ha Kang

    2012-06-01

Full Text Available The mottled skate, Raja pulchra, is an economically valuable fish. However, due to a severe population decline, it is listed as a vulnerable species by the International Union for Conservation of Nature. To analyze its genetic structure and diversity, microsatellite markers were developed using 454 pyrosequencing. A total of 17,033 reads containing dinucleotide microsatellite repeat units (mean, 487 base pairs) were identified from 453,549 reads. Among 32 loci containing more than nine repeat units, 20 primer sets (62%) produced strong PCR products, of which 14 were polymorphic. In an analysis of 60 individuals from two R. pulchra populations, the number of alleles per locus ranged from 1-10, and the mean allelic richness was 4.7. No linkage disequilibrium was found between any pair of loci, indicating that the markers were independent. The Hardy-Weinberg equilibrium test showed significant deviation in two of the 28 single loci after sequential Bonferroni's correction. Using 11 primer sets, cross-species amplification was demonstrated in nine related species from four families within two classes. Among the 11 loci amplified from three other Rajidae family species, three loci were polymorphic. A monomorphic locus was amplified in all three Rajidae family species and the Dasyatidae family. Two Rajidae polymorphic loci amplified monomorphic target DNAs in four species belonging to the Carcharhiniformes class, and another was polymorphic in two Carcharhiniformes species.

  11. Research on Adaptive Optics Image Restoration Algorithm by Improved Expectation Maximization Method

    OpenAIRE

    Zhang, Lijuan; Li, Dongming; Su, Wei; Yang, Jinhua; Jiang, Yutong

    2014-01-01

To improve the restoration of adaptive optics images, we put forward a deconvolution algorithm, improved by the EM algorithm, that jointly processes multiframe adaptive optics images based on expectation-maximization theory. First, we build a mathematical model for the degraded multiframe adaptive optics images. The function model for the point spread over time is deduced from the phase error. The AO images are denoised using the image power spectral density and support constrain...
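In its classical single-frame Poisson form, EM-based image deconvolution is the Richardson-Lucy iteration, so a minimal 1-D sketch of that update may help. This is not the paper's multiframe algorithm, only the underlying EM step, with circular boundaries and a normalized point spread function assumed for simplicity.

```python
def convolve(signal, kernel):
    # Same-size circular convolution keeps the sketch index-safe.
    n, m = len(signal), len(kernel)
    half = m // 2
    return [sum(kernel[j] * signal[(i + j - half) % n] for j in range(m))
            for i in range(n)]

def richardson_lucy(observed, psf, iters=50):
    """Richardson-Lucy deconvolution, the classic EM iteration for Poisson
    imaging: estimate <- estimate * (psf_mirror * (observed / (psf * estimate))).
    """
    psf_mirror = psf[::-1]
    est = [1.0] * len(observed)  # flat nonnegative initial estimate
    for _ in range(iters):
        blurred = convolve(est, psf)
        ratio = [o / b if b > 0 else 0.0 for o, b in zip(observed, blurred)]
        correction = convolve(ratio, psf_mirror)
        est = [e * c for e, c in zip(est, correction)]
    return est
```

With a normalized PSF the iteration conserves total flux while progressively re-concentrating it, which is why the estimate sharpens toward the unblurred image.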

12. A New Natural Lactone from Dimocarpus longan Lour. Seeds

    Directory of Open Access Journals (Sweden)

    Zhongjun Li

    2012-08-01

Full Text Available A new natural product named longanlactone was isolated from Dimocarpus longan Lour. seeds. Its structure was determined as 3-(2-acetyl-1H-pyrrol-1-yl)-5-(prop-2-yn-1-yl)dihydrofuran-2(3H)-one by spectroscopic methods and HRESIMS.

13. Purification, Characterization and Antioxidant Activities in Vitro and in Vivo of the Polysaccharides from Boletus edulis Bull

    Directory of Open Access Journals (Sweden)

    Yijun Fan

    2012-07-01

Full Text Available A water-soluble polysaccharide (BEBP) was extracted from Boletus edulis Bull using hot water extraction followed by ethanol precipitation. The polysaccharide BEBP was further purified by chromatography on a DEAE-cellulose column, giving three major polysaccharide fractions termed BEBP-1, BEBP-2 and BEBP-3. In the next experiment, the average molecular weight (Mw), IR spectra and monosaccharide composition of the three polysaccharide fractions were determined. The evaluation of antioxidant activities both in vitro and in vivo suggested that BEBP-3 had good potential antioxidant activity, and should be explored as a novel potential antioxidant.

  14. FPGA Hardware Acceleration of a Phylogenetic Tree Reconstruction with Maximum Parsimony Algorithm

    OpenAIRE

    BLOCK, Henry; MARUYAMA, Tsutomu

    2017-01-01

    In this paper, we present an FPGA hardware implementation for a phylogenetic tree reconstruction with a maximum parsimony algorithm. We base our approach on a particular stochastic local search algorithm that uses the Progressive Neighborhood and the Indirect Calculation of Tree Lengths method. This method is widely used for the acceleration of the phylogenetic tree reconstruction algorithm in software. In our implementation, we define a tree structure and accelerate the search by parallel an...

  15. A new algorithm for the integration of exponential and logarithmic functions

    Science.gov (United States)

    Rothstein, M.

    1977-01-01

    An algorithm for symbolic integration of functions built up from the rational functions by repeatedly applying either the exponential or logarithm functions is discussed. This algorithm does not require polynomial factorization nor partial fraction decomposition and requires solutions of linear systems with only a small number of unknowns. It is proven that if this algorithm is applied to rational functions over the integers, a computing time bound for the algorithm can be obtained which is a polynomial in a bound on the integer length of the coefficients, and in the degrees of the numerator and denominator of the rational function involved.

16. Angiostrongylus vasorum in red foxes (Vulpes vulpes) and badgers (Meles meles) from Central and Northern Italy

    Directory of Open Access Journals (Sweden)

    Marta Magi

    2010-06-01

Full Text Available Abstract During 2004-2005 and 2007-2008, 189 foxes (Vulpes vulpes) and 6 badgers (Meles meles) were collected in different areas of Central and Northern Italy (Piedmont, Liguria and Tuscany) and examined for Angiostrongylus vasorum infection. The prevalence of the infection was significantly different in the areas considered, with the highest values in the district of Imperia (80%, Liguria) and in Montezemolo (70%, southern Piedmont); the prevalence in Tuscany was 7%. One badger collected in the area of Imperia turned out to be infected, representing the first report of the parasite in this species in Italy. Further studies are needed to evaluate the role played by fox populations as reservoirs of infection and the probability of its spreading to domestic dogs.

    doi:10.4404/hystrix-20.2-4442

  17. Universal algorithm of time sharing

    International Nuclear Information System (INIS)

    Silin, I.N.; Fedyun'kin, E.D.

    1979-01-01

A timesharing algorithm is proposed for a wide class of single- and multiprocessor computer configurations. The dynamic priority is a piecewise-constant function of the channel characteristic and the system time quantum, and the interactive job quantum has variable length; a recurrence formula for the characteristic is derived. The concept of a background job is introduced: background jobs load the processor whenever high-priority jobs are inactive, and a background quality function is defined on the basis of statistical data gathered during timesharing. The algorithm includes an optimal thrashing-avoidance procedure for job replacement in memory. System time is shared in proportion to the external priorities for all sufficiently active computing channels (background included), and a fast response is guaranteed for interactive jobs that use little time and memory. External priority control is reserved for the high-level scheduler. Experience with implementing the algorithm on the BESM-6 computer at JINR is discussed.

18. Assessment of Genetic Fidelity in Rauvolfia serpentina Plantlets Grown from Synthetic (Encapsulated) Seeds Following in Vitro Storage at 4 °C

    Directory of Open Access Journals (Sweden)

    Mohammad Anis

    2012-05-01

Full Text Available An efficient method was developed for plant regeneration and establishment from alginate-encapsulated synthetic seeds of Rauvolfia serpentina. Synthetic seeds were produced from in vitro proliferated microshoots upon complexation of 3% sodium alginate prepared in Lloyd and McCown woody plant medium (WPM) and 100 mM calcium chloride. The regrowth ability of encapsulated nodal segments was evaluated after storage at 4 °C for 0, 1, 2, 4, 6 and 8 weeks and compared with non-encapsulated buds. The effects of different media, viz. Murashige and Skoog medium, Lloyd and McCown woody plant medium, Gamborg's B5 medium and Schenk and Hildebrandt medium, on conversion into plantlets were also investigated. The maximum frequency of conversion into plantlets from encapsulated nodal segments stored at 4 °C for 4 weeks was achieved on woody plant medium supplemented with 5.0 μM BA and 1.0 μM NAA. Rooting in plantlets was achieved in half-strength Murashige and Skoog liquid medium containing 0.5 μM indole-3-acetic acid (IAA) on filter paper bridges. Plantlets obtained from stored synseeds were hardened, established successfully ex vitro and were morphologically similar to each other as well as to their mother plant. The genetic fidelity of Rauvolfia clones raised from synthetic seeds following four weeks of storage at 4 °C was assessed using random amplified polymorphic DNA (RAPD) and inter-simple sequence repeat (ISSR) markers. All the RAPD and ISSR profiles from the generated plantlets were monomorphic and comparable to the mother plant, which confirms the genetic stability among the clones. This synseed protocol could be useful for establishing a particular system for conservation, short-term storage and production of genetically identical and stable plants before release for commercial purposes.

19. On the presence of Sorex antinorii, Neomys anomalus (Insectivora, Soricidae) and Talpa caeca (Insectivora, Talpidae) in Umbria

    Directory of Open Access Journals (Sweden)

    A.M. Paci

    2003-10-01

Full Text Available The aim of this contribution is to provide an update on the presence of the Valais shrew Sorex antinorii, Miller's water shrew Neomys anomalus and the blind mole Talpa caeca in Umbria, where these species have now been recorded for some years. To this end, the collected specimens and the known literature were re-examined. Valais shrew: recently raised to species rank by Brünner et al. (2002), otherwise considered a subspecies of the common shrew (S. araneus antinorii). One of three incomplete skulls (mandibles and upper incisors missing) is preserved, provisionally referred to Sorex cfr. antinorii, originating from the northern Umbria-Marche Apennines (near Scalocchio, Perugia, 590 m a.s.l.) and identified on the basis of the red pigmentation of the hypocones of M1 and M2. Miller's water shrew: three skulls (Breda in Paci and Romano op. cit.) and one whole specimen (Paci, unpublished) were found a few kilometres apart between the municipalities of Assisi and Valfabbrica, in mid-hill environments bordering the Monte Subasio Regional Park (Perugia). In the province of Terni the species is reported by Isotti (op. cit.) for the surroundings of Orvieto. Blind mole: a female and a male are known from the municipality of Pietralunga (Perugia), collected respectively in a Pinus nigra conifer stand (630 m a.s.l.) and near a mixed hill wood dominated by Quercus cerris (640 m a.s.l.). A third individual was recently found in the municipality of Sigillo (Perugia), within the Monte Cucco Regional Park, on the edge of a beech wood at 1100 m a.s.l. In both cases the species' range proved to be parapatric with that of Talpa europaea.

  20. Synchronization in a Random Length Ring Network for SDN-Controlled Optical TDM Switching

    DEFF Research Database (Denmark)

    Kamchevska, Valerija; Cristofori, Valentina; Da Ros, Francesco

    2016-01-01

In this paper we focus on optical time division multiplexed (TDM) switching and its main distinguishing characteristics compared with other optical subwavelength switching technologies. We review and discuss in detail the synchronization requirements that allow for proper switching operation. In addition, we propose a novel synchronization algorithm that enables automatic synchronization of software defined networking controlled all-optical TDM switching nodes connected in a ring network. Besides providing synchronization, the algorithm can also facilitate dynamic slot size change and failure detection. We experimentally validate the algorithm behavior and achieve correct operation for three different ring lengths. Moreover, we experimentally demonstrate data plane connectivity in a ring network composed of three nodes and show successful wavelength division multiplexing space division...

  1. Plasma influence on the dispersion properties of finite-length, corrugated waveguides

    OpenAIRE

    Shkvarunets, A.; Kobayashi, S.; Weaver, J.; Carmel, Y.; Rodgers, J.; Antonsen, T.; Granatstein, V.L.; Destler, W.W.; Ogura, K.; Minami, K.

    1996-01-01

We present an experimental study of the electromagnetic properties of transverse magnetic modes in a corrugated-wall cavity filled with a radially inhomogeneous plasma. The shifts of the resonant frequencies of a finite-length, corrugated cavity were measured as a function of the background plasma density and the dispersion diagram was reconstructed up to a peak plasma density of 10^12 cm^-3. Good agreement with a calculated dispersion diagram is obtained for plasma densities below 5 x 10^11 ...

  2. Dynamic programming algorithms for biological sequence comparison.

    Science.gov (United States)

    Pearson, W R; Miller, W

    1992-01-01

Efficient dynamic programming algorithms are available for a broad class of protein and DNA sequence comparison problems. These algorithms require computer time proportional to the product of the lengths of the two sequences being compared [O(N^2)] but require memory space proportional only to the sum of these lengths [O(N)]. Although the requirement for O(N^2) time limits use of the algorithms to the largest computers when searching protein and DNA sequence databases, many other applications of these algorithms, such as calculation of distances for evolutionary trees and comparison of a new sequence to a library of sequence profiles, are well within the capabilities of desktop computers. In particular, the results of library searches with rapid searching programs, such as FASTA or BLAST, should be confirmed by performing a rigorous optimal alignment. Whereas rapid methods do not overlook significant sequence similarities, FASTA limits the number of gaps that can be inserted into an alignment, so that a rigorous alignment may extend the alignment substantially in some cases. BLAST does not allow gaps in the local regions that it reports; a calculation that allows gaps is very likely to extend the alignment substantially. Although a Monte Carlo evaluation of the statistical significance of a similarity score with a rigorous algorithm is much slower than the heuristic approach used by the RDF2 program, the dynamic programming approach should take less than 1 hr on a 386-based PC or desktop Unix workstation. For descriptive purposes, we have limited our discussion to methods for calculating similarity scores and distances that use gap penalties of the form g = rk. Nevertheless, programs for the more general case (g = q+rk) are readily available. Versions of these programs that run either on Unix workstations, IBM-PC class computers, or the Macintosh can be obtained from either of the authors.
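The quadratic-time, linear-space scoring described above can be sketched as a Needleman-Wunsch recurrence with the linear gap penalty g = rk, keeping only two rows of the dynamic programming matrix. The scoring constants below are illustrative, not the values used by any particular program.

```python
def global_alignment_score(a, b, match=1, mismatch=-1, gap=-2):
    """Needleman-Wunsch score with linear gap penalty g = r*k, computed in
    O(len(a)*len(b)) time but only O(len(b)) memory (two rows)."""
    prev = [j * gap for j in range(len(b) + 1)]  # row for the empty prefix of a
    for i in range(1, len(a) + 1):
        curr = [i * gap] + [0] * len(b)
        for j in range(1, len(b) + 1):
            sub = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            curr[j] = max(sub,                 # substitution / match
                          prev[j] + gap,       # gap in b
                          curr[j - 1] + gap)   # gap in a
        prev = curr
    return prev[-1]
```

Recovering the alignment itself, rather than just the score, needs either the full matrix or Hirschberg's divide-and-conquer trick to stay within O(N) space.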

  3. A theoretical derivation of the condensed history algorithm

    International Nuclear Information System (INIS)

    Larsen, E.W.

    1992-01-01

    Although the Condensed History Algorithm is a successful and widely-used Monte Carlo method for solving electron transport problems, it has been derived only by an ad hoc process based on physical reasoning. In this paper we show that the Condensed History Algorithm can be justified as a Monte Carlo simulation of an operator-split procedure in which the streaming, angular scattering, and slowing-down operators are separated within each time step. Different versions of the operator-split procedure lead to O(Δs) and O(Δs²) versions of the method, where Δs is the path-length step. Our derivation also indicates that higher-order versions of the Condensed History Algorithm may be developed. (Author)
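
    The operator-split step can be illustrated with a toy 2D walk in which streaming, angular scattering, and slowing down are applied in sequence within each path-length step Δs, giving the first-order O(Δs) splitting. The scattering width and stopping power below are placeholder values, not real electron physics.

```python
import math
import random

def condensed_history_step(x, y, theta, energy, ds, sigma=0.05, stopping_power=0.2):
    """One first-order operator-split step of a toy 2D condensed-history walk."""
    # 1) Streaming: advance along the current direction for path length ds.
    x += ds * math.cos(theta)
    y += ds * math.sin(theta)
    # 2) Angular scattering: sample the net deflection accumulated over ds
    #    (Gaussian small-angle approximation; width grows like sqrt(ds)).
    theta += random.gauss(0.0, sigma * math.sqrt(ds))
    # 3) Slowing down: deterministic energy loss over the step.
    energy -= stopping_power * ds
    return x, y, theta, energy
```

    Symmetrizing the sequence of operators (e.g. a half-step of streaming on either side of the scattering) is the kind of rearrangement that yields a higher-order variant.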

  4. A polynomial time algorithm for checking regularity of totally normed process algebra

    NARCIS (Netherlands)

    Yang, F.; Huang, H.

    2015-01-01

    A polynomial algorithm for the regularity problem of weak and branching bisimilarity on totally normed process algebra (PA) processes is given. Its time complexity is O(n³ + mn), where n is the number of transition rules and m is the maximal length of the rules. The algorithm works for

  5. Adding large EM stack support

    KAUST Repository

    Holst, Glendon

    2016-12-01

    Serial section electron microscopy (SSEM) image stacks generated using high throughput microscopy techniques are an integral tool for investigating brain connectivity and cell morphology. FIB or 3View scanning electron microscopes easily generate gigabytes of data. In order to produce an analyzable 3D dataset from the imaged volumes, efficient and reliable image segmentation is crucial. Classical manual approaches to segmentation are time consuming and labour intensive. Semiautomatic seeded watershed segmentation algorithms, such as those implemented by ilastik image processing software, are a very powerful alternative, substantially speeding up segmentation times. We have used ilastik effectively for small EM stacks – on a laptop, no less; however, ilastik was unable to carve the large EM stacks we needed to segment because its memory requirements grew too large – even for the biggest workstations we had available. For this reason, we refactored the carving module of ilastik to scale it up to large EM stacks on large workstations, and tested its efficiency. We modified the carving module, building on existing blockwise processing functionality to process data in manageable chunks that can fit within RAM (main memory). We review this refactoring work, highlighting the software architecture, design choices, modifications, and issues encountered.
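
    The blockwise idea is simple to sketch: iterate over the volume in fixed-size chunks so that only one block needs to be resident in memory at a time. The shapes and the per-block function below are illustrative; ilastik's actual carving module is considerably more involved (halo handling, caching, seeded watershed state).

```python
import numpy as np

def process_blockwise(volume, block_shape, func):
    """Apply `func` to a 3D EM volume one block at a time."""
    out = np.empty_like(volume)
    bz, by, bx = block_shape
    for z in range(0, volume.shape[0], bz):
        for y in range(0, volume.shape[1], by):
            for x in range(0, volume.shape[2], bx):
                # NumPy clamps out-of-range slice ends, so edge blocks just shrink.
                sl = (slice(z, z + bz), slice(y, y + by), slice(x, x + bx))
                out[sl] = func(volume[sl])  # process one RAM-sized chunk
    return out
```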

  6. Microsatellite Loci in the Gypsophyte <em>Lepidium subulatum</em> (Brassicaceae), and Transferability to Other <em>Lepidieae</em>

    Directory of Open Access Journals (Sweden)

    José Gabriel Segarra-Moragues

    2012-09-01

    Full Text Available Polymorphic microsatellite markers were developed for the Ibero-North African, strict gypsophyte <em>Lepidium subulatum</em> to unravel the effects of habitat fragmentation on levels of genetic diversity, genetic structure and gene flow among its populations. Using 454 pyrosequencing, 12 microsatellite loci including di- and tri-nucleotide repeats were characterized in <em>L. subulatum</em>. They amplified a total of 80 alleles (2–12 alleles per locus) in a sample of 35 individuals of <em>L. subulatum</em>, showing relatively high levels of genetic diversity, <em>H</em><sub>O</sub> = 0.645, <em>H</em><sub>E</sub> = 0.627. Cross-species transferability of all 12 loci was successful for the Iberian endemics <em>Lepidium cardamines</em>, <em>Lepidium stylatum</em>, and the widespread <em>Lepidium graminifolium</em>, and one species each of two related genera, <em>Cardaria draba</em> and <em>Coronopus didymus</em>. These microsatellite primers will be useful to investigate genetic diversity and population structure, and to address conservation genetics in species of <em>Lepidium</em>.

  7. A Plagiarism Detection Algorithm based on Extended Winnowing

    Directory of Open Access Journals (Sweden)

    Duan Xuliang

    2017-01-01

    Full Text Available Plagiarism is a common problem faced by academia and education. Mature commercial plagiarism detection systems offer comprehensive coverage and high accuracy, but their expensive detection costs make them unsuitable for real-time, lightweight application environments such as student assignment plagiarism detection. This paper introduces a method that extends the classic Winnowing plagiarism detection algorithm, expanding its functionality. The extended algorithm retains the text location and length information of the original document while extracting a document's fingerprints, so that locating and marking plagiarized text fragments is much easier to achieve. The experimental results and several years of running practice show that the extension has little effect on performance; a normal PC hardware configuration is able to meet the requirements of small and medium-sized applications. Building on the lightweight, efficient, reliable and flexible character of Winnowing, the extended algorithm further enhances its adaptability and extends its application areas.
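
    The core of the extension, keeping positions alongside fingerprints, can be sketched on top of classic winnowing: hash all k-grams, slide a window of w hashes, and record the minimal hash in each window together with its offset, so matches can later be located and marked in the original text. The parameters k and w, and Python's built-in hash, are illustrative stand-ins.

```python
def winnow(text, k=5, w=4):
    """Winnowing fingerprints as (hash, offset) pairs, offsets retained."""
    # Hash every k-gram, remembering where it starts in the document.
    grams = [(hash(text[i:i + k]), i) for i in range(len(text) - k + 1)]
    fingerprints = set()
    for i in range(len(grams) - w + 1):
        # Pick the minimal hash in each window; break ties by rightmost position.
        fingerprints.add(min(grams[i:i + w], key=lambda g: (g[0], -g[1])))
    return sorted(fingerprints, key=lambda g: g[1])
```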

  8. On Line Segment Length and Mapping 4-regular Grid Structures in Network Infrastructures

    DEFF Research Database (Denmark)

    Riaz, Muhammad Tahir; Nielsen, Rasmus Hjorth; Pedersen, Jens Myrup

    2006-01-01

    The paper focuses on mapping the road network into 4-regular grid structures. A mapping algorithm is proposed. To model the road network, GIS data have been used. The Geographic Information System (GIS) data for the road network are composed of line segments of different lengths...

  9. Increasing LIGO sensitivity by feedforward subtraction of auxiliary length control noise

    International Nuclear Information System (INIS)

    Meadors, Grant David; Riles, Keith; Kawabe, Keita

    2014-01-01

    LIGO, the Laser Interferometer Gravitational-wave Observatory, has been designed and constructed to measure gravitational wave strain via differential arm length. The LIGO 4 km Michelson arms with Fabry–Perot cavities have auxiliary length control servos for suppressing Michelson motion of the beam-splitter and arm cavity input mirrors, which degrades interferometer sensitivity. We demonstrate how a post facto pipeline improves a data sample from LIGO Science Run 6 with feedforward subtraction. Dividing data into 1024 s windows, we numerically fit filter functions representing the frequency-domain transfer functions from Michelson length channels into the gravitational-wave strain data channel for each window, then subtract the filtered Michelson channel noise (witness) from the strain channel (target). In this paper we describe the algorithm, assess achievable improvements in sensitivity to astrophysical sources, and consider relevance to future interferometry. (paper)
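
    A time-domain stand-in for the procedure: fit a short FIR filter mapping the witness (Michelson) channel onto the target (strain) channel by least squares over one window, then subtract the filtered witness. The paper fits transfer functions in the frequency domain; the tap count and channel construction here are illustrative.

```python
import numpy as np

def feedforward_subtract(target, witness, taps=8):
    """Least-squares FIR feedforward subtraction over one data window."""
    n = len(target) - taps + 1
    # Design matrix of lagged witness samples: column i holds lag (taps - 1 - i).
    X = np.stack([witness[i:i + n] for i in range(taps)], axis=1)
    y = target[taps - 1:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return y - X @ coeffs  # cleaned target over the valid range
```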

  10. Forecasting Jakarta composite index (IHSG) based on chen fuzzy time series and firefly clustering algorithm

    Science.gov (United States)

    Ningrum, R. W.; Surarso, B.; Farikhin; Safarudin, Y. M.

    2018-03-01

    This paper proposes the combination of the Firefly Algorithm (FA) and Chen fuzzy time series forecasting. Most of the existing fuzzy forecasting methods based on fuzzy time series use a static length of intervals. Therefore, we apply an artificial intelligence technique, the Firefly Algorithm (FA), to set a non-stationary length of intervals for each cluster in Chen's method. The method is evaluated by applying it to the Jakarta Composite Index (IHSG) and comparing it with classical Chen fuzzy time series forecasting. Its performance is verified through simulation using Matlab.
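
    A minimal sketch of Chen-style forecasting with non-uniform intervals: interval boundaries (which the paper obtains from firefly clustering) partition the universe of discourse, observations are fuzzified to interval labels, fuzzy logical relationships are collected from consecutive labels, and the forecast is the mean midpoint of the consequents of the last label's rule. The boundary list below is an assumed input, not the FA step itself.

```python
import bisect

def chen_forecast(series, boundaries):
    """One-step Chen fuzzy time series forecast over non-uniform intervals."""
    mids = [(boundaries[i] + boundaries[i + 1]) / 2
            for i in range(len(boundaries) - 1)]
    # Fuzzify: map each observation to the index of its interval.
    label = [min(bisect.bisect_right(boundaries, v) - 1, len(mids) - 1)
             for v in series]
    # Fuzzy logical relationships A_i -> {A_j, ...} from consecutive labels.
    rules = {}
    for a, b in zip(label, label[1:]):
        rules.setdefault(a, set()).add(b)
    rhs = rules.get(label[-1], {label[-1]})
    return sum(mids[j] for j in rhs) / len(rhs)
```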

  11. A Distance-Adaptive Refueling Recommendation Algorithm for Self-Driving Travel

    Directory of Open Access Journals (Sweden)

    Quanli Xu

    2018-03-01

    Full Text Available Taking the maximum vehicle driving distance, the distances from gas stations, the route length, and the number of refueling gas stations as the decision conditions, recommendation rules and an early refueling service warning mechanism for gas stations along a self-driving travel route were constructed by using the algorithm presented in this research, based on the spatial clustering characteristics of gas stations and the urgency of refueling. Meanwhile, by combining ArcEngine and Matlab capabilities, a scenario simulation system of refueling for self-driving travel was developed by using c#.net in order to validate and test the accuracy and applicability of the algorithm. A total of nine testing schemes with four simulation scenarios were designed and executed using this algorithm, and all of the simulation results were consistent with expectations. The refueling recommendation algorithm proposed in this study can automatically adapt to changes in the route length of self-driving travel, the maximum driving distance of the vehicle, and the distance from gas stations, which could provide variable refueling recommendation strategies according to differing gas station layouts along the route. Therefore, the results of this study could provide a scientific reference for the reasonable planning and timely supply of vehicle refueling during self-driving travel.
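
    The decision conditions above reduce, in the simplest case, to a rule of this shape: finish the route if the remaining range safely covers it, otherwise refuel at the farthest safely reachable station, and raise an early warning when no station is within safe range. The reserve margin and the tuple-style return are illustrative choices, not the paper's exact rules.

```python
def recommend_refuel(remaining_range, route_length, station_distances, reserve=20.0):
    """Distance-adaptive refueling recommendation (distances in km, in route order)."""
    if route_length + reserve <= remaining_range:
        return ("no-stop", None)           # route is finishable on the current tank
    reachable = [d for d in station_distances if d + reserve <= remaining_range]
    if not reachable:
        return ("warning", None)           # early refueling warning: none safely in range
    return ("refuel-at", reachable[-1])    # farthest station still safely reachable
```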

  12. Constituents from <em>Vigna vexillata</em> and Their Anti-Inflammatory Activity

    Directory of Open Access Journals (Sweden)

    Guo-Feng Chen

    2012-08-01

    Full Text Available The seeds of the <em>Vigna</em> genus are important food resources and there have already been many reports regarding their bioactivities. In our preliminary bioassay, the chloroform layer of methanol extracts of <em>V. vexillata</em> demonstrated significant anti-inflammatory bioactivity. Therefore, the present research is aimed to purify and identify the anti-inflammatory principles of <em>V. vexillata</em>. One new sterol (1) and two new isoflavones (2, 3) were reported from the natural sources for the first time and their chemical structures were determined by the spectroscopic and mass spectrometric analyses. In addition, 37 known compounds were identified by comparison of their physical and spectroscopic data with those reported in the literature. Among the isolates, daidzein (23), abscisic acid (25), and quercetin (40) displayed the most significant inhibition of superoxide anion generation and elastase release.

  13. Multiple Lookup Table-Based AES Encryption Algorithm Implementation

    Science.gov (United States)

    Gong, Jin; Liu, Wenyi; Zhang, Huixin

    A new AES (Advanced Encryption Standard) encryption algorithm implementation was proposed in this paper. It is based on five lookup tables, which are generated from the S-box (the substitution table in AES). The obvious advantages are reducing the code size, improving the implementation efficiency, and helping new learners to understand the AES encryption algorithm and the GF(2⁸) multiplication which are necessary to correctly implement AES [1]. This method can be applied on processors with a word length of 32 bits or above, FPGAs and others. Correspondingly, we can implement it in VHDL, Verilog, VB and other languages.
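
    The GF(2⁸) arithmetic the tables encode can be sketched directly: multiply with the AES reduction polynomial, then precompute tables for the MixColumns constants. This illustrates table generation from the field arithmetic, not the paper's specific five-table layout.

```python
def gmul(a, b):
    """Multiply in GF(2^8) modulo the AES polynomial x^8 + x^4 + x^3 + x + 1."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a          # add (XOR) the current shifted copy of a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B       # reduce modulo the AES polynomial
        b >>= 1
    return p

# Lookup tables for the MixColumns constants: a 256-byte table per constant
# replaces the bit-twiddling above with a single indexed load at run time.
MUL2 = [gmul(x, 2) for x in range(256)]
MUL3 = [gmul(x, 3) for x in range(256)]
```

    The worked example {57}·{83} = {c1} from the FIPS-197 specification is a convenient sanity check for any such table generator.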

  14. Synthesis, Crystal Structure and Luminescent Property of Cd(II) Complex with <em>N</em>-Benzenesulphonyl-L-leucine

    Directory of Open Access Journals (Sweden)

    Xishi Tai

    2012-09-01

    Full Text Available A new trinuclear Cd(II) complex [Cd3(L)6(2,2-bipyridine)3] [L = <em>N</em>-phenylsulfonyl-L-leucinato] has been synthesized and characterized by elemental analysis, IR and X-ray single crystal diffraction analysis. The results show that the complex belongs to the orthorhombic system, space group <em>P</em>2₁2₁2₁, with <em>a</em> = 16.877(3) Å, <em>b</em> = 22.875(5) Å, <em>c</em> = 29.495(6) Å, <em>α</em> = <em>β</em> = <em>γ</em> = 90°, <em>V</em> = 11387(4) Å³, <em>Z</em> = 4, <em>D</em><sub>c</sub> = 1.416 Mg·m⁻³, <em>μ</em> = 0.737 mm⁻¹, <em>F</em>(000) = 4992, and final <em>R</em>1 = 0.0390, <em>ωR</em>2 = 0.0989. The complex comprises two seven-coordinated Cd(II) atoms, with a N2O5 distorted pentagonal bipyramidal coordination environment, and a six-coordinated Cd(II) atom, with a N2O4 distorted octahedral coordination environment. The molecules form a one-dimensional chain structure by the interaction of bridged carboxylato groups, hydrogen bonds and π-π interaction of 2,2-bipyridine. The luminescent properties of the Cd(II) complex and <em>N</em>-benzenesulphonyl-L-leucine in solid and in CH3OH solution have also been investigated.

  15. Anatomical variations in Lymnaea columella (Mollusca, Gastropoda)

    Directory of Open Access Journals (Sweden)

    Marlene T. Ueta

    1977-12-01

    Full Text Available Anatomical variations were studied in specimens of L. columella collected from different breeding sites located in several municipalities of the State of São Paulo: Campinas, Americana, Atibaia, Pirassununga, Caçapava and Taubaté. The morphometric comparisons were based on studies of the genital apparatus, kidney and radula. For each site, the lengths of the uterus-vagina complex, spermathecal duct, prepuce and penis sheath were measured. Penis-sheath/prepuce ratios were also calculated, and correlation coefficients between shell length and prepuce length were established. Longitudinal sections of the penial complex were also studied. For the radula, the number of transverse rows and the number of teeth per row were determined, and an approximate radular formula was established for the various sites. Soft parts of Lymnaea columella from ten populations from the State of São Paulo were studied in order to determine morphometric variations. These morphometric comparisons were made upon the reproductive system, kidney and radula of snail samples collected in different municipalities: Campinas, Americana, Atibaia, Pirassununga, Caçapava and Taubaté. Length measurements of the uterus, duct of spermatheca, prepuce and penis sheath were taken; the ratio penis sheath/prepuce and correlation coefficients between length of shell and length of prepuce were established. Longitudinal sections of the penial complex were also studied. The number of transverse rows, number of teeth per row and the length of shell were determined. For each sample, the radular formula was indicated.

  16. Fast Algorithm for Computing the Discrete Hartley Transform of Type-II

    Directory of Open Access Journals (Sweden)

    Mounir Taha Hamood

    2016-06-01

    Full Text Available The generalized discrete Hartley transforms (GDHTs) have proved to be an efficient alternative to the generalized discrete Fourier transforms (GDFTs) for real-valued data applications. In this paper, the development of direct computation of a radix-2 decimation-in-time (DIT) algorithm for the fast calculation of the GDHT of type-II (DHT-II) is presented. The mathematical analysis and the implementation of the developed algorithm are derived, showing that this algorithm possesses a regular structure and can be implemented in-place for efficient memory utilization. The performance of the proposed algorithm is analyzed and the computational complexity is calculated for different transform lengths. A comparison between this algorithm and existing DHT-II algorithms shows that it can be considered as a good compromise between the structural and computational complexities.
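
    A direct O(N²) evaluation is the usual reference against which such fast algorithms are validated. The sketch below uses one common DHT-II convention, cas(2πk(n + ½)/N) with cas(t) = cos(t) + sin(t); the paper's exact GDHT-II definition may differ in normalization or sign.

```python
import math

def dht2(x):
    """Direct O(N^2) discrete Hartley transform of type II (reference version)."""
    N = len(x)
    def cas(t):
        return math.cos(t) + math.sin(t)
    # The half-sample shift (n + 1/2) is what distinguishes type II from type I.
    return [sum(xn * cas(2 * math.pi * k * (n + 0.5) / N)
                for n, xn in enumerate(x))
            for k in range(N)]
```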

  17. Fundamental length and relativistic length

    International Nuclear Information System (INIS)

    Strel'tsov, V.N.

    1988-01-01

    It is noted that the introduction of a fundamental length contradicts the conventional representations concerning the contraction of the longitudinal size of fast-moving objects. The use of the concept of relativistic length and the following ''elongation formula'' permits one to solve this problem

  18. <em>N</em>-Substituted 5-Chloro-6-phenylpyridazin-3(2<em>H</em>)-ones: Synthesis, Insecticidal Activity Against <em>Plutella xylostella</em> (L.) and SAR Study

    Directory of Open Access Journals (Sweden)

    Song Yang

    2012-08-01

    Full Text Available A series of <em>N</em>-substituted 5-chloro-6-phenylpyridazin-3(2<em>H</em>)-one derivatives were synthesized based on our previous work; all compounds were characterized by spectral data and tested for <em>in vitro</em> insecticidal activity against <em>Plutella xylostella</em>. The results showed that the synthesized pyridazin-3(2<em>H</em>)-one compounds possessed good insecticidal activities, especially the compounds 4b, 4d, and 4h, which showed > 90% activity at 100 mg/L. The structure-activity relationships (SAR) for these compounds were also discussed.

  19. From LZ77 to the run-length encoded burrows-wheeler transform, and back

    DEFF Research Database (Denmark)

    Policriti, Alberto; Prezza, Nicola

    2017-01-01

    The Lempel-Ziv factorization (LZ77) and the Run-Length encoded Burrows-Wheeler Transform (RLBWT) are two important tools in text compression and indexing, their sizes z and r being closely related to the amount of text self-repetitiveness. In this paper we consider the problem of converting the two representations into each other within a working space proportional to the input and the output. Let n be the text length. We show that RLBWT can be converted to LZ77 in O(n log r) time and O(r) words of working space. Conversely, we provide an algorithm to convert LZ77 to RLBWT in O(n(log r + log z)) time and O(r + z) words of working space. Note that r and z can be constant if the text is highly repetitive, and our algorithms can operate with (up to) exponentially less space than naive solutions based on full decompression.
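
    The quantities r and z are easy to make concrete with naive (decompression-based) versions of the two representations, exactly the kind of baseline the space-efficient conversions above avoid. The terminator symbol and quadratic rotation sort below are illustrative simplifications.

```python
def bwt(s):
    """Naive Burrows-Wheeler transform with '$' as unique terminator."""
    s += "$"
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

def run_length_encode(s):
    """Run-length encoding of a string; len(result) is the r of the RLBWT."""
    runs = []
    for c in s:
        if runs and runs[-1][0] == c:
            runs[-1] = (c, runs[-1][1] + 1)
        else:
            runs.append((c, 1))
    return runs
```

    On a highly repetitive text such as "aaaaaa" the RLBWT collapses to two runs however long the text grows, which is why r can be far smaller than n.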

  20. Analysis of the Command and Control Segment (CCS) attitude estimation algorithm

    Science.gov (United States)

    Stockwell, Catherine

    1993-01-01

    This paper categorizes the qualitative behavior of the Command and Control Segment (CCS) differential correction algorithm as applied to attitude estimation using simultaneous spin axis sun angle and Earth chord length measurements. The categories of interest are the domains of convergence, divergence, and their boundaries. Three series of plots are discussed that show the dependence of the estimation algorithm on the vehicle radius, the sun/Earth angle, and the spacecraft attitude. Qualitative dynamics common to all three series are tabulated and discussed. Out-of-limits conditions for the estimation algorithm are identified and discussed.

  1. An Adaptive Bacterial Foraging Optimization Algorithm with Lifecycle and Social Learning

    Directory of Open Access Journals (Sweden)

    Xiaohui Yan

    2012-01-01

    Full Text Available The Bacterial Foraging Algorithm (BFO) is a recently proposed swarm intelligence algorithm inspired by the foraging and chemotactic phenomena of bacteria. However, its optimization ability is not so good compared with other classic algorithms, as it has several shortcomings. This paper presents an improved BFO algorithm. In the new algorithm, a lifecycle model of bacteria is established. The bacteria can split, die, or migrate dynamically during foraging, and the population size varies as the algorithm runs. Social learning is also introduced so that the bacteria tumble towards better directions in the chemotactic steps. In addition, adaptive step lengths are employed in chemotaxis. The new algorithm is named BFOLS and it is tested on a set of benchmark functions with dimensions of 2 and 20. Canonical BFO, PSO, and GA algorithms are employed for comparison. Experimental results and statistical analysis show that the BFOLS algorithm offers significant improvements over the original BFO algorithm. Particularly with dimension of 20, it has the best performance among the four algorithms.
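
    One chemotactic move with an adaptive step length can be sketched for a minimization problem: tumble in a random unit direction, swim while fitness keeps improving, and shrink the step when the tumble fails. The shrink factor and swim rule are illustrative; BFOLS also includes lifecycle and social-learning operators not shown here.

```python
import random

def chemotaxis_step(position, fitness, step, shrink=0.9):
    """Tumble-and-swim chemotactic move with adaptive step length (minimization)."""
    # Tumble: pick a random unit direction.
    d = [random.uniform(-1.0, 1.0) for _ in position]
    norm = sum(v * v for v in d) ** 0.5
    d = [v / norm for v in d]
    best, best_fit = position, fitness(position)
    trial = [p + step * v for p, v in zip(best, d)]
    while fitness(trial) < best_fit:          # swim while the move improves fitness
        best, best_fit = trial, fitness(trial)
        trial = [p + step * v for p, v in zip(best, d)]
    if best is position:                      # tumble failed: adapt (shrink) the step
        step *= shrink
    return best, step
```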

  2. A Study of Wind Turbine Comprehensive Operational Assessment Model Based on EM-PCA Algorithm

    Science.gov (United States)

    Zhou, Minqiang; Xu, Bin; Zhan, Yangyan; Ren, Danyuan; Liu, Dexing

    2018-01-01

    To assess wind turbine performance accurately and provide a theoretical basis for wind farm management, a hybrid assessment model based on the Entropy Method and Principal Component Analysis (EM-PCA) was established, which takes most factors of operational performance into consideration and reaches a comprehensive result. To verify the model, six wind turbines were chosen as the research objects; the ranking obtained by the proposed method was 4#>6#>1#>5#>2#>3#, completely in conformity with the theoretical ranking, which indicates that the reliability and effectiveness of the EM-PCA method are high. The method can guide state comparison among different units and wind farm operational assessment.

  3. Online EM with weight-based forgetting

    OpenAIRE

    Celaya, Enric; Agostini, Alejandro

    2015-01-01

    In the on-line version of the EM algorithm introduced by Sato and Ishii (2000), a time-dependent discount factor is introduced for forgetting the effect of the old posterior values obtained with an earlier, inaccurate estimator. In their approach, forgetting is uniformly applied to the estimators of each mixture component depending exclusively on time, irrespective of the weight attributed to each unit for the observed sample. This causes an excessive forgetting in the less frequently sampled...

  4. Evaluation of Four Encryption Algorithms for Viability, Reliability and Performance Estimation

    Directory of Open Access Journals (Sweden)

    J. B. Awotunde

    2016-12-01

    Full Text Available Data and information in storage, in transit or during processing are found in various computers and computing devices with a wide range of hardware specifications. Cryptography is the knowledge of using codes to encrypt and decrypt data. It enables one to store sensitive information or transmit it across computers in a more secure way so that it cannot be read by anyone except the intended receiver. Cryptography also allows secure storage of sensitive data on any computer. Cryptography as an approach to computer security comes at a cost in terms of resource utilization, such as time, memory and CPU usage, which in some cases may not be in abundance to achieve the set-out objective of protecting data. This work looked into the memory construction rate, different key sizes, CPU utilization time and encryption speed of the four algorithms to determine the amount of computer resources that each expends and how long it takes each algorithm to complete its task. Results show that the key length of a cryptographic algorithm is proportional to its resource utilization in most cases, as found out for the key lengths of the Blowfish, AES, 3DES and DES algorithms respectively. Further research can be carried out in order to determine the power utilization of each of these algorithms.

  5. Spin chain simulations with a meron cluster algorithm

    International Nuclear Information System (INIS)

    Boyer, T.; Bietenholz, W.; Deutsches Elektronen-Synchrotron; Wuilloud, J.; Geneve Univ.

    2007-01-01

    We apply a meron cluster algorithm to the XY spin chain, which describes a quantum rotor. This is a multi-cluster simulation supplemented by an improved estimator, which deals with objects of half-integer topological charge. This method is powerful enough to provide precise results for the model with a θ-term - it is therefore one of the rare examples, where a system with a complex action can be solved numerically. In particular we measure the correlation length, as well as the topological and magnetic susceptibility. We discuss the algorithmic efficiency in view of the critical slowing down. Due to the excellent performance that we observe, it is strongly motivated to work on new applications of meron cluster algorithms in higher dimensions. (orig.)

  6. Adaptive Step Size Gradient Ascent ICA Algorithm for Wireless MIMO Systems

    Directory of Open Access Journals (Sweden)

    Zahoor Uddin

    2018-01-01

    Full Text Available Independent component analysis (ICA) is a technique of blind source separation (BSS) used for separation of the mixed received signals. ICA algorithms are classified into adaptive and batch algorithms. Adaptive algorithms perform well in time-varying scenarios with high computational complexity, while batch algorithms have better separation performance in quasistatic channels with low computational complexity. Amongst batch algorithms, the gradient-based ICA algorithms perform well, but step size selection is critical in these algorithms. In this paper, an adaptive step size gradient ascent ICA (ASS-GAICA) algorithm is presented. The proposed algorithm is free from selection of the step size parameter with improved convergence and separation performance. Different performance evaluation criteria are used to verify the effectiveness of the proposed algorithm. Performance of the proposed algorithm is compared with the FastICA and optimum block adaptive ICA (OBAICA) algorithms for quasistatic and time-varying wireless channels. Simulation is performed over quadrature amplitude modulation (QAM) and binary phase shift keying (BPSK) signals. Results show that the proposed algorithm outperforms the FastICA and OBAICA algorithms for a wide range of signal-to-noise ratios (SNR) and input data block lengths.

  7. Effect of Selection of Design Parameters on the Optimization of a Horizontal Axis Wind Turbine via Genetic Algorithm

    International Nuclear Information System (INIS)

    Alpman, Emre

    2014-01-01

    The effect of selecting the twist angle and chord length distributions on wind turbine blade design was investigated by performing aerodynamic optimization of a two-bladed stall-regulated horizontal axis wind turbine. Twist angle and chord length distributions were defined using Bezier curves with 3, 5, 7 and 9 control points uniformly distributed along the span. Optimizations performed using a micro-genetic algorithm with populations composed of 5, 10, 15, 20 individuals showed that the number of control points clearly affected the outcome of the process; however, the effects were different for different population sizes. The results also showed the superiority of the micro-genetic algorithm over a standard genetic algorithm for the selected population sizes. Optimizations were also performed using a macroevolutionary algorithm and the resulting best blade design was compared with that yielded by the micro-genetic algorithm
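
    Encoding a spanwise distribution with a handful of Bezier control points is what keeps the design space small (3 to 9 variables per distribution here). A minimal Bernstein-form evaluator, with illustrative control values:

```python
from math import comb

def bezier(points, t):
    """Evaluate a 1D Bezier curve (Bernstein form) at parameter t in [0, 1]."""
    n = len(points) - 1
    return sum(comb(n, i) * (1.0 - t) ** (n - i) * t ** i * p
               for i, p in enumerate(points))

# E.g. a chord-length distribution (m) sampled from 5 control points along the span;
# in the optimization, the control values are the design variables the GA mutates.
chord_ctrl = [1.2, 1.0, 0.8, 0.5, 0.3]
chord = [bezier(chord_ctrl, r / 10.0) for r in range(11)]
```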

  8. Massively parallel unsupervised single-particle cryo-EM data clustering via statistical manifold learning.

    Directory of Open Access Journals (Sweden)

    Jiayi Wu

    Full Text Available Structural heterogeneity in single-particle cryo-electron microscopy (cryo-EM) data represents a major challenge for high-resolution structure determination. Unsupervised classification may serve as the first step in the assessment of structural heterogeneity. However, traditional algorithms for unsupervised classification, such as K-means clustering and maximum likelihood optimization, may classify images into wrong classes with decreasing signal-to-noise-ratio (SNR) in the image data, yet demand increased computational costs. Overcoming these limitations requires further development of clustering algorithms for high-performance cryo-EM data processing. Here we introduce an unsupervised single-particle clustering algorithm derived from a statistical manifold learning framework called generative topographic mapping (GTM). We show that unsupervised GTM clustering improves classification accuracy by about 40% in the absence of input references for data with lower SNRs. Applications to several experimental datasets suggest that our algorithm can detect subtle structural differences among classes via a hierarchical clustering strategy. After code optimization over a high-performance computing (HPC) environment, our software implementation was able to generate thousands of reference-free class averages within hours in a massively parallel fashion, which allows a significant improvement on ab initio 3D reconstruction and assists in the computational purification of homogeneous datasets for high-resolution visualization.

  9. Massively parallel unsupervised single-particle cryo-EM data clustering via statistical manifold learning.

    Science.gov (United States)

    Wu, Jiayi; Ma, Yong-Bei; Congdon, Charles; Brett, Bevin; Chen, Shuobing; Xu, Yaofang; Ouyang, Qi; Mao, Youdong

    2017-01-01

    Structural heterogeneity in single-particle cryo-electron microscopy (cryo-EM) data represents a major challenge for high-resolution structure determination. Unsupervised classification may serve as the first step in the assessment of structural heterogeneity. However, traditional algorithms for unsupervised classification, such as K-means clustering and maximum likelihood optimization, may classify images into wrong classes with decreasing signal-to-noise-ratio (SNR) in the image data, yet demand increased computational costs. Overcoming these limitations requires further development of clustering algorithms for high-performance cryo-EM data processing. Here we introduce an unsupervised single-particle clustering algorithm derived from a statistical manifold learning framework called generative topographic mapping (GTM). We show that unsupervised GTM clustering improves classification accuracy by about 40% in the absence of input references for data with lower SNRs. Applications to several experimental datasets suggest that our algorithm can detect subtle structural differences among classes via a hierarchical clustering strategy. After code optimization over a high-performance computing (HPC) environment, our software implementation was able to generate thousands of reference-free class averages within hours in a massively parallel fashion, which allows a significant improvement on ab initio 3D reconstruction and assists in the computational purification of homogeneous datasets for high-resolution visualization.

  10. Estimation of parameters in Shot-Noise-Driven Doubly Stochastic Poisson processes using the EM algorithm--modeling of pre- and postsynaptic spike trains.

    Science.gov (United States)

    Mino, H

    2007-01-01

    The aim is to estimate the parameters, namely the impulse response (IR) functions of the linear time-invariant systems generating the intensity processes, in Shot-Noise-Driven Doubly Stochastic Poisson Processes (SND-DSPPs), under the assumption that multivariate presynaptic spike trains and postsynaptic spike trains can be modeled by SND-DSPPs. An explicit formula for estimating the IR functions from observations of multivariate input processes of the linear systems and the corresponding counting process (output process) is derived utilizing the expectation maximization (EM) algorithm. The validity of the estimation formula was verified through Monte Carlo simulations in which two presynaptic spike trains and one postsynaptic spike train were assumed to be observable. The IR functions estimated on the basis of the proposed identification method were close to the true IR functions. The proposed method will play an important role in identifying the input-output relationship of pre- and postsynaptic neural spike trains in practical situations.

  11. Stoic Influence in Abelard's Conception of <em>status</em> and <em>dictum</em> as <em>quasi res</em> (ὡσανεì τινά)

    Directory of Open Access Journals (Sweden)

    Guy Hamelin

    2011-09-01

    Full Text Available In his work, Peter Abelard (1079-1142) highlights two metaphysical notions which sustain his logical theory: the <em>status</em> and the <em>dictum propositionis</em>, causing respectively the imposition (<em>impositio</em>) of universal terms and the truth-value of propositions. Both expressions refer to peculiar ontological natures, in so far as they are not considered things (<em>res</em>), even though they constitute causes. Nevertheless, neither are they nothing: Abelard calls them 'quasi-things' (<em>quasi res</em>). In the present article, we first expound these two essential notions of Abelardian logic, before then trying to find the source of this particular metaphysics. Contrary to some important commentators on Abelard's logic, who hold that this specific conception shows a strong Platonic influence, we argue instead, with the support of significant texts and in accordance with Abelardian nominalism, that the greatest influence on our author's metaphysics is that of Stoicism, above all early Stoicism.

  12. Electron scattering in dense atomic and molecular gases: An empirical correlation of polarizability and electron scattering length

    International Nuclear Information System (INIS)

    Rupnik, K.; Asaf, U.; McGlynn, S.P.

    1990-01-01

    A linear correlation exists between the electron scattering length, as measured by a pressure shift method, and the polarizabilities of He, Ne, Ar, Kr, and Xe gases. The correlative algorithm has excellent predictive capability for the electron scattering lengths of mixtures of rare gases, simple molecular gases such as H₂ and N₂, and even complex molecular entities such as methane, CH₄.
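
    A linear correlation of this kind reduces to an ordinary least-squares fit; a minimal sketch of such a correlative predictor (generic Python, with placeholder data rather than the paper's measured polarizabilities and scattering lengths) could look like:

```python
def linear_fit(xs, ys):
    """Ordinary least-squares fit of y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

def predict(x, a, b):
    """Predict a scattering length from a polarizability via the fitted line."""
    return a * x + b
```

    Once fitted on the measured gases, the same line would be evaluated at the (mixture-averaged) polarizability to predict a scattering length.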

  13. Comparison of turbulence mitigation algorithms

    Science.gov (United States)

    Kozacik, Stephen T.; Paolini, Aaron; Sherman, Ariel; Bonnett, James; Kelmelis, Eric

    2017-07-01

    When capturing imagery over long distances, atmospheric turbulence often degrades the data, especially when observation paths are close to the ground or in hot environments. These issues manifest as time-varying scintillation and warping effects that decrease the effective resolution of the sensor and reduce actionable intelligence. In recent years, several image processing approaches to turbulence mitigation have shown promise. Each of these algorithms has different computational requirements, usability demands, and degrees of independence from camera sensors. They also produce different degrees of enhancement when applied to turbulent imagery. Additionally, some of these algorithms are applicable to real-time operational scenarios while others may only be suitable for postprocessing workflows. EM Photonics has been developing image-processing-based turbulence mitigation technology since 2005. We will compare techniques from the literature with our commercially available, real-time, GPU-accelerated turbulence mitigation software. These comparisons will be made using real (not synthetic), experimentally obtained data for a variety of conditions, including varying optical hardware, imaging range, subjects, and turbulence conditions. Comparison metrics will include image quality, video latency, computational complexity, and potential for real-time operation. Additionally, we will present a technique for quantitatively comparing turbulence mitigation algorithms using real images of radial resolution targets.

  14. EFFICIENT ADAPTIVE STEGANOGRAPHY FOR COLOR IMAGES BASED ON LSBMR ALGORITHM

    Directory of Open Access Journals (Sweden)

    B. Sharmila

    2012-02-01

    Full Text Available Steganography is the art of hiding the fact that communication is taking place, by hiding information in another medium. Many different carrier file formats can be used, but digital images are the most popular because of their frequent use on the Internet. A large variety of steganographic techniques exist for hiding secret information in images. The Least Significant Bit (LSB) based approach is the simplest type of steganographic algorithm. In existing approaches, the region within a cover image is chosen without considering the relationship between the image content and the size of the secret message; thus, the plain regions of the cover will be ruined after data hiding, even at a low data rate. Choosing edge regions for data hiding is a solution, and many algorithms deal with edges in images for data hiding. The paper 'Edge adaptive image steganography based on LSBMR algorithm' presented the results of an LSB steganography algorithm on gray-scale images only. This paper presents the results of analyzing the performance of edge adaptive steganography for colored (JPEG) images. The algorithms have been slightly modified for colored image implementation and are compared on the basis of evaluation parameters such as peak signal-to-noise ratio (PSNR) and mean square error (MSE). The method selects the edge region depending on the length of the secret message and the difference between two consecutive bits in the cover image. When the message is short, only small edge regions are utilized, leaving the other regions untouched. As the data rate increases, more regions can be used adaptively for data hiding by adjusting the parameters. Besides this, the message is encrypted using an efficient cryptographic algorithm, which further increases security.
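
    For orientation, the plain (non-adaptive) LSB baseline that the edge-adaptive LSBMR method improves upon simply overwrites the lowest bit of each cover pixel; a minimal sketch (ours, not the paper's LSBMR code, which operates on pixel pairs within edge regions):

```python
def embed_lsb(pixels, bits):
    """Replace the least significant bit of the first len(bits) pixels."""
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b
    return out

def extract_lsb(pixels, n):
    """Read back the n embedded bits."""
    return [p & 1 for p in pixels[:n]]
```

    Each pixel changes by at most 1, which is why distortion metrics such as MSE and PSNR stay favorable at low data rates; the edge-adaptive step decides which pixels may be touched at all.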

  15. An Interval Type-2 Fuzzy System with a Species-Based Hybrid Algorithm for Nonlinear System Control Design

    Directory of Open Access Journals (Sweden)

    Chung-Ta Li

    2014-01-01

    Full Text Available We propose a species-based hybrid of the electromagnetism-like mechanism (EM) and back-propagation (BP) algorithms (SEMBP) for the design of an interval type-2 fuzzy neural system with asymmetric membership functions (AIT2FNS). The interval type-2 asymmetric fuzzy membership functions (IT2 AFMFs) and the TSK-type consequent part are adopted to implement the network structure in the AIT2FNS. In addition, the type reduction procedure is integrated into an adaptive network structure to reduce computational complexity. Hence, the AIT2FNS can effectively enhance the approximation accuracy while using fewer fuzzy rules. The AIT2FNS is trained by the SEMBP algorithm, which contains the steps of uniform initialization, species determination, local search, total force calculation, movement, and evaluation. It combines the advantages of the EM and BP algorithms to attain faster convergence and lower computational complexity. The proposed SEMBP algorithm adopts the uniform method (which evenly scatters solution agents over the feasible solution region) and the species technique to improve the algorithm’s ability to find the global optimum. Finally, two illustrative examples of nonlinear system control are presented to demonstrate the performance and effectiveness of the proposed AIT2FNS with the SEMBP algorithm.

  16. Control and EMS of a Grid-Connected Microgrid with Economical Analysis

    Directory of Open Access Journals (Sweden)

    Mohamed El-Hendawi

    2018-01-01

    Full Text Available Recently, significant development has occurred in the field of microgrids and renewable energy systems (RESs). Integrating microgrids and renewable energy sources facilitates a sustainable energy future. This paper proposes a control algorithm and an optimal energy management system (EMS) for a grid-connected microgrid to minimize its operating cost. The microgrid includes photovoltaic (PV), wind turbine (WT), and energy storage systems (ESS). The interior search algorithm (ISA) optimization technique determines the optimal hour-by-hour scheduling for the microgrid system, while it meets the required load demand based on 24-h-ahead forecast data. The control system consists of three stages: EMS, supervisory control, and local control. The EMS is responsible for providing the control system with the optimum day-ahead scheduling of power flow between the microgrid (MG) sources, batteries, loads, and the main grid based on an economic analysis. The supervisory control stage is responsible for compensating the mismatch between the scheduled power and the real microgrid power. In addition, this paper presents the local control design to regulate the local power, current, and DC voltage of the microgrid. For verification, the proposed model was applied to a real case study in Oshawa (Ontario, Canada) with various load conditions.

  17. A Novel Parallel Algorithm for Edit Distance Computation

    Directory of Open Access Journals (Sweden)

    Muhammad Murtaza Yousaf

    2018-01-01

    Full Text Available The edit distance between two sequences is the minimum number of weighted transformation operations required to transform one string into the other; the operations are insert, remove, and substitute. A dynamic programming solution for edit distance exists, but it becomes computationally intensive when the strings are very long. This work presents a novel parallel algorithm for the edit distance problem of string matching. The algorithm is based on resolving dependencies in the dynamic programming solution of the problem, and it is able to compute each row of the edit distance table in parallel. In this way, it becomes possible to compute the complete table in min(m,n) iterations for strings of sizes m and n, whereas the state-of-the-art parallel algorithm solves the problem in max(m,n) iterations. The proposed algorithm also increases the amount of parallelism in each of its iterations, is capable of exploiting spatial locality in its implementation, and works in a load-balanced way that further improves its performance. The algorithm is implemented for multicore systems with shared memory. An OpenMP implementation shows linear speedup and better execution time compared to the state-of-the-art parallel approach, and the efficiency of the algorithm also proves better than its competitor's.
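
    The row dependency that the parallel algorithm resolves is visible in the sequential dynamic program: row i of the table is computed from row i-1 only. A minimal sequential sketch for reference (ours; the paper parallelizes the work within and across rows):

```python
def edit_distance(s, t):
    """Classic DP with unit weights; row i depends only on row i-1,
    which is the dependency row-parallel schemes exploit."""
    prev = list(range(len(t) + 1))  # distances from "" to prefixes of t
    for i, cs in enumerate(s, 1):
        cur = [i]
        for j, ct in enumerate(t, 1):
            cur.append(min(prev[j] + 1,                 # remove cs
                           cur[j - 1] + 1,              # insert ct
                           prev[j - 1] + (cs != ct)))   # substitute
        prev = cur
    return prev[-1]
```

    Keeping only the previous row is also what makes the memory footprint O(min(m, n)) rather than O(mn).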

  18. New Algorithm of Automatic Complex Password Generator Employing Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Sura Jasim Mohammed

    2018-01-01

    Full Text Available Due to the increase in information sharing, the popularization of the Internet, E-commerce transactions, and data transfer, security and authenticity have become important and necessary subjects. In this paper, an automated scheme is proposed to generate a strong, complex password. It is based on entering initial data, such as text (meaningful and simple information or not), encoding it, and then employing the Genetic Algorithm, using its crossover and mutation operations, to generate data different from the input. The generated password is non-guessable and can be used in many different applications and Internet services such as social networks, secured systems, distributed systems, and online services. The proposed password generator achieves diffusion, randomness, and confusion, which are necessary, required, and targeted properties of the resulting password. In addition, the length of the generated password differs from the length of the initial data, and any simple change or modification in the initial data produces a larger and clearer change in the generated password. The proposed work was implemented using the Visual Basic programming language.
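
    A toy version of the scheme, encode the input text and then apply GA crossover and mutation, can be sketched as below; the encoding, alphabet, and parameters here are our illustrative choices (in Python rather than the paper's Visual Basic):

```python
import random

PRINTABLE = [chr(c) for c in range(33, 127)]  # printable ASCII, no space

def crossover(a, b, rng):
    """Single-point crossover of two equal-length character lists."""
    cut = rng.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(chars, rng, rate=0.3):
    """Randomly replace characters with probability `rate`."""
    return [rng.choice(PRINTABLE) if rng.random() < rate else c for c in chars]

def generate_password(seed_text, length=12, seed=None):
    """Encode the input text, then derive a password via crossover/mutation."""
    rng = random.Random(seed)
    encoded = [PRINTABLE[(ord(c) * 7 + i) % len(PRINTABLE)]
               for i, c in enumerate(seed_text)]
    mate = [rng.choice(PRINTABLE) for _ in seed_text]
    child = mutate(crossover(encoded, mate, rng), rng)
    while len(child) < length:           # pad by mutating further copies
        child += mutate(child, rng)
    return "".join(child[:length])
```

    The crossover mixes encoded input with random material and the mutation scrambles it further, so small changes in the input text propagate into large changes in the output, the diffusion property the abstract describes.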

  19. A 3D Printing Model Watermarking Algorithm Based on 3D Slicing and Feature Points

    Directory of Open Access Journals (Sweden)

    Giao N. Pham

    2018-02-01

    Full Text Available With the increase of three-dimensional (3D) printing applications in many areas of life, a large amount of 3D printing data is copied, shared, and used several times without any permission from the original providers. Therefore, copyright protection and ownership identification for 3D printing data in communications or commercial transactions are practical issues. This paper presents a novel watermarking algorithm for 3D printing models based on embedding watermark data into the feature points of a 3D printing model. Feature points are determined and computed by the 3D slicing process along the Z axis of a 3D printing model. The watermark data is embedded into a feature point of a 3D printing model by changing the vector length of the feature point in OXY space based on the reference length. The x and y coordinates of the feature point are then changed according to the changed vector length that has been embedded with a watermark. Experimental results verified that the proposed algorithm is invisible and robust to geometric attacks, such as rotation, scaling, and translation. The proposed algorithm provides a better method than the conventional works, and its accuracy is much higher than that of previous methods.
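
    The abstract describes embedding a bit by adjusting a feature point's vector length in the XY plane against a reference length; one plausible reading is quantization-style embedding, sketched below (this quantization rule is our assumption, the paper's exact rule is not given in the record):

```python
import math

def embed_bit(x, y, bit, ref=1.0):
    """Quantize the XY vector length to an even (bit 0) or odd (bit 1)
    multiple of half the reference length, then rescale x and y."""
    r = math.hypot(x, y)
    k = round(r / (ref / 2))
    if k % 2 != bit:
        k += 1
    new_r = k * (ref / 2)
    s = new_r / r if r else 0.0
    return x * s, y * s

def extract_bit(x, y, ref=1.0):
    """Recover the bit from the parity of the quantized vector length."""
    return round(math.hypot(x, y) / (ref / 2)) % 2
```

    Rotation about the Z axis preserves the vector length, and uniform scaling preserves the ratio to the reference length if the reference scales with the model, which is consistent with the claimed robustness to geometric attacks.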

  20. SU-E-J-150: Four-Dimensional Cone-Beam CT Algorithm by Extraction of Physical and Motion Parameter of Mobile Targets Retrospective to Image Reconstruction with Motion Modeling

    International Nuclear Information System (INIS)

    Ali, I; Ahmad, S; Alsbou, N

    2015-01-01

    Purpose: To develop a 4D cone-beam CT (CBCT) algorithm that uses motion modeling to extract the actual length, CT number level, and motion amplitude of a mobile target retrospective to image reconstruction. Methods: The algorithm used three measurable parameters, the apparent length and the blurred CT number distribution of a mobile target obtained from CBCT images, to determine the actual length, the CT number value of the stationary target, and the motion amplitude. The predictions of this algorithm were tested with mobile targets of different well-known sizes made from tissue-equivalent gel, which were inserted into a thorax phantom. The phantom moved sinusoidally in one direction to simulate respiratory motion, using eight amplitudes ranging from 0 to 20 mm. Results: Using this 4D-CBCT algorithm, three unknown parameters were extracted retrospective to image reconstruction: the length of the target, the CT number level, and the speed or motion amplitude of the mobile target. The motion algorithm solved for the three unknown parameters using the measurable apparent length, CT number level, and gradient of a well-defined mobile target obtained from CBCT images. The motion model agreed with measured apparent lengths, which were dependent on the actual target length and motion amplitude. The gradient of the CT number distribution of the mobile target depends on the stationary CT number level, actual target length, and motion amplitude. Motion frequency and phase did not affect the elongation and CT number distribution of the mobile target and could not be determined. Conclusion: A 4D-CBCT motion algorithm was developed to extract three parameters, the actual length, CT number level, and motion amplitude or speed of mobile targets, directly from reconstructed CBCT images without prior knowledge of the stationary target parameters. This algorithm provides an alternative to 4D-CBCT without requiring motion tracking and sorting of the images into different breathing phases.

  1. SU-E-J-150: Four-Dimensional Cone-Beam CT Algorithm by Extraction of Physical and Motion Parameter of Mobile Targets Retrospective to Image Reconstruction with Motion Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Ali, I; Ahmad, S [University of Oklahoma Health Sciences, Oklahoma City, OK (United States); Alsbou, N [Ohio Northern University, Ada, OH (United States)

    2015-06-15

    Purpose: To develop a 4D cone-beam CT (CBCT) algorithm that uses motion modeling to extract the actual length, CT number level, and motion amplitude of a mobile target retrospective to image reconstruction. Methods: The algorithm used three measurable parameters, the apparent length and the blurred CT number distribution of a mobile target obtained from CBCT images, to determine the actual length, the CT number value of the stationary target, and the motion amplitude. The predictions of this algorithm were tested with mobile targets of different well-known sizes made from tissue-equivalent gel, which were inserted into a thorax phantom. The phantom moved sinusoidally in one direction to simulate respiratory motion, using eight amplitudes ranging from 0 to 20 mm. Results: Using this 4D-CBCT algorithm, three unknown parameters were extracted retrospective to image reconstruction: the length of the target, the CT number level, and the speed or motion amplitude of the mobile target. The motion algorithm solved for the three unknown parameters using the measurable apparent length, CT number level, and gradient of a well-defined mobile target obtained from CBCT images. The motion model agreed with measured apparent lengths, which were dependent on the actual target length and motion amplitude. The gradient of the CT number distribution of the mobile target depends on the stationary CT number level, actual target length, and motion amplitude. Motion frequency and phase did not affect the elongation and CT number distribution of the mobile target and could not be determined. Conclusion: A 4D-CBCT motion algorithm was developed to extract three parameters, the actual length, CT number level, and motion amplitude or speed of mobile targets, directly from reconstructed CBCT images without prior knowledge of the stationary target parameters. This algorithm provides an alternative to 4D-CBCT without requiring motion tracking and sorting of the images into different breathing phases.

  2. Beyond Mixing-length Theory: A Step Toward 321D

    Science.gov (United States)

    Arnett, W. David; Meakin, Casey; Viallet, Maxime; Campbell, Simon W.; Lattanzio, John C.; Mocák, Miroslav

    2015-08-01

    We examine the physical basis for algorithms to replace mixing-length theory (MLT) in stellar evolutionary computations. Our 321D procedure is based on numerical solutions of the Navier-Stokes equations. These implicit large eddy simulations (ILES) are three-dimensional (3D), time-dependent, and turbulent, including the Kolmogorov cascade. We use the Reynolds-averaged Navier-Stokes (RANS) formulation to make concise the 3D simulation data, and use the 3D simulations to give closure for the RANS equations. We further analyze this data set with a simple analytical model, which is non-local and time-dependent, and which contains both MLT and the Lorenz convective roll as particular subsets of solutions. A characteristic length (the damping length) again emerges in the simulations; it is determined by an observed balance between (1) the large-scale driving, and (2) small-scale damping. The nature of mixing and convective boundaries is analyzed, including dynamic, thermal and compositional effects, and compared to a simple model. We find that (1) braking regions (boundary layers in which mixing occurs) automatically appear beyond the edges of convection as defined by the Schwarzschild criterion, (2) dynamic (non-local) terms imply a non-zero turbulent kinetic energy flux (unlike MLT), (3) the effects of composition gradients on flow can be comparable to thermal effects, and (4) convective boundaries in neutrino-cooled stages differ in nature from those in photon-cooled stages (different Péclet numbers). The algorithms are based upon ILES solutions to the Navier-Stokes equations, so that, unlike MLT, they do not require any calibration to astronomical systems in order to predict stellar properties. Implications for solar abundances, helioseismology, asteroseismology, nucleosynthesis yields, supernova progenitors and core collapse are indicated.

  3. BEYOND MIXING-LENGTH THEORY: A STEP TOWARD 321D

    International Nuclear Information System (INIS)

    Arnett, W. David; Meakin, Casey; Viallet, Maxime; Campbell, Simon W.; Lattanzio, John C.; Mocák, Miroslav

    2015-01-01

    We examine the physical basis for algorithms to replace mixing-length theory (MLT) in stellar evolutionary computations. Our 321D procedure is based on numerical solutions of the Navier–Stokes equations. These implicit large eddy simulations (ILES) are three-dimensional (3D), time-dependent, and turbulent, including the Kolmogorov cascade. We use the Reynolds-averaged Navier–Stokes (RANS) formulation to make concise the 3D simulation data, and use the 3D simulations to give closure for the RANS equations. We further analyze this data set with a simple analytical model, which is non-local and time-dependent, and which contains both MLT and the Lorenz convective roll as particular subsets of solutions. A characteristic length (the damping length) again emerges in the simulations; it is determined by an observed balance between (1) the large-scale driving, and (2) small-scale damping. The nature of mixing and convective boundaries is analyzed, including dynamic, thermal and compositional effects, and compared to a simple model. We find that (1) braking regions (boundary layers in which mixing occurs) automatically appear beyond the edges of convection as defined by the Schwarzschild criterion, (2) dynamic (non-local) terms imply a non-zero turbulent kinetic energy flux (unlike MLT), (3) the effects of composition gradients on flow can be comparable to thermal effects, and (4) convective boundaries in neutrino-cooled stages differ in nature from those in photon-cooled stages (different Péclet numbers). The algorithms are based upon ILES solutions to the Navier–Stokes equations, so that, unlike MLT, they do not require any calibration to astronomical systems in order to predict stellar properties. Implications for solar abundances, helioseismology, asteroseismology, nucleosynthesis yields, supernova progenitors and core collapse are indicated

  4. A parallel row-based algorithm for standard cell placement with integrated error control

    Science.gov (United States)

    Sargent, Jeff S.; Banerjee, Prith

    1989-01-01

    A new row-based parallel algorithm for standard-cell placement targeted for execution on a hypercube multiprocessor is presented. Key features of this implementation include a dynamic simulated-annealing schedule, row-partitioning of the VLSI chip image, and two novel approaches to control error in parallel cell-placement algorithms: (1) Heuristic Cell-Coloring; (2) Adaptive Sequence Length Control.
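
    The record does not detail Adaptive Sequence Length Control; as a hypothetical illustration of the idea, the annealing loop below lengthens the move sequence at a temperature when few moves were accepted (the objective, thresholds, and schedule are ours, not the paper's placement code):

```python
import math
import random

def anneal(cost, neighbor, x0, t0=10.0, cooling=0.9, rounds=60, seed=0):
    """Simulated annealing with an adaptive per-temperature sequence length:
    if acceptance stalls, the next temperature gets a longer move sequence."""
    rng = random.Random(seed)
    x, t, seq_len = x0, t0, 10
    best = x
    for _ in range(rounds):
        accepted = 0
        for _ in range(seq_len):
            y = neighbor(x, rng)
            d = cost(y) - cost(x)
            if d <= 0 or rng.random() < math.exp(-d / t):
                x, accepted = y, accepted + 1
                if cost(x) < cost(best):
                    best = x
        if accepted < seq_len // 4:      # few acceptances: search harder
            seq_len = min(200, seq_len + 5)
        t *= cooling
    return best
```

    In a parallel placer, adapting the sequence length per temperature also gives a knob for trading accumulated cell-placement error against runtime.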

  5. Subsurface imaging by electrical and EM methods

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-12-01

    This report consists of 3 subjects. 1) Three dimensional inversion of resistivity data with topography : In this study, we developed a 3-D inversion method based on the finite element calculation of model responses, which can effectively accommodate the irregular topography. In solving the inverse problem, the iterative least-squares approach comprising the smoothness-constraints was taken along with the reciprocity approach in the calculation of Jacobian. Furthermore the Active Constraint Balancing, which has been recently developed by ourselves to enhance the resolving power of the inverse problem, was also employed. Since our new algorithm accounts for the topography in the inversion step, topography correction is not necessary as a preliminary processing and we can expect a more accurate image of the earth. 2) Electromagnetic responses due to a source in the borehole : The effects of borehole fluid and casing on the borehole EM responses should thoroughly be analyzed since they may affect the resultant image of the earth. In this study, we developed an accurate algorithm for calculating the EM responses containing the effects of borehole fluid and casing when a current-carrying ring is located on the borehole axis. An analytic expression for primary vertical magnetic field along the borehole axis was first formulated and the fast Fourier transform is to be applied to get the EM fields at any location in whole space. 3) High frequency electromagnetic impedance survey : At high frequencies the EM impedance becomes a function of the angle of incidence or the horizontal wavenumber, so the electrical properties cannot be readily extracted without first eliminating the effect of horizontal wavenumber on the impedance. For this purpose, this paper considers two independent methods for accurately determining the horizontal wavenumber, which in turn is used to correct the impedance data. The 'apparent' electrical properties derived from the corrected impedance

  6. Comparison of predictive performance of data mining algorithms in predicting body weight in Mengali rams of Pakistan

    Directory of Open Access Journals (Sweden)

    Senol Celik

    Full Text Available ABSTRACT The present study aimed at comparing the predictive performance of some data mining algorithms (CART, CHAID, Exhaustive CHAID, MARS, MLP, and RBF) on biometrical data of Mengali rams. To compare the predictive capability of the algorithms, biometrical data regarding body measurements (body length, withers height, and heart girth) and testicular measurements (testicular length, scrotal length, and scrotal circumference) of Mengali rams were evaluated by most goodness-of-fit criteria for predicting live body weight. In addition, age was considered as a continuous independent variable. In this context, the MARS data mining algorithm was used for the first time to predict body weight, in two forms, without (MARS_1) and with (MARS_2) interaction terms. The superiority order in the predictive accuracy of the algorithms was found to be CART > CHAID ≈ Exhaustive CHAID > MARS_2 > MARS_1 > RBF > MLP. Moreover, all tested algorithms provided strong predictive accuracy for estimating body weight. However, MARS is the only algorithm that generated a prediction equation for body weight. It is therefore hoped that these results might present a valuable contribution to predicting body weight, to describing the relationship between body weight and body and testicular measurements, to revealing breed standards, and to the conservation of indigenous gene sources for Mengali sheep breeding, making more profitable and productive sheep production possible. The use of data mining algorithms is useful for revealing the relationship between body weight and testicular traits when describing the breed standards of Mengali sheep.

  7. Habitat quality assessment for the Eurasian otter (<em>Lutra lutra</em>) on the river Jajrood, Iran

    Directory of Open Access Journals (Sweden)

    Roohallah Mirzaei

    2010-06-01

    Full Text Available Abstract There is little information about the status and ecology of the Eurasian otter (<em>Lutra lutra</em>) in Iran. We assessed the habitat suitability for otters of the River Jajrood, Tehran province, measuring, or visually estimating, 12 environmental parameters along 16 river stretches (sampling sites), each 600 m long. The downstream stretches of the river were found to be more suitable for otters than the upper part of its course. Although the assessment of habitat suitability for the otter may be affected by several limits, the current distribution of the species on the river agrees with the results of this study. The preservation of the otter in Tehran province should involve the restoration of the ecosystem of the River Jajrood in order to increase the length of suitable river stretches.
    Riassunto: Assessment of habitat suitability for the otter (<em>Lutra lutra</em>) of the River Jajrood, Iran. Information on the otter (<em>Lutra lutra</em>) in Iran is scarce. The habitat suitability of the River Jajrood, Tehran province, for the species was assessed by measuring or estimating 12 environmental parameters at 16 sampling sites, each coinciding with a 600 m long river stretch. The downstream stretches proved more suitable than the upper course of the river. Despite the many limitations of the adopted habitat suitability assessment method, the results agree with the current distribution of the otter along the River Jajrood. The conservation of the otter in Tehran province should include habitat improvements aimed at increasing the linear extent of suitable habitats along the River Jajrood.

    doi:10.4404/hystrix-20.2-4447

  8. Optimal Golomb Ruler Sequences Generation for Optical WDM Systems: A Novel Parallel Hybrid Multi-objective Bat Algorithm

    Science.gov (United States)

    Bansal, Shonak; Singh, Arun Kumar; Gupta, Neena

    2017-02-01

    In real life, multi-objective engineering design problems are very tough and time-consuming optimization problems due to their high degree of nonlinearity, complexity, and inhomogeneity. Nature-inspired multi-objective optimization algorithms are now becoming popular for solving multi-objective engineering design problems. This paper proposes an original multi-objective Bat algorithm (MOBA) and its extended form, a novel parallel hybrid multi-objective Bat algorithm (PHMOBA), to generate shortest-length Golomb rulers, called optimal Golomb ruler (OGR) sequences, at a reasonable computation time. OGRs find application in optical wavelength division multiplexing (WDM) systems as a channel-allocation algorithm to reduce four-wave mixing (FWM) crosstalk. The performance of both proposed algorithms in generating OGRs for optical WDM channel allocation is compared with existing classical computing and nature-inspired algorithms, including extended quadratic congruence (EQC), search algorithm (SA), genetic algorithms (GAs), biogeography based optimization (BBO), and big bang-big crunch (BB-BC) optimization algorithms. Simulations conclude that the proposed parallel hybrid multi-objective Bat algorithm works more efficiently than the original multi-objective Bat algorithm and other existing algorithms in generating OGRs for optical WDM systems. The PHMOBA algorithm has a higher convergence and success rate than the original MOBA. For generating OGRs up to 20 marks, the efficiency improvement of the proposed PHMOBA, in terms of ruler length and total optical channel bandwidth (TBW), is 100 %, whereas for the original MOBA it is 85 %. Finally, the implications for further research are also discussed.
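
    An OGR is a ruler whose pairwise mark differences are all distinct and whose length (largest minus smallest mark) is minimal for its number of marks; the defining property is easy to check (a small sketch of ours; [0, 1, 4, 9, 11] is the known optimal 5-mark ruler of length 11):

```python
from itertools import combinations

def is_golomb_ruler(marks):
    """True if all pairwise differences between marks are distinct."""
    diffs = [b - a for a, b in combinations(sorted(marks), 2)]
    return len(diffs) == len(set(diffs))

def ruler_length(marks):
    """Length of the ruler: span between the extreme marks."""
    return max(marks) - min(marks)
```

    In the WDM channel-allocation setting, the distinct differences keep four-wave-mixing products off the assigned channels, and the minimal ruler length bounds the total occupied channel bandwidth.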

  9. An Analysis of Light Periods of BL Lac Object S5 0716+714 with the MUSIC Algorithm

    Science.gov (United States)

    Tang, Jie

    2012-07-01

    The multiple signal classification (MUSIC) algorithm is introduced for the estimation of the light periods of BL Lac objects. The principle of the MUSIC algorithm is given, together with a test of its spectral resolution using a simulated signal. From the literature, we have collected a large number of effective observational data of the BL Lac object S5 0716+714 in the three optical wavebands V, R, and I from 1994 to 2008. The light periods of S5 0716+714 are obtained by means of the MUSIC algorithm and the average periodogram algorithm, respectively. It is found that there exist two major periodic components: one with a period of (3.33±0.08) yr and another with a period of (1.24±0.01) yr. The comparison of the periodicity-analysis performance of the two algorithms indicates that the MUSIC algorithm has a smaller requirement on the sample length, as well as good spectral resolution and anti-noise ability, improving the accuracy of periodicity analysis in the case of short sample lengths.
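
    MUSIC itself requires an eigendecomposition of the sample covariance matrix into signal and noise subspaces, which is beyond a short sketch; for contrast, the periodogram-style baseline it is compared against amounts to picking the dominant DFT peak (a generic illustration, not the paper's code):

```python
import cmath

def dominant_period(samples):
    """Return the period (in samples) of the largest non-DC DFT peak."""
    n = len(samples)
    best_mag, best_k = -1.0, 1
    for k in range(1, n // 2 + 1):
        s = sum(x * cmath.exp(-2j * cmath.pi * k * i / n)
                for i, x in enumerate(samples))
        if abs(s) > best_mag:
            best_mag, best_k = abs(s), k
    return n / best_k
```

    A peak-picker of this kind needs several full cycles in the record to resolve a period, which illustrates why a subspace method like MUSIC can perform better on short samples.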

  10. Natural Products from Antarctic Colonial Ascidians of the Genera Aplidium and Synoicum: Variability and Defensive Role

    Directory of Open Access Journals (Sweden)

    Conxita Avila

    2012-08-01

    Full Text Available Ascidians have developed multiple defensive strategies mostly related to physical, nutritional or chemical properties of the tunic. One such strategy is chemical defense based on secondary metabolites. We analyzed a series of colonial Antarctic ascidians from deep-water collections belonging to the genera Aplidium and Synoicum to evaluate the incidence of organic deterrents and their variability. The ether fractions from 15 samples, including specimens of the species A. falklandicum, A. fuegiense, A. meridianum, A. millari and S. adareanum, were subjected to feeding assays with two relevant sympatric predators: the starfish Odontaster validus and the amphipod Cheirimedon femoratus. All samples revealed repellency. Nonetheless, some colonies concentrated defensive chemicals in internal body regions rather than in the tunic. Four ascidian-derived meroterpenoids, rossinone B and the three derivatives 2,3-epoxy-rossinone B, 3-epi-rossinone B and 5,6-epoxy-rossinone B, and the indole alkaloids meridianins A–G, along with other minor meridianin compounds, were isolated from several samples. Some purified metabolites tested in feeding assays exhibited potent unpalatability, revealing their role in predation avoidance. Ascidian extracts and purified compound fractions were further assessed in antibacterial tests against a marine Antarctic bacterium. Only the meridianins showed inhibition activity, demonstrating a multifunctional defensive role. According to their occurrence in nature and within our colonial specimens, the possible origin of both types of metabolites is discussed.

  11. Fast algorithms for computing defects and their derivatives in the Regge calculus

    International Nuclear Information System (INIS)

    Brewin, Leo

    2011-01-01

    Any practical attempt to solve the Regge equations, these being a large system of non-linear algebraic equations, will almost certainly employ a Newton-Raphson-like scheme. In such cases, it is essential that efficient algorithms be used when computing the defect angles and their derivatives with respect to the leg lengths. The purpose of this paper is to present details of such an algorithm.

  12. Automatic Determination of Fiber-Length Distribution in Composite Material Using 3D CT Data

    Science.gov (United States)

    Teßmann, Matthias; Mohr, Stephan; Gayetskyy, Svitlana; Haßler, Ulf; Hanke, Randolf; Greiner, Günther

    2010-12-01

    Determining fiber length distribution in fiber reinforced polymer components is a crucial step in quality assurance, since fiber length has a strong influence on overall strength, stiffness, and stability of the material. The approximate fiber length distribution is usually determined early in the development process, as conventional methods require a destruction of the sample component. In this paper, a novel, automatic, and nondestructive approach for the determination of fiber length distribution in fiber reinforced polymers is presented. For this purpose, high-resolution computed tomography is used as imaging method together with subsequent image analysis for evaluation. The image analysis consists of an iterative process where single fibers are detected automatically in each iteration step after having applied image enhancement algorithms. Subsequently, a model-based approach is used together with a priori information in order to guide a fiber tracing and segmentation process. Thereby, the length of the segmented fibers can be calculated and a length distribution can be deduced. The performance and the robustness of the segmentation method is demonstrated by applying it to artificially generated test data and selected real components.

  13. A Local Scalable Distributed EM Algorithm for Large P2P Networks

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper describes a local and distributed expectation maximization algorithm for learning parameters of Gaussian mixture models (GMM) in large peer-to-peer (P2P)...

  14. Fumigant Antifungal Activity of Myrtaceae Essential Oils and Constituents from Leptospermum petersonii against Three Aspergillus Species

    Directory of Open Access Journals (Sweden)

    Il-Kwon Park

    2012-09-01

    Full Text Available Commercial plant essential oils obtained from 11 Myrtaceae plant species were tested for their fumigant antifungal activity against Aspergillus ochraceus, A. flavus, and A. niger. Essential oils extracted from Leptospermum petersonii at air concentrations of 56 × 10−3 mg/mL and 28 × 10−3 mg/mL completely inhibited the growth of the three Aspergillus species. However, at an air concentration of 14 × 10−3 mg/mL, the inhibition rates of L. petersonii essential oils were reduced to 20.2% and 18.8% in the case of A. flavus and A. niger, respectively. The other Myrtaceae essential oils (56 × 10−3 mg/mL) only weakly inhibited the fungi or had no detectable effect. Gas chromatography-mass spectrometry analysis identified 16 compounds in L. petersonii essential oil. The antifungal activity of the identified compounds was tested individually by using standard or synthesized compounds. Of these, neral and geranial inhibited growth by 100% at an air concentration of 56 × 10−3 mg/mL, whereas the activity of citronellol was somewhat lower (80%). The other compounds exhibited only moderate or weak antifungal activity. The antifungal activities of blends of constituents identified in L. petersonii oil indicated that neral and geranial were the major contributors to the fumigant and antifungal activities.

  15. Electromagnetism Mechanism for Enhancing the Refueling Cycle Length of a WWER-1000

    Directory of Open Access Journals (Sweden)

    Navid Poursalehi

    2017-02-01

    Full Text Available Increasing the operation cycle length can be an important goal in the fuel reload design of a nuclear reactor core. In this research paper, a new optimization approach, the electromagnetism mechanism (EM), is applied to the fuel arrangement design of the Bushehr WWER-1000 core. For this purpose, a neutronic solver has been developed for calculating the required parameters during the reload cycle of the reactor. In this package, two modules, the PARCS v2.7 and WIMS-5B codes, have been linked and integrated into a solver for use in the fuel arrangement optimization. The first results of the prepared package, along with the cycle for the original pattern of Bushehr WWER-1000, are compared and verified against the Final Safety Analysis Report, and then the results of the EM linked with the Purdue Advanced Reactor Core Simulator (PARCS) and Winfrith Improved Multigroup Scheme (WIMS) codes are reported for the loading pattern optimization. Overall, the numerical results of our loading pattern optimization indicate the power of the EM for this problem and show an effective improvement of the desired parameters for the obtained semi-optimized core pattern in comparison to the designer's scheme.

  16. Electromagnetism mechanism for enhancing the refueling cycle length of a WWER-1000

    Energy Technology Data Exchange (ETDEWEB)

    Poursalehi, Navid; Nejati-Zadeh, Mostafa; Minuchehr, Abdolhamid [Dept. of Nuclear Engineering, Shahid Beheshti University, Tehran (Iran, Islamic Republic of)

    2017-02-15

    Increasing the operation cycle length can be an important goal in the fuel reload design of a nuclear reactor core. In this research paper, a new optimization approach, the electromagnetism mechanism (EM), is applied to the fuel arrangement design of the Bushehr WWER-1000 core. For this purpose, a neutronic solver has been developed for calculating the required parameters during the reload cycle of the reactor. In this package, two modules, the PARCS v2.7 and WIMS-5B codes, have been linked and integrated into a solver for use in the fuel arrangement optimization. The first results of the prepared package, along with the cycle for the original pattern of Bushehr WWER-1000, are compared and verified against the Final Safety Analysis Report, and then the results of the EM linked with the Purdue Advanced Reactor Core Simulator (PARCS) and Winfrith Improved Multigroup Scheme (WIMS) codes are reported for the loading pattern optimization. Overall, the numerical results of our loading pattern optimization indicate the power of the EM for this problem and show an effective improvement of the desired parameters for the obtained semi-optimized core pattern in comparison to the designer's scheme.

  17. Clinical Relevance of CDH1 and CDH13 DNA-Methylation in Serum of Cervical Cancer Patients

    Directory of Open Access Journals (Sweden)

    Günther K. Bonn

    2012-07-01

    Full Text Available This study was designed to investigate the DNA-methylation status of E-cadherin (CDH1) and H-cadherin (CDH13) in serum samples of cervical cancer patients and control patients with no malignant diseases, and to evaluate the clinical utility of these markers. The DNA-methylation status of CDH1 and CDH13 was analyzed by means of MethyLight technology in serum samples from 49 cervical cancer patients and 40 patients with diseases other than cancer. To compare this methylation analysis with another technique, we analyzed the samples with a denaturing high performance liquid chromatography (DHPLC) PCR method. The specificity and sensitivity of CDH1 DNA-methylation measured by MethyLight were 75% and 55%, and for CDH13 DNA-methylation 95% and 10%. We identified a specificity of 92.5% and a sensitivity of only 27% for the CDH1 DHPLC-PCR analysis. Multivariate analysis showed that serum CDH1 methylation-positive patients had a 7.8-fold risk of death (95% CI: 2.2–27.7; p = 0.001) and a 92.8-fold risk of relapse (95% CI: 3.9–2207.1; p = 0.005). We concluded that the serological detection of CDH1 and CDH13 DNA-hypermethylation is not an ideal diagnostic tool due to its low diagnostic specificity and sensitivity. However, CDH1 methylation analysis in serum samples may be of potential use as a prognostic marker for cervical cancer patients.

  18. High-Performance Psychometrics: The Parallel-E Parallel-M Algorithm for Generalized Latent Variable Models. Research Report. ETS RR-16-34

    Science.gov (United States)

    von Davier, Matthias

    2016-01-01

    This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…

  19. Extraction of Dihydroquercetin from Larix gmelinii with Ultrasound-Assisted and Microwave-Assisted Alternant Digestion

    Directory of Open Access Journals (Sweden)

    Yuangang Zu

    2012-07-01

    Full Text Available An ultrasound- and microwave-assisted alternant extraction method (UMAE) was applied for extracting dihydroquercetin (DHQ) from Larix gmelinii wood. This investigation was conducted using 60% ethanol as solvent, a 1:12 solid-to-liquid ratio, and a 3 h soaking time. The optimum treatment time was 40 min of ultrasound and 20 min of microwave, respectively, and the extraction was performed once. Under the optimized conditions, a satisfactory extraction yield of the target analyte was obtained. Relative to the ultrasound-assisted or microwave-assisted method, the proposed approach provides a higher extraction yield. The effect of DHQ at different concentrations and of synthetic antioxidants on oxidative stability in soy bean oil stored for 20 days at different temperatures (25 °C and 60 °C) was compared. DHQ was more effective in restraining soy bean oil oxidation, and a dose-response relationship was observed. The antioxidant activity of DHQ was a little stronger than that of BHA and BHT. Soy bean oil supplemented with 0.08 mg/g DHQ exhibited favorable antioxidant effects and is preferable for effectively avoiding oxidation. The L. gmelinii wood samples before and after extraction were characterized by scanning electron microscopy. The results showed that the UMAE method is a simple and efficient technique for sample preparation.

  20. Mobile and embedded fast high resolution image stitching for long length rectangular monochromatic objects with periodic structure

    Science.gov (United States)

    Limonova, Elena; Tropin, Daniil; Savelyev, Boris; Mamay, Igor; Nikolaev, Dmitry

    2018-04-01

    In this paper we describe a stitching protocol that makes it possible to obtain high-resolution images of long monochromatic objects with periodic structure. This protocol can be used for long documents or for man-made objects in satellite images of uninhabited regions such as the Arctic. The length of such objects can be considerable, while modern camera sensors have limited resolution and cannot provide a good enough image of the whole object for further processing, e.g. use in an OCR system. The idea of the proposed method is to acquire a video stream containing the full object in high resolution and to use image stitching. We expect the scanned object to have straight boundaries and a periodic structure, which allows us to introduce regularization into the stitching problem and adapt the algorithm to the limited computational power of mobile and embedded CPUs. With the help of the detected boundaries and structure we estimate the homography between frames and use this information to reduce the complexity of stitching. We demonstrate our algorithm on a mobile device and show an image processing speed of 2 fps on a Samsung Exynos 5422 processor.

  1. GASPACHO: a generic automatic solver using proximal algorithms for convex huge optimization problems

    Science.gov (United States)

    Goossens, Bart; Luong, Hiêp; Philips, Wilfried

    2017-08-01

    Many inverse problems (e.g., demosaicking, deblurring, denoising, image fusion, HDR synthesis) share various similarities: degradation operators are often modeled by a specific data fitting function while image prior knowledge (e.g., sparsity) is incorporated by additional regularization terms. In this paper, we investigate automatic algorithmic techniques for evaluating proximal operators. These algorithmic techniques also enable efficient calculation of adjoints from linear operators in a general matrix-free setting. In particular, we study the simultaneous-direction method of multipliers (SDMM) and the parallel proximal algorithm (PPXA) solvers and show that the automatically derived implementations are well suited for both single-GPU and multi-GPU processing. We demonstrate this approach for an Electron Microscopy (EM) deconvolution problem.
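For readers unfamiliar with proximal operators, the classic closed-form example used as a building block by splitting solvers such as SDMM and PPXA is soft-thresholding, the proximal operator of the ℓ1 regularizer (a generic illustration, not code from the paper):

```python
import numpy as np

def prox_l1(x, lam):
    """Proximal operator of lam * ||x||_1, i.e. elementwise soft-thresholding:
    prox(x) = sign(x) * max(|x| - lam, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

v = np.array([3.0, 0.5, 1.2])
w = prox_l1(v, 1.0)   # zeroes the entry with |x| <= 1, shrinks the others by 1
```

Splitting solvers iterate such prox evaluations for each term of the objective, which is why an automatic way of deriving them (as the paper proposes) is attractive.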

  2. How Varroa Parasitism Affects the Immunological and Nutritional Status of the Honey Bee, Apis mellifera

    Directory of Open Access Journals (Sweden)

    Katherine A. Aronstein

    2012-06-01

    Full Text Available We investigated the effect of the parasitic mite Varroa destructor on the immunological and nutritional condition of honey bees, Apis mellifera, from the perspective of the individual bee and the colony. Pupae, newly-emerged adults and foraging adults were sampled from honey bee colonies at one site in southern Texas, USA. Varroa-infested bees displayed an elevated titer of Deformed Wing Virus (DWV), suggestive of a depressed capacity to limit viral replication. Expression of genes coding for three anti-microbial peptides (defensin1, abaecin, hymenoptaecin) was either not significantly different between Varroa-infested and uninfested bees or was significantly elevated in Varroa-infested bees, varying with sampling date and bee developmental age. The effect of Varroa on nutritional indices of the bees was complex, with protein, triglyceride, glycogen and sugar levels strongly influenced by the life stage of the bee and the individual colony. Protein content was depressed and free amino acid content elevated in Varroa-infested pupae, suggesting that protein synthesis, and consequently growth, may be limited in these insects. No simple relationship between the values of nutritional and immune-related indices was observed, and colony-scale effects were indicated by the reduced weight of pupae in colonies with high Varroa abundance, irrespective of whether the individual pupa bore Varroa.

  3. Construction of Short-Length High-Rates LDPC Codes Using Difference Families

    Directory of Open Access Journals (Sweden)

    Deny Hamdani

    2010-10-01

    Full Text Available A low-density parity-check (LDPC) code is a linear block error-correcting code defined by a sparse parity-check matrix. It is decoded using the message-passing algorithm and, in many cases, is capable of outperforming turbo codes. This paper presents a class of LDPC codes showing good performance with low encoding complexity. The codes are constructed using difference families from combinatorial design. The resulting code, which is designed to have short code length and high code rate, can be encoded with low complexity due to its quasi-cyclic structure, and performs well when it is iteratively decoded with the sum-product algorithm. These properties make the LDPC code quite suitable for applications in future wireless local area networks.
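To make the construction idea concrete, here is a hedged sketch (not the paper's construction) using the well-known (7,3,1) planar difference set {1, 2, 4} mod 7: the circulant incidence matrix it generates has uniform row and column weight 3, and because every nonzero difference occurs exactly once, any two columns share at most one check, so the Tanner graph is free of 4-cycles:

```python
import numpy as np

def circulant_from_difference_set(D, v):
    """Circulant incidence matrix over Z_v: row i has 1s at positions
    (d + i) mod v for each d in D. For a (v, k, 1) difference set this
    yields a parity-check matrix whose Tanner graph has no 4-cycles."""
    H = np.zeros((v, v), dtype=int)
    for i in range(v):
        for d in D:
            H[i, (d + i) % v] = 1
    return H

H = circulant_from_difference_set([1, 2, 4], 7)   # (7,3,1) planar difference set
overlaps = H.T @ H            # diagonal = column weights, off-diagonal = shared checks
col_weights = H.sum(axis=0)   # every column has weight 3
```

Practical quasi-cyclic LDPC codes replace each such position with a larger circulant block, but the no-4-cycle argument via distinct differences is the same.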

  4. Clustering of tethered satellite system simulation data by an adaptive neuro-fuzzy algorithm

    Science.gov (United States)

    Mitra, Sunanda; Pemmaraju, Surya

    1992-01-01

    Recent developments in neuro-fuzzy systems indicate that the concepts of adaptive pattern recognition, when used to identify appropriate control actions corresponding to clusters of patterns representing system states in dynamic nonlinear control systems, may result in innovative designs. A modular, unsupervised neural network architecture, in which fuzzy learning rules have been embedded, is used for on-line identification of similar states. The architecture and control rules involved in Adaptive Fuzzy Leader Clustering (AFLC) allow this system to be incorporated in control systems for identification of system states corresponding to specific control actions. We have used this algorithm to cluster the simulation data of the Tethered Satellite System (TSS) to estimate the range of delta voltages necessary to maintain the desired length rate of the tether. The AFLC algorithm is capable of on-line estimation of the appropriate control voltages from the corresponding length error and length-rate error without a priori knowledge of their membership functions or familiarity with the behavior of the Tethered Satellite System.

  5. Trophic systems and chorology: data from shrews, moles and voles of Italy preyed on by the barn owl / Sistemi trofici e corologia: dati su Soricidae, Talpidae ed Arvicolidae d'Italia predati da Tyto alba (Scopoli, 1769)

    Directory of Open Access Journals (Sweden)

    Longino Contoli

    1986-12-01

    Full Text Available Abstract In small-mammal biogeography, the available data are as yet far too scanty to elucidate the distribution of many taxa, especially with regard to absence from a given area. In this respect, standardized quantitative sampling techniques, like owl-pellet analysis, can not only enhance faunistic knowledge but also estimate the probability that a given taxon "m" is actually absent when it is lacking from the diet of an individual raptor. For the latter purpose, the relevant frequencies of "m" in the diets at other ecologically similar sites of the same raptor species are averaged ($\overline{f}_m$); the relevant standard error, multiplied by a coefficient "a" according to the desired degree of accuracy, is subtracted ($\overline{f}_m - aE$); then the probability that a single specimen does not belong to "m" is obtained ($P_0 = 1 - \overline{f}_m + aE$); lastly, the desired accuracy probability ($P_d$) is chosen. Now "$N_d$", the number of individuals of all prey species in a single site needed to obtain, with the desired probability, at least one specimen of "m", is given by $$N = \frac{\ln P_d}{\ln P_0}$$ Obviously, every site diet with more than "N" preyed individuals and without any specimen of "m" is considered to lack that taxon. A "usefulness index" for the above purposes is outlined and checked for three raptors. Some examples of the usefulness of the owl-pellet analysis method in biogeography are given, concerning Tyto alba diets in peninsular Italy: - Sorex minutus, lacking in some quite insulated areas; - Sorex araneus (sensu stricto, after GRAF et al., 1979), present also in lowland areas in Emilia-Romagna; - Crocidura suaveolens and - Suncus etruscus, present also in the southernmost part of Calabria (Reggio province); - Talpa caeca, present also in the Antiapennines of Latium (Cimini mounts); - Talpa romana
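The closing formula can be evaluated directly; a small sketch (with an illustrative prey frequency, not data from the paper):

```python
import math

def required_prey_sample(p0, p_accuracy=0.05):
    """Number N of prey individuals needed so that, with probability
    1 - p_accuracy, at least one specimen of taxon m appears, when each
    individual independently fails to belong to m with probability p0.
    N = ln(P_d) / ln(P_0), rounded up to a whole individual."""
    return math.ceil(math.log(p_accuracy) / math.log(p0))

# e.g. if taxon m makes up 5% of comparable diets (p0 = 0.95),
# about 59 prey individuals are needed for 95% confidence:
print(required_prey_sample(0.95, 0.05))   # 59
```

A pellet sample from one site exceeding this N with no specimen of m would then, per the abstract's criterion, be scored as a true absence.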

  6. Weighted-Bit-Flipping-Based Sequential Scheduling Decoding Algorithms for LDPC Codes

    Directory of Open Access Journals (Sweden)

    Qing Zhu

    2013-01-01

    Full Text Available Low-density parity-check (LDPC) codes can be applied in many different scenarios, such as video broadcasting and satellite communications. LDPC codes are commonly decoded by an iterative algorithm called belief propagation (BP) over the corresponding Tanner graph. The original BP updates all the variable nodes simultaneously, followed by all the check nodes simultaneously as well. We propose a sequential scheduling algorithm based on the weighted bit-flipping (WBF) algorithm for the sake of improving the convergence speed. Notably, WBF is a simple, low-complexity algorithm. We combine it with BP to obtain the advantages of both algorithms: the flipping function used in WBF is borrowed to determine the priority of scheduling. Simulation results show that the approach provides a good tradeoff between FER performance and computational complexity for short-length LDPC codes.

  7. Algorithm of orthogonal bi-axle for auto-separating of watermelon seeds

    Science.gov (United States)

    Sun, Yong; Guan, Miao; Yu, Daoqin; Wang, Jing

    2007-11-01

    During the process of watermelon seed characteristic extraction and separation, the seeds' major and minor axes and their length-to-width ratio play a very important role in evaluating the regularity of their appearance. It is quite difficult to determine the orthogonal bi-axes because watermelon seeds are flat and irregular in shape, with no fixed rule to follow. After many experiments and much research, the authors propose an orthogonal bi-axes algorithm for granulated objects. It has been put into practice and validated in an auto-separation system for watermelon seeds. The algorithm has the advantages of lower time complexity and higher precision compared with other algorithms, can be applied to other similar granulated objects, and has widespread application value.

  8. Scheduling Algorithms for Maximizing Throughput with Zero-Forcing Beamforming in a MIMO Wireless System

    Science.gov (United States)

    Foronda, Augusto; Ohta, Chikara; Tamaki, Hisashi

    Dirty paper coding (DPC) is a strategy to achieve the capacity region of multiple input multiple output (MIMO) downlink channels, and a DPC scheduler is throughput optimal if users are selected according to their queue states and current rates. However, DPC is difficult to implement in practical systems. One solution, the zero-forcing beamforming (ZFBF) strategy, has been proposed to achieve the same asymptotic sum-rate capacity as DPC with an exhaustive search over the entire user set. Some suboptimal user group selection schedulers with reduced complexity based on the ZFBF strategy (ZFBF-SUS) and the proportional fair (PF) scheduling algorithm (PF-ZFBF) have also been proposed to enhance throughput and fairness among the users, respectively. However, they are not throughput optimal, and fairness and throughput decrease if user queue lengths differ due to differing user channel quality. Therefore, we propose two different scheduling algorithms: a throughput-optimal scheduling algorithm (ZFBF-TO) and a reduced-complexity scheduling algorithm (ZFBF-RC). Both are based on the ZFBF strategy and, at every time slot, have to select some users based on user channel quality, user queue length and orthogonality among users. Moreover, the proposed algorithms have to produce the rate allocation and power allocation for the selected users based on a modified water-filling method. We analyze the schedulers' complexity, and numerical results show that ZFBF-RC provides throughput and fairness improvements compared to the ZFBF-SUS and PF-ZFBF scheduling algorithms.
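The "modified water-filling" step builds on the classical water-filling power allocation. A generic sketch of the classical version (the paper's modification is not reproduced here), with the water level found by bisection:

```python
import numpy as np

def water_filling(gains, total_power, iters=60):
    """Classical water-filling: allocate p_i = max(0, mu - 1/g_i) to each
    channel with gain g_i, with the water level mu chosen by bisection so
    that the allocations sum to the total power budget."""
    inv = 1.0 / np.asarray(gains, dtype=float)   # "floor heights" 1/g_i
    lo, hi = inv.min(), inv.max() + total_power  # mu is bracketed in [lo, hi]
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - inv, 0.0).sum() > total_power:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.5 * (lo + hi) - inv, 0.0)

p = water_filling([2.0, 1.0, 0.2], total_power=3.0)
# stronger channels get more power; the weakest may get none,
# and the allocations sum to the power budget
```

In a ZFBF scheduler the effective gains would come from the zero-forced user channels selected in that slot.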

  9. Estimating model error covariances in nonlinear state-space models using Kalman smoothing and the expectation-maximisation algorithm

    KAUST Repository

    Dreano, Denis; Tandeo, P.; Pulido, M.; Ait-El-Fquih, Boujemaa; Chonavel, T.; Hoteit, Ibrahim

    2017-01-01

    Specification and tuning of errors from dynamical models are important issues in data assimilation. In this work, we propose an iterative expectation-maximisation (EM) algorithm to estimate the model error covariances using classical extended

  10. Pressure algorithm for elliptic flow calculations with the PDF method

    Science.gov (United States)

    Anand, M. S.; Pope, S. B.; Mongia, H. C.

    1991-01-01

    An algorithm to determine the mean pressure field for elliptic flow calculations with the probability density function (PDF) method is developed and applied. The PDF method is a most promising approach for the computation of turbulent reacting flows. Previous computations of elliptic flows with the method were in conjunction with conventional finite volume based calculations that provided the mean pressure field. The algorithm developed and described here permits the mean pressure field to be determined within the PDF calculations. The PDF method incorporating the pressure algorithm is applied to the flow past a backward-facing step. The results are in good agreement with data for the reattachment length, mean velocities, and turbulence quantities including triple correlations.

  11. An iterative algorithm for calculating stylus radius unambiguously

    International Nuclear Information System (INIS)

    Vorburger, T V; Zheng, A; Renegar, T B; Song, J-F; Ma, L

    2011-01-01

    The stylus radius is an important specification for stylus instruments and is commonly provided by instrument manufacturers. However, it is difficult to measure the stylus radius unambiguously. Accurate profiles of the stylus tip may be obtained by profiling over an object sharper than itself, such as a razor blade. However, the stylus profile thus obtained is a partial arc, and unless the shape of the stylus tip is a perfect sphere or circle, the effective value of the radius depends on the length of the tip profile over which the radius is determined. We have developed an iterative least-squares algorithm aimed at determining the effective least-squares stylus radius unambiguously. So far, the algorithm converges to reasonable results for the least-squares stylus radius. We suggest that the algorithm be considered for adoption in documentary standards describing the properties of stylus instruments.
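The iterative algorithm itself is not given in the abstract, but fitting a radius to a partial arc is typically done with a least-squares circle fit. A standard algebraic (Kåsa) fit, shown as a hedged sketch rather than the authors' method:

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic least-squares circle fit (Kasa method): rewrite
    (x-a)^2 + (y-b)^2 = r^2 as x^2 + y^2 = 2ax + 2by + c and solve the
    linear system for a, b, c; then r = sqrt(c + a^2 + b^2)."""
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    rhs = x**2 + y**2
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return a, b, np.sqrt(c + a**2 + b**2)

# partial arc of a circle with center (1, 2) and radius 5
t = np.linspace(0.3, 1.2, 50)
x, y = 1 + 5 * np.cos(t), 2 + 5 * np.sin(t)
cx, cy, r = fit_circle(x, y)   # ≈ (1.0, 2.0, 5.0)
```

For a non-circular tip, repeating this fit over profile segments of increasing length shows exactly the length-dependence of the effective radius that the abstract describes.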

  12. Reproductive traits in the European hare (Lepus europaeus Pallas): the typical or Brown and the Mountain haplotypes.

    Directory of Open Access Journals (Sweden)

    Charlotte Ragagli

    2008-07-01

    Full Text Available Abstract Four hundred and two pairs of hares belonging to the mountain and brown haplotypes of the European hare Lepus europaeus Pallas, 1778 were raised on a farm located in central Italy over 4 years (from 2003 to 2006). The birth date, total number of young born, and number of surviving and weaned leverets were recorded for each pair. The start of reproduction, birth interval, length of the reproductive season, number of births per pair per year, number of leverets per pair, number of weaned leverets per pair and number of weaned leverets per birth were analysed in relation to the different haplotypes and years; the incidence of superfetation and pseudogestation was also considered. Results showed that the brown hare produced young at the beginning of February, whilst the mountain hare started reproduction significantly later. Brown hares showed a longer reproductive period than mountain hares (192 days vs. 156 days) and a higher productivity. The most frequent gestation length was 37-41 days. The distribution of delivery intervals did not differ between the two haplotypes. [Abstract in Italian] Reproductive characteristics of two haplotypes of the hare (Lepus europaeus Pallas 1778). Hares (Lepus europaeus Pallas 1778) belonging to the mountain and brown haplotypes were monitored for 4 years (from 2003 to 2006) on the same farm, located in an area of central Italy. For each breeding pair (N = 402), data were collected on: date of birth, total number born, total number born alive and number of weaned leverets. The start of the reproductive period, the interval between births, the length of gestation, the length of the reproductive season, the number of births per pair per year, the number born per pair, the number weaned per pair and the number weaned per birth were analysed in relation to the

  13. Quantitative analysis of length-diameter distribution and cross-sectional properties of fibers from three-dimensional tomographic images

    DEFF Research Database (Denmark)

    Miettinen, Arttu; Joffe, Roberts; Madsen, Bo

    2013-01-01

    A number of rule-of-mixture micromechanical models have been successfully used to predict the mechanical properties of short fiber composites. However, in order to obtain accurate predictions, a detailed description of the internal structure of the material is required. This information is often obtained from optical microscopy of polished cross-sections of a composite. This approach gives accurate yet local results, but a rather large number of optical images have to be processed to achieve a representative description of the morphology of the material. In this work a fully automatic algorithm for estimating the length-diameter distribution of solid or hollow fibers, utilizing three-dimensional X-ray tomographic images, is presented. The method is based on a granulometric approach for fiber length distribution measurement, combined with a novel algorithm that relates cross-sectional fiber properties...

  14. Dynamic Allan Variance Analysis Method with Time-Variant Window Length Based on Fuzzy Control

    Directory of Open Access Journals (Sweden)

    Shanshan Gu

    2015-01-01

    Full Text Available To solve the problem that the dynamic Allan variance (DAVAR) with a fixed window length cannot meet the identification accuracy requirements for a fiber optic gyro (FOG) signal over all time domains, a dynamic Allan variance analysis method with a time-variant window length based on fuzzy control is proposed. According to the characteristics of the FOG signal, a fuzzy controller with the first and second derivatives of the FOG signal as inputs is designed to estimate the window length of the DAVAR. The Allan variances of the signals within the time-variant window are then computed to obtain the DAVAR of the FOG signal and describe the dynamic characteristics of the time-varying FOG signal. Additionally, a performance evaluation index for the algorithm based on a radar chart is proposed. Experimental results show that, compared with DAVAR methods using different fixed window lengths, the change of the FOG signal with time can be identified effectively, and the performance evaluation index is enhanced by at least 30% by the DAVAR method with a time-variant window length based on fuzzy control.
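As background for the method described above, here is a minimal sketch of the Allan variance evaluated inside a window slid along the signal. The fuzzy controller that adapts the window length is not reproduced; the window here is fixed and the signal is synthetic:

```python
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance of a rate signal y at cluster size m:
    half the mean squared difference between adjacent block averages."""
    K = len(y) // m
    ybar = y[: K * m].reshape(K, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(ybar) ** 2)

def davar(y, centers, window, m):
    """Dynamic Allan variance: evaluate the Allan variance inside a window
    around each chosen time index (one value per center)."""
    half = window // 2
    return [allan_variance(y[max(0, c - half): c + half], m) for c in centers]

rng = np.random.default_rng(1)
y = rng.standard_normal(4000)   # white-noise-like gyro rate signal, sigma = 1
track = davar(y, centers=[500, 1500, 2500, 3500], window=1000, m=10)
# for white noise the Allan variance scales as sigma^2 / m,
# so each tracked value sits near 0.1 here
```

The adaptive scheme in the paper would shrink `window` where the signal's derivatives indicate fast dynamics and grow it where the signal is quiet.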

  15. Proximate Composition, Nutritional Attributes and Mineral Composition of Peperomia pellucida L. (Ketumpangan Air) Grown in Malaysia

    Directory of Open Access Journals (Sweden)

    Maznah Ismail

    2012-09-01

    Full Text Available This study presents the proximate and mineral composition of Peperomia pellucida L., an underexploited weed plant in Malaysia. Proximate analysis was performed using standard AOAC methods and mineral contents were determined using atomic absorption spectrometry. The results indicated Peperomia pellucida to be rich in crude protein, carbohydrate and total ash contents. The high amount of total ash (31.22%) suggests a high-value mineral composition comprising potassium, calcium and iron as the main elements. The present study inferred that Peperomia pellucida would serve as a good source of protein and energy as well as micronutrients in the form of a leafy vegetable for human consumption.

  16. Automatic Determination of Fiber-Length Distribution in Composite Material Using 3D CT Data

    Directory of Open Access Journals (Sweden)

    Günther Greiner

    2010-01-01

    Full Text Available Determining fiber length distribution in fiber reinforced polymer components is a crucial step in quality assurance, since fiber length has a strong influence on overall strength, stiffness, and stability of the material. The approximate fiber length distribution is usually determined early in the development process, as conventional methods require destruction of the sample component. In this paper, a novel, automatic, and nondestructive approach for the determination of fiber length distribution in fiber reinforced polymers is presented. For this purpose, high-resolution computed tomography is used as the imaging method, together with subsequent image analysis for evaluation. The image analysis consists of an iterative process in which single fibers are detected automatically in each iteration step after image enhancement algorithms have been applied. Subsequently, a model-based approach is used together with a priori information in order to guide a fiber tracing and segmentation process. Thereby, the length of the segmented fibers can be calculated and a length distribution can be deduced. The performance and the robustness of the segmentation method are demonstrated by applying it to artificially generated test data and selected real components.

  17. Evaluation of Antioxidant Activities of Aqueous Extracts and Fractionation of Different Parts of Elsholtzia ciliata

    Directory of Open Access Journals (Sweden)

    Yuangang Zu

    2012-05-01

    Full Text Available The aim of this study was to investigate the antioxidant and free-radical scavenging activity of extract and fractions from various parts of Elsholtzia ciliata. The inflorescences, leaves, stems and roots of E. ciliata were extracted separately, and two phenolic component enrichment methods, ethyl acetate-water liquid-liquid extraction and macroporous resin adsorption-desorption, were adopted in this study. The antioxidant activities of water extracts and fractions of E. ciliata were examined using different assay model systems in vitro. The fraction root E (purified by HPD300 macroporous resin) exhibited the highest total phenolics content (497.2 ± 24.9 mg GAE/g), accompanied by the highest antioxidant activity against various antioxidant systems in vitro compared to other fractions. On the basis of the results obtained, E. ciliata extracts can potentially be used as a readily accessible and valuable bioactive source of natural antioxidants.

  18. Acceleration of the direct reconstruction of linear parametric images using nested algorithms

    International Nuclear Information System (INIS)

    Wang Guobao; Qi Jinyi

    2010-01-01

    Parametric imaging using dynamic positron emission tomography (PET) provides important information for biological research and clinical diagnosis. Indirect and direct methods have been developed for reconstructing linear parametric images from dynamic PET data. Indirect methods are relatively simple and easy to implement because the image reconstruction and kinetic modeling are performed in two separate steps. Direct methods estimate parametric images directly from raw PET data and are statistically more efficient. However, the convergence rate of direct algorithms can be slow due to the coupling between the reconstruction and kinetic modeling. Here we present two fast gradient-type algorithms for direct reconstruction of linear parametric images. The new algorithms decouple the reconstruction and linear parametric modeling at each iteration by employing the principle of optimization transfer. Convergence speed is accelerated by running more sub-iterations of linear parametric estimation because the computation cost of the linear parametric modeling is much less than that of the image reconstruction. Computer simulation studies demonstrated that the new algorithms converge much faster than the traditional expectation maximization (EM) and the preconditioned conjugate gradient algorithms for dynamic PET.

  19. A new modified artificial bee colony algorithm for the economic dispatch problem

    International Nuclear Information System (INIS)

    Secui, Dinu Calin

    2015-01-01

    Highlights: • A new modified ABC algorithm (MABC) is proposed to solve the EcD/EmD problem. • Valve-point effects, ramp-rate limits, POZ, transmission losses were considered. • The algorithm is tested on four systems having 6, 13, 40 and 52 thermal units. • MABC algorithm outperforms several optimization techniques. - Abstract: In this paper a new modified artificial bee colony algorithm (MABC) is proposed to solve the economic dispatch problem by taking into account the valve-point effects, the emission pollutions and various operating constraints of the generating units. The MABC algorithm introduces a new relation to update the solutions within the search space, in order to increase the algorithm ability to avoid premature convergence and to find stable and high quality solutions. Moreover, to strengthen the MABC algorithm performance, it is endowed with a chaotic sequence generated by both a cat map and a logistic map. The MABC algorithm behavior is investigated for several combinations resulting from three generating modalities of the chaotic sequences and two selection schemes of the solutions. The performance of the MABC variants is tested on four systems having six units, thirteen units, forty units and fifty-two thermal generating units. The comparison of the results shows that the MABC variants have a better performance than the classical ABC algorithm and other optimization techniques
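The chaotic-sequence idea mentioned above can be illustrated with the logistic map. The sketch below generates a chaotic sequence and uses it in an ABC-style candidate update; the update relation and the mapping of the chaotic value to [-1, 1] are illustrative stand-ins, not the paper's exact modified relation (which also uses a cat map and different selection schemes).

```python
def logistic_sequence(n, x0=0.7, mu=4.0):
    """Chaotic sequence from the logistic map x_{k+1} = mu * x_k * (1 - x_k);
    mu = 4 keeps the orbit chaotic in (0, 1) for generic seeds."""
    seq, x = [], x0
    for _ in range(n):
        x = mu * x * (1.0 - x)
        seq.append(x)
    return seq

def chaotic_candidate(x_i, x_k):
    """ABC-style candidate solution v = x_i + phi * (x_i - x_k), with phi
    drawn from the chaotic sequence mapped to [-1, 1] instead of a uniform
    random number (illustrative version of the chaotic modification)."""
    phi = [2.0 * c - 1.0 for c in logistic_sequence(len(x_i))]
    return [a + p * (a - b) for a, p, b in zip(x_i, phi, x_k)]
```

Replacing the uniform random factor with a chaotic one is the standard way such hybrids try to avoid premature convergence: the sequence is deterministic but non-repeating, so the search keeps probing new directions.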

  20. False-nearest-neighbors algorithm and noise-corrupted time series

    International Nuclear Information System (INIS)

    Rhodes, C.; Morari, M.

    1997-01-01

    The false-nearest-neighbors (FNN) algorithm was originally developed to determine the embedding dimension for autonomous time series. For noise-free computer-generated time series, the algorithm does a good job in predicting the embedding dimension. However, the problem of predicting the embedding dimension when the time-series data are corrupted by noise was not fully examined in the original studies of the FNN algorithm. Here it is shown that with large data sets, even small amounts of noise can lead to incorrect prediction of the embedding dimension. Surprisingly, as the length of the time series analyzed by FNN grows larger, the cause of incorrect prediction becomes more pronounced. An analysis of the effect of noise on the FNN algorithm and a solution for dealing with the effects of noise are given here. Some results on the theoretically correct choice of the FNN threshold are also presented. copyright 1997 The American Physical Society
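The FNN criterion discussed above can be sketched in a brute-force form: embed the series with delay vectors, find each point's nearest neighbour at dimension d, and count it as false when the extra coordinate at dimension d+1 inflates the distance beyond a tolerance. This is the basic noise-free algorithm only; the noise analysis and threshold corrections studied in the paper are not included.

```python
import numpy as np

def false_nearest_neighbors(x, dim, tau=1, rtol=15.0):
    """Fraction of false nearest neighbours at embedding dimension `dim`:
    a neighbour is false when the extra coordinate at dimension dim+1
    grows the distance by more than `rtol` times the dim-D distance."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (dim + 1) * tau
    emb = np.array([x[i:i + dim * tau:tau] for i in range(n)])  # delay vectors
    false_count = 0
    for i in range(n):
        d = np.linalg.norm(emb - emb[i], axis=1)
        d[i] = np.inf                     # exclude the point itself
        j = int(np.argmin(d))             # nearest neighbour in dim dimensions
        extra = abs(x[i + dim * tau] - x[j + dim * tau])
        if extra > rtol * max(d[j], 1e-12):
            false_count += 1
    return false_count / n
```

For a clean sine wave the false-neighbour fraction should drop sharply once the embedding dimension reaches two, since a one-dimensional embedding cannot distinguish the rising and falling branches.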

  1. A parallel graded-mesh FDTD algorithm for human-antenna interaction problems.

    Science.gov (United States)

    Catarinucci, Luca; Tarricone, Luciano

    2009-01-01

    The finite difference time domain method (FDTD) is frequently used for the numerical solution of a wide variety of electromagnetic (EM) problems and, among them, those concerning human exposure to EM fields. In many practical cases related to the assessment of occupational EM exposure, large simulation domains are modeled and high space resolution adopted, so that strong memory and central processing unit power requirements have to be satisfied. To better afford the computational effort, the use of parallel computing is a winning approach; alternatively, subgridding techniques are often implemented. However, the simultaneous use of subgridding schemes and parallel algorithms is very new. In this paper, an easy-to-implement and highly-efficient parallel graded-mesh (GM) FDTD scheme is proposed and applied to human-antenna interaction problems, demonstrating its appropriateness in dealing with complex occupational tasks and showing its capability to guarantee the advantages of a traditional subgridding technique without affecting the parallel FDTD performance.

  2. The Forward-Reverse Algorithm for Stochastic Reaction Networks

    KAUST Repository

    Bayer, Christian

    2015-01-07

    In this work, we present an extension of the forward-reverse algorithm by Bayer and Schoenmakers [2] to the context of stochastic reaction networks (SRNs). We then apply this bridge-generation technique to the statistical inference problem of approximating the reaction coefficients based on discretely observed data. To this end, we introduce a two-phase iterative inference method in which we solve a set of deterministic optimization problems where the SRNs are replaced by the classical ODE rates; then, during the second phase, the Monte Carlo version of the EM algorithm is applied starting from the output of the previous phase. Starting from a set of over-dispersed seeds, the output of our two-phase method is a cluster of maximum likelihood estimates obtained by using convergence assessment techniques from the theory of Markov chain Monte Carlo.

  3. Genomic multiple sequence alignments: refinement using a genetic algorithm

    Directory of Open Access Journals (Sweden)

    Lefkowitz Elliot J

    2005-08-01

    Full Text Available Abstract Background Genomic sequence data cannot be fully appreciated in isolation. Comparative genomics – the practice of comparing genomic sequences from different species – plays an increasingly important role in understanding the genotypic differences between species that result in phenotypic differences as well as in revealing patterns of evolutionary relationships. One of the major challenges in comparative genomics is producing a high-quality alignment between two or more related genomic sequences. In recent years, a number of tools have been developed for aligning large genomic sequences. Most utilize heuristic strategies to identify a series of strong sequence similarities, which are then used as anchors to align the regions between the anchor points. The resulting alignment is globally correct, but in many cases is suboptimal locally. We describe a new program, GenAlignRefine, which improves the overall quality of global multiple alignments by using a genetic algorithm to improve local regions of alignment. Regions of low quality are identified, realigned using the program T-Coffee, and then refined using a genetic algorithm. Because a better COFFEE (Consistency based Objective Function For alignmEnt Evaluation) score generally reflects greater alignment quality, the algorithm searches for an alignment that yields a better COFFEE score. To address the intrinsic slowness of the genetic algorithm, GenAlignRefine was implemented as a parallel, cluster-based program. Results We tested the GenAlignRefine algorithm by running it on a Linux cluster to refine sequences from a simulation, as well as to refine a multiple alignment of 15 Orthopoxvirus genomic sequences approximately 260,000 nucleotides in length that initially had been aligned by Multi-LAGAN. It took approximately 150 minutes for a 40-processor Linux cluster to optimize some 200 fuzzy (poorly aligned) regions of the orthopoxvirus alignment. Overall sequence identity increased only

  4. Reactive power dispatch considering voltage stability with seeker optimization algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Dai, Chaohua; Chen, Weirong; Zhang, Xuexia [The School of Electrical Engineering, Southwest Jiaotong University, Chengdu 610031 (China); Zhu, Yunfang [Department of Computer and Communication Engineering, E'mei Campus, Southwest Jiaotong University, E'mei 614202 (China)

    2009-10-15

    Optimal reactive power dispatch (ORPD) has a growing impact on secure and economical operation of power systems. This issue is well known as a non-linear, multi-modal and multi-objective optimization problem where global optimization techniques are required in order to avoid local minima. In the last decades, computational intelligence-based techniques such as genetic algorithms (GAs), differential evolution (DE) algorithms and particle swarm optimization (PSO) algorithms have often been used for this aim. In this work, a seeker optimization algorithm (SOA) based method is proposed for ORPD considering static voltage stability and voltage deviation. The SOA is based on the concept of simulating the act of human searching, where the search direction is based on the empirical gradient obtained by evaluating the response to position changes, and the step length is based on uncertainty reasoning using a simple fuzzy rule. The algorithm's performance is studied with comparisons against two versions of GAs, three versions of DE algorithms and four versions of PSO algorithms on the IEEE 57- and 118-bus power systems. The simulation results show that the proposed approach performed better than the other listed algorithms and can be efficiently used for the ORPD problem. (author)

  5. Construction of Short-length High-rates Ldpc Codes Using Difference Families

    OpenAIRE

    Deny Hamdani; Ery Safrianti

    2007-01-01

    Low-density parity-check (LDPC) code is a linear block error-correcting code defined by a sparse parity-check matrix. It is decoded using the message-passing algorithm and is, in many cases, capable of outperforming turbo codes. This paper presents a class of low-density parity-check (LDPC) codes showing good performance with low encoding complexity. The code is constructed using difference families from combinatorial design. The resulting code, which is designed to have short code length and high code rate...
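The difference-family construction mentioned above can be illustrated as follows: each base block over Z_v yields a v × v circulant with ones at the block's cyclic shifts, and the circulants are stacked side by side to form the parity-check matrix. This is a generic illustration of the idea (using the planar difference set {0, 1, 3} in Z_7), not the specific code design of the paper.

```python
import numpy as np

def circulant_ldpc(base_blocks, v):
    """Parity-check matrix built from difference-family base blocks over Z_v:
    each block gives a v x v circulant whose row `s` has ones at positions
    (b + s) mod v for every b in the block; circulants are concatenated."""
    circulants = []
    for block in base_blocks:
        C = np.zeros((v, v), dtype=int)
        for shift in range(v):
            for b in block:
                C[shift, (b + shift) % v] = 1
        circulants.append(C)
    return np.hstack(circulants)
```

Because every row of a circulant is a cyclic shift of the base block, the resulting matrix is regular: each row and each column of a single circulant has exactly as many ones as the block has elements, which is the sparsity LDPC decoding relies on.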

  6. Control of the coypu (Myocastor coypus) by cage-trapping in the cultivated plain of northern Italy

    Directory of Open Access Journals (Sweden)

    Claudio Prigioni

    2006-03-01

    Full Text Available Abstract Between November 2002 and March 2003, thirty-five trapping sessions, carried out along 1.5-9 m wide irrigation canals scattered across six provinces of the Lombardy region (northern Italy), allowed us to test the effectiveness of coypu (Myocastor coypus) control operations in the central part of the intensively cultivated plain of the River Po. A total of 1534 coypus were captured, with a trapping success of 0.087 removed coypus/trap-day. Trapping sessions of about 33 consecutive days guaranteed the best cost/benefit ratio. Only a few trapping sessions determined a significant decrease in local population size. In most of the trapping sites, removal of coypus probably enhanced immigration of animals from neighbouring areas. Among captured coypus, the sex ratio was not significantly biased. The young/adult ratio (mean value = 0.33) significantly decreased in February and March 2003 with respect to previous months. Overall, 11.6% of trapped females were pregnant. Adult coypus were sexually dimorphic for head-body length, tail length and weight, with higher values in males, while young coypus did not show any significant variation between sexes. Some implications for coypu management are also discussed.

  7. Numerical algorithms for intragranular diffusional fission gas release incorporated in the Transuranus code

    International Nuclear Information System (INIS)

    Lassmann, K.

    2002-01-01

    Complicated physical processes govern diffusional fission gas release in nuclear fuels. In addition to the physical problem there exists a numerical problem, as some solutions of the underlying diffusion equation contain numerical errors that by far exceed the physical details. In this paper the two algorithms incorporated in the TRANSURANUS code, the URGAS and the new FORMAS algorithm are compared. The previously reported deficiency of the most elegant and mathematically sound FORMAS algorithm at low release could be overcome. Both algorithms are simple, fast, without numerical problems, insensitive to time step lengths and well balanced over the entire range of fission gas release. They can be made available on request as FORTRAN subroutines. (author)

  8. Foot length measurements of newborns of high and low risk pregnancies.

    Science.gov (United States)

    Salge, Ana Karina Marques; Rocha, Érika Lopes; Gaíva, Maria Aparecida Munhoz; Castral, Thaíla Correa; Guimarães, Janaína Valadares; Xavier, Raphaela Maioni

    2017-03-09

    Comparing foot length measurements of newborns in high- and low-risk pregnancies at a public hospital in Goiânia, GO, Brazil. A cross-sectional study carried out between April 2013 and May 2015, with a sample consisting of 180 newborns: 106 infants of women with high-risk pregnancies and 74 of women with low-risk pregnancies. Data were analyzed descriptively. Foot length was measured using a rigid transparent plastic ruler, graduated in millimeters; the length of both feet was measured from the tip of the hallux (big toe) to the end of the heel. A statistically significant relationship was found between foot length and the newborn's weight, and between foot length and the cephalic and thoracic perimeters in the high-risk group and the cephalic perimeter in the control group. There is a need to create cut-off points to identify newborns with intrauterine growth disorders using foot length.

  9. Improving the quantum cost of reversible Boolean functions using reorder algorithm

    Science.gov (United States)

    Ahmed, Taghreed; Younes, Ahmed; Elsayed, Ashraf

    2018-05-01

    This paper introduces a novel algorithm to synthesize low-cost reversible circuits for any Boolean function with n inputs represented as a Positive Polarity Reed-Muller expansion. The proposed algorithm applies predefined rules to reorder the terms in the function so as to minimize the repeated calculation of common parts of the Boolean function and thereby decrease the quantum cost of the reversible circuit. The paper achieves a decrease in the quantum cost and/or the circuit length, on average, when compared with relevant work in the literature.
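The reordering idea above can be sketched with a simple greedy heuristic: start from the smallest term and repeatedly pick the remaining term that shares the most literals with the previous one, so common sub-products can be reused between adjacent terms. The rule set below is a plain illustration of the principle, not the paper's predefined rules.

```python
def reorder_terms(terms):
    """Greedy reorder of Reed-Muller product terms (each term a set of
    literals): place terms so that consecutive terms share as many literals
    as possible, letting common sub-circuits be computed once."""
    terms = [frozenset(t) for t in terms]
    order = [min(terms, key=len)]          # start from the smallest term
    remaining = terms.copy()
    remaining.remove(order[0])
    while remaining:
        prev = order[-1]
        # pick the term with maximal literal overlap with the previous one
        nxt = max(remaining, key=lambda t: len(t & prev))
        order.append(nxt)
        remaining.remove(nxt)
    return [set(t) for t in order]
```

For example, the terms {a·b, a, b·c, a·b·c} reorder into a chain a → a·b → a·b·c → b·c, where each step adds or removes few literals relative to its neighbour.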

  10. Reducing a congestion with introduce the greedy algorithm on traffic light control

    Science.gov (United States)

    Catur Siswipraptini, Puji; Hendro Martono, Wisnu; Hartanti, Dian

    2018-03-01

    The density of vehicles causes congestion at every junction in the city of Jakarta because the traffic light timing system is static (manual); consequently, the queue length at each junction is unpredictable. This research aimed at designing a sensor-based traffic system that detects the vehicle queue length in order to optimize the duration of the green light. Infrared sensors placed along each intersection approach detect the queue length, and a greedy algorithm is then applied to extend the green light duration for the approach that requires it. The traffic light control program based on the greedy algorithm is stored on a microcontroller of the Arduino Mega 2560 type. The developed system, implementing the greedy algorithm with the help of the infrared sensors, extends the green light duration for long vehicle queues and shortens it at intersections whose queues are not too dense. A scale model (simple simulator) reproducing the actual intersection was built and tested; the infrared sensors serving as queue detectors were placed at 10 cm intervals along each approach. Tests on the scale model showed that longer queues obtained longer green light times, thereby mitigating long vehicle queues. The greedy algorithm adds 2 seconds of green light to approaches whose queues reach at least the third sensor level and accelerates the cycle at other intersections whose queue sensor levels are below level three.
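The greedy rule described above can be sketched in a few lines: serve the approach with the longest detected queue, and extend its green phase by the 2-second bonus once the queue reaches the third sensor level. The base duration and function name are illustrative assumptions; the 2-second bonus and three-level threshold come from the text.

```python
def next_green(queues, base=10, bonus=2, threshold=3):
    """Greedy traffic-light step: `queues[i]` is the sensor level (number of
    tripped infrared sensors) on approach i. Returns the approach to serve
    and its green duration in seconds."""
    lane = max(range(len(queues)), key=lambda i: queues[i])
    duration = base + (bonus if queues[lane] >= threshold else 0)
    return lane, duration
```

For instance, with sensor levels [1, 3, 2] the controller serves approach 1 for base + 2 seconds, while with [1, 2, 0] approach 1 gets only the base duration.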

  11. Walking pattern classification and walking distance estimation algorithms using gait phase information.

    Science.gov (United States)

    Wang, Jeen-Shing; Lin, Che-Wei; Yang, Ya-Ting C; Ho, Yu-Jen

    2012-10-01

    This paper presents a walking pattern classification and a walking distance estimation algorithm using gait phase information. A gait phase information retrieval algorithm was developed to analyze the duration of the phases in a gait cycle (i.e., stance, push-off, swing, and heel-strike phases). Based on the gait phase information, a decision tree based on the relations between gait phases was constructed for classifying three different walking patterns (level walking, walking upstairs, and walking downstairs). Gait phase information was also used for developing a walking distance estimation algorithm. The walking distance estimation algorithm consists of the processes of step count and step length estimation. The proposed walking pattern classification and walking distance estimation algorithm have been validated by a series of experiments. The accuracy of the proposed walking pattern classification was 98.87%, 95.45%, and 95.00% for level walking, walking upstairs, and walking downstairs, respectively. The accuracy of the proposed walking distance estimation algorithm was 96.42% over a walking distance.
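The two stages described above (pattern classification from gait-phase durations, then distance as step count times step length) can be sketched as follows. The decision thresholds and the linear step-length model are made-up illustrative values, not the trained parameters from the paper.

```python
def classify_walk(stance, push_off, swing, heel_strike):
    """Toy decision tree over gait-phase durations (seconds). Illustrative
    thresholds: climbing stairs tends to lengthen the push-off phase,
    descending tends to lengthen the swing phase."""
    total = stance + push_off + swing + heel_strike
    if push_off / total > 0.30:
        return "upstairs"
    if swing / total > 0.45:
        return "downstairs"
    return "level"

def walking_distance(step_durations, k=0.4, b=0.25):
    """Distance = step count x per-step length, with step length modelled
    here as a linear function of step duration (k, b are assumed values)."""
    return sum(k * t + b for t in step_durations)
```

The point of the decision tree is that the relative durations of the four phases shift systematically between level walking and stair walking, so a handful of ratio tests suffices once the phase boundaries have been detected.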

  12. A fast exact sequential algorithm for the partial digest problem.

    Science.gov (United States)

    Abbas, Mostafa M; Bahig, Hazem M

    2016-12-22

    Restriction site analysis involves determining the locations of restriction sites after the process of digestion by reconstructing their positions based on the lengths of the cut DNA. Using different reaction times with a single enzyme to cut DNA is a technique known as a partial digestion. Determining the exact locations of restriction sites following a partial digestion is challenging due to the computational time required even with the best known practical algorithm. In this paper, we introduce an efficient algorithm to find the exact solution for the partial digest problem. The algorithm is able to find all possible solutions for the input and works by traversing the solution tree with a breadth-first search in two stages and deleting all repeated subproblems. Two types of simulated data, random and Zhang, are used to measure the efficiency of the algorithm. We also apply the algorithm to real data for the Luciferase gene and the E. coli K12 genome. Our algorithm is a fast tool to find the exact solution for the partial digest problem. The percentage of improvement is more than 75% over the best known practical algorithm for the worst case. For large numbers of inputs, our algorithm is able to solve the problem in a suitable time, while the best known practical algorithm is unable.
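For reference, the classical exact solver for the partial digest problem is the depth-first backtracking algorithm (Skiena's place/remove scheme), sketched below; the paper's contribution is a breadth-first, two-stage traversal with duplicate-subproblem pruning, which this sketch does not reproduce.

```python
from collections import Counter

def partial_digest(lengths):
    """All point sets whose pairwise distance multiset equals `lengths`.
    Classical backtracking: repeatedly place the largest unexplained
    distance y either at position y or at width - y."""
    dists = Counter(lengths)
    width = max(lengths)
    dists[width] -= 1                      # the two endpoints explain `width`
    solutions = set()

    def place(points):
        if sum(dists.values()) == 0:
            solutions.add(tuple(sorted(points)))
            return
        y = max(d for d, c in dists.items() if c > 0)
        for cand in {y, width - y}:
            need = Counter(abs(cand - p) for p in points)
            if all(dists[d] >= c for d, c in need.items()):
                dists.subtract(need)       # consume the explained distances
                place(points + [cand])
                dists.update(need)         # backtrack

    place([0, width])
    return [list(s) for s in sorted(solutions)]
```

For the distance multiset of the points {0, 2, 4, 7, 10} the solver recovers both that set and its mirror image {0, 3, 6, 8, 10}, which share the same pairwise distances.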

  13. Momentos em freios e em embraiagens

    OpenAIRE

    Mimoso, Rui Miguel Pereira

    2011-01-01

    Dissertation for the degree of Master in the Integrated Master's programme in Mechanical Engineering. This dissertation gathers the calculation models used to determine the torques in brakes and clutches. The work considers brakes and clutches with dry friction and with viscous friction. For viscous-friction brakes, cases are considered in which the characteristics of the fluids are not induced, and others in which modifications to those characteristics are induced. ...

  14. A constrained optimization algorithm for total energy minimization in electronic structure calculations

    International Nuclear Information System (INIS)

    Yang Chao; Meza, Juan C.; Wang Linwang

    2006-01-01

    A new direct constrained optimization algorithm for minimizing the Kohn-Sham (KS) total energy functional is presented in this paper. The key ingredients of this algorithm involve projecting the total energy functional into a sequence of subspaces of small dimensions and seeking the minimizer of total energy functional within each subspace. The minimizer of a subspace energy functional not only provides a search direction along which the KS total energy functional decreases but also gives an optimal 'step-length' to move along this search direction. Numerical examples are provided to demonstrate that this new direct constrained optimization algorithm can be more efficient than the self-consistent field (SCF) iteration

  15. Utilização do tendão do músculo palmar longo em procedimentos cirúrgicos: estudo em cadáveres Use of the tendon of the palmaris longus muscle in surgical procedures: study on cadavers

    Directory of Open Access Journals (Sweden)

    Luiz Carlos Angelini Júnior

    2012-01-01

    Full Text Available OBJECTIVE: To demonstrate that the length and width of the tendon of the palmaris longus muscle can be estimated before using it as a graft in a surgical procedure. METHODS: Sixty forearms of 30 cadavers of black ethnicity were examined; the length and width of the tendon of the palmaris longus muscle were measured and compared with the length of the forearm. RESULTS: Unilateral right absence of the tendon was noted in two female cadavers. The mean length and width were 11.9 ± 15.2 mm and 4.1 ± 1.5 mm, respectively. The mean forearm length was 275.4 ± 17.9 mm. CONCLUSION: There is a significant relationship between the length of the tendon and the length of the forearm, so the size of the tendon of the palmaris longus muscle can be estimated when it is necessary to use it for grafts. Level of Evidence IV, Case series.

  16. Parasitic zoonoses: survey in foxes (Vulpes vulpes) in the northern Apennines / Zoonosi parassitarie: indagini in volpi (Vulpes vulpes) dell'Appennino settentrionale

    Directory of Open Access Journals (Sweden)

    Vittorio Guberti

    1991-07-01

    Full Text Available Abstract A parasitological survey of 153 foxes was carried out in the northern Apennines during the period 1984-1987. The following parasites were identified: Toxocara canis (46.4%), Taenia sp. (17%), Uncinaria stenocephala (11.8%), Mesocestoides lineatus (11.1%), Ancylostoma caninum (3.9%), Taenia hydatigena (3.3%), Trichuris vulpis (3.3%), Dipylidium caninum (2.6%), Taenia crassiceps (2%). All foxes were negative for Trichinella sp. A statistical analysis was performed to evaluate differences in the parasitic fauna according to the sex and age classes of the hosts. The role that the fox could have as a reservoir of helminthic zoonoses is discussed. The results are compared with those of similar studies carried out in Italy.

  17. Channel Parameter Estimation for Scatter Cluster Model Using Modified MUSIC Algorithm

    Directory of Open Access Journals (Sweden)

    Jinsheng Yang

    2012-01-01

    Full Text Available Recently, scatter cluster models that precisely evaluate the performance of wireless communication systems have been proposed in the literature. However, the conventional SAGE algorithm does not work for these scatter cluster-based models because it performs poorly when the transmitted signals are highly correlated. In this paper, we estimate the time of arrival (TOA), the direction of arrival (DOA), and the Doppler frequency for the scatter cluster model with a modified multiple signal classification (MUSIC) algorithm. Using the space-time characteristics of the multiray channel, the proposed algorithm combines temporal filtering techniques and spatial smoothing techniques to isolate and estimate the incoming rays. The simulation results indicate that the proposed algorithm has lower complexity and is less time-consuming in dense multipath environments than the SAGE algorithm. Furthermore, the estimation performance improves with the number of receive-array elements and the sample length. Thus, the channel parameter estimation problem of the scatter cluster model can be effectively addressed with the proposed modified MUSIC algorithm.
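The subspace step at the heart of MUSIC can be sketched for a uniform linear array as follows. This is the basic narrowband MUSIC spectrum only; the temporal filtering and spatial smoothing that make up the paper's modification are omitted, and the array geometry (half-wavelength ULA) is an assumption for illustration.

```python
import numpy as np

def music_doa(X, n_sources, d=0.5):
    """Narrowband MUSIC pseudo-spectrum for a uniform linear array with
    element spacing d (in wavelengths). X is (n_elements, n_snapshots).
    Returns the angle grid (degrees) and the spectrum; DOAs appear as peaks."""
    M, N = X.shape
    R = X @ X.conj().T / N                 # sample covariance matrix
    eigval, eigvec = np.linalg.eigh(R)     # eigenvalues in ascending order
    En = eigvec[:, : M - n_sources]        # noise-subspace eigenvectors
    grid = np.linspace(-90.0, 90.0, 361)
    m = np.arange(M)
    spectrum = []
    for theta in grid:
        a = np.exp(-2j * np.pi * d * m * np.sin(np.radians(theta)))
        spectrum.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return grid, np.array(spectrum)
```

A quick simulation with a single source at 20 degrees and eight elements recovers the angle from the spectrum peak; it is in the correlated-source case, where the covariance becomes rank-deficient, that the smoothing steps of the modified algorithm become necessary.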

  18. Calculation of electromagnetic parameter based on interpolation algorithm

    International Nuclear Information System (INIS)

    Zhang, Wenqiang; Yuan, Liming; Zhang, Deyuan

    2015-01-01

    Wave-absorbing material is an important functional material for electromagnetic protection, and its wave-absorbing characteristics depend on the electromagnetic parameters of the mixed media. In order to accurately predict the electromagnetic parameters of mixed media and facilitate the design of wave-absorbing materials, this paper studied two interpolation methods, Lagrange and Hermite, applied to the measured electromagnetic parameters of paraffin-based mixtures of spherical and flaky carbonyl iron. The results showed that Hermite interpolation is more accurate than Lagrange interpolation, and that the reflectance calculated from the interpolated electromagnetic parameters is, on the whole, consistent with that obtained by experiment. - Highlights: • We use an interpolation algorithm to calculate EM parameters from limited samples. • Interpolation can predict EM parameters well for different added particles. • Hermite interpolation is more accurate than Lagrange interpolation. • Reflection loss calculated from interpolation is consistent with that from experiment
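
    The comparison above can be reproduced on any sampled parameter curve. A minimal sketch (my own illustration, not the paper's code) of global Lagrange interpolation against piecewise cubic Hermite interpolation, which additionally uses derivative information at the nodes:

    ```python
    import numpy as np

    def lagrange_interp(x_nodes, y_nodes, x):
        """Evaluate the Lagrange interpolating polynomial through the nodes at x."""
        x_nodes = np.asarray(x_nodes, dtype=float)
        y_nodes = np.asarray(y_nodes, dtype=float)
        x = np.asarray(x, dtype=float)
        total = np.zeros_like(x)
        for j in range(len(x_nodes)):
            lj = np.ones_like(x)                   # j-th Lagrange basis polynomial
            for m in range(len(x_nodes)):
                if m != j:
                    lj *= (x - x_nodes[m]) / (x_nodes[j] - x_nodes[m])
            total += y_nodes[j] * lj
        return total

    def hermite_interp(x_nodes, y_nodes, dy_nodes, x):
        """Piecewise cubic Hermite interpolation from node values and derivatives."""
        x_nodes = np.asarray(x_nodes, dtype=float)
        x = np.atleast_1d(np.asarray(x, dtype=float))
        out = np.empty_like(x)
        for k, xv in enumerate(x):
            i = int(np.clip(np.searchsorted(x_nodes, xv) - 1, 0, len(x_nodes) - 2))
            h = x_nodes[i + 1] - x_nodes[i]
            t = (xv - x_nodes[i]) / h
            h00 = 2 * t**3 - 3 * t**2 + 1          # Hermite basis functions
            h10 = t**3 - 2 * t**2 + t
            h01 = -2 * t**3 + 3 * t**2
            h11 = t**3 - t**2
            out[k] = (h00 * y_nodes[i] + h * h10 * dy_nodes[i]
                      + h01 * y_nodes[i + 1] + h * h11 * dy_nodes[i + 1])
        return out
    ```

    Both schemes reproduce a smooth test curve exactly when the node data are exact; Hermite's use of derivatives is what gives it the edge on real measured data.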

  19. Vibration reduction of composite plates by piezoelectric patches using a modified artificial bee colony algorithm

    Directory of Open Access Journals (Sweden)

    Hadi Ghashochi-Bargh

    Full Text Available In the current paper, power consumption and vertical displacement optimization of composite plates subject to a step load are carried out with piezoelectric patches using the modified multi-objective Elitist Artificial Bee Colony (E-ABC) algorithm. The motivation behind this concept is to balance the exploration and exploitation capabilities for better convergence to the optimum. In order to reduce the computation time, an elitist strategy is also used in the Artificial Bee Colony algorithm. The patch voltages, plate length/width ratios, ply angles, plate thickness/length ratios, number of layers and edge conditions are chosen as design variables. The formulation is based on the classical laminated plate theory (CLPT) and Hamilton's principle. The performance of the new ABC approach is compared with the PSO algorithm and shows good efficiency. To check validity, the transient responses of isotropic and orthotropic plates are compared with those available in the literature and show good agreement.

  20. EGNAS: an exhaustive DNA sequence design algorithm

    Directory of Open Access Journals (Sweden)

    Kick Alfred

    2012-06-01

    Full Text Available Abstract Background Molecular recognition based on the complementary base pairing of deoxyribonucleic acid (DNA) is the fundamental principle in the fields of genetics, DNA nanotechnology and DNA computing. We present an exhaustive DNA sequence design algorithm that generates sets containing a maximum number of sequences with defined properties. EGNAS (Exhaustive Generation of Nucleic Acid Sequences) offers the possibility of controlling both interstrand and intrastrand properties. The guanine-cytosine content can be adjusted. Sequences can be forced to start and end with guanine or cytosine, an option that reduces the risk of "fraying" of DNA strands. It is possible to limit cross hybridizations of a defined length and to adjust the uniqueness of sequences. Self-complementarity and hairpin structures of a certain length can be avoided. Sequences and subsequences can optionally be forbidden. Furthermore, sequences can be designed to have minimum interactions with predefined strands and neighboring sequences. Results The algorithm is realized in a C++ program. TAG sequences can be generated and combined with primers for single-base extension reactions, which have been described for multiplexed genotyping of single nucleotide polymorphisms. Thereby, possible foldback through intrastrand interaction of TAG-primer pairs can be limited. The design of sequences for specific attachment of molecular constructs to DNA origami is presented. Conclusions We developed a new software tool called EGNAS for the design of unique nucleic acid sequences. The presented exhaustive algorithm generates larger sets of sequences than previous software under equal constraints. EGNAS is freely available for noncommercial use at http://www.chm.tu-dresden.de/pc6/EGNAS.
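
    A rough illustration of the kinds of sequence constraints described above (GC content, G/C ends, limited self-complementarity). The thresholds and helper names below are my assumptions for the sketch, not EGNAS defaults:

    ```python
    COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

    def gc_content(seq):
        """Fraction of G and C bases in the sequence."""
        return (seq.count("G") + seq.count("C")) / len(seq)

    def reverse_complement(seq):
        """Watson-Crick reverse complement."""
        return "".join(COMPLEMENT[b] for b in reversed(seq))

    def longest_cross_hybridization(a, b):
        """Length of the longest stretch of `a` complementary (antiparallel)
        to some stretch of `b`: the longest common substring of `a` and the
        reverse complement of `b` (brute force, fine for short strands)."""
        rb = reverse_complement(b)
        best = 0
        for i in range(len(a)):
            for j in range(len(rb)):
                k = 0
                while i + k < len(a) and j + k < len(rb) and a[i + k] == rb[j + k]:
                    k += 1
                best = max(best, k)
        return best

    def acceptable(seq, gc_lo=0.4, gc_hi=0.6, max_self=4):
        """EGNAS-style filter (illustrative): GC window, G/C ends to limit
        fraying, and bounded self-complementarity to avoid hairpins."""
        return (gc_lo <= gc_content(seq) <= gc_hi
                and seq[0] in "GC" and seq[-1] in "GC"
                and longest_cross_hybridization(seq, seq) <= max_self)
    ```

    A fully self-complementary strand such as `GTACGTAC` is rejected, while a strand with only short complementary stretches passes.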

  1. Evaluation of 3D reconstruction algorithms for a small animal PET camera

    International Nuclear Information System (INIS)

    Johnson, C.A.; Gandler, W.R.; Seidel, J.

    1996-01-01

    The use of paired, opposing position-sensitive phototube scintillation cameras (SCs) operating in coincidence for small animal imaging with positron emitters is currently under study. Because of the low sensitivity of the system, even in 3D mode, and the need to produce images with high resolution, it was postulated that a 3D expectation maximization (EM) reconstruction algorithm might be well suited for this application. We investigated four reconstruction algorithms for the 3D SC PET camera: 2D filtered back-projection (FBP), 2D ordered-subset EM (OSEM), 3D reprojection (3DRP), and 3D OSEM. Noise was assessed for all slices by the coefficient of variation in a simulated uniform cylinder. Resolution was assessed from a simulation of 15 point sources in the warm background of the uniform cylinder. At comparable noise levels, the resolution achieved with OSEM (0.9 mm to 1.2 mm) is significantly better than that obtained with FBP or 3DRP (1.5 mm to 2.0 mm). Images of a rat skull labeled with 18F-fluoride suggest that 3D OSEM can improve the image quality of a small animal PET camera
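
    The EM update underlying this family of reconstructions can be sketched generically (a dense-matrix MLEM loop; my own illustration, not the authors' implementation). OSEM applies the same multiplicative update to subsets of the rays in turn, which accelerates convergence:

    ```python
    import numpy as np

    def mlem(A, y, n_iter=50):
        """Maximum-likelihood EM reconstruction for emission tomography.

        A: (n_rays, n_voxels) system matrix; y: measured counts per ray.
        Update: x <- x / (A^T 1) * A^T (y / (A x)), which preserves positivity.
        """
        n_rays, n_vox = A.shape
        x = np.ones(n_vox)                      # uniform positive start image
        sens = A.sum(axis=0)                    # sensitivity image A^T 1
        for _ in range(n_iter):
            proj = A @ x                        # forward projection
            ratio = np.where(proj > 0, y / proj, 0.0)
            x *= (A.T @ ratio) / np.maximum(sens, 1e-12)   # backproject and scale
        return x
    ```

    On a tiny consistent noiseless system the iteration converges to the true activity distribution.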

  2. Monte Carlo simulation of VHTR particle fuel with chord length sampling

    International Nuclear Information System (INIS)

    Ji, W.; Martin, W. R.

    2007-01-01

    The Very High Temperature Gas-Cooled Reactor (VHTR) poses a problem for neutronic analysis due to the double heterogeneity posed by the particle fuel and either the fuel compacts, in the case of the prismatic block reactor, or the fuel pebbles, in the case of the pebble bed reactor. Direct Monte Carlo simulation has been used in recent years to analyze these VHTR configurations but is computationally challenged when space-dependent phenomena such as depletion or temperature feedback are considered. As an alternative approach, we have considered chord length sampling to reduce the computational burden of the Monte Carlo simulation. We have improved on an existing method called 'limited chord length sampling' and have used it to analyze stochastic media representative of either pebble bed or prismatic VHTR fuel geometries. Based on the assumption that the PDF has an exponential form, a theoretical chord length distribution is derived and shown to be an excellent model for a wide range of packing fractions. This chord length PDF was then used to analyze a stochastic medium constructed using the RSA (Random Sequential Addition) algorithm, and the results were compared to a benchmark Monte Carlo simulation of the actual stochastic geometry. The results are promising and suggest that the theoretical chord length PDF can be used instead of a full Monte Carlo random walk simulation in the stochastic medium, saving orders of magnitude in computational time (and memory demand) to perform the simulation. (authors)
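
    A minimal sketch of the sampling step, assuming the exponential chord-length PDF discussed above. The mean-chord expression uses the standard 4V/S result for a convex body (4r/3 for a sphere) and is my illustration of the usual mean-chord argument, not the authors' derivation:

    ```python
    import math
    import random

    def sample_chord(mean_chord, rng=random):
        """Draw a matrix chord length from the assumed exponential PDF
        p(s) = (1/lam) exp(-s/lam) via inverse-transform sampling."""
        return -mean_chord * math.log(1.0 - rng.random())

    def mean_matrix_chord(radius, packing_fraction):
        """Illustrative mean chord through the matrix between spherical
        kernels: the sphere mean chord 4r/3 scaled by (1 - f) / f."""
        return (4.0 * radius / 3.0) * (1.0 - packing_fraction) / packing_fraction
    ```

    During tracking, the sampled chord replaces explicit geometry lookups: the particle flies a sampled distance through the matrix before the next kernel is placed on the fly.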

  3. <em>In Vivo</em> Histamine Optical Nanosensors

    Directory of Open Access Journals (Sweden)

    Heather A. Clark

    2012-08-01

    Full Text Available In this communication we discuss the development of ionophore-based nanosensors for the detection and monitoring of histamine levels <em>in vivo</em>. This approach is based on the use of an amine-reactive, broad-spectrum ionophore which is capable of recognizing and binding histamine. We pair this ionophore with our already established nanosensor platform, and demonstrate <em>in vitro</em> and <em>in vivo</em> monitoring of histamine levels. This approach enables capturing the rapid kinetics of histamine after injection, which are more difficult to measure with standard approaches such as blood sampling, especially in small research models. Coupling <em>in vivo</em> nanosensors with ionophores such as nonactin provides a way to generate nanosensors for novel targets without the difficult process of designing and synthesizing novel ionophores.

  4. AC-600 reactor reloading pattern optimization by using genetic algorithms

    International Nuclear Information System (INIS)

    Wu Hongchun; Xie Zhongsheng; Yao Dong; Li Dongsheng; Zhang Zongyao

    2000-01-01

    The use of genetic algorithms to optimize the reloading pattern of a nuclear power reactor is proposed, and a new encoding and translating method is given. Optimization results minimizing the core power peak and maximizing the cycle length for both low-leakage and out-in loading patterns of the AC-600 reactor are obtained.

  5. SU-E-J-252: A Motion Algorithm to Extract Physical and Motion Parameters of a Mobile Target in Cone-Beam Computed Tomographic Imaging Retrospective to Image Reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Ali, I; Ahmad, S [University of Oklahoma Health Sciences, Oklahoma City, OK (United States); Alsbou, N [Department of Electrical and Computer Engineering, Ada, OH (United States)

    2014-06-01

    Purpose: A motion algorithm was developed to extract the actual length, CT numbers and motion amplitude of a mobile target imaged with cone-beam CT (CBCT), retrospective to image reconstruction. Methods: The motion model considered a mobile target moving sinusoidally and employed three measurable parameters obtained from CBCT images, the apparent length, CT number level and gradient of the mobile target, to extract information about the actual length and CT number value of the stationary target and the motion amplitude. The algorithm was verified experimentally with a mobile phantom setup that has three targets of different sizes, manufactured from homogeneous tissue-equivalent gel material and embedded in a thorax phantom. The phantom moved sinusoidally in one direction using eight amplitudes (0-20 mm) and a frequency of 15 cycles per minute. The model required imaging parameters such as slice thickness and imaging time. Results: The motion algorithm extracted three unknown parameters, the length of the target, the CT number level and the motion amplitude, for a mobile target retrospective to CBCT image reconstruction. The algorithm relates the three unknown parameters to the measurable apparent length, CT number level and gradient of well-defined mobile targets obtained from CBCT images. The motion model agreed with the measured apparent lengths, which were dependent on the actual length of the target and the motion amplitude. The cumulative CT number of a mobile target was dependent on the CT number level of the stationary target and the motion amplitude. The gradient of the CT distribution of a mobile target is dependent on the stationary CT number level, the actual target length along the direction of motion, and the motion amplitude. Motion frequency and phase did not affect the elongation and CT number distributions of mobile targets when the imaging time included several motion cycles. Conclusion: The motion algorithm developed in this study has potential applications in diagnostic CT imaging and radiotherapy to extract the actual length, CT numbers and motion amplitude of mobile targets.

  6. Morphological and ecological variability in <em>Neomys fodiens</em> and <em>Neomys anomalus</em> in the northern Apennines

    Directory of Open Access Journals (Sweden)

    Dino Scaravelli

    2003-10-01

    Full Text Available The two Italian <em>Neomys</em> species still need clarification in terms of their morphological and ecological characterization. This work considers a sample of both species from forest habitats of the northern Apennines for which the main environmental parameters were identified. The variability of the morphological traits of the two species is described for areas located in the Foreste Casentinesi, Monte Falterona and Campigna National Park in the Tuscan-Romagnol Apennines. The facial mask, the hind foot/tail ratio and cranial characters proved to be reliable discriminating features. On the basis of these identification criteria, body measurements were recorded for the two species and the habitats they use were compared. <em>N. fodiens</em> appears to be the only species in the beech-fir woods and dominant in the chestnut woods, whereas in the alder wood and in open, thermophilous areas only <em>N. anomalus</em> was recorded. Both are absent from shrubby meadows, the spruce wood and the Turkey oak wood. The observed gradients are illustrated. No altitudinal difference emerged in the examined sample, collected at stations between 400 and 1300 m, but both species were recorded most frequently in the belt between 700 and 850 m. In a multivariate analysis with respect to the other species and the environmental variables, a fair correlation with the presence of watercourses of a certain size was always found, which however is significant only for <em>N. fodiens</em>, while the positive correlation of <em>N. anomalus</em> with <em>Apodemus sylvaticus</em> is of interest.

  7. A method for evaluating discoverability and navigability of recommendation algorithms.

    Science.gov (United States)

    Lamprecht, Daniel; Strohmaier, Markus; Helic, Denis

    2017-01-01

    Recommendations are increasingly used to support and enable discovery, browsing, and exploration of items. This is especially true for entertainment platforms such as Netflix or YouTube, where frequently no clear categorization of items exists. Yet, the suitability of a recommendation algorithm to support these use cases cannot be comprehensively evaluated by any recommendation evaluation measure proposed so far. In this paper, we propose a method that expands the repertoire of existing recommendation evaluation techniques with an evaluation of the discoverability and navigability of recommendation algorithms. The proposed method first evaluates discoverability by investigating structural properties of the resulting recommender systems in terms of bow-tie structure and path lengths. Second, it evaluates navigability by simulating three different models of information-seeking scenarios and measuring the success rates. We show the feasibility of our method by applying it to four non-personalized recommendation algorithms on three data sets and also illustrate its applicability to personalized algorithms. Our work expands the arsenal of evaluation techniques for recommendation algorithms, extends from one-click-based evaluation towards multi-click analysis, and presents a general, comprehensive method for evaluating the navigability of arbitrary recommendation algorithms.
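
    Discoverability in the above sense can be probed with a breadth-first search bounded by a click budget over the recommendation graph. A small sketch (my own, with an adjacency-list graph; names are not from the paper):

    ```python
    from collections import deque

    def reachable_within(graph, start, max_clicks):
        """Items discoverable from `start` in at most `max_clicks`
        recommendation clicks (BFS over the recommendation graph).

        graph: dict mapping item -> list of recommended items.
        """
        seen = {start}
        frontier = deque([(start, 0)])
        while frontier:
            node, depth = frontier.popleft()
            if depth == max_clicks:
                continue                      # click budget exhausted on this path
            for nxt in graph.get(node, ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, depth + 1))
        return seen - {start}
    ```

    Averaging `len(reachable_within(...))` over start items gives a simple discoverability score, and comparing budgets of one versus several clicks is exactly the one-click versus multi-click distinction made above.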

  8. Total Path Length and Number of Terminal Nodes for Decision Trees

    KAUST Repository

    Hussain, Shahid

    2014-09-13

    This paper presents a new tool for the study of relationships between total path length (average depth) and number of terminal nodes for decision trees. These relationships are important from the point of view of decision tree optimization, and in this particular case the relationship between the two cost functions is closely related to a space-time trade-off. In addition to an algorithm to compute the relationships, the paper also presents results of experiments with datasets from the UCI ML Repository. These experiments show how the two cost functions behave for a given decision table, and the resulting plots show the Pareto frontier (Pareto set) of optimal points. Furthermore, in some cases this Pareto frontier is a singleton, showing the total optimality of decision trees for the given decision table.
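
    For two cost functions such as total path length and number of terminal nodes, the Pareto frontier of a set of candidate (cost1, cost2) points can be extracted with a simple sort-and-sweep; this sketch is illustrative and is not the paper's algorithm:

    ```python
    def pareto_frontier(points):
        """Non-dominated points for two costs to be minimized, e.g.
        (total path length, number of terminal nodes) of candidate trees."""
        pts = sorted(set(points))              # sort by first cost, then second
        frontier = []
        best_second = float("inf")
        for a, b in pts:
            if b < best_second:                # strictly improves the second cost
                frontier.append((a, b))
                best_second = b
        return frontier
    ```

    A singleton frontier corresponds to the "total optimality" case mentioned above: one tree minimizes both costs simultaneously.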

  9. Inverse Monte Carlo: a unified reconstruction algorithm for SPECT

    International Nuclear Information System (INIS)

    Floyd, C.E.; Coleman, R.E.; Jaszczak, R.J.

    1985-01-01

    Inverse Monte Carlo (IMOC) is presented as a unified reconstruction algorithm for Emission Computed Tomography (ECT), providing simultaneous compensation for scatter, attenuation, and the variation of collimator resolution with depth. The technique of inverse Monte Carlo is used to find an inverse solution to the photon transport equation (an integral equation for the photon flux from a specified source) for a parameterized source and specific boundary conditions. The system of linear equations so formed is solved to yield the source activity distribution for a set of acquired projections. For the studies presented here, the equations are solved using the EM (maximum likelihood) algorithm, although other solution algorithms, such as least squares, could be employed. While the present results specifically consider the reconstruction of camera-based Single Photon Emission Computed Tomographic (SPECT) images, the technique is equally valid for Positron Emission Tomography (PET) if a Monte Carlo model of such a system is used. As a preliminary evaluation, experimentally acquired SPECT phantom studies for imaging Tc-99m (140 keV) are presented, demonstrating quantitative compensation for scatter and attenuation for a two-dimensional (single-slice) reconstruction. The algorithm may be expanded in a straightforward manner to full three-dimensional reconstruction, including compensation for out-of-plane scatter

  10. Performance of Energy Multiplier Module (EM2) with long-burn thorium fuel cycle

    International Nuclear Information System (INIS)

    Choi, Hangbok; Schleicher, Robert; Gupta, Puja

    2015-01-01

    Energy Multiplier Module (EM²) is a helium-cooled fast reactor being developed by General Atomics for the 21st-century grid. It is designed as a modular plant with a net electric output of 265 MWe with an evaporative heat sink and 240 MWe with an air-cooled heat sink. EM² core performance is examined for the baseline loading of low-enriched uranium (LEU) as fissile material with depleted uranium (DU) as fertile material and compared to an alternate LEU-with-thorium loading. The latter has two options: a heterogeneous loading of thorium fuel in place of DU that produces a longer fuel cycle, and a homogeneously mixed thorium-uranium fuel loading. Compared to the baseline LEU/DU core, the cycle length of both thorium options is reduced due to higher neutron absorption by thorium. However, for both the heterogeneous and homogeneous thorium loading options, the fuel cycle length is over 24 years without refueling or reshuffling of fuel assemblies. The physics properties of the EM² thorium core are close to those of the baseline core: low excess reactivity, a negative fuel temperature coefficient, and very small void reactivity. However, unlike the baseline EM², the homogeneous thorium fuel loading provides the additional advantage of reducing the power peaking of the core, which in turn reduces the cladding material neutron damage rate by 23%. The relatively slow ²³³U buildup, as compared to ²³⁹Pu in the baseline core, retards the reactivity increase without the need for the complicated fuel loading pattern of the heterogeneous option, while keeping the peak power density low. Therefore both the heterogeneous and homogeneous thorium loading options are feasible in the EM².

  11. Multiple Word-Length High-Level Synthesis

    Directory of Open Access Journals (Sweden)

    Coussy Philippe

    2008-01-01

    Full Text Available Abstract Digital signal processing (DSP) applications are nowadays widely used and their complexity is ever growing. The design of dedicated hardware accelerators is thus still needed in system-on-chip and embedded systems. Realistic hardware implementation first requires converting the floating-point data of the initial specification into arbitrary-length (finite-precision) data while keeping acceptable computation accuracy. Next, an optimized hardware architecture has to be designed. Considering a uniform bit-width specification allows the use of a traditional automated design flow, but it leads to oversized designs. On the other hand, considering a non-uniform bit-width specification yields a smaller circuit but requires complex design tasks. In this paper, we propose an approach that takes a C/C++ specification as input. The design flow, based on high-level synthesis (HLS) techniques, automatically generates a potentially pipelined RTL architecture described in VHDL. Both bit-accurate integer and fixed-point data types can be used in the input specification. The generated architecture uses components (operators, registers, etc.) that have different widths. The design constraints are the clock period and the throughput of the application. The proposed approach considers data word-length information in all the synthesis steps by using dedicated algorithms. We show the effectiveness of the proposed approach through several design experiments in the DSP domain.
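
    The floating-point to finite-precision conversion step can be illustrated with a simple signed fixed-point quantizer (round-to-nearest with saturation). The word-length split below is an assumption for illustration only, not the paper's tool:

    ```python
    def to_fixed(x, int_bits, frac_bits):
        """Quantize a float to signed fixed-point: `int_bits` integer bits
        (sign included) plus `frac_bits` fractional bits; round to nearest,
        saturate on overflow."""
        scale = 1 << frac_bits
        lo = -(1 << (int_bits + frac_bits - 1))        # most negative code
        hi = (1 << (int_bits + frac_bits - 1)) - 1     # most positive code
        q = int(round(x * scale))
        return max(lo, min(hi, q)) / scale
    ```

    For in-range values the quantization error is bounded by half an LSB, 1/(2 · 2^frac_bits); picking a per-signal (non-uniform) split is exactly the word-length optimization the paper automates.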

  13. Optimization of wind farm turbines layout using an evolutive algorithm

    International Nuclear Information System (INIS)

    Gonzalez, Javier Serrano; Santos, Jesus Riquelme; Payan, Manuel Burgos; Gonzalez Rodriguez, Angel G.; Mora, Jose Castro

    2010-01-01

    The optimum wind farm configuration problem is discussed in this paper and an evolutive algorithm to optimize the wind farm layout is proposed. The algorithm's optimization process is based on a global wind farm cost model using the initial investment and the present value of the yearly net cash flow during the entire wind farm life span. The proposed algorithm calculates the yearly income due to the sale of the net generated energy, taking into account the individual wind turbine loss of production due to wake decay effects, and it can deal with terrains with non-uniform load-bearing soil capacity and different roughness lengths for every wind direction, as well as restrictions such as forbidden areas or limits on the number of wind turbines or the investment. The results are first favorably compared with those previously published, and a second collection of test cases is used to prove the performance and suitability of the proposed evolutive algorithm for finding the optimum wind farm configuration. (author)

  14. Phytochemical Composition, Antioxidant and Xanthine Oxidase Inhibitory Activities of <em>Amaranthus cruentus</em> L. and <em>Amaranthus hybridus</em> L. Extracts

    Directory of Open Access Journals (Sweden)

    Jeanne F. Millogo

    2012-06-01

    Full Text Available This paper describes a preliminary assessment of the nutraceutical value of <em>Amaranthus cruentus</em> (<em>A. cruentus</em>) and <em>Amaranthus hybridus</em> (<em>A. hybridus</em>), two food plant species found in Burkina Faso. Hydroacetonic (HAE), methanolic (ME), and aqueous (AE) extracts from the aerial parts were screened for <em>in vitro</em> antioxidant and xanthine oxidase inhibitory activities. Phytochemical analyses revealed the presence of polyphenols, tannins, flavonoids, steroids, terpenoids, saponins and betalains. Hydroacetonic extracts showed the greatest diversity of secondary metabolites. TLC analyses of flavonoids from the HAE extracts showed the presence of rutin and other unidentified compounds. The phenolic compound contents of the HAE, ME and AE extracts were determined using the Folin-Ciocalteu method and ranged from 7.55 to 10.18 mg gallic acid equivalent (GAE)/100 mg. Tannins, flavonoids, and flavonols ranged from 2.83 to 10.17 mg tannic acid equivalent (TAE)/100 mg, 0.37 to 7.06 mg quercetin equivalent (QE)/100 mg, and 0.09 to 1.31 mg QE/100 mg, respectively. The betacyanin contents were 40.42 and 6.35 mg amaranthin equivalent/100 g aerial parts (dry weight) in <em>A. cruentus</em> and <em>A. hybridus</em>, respectively. Free-radical scavenging activity expressed as IC50 (DPPH method) and iron reducing power (FRAP method) ranged from 56 to 423 µg/mL and from 2.26 to 2.56 mmol AAE/g, respectively. Xanthine oxidase inhibitory activities of the extracts of <em>A. cruentus</em> and <em>A. hybridus</em> were 3.18% and 38.22%, respectively. The <em>A. hybridus</em> extract showed the best antioxidant and xanthine oxidase inhibition activities. The results indicate that the phytochemical contents of the two species justify their traditional uses as nutraceutical food plants.

  15. An Efficient Forward-Reverse EM Algorithm for Statistical Inference in Stochastic Reaction Networks

    KAUST Repository

    Bayer, Christian; Moraes, Alvaro; Tempone, Raul; Vilanova, Pedro

    2016-01-01

    In this work [1], we present an extension of the forward-reverse algorithm by Bayer and Schoenmakers [2] to the context of stochastic reaction networks (SRNs). We then apply this bridge-generation technique to the statistical inference problem.

  16. Dermatoses in chronic renal disease patients on dialysis therapy

    Directory of Open Access Journals (Sweden)

    Luis Alberto Batista Peres

    2014-03-01

    Full Text Available Objective: Skin and mucosal disorders are common in patients on long-term hemodialysis. Dialysis prolongs life expectancy, giving time for these abnormalities to become manifest. The objective of this study was to evaluate the prevalence of dermatological problems in patients with chronic kidney disease (CKD) on hemodialysis. Methods: One hundred and forty-five patients with chronic kidney disease on hemodialysis were studied. All patients were fully examined for skin, hair, mucosal and nail changes by a single examiner, and laboratory data were collected. The data were stored in a Microsoft Excel database and analyzed by descriptive statistics. Continuous variables were compared with Student's t-test and categorical variables with the chi-squared test or Fisher's exact test, as appropriate. Results: The study included 145 patients, with a mean age of 53.6 ± 14.7 years, predominantly male (64.1%) and Caucasian (90.0%). The mean time on dialysis was 43.3 ± 42.3 months. The main underlying diseases were hypertension in 33.8%, diabetes mellitus in 29.6% and chronic glomerulonephritis in 13.1%. The main dermatological manifestations observed were xerosis in 109 (75.2%), ecchymosis in 87 (60.0%), pruritus in 78 (53.8%) and lentigo in 33 (22.8%) patients. Conclusion: Our study showed the presence of more than one dermatosis per patient. Skin changes are frequent in dialysis patients. Further studies are needed for better characterization and management of these dermatoses.

  17. Cytotoxicity and Glycan-Binding Properties of an 18 kDa Lectin Isolated from the Marine Sponge <em>Halichondria okadai</em>

    Directory of Open Access Journals (Sweden)

    Yasuhiro Ozeki

    2012-04-01

    Full Text Available A divalent cation-independent lectin, HOL-18, with cytotoxic activity against leukemia cells, was purified from a demosponge, <em>Halichondria okadai</em>. HOL-18 is a 72 kDa tetrameric lectin that consists of four non-covalently bonded 18 kDa subunits. Hemagglutination activity of the lectin was strongly inhibited by chitotriose (GlcNAcβ1-4GlcNAcβ1-4GlcNAc), fetuin and mucins from porcine stomach and bovine submaxillary gland. Lectin activity was stable at pH 4-12 and temperatures lower than 60 °C. Frontal affinity chromatography with 16 types of pyridylaminated oligosaccharides indicated that the lectin had an affinity for <em>N</em>-linked complex-type and sphingolipid-type oligosaccharides with <em>N</em>-acetylated hexosamines and neuraminic acid at the non-reducing termini. The lectin killed Jurkat leukemia T cells and K562 erythroleukemia cells in a dose- and carbohydrate-dependent manner.

  18. Optical flow optimization using parallel genetic algorithm

    Science.gov (United States)

    Zavala-Romero, Olmo; Botella, Guillermo; Meyer-Bäse, Anke; Meyer Base, Uwe

    2011-06-01

    A new approach to optimizing the parameters of a gradient-based optical flow model using a parallel genetic algorithm (GA) is proposed. The main characteristics of the optical flow algorithm are its bio-inspiration and robustness against contrast, static patterns and noise, besides working consistently with several optical illusions where other algorithms fail. This model depends on many parameters, which define the number of channels, the orientations required, and the length and shape of the kernel functions used in the convolution stage, among others. The GA is used to find a set of parameters that improves the accuracy of the optical flow on inputs where ground-truth data is available. This set of parameters helps in understanding which of them are better suited for each type of input and can be used to estimate the parameters of the optical flow algorithm when used with videos that share similar characteristics. The proposed implementation takes into account the embarrassingly parallel nature of the GA and uses the OpenMP Application Programming Interface (API) to speed up the process of estimating an optimal set of parameters. The information obtained in this work can be used to dynamically reconfigure systems, with potential applications in robotics, medical imaging and tracking.

  19. Over-Expression of CYP2E1 mRNA and Protein: Implications of Xenobiotic-Induced Damage in Patients with <em>De Novo</em> Acute Myeloid Leukemia with inv(16)(p13.1q22); <em>CBFβ-MYH11</em>

    Directory of Open Access Journals (Sweden)

    Carlos E. Bueso-Ramos

    2012-08-01

    Full Text Available Environmental exposure to benzene occurs through cigarette smoke, unleaded gasoline and certain types of plastic. Benzene is converted to hematotoxic metabolites by the hepatic phase-I enzyme CYP2E1, and these metabolites are detoxified by the phase-II enzyme NQO1. The genes encoding these enzymes are highly polymorphic, and studies of these polymorphisms have shown different pathogenic and prognostic features in various hematological malignancies. The potential role of different cytochrome P450 metabolizing enzymes in the pathogenesis of acute myeloid leukemia (AML) is an area of active interest. In this study, we demonstrate aberrant CYP2E1 mRNA over-expression by quantitative real-time polymerase chain reaction in 11 cases of <em>de novo</em> AML with inv(16); CBFβ-MYH11. CYP2E1 mRNA levels correlated with <em>CBFβ-MYH11</em> transcript levels and with bone marrow blast counts in all cases. CYP2E1 over-expression correlated positively with NQO1 mRNA levels (R² = 0.934, n = 7). By immunohistochemistry, CYP2E1 protein was more frequently expressed in AML with inv(16) compared with other types of AML (<em>p</em> < 0.001). We obtained serial bone marrow samples from two patients with AML with inv(16) before and after treatment. CYP2E1 mRNA expression levels decreased in parallel with <em>CBFβ-MYH11</em> transcript levels and blast counts following chemotherapy. In contrast, CYP1A2 transcript levels did not change in either patient. This is the first study to demonstrate concurrent over-expression of CYP2E1 and NQO1 mRNA in AML with inv(16). These findings also suggest that a balance between CYP2E1 and NQO1 may be important in the pathogenesis of AML with inv(16).

  20. Investigation of the three-dimensional lattice HP protein folding model using a genetic algorithm

    Directory of Open Access Journals (Sweden)

    Fábio L. Custódio

    2004-01-01

    An approach to the hydrophobic-polar (HP) protein folding model was developed using a genetic algorithm (GA) to find the optimal structures on a 3D cubic lattice. A modification was introduced to the scoring system of the original model to improve its capacity to generate more natural-like structures. The modification was based on the assumption that it may be preferable for a hydrophobic monomer to have a polar neighbor than to be in direct contact with the polar solvent. The compactness and segregation criteria were used to compare structures created by the original HP model and by the modified one. An island algorithm, a new selection scheme and multiple-point crossover were used to improve the performance of the algorithm. Ten sequences, seven with length 27 and three with length 64, were analyzed. Our results suggest that the modified model has a greater tendency to form globular structures. This might be preferable, since the original HP model does not take into account the positioning of long polar segments. The algorithm was implemented in the form of a program with a graphical user interface that may have didactic potential in the study of GAs and in the understanding of hydrophobic core formation.
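
    The modified scoring can be made concrete with a short sketch. The contact weights below (-1 per non-bonded H-H contact, as in the original HP model, plus a smaller bonus for an H monomer buried next to a polar neighbour) are illustrative assumptions; the abstract does not give the paper's exact values:

```python
# Energy of a conformation on the 3D cubic lattice: coords[i] is the lattice
# site of residue i, seq[i] is 'H' or 'P'.
def adjacent(a, b):
    return sum(abs(u - v) for u, v in zip(a, b)) == 1

def hp_energy(seq, coords, hh=-1.0, hp=-0.2):
    e = 0.0
    for i in range(len(seq)):
        for j in range(i + 2, len(seq)):      # skip chain-bonded neighbours
            if adjacent(coords[i], coords[j]):
                pair = {seq[i], seq[j]}
                if pair == {'H'}:
                    e += hh                   # original HP contact reward
                elif pair == {'H', 'P'}:
                    e += hp                   # H next to P rather than solvent
    return e
```

    A GA of the kind described would then evolve self-avoiding walks on the lattice that minimize `hp_energy`.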

  1. Nuclear reactors project optimization based on neural network and genetic algorithm; Otimizacao em projetos de reatores nucleares baseada em rede neural e algoritmo genetico

    Energy Technology Data Exchange (ETDEWEB)

    Pereira, Claudio M.N.A. [Instituto de Engenharia Nuclear (IEN), Rio de Janeiro, RJ (Brazil); Schirru, Roberto; Martinez, Aquilino S. [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia

    1997-12-01

    This work presents a prototype of a system for nuclear reactor core design optimization based on genetic algorithms and artificial neural networks. A neural network is modeled and trained in order to predict the flux and the neutron multiplication factor values based on the enrichment, lattice pitch and cladding thickness, with an average error of less than 2%. The values predicted by the neural network are used by the genetic algorithm in its heuristic search, guided by an objective function that rewards high flux values and penalizes multiplication factors far from the required value. By associating this quick prediction (which may substitute for the reactor physics calculation code) with the global optimization capacity of the genetic algorithm, a quick and effective system for nuclear reactor core design optimization was obtained. (author). 11 refs., 8 figs., 3 tabs.

  2. Change detection algorithms for surveillance in visual iot: a comparative study

    International Nuclear Information System (INIS)

    Akram, B.A.; Zafar, A.; Akbar, A.H.; Chaudhry, A.

    2018-01-01

    The VIoT (Visual Internet of Things) connects the virtual information world with real-world objects using sensors and pervasive computing. For video surveillance in the VIoT, ChD (Change Detection) is a critical component. ChD algorithms identify regions of change in multiple images of the same scene recorded at different time intervals. This paper presents a performance comparison of histogram-thresholding and classification ChD algorithms using quantitative measures for video surveillance in the VIoT, based on salient features of the datasets. The thresholding algorithms Otsu, Kapur and Rosin and the classification methods k-means and EM (Expectation Maximization) were simulated in MATLAB using diverse datasets. For performance evaluation, the quantitative measures used include OSR (Overall Success Rate), YC (Yule's Coefficient), JC (Jaccard's Coefficient), execution time and memory consumption. Experimental results showed that Kapur's algorithm performed better for both indoor and outdoor environments with illumination changes, shadowing and medium- to fast-moving objects. However, it showed degraded performance for small object sizes with minor changes. The Otsu algorithm showed better results for indoor environments with slow to medium changes and nomadic object mobility. k-means showed good results in indoor environments with small object sizes producing slow change, no shadowing and scarce illumination changes. (author)
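
    Of the compared methods, Otsu's histogram thresholding is the simplest to sketch. Below it is applied to a toy 1-D "frame difference" and scored with Jaccard's Coefficient; this is a minimal illustration of the thresholding branch of the comparison, not the paper's MATLAB pipeline:

```python
def otsu_threshold(values, levels=256):
    # Otsu: pick the threshold maximising between-class variance.
    hist = [0] * levels
    for v in values:
        hist[v] += 1
    total = len(values)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0, sum0 = 0, 0.0
    for t in range(levels):
        w0 += hist[t]
        if w0 == 0:
            continue
        w1 = total - w0
        if w1 == 0:
            break
        sum0 += t * hist[t]
        m0, m1 = sum0 / w0, (total_sum - sum0) / w1
        var_between = w0 * w1 * (m0 - m1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

def change_mask(frame_a, frame_b):
    # Threshold the absolute frame difference to get a binary change mask.
    diff = [abs(a - b) for a, b in zip(frame_a, frame_b)]
    t = otsu_threshold(diff)
    return [int(d > t) for d in diff]

def jaccard(mask, truth):
    # JC = |intersection| / |union| against a ground-truth mask.
    inter = sum(1 for m, g in zip(mask, truth) if m and g)
    union = sum(1 for m, g in zip(mask, truth) if m or g)
    return inter / union if union else 1.0
```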

  3. Image steganography based on 2^k correction and coherent bit length

    Science.gov (United States)

    Sun, Shuliang; Guo, Yongning

    2014-10-01

    In this paper, a novel algorithm is proposed. Firstly, the edge of the cover image is detected with the Canny operator and secret data are embedded in edge pixels. A sorting method is used to randomize the edge pixels in order to enhance security. The coherent bit length L is determined by the relevant edge pixels. Finally, the method of 2^k correction is applied to achieve better imperceptibility in the stego image. Experiments show that the proposed method is better than LSB-3 and Jae-Gil Yu's scheme in PSNR and capacity.
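
    The 2^k correction step can be sketched for a single pixel, with k playing the role of the coherent bit length. The candidate-adjustment rule below is a common formulation of 2^k correction, assumed here rather than taken from the paper:

```python
def embed_pixel(value, bits, k):
    # Replace the k least-significant bits of an 8-bit pixel with message bits,
    # then apply 2^k correction: shifting the result by +/- 2^k leaves the
    # embedded bits intact but may land closer to the original pixel value.
    payload = int(bits, 2)
    stego = (value >> k << k) | payload
    for candidate in (stego - (1 << k), stego + (1 << k)):
        if 0 <= candidate <= 255 and abs(candidate - value) < abs(stego - value):
            stego = candidate
    return stego
```

    For example, embedding the bits `111` into pixel 104 gives 111 by plain LSB replacement (error 7), but the corrected value 103 carries the same three LSBs with an error of only 1, which is what improves imperceptibility.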

  4. The length-weight and length-length relationships of bluefish, Pomatomus saltatrix (Linnaeus, 1766) from Samsun, middle Black Sea region

    Directory of Open Access Journals (Sweden)

    Melek Özpiçak

    2017-10-01

    In this study, the length-weight relationship (LWR) and length-length relationship (LLR) of bluefish, Pomatomus saltatrix, were determined. A total of 125 specimens were sampled from Samsun, the middle Black Sea, in the 2014 fishing season. Bluefish specimens were collected monthly from commercial fishing boats from October to December 2014. All captured individuals (N = 125) were measured to the nearest 0.1 cm for total, fork and standard lengths. The weight of each fish (W) was recorded to the nearest 0.01 g. According to the results of the analyses, there were no statistically significant differences between sexes in terms of length and weight (P > 0.05). The total, fork and standard lengths of bluefish ranged between 13.5-23.6 cm, 12.50-21.80 cm and 10.60-20.10 cm, respectively. The length-weight relationship was calculated as W = 0.008TL^3.12 (r² > 0.962). Positive allometric growth was observed for bluefish (b > 3). The length-length relationship was also highly significant (P < 0.001), with coefficients of determination (r²) ranging from 0.916 to 0.988.
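
    Parameters like a = 0.008 and b = 3.12 come from fitting W = aL^b, which in practice is an ordinary least-squares straight line in log-log space. A minimal sketch (the sample data below are synthetic, generated from the reported equation, not the study's measurements):

```python
import math

def fit_lwr(lengths, weights):
    # W = a * L**b  becomes  log W = log a + b * log L: an ordinary
    # least-squares straight line in log-log space.
    xs = [math.log(L) for L in lengths]
    ys = [math.log(w) for w in weights]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)   # intercept back-transformed out of log space
    return a, b
```

    b > 3 then indicates positive allometric growth (weight increasing faster than the cube of length).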

  5. Thin-Sheet Inversion Modeling of Geomagnetic Deep Sounding Data Using MCMC Algorithm

    Directory of Open Access Journals (Sweden)

    Hendra Grandis

    2013-01-01

    The geomagnetic deep sounding (GDS) method is one of the electromagnetic (EM) methods in geophysics that allows the estimation of the subsurface electrical conductivity distribution. This paper presents the inversion modeling of GDS data employing a Markov Chain Monte Carlo (MCMC) algorithm to evaluate the marginal posterior probability of the model parameters. We used a thin-sheet model to represent quasi-3D conductivity variations in the heterogeneous subsurface. The algorithm was applied to invert field GDS data from a zone spanning the eastern margin of the Bohemian Massif to the West Carpathians in Europe. Conductivity anomalies obtained from this study confirm the well-known large-scale tectonic setting of the area.
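
    Evaluating marginal posteriors by MCMC rests on a sampler of this general type. A minimal random-walk Metropolis sketch, with the thin-sheet forward model replaced by a stand-in log-posterior (a deliberate simplification; the real algorithm evaluates the EM forward response of the conductance grid at each step):

```python
import math
import random

def metropolis(log_posterior, init, steps=5000, scale=0.5, seed=2):
    # Random-walk Metropolis: propose a Gaussian perturbation, accept with
    # probability min(1, posterior ratio). The marginal posterior of each
    # parameter is then read off the histogram of the retained samples.
    rng = random.Random(seed)
    x = list(init)
    lp = log_posterior(x)
    samples = []
    for _ in range(steps):
        prop = [xi + rng.gauss(0, scale) for xi in x]
        lp_new = log_posterior(prop)
        if math.log(rng.random()) < lp_new - lp:   # Metropolis accept rule
            x, lp = prop, lp_new
        samples.append(list(x))
    return samples
```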

  6. Genetic algorithms for protein threading.

    Science.gov (United States)

    Yadgari, J; Amir, A; Unger, R

    1998-01-01

    Despite many years of effort, a direct prediction of protein structure from sequence is still not possible. As a result, in the last few years researchers have started to address the "inverse folding problem": identifying and aligning a sequence to the fold with which it is most compatible, a process known as "threading". In two meetings in which protein folding predictions were objectively evaluated, it became clear that threading as a concept promises a real breakthrough, but that much improvement is still needed in the technique itself. Threading is an NP-hard problem, and thus no general polynomial solution can be expected. Still, a practical approach with demonstrated ability to find optimal solutions in many cases, and acceptable solutions in other cases, is needed. We applied the technique of Genetic Algorithms in order to significantly improve the ability of threading algorithms to find the optimal alignment of a sequence to a structure, i.e. the alignment with the minimum free energy. A major advance reported here is the design of a representation of the threading alignment as a string of fixed length. With this representation, validation of alignments and genetic operators are effectively implemented. Appropriate data structures and parameters have been selected. It is shown that Genetic Algorithm threading is effective and is able to find the optimal alignment in a few test cases. Furthermore, the described algorithm is shown to perform well even without pre-definition of core elements. Existing threading methods depend on such constraints to make their calculations feasible, but the concept of core elements is inherently arbitrary and should be avoided if possible. While a rigorous proof is hard to submit yet, we present indications that Genetic Algorithm threading is indeed capable of finding consistently good solutions of full alignments in search spaces of size up to 10^70.

  7. Length-weight regressions of the microcrustacean species from a tropical floodplain Regressões peso-comprimento das espécies de microcrustáceos em uma planície de inundação tropical

    Directory of Open Access Journals (Sweden)

    Fábio de Azevedo

    2012-03-01

    AIM: This study presents length-weight regressions adjusted for the most representative microcrustacean species and young stages of copepods from tropical lakes, together with a comparison of these results with estimates from the literature for tropical and temperate regions. METHODS: Samples were taken from six isolated lakes, in summer and winter, using a motorized pump and plankton net. The dry weight of each size class (for cladocerans) or developmental stage (for copepods) was measured using an electronic microbalance. RESULTS: The adjusted regressions were significant. We observed a trend of under-estimating the weights of smaller species and overestimating those of larger species when using regressions obtained from temperate regions. CONCLUSION: We must be cautious about using pooled regressions from the literature, preferring models of similar species, or weighing the organisms and building new models.

  8. Application of Novel Polymorphic Microsatellite Loci Identified in the Korean Pacific Abalone (Haliotis diversicolor supertexta) (Haliotidae) in the Genetic Characterization of Wild and Released Populations

    Directory of Open Access Journals (Sweden)

    Seong Wan Hong

    2012-08-01

    The small abalone, Haliotis diversicolor supertexta, of the family Haliotidae, is one of the most important species of marine shellfish in eastern Asia. Over the past few decades, this species has drastically declined in Korea. Thus, hatchery-bred seeds have been released into natural coastal areas to compensate for the reduced fishery resources. However, information on the genetic background of the small abalone is scarce. In this study, 20 polymorphic microsatellite DNA markers were identified using next-generation sequencing techniques and used to compare allelic variation between wild and released abalone populations in Korea. Using high-throughput genomic sequencing, a total of 1516 reads (2.26%; average length of 385 bp) containing simple sequence repeats were obtained from 86,011 raw reads. Among the 99 loci screened, 28 amplified successfully, and 20 were polymorphic. When comparing allelic variation between wild and released abalone populations, a total of 243 different alleles were observed, with 18.7 alleles per locus. High genetic diversity (mean heterozygosity = 0.81; mean allelic number = 15.5) was observed in both populations. A statistical analysis of the fixation index (FST) and analysis of molecular variance (AMOVA) indicated limited genetic differences between the two populations (FST = 0.002, p > 0.05). Although no significant reductions in the genetic diversity were found in the released population compared with the wild population (p > 0.05), the genetic diversity parameters revealed that the seeds released for stock abundance had a different genetic composition. These differences are likely a result of hatchery selection and inbreeding. Additionally, all the primer pair sets were effectively amplified in another congeneric species, H. diversicolor diversicolor, indicating that these primers are useful for both abalone species. These microsatellite loci

  9. Efficient sequential and parallel algorithms for planted motif search.

    Science.gov (United States)

    Nicolae, Marius; Rajasekaran, Sanguthevar

    2014-01-31

    Motif searching is an important step in the detection of rare events occurring in a set of DNA or protein sequences. One formulation of the problem is known as (l,d)-motif search or Planted Motif Search (PMS). In PMS we are given two integers l and d and n biological sequences. We want to find all sequences of length l that appear in each of the input sequences with at most d mismatches. The PMS problem is NP-complete. PMS algorithms are typically evaluated on certain instances considered challenging. Despite ample research in the area, a considerable performance gap exists because many state-of-the-art algorithms have large runtimes even for moderately challenging instances. This paper presents a fast exact parallel PMS algorithm called PMS8. PMS8 is the first algorithm to solve the challenging (l,d) instances (25,10) and (26,11). PMS8 is also efficient on instances with larger l and d such as (50,21). We include a comparison of PMS8 with several state-of-the-art algorithms on multiple problem instances. This paper also presents necessary and sufficient conditions for 3 l-mers to have a common d-neighbor. The program is freely available at http://engr.uconn.edu/~man09004/PMS8/. We present PMS8, an efficient exact algorithm for Planted Motif Search. PMS8 introduces novel ideas for generating common neighborhoods. We have also implemented a parallel version for this algorithm. PMS8 can solve instances not solved by any previous algorithms.
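
    The (l,d) formulation is easy to state in code. The exhaustive scan below enumerates all 4^l candidate l-mers, which is correct but only feasible for tiny l; that exponential wall is exactly what PMS8's pruned neighborhood generation addresses (this sketch is not PMS8 itself):

```python
from itertools import product

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def occurs(motif, seq, d):
    # Does `motif` appear in `seq` with at most d mismatches?
    l = len(motif)
    return any(hamming(motif, seq[i:i + l]) <= d
               for i in range(len(seq) - l + 1))

def planted_motif_search(seqs, l, d):
    # Brute force over all 4**l candidates; keep those present in every
    # input sequence within d mismatches.
    hits = []
    for cand in product('ACGT', repeat=l):
        motif = ''.join(cand)
        if all(occurs(motif, s, d) for s in seqs):
            hits.append(motif)
    return hits
```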

  10. Alignment of cryo-EM movies of individual particles by optimization of image translations.

    Science.gov (United States)

    Rubinstein, John L; Brubaker, Marcus A

    2015-11-01

    Direct detector device (DDD) cameras have revolutionized single particle electron cryomicroscopy (cryo-EM). In addition to an improved camera detective quantum efficiency, acquisition of DDD movies allows for correction of movement of the specimen, due to both instabilities in the microscope specimen stage and electron beam-induced movement. Unlike specimen stage drift, beam-induced movement is not always homogeneous within an image. Local correlation in the trajectories of nearby particles suggests that beam-induced motion is due to deformation of the ice layer. Algorithms have already been described that can correct movement for large regions of frames and for >1 MDa protein particles. Another algorithm allows individual images to be aligned without frame averaging or linear trajectories. The algorithm maximizes the overall correlation of the shifted frames with the sum of the shifted frames. The optimum in this single objective function is found efficiently by making use of analytically calculated derivatives of the function. To smooth estimates of particle trajectories, rapid changes in particle positions between frames are penalized in the objective function and weighted averaging of nearby trajectories ensures local correlation in trajectories. This individual particle motion correction, in combination with weighting of Fourier components to account for increasing radiation damage in later frames, can be used to improve 3-D maps from single particle cryo-EM. Copyright © 2015 Elsevier Inc. All rights reserved.
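
    A 1-D toy version of the single-objective idea: each "frame" gets an integer circular shift chosen to maximise its correlation with the leave-one-out sum of the other shifted frames. This is only a caricature of the method described (the real algorithm works on 2-D images, penalises rough trajectories, and uses analytic derivatives instead of an exhaustive search over integer shifts):

```python
def align_frames(frames, max_shift=3, iterations=4):
    # frames: list of equal-length 1-D signals (circular boundary assumed).
    n = len(frames[0])
    shifts = [0] * len(frames)

    def shifted(f, s):
        return [f[(i - s) % n] for i in range(n)]

    total = [sum(vals) for vals in zip(*frames)]   # running sum of shifted frames
    for _ in range(iterations):
        for k, f in enumerate(frames):
            cur = shifted(f, shifts[k])
            ref = [t - c for t, c in zip(total, cur)]   # sum of the others
            best = max(range(-max_shift, max_shift + 1),
                       key=lambda s: sum(x * r
                                         for x, r in zip(shifted(f, s), ref)))
            new = shifted(f, best)
            total = [t - c + w for t, c, w in zip(total, cur, new)]
            shifts[k] = best
    return shifts
```

    The recovered shifts are only defined up to a global offset, which is why real implementations fix a reference frame or the mean trajectory.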

  11. Extension algorithm for generic low-voltage networks

    Science.gov (United States)

    Marwitz, S.; Olk, C.

    2018-02-01

    Distributed energy resources (DERs) are increasingly penetrating the energy system which is driven by climate and sustainability goals. These technologies are mostly connected to low-voltage electrical networks and change the demand and supply situation in these networks. This can cause critical network states. Network topologies vary significantly and depend on several conditions including geography, historical development, network design or number of network connections. In the past, only some of these aspects were taken into account when estimating the network investment needs for Germany on the low-voltage level. Typically, fixed network topologies are examined or a Monte Carlo approach is used to quantify the investment needs at this voltage level. Recent research has revealed that DERs differ substantially between rural, suburban and urban regions. The low-voltage network topologies have different design concepts in these regions, so that different network topologies have to be considered when assessing the need for network extensions and investments due to DERs. An extension algorithm is needed to calculate network extensions and investment needs for the different typologies of generic low-voltage networks. We therefore present a new algorithm, which is capable of calculating the extension for generic low-voltage networks of any given topology based on voltage range deviations and thermal overloads. The algorithm requires information about line and cable lengths, their topology and the network state only. We test the algorithm on a radial, a loop, and a heavily meshed network. Here we show that the algorithm functions for electrical networks with these topologies. We found that the algorithm is able to extend different networks efficiently by placing cables between network nodes.
The main value of the algorithm is that it does not require any information about routes for additional cables or positions for additional substations when it comes to estimating

  12. Development of information preserving data compression algorithm for CT images

    International Nuclear Information System (INIS)

    Kobayashi, Yoshio

    1989-01-01

    Although digital imaging techniques in radiology are developing rapidly, problems arise in the archival storage and communication of image data. This paper reports a new information-preserving data compression algorithm for computed tomographic (CT) images. The algorithm consists of the following five processes: 1. Pixels surrounding the human body showing CT values smaller than -900 H.U. are eliminated. 2. Each pixel is encoded by its numerical difference from its neighboring pixel along a matrix line. 3. Difference values are encoded by a newly designed code rather than the natural binary code. 4. Image data obtained with the above process are decomposed into bit planes. 5. The bit state transitions in each bit plane are encoded by run-length coding. Using this new algorithm, the compression ratios of brain, chest, and abdomen CT images are 4.49, 4.34, and 4.40, respectively. (author)
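
    Steps 2, 4 and 5 of the pipeline can be sketched directly; step 3's newly designed code is specific to the paper, so plain Python values stand in for it here:

```python
def delta_encode(row):
    # Step 2: each pixel becomes its difference from the previous pixel
    # on the matrix line (first pixel kept verbatim).
    return [row[0]] + [b - a for a, b in zip(row, row[1:])]

def bit_plane(values, plane):
    # Step 4: extract one bit plane from (non-negative) coded values.
    return [(v >> plane) & 1 for v in values]

def run_length(bits):
    # Step 5: encode the state transitions of a bit plane as (bit, run) pairs.
    runs = []
    for b in bits:
        if runs and runs[-1][0] == b:
            runs[-1][1] += 1
        else:
            runs.append([b, 1])
    return [(b, n) for b, n in runs]
```

    Delta encoding works because neighboring CT pixels are strongly correlated, so most differences are small and the high bit planes become long constant runs that run-length coding compresses well.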

  13. Quantum mean-field decoding algorithm for error-correcting codes

    International Nuclear Information System (INIS)

    Inoue, Jun-ichi; Saika, Yohei; Okada, Masato

    2009-01-01

    We numerically examine a quantum version of a TAP (Thouless-Anderson-Palmer)-like mean-field algorithm for the problem of error-correcting codes. For a class of the so-called Sourlas error-correcting codes, we check its usefulness for retrieving the original bit sequence (message) of finite length. The decoding dynamics is derived explicitly and we evaluate the average-case performance through the bit-error rate (BER).

  14. Detecting microsatellites within genomes: significant variation among algorithms

    Directory of Open Access Journals (Sweden)

    Rivals Eric

    2007-04-01

    Background: Microsatellites are short, tandemly repeated DNA sequences which are widely distributed among genomes. Their structure, role and evolution can be analyzed based on exhaustive extraction from sequenced genomes. Several dedicated algorithms have been developed for this purpose. Here, we compared the detection efficiency of five of them (TRF, Mreps, Sputnik, STAR, and RepeatMasker). Results: Our analysis was first conducted on the human X chromosome, and microsatellite distributions were characterized by microsatellite number, length, and divergence from a pure motif. The algorithms work with user-defined parameters, and we demonstrate that the parameter values chosen can strongly influence microsatellite distributions. The five algorithms were then compared with fixed parameter settings, and the analysis was extended to three other genomes (Saccharomyces cerevisiae, Neurospora crassa and Drosophila melanogaster) spanning a wide range of size and structure. Significant differences in all characteristics of microsatellites were observed among algorithms, but not among genomes, for both perfect and imperfect microsatellites. Striking differences were detected for short microsatellites (below 20 bp), regardless of motif. Conclusion: Since the algorithm used strongly influences empirical distributions, studies analyzing microsatellite evolution based on a comparison between empirical and theoretical size distributions should be considered with caution. We also discuss why a typological definition of microsatellites limits our capacity to capture their genomic distributions.
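
    A naive detector makes the parameter-dependence point concrete: motif lengths and a minimum total repeat length must be chosen up front, and different choices yield different distributions. The thresholds below are arbitrary examples, not settings from any of the five compared tools, and only perfect repeats are handled:

```python
def find_microsatellites(seq, motif_lens=(1, 2, 3), min_total=6):
    # Greedy left-to-right scan for perfect tandem repeats, reported as
    # (start, motif, copies) tuples.
    found, i, n = [], 0, len(seq)
    while i < n:
        best = None
        for m in motif_lens:
            motif = seq[i:i + m]
            if len(motif) < m:
                continue
            copies = 1
            while seq[i + copies * m: i + (copies + 1) * m] == motif:
                copies += 1
            total = copies * m
            if total >= min_total and (best is None
                                       or total > len(best[1]) * best[2]):
                best = (i, motif, copies)
        if best:
            found.append(best)
            i += len(best[1]) * best[2]   # skip past the reported repeat
        else:
            i += 1
    return found
```

    Raising `min_total` from 6 to 12, for instance, silently drops every short microsatellite, which is precisely the regime (below 20 bp) where the study found the tools disagree most.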

  15. Considerations and Algorithms for Compression of Sets

    DEFF Research Database (Denmark)

    Larsson, Jesper

    We consider compression of unordered sets of distinct elements. After a discussion of the general problem, we focus on compressing sets of fixed-length bitstrings in the presence of statistical information. We survey techniques from previous work, suggesting some adjustments, and propose a novel compression algorithm that allows transparent incorporation of various estimates for the probability distribution. Our experimental results allow the conclusion that set compression can benefit from incorporating statistics, using our method or variants of previously known techniques.

  16. An implementation of super-encryption using RC4A and MDTM cipher algorithms for securing PDF Files on android

    Science.gov (United States)

    Budiman, M. A.; Rachmawati, D.; Parlindungan, M. R.

    2018-03-01

    MDTM is a classical symmetric cryptographic algorithm. As with other classical algorithms, the MDTM Cipher is easy to implement but less secure than modern symmetric algorithms. In order to make it more secure, the stream cipher RC4A is added, and the cryptosystem thus becomes a super encryption. In this process, plaintexts derived from PDFs are first encrypted with the MDTM Cipher algorithm and then encrypted once more with the RC4A algorithm. The test results show that the time complexity is Θ(n²) and the running time is directly proportional to the length of the plaintext and the keys entered.
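
    The layering can be sketched as follows. Two deliberate simplifications: a toy byte-shift substitution stands in for the MDTM Cipher (whose tableau is not given in the abstract), and plain RC4 stands in for the RC4A variant (RC4A runs two interleaved RC4 states):

```python
def rc4_keystream(key, n):
    # Standard RC4: key-scheduling algorithm (KSA) then n bytes of PRGA.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    out, i, j = [], 0, 0
    for _ in range(n):
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(S[(S[i] + S[j]) % 256])
    return out

def super_encrypt(plaintext, shift, key):
    # Layer 1: toy substitution (stand-in for MDTM). Layer 2: stream cipher.
    stage1 = bytes((b + shift) % 256 for b in plaintext)
    ks = rc4_keystream(list(key), len(stage1))
    return bytes(b ^ k for b, k in zip(stage1, ks))

def super_decrypt(ciphertext, shift, key):
    # Undo the layers in reverse order.
    ks = rc4_keystream(list(key), len(ciphertext))
    stage1 = bytes(b ^ k for b, k in zip(ciphertext, ks))
    return bytes((b - shift) % 256 for b in stage1)
```

    Both layers touch each byte a constant number of times, which is consistent with the reported running time growing linearly with the plaintext length.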

  17. A Fast Approximate Algorithm for Mapping Long Reads to Large Reference Databases.

    Science.gov (United States)

    Jain, Chirag; Dilthey, Alexander; Koren, Sergey; Aluru, Srinivas; Phillippy, Adam M

    2018-04-30

    Emerging single-molecule sequencing technologies from Pacific Biosciences and Oxford Nanopore have revived interest in long-read mapping algorithms. Alignment-based seed-and-extend methods demonstrate good accuracy, but face limited scalability, while faster alignment-free methods typically trade decreased precision for efficiency. In this article, we combine a fast approximate read mapping algorithm based on minimizers with a novel MinHash identity estimation technique to achieve both scalability and precision. In contrast to prior methods, we develop a mathematical framework that defines the types of mapping targets we uncover, establish probabilistic estimates of p-value and sensitivity, and demonstrate tolerance for alignment error rates up to 20%. With this framework, our algorithm automatically adapts to different minimum length and identity requirements and provides both positional and identity estimates for each mapping reported. For mapping human PacBio reads to the hg38 reference, our method is 290× faster than Burrows-Wheeler Aligner (BWA-MEM) with a lower memory footprint and recall rate of 96%. We further demonstrate the scalability of our method by mapping noisy PacBio reads (each ≥5 kbp in length) to the complete NCBI RefSeq database containing 838 Gbp of sequence and >60,000 genomes.
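
    The identity-estimation half of the approach builds on MinHash sketches of k-mer sets. A bottom-k sketch sketch in Python (illustrative only: the parameters are arbitrary and the paper additionally uses minimizer-based seeding and converts the Jaccard estimate into a sequence-identity estimate):

```python
import hashlib

def kmers(seq, k):
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def sketch(seq, k=4, size=64):
    # Bottom-k MinHash: keep the `size` smallest hash values of the k-mer set.
    hs = sorted(int(hashlib.sha1(m.encode()).hexdigest(), 16)
                for m in kmers(seq, k))
    return set(hs[:size])

def minhash_jaccard(a, b, k=4, size=64):
    # Estimate Jaccard similarity of the two k-mer sets from their sketches.
    sa, sb = sketch(a, k, size), sketch(b, k, size)
    merged = sorted(sa | sb)[:size]
    inter = sum(1 for h in merged if h in sa and h in sb)
    return inter / len(merged)
```

    When `size` exceeds the union of k-mer sets the estimate is exact; for long reads the fixed-size sketch is what keeps memory and comparison cost independent of read length.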

  18. Thermal Studies of Zn(II), Cd(II) and Hg(II) Complexes of Some N-Alkyl-N-Phenyl-Dithiocarbamates

    Directory of Open Access Journals (Sweden)

    Peter A. Ajibade

    2012-07-01

    The thermal decomposition of Zn(II), Cd(II) and Hg(II) complexes of N-ethyl-N-phenyl and N-butyl-N-phenyl dithiocarbamates has been studied using thermogravimetric analysis (TGA) and differential scanning calorimetry (DSC). The products of the decomposition, at two different temperatures, were further characterized by scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDX). The results show that while the zinc and cadmium complexes undergo decomposition to form metal sulphides, and further undergo oxidation to form metal oxides as final products, the mercury complexes gave unstable volatiles as the final product.

  19. Research algorithm for synthesis of double conjugation optical systems in the Gauss region

    Directory of Open Access Journals (Sweden)

    A. B. Ostrun

    2014-01-01

    The article focuses on the research of variable-magnification optical systems of a sophisticated class, so-called double conjugation systems. When the magnification changes, they provide two pairs of fixed conjugate planes, namely object and image, as well as entrance and exit pupils. Similar systems are used in microscopy and in complex schemes where it is necessary to match the pupils of adjacent removable optical components. Synthesis of double conjugation systems in the Gauss region is not an easy task: to ensure complete immobility of the exit pupil, the system should contain three movable components or components with variable optical power. Analysis of the literature shows that the design of double conjugation optical systems in the paraxial region has been neglected; the available methods are not completely universal or suitable for automation. Based on the foregoing, the research and development of a universal method for the automated synthesis of double conjugation systems in the Gauss region is the objective of the present work. To achieve this goal a universal algorithm is used, based on the fact that the output coordinates of paraxial rays are multilinear functions of the optical powers of the surfaces and of the axial thicknesses between surfaces. This allows us to create and solve a system of multilinear equations in semi-automatic mode to achieve the chosen values of the paraxial characteristics. As a basic scheme for the synthesis, a five-component system was chosen with fixed outer components and three movable "internal" ones. The system was considered in two extreme states of the moving parts. Initial values of the axial thicknesses were taken from Hopkins' patent, and the optical powers of the five components were treated as unknowns. For the calculation, a system of five equations was created, which allowed us to obtain a certain back focal length, to provide the specified focal length and a fixed position of the exit pupil at a fixed entrance pupil. The scheme

  20. Angle Statistics Reconstruction: a robust reconstruction algorithm for Muon Scattering Tomography

    Science.gov (United States)

    Stapleton, M.; Burns, J.; Quillin, S.; Steer, C.

    2014-11-01

    Muon Scattering Tomography (MST) is a technique for using the scattering of cosmic ray muons to probe the contents of enclosed volumes. As a muon passes through material it undergoes multiple Coulomb scattering, where the amount of scattering is dependent on the density and atomic number of the material as well as the path length. Hence, MST has been proposed as a means of imaging dense materials, for instance to detect special nuclear material in cargo containers. Algorithms are required to generate an accurate reconstruction of the material density inside the volume from the muon scattering information, and some have already been proposed, most notably the Point of Closest Approach (PoCA) and Maximum Likelihood/Expectation Maximisation (MLEM) algorithms. However, whilst PoCA-based algorithms are easy to implement, they perform rather poorly in practice. Conversely, MLEM is complicated to implement and computationally intensive, and there is currently no published, fast and easily implementable algorithm that performs well in practice. In this paper, we first provide a detailed analysis of the source of inaccuracy in PoCA-based algorithms. We then motivate an alternative method, based on ideas first laid out by Morris et al., presenting and fully specifying an algorithm that performs well against simulations of realistic scenarios. We argue this new algorithm should be adopted by developers of Muon Scattering Tomography as an alternative to PoCA.
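
    PoCA itself reduces to a classical geometry problem: the closest approach between two skew lines, the muon's incoming and outgoing tracks, with the scattering "assigned" to the midpoint of the shortest connecting segment. A sketch using the standard closed-form solution:

```python
def poca(p1, d1, p2, d2):
    # Closest point between line p1 + t*d1 (incoming track) and line
    # p2 + s*d2 (outgoing track); returns the midpoint of the shortest
    # connecting segment, or None for (near-)parallel tracks.
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def add(a, b): return tuple(x + y for x, y in zip(a, b))
    def scale(a, t): return tuple(t * x for x in a)

    w = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w), dot(d2, w)
    denom = a * c - b * b
    if abs(denom) < 1e-12:          # parallel tracks: no unique PoCA
        return None
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    q1 = add(p1, scale(d1, t))      # closest point on the incoming track
    q2 = add(p2, scale(d2, s))      # closest point on the outgoing track
    return scale(add(q1, q2), 0.5)
```

    The paper's criticism applies downstream of this step: collapsing a muon's entire scattering history onto this single point is what makes PoCA-based density maps inaccurate.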

  1. Improved Road-Network-Flow Control Strategy Based on Macroscopic Fundamental Diagrams and Queuing Length in Connected-Vehicle Network

    Directory of Open Access Journals (Sweden)

    Xiaohui Lin

    2017-01-01

    The connected-vehicle network provides opportunities and conditions for improving traffic signal control, and macroscopic fundamental diagrams (MFD) can effectively control the road network at the macro level. This paper integrates the proposed real-time access to the number of moving vehicles and the maximum road queuing length in the connected-vehicle network. Moreover, when implementing a simple control strategy to limit the boundary flow of a road network based on the MFD, we determine in real time whether the maximum queuing length of each boundary section exceeds the road-safety queuing length and adjust the road-network influx rate in time to avoid the overflow phenomenon at the boundary sections. We established a road-network microtraffic simulation model in VISSIM software, taking a district as the experimental area, and determined the MFD of the region based on the number of moving vehicles and the weighted traffic volume of the road network. When the road network tended toward saturation, we implemented a simple control strategy with our algorithm limiting the boundary flow. Finally, we compared the traffic signal control indicators under three strategies: (1) no control, (2) boundary control, and (3) boundary control with a queue-limiting strategy. The results show that our proposed algorithm is better than the other two.
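
    The gating rule described can be sketched as a small controller: restrict the influx rate once the network accumulation reaches the MFD-critical value, and tighten further when any boundary queue exceeds its safe length. The proportional form and the floor value are illustrative assumptions, not the paper's tuning:

```python
def inflow_rate(n_vehicles, n_critical, max_queue, safe_queue,
                base=1.0, floor=0.1):
    # n_vehicles: current accumulation; n_critical: MFD-optimal accumulation.
    # max_queue / safe_queue: longest observed boundary queue vs. its safe limit.
    if n_vehicles < n_critical:
        rate = base                              # free-flow side of the MFD
    else:
        rate = base * n_critical / n_vehicles    # hold accumulation near optimum
    if max_queue > safe_queue:                   # overflow risk at a boundary
        rate *= safe_queue / max_queue
    return max(floor, rate)
```

    The second clause is the paper's addition over plain boundary control: without it, metered vehicles pile up on the boundary sections and spill back, which is the overflow phenomenon the queue check is meant to prevent.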

  2. Conservation and management of the Italian hare (Lepus corsicanus)

    Directory of Open Access Journals (Sweden)

    Francesco Riga

    2003-10-01

    Full Text Available The recent recognition of the specific status of the Italian hare (Lepus corsicanus) and the assessment of its distribution range are the most important actions for the conservation of an endemic taxon that had been believed extinct. On the peninsula the species has a discontinuous range, whose northern limit runs from the municipality of Manciano (GR) on the Tyrrhenian side to a line reaching the Gargano from the province of L'Aquila. In Sicily the distribution is relatively continuous, even in unprotected areas. Genetic data have confirmed its presence in Corsica. By contrast, on the island of Elba extensive surveys identified only specimens of L. europaeus. In peninsular Italy L. corsicanus often occurs in sympatry with populations of L. europaeus, whereas in Sicily the European hare has not established stable populations despite the release of many thousands of individuals. The ecological distribution of L. corsicanus and specific habitat analyses suggest that it is mainly adapted to Mediterranean-climate environments, although it also occurs at high altitudes (> 1,500 m a.s.l.). Preliminary relative-abundance data revealed differences between the peninsula and Sicily and between areas under different management regimes; a comparison between protected areas gave values of 5.54 and 11.73 ind./km², respectively. The qualitative and quantitative reduction and the fragmentation of hare habitat are potentially dangerous for population survival, causing local extinctions due to low population densities and inducing erosion of genetic variability and reduced individual fitness. The introduction of L. europaeus may be an important limiting factor, both through possible competition

  3. Surface Length 3D: an OsiriX plugin for measuring distances on surfaces

    Directory of Open Access Journals (Sweden)

    Alexandre Campos Moraes Amato

    Full Text Available Traditional medical imaging software based on DICOM offers several tools for measuring distance, area, and volume, but none of them can measure distances between points along a surface. The shortest path between points makes it possible to measure between vessel ostia, as in aortic aneurysms, and to assess the visceral vessels for surgical planning. The development of an OsiriX plugin for measuring distances on surfaces proved feasible. Validation of the tool is still required.

  4. Methyl 2-Benzamido-2-(1H-benzimidazol-1-ylmethoxy)acetate

    Directory of Open Access Journals (Sweden)

    Alami Anouar

    2012-09-01

    Full Text Available The heterocyclic carboxylic α-aminoester methyl 2-benzamido-2-(1H-benzimidazol-1-ylmethoxy)acetate is obtained by O-alkylation of N-benzoylated methyl α-azido glycinate with 1H-benzimidazol-1-ylmethanol.

  5. Efficient Maximum Likelihood Estimation for Pedigree Data with the Sum-Product Algorithm.

    Science.gov (United States)

    Engelhardt, Alexander; Rieger, Anna; Tresch, Achim; Mansmann, Ulrich

    2016-01-01

    We analyze data sets consisting of pedigrees with age at onset of colorectal cancer (CRC) as phenotype. The occurrence of familial clusters of CRC suggests the existence of a latent, inheritable risk factor. We aimed to compute the probability of a family possessing this risk factor as well as the hazard rate increase for these risk factor carriers. Due to the inheritability of this risk factor, the estimation necessitates a costly marginalization of the likelihood. We propose an improved EM algorithm by applying factor graphs and the sum-product algorithm in the E-step. This reduces the computational complexity from exponential to linear in the number of family members. Our algorithm is as precise as a direct likelihood maximization in a simulation study and a real family study on CRC risk. For 250 simulated families of size 19 and 21, the runtime of our algorithm is faster by a factor of 4 and 29, respectively. On the largest family (23 members) in the real data, our algorithm is 6 times faster. We introduce a flexible and runtime-efficient tool for statistical inference in biomedical event data with latent variables that opens the door for advanced analyses of pedigree data. © 2017 S. Karger AG, Basel.

  6. Change Detection Algorithms for Surveillance in Visual IoT: A Comparative Study

    Science.gov (United States)

    Akram, Beenish Ayesha; Zafar, Amna; Akbar, Ali Hammad; Wajid, Bilal; Chaudhry, Shafique Ahmad

    2018-01-01

    The VIoT (Visual Internet of Things) connects the virtual information world with real-world objects using sensors and pervasive computing. For video surveillance in VIoT, ChD (Change Detection) is a critical component. ChD algorithms identify regions of change in multiple images of the same scene recorded at different time intervals. This paper presents a performance comparison of histogram-thresholding and classification ChD algorithms using quantitative measures for video surveillance in VIoT, based on salient features of the datasets. The thresholding algorithms Otsu, Kapur, and Rosin and the classification methods k-means and EM (Expectation Maximization) were simulated in MATLAB using diverse datasets. For performance evaluation, the quantitative measures used include OSR (Overall Success Rate), YC (Yule's Coefficient), JC (Jaccard's Coefficient), execution time, and memory consumption. Experimental results showed that Kapur's algorithm performed better for both indoor and outdoor environments with illumination changes, shadowing, and medium to fast moving objects. However, it showed degraded performance for small object sizes with minor changes. The Otsu algorithm showed better results for indoor environments with slow to medium changes and nomadic object mobility. k-means showed good results in indoor environments with small object sizes producing slow change, no shadowing, and scarce illumination changes.
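
As a rough illustration of the thresholding family the study evaluates, here is a minimal Otsu-style change detector: it thresholds the absolute difference of two frames at the value that maximises between-class variance. This is a textbook formulation, not the study's MATLAB code:

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Return the threshold maximising between-class variance (Otsu)."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist.astype(float) / hist.sum()
    centers = 0.5 * (edges[:-1] + edges[1:])
    w0 = np.cumsum(p)                     # weight of class 0 at each cut
    mu = np.cumsum(p * centers)           # cumulative mean
    mu_t = mu[-1]                         # global mean
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    sigma_b = np.zeros_like(w0)
    sigma_b[valid] = (mu_t * w0[valid] - mu[valid]) ** 2 / (w0[valid] * w1[valid])
    return centers[np.argmax(sigma_b)]

def change_mask(frame_a, frame_b):
    """Binary change mask: Otsu-threshold the absolute difference image."""
    diff = np.abs(frame_a.astype(float) - frame_b.astype(float))
    return diff > otsu_threshold(diff.ravel())
```

Kapur's and Rosin's methods differ only in the criterion optimised over the same difference-image histogram.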

  7. A Neural Network: Family Competition Genetic Algorithm and Its Applications in Electromagnetic Optimization

    Directory of Open Access Journals (Sweden)

    P.-Y. Chen

    2009-01-01

    Full Text Available This study proposes a neural network-family competition genetic algorithm (NN-FCGA) for solving electromagnetic (EM) optimization and other general-purpose optimization problems. The NN-FCGA is a hybrid evolutionary algorithm that combines the good approximation performance of a neural network (NN) with the robust and effective optimum-search ability of the family competition genetic algorithm (FCGA) to accelerate the optimization process. In this study, the NN-FCGA is used to extract a set of optimal design parameters for two representative design examples: a multiple-section low-pass filter and a polygonal electromagnetic absorber. Our results demonstrate that the optimal electromagnetic properties given by the NN-FCGA are comparable to those of the FCGA, while requiring much less computation time; in addition, a well-trained NN model that can serve as a nonlinear approximator is obtained during the optimization process.

  8. EMS and process of identification and evaluation of environmental aspects: a proposal methodology

    International Nuclear Information System (INIS)

    Perotto, E.

    2006-01-01

    The Environmental Management System (EMS) is an instrument to manage the interaction between an organization and the environment. The scope of an EMS is to reduce environmental impact and to achieve improvements in overall performance. In particular, the focal point of EMS implementation is the method for identifying and assessing significant environmental aspects. The results of the literature and regulation reviews (Perotto 2006) have shown that rigorous, repeatable, and transparent methodologies do not exist. This paper presents a proposed method for identifying and assessing significant environmental aspects that has all three of these important characteristics. In particular, the proposed methodology for assessing aspects is based on criteria that are combined in a specific algorithm. For a correct application of the method, a preliminary rigorous approach to investigating the environment and the activities of the organization is necessary

  9. Relationship between photoreceptor outer segment length and visual acuity in diabetic macular edema.

    Science.gov (United States)

    Forooghian, Farzin; Stetson, Paul F; Meyer, Scott A; Chew, Emily Y; Wong, Wai T; Cukras, Catherine; Meyerle, Catherine B; Ferris, Frederick L

    2010-01-01

    The purpose of this study was to quantify photoreceptor outer segment (PROS) length in 27 consecutive patients (30 eyes) with diabetic macular edema using spectral domain optical coherence tomography and to describe the correlation between PROS length and visual acuity. Three spectral domain optical coherence tomography scans were performed on all eyes during each session using Cirrus HD-OCT. A prototype algorithm was developed for quantitative assessment of PROS length. Retinal thicknesses and PROS lengths were calculated for 3 parameters: macular grid (6 x 6 mm), central subfield (1 mm), and center foveal point (0.33 mm). Intrasession repeatability was assessed using the coefficient of variation and the intraclass correlation coefficient. The association of retinal thickness and PROS length with visual acuity was assessed using linear regression and Pearson correlation analyses. The main outcome measures include intrasession repeatability of macular parameters and correlation of these parameters with visual acuity. Mean retinal thickness and PROS length were 298 μm to 381 μm and 30 μm to 32 μm, respectively, for the macular parameters assessed in this study. Coefficient of variation values were 0.75% to 4.13% for retinal thickness and 1.97% to 14.01% for PROS length. Intraclass correlation coefficient values were 0.96 to 0.99 and 0.73 to 0.98 for retinal thickness and PROS length, respectively. Slopes from linear regression analyses assessing the association of retinal thickness and visual acuity were not significantly different from 0 (P > 0.20), whereas the slopes of PROS length and visual acuity were significantly different from 0 (P < 0.0005). Correlation coefficients for macular thickness and visual acuity ranged from 0.13 to 0.22, whereas coefficients for PROS length and visual acuity ranged from -0.61 to -0.81. Photoreceptor outer segment length can be quantitatively assessed using Cirrus HD-OCT. Although the intrasession repeatability of PROS

  10. The length-weight and length-length relationships of bluefish, Pomatomus saltatrix (Linnaeus, 1766) from Samsun, middle Black Sea region

    OpenAIRE

    Özpiçak, Melek; Saygın, Semra; Polat, Nazmi

    2017-01-01

    In this study, the length-weight relationship (LWR) and length-length relationship (LLR) of bluefish, Pomatomus saltatrix, were determined. A total of 125 specimens were sampled from Samsun, the middle Black Sea, in the 2014 fishing season. Bluefish specimens were collected monthly from commercial fishing boats from October to December 2014. All captured individuals (N=125) were measured to the nearest 0.1 cm for total, fork, and standard lengths. The weight of each fish (W) was recorded to the nearest 0.01 ...
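
The LWR estimated in such studies is the standard allometric model W = aL^b, conventionally fitted by linear regression on log-transformed data. A minimal sketch of that fit (an illustration of the standard fisheries approach, not the authors' code):

```python
import numpy as np

def fit_lwr(lengths, weights):
    """Fit W = a * L^b by regressing log(W) on log(L).

    Returns (a, b); b near 3 indicates isometric growth.
    """
    b, log_a = np.polyfit(np.log(lengths), np.log(weights), 1)
    return np.exp(log_a), b
```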

  11. A Probabilistic Analysis of the Nxt Forging Algorithm

    Directory of Open Access Journals (Sweden)

    Serguei Popov

    2016-12-01

    Full Text Available We discuss the forging algorithm of Nxt from a probabilistic point of view, and obtain explicit formulas and estimates for several important quantities, such as the probability that an account generates a block, the length of the longest sequence of consecutive blocks generated by one account, and the probability that one concurrent blockchain wins over another one. Also, we discuss some attack vectors related to splitting an account into many smaller ones.

  12. Fast algorithm for two-dimensional data table use in hydrodynamic and radiative-transfer codes

    International Nuclear Information System (INIS)

    Slattery, W.L.; Spangenberg, W.H.

    1982-01-01

    A fast algorithm for finding interpolated atomic data in irregular two-dimensional tables with differing materials is described. The algorithm is tested in a hydrodynamic/radiative transfer code and shown to be of comparable speed to interpolation in regularly spaced tables, which require no table search. The concepts presented are expected to have application in any situation with irregular vector lengths. Also, the procedures that were rejected either because they were too slow or because they involved too much assembly coding are described
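
The core operation in such a lookup is a fast index search on each irregular axis followed by interpolation. The sketch below uses binary search (`bisect`) and bilinear interpolation on a rectilinear grid with irregular spacing; it is a generic illustration of the problem, not the paper's optimized algorithm:

```python
import bisect

def interp2d_irregular(xs, ys, table, x, y):
    """Bilinear interpolation on an irregularly spaced rectilinear grid.

    xs, ys : strictly increasing coordinate lists
    table  : table[i][j] is the value at (xs[i], ys[j])
    The bisect call is the O(log n) table search that regularly
    spaced tables avoid.
    """
    i = min(max(bisect.bisect_right(xs, x) - 1, 0), len(xs) - 2)
    j = min(max(bisect.bisect_right(ys, y) - 1, 0), len(ys) - 2)
    tx = (x - xs[i]) / (xs[i + 1] - xs[i])
    ty = (y - ys[j]) / (ys[j + 1] - ys[j])
    v00, v01 = table[i][j], table[i][j + 1]
    v10, v11 = table[i + 1][j], table[i + 1][j + 1]
    return (v00 * (1 - tx) * (1 - ty) + v10 * tx * (1 - ty)
            + v01 * (1 - tx) * ty + v11 * tx * ty)
```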

  13. A speedup technique for (l, d)-motif finding algorithms

    Directory of Open Access Journals (Sweden)

    Dinh Hieu

    2011-03-01

    Full Text Available Abstract Background The discovery of patterns in DNA, RNA, and protein sequences has led to the solution of many vital biological problems. For instance, the identification of patterns in nucleic acid sequences has resulted in the determination of open reading frames, identification of promoter elements of genes, identification of intron/exon splicing sites, identification of SH RNAs, location of RNA degradation signals, identification of alternative splicing sites, etc. In protein sequences, patterns have proven to be extremely helpful in domain identification, location of protease cleavage sites, identification of signal peptides, protein interactions, determination of protein degradation elements, identification of protein trafficking elements, etc. Motifs are important patterns that are helpful in finding transcriptional regulatory elements, transcription factor binding sites, functional genomics, drug design, etc. As a result, numerous papers have been written to solve the motif search problem. Results Three versions of the motif search problem have been proposed in the literature: Simple Motif Search (SMS), (l, d)-motif search (or Planted Motif Search, PMS), and Edit-distance-based Motif Search (EMS). In this paper we focus on PMS. Two kinds of algorithms can be found in the literature for solving the PMS problem: exact and approximate. An exact algorithm always identifies all the motifs, whereas an approximate algorithm may fail to identify some or all of them. The exact version of the PMS problem has been shown to be NP-hard, and the exact algorithms proposed in the literature take time that is exponential in some of the underlying parameters. In this paper we propose a generic technique that can be used to speed up PMS algorithms. Conclusions We present a speedup technique that can be used on any PMS algorithm. We have tested our speedup technique on a number of algorithms. These experimental results show that our speedup technique is indeed very
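
For context, an exact (l, d)-PMS solver can be written by enumerating the d-neighbourhoods of the first sequence's l-mers and keeping the candidates that occur, with at most d mismatches, in every sequence. This brute-force baseline (not the paper's speedup technique) makes the exponential cost in l and d concrete:

```python
from itertools import combinations, product

ALPHABET = "ACGT"

def neighbors(kmer, d):
    """All strings within Hamming distance <= d of kmer."""
    out = {kmer}
    for positions in combinations(range(len(kmer)), d):
        for subs in product(ALPHABET, repeat=d):
            s = list(kmer)
            for p, c in zip(positions, subs):
                s[p] = c
            out.add("".join(s))
    return out

def occurs(motif, seq, d):
    """True if motif appears in seq with at most d mismatches."""
    l = len(motif)
    return any(sum(a != b for a, b in zip(motif, seq[i:i + l])) <= d
               for i in range(len(seq) - l + 1))

def planted_motif_search(seqs, l, d):
    """Exact (l, d)-PMS over the d-neighbourhoods of the first sequence."""
    candidates = set()
    for i in range(len(seqs[0]) - l + 1):
        candidates |= neighbors(seqs[0][i:i + l], d)
    return sorted(m for m in candidates
                  if all(occurs(m, s, d) for s in seqs))
```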

  14. A New Augmentation Based Algorithm for Extracting Maximal Chordal Subgraphs.

    Science.gov (United States)

    Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh

    2015-02-01

    A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In this paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. We experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.

  15. Development of 101 Gene-based Single Nucleotide Polymorphism Markers in Sea Cucumber, Apostichopus japonicus

    Directory of Open Access Journals (Sweden)

    Wei Lu

    2012-06-01

    Full Text Available Single nucleotide polymorphisms (SNPs) are currently the marker of choice in a variety of genetic studies. Using the high resolution melting (HRM) genotyping approach, 101 gene-based SNP markers were developed for Apostichopus japonicus, a sea cucumber species with economic significance for the aquaculture industry in East Asian countries. HRM analysis revealed that all the loci showed polymorphisms when evaluated using 40 A. japonicus individuals collected from a natural population. The minor allele frequency ranged from 0.035 to 0.489. The observed and expected heterozygosities ranged from 0.050 to 0.833 and 0.073 to 0.907, respectively. Thirteen loci were found to depart significantly from Hardy–Weinberg equilibrium (HWE) after Bonferroni corrections. Significant linkage disequilibrium (LD) was detected in one pair of markers. These SNP markers are expected to be useful for future quantitative trait loci (QTL) analysis, and to facilitate marker-assisted selection (MAS) in A. japonicus.

  16. A fast algorithm for identifying friends-of-friends halos

    Science.gov (United States)

    Feng, Y.; Modi, C.

    2017-07-01

    We describe a simple and fast algorithm for identifying friends-of-friends features and prove its correctness. The algorithm avoids unnecessary expensive neighbor queries, uses minimal memory overhead, and avoids slowdown in high over-density regions. We define our algorithm formally based on pair enumeration, a problem that has been heavily studied in fast 2-point correlation codes, and our reference implementation employs a dual KD-tree correlation function code. We construct features in a hierarchical tree structure and use a splay operation to reduce the average cost of identifying the root of a feature from O[log L] to O[1] (L is the size of a feature) without additional memory costs. This reduces the overall time complexity of merging trees from O[L log L] to O[L], reducing the number of operations per splay by orders of magnitude. We next introduce a pruning operation that skips merge operations between two fully self-connected KD-tree nodes. This improves the robustness of the algorithm, reducing the number of merge operations in high-density peaks from O[δ²] to O[δ]. We show that for cosmological data sets the algorithm eliminates more than half of the merge operations for typically used linking lengths b ∼ 0.2 (relative to the mean separation). Furthermore, our algorithm is extremely simple and easy to implement on top of an existing pair enumeration code, reusing the optimization effort that has been invested in fast correlation function codes.
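
The root-finding step that the splay operation accelerates is the familiar union-find "find"; path compression plays the same tree-flattening role. A naive FoF sketch using it, with quadratic pair enumeration standing in for the paper's dual KD-tree walk:

```python
import itertools
import math

class DisjointSet:
    """Union-find with path compression, flattening trees so that
    finding a feature's root is O(1) amortised."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, i):
        root = i
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[i] != root:        # path compression pass
            self.parent[i], i = root, self.parent[i]
        return root

    def union(self, i, j):
        self.parent[self.find(i)] = self.find(j)

def friends_of_friends(points, b):
    """Group points into FoF features with linking length b.

    Naive O(n^2) pair enumeration; a KD-tree would replace this loop
    in a production code.
    """
    ds = DisjointSet(len(points))
    for (i, p), (j, q) in itertools.combinations(enumerate(points), 2):
        if math.dist(p, q) <= b:
            ds.union(i, j)
    groups = {}
    for i in range(len(points)):
        groups.setdefault(ds.find(i), []).append(i)
    return list(groups.values())
```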

  17. Using genetic algorithms to optimise current and future health planning - the example of ambulance locations

    Directory of Open Access Journals (Sweden)

    Suzuki Hiroshi

    2010-01-01

    Full Text Available Abstract Background Ambulance response time is a crucial factor in patient survival. The number of emergency (EMS) cases requiring an ambulance is increasing due to changes in population demographics, which is lengthening ambulance response times to the emergency scene. This paper predicts EMS cases for 5-year intervals from 2020 to 2050 by correlating current EMS cases with demographic factors at the census-area level and with predicted population changes. It then applies a modified grouping genetic algorithm to compare current and future optimal locations and numbers of ambulances. Sets of potential locations were evaluated in terms of the (current and predicted) EMS case distances to those locations. Results Future EMS demand was predicted to increase by 2030 using the model (R² = 0.71). The optimal locations of ambulances based on future EMS cases were compared with current locations and with optimal locations modelled on current EMS case data. Optimising the ambulance station locations reduced the average response time by 57 seconds. Current and predicted future EMS demand at the modelled locations were calculated and compared. Conclusions The reallocation of ambulances to optimal locations improved response times and could contribute to higher survival rates from life-threatening medical events. Modelling EMS case 'demand' over census areas allows the data to be correlated with population characteristics and optimal 'supply' locations to be identified. Comparing current and future optimal scenarios allows more nuanced planning decisions to be made. This is a generic methodology that could be used to provide evidence in support of public health planning and decision making.
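
A grouping genetic algorithm like the one described needs a fitness function scoring a candidate set of station locations against the EMS case distribution. A minimal illustrative objective (the function name and the Euclidean metric are assumptions; the paper evaluates case-to-location distances, not necessarily this exact form):

```python
import math

def mean_response_distance(cases, stations):
    """Mean distance from each EMS case to its nearest station.

    cases, stations : lists of (x, y) coordinates.  A GA would
    minimise this value over candidate station sets.
    """
    return sum(min(math.dist(c, s) for s in stations)
               for c in cases) / len(cases)
```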

  18. Telomere length analysis.

    Science.gov (United States)

    Canela, Andrés; Klatt, Peter; Blasco, María A

    2007-01-01

    Most somatic cells of long-lived species undergo telomere shortening throughout life. Critically short telomeres trigger loss of cell viability in tissues, which has been related to alteration of tissue function and loss of regenerative capabilities in aging and aging-related diseases. Hence, telomere length is an important biomarker for aging and can be used in the prognosis of aging diseases. These facts highlight the importance of developing methods for telomere length determination that can be employed to evaluate telomere length during the human aging process. Telomere length quantification methods have improved greatly in accuracy and sensitivity since the development of the conventional telomeric Southern blot. Here, we describe the different methodologies recently developed for telomere length quantification, as well as their potential applications for human aging studies.

  19. Activity-Guided Isolation of Antioxidant Compounds from Rhizophora apiculata

    Directory of Open Access Journals (Sweden)

    Hongbin Xiao

    2012-09-01

    Full Text Available Rhizophora apiculata (R. apiculata) contains an abundance of biologically active compounds due to its special salt-tolerant living surroundings. In this study, the total phenolic content and antioxidant activities of various extracts and fractions of the stem of R. apiculata were investigated. Results indicated that the butanol fraction possesses the highest total phenolic content (181.84 mg GAE/g dry extract) with the strongest antioxidant abilities. Following in vitro antioxidant activity-guided phytochemical separation procedures, lyoniresinol-3α-O-β-arabinopyranoside (1), lyoniresinol-3α-O-β-rhamnoside (2), and afzelechin-3-O-L-rhamnopyranoside (3) were separated from the butanol fraction. These compounds showed more noticeable antioxidant activity than a BHT standard in the DPPH, ABTS and hydroxyl radical scavenging assays. HPLC analysis showed that among the different plant parts, the highest contents of 1–3 were located in the bark (0.068%, 0.066% and 0.011%, respectively). The results imply that R. apiculata might be a potential source of natural antioxidants, and that 1–3 are its antioxidant ingredients.

  20. Synthesis and Spectroscopic Analysis of Novel 1H-Benzo[d]imidazole Phenyl Sulfonylpiperazines

    Directory of Open Access Journals (Sweden)

    Amjad M. Qandil

    2012-05-01

    Full Text Available A group of benzimidazole analogs of sildenafil, the 3-benzimidazolyl-4-methoxy-phenylsulfonylpiperazines 2–4 and 3-benzimidazolyl-4-methoxy-N,N-dimethylbenzenesulfonamide (5), were efficiently synthesized. Compounds 2–5 were characterized by NMR and MS, and contrary to the reported mass spectra of sildenafil, the spectra of the piperazine-containing compounds 2–4 showed a novel fragmentation pattern leading to an m/z = 316 ion. A mechanism for the formation of this fragment was proposed.

  1. Polarization ray tracing in anisotropic optically active media. I. Algorithms

    International Nuclear Information System (INIS)

    McClain, S.C.; Hillman, L.W.; Chipman, R.A.

    1993-01-01

    Procedures for performing polarization ray tracing through birefringent media are presented in a form compatible with the standard methods of geometrical ray tracing. The birefringent materials treated include the following: anisotropic optically active materials such as quartz, non-optically active uniaxial materials such as calcite, and isotropic optically active materials such as mercury sulfide and organic liquids. Refraction and reflection algorithms are presented that compute both ray directions and wave directions. Methods for computing polarization modes, refractive indices, optical path lengths, and Fresnel transmission and reflection coefficients are also specified. A numerical example of these algorithms is given for analyzing the field of view of a quartz rotator. 37 refs., 3 figs

  2. Opposition-Based Memetic Algorithm and Hybrid Approach for Sorting Permutations by Reversals.

    Science.gov (United States)

    Soncco-Álvarez, José Luis; Muñoz, Daniel M; Ayala-Rincón, Mauricio

    2018-02-21

    Sorting unsigned permutations by reversals is a difficult problem; indeed, it was proved to be NP-hard by Caprara (1997). Because of its high complexity, many approximation algorithms to compute the minimal reversal distance were proposed, culminating in the currently best-known theoretical ratio of 1.375. In this article, two memetic algorithms to compute the reversal distance are proposed. The first one uses the technique of opposition-based learning, leading to an opposition-based memetic algorithm; the second one improves the previous algorithm by applying the heuristic of two-breakpoint elimination, leading to a hybrid approach. Several experiments were performed with one hundred randomly generated permutations, single benchmark permutations, and biological permutations. The results showed that the proposed OBMA and Hybrid-OBMA algorithms achieve the best results for practical cases, that is, for permutations of length up to 120. Also, Hybrid-OBMA improved the results of OBMA for permutations of length greater than or equal to 60. The applicability of our proposed algorithms was checked by processing permutations based on biological data, in which case OBMA gave the best average results for all instances.
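
The breakpoint heuristic mentioned above rests on counting breakpoints: since one reversal can remove at most two of them, half the breakpoint count lower-bounds the reversal distance. A minimal sketch of the standard definition (with the usual 0 and n+1 sentinels):

```python
def breakpoints(perm):
    """Number of breakpoints of a permutation of 1..n.

    With sentinels 0 and n+1 appended, adjacent entries differing by
    more than 1 form a breakpoint; b(pi)/2 lower-bounds the reversal
    distance that heuristics like the one in the paper try to reduce.
    """
    ext = [0] + list(perm) + [len(perm) + 1]
    return sum(abs(ext[i + 1] - ext[i]) != 1 for i in range(len(ext) - 1))
```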

  3. Assessing Long-Term Wind Conditions by Combining Different Measure-Correlate-Predict Algorithms: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, J.; Chowdhury, S.; Messac, A.; Hodge, B. M.

    2013-08-01

    This paper significantly advances the hybrid measure-correlate-predict (MCP) methodology, enabling it to account for variations of both wind speed and direction. The advanced hybrid MCP method uses the recorded data of multiple reference stations to estimate the long-term wind condition at a target wind plant site. The results show that the accuracy of the hybrid MCP method is highly sensitive to the combination of the individual MCP algorithms and reference stations. It was also found that the best combination of MCP algorithms varies based on the length of the correlation period.
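
The simplest individual MCP algorithm that such a hybrid can weight and combine is plain linear regression between the concurrent reference and target wind speed records, then applied to the long-term reference record. An illustrative sketch of that single-algorithm building block (not the paper's hybrid weighting or its speed-and-direction treatment):

```python
import numpy as np

def mcp_linear(ref_concurrent, target_concurrent, ref_longterm):
    """Linear-regression MCP: fit target = a*ref + b on the concurrent
    period, then predict the target site over the long-term record."""
    a, b = np.polyfit(ref_concurrent, target_concurrent, 1)
    return a * np.asarray(ref_longterm, float) + b
```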

  4. Observations in captivity on the seasonal body weight cycle and digestive efficiency of Pipistrellus kuhlii and Hypsugo savii (Chiroptera: Vespertilionidae)

    Directory of Open Access Journals (Sweden)

    Gianna Dondini

    2004-06-01

    Full Text Available Many bat species of cold-temperate climates are subject to seasonal variation in temperature and food availability. Building fat reserves during summer and autumn is therefore a physiological adaptation for spending the winter months in hibernation or for sustaining migration. During a study of bats in urban areas, two juvenile Kuhl's bats (Pipistrellus kuhlii, 2 females) and two juvenile Savi's bats (Hypsugo savii, 1 male and 1 female) were collected in 1997 in the urban area of Florence (central Italy). The bats were kept in a cage of 50x40x30 cm at a temperature between 17° and 22° C. Every day they were weighed on an electronic balance before being fed mealworms (Tenebrio molitor). Digestive efficiency, calculated on dry matter, was about 90% for both species. In about six months P. kuhlii and H. savii increased in weight by 450% and 280% on average, respectively. Deposition of fat reserves seemed to be faster in P. kuhlii than in H. savii. Both species showed a circannual cycle in weight variation.

  5. Three-dimensional ophthalmic optical coherence tomography with a refraction correction algorithm

    Science.gov (United States)

    Zawadzki, Robert J.; Leisser, Christoph; Leitgeb, Rainer; Pircher, Michael; Fercher, Adolf F.

    2003-10-01

    We built an optical coherence tomography (OCT) system with a rapid scanning optical delay (RSOD) line, which allows probing the full axial eye length. The system produces three-dimensional (3D) data sets that are used to generate 3D tomograms of a model eye. The raw tomographic data were processed by an algorithm based on Snell's law to correct the interface positions. The Zernike polynomial representation of the interfaces allows quantitative wave aberration measurements. 3D images of our results are presented to illustrate the capabilities of the system and the performance of the algorithm. The system allows us to measure intra-ocular distances.
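
The refraction-correction step is an application of Snell's law in vector form at each interface. A standard sketch of that vector formula (a generic illustration, not the authors' implementation):

```python
import numpy as np

def refract(incident, normal, n1, n2):
    """Refract a unit ray at an interface using vector-form Snell's law.

    incident : unit direction of the incoming ray
    normal   : unit surface normal, pointing against the incoming ray
    n1, n2   : refractive indices before and after the interface
    Returns the refracted unit direction, or None on total internal
    reflection.
    """
    i = np.asarray(incident, float)
    n = np.asarray(normal, float)
    eta = n1 / n2
    cos_i = -np.dot(n, i)
    sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
    if sin2_t > 1.0:
        return None                      # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * i + (eta * cos_i - cos_t) * n
```

At normal incidence the ray direction is unchanged regardless of the index step.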

  6. Breadth-First Search-Based Single-Phase Algorithms for Bridge Detection in Wireless Sensor Networks

    Science.gov (United States)

    Akram, Vahid Khalilpour; Dagdeviren, Orhan

    2013-01-01

    Wireless sensor networks (WSNs) are promising technologies for exploring harsh environments, such as oceans, wild forests, volcanic regions and outer space. Since sensor nodes may have limited transmission range, application packets may be transmitted by multi-hop communication. Thus, connectivity is a very important issue. A bridge is a critical edge whose removal breaks the connectivity of the network. Hence, it is crucial to detect bridges and take preventive measures. Since sensor nodes are battery-powered, services running on nodes should consume little energy. In this paper, we propose energy-efficient and distributed bridge detection algorithms for WSNs. Our algorithms run in a single phase and are integrated with the Breadth-First Search (BFS) algorithm, which is a popular routing algorithm. Our first algorithm is an extended version of Milic's algorithm, designed to reduce the message length. Our second algorithm is novel and uses ancestral knowledge to detect bridges. We explain the operation of the algorithms and analyze their proof of correctness and their message, time, space and computational complexities. To evaluate their practical importance, we provide testbed experiments and extensive simulations. We show that our proposed algorithms provide lower resource consumption, with energy savings of up to 5.5 times. PMID:23845930
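
For comparison with the BFS-based distributed approach in the paper, the classical centralized way to find bridges is a DFS with low-link values (Tarjan's method). The definition of a bridge is the same; only the traversal and the distribution model differ:

```python
def find_bridges(adj):
    """Return the bridges of an undirected simple graph.

    adj[u] lists the neighbours of vertex u.  An edge (u, v) is a
    bridge iff no back edge from v's subtree reaches u or above,
    i.e. low[v] > disc[u].
    """
    n = len(adj)
    disc = [-1] * n        # DFS discovery times
    low = [0] * n          # lowest discovery time reachable
    bridges = []
    timer = [0]

    def dfs(u, parent_edge):
        disc[u] = low[u] = timer[0]
        timer[0] += 1
        for v in adj[u]:
            if disc[v] == -1:
                dfs(v, (u, v))
                low[u] = min(low[u], low[v])
                if low[v] > disc[u]:
                    bridges.append((u, v))
            elif (v, u) != parent_edge:     # back edge (not the tree edge)
                low[u] = min(low[u], disc[v])

    for s in range(n):
        if disc[s] == -1:
            dfs(s, None)
    return bridges
```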

  7. Algebraic dynamics algorithm: Numerical comparison with Runge-Kutta algorithm and symplectic geometric algorithm

    Institute of Scientific and Technical Information of China (English)

    WANG ShunJin; ZHANG Hua

    2007-01-01

    Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with the Runge-Kutta algorithm and the symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.
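    For a linear system the Nth-order algebraic-dynamics step reduces to a truncated Taylor expansion of the exact propagator, which makes the dissipation claim easy to probe numerically. The sketch below (illustrative parameters, not the paper's 12 test models) compares an 8th-order truncated-Taylor step with classical RK4 on a harmonic oscillator, whose energy is exactly conserved:

```python
# Harmonic oscillator x'' = -x as the linear system y' = A y,
# y = (x, v), A = [[0, 1], [-1, 0]]; total energy x^2 + v^2 is conserved.

def taylor_step(y, h, order):
    """One step of the truncated-Taylor propagator sum_{k<=N} (hA)^k / k! * y,
    which is what an Nth-order step reduces to for a linear system
    (an illustrative reduction, not the paper's general construction)."""
    term = [y[0], y[1]]
    out = [y[0], y[1]]
    for k in range(1, order + 1):
        # multiply the previous term by hA/k, where A(x, v) = (v, -x)
        term = [h * term[1] / k, -h * term[0] / k]
        out = [out[0] + term[0], out[1] + term[1]]
    return out

def rk4_step(y, h, f):
    """Classical fourth-order Runge-Kutta step."""
    k1 = f(y)
    k2 = f([y[i] + 0.5 * h * k1[i] for i in range(2)])
    k3 = f([y[i] + 0.5 * h * k2[i] for i in range(2)])
    k4 = f([y[i] + h * k3[i] for i in range(2)])
    return [y[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(2)]

f = lambda y: [y[1], -y[0]]
h, steps = 0.1, 1000
yt, yr = [1.0, 0.0], [1.0, 0.0]
for _ in range(steps):
    yt = taylor_step(yt, h, 8)   # 8th-order truncated Taylor
    yr = rk4_step(yr, h, f)      # 4th-order Runge-Kutta

print(abs(yt[0]**2 + yt[1]**2 - 1.0))  # Taylor: tiny energy drift
print(abs(yr[0]**2 + yr[1]**2 - 1.0))  # RK4: visible numerical dissipation
```

    After 1000 steps the RK4 energy has drifted measurably while the 8th-order Taylor step conserves it to near machine precision, consistent with the dissipation the abstract attributes to Runge-Kutta.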

  9. Analysis of a specific pre-operative risk model for valve surgery and its relationship with the length of stay in the intensive care unit

    Directory of Open Access Journals (Sweden)

    Felipe Montes Pena

    2010-12-01

    Full Text Available OBJECTIVES: Prolonged length of stay after cardiac surgery is associated with poor immediate outcomes and increased costs. The objective of this study was to evaluate the predictive power of the Ambler score for the length of stay in the intensive care unit. METHODS: Retrospective cohort study of data collected from 110 patients who underwent isolated or combined valve replacement surgery. The additive and logistic Ambler scores were obtained and their predictive performance was assessed using the receiver operating characteristic (ROC) curve. A stay of up to 3 days in the intensive care unit was considered normal. The areas under the curves of the additive and logistic models were compared using the Hanley-McNeil test. RESULTS: The mean intensive care unit stay was 4.2 days. Sixty-three patients were male. The logistic model showed areas under the ROC curve of 0.73 and 0.79 for stays >3 days and

  10. Experimental Arterial Hypertension and Pregnancy in Rats: Repercussions on Body Weight, Body Length and Organs of Offspring

    Directory of Open Access Journals (Sweden)

    Rogério Dias

    2000-10-01

    Full Text Available Purpose: to study the repercussions of arterial hypertension on the body weight and body length, and on the liver and brain weight, of newborn rats. Methods: 82 virgin Wistar rats of reproductive age were used. After induction of experimental arterial hypertension (Goldblatt I model: one kidney, one clip), the rats were randomly assigned to four experimental groups: control (C), handled (M), nephrectomized (N) and hypertensive (H), and then further randomized into 8 subgroups, four pregnant and four non-pregnant. After mating of the four pregnant groups, the newborns formed the groups RN-C, RN-M, RN-N and RN-H (control, handled, nephrectomized and hypertensive, respectively). Results: the RN-N and RN-H groups showed the lowest newborn body weights (means 3.64 ± 0.50 and 3.37 ± 0.44, respectively) and body lengths (3.89 ± 0.36 and 3.68 ± 0.32, respectively) compared with their controls (5.40 ± 0.51 and 4.95 ± 0.23, respectively). The RN-H newborns had the lowest liver weights (0.22 ± 0.03) of all groups studied, and the RN-N and RN-H newborns had the lowest brain weights (0.16 ± 0.01 and 0.16 ± 0.05, respectively) compared with their controls (0.22 ± 0.04). Conclusion: arterial hypertension reduced the body weight, body length, liver weight and brain weight of the newborns.

  11. Properties of the center of gravity as an algorithm for position measurements: Two-dimensional geometry

    CERN Document Server

    Landi, Gregorio

    2003-01-01

    The center of gravity as an algorithm for position measurements is analyzed for a two-dimensional geometry. Several mathematical consequences of discretization for various types of detector arrays are extracted. Arrays with rectangular, hexagonal, and triangular detectors are analytically studied, and tools are given to simulate their discretization properties. Special signal distributions free of discretization error are isolated. It is proved that some crosstalk spreads are able to eliminate the center-of-gravity discretization error for any signal distribution. Simulations, adapted to the CMS electromagnetic calorimeter and to a triangular detector array, are provided for energy and position reconstruction algorithms with a finite number of detectors.

  12. Encke-Beta Predictor for Orion Burn Targeting and Guidance

    Science.gov (United States)

    Robinson, Shane; Scarritt, Sara; Goodman, John L.

    2016-01-01

    The state vector prediction algorithm selected for Orion on-board targeting and guidance is known as the Encke-Beta method. Encke-Beta uses a universal anomaly (beta) as the independent variable, valid for circular, elliptical, parabolic, and hyperbolic orbits. The variable, related to the change in eccentric anomaly, results in integration steps that cover smaller arcs of the trajectory at or near perigee, when velocity is higher. Some burns in the EM-1 and EM-2 mission plans are much longer than burns executed with the Apollo and Space Shuttle vehicles. Burn length, as well as hyperbolic trajectories, has driven the use of the Encke-Beta numerical predictor by the predictor/corrector guidance algorithm in place of legacy analytic thrust and gravity integrals.

  13. The Algorithm for Algorithms: An Evolutionary Algorithm Based on Automatic Designing of Genetic Operators

    Directory of Open Access Journals (Sweden)

    Dazhi Jiang

    2015-01-01

    Full Text Available At present there is a wide range of evolutionary algorithms available to researchers and practitioners. Despite the great diversity of these algorithms, virtually all of them share one feature: they have been manually designed. A fundamental question is “are there any algorithms that can design evolutionary algorithms automatically?” A more complete form of the question is “can a computer construct an algorithm which will generate algorithms according to the requirements of a problem?” In this paper, a novel evolutionary algorithm based on automatic designing of genetic operators is presented to address these questions. The resulting algorithm not only explores solutions in the problem space like most traditional evolutionary algorithms do, but also automatically generates genetic operators in the operator space. In order to verify the performance of the proposed algorithm, comprehensive experiments on 23 well-known benchmark optimization problems are conducted. The results show that the proposed algorithm can outperform the standard differential evolution algorithm in terms of convergence speed and solution accuracy, which shows that algorithms designed automatically by computers can compete with algorithms designed by human beings.
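    The baseline in that comparison, standard differential evolution (DE/rand/1/bin with hand-designed mutation and crossover operators), can be sketched as follows; the benchmark, population size and control parameters here are illustrative, not the paper's experimental setup:

```python
import random

def sphere(x):
    """Benchmark objective f(x) = sum(x_i^2); global minimum 0 at the origin."""
    return sum(v * v for v in x)

def differential_evolution(f, dim=5, pop_size=20, F=0.5, CR=0.9,
                           bounds=(-5.0, 5.0), generations=200, seed=1):
    """Standard DE/rand/1/bin with fixed, hand-designed genetic operators."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        for i in range(pop_size):
            # mutation: combine three distinct individuals other than i
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # guarantee at least one mutated gene
            # binomial crossover between target vector and mutant vector
            trial = [pop[a][k] + F * (pop[b][k] - pop[c][k])
                     if (rng.random() < CR or k == j_rand) else pop[i][k]
                     for k in range(dim)]
            if f(trial) <= f(pop[i]):  # greedy one-to-one selection
                pop[i] = trial
    return min(pop, key=f)

best = differential_evolution(sphere)
print(sphere(best))  # a value very close to 0
```

    Minimizing the sphere function this way typically drives the objective many orders of magnitude below its starting value within 200 generations.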

  15. Greedy Algorithm for the Construction of Approximate Decision Rules for Decision Tables with Many-Valued Decisions

    KAUST Repository

    Azad, Mohammad; Moshkov, Mikhail; Zielosko, Beata

    2016-01-01

    The paper is devoted to the study of a greedy algorithm for the construction of approximate decision rules. This algorithm is applicable to decision tables with many-valued decisions, where each row is labeled with a set of decisions. For a given row, we should find a decision from the set attached to this row. We consider bounds on the precision of this algorithm relative to the length of rules. To illustrate the proposed approach, we study a problem of recognition of labels of points in the plane. This paper also contains results of experiments with modified decision tables from the UCI Machine Learning Repository.
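    The abstract does not spell the algorithm out; a minimal greedy sketch in the same spirit might, for a chosen row, repeatedly add the condition that best excludes rows not labeled with the target decision (the table, decision sets and stopping rule below are invented for illustration):

```python
def greedy_rule(table, decisions, row_idx, alpha=0.0):
    """Greedily add conditions (attribute == value of the chosen row) until
    the fraction of matching rows NOT labeled with the chosen decision
    drops to alpha (alpha = 0 gives an exact rule)."""
    row = table[row_idx]
    # pick the decision from this row's set that is most common in the table
    d = max(decisions[row_idx], key=lambda c: sum(c in ds for ds in decisions))

    def bad(rows):  # rows matched by the rule so far but not labeled with d
        return [i for i in rows if d not in decisions[i]]

    matching = list(range(len(table)))
    attrs = list(range(len(row)))
    conds = []
    while len(bad(matching)) > alpha * len(table) and attrs:
        best = min(attrs, key=lambda a: len(
            bad([i for i in matching if table[i][a] == row[a]])))
        attrs.remove(best)
        matching = [i for i in matching if table[i][best] == row[best]]
        conds.append((best, row[best]))
    return conds, d

# Toy table: 2 attributes, each row labeled with a SET of decisions
table = [(0, 0), (0, 1), (1, 0), (1, 1)]
decisions = [{1}, {1, 2}, {2}, {3}]
print(greedy_rule(table, decisions, 0))  # → ([(0, 0)], 1)
```

    Here one condition (attribute 0 equals 0) already restricts the matches to rows whose decision sets contain the chosen decision 1, so the greedy loop stops after a single step.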

  16. On the normalization of the minimum free energy of RNAs by sequence length.

    Science.gov (United States)

    Trotta, Edoardo

    2014-01-01

    The minimum free energy (MFE) of ribonucleic acids (RNAs) increases at an apparent linear rate with sequence length. Simple indices, obtained by dividing the MFE by the number of nucleotides, have been used for a direct comparison of the folding stability of RNAs of various sizes. Although this normalization procedure has been used in several studies, the relationship between normalized MFE and length has not yet been investigated in detail. Here, we demonstrate that the variation of MFE with sequence length is not linear and is significantly biased by the mathematical formula used for the normalization procedure. For this reason, the normalized MFEs strongly decrease as hyperbolic functions of length and produce unreliable results when applied for the comparison of sequences with different sizes. We also propose a simple modification of the normalization formula that corrects the bias enabling the use of the normalized MFE for RNAs longer than 40 nt. Using the new corrected normalized index, we analyzed the folding free energies of different human RNA families showing that most of them present an average MFE density more negative than expected for a typical genomic sequence. Furthermore, we found that a well-defined and restricted range of MFE density characterizes each RNA family, suggesting the use of our corrected normalized index to improve RNA prediction algorithms. Finally, in coding and functional human RNAs the MFE density appears scarcely correlated with sequence length, consistent with a negligible role of thermodynamic stability demands in determining RNA size.
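    The hyperbolic bias is easy to see: if MFE grows roughly linearly with length, MFE(L) ≈ aL + b with a nonzero intercept b, then the naive index MFE(L)/L = a + b/L decays hyperbolically in L even though the per-nucleotide stability a is constant. A toy illustration (made-up coefficients, not the paper's fitted values, with intercept subtraction as one possible correction):

```python
# Assumed linear model MFE(L) ≈ a*L + b; the coefficients are made up and
# only serve to expose the hyperbolic bias of the naive per-nucleotide index.
a, b = -0.3, 10.0

def mfe(length):
    return a * length + b

for L in (50, 100, 500, 2000):
    naive = mfe(L) / L            # equals a + b/L: decays hyperbolically in L
    corrected = (mfe(L) - b) / L  # one possible fix: remove the intercept first
    print(L, round(naive, 4), round(corrected, 4))
```

    Subtracting the intercept before dividing removes the length dependence in this linear toy model, which is the kind of correction the authors argue for.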

  17. Automatic boiling water reactor loading pattern design using ant colony optimization algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Wang, C.-D. [Department of Engineering and System Science, National Tsing Hua University, 101, Section 2 Kuang Fu Road, Hsinchu 30013, Taiwan (China); Nuclear Engineering Division, Institute of Nuclear Energy Research, No. 1000, Wenhua Rd., Jiaan Village, Longtan Township, Taoyuan County 32546, Taiwan (China)], E-mail: jdwang@iner.gov.tw; Lin Chaung [Department of Engineering and System Science, National Tsing Hua University, 101, Section 2 Kuang Fu Road, Hsinchu 30013, Taiwan (China)

    2009-08-15

    An automatic boiling water reactor (BWR) loading pattern (LP) design methodology was developed using the rank-based ant system (RAS), a variant of the ant colony optimization (ACO) algorithm. To reduce design complexity, only the fuel assemblies (FAs) of one-eighth of the core positions were determined using the RAS algorithm, and the corresponding FAs were then loaded into the other parts of the core. Heuristic information was adopted to exclude the selection of inappropriate FAs, which reduces the search space and, thus, the computation time. Once an LP was determined, the Haling cycle length, beginning-of-cycle (BOC) shutdown margin (SDM), and Haling end-of-cycle (EOC) maximum fraction of limit for critical power ratio (MFLCPR) were calculated using the SIMULATE-3 code and used to evaluate the LP when updating the RAS pheromone. The developed design methodology was demonstrated using FAs of a reference cycle of the BWR6 nuclear power plant. The results show that the designed LP can be obtained within reasonable computation time and has a longer cycle length than that of the original design.
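    A rank-based ant system can be sketched on a toy combinatorial problem; here a tiny travelling-salesman instance stands in for the loading-pattern search, with tour length playing the role of the SIMULATE-3 evaluation (all parameters are illustrative):

```python
import math, random

# Rank-based ant system (RAS) on a toy travelling-salesman instance.
# Tour length stands in for the SIMULATE-3 core evaluation; the points,
# pheromone settings and ant counts are all illustrative.
rng = random.Random(3)
pts = [(0, 0), (1, 0), (2, 0), (2, 1), (0, 1)]
n = len(pts)
dist = [[math.dist(p, q) or 1e-9 for q in pts] for p in pts]  # guard the diagonal
tau = [[1.0] * n for _ in range(n)]                # pheromone trails
alpha, beta, rho, w, n_ants = 1.0, 2.0, 0.5, 3, 8  # RAS parameters

def tour_len(t):
    return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

def build_tour():
    """Construct one tour by roulette selection on pheromone * heuristic."""
    tour, left = [0], set(range(1, n))
    while left:
        i = tour[-1]
        weights = [(j, tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta)
                   for j in left]
        r, acc = rng.random() * sum(wt for _, wt in weights), 0.0
        for j, wt in weights:
            acc += wt
            if acc >= r:
                break
        tour.append(j)
        left.discard(j)
    return tour

best = None
for _ in range(50):
    ants = sorted((build_tour() for _ in range(n_ants)), key=tour_len)
    if best is None or tour_len(ants[0]) < tour_len(best):
        best = ants[0]
    for i in range(n):               # evaporation
        for j in range(n):
            tau[i][j] *= 1.0 - rho
    # rank-weighted deposit: best-so-far tour plus the top w-1 ants
    for rank, t in enumerate([best] + ants[:w - 1]):
        amount = (w - rank) / tour_len(t)
        for i in range(n):
            u, v = t[i], t[(i + 1) % n]
            tau[u][v] += amount
            tau[v][u] += amount

print(tour_len(best))  # the optimal tour of this rectangle has length 6.0
```

    In RAS, pheromone is deposited only by the best-so-far solution and the top-ranked ants of each iteration, weighted by rank, which focuses the search faster than the original ant system.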

  18. Wavelength converter placement for different RWA algorithms in wavelength-routed all-optical networks

    Science.gov (United States)

    Chu, Xiaowen; Li, Bo; Chlamtac, Imrich

    2002-07-01

    Sparse wavelength conversion and appropriate routing and wavelength assignment (RWA) algorithms are the two key factors in improving the blocking performance in wavelength-routed all-optical networks. It has been shown that the optimal placement of a limited number of wavelength converters in an arbitrary mesh network is an NP-complete problem. Various heuristic algorithms have been proposed in the literature, most of which assume that a static routing and random wavelength assignment RWA algorithm is employed. However, existing work shows that fixed-alternate routing and dynamic routing RWA algorithms can achieve much better blocking performance. Our study in this paper further demonstrates that wavelength converter placement and RWA algorithms are closely related, in the sense that a well designed wavelength converter placement mechanism for a particular RWA algorithm might not work well with a different RWA algorithm. Therefore, wavelength converter placement and RWA have to be considered jointly. The objective of this paper is to investigate the wavelength converter placement problem under the fixed-alternate routing algorithm and the least-loaded routing algorithm. Under the fixed-alternate routing algorithm, we propose a heuristic algorithm called the Minimum Blocking Probability First (MBPF) algorithm for wavelength converter placement. Under the least-loaded routing algorithm, we propose a heuristic converter placement algorithm called the Weighted Maximum Segment Length (WMSL) algorithm. The objective of the converter placement algorithm is to minimize the overall blocking probability. Extensive simulation studies have been carried out over three typical mesh networks, including the 14-node NSFNET, the 19-node EON and the 38-node CTNET. We observe that the proposed algorithms not only outperform existing wavelength converter placement algorithms by a large margin, but also achieve almost the same performance compared with full wavelength

  19. Microstructure, length, and connection of limbic tracts in normal human brain development

    Directory of Open Access Journals (Sweden)

    Qiaowen eYu

    2014-08-01

    Full Text Available The cingulum and fornix play an important role in memory, attention, spatial orientation and emotional functions. Both the microstructure and the length of these limbic tracts can be affected by mental disorders such as Alzheimer's disease, depression, autism, anxiety, and schizophrenia. To date, there has been little systematic characterization of their microstructure, length and functional connectivity in normally developing brains. In this study, diffusion tensor imaging (DTI) and resting-state functional MRI (rs-fMRI) data from 65 normally developing right-handed subjects from birth to young adulthood were acquired. After the cingulate gyrus part of the cingulum (cgc), the hippocampal part of the cingulum (cgh) and the fornix (fx) were traced with DTI tractography, absolute and normalized tract lengths and DTI-derived metrics including fractional anisotropy and mean, axial and radial diffusivity were measured for the traced limbic tracts. A free water elimination (FWE) algorithm was adopted to improve the accuracy of the DTI-derived metrics. The role of these limbic tracts in the functional network at birth and in adulthood was explored. We found a logarithmic age-dependent trajectory for FWE-corrected DTI metric changes, with a fast increase of microstructural integrity from birth to 2 years of age followed by a slow increase up to 25 years of age. The normalized tract length of the cgc increases with age, while no significant relationship with age was found for the normalized tract lengths of the cgh and fx. Stronger microstructural integrity was found on the left side compared with the right side. With integrated DTI and rs-fMRI, the key connectional role of the cgc and cgh in the default mode network (DMN) was confirmed as early as birth. Systematic characterization of the length and FWE-corrected DTI metrics of the limbic tracts offers insight into their morphological and microstructural developmental trajectories. These trajectories may serve as a normal reference for pediatric patients with

  20. Estimating model error covariances in nonlinear state-space models using Kalman smoothing and the expectation-maximisation algorithm

    KAUST Repository

    Dreano, Denis

    2017-04-05

    Specification and tuning of errors from dynamical models are important issues in data assimilation. In this work, we propose an iterative expectation-maximisation (EM) algorithm to estimate the model error covariances using classical extended and ensemble versions of the Kalman smoother. We show that, for additive model errors, the estimate of the error covariance converges. We also investigate other forms of model error, such as parametric or multiplicative errors. We show that additive Gaussian model error is able to compensate for non-additive sources of error in the algorithms we propose. We also demonstrate the limitations of the extended version of the algorithm and recommend the use of the more robust and flexible ensemble version. This article is a proof of concept of the methodology with the Lorenz-63 attractor. We developed an open-source Python library to enable future users to apply the algorithm to their own nonlinear dynamical models.
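    The core EM iteration can be sketched on a scalar linear-Gaussian state-space model, where the Kalman smoother gives the E-step in closed form (a toy stand-in for the Lorenz-63 experiments; the model, its parameters and the priors below are illustrative):

```python
import random

# Toy scalar linear-Gaussian state-space model:
#   x_t = a*x_{t-1} + w_t,  w_t ~ N(0, Q)   (Q is the unknown model error)
#   y_t = x_t + v_t,        v_t ~ N(0, R)   (a and R are assumed known)
rng = random.Random(0)
a, R, Q_true, T = 0.9, 0.5, 1.0, 3000

x, ys = 0.0, []
for _ in range(T):
    x = a * x + rng.gauss(0.0, Q_true ** 0.5)
    ys.append(x + rng.gauss(0.0, R ** 0.5))

def em_q(y, q, iters=50):
    """EM estimate of Q: Kalman filter + RTS smoother (E-step), then a
    closed-form variance update (M-step), iterated to convergence."""
    for _ in range(iters):
        m, p = 0.0, 10.0                     # vague prior on the state
        mf, pf, mp, pp = [], [], [], []      # filtered / predicted moments
        for obs in y:                        # --- forward Kalman filter ---
            m_pred, p_pred = a * m, a * a * p + q
            k = p_pred / (p_pred + R)
            m = m_pred + k * (obs - m_pred)
            p = (1.0 - k) * p_pred
            mf.append(m); pf.append(p); mp.append(m_pred); pp.append(p_pred)
        ms, ps = mf[-1], pf[-1]
        s = 0.0                              # sums E[(x_t - a*x_{t-1})^2 | y]
        for t in range(len(y) - 1, 0, -1):   # --- backward RTS smoother ---
            j = pf[t - 1] * a / pp[t]
            ms_prev = mf[t - 1] + j * (ms - mp[t])
            ps_prev = pf[t - 1] + j * j * (ps - pp[t])
            cross = j * ps                   # lag-one smoothed covariance
            s += ((ms - a * ms_prev) ** 2
                  + ps + a * a * ps_prev - 2.0 * a * cross)
            ms, ps = ms_prev, ps_prev
        q = s / (len(y) - 1)                 # --- M-step ---
    return q

q_hat = em_q(ys, q=0.1)
print(q_hat)  # settles near the generating value Q_true = 1.0
```

    Each iteration smooths the data under the current Q and then re-estimates Q from the smoothed increments; the estimate settles near the variance that generated the data.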

  1. Glycosylation of Vanillin and 8-Nordihydrocapsaicin by Cultured <em>Eucalyptus perriniana</em> Cells

    Directory of Open Access Journals (Sweden)

    Naoji Kubota

    2012-05-01

    Full Text Available Glycosylation of vanilloids such as vanillin and 8-nordihydrocapsaicin by cultured plant cells of <em>Eucalyptus perriniana</em> was studied. Vanillin was converted into vanillin 4-<em>O</em>-β-D-glucopyranoside, vanillyl alcohol, and 4-<em>O</em>-β-D-glucopyranosylvanillyl alcohol by <em>E. perriniana</em> cells. Incubation of cultured <em>E. perriniana</em> cells with 8-nordihydrocapsaicin gave 8-nordihydrocapsaicin 4-<em>O</em>-β-D-glucopyranoside and 8-nordihydrocapsaicin 4-<em>O</em>-β-D-gentiobioside.

  2. Evaluating and comparing algorithms for respiratory motion prediction

    International Nuclear Information System (INIS)

    Ernst, F; Dürichen, R; Schlaefer, A; Schweikard, A

    2013-01-01

    In robotic radiosurgery, it is necessary to compensate for systematic latencies arising from target tracking and mechanical constraints. This compensation is usually achieved by means of an algorithm which computes the future target position. In most scientific works on respiratory motion prediction, only one or two algorithms are evaluated on a limited amount of very short motion traces. The purpose of this work is to gain more insight into the real world capabilities of respiratory motion prediction methods by evaluating many algorithms on an unprecedented amount of data. We have evaluated six algorithms, the normalized least mean squares (nLMS), recursive least squares (RLS), multi-step linear methods (MULIN), wavelet-based multiscale autoregression (wLMS), extended Kalman filtering, and ε-support vector regression (SVRpred) methods, on an extensive database of 304 respiratory motion traces. The traces were collected during treatment with the CyberKnife (Accuray, Inc., Sunnyvale, CA, USA) and feature an average length of 71 min. Evaluation was done using a graphical prediction toolkit, which is available to the general public, as is the data we used. The experiments show that the nLMS algorithm—which is one of the algorithms currently used in the CyberKnife—is outperformed by all other methods. This is especially true in the case of the wLMS, the SVRpred, and the MULIN algorithms, which perform much better. The nLMS algorithm produces a relative root mean square (RMS) error of 75% or less (i.e., a reduction in error of 25% or more when compared to not doing prediction) in only 38% of the test cases, whereas the MULIN and SVRpred methods reach this level in more than 77%, the wLMS algorithm in more than 84% of the test cases. Our work shows that the wLMS algorithm is the most accurate algorithm and does not require parameter tuning, making it an ideal candidate for clinical implementation. Additionally, we have seen that the structure of a patient
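    As an illustration of the kind of predictor being compared, a generic normalized LMS one-step-ahead predictor can be sketched as follows (a textbook nLMS, not the CyberKnife implementation; the synthetic trace, tap count and step size are made up):

```python
import math

def nlms_predict(signal, taps=8, mu=0.5, eps=1e-6):
    """Generic normalized LMS one-step-ahead predictor.
    Returns predictions of signal[t+1] made from the last `taps` samples."""
    w = [0.0] * taps
    preds = []
    for t in range(len(signal) - 1):
        x = [signal[t - i] if t - i >= 0 else 0.0 for i in range(taps)]
        y_hat = sum(wi * xi for wi, xi in zip(w, x))
        preds.append(y_hat)
        err = signal[t + 1] - y_hat
        norm = eps + sum(xi * xi for xi in x)  # normalization term
        w = [wi + mu * err * xi / norm for wi, xi in zip(w, x)]
    return preds

# Synthetic noise-free breathing-like trace: 0.26 Hz sine sampled at 26 Hz
trace = [math.sin(2.0 * math.pi * 0.26 * t / 26.0) for t in range(2000)]
preds = nlms_predict(trace)

# Relative RMS error on the second half (after the adaptive transient),
# compared with simply holding the last sample (no prediction).
errs = [(p - s) ** 2 for p, s in zip(preds, trace[1:])]
hold = [(u - v) ** 2 for u, v in zip(trace[1:], trace)]
half = len(errs) // 2
rms_pred = math.sqrt(sum(errs[half:]) / (len(errs) - half))
rms_hold = math.sqrt(sum(hold[half:]) / (len(hold) - half))
print(rms_pred / rms_hold)  # well below 1: the predictor helps
```

    The relative RMS error is the ratio the study reports: values below 1 mean the predictor beats simply holding the last sample.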

  4. Development of a Pedestrian Indoor Navigation System Based on Multi-Sensor Fusion and Fuzzy Logic Estimation Algorithms

    Science.gov (United States)

    Lai, Y. C.; Chang, C. C.; Tsai, C. M.; Lin, S. Y.; Huang, S. C.

    2015-05-01

    This paper presents a pedestrian indoor navigation system based on multi-sensor fusion and fuzzy logic estimation algorithms. The proposed navigation system is a self-contained dead reckoning navigation system, meaning that no outside signal is required. In order to achieve this self-contained capability, a portable and wearable inertial measurement unit (IMU) has been developed. Its sensors are low-cost inertial sensors, an accelerometer and a gyroscope, based on micro electro-mechanical systems (MEMS). There are two types of IMU modules, handheld and waist-mounted. The low-cost MEMS sensors suffer from various errors due to manufacturing imperfections and other effects. Therefore, a sensor calibration procedure based on the scalar calibration and least squares methods has been introduced in this study to improve the accuracy of the inertial sensors. With the calibrated data acquired from the inertial sensors, the step length and strength of the pedestrian are estimated by the multi-sensor fusion and fuzzy logic estimation algorithms. The developed multi-sensor fusion algorithm provides the number of walking steps and the strength of each step in real time. The estimated walking amount and strength per step are then fed into the proposed fuzzy logic estimation algorithm to estimate the step lengths of the user. Since the walking length and direction are both required for dead reckoning navigation, the walking direction is calculated by integrating the angular rate acquired by the gyroscope of the developed IMU module. Both the walking length and direction are calculated on the IMU module and transmitted to a smartphone via Bluetooth to perform the dead reckoning navigation, which runs on a self-developed app. Due to the error accumulation of dead reckoning navigation, a particle filter and a pre-loaded map of the indoor environment have been applied to the app of the proposed navigation system to extend its
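    The dead-reckoning core described above reduces to advancing the position by one estimated step length along the integrated heading per detected step; a minimal sketch (step detection, the fuzzy step-length estimator and the particle filter are out of scope, and the step data are invented):

```python
import math

def dead_reckon(start, steps):
    """Integrate (step_length, heading) pairs into a 2-D position.
    Heading is in radians, 0 = north (y axis), pi/2 = east (x axis)."""
    x, y = start
    for length, heading in steps:
        x += length * math.sin(heading)
        y += length * math.cos(heading)
    return x, y

# Four 0.7 m steps heading north, then four heading east (made-up data)
pos = dead_reckon((0.0, 0.0),
                  [(0.7, 0.0)] * 4 + [(0.7, math.pi / 2)] * 4)
print(pos)  # ≈ (2.8, 2.8)
```

    Any heading bias compounds with every step, which is exactly why the system adds a particle filter constrained by the indoor map.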

  5. Length-weight and length-length relationships of common carp (Cyprinus carpio L.) in the middle and southern Iraq provinces

    Science.gov (United States)

    Al-jebory, Taymaa A.; Das, Simon K.; Usup, Gires; Bakar, Y.; Al-saadi, Ali H.

    2018-04-01

    In this study, length-weight and length-length relationships of common carp (Cyprinus carpio L.) in the middle and southern Iraq provinces were determined. Fish specimens were procured from seven provinces from July to December 2015. Both negative and positive allometric growth patterns were observed: total length (TL) ranged from 25.60 cm to 33.53 cm and body weight (BW) from 700 g to 1423 g, while the lowest and highest "b" values of 1.03 and 3.54 were recorded in group F and group C, respectively. The Fulton condition factor (K) ranged from 2.57 to 4.94, and the relative condition factor (Kn) from 0.95 to 1.01. A linear relationship between total length (TL) and standard length (SL) was obtained among the provinces, with "b" values ranging from 0.10 to 0.93 and correlation coefficients (r2) from 0.02 to 0.97. This research could serve as a guide for studies of the ecology and biology of common carp (Cyprinus carpio L.) in the middle and southern Iraq provinces.
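    Length-weight relationships of this kind are conventionally modelled as W = aL^b and fitted by linear regression on log-transformed data, with b ≈ 3 indicating isometric growth; a sketch with synthetic values (not the study's measurements):

```python
import math

def fit_length_weight(lengths_cm, weights_g):
    """Fit W = a * L^b by least squares on log W = log a + b * log L."""
    n = len(lengths_cm)
    xs = [math.log(l) for l in lengths_cm]
    ys = [math.log(wt) for wt in weights_g]
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = math.exp(my - b * mx)
    return a, b  # b ≈ 3 indicates isometric growth

# Synthetic, exactly isometric data: W = 0.01 * L^3 (not the study's fish)
L = [25.6, 28.0, 30.5, 33.5]
W = [0.01 * l ** 3 for l in L]
a, b = fit_length_weight(L, W)
print(round(a, 4), round(b, 3))  # recovers a = 0.01, b = 3.0
```

    Fulton's condition factor K = 100·W/L³ (W in g, L in cm) then follows directly from the same measurements.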

  6. A quantum algorithm for Viterbi decoding of classical convolutional codes

    Science.gov (United States)

    Grice, Jon R.; Meyer, David A.

    2015-07-01

    We present a quantum Viterbi algorithm (QVA) with better than classical performance under certain conditions. In this paper, the proposed algorithm is applied to decoding classical convolutional codes, for instance, large constraint length and short decode frames. Other applications of the classical Viterbi algorithm where is large (e.g., speech processing) could experience significant speedup with the QVA. The QVA exploits the fact that the decoding trellis is similar to the butterfly diagram of the fast Fourier transform, with its corresponding fast quantum algorithm. The tensor-product structure of the butterfly diagram corresponds to a quantum superposition that we show can be efficiently prepared. The quantum speedup is possible because the performance of the QVA depends on the fanout (number of possible transitions from any given state in the hidden Markov model) which is in general much less than . The QVA constructs a superposition of states which correspond to all legal paths through the decoding lattice, with phase as a function of the probability of the path being taken given received data. A specialized amplitude amplification procedure is applied one or more times to recover a superposition where the most probable path has a high probability of being measured.
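    The classical algorithm being sped up is standard dynamic programming over the trellis; a minimal sketch on a toy two-state hidden Markov model (all probabilities invented) is:

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Return the most probable state path for an observation sequence
    (log-domain dynamic programming over the decoding trellis)."""
    V = [{s: math.log(start_p[s]) + math.log(emit_p[s][obs[0]])
          for s in states}]
    back = []
    for o in obs[1:]:
        col, ptr = {}, {}
        for s in states:
            prev, score = max(
                ((p, V[-1][p] + math.log(trans_p[p][s])) for p in states),
                key=lambda t: t[1])
            col[s] = score + math.log(emit_p[s][o])
            ptr[s] = prev
        V.append(col)
        back.append(ptr)
    last = max(states, key=lambda s: V[-1][s])
    path = [last]
    for ptr in reversed(back):   # backtrack through the stored pointers
        path.append(ptr[path[-1]])
    return path[::-1]

states = ("clear", "noisy")
start = {"clear": 0.6, "noisy": 0.4}
trans = {"clear": {"clear": 0.7, "noisy": 0.3},
         "noisy": {"clear": 0.4, "noisy": 0.6}}
emit = {"clear": {"0": 0.9, "1": 0.1},
        "noisy": {"0": 0.2, "1": 0.8}}
print(viterbi("0011", states, start, trans, emit))
# → ['clear', 'clear', 'noisy', 'noisy']
```

    The classical cost scales with the frame length times the transitions explored per trellis stage, which is the factor the quantum version attacks through the fanout.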

  7. AES Encryption Algorithm Optimization Based on 64-bit Processor Android Platform

    Directory of Open Access Journals (Sweden)

    ZHAO Jun

    2017-06-01

    Full Text Available An algorithm implemented on a mobile phone differs from one on a PC: it must require little storage space and low power consumption. The standard AES S-box design uses a lookup table and has high complexity and high power consumption, so it needs to be optimized for use in mobile phones. In our optimized AES encryption algorithm, the packet length is expanded to 256 bits, which increases the security of the algorithm; the lookup table is replaced by the affine transformation based on inversion, which reduces the storage space; and the operation is changed into 16-bit input and 64-bit output by merging the SubBytes, ShiftRows, MixColumns and AddRoundKey steps, which improves the operational efficiency of the algorithm. The experimental results show that our algorithm not only greatly enhances the encryption strength, but also maintains high computing efficiency.

  8. IMPROVED ESTIMATION OF FIBER LENGTH FROM 3-DIMENSIONAL IMAGES

    Directory of Open Access Journals (Sweden)

    Joachim Ohser

    2013-03-01

    Full Text Available A new method is presented for estimating the specific fiber length from 3D images of macroscopically homogeneous fiber systems. The method is based on a discrete version of the Crofton formula, where local knowledge from 3x3x3-pixel configurations of the image data is exploited. It is shown that the relative error resulting from the discretization of the outer integral of the Crofton formula amounts to at most 1.2%. An algorithmic implementation of the method is simple, and both the runtime and the memory requirements are low. The estimation is significantly improved by considering 3x3x3-pixel configurations instead of 2x2x2, as already studied in the literature.

  9. Prey preference of the predatory mite <em>Balaustium</em> sp. under controlled conditions

    Directory of Open Access Journals (Sweden)

    Muñoz Karen

    2009-04-01

    Full Text Available

    The prey preference of <em>Balaustium</em> sp., a natural enemy of several arthropod pests and native to the Sabana de Bogotá, was evaluated. Individuals of <em>Balaustium</em> sp. were placed independently in experimental units built from rose-plant leaflets, and the number of prey consumed was recorded. In this way the preference of the three mobile stages of the predatory mite <em>Balaustium</em> sp. for different ages of three prey species was determined. The prey species and ages studied were: eggs, nymphs and adults of <em>Trialeurodes vaporariorum</em>; eggs, nymphs and adults of <em>Tetranychus urticae</em>; and first- and second-instar larvae and adults of <em>Frankliniella occidentalis</em>. The least developed stages were preferred, although adults of the predator showed a marked ability to consume adults of <em>T. vaporariorum</em>. The prey preferred by the larvae of <em>Balaustium</em> sp. was eggs of <em>T. urticae</em>, with a consumption proportion of 0.54 of the eggs offered; the deutonymphs of the predator chose eggs of <em>T. vaporariorum</em> (0.537) or of <em>T. urticae</em> (0.497), and the adults of <em>Balaustium</em> sp. preferred eggs of <em>T. vaporariorum</em> (0.588).

  10. Ultrasound-Assisted Extraction of Carnosic Acid and Rosmarinic Acid Using Ionic Liquid Solution from <em>Rosmarinus officinalis</em>

    Directory of Open Access Journals (Sweden)

    Chunjian Zhao

    2012-09-01

    Full Text Available Ionic liquid based, ultrasound-assisted extraction was successfully applied to the extraction of the phenolcarboxylic acids carnosic acid and rosmarinic acid from <em>Rosmarinus officinalis</em>. Eight ionic liquids, with different cations and anions, were investigated in this work, and [C8mim]Br was selected as the optimal solvent. Ultrasound extraction parameters, including soaking time, solid–liquid ratio, ultrasound power and time, and the number of extraction cycles, were examined in single-factor experiments, and the main influencing factors were optimized by response surface methodology. Compared with traditional reference extraction methods, the proposed approach demonstrated higher efficiency and shorter extraction time, offering a new alternative for the extraction of carnosic acid and rosmarinic acid from <em>R. officinalis</em>. Ionic liquids are considered green solvents and show great potential for the ultrasound-assisted extraction of key chemicals from medicinal plants.

  11. A novel algorithm for fast grasping of unknown objects using C-shape configuration

    Science.gov (United States)

    Lei, Qujiang; Chen, Guangming; Meijer, Jonathan; Wisse, Martijn

    2018-02-01

    Increasing grasping efficiency is very important for robots grasping unknown objects, especially in unfamiliar environments. To achieve this, a new algorithm is proposed based on the C-shape configuration. Specifically, the geometric model of the under-actuated gripper used is approximated as a C-shape. To obtain an appropriate graspable position, this C-shape configuration is fitted to the geometric model of an unknown object, which is constructed from a single-view partial point cloud. To examine the algorithm in simulation, commonly used motion planners are compared, and the planner with the highest number of solved runs, the lowest computing time and the shortest path length is chosen to execute the grasps found by the grasping algorithm. The simulation results demonstrate that excellent grasping efficiency is achieved by adopting our algorithm. To validate the algorithm, experiments were carried out using a UR5 robot arm and an under-actuated gripper. The experimental results show that steady grasping actions were obtained. Hence, this research provides a novel algorithm for fast grasping of unknown objects.

  12. An automated A-value measurement tool for accurate cochlear duct length estimation.

    Science.gov (United States)

    Iyaniwura, John E; Elfarnawany, Mai; Ladak, Hanif M; Agrawal, Sumit K

    2018-01-22

    There has been renewed interest in the cochlear duct length (CDL) for preoperative cochlear implant electrode selection and postoperative generation of patient-specific frequency maps. The CDL can be estimated by measuring the A-value, which is defined as the length between the round window and the furthest point on the basal turn. Unfortunately, there is significant intra- and inter-observer variability when these measurements are made clinically. The objective of this study was to develop an automated A-value measurement algorithm to improve accuracy and eliminate observer variability. Clinical and micro-CT images of 20 cadaveric cochleae were acquired. The micro-CT of one sample was chosen as the atlas, and A-value fiducials were placed onto that image. Image registration (rigid affine and non-rigid B-spline) was applied between the atlas and the 19 remaining clinical CT images. The registration transform was applied to the A-value fiducials, and the A-value was then automatically calculated for each specimen. High-resolution micro-CT images of the same 19 specimens were used to measure the gold-standard A-values for comparison against the manual and automated methods. The registration algorithm had excellent qualitative overlap between the atlas and target images. The automated method eliminated the observer variability and the systematic underestimation by experts. Manual measurement of the A-value on clinical CT had a mean error of 9.5 ± 4.3% compared to micro-CT, and this improved to an error of 2.7 ± 2.1% using the automated algorithm. Both the automated and manual methods correlated significantly with the gold-standard micro-CT A-values (r = 0.70). An automated A-value measurement tool using atlas-based registration methods was successfully developed and validated. The automated method eliminated the observer variability and improved accuracy as compared to manual measurements by experts. This open-source tool has the potential to benefit

  13. Heartbeat Cycle Length Detection by a Ballistocardiographic Sensor in Atrial Fibrillation and Sinus Rhythm

    Directory of Open Access Journals (Sweden)

    Matthias Daniel Zink

    2015-01-01

    Full Text Available Background. Heart rate monitoring is especially interesting in patients with atrial fibrillation (AF) and is routinely performed by ECG. A ballistocardiography (BCG) foil is an unobtrusive sensor for mechanical vibrations. We tested the correlation of heartbeat cycle length detection by a novel algorithm for a BCG foil against an ECG in AF and sinus rhythm (SR). Methods. In 22 patients we obtained BCG and synchronized ECG recordings before and after cardioversion and examined the correlation between heartbeat characteristics. Results. We analyzed a total of 4317 heartbeats during AF and 2445 during SR, with a correlation between ECG and BCG of r=0.70 (95% CI 0.68–0.71, P<0.0001) during AF and r=0.75 (95% CI 0.73–0.77, P<0.0001) during SR. By adding a quality index, artifacts could be reduced and the correlation increased for AF to 0.76 (95% CI 0.74–0.77, P<0.0001, n=3468) and for SR to 0.85 (95% CI 0.83–0.86, P<0.0001, n=2176). Conclusion. Heartbeat cycle length measurement by our novel algorithm for the BCG foil is feasible during SR and AF, offering new possibilities for unobtrusive heart rate monitoring. This trial is registered with IRB registration number EK205/11 and clinical trials registration number NCT01779674.
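    The headline numbers are Pearson correlations between beat-to-beat cycle lengths, improved by a quality index that discards artifact-laden beats. The sketch below uses synthetic cycle-length data and a made-up median-deviation quality criterion; the paper's actual BCG beat-detection algorithm is not reproduced.

    ```python
    # Pearson correlation between ECG and BCG cycle lengths, before and
    # after a simple quality-index filter. All data and the rejection
    # rule are synthetic illustrations, not the study's algorithm.
    import math, random

    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        num = sum((a - mx) * (b - my) for a, b in zip(x, y))
        den = math.sqrt(sum((a - mx) ** 2 for a in x)
                        * sum((b - my) ** 2 for b in y))
        return num / den

    random.seed(1)
    ecg = [800 + random.gauss(0, 50) for _ in range(500)]   # ECG cycle lengths, ms
    bcg = [c + random.gauss(0, 30) for c in ecg]            # BCG estimate + sensor noise
    for i in random.sample(range(500), 25):                 # 5% motion artifacts
        bcg[i] += random.choice((-1, 1)) * random.uniform(300, 500)

    r_all = pearson(ecg, bcg)

    # quality index (made up): reject beats far from the overall BCG median
    med = sorted(bcg)[len(bcg) // 2]
    good = [i for i in range(500) if abs(bcg[i] - med) < 250]
    r_filtered = pearson([ecg[i] for i in good],
                         [bcg[i] for i in good])
    print(r_all, r_filtered)
    ```

    The filtered correlation exceeds the raw one, mirroring the study's improvement from r=0.70/0.75 to 0.76/0.85 once low-quality beats are excluded.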

  14. Empirical and Statistical Evaluation of the Effectiveness of Four Lossless Data Compression Algorithms

    Directory of Open Access Journals (Sweden)

    N. A. Azeez

    2017-04-01

    Full Text Available Data compression is the process of reducing the size of a file to effectively reduce storage space and communication cost. The evolution of technology and the digital age have led to an unparalleled usage of digital files in the current decade. This usage has resulted in an increase in the amount of data transmitted via various channels of data communication, prompting the need to examine current lossless data compression algorithms and check their level of effectiveness, so as to maximally reduce the bandwidth required for the communication and transfer of data. Four lossless data compression algorithms were selected for implementation: the Lempel-Ziv-Welch algorithm, the Shannon-Fano algorithm, the Adaptive Huffman algorithm and Run-Length Encoding. The choice of these algorithms was based on their similarities, particularly in application areas. Their levels of efficiency and effectiveness were evaluated using a set of predefined performance evaluation metrics, namely compression ratio, compression factor, compression time, saving percentage, entropy and code efficiency. The algorithms were implemented in the NetBeans Integrated Development Environment using Java as the programming language, and a statistical analysis comparing the four algorithms was performed using Boxplot and ANOVA
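    The evaluation metrics named in the record are straightforward to compute. A sketch using Run-Length Encoding, one of the four algorithms studied (the study's Java/NetBeans implementation is not reproduced; the sample string is made up):

    ```python
    # Run-Length Encoding plus four of the record's evaluation metrics:
    # compression ratio, compression factor, saving percentage, and
    # source entropy (bits per symbol).
    import math
    from collections import Counter

    def rle_encode(data: str) -> str:
        out, i = [], 0
        while i < len(data):
            j = i
            while j < len(data) and data[j] == data[i]:
                j += 1                      # extend the current run
            out.append(f"{j - i}{data[i]}")
            i = j
        return "".join(out)

    def metrics(original: str, compressed: str):
        ratio = len(compressed) / len(original)   # compression ratio
        factor = len(original) / len(compressed)  # compression factor
        saving = (1 - ratio) * 100                # saving percentage
        counts = Counter(original)
        entropy = -sum((c / len(original)) * math.log2(c / len(original))
                       for c in counts.values())
        return ratio, factor, saving, entropy

    text = "aaaaaaaabbbbcccccccc"
    enc = rle_encode(text)
    print(enc, metrics(text, enc))
    ```

    RLE shines on long runs like this (ratio 0.3); on text without runs the "count + symbol" pairs can expand the data, which is why the study compares several algorithms per application area.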

  15. Incubator embedded cell culture imaging system (EmSight) based on Fourier ptychographic microscopy.

    Science.gov (United States)

    Kim, Jinho; Henley, Beverley M; Kim, Charlene H; Lester, Henry A; Yang, Changhuei

    2016-08-01

    Multi-day tracking of cells in culture systems can provide valuable information in bioscience experiments. We report the development of a cell culture imaging system, named EmSight, which incorporates multiple compact Fourier ptychographic microscopes with a standard multiwell imaging plate. The system is housed in an incubator and presently incorporates six microscopes. By using identical low-magnification objective lenses as the objective and the tube lens, the EmSight is configured as a 1:1 imaging system, providing large field-of-view (FOV) imaging onto a low-cost CMOS imaging sensor. The EmSight improves the image resolution by capturing a series of images of the sample at varying illumination angles; the instrument reconstructs a higher-resolution image using the iterative Fourier ptychographic algorithm. In addition to providing high-resolution brightfield and phase imaging, the EmSight is also capable of fluorescence imaging at the native resolution of the objectives. We characterized the system using a phase Siemens star target, and show a four-fold improved coherent resolution (synthetic NA of 0.42) and a depth of field of 0.2 mm. To conduct live, long-term dopaminergic neuron imaging, we cultured ventral midbrain from mice expressing eGFP driven by the tyrosine hydroxylase promoter. The EmSight system tracked movements of dopaminergic neurons over a 21-day period.

  16. A density based algorithm to detect cavities and holes from planar points

    Science.gov (United States)

    Zhu, Jie; Sun, Yizhong; Pang, Yueyong

    2017-12-01

    Delaunay-based shape reconstruction algorithms are widely used in approximating the shape of a set of planar points. However, these algorithms cannot ensure the optimality of the varied reconstructed cavity and hole boundaries. This inadequate reconstruction can be primarily attributed to the lack of an efficient mathematical formulation for the two structures (hole and cavity). In this paper, we develop an efficient algorithm for generating cavities and holes from planar points. The algorithm yields the final boundary based on an iterative removal of triangles from the Delaunay triangulation. Our algorithm is divided into two steps, namely rough and refined shape reconstruction. The rough shape reconstruction is controlled by a relative parameter. Based on the rough result, the refined shape reconstruction aims to detect holes and pure cavities. A cavity or hole is conceptualized as a structure in which a low-density region is surrounded by a high-density region. With this structure, cavities and holes are characterized by a mathematical formulation called the compactness of a point, formed by the length variation of the edges incident to the point in the Delaunay triangulation. The boundaries of cavities and holes are then found by locating a sharp gradient change in the compactness over the point set. An experimental comparison with other shape reconstruction approaches shows that the proposed algorithm accurately yields the boundaries of cavities and holes under varying point set densities and distributions.

  17. Antioxidant Profile of <em>Trifolium pratense</em> L.

    Directory of Open Access Journals (Sweden)

    Heidy Schwartsova

    2012-09-01

    Full Text Available In order to examine the antioxidant properties of five different extracts of <em>Trifolium pratense</em> L. (Leguminosae) leaves, various assays measuring free radical scavenging ability were carried out: 1,1-diphenyl-2-picrylhydrazyl, hydroxyl, superoxide anion and nitric oxide radical scavenging capacity tests and a lipid peroxidation assay. In all of the tests, only the H2O and (to some extent) the EtOAc extracts showed a potent antioxidant effect compared with BHT and BHA, well-known synthetic antioxidants. In addition, <em>in vivo</em> experiments were conducted on the antioxidant systems (activities of GSHPx, GSHR, Px, CAT and XOD, GSH content and intensity of LPx) in liver homogenate and blood of mice after treatment with extracts of <em>T. pratense</em> leaves, alone or in combination with CCl4. The total phenolic and flavonoid contents of the examined extracts were also determined, together with the presence of the selected flavonoids quercetin, luteolin, apigenin, naringenin and kaempferol, which were studied using an HPLC-DAD technique. The HPLC-DAD analysis showed a noticeable content of natural products, according to which the examined <em>Trifolium pratense</em> species may well be regarded as a promising new source of bioactive natural compounds that can be used both as a food supplement and as a remedy.

  18. Shoot cutting length and substrate types on vegetative propagation of atroveran

    Directory of Open Access Journals (Sweden)

    Larissa Corrêa do Bomfim Costa

    2007-08-01

    Full Text Available The vegetative propagation of medicinal species has attracted increasing agronomic interest, since it is the starting point and a basic tool for any cultivation on a commercial scale. This work aimed to determine the most suitable shoot cutting length and substrate type for the vegetative propagation of atroveran (<em>Ocimum selloi</em>). Under greenhouse conditions with intermittent mist, two cutting lengths (10 and 20 cm) and three substrates (washed sand, carbonized rice hulls and the commercial substrate Plantmax®) were tested in a randomized block design with four replicates and five cuttings per plot. At thirty-five days, the rooting percentage, the length of the longest root (cm) and the dry biomass of leaves and roots (mg) were evaluated. The results indicated that vegetative propagation of atroveran by cuttings is viable, since mean rooting was above 70%. Seedlings obtained from 20 cm cuttings showed greater leaf and root dry biomass, although cutting length did not affect rooting percentage or root length. Substrate type had no effect on the development of the cuttings. Production of atroveran seedlings from 20 cm cuttings is recommended, using any of the three substrates tested.

  19. An Improved Fast Flocking Algorithm with Obstacle Avoidance for Multiagent Dynamic Systems

    Directory of Open Access Journals (Sweden)

    Jialiang Wang

    2014-01-01

    Full Text Available Flocking behavior is a common phenomenon in nature, seen in flocks of birds and schools of fish. In order to make agents effectively avoid obstacles and quickly form a flock moving toward the destination point, this paper proposes a fast multiagent obstacle avoidance (FMOA) algorithm. FMOA is formulated according to whether the flock has formed: if it has not, agents avoid obstacles while moving toward the target; otherwise, the agents have reached the lattice state and only need to avoid obstacles, ignoring the direction of the target. The experimental results show that the proposed FMOA algorithm has better performance in terms of flocking path length. Furthermore, the proposed FMOA algorithm is applied to the formation flying of quad-rotor helicopters. In contrast with other technologies for localizing quad-rotor helicopters, this paper constructs a smart environment by deploying wireless sensor network (WSN) nodes and using the proposed localization algorithm. Finally, the FMOA algorithm is used to conduct formation flying of these quad-rotor helicopters in the smart environment.

  20. Effective noise-suppressed and artifact-reduced reconstruction of SPECT data using a preconditioned alternating projection algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Li, Si; Xu, Yuesheng, E-mail: yxu06@syr.edu [Guangdong Provincial Key Laboratory of Computational Science, School of Mathematics and Computational Sciences, Sun Yat-sen University, Guangzhou 510275 (China); Zhang, Jiahan; Lipson, Edward [Department of Physics, Syracuse University, Syracuse, New York 13244 (United States); Krol, Andrzej; Feiglin, David [Department of Radiology, SUNY Upstate Medical University, Syracuse, New York 13210 (United States); Schmidtlein, C. Ross [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065 (United States); Vogelsang, Levon [Carestream Health, Rochester, New York 14608 (United States); Shen, Lixin [Guangdong Provincial Key Laboratory of Computational Science, School of Mathematics and Computational Sciences, Sun Yat-sen University, Guangzhou 510275, China and Department of Mathematics, Syracuse University, Syracuse, New York 13244 (United States)

    2015-08-15

    Purpose: The authors have recently developed a preconditioned alternating projection algorithm (PAPA) with total variation (TV) regularizer for solving the penalized-likelihood optimization model for single-photon emission computed tomography (SPECT) reconstruction. This algorithm belongs to a novel class of fixed-point proximity methods. The goal of this work is to investigate how PAPA performs while dealing with realistic noisy SPECT data, to compare its performance with more conventional methods, and to address issues with TV artifacts by proposing a novel form of the algorithm invoking high-order TV regularization, denoted as HOTV-PAPA, which has been explored and studied extensively in the present work. Methods: Using Monte Carlo methods, the authors simulate noisy SPECT data from two water cylinders; one contains lumpy “warm” background and “hot” lesions of various sizes with Gaussian activity distribution, and the other is a reference cylinder without hot lesions. The authors study the performance of HOTV-PAPA and compare it with PAPA using first-order TV regularization (TV-PAPA), the Panin–Zeng–Gullberg one-step-late method with TV regularization (TV-OSL), and an expectation–maximization algorithm with Gaussian postfilter (GPF-EM). The authors select penalty-weights (hyperparameters) by qualitatively balancing the trade-off between resolution and image noise separately for TV-PAPA and TV-OSL. However, the authors arrived at the same penalty-weight value for both of them. The authors set the first penalty-weight in HOTV-PAPA equal to the optimal penalty-weight found for TV-PAPA. The second penalty-weight needed for HOTV-PAPA is tuned by balancing resolution and the severity of staircase artifacts. The authors adjust the Gaussian postfilter to approximately match the local point spread function of GPF-EM and HOTV-PAPA. The authors examine hot lesion detectability, study local spatial resolution, analyze background noise properties, estimate mean
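    The GPF-EM comparator in this study is built on the standard MLEM (expectation–maximization) update for emission data, x ← x · Aᵀ(y / Ax) / Aᵀ1, with a Gaussian postfilter applied afterwards. A toy sketch of the bare MLEM iteration on a hypothetical 2×2 system follows; the authors' PAPA algorithm, TV regularizers, and Monte Carlo SPECT simulation are not reproduced.

    ```python
    # MLEM update for emission tomography on a toy 2x2 system:
    # x_{k+1} = x_k * A^T( y / (A x_k) ) / (A^T 1).
    # The multiplicative form keeps the estimate nonnegative.

    def mlem(A, y, n_iter=500):
        m, n = len(A), len(A[0])
        x = [1.0] * n                                               # strictly positive start
        sens = [sum(A[i][j] for i in range(m)) for j in range(n)]   # A^T 1
        for _ in range(n_iter):
            Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
            ratio = [y[i] / Ax[i] for i in range(m)]
            back = [sum(A[i][j] * ratio[i] for i in range(m)) for j in range(n)]
            x = [x[j] * back[j] / sens[j] for j in range(n)]
        return x

    A = [[1.0, 0.5],
         [0.2, 1.0]]                                    # hypothetical system matrix
    true_x = [3.0, 2.0]
    y = [sum(A[i][j] * true_x[j] for j in range(2))
         for i in range(2)]                             # noiseless projection data
    x = mlem(A, y)
    print(x)
    ```

    With noiseless, consistent data the iteration recovers the true activity; with realistic Poisson noise the unregularized estimate degrades, which is why the study smooths it (GPF-EM) or penalizes it (TV-PAPA, HOTV-PAPA).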

  1. Effective noise-suppressed and artifact-reduced reconstruction of SPECT data using a preconditioned alternating projection algorithm

    International Nuclear Information System (INIS)

    Li, Si; Xu, Yuesheng; Zhang, Jiahan; Lipson, Edward; Krol, Andrzej; Feiglin, David; Schmidtlein, C. Ross; Vogelsang, Levon; Shen, Lixin

    2015-01-01

    Purpose: The authors have recently developed a preconditioned alternating projection algorithm (PAPA) with total variation (TV) regularizer for solving the penalized-likelihood optimization model for single-photon emission computed tomography (SPECT) reconstruction. This algorithm belongs to a novel class of fixed-point proximity methods. The goal of this work is to investigate how PAPA performs while dealing with realistic noisy SPECT data, to compare its performance with more conventional methods, and to address issues with TV artifacts by proposing a novel form of the algorithm invoking high-order TV regularization, denoted as HOTV-PAPA, which has been explored and studied extensively in the present work. Methods: Using Monte Carlo methods, the authors simulate noisy SPECT data from two water cylinders; one contains lumpy “warm” background and “hot” lesions of various sizes with Gaussian activity distribution, and the other is a reference cylinder without hot lesions. The authors study the performance of HOTV-PAPA and compare it with PAPA using first-order TV regularization (TV-PAPA), the Panin–Zeng–Gullberg one-step-late method with TV regularization (TV-OSL), and an expectation–maximization algorithm with Gaussian postfilter (GPF-EM). The authors select penalty-weights (hyperparameters) by qualitatively balancing the trade-off between resolution and image noise separately for TV-PAPA and TV-OSL. However, the authors arrived at the same penalty-weight value for both of them. The authors set the first penalty-weight in HOTV-PAPA equal to the optimal penalty-weight found for TV-PAPA. The second penalty-weight needed for HOTV-PAPA is tuned by balancing resolution and the severity of staircase artifacts. The authors adjust the Gaussian postfilter to approximately match the local point spread function of GPF-EM and HOTV-PAPA. The authors examine hot lesion detectability, study local spatial resolution, analyze background noise properties, estimate mean

  2. Performance of a parallel algorithm for solving the neutron diffusion equation on the hypercube

    International Nuclear Information System (INIS)

    Kirk, B.L.; Azmy, Y.Y.

    1989-01-01

    The one-group, steady-state neutron diffusion equation in two-dimensional Cartesian geometry is solved using the nodal method technique. By decoupling the sets of equations representing neutron current continuity along the rows and columns of computational cells, a new iterative algorithm is derived that is more suitable for solving large practical problems. This algorithm is highly parallelizable and is implemented on the Intel iPSC/2 hypercube in three versions, which differ essentially in the total size of the communicated data. Even though speedup was achieved, the efficiency is very low when many processors are used, leading to the conclusion that the hypercube is not as well suited for this algorithm as shared-memory machines. 10 refs., 1 fig., 3 tabs
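    As a simplified stand-in for the nodal row/column iteration (the paper's scheme itself is not given in the record), a point-Jacobi sweep for the one-group diffusion equation illustrates the kind of cell-parallel update being distributed across processors. Grid size, removal cross section and source below are made-up values.

    ```python
    # Point-Jacobi iteration for -laplacian(phi) + sigma*phi = s on a
    # uniform 2-D grid with zero boundary flux. Every interior cell is
    # updated independently from the previous sweep, which is what makes
    # the sweep embarrassingly parallel (per-row or per-column).

    def jacobi_diffusion(n=16, sigma=0.1, s=1.0, h=1.0, tol=1e-8, max_iter=5000):
        phi = [[0.0] * n for _ in range(n)]
        for it in range(max_iter):
            diff = 0.0
            new = [[0.0] * n for _ in range(n)]
            for i in range(1, n - 1):
                for j in range(1, n - 1):
                    nb = (phi[i - 1][j] + phi[i + 1][j]
                          + phi[i][j - 1] + phi[i][j + 1])
                    new[i][j] = (s * h * h + nb) / (4.0 + sigma * h * h)
                    diff = max(diff, abs(new[i][j] - phi[i][j]))
            phi = new
            if diff < tol:
                return phi, it
        return phi, max_iter

    phi, it = jacobi_diffusion()
    print("converged after", it, "sweeps; centre flux", phi[8][8])
    ```

    The paper's nodal algorithm instead solves coupled current-continuity equations line by line, which converges faster but requires communicating whole rows/columns between processors, the cost that dominated on the hypercube.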

  3. On the normalization of the minimum free energy of RNAs by sequence length.

    Directory of Open Access Journals (Sweden)

    Edoardo Trotta

    Full Text Available The minimum free energy (MFE) of ribonucleic acids (RNAs) increases at an apparently linear rate with sequence length. Simple indices, obtained by dividing the MFE by the number of nucleotides, have been used for a direct comparison of the folding stability of RNAs of various sizes. Although this normalization procedure has been used in several studies, the relationship between normalized MFE and length has not yet been investigated in detail. Here, we demonstrate that the variation of MFE with sequence length is not linear and is significantly biased by the mathematical formula used for the normalization procedure. For this reason, the normalized MFEs decrease strongly as hyperbolic functions of length and produce unreliable results when applied to the comparison of sequences of different sizes. We also propose a simple modification of the normalization formula that corrects the bias, enabling the use of the normalized MFE for RNAs longer than 40 nt. Using the new corrected normalized index, we analyzed the folding free energies of different human RNA families, showing that most of them present an average MFE density more negative than expected for a typical genomic sequence. Furthermore, we found that a well-defined and restricted range of MFE density characterizes each RNA family, suggesting the use of our corrected normalized index to improve RNA prediction algorithms. Finally, in coding and functional human RNAs the MFE density appears scarcely correlated with sequence length, consistent with a negligible role of thermodynamic stability demands in determining RNA size.
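    The bias the authors describe follows directly from the normalization arithmetic: if MFE grows roughly as aL + b, then MFE/L = a + b/L decays hyperbolically with length even though the per-nucleotide stability a is constant, while subtracting the offset first removes the length dependence. A numeric illustration with made-up coefficients (not the paper's fitted values):

    ```python
    # Why MFE/L is a hyperbolic function of length: assume an idealized
    # linear trend MFE(L) = a*L + b. The naive index a + b/L varies with
    # L; the offset-corrected index (MFE - b)/L is constant at a.
    # a and b are illustrative numbers only.

    a, b = -0.3, 10.0           # hypothetical slope (per nt) and intercept

    def mfe(L):
        return a * L + b

    def naive_index(L):         # classic normalization: MFE / length
        return mfe(L) / L

    def corrected_index(L):     # offset-corrected normalization
        return (mfe(L) - b) / L

    for L in (40, 100, 400, 1000):
        print(L, naive_index(L), corrected_index(L))
    ```

    The naive index shifts by b(1/L₁ − 1/L₂) between two lengths, so short and long RNAs are not directly comparable; the corrected index is length-independent under the linear model, which is the essence of the proposed fix.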

  4. Bioassay-Guided Antidiabetic Study of <em>Phaleria macrocarpa</em> Fruit Extract

    Directory of Open Access Journals (Sweden)

    Mohd Z. Asmawi

    2012-04-01

    Full Text Available An earlier anti-hyperglycemic study with serial crude extracts of <em>Phaleria macrocarpa</em> (PM) fruit indicated the methanol extract (ME) as the most effective. In the present investigation, the methanol extract was further fractionated to obtain chloroform (CF), ethyl acetate (EAF), <em>n</em>-butanol (NBF) and aqueous (AF) fractions, which were tested for antidiabetic activity. The NBF reduced blood glucose (<em>p</em> < 0.05) 15 min after administration in an intraperitoneal glucose tolerance test (IPGTT), similar to metformin. Moreover, it lowered blood glucose in diabetic rats by 66.67% (<em>p</em> < 0.05), similar to metformin (51.11%), glibenclamide (66.67%) and insulin (71.43%) after a 12-day treatment, and was hence considered the most active fraction. Further fractionation of NBF yielded sub-fractions I (SFI) and II (SFII), and only SFI lowered blood glucose (<em>p</em> < 0.05) in the IPGTT, similar to glibenclamide. The ME, NBF and SFI correspondingly lowered plasma insulin (<em>p</em> < 0.05) and dose-dependently inhibited glucose transport across isolated rat jejunum, implying an extra-pancreatic mechanism. Phytochemical screening showed the presence of flavonoids, terpenes and tannins in ME, NBF and SFI, and LC-MS analyses revealed 9.52%, 33.30% and 22.50% mangiferin, respectively. PM fruit possesses an anti-hyperglycemic effect, exerted probably through extra-pancreatic action. Mangiferin, contained therein, may be responsible for this reported activity.

  5. The hypercube queuing model integrated with a genetic algorithm to analyze emergency medical systems on highways

    Directory of Open Access Journals (Sweden)

    Ana Paula Iannoni

    2006-04-01

    Full Text Available The hypercube model, well known in the literature on server-to-customer localization problems, is based on spatially distributed queuing theory and Markovian approximations. The model can be modified to analyze emergency medical systems (EMSs) on highways, considering the particularities of these systems' dispatching policies. In this study, we combine the hypercube model with a genetic algorithm to optimize the configuration and operation of EMSs on highways. This approach is effective in supporting planning and operation decisions, such as determining the ideal size of the area each ambulance should cover so as to minimize both the average response time to users and ambulance workload imbalances, as well as generating a Pareto-efficient boundary between these measures. The computational results of this approach were analyzed using real data from the Anjos do Asfalto EMS (which covers the Presidente Dutra highway).

  6. Telomere Length and Mortality

    DEFF Research Database (Denmark)

    Kimura, Masayuki; Hjelmborg, Jacob V B; Gardner, Jeffrey P

    2008-01-01

    Leukocyte telomere length, representing the mean length of all telomeres in leukocytes, is ostensibly a bioindicator of human aging. The authors hypothesized that the shortest telomeres, rather than the mean leukocyte telomere length, might forecast imminent mortality in elderly people. They performed mortality...

  7. Ship Pipe Routing Design Using NSGA-II and Coevolutionary Algorithm

    Directory of Open Access Journals (Sweden)

    Wentie Niu

    2016-01-01

    Full Text Available Pipe route design plays a prominent role in ship design. Due to the complex configuration of the layout space, with numerous pipelines, diverse design constraints and obstacles, obtaining the optimal routes of ship pipes is a complicated and time-consuming process. In this article, an optimized design method for branch pipe routing is proposed to improve design efficiency and to reduce human error. By simplifying the equipment and ship hull models and dividing the workspace into three-dimensional grid cells, the mathematical model of the layout space is constructed. Based on the proposed pipe grading method, the optimization model of pipe routing is established. An optimization procedure is then presented that addresses the pipe route planning problem by combining the maze algorithm (MA), the nondominated sorting genetic algorithm II (NSGA-II) and the cooperative coevolutionary nondominated sorting genetic algorithm II (CCNSGA-II). To improve performance in the genetic algorithm procedure, a fixed-length encoding method based on an improved maze algorithm and an adaptive region strategy is presented. Fuzzy set theory is employed to extract the best compromise pipeline from the Pareto-optimal solutions. A simulation test of branch pipes and a design optimization of a fuel piping system were carried out to illustrate the design optimization procedure in detail and to verify the feasibility and effectiveness of the proposed methodology.
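    The maze algorithm (MA) component is essentially Lee-style breadth-first routing over the grid cells: BFS guarantees a shortest rectilinear path around obstacles. A 2-D sketch with a made-up obstacle layout (the 3-D grid, pipe grading, and NSGA-II layers are omitted):

    ```python
    # Lee/maze routing via breadth-first search on a 2-D grid.
    # grid[r][c] == 1 marks a blocked cell (equipment/hull obstacle).
    # BFS expands in waves, so the first time the goal is reached the
    # path is shortest; parent pointers reconstruct the route.
    from collections import deque

    def maze_route(grid, start, goal):
        """Return the shortest path as a list of cells, or None."""
        rows, cols = len(grid), len(grid[0])
        prev = {start: None}
        q = deque([start])
        while q:
            r, c = q.popleft()
            if (r, c) == goal:
                path, node = [], goal
                while node is not None:
                    path.append(node)
                    node = prev[node]
                return path[::-1]
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and grid[nr][nc] == 0 and (nr, nc) not in prev):
                    prev[(nr, nc)] = (r, c)
                    q.append((nr, nc))
        return None                      # no feasible route

    grid = [[0, 0, 0, 0],
            [1, 1, 1, 0],
            [0, 0, 0, 0]]
    path = maze_route(grid, (0, 0), (2, 0))
    print(path)
    ```

    In the article's procedure, routes like this one seed the fixed-length chromosomes that NSGA-II then evolves against the multiple objectives (length, bends, constraint satisfaction).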

  8. BFACF-style algorithms for polygons in the body-centered and face-centered cubic lattices

    Energy Technology Data Exchange (ETDEWEB)

    Janse van Rensburg, E J [Department of Mathematics and Statistics, York University, Toronto, Ontario M3J 1P3 (Canada); Rechnitzer, A, E-mail: rensburg@yorku.ca, E-mail: andrewr@math.ubc.ca [Department of Mathematics, The University of British Columbia, Vancouver V6T 1Z2, British Columbia (Canada)

    2011-04-22

In this paper, the elementary moves of the BFACF algorithm (Aragão de Carvalho and Caracciolo 1983 Phys. Rev. B 27 1635-45, Aragão de Carvalho and Caracciolo 1983 Nucl. Phys. B 215 209-48, Berg and Foerster 1981 Phys. Lett. B 106 323-6) for lattice polygons are generalized to elementary moves of BFACF-style algorithms for lattice polygons in the body-centered (BCC) and face-centered (FCC) cubic lattices. We prove that the ergodicity classes of these new elementary moves coincide with the knot types of unrooted polygons in the BCC and FCC lattices, and so extend a similar result for the cubic lattice (see Janse van Rensburg and Whittington (1991 J. Phys. A: Math. Gen. 24 5553-67)). Implementations of these algorithms for knotted polygons using the GAS algorithm produce estimates of the minimal length of knotted polygons in the BCC and FCC lattices.

  9. BFACF-style algorithms for polygons in the body-centered and face-centered cubic lattices

    Science.gov (United States)

    Janse van Rensburg, E. J.; Rechnitzer, A.

    2011-04-01

In this paper, the elementary moves of the BFACF algorithm (Aragão de Carvalho and Caracciolo 1983 Phys. Rev. B 27 1635-45, Aragão de Carvalho and Caracciolo 1983 Nucl. Phys. B 215 209-48, Berg and Foerster 1981 Phys. Lett. B 106 323-6) for lattice polygons are generalized to elementary moves of BFACF-style algorithms for lattice polygons in the body-centered (BCC) and face-centered (FCC) cubic lattices. We prove that the ergodicity classes of these new elementary moves coincide with the knot types of unrooted polygons in the BCC and FCC lattices, and so extend a similar result for the cubic lattice (see Janse van Rensburg and Whittington (1991 J. Phys. A: Math. Gen. 24 5553-67)). Implementations of these algorithms for knotted polygons using the GAS algorithm produce estimates of the minimal length of knotted polygons in the BCC and FCC lattices.

  10. BFACF-style algorithms for polygons in the body-centered and face-centered cubic lattices

    International Nuclear Information System (INIS)

    Janse van Rensburg, E J; Rechnitzer, A

    2011-01-01

In this paper, the elementary moves of the BFACF algorithm (Aragão de Carvalho and Caracciolo 1983 Phys. Rev. B 27 1635-45, Aragão de Carvalho and Caracciolo 1983 Nucl. Phys. B 215 209-48, Berg and Foerster 1981 Phys. Lett. B 106 323-6) for lattice polygons are generalized to elementary moves of BFACF-style algorithms for lattice polygons in the body-centered (BCC) and face-centered (FCC) cubic lattices. We prove that the ergodicity classes of these new elementary moves coincide with the knot types of unrooted polygons in the BCC and FCC lattices, and so extend a similar result for the cubic lattice (see Janse van Rensburg and Whittington (1991 J. Phys. A: Math. Gen. 24 5553-67)). Implementations of these algorithms for knotted polygons using the GAS algorithm produce estimates of the minimal length of knotted polygons in the BCC and FCC lattices.

  11. A Fast Inspection of Tool Electrode and Drilling Depth in EDM Drilling by Detection Line Algorithm.

    Science.gov (United States)

    Huang, Kuo-Yi

    2008-08-21

The purpose of this study was to develop a novel measurement method using a machine vision system. Besides using image processing techniques, the proposed system employs a detection line algorithm that detects the tool electrode length and drilling depth of a workpiece accurately and effectively. Different boundaries of areas on the tool electrode are defined: a baseline between the base and normal areas, an ND-line between the normal and drilling areas (the accumulated-carbon area), and a DD-line between the drilling area and the dielectric fluid droplet on the electrode tip. Accordingly, image processing techniques are employed to extract a tool electrode image, and the centroid, eigenvector, and principal axis of the tool electrode are determined. The developed detection line algorithm (DLA) is then used to detect the baseline, ND-line, and DD-line along the direction of the principal axis. Finally, the tool electrode length and drilling depth of the workpiece are estimated via the detected baseline, ND-line, and DD-line. Experimental results show good accuracy and efficiency in the estimation of the tool electrode length and drilling depth under different conditions. Hence, this research may provide a reference for industrial applications in EDM drilling measurement.
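The centroid/eigenvector/principal-axis step described above is a standard image-moments computation; a minimal sketch on a synthetic binary mask (not the authors' EDM imagery) might look like:

```python
import numpy as np

def principal_axis(mask):
    """Centroid and principal axis (unit vector) of a binary object mask.

    The principal axis is the eigenvector of the covariance matrix of
    the object's pixel coordinates with the largest eigenvalue."""
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(float)   # (x, y) per pixel
    centroid = pts.mean(axis=0)
    cov = np.cov((pts - centroid).T)                # 2x2 covariance
    vals, vecs = np.linalg.eigh(cov)                # ascending eigenvalues
    axis = vecs[:, np.argmax(vals)]                 # dominant direction
    return centroid, axis
```

Scanning intensity profiles along lines perpendicular to this axis is then the natural place to search for the baseline, ND-line, and DD-line transitions.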

  12. Modelling Inverse Gaussian Data with Censored Response Values: EM versus MCMC

    Directory of Open Access Journals (Sweden)

    R. S. Sparks

    2011-01-01

Full Text Available Low detection limits are common when measuring environmental variables. Building models from data containing values beyond low or high detection limits, without adjusting for the censoring, produces biased models. This paper offers approaches to estimating an inverse Gaussian distribution when some of the data used are censored because of low or high detection limits. Adjustments for the censoring can be made, if there is between 2% and 20% censoring, using either the EM algorithm or MCMC. This paper compares these approaches.
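As a hedged illustration of the censoring adjustment (using direct maximum likelihood with SciPy rather than the paper's EM or MCMC machinery), one can fit an inverse Gaussian to left-censored data by replacing each below-limit value's density with the probability of falling below the detection limit; the true parameters and detection limit below are invented for the demonstration.

```python
import numpy as np
from scipy import stats, optimize

rng = np.random.default_rng(1)
mu_true, lam_true = 2.0, 3.0
# scipy's invgauss(mu/lam, scale=lam) is the standard IG(mu, lam)
x = stats.invgauss.rvs(mu_true / lam_true, scale=lam_true,
                       size=500, random_state=rng)
LOD = 0.8                                  # lower detection limit
obs = x[x >= LOD]                          # fully observed values
n_cens = int(np.sum(x < LOD))              # left-censored count only

def neg_loglik(theta):
    mu, lam = np.exp(theta)                # log-parameters stay positive
    dist = stats.invgauss(mu / lam, scale=lam)
    # observed values contribute log-densities; censored values
    # contribute log P(X < LOD) each
    return -(dist.logpdf(obs).sum() + n_cens * dist.logcdf(LOD))

res = optimize.minimize(neg_loglik, np.log([1.0, 1.0]), method="Nelder-Mead")
mu_hat, lam_hat = np.exp(res.x)
```

With these parameters roughly 19% of the sample falls below the limit, i.e. within the 2%-20% censoring range the paper considers.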

  13. Computational Recognition of RNA Splice Sites by Exact Algorithms for the Quadratic Traveling Salesman Problem

    Directory of Open Access Journals (Sweden)

    Anja Fischer

    2015-06-01

Full Text Available One fundamental problem of bioinformatics is the computational recognition of DNA and RNA binding sites. Given a set of short DNA or RNA sequences of equal length, such as transcription factor binding sites or RNA splice sites, the task is to learn a pattern from this set that allows the recognition of similar sites in another set of DNA or RNA sequences. Permuted Markov (PM) models and permuted variable length Markov (PVLM) models are two powerful models for this task, but the problem of finding an optimal PM model or PVLM model is NP-hard. While the problem of finding an optimal PM model or PVLM model of order one is equivalent to the traveling salesman problem (TSP), the problem of finding an optimal PM model or PVLM model of order two is equivalent to the quadratic TSP (QTSP). Several exact algorithms exist for solving the QTSP, but it is unclear if these algorithms are capable of solving QTSP instances resulting from RNA splice sites of at least 150 base pairs in a reasonable time frame. Here, we investigate the performance of three exact algorithms for solving the QTSP for ten datasets of splice acceptor sites and splice donor sites of five different species and find that one of these algorithms is capable of solving QTSP instances of up to 200 base pairs with a running time of less than two days.

  14. zipHMMlib: a highly optimised HMM library exploiting repetitions in the input to speed up the forward algorithm.

    Science.gov (United States)

    Sand, Andreas; Kristiansen, Martin; Pedersen, Christian N S; Mailund, Thomas

    2013-11-22

Hidden Markov models are widely used for genome analysis as they combine ease of modelling with efficient analysis algorithms. Calculating the likelihood of a model using the forward algorithm has worst-case time complexity linear in the length of the sequence and quadratic in the number of states in the model. For genome analysis, however, the length runs to millions or billions of observations, and when maximising the likelihood, hundreds of evaluations are often needed. A time-efficient forward algorithm is therefore a key ingredient in an efficient hidden Markov model library. We have built a software library for efficiently computing the likelihood of a hidden Markov model. The library exploits commonly occurring substrings in the input to reuse computations in the forward algorithm. In a pre-processing step our library identifies common substrings and builds a structure over the computations in the forward algorithm which can be reused. This analysis can be saved between uses of the library and is independent of concrete hidden Markov models, so one preprocessing run can serve a number of different models. Using this library, we achieve up to 78 times shorter wall-clock time for realistic whole-genome analyses with a real and reasonably complex hidden Markov model. In one particular case the analysis was performed in less than 8 minutes, compared to 9.6 hours for the previously fastest library. We have implemented the preprocessing procedure and forward algorithm as a C++ library, zipHMM, with Python bindings for use in scripts. The library is available at http://birc.au.dk/software/ziphmm/.
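The forward recursion that the library accelerates can be sketched as follows; this is a plain scaled implementation without zipHMM's substring-reuse preprocessing, shown only to make the O(T·K²) cost concrete.

```python
import numpy as np

def forward_loglik(pi, A, B, obs):
    """Log-likelihood of an observation sequence under a discrete HMM.

    pi: (K,) initial state distribution
    A:  (K, K) transition matrix, A[i, j] = P(j | i)
    B:  (K, M) emission matrix, B[i, o] = P(o | i)
    obs: sequence of symbol indices.

    Rescaling alpha at every step keeps the recursion numerically
    stable for arbitrarily long sequences; cost is O(T * K^2)."""
    alpha = pi * B[:, obs[0]]
    ll = np.log(alpha.sum())
    alpha /= alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]   # one forward step
        s = alpha.sum()
        ll += np.log(s)                 # accumulate log scale factors
        alpha /= s
    return ll
```

zipHMM's observation is that identical substrings of `obs` induce identical linear maps on `alpha`, so their matrix products can be computed once and reused.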

  15. Ensemble bayesian model averaging using markov chain Monte Carlo sampling

    Energy Technology Data Exchange (ETDEWEB)

    Vrugt, Jasper A [Los Alamos National Laboratory; Diks, Cees G H [NON LANL; Clark, Martyn P [NON LANL

    2008-01-01

Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper, Raftery et al. (Mon Weather Rev 133:1155-1174, 2005) recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
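The EM training step for BMA can be sketched as follows, assuming the common normal-mixture form y_i ~ Σ_k w_k N(f_ik, σ²) with a single shared variance (a simplification of the full method; real implementations add bias correction and per-member variances).

```python
import numpy as np

def bma_em(F, y, iters=500):
    """EM estimation of BMA weights and a common predictive variance.

    F: (n, K) array of ensemble member forecasts, y: (n,) observations.
    Model: y_i ~ sum_k w_k * N(f_ik, var)."""
    n, K = F.shape
    w = np.full(K, 1.0 / K)
    var = float(np.var(y - F.mean(axis=1))) + 1e-6
    for _ in range(iters):
        # E-step: responsibility of member k for observation i
        dens = (np.exp(-0.5 * (y[:, None] - F) ** 2 / var)
                / np.sqrt(2 * np.pi * var))
        z = w * dens
        z /= z.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights and the common variance
        w = z.mean(axis=0)
        var = float(np.sum(z * (y[:, None] - F) ** 2) / n)
    return w, var
```

Each iteration increases the mixture likelihood, but as the abstract notes, only convergence to a local optimum is guaranteed, which is what motivates the DREAM comparison.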

  16. Modelling of multiple short-length-scale stall cells in an axial compressor using evolved GMDH neural networks

    International Nuclear Information System (INIS)

    Amanifard, N.; Nariman-Zadeh, N.; Farahani, M.H.; Khalkhali, A.

    2008-01-01

Over the past 15 years there have been several research efforts to capture the nature of stall inception in axial flow compressors. However, previous analytical models could not explain the formation of short-length-scale stall cells. This paper provides a new model, based on evolved GMDH neural networks, for the transient evolution of multiple short-length-scale stall cells in an axial compressor. Genetic algorithms (GAs) are also employed for optimal design of the connectivity configuration of such GMDH-type neural networks. In this way, the low-pass filtered (LPF) pressure trace near the rotor leading edge is modelled with respect to the variation of the pressure coefficient, the flow rate coefficient, and the number of rotor rotations, which are defined as inputs

  17. Diet of Marmota marmota in high-mountain grasslands of the Belluno Dolomites

    Directory of Open Access Journals (Sweden)

    Alessandro Rudatis

    2006-03-01

    Full Text Available Abstract The diet of Marmota marmota in the mountain prairies of the south-eastern Italian Alps. The diet composition of two family groups of alpine marmots was investigated in two areas of the Agordino Dolomites (Italian Alps) in June-September 2001, by means of microscopic analysis of faeces and direct observation of feeding activity. During the whole period of activity, a high consumption of Angiosperms was confirmed, especially plants in flower; among them, the "graminoids" seemed to play an important role only during the initial part of the active period. Generally, vegetative parts predominated over flowers. The ingestion of animal prey was not confirmed by the analysis of droppings. Comparing the diet composition of the two groups, Graminaceae (Poa, Phleum), Compositae (Achillea), Cyperaceae/Juncaceae, Leguminosae (Anthyllis), Rosaceae, and Labiatae (Prunella, Stachys) formed the bulk of the marmot diet in the study areas. The diet showed low diversity considering the abundance of plant species in the surrounding environment. Food resources were probably used in relation to their easy digestibility and their high content of protein, sugar, and water. Knowledge of vegetation features in relation to marmot trophic habits can represent a useful tool for the management of this species. Riassunto The diet of two groups of Alpine marmots was studied in June-September 2001 in two areas of the Agordino Dolomites (SE Italy), through microscopic analysis of faeces and direct observation of feeding activity. Throughout the activity period a strong consumption of Angiosperms was noted, especially flowering plants, while the "graminoids" seem to play an important role at the beginning of the season. In general, the vegetative parts predominate over the flowers. The ingestion of animal prey was not

  18. Fast and robust ray casting algorithms for virtual X-ray imaging

    International Nuclear Information System (INIS)

    Freud, N.; Duvauchelle, P.; Letang, J.M.; Babot, D.

    2006-01-01

    Deterministic calculations based on ray casting techniques are known as a powerful alternative to the Monte Carlo approach to simulate X- or γ-ray imaging modalities (e.g. digital radiography and computed tomography), whenever computation time is a critical issue. One of the key components, from the viewpoint of computing resource expense, is the algorithm which determines the path length travelled by each ray through complex 3D objects. This issue has given rise to intensive research in the field of 3D rendering (in the visible light domain) during the last decades. The present work proposes algorithmic solutions adapted from state-of-the-art computer graphics to carry out ray casting in X-ray imaging configurations. This work provides an algorithmic basis to simulate direct transmission of X-rays, as well as scattering and secondary emission of radiation. Emphasis is laid on the speed and robustness issues. Computation times are given in a typical case of radiography simulation

  19. Fast and robust ray casting algorithms for virtual X-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Freud, N. [CNDRI, Laboratory of Nondestructive Testing Using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, Avenue Albert Einstein, 69621 Villeurbanne Cedex (France)]. E-mail: Nicolas.Freud@insa-lyon.fr; Duvauchelle, P. [CNDRI, Laboratory of Nondestructive Testing Using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, Avenue Albert Einstein, 69621 Villeurbanne Cedex (France); Letang, J.M. [CNDRI, Laboratory of Nondestructive Testing Using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, Avenue Albert Einstein, 69621 Villeurbanne Cedex (France); Babot, D. [CNDRI, Laboratory of Nondestructive Testing Using Ionizing Radiations, INSA-Lyon Scientific and Technical University, Bat. Antoine de Saint-Exupery, 20, Avenue Albert Einstein, 69621 Villeurbanne Cedex (France)

    2006-07-15

Deterministic calculations based on ray casting techniques are known as a powerful alternative to the Monte Carlo approach to simulate X- or γ-ray imaging modalities (e.g. digital radiography and computed tomography), whenever computation time is a critical issue. One of the key components, from the viewpoint of computing resource expense, is the algorithm which determines the path length travelled by each ray through complex 3D objects. This issue has given rise to intensive research in the field of 3D rendering (in the visible light domain) during the last decades. The present work proposes algorithmic solutions adapted from state-of-the-art computer graphics to carry out ray casting in X-ray imaging configurations. This work provides an algorithmic basis to simulate direct transmission of X-rays, as well as scattering and secondary emission of radiation. Emphasis is laid on the speed and robustness issues. Computation times are given in a typical case of radiography simulation.

  20. The RSA algorithm and its practical application / RSA algorithm

    Directory of Open Access Journals (Sweden)

    Sonja R. Kuljanski

    2010-07-01

    Full Text Available RSA is a public-key algorithm that involves three steps: key generation, encryption, and decryption. The RSA encryption scheme is deterministic, meaning that a given plaintext is always encrypted to the same ciphertext under a fixed public key. To avoid this problem, practical implementations of RSA usually employ some structure, such as adding random text to the message itself before encryption. This addition ensures that the underlying message is secure and can be encrypted to a large number of different ciphertexts. Standards such as PKCS #1 are carefully designed to add such text to the message before the RSA encryption itself. / RSA is an algorithm for public-key encryption. It is the first algorithm known to be suitable for encryption as well as digital signing. The RSA encryption scheme is deterministic in the sense that, under a fixed public key, a particular plaintext is always encrypted to the same ciphertext. A deterministic encryption scheme (as opposed to a probabilistic encryption scheme) is a cryptosystem which always produces the same ciphertext for a given plaintext and key, even over separate executions of the encryption algorithm. Probabilistic encryption uses randomness in an encryption algorithm, so that when encrypting the same message several times it will, in general, yield different ciphertexts.
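The determinism described above is easy to demonstrate with toy textbook RSA; the 3-bit random pad below is only a stand-in for real padding schemes such as PKCS #1, not an implementation of them, and the key sizes are deliberately insecure.

```python
import random

# Toy RSA with the classic textbook primes; real keys are thousands of bits.
p, q, e = 61, 53, 17
n, phi = p * q, (p - 1) * (q - 1)        # n = 3233, phi = 3120
d = pow(e, -1, phi)                      # private exponent (2753 here)

def encrypt(m):                          # textbook RSA: deterministic
    return pow(m, e, n)

def decrypt(c):
    return pow(c, d, n)

def encrypt_padded(m):
    """Prepend 3 random bits before encrypting, so the same message can
    map to several ciphertexts; m must fit in 8 bits with (r << 8) | m < n."""
    r = random.getrandbits(3)
    return pow((r << 8) | m, e, n)

def decrypt_padded(c):
    return decrypt(c) & 0xFF             # strip the random padding

m = 42
assert encrypt(m) == encrypt(m)          # same plaintext -> same ciphertext
assert decrypt(encrypt(m)) == m
assert decrypt_padded(encrypt_padded(m)) == m
```

With only 3 padding bits the scheme maps one plaintext to at most 8 ciphertexts; real padding uses many random bytes, which is what makes the encryption effectively probabilistic.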

  1. Cultivo hidropônico de lisianto para flor de corte em sistema de fluxo laminar de nutrientes Hydroponic growth of lisianthus as cut flower under nutrient film technique

    Directory of Open Access Journals (Sweden)

    Fernanda Alice Antonello Londero Backes

    2007-11-01

    Full Text Available The objective of this work was to evaluate yield and commercial traits of four cultivars of lisianthus (Eustoma grandiflorum) grown as cut flowers in three nutrient solutions under the nutrient film technique (NFT). The experimental design was in randomized blocks, in a 4x3 factorial scheme, with three replicates. The treatments comprised four cultivars (Echo Champagne, Mariachi Pure White, Balboa Yellow and Ávila Blue Rim) and three nutrient solutions (Test, modified Steiner and Barbosa). The NFT system is a feasible alternative for growing lisianthus in the Barbosa and Test solutions. The cultivar Echo Champagne was superior for cycle, length of production, height of flower stem, number of leaves, diameter of the flower bud, and fresh and dry weight production, while the cultivar Mariachi Pure White was superior for length of production. The cultivar Ávila Blue Rim showed a long production period, a high number of flowers, and high fresh and dry matter production, while the cultivar Balboa Yellow showed a long production period and a large bud diameter.

  2. Ipomoea aquatica Extract Shows Protective Action Against Thioacetamide-Induced Hepatotoxicity

    Directory of Open Access Journals (Sweden)

    A. Hamid A. Hadi

    2012-05-01

    Full Text Available In the Indian system of traditional medicine (Ayurveda) it is recommended to consume Ipomoea aquatica to mitigate disorders like jaundice. In this study, the protective effects of an ethanol extract of I. aquatica against liver damage were evaluated in thioacetamide (TAA)-induced chronic hepatotoxicity in rats. There was no sign of toxicity in the acute toxicity study. Sprague-Dawley (SD) rats were orally fed I. aquatica (250 and 500 mg/kg) for two months, along with administration of TAA (i.p. injection, 200 mg/kg, three times a week for two months). The results showed that the treatment with I. aquatica significantly lowered the TAA-induced serum levels of hepatic enzyme markers (ALP, ALT, AST), protein, albumin, bilirubin and prothrombin time. The hepatic activities and expressions of SOD and CAT that were reduced by TAA were brought back to control levels by the plant extract supplement. Meanwhile, the rise in MDA level in the TAA-receiving groups was also significantly reduced by I. aquatica treatment. Histopathology of hepatic tissues with H&E and Masson trichrome stains showed that I. aquatica reduced the incidence of liver lesions induced by TAA in rats, including cloudy swelling of hepatic cells, infiltration, hepatic necrosis, and fibrous connective tissue proliferation. Therefore, the results of this study show that the protective effect of I. aquatica in TAA-induced liver damage might be attributed to its modulation of detoxification enzymes and its antioxidant and free radical scavenging effects. Moreover, it provides a scientific basis for the traditional use of I. aquatica for the treatment of liver disorders.

  3. Social class: concepts and operational schemes in health research

    Directory of Open Access Journals (Sweden)

    Rita Barradas Barata

    2013-08-01

    Full Text Available This article discusses the use of the concept of social class in health research, the different sociological approaches to social stratification and class structure, the explanatory potential of the concept in studies of social determination and health inequalities, the operational models developed for use in sociological, demographic or health research, and the limits and possibilities of these models. Four operational models are highlighted: Singer's model for the study of income distribution in Brazil, adapted by Barros for use in epidemiological research; Bronfman & Tuirán's model for the Mexican demographic census, adapted by Lombardi et al. for epidemiological research; Goldthorpe's model for English socioeconomic studies, adapted by the Spanish Society of Epidemiology; and Wright's model for research in sociology and political science, also used in population health surveys. In conclusion, each of the models presented is conceptually coherent with the theoretical framework that underpins it, but there is no basis for choosing any one of them and discarding the others.

  4. The Analysis of an End Effect according to the Input Frequency Change in the EM Pump

    International Nuclear Information System (INIS)

    Kim, Hee Reyoung; Kim, Jong Man; Cha, Jae Eun; Choi, Jong Hyun; Nam, Ho Yoon

    2006-01-01

In general, an electromagnetic (EM) pump is considered for circulating the liquid sodium coolant of a Sodium Fast Reactor (SFR). The EM pump has an end effect at both ends, basically due to its finite core length: the magnetic field generated across the flow gap is distorted at both ends of the pump. Consequently, the developed force, given by the vector product of that magnetic field and its perpendicular induced current, is reduced; near the pump inlet the pumping force even reverses. This lowers the efficiency of the pump and consequently degrades its performance. The present study shows theoretically that this end effect can be lessened by controlling the input frequency. It is predicted that the pump operates much more efficiently in the low-frequency range, around ten to twenty hertz, than at high frequencies above 60 Hz. The force density in the narrow annular channel of the pump, 84 cm in length, is investigated as a function of the pump axial coordinate at various frequencies

  5. An approach of traffic signal control based on NLRSQP algorithm

    Science.gov (United States)

    Zou, Yuan-Yang; Hu, Yu

    2017-11-01

This paper presents a linear program model with linear complementarity constraints (LPLCC) to solve the traffic signal optimization problem. The objective of the model is to minimize the weighted total queue length at the end of each cycle. Then, a combination algorithm based on nonlinear least-squares regression and sequential quadratic programming (NLRSQP) is proposed, by which a local optimal solution can be obtained. Furthermore, four numerical experiments are presented to study how the initial solution of the algorithm should be set so as to reach a better local optimal solution more quickly. In particular, the results of the numerical experiments show that the model is effective for different arrival rates and weight factors, and that the closer the initial solution is to its lower bound, the better the local optimal solution obtained.
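A minimal linear-programming sketch of the weighted queue-minimization objective is shown below for a hypothetical two-approach intersection; the arrival rates, saturation flows, and lost-time figure are invented, and the paper's LPLCC formulation and NLRSQP solver are not reproduced here.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical single-intersection data: choose green splits g_i to
# minimize the weighted total queue length at the end of one cycle.
C = 90.0                     # cycle length (s)
a = np.array([0.30, 0.20])   # arrival rates per approach (veh/s)
s = np.array([0.90, 0.90])   # saturation flow rates (veh/s)
q0 = np.array([8.0, 4.0])    # queues at the start of the cycle (veh)
w = np.array([1.0, 2.0])     # weight factors

# Variables x = [g1, g2, q1, q2]; q_i linearizes max(0, q0 + aC - s*g*C).
n = len(a)
c = np.concatenate([np.zeros(n), w])       # minimize sum_i w_i * q_i
A_ub = np.zeros((n + 1, 2 * n))
b_ub = np.zeros(n + 1)
for i in range(n):
    A_ub[i, i] = -s[i] * C                 # -sC*g_i - q_i <= -(q0 + aC)
    A_ub[i, n + i] = -1.0
    b_ub[i] = -(q0[i] + a[i] * C)
A_ub[n, :n] = 1.0                          # green splits share the cycle,
b_ub[n] = 0.9                              # with 10% lost time
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(0.05, 0.9)] * n + [(0, None)] * n)
g_opt, q_end = res.x[:n], res.x[n:]
```

With these numbers both queues can be cleared within the cycle, so the optimal end-of-cycle queues are zero; the complementarity constraints of the full LPLCC model arise when the max(0, ·) terms cannot all vanish.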

  6. Parallel algorithm for determining motion vectors in ice floe images by matching edge features

    Science.gov (United States)

    Manohar, M.; Ramapriyan, H. K.; Strong, J. P.

    1988-01-01

A parallel algorithm is described to determine motion vectors of ice floes using time sequences of images of the Arctic Ocean obtained from the Synthetic Aperture Radar (SAR) instrument flown on board the SEASAT spacecraft. The authors describe a parallel algorithm, implemented on the MPP, for locating corresponding objects based on their translationally and rotationally invariant features. The algorithm first approximates the edges in the images by polygons or sets of connected straight-line segments. Each such edge structure is then reduced to a seed point. Associated with each seed point are the descriptions (lengths, orientations and sequence numbers) of the lines constituting the corresponding edge structure. A parallel matching algorithm is used to match packed arrays of such descriptions to identify corresponding seed points in the two images. The matching algorithm is designed such that fragmentation and merging of ice floes are taken into account by accepting partial matches. The technique has been demonstrated to work on synthetic test patterns and real image pairs from SEASAT in times ranging from 0.5 to 0.7 seconds for 128 x 128 images.
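The partial matching of seed-point descriptions can be illustrated with a toy serial version based only on segment lengths (which are translation- and rotation-invariant); the orientation and sequence-number components of the real descriptors are omitted in this sketch.

```python
import math

def segment_lengths(polyline):
    """Sorted lengths of the straight-line segments approximating an edge."""
    return sorted(math.dist(a, b) for a, b in zip(polyline, polyline[1:]))

def match_score(desc_a, desc_b, tol=0.05):
    """Fraction of segments of the shorter description that find a
    length match in the other; accepting partial matches tolerates
    fragmentation and merging of floes between the two images."""
    short, long_ = sorted([desc_a, desc_b], key=len)
    used = [False] * len(long_)
    hits = 0
    for la in short:
        for j, lb in enumerate(long_):
            if not used[j] and abs(la - lb) <= tol * max(la, lb):
                used[j] = True          # greedy one-to-one pairing
                hits += 1
                break
    return hits / len(short)
```

A pair of seed points with a score above some threshold would then be accepted as corresponding objects, and the displacement between them gives the motion vector.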

  7. Four-dimensional MAP-RBI-EM image reconstruction method with a 4D motion prior for 4D gated myocardial perfusion SPECT

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Taek-Soo; Tsui, Benjamin M.W. [Johns Hopkins Univ., Baltimore, MD (United States). Dept. of Radiology; Gullberg, Grant T. [Lawrence Berkeley National Laboratory, Berkeley, CA (United States)

    2011-07-01

We propose and evaluate a 4D maximum a posteriori rescaled-block iterative (MAP-RBI)-EM image reconstruction method with a motion prior to improve the accuracy of 4D gated myocardial perfusion (GMP) SPECT images. We hypothesized that a 4D motion prior that resembles the global motion of the true 4D motion of the heart will improve the accuracy of reconstructed images with regional myocardial motion defects. The normal heart model of the 4D XCAT (eXtended CArdiac-Torso) phantom is used as the prior in the 4D MAP-RBI-EM algorithm, where a Gaussian-shaped distribution is used as the derivative of the potential function (DPF) that determines the smoothing strength and range of the prior in the algorithm. The mean and width of the DPF are set to the expected difference between the reconstructed image and the motion prior, and to the smoothing range, respectively. To evaluate the algorithm, we used simulated projection data from a typical clinical 99mTc Sestamibi GMP SPECT study using the 4D XCAT phantom. The noise-free projection data were generated using an analytical projector that included the effects of attenuation, collimator-detector response and scatter (ADS), and Poisson noise was added to generate noisy projection data. The projection datasets were reconstructed using the modified 4D MAP-RBI-EM with various iterations, prior weights, and sigma values, as well as with ADS correction. The results showed that the 4D reconstructed image estimates looked more like the motion prior, with sharper edges, as the weight of the prior increased. They also demonstrated that edge preservation of the myocardium in the GMP SPECT images could be controlled by a proper motion prior. The Gaussian-shaped DPF allowed a stronger smoothing force for smaller differences between neighboring voxel values and a weaker one for larger differences, depending on its parameter values. We concluded that the 4D MAP-RBI-EM algorithm with the general motion prior can be used to provide 4D GMP SPECT images with improved
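As a hedged sketch of MAP reconstruction with a prior image, a one-step-late MAP-EM update (Green-style, with a simple quadratic penalty; not the authors' exact rescaled-block or Gaussian-DPF scheme) can be written as:

```python
import numpy as np

def osl_map_em(A, y, prior_img, beta, iters=300):
    """One-step-late MAP-EM sketch with a quadratic prior pulling the
    estimate toward a prior image (standing in here for the paper's
    4D motion prior; this is an assumption of the sketch).

    A: (m, n) system matrix, y: (m,) measured counts,
    beta: prior weight (beta = 0 reduces to plain MLEM)."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                         # sensitivity image
    for _ in range(iters):
        ratio = y / np.clip(A @ x, 1e-12, None)  # measured / estimated
        penalty = beta * (x - prior_img)         # quadratic prior gradient
        x = x * (A.T @ ratio) / np.clip(sens + penalty, 1e-12, None)
    return x
```

Increasing `beta` pulls the estimate toward the prior image, which mirrors the abstract's observation that reconstructions resemble the motion prior more strongly as the prior weight grows.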

  8. Suppression of EM Fields using Active Control Algorithms and MIMO Antenna System

    Directory of Open Access Journals (Sweden)

    A. Mohammed

    2004-09-01

    Full Text Available Active methods for attenuating acoustic pressure fields have been successfully used in many applications. In this paper we investigate some of these active control methods in combination with a MIMO antenna system in order to assess their validity and performance when applied to electromagnetic fields. The application evaluated in this paper is a model of a mobile phone equipped with one ordinary transmitting antenna and two actuator-antennas whose purpose is to reduce the electromagnetic field in a specific region of space (e.g. at the human head). Simulation results show the promise of using adaptive active control algorithms and a MIMO system to attenuate the electromagnetic field power density.

  9. Highly variable aerodynamic roughness length (z0) for a hummocky debris-covered glacier

    Science.gov (United States)

    Miles, Evan S.; Steiner, Jakob F.; Brun, Fanny

    2017-08-01

The aerodynamic roughness length (z0) is an essential parameter in surface energy balance studies, but few literature values exist for debris-covered glaciers. We use microtopographic and aerodynamic methods to assess the spatial variability of z0 for Lirung Glacier, Nepal. We apply structure from motion to produce digital elevation models for three nested domains: five 1 m2 plots, a 21,300 m2 surface depression, and the lower 550,000 m2 of the debris-mantled tongue. Wind and temperature sensor towers were installed in the vicinity of the plots within the surface depression in October 2014. We calculate z0 according to a variety of transect-based microtopographic parameterizations for each plot, then develop a grid version of the algorithms by aggregating data from all transects. This grid approach is applied to the surface depression digital elevation model to characterize z0 spatial variability. The algorithms reproduce the same variability among transects and plots, but z0 estimates vary by an order of magnitude between algorithms. Across the study depression, results from different algorithms are strongly correlated. Using Monin-Obukhov similarity theory, we derive z0 values from the meteorological data. Using different stability criteria, we derive median values of z0 between 0.03 m and 0.05 m, but with considerable uncertainty due to the glacier's complex topography. Considering estimates from these algorithms, results suggest that z0 varies across Lirung Glacier between ˜0.005 m (gravels) to ˜0.5 m (boulders). Future efforts should assess the importance of such variable z0 values in a distributed energy balance model.
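One transect-based microtopographic parameterization of the kind compared in such studies follows Lettau's z0 = 0.5·h_eff·s/S; the sketch below makes explicit assumptions (effective height h_eff taken as twice the standard deviation of detrended heights, obstacles counted as up-crossings of the mean, unit transverse width) that stand in for the paper's exact algorithms.

```python
import numpy as np

def lettau_z0(z, dx):
    """Sketch of a Lettau-style microtopographic z0 from one height transect.

    z0 = 0.5 * h_eff * s / S, with h_eff = 2 * std of detrended heights,
    silhouette area s = h_eff * w, lot area S = (L / f) * w, where
    f = number of up-crossings of the mean height and w cancels out."""
    z = z - z.mean()                              # detrend (mean removal)
    L = dx * (len(z) - 1)                         # transect length
    f = int(np.sum((z[:-1] < 0) & (z[1:] >= 0)))  # up-crossings of the mean
    if f == 0:
        return 0.0                                # flat transect
    h_eff = 2.0 * z.std()
    return 0.5 * h_eff * (f * h_eff / L)          # = f * h_eff^2 / (2 L)
```

Because z0 scales with the square of the effective obstacle height here, doubling the hummock amplitude quadruples the estimate, which is consistent with the order-of-magnitude spread between gravel and boulder surfaces.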

  10. Algorithms

    Indian Academy of Sciences (India)

    polynomial) division have been found in Vedic Mathematics, dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.

  11. Does length or neighborhood size cause the word length effect?

    Science.gov (United States)

    Jalbert, Annie; Neath, Ian; Surprenant, Aimée M

    2011-10-01

    Jalbert, Neath, Bireta, and Surprenant (2011) suggested that past demonstrations of the word length effect, the finding that words with fewer syllables are recalled better than words with more syllables, included a confound: The short words had more orthographic neighbors than the long words. The experiments reported here test two predictions that would follow if neighborhood size is a more important factor than word length. In Experiment 1, we found that concurrent articulation removed the effect of neighborhood size, just as it removes the effect of word length. Experiment 2 demonstrated that this pattern is also found with nonwords. For Experiment 3, we factorially manipulated length and neighborhood size, and found only effects of the latter. These results are problematic for any theory of memory that includes decay offset by rehearsal, but they are consistent with accounts that include a redintegrative stage that is susceptible to disruption by noise. The results also confirm the importance of lexical and linguistic factors on memory tasks thought to tap short-term memory.

  12. Correlated evolution of sternal keel length and ilium length in birds

    Directory of Open Access Journals (Sweden)

    Tao Zhao

    2017-07-01

    Full Text Available The interplay between the pectoral module (the pectoral girdle and limbs) and the pelvic module (the pelvic girdle and limbs) plays a key role in shaping avian evolution, but prior empirical studies on trait covariation between the two modules are limited. Here we empirically test whether (size-corrected) sternal keel length and ilium length are correlated during avian evolution using phylogenetic comparative methods. Our analyses of extant birds and Mesozoic birds both recover a significantly positive correlation. The results provide new evidence regarding the integration between the pelvic and pectoral modules. The correlated evolution of sternal keel length and ilium length may serve as a mechanism to cope with the effect on performance caused by a tradeoff in muscle mass between the pectoral and pelvic modules, via changing moment arms of muscles that function in flight and in terrestrial locomotion.

  13. Fatores associados à maior mortalidade e tempo de internação prolongado em uma unidade de terapia intensiva de adultos Factors associated with increased mortality and prolonged length of stay in an adult intensive care unit

    Directory of Open Access Journals (Sweden)

    Ana Beatriz Francioso de Oliveira

    2010-09-01

    Full Text Available OBJETIVO: A unidade de terapia intensiva é sinônimo de gravidade e apresenta taxa de mortalidade entre 5,4% e 33%. Com o aperfeiçoamento de novas tecnologias, o paciente pode ser mantido por longo período nessa unidade, ocasionando altos custos financeiros, morais e psicológicos para todos os envolvidos. O objetivo do presente estudo foi avaliar os fatores associados à maior mortalidade e tempo de internação prolongado em uma unidade de terapia intensiva adulto. MÉTODOS: Participaram deste estudo todos os pacientes admitidos consecutivamente na unidade de terapia intensiva de adultos, clínica/cirúrgica do Hospital das Clínicas da Universidade Estadual de Campinas, no período de seis meses. Foram coletados dados como: sexo, idade, diagnóstico, antecedentes pessoais, APACHE II, dias de ventilação mecânica invasiva, reintubação orotraqueal, traqueostomia, dias de internação na unidade de terapia intensiva, alta ou óbito na unidade de terapia intensiva. RESULTADOS: Foram incluídos no estudo 401 pacientes, sendo 59,6% homens e 40,4% mulheres, com idade média de 53,8±18,0 anos. A média de internação na unidade de terapia intensiva foi de 8,2±10,8 dias, com taxa de mortalidade de 13,46%. Dados significativos para mortalidade e tempo de internação prolongado em unidade de terapia intensiva (p<0,05) foram: APACHE >11, traqueostomia e reintubação. CONCLUSÃO: APACHE >11, traqueostomia e reintubação estiveram associados, neste estudo, à maior taxa de mortalidade e tempo de permanência prolongado em unidade de terapia intensiva. OBJECTIVE: The intensive care unit is synonymous with high severity, and its mortality rates are between 5.4% and 33%. With the development of new technologies, a patient can be maintained for a long time in the unit, causing high financial, psychological and moral costs for all involved. This study aimed to evaluate the risk factors for mortality and prolonged length of stay in an adult intensive care unit. METHODS: The study

  14. Fundamental length

    International Nuclear Information System (INIS)

    Pradhan, T.

    1975-01-01

    The concept of a fundamental length was first put forward by Heisenberg on purely dimensional grounds. From a study of the observed masses of the elementary particles known at that time, it is surmised that this length should be of the order of magnitude of 10⁻¹³ cm. It was Heisenberg's belief that the introduction of such a fundamental length would eliminate the divergence difficulties from relativistic quantum field theory by cutting off the high-energy regions of the 'proper fields'. Since the divergence difficulties arise primarily due to an infinite number of degrees of freedom, one simple remedy would be the introduction of a principle that limits these degrees of freedom by removing the effectiveness of waves with a frequency exceeding a certain limit, without destroying the relativistic invariance of the theory. The principle can be stated as follows: it is in principle impossible to devise an experiment of any kind that will permit a distinction between the positions of two particles at rest, the distance between which is below a certain limit. A more elegant way of introducing a fundamental length into quantum theory is through commutation relations between two position operators. In a quantum field theory such as quantum electrodynamics, it can be introduced through the commutation relation between two interpolating photon fields (vector potentials). (K.B.)

  15. Exploring SWOT discharge algorithm accuracy on the Sacramento River

    Science.gov (United States)

    Durand, M. T.; Yoon, Y.; Rodriguez, E.; Minear, J. T.; Andreadis, K.; Pavelsky, T. M.; Alsdorf, D. E.; Smith, L. C.; Bales, J. D.

    2012-12-01

    Scheduled for launch in 2019, the Surface Water and Ocean Topography (SWOT) satellite mission will utilize a Ka-band radar interferometer to measure river heights, widths, and slopes globally, as well as characterize storage change in lakes and ocean surface dynamics, with a spatial resolution ranging from 10-70 m and temporal revisits on the order of a week. A discharge algorithm has been formulated to solve the inverse problem of characterizing river bathymetry and the roughness coefficient from SWOT observations. The algorithm uses a Bayesian Markov chain estimation approach, treats rivers as sets of interconnected reaches (typically 5-10 km in length), and produces best estimates of river bathymetry, roughness coefficient, and discharge, given the SWOT observables. AirSWOT (the airborne version of SWOT) consists of a radar interferometer similar to SWOT's, but mounted aboard an aircraft. AirSWOT spatial resolution will range from 1-35 m. In early 2013, AirSWOT will perform several flights over the Sacramento River, capturing river height, width, and slope at several different flow conditions. The Sacramento River presents an excellent target given that the river includes some stretches heavily affected by management (diversions, bypasses, etc.). AirSWOT measurements will be used to validate SWOT observation performance, but are also a unique opportunity for testing and demonstrating the capabilities and limitations of the discharge algorithm. This study uses HEC-RAS simulations of the Sacramento River first to characterize expected discharge algorithm accuracy on the Sacramento River, and second to explore the AirSWOT measurements required to perform a successful inversion with the discharge algorithm. We focus on several specific research questions affecting algorithm performance: 1) To what extent do lateral inflows confound algorithm performance? We examine the ~100 km stretch of river from Colusa, CA to the Yolo Bypass, and investigate how the
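At its core, this kind of inversion couples the observed widths and slopes to discharge through a flow-resistance law, with Manning's equation the usual choice. A minimal forward calculation in SI units, assuming a rectangular cross-section for illustration (the real algorithm estimates bathymetry and roughness rather than assuming them):

```python
def manning_discharge(n_manning, width, depth, slope):
    # Manning's equation (SI units): Q = (1/n) * A * R^(2/3) * S^(1/2),
    # with A the flow area, R = A / P the hydraulic radius and P the
    # wetted perimeter of a rectangular channel.
    area = width * depth
    wetted_perimeter = width + 2.0 * depth
    hydraulic_radius = area / wetted_perimeter
    return (1.0 / n_manning) * area * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5
```

For example, a 100 m wide, 2 m deep reach with n = 0.03 and a slope of 10⁻⁴ yields a discharge of roughly 100 m³/s; the inverse problem runs this relation backwards from repeated height/width/slope observations.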

  16. A Flexible Reservation Algorithm for Advance Network Provisioning

    Energy Technology Data Exchange (ETDEWEB)

    Balman, Mehmet; Chaniotakis, Evangelos; Shoshani, Arie; Sim, Alex

    2010-04-12

    Many scientific applications need support from a communication infrastructure that provides predictable performance, which requires effective algorithms for bandwidth reservations. Network reservation systems such as ESnet's OSCARS establish secure virtual circuits with guaranteed bandwidth for a certain length of time. However, users currently cannot inquire about bandwidth availability, nor receive alternative suggestions when reservation requests fail. In general, the number of reservation options is exponential in the number of nodes n and the current reservation commitments. We present a novel approach for path finding in time-dependent networks that takes advantage of user-provided parameters of total volume and time constraints, and produces options for earliest completion and shortest duration. The theoretical complexity is only O(n²r²) in the worst case, where r is the number of reservations in the desired time interval. We have implemented our algorithm and developed efficient methodologies for incorporation into network reservation frameworks. Performance measurements confirm the theoretical predictions.
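The primitive behind such an availability inquiry is computing the spare bandwidth of a link over a time window, given its committed reservations. A small sketch (not OSCARS code; it assumes piecewise-constant reservations given as `(start, end, bandwidth)` tuples):

```python
def min_available_bandwidth(capacity, reservations, t0, t1):
    # Minimum spare bandwidth on a link over the window [t0, t1).
    # Utilisation is piecewise constant, so it suffices to sample t0 and
    # every reservation boundary that falls strictly inside the window.
    events = sorted(t for s, e, _ in reservations for t in (s, e) if t0 < t < t1)
    spare = capacity
    for p in [t0] + events:
        used = sum(bw for s, e, bw in reservations if s <= p < e)
        spare = min(spare, capacity - used)
    return spare
```

A transfer of total volume V submitted at t0 then completes no earlier than t0 + V / spare on that link, which is the quantity an earliest-completion search optimises over candidate paths.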

  17. 2D evaluation of spectral LIBS data derived from heterogeneous materials using cluster algorithm

    Science.gov (United States)

    Gottlieb, C.; Millar, S.; Grothe, S.; Wilsch, G.

    2017-08-01

    Laser-induced Breakdown Spectroscopy (LIBS) is capable of providing spatially resolved element maps of the chemical composition of a sample. The evaluation of heterogeneous materials is often a challenging task, especially in the case of phase boundaries. In order to extract information about a specific phase of a material, a method that offers an objective evaluation is necessary. This paper introduces a cluster algorithm for heterogeneous building materials (concrete) that separates the spectral information of non-relevant aggregates from that of the cement matrix. In civil engineering, information about the quantitative ingress of harmful species like Cl⁻, Na⁺ and SO₄²⁻ is of great interest for evaluating the remaining lifetime of structures (Millar et al., 2015; Wilsch et al., 2005). These species trigger damage processes such as the alkali-silica reaction (ASR) or chloride-induced corrosion of the reinforcement. Therefore, discrimination between the different phases, mainly cement matrix and aggregates, is highly important (Weritz et al., 2006). For the 2D evaluation, the expectation-maximization algorithm (EM algorithm; Ester and Sander, 2000) has been tested for the application presented in this work. The method is introduced and different figures of merit are presented according to recommendations given in Haddad et al. (2014), and the advantages of this method are highlighted. After phase separation, non-relevant information can be excluded and only the phase of interest displayed. Using a set of samples with known and unknown composition, the EM clustering method has been validated following Gustavo González and Ángeles Herrador (2007).
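For intuition, EM clustering of LIBS data can be reduced to its one-dimensional essence: a two-component Gaussian mixture whose components play the roles of cement matrix and aggregate intensities. A self-contained NumPy sketch (an illustration of the EM mechanics, not the authors' implementation):

```python
import numpy as np

def em_two_gaussians(x, n_iter=100):
    # EM for a 1-D two-component Gaussian mixture. The E-step computes the
    # responsibility of each component for each sample; the M-step
    # re-estimates means, variances and mixing weights from them.
    mu = np.percentile(x, [25.0, 75.0])   # crude but effective init
    var = np.full(2, x.var())
    w = np.full(2, 0.5)
    for _ in range(n_iter):
        # E-step: r[i, k] proportional to w_k * N(x_i | mu_k, var_k)
        pdf = np.exp(-((x[:, None] - mu) ** 2) / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)
        r = w * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted moment updates
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        w = nk / x.size
    return mu, var, w
```

After convergence, assigning each spectrum (pixel) to the component with the larger responsibility yields the phase map from which non-relevant aggregates can be masked out.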

  18. Floor versus cage rearing: effects on production, egg quality and physical condition of laying hens housed in furnished cages Cria em piso versus cria em bateria: efeitos na produção, qualidade de ovos e condição física de poedeiras alojadas em gaiolas enriquecidas

    Directory of Open Access Journals (Sweden)

    Victor Fernando Büttow Roll

    2009-08-01

    Full Text Available The influences of floor and cage rearing on egg production, egg quality and physical condition were investigated in laying hens housed in furnished cages. Two groups of 180 Isa Brown commercial layer pullets were reared in cages (CR) or floor pens (FR) and transferred to furnished cages, where their production, egg quality and physical condition were observed throughout the laying period (18-78 weeks of age). At 17 weeks of age, hens were placed in one of 36 furnished cages with 10 birds in each cage, each containing a nest box, perches, a dust bath, and abrasive strips. From 19 to 78 weeks of age, egg production data were collected daily. Commercial egg quality was assessed monthly. At 19 and 78 weeks of age, claw length and feather cover were visually assessed using a four-point scale in a sample (10%) of hens. Production variables were above breeders’ standards and not significantly affected by rearing system. Dirty eggs and cracked eggs were more frequent in FR birds. Meat spots were significantly more frequent in FR hens at middle lay, but less frequent at the end of the laying period. Rearing system did not influence egg and yolk weight, Haugh units or shell colour. Among FR hens, eggshell density, thickness and mass were significantly lower at the end of the laying period. Rearing system did not affect claw length, but the plumage of FR hens was negatively affected at the end of the production cycle. Avaliou-se a influência dos sistemas de criação (em piso ou em baterias) sobre o desempenho produtivo, a qualidade de ovos e a condição física de poedeiras alojadas em gaiolas enriquecidas. Dois grupos de 180 frangas Isa Brown foram criados em baterias (CR) ou em piso (FR) e transferidos para gaiolas enriquecidas, onde a produção, a qualidade de ovos e a condição física foram observadas durante um ciclo completo de postura (18-78 semanas de idade). Com 17 semanas de idade, as frangas foram alojadas em 36 gaiolas enriquecidas, 10 aves por

  19. Blind Extraction of Chaotic Signals by Using the Fast Independent Component Analysis Algorithm

    International Nuclear Information System (INIS)

    Hong-Bin, Chen; Jiu-Chao, Feng; Yong, Fang

    2008-01-01

    We report the results of using the fast independent component analysis (FastICA) algorithm to realize blind extraction of chaotic signals. Two cases are taken into consideration: namely, that the mixture is noiseless or contaminated by noise. Pre-whitening is employed to reduce the effect of noise before using the FastICA algorithm. The correlation coefficient criterion is adopted to evaluate the performance, and the success rate is defined as a new criterion to indicate the performance with respect to noise or different mixing matrices. Simulation results show that the FastICA algorithm can extract the chaotic signals effectively. The impact of noise, the length of a signal frame, the number of sources and the number of observed mixtures on the performance is investigated in detail. It is also shown that regarding noise as an independent source is not always correct.
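The pipeline described here (centre, pre-whiten, then run fixed-point FastICA iterations) can be sketched end to end in NumPy. The logistic-map chaotic source, sawtooth source and mixing matrix below are illustrative assumptions of this sketch, not the paper's exact simulation setup:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Source 1: a chaotic logistic-map signal; source 2: a sawtooth.
s1 = np.empty(n)
s1[0] = 0.3
for i in range(1, n):
    s1[i] = 3.9 * s1[i - 1] * (1.0 - s1[i - 1])
s2 = (np.arange(n) % 50) / 50.0
S = np.vstack([s1, s2])
S = S - S.mean(axis=1, keepdims=True)

A = np.array([[0.8, 0.4],    # mixing matrix: an assumption of this demo
              [0.3, 0.9]])
X = A @ S                    # observed mixtures

# Pre-whitening: decorrelate and normalise the mixtures.
d, E = np.linalg.eigh(np.cov(X))
Z = E @ np.diag(d ** -0.5) @ E.T @ X

# Deflationary FastICA with the tanh nonlinearity.
W = np.zeros((2, 2))
for k in range(2):
    w = rng.standard_normal(2)
    w /= np.linalg.norm(w)
    for _ in range(300):
        wx = w @ Z
        g = np.tanh(wx)
        w_new = (Z * g).mean(axis=1) - (1.0 - g ** 2).mean() * w
        for j in range(k):                  # deflation: stay orthogonal
            w_new -= (w_new @ W[j]) * W[j]  # to components already found
        w_new /= np.linalg.norm(w_new)
        done = abs(abs(w_new @ w) - 1.0) < 1e-10
        w = w_new
        if done:
            break
    W[k] = w

Y = W @ Z   # recovered sources, up to order, sign and scale
```

The correlation coefficient between each true source and its best-matching recovered component is the same figure of merit the abstract uses to score an extraction as successful.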

  20. Do Branch Lengths Help to Locate a Tree in a Phylogenetic Network?

    Science.gov (United States)

    Gambette, Philippe; van Iersel, Leo; Kelk, Steven; Pardi, Fabio; Scornavacca, Celine

    2016-09-01

    Phylogenetic networks are increasingly used in evolutionary biology to represent the history of species that have undergone reticulate events such as horizontal gene transfer, hybrid speciation and recombination. One of the most fundamental questions that arise in this context is whether the evolution of a gene with one copy in all species can be explained by a given network. In mathematical terms, this is often translated in the following way: is a given phylogenetic tree contained in a given phylogenetic network? Recently this tree containment problem has been widely investigated from a computational perspective, but most studies have only focused on the topology of the phylogenies, ignoring a piece of information that, in the case of phylogenetic trees, is routinely inferred by evolutionary analyses: branch lengths. These measure the amount of change (e.g., nucleotide substitutions) that has occurred along each branch of the phylogeny. Here, we study a number of versions of the tree containment problem that explicitly account for branch lengths. We show that, although length information has the potential to locate more precisely a tree within a network, the problem is computationally hard in its most general form. On a positive note, for a number of special cases of biological relevance, we provide algorithms that solve this problem efficiently. This includes the case of networks of limited complexity, for which it is possible to recover, among the trees contained by the network with the same topology as the input tree, the closest one in terms of branch lengths.

  1. Fattori influenzanti la distribuzione del Daino (Dama dama) in un'area dell'Appennino settentrionale

    Directory of Open Access Journals (Sweden)

    Enrico Merli

    2003-10-01

    Full Text Available Abstract Factors affecting the distribution of fallow deer (Dama dama) in an area of the northern Apennines (Italy) A wild population of fallow deer (Dama dama), originating in the Seventies from an accidental release, has been monitored from 1996 to 2000 in the hilly and mountainous habitat of the Pavia county (Northern Apennines). Sightings and signs of presence on 34 transects (total length of 163.2 km) were collected twice a year in spring. The distribution of the species in the study area (732 km²) was defined on the basis of a grid of 215 4-km² sample units. The species occupied 28% and 22% of the study area in 1997 and 2000, respectively. The relative abundance of the population, estimated as the mean of sightings-signs/km, varied from 0.50 (S.E. = 0.27) in 1997 to 0.03 (S.E. = 0.03) in 1998. Discriminant Function and Logistic Regression Analysis were performed to investigate the influence of 23 habitat variables on the distribution of fallow deer in 1997. Both statistical techniques identified the length of the edge between woods and crops as the main variable affecting fallow deer distribution. The analysis also pointed out that the area potentially suitable for the species rose to 388 km² (53.0% of the study area). Riassunto Una popolazione di Daino presente nella zona collinare e montana della provincia di Pavia (732 km²), originatasi da una fuga di animali da un allevamento negli anni 70, è stata monitorata dal 1996 al 2000. Per la definizione della distribuzione è stato predisposto un sistema di 34 transetti (163,2 km in totale) percorsi 2 volte l'anno, in primavera, annotando e mappando tutti i segni di presenza e le osservazioni dirette della specie. L'area di studio è quindi stata suddivisa in maglie quadrate di 400 ha ciascuna (Unità di Campionamento), a cui è stata associata, per ogni anno, la presenza o l'assenza della specie. La superficie interessata dalla

  2. Vermiform appendix: positions and length – a study of 377 cases and literature review

    Directory of Open Access Journals (Sweden)

    Sandro Cilindro de Souza

    2015-10-01

    Full Text Available Objective: Evaluation of the frequency of the relative positions and length of the vermiform appendix in a group of corpses examined by the authors. Method: Dissection of 377 autopsied adult cadavers. Results and conclusions: Retrocecal: 43.5%; subcecal: 24.4%; post-ileal: 14.3%; pelvic: 9.3%; paracecal: 5.8%; pre-ileal: 2.4%; other positions: 0.27%; mean length: 11.4 cm. Resumo: Objetivo: Avaliação da frequência das posições relativas e do comprimento do apêndice vermiforme em um grupo de cadáveres examinados pelos autores. Método: Dissecção de 377 cadáveres adultos necropsiados. Resultados e conclusões: Apêndices retrocecais: 43,5%; subcecais: 24,4%; pós-ileais: 14,3%; pélvicos: 9,3%; paracecais: 5,8%; pré-ileais: 2,4%; outras posições: 0,27%. Comprimento médio: 11,4 cm. Keywords: Vermiform appendix, Cecum, Anatomical variation, Appendicitis, Palavras-chave: Apêndice vermiforme, Ceco, Variação anatômica, Apendicite

  3. Symmetric encryption algorithms using chaotic and non-chaotic generators: A review.

    Science.gov (United States)

    Radwan, Ahmed G; AbdElHaleem, Sherif H; Abd-El-Hafiz, Salwa K

    2016-03-01

    This paper summarizes the symmetric image encryption results of 27 different algorithms, which include substitution-only, permutation-only or both phases. The cores of these algorithms are based on several discrete chaotic maps (Arnold's cat map and a combination of three generalized maps), one continuous chaotic system (Lorenz) and two non-chaotic generators (fractal- and chess-based algorithms). Each algorithm has been analyzed by the correlation coefficients between pixels (horizontal, vertical and diagonal), differential attack measures, Mean Square Error (MSE), entropy, sensitivity analyses and the 15 standard tests of the National Institute of Standards and Technology (NIST) SP 800-22 statistical suite. The analyzed algorithms include a set of new image encryption algorithms based on non-chaotic generators, using substitution only (fractals), permutation only (chess-based) or both. Moreover, two different permutation scenarios are presented, where the permutation phase does or does not have a relationship with the input image through an ON/OFF switch. Different encryption-key lengths and complexities are provided, from short to long keys, to resist brute-force attacks. In addition, the sensitivities of these different techniques to a one-bit change in the input parameters of the substitution key as well as the permutation key are assessed. Finally, a comparative discussion of this work versus much recent research with respect to the generators used, type of encryption, and analyses is presented to highlight the strengths and added contribution of this paper.
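Of the metrics listed, the adjacent-pixel correlation coefficient is the simplest to reproduce: a natural image shows correlations near 1, while a well-encrypted image should be near 0. A sketch of the horizontal variant (the vertical and diagonal variants differ only in the neighbour offset):

```python
import numpy as np

def horizontal_adjacent_correlation(img):
    # Pearson correlation between each pixel and its right-hand neighbour.
    # Values near 1 indicate a natural image; near 0, a well-scrambled one.
    x = img[:, :-1].ravel().astype(float)
    y = img[:, 1:].ravel().astype(float)
    return float(np.corrcoef(x, y)[0, 1])
```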

  4. An efficient algorithm for sorting by block-interchanges and its application to the evolution of vibrio species.

    Science.gov (United States)

    Lin, Ying Chih; Lu, Chin Lung; Chang, Hwan-You; Tang, Chuan Yi

    2005-01-01

    In the study of genome rearrangement, block-interchanges have recently been proposed as a new kind of global rearrangement event affecting a genome by swapping two nonintersecting segments of any length. The so-called block-interchange distance problem, which is equivalent to the sorting-by-block-interchange problem, is to find a minimum series of block-interchanges for transforming one chromosome into another. In this paper, we study this problem for circular chromosomes and propose an O(δn)-time algorithm for solving it by making use of permutation groups in algebra, where n is the length of the circular chromosome and δ is the minimum number of block-interchanges required for the transformation, which can be calculated in O(n) time in advance. Moreover, we obtain analogous results by extending our algorithm to linear chromosomes. Finally, we have implemented our algorithm and applied it to the circular genomic sequences of three human vibrio pathogens to predict their evolutionary relationships. Our experimental results coincide with previous ones obtained by others using a different comparative genomics approach, which implies that block-interchange events seem to play a significant role in the evolution of vibrio species.
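A block-interchange itself is easy to state operationally: cut out two disjoint blocks and exchange them. A small helper makes the event concrete (the index convention is an assumption of this sketch, not the paper's notation):

```python
def block_interchange(perm, i, j, k, l):
    # Swap the non-intersecting segments perm[i:j] and perm[k:l]
    # (0-based, half-open indices, with i < j <= k < l).
    assert 0 <= i < j <= k < l <= len(perm)
    return perm[:i] + perm[k:l] + perm[j:k] + perm[i:j] + perm[l:]
```

For instance, one block-interchange suffices to sort [4, 5, 3, 1, 2]: swapping the blocks [4, 5] and [1, 2] yields the identity permutation, so its block-interchange distance δ is 1.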

  5. Dataset exploited for the development and validation of automated cyanobacteria quantification algorithm, ACQUA

    Directory of Open Access Journals (Sweden)

    Emanuele Gandola

    2016-09-01

    Full Text Available The estimation and quantification of potentially toxic cyanobacteria in lakes and reservoirs are often used as a proxy of risk for water intended for human consumption and recreational activities. Here, we present data sets collected from three volcanic Italian lakes (Albano, Vico, Nemi) that present filamentous cyanobacteria strains in different environments. The presented data sets were used to estimate the abundance and morphometric characteristics of potentially toxic cyanobacteria, comparing manual vs. automated estimation performed by ACQUA ("ACQUA: Automated Cyanobacterial Quantification Algorithm for toxic filamentous genera using spline curves, pattern recognition and machine learning"; Gandola et al., 2016 [1]). This strategy was used to assess the algorithm's performance and to set up the denoising algorithm. Abundance and total length estimations were used for software development; to this aim we evaluated the efficiency of the statistical tools and mathematical algorithms described here. Convolution of the input images with the Sobel filter was chosen to remove background signals, then spline curves and the least squares method were used to parameterize detected filaments and to recombine crossing and interrupted sections, aimed at performing precise abundance estimations and morphometric measurements. Keywords: Comparing data, Filamentous cyanobacteria, Algorithm, Denoising, Natural sample
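The Sobel step can be sketched with plain NumPy, building the convolution from array shifts; thresholding the resulting gradient magnitude to keep filament edges and drop flat background is one plausible use of it here (an assumption of this sketch, not ACQUA's exact pipeline):

```python
import numpy as np

def sobel_magnitude(img):
    # Gradient magnitude from the standard 3x3 Sobel kernels, implemented
    # via array shifts so no SciPy dependency is needed. Borders wrap,
    # so keep regions of interest away from the image edges.
    kx = np.array([[-1.0, 0.0, 1.0],
                   [-2.0, 0.0, 2.0],
                   [-1.0, 0.0, 1.0]])
    ky = kx.T
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            shifted = np.roll(np.roll(img, -di, axis=0), -dj, axis=1)
            gx += kx[di + 1, dj + 1] * shifted
            gy += ky[di + 1, dj + 1] * shifted
    return np.hypot(gx, gy)
```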

  6. The use of a standardized PCT-algorithm reduces costs in intensive care in septic patients - a DRG-based simulation model

    Directory of Open Access Journals (Sweden)

    Wilke MH

    2011-12-01

    Full Text Available Abstract Introduction The management of bloodstream infections, especially sepsis, is a difficult task. An optimal antibiotic therapy (ABX) is paramount for success. Procalcitonin (PCT) is a well-investigated biomarker that allows close monitoring of the infection and management of ABX. It has proven to be a cost-efficient diagnostic tool. In Diagnosis Related Groups (DRG) based reimbursement systems, hospitals get only a fixed amount of money for certain treatments. Thus it is very important to obtain an optimal balance of clinical treatment and resource consumption, namely the length of stay in hospital and especially in the Intensive Care Unit (ICU). We investigated which economic effects an optimized PCT-based algorithm for antibiotic management could have. Materials and methods We collected inpatient episode data from 16 hospitals. These data contain administrative and clinical information such as length of stay, days in the ICU, or diagnoses and procedures. Various RCTs and reviews have published different algorithms for the use of PCT to manage ABX. Moreover, RCTs and meta-analyses have demonstrated possible savings in days of ABX (ABD) and length of stay in ICU (ICUD). As the meta-analyses use studies on different patient populations (pneumonia, sepsis, other bacterial infections), we undertook a short meta-analysis of 6 relevant studies investigating sepsis or ventilator-associated pneumonia (VAP). From this analysis we obtained savings in ABD and ICUD by calculating the weighted mean differences. We then designed a new PCT-based algorithm using results from two very recent reviews. The algorithm contains evidence from several studies. From the patient data we calculated cost estimates using German national standard costing information for the German G-DRG system. We developed a simulation model in which the possible savings and the extra costs for (on average) 8 PCT tests due to our algorithm were brought into the equation. 
Results We calculated ABD
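The simulation boils down to per-patient arithmetic: monetised savings from shorter ICU stays and fewer antibiotic days, minus the cost of serial PCT testing. All unit costs and per-patient savings below are illustrative assumptions, not figures from the study:

```python
# Illustrative unit costs (EUR) and testing frequency -- assumptions,
# not the study's G-DRG costing data.
ICU_DAY_COST = 1500.0   # per ICU day
ABX_DAY_COST = 50.0     # per antibiotic day
PCT_TEST_COST = 15.0    # per PCT measurement
N_PCT_TESTS = 8         # PCT tests per episode, as in the simulation model

def net_saving_per_patient(icu_days_saved, abx_days_saved):
    # Savings from a shorter ICU stay and fewer antibiotic days,
    # minus the extra diagnostic cost of serial PCT testing.
    gain = icu_days_saved * ICU_DAY_COST + abx_days_saved * ABX_DAY_COST
    return gain - N_PCT_TESTS * PCT_TEST_COST
```

Plugging in hypothetical weighted mean differences of 1.8 ICU days and 2.5 antibiotic days saved, the extra PCT costs are dwarfed by the ICU savings, which is the qualitative conclusion of the model.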

  7. STAR Algorithm Integration Team - Facilitating operational algorithm development

    Science.gov (United States)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  8. TH-CD-206-01: Expectation-Maximization Algorithm-Based Tissue Mixture Quantification for Perfusion MRI

    International Nuclear Information System (INIS)

    Han, H; Xing, L; Liang, Z; Li, L

    2016-01-01

    Purpose: To investigate the feasibility of estimating the tissue mixture perfusions and quantifying cerebral blood flow change in arterial spin labeled (ASL) perfusion MR images. Methods: The proposed perfusion MR image analysis framework consists of 5 steps: (1) Inhomogeneity correction was performed on the T1- and T2-weighted images, which are available for each studied perfusion MR dataset. (2) We used the publicly available FSL toolbox to strip off the non-brain structures from the T1- and T2-weighted MR images. (3) We applied a multi-spectral tissue-mixture segmentation algorithm on both T1- and T2-structural MR images to roughly estimate the fraction of each tissue type - white matter, grey matter and cerebrospinal fluid - inside each image voxel. (4) The distributions of the three tissue types or tissue mixture across the structural image array are down-sampled and mapped onto the ASL voxel array via a co-registration operation. (5) The presented 4-dimensional expectation-maximization (4D-EM) algorithm takes the down-sampled distributions of the three tissue types on the perfusion image data to generate the perfusion mean, variance and percentage images for each tissue type of interest. Results: Experimental results on three volunteer datasets demonstrated that the multi-spectral tissue-mixture segmentation algorithm was effective in initializing tissue mixtures from T1- and T2-weighted MR images. Compared with the conventional ASL image processing toolbox, the proposed 4D-EM algorithm not only generated comparable perfusion mean images, but also produced perfusion variance and percentage images, which the ASL toolbox cannot obtain. It is observed that the perfusion contribution percentages may not be the same as the corresponding tissue mixture volume fractions estimated in the structural images. 
Conclusion: A specific application to brain ASL images showed that the presented perfusion image analysis method is promising for detecting subtle changes in tissue perfusions
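The mixture idea behind step (5) can be illustrated in miniature: if each ASL voxel value is modelled as the fraction-weighted sum of per-tissue perfusion means, those means are recoverable by least squares (the 4D-EM algorithm goes further and also estimates variances and percentages). The fractions, means and noise level below are synthetic assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
n_vox = 500

# Synthetic per-voxel tissue fractions (WM, GM, CSF), standing in for the
# fractions mapped from the structural segmentation; each voxel's
# fractions sum to 1.
F = rng.dirichlet([2.0, 2.0, 1.0], size=n_vox)

mu_true = np.array([25.0, 60.0, 5.0])          # assumed per-tissue means
y = F @ mu_true + rng.normal(0.0, 1.0, n_vox)  # noisy ASL voxel values

# Least-squares recovery of the per-tissue perfusion means from the
# linear mixture model y ~ F @ mu.
mu_hat, *_ = np.linalg.lstsq(F, y, rcond=None)
```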

  9. TH-CD-206-01: Expectation-Maximization Algorithm-Based Tissue Mixture Quantification for Perfusion MRI

    Energy Technology Data Exchange (ETDEWEB)

    Han, H; Xing, L [Stanford University, Palo Alto, CA (United States); Liang, Z [Stony Brook University, Stony Brook, NY (United States); Li, L [City University of New York College of Staten Island, Staten Island, NY (United States)

    2016-06-15

    Purpose: To investigate the feasibility of estimating the tissue mixture perfusions and quantifying cerebral blood flow change in arterial spin labeled (ASL) perfusion MR images. Methods: The proposed perfusion MR image analysis framework consists of 5 steps: (1) Inhomogeneity correction was performed on the T1- and T2-weighted images, which are available for each studied perfusion MR dataset. (2) We used the publicly available FSL toolbox to strip off the non-brain structures from the T1- and T2-weighted MR images. (3) We applied a multi-spectral tissue-mixture segmentation algorithm on both T1- and T2-structural MR images to roughly estimate the fraction of each tissue type - white matter, grey matter and cerebrospinal fluid - inside each image voxel. (4) The distributions of the three tissue types or tissue mixture across the structural image array are down-sampled and mapped onto the ASL voxel array via a co-registration operation. (5) The presented 4-dimensional expectation-maximization (4D-EM) algorithm takes the down-sampled distributions of the three tissue types on the perfusion image data to generate the perfusion mean, variance and percentage images for each tissue type of interest. Results: Experimental results on three volunteer datasets demonstrated that the multi-spectral tissue-mixture segmentation algorithm was effective in initializing tissue mixtures from T1- and T2-weighted MR images. Compared with the conventional ASL image processing toolbox, the proposed 4D-EM algorithm not only generated comparable perfusion mean images, but also produced perfusion variance and percentage images, which the ASL toolbox cannot obtain. It is observed that the perfusion contribution percentages may not be the same as the corresponding tissue mixture volume fractions estimated in the structural images. 
Conclusion: A specific application to brain ASL images showed that the presented perfusion image analysis method is promising for detecting subtle changes in tissue perfusions.

  10. The diet of the fox (<em>Vulpes vulpes</em>) in woodlands of Orobie Alps (Lombardy region, Northern Italy) / Alimentazione della Volpe (<em>Vulpes vulpes</em>) in aree boscate delle Alpi Orobie

    Directory of Open Access Journals (Sweden)

    Marco Cantini

    1991-07-01

    Full Text Available Abstract The diet of the fox was investigated by analysis of 273 scats, collected along standard trails from April to November 1987 and 1988. Food habits of foxes were described for three altitudinal ranges. Mammals, mainly <em>Clethrionomys glareolus</em> and <em>Microtus multiplex</em>, were the staple food (percentage of frequency 42.8%), followed by fruits and other vegetables (26.7% and 37.3% respectively). Birds, Invertebrates (mainly Insects) and garbage were little eaten. The game species (ungulates, hares, pheasants) occurred with a low frequency (8.4%) in the diet. The trophic niche breadth varied little through the altitudinal ranges and the seasons. The trophic niche overlap between the fox and the genus <em>Martes</em> (190 scats of <em>M. martes</em> and <em>M. foina</em> were examined) is relatively wide (O=0.868). Riassunto La dieta della Volpe (<em>Vulpes vulpes</em>) in aree boscate delle Alpi Orobie (Val Lesina) è stata indagata nel periodo aprile-novembre 1987 e 1988 mediante l'analisi di 273 feci, raccolte lungo percorsi-campione ricadenti in tre piani vegetazionali. I Mammiferi, in particolare <em>Clethrionomys glareolus</em> e <em>Microtus multiplex</em>, sono la componente principale della dieta (frequenza percentuale 42,8%). Rilevante è anche il consumo di frutti (soprattutto in estate e autunno) e di altri vegetali (26,7% e 37,3% rispettivamente), mentre poco frequente è quello di Uccelli, Invertebrati e rifiuti. Complessivamente ridotta è l'azione predatoria della Volpe nei confronti delle specie di interesse venatorio (Ungulati, lepri, Galliformi). L'ampiezza della nicchia trofica mostra modeste variazioni stagionali e altitudinali. Il grado di sovrapposizione tra la nicchia trofica della Volpe e quella del genere <em>Martes</em>, quest'ultima ricavata dall'analisi di 190 feci di Martora (<em>M. martes</em>) e Faina (<em>M. foina</em>), è elevato (O=0,868). Tuttavia, poiché in condizioni di

  11. A New Algorithm for Cartographic Simplification of Streams and Lakes Using Deviation Angles and Error Bands

    Directory of Open Access Journals (Sweden)

    Türkay Gökgöz

    2015-10-01

    Full Text Available Multi-representation databases (MRDBs) are used in several geographical information system applications for different purposes. MRDBs are mainly obtained through model and cartographic generalizations. Simplification is the essential operator of cartographic generalization, and streams and lakes are essential features in hydrography. In this study, a new algorithm was developed for the simplification of streams and lakes. In this algorithm, deviation angles and error bands are used to determine the characteristic vertices and the planimetric accuracy of the features, respectively. The algorithm was tested using a high-resolution national hydrography dataset of Pomme de Terre, a sub-basin in the USA. To assess the performance of the new algorithm, the Bend Simplify and Douglas-Peucker algorithms, the medium-resolution hydrography dataset of the sub-basin, and Töpfer’s radical law were used. For quantitative analysis, the vertex numbers, the lengths, and the sinuosity values were computed. Consequently, it was shown that the new algorithm was able to meet the main requirements (i.e., accuracy, legibility and aesthetics, and storage).
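
    The deviation-angle algorithm itself is not reproduced in the abstract, but the Douglas-Peucker algorithm used as one of its benchmarks can be sketched as follows (a minimal recursive implementation; the tolerance `epsilon` is in coordinate units):

    ```python
    import math

    def point_line_distance(p, a, b):
        """Perpendicular distance from point p to the line through a and b."""
        (px, py), (ax, ay), (bx, by) = p, a, b
        if (ax, ay) == (bx, by):
            return math.hypot(px - ax, py - ay)
        num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
        return num / math.hypot(bx - ax, by - ay)

    def douglas_peucker(points, epsilon):
        """Keep the vertex farthest from the chord if it deviates more than epsilon."""
        if len(points) < 3:
            return list(points)
        dmax, index = 0.0, 0
        for i in range(1, len(points) - 1):
            d = point_line_distance(points[i], points[0], points[-1])
            if d > dmax:
                dmax, index = d, i
        if dmax > epsilon:
            left = douglas_peucker(points[:index + 1], epsilon)
            right = douglas_peucker(points[index:], epsilon)
            return left[:-1] + right          # avoid duplicating the split vertex
        return [points[0], points[-1]]
    ```

    With a tolerance larger than the maximum offset the interior vertex is dropped; with a smaller tolerance it is retained as a characteristic vertex.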

  12. Osservazioni in cattività sul ciclo stagionale del peso corporeo e sull'efficienza digestiva di <em>Pipistrellus kuhlii</em> e <em>Hypsugo savii</em> (Chiroptera: Vespertilionidae)

    Directory of Open Access Journals (Sweden)

    Gianna Dondini

    2003-10-01

    Full Text Available Many bat species of the temperate-cold climatic belts experience marked seasonal variations in temperature and food availability. Fat accumulation in autumn is therefore an adaptation for passing the winter months in a state of deep torpor, definable as hibernation, thereby increasing the probability of surviving that period. As part of a multi-year activity of collecting, studying and, where possible, rehabilitating bats in urban environments, two individuals of <em>Pipistrellus kuhlii</em> (2 females) and two of <em>Hypsugo savii</em> (1 male and 1 female), in both cases juveniles that had not yet acquired sufficient flight ability and therefore could not be released, were collected in the plain of Florence during the summer of 1998 and kept at an ambient temperature ranging between 17 and 22°C in a 150x40x30 cm container. Every evening, before food and water were given, they were weighed on an electronic balance with a precision of 0.1 g (Tanita model 1479). They were fed mealworms (<em>Tenebrio molitor</em>). Digestive efficiency was calculated on dehydrated material as: (amount ingested - amount excreted)/amount ingested*100. To compute this index, the individuals of the two species were separated and kept for 24 hours starting from the evening after the last feeding, thus favouring the emptying of the gut. Subsequently, food was supplied <em>ad libitum</em> for two days, and each individual was weighed at the end of the feeding phase to determine the amount ingested. Finally, we kept the animals fasting for the 24 hours following the last feeding to allow the gut to empty. The collected droppings were placed in an electric oven at 90°C for 24 hours and then weighed.

  13. Net Energy, CO2 Emission and Land-Based Cost-Benefit Analyses of <em>Jatropha</em> Biodiesel: A Case Study of the Panzhihua Region of Sichuan Province in China

    Directory of Open Access Journals (Sweden)

    Xiangzheng Deng

    2012-06-01

    Full Text Available Bioenergy is currently regarded as a renewable energy source with a high growth potential. Forest-based biodiesel, with the significant advantage of not competing with grain production on cultivated land, has been considered as a promising substitute for diesel fuel by many countries, including China. Consequently, extracting biodiesel from <em>Jatropha curcas</em> has become a growing industry. However, many key issues related to the development of this industry are still not fully resolved and the prospects for this industry are complicated. The aim of this paper is to evaluate the net energy, CO2 emission, and cost efficiency of <em>Jatropha</em> biodiesel as a substitute fuel in China to help resolve some of the key issues by studying data from this region of China that is well suited to growing <em>Jatropha</em>. Our results show that: (1) <em>Jatropha</em> biodiesel is preferable for global warming mitigation over diesel fuel in terms of the carbon sink during <em>Jatropha</em> tree growth. (2) The net energy yield of <em>Jatropha</em> biodiesel is much lower than that of fossil fuel, induced by the high energy consumption during <em>Jatropha</em> plantation establishment and the conversion from seed oil to diesel fuel step. Therefore, the energy efficiencies of the production of <em>Jatropha</em> and its conversion to biodiesel need to be improved. (3) Due to current low profit and high risk in the study area, farmers have little incentive to continue or increase <em>Jatropha</em> production. (4) It is necessary to provide more subsidies and preferential policies for <em>Jatropha</em> plantations if this industry is to grow. It is also necessary for local government to set realistic objectives and make rational plans to choose proper sites for <em>Jatropha</em> biodiesel development and the work reported here should assist that effort. Future research focused on breeding high-yield varieties, development of efficient field

  14. Bioinformatics algorithm based on a parallel implementation of a machine learning approach using transducers

    International Nuclear Information System (INIS)

    Roche-Lima, Abiel; Thulasiram, Ruppa K

    2012-01-01

    Finite automata in which each transition is augmented with an output label, in addition to the familiar input label, are called finite-state transducers. Transducers have been used to analyze some fundamental issues in bioinformatics: weighted finite-state transducers have been proposed for pairwise alignment of DNA and protein sequences, as well as for developing kernels for computational biology. Machine learning algorithms for conditional transducers have been implemented and used for DNA sequence analysis. Transducer learning algorithms are based on conditional probability computation, which is carried out using techniques such as pair-database creation, normalization (with Maximum-Likelihood normalization) and parameter optimization (with Expectation-Maximization, EM). These techniques are intrinsically costly to compute, all the more so in bioinformatics, where database sizes are large. In this work, we describe a parallel implementation of an algorithm that learns conditional transducers using these techniques. The algorithm is oriented to bioinformatics applications, such as alignments, phylogenetic trees, and other genome evolution studies. Several experiments were run with the parallel and sequential algorithms on WestGrid (specifically, on the Breeze cluster). The results show that our parallel algorithm is scalable: execution times are reduced considerably as the data size parameter is increased. In another experiment, the precision parameter was varied; here, too, we obtained smaller execution times with the parallel algorithm. Finally, the number of threads used to execute the parallel algorithm on the cluster was varied; in this last experiment, we found that speedup increases considerably as more threads are used, although it converges for thread counts equal to or greater than 16.

  15. Threatening “the Good Order”: West Meets East in Cecil B. DeMille’s <em>The Cheat</em> and John Updike’s <em>Terrorist</em>

    Directory of Open Access Journals (Sweden)

    Bradley M. Freeman

    2011-12-01

    Full Text Available

    Despite almost a hundred years of separation, both Cecil B. DeMille’s film <em>The Cheat</em> (1915) and John Updike’s novel <em>Terrorist</em> (2006) deploy a clear-cut territorial divide between Western and Eastern spaces in order to envision a unified American space. These narratives superimpose a “natural” division on these historically opposed spaces and thereby suggest that any contact between these spaces will have dangerous consequences. These consequences include the potential dissolution and eventual destruction of American productivity, surveillance, and territorial integrity. DeMille’s film and Updike’s novel represent America as a nation-state that must be protected from the East. In 1915, <em>The Cheat</em> warned against an interracial America and the upsurge in immigration that characterized the turn of the century. Nearly a century later, <em>Terrorist</em> presupposes an interracial America but still constructs an East that threatens the security of America. While registering the particular concerns of two distinct historical moments, these narratives represent a larger attempt in American aesthetics to imagine an East that jeopardizes the utopian possibilities of an overly idealized American space.

  16. <em>In Vitro</em> Phytotoxicity and Antioxidant Activity of Selected Flavonoids

    Directory of Open Access Journals (Sweden)

    Rita Patrizia Aquino

    2012-05-01

    Full Text Available Knowledge of the flavonoids involved in plant-plant interactions and of their mechanisms of action is poor and, moreover, the structural characteristics required for these biological activities are scarcely known. The objective of this work was to study the possible <em>in vitro</em> phytotoxic effects of 27 flavonoids on the germination and early radical growth of <em>Raphanus sativus</em> L. and <em>Lepidium sativum</em> L., with the aim of evaluating the possible structure/activity relationship. Moreover, the antioxidant activity of the same compounds was also evaluated. Generally, in response to the various tested flavonoids, germination was only slightly affected, whereas significant differences were observed in the activity of the various tested flavonoids against radical elongation. The DPPH test confirmed the antioxidant activity of luteolin, quercetin, catechol, morin, and catechin. The biological activity recorded is discussed in relation to the structure of the compounds and their capability to interact with cell structures and physiology. No correlation was found between phytotoxic and antioxidant activities.

  17. BFL: a node and edge betweenness based fast layout algorithm for large scale networks

    Science.gov (United States)

    Hashimoto, Tatsunori B; Nagasaki, Masao; Kojima, Kaname; Miyano, Satoru

    2009-01-01

    Background Network visualization would serve as a useful first step for analysis. However, current graph layout algorithms for biological pathways are insensitive to biologically important information, e.g. subcellular localization, biological node and graph attributes, and/or not available for large scale networks, e.g. more than 10000 elements. Results To overcome these problems, we propose the use of a biologically important graph metric, betweenness, a measure of network flow. This metric is highly correlated with many biological phenomena such as lethality and clusters. We devise a new fast parallel algorithm calculating betweenness to minimize the preprocessing cost. Using this metric, we also invent a node and edge betweenness based fast layout algorithm (BFL). BFL places the high-betweenness nodes at optimal positions and allows the low-betweenness nodes to reach suboptimal positions. Furthermore, BFL reduces the runtime by combining a sequential insertion algorithm with betweenness. For a graph with n nodes, this approach reduces the expected runtime of the algorithm to O(n²) when considering edge crossings, and to O(n log n) when considering only density and edge lengths. Conclusion Our BFL algorithm is compared against fast graph layout algorithms and approaches requiring intensive optimizations. For gene networks, we show that our algorithm is faster than all layout algorithms tested while providing readability on par with intensive optimization algorithms. We achieve a 1.4 second runtime for a graph with 4000 nodes and 12000 edges on a standard desktop computer. PMID:19146673
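
    The parallel betweenness computation is not reproduced in the abstract, but the underlying quantity can be computed with Brandes' sequential algorithm; a minimal sketch for undirected, unweighted graphs given as adjacency lists:

    ```python
    from collections import deque

    def betweenness(graph):
        """Brandes' betweenness centrality for an undirected, unweighted
        graph given as {node: [neighbours]}."""
        bc = {v: 0.0 for v in graph}
        for s in graph:
            stack, q = [], deque([s])
            pred = {v: [] for v in graph}          # shortest-path predecessors
            sigma = {v: 0 for v in graph}; sigma[s] = 1
            dist = {v: -1 for v in graph}; dist[s] = 0
            while q:                               # BFS, counting shortest paths
                v = q.popleft()
                stack.append(v)
                for w in graph[v]:
                    if dist[w] < 0:
                        dist[w] = dist[v] + 1
                        q.append(w)
                    if dist[w] == dist[v] + 1:
                        sigma[w] += sigma[v]
                        pred[w].append(v)
            delta = {v: 0.0 for v in graph}
            while stack:                           # back-propagate dependencies
                w = stack.pop()
                for v in pred[w]:
                    delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
                if w != s:
                    bc[w] += delta[w]
        return {v: b / 2.0 for v, b in bc.items()}  # undirected: halve
    ```

    A layout in the BFL spirit would then place the highest-scoring nodes first; on a path graph a-b-c the middle node gets betweenness 1 and the leaves 0.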

  18. Selfish Gene Algorithm Vs Genetic Algorithm: A Review

    Science.gov (United States)

    Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed

    2016-11-01

    Evolutionary algorithms (EAs) are among the algorithms inspired by nature. Within little more than a decade, hundreds of papers have reported successful applications of EAs. This paper reviews the Selfish Gene Algorithm (SFGA), one of the more recent EAs, inspired by the Selfish Gene Theory, an interpretation of Darwinian ideas by the biologist Richard Dawkins (1989). Following a brief introduction to the SFGA, the chronology of its evolution is presented. The purpose of this paper is to give an overview of the concepts of the SFGA as well as its opportunities and challenges. Accordingly, the history of the algorithm and the steps involved in it are discussed, and its different applications, together with an analysis of these applications, are evaluated.

  19. Development of a Data Reduction Algorithm for Optical Wide Field Patrol (OWL) II: Improving Measurement of Lengths of Detected Streaks

    Science.gov (United States)

    Park, Sun-Youp; Choi, Jin; Roh, Dong-Goo; Park, Maru; Jo, Jung Hyun; Yim, Hong-Suh; Park, Young-Sik; Bae, Young-Ho; Park, Jang-Hyun; Moon, Hong-Kyu; Choi, Young-Jun; Cho, Sungki; Choi, Eun-Jung

    2016-09-01

    As described in the previous paper (Park et al. 2013), the detector subsystem of optical wide-field patrol (OWL) provides many observational data points of a single artificial satellite or space debris in the form of small streaks, using a chopper system and a time tagger. The position and the corresponding time data are matched assuming that the length of a streak on the CCD frame is proportional to the time duration of the exposure during which the chopper blades do not obscure the CCD window. In the previous study, however, the length was measured using the diagonal of the rectangle of the image area containing the streak; the results were quite ambiguous and inaccurate, allowing possible matching error of positions and time data. Furthermore, because only one (position, time) data point is created from one streak, the efficiency of the observation decreases. To define the length of a streak correctly, it is important to locate the endpoints of a streak. In this paper, a method using a differential convolution mask pattern is tested. This method can be used to obtain the positions where the pixel values are changed sharply. These endpoints can be regarded as directly detected positional data, and the number of data points is doubled by this result.
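
    The exact mask pattern is not given in the abstract; the idea can be illustrated in one dimension with a centered-difference mask applied to a pixel profile taken along the streak direction (the profile values below are hypothetical):

    ```python
    def diff_response(profile):
        """Response of a centered-difference 'differential convolution mask'
        [-1, 0, +1]; r[j] corresponds to profile index j + 1."""
        return [profile[i + 1] - profile[i - 1] for i in range(1, len(profile) - 1)]

    def streak_endpoints(profile):
        """Profile indices where pixel values change most sharply:
        the strongest rise and the strongest fall."""
        r = diff_response(profile)
        rise = max(range(len(r)), key=lambda i: r[i]) + 1
        fall = min(range(len(r)), key=lambda i: r[i]) + 1
        return rise, fall
    ```

    On a profile such as `[0, 0, 0, 9, 9, 9, 9, 0, 0, 0]` the response peaks at the pixels flanking the streak, and the streak length can be taken as the distance between the two endpoints.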

  20. Algorithms

    Indian Academy of Sciences (India)

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  1. Exergetic optimization of shell and tube heat exchangers using a genetic based algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Oezcelik, Yavuz [Ege University, Bornova, Izmir (Turkey). Engineering Faculty, Chemical Engineering Department

    2007-08-15

    In computer-based optimization, many thousands of alternative shell and tube heat exchangers may be examined by varying a large number of exchanger parameters, such as tube length, tube outer diameter, pitch size, layout angle, baffle space ratio, and number of tube side passes. In the present study, a genetic-based algorithm was developed, programmed, and applied to estimate the optimum values of the discrete and continuous variables of MINLP (mixed integer nonlinear programming) test problems. The results of the test problems show that the genetic-based algorithm can estimate acceptable values of the continuous variables and optimum values of the integer variables. Finally, the genetic-based algorithm was extended to carry out parametric studies and to find the optimum configuration of heat exchangers by minimizing the sum of the annual capital cost and the exergetic cost of the shell and tube heat exchangers. The results of the example problems show that the proposed algorithm is applicable for finding optimum and near-optimum alternatives of shell and tube heat exchanger configurations. (author)
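
    The paper's encoding and cost model are not reproduced in the abstract; the sketch below only illustrates a genetic-based search over one discrete and one continuous design variable, with a purely illustrative cost function standing in for the annual-plus-exergetic cost:

    ```python
    import random

    random.seed(42)

    # Hypothetical 2-variable design: an integer number of tube-side passes
    # (1..8) and a continuous tube length in metres (1.0..6.0). The quadratic
    # cost surface is illustrative only, not a real exchanger model.
    def cost(passes, length):
        return (passes - 4) ** 2 + (length - 3.5) ** 2

    def random_individual():
        return (random.randint(1, 8), random.uniform(1.0, 6.0))

    def mutate(ind):
        passes, length = ind
        if random.random() < 0.5:                       # step the integer gene
            passes = min(8, max(1, passes + random.choice((-1, 1))))
        length = min(6.0, max(1.0, length + random.gauss(0, 0.3)))
        return (passes, length)

    def crossover(a, b):
        return (a[0], b[1])                             # swap the continuous gene

    pop = [random_individual() for _ in range(20)]
    for gen in range(60):
        pop.sort(key=lambda ind: cost(*ind))
        survivors = pop[:10]                            # elitist truncation selection
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(10)]
        pop = survivors + children

    best = min(pop, key=lambda ind: cost(*ind))
    ```

    Elitism keeps the best design found so far, so the search converges on the integer/continuous optimum of the toy surface.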

  2. Grouped fuzzy SVM with EM-based partition of sample space for clustered microcalcification detection.

    Science.gov (United States)

    Wang, Huiya; Feng, Jun; Wang, Hongyu

    2017-07-20

    Detection of clustered microcalcification (MC) from mammograms plays essential roles in computer-aided diagnosis for early stage breast cancer. To tackle problems associated with the diversity of data structures of MC lesions and the variability of normal breast tissues, multi-pattern sample space learning is required. In this paper, a novel grouped fuzzy Support Vector Machine (SVM) algorithm with sample space partition based on Expectation-Maximization (EM) (called G-FSVM) is proposed for clustered MC detection. The diversified pattern of training data is partitioned into several groups based on the EM algorithm. Then a series of fuzzy SVMs is integrated for classification, with each group of samples drawn from the MC lesions and normal breast tissues. From the DDSM database, a total of 1,064 suspicious regions were selected from 239 mammograms, and the measurements of accuracy, True Positive Rate (TPR), False Positive Rate (FPR) and EVL = TPR*(1-FPR) are 0.82, 0.78, 0.14 and 0.72, respectively. The proposed method incorporates the merits of fuzzy SVM and multi-pattern sample space learning, decomposing the MC detection problem into a series of simple two-class classifications. Experimental results from synthetic data and the DDSM database demonstrate that our integrated classification framework reduces the false positive rate significantly while maintaining the true positive rate.
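
    The EM-based partition step can be illustrated with a one-dimensional two-component Gaussian mixture on synthetic feature values; the real method partitions multi-dimensional training samples, so this is only a sketch of the grouping idea:

    ```python
    import math
    import random

    random.seed(0)

    # Synthetic 1-D "sample space": two latent groups of feature values
    # (hypothetical stand-ins for lesion vs. normal-tissue features).
    data = [random.gauss(0.0, 0.5) for _ in range(200)] + \
           [random.gauss(5.0, 0.5) for _ in range(200)]

    def em_gmm_1d(xs, iters=50):
        """EM for a two-component 1-D Gaussian mixture."""
        mu = [min(xs), max(xs)]                 # crude initialisation
        var = [1.0, 1.0]
        pi = [0.5, 0.5]
        for _ in range(iters):
            # E-step: responsibility of each component for each sample
            resp = []
            for x in xs:
                p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                     * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in (0, 1)]
                s = (p[0] + p[1]) or 1e-300
                resp.append((p[0] / s, p[1] / s))
            # M-step: re-estimate weights, means and variances
            for k in (0, 1):
                nk = sum(r[k] for r in resp)
                mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
                var[k] = sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, xs)) / nk + 1e-6
                pi[k] = nk / len(xs)
        return mu, var, pi

    mu, var, pi = em_gmm_1d(data)
    # Hard partition of the sample space: each sample joins its nearest mean,
    # and one classifier per group would then be trained.
    groups = [0 if abs(x - mu[0]) < abs(x - mu[1]) else 1 for x in data]
    ```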

  3. An Optimal Online Resource Allocation Algorithm for Energy Harvesting Body Area Networks

    Directory of Open Access Journals (Sweden)

    Guangyuan Wu

    2018-01-01

    Full Text Available In Body Area Networks (BANs), how to achieve energy management to extend the lifetime of the body area network system is one of the most critical problems. In this paper, we design a body area network system powered by renewable energy, in which the sensors carried by the patient, equipped with energy harvesting modules, transmit data to a personal device. We do not require any a priori knowledge of the stochastic nature of energy harvesting and energy consumption. We formulate a user utility optimization problem and use Lyapunov optimization techniques to decompose it into three sub-problems, i.e., battery management, collecting rate control and transmission power allocation. We propose an online resource allocation algorithm to achieve two major goals: (1) balancing the sensors’ energy harvesting and energy consumption while stabilizing the BAN system; and (2) maximizing the user utility. Performance analysis addresses the required battery capacity, the bounded data queue length and the optimality of the proposed algorithm. Simulation results verify the optimality of the algorithm.
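
    The three sub-problems are not spelled out in the abstract, but the drift-plus-penalty pattern behind Lyapunov optimization can be sketched for the collecting-rate control sub-problem alone; the weight V, the service rate and the logarithmic utility below are illustrative assumptions, not the paper's model:

    ```python
    import math

    V = 10.0          # utility weight in the drift-plus-penalty trade-off (illustrative)
    service = 2.0     # data drained from the queue per slot (illustrative)
    Q = 0.0           # data queue backlog
    rates = [i / 10 for i in range(0, 31)]   # candidate collecting rates 0.0..3.0

    history = []
    for t in range(1000):
        # Drift-plus-penalty: greedily pick the rate minimizing
        # Q * r - V * utility(r) with utility(r) = log(1 + r).
        r = min(rates, key=lambda x: Q * x - V * math.log(1 + x))
        Q = max(Q - service, 0.0) + r        # queue update
        history.append(Q)

    max_backlog = max(history)
    ```

    The backlog settles near V/(1 + service) rather than growing, which is the bounded-queue behaviour the performance analysis refers to.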

  4. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  5. Implementation of Super-Encryption with Trithemius Algorithm and Double Transposition Cipher in Securing PDF Files on Android Platform

    Science.gov (United States)

    Budiman, M. A.; Rachmawati, D.; Jessica

    2018-03-01

    This study aims to combine the Trithemius algorithm and the double transposition cipher for file security, implemented as an Android-based application. The parameters examined are the real running time and the complexity value. The file type used is PDF. The overall result shows that the complexity of the two algorithms under the super-encryption method is Θ(n²). However, the encryption process using the Trithemius algorithm is much faster than the double transposition cipher, and the processing time is linearly proportional to the length of the plaintext and password.
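
    A minimal sketch of the two ciphers being combined; the keys `CARGO`/`BLIMP` and the uppercase-only alphabet are assumptions for illustration, and none of the paper's Android implementation details are reproduced:

    ```python
    import string

    ALPHA = string.ascii_uppercase

    def trithemius(text, decrypt=False):
        """Progressive Caesar cipher: the shift equals the letter's position.
        Assumes uppercase A-Z input only."""
        step = -1 if decrypt else 1
        return ''.join(ALPHA[(ALPHA.index(c) + step * i) % 26]
                       for i, c in enumerate(text))

    def _perm(n, key):
        """Position permutation realised by a columnar transposition
        with len(key) columns, read in sorted-key order."""
        cols = len(key)
        order = sorted(range(cols), key=lambda i: (key[i], i))
        return [p for c in order for p in range(c, n, cols)]

    def transpose(text, key):
        return ''.join(text[p] for p in _perm(len(text), key))

    def untranspose(text, key):
        out = [''] * len(text)
        for i, p in enumerate(_perm(len(text), key)):
            out[p] = text[i]
        return ''.join(out)

    def encrypt(plain, key1="CARGO", key2="BLIMP"):
        """Super-encryption: Trithemius, then a double (two-key) transposition."""
        return transpose(transpose(trithemius(plain), key1), key2)

    def decrypt(cipher, key1="CARGO", key2="BLIMP"):
        return trithemius(untranspose(untranspose(cipher, key2), key1),
                          decrypt=True)
    ```

    Because the transpositions only permute positions, undoing them restores each letter to its original index before the position-dependent Trithemius shift is reversed.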

  6. A Novel Entropy-Based Decoding Algorithm for a Generalized High-Order Discrete Hidden Markov Model

    Directory of Open Access Journals (Sweden)

    Jason Chin-Tiong Chan

    2018-01-01

    Full Text Available The optimal state sequence of a generalized High-Order Hidden Markov Model (HHMM) is tracked from a given observational sequence using the classical Viterbi algorithm. This classical algorithm is based on the maximum likelihood criterion. We introduce an entropy-based Viterbi algorithm for tracking the optimal state sequence of a HHMM. The entropy of a state sequence is a useful quantity, providing a measure of the uncertainty of a HHMM. There will be no uncertainty if there is only one possible optimal state sequence for the HHMM. This entropy-based decoding algorithm can be formulated in an extended or a reduction approach. We extend the entropy-based algorithm for computing the optimal state sequence, originally developed for a first-order HMM, to a generalized HHMM with a single observational sequence. This extended algorithm performs the computation exponentially with respect to the order of the HMM; its computational complexity is due to the growth of the model parameters. We introduce an efficient entropy-based decoding algorithm that uses the reduction approach, namely, the entropy-based order-transformation forward algorithm (EOTFA), to compute the optimal state sequence of any generalized HHMM. This EOTFA algorithm involves a transformation of a generalized high-order HMM into an equivalent first-order HMM, and an entropy-based decoding algorithm is developed based on the equivalent first-order HMM. This algorithm performs the computation based on the observational sequence and requires O(TÑ²) calculations, where Ñ is the number of states in the equivalent first-order model and T is the length of the observational sequence.
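
    The entropy-based decoder itself is not reproduced in the abstract, but the classical maximum-likelihood Viterbi algorithm it generalizes can be sketched for a first-order HMM (the two-state model below is illustrative):

    ```python
    def viterbi(obs, states, start_p, trans_p, emit_p):
        """Classical first-order Viterbi decoder (maximum-likelihood criterion)."""
        v = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
        back = [{}]
        for t in range(1, len(obs)):
            v.append({})
            back.append({})
            for s in states:
                # best predecessor state for s at time t
                prob, prev = max((v[t - 1][r] * trans_p[r][s], r) for r in states)
                v[t][s] = prob * emit_p[s][obs[t]]
                back[t][s] = prev
        last = max(states, key=lambda s: v[-1][s])
        path = [last]
        for t in range(len(obs) - 1, 0, -1):     # follow back-pointers
            path.append(back[t][path[-1]])
        return list(reversed(path))

    # Illustrative sticky two-state model: state 0 mostly emits 'a', state 1 'b'.
    states = (0, 1)
    start_p = {0: 0.5, 1: 0.5}
    trans_p = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.2, 1: 0.8}}
    emit_p = {0: {'a': 0.9, 'b': 0.1}, 1: {'a': 0.1, 'b': 0.9}}
    path = viterbi("aab", states, start_p, trans_p, emit_p)
    ```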

  7. A Parallel Biological Optimization Algorithm to Solve the Unbalanced Assignment Problem Based on DNA Molecular Computing.

    Science.gov (United States)

    Wang, Zhaocai; Pu, Jun; Cao, Liling; Tan, Jian

    2015-10-23

    The unbalanced assignment problem (UAP) is to optimally resolve the problem of assigning n jobs to m individuals (m < n), a classical problem in applied mathematics having numerous real-life applications. In this paper, we present a new parallel DNA algorithm for solving the unbalanced assignment problem using DNA molecular operations. We reasonably design flexible-length DNA strands representing different jobs and individuals, take appropriate steps, and obtain the solutions of the UAP in the proper length range and in O(mn) time. We extend the application of DNA molecular operations and exploit their simultaneity to simplify the complexity of the computation.

  8. Multi-criteria ACO-based Algorithm for Ship’s Trajectory Planning

    Directory of Open Access Journals (Sweden)

    Agnieszka Lazarowska

    2017-03-01

    Full Text Available The paper presents a new approach for solving a path planning problem for ships in environments with static and dynamic obstacles. The algorithm utilizes a heuristic method from the group of Swarm Intelligence approaches called Ant Colony Optimization, which is inspired by the collective behaviour of ant colonies. A group of agents (artificial ants) searches through the solution space in order to find a safe, optimal trajectory for a ship. The problem is considered as a multi-criteria optimization task. The criteria taken into account during problem solving are: path safety, path length, compliance with the International Regulations for Preventing Collisions at Sea (COLREGs) and path smoothness. The paper includes the description of the new multi-criteria ACO-based algorithm along with the presentation and discussion of simulation test results.
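
    Of the four criteria, only path length is modelled in this minimal ACO sketch; the waypoint graph, parameters and single-objective formulation are illustrative assumptions, not the paper's multi-criteria algorithm:

    ```python
    import random

    random.seed(7)

    # Toy directed waypoint graph; edge weights stand for leg lengths.
    graph = {
        'A': {'B': 1.0, 'C': 4.0},
        'B': {'D': 2.0},
        'C': {'D': 1.0},
        'D': {},
    }
    start, goal = 'A', 'D'
    pheromone = {(u, v): 1.0 for u in graph for v in graph[u]}
    alpha, beta, rho = 1.0, 2.0, 0.5   # pheromone weight, heuristic weight, evaporation

    def walk():
        """One ant's probabilistic walk from start to goal."""
        node, path = start, [start]
        while node != goal:
            nxt = list(graph[node])
            if not nxt:
                return None                     # dead end
            weights = [pheromone[(node, v)] ** alpha
                       * (1.0 / graph[node][v]) ** beta for v in nxt]
            node = random.choices(nxt, weights=weights)[0]
            path.append(node)
        return path

    def length(path):
        return sum(graph[u][v] for u, v in zip(path, path[1:]))

    best_path, best_len = None, float('inf')
    for it in range(30):
        paths = [p for p in (walk() for _ in range(10)) if p]
        for e in pheromone:
            pheromone[e] *= (1 - rho)           # evaporation
        for p in paths:
            if length(p) < best_len:
                best_path, best_len = p, length(p)
            for e in zip(p, p[1:]):
                pheromone[e] += 1.0 / length(p)  # deposit: shorter paths get more
    ```

    The deposit/evaporation loop steadily concentrates pheromone on the shorter route; adding the safety, COLREGs and smoothness criteria would mean replacing `length` with a weighted multi-criteria cost.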

  9. Effects of the length and timing of nighttime naps on task performance and physiological function

    Directory of Open Access Journals (Sweden)

    Hidemaro Takeyama

    2004-12-01

    Full Text Available OBJECTIVE: To examine the effects of the length and timing of nighttime naps on performance and physiological functions, an experimental study was carried out under simulated night shift schedules. METHODS: Six students were recruited for this study, which was composed of 5 experiments. Each experiment involved 3 consecutive days with one night shift (22:00-8:00) followed by daytime sleep and night sleep. The experiments had 5 conditions in which the length and timing of naps were manipulated: 0:00-1:00 (E60), 0:00-2:00 (E120), 4:00-5:00 (L60), 4:00-6:00 (L120), and no nap (No-nap). During the night shifts, participants underwent performance tests. A questionnaire on subjective fatigue and a critical flicker fusion frequency test were administered after the performance tests. Heart rate variability and rectal temperature were recorded continuously during the experiments. Polysomnography was also recorded during the nap. RESULTS: Sleep latency was shorter and sleep efficiency was higher in the nap in L60 and L120 than in E60 and E120. Slow wave sleep in the naps in E120 and L120 was longer than that in E60 and L60. The mean reaction time in L60 became longer after the nap, and faster in E60 and E120. Earlier naps serve to counteract the decrement in performance and physiological functions during night shifts. Performance was somewhat improved by taking a 2-hour nap later in the shift, but deteriorated after a one-hour nap. CONCLUSIONS: Naps in the latter half of the night shift were superior to earlier naps in terms of sleep quality. However, performance declined after a 1-hour nap taken later in the night shift due to sleep inertia. This study suggests that the appropriate timing of a short nap must be carefully considered, such as a 60-min nap during the night shift.OBJETIVO: Para investigar os efeitos da duração e horário de cochilos noturnos sobre o desempenho e as funções fisiológicas foi realizado um estudo experimental por meio do trabalho

  10. Energetic food in rations for growing goats Alimentos energéticos em rações para caprinos em crescimento

    Directory of Open Access Journals (Sweden)

    Betina Raquel Cunha dos Santos

    2009-06-01

    Full Text Available The use of native or cultivated crops adapted to the semi-arid region, such as wild cassava, sorghum and cassava, can reduce costs and increase the productivity and competitiveness of production systems. The objective of this study was to evaluate the effects of three energy sources, as supplements to a wild cassava silage meal, on the productive performance and carcass characteristics of growing goats. The experimental treatments were the supplementation sources: cassava meal, cassava meal in association with wheat bran and sorghum middlings, and cassava meal in association with sorghum grain. Eighteen male goats with 14.06±3.61 kg of initial body weight were allocated into three groups in collective pens. The experimental design was completely randomized, with six replications per treatment. The supplementations did not affect the daily weight gain, total weight gain, body condition score, carcass characteristics (cold and hot carcass weight and dressing), carcass measurements (carcass length and depth; leg width, depth and length) or commercial meat cut yields (leg, shoulder, rib, brisket and neck). In rations containing wild cassava silage with a ratio of 80% roughage to 20% concentrate, cassava meal may substitute for sorghum grain and wheat bran.A utilização de culturas nativas ou adaptadas ao semi-árido, como a maniçoba, o sorgo e a mandioca, na alimentação animal, pode reduzir os custos de produção, elevar os índices de produtividade e conferir competitividade aos sistemas produtivos. Objetivou-se com este estudo avaliar o efeito de fontes energéticas em rações à base de silagem de maniçoba sobre o desempenho produtivo e características de carcaça de caprinos. Os tratamentos consistiram de fontes suplementares, raspa de mandioca, raspa de mandioca associada ao farelo de trigo e ao sorgo grão moído e raspa de mandioca em associação ao sorgo grão moído. Foram utilizados 18 caprinos machos, com peso corporal inicial m

  11. Essential algorithms a practical approach to computer algorithms

    CERN Document Server

    Stephens, Rod

    2013-01-01

    A friendly and accessible introduction to the most useful algorithms. Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures s

  12. Local and regional variability of evapotranspiration estimated by the SEBAL algorithm

    Directory of Open Access Journals (Sweden)

    Luis C. J. Moreira

    2010-12-01

    Full Text Available In the context of water resources scarcity, the rational use of water for irrigation is necessary, implying precise estimation of the actual evapotranspiration (ET). With recent progress in remote-sensing technologies, regional algorithms estimating evapotranspiration from satellite observations have been developed. This work applied the SEBAL algorithm (Surface Energy Balance Algorithms for Land) to three Landsat-5 images from the second semester of 2006. These images cover irrigated areas, dense native forest areas and Caatinga areas in three regions of the state of Ceará (Baixo Acaraú, Chapada do Apodi and Chapada do Araripe). The SEBAL algorithm calculates the hourly evapotranspiration from the latent heat flux, estimated as the residual of the surface energy balance. The ET values obtained in the three regions were above 0.60 mm h-1 in irrigated areas or areas of dense native vegetation. Areas of less dense native vegetation showed hourly ET rates of 0.35 to 0.60 mm h-1, with nearly null values in degraded areas. Analysis of the hourly evapotranspiration means by Tukey's test at 5% probability showed significant local as well as regional variability in the state of Ceará.
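The residual computation at the heart of this approach can be sketched in a few lines: the latent heat flux is what remains of net radiation after subtracting soil and sensible heat fluxes, and dividing by the latent heat of vaporization gives an hourly ET depth. The flux values and the constant below are illustrative assumptions, not values from the study.

```python
# Hedged sketch: hourly ET as the residual of the surface energy balance.
# LAMBDA and the example fluxes are illustrative assumptions.
LAMBDA = 2.45e6  # latent heat of vaporization, J/kg (approx. near 20 C)

def hourly_et_mm(rn, g, h):
    """Hourly evapotranspiration (mm/h) from net radiation `rn`,
    soil heat flux `g` and sensible heat flux `h`, all in W/m^2."""
    le = rn - g - h                    # latent heat flux as the residual
    et_kg_per_m2_s = le / LAMBDA       # evaporated mass flux
    return max(et_kg_per_m2_s, 0.0) * 3600.0  # 1 kg/m^2 of water == 1 mm

# e.g. an irrigated pixel with Rn=500, G=50, H=100 W/m^2
et = hourly_et_mm(500, 50, 100)
```

A pixel-by-pixel map of such values is what the Tukey comparison in the study is run over.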

  13. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program. Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

  14. Alteration of Box-Jenkins methodology by implementing genetic algorithm method

    Science.gov (United States)

    Ismail, Zuhaimy; Maarof, Mohd Zulariffin Md; Fadzli, Mohammad

    2015-02-01

    A time series is a set of values sequentially observed through time. The Box-Jenkins methodology is a systematic method of identifying, fitting, checking, and using integrated autoregressive moving average time series models for forecasting. The Box-Jenkins method is appropriate for medium-to-long time series (at least 50 observations). When modeling such series, the difficulty lies in choosing the accurate order of the model at the identification stage and in discovering the right parameter estimates. This paper presents the development of a genetic algorithm heuristic for solving the identification and estimation problems in Box-Jenkins modeling. Data on international tourist arrivals to Malaysia were used to illustrate the effectiveness of the proposed method. The forecast results generated by the proposed model outperformed the single traditional Box-Jenkins model.
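As a rough illustration of coupling a genetic algorithm with Box-Jenkins order identification, the sketch below evolves candidate (p, d, q) orders. The fitness function is a stand-in (a real implementation would score each candidate by, e.g., the AIC of a fitted ARIMA model); the population sizes and bounds are illustrative assumptions.

```python
import random

# Hedged sketch: a tiny GA searching over ARIMA orders (p, d, q).
# `fitness` is a toy stand-in with a known optimum at (2, 1, 1).

def fitness(order):
    p, d, q = order
    return (p - 2) ** 2 + (d - 1) ** 2 + (q - 1) ** 2

def ga_search(pop_size=20, generations=40, bounds=(5, 2, 5), seed=0):
    rng = random.Random(seed)
    rand_order = lambda: tuple(rng.randint(0, b) for b in bounds)
    pop = [rand_order() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]              # truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randint(1, 2)
            child = list(a[:cut] + b[cut:])           # one-point crossover
            i = rng.randrange(3)                      # point mutation
            child[i] = rng.randint(0, bounds[i])
            children.append(tuple(child))
        pop = survivors + children
    return min(pop, key=fitness)

best = ga_search()
```

Keeping the survivors each generation gives simple elitism, so the best order found is never lost.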

  15. Use of probiotics in diets with different protein levels on the length and morphometry of the small intestine of meat quails

    Directory of Open Access Journals (Sweden)

    Luciana Kazue Otutumi

    2008-07-01

    Full Text Available The aim of this study was to evaluate the effect of a probiotic associated with different levels of crude protein (CP) on the length and mucosal morphometry of the small intestine of meat quails. The study used 2,304 meat quails, distributed in a completely randomized experimental design in a 2 x 4 factorial scheme (with and without probiotic; four levels of CP: 15, 20, 25 and 30%), with two replications per treatment, in two experimental periods. At seven, 14, 21 and 35 days of age, two quails of each replication were slaughtered in order to evaluate the length of the small intestine (LSI), as well as duodenum and ileum mucosal morphometry. LSI and small intestine mucosal morphometry were not influenced by the probiotic. Intestine length increased linearly with increasing CP levels at seven, 14 and 21 days, and mucosal morphometry increased linearly only for ileum villus height. It can be concluded that, under the environmental conditions in which the quails were raised, only the protein level influenced small intestine length and ileum villus height, with no effect of the probiotic observed on these parameters.

  16. Search for large-scale structures at high redshifts

    Science.gov (United States)

    Boris, N. V.; Sodré, L., Jr.; Cypriano, E.

    2003-08-01

    The search for large-scale structures (galaxy clusters, for example) is an active research topic nowadays, since the detection of a single cluster at high redshift can place strong constraints on cosmological models. In this project we are searching for distant structures in fields containing pairs of quasars close to each other at z ≳ 0.9. The quasar pairs were extracted from the catalogue of Véron-Cetty & Véron (2001) and are being observed with the 2.2 m telescope of the University of Hawaii (UH), the 2.5 m telescope of the Las Campanas Observatory, and GEMINI. We present here a preliminary analysis of a quasar pair observed in the i' (7800 Å) and z' (9500 Å) filters with GEMINI. The (i'-z') colour proved useful for detecting early-type objects at redshifts below 1.1. In the study of the pair 131046+0006/J131055+0008, at redshift ~ 0.9, this method allowed the detection of seven candidate early-type galaxies. In a map of the projected distribution of the objects for 22 … scale. Another argument in favour of this hypothesis is that they obey a Kormendy-type relation (equivalent radius vs. surface brightness within that radius), like that shown by elliptical galaxies at z = 0.

  17. Encapsulation-Induced Stress Helps Saccharomyces cerevisiae Resist Convertible Lignocellulose-Derived Inhibitors

    Directory of Open Access Journals (Sweden)

    Johan O. Westman

    2012-09-01

    Full Text Available The ability of macroencapsulated Saccharomyces cerevisiae CBS8066 to withstand readily and not readily in situ convertible lignocellulose-derived inhibitors was investigated in anaerobic batch cultivations. It was shown that encapsulation increased the tolerance to readily convertible furan aldehyde inhibitors and to dilute-acid spruce hydrolysate, but not to organic acid inhibitors that cannot be metabolized anaerobically. Gene expression analysis showed that the protective effect arising from the encapsulation is evident also at the transcriptome level, as the expression of the stress-related genes YAP1, ATR1 and FLR1 was induced upon encapsulation. The transcript levels were increased by encapsulation already in the medium without added inhibitors, indicating that the cells sensed a low level of stress arising from the encapsulation itself. We present a model in which the stress response is induced by nutrient limitation, this response helps the cells to cope with the additional stress of a toxic medium, and superficial cells in the capsules degrade convertible inhibitors, alleviating the inhibition for the cells deeper in the capsule.

  18. Denni Algorithm: An Enhancement of the SMS (Scan, Move and Sort) Algorithm

    Science.gov (United States)

    Aprilsyah Lubis, Denni; Salim Sitompul, Opim; Marwan; Tulus; Andri Budiman, M.

    2017-12-01

    Sorting has been a profound area for algorithmic researchers, and many resources are invested in devising better sorting algorithms. For this purpose, many existing sorting algorithms were examined in terms of algorithmic complexity and efficiency. Efficient sorting is important to optimize the use of other algorithms that require sorted lists to work correctly. Sorting is considered a fundamental problem in the study of algorithms for many reasons: the need to sort information is inherent in many applications; algorithms often use sorting as a key subroutine; many essential algorithm-design techniques are represented in the body of sorting algorithms; and many engineering issues come to the fore when implementing sorting algorithms. Many algorithms are well known for sorting unordered lists, and one well-known algorithm that makes sorting more economical and efficient is the SMS (Scan, Move and Sort) algorithm, an enhancement of Quicksort invented by Rami Mansi in 2010. This paper presents a new sorting algorithm called the Denni algorithm, an enhancement of the SMS algorithm in the average and worst cases. The Denni algorithm is compared with the SMS algorithm and the results were promising.
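Since the abstract does not give the internals of SMS or the Denni algorithm, the sketch below is only a generic harness of the kind used for such average-case/worst-case comparisons: it times any sorting callable on a random input and a reverse-sorted input, with `insertion_sort` as a stand-in algorithm whose worst case is the reverse-sorted list.

```python
import random, time

# Hedged sketch: a harness for comparing sorting algorithms on an average
# (random) case and a reverse-sorted case. Both algorithms are stand-ins.

def insertion_sort(a):
    a = list(a)
    for i in range(1, len(a)):
        x, j = a[i], i - 1
        while j >= 0 and a[j] > x:   # shift larger elements right
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = x
    return a

def bench(sort_fn, n=2000, seed=1):
    rng = random.Random(seed)
    cases = {"average": [rng.random() for _ in range(n)],
             "worst": list(range(n, 0, -1))}   # reverse-sorted input
    out = {}
    for name, data in cases.items():
        t0 = time.perf_counter()
        result = sort_fn(data)
        out[name] = time.perf_counter() - t0
        assert result == sorted(data)          # correctness sanity check
    return out

timings = bench(sorted)   # compare against e.g. bench(insertion_sort)
```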

  19. Fixed-Point Algorithms for the Blind Separation of Arbitrary Complex-Valued Non-Gaussian Signal Mixtures

    Directory of Open Access Journals (Sweden)

    Douglas Scott C

    2007-01-01

    Full Text Available We derive new fixed-point algorithms for the blind separation of complex-valued mixtures of independent, noncircularly symmetric, and non-Gaussian source signals. Leveraging recently developed results on the separability of complex-valued signal mixtures, we systematically construct iterative procedures on a kurtosis-based contrast whose evolutionary characteristics are identical to those of the FastICA algorithm of Hyvarinen and Oja in the real-valued mixture case. Thus, our methods inherit the fast convergence properties, computational simplicity, and ease of use of the FastICA algorithm while at the same time extending this class of techniques to complex signal mixtures. For extracting multiple sources, symmetric and asymmetric signal deflation procedures can be employed. Simulations for both noiseless and noisy mixtures indicate that the proposed algorithms have superior finite-sample performance in data-starved scenarios as compared to existing complex ICA methods while performing about as well as the best of these techniques for larger data-record lengths.
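For intuition, the real-valued one-unit FastICA fixed point with the kurtosis contrast, the baseline that the paper extends to noncircular complex mixtures, can be sketched as follows. The mixing setup is an illustrative assumption: two unit-variance sources are mixed by a pure rotation, so the mixtures are already (approximately) white and no separate whitening step is needed.

```python
import random, math

# Hedged sketch: one-unit FastICA with the kurtosis nonlinearity g(u) = u^3.
# Fixed point: w <- E[z (w.z)^3] - 3w, then renormalize (for whitened z).

rng = random.Random(0)
n = 20000
s1 = [rng.uniform(-math.sqrt(3), math.sqrt(3)) for _ in range(n)]  # sub-Gaussian
s2 = [rng.expovariate(1.0) - 1.0 for _ in range(n)]                # super-Gaussian
c, s = math.cos(0.7), math.sin(0.7)
z = [(c * a - s * b, s * a + c * b) for a, b in zip(s1, s2)]       # rotated mix

w = (1.0, 0.0)
for _ in range(50):
    g1 = g2 = 0.0
    for z1, z2 in z:
        u3 = (w[0] * z1 + w[1] * z2) ** 3
        g1 += z1 * u3
        g2 += z2 * u3
    w_new = (g1 / n - 3 * w[0], g2 / n - 3 * w[1])
    norm = math.hypot(*w_new)
    w_new = (w_new[0] / norm, w_new[1] / norm)
    if abs(abs(w_new[0] * w[0] + w_new[1] * w[1]) - 1.0) < 1e-10:
        w = w_new                      # converged up to sign
        break
    w = w_new

y = [w[0] * z1 + w[1] * z2 for z1, z2 in z]   # one source, up to sign/scale
```

The cubic convergence of this iteration is the property the paper's complex-valued extension inherits; the sign-insensitive stopping test is needed because the kurtosis fixed point may flip sign between iterations.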

  20. Altitudinal distribution of the common long-eared bat Plecotus auritus (Linnaeus, 1758) and grey long-eared bat Plecotus austriacus (J. B. Fischer, 1829) (Chiroptera, Vespertilionidae) in the Tatra mountains (southern Poland)

    Directory of Open Access Journals (Sweden)

    Krzysztof Piksa

    2006-03-01

    Full Text Available New data are reported on the altitudinal distribution of Plecotus auritus and P. austriacus in the Tatra Mountains (southern Poland). These records extend the knowledge of the occurrence of these bats at high elevations, particularly for Poland. In winter, P. auritus was found at 1921 m a.s.l., while in summer it was found at 2250 m a.s.l.; in addition, bone remains were found at 1929 m a.s.l. P. austriacus was recorded hibernating at 1294 m a.s.l.

  1. Action of Chitosan Against Xanthomonas Pathogenic Bacteria Isolated from Euphorbia pulcherrima

    Directory of Open Access Journals (Sweden)

    Yanli Wang

    2012-06-01

    Full Text Available The antibacterial activity and mechanism of two kinds of chitosan were investigated against twelve Xanthomonas strains recovered from Euphorbia pulcherrima. Results indicated that both chitosans markedly inhibited bacterial growth based on OD loss. Furthermore, the release of DNA and RNA from three selected strains was increased by both chitosans. However, the release of intracellular proteins was inhibited by both chitosans at different concentrations and incubation times, except that chitosan A at 0.1 mg/mL for 0.5 h incubation and at 0.2 mg/mL for 2.0 h incubation increased the release of proteins, indicating the complexity of the interaction between chitosan and cell membranes, which was affected by incubation time, bacterial species, chitosan type and concentration. Transmission electron microscopy observations revealed that chitosan caused changes in protoplast concentration and surface morphology. In some cells, the membranes and walls were badly distorted and disrupted, while other cells were enveloped by a thick and compact ribbon-like layer. The contrary influence on cell morphology may explain the differential effect on the release of material. In addition, scanning electron microscopy and a biofilm formation test revealed that both chitosans removed biofilm biomass. Overall, this study showed that the membrane and biofilm play an important role in the antibacterial mechanism of chitosan.

  2. Robot navigation in unknown terrains: Introductory survey of non-heuristic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Rao, N.S.V. [Oak Ridge National Lab., TN (US); Kareti, S.; Shi, Weimin [Old Dominion Univ., Norfolk, VA (US). Dept. of Computer Science; Iyengar, S.S. [Louisiana State Univ., Baton Rouge, LA (US). Dept. of Computer Science

    1993-07-01

    A formal framework for navigating a robot in a geometric terrain populated by an unknown set of obstacles is considered. Here the terrain model is not known a priori, but the robot is equipped with a sensor system (vision or touch) employed for the purpose of navigation. The focus is restricted to non-heuristic algorithms which can be theoretically shown to be correct within a given framework of models for the robot, terrain and sensor system. These formulations, although abstract and simplified compared to real-life scenarios, provide foundations for practical systems by highlighting the underlying critical issues. First, the authors consider algorithms that are shown to navigate correctly without much consideration given to performance parameters such as the distance traversed. Second, they consider non-heuristic algorithms that guarantee bounds on the distance traversed or on the ratio of the distance traversed to the shortest path length (computed as if the terrain model were known). Then they consider the navigation of robots with very limited computational capabilities, such as finite automata.

  3. Generation of Length Distribution, Length Diagram, Fibrogram, and Statistical Characteristics by Weight of Cotton Blends

    Directory of Open Access Journals (Sweden)

    B. Azzouz

    2007-01-01

    Full Text Available A textile fibre mixture, as a multicomponent blend of variable fibres, raises the question of the proper method to predict the characteristics of the final blend. The length diagram and the fibrogram of cotton are generated. Then the length distribution, the length diagram, and the fibrogram of a blend of different categories of cotton are determined. The length distributions by weight of five different categories of cotton (Egyptian, USA (Pima), Brazilian, USA (Upland) and Uzbekistani) were measured by AFIS. From these distributions, the length distribution, the length diagram, and the fibrogram by weight of four binary blends are expressed. The length parameters of these cotton blends are calculated and their variations are plotted against the mass fraction x of one component in the blend. These calculated parameters are compared to those of real blends. Finally, the selection of optimal blends using the linear programming method, based on the hypothesis that the cotton blend parameters vary linearly as a function of the component ratios, is proved insufficient.
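The linear-mixing hypothesis examined above can be sketched directly: the blend's length distribution by weight is the mass-fraction-weighted sum of the component distributions, and blend parameters such as mean length follow. The length classes and histogram values below are invented for illustration, not AFIS data.

```python
# Hedged sketch: binary-blend length distribution as a weighted mixture.

def blend_distribution(p1, p2, x):
    """Blend of component distributions p1, p2 with mass fraction x of p1."""
    assert len(p1) == len(p2) and 0.0 <= x <= 1.0
    return [x * a + (1.0 - x) * b for a, b in zip(p1, p2)]

def mean_length(lengths, p):
    """Mean fibre length of a distribution p over length classes."""
    return sum(l * w for l, w in zip(lengths, p)) / sum(p)

lengths = [10, 20, 30, 40]           # length classes, mm (illustrative)
egyptian = [0.1, 0.2, 0.3, 0.4]      # longer-staple component (assumed)
upland = [0.3, 0.4, 0.2, 0.1]        # shorter-staple component (assumed)
half = blend_distribution(egyptian, upland, 0.5)   # 50/50 blend
```

Under this linearity assumption the blend mean is exactly the weighted mean of the component means, which is what the paper's linear-programming selection relies on and ultimately finds insufficient.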

  4. Study of the in Vitro Antiplasmodial, Antileishmanial and Antitrypanosomal Activities of Medicinal Plants from Saudi Arabia

    Directory of Open Access Journals (Sweden)

    Nawal M. Al-Musayeib

    2012-09-01

    Full Text Available The present study investigated the in vitro antiprotozoal activity of sixteen selected medicinal plants. Plant materials were extracted with methanol and screened in vitro against erythrocytic schizonts of Plasmodium falciparum, intracellular amastigotes of Leishmania infantum and Trypanosoma cruzi, and free trypomastigotes of T. brucei. Cytotoxic activity was determined against MRC-5 cells to assess selectivity. The criterion for activity was an IC50 < 10 µg/mL (4). Antiplasmodial activity was found in the extracts of Prosopis juliflora and Punica granatum. Antileishmanial activity against L. infantum was demonstrated in Caralluma sinaica and Periploca aphylla. Amastigotes of T. cruzi were affected by the methanol extract of Albizia lebbeck pericarp, Caralluma sinaica, Periploca aphylla and Prosopis juliflora. Activity against T. brucei was obtained in Prosopis juliflora. Cytotoxicity (MRC-5 IC50 < 10 µg/mL) and hence non-specific activities were observed for Conocarpus lancifolius.

  5. Progress on a detection algorithm for longer lived gravitational wave bursts

    International Nuclear Information System (INIS)

    Torres, Charlie; Anderson, Warren G

    2005-01-01

    Tracksearch is an algorithm to detect unmodelled gravitational wave signals in interferometric data which was first proposed almost ten years ago by Anderson and Balasubramanian. It is one of the few methods proposed which is well suited to searching for unmodelled gravitational wave signals that have hundreds of cycles or more. This paper continues the work they began. In particular, we introduce a new trigger statistic for tracksearch, the integrated power, and compare it to the track length statistic used by Anderson and Balasubramanian. Our initial findings suggest that the integrated power will perform equivalently to or better than track length in almost every case. Furthermore, the integrated power statistic appears to be far less sensitive to suboptimal parameter choices, indicating that it may be more suitable for use on real gravitational wave data.
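The two trigger statistics being compared can be illustrated on a toy time-frequency map: track length counts the pixels in a candidate track, while integrated power sums the spectrogram power along it. The map and track below are invented for the example.

```python
# Hedged sketch: track-length vs. integrated-power trigger statistics
# for a track in a time-frequency (spectrogram) map.

def track_length(track):
    """Number of pixels in the candidate track."""
    return len(track)

def integrated_power(tf_map, track):
    """Sum of spectrogram power over the (time, freq) pixels of a track."""
    return sum(tf_map[t][f] for t, f in track)

# toy 4x4 spectrogram (rows: time bins, cols: frequency bins)
tf_map = [[0.1, 0.2, 0.1, 0.0],
          [0.1, 0.9, 0.2, 0.1],
          [0.0, 0.3, 1.1, 0.2],
          [0.1, 0.1, 0.4, 1.3]]
track = [(1, 1), (2, 2), (3, 3)]        # a chirp-like ridge

power = integrated_power(tf_map, track)  # 0.9 + 1.1 + 1.3
```

Two tracks of equal length can carry very different total power, which is one intuition for why the integrated-power statistic can discriminate better than length alone.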

  6. Python algorithms mastering basic algorithms in the Python language

    CERN Document Server

    Hetland, Magnus Lie

    2014-01-01

    Python Algorithms, Second Edition explains the Python approach to algorithm analysis and design. Written by Magnus Lie Hetland, author of Beginning Python, this book is sharply focused on classical algorithms, but it also gives a solid understanding of fundamental algorithmic problem-solving techniques. The book deals with some of the most important and challenging areas of programming and computer science in a highly readable manner. It covers both algorithmic theory and programming practice, demonstrating how theory is reflected in real Python programs. Well-known algorithms and data struc

  7. Word length, set size, and lexical factors: Re-examining what causes the word length effect.

    Science.gov (United States)

    Guitard, Dominic; Gabel, Andrew J; Saint-Aubin, Jean; Surprenant, Aimée M; Neath, Ian

    2018-04-19

    The word length effect, better recall of lists of short (fewer syllables) than long (more syllables) words, has been termed a benchmark effect of working memory. Despite this, experiments on the word length effect can yield quite different results depending on set size and stimulus properties. Seven experiments are reported that address these two issues. Experiment 1 replicated the finding of a preserved word length effect under concurrent articulation for large stimulus sets, which contrasts with the abolition of the word length effect by concurrent articulation for small stimulus sets. Experiment 2, however, demonstrated that when the short and long words are equated on more dimensions, concurrent articulation abolishes the word length effect for large stimulus sets. Experiment 3 shows a standard word length effect when output time is equated, but Experiments 4-6 show no word length effect when short and long words are equated on increasingly more dimensions that previous demonstrations have overlooked. Finally, Experiment 7 compared recall of small- and large-neighborhood words that were equated on all the dimensions used in Experiment 6 (except for those directly related to neighborhood size), and a neighborhood size effect was still observed. We conclude that lexical factors, rather than word length per se, are better predictors of when the word length effect will occur. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  8. Genetic algorithm trajectory plan optimization for EAMA: EAST Articulated Maintenance Arm

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Jing, E-mail: wujing@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd., Hefei, Anhui (China); Lappeenranta University of Technology, Skinnarilankatu 34, Lappeenranta (Finland); Wu, Huapeng [Lappeenranta University of Technology, Skinnarilankatu 34, Lappeenranta (Finland); Song, Yuntao; Cheng, Yong; Zhao, Wenglong [Institute of Plasma Physics, Chinese Academy of Sciences, 350 Shushanhu Rd., Hefei, Anhui (China); Wang, Yongbo [Lappeenranta University of Technology, Skinnarilankatu 34, Lappeenranta (Finland)

    2016-11-01

    Highlights: • A redundant 10-DOF serial-articulated robot for EAST assembly and maintenance is presented. • A trajectory optimization algorithm for the robot is developed. • A minimum-jerk objective is presented to suppress machining vibration of the robot. - Abstract: EAMA (EAST Articulated Maintenance Arm) is an articulated serial manipulator with a 7-degree-of-freedom (DOF) arm followed by a 3-DOF gripper; its total length is 8.867 m. It works in the Experimental Advanced Superconducting Tokamak (EAST) vacuum vessel (VV) to perform blanket inspection and remote maintenance tasks. This paper presents a trajectory optimization method which aims to give the 7-DOF articulated arm a stable movement that keeps the mounted inspection camera free of vibration. Based on dynamics analysis, the trajectory optimization algorithm adopts multi-order polynomial interpolation in joint space and a high-order geometric Jacobian transform. The objective of the optimization algorithm is to suppress end-effector vibration by minimizing the jerk RMS (root mean square) value. The proposed solution satisfies the kinematic constraints of EAMA's motion and keeps the arm within absolute bounds on velocity, acceleration and jerk. A GA (genetic algorithm) is employed to find a global and robust solution to this problem.
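The jerk-RMS objective can be sketched numerically: jerk is the third time derivative of joint position, and its RMS over a sampled trajectory is the quantity a GA would minimize. The finite-difference evaluation and the cubic test trajectory below are illustrative assumptions, not the paper's implementation.

```python
import math

# Hedged sketch: jerk RMS of a uniformly sampled joint trajectory,
# evaluated by repeated finite differences.

def jerk_rms(positions, dt):
    """RMS of the third derivative of a uniformly sampled trajectory."""
    vel = [(b - a) / dt for a, b in zip(positions, positions[1:])]
    acc = [(b - a) / dt for a, b in zip(vel, vel[1:])]
    jerk = [(b - a) / dt for a, b in zip(acc, acc[1:])]
    return math.sqrt(sum(j * j for j in jerk) / len(jerk))

dt = 0.01
t = [i * dt for i in range(200)]
q = [0.5 * ti ** 3 for ti in t]   # cubic path: constant jerk of 3.0 rad/s^3
rms = jerk_rms(q, dt)
```

In an optimization loop, each GA candidate (e.g. a set of polynomial coefficients per joint) would be sampled like `q` above and scored by `jerk_rms`.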

  9. Trichoderma harzianum as a plant growth promoter of passion fruit (Passiflora edulis var. flavicarpa Degener)

    Directory of Open Access Journals (Sweden)

    Cubillos-Hinojosa Juan

    2009-04-01

    Full Text Available

    An experiment was carried out under laboratory and greenhouse conditions to evaluate the effect of the native strain TCN-014 and the commercial strain TCC-005 of Trichoderma harzianum on the germination and early growth of passion fruit. Inocula of 10^4, 10^6 and 10^8 conidia/mL were prepared for each strain and applied to passion fruit seeds; the number of germinated seeds was evaluated over 15 days, and the germination percentage, germination speed index and mean germination time were calculated. The germinated seeds were then transferred to greenhouse conditions and, after two months, seedling height, stem thickness, number of leaves, root length and total dry weight were measured. All treatments stimulated seed germination and seedling development; however, the native strain at concentrations of 10^6 and 10^8 conidia/mL showed better results than the commercial strain. The results suggest an effective action of T. harzianum as a plant growth promoter, showing its potential for the development of a bioproduct useful for the ecological management of passion fruit crops.

  10. Deconvolution of continuous paleomagnetic data from pass-through magnetometer: A new algorithm to restore geomagnetic and environmental information based on realistic optimization

    Science.gov (United States)

    Oda, Hirokuni; Xuan, Chuang

    2014-10-01

    The development of pass-through superconducting rock magnetometers (SRM) has greatly promoted the collection of paleomagnetic data from continuous long-core samples. The output of a pass-through measurement is smoothed and distorted due to convolution of the magnetization with the magnetometer sensor response. Although several studies could restore high-resolution paleomagnetic signals through deconvolution of pass-through measurements, difficulties in accurately measuring the magnetometer sensor response have hindered the application of deconvolution. We acquired a reliable sensor response of an SRM at Oregon State University based on repeated measurements of a precisely fabricated magnetic point source. In addition, we present an improved deconvolution algorithm based on Akaike's Bayesian Information Criterion (ABIC) minimization, incorporating new parameters to account for errors in sample measurement position and length. The new algorithm was tested using synthetic data constructed by convolving a "true" paleomagnetic signal containing an "excursion" with the sensor response. Realistic noise was added to the synthetic measurement using a Monte Carlo method based on the measurement noise distribution acquired from 200 repeated measurements of a u-channel sample. Deconvolution of 1000 synthetic measurements with realistic noise closely resembles the "true" magnetization, and successfully restored fine-scale magnetization variations including the "excursion." Our analyses show that inaccuracy in sample measurement position and length significantly affects the deconvolution estimation, and can be resolved using the new deconvolution algorithm. Optimized deconvolution of 20 repeated measurements of a u-channel sample yielded highly consistent deconvolution results and estimates of error in sample measurement position and length, demonstrating the reliability of the new deconvolution algorithm for real pass-through measurements.
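The convolution/deconvolution setting can be illustrated with a toy model: a sharp magnetization profile convolved with a Gaussian sensor response loses amplitude, and iterative deconvolution restores it. The sketch uses the classic Van Cittert iteration as a simple stand-in for the paper's ABIC-based optimized scheme; the sensor response and magnetization profile are invented for the example.

```python
import math

# Hedged sketch: pass-through output = magnetization * sensor response,
# partially restored by Van Cittert iteration: m <- m + (y - K m).

def convolve(signal, kernel, half):
    n = len(signal)
    return [sum(kernel[k + half] * signal[i - k]
                for k in range(-half, half + 1) if 0 <= i - k < n)
            for i in range(n)]

half = 6
kernel = [math.exp(-0.5 * (k / 2.0) ** 2) for k in range(-half, half + 1)]
ksum = sum(kernel)
kernel = [v / ksum for v in kernel]           # assumed sensor response

n = 100
m_true = [math.exp(-0.5 * ((i - 50) / 3.0) ** 2) for i in range(n)]
y = convolve(m_true, kernel, half)            # smoothed measurement

m_hat = list(y)
for _ in range(40):                            # Van Cittert iterations
    km = convolve(m_hat, kernel, half)
    m_hat = [m + (yy - kv) for m, yy, kv in zip(m_hat, y, km)]

peak_meas = max(y)       # blurred peak sits below the true amplitude of 1.0
peak_rest = max(m_hat)   # deconvolution restores most of the peak height
```

With noisy real data the iteration count (or, in the paper's method, the ABIC-selected smoothness) trades resolution against noise amplification.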

  11. An Enhanced Run-Length Encoding Compression Method for Telemetry Data

    Directory of Open Access Journals (Sweden)

    Shan Yanhu

    2017-09-01

    Full Text Available Telemetry data are essential in evaluating the performance of aircraft and diagnosing their failures. This work combines oversampling technology with a run-length encoding compression algorithm with an error factor to further enhance the compression performance of telemetry data in a multichannel acquisition system. Compression of the telemetry data is carried out with the use of FPGAs. Pulse signals and vibration signals are used in the experiments. The proposed method is compared with two existing methods. The experimental results indicate that the compression ratio, precision, and distortion degree of the telemetry data are improved significantly compared with those obtained by the existing methods. The implementation and measurement of the proposed telemetry data compression method show its effectiveness when used in a high-precision, high-capacity multichannel acquisition system.
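The idea of run-length encoding with an error factor can be sketched in software: consecutive samples merge into one run while they stay within a tolerance of the run's value, so nearly constant telemetry segments compress even under sensor noise. The merging rule and tolerance below are illustrative assumptions, since the abstract does not give the exact scheme or its FPGA implementation.

```python
# Hedged sketch: run-length encoding with an error factor (tolerance).
# tol=0 gives ordinary lossless RLE; tol>0 merges near-equal samples.

def rle_encode(samples, tol=0):
    runs = []                      # list of (value, count) pairs
    for s in samples:
        if runs and abs(s - runs[-1][0]) <= tol:
            runs[-1] = (runs[-1][0], runs[-1][1] + 1)
        else:
            runs.append((s, 1))
    return runs

def rle_decode(runs):
    return [v for v, n in runs for _ in range(n)]

data = [100, 100, 101, 100, 180, 181, 180, 100]
lossless = rle_encode(data, tol=0)   # only exact repeats merge: 7 runs
lossy = rle_encode(data, tol=2)      # near-equal samples merge: 3 runs
```

The error factor bounds the reconstruction error per sample by `tol`, which is the precision/compression trade-off the paper tunes.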

  12. Gap length distributions by PEPR

    International Nuclear Information System (INIS)

    Warszawer, T.N.

    1980-01-01

    Conditions guaranteeing exponential gap length distributions are formulated and discussed. Exponential gap length distributions of bubble chamber tracks first obtained on a CRT device are presented. Distributions of resulting average gap lengths and their velocity dependence are discussed. (orig.)
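A quick numerical check of the exponential hypothesis discussed above: for an exponential law the standard deviation equals the mean, so their ratio on measured gap lengths is a simple diagnostic, and the sample mean is the maximum-likelihood estimate of the average gap length. The gaps below are simulated for illustration, not bubble chamber data.

```python
import random, math

# Hedged sketch: testing gap lengths for consistency with an exponential
# distribution via the sd/mean ratio (which is 1 for exponential data).

rng = random.Random(42)
mean_gap = 2.5
gaps = [rng.expovariate(1.0 / mean_gap) for _ in range(20000)]

m = sum(gaps) / len(gaps)                          # MLE of the mean gap
sd = math.sqrt(sum((g - m) ** 2 for g in gaps) / len(gaps))
ratio = sd / m                                     # ~1 for exponential gaps
```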

  13. Effect of β,β-Dimethylacrylshikonin on Inhibition of Human Colorectal Cancer Cell Growth <em>in Vitro em>and <em>in Vivoem>

    Directory of Open Access Journals (Sweden)

    Ting Feng

    2012-07-01

    Full Text Available In traditional Chinese medicine, shikonin and its derivatives, has been used in East Asia for several years for the prevention and treatment of several diseases, including cancer. We previously identified that β,β-dimethylacrylshikonin (DA could inhibit hepatocellular carcinoma growth. In the present study, we investigated the inhibitory effects of DA on human colorectal cancer (CRC cell line HCT-116 <em>in vitroem> and <em>in vivoem>. A viability assay showed that DA could inhibit tumor cell growth in a time- and dose-dependent manner. Flow cytometry showed that DA blocks the cell cycle at G0/G1 phase. Western blotting results demonstrated that the induction of apoptosis by DA correlated with the induction of pro-apoptotic proteins Bax, and Bid, and a decrease in the expression of anti-apoptotic proteins Bcl-2 and Bcl-xl. Furthermore, treatment of HCT-116 bearing nude mice with DA significantly retarded the growth of xenografts. Consistent with the results <em>in vitroem>, the DA-mediated suppression of HCT-116 xenografts correlated with Bax and Bcl-2. Taken together, these results suggest that DA could be a novel and promising approach to the treatment of CRC.

  14. Cutting Whole Length or Partial Length of Internal Anal Sphincter in Managementof Fissure in Ano

    Directory of Open Access Journals (Sweden)

    Furat Shani Aoda

    2017-12-01

    Full Text Available A chronic anal fissure is a common painful perianal condition. The main operative procedure to treat this painful condition is a lateral internal sphincterotomy (LIS). The aim of this study is to compare the outcomes and complications of closed LIS performed up to the dentate line (whole length of the internal sphincter) versus up to the fissure apex (partial length of the internal sphincter) in the treatment of anal fissure. This is a prospective comparative study including 100 patients with chronic fissure in ano, all assigned to undergo closed LIS. The patients were randomly divided into two groups: 50 underwent LIS to the level of the dentate line (whole length) and the other 50 underwent LIS to the level of the fissure apex (partial length). Patients were followed up weekly in the first month, twice monthly in the second month, then monthly for the next two months, and finally after one year. There was satisfactory relief of pain in all patients in both groups, and complete healing of the fissure occurred. Regarding postoperative incontinence, no major degree of incontinence occurred in either group, but a minor degree of incontinence persisted in 7 patients after whole length LIS after one year. In conclusion, both whole length and partial length LIS are associated with improvement of pain and a good chance of healing, but whole length LIS carries a higher risk of long-term flatus incontinence. Hence, we recommend partial length LIS as the treatment for chronic anal fissure.

  15. Enrichment and Purification of Syringin, Eleutheroside E and Isofraxidin from Acanthopanax senticosus by Macroporous Resin

    Directory of Open Access Journals (Sweden)

    Yuangang Zu

    2012-07-01

    Full Text Available In order to screen a suitable resin for the preparative simultaneous separation and purification of syringin, eleutheroside E and isofraxidin from Acanthopanax senticosus, the adsorption and desorption properties of 17 widely used commercial macroporous resins were evaluated. According to our results, HPD100C, which adsorbs by the molecular tiers model, was the best macroporous resin, offering higher adsorption and desorption capacities and a higher adsorption speed for syringin, eleutheroside E and isofraxidin than the other resins. Dynamic adsorption and desorption tests were carried out to optimize the process parameters. The optimal conditions were as follows: for adsorption, processing volume: 24 BV, flow rate: 2 BV/h; for desorption, ethanol–water solution: 60:40 (v/v), eluent volume: 4 BV, flow rate: 3 BV/h. Under the above conditions, the contents of syringin, eleutheroside E and isofraxidin increased 174-fold, 20-fold and 5-fold, and their recoveries were 80.93%, 93.97% and 93.79%, respectively.

  16. Telomere length and depression

    DEFF Research Database (Denmark)

    Wium-Andersen, Marie Kim; Ørsted, David Dynnes; Rode, Line

    2017-01-01

    BACKGROUND: Depression has been cross-sectionally associated with short telomeres as a measure of biological age. However, the direction and nature of the association is currently unclear. AIMS: We examined whether short telomere length is associated with depression cross-sectionally as well as prospectively and genetically. METHOD: Telomere length and three polymorphisms, TERT, TERC and OBFC1, were measured in 67 306 individuals aged 20-100 years from the Danish general population and associated with register-based attendance at hospital for depression and purchase of antidepressant medication. RESULTS: Attendance at hospital for depression was associated with short telomere length cross-sectionally, but not prospectively. Further, purchase of antidepressant medication was not associated with short telomere length cross-sectionally or prospectively. Mean follow-up was 7.6 years (range 0...

  17. Kmer-SSR: a fast and exhaustive SSR search algorithm.

    Science.gov (United States)

    Pickett, Brandon D; Miller, Justin B; Ridge, Perry G

    2017-12-15

    One of the main challenges with bioinformatics software is that the size and complexity of datasets necessitate trading speed for accuracy, or completeness. To combat this problem of computational complexity, a plethora of heuristic algorithms have arisen that report a 'good enough' solution to biological questions. However, in instances such as Simple Sequence Repeats (SSRs), a 'good enough' solution may not accurately portray results in population genetics, phylogenetics and forensics, which require accurate SSRs to calculate intra- and inter-species interactions. We present Kmer-SSR, which finds all SSRs faster than most heuristic SSR identification algorithms in a parallelized, easy-to-use manner. The exhaustive Kmer-SSR option has 100% precision and 100% recall and accurately identifies every SSR of any specified length. To identify more biologically pertinent SSRs, we also developed several filters that allow users to easily view a subset of SSRs based on user input. Kmer-SSR, coupled with the filter options, accurately and intuitively identifies SSRs quickly and in a more user-friendly manner than any other SSR identification algorithm. The source code is freely available on GitHub at https://github.com/ridgelab/Kmer-SSR. perry.ridge@byu.edu. © The Author(s) 2017. Published by Oxford University Press.
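The exhaustive search the abstract describes can be illustrated with a naive scan: for every motif length, count back-to-back repeats at each position. This is a minimal Python sketch of the general SSR-finding idea, not the Kmer-SSR implementation; the parameter names are illustrative, and the redundant sub-motif reporting that Kmer-SSR's filters would remove is left in.

```python
def find_ssrs(seq, min_motif=1, max_motif=6, min_repeats=3):
    """Exhaustively scan seq for simple sequence repeats (SSRs):
    a motif of length k repeated back-to-back at least min_repeats
    times. Returns (start, motif, repeat_count) tuples."""
    ssrs = []
    n = len(seq)
    for k in range(min_motif, max_motif + 1):
        i = 0
        while i + k <= n:
            motif = seq[i:i + k]
            repeats = 1
            # extend the run while the next k characters equal the motif
            while seq[i + repeats * k : i + (repeats + 1) * k] == motif:
                repeats += 1
            if repeats >= min_repeats:
                ssrs.append((i, motif, repeats))
                i += repeats * k  # skip past the reported run
            else:
                i += 1
    return ssrs
```

For example, `find_ssrs("ACACACACGT", min_motif=2, max_motif=2)` reports the `AC` motif repeated four times starting at position 0. A production tool would add the user-facing filters the abstract mentions (motif length, repeat count, sequence subsets) on top of a scan like this.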

  18. Training for Defense? From Stochastic Traits to Synchrony in Giant Honey Bees (Apis dorsata)

    Directory of Open Access Journals (Sweden)

    Gerald Kastberger

    2012-08-01

    Full Text Available In Giant Honey Bees, abdomen flipping happens in a variety of contexts. It can be either synchronous or cascaded, such as in the collective defense traits of shimmering and rearing-up, or it can happen as single-agent behavior. Abdomen flipping is also involved in flickering behavior, which occurs regularly in the quiescent colony state, displaying singular or collective traits with stochastic and (semi-)synchronized properties. It presumably acts via visual, mechanoceptive, and pheromonal pathways, and its goals are still unknown. This study asks whether flickering is preliminary to shimmering, as posited by the fs (flickering-shimmering) transition hypothesis. We tested the respective prediction that trigger sites (ts) at the nest surface (where shimmering waves had been generated) show higher flickering activity than the alternative non-trigger sites (nts). We measured the flickering activity of ts- and nts-surface bees from two experimental nests, before and after the colony had been aroused by a dummy wasp. Arousal increased the rate and intensity of the flickering activity of both the ts and nts cohorts (P < 0.05), whereby the flickering intensity of ts bees was higher than that of nts bees (P < 0.05). Under arousal, the colonies also increased the number of flickering-active ts and nts cohorts (P < 0.05). This provides evidence that cohorts which specialize in launching shimmering waves are found across the quiescent nest zone. It also shows that arousal may reinforce the responsiveness of quiescent curtain bees to participate in shimmering, practically by recruiting additional trigger-site bees and thereby expanding the repetition rate and intensity of shimmering waves. This finding confirms the fs-transition hypothesis and constitutes evidence that flickering is part of a basal, colony-intrinsic information system

  19. A new algorithm for DNS of turbulent polymer solutions using the FENE-P model

    Science.gov (United States)

    Vaithianathan, T.; Collins, Lance; Robert, Ashish; Brasseur, James

    2004-11-01

    Direct numerical simulations (DNS) of polymer solutions based on the finite extensible nonlinear elastic model with the Peterlin closure (FENE-P) solve for a conformation tensor with properties that must be maintained by the numerical algorithm. In particular, the eigenvalues of the tensor are all positive (to maintain positive definiteness) and their sum is bounded by the maximum extension length. Loss of either of these properties gives rise to unphysical instabilities. In earlier work, Vaithianathan & Collins (2003) devised an algorithm based on an eigendecomposition that updates the eigenvalues of the conformation tensor directly, making it easier to maintain the conditions required for a stable calculation. However, simple fixes (such as ceilings and floors) yield results that violate overall conservation. The present finite-difference algorithm is inherently designed to satisfy all of the bounds on the eigenvalues, and thus restores overall conservation. New results suggest that the earlier algorithm may have exaggerated the energy exchange at high wavenumbers. In particular, feedback of the polymer elastic energy to the isotropic turbulence is now greatly reduced.
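The two admissibility constraints on the conformation tensor, positive eigenvalues and a bounded eigenvalue sum, can be illustrated with a clip-and-rescale projection in NumPy. Note that this is precisely the kind of "ceilings and floors" fix the abstract warns can violate overall conservation; it is a sketch of the constraints themselves, under assumed names, not the conservative finite-difference scheme the abstract proposes.

```python
import numpy as np

def project_conformation(C, L2, eps=1e-8):
    """Project a symmetric conformation tensor C onto the admissible
    set of the FENE-P model: all eigenvalues positive (positive
    definiteness) and eigenvalue sum below the maximum squared
    extension L2. Illustrative only; clipping like this is not
    conservative."""
    w, V = np.linalg.eigh(C)          # eigenvalues w, eigenvectors V
    w = np.maximum(w, eps)            # enforce positive definiteness
    total = w.sum()
    if total >= L2:                   # enforce bounded extension
        w *= (L2 - eps) / total
    return (V * w) @ V.T              # reassemble C = V diag(w) V^T
```

Working in the eigenbasis, as in the 2003 scheme, makes both bounds directly visible; the contribution of the present work is to build those bounds into the update itself rather than repairing violations after the fact.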

  20. The Prediction of Length-of-day Variations Based on Gaussian Processes

    Science.gov (United States)

    Lei, Y.; Zhao, D. N.; Gao, Y. P.; Cai, H. B.

    2015-01-01

    Due to the complicated time-varying characteristics of the length-of-day (LOD) variations, the accuracies of traditional strategies for predicting the LOD variations, such as the least squares extrapolation model and the time-series analysis model, have not met the requirements of real-time, high-precision applications. In this paper, a machine-learning algorithm, the Gaussian process (GP) model, is employed to forecast the LOD variations. Its prediction precision is analyzed and compared with those of the back propagation neural network (BPNN) and general regression neural network (GRNN) models, as well as the Earth Orientation Parameters Prediction Comparison Campaign (EOP PCC). The results demonstrate that applying the GP model to the prediction of the LOD variations is efficient and feasible.
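The machinery behind GP forecasting, a posterior mean and variance computed from a covariance kernel over time points, can be sketched in a few lines of NumPy. This is a generic illustration with an assumed squared-exponential kernel and made-up hyperparameters, not the model, kernel, or data of the paper.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=10.0, variance=1.0):
    """Squared-exponential covariance between two sets of time points."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_predict(t_train, y_train, t_test, noise=1e-2):
    """GP regression: posterior mean and variance at t_test given
    noisy observations y_train at t_train."""
    K = rbf_kernel(t_train, t_train) + noise * np.eye(t_train.size)
    K_s = rbf_kernel(t_test, t_train)
    alpha = np.linalg.solve(K, y_train)
    mean = K_s @ alpha
    cov = rbf_kernel(t_test, t_test) - K_s @ np.linalg.solve(K, K_s.T)
    return mean, np.diag(cov)
```

In the paper's setting, `t_train` would hold past epochs of the observed LOD series and `t_test` the epochs to be forecast; kernel choice and hyperparameter estimation, which GP libraries automate, largely determine the prediction precision. A useful by-product, compared with the BPNN and GRNN models, is that the posterior variance gives an uncertainty estimate for each predicted epoch.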