Regier, Michael D; Moodie, Erica E M
2016-05-01
We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when a standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm into a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there are missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm more accessible to a wider and more general audience.
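As a point of reference for the extension described above, the standard monolithic EM iteration it partitions can be sketched on the simplest possible case, a two-component one-dimensional Gaussian mixture. This is an illustrative sketch only; the synthetic data, initialisation, and component count are assumptions, not taken from the paper (the common 1/sqrt(2*pi) factor is dropped since it cancels in the responsibilities).

```python
import math
import random

def em_gmm_1d(data, iters=50):
    # Standard EM for a two-component 1-D Gaussian mixture: the kind of
    # monolithic EM that the proposed extension would partition into a
    # sequence of smaller, self-contained EM algorithms.
    mu1, mu2 = min(data), max(data)                  # crude initialisation
    s1 = s2 = (max(data) - min(data)) / 4 or 1.0
    pi = 0.5                                         # weight of component 1
    for _ in range(iters):
        # E-step: responsibility of component 1 for each point
        # (the shared 1/sqrt(2*pi) constant cancels in the ratio)
        r = []
        for x in data:
            p1 = pi * math.exp(-(x - mu1) ** 2 / (2 * s1 ** 2)) / s1
            p2 = (1 - pi) * math.exp(-(x - mu2) ** 2 / (2 * s2 ** 2)) / s2
            r.append(p1 / (p1 + p2))
        # M-step: closed-form updates of weight, means, and std devs
        n1 = sum(r)
        n2 = len(data) - n1
        pi = n1 / len(data)
        mu1 = sum(ri * x for ri, x in zip(r, data)) / n1
        mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / n2
        s1 = math.sqrt(sum(ri * (x - mu1) ** 2 for ri, x in zip(r, data)) / n1) or 1e-6
        s2 = math.sqrt(sum((1 - ri) * (x - mu2) ** 2 for ri, x in zip(r, data)) / n2) or 1e-6
    return pi, (mu1, s1), (mu2, s2)

random.seed(0)
data = ([random.gauss(0, 1) for _ in range(200)] +
        [random.gauss(6, 1) for _ in range(200)])
w, c1, c2 = em_gmm_1d(data)
```

On this well-separated synthetic mixture the recovered component means settle near the true values of 0 and 6, with a roughly even mixing weight.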
International Nuclear Information System (INIS)
Zeng, G.L.; Gullberg, G.T.
1990-01-01
Reconstruction artifacts in cone beam tomography are studied for filtered backprojection (Feldkamp) and iterative EM algorithms. The filtered backprojection algorithm uses a voxel-driven, interpolated backprojection to reconstruct the cone beam data, whereas the iterative EM algorithm performs ray-driven projection and backprojection operations for each iteration. Two weighting schemes for the projection and backprojection operations in the EM algorithm are studied. One weights each voxel by the length of the ray through the voxel; the other equates the value of a voxel to the functional value at the midpoint of the line intersecting the voxel, obtained by interpolating between eight neighboring voxels. Cone beam reconstruction artifacts such as rings, bright vertical extremities, and slice-to-slice cross talk are not found with parallel beam and fan beam geometries.
A Trust Region Aggressive Space Mapping Algorithm for EM
DEFF Research Database (Denmark)
Bakr., M.; Bandler, J. W.; Biernacki, R.
1998-01-01
A robust new algorithm for electromagnetic (EM) optimization of microwave circuits is presented. The algorithm (TRASM) integrates a trust region methodology with the aggressive space mapping (ASM). The trust region ensures that each iteration results in improved alignment between the coarse... This suggested step exploits all the available EM simulations for improving the uniqueness of parameter extraction. The new algorithm was successfully used to design a number of microwave circuits. Examples include the EM optimization of a double-folded stub filter and of a high-temperature superconducting (HTS...
Application of the EM algorithm to radiographic images.
Brailean, J C; Little, D; Giger, M L; Chen, C T; Sullivan, B J
1992-01-01
The expectation maximization (EM) algorithm has received considerable attention in the area of positron emission tomography (PET) as a restoration and reconstruction technique. In this paper, the restoration capabilities of the EM algorithm when applied to radiographic images are investigated. This application does not involve reconstruction. The performance of the EM algorithm is quantitatively evaluated using a "perceived" signal-to-noise ratio (SNR) as the image quality metric. This perceived SNR is based on statistical decision theory and includes both the observer's visual response function and a noise component internal to the eye-brain system. For a variety of processing parameters, the relative SNR (ratio of the processed SNR to the original SNR) is calculated and used as a metric to compare quantitatively the effects of the EM algorithm with two other image enhancement techniques: global contrast enhancement (windowing) and unsharp mask filtering. The results suggest that the EM algorithm's performance is superior to unsharp mask filtering and global contrast enhancement for radiographic images which contain objects smaller than 4 mm.
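For images degraded by blur under a Poisson noise model, the EM algorithm reduces to the multiplicative Richardson-Lucy update, the kind of restoration iteration evaluated above. The following 1-D sketch is illustrative only; the blur kernel, test signal, and iteration count are assumptions, not values from the paper.

```python
def richardson_lucy(blurred, psf, iters=30):
    # EM (Richardson-Lucy) restoration of a 1-D signal under a Poisson
    # noise model; the multiplicative update below is one EM iteration,
    # with the blur kernel playing the role of the system matrix.
    n, k = len(blurred), len(psf)

    def convolve(sig):
        half = k // 2
        out = []
        for i in range(n):
            acc = 0.0
            for j in range(k):
                idx = i + j - half
                if 0 <= idx < n:
                    acc += psf[j] * sig[idx]
            out.append(acc)
        return out

    est = [1.0] * n                      # flat (uninformative) start
    for _ in range(iters):
        pred = convolve(est)             # forward-project current estimate
        ratio = [b / p if p > 0 else 0.0 for b, p in zip(blurred, pred)]
        corr = convolve(ratio)           # symmetric PSF: correlation == convolution
        est = [e * c for e, c in zip(est, corr)]
    return est

psf = [0.25, 0.5, 0.25]                  # assumed symmetric blur kernel
truth = [0, 0, 10, 0, 0, 0, 4, 0, 0]     # two point-like objects
blurred = [0.0, 2.5, 5.0, 2.5, 0.0, 1.0, 2.0, 1.0, 0.0]  # truth convolved with psf
restored = richardson_lucy(blurred, psf)
```

On this noiseless example the iteration progressively sharpens the two blurred peaks back toward their original positions.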
Automatic Derivation of Statistical Algorithms: The EM Family and Beyond
Gray, Alexander G.; Fischer, Bernd; Schumann, Johann; Buntine, Wray
2003-01-01
Machine learning has reached a point where many probabilistic methods can be understood as variations, extensions and combinations of a much smaller set of abstract themes, e.g., as different instances of the EM algorithm. This enables the systematic derivation of algorithms customized for different models. Here, we describe the AUTOBAYES system which takes a high-level statistical model specification, uses powerful symbolic techniques based on schema-based program synthesis and computer alge...
The relationship between randomness and power-law distributed move lengths in random walk algorithms
Sakiyama, Tomoko; Gunji, Yukio-Pegio
2014-05-01
Recently, we proposed a new random walk algorithm, termed the REV algorithm, in which the agent alters its directional rule using the most recent four random numbers. Here, we examined how a non-bounded random number, i.e., "randomness" regarding move direction, was important for optimal searching and for power-law distributed step lengths arising from rule changes. We proposed two algorithms: the REV and REV-bounded algorithms. In the REV algorithm, one of the four random numbers used to change the rule is non-bounded. In contrast, all four random numbers in the REV-bounded algorithm are bounded. We showed that the REV algorithm exhibited more consistent power-law distributed step lengths and more flexible searching behavior.
Performance evaluation of the EM algorithm applied to radiographic images
International Nuclear Information System (INIS)
Brailean, J.C.; Giger, M.L.; Chen, C.T.; Sullivan, B.J.
1990-01-01
In this paper the authors evaluate the expectation maximization (EM) algorithm, both qualitatively and quantitatively, as a technique for enhancing radiographic images. Previous studies have qualitatively shown the usefulness of the EM algorithm but have failed to quantify and compare its performance with those of other image processing techniques. Recent studies by Loo et al., Ishida et al., and Giger et al. have explained improvements in image quality quantitatively in terms of a signal-to-noise ratio (SNR) derived from signal detection theory. In this study, we take a similar approach in quantifying the effect of the EM algorithm on detection of simulated low-contrast square objects superimposed on radiographic mottle. The SNRs of the original and processed images are calculated taking into account the human visual system response and the screen-film transfer function, as well as a noise component internal to the eye-brain system. The EM algorithm was also implemented on digital screen-film images of test patterns and clinical mammograms.
A computational algorithm addressing how vessel length might depend on vessel diameter
Jing Cai; Shuoxin Zhang; Melvin T. Tyree
2010-01-01
The objective of this method paper was to examine a computational algorithm that may reveal how vessel length might depend on vessel diameter within any given stem or species. The computational method requires the assumption that vessels remain approximately constant in diameter over their entire length. When this method is applied to three species or hybrids in the...
Noise properties of the EM algorithm. Pt. 1
International Nuclear Information System (INIS)
Barrett, H.H.; Wilson, D.W.; Tsui, B.M.W.
1994-01-01
The expectation-maximisation (EM) algorithm is an important tool for maximum-likelihood (ML) estimation and image reconstruction, especially in medical imaging. It is a non-linear iterative algorithm that attempts to find the ML estimate of the object that produced a data set. The convergence of the algorithm and other deterministic properties are well established, but relatively little is known about how noise in the data influences noise in the final reconstructed image. In this paper we present a detailed treatment of these statistical properties. The specific application we have in mind is image reconstruction in emission tomography, but the results are valid for any application of the EM algorithm in which the data set can be described by Poisson statistics. We show that the probability density function for the grey level at a pixel in the image is well approximated by a log-normal law. An expression is derived for the variance of the grey level and for pixel-to-pixel covariance. The variance increases rapidly with iteration number at first, but eventually saturates as the ML estimate is approached. Moreover, the variance at any iteration number has a factor proportional to the square of the mean image (though other factors may also depend on the mean image), so a map of the standard deviation resembles the object itself. Thus low-intensity regions of the image tend to have low noise. (author)
Bakar, Sumarni Abu; Ibrahim, Milbah
2017-08-01
The shortest path problem is a popular problem in graph theory. It is about finding a path with minimum length between a specified pair of vertices. In any network the weight of each edge is usually represented as a crisp real number, and the weight is subsequently used in the calculation of the shortest path using deterministic algorithms. However, due to failures, uncertainty is always encountered in practice, whereby the weight of an edge of the network is uncertain and imprecise. In this paper, a modified algorithm which utilizes a heuristic shortest path method and a fuzzy approach is proposed for solving a network with imprecise arc lengths. Here, interval numbers and triangular fuzzy numbers are considered for representing the arc lengths of the network. The modified algorithm is then applied to a specific example of the Travelling Salesman Problem (TSP). The total shortest distance obtained from this algorithm is then compared with the total distance obtained from the traditional nearest neighbour heuristic algorithm. The results show that the modified algorithm not only yields a sequence of visited cities similar to the traditional approach, but also provides a good measure of total shortest distance that is smaller than that calculated using the traditional approach. Hence, this research could contribute to the enrichment of methods used in solving the TSP.
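The abstract does not spell out the modified heuristic, but a minimal sketch of a nearest-neighbour tour over interval-valued arc lengths conveys the idea: arcs are ranked by a simple interval index (here the midpoint, an assumed choice, not necessarily the paper's ranking), and the tour length itself is accumulated as an interval.

```python
def midpoint(iv):
    # A common ranking index for an interval number (lo, hi).
    return (iv[0] + iv[1]) / 2

def nearest_neighbour_interval(dist, start=0):
    # Nearest-neighbour tour in which each arc length dist[i][j] is an
    # interval (lo, hi); candidate arcs are compared by their midpoints.
    n = len(dist)
    tour, unvisited = [start], set(range(n)) - {start}
    while unvisited:
        cur = tour[-1]
        nxt = min(unvisited, key=lambda j: midpoint(dist[cur][j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    tour.append(start)                        # close the tour
    lo = sum(dist[a][b][0] for a, b in zip(tour, tour[1:]))
    hi = sum(dist[a][b][1] for a, b in zip(tour, tour[1:]))
    return tour, (lo, hi)

Z = (0, 0)                                    # dummy diagonal entries
dist = [                                      # made-up symmetric intervals
    [Z, (1, 3), (4, 6), (7, 9)],
    [(1, 3), Z, (2, 4), (5, 7)],
    [(4, 6), (2, 4), Z, (1, 2)],
    [(7, 9), (5, 7), (1, 2), Z],
]
tour, total = nearest_neighbour_interval(dist)
```

For this toy instance the tour is 0-1-2-3-0 and the total distance is itself an interval, here (11, 18).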
A leaf sequencing algorithm to enlarge treatment field length in IMRT
International Nuclear Information System (INIS)
Xia Ping; Hwang, Andrew B.; Verhey, Lynn J.
2002-01-01
With MLC-based IMRT, the maximum usable field size is often smaller than the maximum field size for conventional treatments. This is due to the constraints of the overtravel distances of MLC leaves and/or jaws. Using a new leaf sequencing algorithm, the usable IMRT field length (perpendicular to the MLC motion) can in most cases be made equal to the full length of the MLC field without violating the upper jaw overtravel limit. For any given intensity pattern, a criterion was proposed to assess whether the pattern can be delivered without violating the jaw position constraints. If the criterion is met, the new algorithm considers the jaw position constraints during segmentation for the step-and-shoot delivery method. The strategy employed by the algorithm is to connect the intensity elements outside the jaw overtravel limits with those inside the jaw overtravel limits. Several methods were used to establish these connections during segmentation by modifying a previously published algorithm (areal algorithm), including changing the intensity level, alternating the leaf-sequencing direction, or limiting the segment field size. The algorithm was tested with 1000 random intensity patterns with dimensions of 21×27 cm², 800 intensity patterns with higher intensity outside the jaw overtravel limit, and three different types of clinical treatment plans that were undeliverable using a segmentation method from a commercial treatment planning system. The new algorithm achieved a success rate of 100% with these test patterns. For the 1000 random patterns, the new algorithm yields a similar average number of segments, 36.9±2.9, in comparison to 36.6±1.3 when using the areal algorithm. For the 800 patterns with higher intensities outside the jaw overtravel limits, the new algorithm results in an increase of 25% in the average number of segments compared to the areal algorithm. However, the areal algorithm fails to create deliverable segments for 90% of these
Application of the Region-Time-Length algorithm to study of ...
Indian Academy of Sciences (India)
analyzed using the Region-Time-Length (RTL) algorithm based statistical technique. The utilized earthquake data were obtained from the International Seismological Centre. Thereafter, the homogeneity and completeness of the catalogue were improved. After performing iterative tests with different values of the r0 and t0 ...
A novel gene network inference algorithm using predictive minimum description length approach.
Chaitankar, Vijender; Ghosh, Preetam; Perkins, Edward J; Gong, Ping; Deng, Youping; Zhang, Chaoyang
2010-05-28
Reverse engineering of gene regulatory networks using information theory models has received much attention due to its simplicity, low computational cost, and capability of inferring large networks. One of the major problems with information theory models is determining the threshold that defines the regulatory relationships between genes. The minimum description length (MDL) principle has been implemented to overcome this problem. The description length of the MDL principle is the sum of the model length and the data encoding length. A user-specified fine-tuning parameter is used as a control mechanism between model and data encoding, but it is difficult to find the optimal parameter. In this work, we proposed a new inference algorithm which incorporates mutual information (MI), conditional mutual information (CMI), and the predictive minimum description length (PMDL) principle to infer gene regulatory networks from DNA microarray data. In this algorithm, the information-theoretic quantities MI and CMI determine the regulatory relationships between genes, and the PMDL principle method attempts to determine the best MI threshold without the need for a user-specified fine-tuning parameter. The performance of the proposed algorithm was evaluated using both synthetic time series data sets and a biological time series data set for the yeast Saccharomyces cerevisiae. The benchmark quantities precision and recall were used as performance measures. The results show that the proposed algorithm produced fewer false edges and significantly improved the precision, as compared to the existing algorithm. For further analysis the performance of the algorithms was observed over different sizes of data. We have proposed a new algorithm that implements the PMDL principle for inferring gene regulatory networks from time series DNA microarray data and that eliminates the need for a fine-tuning parameter. The evaluation results obtained from both synthetic and actual biological data sets show that the
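A minimal sketch of the MI-thresholding step at the heart of such inference algorithms follows. The PMDL threshold selection itself is not reproduced here; a fixed threshold stands in where the PMDL criterion would be computed, and the toy expression profiles are invented for illustration.

```python
import math
from itertools import combinations

def mutual_info(x, y):
    # Empirical mutual information (in bits) between two discrete series.
    n = len(x)
    px, py, pxy = {}, {}, {}
    for a, b in zip(x, y):
        px[a] = px.get(a, 0) + 1
        py[b] = py.get(b, 0) + 1
        pxy[(a, b)] = pxy.get((a, b), 0) + 1
    return sum((c / n) * math.log2(c * n / (px[a] * py[b]))
               for (a, b), c in pxy.items())

def infer_edges(profiles, threshold):
    # Keep an undirected edge when MI exceeds the threshold.  In the
    # paper the threshold comes from the PMDL principle; a fixed value
    # stands in for that criterion here.
    return [(g, h) for g, h in combinations(profiles, 2)
            if mutual_info(profiles[g], profiles[h]) > threshold]

profiles = {                          # toy binarised expression profiles
    "g1": [0, 1, 0, 1, 0, 1, 0, 1],
    "g2": [0, 1, 0, 1, 0, 1, 0, 1],   # tracks g1 exactly -> MI = 1 bit
    "g3": [0, 0, 1, 1, 0, 0, 1, 1],   # independent of g1/g2 -> MI = 0
}
edges = infer_edges(profiles, threshold=0.5)
```

Only the perfectly co-varying pair survives the threshold, so a single edge g1-g2 is inferred.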
Directory of Open Access Journals (Sweden)
Аndriy V. Sadchenko
2015-12-01
Digital television systems need to ensure that all digital signal processing operations are performed simultaneously and consistently. Frame synchronization is dictated by the need to match the phases of the transmitter and receiver so that the start of a frame can be identified. Long binary sequences with good aperiodic autocorrelation functions are often used as frame synchronization signals. Aim: This work is dedicated to the development of an algorithm for synthesizing synchronization sequences of arbitrary length. Materials and Methods: The paper provides a comparative analysis of the known sequences that can currently be used for synchronization, revealing their advantages and disadvantages. This work proposes an algorithm for the synthesis of binary synchronization sequences of arbitrary length with good autocorrelation properties, based on a noise generator with a uniform probability distribution. A semiconductor "white noise" generator is proposed as the source material for the synthesis of binary sequences with the desired properties. Results: A statistical analysis of the initial "white noise" realizations and of the synthesized sequences for frame synchronization of digital television is conducted. A comparative analysis of the synthesized sequences with known ones was carried out; the results show the benefits of the obtained sequences compared with the known ones, and the performed simulations confirm these results. Conclusions: A search algorithm for binary synchronization sequences with desired autocorrelation properties is thus obtained. Under this algorithm, sequences of any length can be produced, without length limitations. The resulting sync sequences can be used for frame synchronization in modern digital communication systems, increasing their efficiency and noise immunity.
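A brute-force version of the described synthesis idea can be sketched as follows: draw candidate binary sequences from a uniform random source and keep the one with the smallest peak aperiodic autocorrelation sidelobe. The trial count, sequence length, and scoring below are illustrative assumptions, not the paper's exact procedure.

```python
import random

def sidelobe_peak(seq):
    # Peak magnitude of the aperiodic autocorrelation sidelobes
    # (all non-zero lags).
    n = len(seq)
    return max(abs(sum(seq[i] * seq[i + k] for i in range(n - k)))
               for k in range(1, n))

def synthesize(length, trials=2000, seed=1):
    # Draw candidate +/-1 sequences from a uniform random source and
    # keep the one with the lowest sidelobe peak: a brute-force
    # stand-in for the noise-generator-based search in the abstract.
    rng = random.Random(seed)
    best, best_score = None, float("inf")
    for _ in range(trials):
        cand = [rng.choice((-1, 1)) for _ in range(length)]
        score = sidelobe_peak(cand)
        if score < best_score:
            best, best_score = cand, score
    return best, best_score

seq, peak = synthesize(16)
```

The lag-(n-1) correlation of a binary sequence is always ±1, so the sidelobe peak can never drop below 1; the random search quickly finds sequences with a low peak for modest lengths.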
An algorithm for the design and tuning of RF accelerating structures with variable cell lengths
Lal, Shankar; Pant, K. K.
2018-05-01
An algorithm is proposed for the design of a π mode standing wave buncher structure with variable cell lengths. It employs a two-parameter, multi-step approach for the design of the structure with the desired resonant frequency and field flatness. The algorithm, along with analytical scaling laws for the design of the RF power coupling slot, makes it possible to accurately design the structure employing a freely available electromagnetic code like SUPERFISH. To compensate for machining errors, a tuning method has been devised to achieve the desired RF parameters for the structure; it has been qualified by the successful tuning of a 7-cell buncher to a π mode frequency of 2856 MHz with the desired field flatness. The algorithm and tuning method have demonstrated the feasibility of developing an S-band accelerating structure with desired RF parameters at a relatively relaxed machining tolerance of ∼25 μm. This paper discusses the algorithm for the design and tuning of an RF accelerating structure with variable cell lengths.
Tracking of Multiple Moving Sources Using Recursive EM Algorithm
Directory of Open Access Journals (Sweden)
Böhme Johann F
2005-01-01
We deal with recursive direction-of-arrival (DOA) estimation of multiple moving sources. Based on the recursive EM algorithm, we develop two recursive procedures to estimate the time-varying DOA parameter for narrowband signals. The first procedure requires no prior knowledge about the source movement. The second procedure assumes that the motion of the moving sources is described by a linear polynomial model. The proposed recursion updates the polynomial coefficients when new data arrive. The suggested approaches have two major advantages: simple implementation and easy extension to wideband signals. Numerical experiments show that both procedures provide excellent results in a slowly changing environment. When the DOA parameter changes fast or two source directions cross, the procedure designed for a linear polynomial model performs better than the general procedure. Compared to the beamforming technique based on the same parameterization, our approach is computationally favorable and has a wider range of applications.
DCAF4, a novel gene associated with leucocyte telomere length
DEFF Research Database (Denmark)
Mangino, Massimo; Christiansen, Lene; Stone, Rivka
2015-01-01
BACKGROUND: Leucocyte telomere length (LTL), which is fashioned by multiple genes, has been linked to a host of human diseases, including sporadic melanoma. A number of genes associated with LTL have already been identified through genome-wide association studies. The main aim of this study was t...
Weissman, Alexander
2013-01-01
Convergence of the expectation-maximization (EM) algorithm to a global optimum of the marginal log likelihood function for unconstrained latent variable models with categorical indicators is presented. The sufficient conditions under which global convergence of the EM algorithm is attainable are provided in an information-theoretic context by…
EM algorithm for one-shot device testing with competing risks under exponential distribution
International Nuclear Information System (INIS)
Balakrishnan, N.; So, H.Y.; Ling, M.H.
2015-01-01
This paper provides an extension of the work of Balakrishnan and Ling [1] by introducing a competing risks model into a one-shot device testing analysis under an accelerated life test setting. An Expectation Maximization (EM) algorithm is then developed for the estimation of the model parameters. An extensive Monte Carlo simulation study is carried out to assess the performance of the EM algorithm and to compare the obtained results with the initial estimates obtained by the Inequality Constrained Least Squares (ICLS) method of estimation. Finally, we apply the EM algorithm to clinical data, ED01, to illustrate the method of inference developed here. - Highlights: • ALT data analysis for one-shot devices with competing risks is considered. • The EM algorithm is developed for the determination of the MLEs. • Estimates of lifetime under normal operating conditions are presented. • The EM algorithm improves the convergence rate
Linear array implementation of the EM algorithm for PET image reconstruction
International Nuclear Information System (INIS)
Rajan, K.; Patnaik, L.M.; Ramakrishna, J.
1995-01-01
The PET image reconstruction based on the EM algorithm has several attractive advantages over the conventional convolution backprojection algorithms. However, PET image reconstruction based on the EM algorithm is computationally burdensome for today's single processor systems. In addition, a large memory is required for the storage of the image, projection data, and the probability matrix. Since the computations are easily divided into tasks executable in parallel, multiprocessor configurations are the ideal choice for fast execution of the EM algorithms. In this study, the authors attempt to overcome these two problems by parallelizing the EM algorithm on a multiprocessor system. The parallel EM algorithm has been implemented on a linear array topology using commercially available fast floating point digital signal processor (DSP) chips as the processing elements (PEs). The performance of the EM algorithm on a 386/387 machine, an IBM 6000 RISC workstation, and on the linear array system is discussed and compared. The results show that the computational speed performance of a linear array using 8 DSP chips as PEs executing the EM image reconstruction algorithm is about 15.5 times better than that of the IBM 6000 RISC workstation. The novelty of the scheme is its simplicity. The linear array topology is expandable with a larger number of PEs. The architecture is not dependent on the DSP chip chosen, and the substitution of the latest DSP chip is straightforward and could yield better speed performance.
Word-length algorithm for language identification of under-resourced languages
Directory of Open Access Journals (Sweden)
Ali Selamat
2016-10-01
Language identification is widely used in machine learning, text mining, information retrieval, and speech processing. Available techniques for solving the problem of language identification require large amounts of training text that are not available for under-resourced languages, which form the bulk of the world's languages. The primary objective of this study is to propose a lexicon-based algorithm which is able to perform language identification using minimal training data. Because language identification is often the first step in many natural language processing tasks, it is necessary to explore techniques that will perform language identification in the shortest possible time. Hence, the second objective of this research is to study the effect of the proposed algorithm on the run-time performance of language identification. Precision, recall, and F1 measures were used to determine the effectiveness of the proposed word-length algorithm using datasets drawn from the Universal Declaration of Human Rights in 15 languages. The experimental results show good accuracy on language identification at the document level and at the sentence level based on the available dataset. The improved algorithm also showed significant improvement in run-time performance compared with the spelling checker approach.
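One simple variant of the word-length idea can be sketched as follows: each language is profiled by its normalised word-length histogram, and an unknown text is assigned to the nearest profile. The tiny corpora and the L1 distance are illustrative assumptions; the paper's actual lexicon-based algorithm and datasets differ.

```python
def length_profile(text, max_len=12):
    # Normalised word-length histogram of a text.
    hist = [0.0] * max_len
    for w in text.split():
        hist[min(len(w), max_len) - 1] += 1
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def identify(text, profiles):
    # Assign the text to the language whose word-length profile is
    # closest in L1 distance.
    p = length_profile(text)
    return min(profiles, key=lambda lang: sum(
        abs(a - b) for a, b in zip(p, profiles[lang])))

train = {   # tiny illustrative corpora; real use needs far more text
    "english": "all human beings are born free and equal in dignity and rights",
    "german": "alle menschen sind frei und gleich an wuerde und rechten geboren",
}
profiles = {lang: length_profile(t) for lang, t in train.items()}
```

Even with such minimal training data, texts drawn from the same source sentences are matched to the right profile; reliable identification of unseen text naturally needs much larger corpora.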
Length-Bounded Hybrid CPU/GPU Pattern Matching Algorithm for Deep Packet Inspection
Directory of Open Access Journals (Sweden)
Yi-Shan Lin
2017-01-01
Since frequent communication between applications takes place in high speed networks, deep packet inspection (DPI) plays an important role in network application awareness. The signature-based network intrusion detection system (NIDS) contains a DPI technique that examines the incoming packet payloads by employing a pattern matching algorithm that dominates the overall inspection performance. Existing studies focused on implementing efficient pattern matching algorithms by parallel programming on software platforms because of the advantages of lower cost and higher scalability; either the central processing unit (CPU) or the graphics processing unit (GPU) was involved. Our studies focused on designing a pattern matching algorithm based on the cooperation between both CPU and GPU. In this paper, we present an enhanced design for our previous work, a length-bounded hybrid CPU/GPU pattern matching algorithm (LHPMA). In the preliminary experiment, the performance and comparison with the previous work are displayed, and the experimental results show that the LHPMA can achieve not only effective CPU/GPU cooperation but also higher throughput than the previous method.
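The length-bounded division of labour can be sketched in miniature: payloads up to a length bound are inspected immediately on the CPU, while longer ones are batched for the GPU pass. The GPU pass is simulated here on the CPU, a naive substring matcher stands in for a real multi-pattern automaton, and the bound and packet data are illustrative assumptions.

```python
def match_payload(payload, patterns):
    # Naive multi-pattern matcher standing in for an Aho-Corasick-style
    # automaton; returns the patterns found in the payload.
    return [p for p in patterns if p in payload]

def dispatch(packets, patterns, length_bound=64):
    # Length-bounded split: short payloads are inspected at once on the
    # CPU; long ones are collected and handled in a batch pass that
    # models the GPU pipeline (here it just reuses the CPU matcher).
    results, gpu_batch = {}, []
    for i, pkt in enumerate(packets):
        if len(pkt) <= length_bound:
            results[i] = match_payload(pkt, patterns)    # CPU path
        else:
            gpu_batch.append(i)                          # defer to batch
    for i in gpu_batch:                                  # simulated GPU pass
        results[i] = match_payload(packets[i], patterns)
    return results

packets = ["GET /index.html", "x" * 100 + "attack payload"]
hits = dispatch(packets, ["attack", "GET"])
```

The short packet is matched inline while the long one goes through the batched path; both yield their expected signature hits.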
Estimation of tool wear length in finish milling using a fuzzy inference algorithm
Ko, Tae Jo; Cho, Dong Woo
1993-10-01
The geometric accuracy and surface roughness are mainly affected by flank wear at the minor cutting edge in finish machining. A fuzzy estimator, obtained by a fuzzy inference algorithm with a max-min composition rule, is introduced to evaluate the minor flank wear length in finish milling. The features sensitive to minor flank wear are extracted from the dispersion analysis of a time series AR model of the feed-directional acceleration of the spindle housing. Linguistic rules for fuzzy estimation are constructed using these features, and fuzzy inferences are then carried out with test data sets under various cutting conditions. The proposed system turns out to be effective for estimating minor flank wear length, and its mean error is less than 12%.
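A minimal max-min (Mamdani-style) inference sketch in the spirit of the estimator above: one input feature mapped through three rules to a normalised wear length, with centroid defuzzification. The membership functions and rule base are invented for illustration and are not the paper's.

```python
def tri(x, a, b, c):
    # Triangular membership function with support (a, c) and peak at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def infer_wear(feature, resolution=101):
    # One normalised input feature in [0, 1] mapped to a normalised
    # wear length via three hypothetical rules, evaluated with the
    # max-min composition (min = rule firing strength applied to the
    # consequent set, max = aggregation across rules), followed by
    # centroid defuzzification on a sampled output universe.
    rules = [
        (tri(feature, -0.5, 0.0, 0.5), (-0.2, 0.1, 0.4)),  # LOW  -> SMALL
        (tri(feature, 0.0, 0.5, 1.0), (0.2, 0.5, 0.8)),    # MED  -> MEDIUM
        (tri(feature, 0.5, 1.0, 1.5), (0.6, 0.9, 1.2)),    # HIGH -> LARGE
    ]
    num = den = 0.0
    for i in range(resolution):
        y = i / (resolution - 1)
        mu = max(min(strength, tri(y, *out)) for strength, out in rules)
        num += mu * y
        den += mu
    return num / den if den else 0.0
```

The estimate rises monotonically with the feature: a low feature value yields a small wear estimate, a mid-range value lands near the middle of the scale, and a high value yields a large estimate.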
On the use of successive data in the ML-EM algorithm in Positron Emission Tomography
Energy Technology Data Exchange (ETDEWEB)
Desmedt, P; Lemahieu, I [University of Ghent, ELIS Department, Sint-Pietersnieuwstraat 41, B-9000 Gent (Belgium)
1994-12-31
The Maximum Likelihood-Expectation Maximization (ML-EM) algorithm is the most popular statistical reconstruction technique for Positron Emission Tomography (PET). The ML-EM algorithm is however also renowned for its long reconstruction times. An acceleration technique for this algorithm is studied in this paper. The proposed technique starts the ML-EM algorithm before the measurement process is completed. Since the reconstruction is initiated during the scan of the patient, the time elapsed before a reconstruction becomes available is reduced. Experiments with software phantoms indicate that the quality of the reconstructed image using successive data is comparable to the quality of the reconstruction with the normal ML-EM algorithm. (authors). 7 refs, 3 figs.
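The ML-EM update itself, and the warm-start idea of beginning iterations on partial data before the scan completes, can be sketched on a toy two-voxel system. The system matrix and counts below are invented, and noiseless data are used so the iteration converges to the exact activity.

```python
def mlem(counts, sysmat, iters, start=None):
    # One ML-EM update per iteration:  x <- x * A^T(y / Ax) / A^T 1,
    # with A the system matrix, y the measured counts, x the activity.
    nb, nv = len(sysmat), len(sysmat[0])
    sens = [sum(sysmat[i][j] for i in range(nb)) for j in range(nv)]
    x = list(start) if start is not None else [1.0] * nv
    for _ in range(iters):
        proj = [sum(sysmat[i][j] * x[j] for j in range(nv)) for i in range(nb)]
        ratio = [counts[i] / proj[i] if proj[i] > 0 else 0.0 for i in range(nb)]
        x = [x[j] * sum(sysmat[i][j] * ratio[i] for i in range(nb)) / sens[j]
             for j in range(nv)]
    return x

A = [[0.8, 0.2], [0.2, 0.8]]        # toy 2-bin x 2-voxel system matrix
full = [9.0, 6.0]                   # noiseless counts for activity [10, 5]
half = [4.5, 3.0]                   # counts available at mid-scan
warm = mlem(half, A, 200)           # start iterating before the scan ends...
x = mlem(full, A, 200, start=warm)  # ...then continue once all data arrive
```

Continuing from the partial-data reconstruction reaches the same fixed point as a cold start on the complete data, which is the sense in which early iterations cut the time until a usable image is available.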
A quantitative performance evaluation of the EM algorithm applied to radiographic images
International Nuclear Information System (INIS)
Brailean, J.C.; Sullivan, B.J.; Giger, M.L.; Chen, C.T.
1991-01-01
In this paper, the authors quantitatively evaluate the performance of the Expectation Maximization (EM) algorithm as a restoration technique for radiographic images. The perceived signal-to-noise ratios (SNRs) of simple radiographic patterns processed by the EM algorithm are calculated on the basis of a statistical decision theory model that includes both the observer's visual response function and a noise component internal to the eye-brain system. The relative SNR (ratio of the processed SNR to the original SNR) is calculated and used as a metric to quantitatively compare the effects of the EM algorithm to two popular image enhancement techniques: contrast enhancement (windowing) and unsharp mask filtering.
High-speed computation of the EM algorithm for PET image reconstruction
International Nuclear Information System (INIS)
Rajan, K.; Patnaik, L.M.; Ramakrishna, J.
1994-01-01
The PET image reconstruction based on the EM algorithm has several attractive advantages over the conventional convolution backprojection algorithms. However, two major drawbacks have impeded the routine use of the EM algorithm: the long computational time due to slow convergence, and the large memory required for the storage of the image, projection data, and the probability matrix. In this study, the authors attempt to solve these two problems by parallelizing the EM algorithm on a multiprocessor system. The authors have implemented an extended hypercube (EH) architecture for the high-speed computation of the EM algorithm using commercially available fast floating point digital signal processor (DSP) chips as the processing elements (PEs). The authors discuss and compare the performance of the EM algorithm on a 386/387 machine, a CD 4360 mainframe, and on the EH system. The results show that the computational speed performance of an EH using DSP chips as PEs executing the EM image reconstruction algorithm is about 130 times better than that of the CD 4360 mainframe. The EH topology is expandable to a larger number of PEs.
A QoS-Based Dynamic Queue Length Scheduling Algorithm in Multiantenna Heterogeneous Systems
Directory of Open Access Journals (Sweden)
Verikoukis Christos
2010-01-01
The use of real-time delay-sensitive applications in wireless systems has grown significantly during the last years. Therefore, the designers of wireless systems have faced the challenging issue of guaranteeing the required Quality of Service (QoS). On the other hand, recent advances in and the extensive use of multiple antennas have already been included in several commercial standards, where multibeam opportunistic transmission beamforming strategies have been proposed to improve the performance of wireless systems. A cross-layer-based dynamically tuned queue length scheduler is presented in this paper for the downlink of multiuser and multiantenna WLAN systems with heterogeneous traffic requirements. To align with modern wireless transmission strategies, an opportunistic scheduling algorithm is employed, while a priority for the different traffic classes is applied. A tradeoff between maximizing the throughput of the system and guaranteeing the maximum allowed delay is obtained. The length of the queue is therefore dynamically adjusted to select the appropriate conditions based on the operator requirements.
Directory of Open Access Journals (Sweden)
Kim Jae H
2005-01-01
In this paper, we consider the issue of blind detection of Alamouti-type differential space-time (ST) modulation in static Rayleigh fading channels. We focus our attention on a -shifted BPSK constellation, introducing a novel transformation to the received signal such that this binary ST modulation, which has second-order transmit diversity, is equivalent to QPSK modulation with second-order receive diversity. This equivalent representation allows us to apply a low-complexity detection technique specifically designed for receive diversity, namely, scalar multiple-symbol differential detection (MSDD). To further increase receiver performance, we apply an iterative expectation-maximization (EM) algorithm which performs joint channel estimation and sequence detection. This algorithm uses minimum mean square estimation to obtain channel estimates and the maximum-likelihood principle to detect the transmitted sequence, followed by differential decoding. With receiver complexity proportional to the observation window length, our receiver can achieve the performance of a coherent maximal ratio combining receiver (with differential decoding) in as few as a single EM receiver iteration, provided that the window size of the initial MSDD is sufficiently long. To further demonstrate that the MSDD is a vital part of this receiver setup, we show that an initial ST conventional differential detector would lead to strange convergence behavior in the EM algorithm.
Directory of Open Access Journals (Sweden)
Liu Yu-Sun
2011-01-01
Full Text Available The performance of the wrap-around Viterbi decoding algorithm with finite truncation depth and fixed decoding trellis length is investigated for tail-biting convolutional codes in the mobile WiMAX standard. Upper bounds on the error probabilities induced by finite truncation depth and the uncertainty of the initial state are derived for the AWGN channel. The truncation depth and the decoding trellis length that yield negligible performance loss are obtained for all transmission rates over the Rayleigh channel using computer simulations. The results show that the circular decoding algorithm with an appropriately chosen truncation depth and a decoding trellis just a fraction longer than the original received code words can achieve almost the same performance as the optimal maximum likelihood decoding algorithm in mobile WiMAX. A rule of thumb for the values of the truncation depth and the trellis tail length is also proposed.
Unsupervised Cryo-EM Data Clustering through Adaptively Constrained K-Means Algorithm.
Xu, Yaofang; Wu, Jiayi; Yin, Chang-Cheng; Mao, Youdong
2016-01-01
In single-particle cryo-electron microscopy (cryo-EM), the K-means clustering algorithm is widely used in unsupervised 2D classification of projection images of biological macromolecules. 3D ab initio reconstruction requires accurate unsupervised classification in order to separate molecular projections of distinct orientations. Due to background noise in single-particle images and uncertainty of molecular orientations, the traditional K-means clustering algorithm may classify images into wrong classes and produce classes with a large variation in membership. Overcoming these limitations requires further development of clustering algorithms for cryo-EM data analysis. We propose a novel unsupervised data clustering method building upon the traditional K-means algorithm. By introducing an adaptive constraint term in the objective function, our algorithm not only avoids a large variation in class sizes but also produces more accurate data clustering. Applications of this approach to both simulated and experimental cryo-EM data demonstrate that our algorithm is a significantly improved alternative to the traditional K-means algorithm in single-particle cryo-EM analysis.
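The adaptive size-constraint idea can be sketched as a small modification of Lloyd's algorithm, in which each point pays an extra cost for joining an already-large cluster. The penalty form below is an illustration of the approach, not the paper's exact objective function:

```python
import numpy as np

def _init_centers(X, k):
    # deterministic farthest-first initialization
    centers = [X[0]]
    for _ in range(1, k):
        d2 = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[d2.argmax()])
    return np.array(centers, dtype=float)

def constrained_kmeans(X, k, lam=0.5, iters=100):
    """Lloyd's algorithm with an additive size penalty: a point pays extra
    cost for joining an already-large cluster, discouraging the highly
    uneven class sizes plain K-means can produce on noisy data.
    (Illustrative penalty term, not the paper's exact constraint.)"""
    centers = _init_centers(X, k)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        sizes = np.bincount(labels, minlength=k).astype(float)
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        cost = d2 + lam * sizes[None, :] / len(X)  # adaptive size penalty
        new_labels = cost.argmin(axis=1)
        if np.array_equal(new_labels, labels):
            break
        labels = new_labels
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

Because the penalty is recomputed from the current cluster sizes each sweep, it adapts as the partition evolves rather than enforcing a hard balance constraint.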
International Nuclear Information System (INIS)
Lalush, D.S.; Tsui, B.M.W.; Karimi, S.S.
1996-01-01
We evaluate fast reconstruction algorithms including ordered subsets-EM (OS-EM) and Rescaled Block Iterative EM (RBI-EM) in fully 3D SPECT applications on the basis of their convergence and resolution recovery properties as iterations proceed. Using a 3D computer-simulated phantom consisting of 3D Gaussian objects, we simulated projection data that includes only the effects of sampling and the detector response of a parallel-hole collimator. Reconstructions were performed using each of the three algorithms (ML-EM, OS-EM, and RBI-EM), modeling the 3D detector response in the projection function. Resolution recovery was evaluated by fitting Gaussians to each of the four objects in the iterated image estimates at selected intervals. Results show that OS-EM and RBI-EM behave identically in this case; their resolution recovery results are virtually indistinguishable. Their resolution behavior appears to be very similar to that of ML-EM, but accelerated by a factor of twenty. For all three algorithms, smaller objects take more iterations to converge. Next, we consider the effect of noise on convergence. For both noise-free and noisy data, we evaluate the log likelihood function at each subiteration of OS-EM and RBI-EM, and at each iteration of ML-EM. With noisy data, both OS-EM and RBI-EM give results for which the log-likelihood function oscillates. Especially for 180-degree acquisitions, RBI-EM oscillates less than OS-EM. Both OS-EM and RBI-EM appear to converge to solutions, but not to the ML solution. We conclude that both OS-EM and RBI-EM can be effective algorithms for fully 3D SPECT reconstruction. Both recover resolution similarly to ML-EM, only more quickly.
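The ML-EM multiplicative update and its ordered-subsets acceleration evaluated above can be sketched for a generic linear Poisson model. The system matrix here is a random stand-in, not a SPECT projector with modeled detector response:

```python
import numpy as np

def ml_em(A, y, n_iter=50):
    """ML-EM for y ~ Poisson(A @ x): multiplicative update
    x <- x * A^T(y / (A x)) / (A^T 1)."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)  # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x
        x *= (A.T @ (y / np.maximum(proj, 1e-12))) / sens
    return x

def os_em(A, y, n_subsets=4, n_iter=50):
    """OS-EM: the same update restricted to one subset of projection rows
    per sub-iteration, accelerating convergence roughly in proportion to
    the number of subsets."""
    x = np.ones(A.shape[1])
    subsets = [np.arange(s, A.shape[0], n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for rows in subsets:
            As, ys = A[rows], y[rows]
            proj = As @ x
            x *= (As.T @ (ys / np.maximum(proj, 1e-12))) / As.sum(axis=0)
    return x
```

On consistent (noise-free) data both iterations drive the projection of the estimate toward the measured data; with noisy data OS-EM cycles among subset solutions, which is the oscillation the abstract describes.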
Al-Jabr, Ahmad Ali; Alsunaidi, Mohammad A.; Ng, Tien Khee; Ooi, Boon S.
2013-01-01
In this paper, a finite-difference time-domain (FDTD) algorithm for simulating the propagation of EM waves in anisotropic material is presented. The algorithm is based on the auxiliary differential equation and the general polarization formulation. In anisotropic materials, electric fields are coupled and elements in the permittivity tensor are, in general, multiterm dispersive. The presented algorithm resolves the field coupling using a formulation based on electric polarizations. It also offers a simple procedure for the treatment of multiterm dispersion in the FDTD scheme. The algorithm is tested by simulating wave propagation in 1-D magnetized plasma, showing excellent agreement with analytical solutions. Extension of the algorithm to multidimensional structures is straightforward. The presented algorithm is efficient and simple compared to other algorithms found in the literature.
Al-Jabr, Ahmad Ali
2013-03-01
In this paper, a finite-difference time-domain (FDTD) algorithm for simulating the propagation of EM waves in anisotropic material is presented. The algorithm is based on the auxiliary differential equation and the general polarization formulation. In anisotropic materials, electric fields are coupled and elements in the permittivity tensor are, in general, multiterm dispersive. The presented algorithm resolves the field coupling using a formulation based on electric polarizations. It also offers a simple procedure for the treatment of multiterm dispersion in the FDTD scheme. The algorithm is tested by simulating wave propagation in 1-D magnetized plasma, showing excellent agreement with analytical solutions. Extension of the algorithm to multidimensional structures is straightforward. The presented algorithm is efficient and simple compared to other algorithms found in the literature.
Use of the AIC with the EM algorithm: A demonstration of a probability model selection technique
Energy Technology Data Exchange (ETDEWEB)
Glosup, J.G.; Axelrod, M.C. [Lawrence Livermore National Lab., CA (United States)
1994-11-15
The problem of discriminating between two potential probability models, a Gaussian distribution and a mixture of Gaussian distributions, is considered. The focus of our interest is a case where the models are potentially non-nested and the parameters of the mixture model are estimated through the EM algorithm. The AIC, which is frequently used as a criterion for discriminating between non-nested models, is modified to work with the EM algorithm and is shown to provide a model selection tool for this situation. A particular problem involving an infinite mixture distribution known as Middleton's Class A model is used to demonstrate the effectiveness and limitations of this method.
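The comparison described above can be sketched by fitting both candidate models and computing AIC = 2k − 2 log L for each. The two-component EM fit below is a minimal illustration of the idea, not the infinite-mixture Class A model of the paper:

```python
import numpy as np

def gauss_logpdf(x, mu, var):
    return -0.5 * (np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

def fit_mixture_em(x, n_iter=200):
    """EM for a two-component 1-D Gaussian mixture; returns the final
    log-likelihood used in the AIC comparison."""
    w, mu1, mu2 = 0.5, x.min(), x.max()
    v1 = v2 = x.var()
    for _ in range(n_iter):
        # E-step: responsibilities of component 1
        p1 = w * np.exp(gauss_logpdf(x, mu1, v1))
        p2 = (1 - w) * np.exp(gauss_logpdf(x, mu2, v2))
        r = p1 / (p1 + p2)
        # M-step: weighted moment updates
        w = r.mean()
        mu1 = (r * x).sum() / r.sum()
        mu2 = ((1 - r) * x).sum() / (1 - r).sum()
        v1 = (r * (x - mu1) ** 2).sum() / r.sum() + 1e-9
        v2 = ((1 - r) * (x - mu2) ** 2).sum() / (1 - r).sum() + 1e-9
    return np.log(w * np.exp(gauss_logpdf(x, mu1, v1))
                  + (1 - w) * np.exp(gauss_logpdf(x, mu2, v2))).sum()

def aic_select(x):
    """AIC = 2k - 2 logL, smaller is better: k=2 for a single Gaussian
    (mean, variance), k=5 for the two-component mixture."""
    ll_gauss = gauss_logpdf(x, x.mean(), x.var()).sum()
    ll_mix = fit_mixture_em(x)
    return 2 * 2 - 2 * ll_gauss, 2 * 5 - 2 * ll_mix
```

For clearly bimodal data the mixture's likelihood gain dwarfs its extra-parameter penalty, so the AIC selects the mixture.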
Li, X Y; Yang, G W; Zheng, D S; Guo, W S; Hung, W N N
2015-04-28
Genetic regulatory networks are the key to understanding biochemical systems. One condition of a genetic regulatory network under different living environments can be modeled as a synchronous Boolean network. The attractors of these Boolean networks help biologists to identify determinant and stable factors. Existing methods identify attractors based on a random initial state or the entire state space simultaneously; they cannot identify fixed-length attractors directly, and their time complexity increases exponentially with the number and length of attractors. This study uses bounded model checking to quickly locate fixed-length attractors. Based on a SAT solver, we propose a new algorithm for efficiently computing fixed-length attractors, which is more suitable for large Boolean networks and networks with numerous attractors. Comparison using the tool BooleNet and empirical experiments involving biochemical systems demonstrate the feasibility and efficiency of our approach.
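For intuition, fixed-length attractors of a small synchronous Boolean network can be found by exhaustive state enumeration, sketched below. The paper's contribution is doing this with SAT-based bounded model checking, which scales far beyond the reach of this brute-force version:

```python
from itertools import product

def find_attractors(update, n, length):
    """Brute-force search for attractors of exactly `length` states in a
    synchronous Boolean network with n nodes, where `update` maps a state
    tuple to its successor. Exhaustive enumeration is only feasible for
    small n; the cited paper uses a SAT solver instead."""
    attractors = set()
    for state in product((0, 1), repeat=n):
        seen = {}
        s, t = state, 0
        while s not in seen:          # follow the trajectory until it cycles
            seen[s] = t
            s = update(s)
            t += 1
        if t - seen[s] == length:     # cycle length matches the target
            cycle, cur = [], s
            while True:
                cycle.append(cur)
                cur = update(cur)
                if cur == s:
                    break
            attractors.add(frozenset(cycle))
    return attractors
```

For the two-node swap network x' = y, y' = x, the fixed points (0,0) and (1,1) are the length-1 attractors and {(0,1), (1,0)} is the single length-2 attractor.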
High-dimensional cluster analysis with the Masked EM Algorithm
Kadir, Shabnam N.; Goodman, Dan F. M.; Harris, Kenneth D.
2014-01-01
Cluster analysis faces two problems in high dimensions: first, the “curse of dimensionality” that can lead to overfitting and poor generalization performance; and second, the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. We describe a solution to these problems, designed for the application of “spike sorting” for next-generation high channel-count neural probes. In this problem, only a small subset of features provide information about the cluster membership of any one data vector, but this informative feature subset is not the same for all data points, rendering classical feature selection ineffective. We introduce a “Masked EM” algorithm that allows accurate and time-efficient clustering of up to millions of points in thousands of dimensions. We demonstrate its applicability to synthetic data, and to real-world high-channel-count spike sorting data. PMID:25149694
Adachi, Kohei
2013-01-01
Rubin and Thayer ("Psychometrika," 47:69-76, 1982) proposed the EM algorithm for exploratory and confirmatory maximum likelihood factor analysis. In this paper, we prove the following fact: the EM algorithm always gives a proper solution with positive unique variances and factor correlations with absolute values that do not exceed one,…
An Efficient Algorithm for the Discrete Gabor Transform using full length Windows
DEFF Research Database (Denmark)
Søndergaard, Peter Lempel
2007-01-01
This paper extends the efficient factorization of the Gabor frame operator developed by Strohmer in [1] to the Gabor analysis/synthesis operator. This provides a fast method for computing the discrete Gabor transform (DGT) and several algorithms associated with it. The algorithm is used...
An Efficient Forward-Reverse EM Algorithm for Statistical Inference in Stochastic Reaction Networks
Bayer, Christian
2016-01-06
In this work [1], we present an extension of the forward-reverse algorithm by Bayer and Schoenmakers [2] to the context of stochastic reaction networks (SRNs). We then apply this bridge-generation technique to the statistical inference problem of approximating the reaction coefficients based on discretely observed data. To this end, we introduce an efficient two-phase algorithm in which the first phase is deterministic and intended to provide a starting point for the second phase, the Monte Carlo EM algorithm.
Banda, Jorge A; Haydel, K Farish; Davila, Tania; Desai, Manisha; Bryson, Susan; Haskell, William L; Matheson, Donna; Robinson, Thomas N
2016-01-01
To examine the effects of accelerometer epoch lengths, wear time (WT) algorithms, and activity cut-points on estimates of WT, sedentary behavior (SB), and physical activity (PA), 268 7-11 year-olds with BMI ≥ 85th percentile for age and sex wore accelerometers on their right hips for 4-7 days. Data were processed and analyzed at epoch lengths of 1-, 5-, 10-, 15-, 30-, and 60-seconds. For each epoch length, WT minutes/day was determined using three common WT algorithms, and minutes/day and percent time spent in SB, light (LPA), moderate (MPA), and vigorous (VPA) PA were determined using five common activity cut-points. ANOVA tested differences in WT, SB, LPA, MPA, VPA, and MVPA across the different epoch lengths, WT algorithms, and activity cut-points. WT minutes/day varied significantly by epoch length when using the NHANES WT algorithm, but not when using the other two WT algorithms. Minutes/day and percent time spent in SB, LPA, MPA, VPA, and MVPA varied significantly by epoch length for all sets of activity cut-points tested with all three WT algorithms. Modifying WT algorithms and activity cut-point definitions to match different epoch lengths may introduce significant errors. Estimates of SB and PA from studies that process and analyze data using different epoch lengths, WT algorithms, and/or activity cut-points are not comparable, potentially leading to very different results, interpretations, and conclusions, misleading research and public policy.
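The epoch-length effect described above can be reproduced with a toy re-integration: summing 1-second counts into longer epochs and applying an epoch-scaled cut-point. The cut-point value below is illustrative, not one of the five published sets:

```python
import numpy as np

def minutes_above_cutpoint(counts_1s, epoch_s, cutpoint_per_min):
    """Re-integrate 1-second accelerometer counts into `epoch_s`-second
    epochs and return minutes with counts at or above a per-minute
    cut-point scaled to the epoch length. (Illustrative cut-point.)"""
    n = len(counts_1s) // epoch_s * epoch_s
    epochs = counts_1s[:n].reshape(-1, epoch_s).sum(axis=1)
    threshold = cutpoint_per_min * epoch_s / 60.0
    return (epochs >= threshold).sum() * epoch_s / 60.0
```

Brief intense bursts that clear the threshold at a 5-second epoch are averaged away at a 60-second epoch, which is one mechanism by which epoch length changes PA estimates.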
International Nuclear Information System (INIS)
Viana, R.S.; Yoriyaz, H.; Santos, A.
2011-01-01
The Expectation-Maximization (E-M) algorithm is an iterative computational method for maximum likelihood (M-L) estimation, useful in a variety of incomplete-data problems. Due to its stochastic nature, one of the most relevant applications of the E-M algorithm is the reconstruction of emission tomography images. In this paper, the statistical formulation of the E-M algorithm was applied to the in vivo spectrographic imaging of stable isotopes called Neutron Stimulated Emission Computed Tomography (NSECT). In the E-M algorithm iteration process, the conditional probability distribution plays a very important role in achieving high image quality. The present work proposes an alternative methodology for generating the conditional probability distribution associated with the E-M reconstruction algorithm, using the Monte Carlo code MCNP5 and applying the reciprocity theorem. (author)
Energy Technology Data Exchange (ETDEWEB)
Viana, R.S.; Yoriyaz, H.; Santos, A., E-mail: rodrigossviana@gmail.com, E-mail: hyoriyaz@ipen.br, E-mail: asantos@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)
2011-07-01
The Expectation-Maximization (E-M) algorithm is an iterative computational method for maximum likelihood (M-L) estimation, useful in a variety of incomplete-data problems. Due to its stochastic nature, one of the most relevant applications of the E-M algorithm is the reconstruction of emission tomography images. In this paper, the statistical formulation of the E-M algorithm was applied to the in vivo spectrographic imaging of stable isotopes called Neutron Stimulated Emission Computed Tomography (NSECT). In the E-M algorithm iteration process, the conditional probability distribution plays a very important role in achieving high image quality. The present work proposes an alternative methodology for generating the conditional probability distribution associated with the E-M reconstruction algorithm, using the Monte Carlo code MCNP5 and applying the reciprocity theorem. (author)
Effective calculation algorithm for nuclear chains of arbitrary length and branching
International Nuclear Information System (INIS)
Chirkov, V.A.; Mishanin, B.V.
1994-01-01
An effective algorithm for calculating isotope concentrations in spent nuclear fuel during storage is presented. Using the superposition principle and representing the transfer function in a rather compact form, it becomes possible to achieve high calculation speed and a moderate computer code size. The algorithm is applied to the calculation of activity, energy release and toxicity of heavy nuclides and the products of their decay during fuel storage. (authors). 1 ref., 4 tabs
Simulating Evolution of Drosophila melanogaster Ebony Mutants Using a Genetic Algorithm
DEFF Research Database (Denmark)
Helles, Glennie
2009-01-01
Genetic algorithms are generally quite easy to understand and work with, and they are a popular choice in many cases. One area in which genetic algorithms are widely and successfully used is artificial life where they are used to simulate evolution of artificial creatures. However, despite...... their suggestive name, simplicity and popularity in artificial life, they do not seem to have gained a footing within the field of population genetics to simulate evolution of real organisms --- possibly because genetic algorithms are based on a rather crude simplification of the evolutionary mechanisms known...
Continuous Analog of Accelerated OS-EM Algorithm for Computed Tomography
Directory of Open Access Journals (Sweden)
Kiyoko Tateishi
2017-01-01
Full Text Available The maximum-likelihood expectation-maximization (ML-EM algorithm is used for an iterative image reconstruction (IIR method and performs well with respect to the inverse problem as cross-entropy minimization in computed tomography. For accelerating the convergence rate of the ML-EM, the ordered-subsets expectation-maximization (OS-EM with a power factor is effective. In this paper, we propose a continuous analog to the power-based accelerated OS-EM algorithm. The continuous-time image reconstruction (CIR system is described by nonlinear differential equations with piecewise smooth vector fields by a cyclic switching process. A numerical discretization of the differential equation by using the geometric multiplicative first-order expansion of the nonlinear vector field leads to an exact equivalent iterative formula of the power-based OS-EM. The convergence of nonnegatively constrained solutions to a globally stable equilibrium is guaranteed by the Lyapunov theorem for consistent inverse problems. We illustrate through numerical experiments that the convergence characteristics of the continuous system have the highest quality compared with that of discretization methods. We clarify how important the discretization method approximates the solution of the CIR to design a better IIR method.
A Linear Time Algorithm for the k Maximal Sums Problem
DEFF Research Database (Denmark)
Brodal, Gerth Stølting; Jørgensen, Allan Grønlund
2007-01-01
Finding the sub-vector with the largest sum in a sequence of n numbers is known as the maximum sum problem. Finding the k sub-vectors with the largest sums is a natural extension of this, and is known as the k maximal sums problem. In this paper we design an optimal O(n + k) time algorithm for the k maximal sums problem. We use this algorithm to obtain algorithms solving the two-dimensional k maximal sums problem in O(m^2·n + k) time, where the input is an m × n matrix with m ≤ n. We generalize this algorithm to solve the d-dimensional problem in O(n^(2d−1) + k) time. The space usage of all the algorithms can be reduced to O(n^(d−1) + k). This leads to the first algorithm for the k maximal sums problem in one dimension using O(n + k) time and O(k) space.
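For reference, the k = 1 case of the problem is solved by Kadane's classical O(n) scan; a brute-force heap-based enumeration (far from the paper's optimal O(n + k) algorithm) can serve as a correctness check on small inputs:

```python
import heapq

def max_sum_subvector(a):
    """Kadane's algorithm: the k = 1 case of the k maximal sums problem,
    solved in O(n) time."""
    best = cur = a[0]
    for x in a[1:]:
        cur = max(x, cur + x)     # extend the current sub-vector or restart
        best = max(best, cur)
    return best

def k_maximal_sums_bruteforce(a, k):
    """Reference O(n^2 log k) enumeration of all sub-vector sums, keeping
    the k largest with a min-heap; usable only for small n."""
    heap = []
    for i in range(len(a)):
        s = 0
        for j in range(i, len(a)):
            s += a[j]
            if len(heap) < k:
                heapq.heappush(heap, s)
            elif s > heap[0]:
                heapq.heapreplace(heap, s)
    return sorted(heap, reverse=True)
```

The brute-force version makes the problem statement concrete; the paper's contribution is reaching the same k sums in optimal O(n + k) time.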
Statistical trajectory of an approximate EM algorithm for probabilistic image processing
International Nuclear Information System (INIS)
Tanaka, Kazuyuki; Titterington, D M
2007-01-01
We calculate analytically a statistical average of trajectories of an approximate expectation-maximization (EM) algorithm with generalized belief propagation (GBP) and a Gaussian graphical model for the estimation of hyperparameters from observable data in probabilistic image processing. A statistical average with respect to observed data corresponds to a configuration average for the random-field Ising model in spin glass theory. In the present paper, hyperparameters which correspond to interactions and external fields of spin systems are estimated by an approximate EM algorithm. A practical algorithm is described for gray-level image restoration based on a Gaussian graphical model and GBP. The GBP approach corresponds to the cluster variation method in statistical mechanics. Our main result in the present paper is to obtain the statistical average of the trajectory in the approximate EM algorithm by using loopy belief propagation and GBP with respect to degraded images generated from a probability density function with true values of hyperparameters. The statistical average of the trajectory can be expressed in terms of recursion formulas derived from some analytical calculations
Ning, Jing; Chen, Yong; Piao, Jin
2017-07-01
Publication bias occurs when published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into the publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure perform well and avoid the non-convergence problem encountered when maximizing the observed likelihood.
Mean field theory of EM algorithm for Bayesian grey scale image restoration
International Nuclear Information System (INIS)
Inoue, Jun-ichi; Tanaka, Kazuyuki
2003-01-01
The EM algorithm for Bayesian grey scale image restoration is investigated in the framework of mean field theory. Our model system is identical to the infinite-range random field Q-Ising model. The maximum marginal likelihood method is applied to the determination of hyper-parameters. We calculate exactly both the data-averaged mean square error between the original image and its maximizer of posterior marginal estimate, and the data-averaged marginal likelihood function. After evaluating the hyper-parameter dependence of the data-averaged marginal likelihood function, we analytically derive the EM algorithm which updates the hyper-parameters to obtain the maximum likelihood estimate. The time evolutions of the hyper-parameters and the so-called Q function are obtained. The relation between the speed of convergence of the hyper-parameters and the shape of the Q function is explained from the viewpoint of dynamics.
Application and performance of an ML-EM algorithm in NEXT
Simón, A.; Lerche, C.; Monrabal, F.; Gómez-Cadenas, J. J.; Álvarez, V.; Azevedo, C. D. R.; Benlloch-Rodríguez, J. M.; Borges, F. I. G. M.; Botas, A.; Cárcel, S.; Carrión, J. V.; Cebrián, S.; Conde, C. A. N.; Díaz, J.; Diesburg, M.; Escada, J.; Esteve, R.; Felkai, R.; Fernandes, L. M. P.; Ferrario, P.; Ferreira, A. L.; Freitas, E. D. C.; Goldschmidt, A.; González-Díaz, D.; Gutiérrez, R. M.; Hauptman, J.; Henriques, C. A. O.; Hernandez, A. I.; Hernando Morata, J. A.; Herrero, V.; Jones, B. J. P.; Labarga, L.; Laing, A.; Lebrun, P.; Liubarsky, I.; López-March, N.; Losada, M.; Martín-Albo, J.; Martínez-Lema, G.; Martínez, A.; McDonald, A. D.; Monteiro, C. M. B.; Mora, F. J.; Moutinho, L. M.; Muñoz Vidal, J.; Musti, M.; Nebot-Guinot, M.; Novella, P.; Nygren, D. R.; Palmeiro, B.; Para, A.; Pérez, J.; Querol, M.; Renner, J.; Ripoll, L.; Rodríguez, J.; Rogers, L.; Santos, F. P.; dos Santos, J. M. F.; Sofka, C.; Sorel, M.; Stiegler, T.; Toledo, J. F.; Torrent, J.; Tsamalaidze, Z.; Veloso, J. F. C. A.; Webb, R.; White, J. T.; Yahlali, N.
2017-08-01
The goal of the NEXT experiment is the observation of neutrinoless double beta decay in 136Xe using a gaseous xenon TPC with electroluminescent amplification and specialized photodetector arrays for calorimetry and tracking. The NEXT Collaboration is exploring a number of reconstruction algorithms to exploit the full potential of the detector. This paper describes one of them: the Maximum Likelihood Expectation Maximization (ML-EM) method, a generic iterative algorithm to find maximum-likelihood estimates of parameters that has been applied to solve many different types of complex inverse problems. In particular, we discuss a bi-dimensional version of the method in which the photosensor signals integrated over time are used to reconstruct a transverse projection of the event. First results show that, when applied to detector simulation data, the algorithm achieves nearly optimal energy resolution (better than 0.5% FWHM at the Q value of 136Xe) for events distributed over the full active volume of the TPC.
Finite sample performance of the E-M algorithm for ranks data modelling
Directory of Open Access Journals (Sweden)
Angela D'Elia
2007-10-01
Full Text Available We check the finite sample performance of the maximum likelihood estimators of the parameters of a mixture distribution recently introduced for modelling ranks/preference data. The estimates are derived by the E-M algorithm and the performance is evaluated from both univariate and bivariate points of view. While the results are generally acceptable as far as the bias is concerned, the Monte Carlo experiment shows a different behaviour of the estimators' efficiency for the two parameters of the mixture, mainly depending upon their location in the admissible parametric space. Some operative suggestions conclude the paper.
Xue, Yu; Liu, Zexian; Cao, Jun; Ma, Qian; Gao, Xinjiao; Wang, Qingqi; Jin, Changjiang; Zhou, Yanhong; Wen, Longping; Ren, Jian
2011-03-01
As the most important post-translational modification of proteins, phosphorylation plays essential roles in all aspects of biological processes. Besides experimental approaches, computational prediction of phosphorylated proteins with their kinase-specific phosphorylation sites has also emerged as a popular strategy, for its low cost, fast speed and convenience. In this work, we developed GPS 2.1 (Group-based Prediction System), a kinase-specific phosphorylation site predictor, with a novel but simple approach of motif length selection (MLS). By this approach, the robustness of the prediction system was greatly improved. All algorithms from older versions of GPS were also retained and integrated in GPS 2.1. The online service and local packages of GPS 2.1 were implemented in JAVA 1.5 (J2SE 5.0) and are freely available for academic research at: http://gps.biocuckoo.org.
Puangjaktha, P.; Pailoplee, S.
2018-04-01
In order to examine the precursory seismic quiescence of upcoming hazardous earthquakes, the seismicity data available in the vicinity of the Thailand-Laos-Myanmar borders were analyzed using the Region-Time-Length (RTL) algorithm, a statistical technique. The earthquake data were obtained from the International Seismological Centre, after which the homogeneity and completeness of the catalogue were improved. After performing iterative tests with different values of the r0 and t0 parameters, the values r0 = 120 km and t0 = 2 yr yielded reasonable estimates of the anomalous RTL scores, in both temporal variation and spatial distribution, a few years prior to five out of eight recognized strong-to-major earthquakes. Statistical evaluation of both the correlation coefficient and the stochastic process for the RTL revealed that the RTL scores obtained here did not arise from artificial or random phenomena. Therefore, the prospective earthquake sources identified here should be recognized and effective mitigation plans should be provided.
Energy Technology Data Exchange (ETDEWEB)
Ha, Woo Seok; Kim, Soo Mee; Park, Min Jae; Lee, Dong Soo; Lee, Jae Sung [Seoul National University, Seoul (Korea, Republic of)
2009-10-15
The maximum likelihood-expectation maximization (ML-EM) is a statistical reconstruction algorithm derived from a probabilistic model of the emission and detection processes. Although the ML-EM has many advantages in accuracy and utility, its use is limited by the computational burden of iterative processing on a CPU (central processing unit). In this study, we developed a parallel computing technique on a GPU (graphics processing unit) for the ML-EM algorithm. Using a GeForce 9800 GTX+ graphics card and CUDA (compute unified device architecture), the projection and backprojection in the ML-EM algorithm were parallelized using NVIDIA's technology. The time delay on computations for projection, errors between measured and estimated data, and backprojection in an iteration were measured. Total time included the latency of data transmission between RAM and GPU memory. The total computation times of the CPU- and GPU-based ML-EM with 32 iterations were 3.83 and 0.26 sec, respectively. In this case, the computing speed was improved about 15 times on the GPU. When the number of iterations increased to 1024, the CPU- and GPU-based computing took 18 min and 8 sec in total, respectively. The improvement was about 135 times and was caused by delays in CPU-based computing after a certain number of iterations. On the other hand, the GPU-based computation showed very small variation in time delay per iteration due to the use of shared memory. GPU-based parallel computation for ML-EM significantly improved the computing speed and stability. The developed GPU-based ML-EM algorithm could be easily modified for some other imaging geometries.
International Nuclear Information System (INIS)
Ha, Woo Seok; Kim, Soo Mee; Park, Min Jae; Lee, Dong Soo; Lee, Jae Sung
2009-01-01
The maximum likelihood-expectation maximization (ML-EM) is a statistical reconstruction algorithm derived from a probabilistic model of the emission and detection processes. Although the ML-EM has many advantages in accuracy and utility, its use is limited by the computational burden of iterative processing on a CPU (central processing unit). In this study, we developed a parallel computing technique on a GPU (graphics processing unit) for the ML-EM algorithm. Using a GeForce 9800 GTX+ graphics card and CUDA (compute unified device architecture), the projection and backprojection in the ML-EM algorithm were parallelized using NVIDIA's technology. The time delay on computations for projection, errors between measured and estimated data, and backprojection in an iteration were measured. Total time included the latency of data transmission between RAM and GPU memory. The total computation times of the CPU- and GPU-based ML-EM with 32 iterations were 3.83 and 0.26 sec, respectively. In this case, the computing speed was improved about 15 times on the GPU. When the number of iterations increased to 1024, the CPU- and GPU-based computing took 18 min and 8 sec in total, respectively. The improvement was about 135 times and was caused by delays in CPU-based computing after a certain number of iterations. On the other hand, the GPU-based computation showed very small variation in time delay per iteration due to the use of shared memory. GPU-based parallel computation for ML-EM significantly improved the computing speed and stability. The developed GPU-based ML-EM algorithm could be easily modified for some other imaging geometries.
DEFF Research Database (Denmark)
Christensen, Lars P.B.; Larsen, Jan
2006-01-01
A general Variational Bayesian framework for iterative data and parameter estimation for coherent detection is introduced as a generalization of the EM-algorithm. Explicit solutions are given for MIMO channel estimation with Gaussian prior and noise covariance estimation with inverse-Wishart prior....... Simulation of a GSM-like system provides empirical proof that the VBEM-algorithm is able to provide better performance than the EM-algorithm. However, if the posterior distribution is highly peaked, the VBEM-algorithm approaches the EM-algorithm and the gain disappears. The potential gain is therefore...
A structural dynamic factor model for the effects of monetary policy estimated by the EM algorithm
DEFF Research Database (Denmark)
Bork, Lasse
This paper applies the maximum likelihood based EM algorithm to a large-dimensional factor analysis of US monetary policy. Specifically, economy-wide effects of shocks to the US federal funds rate are estimated in a structural dynamic factor model in which 100+ US macroeconomic and financial time...... series are driven by the joint dynamics of the federal funds rate and a few correlated dynamic factors. This paper contains a number of methodological contributions to the existing literature on data-rich monetary policy analysis. Firstly, the identification scheme allows for correlated factor dynamics...... as opposed to the orthogonal factors resulting from the popular principal component approach to structural factor models. Correlated factors are economically more sensible and important for a richer monetary policy transmission mechanism. Secondly, I consider both static factor loadings as well as dynamic...
Indian Academy of Sciences (India)
polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.
A fast EM algorithm for BayesA-like prediction of genomic breeding values.
Directory of Open Access Journals (Sweden)
Xiaochen Sun
Full Text Available Prediction accuracies of estimated breeding values for economically important traits are expected to benefit from genomic information. Single nucleotide polymorphism (SNP) panels used in genomic prediction are increasing in density, but Markov chain Monte Carlo (MCMC) estimation of SNP effects can be quite time consuming or slow to converge when a large number of SNPs are fitted simultaneously in a linear mixed model. Here we present an EM algorithm (termed "fastBayesA") without MCMC. The fastBayesA approach treats the variances of SNP effects as missing data and uses a joint posterior mode of effects, in contrast to the commonly used BayesA, which bases predictions on posterior means of effects. In each EM iteration, SNP effects are predicted as a linear combination of best linear unbiased predictions of breeding values from a mixed linear animal model that incorporates a weighted marker-based realized relationship matrix. fastBayesA converges after a few iterations to a joint posterior mode of SNP effects under the BayesA model. When applied to simulated quantitative traits with a range of genetic architectures, fastBayesA is shown to predict genomic estimated breeding values (GEBV) as accurately as BayesA but with less computing effort per SNP. fastBayesA can be used as a computationally efficient substitute for BayesA, especially when an increasing number of markers brings unreasonable computational burden or slow convergence to MCMC approaches.
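The alternating structure of such an EM approach can be sketched as follows. This is a heavily simplified stand-in: the ridge solve with SNP-specific shrinkage and the posterior-mode-style variance update are illustrative assumptions, not the exact fastBayesA equations, and the function name and hyperparameters (`nu`, `S`) are hypothetical:

```python
import numpy as np

def fast_bayes_a_sketch(X, y, nu=4.2, S=0.01, ve=1.0, n_iter=20):
    """BayesA-like EM sketch: treat per-SNP effect variances as
    missing data and alternate a shrinkage solve with a
    posterior-mode-style variance update."""
    n, p = X.shape
    v = np.full(p, S)                      # per-SNP effect variances
    g = np.zeros(p)                        # SNP effects
    for _ in range(n_iter):
        # M-step-like: ridge solve with SNP-specific shrinkage ve / v_j
        D = np.diag(ve / v)
        g = np.linalg.solve(X.T @ X + D, X.T @ y)
        # E-step-like: mode-style update of each SNP variance
        v = (g**2 + nu * S) / (nu + 1)
    return g

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 10))
beta = np.zeros(10); beta[0] = 2.0        # one large simulated QTL effect
y = X @ beta + 0.1 * rng.standard_normal(50)
g = fast_bayes_a_sketch(X, y)
```

The adaptive variances let the large effect escape shrinkage while small effects are shrunk heavily, which is the qualitative behaviour BayesA-type models are designed for.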
International Nuclear Information System (INIS)
Matsumoto, Keiichi; Ohnishi, Hideo; Niida, Hideharu; Nishimura, Yoshihiro; Wada, Yasuhiro; Kida, Tetsuo
2003-01-01
The maximum likelihood expectation maximization (ML-EM) algorithm has become available as an alternative to filtered back projection in SPECT. Actual physical performance may differ depending on the manufacturer and model because of differences in computational details. The purpose of this study was to investigate the characteristics of seven different implementations of the ML-EM algorithm using simple simulation data. Seven ML-EM programs were used: Genie (GE), esoft (Siemens), HARP-III (Hitachi), GMS-5500UI (Toshiba), Pegasys (ADAC), ODYSSEY-FX (Marconi), and a Windows PC (original software). Projection data of a 2-pixel-wide line source in the center of the field of view were simulated without attenuation or scatter. Images were reconstructed with ML-EM, varying the number of iterations from 1 to 45 for each algorithm. Image quality was evaluated after reconstruction using the full width at half maximum (FWHM), the full width at tenth maximum (FWTM), and the total counts of the reconstructed images. At the maximum number of iterations, the difference in FWHM between implementations was up to 1.5 pixels, and that in FWTM no less than 2.0 pixels. The total counts of the reconstructed images in the first few iterations were larger or smaller than the converged value, depending on the initial values. Our results for even the simplest simulation data suggest that each ML-EM implementation produces its own characteristic image. We should keep in mind which algorithm is being used, and its computational details, when physical and clinical usefulness are compared. (author)
International Nuclear Information System (INIS)
Kim, Dae Won
2005-01-01
Ultrasonic inspection methods are widely used for detecting flaws in materials, and the signal analysis step plays a crucial part in the data interpretation process. A number of signal processing methods have been proposed to classify ultrasonic flaw signals. One of the more popular approaches involves the extraction of an appropriate set of features followed by the use of a neural network for classification of the signals in the feature space. This paper describes an alternative approach which uses the least mean square (LMS) method and the expectation maximization (EM) algorithm with model-based deconvolution, employed for classifying nondestructive evaluation (NDE) signals from steam generator tubes in a nuclear power plant. The signals due to cracks and deposits are not significantly different, yet they must be discriminated to prevent disasters such as contamination of the water or an explosion. A model-based deconvolution is described to facilitate comparison of classification results. The method uses the space alternating generalized expectation maximization (SAGE) algorithm in conjunction with the Newton-Raphson method, which uses the Hessian and therefore converges quickly, to estimate the time of flight and the distance between the tube wall and the ultrasonic sensor. Results using these schemes for the classification of ultrasonic signals from cracks and deposits within steam generator tubes are presented and show reasonable performance.
Indian Academy of Sciences (India)
to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...
International Nuclear Information System (INIS)
Chen, C.M.; Lee, S.Y.
1995-01-01
The EM algorithm promises an estimated image with the maximal likelihood for 3D PET image reconstruction. However, due to its long computation time, the EM algorithm has not been widely used in practice. While several parallel implementations of the EM algorithm have been developed to make the EM algorithm feasible, they do not guarantee an optimal parallelization efficiency. In this paper, the authors propose a new parallel EM algorithm which maximizes the performance by optimizing data replication on a mesh-connected message-passing multiprocessor. To optimize data replication, the authors have formally derived the optimal allocation of shared data, group sizes, integration and broadcasting of replicated data as well as the scheduling of shared data accesses. The proposed parallel EM algorithm has been implemented on an iPSC/860 with 16 PEs. The experimental and theoretical results, which are consistent with each other, have shown that the proposed parallel EM algorithm could improve performance substantially over those using unoptimized data replication
Han, Yishi; Luo, Zhixiao; Wang, Jianhua; Min, Zhixuan; Qin, Xinyu; Sun, Yunlong
2014-09-01
In general, context-based adaptive variable length coding (CAVLC) decoding in the H.264/AVC standard requires frequent access to unstructured variable length coding tables (VLCTs), incurring a large number of memory accesses. Heavy memory access causes high power consumption and time delays, which are serious problems for portable multimedia devices. We propose a method for high-efficiency CAVLC decoding that uses a program in place of all the VLCTs: the decoded codeword is obtained without any table look-up or memory access. Experimental results show that the proposed algorithm achieves 100% memory-access saving and 40% decoding-time saving without degrading video quality. Additionally, the proposed algorithm outperforms conventional CAVLC decoding approaches such as table look-up by sequential search, table look-up by binary search, Moon's method, and Kim's method.
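The paper's CAVLC routine is not reproduced here, but the idea of decoding a variable-length codeword by computation rather than table look-up can be illustrated with the H.264 Exp-Golomb code (an analogous, simpler code; it is not one of the CAVLC VLCTs):

```python
def decode_ue(bits):
    """Decode one unsigned Exp-Golomb codeword from a bit list
    arithmetically, with no lookup table: count leading zeros,
    then read that many info bits after the stop bit.
    Returns (value, bits_consumed)."""
    zeros = 0
    while bits[zeros] == 0:        # leading-zero prefix
        zeros += 1
    info = 0
    for b in bits[zeros + 1 : 2 * zeros + 1]:  # info bits
        info = (info << 1) | b
    return (1 << zeros) - 1 + info, 2 * zeros + 1

# '00111' encodes 6: two leading zeros, stop bit, info bits '11'
value, used = decode_ue([0, 0, 1, 1, 1])
```

The decoded value is a pure function of the prefix length and the info bits, so no memory access beyond the input bitstream is needed — the same property the proposed CAVLC method exploits.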
DEFF Research Database (Denmark)
Kamchevska, Valerija; Cristofori, Valentina; Da Ros, Francesco
2016-01-01
We propose and demonstrate an algorithm that allows for automatic synchronization of SDN-controlled all-optical TDM switching nodes connected in a ring network. We experimentally show successful WDM-SDM transmission of data bursts between all ring nodes.
Indian Academy of Sciences (India)
ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...
Park, Sun-Youp; Choi, Jin; Roh, Dong-Goo; Park, Maru; Jo, Jung Hyun; Yim, Hong-Suh; Park, Young-Sik; Bae, Young-Ho; Park, Jang-Hyun; Moon, Hong-Kyu; Choi, Young-Jun; Cho, Sungki; Choi, Eun-Jung
2016-09-01
As described in the previous paper (Park et al. 2013), the detector subsystem of optical wide-field patrol (OWL) provides many observational data points of a single artificial satellite or space debris in the form of small streaks, using a chopper system and a time tagger. The position and the corresponding time data are matched assuming that the length of a streak on the CCD frame is proportional to the time duration of the exposure during which the chopper blades do not obscure the CCD window. In the previous study, however, the length was measured using the diagonal of the rectangle of the image area containing the streak; the results were quite ambiguous and inaccurate, allowing possible matching errors between position and time data. Furthermore, because only one (position, time) data point is created from one streak, the efficiency of the observation decreases. To define the length of a streak correctly, it is important to locate the endpoints of the streak. In this paper, a method using a differential convolution mask pattern is tested. This method can be used to obtain the positions where the pixel values change sharply. These endpoints can be regarded as directly detected positional data, and the number of data points is doubled as a result.
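The endpoint-detection idea can be sketched on a 1-D intensity profile: convolving with a differential mask makes the two ends of a streak appear as a strong positive and a strong negative response. This is an illustrative sketch of the edge-detection principle, not the exact OWL mask pattern:

```python
import numpy as np

def streak_endpoints(profile):
    """Locate the endpoints of a streak in a 1-D intensity profile
    by convolving with a differential mask: the rising edge gives a
    strong positive response, the falling edge a strong negative one."""
    kernel = np.array([1.0, 0.0, -1.0])          # differential mask
    response = np.convolve(profile, kernel, mode="same")
    start = int(np.argmax(response))             # sharp rise
    end = int(np.argmin(response))               # sharp fall
    return start, end

# synthetic profile: streak spans pixels 10..20
profile = np.zeros(30)
profile[10:21] = 1.0
start, end = streak_endpoints(profile)
```

With both endpoints located, the streak length follows directly, and each streak yields two positional data points instead of one.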
A system for the 3D reconstruction of retracted-septa PET data using the EM algorithm
International Nuclear Information System (INIS)
Johnson, C.A.; Yan, Y.; Carson, R.E.; Martino, R.L.; Daube-Witherspoon, M.E.
1995-01-01
The authors have implemented the EM reconstruction algorithm for volume acquisition from current generation retracted-septa PET scanners. Although the software was designed for a GE Advance scanner, it is easily adaptable to other 3D scanners. The reconstruction software was written for an Intel iPSC/860 parallel computer with 128 compute nodes. Running on 32 processors, the algorithm requires approximately 55 minutes per iteration to reconstruct a 128 x 128 x 35 image. No projection data compression schemes or other approximations were used in the implementation. Extensive use of EM system matrix (C_ij) symmetries (including the 8-fold in-plane symmetries, 2-fold axial symmetries, and axial parallel line redundancies) reduces the storage cost by a factor of 188. The parallel algorithm operates on distributed projection data which are decomposed by base-symmetry angles. Symmetry operators copy and index the C_ij chord to the form required for the particular symmetry. The use of asynchronous reads, lookup tables, and optimized image indexing improves computational performance
Indian Academy of Sciences (India)
algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).
Indian Academy of Sciences (India)
algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = 3/2n -2 is the solution to the above ...
Indian Academy of Sciences (India)
will become clear in the next article when we discuss a simple Logo-like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... N0 disks are moved from A to B using C as auxiliary rod. • move_disk(A, C); the (N0 + 1)th disk is moved from A to C directly ...
A Local Scalable Distributed EM Algorithm for Large P2P Networks
National Aeronautics and Space Administration — This paper describes a local and distributed expectation maximization algorithm for learning parameters of Gaussian mixture models (GMM) in large peer-to-peer (P2P)...
An Efficient Forward-Reverse EM Algorithm for Statistical Inference in Stochastic Reaction Networks
Bayer, Christian; Moraes, Alvaro; Tempone, Raul; Vilanova, Pedro
2016-01-01
In this work [1], we present an extension of the forward-reverse algorithm by Bayer and Schoenmakers [2] to the context of stochastic reaction networks (SRNs). We then apply this bridge-generation technique to the statistical inference problem
Sensor Data Quality and Angular Rate Down-Selection Algorithms on SLS EM-1
Park, Thomas; Smith, Austin; Oliver, T. Emerson
2018-01-01
The NASA Space Launch System Block 1 launch vehicle is equipped with an Inertial Navigation System (INS) and multiple Rate Gyro Assemblies (RGA) that are used in the Guidance, Navigation, and Control (GN&C) algorithms. The INS provides the inertial position, velocity, and attitude of the vehicle along with both angular rate and specific force measurements. Additionally, multiple sets of co-located rate gyros supply angular rate data. The collection of angular rate data, taken along the launch vehicle, is used to separate out vehicle motion from flexible body dynamics. Since the system architecture uses redundant sensors, the capability was developed to evaluate the health (or validity) of the independent measurements. A suite of Sensor Data Quality (SDQ) algorithms is responsible for assessing the angular rate data from the redundant sensors. When failures are detected, SDQ will take the appropriate action and disqualify or remove faulted sensors from forward processing. Additionally, the SDQ algorithms contain logic for down-selecting the angular rate data used by the GNC software from the set of healthy measurements. This paper explores the trades and analyses that were performed in selecting a set of robust fault-detection algorithms included in the GN&C flight software. These trades included both an assessment of hardware-provided health and status data as well as an evaluation of different algorithms based on time-to-detection, type of failures detected, and probability of detecting false positives. We then provide an overview of the algorithms used for both fault-detection and measurement down selection. We next discuss the role of trajectory design, flexible-body models, and vehicle response to off-nominal conditions in setting the detection thresholds. Lastly, we present lessons learned from software integration and hardware-in-the-loop testing.
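A toy version of median-based disqualification and down-selection might look like the following; the threshold value and the averaging rule are hypothetical placeholders for illustration, not the SLS flight algorithm or its tuned thresholds:

```python
def down_select(rates, limit=1.0):
    """Toy SDQ-style down-selection: disqualify any redundant rate
    measurement too far from the median of the set, then combine the
    healthy ones into a single selected rate."""
    s = sorted(rates)
    median = s[len(s) // 2]
    healthy = [r for r in rates if abs(r - median) <= limit]
    return sum(healthy) / len(healthy), healthy

# three consistent gyros plus one faulted measurement (5.0)
rate, healthy = down_select([0.11, 0.10, 0.12, 5.0])
```

A median-based check is robust to a single large fault because the faulted value cannot pull the median far from the consistent cluster.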
A Study of Wind Turbine Comprehensive Operational Assessment Model Based on EM-PCA Algorithm
Zhou, Minqiang; Xu, Bin; Zhan, Yangyan; Ren, Danyuan; Liu, Dexing
2018-01-01
To assess wind turbine performance accurately and provide a theoretical basis for wind farm management, a hybrid assessment model based on the Entropy Method and Principal Component Analysis (EM-PCA) was established, which takes most factors of operational performance into consideration and reaches a comprehensive result. To verify the model, six wind turbines were chosen as research objects; the ranking obtained by the proposed method was 4#>6#>1#>5#>2#>3#, completely in conformity with the theoretical ranking, indicating that the reliability and effectiveness of the EM-PCA method are high. The method can guide state comparison among different units and support wind farm operational assessment.
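The Entropy Method step of such a hybrid model can be sketched directly from its standard definition: criteria whose values are more dispersed across alternatives carry more information and receive larger weights. This is a generic sketch; the paper's exact indicator set is not reproduced:

```python
import numpy as np

def entropy_weights(X):
    """Entropy Method weights for a decision matrix X of shape
    (alternatives x criteria), all entries positive: low-entropy
    (high-dispersion) criteria get larger weights."""
    P = X / X.sum(axis=0)                        # column proportions
    n = X.shape[0]
    with np.errstate(divide="ignore", invalid="ignore"):
        plogp = np.where(P > 0, P * np.log(P), 0.0)
    e = -plogp.sum(axis=0) / np.log(n)           # entropy per criterion
    d = 1.0 - e                                  # degree of divergence
    return d / d.sum()                           # normalized weights

# criterion 0 is identical for all turbines, criterion 1 varies
X = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 4.0]])
w = entropy_weights(X)
```

A criterion that is identical across all alternatives has maximum entropy and receives (essentially) zero weight, since it cannot discriminate between units.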
Directory of Open Access Journals (Sweden)
Jianzhong Zhou
2018-04-01
Full Text Available With the fast development of artificial intelligence techniques, data-driven modeling approaches are becoming hotspots in both academic research and engineering practice. This paper proposes a novel data-driven T-S fuzzy model to precisely describe the complicated dynamic behaviors of pumped storage generator motor (PSGM. In premise fuzzy partition of the proposed T-S fuzzy model, a novel variable-length tree-seed algorithm based competitive agglomeration (VTSA-CA algorithm is presented to determine the optimal number of clusters automatically and improve the fuzzy clustering performances. Besides, in order to promote modeling accuracy of PSGM, the input and output formats in the T-S fuzzy model are selected by an economical parameter controlled auto-regressive (CAR model derived from a high-order transfer function of PSGM considering the distributed components in the water diversion system of the power plant. The effectiveness and superiority of the T-S fuzzy model for PSGM under different working conditions are validated by performing comparative studies with both practical data and the conventional mechanistic model.
Directory of Open Access Journals (Sweden)
Maurisrael de Moura Rocha
2009-03-01
Full Text Available The objective of this work was to investigate the genetic control of peduncle length in cowpea (Vigna unguiculata L.). A short-peduncle cowpea line (TVx-5058-09C) was crossed with a long-peduncle line (TE96-282-22G). The parents and the F1, F2, RC1 (P1xF1), and RC2 (P2xF1) generations were evaluated in a randomized block design with four replications. Genotypic, phenotypic, environmental, additive, and dominance variances for peduncle length were estimated, along with narrow- and broad-sense heritability, the degree of dominance, and the minimum number of genes determining the trait. The additive-dominant model was adequate to explain the observed variation. The additive gene effect was the most important in controlling peduncle length, which appeared to be controlled by five genes.
Directory of Open Access Journals (Sweden)
Enrique Calderín-Ojeda
2017-11-01
Full Text Available Generalized linear models might not be appropriate when the probability of extreme events is higher than that implied by the normal distribution. Extending the method for estimating the parameters of a double Pareto lognormal distribution (DPLN) in Reed and Jorgensen (2004), we develop an EM algorithm for the heavy-tailed double-Pareto-lognormal generalized linear model. The DPLN distribution is obtained as a mixture of a lognormal distribution with a double Pareto distribution. In this paper the associated generalized linear model has the location parameter equal to a linear predictor, which is used to model insurance claim amounts for various data sets. The performance is compared with those of the generalized beta (of the second kind) and lognormal distributions.
EMHP: an accurate automated hole masking algorithm for single-particle cryo-EM image processing.
Berndsen, Zachary; Bowman, Charles; Jang, Haerin; Ward, Andrew B
2017-12-01
The Electron Microscopy Hole Punch (EMHP) is a streamlined suite of tools for quick assessment, sorting and hole masking of electron micrographs. With recent advances in single-particle electron cryo-microscopy (cryo-EM) data processing allowing for the rapid determination of protein structures using a smaller computational footprint, we saw the need for a fast and simple tool for data pre-processing that could run independent of existing high-performance computing (HPC) infrastructures. EMHP provides a data preprocessing platform in a small package that requires minimal Python dependencies to function. Availability: https://www.bitbucket.org/chazbot/emhp (Apache 2.0 License). Contact: bowman@scripps.edu. Supplementary data are available at Bioinformatics online. © The Author(s) 2017. Published by Oxford University Press.
Suppression of EM Fields using Active Control Algorithms and MIMO Antenna System
Directory of Open Access Journals (Sweden)
A. Mohammed
2004-09-01
Full Text Available Active methods for attenuating acoustic pressure fields have been successfully used in many applications. In this paper we investigate some of these active control methods in combination with a MIMO antenna system in order to assess their validity and performance when applied to electromagnetic fields. The application evaluated in this paper is a model of a mobile phone equipped with one ordinary transmitting antenna and two actuator-antennas whose purpose is to reduce the electromagnetic field at a specific area in space (e.g. at the human head). Simulation results show the promise of using the adaptive active control algorithms and MIMO system to attenuate the electromagnetic field power density.
Directory of Open Access Journals (Sweden)
César Gomes Victora
1985-02-01
Full Text Available A cohort of 6,011 urban children born in 1982 in the hospitals of Pelotas, Rio Grande do Sul, was followed up so that their morbidity, mortality and growth could be assessed; these children accounted for over 99% of all urban births in the city that year. A 30% sample of the children were visited at home at approximately 12 months of age, and the whole population was visited at about 20 months. It was possible to locate 81% of the children at 12 months; this proportion increased to 86% at 20 months, due to a change in the logistics of the field work, which then included visiting all 69,000 households in the city to locate children whose families had moved within the urban area. The methodology and the main difficulties encountered are discussed, and the characteristics at birth of children located at follow-up are compared to those of children lost to follow-up. The potential use of the collected data is illustrated by preliminary results showing the associations between birth weight, family income and nutritional status at 12 months. The study shows that it is possible to follow, with relatively small losses, a population-based cohort of children in a medium-sized Brazilian city.
Energy Technology Data Exchange (ETDEWEB)
Pereira, Claudio M.N.A. [Instituto de Engenharia Nuclear (IEN), Rio de Janeiro, RJ (Brazil); Schirru, Roberto; Martinez, Aquilino S. [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia
1997-12-01
This work presents a prototype of a system for nuclear reactor core design optimization based on genetic algorithms and artificial neural networks. A neural network is modeled and trained to predict the flux and the neutron multiplication factor from the enrichment, lattice pitch and cladding thickness, with an average error of less than 2%. The values predicted by the neural network are used by a genetic algorithm in its heuristic search, guided by an objective function that rewards high flux values and penalizes multiplication factors far from the required value. By associating this quick prediction (which may substitute for the reactor physics calculation code) with the global optimization capacity of the genetic algorithm, a fast and effective system for nuclear reactor core design optimization was obtained. (author). 11 refs., 8 figs., 3 tabs.
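The coupling described above, a genetic algorithm searching over designs while a trained network stands in for the physics code, can be sketched as follows. The objective `surrogate_flux` is a hypothetical stand-in for the trained network, and the GA operators are generic textbook choices, not those of the paper:

```python
import random

random.seed(1)

def surrogate_flux(x):
    """Stand-in for the trained neural network: a smooth objective
    peaking at x = 0.7 over one normalized design variable.
    (Hypothetical; the paper's network predicts flux and the
    multiplication factor from three design parameters.)"""
    return -(x - 0.7) ** 2

def genetic_search(fitness, n_pop=30, n_gen=60, sigma=0.05):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, design variable clipped to [0, 1]."""
    pop = [random.random() for _ in range(n_pop)]
    for _ in range(n_gen):
        new = []
        for _ in range(n_pop):
            a, b = random.sample(pop, 2)
            parent = a if fitness(a) > fitness(b) else b   # tournament
            mate = random.choice(pop)
            child = 0.5 * (parent + mate)                  # blend crossover
            child += random.gauss(0.0, sigma)              # mutation
            new.append(min(1.0, max(0.0, child)))
        pop = new
    return max(pop, key=fitness)

best = genetic_search(surrogate_flux)
```

Because each fitness evaluation is a cheap network (here, function) call rather than a full reactor physics run, the GA can afford thousands of evaluations, which is the point of the surrogate coupling.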
Surface Length 3D: an OsiriX plugin for measuring distances on surfaces
Directory of Open Access Journals (Sweden)
Alexandre Campos Moraes Amato
Full Text Available Traditional medical image evaluation software, such as DICOM viewers, offers various tools for measuring distance, area and volume, but none of them allows distances between points to be measured along a surface. The shortest path between points makes it possible to compute the distance between vessel ostia, as in aortic aneurysms, and to evaluate the visceral vessels for surgical planning. The development of an OsiriX plugin for measuring distances on surfaces proved feasible. Validation of the tool is still required.
Fundamental length and relativistic length
International Nuclear Information System (INIS)
Strel'tsov, V.N.
1988-01-01
It is noted that the introduction of a fundamental length contradicts the conventional representations concerning the contraction of the longitudinal size of fast-moving objects. The use of the concept of relativistic length and the ensuing "elongation formula" permits one to resolve this problem
Earth Data Analysis Center, University of New Mexico — Flame length was modeled using FlamMap, an interagency fire behavior mapping and analysis program that computes potential fire behavior characteristics. The tool...
Mino, H
2007-01-01
This work estimates the parameters, namely the impulse response (IR) functions, of linear time-invariant systems generating intensity processes in a shot-noise-driven doubly stochastic Poisson process (SND-DSPP), under the assumption that multivariate presynaptic spike trains and postsynaptic spike trains can be modeled by SND-DSPPs. An explicit formula for estimating the IR functions from observations of the multivariate input processes of the linear systems and the corresponding counting process (output process) is derived utilizing the expectation maximization (EM) algorithm. The validity of the estimation formula was verified through Monte Carlo simulations in which two presynaptic spike trains and one postsynaptic spike train were assumed to be observable. The IR functions estimated with the proposed identification method were close to the true IR functions. The proposed method should play an important role in identifying the input-output relationship of pre- and postsynaptic neural spike trains in practical situations.
Al-Jabr, Ahmad Ali; Alsunaidi, Mohammad A.; Ooi, Boon S.
2013-01-01
This paper presents methods of simulating gain media in the finite difference time-domain (FDTD) algorithm utilizing a generalized polarization formulation. The gain can be static or dynamic. For static gain, Lorentzian and non-Lorentzian models are presented and tested. For dynamic gain, rate equations for two-level and four-level models are incorporated in the FDTD scheme. The simulation results conform to the expected behavior of wave amplification and dynamic population inversion.
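The core update loop of a 1-D FDTD scheme, into which such a polarization term would be inserted, can be sketched as follows. This shows the structure only, in normalized units; it is not the paper's generalized polarization formulation:

```python
import numpy as np

def fdtd_1d(nz=200, nt=100, src=100):
    """Bare 1-D FDTD (Yee) update loop in normalized units with
    Courant number 1, so a pulse moves exactly one cell per step.
    A gain medium would enter as an extra polarization-current term
    in the E-field update; it is omitted here."""
    ez = np.zeros(nz)   # electric field
    hy = np.zeros(nz)   # magnetic field
    for t in range(nt):
        hy[:-1] += ez[1:] - ez[:-1]                   # H update (curl of E)
        ez[1:] += hy[1:] - hy[:-1]                    # E update (curl of H)
        ez[src] += np.exp(-((t - 30.0) / 8.0) ** 2)   # soft Gaussian source
    return ez

ez = fdtd_1d()
peak = int(np.argmax(np.abs(ez)))   # pulse peak, ~70 cells from the source
```

The soft source splits into two counter-propagating pulses; a gain term added to the E update would amplify them as they propagate through the medium.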
International Nuclear Information System (INIS)
Pradhan, T.
1975-01-01
The concept of fundamental length was first put forward by Heisenberg from purely dimensional reasons. From a study of the observed masses of the elementary particles known at that time, it is sumrised that this length should be of the order of magnitude 1 approximately 10 -13 cm. It was Heisenberg's belief that introduction of such a fundamental length would eliminate the divergence difficulties from relativistic quantum field theory by cutting off the high energy regions of the 'proper fields'. Since the divergence difficulties arise primarily due to infinite number of degrees of freedom, one simple remedy would be the introduction of a principle that limits these degrees of freedom by removing the effectiveness of the waves with a frequency exceeding a certain limit without destroying the relativistic invariance of the theory. The principle can be stated as follows: It is in principle impossible to invent an experiment of any kind that will permit a distintion between the positions of two particles at rest, the distance between which is below a certain limit. A more elegant way of introducing fundamental length into quantum theory is through commutation relations between two position operators. In quantum field theory such as quantum electrodynamics, it can be introduced through the commutation relation between two interpolating photon fields (vector potentials). (K.B.)
Length and coverage of inhibitory decision rules
Alsolami, Fawaz
2012-01-01
The authors present algorithms for optimization of inhibitory rules relative to length and coverage. Inhibitory rules have a relation "attribute ≠ value" on the right-hand side. The considered algorithms are based on extensions of dynamic programming. The paper also contains a comparison of the length and coverage of inhibitory rules constructed by a greedy algorithm and by the dynamic programming algorithm. © 2012 Springer-Verlag.
Directory of Open Access Journals (Sweden)
Tarcisio Abreu Saurin
2012-04-01
Full Text Available The objective of this study is to propose improvements to an algorithm for classifying the error types of front-line workers. The improvements were identified on the basis of testing the algorithm on construction sites, an environment where it had not previously been applied. To this end, 19 occupational accidents which occurred in a small construction company were investigated, and the error types of both the injured workers and the team members present at the accident scene were classified. The results indicated that there was no error in 70.5% of the 34 times the algorithm was applied, providing evidence that the causes were strongly linked to organizational factors. Moreover, the study presents recommendations to facilitate the interpretation of the questions that constitute the algorithm, as well as changes to some questions in comparison with previous versions of the tool.
Directory of Open Access Journals (Sweden)
Rodrigo Bomeny de Paulo
2009-01-01
Full Text Available INTRODUCTION: The purposes of this study were: to evaluate the complications and length of stay of patients admitted with ischemic stroke (IS) in the acute or subacute phase in a general Neurology ward in São Paulo, Brazil; and to investigate the influence of age, risk factors for vascular disease, arterial territory and etiology on complications and length of stay. METHODS: Data from 191 IS patients were collected prospectively and subsequently analyzed. RESULTS: Fifty-one patients (26.7%) presented at least one clinical complication during the stay; pneumonia was the most frequent. Mean length of stay was 16.8±13.8 days. Multivariate analysis revealed that the only factor significantly correlated with a lower complication rate was younger age (OR=0.92-0.97, p<0.001). The presence of complications was the only factor that independently influenced length of stay (OR=4.20; CI=1.92-8.84; p<0.0001). CONCLUSION: These results should be considered in the planning and organization of IS care in Brazil.
Directory of Open Access Journals (Sweden)
Ana Beatriz Francioso de Oliveira
2010-09-01
Full Text Available OBJECTIVE: The intensive care unit is synonymous with high severity, and its mortality rates range between 5.4% and 33%. With the development of new technologies, a patient can be maintained in the unit for a long time, at high financial, psychological, and moral cost for all involved. This study aimed to evaluate the factors associated with higher mortality and prolonged length of stay in an adult intensive care unit. METHODS: All patients consecutively admitted to the adult clinical/surgical intensive care unit of the Hospital das Clínicas da Universidade Estadual de Campinas over a six-month period were included. Data collected included sex, age, diagnosis, personal history, APACHE II score, days of invasive mechanical ventilation, orotracheal reintubation, tracheostomy, days of stay in the intensive care unit, and discharge or death in the unit. RESULTS: The study included 401 patients, 59.6% men and 40.4% women, with a mean age of 53.8 ± 18.0 years. Mean length of stay in the intensive care unit was 8.2 ± 10.8 days, with a mortality rate of 13.46%. APACHE II > 11, tracheostomy, and reintubation were significantly associated with mortality and prolonged length of stay. CONCLUSION: In this study, APACHE II > 11, tracheostomy, and reintubation were associated with a higher mortality rate and a prolonged length of stay in the intensive care unit.
Dybvik, Lisa; Skraastad, Erlend; Yeltayeva, Aigerim; Konkayev, Aidos; Musaeva, Tatiana; Zabolotskikh, Igor; Bjertnaes, Lars; Dahl, Vegard; Raeder, Johan; Kuklin, Vladimir
2017-01-01
We recently introduced the efficacy safety score (ESS) as a new "call-out algorithm" for the management of postoperative pain and side effects. In this study, we report the influence of ESS recorded hourly during the first 8 hours after surgery on the degree of mobility, postoperative nonsurgical complications, and length of hospital stay (LOS). We randomized 1152 surgical patients into three groups for postoperative observation: (1) an ESS group (n = 409), (2) a Verbal Numeric Rate Scale (VNRS) for pain group (n = 417), and (3) an ordinary qualitative observation (Control) group (n = 326). An ESS > 10 or VNRS > 4 at rest, or a nurse's observation of pain or an adverse reaction to analgesic treatment in the Control group, served as a "call-out alarm" for an anaesthesiologist. We found no significant differences in the degree of mobility or the number of postoperative nonsurgical complications between the groups. LOS was significantly shorter, 12.7 ± 6.3 days (mean ± SD), in the ESS group versus 14.2 ± 6.2 days in the Control group (P < 0.001). Postoperative ESS recording, combined with the possibility of calling upon an anaesthesiologist when the threshold score was exceeded, might have contributed to the reduction of LOS in this two-centre study. This trial is registered with NCT02143128.
Directory of Open Access Journals (Sweden)
Taciana D. de A. Braga
2002-01-01
Full Text Available Objective: An accurate assessment of the nutritional status of an infant at birth is very important, since it allows early identification of morbid events related to intrauterine growth acceleration or retardation. Anthropometric indices such as weight/length ratios and the mid-arm circumference/head circumference ratio may be used as alternative tools for this purpose. The main objective of this study was to verify the correlation of triceps skinfold thickness with the anthropometric indices used to assess the nutritional status of newborns: Rohrer's Ponderal Index, Body Mass Index, weight/length ratio, and mid-arm circumference/head circumference ratio. Methods: A cross-sectional study was carried out on 390 full-term newborns at the maternity ward of the Centro de Atenção à Saúde da Mulher do Instituto Materno-Infantil de Pernambuco between May and July 1999. The newborns' weight, length, head and mid-arm circumferences, and triceps skinfold thickness were measured. Results: All the anthropometric indices correlated significantly with triceps skinfold thickness; the simple weight/length ratio showed the strongest correlation (r = 0.63).
Directory of Open Access Journals (Sweden)
Ana Paula Iannoni
2006-04-01
Full Text Available The hypercube model, well known in the literature on server-to-customer localization problems, is based on spatially distributed queueing theory and Markovian approximations. The model can be modified to analyze Emergency Medical Systems (EMSs) on highways, taking into account the particularities of these systems' dispatching policies. In this study, we combine the hypercube model with a genetic algorithm to optimize the configuration and operation of EMSs on highways. The approach effectively supports planning and operation decisions, such as determining the ideal size of the area each ambulance should cover so as to minimize both the average response time to users and ambulance workload imbalances, as well as generating a Pareto-efficient boundary between these measures. The computational results of this approach were analyzed using real data from the Anjos do Asfalto EMS, which covers the Presidente Dutra highway.
DEFF Research Database (Denmark)
Kimura, Masayuki; Hjelmborg, Jacob V B; Gardner, Jeffrey P
2008-01-01
Leukocyte telomere length, representing the mean length of all telomeres in leukocytes, is ostensibly a bioindicator of human aging. The authors hypothesized that the shortest telomeres, rather than the mean leukocyte telomere length, might better forecast imminent mortality in elderly people. They performed mortality...
Directory of Open Access Journals (Sweden)
J.C.M.C. Rocha
2005-12-01
Full Text Available Variance components were estimated for gestation length (GL), fitting the additive direct effect of the calf, the maternal genetic effect, and the sire effect as random effects. The statistical models also included the fixed effect of contemporary group, which comprised date of breeding (AI), date of birth, and sex of calf, and the covariate age of dam at calving (linear and quadratic). Two different models were used: model 1, treating GL as a trait of the calf, and model 2, treating GL as a trait of the dam. Mean gestation length for the purebred animals was 294.55 days (males) and 293.34 days (females), while for the crossbred animals it was 292.49 days (males) and 292.55 days (females). Variance components for the purebred animals under model 1 were 14.47, 72.78, and 57.31 for the additive genetic (σ²a), total phenotypic (σ²p), and residual (σ²e) effects, respectively, with a heritability estimate of 0.21. For the crossbred animals, the variance components σ²a, σ²p, and σ²e were 90.40, 127.35, and 36.95, respectively, with a heritability of 0.71. Under model 2, the estimated variance components for the purebred animals were 12.78, 5.01, 74.84, and 57.05 for σ²a, the sire-of-calf effect (σ²sire), σ²p, and σ²e, respectively. The sire effect accounted for 0.07 (c²) of the phenotypic variance, and the coefficient of repeatability was 0.17. For the crossbred animals, the variance components were 22.11 (σ²a), 22.97 (σ²sire), 127.70 (σ²p), and 82.61 (σ²e), while c² was 0.18 and repeatability was 0.17. Therefore, for selection in beef cattle, it is suggested to use the heritability estimate obtained with model 1, in which GL is considered a trait of the calf.
Directory of Open Access Journals (Sweden)
Felipe Montes Pena
2010-12-01
Full Text Available OBJECTIVES: Prolonged length of stay after cardiac surgery is associated with poor immediate outcomes and increased costs. This study aimed to evaluate the predictive power of the Ambler score for length of stay in the intensive care unit. METHODS: This was a retrospective cohort study based on data collected from 110 patients undergoing valve replacement surgery, alone or combined with other procedures. Additive and logistic Ambler scores were obtained and their predictive performance was assessed using the receiver operating characteristic (ROC) curve. A normal intensive care unit stay was defined as up to 3 days. The areas under the curves of the additive and logistic models were compared using the Hanley-McNeil test. RESULTS: The mean intensive care unit length of stay was 4.2 days. Sixty-three patients were male. The logistic model showed areas under the ROC curve of 0.73 and 0.79 for stays longer and shorter than 3 days, respectively.
Energy Technology Data Exchange (ETDEWEB)
Leite, Vinicius Freitas
2012-07-01
Through a study of the interaction of electromagnetic radiation with matter, this work analyzes two heterogeneous phantom arrangements designed to simulate real planning cases with different electron densities, using the Pencil Beam, Collapsed Cone, and Analytical Anisotropic Algorithm dose-calculation algorithms, and compares the results with relative absorbed dose measurements from an IBA CC13 ionization chamber and Gafchromic® EBT2 radiochromic film. Epichlorohydrin rubber was also evaluated for its compatibility with human bone. The assembly of the heterogeneous phantoms proved feasible, and the results for the density and attenuation of the rubber were consistent. However, the study of percentage depth doses (PDDs) in the constructed phantoms showed considerable percentage discrepancies between measurements and planning.
Energy Technology Data Exchange (ETDEWEB)
Siqueira, Newton Norat
2006-12-15
This work presents a new approach for solving availability maximization problems in electromechanical systems subject to periodic preventive tests. The approach uses Particle Swarm Optimization (PSO), an optimization tool developed by Kennedy and Eberhart (2001), integrated with a probabilistic safety analysis model. Two maintenance optimization problems are solved with the proposed technique: the first is a hypothetical electromechanical configuration, and the second is a real case from a nuclear power plant (emergency diesel generators). For both problems, PSO is compared to a genetic algorithm (GA). In the experiments performed, PSO obtained results comparable to, or even slightly better than, those obtained by the GA. Moreover, the PSO algorithm is simpler and converges faster, indicating that PSO is a good alternative for solving this kind of problem. (author)
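For readers unfamiliar with the technique, the velocity/position update at the heart of PSO can be sketched in a few lines. This is a generic textbook sketch, not the authors' implementation; the quadratic objective is a hypothetical stand-in for the unavailability model, and all parameters (inertia `w`, acceleration coefficients `c1` and `c2`, swarm size) are illustrative assumptions.

```python
import random

def pso(objective, dim, bounds, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal particle swarm optimizer (minimization)."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                      # personal best positions
    pbest_val = [objective(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] = min(max(X[i][d] + V[i][d], lo), hi)
            val = objective(X[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = X[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = X[i][:], val
    return gbest, gbest_val

random.seed(0)
# Hypothetical stand-in for the unavailability model: a 1D quadratic
# whose minimizer (x = 3) plays the role of the optimal test interval.
best, val = pso(lambda x: (x[0] - 3.0) ** 2 + 1.0, dim=1, bounds=(0.0, 10.0))
```

Each particle is pulled toward its own best position and the swarm's best, which is what makes the method simple to implement and fast to converge on smooth landscapes.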
Energy Technology Data Exchange (ETDEWEB)
Souza, Claudio Eduardo Scriptori de
1996-02-01
In electrical energy system operating centers, understanding the behavior of the electrical power network has become increasingly important. State estimation is essential for the adequate operation of the system; however, before performing state estimation one needs to know whether the system is observable, otherwise the estimation will not be possible. The main objective of this work is to develop software that displays the whole network when it is observable, or the observable islands of the network otherwise. As theoretical background, the theory and algorithm based on the triangular factorization of the gain matrix, together with the factorization path concepts developed by Bretas et al., were used. Their algorithm was adapted to a graphical Windows interface, so that the numerical results of the network observability analysis are shown on screen in graphical form. This algorithm is essentially structural rather than purely numerical, unlike those based on factorization of the gain matrix alone. To implement it, the Borland C++ compiler for Windows, version 4.0, was used because of the facilities it offers for source generation. The tests on networks with 6, 14, and 30 buses led to: (1) a simplification of observability analysis, using sparse vectors and triangular factorization of the gain matrix; (2) similar behavior across the three tested systems, with clear indications that the routine works well for any system, particularly those with larger numbers of buses and lines; and (3) an alternative way of presenting numerical results in graphical form using the program developed here. (author)
DEFF Research Database (Denmark)
Mahnke, Martina; Uprichard, Emma
2014-01-01
Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you’ve hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...
Vector Network Coding Algorithms
Ebrahimi, Javad; Fragouli, Christina
2010-01-01
We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a role similar to that of coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...
Energy Technology Data Exchange (ETDEWEB)
Almeida, Adino Americo Heimlich
2009-07-01
Graphics Processing Units (GPUs) are high-performance co-processors originally intended to improve the use and quality of computer graphics applications. Once researchers and practitioners realized the potential of general-purpose GPU computing, its application was extended to fields beyond computer graphics. The main objective of this work is to evaluate the impact of using GPUs in two typical problems of the nuclear field: neutron transport simulation using the Monte Carlo method, and solving the heat equation in a two-dimensional domain by the finite difference method. To this end, we developed parallel algorithms for GPU and CPU for both problems. The comparison showed that the GPU-based approach is faster than the CPU on a computer with two quad-core processors, without loss of precision. (author)
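The second benchmark, the 2D heat equation solved by finite differences, reduces to a five-point stencil that maps naturally onto GPU threads (one thread per grid cell). Below is a minimal CPU reference sketch in Python/NumPy, offered for illustration only (the work itself used GPU kernels); the grid size, boundary values, and diffusion number `alpha` are arbitrary choices.

```python
import numpy as np

def heat_step(u, alpha=0.25):
    """One explicit finite-difference step of the 2D heat equation
    (unit grid spacing, boundaries held fixed). The five-point stencil
    below is what a GPU kernel would evaluate, one thread per cell.
    Stability of the explicit scheme requires alpha <= 0.25 in 2D."""
    un = u.copy()
    un[1:-1, 1:-1] = u[1:-1, 1:-1] + alpha * (
        u[2:, 1:-1] + u[:-2, 1:-1] + u[1:-1, 2:] + u[1:-1, :-2]
        - 4.0 * u[1:-1, 1:-1])
    return un

# Arbitrary example: 32x32 grid, hot left edge, 200 time steps.
u = np.zeros((32, 32))
u[:, 0] = 1.0
for _ in range(200):
    u = heat_step(u)
```

Because every cell update depends only on the previous time level, all cells can be updated concurrently, which is precisely the parallelism a GPU exploits.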
Energy Technology Data Exchange (ETDEWEB)
Barbosa, Diego R.; Silva, Alessandro L. da; Luciano, Edson Jose Rezende; Nepomuceno, Leonardo [Universidade Estadual Paulista (UNESP), Bauru, SP (Brazil). Dept. de Engenharia Eletrica], Emails: diego_eng.eletricista@hotmail.com, alessandrolopessilva@uol.com.br, edson.joserl@uol.com.br, leo@feb.unesp.br
2009-07-01
DC Optimal Power Flow (OPF) problems have been solved by various conventional optimization methods. When the DC OPF model involves discontinuous or non-differentiable functions, solution methods based on conventional optimization are often not applicable because of the difficulty of calculating gradient vectors at points of discontinuity or non-differentiability. This paper proposes a method for solving the DC OPF based on a real-coded Genetic Algorithm (GA). The proposed GA has specific genetic operators to improve the quality and feasibility of the solution. The results are analyzed for an IEEE test system, and the solutions are compared, where possible, with those obtained by a primal-dual interior point method with logarithmic barrier. The results highlight the robustness of the method and the feasibility of obtaining solutions for real systems.
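A real-coded GA of the kind described, with selection, crossover, and mutation acting directly on real-valued vectors, can be sketched as follows. This is a generic illustration, not the authors' operators; the non-differentiable absolute-value objective is a hypothetical stand-in for a discontinuous DC OPF cost, and the operator choices (tournament selection, blend crossover, Gaussian mutation) are assumptions.

```python
import random

def ga_real(objective, dim, bounds, pop_size=40, gens=150,
            cx_rate=0.9, mut_rate=0.1):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, and one-individual elitism (minimization)."""
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        elite = min(pop, key=objective)
        new_pop = [elite[:]]                               # elitism
        while len(new_pop) < pop_size:
            a = min(random.sample(pop, 3), key=objective)  # tournament
            b = min(random.sample(pop, 3), key=objective)
            child = a[:]
            if random.random() < cx_rate:                  # blend crossover
                child = [ai + random.random() * (bi - ai) for ai, bi in zip(a, b)]
            if random.random() < mut_rate:                 # Gaussian mutation
                d = random.randrange(dim)
                child[d] += random.gauss(0.0, 0.1 * (hi - lo))
            new_pop.append([min(max(x, lo), hi) for x in child])
        pop = new_pop
    return min(pop, key=objective)

random.seed(0)
# Hypothetical non-differentiable stand-in for a DC OPF cost surface.
best = ga_real(lambda x: abs(x[0] - 2.0) + abs(x[1] + 1.0), dim=2, bounds=(-5.0, 5.0))
```

Note that nothing in the loop evaluates a gradient, which is why such methods remain usable when the cost involves discontinuities.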
De Götzen , Amalia; Mion , Luca; Tache , Olivier
2007-01-01
International audience; We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. They present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem-solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
String matching with variable length gaps
DEFF Research Database (Denmark)
Bille, Philip; Gørtz, Inge Li; Vildhøj, Hjalte Wedel
2012-01-01
primitive in computational biology applications. Let m and n be the lengths of P and T, respectively, and let k be the number of strings in P. We present a new algorithm achieving time O(n log k + m + α) and space O(m + A), where A is the sum of the lower bounds of the lengths of the gaps in P and α is the total number of occurrences of the strings in P within T. Compared to previous results, this bound essentially achieves the best known time and space complexities simultaneously. Consequently, our algorithm obtains the best known bounds for almost all combinations of m, n, k, A, and α. Our algorithm...
A fast fractional difference algorithm
DEFF Research Database (Denmark)
Jensen, Andreas Noack; Nielsen, Morten Ørregaard
2014-01-01
We provide a fast algorithm for calculating the fractional difference of a time series. In standard implementations, the calculation speed (number of arithmetic operations) is of order T², where T is the length of the time series. Our algorithm allows calculation speed of order T log...
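The speedup comes from recognizing the fractional difference (1 - L)^d as a linear convolution of the series with the binomial-expansion coefficients, which an FFT evaluates far faster than the O(T²) dot products of the standard implementation. The sketch below illustrates the general idea (it is not the authors' code): both versions compute the same coefficients; only the convolution differs.

```python
import numpy as np

def frac_diff_coeffs(T, d):
    """Coefficients of the binomial expansion of (1 - L)^d."""
    b = np.ones(T)
    for k in range(1, T):
        b[k] = b[k - 1] * (k - 1 - d) / k
    return b

def frac_diff_naive(x, d):
    """Standard O(T^2) implementation: one dot product per observation."""
    b = frac_diff_coeffs(len(x), d)
    return np.array([np.dot(b[: t + 1], x[t::-1]) for t in range(len(x))])

def frac_diff_fft(x, d):
    """Faster implementation via FFT-based linear convolution."""
    T = len(x)
    b = frac_diff_coeffs(T, d)
    n = 1
    while n < 2 * T:          # zero-pad to avoid circular wrap-around
        n *= 2
    conv = np.fft.irfft(np.fft.rfft(b, n) * np.fft.rfft(x, n), n)
    return conv[:T]

x = np.random.default_rng(0).standard_normal(512)
```

The two versions agree to floating-point precision, while the FFT route replaces T dot products with a handful of transforms.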
Canela, Andrés; Klatt, Peter; Blasco, María A
2007-01-01
Most somatic cells of long-lived species undergo telomere shortening throughout life. Critically short telomeres trigger loss of cell viability in tissues, which has been related to alteration of tissue function and loss of regenerative capabilities in aging and aging-related diseases. Hence, telomere length is an important biomarker for aging and can be used in the prognosis of aging diseases. These facts highlight the importance of developing methods for telomere length determination that can be employed to evaluate telomere length during the human aging process. Telomere length quantification methods have improved greatly in accuracy and sensitivity since the development of the conventional telomeric Southern blot. Here, we describe the different methodologies recently developed for telomere length quantification, as well as their potential applications for human aging studies.
Joux, Antoine
2009-01-01
Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program. Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic...
Accelerated EM-based clustering of large data sets
Verbeek, J.J.; Nunnink, J.R.J.; Vlassis, N.
2006-01-01
Motivated by the poor performance (linear complexity) of the EM algorithm in clustering large data sets, and inspired by the successful accelerated versions of related algorithms like k-means, we derive an accelerated variant of the EM algorithm for Gaussian mixtures that: (1) offers speedups that
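For reference, the plain (unaccelerated) EM iteration for a Gaussian mixture alternates an E-step computing responsibilities with an M-step of weighted parameter updates. The sketch below is the textbook two-component 1D version, not the accelerated variant of the abstract; the initialization from the data extremes is an arbitrary choice for illustration.

```python
import numpy as np

def em_gmm_1d(x, n_iter=100):
    """Plain EM for a two-component 1D Gaussian mixture (textbook sketch)."""
    mu = np.array([x.min(), x.max()])   # arbitrary spread-out initialization
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component for each point
        dens = (pi / np.sqrt(2.0 * np.pi * var)) * \
               np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted updates of the parameters
        nk = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / len(x)
    return mu, var, pi

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-3.0, 1.0, 500), rng.normal(3.0, 1.0, 500)])
mu, var, pi = em_gmm_1d(x)
```

Every E-step touches all T data points, which is the linear per-iteration cost that acceleration schemes such as the one above aim to reduce.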
Telomere length and depression
DEFF Research Database (Denmark)
Wium-Andersen, Marie Kim; Ørsted, David Dynnes; Rode, Line
2017-01-01
BACKGROUND: Depression has been cross-sectionally associated with short telomeres as a measure of biological age. However, the direction and nature of the association are currently unclear. AIMS: We examined whether short telomere length is associated with depression cross-sectionally as well as prospectively and genetically. METHOD: Telomere length and three polymorphisms, TERT, TERC and OBFC1, were measured in 67,306 individuals aged 20-100 years from the Danish general population and associated with register-based attendance at hospital for depression and purchase of antidepressant medication. RESULTS: Attendance at hospital for depression was associated with short telomere length cross-sectionally, but not prospectively. Further, purchase of antidepressant medication was not associated with short telomere length cross-sectionally or prospectively. Mean follow-up was 7.6 years (range 0...
Hougardy, Stefan
2016-01-01
Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.
Directory of Open Access Journals (Sweden)
Xuemei Liu
2012-09-01
Full Text Available Cellulose synthase (CESA), an essential catalyst for the generation of plant cell wall biomass, is mainly encoded by the CesA gene family, which contains ten or more members. In this study, four full-length cDNAs encoding CESA were isolated from Betula platyphylla Suk., an important timber species, using RT-PCR combined with the RACE method, and were named BplCesA3, -4, -7 and -8. The deduced CESAs contain the same typical domains and regions as their Arabidopsis homologs. The cDNA lengths differ among these four genes, as do the locations of the various protein domains inferred from the deduced amino acid sequences, which share amino acid sequence identities ranging from only 63.8% to 70.5%. Real-time RT-PCR showed that all four BplCesAs were expressed at different levels in diverse tissues. The results indicate that BplCESA8 might be involved in secondary cell wall biosynthesis and floral development. BplCESA3 showed a unique expression pattern and is possibly involved in primary cell wall biosynthesis and seed development; it might also be related to homogalacturonan synthesis. BplCESA7 and BplCESA4 may be related to the formation of a cellulose synthase complex and participate mainly in secondary cell wall biosynthesis. The extremely low expression abundance of the four BplCESAs in mature pollen suggests very little involvement in mature pollen formation in Betula. The distinct expression patterns of the four BplCesAs suggest that they participate in the development of various tissues and are possibly controlled by distinct mechanisms in Betula.
Tel, G.
We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of
Efficient sequential and parallel algorithms for finding edit distance based motifs.
Pal, Soumitra; Xiao, Peng; Rajasekaran, Sanguthevar
2016-08-18
Motif search is an important step in extracting meaningful patterns from biological data. The general problem of motif search is intractable, and there is a pressing need to develop efficient exact and approximation algorithms to solve it. In this paper, we present several novel, exact, sequential and parallel algorithms for solving the (l,d) Edit-distance-based Motif Search (EMS) problem: given two integers l, d and n biological strings, find all strings of length l that appear in each input string with at most d errors of the types substitution, insertion, and deletion. One popular technique is to explore, for each input string, the set of all possible l-mers that belong to the d-neighborhood of any substring of that string, and to output those which are common to all input strings. We introduce a novel and provably efficient neighborhood exploration technique. We show that it is enough to consider the candidates in the neighborhood which are at a distance of exactly d. We compactly represent these candidate motifs using wildcard characters and efficiently explore them with very few repetitions. Our sequential algorithm uses a trie-based data structure to efficiently store and sort the candidate motifs. Our parallel algorithm, in a multi-core shared-memory setting, uses arrays for storage and a novel modification of radix sort for sorting the candidate motifs. Algorithms for EMS are customarily evaluated on several challenging instances such as (8,1), (12,2), (16,3), (20,4), and so on. The best previously known algorithm, EMS1, is sequential and solves instances up to (16,3) in an estimated 3 days. Our sequential algorithms are more than 20 times faster on (16,3). On other hard instances such as (9,2), (11,3), and (13,4), our algorithms are much faster. Our parallel algorithm achieves more than 600% scaling performance while using 16 threads. Our algorithms have pushed up the state of the art of EMS solvers, and we believe that the techniques introduced in
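The basic predicate such algorithms rely on, testing whether a candidate l-mer occurs in every input string within edit distance d, can be sketched with a standard dynamic program. This is an illustrative brute-force check under the EMS definition above, not the trie/radix-sort machinery of the paper; the function names are hypothetical.

```python
def edit_distance_at_most(a, b, d):
    """Row-by-row Levenshtein DP with early exit once every entry of
    the current row exceeds d. Returns True iff dist(a, b) <= d."""
    if abs(len(a) - len(b)) > d:
        return False
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i] + [0] * len(b)
        for j, cb in enumerate(b, 1):
            cur[j] = min(prev[j] + 1,                  # deletion
                         cur[j - 1] + 1,               # insertion
                         prev[j - 1] + (ca != cb))     # substitution / match
        if min(cur) > d:
            return False
        prev = cur
    return prev[-1] <= d

def is_motif(m, strings, d):
    """True iff m occurs in every string within edit distance d,
    checking substrings of length |m|-d .. |m|+d (insertions and
    deletions change the length of an occurrence)."""
    l = len(m)
    return all(
        any(edit_distance_at_most(m, s[i:i + w], d)
            for w in range(max(1, l - d), l + d + 1)
            for i in range(len(s) - w + 1))
        for s in strings)
```

The exact solvers in the paper avoid calling such a check for every candidate by enumerating only the distance-exactly-d neighborhood, but the predicate above defines what they compute.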
Directory of Open Access Journals (Sweden)
Luciana Kazue Otutumi
2008-07-01
Full Text Available The aim of this study was to evaluate the effect of a probiotic associated with different levels of crude protein (CP) on the length and mucosal morphometry of the small intestine of meat quails. The study used 2,304 meat quails, distributed in a completely randomized experimental design in a 2 x 4 factorial scheme (with and without probiotic; four levels of CP: 15, 20, 25 and 30%), with two replications per treatment, in two experimental periods. At seven, 14, 21, and 35 days of age, two quails from each replication were slaughtered in order to evaluate the length of the small intestine (LSI) as well as duodenum and ileum mucosal morphometry. LSI and small intestine mucosal morphometry were not influenced by the probiotic. Intestine length increased linearly with increasing CP levels at seven, 14, and 21 days, and mucosal morphometry increased linearly only for ileum villus height. It can be concluded that, under the environmental conditions in which the quails were raised, only the protein level influenced small intestine length and ileum villus height; no probiotic effect was observed on these parameters.
International Nuclear Information System (INIS)
Bruyere, M.; Vallee, A.; Collette, C.
1986-09-01
Extended fuel cycle length and burnup are currently offered by Framatome and Fragema in order to satisfy the needs of the utilities in terms of fuel cycle cost and of overall systems cost optimization. We intend to point out the consequences of an increased fuel cycle length and burnup on reactor safety, in order to determine whether the bounding safety analyses presented in the Safety Analysis Report are applicable and to evaluate the effect on plant licensing. This paper presents the results of this examination. The first part indicates the consequences of increased fuel cycle length and burnup on the nuclear data used in the bounding accident analyses. In the second part of this paper, the required safety reanalyses are presented and the impact on the safety margins of different fuel management strategies is examined. In addition, system modifications that may be required are indicated.
Directory of Open Access Journals (Sweden)
Fábio de Azevedo
2012-03-01
AIM: This study presents length-weight regressions adjusted for the most representative microcrustacean species and young stages of copepods from tropical lakes, together with a comparison of these results with estimates from the literature for tropical and temperate regions. METHODS: Samples were taken from six isolated lakes, in summer and winter, using a motorized pump and plankton net. The dry weight of each size class (for cladocerans) or developmental stage (for copepods) was measured using an electronic microbalance. RESULTS: Adjusted regressions were significant. We observed a trend of under-estimating the weights of smaller species and overestimating those of larger species when using regressions obtained from temperate regions. CONCLUSION: We must be cautious about using pooled regressions from the literature, preferring models of similar species, or weighing the organisms and building new models.
Relativistic distances, sizes, lengths
International Nuclear Information System (INIS)
Strel'tsov, V.N.
1992-01-01
Such notions as light or retarded distance, field size, formation way, visible size of a body, relativistic or radar length, and the wavelength of light from a moving atom are considered. The relation between these notions is clarified and their classification is given. It is stressed that the formation way is defined by the field size of a moving particle. In the case of the electromagnetic field, longitudinal sizes increase proportionally to γ² with growing charge velocity (γ is the Lorentz factor). 18 refs
Alsolami, Fawaz; Chikalov, Igor; Moshkov, Mikhail
2013-01-01
This paper is devoted to the study of algorithms for sequential optimization of approximate inhibitory rules relative to the length, coverage and number of misclassifications. These algorithms are based on extensions of the dynamic programming approach.
Learning Path Recommendation Based on Modified Variable Length Genetic Algorithm
Dwivedi, Pragya; Kant, Vibhor; Bharadwaj, Kamal K.
2018-01-01
With the rapid advancement of information and communication technologies, e-learning has gained considerable attention in recent years. Many researchers have attempted to develop various e-learning systems with personalized learning mechanisms for assisting learners so that they can learn more efficiently. In this context, curriculum sequencing…
Blind sequence-length estimation of low-SNR cyclostationary sequences
CSIR Research Space (South Africa)
Vlok, JD
2014-06-01
Several existing direct-sequence spread spectrum (DSSS) detection and estimation algorithms assume prior knowledge of the symbol period or sequence length, although very few sequence-length estimation techniques are available in the literature...
Directory of Open Access Journals (Sweden)
Giovanni Amori
1986-12-01
Abstract In Italy there are two species of Apodemus (Sylvaemus): Apodemus sylvaticus on the mainland and the main island, and Apodemus flavicollis only on the mainland. The trend of some morphometric characters of the skull (incisive foramen length = FI; interorbital breadth = IO; length of palatal bridge = PP; upper alveolar length = $M^1M^3$) was analysed and some theoretical models verified for A. sylvaticus. If one considers the sympatric populations of A. sylvaticus and A. flavicollis simultaneously, the characters PP, IO and $M^1M^3$ appear significantly isometric, being directly correlated ($P \leq 0.01$), while the FI character is allometric with respect to the previous ones, as expected. If one considers the sympatric populations of each of the species separately, the scenario is different. For A. sylvaticus only PP and $M^1M^3$ are isometric ($P \leq 0.05$). For A. flavicollis only $M^1M^3$ and FI appear to be correlated, although not as significantly as for A. sylvaticus ($P \leq 0.05$; one tail). The insular populations of A. sylvaticus do not show significant correlations, except for FI and $M^1M^3$ ($P \leq 0.05$). On the contrary, considering all populations, sympatric and allopatric, of A. sylvaticus at the same time, there are significant correlations ($P \leq 0.05$) in all combinations of characters, except those involving IO. We suggest that the isometric relations in sympatric assemblages are confined within a morphological range available to the genus Apodemus. In such a space, the two species are split into two different and innerly homogeneous distributions. We found no evidence to confirm the niche variation hypothesis. On the contrary, the variability expressed as SD or CV appears higher in the sympatric populations than in the allopatric ones, for three of the four characters, confirming previous results
Pion nucleus scattering lengths
International Nuclear Information System (INIS)
Huang, W.T.; Levinson, C.A.; Banerjee, M.K.
1971-09-01
Soft pion theory and the Fubini-Furlan mass dispersion relations have been used to analyze the pion-nucleon scattering lengths and obtain a value for the sigma commutator term. With this value, and using the same principles, scattering lengths have been predicted for nuclei with mass numbers ranging from 6 to 23. Agreement with experiment is very good. For those who believe in the Gell-Mann-Levy sigma model, the evaluation of the commutator yields the value 0.26(m_σ/m_π)² for the sigma-nucleon coupling constant. The large dispersive corrections for the isosymmetric case imply that the basic idea behind many of the soft pion calculations, namely, slow variation of matrix elements from the soft pion limit to the physical pion mass, is not correct. 11 refs., 1 fig., 3 tabs
Extending electronic length frequency analysis in R
DEFF Research Database (Denmark)
Taylor, M. H.; Mildenberger, Tobias K.
2017-01-01
Electronic length frequency analysis (ELEFAN) is a system of stock assessment methods using length-frequency (LFQ) data. One step is the estimation of growth from the progression of LFQ modes through time using the von Bertalanffy growth function (VBGF). The option to fit a seasonally oscillating VBGF (soVBGF) requires a more intensive search due to two additional parameters. This work describes the implementation of two optimisation approaches ("simulated annealing" and "genetic algorithm") for growth function fitting using the open-source software "R." Using a generated LFQ data set, ... of the asymptotic length parameter (L-infinity) are found to have significant effects on parameter estimation error. An outlook provides context as to the significance of the R-based implementation for further testing and development, as well as the general relevance of the method for data-limited stock assessment.
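For reference, the seasonally oscillating growth function discussed above can be sketched as follows. A Somers-type parameterisation is assumed here (the oscillation amplitude C and summer point ts are the two extra parameters the abstract mentions); the function names are illustrative, not taken from the R package:

```python
import math

def vbgf(t, Linf, K, t0):
    """Standard von Bertalanffy growth function: length at age t."""
    return Linf * (1.0 - math.exp(-K * (t - t0)))

def so_vbgf(t, Linf, K, t0, C, ts):
    """Seasonally oscillating VBGF (Somers-type form, assumed here).

    C is the oscillation amplitude and ts the summer point; with C = 0
    this reduces exactly to the standard VBGF.
    """
    s = lambda x: (C * K / (2.0 * math.pi)) * math.sin(2.0 * math.pi * (x - ts))
    return Linf * (1.0 - math.exp(-K * (t - t0) - s(t) + s(t0)))
```

With C = 0 the seasonal term vanishes, and at t = t0 the predicted length is zero in both forms, which is a quick sanity check on any implementation.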
Directory of Open Access Journals (Sweden)
Régis Eric Maia Barros
2010-06-01
OBJECTIVE: To characterize and compare acute psychiatric admissions to the psychiatric wards of a general hospital (22 beds), a psychiatric hospital (80) and an emergency psychiatry unit (6). METHOD: Survey of the ratios and shares of the demographic, diagnostic and hospitalization variables involved in all acute admissions registered in a catchment area in Brazil between 1998 and 2004. RESULTS: Of the 11,208 admissions, 47.8% of the patients were admitted to a psychiatric hospital and 14.1% to a general hospital. The emergency psychiatry unit accounted for 38.1% of all admissions during the period, with higher variability in occupancy rate and bed turnover over the years. Around 80% of the hospital stays lasted less than 20 days, and in almost half of these cases patients were discharged within 2 days. Although the total number of admissions remained stable over the years, in 2004 a 30% increase was seen compared to 2003. In 2004, bed turnover and occupancy rate at the emergency psychiatry unit increased. CONCLUSION: The increase in the number of psychiatric admissions in 2004 could be attributed to a lack of new community-based services available in the area beginning in 1998. Changes in the health care network did affect the emergency psychiatric service, and the limitations of the community-based network could influence the rate of psychiatric admissions.
Gap length distributions by PEPR
International Nuclear Information System (INIS)
Warszawer, T.N.
1980-01-01
Conditions guaranteeing exponential gap length distributions are formulated and discussed. Exponential gap length distributions of bubble chamber tracks first obtained on a CRT device are presented. Distributions of resulting average gap lengths and their velocity dependence are discussed. (orig.)
Relativistic length agony continued
Directory of Open Access Journals (Sweden)
Redžić D.V.
2014-01-01
We made an attempt to remedy recent confusing treatments of some basic relativistic concepts and results. Following the argument presented in an earlier paper (Redžić 2008b), we discussed the misconceptions that are recurrent points in the literature devoted to teaching relativity, such as: that there is no change in the object in Special Relativity, the allegedly illusory character of relativistic length contraction, and stresses and strains induced by Lorentz contraction. We gave several examples of the traps of everyday language that lurk in Special Relativity. To remove a possible conceptual and terminological muddle, we made a distinction between relativistic length reduction and relativistic FitzGerald-Lorentz contraction, corresponding to a passive and an active aspect of length contraction, respectively; we pointed out that both aspects have fundamental dynamical content. As an illustration of our considerations, we discussed briefly the Dewan-Beran-Bell spaceship paradox and the 'pole in a barn' paradox. [Projekat Ministarstva nauke Republike Srbije, br. 171028]
Hybrid Cryptosystem Using Tiny Encryption Algorithm and LUC Algorithm
Rachmawati, Dian; Sharif, Amer; Jaysilen; Andri Budiman, Mohammad
2018-01-01
Security is a very important issue in data transmission, and there are many methods for making files more secure. One of these methods is cryptography: securing a file by transforming it into hidden code that conceals the original content, so that anyone without the key cannot decrypt the hidden code to read the original file. Among the many techniques of cryptography is the hybrid cryptosystem, which uses a symmetric algorithm to secure the file and an asymmetric algorithm to secure the symmetric key. In this research, the TEA algorithm is used as the symmetric algorithm and the LUC algorithm as the asymmetric algorithm. The system is tested by encrypting and decrypting a file with the TEA algorithm and using the LUC algorithm to encrypt and decrypt the TEA key. The result of this research is that, when the TEA algorithm encrypts the file, the ciphertext consists of characters from the ASCII (American Standard Code for Information Interchange) table written as hexadecimal numbers, and the ciphertext size increases by sixteen bytes for every eight characters added to the plaintext length.
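To illustrate the symmetric half of such a hybrid scheme, here is a minimal sketch of the textbook 32-round TEA block cipher operating on one 64-bit block (two 32-bit words) with a 128-bit key (four 32-bit words). This is the standard Wheeler-Needham round function, not necessarily the exact implementation used in the paper; key exchange via LUC and message padding are omitted:

```python
MASK = 0xFFFFFFFF        # all arithmetic is modulo 2^32
DELTA = 0x9E3779B9       # TEA key schedule constant

def tea_encrypt_block(v, key):
    """Encrypt one block (v0, v1) with key (k0, k1, k2, k3): 32 rounds."""
    v0, v1 = v
    k0, k1, k2, k3 = key
    s = 0
    for _ in range(32):
        s = (s + DELTA) & MASK
        v0 = (v0 + ((((v1 << 4) & MASK) + k0) ^ ((v1 + s) & MASK) ^ ((v1 >> 5) + k1))) & MASK
        v1 = (v1 + ((((v0 << 4) & MASK) + k2) ^ ((v0 + s) & MASK) ^ ((v0 >> 5) + k3))) & MASK
    return v0, v1

def tea_decrypt_block(v, key):
    """Invert tea_encrypt_block: run the rounds in reverse order."""
    v0, v1 = v
    k0, k1, k2, k3 = key
    s = (DELTA * 32) & MASK          # 0xC6EF3720
    for _ in range(32):
        v1 = (v1 - ((((v0 << 4) & MASK) + k2) ^ ((v0 + s) & MASK) ^ ((v0 >> 5) + k3))) & MASK
        v0 = (v0 - ((((v1 << 4) & MASK) + k0) ^ ((v1 + s) & MASK) ^ ((v1 >> 5) + k1))) & MASK
        s = (s - DELTA) & MASK
    return v0, v1
```

Because TEA works on fixed 64-bit blocks, an 8-character (8-byte) increase in plaintext adds exactly one block of ciphertext, consistent with the size growth described in the abstract.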
Directory of Open Access Journals (Sweden)
Rogério Dias
2000-10-01
Full Text Available Objetivo: estudar as repercussões da hipertensão arterial sobre o peso e comprimento corpóreo e sobre o peso do fígado e do cérebro de recém-nascidos (RN. Métodos: foram utilizadas 82 ratas virgens da raça Wistar em idade de reprodução. Após a indução da hipertensão arterial experimental (modelo Goldblatt I: 1 rim - 1 clipe as ratas foram sorteadas para compor os quatro grandes grupos experimentais (controle (C, manipulação (M, nefrectomia (N e hipertensão (H. A seguir, as ratas foram distribuídas por sorteio em 8 subgrupos, sendo quatro grupos prenhes e quatro grupos não-prenhes. Após acasalamento dos quatro grupos prenhes, obtivemos com o nascimento dos recém-nascidos os seguintes grupos: RN-C, RN-M, RN-N e RN-H, respectivamente controle, manipulação, nefrectomia e hipertensão. Resultados: quanto ao peso e comprimento corpóreo dos recém-nascidos observamos que os grupos RN-N e RN-H apresentaram os menores pesos ( = 3,64 ± 0,50 e ou = 3,37 ± 0,44, respectivamente e comprimentos ( = 3,89 ± 0,36 e ou = 3,68 ± 0,32, respectivamente em relação ao seus controles ( = 5,40 ± 0,51 e ou = 4,95 ± 0,23, respectivamente. Quanto ao peso do fígado os RN-H apresentaram os menores pesos ( = 0,22 ± 0,03 em relação a todos os demais grupos em estudo, e quanto ao peso do encéfalo os RN-N e RN-H apresentaram os menores pesos ( = 0,16 ± 0,01 e ou = 0,16 ± 0,05, respectivamente em relação aos seus controles ( = 0,22 ± 0,04. Conclusão: a hipertensão arterial determinou redução no peso corpóreo, no comprimento, no peso do fígado e no peso do encéfalo dos recém-nascidos.Purpose: to study the repercussion of arterial hypertension regarding body weight gain and body length, as well as liver and brain weight of offspring. Methods: a total of 82 animals in reproductive age were used. They were randomly assigned to 4 different groups (control, handled, nephrectomized and hypertensive. Renal hypertension was produced by a
Decoding Interleaved Gabidulin Codes using Alekhnovich's Algorithm
DEFF Research Database (Denmark)
Puchinger, Sven; Müelich, Sven; Mödinger, David
2017-01-01
We prove that Alekhnovich's algorithm can be used for row reduction of skew polynomial matrices. This yields an O(ℓ³ n^((ω+1)/2) log(n)) decoding algorithm for ℓ-Interleaved Gabidulin codes of length n, where ω is the matrix multiplication exponent.
Smarandache, Florentin
2013-09-01
Let's denote by VE the speed of the Earth and by VR the speed of the rocket. Both travel in the same direction on parallel trajectories. We consider the Earth as a moving (at a constant speed VE - VR) spacecraft of almost spherical form, whose radius is r and thus whose diameter is 2r, and the rocket as standing still. The non-proper length of Earth's diameter, as measured by the astronaut in the rocket, is: L = 2r√(1 - |VE - VR|²/c²). Also, let's assume that the astronaut is lying down in the direction of motion. Therefore, he would also shrink, or he would die!
Directory of Open Access Journals (Sweden)
P. R. Parthasarathy
2001-01-01
The transient solution is obtained analytically using continued fractions for a state-dependent birth-death queue in which potential customers are discouraged by the queue length. This queueing system is then compared with the well-known infinite-server queueing system, which has the same steady-state solution as the model under consideration, whereas their transient solutions differ. A natural measure of the speed of convergence of the mean number in the system to its stationary value is also computed.
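The steady-state equivalence mentioned above can be checked numerically. The sketch below assumes the classical discouraged-arrivals rates (birth rate λ/(n+1) in state n, constant death rate μ; the paper's exact model may differ) and verifies that the stationary distribution obtained from the birth-death balance equations is the Poisson(λ/μ) law shared with the infinite-server queue:

```python
import math

def stationary(lmbda, mu, nmax=80):
    """Stationary distribution of a birth-death queue with discouraged
    arrivals: lambda/(n+1) births in state n, constant death rate mu.
    Built from the detailed-balance recursion pi_{n+1} = pi_n * b_n / d_{n+1}."""
    w = [1.0]
    for n in range(nmax):
        w.append(w[-1] * (lmbda / (n + 1)) / mu)
    z = sum(w)                       # normalising constant (truncated at nmax)
    return [p / z for p in w]

lmbda, mu = 3.0, 2.0
pi = stationary(lmbda, mu)
rho = lmbda / mu
# The M/M/infinity queue's stationary law is Poisson(rho).
poisson = [math.exp(-rho) * rho ** n / math.factorial(n) for n in range(len(pi))]
```

The balance recursion gives π_n ∝ (λ/μ)ⁿ/n!, exactly the Poisson(λ/μ) weights, which is why only the transient behaviour distinguishes the two systems.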
On algorithm for building of optimal α-decision trees
Alkhalid, Abdulaziz; Chikalov, Igor; Moshkov, Mikhail
2010-01-01
The paper describes an algorithm that constructs approximate decision trees (α-decision trees), which are optimal relatively to one of the following complexity measures: depth, total path length or number of nodes. The algorithm uses dynamic
Online learning algorithm for ensemble of decision rules
Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata
2011-01-01
We describe an online learning algorithm that builds a system of decision rules for a classification problem. Rules are constructed according to the minimum description length principle by a greedy algorithm or using the dynamic programming approach
A very fast implementation of 2D iterative reconstruction algorithms
DEFF Research Database (Denmark)
Toft, Peter Aundal; Jensen, Peter James
1996-01-01
It is demonstrated that iterative reconstruction algorithms can be implemented and run almost as fast as direct reconstruction algorithms. The method has been implemented in a software package that is available for free, providing reconstruction algorithms using ART, EM, and the Least Squares Conjugate Gradient Method.
A new simple iterative reconstruction algorithm for SPECT transmission measurement
International Nuclear Information System (INIS)
Hwang, D.S.; Zeng, G.L.
2005-01-01
This paper proposes a new iterative reconstruction algorithm for transmission tomography and compares this algorithm with several other methods. The new algorithm is simple and resembles the emission ML-EM algorithm in form. Due to its simplicity, it is easy to implement and fast to compute a new update at each iteration. The algorithm also always guarantees non-negative solutions. Evaluations are performed using simulation studies and real phantom data. Comparisons with other algorithms such as convex, gradient, and logMLEM show that the proposed algorithm is as good as others and performs better in some cases
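For context, the emission ML-EM iteration that the proposed algorithm is said to resemble can be sketched as follows. The tiny dense system matrix and values below are illustrative assumptions, not data from the paper; the point is the multiplicative update, which automatically preserves non-negativity of the solution:

```python
import numpy as np

# Illustrative 2-voxel, 2-measurement system (values are assumptions).
A = np.array([[1.0, 0.5],
              [0.3, 1.0]])          # system (projection) matrix
x_true = np.array([2.0, 3.0])
y = A @ x_true                      # noise-free measurements

x = np.ones(2)                      # strictly positive initial image
sens = A.sum(axis=0)                # sensitivity image, A^T 1
for _ in range(2000):
    # Multiplicative ML-EM update: x stays non-negative at every iteration.
    x = x * (A.T @ (y / (A @ x))) / sens
```

For consistent non-negative data the fixed point satisfies A x = y, so the iterate converges to the true image here; with noisy data one would stop early or regularise.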
Directory of Open Access Journals (Sweden)
Min Huang
2012-05-01
An apoptosis-correlated molecule, protein Latcripin-1 of Lentinula edodes C91-3, was expressed and characterized in Pichia pastoris GS115. The total RNA was obtained from Lentinula edodes C91-3. According to the transcriptome, the full-length gene of Latcripin-1 was isolated with 3'-Full Rapid Amplification of cDNA Ends (RACE) and 5'-Full RACE methods. The full-length gene was inserted into the secretory expression vector pPIC9K. The protein Latcripin-1 was expressed in Pichia pastoris GS115 and analyzed by Sodium Dodecyl Sulfate Polyacrylamide Gel Electrophoresis (SDS-PAGE) and Western blot. The Western blot showed that the protein was expressed successfully. The biological function of protein Latcripin-1 on A549 cells was studied with flow cytometry and the 3-(4,5-Dimethylthiazol-2-yl)-2,5-Diphenyltetrazolium Bromide (MTT) method. The toxic effect of protein Latcripin-1 was detected with the MTT method by co-culturing the characterized protein with chick embryo fibroblasts. The MTT assay results showed that there was a great difference between the protein Latcripin-1 groups and the control group (p < 0.05). There was no toxic effect of the characterized protein on chick embryo fibroblasts. Flow cytometry showed that there was a significant difference between the protein groups of interest and the control group with respect to apoptosis (p < 0.05). At the same time, cell ultrastructure observed by transmission electron microscopy supported the flow cytometry results. The work demonstrates that protein Latcripin-1 can induce apoptosis of human lung cancer A549 cells and brings new insights and advantages to the search for anti-tumor proteins.
International Nuclear Information System (INIS)
Creutz, M.
1987-11-01
A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updatings promise to reduce this growth to V^(4/3)
Hu, T C
2002-01-01
Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9
Track length estimation applied to point detectors
International Nuclear Information System (INIS)
Rief, H.; Dubi, A.; Elperin, T.
1984-01-01
The concept of the track length estimator is applied to the uncollided point flux estimator (UCF) leading to a new algorithm of calculating fluxes at a point. It consists essentially of a line integral of the UCF, and although its variance is unbounded, the convergence rate is that of a bounded variance estimator. In certain applications, involving detector points in the vicinity of collimated beam sources, it has a lower variance than the once-more-collided point flux estimator, and its application is more straightforward
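The behaviour of track-length scoring can be illustrated with a one-dimensional toy problem. The geometry below is an assumption made for illustration (a purely absorbing slab with a normally incident unit beam; cross section and thickness values are arbitrary), not the collimated-beam configuration analysed in the paper:

```python
import math
import random

random.seed(1)

sigma_t = 1.0      # total macroscopic cross section (1/cm), assumed value
T = 2.0            # slab thickness (cm), assumed value
n = 200_000        # number of histories

# Track-length estimate of the volume-averaged scalar flux in the slab for a
# unit, normally incident beam: each history scores the path length it
# travels inside [0, T] before its first collision.
score = 0.0
for _ in range(n):
    d = random.expovariate(sigma_t)   # distance to first collision
    score += min(d, T)                # track length inside the slab
flux_avg = score / (n * T)            # divide by the (1D) volume

# Analytic volume-averaged uncollided flux for this configuration.
analytic = (1.0 - math.exp(-sigma_t * T)) / (sigma_t * T)
```

The score min(d, T) is bounded by the slab thickness, so this estimator has bounded variance, unlike a point-flux estimator whose score can blow up near the detector.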
Land-cover classification with an expert classification algorithm using digital aerial photographs
Directory of Open Access Journals (Sweden)
José L. de la Cruz
2010-05-01
The purpose of this study was to evaluate the usefulness of the spectral information of digital aerial sensors in determining land-cover classification using new digital techniques. The land covers evaluated were the following: (1) bare soil; (2) cereals, including maize (Zea mays L.), oats (Avena sativa L.), rye (Secale cereale L.), wheat (Triticum aestivum L.) and barley (Hordeum vulgare L.); (3) high-protein crops, such as peas (Pisum sativum L.) and beans (Vicia faba L.); (4) alfalfa (Medicago sativa L.); (5) woodlands and scrublands, including holly oak (Quercus ilex L.) and common retama (Retama sphaerocarpa L.); (6) urban soil; (7) olive groves (Olea europaea L.); and (8) burnt crop stubble. The best result was obtained using an expert classification algorithm, achieving a reliability rate of 95%. This result showed that the images of digital airborne sensors hold considerable promise for the future in the field of digital classification, because these images contain valuable information that takes advantage of the geometric viewpoint. Moreover, new classification techniques reduce the problems encountered when using high-resolution images, while achieving reliabilities better than those of traditional methods.
Optimization of inhibitory decision rules relative to length and coverage
Alsolami, Fawaz
2012-01-01
The paper is devoted to the study of algorithms for optimization of inhibitory rules relative to the length and coverage. In contrast with usual rules that have on the right-hand side a relation "attribute ≠ value", inhibitory rules have a relation "attribute = value" on the right-hand side. The considered algorithms are based on extensions of dynamic programming. © 2012 Springer-Verlag.
Directory of Open Access Journals (Sweden)
Anna Bourmistrova
2011-02-01
The autodriver algorithm is an intelligent method to eliminate the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel-steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on making the actual vehicle center of rotation coincide with the road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed to be prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle, using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increasing forward speed, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.
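The core geometric step, choosing steering angles so that the kinematic centre of rotation lands at a desired point, can be sketched for a planar 4WS bicycle model. The geometry and function names below are illustrative assumptions, not the paper's implementation: the rotation centre is placed at longitudinal offset xc from the vehicle origin and lateral distance R, and each wheel is steered perpendicular to its line to that centre:

```python
import math

def steering_angles(a, b, xc, R):
    """Front and rear steering angles (rad) that place the kinematic
    centre of rotation at (xc, R) in the vehicle frame.

    a:  distance from vehicle origin to the front axle (> 0, forward)
    b:  distance from vehicle origin to the rear axle  (> 0, backward)
    xc: longitudinal offset of the desired rotation centre
    R:  lateral distance to the desired rotation centre (turn radius)
    """
    delta_front = math.atan2(a - xc, R)
    delta_rear = math.atan2(-b - xc, R)
    return delta_front, delta_rear
```

As a sanity check, placing the rotation centre in line with the rear axle (xc = -b) recovers conventional front-wheel steering with the rear wheels straight.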
A Scalable Gaussian Process Analysis Algorithm for Biomass Monitoring
Energy Technology Data Exchange (ETDEWEB)
Chandola, Varun [ORNL; Vatsavai, Raju [ORNL
2011-01-01
Biomass monitoring is vital for studying the carbon cycle of earth's ecosystem and has several significant implications, especially in the context of understanding climate change and its impacts. Recently, several change detection methods have been proposed to identify land cover changes in temporal profiles (time series) of vegetation collected using remote sensing instruments, but they do not satisfy one or both of the two requirements of the biomass monitoring problem, i.e., operating in online mode and handling periodic time series. In this paper, we adapt Gaussian process regression to detect changes in such time series in an online fashion. While Gaussian processes (GPs) have been widely used as a kernel based learning method for regression and classification, their applicability to massive spatio-temporal data sets, such as remote sensing data, has been limited owing to the high computational costs involved. We focus on addressing the scalability issues associated with the proposed GP based change detection algorithm. This paper makes several significant contributions. First, we propose a GP based online time series change detection algorithm and demonstrate its effectiveness in detecting different types of changes in Normalized Difference Vegetation Index (NDVI) data obtained from a study area in Iowa, USA. Second, we propose an efficient Toeplitz matrix based solution which significantly improves the computational complexity and memory requirements of the proposed GP based method. Specifically, the proposed solution can analyze a time series of length $t$ in $O(t^2)$ time while maintaining a $O(t)$ memory footprint, compared to the $O(t^3)$ time and $O(t^2)$ memory requirement of standard matrix manipulation based methods. Third, we describe a parallel version of the proposed solution which can be used to simultaneously analyze a large number of time series. We study three different parallel implementations: using threads, MPI, and a
Directory of Open Access Journals (Sweden)
Juliana Savini Wey Berti
2012-08-01
OBJECTIVE: Although manual hyperinflation (MH) is widely used for pulmonary secretion clearance, there is no evidence to support its routine use in clinical practice. Our objective was to evaluate the effect that MH combined with expiratory rib cage compression (ERCC) has on the length of ICU stay and duration of mechanical ventilation (MV). METHODS: This was a prospective, randomized, controlled clinical trial involving ICU patients on MV at a tertiary academic hospital between January 2004 and January 2005. Of the 49 patients who met the study criteria, 24 and 25 were randomly allocated to the respiratory physiotherapy and control groups, respectively; 6 and 8, respectively, were withdrawn from the study. During the 5-day observation period, patients in the respiratory physiotherapy group received MH combined with ERCC, whereas controls received standard nursing care. RESULTS: The two groups had similar baseline characteristics. The intervention had a positive effect on MV duration, ICU discharge and Murray score. There were significant differences between the control and physiotherapy groups in weaning success rates on days 2 (0.0% vs. 37.5%), 3 (0.0% vs. 37.5%), 4 (5.3% vs. 37.5%) and 5 (15.9% vs. 37.5%), as well as in ICU discharge rates on days 3 (0% vs. 25%), 4 (0% vs. 31%) and 5 (0% vs. 31%). In the physiotherapy group, there was a significant improvement in the Murray score on day 5. CONCLUSIONS: Our results showed that the combined use of MH and ERCC for 5 days accelerated the weaning process and ICU discharge.
emMAW: computing minimal absent words in external memory.
Héliou, Alice; Pissis, Solon P; Puglisi, Simon J
2017-09-01
The biological significance of minimal absent words has been investigated in genomes of organisms from all domains of life. For instance, three minimal absent words of the human genome were found in Ebola virus genomes. There exists an O(n)-time and O(n)-space algorithm for computing all minimal absent words of a sequence of length n on a fixed-sized alphabet based on suffix arrays. A standard implementation of this algorithm, when applied to a large sequence of length n, requires more than 20n bytes of RAM. Such memory requirements are a significant hurdle to the computation of minimal absent words in large datasets. We present emMAW, the first external-memory algorithm for computing minimal absent words. A free open-source implementation of our algorithm is made available. This allows for computation of minimal absent words on far bigger data sets than was previously possible. Our implementation requires less than 3 h on a standard workstation to process the full human genome when as little as 1 GB of RAM is made available. We stress that our implementation, despite making use of external memory, is fast; indeed, even on relatively smaller datasets when enough RAM is available to hold all necessary data structures, it is less than two times slower than state-of-the-art internal-memory implementations. https://github.com/solonas13/maw (free software under the terms of the GNU GPL). alice.heliou@lix.polytechnique.fr or solon.pissis@kcl.ac.uk. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
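A naive baseline makes the definition concrete: a word is a minimal absent word of a sequence if it does not occur in the sequence but both of its maximal proper factors (drop the first letter, drop the last letter) do occur. The sketch below is only a correctness reference for small inputs; the paper's algorithms achieve linear time via suffix arrays and scale via external memory, which this does not attempt:

```python
from itertools import product

def minimal_absent_words(s, alphabet, max_len=None):
    """Naive reference implementation for small inputs.

    w is a minimal absent word of s iff w is not a factor of s but both
    w[1:] and w[:-1] are. Words of length 1 are excluded here; the longest
    possible minimal absent word has length len(s) + 1.
    """
    if max_len is None:
        max_len = len(s) + 1
    # All factors (substrings) of s up to length max_len: O(n^2) space.
    factors = {s[i:j]
               for i in range(len(s))
               for j in range(i + 1, min(i + max_len, len(s)) + 1)}
    maws = []
    for k in range(2, max_len + 1):
        for w in map("".join, product(alphabet, repeat=k)):
            if w not in factors and w[1:] in factors and w[:-1] in factors:
                maws.append(w)
    return maws
```

For example, the sequence "abaab" over the alphabet {a, b} has exactly the minimal absent words bb, aaa, bab and aaba.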
Lauw, Y.; Leermakers, F.A.M.; Cohen Stuart, M.A.
2007-01-01
The persistence length of a wormlike micelle composed of ionic surfactants CnEmXk in an aqueous solvent is predicted by means of self-consistent-field theory, where CnEm is the conventional nonionic surfactant and Xk is an additional sequence of k weakly charged (pH-dependent) segments. By
DEFF Research Database (Denmark)
Markham, Annette
This paper takes an actor-network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also contributes an innovative method for blending actor-network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows.
Directory of Open Access Journals (Sweden)
Paulo de Tarso Veras Farinatti
2004-10-01
Full Text Available The aim of this study was to assess the association of step length and step cadence with components of muscle fitness (CAM) (flexibility, strength, and muscle endurance of the lower limbs) in 25 women aged 60 to 86 years (mean = 79 ± 7 years) who were physically independent and whose clinical condition did not contraindicate the proposed tests. The following variables were studied: a) step length and cadence (AMP and CAP); b) weight, height, and sitting height on a bench of standardized height (44 cm); c) two-minute stationary march (number of repetitions) (RESISR); d) maximal relative knee extension strength (load/body weight) (FORCAR); and e) ankle and hip flexibility (degrees) (FLEXT and FLEXQ). AMP and CAP were compared with the CAM variables by means of simple and multivariate correlation techniques. The results indicated that AMP and CAP were significantly associated with the set of strength and flexibility variables, as suggested by the good canonical correlation (canonical r = 0.79; p
Energy Technology Data Exchange (ETDEWEB)
Albino, Lucas D.; Santos, Gabriela R.; Ribeiro, Victor A.B.; Rodrigues, Laura N., E-mail: lucasdelbem1@gmail.com [Universidade de Sao Paulo (USP), Sao Paulo, SP (Brazil). Faculdade de Medicina. Instituto de Radiologia; Weltman, Eduardo; Braga, Henrique F. [Instituto do Cancer do Estado de Sao Paulo, Sao Paulo, SP (Brazil). Servico de Radioterapia
2013-12-15
The dose accuracy calculated by a treatment planning system is directly related to the chosen algorithm. Nowadays, several dose calculation algorithms are commercially available; they differ in calculation time and accuracy, especially when individual tissue densities are taken into account. The aim of this study was to compare two different calculation algorithms from iPlan®, BrainLAB, in the treatment of pituitary gland tumors with intensity-modulated radiation therapy (IMRT). These tumors are located in a region of tissues with variable electronic density. The deviations from the plan with no heterogeneity correction were evaluated. For initial validation of the data entered into the planning system, an IMRT plan was simulated in an anthropomorphic phantom and the dose distribution was measured with a radiochromic film. Gamma analysis was performed on the film, comparing it with dose distributions calculated with the X-ray Voxel Monte Carlo (XVMC) algorithm and pencil beam convolution (PBC). Next, 33 patient plans, initially calculated with the PBC algorithm, were recalculated with the XVMC algorithm. The dose-volume histograms of the treatment volumes and organs at risk were compared. No relevant differences were found in the dose-volume histograms between XVMC and PBC. However, differences were obtained when comparing each plan with the plan without heterogeneity correction. (author)
Optimization of inhibitory decision rules relative to length and coverage
Alsolami, Fawaz; Chikalov, Igor; Moshkov, Mikhail; Zielosko, Beata
2012-01-01
The paper is devoted to the study of algorithms for optimization of inhibitory rules relative to length and coverage. In contrast with usual rules, which have a relation "attribute = value" on the right-hand side, inhibitory rules have a relation
Casanova, Henri; Robert, Yves
2008-01-01
"…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi
DEFF Research Database (Denmark)
Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy
2007-01-01
We present subroutines for the Cholesky factorization of a positive-definite symmetric matrix and for solving corresponding sets of linear equations. They exploit cache memory by using the block hybrid format proposed by the authors in a companion article. The matrix is packed into n(n + 1)/2 real variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n² variables). Included are subroutines for rearranging a matrix whose upper or lower triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel…
An improved affine projection algorithm for active noise cancellation
Zhang, Congyan; Wang, Mingjiang; Han, Yufei; Sun, Yunzhuo
2017-08-01
The affine projection algorithm is a data-reuse algorithm with a good convergence rate compared to other traditional adaptive filtering algorithms. Two factors affect its performance: the step-size factor and the projection order (length). In this paper, we propose a new variable step size affine projection algorithm (VSS-APA). It dynamically adjusts the step size according to certain rules, so that it achieves a smaller steady-state error and faster convergence. Simulation results show that its performance is superior to that of the traditional affine projection algorithm, and in active noise control (ANC) applications the new algorithm obtains very good results.
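The baseline this abstract builds on can be sketched as a generic fixed-step affine projection filter in NumPy (an illustration of the standard APA update with regularized projection, not the proposed VSS-APA, whose step-size rules are not given in the abstract):

```python
import numpy as np

def apa_filter(x, d, L=8, P=4, mu=0.5, delta=1e-4):
    """Fixed-step affine projection adaptive filter (sketch).
    x: input signal, d: desired signal, L: filter length,
    P: projection order, mu: step size, delta: regularization."""
    n = len(x)
    w = np.zeros(L)
    e = np.zeros(n)
    for k in range(L + P - 1, n):
        # data matrix whose P columns are the last P input vectors (L x P)
        X = np.column_stack(
            [x[k - p - L + 1:k - p + 1][::-1] for p in range(P)])
        dk = d[k - P + 1:k + 1][::-1]          # matching desired samples
        ek = dk - X.T @ w                      # a-priori error vector
        # APA update: w += mu * X (X^T X + delta I)^{-1} e
        w = w + mu * X @ np.linalg.solve(X.T @ X + delta * np.eye(P), ek)
        e[k] = d[k] - w @ x[k - L + 1:k + 1][::-1]
    return w, e
```

In a noise-free system-identification setup the weight vector converges to the unknown impulse response; a VSS-APA would additionally shrink `mu` as the error decreases to reduce steady-state misadjustment.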
Hardware modules of the RSA algorithm
Directory of Open Access Journals (Sweden)
Škobić Velibor
2014-01-01
Full Text Available This paper describes the basic principles of data protection using the RSA algorithm, as well as algorithms for its calculation. The RSA algorithm is implemented on the FPGA integrated circuit EP4CE115F29C7, family Cyclone IV, Altera. Four modules of the Montgomery algorithm are designed using VHDL. Synthesis and simulation are done using Quartus II software and ModelSim. The modules are analyzed for different key lengths (16 to 1024) in terms of the number of logic elements, maximum frequency, and speed.
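For readers unfamiliar with the Montgomery algorithm mentioned above, a minimal software sketch of Montgomery reduction and square-and-multiply exponentiation follows (the hardware modules perform the same arithmetic, though in VHDL and word-serial form; this is an illustration, not the paper's design):

```python
def mont_pow(base, exp, n):
    """Modular exponentiation via Montgomery reduction (sketch).
    n must be odd, which holds for RSA moduli."""
    w = n.bit_length()
    r = 1 << w                         # Montgomery radix r = 2^w > n
    n_neg_inv = pow(-n, -1, r)         # n' with n * n' = -1 (mod r)

    def redc(t):
        # Montgomery reduction: returns t * r^{-1} mod n for t < n*r
        m = ((t & (r - 1)) * n_neg_inv) & (r - 1)
        u = (t + m * n) >> w           # exact division by r
        return u - n if u >= n else u

    x = (base * r) % n                 # base in Montgomery form
    acc = r % n                        # 1 in Montgomery form
    for bit in bin(exp)[2:]:           # left-to-right square-and-multiply
        acc = redc(acc * acc)
        if bit == '1':
            acc = redc(acc * x)
    return redc(acc)                   # leave Montgomery form
```

With the textbook RSA toy key n = 3233, e = 17, d = 2753, encrypting and decrypting round-trips a message; the hardware gain comes from `redc` replacing trial division by n with shifts and multiplications.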
Directory of Open Access Journals (Sweden)
Egoitz Salsamendi
2006-03-01
Full Text Available Abstract <em>Rhinolophus euryale</em> and <em>R. mehelyi</em> are morphologically very similar species and their distributions overlap extensively in the Mediterranean basin. We modelled their foraging behaviour using echolocation calls and wing morphology and, assuming niche segregation occurs between the two species, we explored how it is shaped by these factors. The resting frequency of echolocation calls was recorded, and weight, forearm length, wing loading, aspect ratio, and wing tip shape index were measured. <em>R. mehelyi</em> showed a significantly higher resting frequency than <em>R. euryale</em>, but the differences are deemed insufficient for dietary niche segregation. Weight and forearm length were significantly larger in <em>R. mehelyi</em>. The higher values of aspect ratio and wing loading and the lower value of wing tip shape index in <em>R. mehelyi</em> restrict its flight manoeuvrability and agility; its flight ability may therefore decrease as habitat complexity increases. Thus, the principal mechanism for resource partitioning seems to be based on differing habitat use arising from differences in wing morphology.
Universal algorithm of time sharing
International Nuclear Information System (INIS)
Silin, I.N.; Fedyun'kin, E.D.
1979-01-01
Timesharing system algorithm is proposed for the wide class of one- and multiprocessor computer configurations. Dynamical priority is the piece constant function of the channel characteristic and system time quantum. The interactive job quantum has variable length. Characteristic recurrent formula is received. The concept of the background job is introduced. Background job loads processor if high priority jobs are inactive. Background quality function is given on the base of the statistical data received in the timesharing process. Algorithm includes optimal trashing off procedure for the jobs replacements in the memory. Sharing of the system time in proportion to the external priorities is guaranteed for the all active enough computing channels (back-ground too). The fast answer is guaranteed for the interactive jobs, which use small time and memory. The external priority control is saved for the high level scheduler. The experience of the algorithm realization on the BESM-6 computer in JINR is discussed
Directory of Open Access Journals (Sweden)
Xuefeng Zhang
2012-05-01
Full Text Available Fluorescence <em>in situ</em> hybridization (FISH) assay is considered the "gold standard" for evaluating <em>HER2/neu</em> (<em>HER2</em>) gene status. However, FISH detection is costly and time consuming. Thus, we established a nuclei microarray with intact nuclei extracted from paraffin-embedded breast cancer tissues for FISH detection. The nuclei microarray FISH (NMFISH) technology serves as a useful platform for analyzing the <em>HER2</em> gene/chromosome 17 centromere ratio. We examined <em>HER2</em> gene status in 152 cases of surgically resected invasive ductal carcinoma of the breast with FISH and NMFISH. <em>HER2</em> gene amplification status was classified according to the guidelines of the American Society of Clinical Oncology and College of American Pathologists (ASCO/CAP). Comparison of the cut-off values for the <em>HER2</em>/chromosome 17 centromere copy number ratio obtained by NMFISH and FISH showed almost perfect agreement between the two methods (κ coefficient 0.920), and the gene counts given by the two methods were almost consistent. The present study proved that NMFISH is comparable with FISH for evaluating <em>HER2</em> gene status. Nuclei microarray technology is highly efficient, time and reagent conserving, and inexpensive.
Energy Technology Data Exchange (ETDEWEB)
Gewehr, Diego N.; Vargas, Ricardo B.; Melo, Eduardo D. de; Paschoareli Junior, Dionizio [Universidade Estadual Paulista (DEE/UNESP), Ilha Solteira, SP (Brazil). Dept. de Engenharia Eletrica. Grupo de Pesquisa em Fontes Alternativas e Aproveitamento de Energia
2008-07-01
This paper presents a methodology, based on a genetic algorithm, for locating electric power sources in isolated direct-current microgrids. In this work photovoltaic panels are considered, although the methodology can be extended to any kind of DC source. A computational tool is developed in the Matlab environment to obtain the DC system configuration that reduces panel count and cost and improves system performance. (author)
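As a rough illustration of the approach (with an invented toy cost model and parameters, not the authors'), a genetic algorithm choosing at which nodes of a radial DC feeder to place sources might look like:

```python
import random

def ga_place_sources(n_nodes, loads, n_pop=40, n_gen=120, p_mut=0.02, seed=1):
    """Toy GA sketch: a bit string says which nodes get a source; the cost
    trades panel count against line losses, approximated here by each
    load's distance to its nearest source (illustrative model only)."""
    rng = random.Random(seed)

    def cost(bits):
        placed = [i for i, b in enumerate(bits) if b]
        if not placed:
            return float('inf')            # infeasible: no source at all
        loss = sum(load * min(abs(i - j) for j in placed)
                   for i, load in enumerate(loads))
        return 3.0 * len(placed) + loss    # assumed panel-cost weight

    pop = [[rng.randint(0, 1) for _ in range(n_nodes)] for _ in range(n_pop)]
    for _ in range(n_gen):
        pop.sort(key=cost)
        elite = pop[: n_pop // 2]          # elitist selection
        children = []
        while len(elite) + len(children) < n_pop:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_nodes)
            child = a[:cut] + b[cut:]                          # crossover
            child = [bit ^ (rng.random() < p_mut) for bit in child]  # mutation
            children.append(child)
        pop = elite + children
    best = min(pop, key=cost)
    return best, cost(best)
```

A real implementation would replace the distance heuristic with a DC power-flow evaluation of losses and voltage limits.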
A New Natural Lactone from <em>Dimocarpus longan</em> Lour. Seeds
Directory of Open Access Journals (Sweden)
Zhongjun Li
2012-08-01
Full Text Available A new natural product named longanlactone was isolated from <em>Dimocarpus longan</em> Lour. seeds. Its structure was determined as 3-(2-acetyl-1<em>H</em>-pyrrol-1-yl)-5-(prop-2-yn-1-yl)dihydrofuran-2(3<em>H</em>)-one by spectroscopic methods and HRESIMS.
Models and Algorithms for Tracking Target with Coordinated Turn Motion
Directory of Open Access Journals (Sweden)
Xianghui Yuan
2014-01-01
Full Text Available Tracking a target with coordinated turn (CT) motion is highly dependent on the models and algorithms. First, the widely used models are compared in this paper: the coordinated turn (CT) model with known turn rate, the augmented coordinated turn (ACT) model with Cartesian velocity, the ACT model with polar velocity, the CT model using a kinematic constraint, and the maneuver-centered circular motion model. Then, within the single-model tracking framework, the tracking algorithms for the last four models are compared, and suggestions on the choice of model for different practical target tracking problems are given. Finally, in the multiple-model (MM) framework, an algorithm based on the expectation-maximization (EM) algorithm is derived, in both batch and recursive forms. Compared with the widely used interacting multiple model (IMM) algorithm, the EM algorithm shows its effectiveness.
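The first of the compared models, the CT model with known turn rate, has a closed-form state transition matrix; a minimal sketch (standard textbook form, for a planar state [x, vx, y, vy]):

```python
import numpy as np

def ct_transition(omega, T):
    """State transition matrix of the coordinated-turn model with known,
    nonzero turn rate omega for state [x, vx, y, vy] and sample period T."""
    s, c = np.sin(omega * T), np.cos(omega * T)
    return np.array([
        [1, s / omega,       0, -(1 - c) / omega],
        [0, c,               0, -s],
        [0, (1 - c) / omega, 1,  s / omega],
        [0, s,               0,  c],
    ])
```

Two sanity checks follow from the geometry: propagating over a full turn period 2π/ω returns the identity, and the speed (vx² + vy²)^{1/2} is preserved at every step, which is what distinguishes a coordinated turn from a generic accelerating maneuver.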
Energy Technology Data Exchange (ETDEWEB)
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful for discussing the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
Directory of Open Access Journals (Sweden)
Igor Pavlinić
2008-07-01
Full Text Available Abstract After the discovery of two different phonic types within the common pipistrelle (<em>Pipistrellus pipistrellus</em>), mtDNA analysis confirmed the existence of two separate species, named the common pipistrelle (<em>P. pipistrellus</em>) and the soprano pipistrelle (<em>P. pygmaeus</em>). The discrimination of these two cryptic species using external characters and measures has proved to be somewhat problematic. We examined two colonies of soprano pipistrelle from Donji Miholjac, Croatia. As a result, only two characters proved to be of help for field identification: wing venation (89% of cases) and, for males, penis morphology and colour. The difference in length between the 2nd and 3rd phalanges of the 3rd finger should be discarded as a diagnostic trait between <em>P. pipistrellus</em> and <em>P. pygmaeus</em> in Croatia.
Reference Gene Selection in the Desert Plant <em>Eremosparton songoricum</em>
Directory of Open Access Journals (Sweden)
Dao-Yuan Zhang
2012-06-01
Full Text Available <em>Eremosparton songoricum</em> (Litv.) Vass. (<em>E. songoricum</em>) is a rare and extremely drought-tolerant desert plant that holds promise as a model organism for the identification of genes associated with water deficit stress. Here, we cloned and evaluated the expression of eight candidate reference genes using quantitative real-time reverse transcriptase polymerase chain reactions. The expression of these candidate reference genes was analyzed in a diverse set of 20 samples including various <em>E. songoricum</em> plant tissues exposed to multiple environmental stresses. GeNorm analysis indicated that expression stability varied between the reference genes in the different experimental conditions, but the two most stable reference genes were sufficient for normalization in most conditions. <em>EsEF</em> and <em>Esα-TUB</em> were sufficient for various stress conditions, <em>EsEF</em> and <em>EsACT</em> were suitable for samples of differing germination stages, and <em>EsGAPDH</em> and <em>EsUBQ</em> were most stable across multiple adult tissue samples. The <em>Es18S</em> gene was unsuitable as a reference gene in our analysis. In addition, the expression level of the drought-stress-related transcription factor <em>EsDREB2</em> verified the utility of <em>E. songoricum</em> reference genes and indicated that no single gene was adequate for normalization on its own. This is the first systematic report on the selection of reference genes in <em>E. songoricum</em>, and these data will facilitate future work on gene expression in this species.
PEG Enhancement for EM1 and EM2+ Missions
Von der Porten, Paul; Ahmad, Naeem; Hawkins, Matt
2018-01-01
NASA is currently building the Space Launch System (SLS) Block-1 launch vehicle for the Exploration Mission 1 (EM-1) test flight. The next evolution of SLS, the Block-1B Exploration Mission 2 (EM-2), is currently being designed. The Block-1 and Block-1B vehicles will use the Powered Explicit Guidance (PEG) algorithm. Due to the relatively low thrust-to-weight ratio of the Exploration Upper Stage (EUS), certain enhancements to the Block-1 PEG algorithm are needed to perform Block-1B missions. In order to accommodate mission design for EM-2 and beyond, PEG has been significantly improved since its use on the Space Shuttle program. The current version of PEG has the ability to switch to different targets during Core Stage (CS) or EUS flight, and can automatically reconfigure for a single Engine Out (EO) scenario, loss of communication with the Launch Abort System (LAS), and Inertial Navigation System (INS) failure. The Thrust Factor (TF) algorithm uses measured state information in addition to a priori parameters, providing PEG with an improved estimate of propulsion information. This provides robustness against unknown or undetected engine failures. A loft parameter input allows LAS jettison while maximizing payload mass. The current PEG algorithm is now able to handle various classes of missions with burn arcs much longer than were seen in the shuttle program. These missions include targeting a circular LEO orbit with a low-thrust, long-burn-duration upper stage, targeting a highly eccentric Trans-Lunar Injection (TLI) orbit, targeting a disposal orbit using the low-thrust Reaction Control System (RCS), and targeting a hyperbolic orbit. This paper will describe the design and implementation of the TF algorithm, the strategy to handle EO in various flight regimes, algorithms to cover off-nominal conditions, and other enhancements to the Block-1 PEG algorithm. This paper illustrates challenges posed by the Block-1B vehicle, and results show that the improved PEG
Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction
International Nuclear Information System (INIS)
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-01-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV-norm and the constraint involved in the problem. This characterization of the solution via the proximity operators, which define two projection operators, naturally leads to an alternating projection algorithm for finding it. For efficient numerical computation, we introduce into the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove convergence of the PAPA theoretically. In numerical experiments, the performance of our algorithm, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images, and image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality. (paper)
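As context for the comparison above, the conventional unregularized MLEM update that underlies MAP-EM can be sketched in a few lines (a textbook baseline, not the proposed PAPA, and with no TV prior):

```python
import numpy as np

def mlem(A, y, n_iter=100):
    """Plain MLEM for emission tomography (sketch):
    x_{k+1} = (x_k / A^T 1) * A^T (y / (A x_k)),
    elementwise, with A the nonnegative system matrix and y the counts."""
    m, n = A.shape
    x = np.ones(n)                    # flat nonnegative start
    sens = A.T @ np.ones(m)           # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                  # forward projection
        proj[proj == 0] = 1e-12       # guard against division by zero
        x = x / sens * (A.T @ (y / proj))
    return x
```

The multiplicative form keeps iterates nonnegative automatically; the paper's EM-preconditioner reuses exactly the `x / sens` factor of this update inside a proximity-operator framework.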
Energy Technology Data Exchange (ETDEWEB)
Vieira, Jose Wilson; Leal Neto, Viriato; Lima Filho, Jose de Melo, E-mail: jose.wilson59@uol.com.br [Instituto Federal de Educacao Ciencia e Tecnologia de Pernambuco (IFPE), Recife, PE (Brazil); Lima, Fernando Roberto de Andrade [Centro Regional de Ciencias Nucleares do Nordeste (CRCN-NE/CNEN-PE), Recife, PE (Brazil)
2013-07-01
This paper presents an algorithm for a planar, isotropic radioactive source, obtained by subjecting the standard Gaussian probability density function (PDF) to rotation and translation operations that displace its maximum across its domain, change its intensity, and make the dispersion around the mean right-asymmetric. The algorithm was used to generate samples of photons emerging from a plane and reaching a semicircle surrounding a voxel phantom. The PDF describing this problem is already known, but the random-number generating function (FRN) associated with it cannot be deduced by direct MC techniques. This is a significant problem because such a source can be adjusted to simulations involving natural terrestrial radiation, or accidents in medical facilities or industries where radioactive material spreads over a plane. Attempts to obtain an FRN for the PDF of this problem have already been made by the Research Group in Numerical Dosimetry (GND) of Recife-PE, Brazil, always using the MC rejection sampling technique. This article follows the methodology of previous work, except on one point: the PDF of the problem was replaced by a translated normal PDF. For dosimetric comparisons, two MCEs were used: MSTA (composed of the adult male voxel phantom MASH (male mesh) in orthostatic position, available from the Department of Nuclear Energy (DEN) of the Federal University of Pernambuco (UFPE), coupled to the EGSnrc MC code and the GND planar source based on the rejection technique) and MSTA_NT. The two MCEs are identical except for the FRN used in the planar source. The results presented and discussed in this paper establish the new planar-source algorithm to be used by the GND.
Energy Technology Data Exchange (ETDEWEB)
Lapa, Celso M. Franklin; Pereira, Claudio M.N.A.; Mol, Antonio C. de Abreu [Instituto de Engenharia Nuclear (IEN), Rio de Janeiro, RJ (Brazil)
1999-07-01
This paper presents a solution based on genetic algorithms and probabilistic safety analysis that can be applied to the optimization of the preventive maintenance policy of nuclear power plant safety systems. The goal of this approach is to improve the average availability of the system through the optimization of the preventive maintenance scheduling policy. The auxiliary feedwater system of a two-loop pressurized water reactor is used as a sample case in order to demonstrate the effectiveness of the proposed method. The results, when compared with those obtained under standard maintenance policies, reveal quantitative gains in availability and operational safety levels. (author)
Energy Technology Data Exchange (ETDEWEB)
Kagie, Matthew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lanterman, Aaron D. [Georgia Inst. of Technology, Atlanta, GA (United States)
2017-12-01
This paper addresses parameter estimation for an optical transient signal when the received data has been right-censored. We develop an expectation-maximization (EM) algorithm to estimate the amplitude of a Poisson intensity with a known shape in the presence of additive background counts, where the measurements are subject to saturation effects. We compare the results of our algorithm with those of an EM algorithm that is unaware of the censoring.
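A simplified version of the idea can be sketched as follows: Poisson counts with a known shape and unknown amplitude are right-censored at a saturation level c, and an EM iteration imputes the conditional mean of each censored count before re-estimating the amplitude (background counts, which the paper includes, are omitted here for brevity; this is an illustration, not the authors' algorithm):

```python
import math

def poisson_tail(lam, c):
    """P(Y >= c) for Y ~ Poisson(lam), by direct summation of the pmf."""
    if c <= 0:
        return 1.0
    p, cdf = math.exp(-lam), math.exp(-lam)
    for k in range(1, c):
        p *= lam / k
        cdf += p
    return max(1.0 - cdf, 1e-300)

def em_censored_amplitude(y, s, c, n_iter=200):
    """EM sketch for the amplitude a of a Poisson intensity lam_i = a*s_i
    with known shape s_i, when counts are right-censored at c: any y_i == c
    is read as 'at least c'. No background term in this simplified model."""
    a = max(sum(y) / sum(s), 1e-9)            # naive initial guess
    for _ in range(n_iter):
        z = []
        for yi, si in zip(y, s):
            lam = a * si
            if yi < c:
                z.append(yi)                  # fully observed count
            else:
                # E-step: E[Y | Y >= c] = lam * P(Y >= c-1) / P(Y >= c)
                z.append(lam * poisson_tail(lam, c - 1) / poisson_tail(lam, c))
        a = sum(z) / sum(s)                   # M-step: complete-data MLE
    return a
```

With no censoring the iteration reduces to the ordinary estimate sum(y)/sum(s); with censoring it pushes the estimate above the (biased) naive value, which is the effect the paper quantifies against a censoring-unaware EM.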
Does length or neighborhood size cause the word length effect?
Jalbert, Annie; Neath, Ian; Surprenant, Aimée M
2011-10-01
Jalbert, Neath, Bireta, and Surprenant (2011) suggested that past demonstrations of the word length effect, the finding that words with fewer syllables are recalled better than words with more syllables, included a confound: The short words had more orthographic neighbors than the long words. The experiments reported here test two predictions that would follow if neighborhood size is a more important factor than word length. In Experiment 1, we found that concurrent articulation removed the effect of neighborhood size, just as it removes the effect of word length. Experiment 2 demonstrated that this pattern is also found with nonwords. For Experiment 3, we factorially manipulated length and neighborhood size, and found only effects of the latter. These results are problematic for any theory of memory that includes decay offset by rehearsal, but they are consistent with accounts that include a redintegrative stage that is susceptible to disruption by noise. The results also confirm the importance of lexical and linguistic factors on memory tasks thought to tap short-term memory.
Keeping disease at arm's length
DEFF Research Database (Denmark)
Lassen, Aske Juul
2015-01-01
active ageing change everyday life with chronic disease, and how do older people combine an active life with a range of chronic diseases? The participants in the study use activities to keep their diseases at arm’s length, and this distancing of disease at the same time enables them to engage in social and physical activities at the activity centre. In this way, keeping disease at arm’s length is analysed as an ambiguous health strategy. The article shows the importance of looking into how active ageing is practised, as active ageing seems to work well in the everyday life of the older people by not giving emphasis to disease. The article is based on ethnographic fieldwork and uses vignettes of four participants to show how they each keep diseases at arm’s length.
Continuously variable focal length lens
Adams, Bernhard W; Chollet, Matthieu C
2013-12-17
A material, preferably in crystal form, having a low atomic number such as beryllium (Z=4) provides for the focusing of x-rays in a continuously variable manner. The material is provided with plural spaced curvilinear, optically matched slots and/or recesses through which an x-ray beam is directed. The focal length of the material may be decreased or increased by increasing or decreasing, respectively, the number of slots (or recesses) through which the x-ray beam is directed, while fine tuning of the focal length is accomplished by rotation of the material so as to change the path length of the x-ray beam through the aligned curvilinear slots. X-ray analysis of a fixed point in a solid material may be performed by scanning the energy of the x-ray beam while rotating the material to maintain the beam's focal point at a fixed point in the specimen undergoing analysis.
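The inverse dependence of focal length on the number of engaged lens elements can be illustrated with the standard thin-lens estimate for compound refractive x-ray lenses, f ≈ R/(2Nδ). The formula and the numerical values below are illustrative assumptions, not taken from the patent (the slot geometry described above is analogous to, but not identical with, a stack of parabolic lenses):

```python
def crl_focal_length(radius_m, n_lenses, delta):
    """Thin-lens estimate for a stack of N refractive x-ray lens elements:
    f ~ R / (2 * N * delta), with R the apex radius of curvature and delta
    the refractive-index decrement (n = 1 - delta). Illustrative only."""
    return radius_m / (2 * n_lenses * delta)

# Assumed example values: beryllium at ~10 keV has delta on the order of
# 3.4e-6; take an apex radius of 0.5 mm.
delta_be = 3.4e-6
f10 = crl_focal_length(0.5e-3, 10, delta_be)   # 10 elements engaged
f20 = crl_focal_length(0.5e-3, 20, delta_be)   # doubling N halves f
```

Because δ is tiny, tens of elements are needed for meter-scale focal lengths, which is why engaging more or fewer slots gives the coarse focal-length steps and rotation provides the fine tuning.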
CEBAF Upgrade Bunch Length Measurements
Energy Technology Data Exchange (ETDEWEB)
Ahmad, Mahmoud [Old Dominion Univ., Norfolk, VA (United States)
2016-05-01
Many accelerators use short electron bunches, and measuring the bunch length is important for efficient operation. CEBAF needs a suitable bunch length because bunches that are too long will result in beam interruption to the halls due to excessive energy spread and beam loss. In this work, bunch length is measured by invasive and non-invasive techniques at different beam energies. Two new measurement techniques have been commissioned: a harmonic cavity showed good results compared to expectations from simulation, and a real-time interferometer was commissioned and first checkouts were performed. Three other techniques were used for measurement and comparison purposes without modifying the old procedures. Two of them can be used when the beam is not compressed longitudinally, while the other one, the synchrotron light monitor, can be used with compressed or uncompressed beam.
Alsolami, Fawaz
2013-01-01
This paper is devoted to the study of algorithms for sequential optimization of approximate inhibitory rules relative to length, coverage, and number of misclassifications. These algorithms are based on extensions of the dynamic programming approach. The results of experiments with decision tables from the UCI Machine Learning Repository are discussed. © 2013 Springer-Verlag.
Kondo length in bosonic lattices
Giuliano, Domenico; Sodano, Pasquale; Trombettoni, Andrea
2017-09-01
Motivated by the fact that the low-energy properties of the Kondo model can be effectively simulated in spin chains, we study the realization of the Kondo effect with bond impurities in ultracold bosonic lattices at half filling. After presenting a discussion of the effective theory and of the mapping of the bosonic chain onto a lattice spin Hamiltonian, we provide estimates for the Kondo length as a function of the parameters of the bosonic model. We point out that the Kondo length can be extracted from the integrated real-space correlation functions, which are experimentally accessible quantities in experiments with cold atoms.
Continuous lengths of oxide superconductors
Kroeger, Donald M.; List, III, Frederick A.
2000-01-01
A layered oxide superconductor prepared by depositing a superconductor precursor powder on a continuous length of a first substrate ribbon. A continuous length of a second substrate ribbon is overlaid on the first substrate ribbon. Sufficient pressure is applied to form a bound layered superconductor precursor powder between the first substrate ribbon and the second substrate ribbon. The layered superconductor precursor is then heat treated to establish the oxide superconducting phase. The layered oxide superconductor has a smooth interface between the substrate and the oxide superconductor.
Summary of neutron scattering lengths
International Nuclear Information System (INIS)
Koester, L.
1981-12-01
All available neutron-nuclei scattering lengths are collected together with their error bars in a uniform way. Bound scattering lengths are given for the elements, the isotopes, and the various spin-states. They are discussed in the sense of their use as basic parameters for many investigations in the field of nuclear and solid state physics. The data bank is available on magnetic tape, too. Recommended values and a map of these data serve for an uncomplicated use of these quantities. (orig.)
Overview of bunch length measurements
International Nuclear Information System (INIS)
Lumpkin, A. H.
1999-01-01
An overview of particle and photon beam bunch length measurements is presented in the context of free-electron laser (FEL) challenges. Particle-beam peak current is a critical factor in obtaining adequate FEL gain for both oscillators and self-amplified spontaneous emission (SASE) devices. Since charge measurement is standard, the bunch length becomes the key issue for ultrashort bunches. Both time-domain and frequency-domain techniques are presented in the context of using electromagnetic radiation over eight orders of magnitude in wavelength. In addition, the measurement of microbunching in a micropulse is addressed.
Diet, nutrition and telomere length.
Paul, Ligi
2011-10-01
The ends of human chromosomes are protected by DNA-protein complexes termed telomeres, which prevent the chromosomes from fusing with each other and from being recognized as a double-strand break by DNA repair proteins. Due to the incomplete replication of linear chromosomes by DNA polymerase, telomeric DNA shortens with repeated cell divisions until the telomeres reach a critical length, at which point the cells enter senescence. Telomere length is an indicator of biological aging, and dysfunction of telomeres is linked to age-related pathologies like cardiovascular disease, Parkinson disease, Alzheimer disease and cancer. Telomere length has been shown to be positively associated with nutritional status in human and animal studies. Various nutrients influence telomere length potentially through mechanisms that reflect their role in cellular functions including inflammation, oxidative stress, DNA integrity, DNA methylation and activity of telomerase, the enzyme that adds the telomeric repeats to the ends of the newly synthesized DNA. Copyright © 2011 Elsevier Inc. All rights reserved.
Directory of Open Access Journals (Sweden)
Ben Ruktantichoke
2011-06-01
In this study water flowed through a straight horizontal plastic tube placed at the bottom of a large tank of water. The effect of changing the length of tubing on the velocity of flow was investigated. It was found that the Hagen-Poiseuille Equation is valid when the effect of water entering the tube is accounted for.
Finite length Taylor Couette flow
Streett, C. L.; Hussaini, M. Y.
1987-01-01
Axisymmetric numerical solutions of the unsteady Navier-Stokes equations for flow between concentric rotating cylinders of finite length are obtained by a spectral collocation method. These representative results pertain to the two-cell/one-cell exchange process, and are compared with recent experiments.
A generalized global alignment algorithm.
Huang, Xiaoqiu; Chao, Kun-Mao
2003-01-22
Homologous sequences are sometimes similar over some regions but different over other regions. Homologous sequences have a much lower global similarity if the different regions are much longer than the similar regions. We present a generalized global alignment algorithm for comparing sequences with intermittent similarities, an ordered list of similar regions separated by different regions. A generalized global alignment model is defined to handle sequences with intermittent similarities. A dynamic programming algorithm is designed to compute an optimal general alignment in time proportional to the product of sequence lengths and in space proportional to the sum of sequence lengths. The algorithm is implemented as a computer program named GAP3 (Global Alignment Program Version 3). The generalized global alignment model is validated by experimental results produced with GAP3 on both DNA and protein sequences. The GAP3 program extends the ability of standard global alignment programs to recognize homologous sequences of lower similarity. The GAP3 program is freely available for academic use at http://bioinformatics.iastate.edu/aat/align/align.html.
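The dynamic-programming core that such aligners build on can be illustrated with a standard global alignment in time proportional to the product of the sequence lengths. The sketch below is the classic Needleman-Wunsch scoring recurrence, not the GAP3 model itself, and the match/mismatch/gap values are illustrative assumptions:

```python
def global_align_score(s, t, match=1, mismatch=-1, gap=-2):
    """Classic global alignment (Needleman-Wunsch) score.

    Runs in O(len(s) * len(t)) time; rows are reused, so the score
    itself needs only O(len(t)) space.
    """
    # prev[j] holds the best score aligning the processed prefix of s
    # against t[:j]; the first row aligns "" against t[:j] with gaps.
    prev = [j * gap for j in range(len(t) + 1)]
    for i, a in enumerate(s, 1):
        curr = [i * gap]  # aligning s[:i] against ""
        for j, b in enumerate(t, 1):
            diag = prev[j - 1] + (match if a == b else mismatch)
            curr.append(max(diag, prev[j] + gap, curr[j - 1] + gap))
        prev = curr
    return prev[-1]
```

GAP3's generalized model additionally scores long "different regions" separately, but the same row-by-row recurrence underlies both.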
Detecting Scareware by Mining Variable Length Instruction Sequences
Shahzad, Raja Khurram; Lavesson, Niklas
2011-01-01
Scareware is a recent type of malicious software that may pose financial and privacy-related threats to novice users. Traditional countermeasures, such as anti-virus software, require regular updates and often lack the capability of detecting novel (unseen) instances. This paper presents a scareware detection method that is based on the application of machine learning algorithms to learn patterns in extracted variable length opcode sequences derived from instruction sequences of binary files....
Synthesis, Crystal Structure and Luminescent Property of Cd(II) Complex with N-Benzenesulphonyl-L-leucine
Directory of Open Access Journals (Sweden)
Xishi Tai
2012-09-01
A new trinuclear Cd(II) complex [Cd_{3}(L)_{6}(2,2-bipyridine)_{3}] [L = N-phenylsulfonyl-L-leucinato] has been synthesized and characterized by elemental analysis, IR and X-ray single crystal diffraction analysis. The results show that the complex belongs to the orthorhombic space group P2_{1}2_{1}2_{1} with a = 16.877(3) Å, b = 22.875(5) Å, c = 29.495(6) Å, α = β = γ = 90°, V = 11387(4) Å^{3}, Z = 4, D_{c} = 1.416 μg·m^{−3}, μ = 0.737 mm^{−1}, F(000) = 4992, and final R_{1} = 0.0390, ωR_{2} = 0.0989. The complex comprises two seven-coordinated Cd(II) atoms, with a N_{2}O_{5} distorted pentagonal bipyramidal coordination environment, and a six-coordinated Cd(II) atom, with a N_{2}O_{4} distorted octahedral coordination environment. The molecules form a one-dimensional chain structure through the interactions of bridging carboxylato groups, hydrogen bonds and π-π interactions of 2,2-bipyridine. The luminescent properties of the Cd(II) complex and N-Benzenesulphonyl-L-leucine in the solid state and in CH_{3}OH solution have also been investigated.
Pseudo-deterministic Algorithms
Goldwasser , Shafi
2012-01-01
International audience; In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they cannot be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial time observer with black box access to the algorithm. We show a necessary an...
A Learning Algorithm for Multimodal Grammar Inference.
D'Ulizia, A; Ferri, F; Grifoni, P
2011-12-01
The high costs of development and maintenance of multimodal grammars in integrating and understanding input in multimodal interfaces lead to the investigation of novel algorithmic solutions for automating grammar generation and updating processes. Many algorithms for context-free grammar inference have been developed in the natural language processing literature. An extension of these algorithms toward the inference of multimodal grammars is necessary for multimodal input processing. In this paper, we propose a novel grammar inference mechanism that allows us to learn a multimodal grammar from positive samples of multimodal sentences. The algorithm first generates the multimodal grammar that is able to parse the positive samples of sentences and, afterward, makes use of two learning operators and the minimum description length metrics to improve the grammar description and to avoid the over-generalization problem. The experimental results highlight the acceptable performance of the proposed algorithm, which has a very high probability of parsing valid sentences.
Unsupervised Idealization of Ion Channel Recordings by Minimum Description Length
DEFF Research Database (Denmark)
Gnanasambandam, Radhakrishnan; Nielsen, Morten S; Nicolai, Christopher
2017-01-01
…and characterize an idealization algorithm based on Rissanen's Minimum Description Length (MDL) Principle. This method uses minimal assumptions and idealizes ion channel recordings without requiring a detailed user input or a priori assumptions about channel conductance and kinetics. Furthermore, we demonstrate that correlation analysis of conductance steps can resolve properties of single ion channels in recordings contaminated by signals from multiple channels. We first validated our methods on simulated data defined with a range of different signal-to-noise levels, and then showed that our algorithm can recover channel currents and their substates from recordings with multiple channels, even under conditions of high noise. We then tested the MDL algorithm on real experimental data from human PIEZO1 channels and found that our method revealed the presence of substates with alternate conductances.
Graph run-length matrices for histopathological image segmentation.
Tosun, Akif Burak; Gunduz-Demir, Cigdem
2011-03-01
The histopathological examination of tissue specimens is essential for cancer diagnosis and grading. However, this examination is subject to a considerable amount of observer variability as it mainly relies on visual interpretation of pathologists. To alleviate this problem, it is very important to develop computational quantitative tools, for which image segmentation constitutes the core step. In this paper, we introduce an effective and robust algorithm for the segmentation of histopathological tissue images. This algorithm incorporates the background knowledge of the tissue organization into segmentation. For this purpose, it quantifies spatial relations of cytological tissue components by constructing a graph and uses this graph to define new texture features for image segmentation. This new texture definition makes use of the idea of gray-level run-length matrices. However, it considers the runs of cytological components on a graph to form a matrix, instead of considering the runs of pixel intensities. Working with colon tissue images, our experiments demonstrate that the texture features extracted from "graph run-length matrices" lead to high segmentation accuracies, also providing a reasonable number of segmented regions. Compared with four other segmentation algorithms, the results show that the proposed algorithm is more effective in histopathological image segmentation.
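The gray-level run-length idea that the paper generalizes to graphs can be sketched for an ordinary image scanned row-wise. This is a minimal illustration of classic run-length counting in one direction, not the authors' graph-based construction:

```python
from collections import Counter

def run_length_matrix(rows):
    """Count horizontal runs of equal values in a 2D array.

    Returns a Counter mapping (value, run_length) -> number of runs,
    which is exactly the statistic a gray-level run-length matrix
    tabulates for one scan direction.
    """
    runs = Counter()
    for row in rows:
        i = 0
        while i < len(row):
            j = i
            while j < len(row) and row[j] == row[i]:
                j += 1  # extend the current run of equal values
            runs[(row[i], j - i)] += 1
            i = j
    return runs
```

In the paper's variant, the "runs" are taken over sequences of cytological component labels along graph paths rather than pixel intensities along scan lines.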
Faster Algorithms for Computing Longest Common Increasing Subsequences
DEFF Research Database (Denmark)
Kutz, Martin; Brodal, Gerth Stølting; Kaligosi, Kanela
2011-01-01
We present algorithms for finding a longest common increasing subsequence of two or more input sequences. For two sequences of lengths n and m, where m⩾n, we present an algorithm with an output-dependent expected running time of … and O(m) space, where ℓ is the length of an LCIS, σ is the size of the alphabet, and Sort is the time to sort each input sequence. For k⩾3 length-n sequences we present an algorithm which improves the previous best bound by more than a factor k for many inputs. In both cases, our algorithms are conceptually quite simple but rely on existing sophisticated data structures. Finally, we introduce the problem of longest common weakly-increasing (or non-decreasing) subsequences (LCWIS), for which we present an …-time algorithm for the 3-letter alphabet case. For the extensively studied longest common subsequence problem, comparable speedups have not been achieved for small…
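For two sequences, a longest common increasing subsequence can be computed with the textbook O(nm)-time, O(m)-space dynamic program. The sketch below is that simple quadratic algorithm, not the faster output-dependent method of the abstract:

```python
def lcis_length(a, b):
    """Length of a longest common increasing subsequence of a and b.

    dp[j] = length of the best LCIS ending exactly at b[j], using the
    prefix of a processed so far. O(len(a)*len(b)) time, O(len(b)) space.
    """
    m = len(b)
    dp = [0] * m
    for x in a:
        best = 0  # best dp[j] over positions j with b[j] < x seen so far
        for j in range(m):
            if b[j] == x and best + 1 > dp[j]:
                dp[j] = best + 1   # extend an increasing chain by x
            elif b[j] < x and dp[j] > best:
                best = dp[j]       # candidate predecessor for later matches
    return max(dp, default=0)
```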
Directory of Open Access Journals (Sweden)
Eric Costello
2011-01-01
The shape of a cable hanging under its own weight and uniform horizontal tension between two power poles is a catenary. The catenary is a curve which has an equation defined by a hyperbolic cosine function and a scaling factor. The scaling factor for power cables hanging under their own weight is equal to the horizontal tension on the cable divided by the weight of the cable. Both of these values are unknown for this problem. Newton's method was used to approximate the scaling factor and the arc length function to determine the length of the cable. A script was written using the Python programming language in order to quickly perform several iterations of Newton's method to get a good approximation for the scaling factor.
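A procedure of this kind can be sketched as follows. Assuming the span between the poles and the sag at midspan are the known inputs (an assumption for this sketch), Newton's method solves for the scaling factor a in y = a·cosh(x/a), after which the arc length follows in closed form:

```python
import math

def catenary_length(span, sag, tol=1e-12):
    """Newton's method for the catenary scaling factor, then arc length.

    Solves f(a) = a*(cosh(span/(2a)) - 1) - sag = 0 for the scaling
    factor a, then returns (a, arc_length) where
    arc_length = 2*a*sinh(span/(2a)).
    """
    a = span * span / (8 * sag)  # parabolic approximation as first guess
    for _ in range(100):
        u = span / (2 * a)
        f = a * (math.cosh(u) - 1) - sag
        df = math.cosh(u) - u * math.sinh(u) - 1  # d f / d a
        step = f / df
        a -= step
        if abs(step) < tol * a:
            break
    return a, 2 * a * math.sinh(span / (2 * a))
```

The parabolic first guess span²/(8·sag) keeps the iteration well inside Newton's basin of convergence for realistic pole geometries.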
Temperature-dependence of Threshold Current Density-Length Product in Metallization Lines: A Revisit
International Nuclear Information System (INIS)
Duryat, Rahmat Saptono; Kim, Choong-Un
2016-01-01
One of the important phenomena in Electromigration (EM) is the Blech Effect. The existence of the Threshold Current Density-Length Product, or EM Threshold, has fundamental and technological consequences in the design, manufacture, and testing of electronics. The temperature-dependence of the Blech Product has been thermodynamically established and the real behavior of such interconnect materials has been extensively studied. The present paper reviewed the temperature-dependence of the EM threshold in metallization lines of different materials and structure as found in relevant published articles. It is expected that the reader can see a big picture from the compiled data, which might be overlooked when examined in pieces. (paper)
Minimal Length, Measurability and Gravity
Directory of Open Access Journals (Sweden)
Alexander Shalyt-Margolin
2016-03-01
The present work is a continuation of the previous papers written by the author on the subject. In terms of the measurability (or measurable quantities) notion introduced in a minimal length theory, consideration is first given to a quantum theory in the momentum representation. The same terms are used to consider the Markov gravity model, which here illustrates the general approach to studies of gravity in terms of measurable quantities.
International Nuclear Information System (INIS)
Volkov, M.K.; Osipov, A.A.
1983-01-01
The scattering lengths m_{π}a_{0}^{1/2} = 0.1, m_{π}a_{0}^{3/2} = −0.1, m_{π}a_{0}^{(−)} = 0.07, m_{π}^{3}a_{1}^{1/2} = 0.018, m_{π}^{3}a_{1}^{3/2} = 0.002, m_{π}^{3}a_{1}^{(−)} = 0.0044, m_{π}^{5}a_{2}^{1/2} = 2.4×10^{−4} and m_{π}^{5}a_{2}^{3/2} = −1.2×10^{−4} are calculated in the framework of the composite meson model which is based on four-quark interaction. The decay form factors of (rho, epsilon, S*) → 2π and (K tilde, K*) → Kπ are used. The q^{2}-terms of the quark box diagrams are taken into account. It is shown that the q^{2}-terms of the box diagrams give the main contribution to the s-wave scattering lengths. The diagrams with intermediate vector mesons begin to play an essential role in the calculation of the p- and d-wave scattering lengths.
An empirical study on SAJQ (Sorting Algorithm for Join Queries
Directory of Open Access Journals (Sweden)
Hassan I. Mathkour
2010-06-01
Most queries applied to database management systems (DBMS) depend heavily on the performance of the sorting algorithm used. In addition to having an efficient sorting algorithm as a primary feature, stability of such algorithms is a major feature needed in performing DBMS queries. In this paper, we study a new Sorting Algorithm for Join Queries (SAJQ) that has both advantages of being efficient and stable. The proposed algorithm takes advantage of the m-way-merge algorithm to enhance its time complexity. SAJQ performs the sorting operation in a time complexity of O(n log m), where n is the length of the input array and m is the number of sub-arrays used in sorting. An unsorted input array of length n is arranged into m sorted sub-arrays. The m-way-merge algorithm merges the m sorted sub-arrays into the final sorted output array. The proposed algorithm keeps the stability of the keys intact. An analytical proof has been conducted to show that, in the worst case, the proposed algorithm has a complexity of O(n log m). Also, a set of experiments has been performed to investigate the performance of the proposed algorithm. The experimental results show that the proposed algorithm outperforms other stable sorting algorithms designed for join-based queries.
Comparison of turbulence mitigation algorithms
Kozacik, Stephen T.; Paolini, Aaron; Sherman, Ariel; Bonnett, James; Kelmelis, Eric
2017-07-01
When capturing imagery over long distances, atmospheric turbulence often degrades the data, especially when observation paths are close to the ground or in hot environments. These issues manifest as time-varying scintillation and warping effects that decrease the effective resolution of the sensor and reduce actionable intelligence. In recent years, several image processing approaches to turbulence mitigation have shown promise. Each of these algorithms has different computational requirements, usability demands, and degrees of independence from camera sensors. They also produce different degrees of enhancement when applied to turbulent imagery. Additionally, some of these algorithms are applicable to real-time operational scenarios while others may only be suitable for postprocessing workflows. EM Photonics has been developing image-processing-based turbulence mitigation technology since 2005. We will compare techniques from the literature with our commercially available, real-time, GPU-accelerated turbulence mitigation software. These comparisons will be made using real (not synthetic), experimentally obtained data for a variety of conditions, including varying optical hardware, imaging range, subjects, and turbulence conditions. Comparison metrics will include image quality, video latency, computational complexity, and potential for real-time operation. Additionally, we will present a technique for quantitatively comparing turbulence mitigation algorithms using real images of radial resolution targets.
Hamiltonian Algorithm Sound Synthesis
大矢, 健一
2013-01-01
Hamiltonian Algorithm (HA) is an algorithm for searching for solutions in optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.
Progressive geometric algorithms
Alewijnse, S.P.A.; Bagautdinov, T.M.; de Berg, M.T.; Bouts, Q.W.; ten Brink, Alex P.; Buchin, K.A.; Westenberg, M.A.
2015-01-01
Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms
Progressive geometric algorithms
Alewijnse, S.P.A.; Bagautdinov, T.M.; Berg, de M.T.; Bouts, Q.W.; Brink, ten A.P.; Buchin, K.; Westenberg, M.A.
2014-01-01
Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms
DEFF Research Database (Denmark)
Bucher, Taina
2017-01-01
…of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself…
Energy Technology Data Exchange (ETDEWEB)
Geist, G.A. [Oak Ridge National Lab., TN (United States). Computer Science and Mathematics Div.; Howell, G.W. [Florida Inst. of Tech., Melbourne, FL (United States). Dept. of Applied Mathematics; Watkins, D.S. [Washington State Univ., Pullman, WA (United States). Dept. of Pure and Applied Mathematics
1997-11-01
The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.
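For contrast with the bulge-chasing machinery, the bare similarity step both BR and QR are built on can be sketched in a few lines for a small dense matrix. This toy version (classical Gram-Schmidt QR, no shifts, no bulge chasing, nonsingular input assumed) only illustrates the A ← RQ iteration, not the BR algorithm or a production QR implementation:

```python
def qr_decompose(A):
    """QR factorization of a square nonsingular matrix via
    classical Gram-Schmidt. Q is returned as a list of columns."""
    n = len(A)
    cols = [[A[i][j] for i in range(n)] for j in range(n)]
    Q, R = [], [[0.0] * n for _ in range(n)]
    for j in range(n):
        v = cols[j][:]
        for i, q in enumerate(Q):
            R[i][j] = sum(q[k] * cols[j][k] for k in range(n))
            v = [v[k] - R[i][j] * q[k] for k in range(n)]
        R[j][j] = sum(x * x for x in v) ** 0.5
        Q.append([x / R[j][j] for x in v])
    return Q, R

def qr_eigenvalues(A, iters=100):
    """Unshifted QR iteration: A <- R*Q is a similarity transform, so
    eigenvalues are preserved while A drifts toward triangular form."""
    n = len(A)
    for _ in range(iters):
        Q, R = qr_decompose(A)
        # A_new[i][j] = (R * Qmat)[i][j], with Q stored column-wise
        A = [[sum(R[i][k] * Q[j][k] for k in range(n)) for j in range(n)]
             for i in range(n)]
    return sorted(A[i][i] for i in range(n))
```

Real implementations exploit Hessenberg structure so each similarity step costs O(n²) rather than the O(n³) of this dense sketch; BR additionally preserves the narrow band that the look-ahead Lanczos process produces.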
Algorithmically specialized parallel computers
Snyder, Lawrence; Gannon, Dennis B
1985-01-01
Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer.This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster
Online learning algorithm for ensemble of decision rules
Chikalov, Igor
2011-01-01
We describe an online learning algorithm that builds a system of decision rules for a classification problem. Rules are constructed according to the minimum description length principle by a greedy algorithm or using the dynamic programming approach. © 2011 Springer-Verlag.
Tap-length optimization of adaptive filters used in stereophonic acoustic echo cancellation
DEFF Research Database (Denmark)
Kar, Asutosh; Swamy, M.N.S.
2017-01-01
An adaptive filter with a large number of weights or taps is necessary for stereophonic acoustic echo cancellation (SAEC), depending on the room impulse response and acoustic path where the cancellation is performed. However, a large tap-length results in slow convergence and increases the complexity of the tapped delay line structure for FIR adaptive filters. To overcome this problem, there is a need for an optimum tap-length-estimation algorithm that provides better convergence for the adaptive filters used in SAEC. This paper presents a solution to the problem of balancing convergence and steady-state performance of long-length adaptive filters used for SAEC by proposing a new tap-length-optimization algorithm. The optimum tap length and step size of the adaptive filter are derived considering an impulse response with an exponentially-decaying envelope, which models a wide range…
A quantum algorithm for Viterbi decoding of classical convolutional codes
Grice, Jon R.; Meyer, David A.
2014-01-01
We present a quantum Viterbi algorithm (QVA) with better than classical performance under certain conditions. In this paper the proposed algorithm is applied to decoding classical convolutional codes with, for instance, large constraint length $Q$ and short decode frames $N$. Other applications of the classical Viterbi algorithm where $Q$ is large (e.g. speech processing) could experience significant speedup with the QVA. The QVA exploits the fact that the decoding trellis is similar to the butter...
Does the use of bedside pelvic ultrasound decrease length of stay in the emergency department?
Thamburaj, Ravi; Sivitz, Adam
2013-01-01
Diagnostic ultrasounds by emergency medicine (EM) and pediatric emergency medicine (PEM) physicians have increased because of ultrasonography training during residency and fellowship. The availability of ultrasound in radiology departments is limited or difficult to obtain, especially during nighttime hours. Studies have shown that EM physicians can accurately perform goal-directed ultrasound after appropriate training. The goal of this study was to compare the length of stay for patients receiving an ultrasound to confirm intrauterine pregnancies. The hypothesis of this study is that a bedside ultrasound by a trained EM/PEM physician can reduce length of stay in the emergency department (ED) by 1 hour. This was a case cohort retrospective review of patients aged 13 to 21 years who received pelvic ultrasounds in the ED during 2007. Each patient was placed into 1 of 2 groups. Group 1 received bedside ultrasounds done by institutionally credentialed EM/PEM attending physicians. Group 2 received radiology department ultrasound only. Each group had subanalysis done including chief complaint, time of presentation, time to completion of ultrasound, length of stay, diagnosis, and disposition. Daytime was defined as presentation between 7 AM and 9 PM, when radiology ultrasound technologists were routinely available. We studied 330 patients, with 244 patients (74%) in the bedside ultrasound group. The demographics of both groups showed no difference in age, presenting complaints, discharge diagnoses, and ultimate disposition. Group 1 had a significant reduction (P < …) in time to ultrasound compared with group 2 (mean, 82 minutes [range, 1-901 minutes] vs 149 minutes [range, 7-506 minutes]) and in length of stay (142 [16-2268] vs 230 [16-844] minutes). Of those presenting during the day (66%), group 1 showed a significant reduction in length of stay (P < …). Bedside ultrasound by trained EM/PEM physicians produced a significant reduction in length of stay in the ED, regardless of radiology ultrasound technologist availability.
Predictive minimum description length principle approach to inferring gene regulatory networks.
Chaitankar, Vijender; Zhang, Chaoyang; Ghosh, Preetam; Gong, Ping; Perkins, Edward J; Deng, Youping
2011-01-01
Reverse engineering of gene regulatory networks using information theory models has received much attention due to its simplicity, low computational cost, and capability of inferring large networks. One of the major problems with information theory models is to determine the threshold that defines the regulatory relationships between genes. The minimum description length (MDL) principle has been implemented to overcome this problem. The description length of the MDL principle is the sum of model length and data encoding length. A user-specified fine-tuning parameter is used as a control mechanism between model and data encoding, but it is difficult to find the optimal parameter. In this work, we propose a new inference algorithm that incorporates mutual information (MI), conditional mutual information (CMI), and the predictive minimum description length (PMDL) principle to infer gene regulatory networks from DNA microarray data. In this algorithm, the information theoretic quantities MI and CMI determine the regulatory relationships between genes and the PMDL principle method attempts to determine the best MI threshold without the need of a user-specified fine-tuning parameter. The performance of the proposed algorithm is evaluated using both synthetic time series data sets and a biological time series data set (Saccharomyces cerevisiae). The results show that the proposed algorithm produced fewer false edges and significantly improved the precision when compared to the existing MDL algorithm.
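The information-theoretic quantity such algorithms start from can be computed directly from discretized expression profiles. The sketch below shows a plug-in estimate of MI from paired samples (nothing of the PMDL thresholding itself); the toy profiles in the usage are assumed inputs:

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    """Plug-in estimate of I(X;Y) in bits from paired discrete samples.

    I(X;Y) = sum over (x, y) of p(x, y) * log2( p(x, y) / (p(x) p(y)) ),
    with probabilities replaced by empirical frequencies.
    """
    n = len(xs)
    px, py = Counter(xs), Counter(ys)
    pxy = Counter(zip(xs, ys))
    return sum(c / n * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())
```

In an MI-based network inference pipeline, an edge between two genes is kept when this value exceeds the threshold that the PMDL step is designed to pick automatically.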
Genetic algorithms for protein threading.
Yadgari, J; Amir, A; Unger, R
1998-01-01
Despite many years of efforts, a direct prediction of protein structure from sequence is still not possible. As a result, in the last few years researchers have started to address the "inverse folding problem": identifying and aligning a sequence to the fold with which it is most compatible, a process known as "threading". In two meetings in which protein folding predictions were objectively evaluated, it became clear that threading as a concept promises a real breakthrough, but that much improvement is still needed in the technique itself. Threading is an NP-hard problem, and thus no general polynomial solution can be expected. Still, a practical approach with demonstrated ability to find optimal solutions in many cases, and acceptable solutions in other cases, is needed. We applied the technique of Genetic Algorithms in order to significantly improve the ability of threading algorithms to find the optimal alignment of a sequence to a structure, i.e. the alignment with the minimum free energy. A major progress reported here is the design of a representation of the threading alignment as a string of fixed length. With this representation, validation of alignments and genetic operators are effectively implemented. Appropriate data structures and parameters have been selected. It is shown that Genetic Algorithm threading is effective and is able to find the optimal alignment in a few test cases. Furthermore, the described algorithm is shown to perform well even without pre-definition of core elements. Existing threading methods are dependent on such constraints to make their calculations feasible. But the concept of core elements is inherently arbitrary and should be avoided if possible. While a rigorous proof is hard to submit as yet, we present indications that Genetic Algorithm threading is indeed capable of finding consistently good solutions of full alignments in search spaces of size up to 10^{70}.
Quantum Computation and Algorithms
International Nuclear Information System (INIS)
Biham, O.; Biron, D.; Biham, E.; Grassi, M.; Lidar, D.A.
1999-01-01
It is now firmly established that quantum algorithms provide a substantial speedup over classical algorithms for a variety of problems, including the factorization of large numbers and the search for a marked element in an unsorted database. In this talk I will review the principles of quantum algorithms, the basic quantum gates and their operation. The combination of superposition and interference, which makes these algorithms efficient, will be discussed. In particular, Grover's search algorithm will be presented as an example. I will show that the time evolution of the amplitudes in Grover's algorithm can be found exactly using recursion equations, for any initial amplitude distribution.
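A recursion of this kind can be written down exactly for the special case of a single marked element. In the sketch below, k is the marked amplitude and l the common unmarked amplitude; the update follows from applying the oracle and diffusion operators to the uniform start (a standard textbook derivation, not taken from this talk):

```python
import math

def grover_amplitudes(n_items, iterations):
    """Exact amplitude recursion for Grover search, one marked item.

    Starting from the uniform superposition over N = n_items states,
    each Grover iteration (oracle + diffusion) maps
        k <- ((N-2)/N) * k + (2*(N-1)/N) * l
        l <- (-2/N)    * k + ((N-2)/N)   * l
    where k is the marked amplitude and l each unmarked amplitude.
    """
    n = n_items
    k = l = 1 / math.sqrt(n)
    for _ in range(iterations):
        k, l = ((n - 2) / n) * k + (2 * (n - 1) / n) * l, \
               (-2 / n) * k + ((n - 2) / n) * l
    return k, l
```

Since the iteration is unitary, k² + (N−1)·l² stays equal to 1; the success probability k² peaks after roughly (π/4)·√N iterations.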
Directory of Open Access Journals (Sweden)
Chad H. Stahl
2012-05-01
Satellite cell activity is necessary for postnatal skeletal muscle growth. Severe phosphate (PO4) deficiency can alter satellite cell activity; however, the role of neonatal PO4 nutrition in satellite cell biology remains obscure. Twenty-one piglets (1 day of age, 1.8 ± 0.2 kg BW) were pair-fed liquid diets that were either PO4-adequate (0.9% total P), supra-adequate (1.2% total P), or deficient (0.7% total P) in PO4 content for 12 days. Body weight was recorded daily and blood samples were collected every 6 days. At day 12, pigs were orally dosed with BrdU and, 12 h later, satellite cells were isolated. Satellite cells were also cultured in vitro for 7 days to determine if PO4 nutrition alters their ability to proceed through their myogenic lineage. Dietary PO4 deficiency resulted in reduced (P < 0.05) serum PO4 and parathyroid hormone (PTH) concentrations, while supra-adequate dietary PO4 improved (P < 0.05) feed conversion efficiency as compared to the PO4-adequate group. In vivo satellite cell proliferation was reduced (P < 0.05) among the PO4-deficient pigs, and these cells had altered in vitro expression of markers of myogenic progression. Further work to better understand early nutritional programming of satellite cells and the potential benefits of emphasizing early PO4 nutrition for future lean growth potential is warranted.
Solving the SAT problem using Genetic Algorithm
Directory of Open Access Journals (Sweden)
Arunava Bhattacharjee
2017-08-01
In this paper we propose our genetic algorithm for solving the SAT problem. We introduce various crossover and mutation techniques and then make a comparative analysis between them in order to find out which techniques are best suited for solving a SAT instance. Before the genetic algorithm is applied to an instance, it is better to search the given formula for unit and pure literals and eliminate them. This can considerably reduce the search space, and to demonstrate this we tested our algorithm on some random SAT instances. However, to analyse the various crossover and mutation techniques, and also to evaluate the optimality of our algorithm, we performed extensive experiments on benchmark instances of the SAT problem. We also estimated the ideal crossover length that would maximise the chances of solving a given SAT instance.
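The preprocessing step described (eliminating unit and pure literals before running the GA) can be sketched as follows; the data representation is an assumption for illustration, with clauses as frozensets of signed integers:

```python
# Repeatedly eliminate unit clauses and pure literals from a CNF formula.
# Clauses are frozensets of nonzero ints; -v is the negation of variable v.
def simplify(clauses):
    clauses = set(clauses)
    assignment = {}
    changed = True
    while changed and frozenset() not in clauses:
        changed = False
        units = {next(iter(c)) for c in clauses if len(c) == 1}
        literals = {l for c in clauses for l in c}
        pures = {l for l in literals if -l not in literals}
        for lit in units | pures:
            if -lit in units:                # contradictory units: UNSAT
                return None, {frozenset()}
            assignment[abs(lit)] = lit > 0
            new = set()
            for c in clauses:
                if lit in c:
                    continue                 # clause satisfied, drop it
                new.add(c - {-lit})          # remove the falsified literal
            if new != clauses:
                clauses, changed = new, True
    return assignment, clauses
```

The forced assignments shrink the formula, and only the residual clauses need to be handed to the genetic algorithm.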
Upper Bound for Queue length in Regulated Burst Service Scheduling
Directory of Open Access Journals (Sweden)
Mahmood Daneshvar Farzanegan
2016-01-01
Quality of Service (QoS) provisioning is very important in next-generation computer/communication networks because of increasing multimedia services; hence, many investigations have been performed in this area. Scheduling algorithms affect QoS provisioning. Recently, a scheduling algorithm called Regulated Burst Service Scheduling (RBSS) was suggested by the author in [1] to provide better service to bursty and delay-sensitive traffic such as video. One of the most significant features of RBSS is that it takes the burstiness of the arriving traffic into account in the scheduling algorithm. In this paper, an upper bound on the queue length (buffer size) and the service curve are calculated for RBSS by network calculus analysis. Because RBSS uses queue length as a parameter in the scheduling arbitrator, the analysis leads to a differential inequality from which the service curve is obtained. To simplify, the arrival traffic is assumed to be linear, as defined clearly in the paper. This analysis helps characterize delay in RBSS for traffic with different specifications, so that QoS provisioning can be evaluated.
Comparison of reconfigurable structures for flexible word-length multiplication
Directory of Open Access Journals (Sweden)
O. A. Pfänder
2008-05-01
Binary multiplication continues to be one of the essential arithmetic operations in digital circuits. Even though field-programmable gate arrays (FPGAs) are becoming more and more powerful these days, the vendors cannot avoid implementing multiplications with high word-lengths using embedded blocks instead of configurable logic. On the other hand, the circuit's efficiency decreases if the provided word-length of the hard-wired multipliers exceeds the precision requirements of the algorithm mapped into the FPGA. Thus it is beneficial to use multiplier blocks with configurable word-length, optimized for area, speed and power dissipation, e.g. for digital signal processing (DSP) applications.
In this contribution, we present different approaches and structures for the realization of a multiplication with variable precision and perform an objective comparison. This includes one approach based on a modified Baugh and Wooley algorithm and three structures using Booth's arithmetic operand recoding with different array structures. All modules have the option to compute signed two's complement fix-point numbers either as an individual computing unit or interconnected to a superior array. Therefore, a high throughput at low precision through parallelism, or a high precision through concatenation can be achieved.
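The Booth operand recoding mentioned above replaces runs of ones in the multiplier with one subtraction and one addition. A software sketch of the radix-2 recoding idea (illustrating the arithmetic only, not the paper's hardware array structures):

```python
# Radix-2 Booth multiplication for an n-bit two's-complement multiplier.
# Scans bit pairs (b_i, b_{i-1}): "10" starts a run of ones (subtract the
# shifted multiplicand), "01" ends one (add it); "00"/"11" do nothing.
def booth_multiply(multiplicand, multiplier, n):
    product = 0
    prev_bit = 0                              # implicit bit right of bit 0
    for i in range(n):
        bit = (multiplier >> i) & 1
        if (bit, prev_bit) == (1, 0):
            product -= multiplicand << i      # start of a run of 1s
        elif (bit, prev_bit) == (0, 1):
            product += multiplicand << i      # end of a run of 1s
        prev_bit = bit
    return product
```

Because Python integers are arithmetically shifted, negative multipliers within the n-bit range are handled naturally; in hardware the same recoding reduces the number of partial products.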
Constituents from Vigna vexillata and Their Anti-Inflammatory Activity
Directory of Open Access Journals (Sweden)
Guo-Feng Chen
2012-08-01
The seeds of the Vigna genus are important food resources and there have already been many reports regarding their bioactivities. In our preliminary bioassay, the chloroform layer of methanol extracts of V. vexillata demonstrated significant anti-inflammatory bioactivity. Therefore, the present research aimed to purify and identify the anti-inflammatory principles of V. vexillata. One new sterol (1) and two new isoflavones (2, 3) were reported from natural sources for the first time, and their chemical structures were determined by spectroscopic and mass spectrometric analyses. In addition, 37 known compounds were identified by comparison of their physical and spectroscopic data with those reported in the literature. Among the isolates, daidzein (23), abscisic acid (25), and quercetin (40) displayed the most significant inhibition of superoxide anion generation and elastase release.
International Nuclear Information System (INIS)
Chandrasekharan, Shailesh
2000-01-01
Cluster algorithms have been recently used to eliminate sign problems that plague Monte-Carlo methods in a variety of systems. In particular such algorithms can also be used to solve sign problems associated with the permutation of fermion world lines. This solution leads to the possibility of designing fermion cluster algorithms in certain cases. Using the example of free non-relativistic fermions we discuss the ideas underlying the algorithm
Autonomous Star Tracker Algorithms
DEFF Research Database (Denmark)
Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren
1998-01-01
Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performances.
Directory of Open Access Journals (Sweden)
Hugo Andrés Ruiz
2012-06-01
In this paper a method to solve the state estimation problem in electric power systems using combinatorial optimization is presented. Its objective is the study of measurements with errors that are difficult to detect, which affect the performance and quality of the results when a classic state estimator is used. Given the mathematical complexity, sensitivity indicators are deduced from the theory of leverage points and used in the Chu-Beasley optimization algorithm, with the purpose of reducing the computational effort and enhancing the quality of the results. The proposed method is validated on a 30-node IEEE test system.
Divasón, Jose; Joosten, Sebastiaan; Thiemann, René; Yamada, Akihisa
2018-01-01
The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as the LLL algorithm, is an algorithm to find a basis with short, nearly orthogonal vectors of an integer lattice. Thereby, it can also be seen as an approximation to solve the shortest vector problem (SVP), which is an NP-hard problem.
Directory of Open Access Journals (Sweden)
Josef Jampilek
2012-08-01
A series of twenty-five novel salicylanilide N-alkylcarbamates were investigated as potential acetylcholinesterase inhibitors. The compounds were tested for their ability to inhibit acetylcholinesterase (AChE) from electric eel (Electrophorus electricus L.). Experimental lipophilicity was determined, and the structure-activity relationships are discussed. The mode of binding in the active site of AChE was investigated by molecular docking. All the discussed compounds expressed significantly higher AChE inhibitory activity than rivastigmine and slightly lower than galanthamine. Disubstitution by chlorine at C'(3,4) of the aniline ring and an optimal length of hexyl-undecyl alkyl chains in the carbamate moiety provided the most active AChE inhibitors. Monochlorination at C'(4) yielded slightly more effective AChE inhibitors than at C'(3). Generally, it can be stated that compounds with higher lipophilicity showed higher inhibition, and the activity of the compounds is strongly dependent on the length of the N-alkyl chain.
Directory of Open Access Journals (Sweden)
Marcelo Mollinari
2008-04-01
The objective of this work was to evaluate the efficiency, for the construction of genetic linkage maps, of the seriation and rapid chain delineation algorithms, as well as the order-evaluation criteria minimum product of adjacent recombination fractions, minimum sum of adjacent recombination fractions, and maximum sum of adjacent LOD scores, when used with the "ripple" error-verification algorithm. A map with 24 markers was simulated, positioned at random with varied distances averaging 10 cM. Using the Monte Carlo method, 1,000 backcross populations and 1,000 F2 populations were obtained, each with 200 individuals and different combinations of dominant and co-dominant markers (100% co-dominant, 100% dominant, and a mixture of 50% co-dominant and 50% dominant). Loss of 25, 50 and 75% of the data was also simulated. The two algorithms evaluated performed similarly and were sensitive to missing data and to the presence of dominant markers; the latter made it difficult to obtain accurate estimates of both order and distance. Moreover, the "ripple" algorithm generally increases the number of correct orders and can be combined with the criteria minimum sum of adjacent recombination fractions and minimum product of adjacent recombination fractions.
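Two of the order-evaluation criteria named above are simple to state in code. This sketch (our own illustration, with a hypothetical `rf` table of pairwise recombination fractions) computes the sum and product of adjacent recombination fractions for a candidate marker order; both are minimized by good orders:

```python
# `rf` maps unordered marker pairs (frozensets) to recombination fractions.

def sarf(order, rf):
    """Sum of adjacent recombination fractions (smaller is better)."""
    return sum(rf[frozenset(p)] for p in zip(order, order[1:]))

def parf(order, rf):
    """Product of adjacent recombination fractions (smaller is better)."""
    prod = 1.0
    for p in zip(order, order[1:]):
        prod *= rf[frozenset(p)]
    return prod
```

On small maps a brute-force search over permutations minimizing SARF recovers the true order (up to reversal); the "ripple" step instead re-evaluates the criterion over local window permutations of a given order.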
Torques in brakes and clutches (Momentos em freios e em embraiagens)
Mimoso, Rui Miguel Pereira
2011-01-01
Dissertation submitted for the Master's degree in the Integrated Master's programme in Mechanical Engineering. This dissertation gathers the calculation models used to determine the torques in brakes and clutches. The work considers brakes and clutches with dry friction and with viscous friction. For viscous-friction brakes, cases are considered in which the characteristics of the fluids are not induced, and others in which modifications to those same characteristics are induced. They are...
Linac design algorithm with symmetric segments
International Nuclear Information System (INIS)
Takeda, Harunori; Young, L.M.; Nath, S.; Billen, J.H.; Stovall, J.E.
1996-01-01
The cell lengths in linacs of traditional design are typically graded as a function of particle velocity. By making groups of cells and individual cells symmetric in both the CCDTL and CCL, the cavity design as well as the mechanical design and fabrication are simplified without compromising performance. We have implemented a design algorithm in the PARMILA code in which cells and multi-cavity segments are made symmetric, significantly reducing the number of unique components. Using the symmetric algorithm, a sample linac design was generated and its performance compared with that of a similar one of conventional design.
Nature-inspired optimization algorithms
Yang, Xin-She
2014-01-01
Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning
Estimation of ocular volume from axial length.
Nagra, Manbir; Gilmartin, Bernard; Logan, Nicola S
2014-12-01
To determine which biometric parameters provide optimum predictive power for ocular volume. Sixty-seven adult subjects were scanned with a Siemens 3-T MRI scanner. Mean spherical error (MSE, D) was measured with a Shin-Nippon autorefractor, and a Zeiss IOLMaster was used to measure (in mm) axial length (AL), anterior chamber depth (ACD) and corneal radius (CR). Total ocular volume (TOV) was calculated from T2-weighted MRIs (voxel size 1.0 mm³) using an automatic voxel counting and shading algorithm. Each MR slice was subsequently edited manually in the axial, sagittal and coronal planes, the latter enabling location of the posterior pole of the crystalline lens and partitioning of TOV into anterior (AV) and posterior volume (PV) regions. Mean values (±SD) for MSE (D), AL (mm), ACD (mm) and CR (mm) were -2.62±3.83, 24.51±1.47, 3.55±0.34 and 7.75±0.28, respectively. Mean values (±SD) for TOV, AV and PV (mm³) were 8168.21±1141.86, 1099.40±139.24 and 7068.82±1134.05, respectively. TOV showed significant correlation with MSE, AL, PV (all p<0.001), CR (p=0.043) and ACD (p=0.024). Except for CR, the correlations were shown to be wholly attributable to variation in PV. Multiple linear regression indicated that the combination of AL and CR provided an optimum R² value of 79.4% for TOV. Clinically useful estimations of ocular volume can be obtained from measurement of AL and CR. Published by the BMJ Publishing Group Limited.
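The multiple linear regression step can be sketched as follows. The data below are synthetic and the exact coefficients are illustrative assumptions; only the model form (TOV regressed on AL and CR with an intercept) follows the abstract:

```python
import numpy as np

# Ordinary least squares predicting total ocular volume (TOV) from axial
# length (AL) and corneal radius (CR); returns coefficients and R^2.
def fit_tov(al, cr, tov):
    X = np.column_stack([np.ones_like(al), al, cr])   # intercept, AL, CR
    beta, *_ = np.linalg.lstsq(X, tov, rcond=None)
    pred = X @ beta
    ss_res = np.sum((tov - pred) ** 2)
    ss_tot = np.sum((tov - tov.mean()) ** 2)
    return beta, 1.0 - ss_res / ss_tot
```

With the study's data this two-predictor fit reportedly reaches R² ≈ 0.794; here a noiseless synthetic relation simply checks that the fit recovers known coefficients.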
VISUALIZATION OF PAGERANK ALGORITHM
Perhaj, Ervin
2013-01-01
The goal of the thesis is to develop a web application that helps users understand the functioning of the PageRank algorithm. The thesis consists of two parts. First we develop an algorithm to calculate PageRank values of web pages. The input of the algorithm is a list of web pages and the links between them. The user enters the list through the web interface. From these data the algorithm calculates a PageRank value for each page. The algorithm repeats the process until the difference of PageRank va...
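The iteration described (repeat until the PageRank values stop changing) can be sketched as a plain power iteration; this is a generic illustration, not the thesis code, and it assumes every page has at least one outgoing link:

```python
# Minimal PageRank by power iteration over an adjacency mapping
# {page: set of pages it links to}. Assumes no dangling pages.
def pagerank(links, d=0.85, tol=1e-10):
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    while True:
        new = {}
        for p in pages:
            # contributions from pages q that link to p, split evenly
            incoming = sum(rank[q] / len(links[q])
                           for q in pages if p in links[q])
            new[p] = (1 - d) / n + d * incoming
        if max(abs(new[p] - rank[p]) for p in pages) < tol:
            return new
        rank = new
```

The loop terminates because each sweep contracts the error by roughly the damping factor d.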
Akl, Selim G
1985-01-01
Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms for architectures such as linear arrays, mesh-connected computers, and cube-connected computers. Another example where the algorithms can be applied is the shared-memory SIMD (single instruction stream, multiple data stream) computer, in which the whole sequence to be sorted can fit in the
Directory of Open Access Journals (Sweden)
André Luiz Galo
2009-01-01
We describe the design and tests of a set-up mounted in a conventional double-beam spectrophotometer which allows the determination of the optical density of samples confined in a long liquid-core waveguide (LCW) capillary. A very long optical path length can be achieved with a capillary cell, allowing measurements of samples with very low optical densities. The device uses a custom optical concentrator optically coupled to the LCW (TEFLON® AF). Optical density measurements, carried out using an LCW of ~45 cm, were in accordance with the Beer-Lambert law. Thus, it was possible to analyze quantitatively samples at concentrations 45-fold lower than those regularly used in spectrophotometric measurements.
Modified Clipped LMS Algorithm
Directory of Open Access Journals (Sweden)
Lotfizad Mojtaba
2005-01-01
A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely the clipped LMS, and uses a three-level quantization scheme that involves threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. Also, it can be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm. In addition, this algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, but this causes slower convergence. Computer simulations confirm the mathematical analysis presented.
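The update idea can be sketched as follows. The function names and the threshold value are our own illustrative choices; the point is that the three-level clipping of the input reduces the weight-update multiplications to sign operations:

```python
# Three-level quantizer: +1, 0, or -1 depending on a clipping threshold t.
def clip3(x, t):
    return 1.0 if x > t else (-1.0 if x < -t else 0.0)

# One update of a clipped-LMS-style adaptive filter.
# w: weight list, x: input vector (most recent sample first), d: desired
# sample, mu: step size, t: clipping threshold.
def mclms_step(w, x, d, mu, t):
    y = sum(wi * xi for wi, xi in zip(w, x))   # output uses the raw input
    e = d - y
    # only the update uses the quantized input, avoiding multiplications
    return [wi + mu * e * clip3(xi, t) for wi, xi in zip(w, x)], e
```

Run over a stream of samples, the error shrinks as the weights approach the Wiener solution; a larger threshold zeroes more updates, trading convergence speed for robustness, as the abstract discusses.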
Holst, Glendon
2016-12-01
Serial section electron microscopy (SSEM) image stacks generated using high-throughput microscopy techniques are an integral tool for investigating brain connectivity and cell morphology. FIB or 3View scanning electron microscopes easily generate gigabytes of data. In order to produce an analyzable 3D dataset from the imaged volumes, efficient and reliable image segmentation is crucial. Classical manual approaches to segmentation are time consuming and labour intensive. Semiautomatic seeded watershed segmentation algorithms, such as those implemented by the ilastik image processing software, are a very powerful alternative, substantially speeding up segmentation times. We have used ilastik effectively for small EM stacks – on a laptop, no less; however, ilastik was unable to carve the large EM stacks we needed to segment because its memory requirements grew too large – even for the biggest workstations we had available. For this reason, we refactored the carving module of ilastik to scale it up to large EM stacks on large workstations, and tested its efficiency. We modified the carving module, building on existing blockwise processing functionality to process data in manageable chunks that fit within RAM (main memory). We review this refactoring work, highlighting the software architecture, design choices, modifications, and issues encountered.
Fast implementation of length-adaptive privacy amplification in quantum key distribution
International Nuclear Information System (INIS)
Zhang Chun-Mei; Li Mo; Huang Jing-Zheng; Li Hong-Wei; Li Fang-Yi; Wang Chuan; Yin Zhen-Qiang; Chen Wei; Han Zhen-Fu; Treeviriyanupab Patcharapong; Sripimanwat Keattisak
2014-01-01
Post-processing is indispensable in quantum key distribution (QKD), which is aimed at sharing secret keys between two distant parties. It mainly consists of key reconciliation and privacy amplification, which are used for sharing the same keys and for distilling unconditionally secret keys. In this paper, we focus on speeding up the privacy amplification process by choosing a simple multiplicative universal class of hash functions. By constructing an optimal multiplication algorithm based on four basic multiplication algorithms, we give a fast software implementation of length-adaptive privacy amplification. "Length-adaptive" indicates that the implementation of privacy amplification automatically adapts to different lengths of input blocks. When the lengths of the input blocks are 1 Mbit and 10 Mbit, the speed of privacy amplification can be as fast as 14.86 Mbps and 10.88 Mbps, respectively. Thus, it is practical for GHz or even higher repetition frequency QKD systems.
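A "simple multiplicative universal class" of the kind referred to can be illustrated with the well-known multiply-shift family (our assumption for this sketch; the paper's exact family and multiplication algorithm are not reproduced here): an n-bit block is multiplied by an odd n-bit constant modulo 2^n and the top r bits are kept as the distilled key.

```python
# Multiply-shift hashing: compress an n-bit block x to an r-bit output.
# For odd a chosen uniformly at random, this family is universal, which is
# what privacy amplification requires of the hash class.
def mult_hash(x, a, n, r):
    assert a % 2 == 1 and 0 <= x < (1 << n) and 0 < r <= n
    return ((a * x) % (1 << n)) >> (n - r)
```

"Length-adaptive" then simply means n and r are set per input block; the heavy cost is the big-integer multiplication a*x, which is what the paper's optimized multiplication algorithm accelerates.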
Convolutional Encoder and Viterbi Decoder Using SOPC For Variable Constraint Length
DEFF Research Database (Denmark)
Kulkarni, Anuradha; Dnyaneshwar, Mantri; Prasad, Neeli R.
2013-01-01
Convolution encoders and Viterbi decoders are basic and important blocks in any Code Division Multiple Access (CDMA) system. They are widely used in communication systems due to their error-correcting capability, but the performance degrades with variable constraint length. In this context, to have a detailed analysis, this paper deals with the implementation of a convolution encoder and Viterbi decoder using a system on a programmable chip (SOPC). It uses variable constraint lengths of 7, 8 and 9 bits for 1/2 and 1/3 code rates. By analyzing the Viterbi algorithm it is seen that our algorithm has a better...
[Orthogonal Vector Projection Algorithm for Spectral Unmixing].
Song, Mei-ping; Xu, Xing-wei; Chang, Chein-I; An, Ju-bai; Yao, Li
2015-12-01
Spectral unmixing is an important part of hyperspectral technology, essential for material quantity analysis in hyperspectral imagery. Most linear unmixing algorithms require computations of matrix multiplication and matrix inversion or matrix determinants. These are difficult to program and especially hard to realize on hardware. At the same time, the computational costs of the algorithms increase significantly as the number of endmembers grows. Here, based on the traditional Orthogonal Subspace Projection algorithm, a new method called Orthogonal Vector Projection is proposed using the orthogonality principle. It simplifies the process by avoiding matrix multiplication and inversion. It first computes the final orthogonal vector via the Gram-Schmidt process for each endmember spectrum. These orthogonal vectors are then used as projection vectors for the pixel signature. The unconstrained abundance can be obtained directly by projecting the signature onto the projection vectors and computing the ratio of the projected vector length to the orthogonal vector length. Compared to the Orthogonal Subspace Projection and Least Squares Error algorithms, this method does not need matrix inversion, which is computationally costly and hard to implement on hardware. It completes the orthogonalization process by repeated vector operations, easy to apply in both parallel computation and hardware. The reasonableness of the algorithm is proved by its relationship with the Orthogonal Subspace Projection and Least Squares Error algorithms. Its computational complexity is also compared with that of the other two algorithms and is the lowest. Finally, experimental results on synthetic and real images are provided, giving further evidence for the effectiveness of the method.
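The projection idea can be sketched directly from the description: for each endmember, a Gram-Schmidt sweep builds a vector orthogonal to all the other endmembers, and the abundance falls out as a ratio of two dot products, with no matrix inversion. This is our reading of the method, not the authors' code; it assumes linearly independent endmember spectra:

```python
import numpy as np

# Orthogonal-vector-projection-style unmixing of one pixel.
# E: (bands x p) endmember matrix, pixel: (bands,) mixed signature.
def ovp_unmix(E, pixel):
    p = E.shape[1]
    abundances = np.empty(p)
    for i in range(p):
        # orthogonalize the other endmembers among themselves (Gram-Schmidt)
        basis = []
        for j in range(p):
            if j == i:
                continue
            v = E[:, j].astype(float).copy()
            for b in basis:
                v = v - (v @ b) / (b @ b) * b
            basis.append(v)
        # remove from endmember i its component in the span of the others
        u = E[:, i].astype(float).copy()
        for b in basis:
            u = u - (u @ b) / (b @ b) * b
        # u is orthogonal to every other endmember, so for a linear mixture
        # pixel = sum_j a_j e_j we get pixel.u = a_i * (e_i.u)
        abundances[i] = (pixel @ u) / (E[:, i] @ u)
    return abundances
```

On a noiseless linear mixture this recovers the unconstrained abundances exactly, which is the sense in which the method matches the least-squares solution.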
On algorithm for building of optimal α-decision trees
Alkhalid, Abdulaziz
2010-01-01
The paper describes an algorithm that constructs approximate decision trees (α-decision trees), which are optimal relatively to one of the following complexity measures: depth, total path length or number of nodes. The algorithm uses dynamic programming and extends methods described in [4] to constructing approximate decision trees. Adjustable approximation rate allows controlling algorithm complexity. The algorithm is applied to build optimal α-decision trees for two data sets from UCI Machine Learning Repository [1]. © 2010 Springer-Verlag Berlin Heidelberg.
Directory of Open Access Journals (Sweden)
Adriana Garófolo
2010-10-01
The store-bought supplement reduced the nutritional deficit, mainly in the mildly malnourished. The results suggest that tube feeding of the store-bought supplement favored nutritional recovery, especially with more prolonged use. Objective: This study aimed to describe the algorithm and the global results after its implementation. Methods: This was a randomized clinical trial of malnourished cancer patients. Follow-up followed an algorithm, and patients with mild malnutrition were randomized to receive store-bought or homemade oral supplementation. The patients were reassessed after three, eight and twelve weeks. Depending on how the group supplemented with store-bought supplements responded, the supplementation was either continued orally, continued by tube feeding, or discontinued. The group receiving homemade supplementation either continued on it if the response was positive or received store-bought oral supplementation if the response was negative. The severely malnourished patients either received store-bought supplementation by feeding tube or orally, or it was discontinued if an adequate nutritional status was reached. The patients' responses to supplementation were determined by weight-for-height Z-scores, body mass indices, triceps skinfold thicknesses and circumferences. Results: One hundred and seventeen out of 141 patients completed the first three weeks; 58 were severely malnourished and 59 were mildly malnourished. The nutritional status of 41% of the severely malnourished patients and 97% of the mildly malnourished patients receiving store-bought supplement orally improved. The nutritional status of 77% of the mildly malnourished patients receiving homemade supplement orally also improved. Of the 117 patients, 42 had to be tube-fed; of these, 23 accepted and 19 refused tube feeding and continued taking store-bought supplement orally. Consumption of store-bought supplement was higher in tube-fed patients than in orally-fed patients. Consumption also increased as orally
Dermatoses in chronic renal patients on dialysis therapy (Dermatoses em renais crônicos em terapia dialítica)
Directory of Open Access Journals (Sweden)
Luis Alberto Batista Peres
2014-03-01
Objective: Skin and mucosal disorders are common in patients on long-term hemodialysis. Dialysis prolongs life expectancy, giving these abnormalities time to manifest. The objective of this study was to evaluate the prevalence of dermatological problems in patients with chronic kidney disease (CKD) on hemodialysis. Methods: One hundred and forty-five patients with chronic kidney disease on hemodialysis were studied. All patients were fully examined for skin, hair, mucosal and nail alterations by a single examiner, and laboratory data were collected. The data were stored in a Microsoft Excel database and analyzed by descriptive statistics. Continuous variables were compared by Student's t test and categorical variables using the chi-squared test or Fisher's exact test, as appropriate. Results: The study included 145 patients, with a mean age of 53.6 ± 14.7 years, predominantly male (64.1%) and Caucasian (90.0%). The mean time on dialysis was 43.3 ± 42.3 months. The main underlying diseases were arterial hypertension in 33.8%, diabetes mellitus in 29.6% and chronic glomerulonephritis in 13.1%. The main dermatological manifestations observed were xerosis in 109 (75.2%), ecchymosis in 87 (60.0%), pruritus in 78 (53.8%) and lentigo in 33 (22.8%) patients. Conclusion: Our study showed the presence of more than one dermatosis per patient. Skin alterations are frequent in dialysis patients. More studies are needed for better characterization and management of these dermatoses.
Directory of Open Access Journals (Sweden)
Joab Trajano Silva
2008-12-01
Mycobacterium bovis is a member of the Mycobacterium tuberculosis complex (MTBC), a group composed of species with high genetic homology. It is the etiological agent of bovine tuberculosis, an important zoonosis transmissible to humans, mainly through inhalation of the bacillus and/or consumption of unpasteurized milk and dairy products from tuberculous cows. The objective of this study was to standardize the identification of mycobacteria of the M. tuberculosis complex present in milk by molecular methodology. DNA was extracted directly from contaminated milk, and molecular identification was performed by polymerase chain reaction followed by restriction analysis of the amplified fragment (PRA). Reference strains and raw milk artificially contaminated with M. bovis IP were used. A 441-bp fragment of the hsp65 gene was amplified and digested with BstEII and HaeIII, and the resulting restriction profile was used to identify the M. tuberculosis complex in milk. With PRA it was possible to detect, with specificity and sensitivity, the presence of M. bovis down to 10 CFU/mL of milk. The standardized methodology may assist the microbiological and biochemical methods traditionally used to identify the bacillus in foods suspected of contamination, such as milk from animals suspected of M. bovis infection.
Keywords: restriction enzyme pattern analysis (PRA), Mycobacterium tuberculosis complex, milk, Mycobacterium bovis, detection limit (PCR). Mycobacterium bovis is a member of the M. tuberculosis complex, a group composed of species with high genetic homology. The pathogen is the etiological agent of bovine tuberculosis, an important zoonosis that is mainly transmitted by inhalation of infectious droplet nuclei or by ingestion of milk and crude milk derivative products from tuberculous cows. The definitive identification of M. bovis
Semioptimal practicable algorithmic cooling
International Nuclear Information System (INIS)
Elias, Yuval; Mor, Tal; Weinstein, Yossi
2011-01-01
Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.
International Nuclear Information System (INIS)
Rupnik, K.; Asaf, U.; McGlynn, S.P.
1990-01-01
A linear correlation exists between the electron scattering length, as measured by a pressure shift method, and the polarizabilities for He, Ne, Ar, Kr, and Xe gases. The correlative algorithm has excellent predictive capability for the electron scattering lengths of mixtures of rare gases, simple molecular gases such as H2 and N2, and even complex molecular entities such as methane, CH4
Considerations and Algorithms for Compression of Sets
DEFF Research Database (Denmark)
Larsson, Jesper
We consider compression of unordered sets of distinct elements. After a discussion of the general problem, we focus on compressing sets of fixed-length bitstrings in the presence of statistical information. We survey techniques from previous work, suggesting some adjustments, and propose a novel compression algorithm that allows transparent incorporation of various estimates for probability distribution. Our experimental results allow the conclusion that set compression can benefit from incorporating statistics, using our method or variants of previously known techniques.
Online EM with weight-based forgetting
Celaya, Enric; Agostini, Alejandro
2015-01-01
In the on-line version of the EM algorithm introduced by Sato and Ishii (2000), a time-dependent discount factor is introduced for forgetting the effect of the old posterior values obtained with an earlier, inaccurate estimator. In their approach, forgetting is uniformly applied to the estimators of each mixture component depending exclusively on time, irrespective of the weight attributed to each unit for the observed sample. This causes excessive forgetting in the less frequently sampled...
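The role of a time-dependent discount factor can be sketched with a simple running estimate in which old sufficient statistics are down-weighted at each step; the function name and the constant decay schedule below are illustrative assumptions, not Sato and Ishii's exact update.

```python
def online_mean(samples, decay=0.99):
    """Running estimate in which past statistics are discounted by
    `decay` at each step, so early (inaccurate) values are forgotten."""
    weight_sum = 0.0
    stat_sum = 0.0
    estimates = []
    for x in samples:
        weight_sum = decay * weight_sum + 1.0  # discounted sample count
        stat_sum = decay * stat_sum + x        # discounted running sum
        estimates.append(stat_sum / weight_sum)
    return estimates
```

With `decay < 1` the estimate tracks recent samples; `decay = 1` recovers the plain sample mean with no forgetting.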
Development of regularized expectation maximization algorithms for fan-beam SPECT data
International Nuclear Information System (INIS)
Kim, Soo Mee; Lee, Jae Sung; Lee, Dong Soo; Lee, Soo Jin; Kim, Kyeong Min
2005-01-01
SPECT using a fan-beam collimator improves spatial resolution and sensitivity. For the reconstruction from fan-beam projections, it is necessary to implement direct fan-beam reconstruction methods without transforming the data into the parallel geometry. In this study, various fan-beam reconstruction algorithms were implemented and their performances were compared. The projector for fan-beam SPECT was implemented using a ray-tracing method. The direct reconstruction algorithms implemented for fan-beam projection data were FBP (filtered backprojection), EM (expectation maximization), OS-EM (ordered subsets EM) and MAP-EM OSL (maximum a posteriori EM using the one-step late method) with membrane and thin-plate models as priors. For comparison, the fan-beam projection data were also rebinned into the parallel data using various interpolation methods, such as the nearest neighbor, bilinear and bicubic interpolations, and reconstructed using the conventional EM algorithm for parallel data. Noiseless and noisy projection data from the digital Hoffman brain and Shepp/Logan phantoms were reconstructed using the above algorithms. The reconstructed images were compared in terms of a percent error metric. For the fan-beam data with Poisson noise, the MAP-EM OSL algorithm with the thin-plate prior showed the best result in both percent error and stability. Bilinear interpolation was the most effective method for rebinning from the fan-beam to parallel geometry when the accuracy and computation load were considered. Direct fan-beam EM reconstructions were more accurate than the standard EM reconstructions obtained from rebinned parallel data. Direct fan-beam reconstruction algorithms were implemented, which provided significantly improved reconstructions
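The EM reconstruction referred to here is the standard multiplicative MLEM update, x ← x / (Aᵀ1) · Aᵀ(y / Ax). A geometry-agnostic sketch with a toy system matrix follows; the 2×2 matrix is a stand-in for illustration, not the authors' ray-tracing fan-beam projector.

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """Multiplicative EM (MLEM) update for emission tomography:
    x <- x / (A^T 1) * A^T (y / (A x)); iterates stay nonnegative."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])   # sensitivity image, A^T 1
    for _ in range(n_iter):
        ratio = y / (A @ x)            # measured over modeled projections
        x = x / sens * (A.T @ ratio)
    return x

# Toy 2-ray, 2-voxel system with noiseless, consistent data.
A = np.array([[1.0, 0.5],
              [0.5, 1.0]])
x_true = np.array([1.0, 2.0])
x_hat = mlem(A, A @ x_true)
```

For consistent noiseless data with a nonnegative exact solution, the iteration converges to that solution; with Poisson noise it converges to the maximum-likelihood estimate, which is where the regularizing MAP-EM priors come in.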
Estimating Hull Coating Thickness Distributions Using the EM Algorithm
National Research Council Canada - National Science Library
Corriere, Michael
2000-01-01
The underwater hull coating system on surface ships is comprised of anti-corrosive (AC) and anti-fouling (AF) paint. The AF layers are designed to wear away, continuously leaching cuprous oxide to inhibit marine growth...
Motion based segmentation for robot vision using adapted EM algorithm
Zhao, Wei; Roos, Nico
2016-01-01
Robots operate in a dynamic world in which objects are often moving. The movement of objects may help the robot to segment the objects from the background. The result of the segmentation can subsequently be used to identify the objects. This paper investigates the possibility of segmenting objects
A Generalized Partial Credit Model: Application of an EM Algorithm.
Muraki, Eiji
1992-01-01
The partial credit model with a varying slope parameter is developed and called the generalized partial credit model (GPCM). Analysis results for simulated data by this and other polytomous item-response models demonstrate that the rating formulation of the GPCM is adaptable to the analysis of polytomous item responses. (SLD)
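The GPCM's category probabilities follow from cumulative sums of slope-weighted distances between ability and step difficulties. A sketch under the usual IRT parameterization (the symbols are the standard ones, assumed rather than taken from the article):

```python
import math

def gpcm_probs(theta, a, b):
    """Category probabilities for one item under the generalized
    partial credit model: slope a, step difficulties b[0..m-1].
    Category 0 corresponds to the empty cumulative sum."""
    cumsums = [0.0]
    for bv in b:
        cumsums.append(cumsums[-1] + a * (theta - bv))
    expvals = [math.exp(c) for c in cumsums]
    total = sum(expvals)
    return [e / total for e in expvals]

# Three-category item; steps placed symmetrically around theta = 0.
probs = gpcm_probs(theta=0.0, a=1.2, b=[-0.5, 0.5])
```

Setting the slope `a` equal for all items recovers the ordinary partial credit model as a special case.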
Introduction to Evolutionary Algorithms
Yu, Xinjie
2010-01-01
Evolutionary algorithms (EAs) are becoming increasingly attractive for researchers from various disciplines, such as operations research, computer science, industrial engineering, electrical engineering, social science, economics, etc. This book presents an insightful, comprehensive, and up-to-date treatment of EAs, such as genetic algorithms, differential evolution, evolution strategy, constraint optimization, multimodal optimization, multiobjective optimization, combinatorial optimization, evolvable hardware, estimation of distribution algorithms, ant colony optimization, particle swarm optimization
Recursive forgetting algorithms
DEFF Research Database (Denmark)
Parkum, Jens; Poulsen, Niels Kjølstad; Holst, Jan
1992-01-01
In the first part of the paper, a general forgetting algorithm is formulated and analysed. It contains most existing forgetting schemes as special cases. Conditions are given ensuring that the basic convergence properties will hold. In the second part of the paper, the results are applied to a specific algorithm with selective forgetting. Here, the forgetting is non-uniform in time and space. The theoretical analysis is supported by a simulation example demonstrating the practical performance of this algorithm...
Explaining algorithms using metaphors
Forišek, Michal
2013-01-01
There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes for
Algorithms in Algebraic Geometry
Dickenstein, Alicia; Sommese, Andrew J
2008-01-01
In the last decade, there has been a burgeoning of activity in the design and implementation of algorithms for algebraic geometric computation. Some of these algorithms were originally designed for abstract algebraic geometry, but now are of interest for use in applications and some of these algorithms were originally designed for applications, but now are of interest for use in abstract algebraic geometry. The workshop on Algorithms in Algebraic Geometry that was held in the framework of the IMA Annual Program Year in Applications of Algebraic Geometry by the Institute for Mathematics and Its Applications
Woo, Andrew
2012-01-01
Digital shadow generation continues to be an important aspect of visualization and visual effects in film, games, simulations, and scientific applications. This resource offers a thorough picture of the motivations, complexities, and categorized algorithms available to generate digital shadows. From general fundamentals to specific applications, it addresses shadow algorithms and how to manage huge data sets from a shadow perspective. The book also examines the use of shadow algorithms in industrial applications, in terms of what algorithms are used and what software is applicable.
Spectral Decomposition Algorithm (SDA)
National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...
Quick fuzzy backpropagation algorithm.
Nikov, A; Stoeva, S
2001-03-01
A modification of the fuzzy backpropagation (FBP) algorithm, called the QuickFBP algorithm, is proposed, in which the computation of the net function is significantly quicker. It is proved that the FBP algorithm is of exponential time complexity, while the QuickFBP algorithm is of polynomial time complexity. Convergence conditions of the QuickFBP and the FBP algorithms are defined and proved for: (1) single output neural networks in the case of training patterns with different targets; and (2) multiple output neural networks in the case of training patterns with an equivalued target vector. They support the automation of the weights training process (quasi-unsupervised learning), establishing the target value(s) depending on the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large-sized neural network illustrates the significantly greater training speed of the QuickFBP algorithm compared to the FBP algorithm. The adaptation of an interactive web system to users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm ensures quasi-unsupervised learning, it is broadly applicable in areas such as adaptive and adaptable interactive systems and data mining.
Portfolios of quantum algorithms.
Maurer, S M; Hogg, T; Huberman, B A
2001-12-17
Quantum computation holds promise for the solution of many intractable problems. However, since many quantum algorithms are stochastic in nature, they can find the solution of hard problems only probabilistically. Thus the efficiency of the algorithms has to be characterized by both the expected time to completion and the associated variance. In order to minimize both the running time and its uncertainty, we show that portfolios of quantum algorithms analogous to those of finance can outperform single algorithms when applied to NP-complete problems such as 3-satisfiability.
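The portfolio effect itself is classical and easy to simulate: running two independent copies of a stochastic algorithm and stopping at the first completion reduces both the expected runtime and its variance. A toy sketch with exponentially distributed runtimes, an assumed model chosen because the two-copy portfolio then exactly halves the mean:

```python
import random

random.seed(0)

def runtimes(n, mean=1.0):
    """Simulated completion times of a stochastic search algorithm
    (exponential runtime model, an assumption for illustration)."""
    return [random.expovariate(1.0 / mean) for _ in range(n)]

n = 10000
t1 = runtimes(n)
t2 = runtimes(n)
# Two-algorithm portfolio: run both copies, stop at the first to finish.
portfolio = [min(a, b) for a, b in zip(t1, t2)]

mean_single = sum(t1) / n
mean_portfolio = sum(portfolio) / n
```

The gain comes at the price of doubled resources, so the interesting regimes are heavy-tailed runtime distributions, where the portfolio's advantage can exceed its cost.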
New Algorithm of Automatic Complex Password Generator Employing Genetic Algorithm
Directory of Open Access Journals (Sweden)
Sura Jasim Mohammed
2018-01-01
Full Text Available Owing to the increase in information sharing, internet popularization, e-commerce transactions, and data transfer, security and authenticity have become important and necessary concerns. In this paper an automated schema is proposed for generating a strong, complex password from entered initial data such as text (meaningful or not): the input is first encoded, and then a genetic algorithm, through its crossover and mutation operations, generates data different from the input. The generated password is non-guessable and can be used in many different applications and internet services, such as social networks, secured systems, distributed systems, and online services. The proposed generator achieves the diffusion, randomness, and confusion required of the resulting password. Moreover, the length of the generated password differs from the length of the initial data, and any small change in the initial data produces a clear change in the generated password. The proposed work was implemented in the Visual Basic programming language.
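The crossover-and-mutation step described in the abstract can be sketched as follows; every name, rate, and the diversity-based selection below are illustrative assumptions, not the paper's schema (which was implemented in Visual Basic).

```python
import random

def generate_password(seed_text, length=16, generations=50, seed=None):
    """Evolve a password from encoded seed text via one-point crossover
    and random mutation (toy sketch; assumes ASCII seed text)."""
    rng = random.Random(seed)
    encoded = seed_text.encode("utf-8")

    def rand_char():
        return rng.randrange(33, 127)          # printable ASCII

    # Initial population: mixtures of seed bytes and random characters.
    pop = [bytes(rng.choice(encoded) if rng.random() < 0.5 else rand_char()
                 for _ in range(length))
           for _ in range(8)]
    for _ in range(generations):
        a, b = rng.sample(pop, 2)
        cut = rng.randrange(1, length)
        child = bytearray(a[:cut] + b[cut:])   # one-point crossover
        for i in range(length):
            if rng.random() < 0.2:             # mutation
                child[i] = rand_char()
        pop[rng.randrange(len(pop))] = bytes(child)
    # Pick the most character-diverse member as the password.
    best = max(pop, key=lambda p: len(set(p)))
    return best.decode("ascii")

pwd = generate_password("my simple phrase", seed=1)
```

A real generator would score candidates with an entropy-style fitness function rather than raw character diversity, but the evolutionary loop is the same.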
A transport-based condensed history algorithm
International Nuclear Information System (INIS)
Tolar, D. R. Jr.
1999-01-01
Condensed history algorithms are approximate electron transport Monte Carlo methods in which the cumulative effects of multiple collisions are modeled in a single step of (user-specified) path length s0. This path length is the distance each Monte Carlo electron travels between collisions. Current condensed history techniques utilize a splitting routine over the range 0 ≤ s ≤ s0. For example, the PENELOPE method splits each step into two substeps: one with length ξs0 and one with length (1 − ξ)s0, where ξ is a random number in (0,1). Because s0 is fixed (not sampled from an exponential distribution), conventional condensed history schemes are not transport processes. Here the authors describe a new condensed history algorithm that is a transport process. The method simulates a transport equation that approximates the exact Boltzmann equation. The new transport equation has a larger mean free path than, and preserves two angular moments of, the Boltzmann equation. Thus, the new process is solved more efficiently by Monte Carlo, and it conserves both particles and scattering power
Algorithm 426 : Merge sort algorithm [M1
Bron, C.
1972-01-01
Sorting by means of a two-way merge has a reputation of requiring a clerically complicated and cumbersome program. This ALGOL 60 procedure demonstrates that, using recursion, an elegant and efficient algorithm can be designed, the correctness of which is easily proved [2]. Sorting n objects gives
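The same recursive two-way merge remains compact in a modern language; a straightforward sketch, not a transcription of the ALGOL 60 procedure:

```python
def merge_sort(seq):
    """Recursive two-way merge sort; returns a new sorted list."""
    if len(seq) <= 1:
        return list(seq)
    mid = len(seq) // 2
    left = merge_sort(seq[:mid])
    right = merge_sort(seq[mid:])
    # Merge the two sorted halves.
    merged = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
```

Using `<=` in the comparison keeps the sort stable, and the recursion depth is logarithmic in the input size.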
Energy Technology Data Exchange (ETDEWEB)
1993-07-01
It is the intent of EM International to describe the Office of Environmental Restoration and Waste Management's (EM's) various roles and responsibilities within the international community. Cooperative agreements and programs, descriptions of projects and technologies, and synopses of visits to international sites are all highlighted in this semiannual journal. The focus on EM programs in this issue is international collaboration in vitrification projects. Technology highlights cover in situ sealing for contaminated sites, and remote sensors for toxic pollutants. The section on country profiles includes Arctic contamination by the former Soviet Union, and EM activities with Germany, including cooperative arrangements.
Calculating Graph Algorithms for Dominance and Shortest Path
DEFF Research Database (Denmark)
Sergey, Ilya; Midtgaard, Jan; Clarke, Dave
2012-01-01
We calculate two iterative, polynomial-time graph algorithms from the literature: a dominance algorithm and an algorithm for the single-source shortest path problem. Both algorithms are calculated directly from the definition of the properties by fixed-point fusion of (1) a least fixed point expressing all finite paths through a directed graph and (2) Galois connections that capture dominance and path length. The approach illustrates that reasoning in the style of fixed-point calculus extends gracefully to the domain of graph algorithms. We thereby bridge common practice from the school of program calculation with common practice from the school of static program analysis, and build a novel view on iterative graph algorithms as instances of abstract interpretation...
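The fixed-point view of single-source shortest paths can be sketched directly: iterate the edge-relaxation operator until the distance vector stops changing, with Bellman-Ford as the classical instance. This illustrates the fixed-point formulation only, not the authors' calculational derivation, and it assumes the graph has no negative cycles.

```python
import math

def shortest_paths(n, edges, source):
    """Least fixed point of the relaxation operator
    dist[v] = min(dist[v], dist[u] + w) over edges (u, v, w)."""
    dist = [math.inf] * n
    dist[source] = 0.0
    changed = True
    while changed:                      # iterate until the fixed point
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
    return dist

d = shortest_paths(4, [(0, 1, 1.0), (1, 2, 2.0), (0, 2, 5.0), (2, 3, 1.0)], 0)
```

Each pass is one application of the monotone operator; since distances only decrease and are bounded below on a finite graph without negative cycles, the iteration reaches the least fixed point and stops.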
Short Rayleigh Length Free Electron Lasers
Crooker, P P; Armstead, R L; Blau, J
2004-01-01
Conventional free electron laser (FEL) oscillators minimize the optical mode volume around the electron beam in the undulator by making the resonator Rayleigh length about one third of the undulator length. This maximizes gain and beam-mode coupling. In compact configurations of high-power infrared FELs or moderate power UV FELs, the resulting optical intensity can damage the resonator mirrors. To increase the spot size and thereby reduce the optical intensity at the mirrors below the damage threshold, a shorter Rayleigh length can be used, but the FEL interaction is significantly altered. A new FEL interaction is described and analyzed with a Rayleigh length that is only one tenth the undulator length, or less. The effects of mirror vibration and positioning are more critical in the short Rayleigh length design, but we find that they are still within normal design tolerances.
Length dependent properties of SNS microbridges
International Nuclear Information System (INIS)
Sauvageau, J.E.; Jain, R.K.; Li, K.; Lukens, J.E.; Ono, R.H.
1985-01-01
Using an in-situ, self-aligned deposition scheme, arrays of variable-length SNS junctions in the range of 0.05 μm to 1 μm have been fabricated. Arrays of SNS microbridges of lead-copper and niobium-copper fabricated using this technique have been used to study the length dependence, at constant temperature, of the critical current I_c and bridge resistance R_d. For bridges with lengths L greater than the normal-metal coherence length ξ_n(T), the dependence of I_c on L is consistent with an exponential dependence on the reduced length l = L/ξ_n(T). For shorter bridges, deviations from this behavior are seen. It was also found that the bridge resistance R_d does not vary linearly with the geometric bridge length but appears to approach a finite value as L → 0
Energy Technology Data Exchange (ETDEWEB)
Lee, Youngrok [Iowa State Univ., Ames, IA (United States)
2013-05-15
Heterogeneity exists in a data set when samples from different classes are merged into the data set. Finite mixture models can be used to represent a survival time distribution on a heterogeneous patient group by the proportions of each class and by the survival time distribution within each class as well. The heterogeneous data set cannot be explicitly decomposed into homogeneous subgroups unless all the samples are precisely labeled by their origin classes; such impossibility of decomposition is a barrier to overcome for estimating finite mixture models. The expectation-maximization (EM) algorithm has been used to obtain maximum likelihood estimates of finite mixture models by soft decomposition of heterogeneous samples without labels for a subset or the entire set of data. In medical surveillance databases we can find partially labeled data; that is, while not completely unlabeled, there is only imprecise information about class values. In this study we propose new EM algorithms that take advantage of such partial labels, and thus incorporate more information than traditional EM algorithms. We particularly propose four variants of the EM algorithm named EM-OCML, EM-PCML, EM-HCML and EM-CPCML, each of which assumes a specific mechanism of missing class values. We conducted a simulation study on exponential survival trees with five classes and showed that the advantages of incorporating a substantial amount of partially labeled data can be highly significant. We also showed that model selection based on AIC values works fairly well in selecting the best proposed algorithm for each specific data set. A case study on a real-world data set of gastric cancer provided by the Surveillance, Epidemiology and End Results (SEER) program showed the superiority of EM-CPCML not only to the other proposed EM algorithms but also to conventional supervised, unsupervised and semi-supervised learning algorithms.
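The general mechanism of exploiting partial labels in EM can be sketched for a two-component Gaussian mixture: labeled samples have their E-step responsibilities pinned to 0 or 1, while unlabeled samples receive posterior responsibilities as usual. This is a generic illustration under assumed unit variances, not the EM-OCML/EM-PCML/EM-HCML/EM-CPCML mechanisms or the survival-tree setting of the study.

```python
import math
import random

def em_partial(xs, labels, n_iter=100):
    """EM for a two-component, unit-variance Gaussian mixture, where
    labels[i] is 0/1 when known and None when missing; known labels
    pin the E-step responsibilities."""
    mu = [min(xs), max(xs)]                    # crude initialization
    pi = 0.5                                   # weight of component 1
    for _ in range(n_iter):
        resp = []
        for x, lab in zip(xs, labels):
            if lab is not None:
                r = float(lab)                 # label fixes responsibility
            else:
                p0 = (1 - pi) * math.exp(-0.5 * (x - mu[0]) ** 2)
                p1 = pi * math.exp(-0.5 * (x - mu[1]) ** 2)
                r = p1 / (p0 + p1)
            resp.append(r)
        w1 = sum(resp)
        w0 = len(xs) - w1
        mu[0] = sum((1 - r) * x for r, x in zip(resp, xs)) / w0
        mu[1] = sum(r * x for r, x in zip(resp, xs)) / w1
        pi = w1 / len(xs)
    return mu, pi

random.seed(0)
xs = ([random.gauss(0.0, 1.0) for _ in range(200)]
      + [random.gauss(5.0, 1.0) for _ in range(200)])
labels = [0] * 20 + [None] * 180 + [1] * 20 + [None] * 180
mu, pi = em_partial(xs, labels)
```

Even a small fraction of labeled samples anchors the components, which is the intuition behind the reported gains from partially labeled data.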
Measuring Crack Length in Coarse Grain Ceramics
Salem, Jonathan A.; Ghosn, Louis J.
2010-01-01
Due to a coarse grain structure, crack lengths in precracked spinel specimens could not be measured optically, so the crack lengths and fracture toughness were estimated by strain gage measurements. An expression was developed via finite element analysis to correlate the measured strain with crack length in four-point flexure. The fracture toughness estimated by the strain gaged samples and another standardized method were in agreement.
Dither Cavity Length Controller with Iodine Locking
Directory of Open Access Journals (Sweden)
Lawson Marty
2016-01-01
Full Text Available A cavity length controller for a seeded Q-switched frequency doubled Nd:YAG laser is constructed. The cavity length controller uses a piezo-mirror dither voltage to find the optimum length for the seeded cavity. The piezo-mirror dither also dithers the optical frequency of the output pulse. [1]. This dither in optical frequency is then used to lock to an Iodine absorption line.
Beyond Mixing-length Theory: A Step Toward 321D
Arnett, W. David; Meakin, Casey; Viallet, Maxime; Campbell, Simon W.; Lattanzio, John C.; Mocák, Miroslav
2015-08-01
We examine the physical basis for algorithms to replace mixing-length theory (MLT) in stellar evolutionary computations. Our 321D procedure is based on numerical solutions of the Navier-Stokes equations. These implicit large eddy simulations (ILES) are three-dimensional (3D), time-dependent, and turbulent, including the Kolmogorov cascade. We use the Reynolds-averaged Navier-Stokes (RANS) formulation to make concise the 3D simulation data, and use the 3D simulations to give closure for the RANS equations. We further analyze this data set with a simple analytical model, which is non-local and time-dependent, and which contains both MLT and the Lorenz convective roll as particular subsets of solutions. A characteristic length (the damping length) again emerges in the simulations; it is determined by an observed balance between (1) the large-scale driving, and (2) small-scale damping. The nature of mixing and convective boundaries is analyzed, including dynamic, thermal and compositional effects, and compared to a simple model. We find that (1) braking regions (boundary layers in which mixing occurs) automatically appear beyond the edges of convection as defined by the Schwarzschild criterion, (2) dynamic (non-local) terms imply a non-zero turbulent kinetic energy flux (unlike MLT), (3) the effects of composition gradients on flow can be comparable to thermal effects, and (4) convective boundaries in neutrino-cooled stages differ in nature from those in photon-cooled stages (different Péclet numbers). The algorithms are based upon ILES solutions to the Navier-Stokes equations, so that, unlike MLT, they do not require any calibration to astronomical systems in order to predict stellar properties. Implications for solar abundances, helioseismology, asteroseismology, nucleosynthesis yields, supernova progenitors and core collapse are indicated.
BEYOND MIXING-LENGTH THEORY: A STEP TOWARD 321D
International Nuclear Information System (INIS)
Arnett, W. David; Meakin, Casey; Viallet, Maxime; Campbell, Simon W.; Lattanzio, John C.; Mocák, Miroslav
2015-01-01
We examine the physical basis for algorithms to replace mixing-length theory (MLT) in stellar evolutionary computations. Our 321D procedure is based on numerical solutions of the Navier–Stokes equations. These implicit large eddy simulations (ILES) are three-dimensional (3D), time-dependent, and turbulent, including the Kolmogorov cascade. We use the Reynolds-averaged Navier–Stokes (RANS) formulation to make concise the 3D simulation data, and use the 3D simulations to give closure for the RANS equations. We further analyze this data set with a simple analytical model, which is non-local and time-dependent, and which contains both MLT and the Lorenz convective roll as particular subsets of solutions. A characteristic length (the damping length) again emerges in the simulations; it is determined by an observed balance between (1) the large-scale driving, and (2) small-scale damping. The nature of mixing and convective boundaries is analyzed, including dynamic, thermal and compositional effects, and compared to a simple model. We find that (1) braking regions (boundary layers in which mixing occurs) automatically appear beyond the edges of convection as defined by the Schwarzschild criterion, (2) dynamic (non-local) terms imply a non-zero turbulent kinetic energy flux (unlike MLT), (3) the effects of composition gradients on flow can be comparable to thermal effects, and (4) convective boundaries in neutrino-cooled stages differ in nature from those in photon-cooled stages (different Péclet numbers). The algorithms are based upon ILES solutions to the Navier–Stokes equations, so that, unlike MLT, they do not require any calibration to astronomical systems in order to predict stellar properties. Implications for solar abundances, helioseismology, asteroseismology, nucleosynthesis yields, supernova progenitors and core collapse are indicated
Composite Differential Search Algorithm
Directory of Open Access Journals (Sweden)
Bo Liu
2014-01-01
Full Text Available Differential search algorithm (DS) is a relatively new evolutionary algorithm inspired by the Brownian-like random-walk movement which is used by an organism to migrate. It has been verified to be more effective than ABC, JDE, JADE, SADE, EPSDE, GSA, PSO2011, and CMA-ES. In this paper, we propose four improved solution search algorithms, namely “DS/rand/1,” “DS/rand/2,” “DS/current to rand/1,” and “DS/current to rand/2,” to search the new space and enhance the convergence rate for the global optimization problem. In order to verify the performance of the different solution search methods, 23 benchmark functions are employed. Experimental results indicate that the proposed algorithm performs better than, or at least comparably to, the original algorithm when considering the quality of the solution obtained. However, these schemes still cannot achieve the best solution for all functions. In order to further enhance the convergence rate and the diversity of the algorithm, a composite differential search algorithm (CDS) is proposed in this paper. This new algorithm combines three of the proposed search schemes, “DS/rand/1,” “DS/rand/2,” and “DS/current to rand/1,” with three control parameters, using a random method to generate the offspring. Experimental results show that CDS has a faster convergence rate and better search ability on the 23 benchmark functions.
Algorithms and Their Explanations
Benini, M.; Gobbo, F.; Beckmann, A.; Csuhaj-Varjú, E.; Meer, K.
2014-01-01
By analysing the explanation of the classical heapsort algorithm via the method of levels of abstraction mainly due to Floridi, we give a concrete and precise example of how to deal with algorithmic knowledge. To do so, we introduce a concept already implicit in the method, the ‘gradient of
Finite lattice extrapolation algorithms
International Nuclear Information System (INIS)
Henkel, M.; Schuetz, G.
1987-08-01
Two algorithms for sequence extrapolation, due to Vanden Broeck and Schwartz and to Bulirsch and Stoer, are reviewed and critically compared. Applications to three-state and six-state quantum chains and to the (2+1)D Ising model show that the algorithm of Bulirsch and Stoer is superior, in particular if only very few finite lattice data are available. (orig.)
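The flavor of such extrapolation can be sketched with plain polynomial (Neville) extrapolation of a finite-size sequence in the variable 1/n, evaluated at 1/n → 0. The Bulirsch-Stoer algorithm uses rational interpolation, so this is a simplified stand-in rather than either of the reviewed algorithms.

```python
def extrapolate(ns, values):
    """Neville polynomial extrapolation of values[i] = a(ns[i]),
    treated as a function of h = 1/n and evaluated at h = 0."""
    h = [1.0 / n for n in ns]
    t = list(values)
    m = len(t)
    for k in range(1, m):
        for i in range(m - k):
            # Neville recursion specialized to the target point h = 0.
            t[i] = (h[i] * t[i + 1] - h[i + k] * t[i]) / (h[i] - h[i + k])
    return t[0]

# Sequence with 1/n and 1/n^2 corrections, converging to 2.
ns = [2, 4, 8, 16]
seq = [2.0 + 1.0 / n + 3.0 / n ** 2 for n in ns]
est = extrapolate(ns, seq)
```

Because the corrections here are exactly polynomial in 1/n, four terms already recover the limit to machine precision; for generic lattice data the choice between polynomial and rational extrapolation is what the compared algorithms differ on.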
Recursive automatic classification algorithms
Energy Technology Data Exchange (ETDEWEB)
Bauman, E V; Dorofeyuk, A A
1982-03-01
A variational statement of the automatic classification problem is given. The dependence of the form of the optimal partition surface on the form of the classification objective functional is investigated. A recursive algorithm is proposed for maximising a functional of reasonably general form. The convergence problem is analysed in connection with the proposed algorithm. 8 references.
DEFF Research Database (Denmark)
Husfeldt, Thore
2015-01-01
This chapter presents an introduction to graph colouring algorithms. The focus is on vertex-colouring algorithms that work for general classes of graphs with worst-case performance guarantees in a sequential model of computation. The presentation aims to demonstrate the breadth of available...
8. Algorithm Design Techniques
Indian Academy of Sciences (India)
Algorithms - Algorithm Design Techniques. R K Shyamasundar. Series Article, Resonance – Journal of Science Education, Volume 2, Issue 8. Author affiliation: Computer Science Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India.
Information, polarization and term length in democracy
DEFF Research Database (Denmark)
Schultz, Christian
2008-01-01
This paper considers term lengths in a representative democracy where the political issue divides the population on the left-right scale. Parties are ideologically different and better informed about the consequences of policies than voters are. A short term length makes the government more accountable, but the re-election incentive leads to policy distortion as the government seeks to manipulate swing voters' beliefs to make its ideology more popular. This creates a trade-off: a short term length improves accountability but gives distortions. A short term length is best for swing voters when...
On Line Segment Length and Mapping 4-regular Grid Structures in Network Infrastructures
DEFF Research Database (Denmark)
Riaz, Muhammad Tahir; Nielsen, Rasmus Hjorth; Pedersen, Jens Myrup
2006-01-01
The paper focuses on mapping the road network into 4-regular grid structures. A mapping algorithm is proposed. To model the road network, Geographic Information System (GIS) data have been used. The GIS data for the road network are composed of line segments of varying lengths...
Geometric approximation algorithms
Har-Peled, Sariel
2011-01-01
Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.
Group leaders optimization algorithm
Daskin, Anmer; Kais, Sabre
2011-03-01
We present a new global optimization algorithm in which the influence of the leaders in social groups is used as an inspiration for the evolutionary technique, which is designed into a group architecture. To demonstrate the efficiency of the method, a standard suite of single and multi-dimensional optimization functions along with the energies and the geometric structures of Lennard-Jones clusters are given as well as the application of the algorithm on quantum circuit design problems. We show that as an improvement over previous methods, the algorithm scales as N^2.5 for Lennard-Jones clusters of N particles. In addition, an efficient circuit design is shown for a two-qubit Grover search algorithm, which is a quantum algorithm providing quadratic speedup over its classical counterpart.
International Nuclear Information System (INIS)
Noga, M.T.
1984-01-01
This thesis addresses a number of important problems that fall within the framework of the new discipline of Computational Geometry. The list of topics covered includes sorting and selection, convex hull algorithms, the L1 hull, determination of the minimum encasing rectangle of a set of points, the Euclidean and L1 diameter of a set of points, the metric traveling salesman problem, and finding the superrange of star-shaped and monotone polygons. The main theme of all the work was to develop a set of very fast state-of-the-art algorithms that supersede any rivals in terms of speed and ease of implementation. In some cases existing algorithms were refined; for others, new techniques were developed that add to the present database of fast adaptive geometric algorithms. What emerges is a collection of techniques that is successful at merging modern tools developed in the analysis of algorithms with those of classical geometry
Totally parallel multilevel algorithms
Frederickson, Paul O.
1988-01-01
Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.
Directory of Open Access Journals (Sweden)
Francesca Musiani
2013-08-01
Algorithms are increasingly often cited as one of the fundamental shaping devices of our daily, immersed-in-information existence. Their importance is acknowledged, their performance scrutinised in numerous contexts. Yet much of what constitutes 'algorithms', beyond their broad definition as “encoded procedures for transforming input data into a desired output, based on specified calculations” (Gillespie, 2013), is often taken for granted. This article seeks to contribute to the discussion about 'what algorithms do' and in which ways they are artefacts of governance, providing two examples drawn from the internet and ICT realm: search engine queries and e-commerce websites’ recommendations to customers. The question of the relationship between algorithms and rules is likely to occupy an increasingly central role in the study and practice of internet governance, in terms of both institutions’ regulation of algorithms and algorithms’ regulation of our society.
Where genetic algorithms excel.
Baum, E B; Boneh, D; Garrett, C
2001-01-01
We analyze the performance of a genetic algorithm (GA) we call Culling, and a variety of other algorithms, on a problem we refer to as the Additive Search Problem (ASP). We show that the problem of learning the Ising perceptron is reducible to a noisy version of ASP. Noisy ASP is the first problem we are aware of where a genetic-type algorithm bests all known competitors. We generalize ASP to k-ASP to study whether GAs will achieve "implicit parallelism" in a problem with many more schemata. GAs fail to achieve this implicit parallelism, but we describe an algorithm we call Explicitly Parallel Search that succeeds. We also compute the optimal culling point for selective breeding, which turns out to be independent of the fitness function or the population distribution. We also analyze a mean field theoretic algorithm performing similarly to Culling on many problems. These results provide insight into when and how GAs can beat competing methods.
DEFF Research Database (Denmark)
Bilardi, Gianfranco; Pietracaprina, Andrea; Pucci, Geppino
2016-01-01
A framework is proposed for the design and analysis of network-oblivious algorithms, namely algorithms that can run unchanged, yet efficiently, on a variety of machines characterized by different degrees of parallelism and communication capabilities. The framework prescribes that a network-oblivious algorithm be specified on a parallel model of computation where the only parameter is the problem’s input size, and then evaluated on a model with two parameters, capturing parallelism granularity and communication latency. It is shown that for a wide class of network-oblivious algorithms, optimality … of cache hierarchies, to the realm of parallel computation. Its effectiveness is illustrated by providing optimal network-oblivious algorithms for a number of key problems. Some limitations of the oblivious approach are also discussed.
Directory of Open Access Journals (Sweden)
José Gabriel Segarra-Moragues
2012-09-01
Polymorphic microsatellite markers were developed for the Ibero-North African strict gypsophyte Lepidium subulatum to unravel the effects of habitat fragmentation on levels of genetic diversity, genetic structure and gene flow among its populations. Using 454 pyrosequencing, 12 microsatellite loci including di- and tri-nucleotide repeats were characterized in L. subulatum. They amplified a total of 80 alleles (2–12 alleles per locus) in a sample of 35 individuals of L. subulatum, showing relatively high levels of genetic diversity, H_O = 0.645, H_E = 0.627. Cross-species transferability of all 12 loci was successful for the Iberian endemics Lepidium cardamines and Lepidium stylatum, the widespread Lepidium graminifolium, and one species each of two related genera, Cardaria draba and Coronopus didymus. These microsatellite primers will be useful to investigate genetic diversity and population structure and to address conservation genetics in species of Lepidium.
International Nuclear Information System (INIS)
Zhang Jin; Shi Daxin; Anastasio, Mark A; Sillanpaa, Jussi; Chang Jenghwa
2005-01-01
We propose and investigate weighted expectation maximization (EM) algorithms for image reconstruction in x-ray tomography. The development of the algorithms is motivated by the respiratory-gated megavoltage tomography problem, in which the acquired asymmetric cone-beam projections are limited in number and unevenly sampled over view angle. In these cases, images reconstructed by use of the conventional EM algorithm can contain ring- and streak-like artefacts that are attributable to a combination of data inconsistencies and truncation of the projection data. By use of computer-simulated and clinical gated fan-beam megavoltage projection data, we demonstrate that the proposed weighted EM algorithms effectively mitigate such image artefacts. (note)
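The multiplicative EM (MLEM) update that such weighted variants build on can be sketched as follows. The per-ray weights w generalize the conventional algorithm (w = 1); the specific weighting scheme of the paper is not reproduced here, and the toy system matrix and data are illustrative stand-ins, not a gated-tomography setup.

```python
import numpy as np

def weighted_mlem(A, y, w, n_iter=500):
    """Weighted MLEM sketch: multiplicative update for x >= 0 with A x ~ y.

    A : (m, n) nonnegative system matrix
    y : (m,) measured projections
    w : (m,) per-ray weights (w = 1 recovers conventional MLEM)
    """
    x = np.ones(A.shape[1])           # flat, strictly positive initial image
    sens = A.T @ w                    # weighted sensitivity (normalization)
    for _ in range(n_iter):
        ratio = w * y / (A @ x)       # weighted data / current forward projection
        x *= (A.T @ ratio) / sens     # multiplicative EM update (keeps x >= 0)
    return x

# toy 3-ray, 2-pixel example with consistent data
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
y = A @ x_true
x_rec = weighted_mlem(A, y, np.ones(3))
```

For consistent, noise-free data the iterates converge to the true image; the multiplicative form automatically preserves nonnegativity, which is why EM is attractive for emission and transmission tomography.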
Length scale for configurational entropy in microemulsions
Reiss, H.; Kegel, W.K.; Groenewold, J.
1996-01-01
In this paper we study the length scale that must be used in evaluating the mixing entropy in a microemulsion. The central idea involves the choice of a length scale in configuration space that is consistent with the physical definition of entropy in phase space. We show that this scale may be
Proofs of Contracted Length Non-covariance
International Nuclear Information System (INIS)
Strel'tsov, V.N.
1994-01-01
Different proofs of contracted length non-covariance are discussed. The approach based on establishing the inconstancy of the interval (its dependence on velocity) seems the most convincing one. It is stressed that the known non-covariance of the electromagnetic field energy and momentum of a moving charge ('the 4/3 problem') is a direct consequence of contracted length non-covariance. 8 refs
The length of the male urethra
Directory of Open Access Journals (Sweden)
Tobias. S. Kohler
2008-08-01
PURPOSE: Catheter-based medical devices are an important component of the urologic armamentarium. To our knowledge, there is no population-based data regarding normal male urethral length. We evaluated the length of the urethra in men with normal genitourinary anatomy undergoing either Foley catheter removal or standard cystoscopy. MATERIALS AND METHODS: Male urethral length was obtained in 109 men. After study permission was obtained, the subject's penis was placed on gentle stretch and the catheter was marked at the tip of the penis. The catheter was then removed and the distance from the mark to the beginning of the re-inflated balloon was measured. Alternatively, urethral length was measured at the time of cystoscopy, on removal of the cystoscope. Data on age, weight, and height were obtained when possible. RESULTS: The mean urethral length was 22.3 cm with a standard deviation of 2.4 cm. Urethral length varied between 15 cm and 29 cm. No statistically significant correlation was found between urethral length and height, weight, body mass index (BMI), or age. CONCLUSIONS: Literature documenting the length of the normal adult male urethra is scarce. Our data add to basic anatomic information of the male urethra and may be used to optimize genitourinary device design.
Analysis of ureteral length in adult cadavers
Directory of Open Access Journals (Sweden)
Hugo F. F. Novaes
2013-04-01
Introduction: On some occasions, correlations between human structures can help in planning intra-abdominal surgical interventions. Prior determination of ureteral length aids pre-operative surgical planning, reduces the costs of auxiliary exams, and supports the correct choice of double-J catheter, with lower morbidity, fewer symptoms, and better adherence to treatment. Objective: To evaluate ureteral length in adult cadavers and to analyze its correlation with anthropometric measures. Materials and Methods: From April 2009 to January 2012 we determined the ureteral length of adult cadavers submitted to necropsy and obtained the following measures: height, distance from shoulder to wrist, elbow to wrist, xiphoid appendix to umbilicus, umbilicus to pubis, xiphoid appendix to pubis, and between the iliac spines. We analyzed the correlations between ureteral length and these anthropometric measures. Results: We dissected 115 ureters from 115 adult corpses. Median ureteral length did not vary between sexes or according to height. No correlation was observed between ureteral length and any of the considered anthropometric measures, either in the analyzed subgroups or in the general population. There were no significant differences between right and left ureteral measures. Conclusions: There is no difference in ureteral length in relation to height or gender. There is no significant correlation between ureteral length and the considered anthropometric measures.
Influence of mandibular length on mouth opening
Dijkstra, PU; Hof, AL; Stegenga, B; De Bont, LGM
Theoretically, mouth opening not only reflects the mobility of the temporomandibular joints (TMJs) but also the mandibular length. Clinically, the exact relationship between mouth opening, mandibular length, and mobility of TMJs is unclear. To study this relationship 91 healthy subjects, 59 women
Kim, Ye-seul; Park, Hye-suk; Lee, Haeng-Hwa; Choi, Young-Wook; Choi, Jae-Gu; Kim, Hak Hee; Kim, Hee-Joung
2016-02-01
Digital breast tomosynthesis (DBT) is a recently developed system for three-dimensional imaging that offers the potential to reduce the false positives of mammography by preventing tissue overlap. Many qualitative evaluations of digital breast tomosynthesis were previously performed by using a phantom with an unrealistic model and with heterogeneous background and noise, which is not representative of real breasts. The purpose of the present work was to compare reconstruction algorithms for DBT by using various breast phantoms; validation was also performed by using patient images. DBT was performed by using a prototype unit that was optimized for very low exposures and rapid readout. Three algorithms were compared: a back-projection (BP) algorithm, a filtered BP (FBP) algorithm, and an iterative expectation maximization (EM) algorithm. To compare the algorithms, three types of breast phantoms (homogeneous background phantom, heterogeneous background phantom, and anthropomorphic breast phantom) were evaluated, and clinical images were also reconstructed by using the different reconstruction algorithms. The in-plane image quality was evaluated based on the line profile and the contrast-to-noise ratio (CNR), and out-of-plane artifacts were evaluated by means of the artifact spread function (ASF). Parenchymal texture features of contrast and homogeneity were computed based on reconstructed images of an anthropomorphic breast phantom. The clinical images were studied to validate the effect of reconstruction algorithms. The results showed that the CNRs of masses reconstructed by using the EM algorithm were slightly higher than those obtained by using the BP algorithm, whereas the FBP algorithm yielded much lower CNR due to its high fluctuations of background noise. The FBP algorithm provides the best conspicuity for larger calcifications by enhancing their contrast and sharpness more than the other algorithms; however, in the case of small-size and low
Directory of Open Access Journals (Sweden)
Hans Schonemann
1996-12-01
Some algorithms for singularity theory and algebraic geometry. The use of Gröbner basis computations for treating systems of polynomial equations has become an important tool in many areas. This paper introduces the concept of standard bases (a generalization of Gröbner bases) and its application to some problems from algebraic geometry. The examples are presented as SINGULAR commands. A general introduction to Gröbner bases can be found in the textbook [CLO], an introduction to syzygies in [E] and [St1]. SINGULAR is a computer algebra system for computing information about singularities, for use in algebraic geometry. The basic algorithms in SINGULAR are several variants of a general standard basis algorithm for general monomial orderings (see [GG]). This includes well-orderings (Buchberger algorithm [B1], [B2]) and tangent cone orderings (Mora algorithm [M1], [MPT]) as special cases: it is able to work with non-homogeneous and homogeneous input and also to compute in the localization of the polynomial ring at 0. Recent versions include algorithms to factorize polynomials and a factorizing Gröbner basis algorithm. For a complete description of SINGULAR see [Si].
A New Modified Firefly Algorithm
Directory of Open Access Journals (Sweden)
Medha Gupta
2016-07-01
Nature-inspired meta-heuristic algorithms study the emergent collective intelligence of groups of simple agents. The firefly algorithm is one such swarm-based metaheuristic, inspired by the flashing behavior of fireflies. The algorithm was first proposed in 2008 and has since been successfully used for solving various optimization problems. In this work, we propose a new modified version of the firefly algorithm (MoFA) and compare its performance with the standard firefly algorithm and various other meta-heuristic algorithms. Numerical studies and results demonstrate that the proposed algorithm is superior to existing algorithms.
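For reference, the standard firefly algorithm that such modified variants start from can be sketched as below; parameter values (population size, β₀, γ, α, search domain) are illustrative defaults, not those of the paper.

```python
import math
import random

def firefly(objective, dim, n=15, iters=100, beta0=1.0, gamma=0.01, alpha=0.1):
    """Minimal standard firefly algorithm (minimization sketch).

    Each firefly moves toward every brighter (lower-objective) firefly with
    attractiveness beta0 * exp(-gamma * r^2), plus a small random step alpha.
    gamma is chosen small relative to the [-5, 5]^dim domain scale.
    """
    rng = random.Random(42)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    fit = [objective(p) for p in pop]
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if fit[j] < fit[i]:            # j is brighter: i moves toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(pop[i], pop[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    pop[i] = [a + beta * (b - a) + alpha * (rng.random() - 0.5)
                              for a, b in zip(pop[i], pop[j])]
                    fit[i] = objective(pop[i])
    best = min(range(n), key=lambda k: fit[k])
    return pop[best], fit[best]

sphere = lambda x: sum(v * v for v in x)       # classic benchmark function
best_x, best_f = firefly(sphere, dim=2)
```

Note that the current brightest firefly never moves, so the best objective value found is non-increasing over the run; this is one reason the basic algorithm can stagnate and why modified variants adjust the movement rule.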
Economic issues of broiler production length
Directory of Open Access Journals (Sweden)
Szőllősi László
2014-01-01
The length of the broiler production cycle is an important factor when profitability is measured. This paper determines the effects of different market ages, down-time periods, and overall broiler production cycle length on performance and economic parameters under Hungarian production and financial circumstances. A deterministic model was constructed to manage the function-like correlations of age-related daily weight gain, daily feed intake and daily mortality data. The results show that broiler production cycle length has a significant effect on production and economic performance. Cycle length is determined by the lengths of the down-time and grow-out periods. If the down-time period is reduced by one day, an average net income of EUR 0.55 per m2 is realizable. However, neither the emerging costs nor the obtainable revenues are directly proportional to the length of the production period. Profit maximization is attainable if the production period is 41-42 days.
Roentgenologic investigations for the anterior tooth length
Energy Technology Data Exchange (ETDEWEB)
Cho, Won Pyo; Ahn, Hyung Kyu [College of Dentistry, Seoul National University , Seoul (Korea, Republic of)
1972-11-15
The author measured the lengths of the crown, root and whole tooth on films taken by the intraoral bisecting technique with a mesh plate placed on the films. The films were taken from dry skulls, a dentiform, patients whose upper incisors had to be removed, and other patients admitted for dental care. From this serial experiment the following results were obtained: 1. By using the film and mesh plate together in the oral cavity, the real tooth length can be measured easily on the film surface. 2. Film distortion in the oral cavity can be avoided when the film is taken using the mesh plate and film together. 3. On the measured films, the length of the crown was elongated and the length of the root was shortened. 4. With a well-trained bisecting technique, the real tooth length can be measured directly on the intraoral film.
Screening length in dusty plasma crystals
International Nuclear Information System (INIS)
Nikolaev, V S; Timofeev, A V
2016-01-01
Particle interactions and the value of the screening length are of great interest in the field of dusty plasmas. Three inter-particle potentials (the Debye potential, the Gurevich potential, and the interaction potential in the weakly collisional regime) are used to solve the equilibrium equations for two dust particles suspended in a parabolic trap. The dependence of the inter-particle distance on screening length, trap parameter and particle charge is obtained. The functional form of the dependence of inter-particle distance on ion temperature is investigated and compared with experimental data at 200-300 K in order to test the applicability of the potentials used to dusty plasma systems at room temperature. Preference is given to a Yukawa-type potential with effective values of the particle charge and screening length. The estimated effective value of the screening length is 5-15 times larger than the Debye length. (paper)
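The two-particle equilibrium calculation can be sketched for a Yukawa-type (screened Coulomb) potential of the kind the authors favour: the parabolic-trap restoring force on each particle must balance the inter-particle repulsion. Units, trap stiffness and charge below are dimensionless placeholders, not fitted plasma parameters.

```python
import math

def yukawa_force(d, q=1.0, lam=1.0):
    """Repulsive force between two Yukawa charges q at separation d,
    screening length lam: F = q^2 e^{-d/lam} (1/d^2 + 1/(lam d))."""
    return q * q * math.exp(-d / lam) * (1.0 / d**2 + 1.0 / (lam * d))

def equilibrium_separation(k=1.0, q=1.0, lam=1.0, lo=1e-6, hi=100.0):
    """Bisection for the separation d at which the trap restoring force
    k*d/2 (each particle sits at +/- d/2) balances the Yukawa repulsion.
    The residual is strictly decreasing in d, so the root is unique."""
    def residual(d):
        return yukawa_force(d, q, lam) - 0.5 * k * d
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if residual(mid) > 0.0:    # repulsion still wins: move outward
            lo = mid
        else:                      # trap wins: move inward
            hi = mid
    return 0.5 * (lo + hi)

d_eq = equilibrium_separation()
```

Measuring d_eq while varying the trap parameter is precisely how an effective screening length can be extracted from experimental inter-particle distances.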
Microcomputer system for controlling fuel rod length
International Nuclear Information System (INIS)
Meyer, E.R.; Bouldin, D.W.; Bolfing, B.J.
1979-01-01
A system is being developed at the Oak Ridge National Laboratory (ORNL) to automatically measure and control the length of fuel rods for use in a high temperature gas-cooled reactor (HTGR). The system utilizes an LSI-11 microcomputer for monitoring fuel rod length and for adjusting the primary factor affecting length. Preliminary results indicate that the automated system can maintain fuel rod length within the specified limits of 1.940 ± 0.040 in. This system provides quality control documentation and eliminates the dependence of the current fuel rod molding process on manual length control. In addition, the microcomputer system is compatible with planned efforts to extend control to fuel rod fissile and fertile material contents
Improved algorithms for approximate string matching (extended abstract
Directory of Open Access Journals (Sweden)
Papamichail Georgios
2009-01-01
Background: The problem of approximate string matching is important in many different areas such as computational biology, text processing and pattern recognition. A great effort has been made to design efficient algorithms addressing several variants of the problem, including comparison of two strings, approximate pattern identification in a string, and calculation of the longest common subsequence that two strings share. Results: We designed an output-sensitive algorithm solving the edit distance problem between two strings of lengths n and m respectively in time O((s - |n - m|)·min(m, n, s) + m + n) and linear space, where s is the edit distance between the two strings. This worst-case time bound sets the quadratic factor of the algorithm independent of the longest string length and improves existing theoretical bounds for this problem. The implementation of our algorithm also excels in practice, especially in cases where the two strings compared differ significantly in length. Conclusion: We have provided the design, analysis and implementation of a new algorithm for calculating the edit distance of two strings with both theoretical and practical implications. Source code of our algorithm is available online.
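For orientation, the quadratic-time baseline that output-sensitive bounds improve on can be sketched as follows. This is the textbook two-row dynamic program for edit distance, not the authors' algorithm; it already achieves the linear space mentioned in the abstract.

```python
def edit_distance(a, b):
    """Levenshtein edit distance via two-row dynamic programming.

    O(len(a) * len(b)) time, O(min(len(a), len(b))) space. Output-sensitive
    algorithms do better when the distance s is small relative to n and m.
    """
    if len(a) < len(b):
        a, b = b, a                        # keep the shorter string as columns
    prev = list(range(len(b) + 1))         # row for the empty prefix of a
    for i, ca in enumerate(a, start=1):
        cur = [i]                          # cost of deleting i characters of a
        for j, cb in enumerate(b, start=1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # match / substitution
        prev = cur
    return prev[-1]
```

For example, "kitten" and "sitting" are at distance 3 (two substitutions and one insertion), the classic textbook case.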
International Nuclear Information System (INIS)
Dinev, D.
1996-01-01
Several new algorithms for sorting of dipole and/or quadrupole magnets in synchrotrons and storage rings are described. The algorithms make use of a combinatorial approach to the problem and belong to the class of random search algorithms. They use an appropriate metrization of the state space. The phase-space distortion (smear) is used as a goal function. Computational experiments for the case of the JINR-Dubna superconducting heavy ion synchrotron NUCLOTRON have shown a significant reduction of the phase-space distortion after the magnet sorting. (orig.)
Dreano, Denis; Tandeo, P.; Pulido, M.; Ait-El-Fquih, Boujemaa; Chonavel, T.; Hoteit, Ibrahim
2017-01-01
Specification and tuning of errors from dynamical models are important issues in data assimilation. In this work, we propose an iterative expectation-maximisation (EM) algorithm to estimate the model error covariances using classical extended
Directory of Open Access Journals (Sweden)
Hye Suck An
2012-08-01
Mytilus coruscus (family Mytilidae) is one of the most important marine shellfish species in Korea. During the past few decades, this species has become endangered due to the loss of habitats and overfishing. Despite this species’ importance, information on its genetic background is scarce. In this study, we developed microsatellite markers for M. coruscus using next-generation sequencing. A total of 263,900 raw reads were obtained from a quarter-plate run on the 454 GS-FLX titanium platform, and 176,327 unique sequences were generated with an average length of 381 bp; 2569 (1.45%) sequences contained a minimum of five di- to tetra-nucleotide repeat motifs. Of the 51 loci screened, 46 were amplified successfully and 22 were polymorphic among 30 individuals, including seven trinucleotide and three tetranucleotide repeats. All loci exhibited high genetic variability, with an average of 17.32 alleles per locus, and the mean observed and expected heterozygosities were 0.67 and 0.90, respectively. In addition, cross-amplification was tested for all 22 loci in a congener species, M. galloprovincialis. None of the primer pairs resulted in effective amplification, which might be due to their high mutation rates. Our work demonstrates the utility of next-generation 454 sequencing as a method for the rapid and cost-effective identification of microsatellites. The high degree of polymorphism exhibited by the 22 newly developed microsatellites will be useful in future conservation genetic studies of this species.
Dynamic programming algorithms for biological sequence comparison.
Pearson, W R; Miller, W
1992-01-01
Efficient dynamic programming algorithms are available for a broad class of protein and DNA sequence comparison problems. These algorithms require computer time proportional to the product of the lengths of the two sequences being compared [O(N^2)] but require memory space proportional only to the sum of these lengths [O(N)]. Although the requirement for O(N^2) time limits use of the algorithms to the largest computers when searching protein and DNA sequence databases, many other applications of these algorithms, such as calculation of distances for evolutionary trees and comparison of a new sequence to a library of sequence profiles, are well within the capabilities of desktop computers. In particular, the results of library searches with rapid searching programs, such as FASTA or BLAST, should be confirmed by performing a rigorous optimal alignment. Whereas rapid methods do not overlook significant sequence similarities, FASTA limits the number of gaps that can be inserted into an alignment, so that a rigorous alignment may extend the alignment substantially in some cases. BLAST does not allow gaps in the local regions that it reports; a calculation that allows gaps is very likely to extend the alignment substantially. Although a Monte Carlo evaluation of the statistical significance of a similarity score with a rigorous algorithm is much slower than the heuristic approach used by the RDF2 program, the dynamic programming approach should take less than 1 hr on a 386-based PC or desktop Unix workstation. For descriptive purposes, we have limited our discussion to methods for calculating similarity scores and distances that use gap penalties of the form g = rk. Nevertheless, programs for the more general case (g = q+rk) are readily available. Versions of these programs that run either on Unix workstations, IBM-PC class computers, or the Macintosh can be obtained from either of the authors.
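The O(N)-space similarity-score computation for the g = rk gap-penalty case discussed above can be sketched as follows; the score values (match, mismatch, and the per-symbol gap cost r) are illustrative placeholders, not the defaults of FASTA or BLAST.

```python
def global_score(a, b, match=1, mismatch=-1, r=-2):
    """Needleman-Wunsch global alignment *score* in O(len(b)) space,
    with linear gap penalty g = r*k (cost r for each gapped symbol).

    Only two rows of the DP matrix are kept, which is why memory is
    proportional to the sum (here, one) of the sequence lengths."""
    prev = [r * j for j in range(len(b) + 1)]   # align prefixes of b to gaps
    for i, ca in enumerate(a, start=1):
        cur = [r * i]                           # align prefix of a to gaps
        for j, cb in enumerate(b, start=1):
            sub = match if ca == cb else mismatch
            cur.append(max(prev[j - 1] + sub,   # substitution / match
                           prev[j] + r,         # gap in b
                           cur[j - 1] + r))     # gap in a
        prev = cur
    return prev[-1]
```

Recovering the alignment itself in linear space needs the further divide-and-conquer step of Hirschberg's method; the score-only recurrence above is the building block.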
Plasma influence on the dispersion properties of finite-length, corrugated waveguides
Shkvarunets, A.; Kobayashi, S.; Weaver, J.; Carmel, Y.; Rodgers, J.; Antonsen, T.; Granatstein, V.L.; Destler, W.W.; Ogura, K.; Minami, K.
1996-01-01
We present an experimental study of the electromagnetic properties of transverse magnetic modes in a corrugated-wall cavity filled with a radially inhomogeneous plasma. The shifts of the resonant frequencies of a finite-length, corrugated cavity were measured as a function of the background plasma density, and the dispersion diagram was reconstructed up to a peak plasma density of 10^12 cm^-3. Good agreement with a calculated dispersion diagram is obtained for plasma densities below 5 × 10^11 ...
On factoring RSA modulus using random-restart hill-climbing algorithm and Pollard’s rho algorithm
Budiman, M. A.; Rachmawati, D.
2017-12-01
The security of the widely-used RSA public key cryptography algorithm depends on the difficulty of factoring a big integer into two large prime numbers. For many years, the integer factorization problem has been intensively and extensively studied in the field of number theory. As a result, a lot of deterministic algorithms such as Euler’s algorithm, Kraitchik’s, and variants of Pollard’s algorithms have been researched comprehensively. Our study takes a rather uncommon approach: rather than making use of intensive number theory, we attempt to factorize the RSA modulus n by using the random-restart hill-climbing algorithm, which belongs to the class of metaheuristic algorithms. The factorization time of RSA moduli with different lengths is recorded and compared with the factorization time of Pollard’s rho algorithm, which is a deterministic algorithm. Our experimental results indicate that while the random-restart hill-climbing algorithm is an acceptable candidate for factorizing smaller RSA moduli, its factorization speed is much slower than that of Pollard’s rho algorithm.
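The Pollard's rho baseline used for comparison can be sketched as follows: a standard textbook formulation with Floyd cycle detection, here with randomized restarts on failure for robustness. The modulus in the example is a toy semiprime, nothing like RSA-sized.

```python
import math
import random

def pollard_rho(n):
    """Pollard's rho with Floyd cycle detection.

    Returns a nontrivial factor of a composite n, retrying the pseudo-random
    walk v -> v^2 + c (mod n) with a fresh constant c whenever the walk
    cycles without exposing a factor (gcd == n)."""
    if n % 2 == 0:
        return 2
    rng = random.Random(0)
    while True:
        c = rng.randrange(1, n)
        f = lambda v: (v * v + c) % n
        x = y = rng.randrange(2, n)
        d = 1
        while d == 1:
            x = f(x)                     # tortoise: one step
            y = f(f(y))                  # hare: two steps
            d = math.gcd(abs(x - y), n)  # collision mod a prime factor?
        if d != n:                       # d == n means a failed walk; retry
            return d

factor = pollard_rho(10403)              # 10403 = 101 * 103
```

The expected running time is roughly O(n^(1/4)) group operations, which is why rho so decisively outpaces generic search heuristics on factoring.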
Kidney Length in Normal Korean Children
International Nuclear Information System (INIS)
Kim, In One; Cheon, Jung Eun; Lee, Young Seok; Lee, Sun Wha; Kim, Ok Hwa; Kim, Ji Hye; Kim, Hong Dae; Sim, Jung Suk
2010-01-01
Renal length offers important information for detecting or following up various renal diseases. The purpose of this study was to determine the kidney length of normal Korean children in relation to age, height, weight, body surface area (BSA), and body mass index (BMI). Children between 1 month and 15 years of age without urological abnormality were recruited. Children below the 3rd percentile or above the 97th percentile for height or weight were excluded. Both renal lengths were measured in the prone position three times and then averaged by experienced radiologists. The mean length and standard deviation for each age group were obtained, and regression equations were calculated between renal length and age, weight, height, BSA, and BMI, respectively. Renal length was measured in 550 children. Renal length grows rapidly until 24 months, while the growth rate is reduced thereafter. The regression equation for age is: renal length (mm) = 45.953 + 1.064 × age (months, ≤ 24 months) (R² = 0.720) or 62.173 + 0.203 × age (months, > 24 months) (R² = 0.711). The regression equation for height is: renal length (mm) = 24.494 + 0.457 × height (cm) (R² = 0.894). The regression equation for weight is: renal length (mm) = 38.342 + 2.117 × weight (kg, ≤ 18 kg) (R² = 0.852) or 64.498 + 0.646 × weight (kg, > 18 kg) (R² = 0.651). The regression equation for BSA is: renal length (mm) = 31.622 + 61.363 × BSA (m², ≤ 0.7) (R² = 0.857) or 52.717 + 29.959 × BSA (m², > 0.7) (R² = 0.715). The regression equation for BMI is: renal length (mm) = 44.474 + 1.163 × BMI (R² = 0.079). This study provides data on normal renal length and its association with age, weight, height, BSA and BMI. The results will guide the detection and follow-up of renal diseases in Korean children
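The piecewise regression equations above translate directly into code. The coefficients and the 24-month breakpoint are copied from the abstract; the functions return the predicted mean renal length only, with no confidence interval.

```python
def renal_length_from_age(months):
    """Mean renal length (mm) from age, per the piecewise regression
    reported in the abstract (breakpoint at 24 months)."""
    if months <= 24:
        return 45.953 + 1.064 * months   # rapid early growth (R^2 = 0.720)
    return 62.173 + 0.203 * months       # slower growth after 24 mo (R^2 = 0.711)

def renal_length_from_height(cm):
    """Mean renal length (mm) from height in cm (R^2 = 0.894,
    the strongest single predictor reported)."""
    return 24.494 + 0.457 * cm
```

As the R² values suggest, height-based prediction is the most reliable of the reported models, while BMI (R² = 0.079) is essentially uninformative.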
Algorithms for parallel computers
International Nuclear Information System (INIS)
Churchhouse, R.F.
1985-01-01
Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single-processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed, but it also raises some fundamental questions, including: (i) Which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode? (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria? (iii) How can we design new algorithms specifically for parallel systems? (iv) For multi-processor systems, how can we handle the software aspects of the inter-processor communications? Aspects of these questions are considered in these lectures, illustrated by examples. (orig.)
Fluid structure coupling algorithm
International Nuclear Information System (INIS)
McMaster, W.H.; Gong, E.Y.; Landram, C.S.; Quinones, D.F.
1980-01-01
A fluid-structure-interaction algorithm has been developed and incorporated into the two-dimensional code PELE-IC. This code combines an Eulerian incompressible fluid algorithm with a Lagrangian finite element shell algorithm and incorporates the treatment of complex free surfaces. The fluid structure and coupling algorithms have been verified by the calculation of solved problems from the literature and from air and steam blowdown experiments. The code has been used to calculate loads and structural response from air blowdown and the oscillatory condensation of steam bubbles in water suppression pools typical of boiling water reactors. The techniques developed have been extended to three dimensions and implemented in the computer code PELE-3D
Hockney, Roger
1987-01-01
Algorithmic phase diagrams are a neat and compact representation of the results of comparing the execution time of several algorithms for the solution of the same problem. As an example, the recent results of Gannon and Van Rosendale on the solution of multiple tridiagonal systems of equations are shown in the form of such diagrams. The act of preparing these diagrams has revealed an unexpectedly complex relationship between the best algorithm and the number and size of the tridiagonal systems, which was not evident from the algebraic formulae in the original paper. Even so, for a particular computer, one diagram suffices to predict the best algorithm for all problems that are likely to be encountered, the prediction being read directly from the diagram without complex calculation.
Diagnostic Algorithm Benchmarking
Poll, Scott
2011-01-01
A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.
Inclusive Flavour Tagging Algorithm
International Nuclear Information System (INIS)
Likhomanenko, Tatiana; Derkach, Denis; Rogozhnikov, Alex
2016-01-01
Identifying the production flavour of neutral B mesons is one of the most important components needed in the study of time-dependent CP violation. The harsh environment of the Large Hadron Collider makes it particularly hard to succeed in this task. We present an inclusive flavour-tagging algorithm as an upgrade of the algorithms currently used by the LHCb experiment. Specifically, we propose a probabilistic model which efficiently combines information from reconstructed vertices and tracks using machine learning. The algorithm does not use information about the underlying physics process. It reduces the dependence on the performance of lower-level identification capabilities and thus increases the overall performance. The proposed inclusive flavour-tagging algorithm is applicable to tagging the flavour of B mesons in any proton-proton experiment. (paper)
Unsupervised learning algorithms
Aydin, Kemal
2016-01-01
This book summarizes the state-of-the-art in unsupervised learning. The contributors discuss how with the proliferation of massive amounts of unlabeled data, unsupervised learning algorithms, which can automatically discover interesting and useful patterns in such data, have gained popularity among researchers and practitioners. The authors outline how these algorithms have found numerous applications including pattern recognition, market basket analysis, web mining, social network analysis, information retrieval, recommender systems, market research, intrusion detection, and fraud detection. They present how the difficulty of developing theoretically sound approaches that are amenable to objective evaluation have resulted in the proposal of numerous unsupervised learning algorithms over the past half-century. The intended audience includes researchers and practitioners who are increasingly using unsupervised learning algorithms to analyze their data. Topics of interest include anomaly detection, clustering,...
Leães, Sabrina Durgante
2008-01-01
Master's dissertation in Design and Marketing. The current state of fashion marketing is one of the complex questions with which global society still grapples. Fashion marketing touches on fundamental aspects such as the constant changes in the surrounding environment, how the identity of fashion brands is perceived and communicated, the search for the best way to segment the market and define brand positioning, and the final consumer's reaction to the fashion product. ...
Optimization algorithms and applications
Arora, Rajesh Kumar
2015-01-01
Choose the Correct Solution Method for Your Optimization ProblemOptimization: Algorithms and Applications presents a variety of solution techniques for optimization problems, emphasizing concepts rather than rigorous mathematical details and proofs. The book covers both gradient and stochastic methods as solution techniques for unconstrained and constrained optimization problems. It discusses the conjugate gradient method, Broyden-Fletcher-Goldfarb-Shanno algorithm, Powell method, penalty function, augmented Lagrange multiplier method, sequential quadratic programming, method of feasible direc
Zero-point length from string fluctuations
International Nuclear Information System (INIS)
Fontanini, Michele; Spallucci, Euro; Padmanabhan, T.
2006-01-01
One of the leading candidates for quantum gravity, viz. string theory, has the following features incorporated in it. (i) The full spacetime is higher-dimensional, with (possibly) compact extra-dimensions; (ii) there is a natural minimal length below which the concept of continuum spacetime needs to be modified by some deeper concept. On the other hand, the existence of a minimal length (zero-point length) in four-dimensional spacetime, with obvious implications as UV regulator, has been often conjectured as a natural aftermath of any correct quantum theory of gravity. We show that one can incorporate the apparently unrelated pieces of information-zero-point length, extra-dimensions, string T-duality-in a consistent framework. This is done in terms of a modified Kaluza-Klein theory that interpolates between (high-energy) string theory and (low-energy) quantum field theory. In this model, the zero-point length in four dimensions is a 'virtual memory' of the length scale of compact extra-dimensions. Such a scale turns out to be determined by T-duality inherited from the underlying fundamental string theory. From a low energy perspective short distance infinities are cutoff by a minimal length which is proportional to the square root of the string slope, i.e., α ' . Thus, we bridge the gap between the string theory domain and the low energy arena of point-particle quantum field theory
Penile length and circumference: an Indian study.
Promodu, K; Shanmughadas, K V; Bhat, S; Nair, K R
2007-01-01
Apprehension about the normal size of the penis is a major concern for men. The aim of the present investigation is to estimate the penile length and circumference of Indian males and to compare the results with data from other countries. The results will help in counseling patients worried about penile size and seeking penis enlargement surgery. Penile length in flaccid and stretched conditions and circumference were measured in a group of 301 physically normal men. Erect length and circumference were measured for 93 subjects. Mean flaccid length was found to be 8.21 cm, mean stretched length 10.88 cm and circumference 9.14 cm. Mean erect length was found to be 13.01 cm and erect circumference was 11.46 cm. Penile dimensions were found to be correlated with anthropometric parameters. The study provides insight into normative penile-size data for Indian males. There are significant differences in the mean penile length and circumference of the Indian sample compared to data reported from other countries. The study needs to be continued with a larger sample to establish normative data applicable to the general population.
From Genetics to Genetic Algorithms
Indian Academy of Sciences (India)
Genetic algorithms (GAs) are computational optimisation schemes with an ... The algorithms solve optimisation problems ..... Genetic Algorithms in Search, Optimisation and Machine. Learning, Addison-Wesley Publishing Company, Inc. 1989.
Algorithmic Principles of Mathematical Programming
Faigle, Ulrich; Kern, Walter; Still, Georg
2002-01-01
Algorithmic Principles of Mathematical Programming investigates the mathematical structures and principles underlying the design of efficient algorithms for optimization problems. Recent advances in algorithmic theory have shown that the traditionally separate areas of discrete optimization, linear
Directory of Open Access Journals (Sweden)
Wang Zi Min
2016-01-01
Full Text Available With the development of social services and rising living standards, there is an urgent need for positioning technology that can adapt to complex new situations. In recent years, RFID technology has found a wide range of applications in all aspects of life and production, such as logistics tracking, car alarms, and the securing of items. Using RFID technology for positioning is a new research direction pursued by various institutions and scholars: RFID positioning systems offer stability, small error, and low cost, and the location algorithm is the focus of this study. This article analyzes RFID positioning methods and algorithms layer by layer. First, several common basic RFID methods are introduced; secondly, a higher-accuracy network-based location method is described; finally, the LANDMARC algorithm is presented. From this it can be seen that advanced and efficient algorithms play an important role in increasing RFID positioning accuracy. Finally, RFID location algorithms are summarized, their deficiencies are pointed out, and requirements for follow-up study are put forward, with a vision of better future RFID positioning technology.
Directory of Open Access Journals (Sweden)
Surafel Luleseged Tilahun
2012-01-01
Full Text Available The firefly algorithm is one of the new metaheuristic algorithms for optimization problems. The algorithm is inspired by the flashing behavior of fireflies. In the algorithm, randomly generated solutions are considered fireflies, and brightness is assigned depending on their performance on the objective function. One of the rules used to construct the algorithm is that a firefly will be attracted to a brighter firefly, and if there is no brighter firefly, it will move randomly. In this paper we modify this random movement of the brightest firefly by generating random directions in order to determine the direction in which the brightness increases. If no such direction is generated, it remains in its current position. Furthermore, the assignment of attractiveness is modified in such a way that the effect of the objective function is magnified. Simulation results show that the modified firefly algorithm performs better than the standard one in finding the best solution in less CPU time.
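The movement rules described in this abstract can be illustrated with a minimal sketch of the standard firefly algorithm. Parameter values are generic, and the brightest firefly's random step is accepted only when it improves the objective — a simplifying stand-in for the paper's directed random movement, not the authors' exact rule:

```python
import math
import random

def firefly_minimize(f, dim, n_fireflies=20, n_iter=100,
                     alpha=0.2, beta0=1.0, gamma=0.5, bounds=(-2.0, 2.0)):
    """Minimize f over bounds^dim with a basic firefly algorithm.

    Brightness is -f(x): each firefly moves toward every brighter one;
    the brightest takes a random step only when it improves f.
    """
    lo, hi = bounds
    clip = lambda v: min(hi, max(lo, v))
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_fireflies)]
    cost = [f(x) for x in X]
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if cost[j] < cost[i]:  # firefly j is brighter, so i moves toward j
                    r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                    beta = beta0 * math.exp(-gamma * r2)  # attraction decays with distance
                    X[i] = [clip(xi + beta * (xj - xi) + alpha * (random.random() - 0.5))
                            for xi, xj in zip(X[i], X[j])]
                    cost[i] = f(X[i])
        # brightest firefly: random step, kept only if it is an improvement
        b = min(range(n_fireflies), key=cost.__getitem__)
        trial = [clip(v + alpha * (random.random() - 0.5)) for v in X[b]]
        if f(trial) < cost[b]:
            X[b], cost[b] = trial, f(trial)
    b = min(range(n_fireflies), key=cost.__getitem__)
    return X[b], cost[b]
```

On a two-dimensional sphere function this typically converges near the origin; the attractiveness term beta0·exp(−γr²) is the standard formulation the paper modifies.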
Application of the region–time–length algorithm to study of ...
Indian Academy of Sciences (India)
P Puangjaktha
2018-03-27
Mar 27, 2018 ... The seismogenic fault lines, major cities and hydropower dams are expressed as red lines, black squares and ..... quiescence of each case study (same five earthquakes as in figure 3). ..... ijitr K and Sangsuwan C 1991 Development of Cenozoic ... mograph network and the quiescence preceding the 1986.
Automatic Control Of Length Of Welding Arc
Iceland, William F.
1991-01-01
Nonlinear relationships among current, voltage, and length stored in electronic memory. Conceptual microprocessor-based control subsystem maintains constant length of welding arc in gas/tungsten arc-welding system, even when welding current varied. Uses feedback of current and voltage from welding arc. Directs motor to set position of torch according to previously measured relationships among current, voltage, and length of arc. Signal paths marked "calibration" or "welding" used during those processes only. Other signal paths used during both processes. Control subsystem added to existing manual or automatic welding system equipped with automatic voltage control.
Bunch Length Measurements in SPEAR3
Energy Technology Data Exchange (ETDEWEB)
Corbett, W.J.; Fisher, A.; Huang, X.; Safranek, J.; Sebek, J.; /SLAC; Lumpkin, A.; /Argonne; Sannibale, F.; /LBL, Berkeley; Mok, W.; /Unlisted
2007-11-28
A series of bunch length measurements were made in SPEAR3 for two different machine optics. In the achromatic optics the bunch length increases from the low-current value of 16.6 ps rms to about 30 ps at 25 mA/bunch, yielding an inductive impedance of −0.17 Ω. Reducing the momentum compaction factor by a factor of ≈60 [1] yields a low-current bunch length of ≈4 ps rms. In this paper we review the experimental setup and results.
Quantitation of PET data with the EM reconstruction technique
International Nuclear Information System (INIS)
Rosenqvist, G.; Dahlbom, M.; Erikson, L.; Bohm, C.; Blomqvist, G.
1989-01-01
The expectation maximization (EM) algorithm offers high spatial resolution and excellent noise reduction with low-statistics PET data, since it incorporates the Poisson nature of the data. The main difficulties are long computation times and the difficulty of finding appropriate criteria for terminating the reconstruction and for quantifying the resulting image data. In the present work a modified EM algorithm has been implemented on a VAX 11/780. Its capability to quantify image data has been tested in phantom studies and in two clinical cases: cerebral blood flow studies and dopamine D2-receptor studies. Data from phantom studies indicate the superiority of images reconstructed with the EM technique over images reconstructed with the conventional filtered back-projection (FB) technique in areas with low statistics. At higher statistics the noise characteristics of the two techniques coincide. Clinical data support these findings
Adaptive subdivision and the length and energy of Bézier curves
DEFF Research Database (Denmark)
Gravesen, Jens
1997-01-01
It is an often-used fact that the control polygon of a Bézier curve approximates the curve and that the approximation gets better when the curve is subdivided. In particular, if a Bézier curve is subdivided into some number of pieces, then the arc-length of the original curve is greater than the sum of the chord-lengths of the pieces, and less than the sum of the polygon-lengths of the pieces. Under repeated subdivisions, the difference between this lower and upper bound gets arbitrarily small. If $L_c$ denotes the total chord-length of the pieces and $L_p$ denotes the total polygon-length, the arc-length can be approximated by a suitable combination of the two, and this forms the basis for a fast adaptive algorithm which determines the arc-length of a Bézier curve. The energy of a curve is half the square of the curvature integrated with respect to arc-length. As in the case of the arc-length, it is possible to use the chord-length and polygon-length to estimate the energy.
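The chord-length/polygon-length bounds above translate directly into an adaptive algorithm. The sketch below subdivides with de Casteljau's construction until the bounds agree to a tolerance and returns their midpoint; Gravesen derives a sharper degree-dependent combination, so the midpoint is a simplifying assumption:

```python
import math

def de_casteljau_split(P, t=0.5):
    """Split the Bézier curve with control points P at parameter t."""
    left, right = [P[0]], [P[-1]]
    while len(P) > 1:
        # one round of pairwise linear interpolation
        P = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
             for p, q in zip(P, P[1:])]
        left.append(P[0])
        right.append(P[-1])
    return left, right[::-1]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def bezier_length(P, tol=1e-9):
    """Arc length via adaptive subdivision.

    Chord length <= true arc-length <= control-polygon length; recurse
    until the two bounds agree, then return their midpoint.
    """
    chord = dist(P[0], P[-1])
    poly = sum(dist(p, q) for p, q in zip(P, P[1:]))
    if poly - chord < tol:
        return (chord + poly) / 2.0
    L, R = de_casteljau_split(P)
    return bezier_length(L, tol / 2) + bezier_length(R, tol / 2)
```

For a degenerate straight-line Bézier the bounds coincide immediately; for a curved one the recursion terminates because both bounds converge to the arc-length under subdivision.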
Phase retrieval via incremental truncated amplitude flow algorithm
Zhang, Quanbing; Wang, Zhifa; Wang, Linjie; Cheng, Shichao
2017-10-01
This paper considers the phase retrieval problem of recovering the unknown signal from the given quadratic measurements. A phase retrieval algorithm based on Incremental Truncated Amplitude Flow (ITAF) which combines the ITWF algorithm and the TAF algorithm is proposed. The proposed ITAF algorithm enhances the initialization by performing both of the truncation methods used in ITWF and TAF respectively, and improves the performance in the gradient stage by applying the incremental method proposed in ITWF to the loop stage of TAF. Moreover, the original sampling vector and measurements are preprocessed before initialization according to the variance of the sensing matrix. Simulation experiments verified the feasibility and validity of the proposed ITAF algorithm. The experimental results show that it can obtain higher success rate and faster convergence speed compared with other algorithms. Especially, for the noiseless random Gaussian signals, ITAF can recover any real-valued signal accurately from the magnitude measurements whose number is about 2.5 times of the signal length, which is close to the theoretic limit (about 2 times of the signal length). And it usually converges to the optimal solution within 20 iterations which is much less than the state-of-the-art algorithms.
Algorithmic detectability threshold of the stochastic block model
Kawamoto, Tatsuro
2018-03-01
The assumption that the values of model parameters are known or correctly learned, i.e., the Nishimori condition, is one of the requirements for the detectability analysis of the stochastic block model in statistical inference. In practice, however, there is no example demonstrating that we can know the model parameters beforehand, and there is no guarantee that the model parameters can be learned accurately. In this study, we consider the expectation-maximization (EM) algorithm with belief propagation (BP) and derive its algorithmic detectability threshold. Our analysis is not restricted to the community structure but includes general modular structures. Because the algorithm cannot always learn the planted model parameters correctly, the algorithmic detectability threshold is qualitatively different from the one with the Nishimori condition.
Improved multivariate polynomial factoring algorithm
International Nuclear Information System (INIS)
Wang, P.S.
1978-01-01
A new algorithm for factoring multivariate polynomials over the integers based on an algorithm by Wang and Rothschild is described. The new algorithm has improved strategies for dealing with the known problems of the original algorithm, namely, the leading coefficient problem, the bad-zero problem and the occurrence of extraneous factors. It has an algorithm for correctly predetermining leading coefficients of the factors. A new and efficient p-adic algorithm named EEZ is described. Basically it is a linearly convergent variable-by-variable parallel construction. The improved algorithm is generally faster and requires less storage than the original algorithm. Machine examples with comparative timing are included
Emergency Medical Service (EMS) Stations
Kansas Data Access and Support Center — EMS Locations in Kansas The EMS stations dataset consists of any location where emergency medical services (EMS) personnel are stationed or based out of, or where...
IMPROVED ESTIMATION OF FIBER LENGTH FROM 3-DIMENSIONAL IMAGES
Directory of Open Access Journals (Sweden)
Joachim Ohser
2013-03-01
Full Text Available A new method is presented for estimating the specific fiber length from 3D images of macroscopically homogeneous fiber systems. The method is based on a discrete version of the Crofton formula, where local knowledge from 3x3x3-pixel configurations of the image data is exploited. It is shown that the relative error resulting from the discretization of the outer integral of the Crofton formula amounts to at most 1.2%. An algorithmic implementation of the method is simple, and both the runtime and the memory requirements are low. The estimation is significantly improved by considering 3x3x3-pixel configurations instead of 2x2x2, as already studied in the literature.
The benefits of longer fuel cycle lengths
International Nuclear Information System (INIS)
Kesler, D.C.
1986-01-01
Longer fuel cycle lengths have been found to increase generation and improve outage management. A study at Duke Power Company has shown that longer fuel cycles offer both increased scheduling flexibility and increased capacity factors
Atomic frequency-time-length standards
International Nuclear Information System (INIS)
Gheorghiu, O.C.; Mandache, C.
1987-01-01
The principles of operation of atomic frequency-time-length standards and their principal characteristics are described. The role of quartz crystal oscillators that are slaved to active or passive standards is presented. (authors)
The analysis of projected fission track lengths
International Nuclear Information System (INIS)
Laslett, G.M.; Galbraith, R.F.; Green, P.F.
1994-01-01
This article deals with the question of how features of the thermal history can be estimated from projected track length measurements, i.e. lengths of the remaining parts of tracks that have intersected a surface, projected onto that surface. The appropriate mathematical theory is described and used to provide a sound basis both for understanding the nature of projected length measurements and for analysing observed data. The estimation of thermal history parameters corresponding to the current temperature, the maximum palaeotemperature and the time since cooling, is studied using laboratory data and simulations. In general the information contained in projected track lengths and angles is fairly limited, compared, for example, with that from a much smaller number of confined tracks, though we identify some circumstances when such measurements may be useful. Also it is not straightforward to extract the information and simple ad hoc estimation methods are generally inadequate. (author)
Complementary DNA-amplified fragment length polymorphism ...
African Journals Online (AJOL)
Complementary DNA-amplified fragment length polymorphism (AFLP-cDNA) analysis of differential gene expression from the xerophyte Ammopiptanthus mongolicus in response to cold, drought and cold together with drought.
Impedance of finite length resistive cylinder
Directory of Open Access Journals (Sweden)
S. Krinsky
2004-11-01
Full Text Available We determine the impedance of a cylindrical metal tube (resistor) of radius a, length g, and conductivity σ attached at each end to perfect conductors of semi-infinite length. Our main interest is in the asymptotic behavior of the impedance at high frequency (k≫1/a). In the equilibrium regime, ka²≪g, the impedance per unit length is accurately described by the well-known result for an infinite length tube with conductivity σ. In the transient regime, ka²≫g, where the contribution of transition radiation arising from the discontinuity in conductivity is important, we derive an analytic expression for the impedance and compute the short-range wakefield. The analytic results are shown to agree with numerical evaluation of the impedance.
Characteristic length of the knotting probability revisited
International Nuclear Information System (INIS)
Uehara, Erica; Deguchi, Tetsuo
2015-01-01
We present a self-avoiding polygon (SAP) model for circular DNA in which the radius of impermeable cylindrical segments corresponds to the screening length of double-stranded DNA surrounded by counter ions. For the model we evaluate the probability for a generated SAP with N segments having a given knot K through simulation. We call it the knotting probability of a knot K with N segments for the SAP model. We show that when N is large the most significant factor in the knotting probability is given by the exponentially decaying part exp(−N/N_K), where the estimates of the parameter N_K are consistent with the same value for all the different knots we investigated. We thus call it the characteristic length of the knotting probability. We give formulae expressing the characteristic length as a function of the cylindrical radius r_ex, i.e. the screening length of double-stranded DNA. (paper)
Chord length distribution for a compound capsule
International Nuclear Information System (INIS)
Pitřík, Pavel
2017-01-01
Chord length distribution is an important factor in the calculation of ionisation chamber responses. This article describes Monte Carlo calculations of the chord length distribution for a non-convex compound capsule. A Monte Carlo code was set up for the generation of random chords and the calculation of their lengths based on the input number of generations and the cavity dimensions. The code was written in JavaScript and can be executed in the majority of HTML viewers. The plot of occurrence of chords of different lengths has 3 peaks. It was found that the compound capsule cavity cannot simply be replaced with a spherical cavity of a triangular design. Furthermore, the compound capsule cavity is directionally dependent, which must be taken into account in calculations involving non-isotropic fields of primary particles in the beam, unless equilibrium of the secondary charged particles is attained. (orig.)
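The Monte Carlo approach is easy to demonstrate on a shape with a known answer. The sketch below samples μ-random (IUR) chords of a sphere — a sphere, not the paper's compound capsule — for which Cauchy's formula gives a mean chord length of 4V/S = 4R/3:

```python
import random

def sphere_chord_lengths(radius, n, seed=0):
    """Sample n mu-random (IUR) chord lengths of a sphere of given radius.

    A random line hitting the sphere corresponds to a uniform point on the
    projected disk; at distance r from the axis the chord is
    2*sqrt(R^2 - r^2), and r^2/R^2 is uniform on [0, 1] for a uniform
    point on the disk.
    """
    rng = random.Random(seed)
    return [2.0 * radius * (1.0 - rng.random()) ** 0.5 for _ in range(n)]
```

The sample mean converges to 4R/3, which gives a quick sanity check for any random-chord generator before it is applied to a non-convex cavity.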
Study on the Connecting Length of CFRP
Liu, Xiongfei; Li, Yue; Li, Zhanguo
2018-05-01
The paper studied the varying mode of shear stress in the connecting zone of CFRP. Using epoxy resin (EP) as the bond material, the performance of specimens with different connecting lengths of CFRP was tested. A CFRP-confined concrete column was tested subsequently to verify the conclusion. The results show that: (1) the bonding properties of the modified epoxy resin with CFRP are good; (2) as the connecting length increased, the ultimate tensile strength of CFRP increased as well within the range of the experiment parameters; (3) the tensile strength of CFRP can reach the ultimate strength when the connecting length is 90 mm; (4) a connecting length of 90 mm of CFRP meets the reinforcement requirements.
Fragment Length of Circulating Tumor DNA.
Underhill, Hunter R; Kitzman, Jacob O; Hellwig, Sabine; Welker, Noah C; Daza, Riza; Baker, Daniel N; Gligorich, Keith M; Rostomily, Robert C; Bronner, Mary P; Shendure, Jay
2016-07-01
Malignant tumors shed DNA into the circulation. The transient half-life of circulating tumor DNA (ctDNA) may afford the opportunity to diagnose, monitor recurrence, and evaluate response to therapy solely through a non-invasive blood draw. However, detecting ctDNA against the normally occurring background of cell-free DNA derived from healthy cells has proven challenging, particularly in non-metastatic solid tumors. In this study, distinct differences in fragment length size between ctDNAs and normal cell-free DNA are defined. Human ctDNA in rat plasma derived from human glioblastoma multiforme stem-like cells in the rat brain and human hepatocellular carcinoma in the rat flank were found to have a shorter principal fragment length than the background rat cell-free DNA (134-144 bp vs. 167 bp, respectively). Subsequently, a similar shift in the fragment length of ctDNA in humans with melanoma and lung cancer was identified compared to healthy controls. Comparison of fragment lengths from cell-free DNA between a melanoma patient and healthy controls found that the BRAF V600E mutant allele occurred more commonly at a shorter fragment length than the fragment length of the wild-type allele (132-145 bp vs. 165 bp, respectively). Moreover, size-selecting for shorter cell-free DNA fragment lengths substantially increased the EGFR T790M mutant allele frequency in human lung cancer. These findings provide compelling evidence that experimental or bioinformatic isolation of a specific subset of fragment lengths from cell-free DNA may improve detection of ctDNA.
Electron Effective-Attenuation-Length Database
SRD 82 NIST Electron Effective-Attenuation-Length Database (PC database, no charge) This database provides values of electron effective attenuation lengths (EALs) in solid elements and compounds at selected electron energies between 50 eV and 2,000 eV. The database was designed mainly to provide EALs (to account for effects of elastic-eletron scattering) for applications in surface analysis by Auger-electron spectroscopy (AES) and X-ray photoelectron spectroscopy (XPS).
The SME gauge sector with minimum length
Energy Technology Data Exchange (ETDEWEB)
Belich, H.; Louzada, H.L.C. [Universidade Federal do Espirito Santo, Departamento de Fisica e Quimica, Vitoria, ES (Brazil)
2017-12-15
We study the gauge sector of the Standard Model Extension (SME) with the Lorentz covariant deformed Heisenberg algebra associated to the minimum length. In order to find and estimate corrections, we clarify whether the violation of Lorentz symmetry and the existence of a minimum length are independent phenomena or are, in some way, related. With this goal, we analyze the dispersion relations of this theory. (orig.)
A Novel Parallel Algorithm for Edit Distance Computation
Directory of Open Access Journals (Sweden)
Muhammad Murtaza Yousaf
2018-01-01
Full Text Available The edit distance between two sequences is the minimum number of weighted transformation operations required to transform one string into the other. The weighted transformation operations are insert, remove, and substitute. A dynamic programming solution to find the edit distance exists, but it becomes computationally intensive when the lengths of the strings become very large. This work presents a novel parallel algorithm to solve the edit distance problem of string matching. The algorithm is based on resolving dependencies in the dynamic programming solution of the problem, and it is able to compute each row of the edit distance table in parallel. In this way, it becomes possible to compute the complete table in min(m,n) iterations for strings of size m and n, whereas the state-of-the-art parallel algorithm solves the problem in max(m,n) iterations. The proposed algorithm also increases the amount of parallelism in each of its iterations and is capable of exploiting spatial locality in its implementation. Additionally, the algorithm works in a load-balanced way that further improves its performance. The algorithm is implemented for multicore systems having shared memory. An OpenMP implementation shows linear speedup and better execution time compared to the state-of-the-art parallel approach, and the efficiency of the algorithm is also shown to be better than that of its competitor.
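The dependency structure such algorithms exploit is visible even in a serial sketch: every cell on one anti-diagonal of the DP table depends only on the two previous anti-diagonals, so the inner loop below is free of intra-diagonal dependencies and could be distributed across threads (plain Python here, not the authors' OpenMP code):

```python
def edit_distance_antidiagonal(s, t):
    """Levenshtein distance filled along anti-diagonals (i + j = d).

    Cells on diagonal d depend only on diagonals d-1 and d-2, so the
    inner loop over i has no intra-diagonal dependencies.
    """
    m, n = len(s), len(t)
    D = [[0] * (n + 1) for _ in range(m + 1)]
    for d in range(m + n + 1):
        for i in range(max(0, d - n), min(m, d) + 1):
            j = d - i
            if i == 0:
                D[0][j] = j          # insert j characters
            elif j == 0:
                D[i][0] = i          # delete i characters
            else:
                sub = 0 if s[i - 1] == t[j - 1] else 1
                D[i][j] = min(D[i - 1][j] + 1,        # delete
                              D[i][j - 1] + 1,        # insert
                              D[i - 1][j - 1] + sub)  # substitute
    return D[m][n]
```

A parallel version would assign slices of each anti-diagonal to separate threads and synchronize once per diagonal.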
A fast exact sequential algorithm for the partial digest problem.
Abbas, Mostafa M; Bahig, Hazem M
2016-12-22
Restriction site analysis involves determining the locations of restriction sites after the process of digestion by reconstructing their positions based on the lengths of the cut DNA. Using different reaction times with a single enzyme to cut DNA is a technique known as partial digestion. Determining the exact locations of restriction sites following a partial digestion is challenging due to the computational time required, even with the best known practical algorithm. In this paper, we introduce an efficient algorithm to find the exact solution for the partial digest problem. The algorithm is able to find all possible solutions for the input and works by traversing the solution tree with a breadth-first search in two stages, deleting all repeated subproblems. Two types of simulated data, random and Zhang, are used to measure the efficiency of the algorithm. We also apply the algorithm to real data for the Luciferase gene and the E. coli K12 genome. Our algorithm is a fast tool for finding the exact solution of the partial digest problem. The improvement is more than 75% over the best known practical algorithm in the worst case. For large inputs, our algorithm is able to solve the problem in a suitable time, while the best known practical algorithm cannot.
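A breadth-first traversal of the placement tree with duplicate-state pruning can be sketched as follows — a simplified reading of the approach, not the authors' implementation. Each level places the largest unexplained distance y at position y or width − y, and identical partial solutions are merged via a set:

```python
from collections import Counter

def partial_digest(distances):
    """All point sets whose pairwise distances equal the input multiset.

    BFS over partial placements; repeated partial solutions collapse
    into one state because each level is a set.
    """
    L = Counter(distances)
    width = max(L)               # the two extreme sites are 0 and width
    L[width] -= 1
    level = {((0, width), tuple(sorted((+L).elements())))}
    solutions = set()
    while level:
        nxt = set()
        for points, rest in level:
            if not rest:
                solutions.add(points)
                continue
            y = rest[-1]         # largest remaining distance
            for cand in {y, width - y}:
                # distances the candidate site would explain
                deltas = Counter(abs(cand - p) for p in points)
                remaining = Counter(rest)
                if all(remaining[d] >= c for d, c in deltas.items()):
                    remaining.subtract(deltas)
                    nxt.add((tuple(sorted(set(points) | {cand})),
                             tuple(sorted((+remaining).elements()))))
        level = nxt
    return sorted(solutions)
```

On the textbook instance {2,2,3,3,4,5,6,7,8,10} this recovers the point set {0,3,6,8,10} and its mirror {0,2,4,7,10}.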
Similarity-regulation of OS-EM for accelerated SPECT reconstruction
Vaissier, P. E. B.; Beekman, F. J.; Goorden, M. C.
2016-06-01
Ordered subsets expectation maximization (OS-EM) is widely used to accelerate image reconstruction in single photon emission computed tomography (SPECT). Speedup of OS-EM over maximum likelihood expectation maximization (ML-EM) is close to the number of subsets used. Although a high number of subsets can shorten reconstruction times significantly, it can also cause severe image artifacts such as improper erasure of reconstructed activity if projections contain few counts. We recently showed that such artifacts can be prevented by using a count-regulated OS-EM (CR-OS-EM) algorithm which automatically adapts the number of subsets for each voxel based on the estimated number of counts that the voxel contributed to the projections. While CR-OS-EM reached high speed-up over ML-EM in high-activity regions of images, speed in low-activity regions could still be very slow. In this work we propose similarity-regulated OS-EM (SR-OS-EM) as a much faster alternative to CR-OS-EM. SR-OS-EM also automatically and locally adapts the number of subsets, but it uses a different criterion for subset regulation: the number of subsets that is used for updating an individual voxel depends on how similar the reconstruction algorithm would update the estimated activity in that voxel with different subsets. Reconstructions of an image quality phantom and in vivo scans show that SR-OS-EM retains all of the favorable properties of CR-OS-EM, while reconstruction speed can be up to an order of magnitude higher in low-activity regions. Moreover our results suggest that SR-OS-EM can be operated with identical reconstruction parameters (including the number of iterations) for a wide range of count levels, which can be an additional advantage from a user perspective since users would only have to post-filter an image to present it at an appropriate noise level.
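The baseline that CR-OS-EM and SR-OS-EM build on is the plain OS-EM update, in which each sub-iteration applies the ML-EM multiplicative step using one subset of projection rows. A generic dense-matrix sketch follows (hypothetical system matrix, not a SPECT projector):

```python
import numpy as np

def os_em(A, y, n_subsets=4, n_iter=100):
    """OS-EM for y ~ Poisson(A x), x >= 0, with interleaved row subsets.

    With n_subsets=1 this reduces to ML-EM; speedup over ML-EM is
    roughly the number of subsets, as the abstract notes.
    """
    m, n = A.shape
    x = np.ones(n)
    subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
    for _ in range(n_iter):
        for rows in subsets:                       # one ML-EM step per subset
            As, ys = A[rows], y[rows]
            proj = As @ x                          # forward projection
            ratio = np.divide(ys, proj, out=np.zeros_like(ys), where=proj > 0)
            sens = As.sum(axis=0)                  # subset sensitivity
            x = np.where(sens > 0, x * (As.T @ ratio) / sens, x)
    return x
```

The regulated variants discussed above would, per voxel, choose how many subsets participate in this update instead of using a fixed n_subsets everywhere.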
A Parallel Butterfly Algorithm
Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing
2014-01-01
The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(Nd) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r2Nd logN). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms, and an analogue of a three-dimensional generalized Radon transform were, respectively, observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.
Directory of Open Access Journals (Sweden)
Hanns Holger Rutz
2016-11-01
Although the concept of algorithms was established long ago, its current topicality indicates a shift in the discourse. Classical definitions based on logic seem inadequate to describe algorithms' aesthetic capabilities. New approaches stress their involvement in material practices as well as their incompleteness. Algorithmic aesthetics can no longer be tied to the static analysis of programs, but must take into account the dynamic and experimental nature of coding practices. It is suggested that the aesthetic objects thus produced articulate something that could be called algorithmicity, or the space of algorithmic agency. This is the space or the medium – following Luhmann’s form/medium distinction – where human and machine undergo mutual incursions. In the resulting coupled “extimate” writing process, human initiative and algorithmic speculation can no longer be clearly separated. An observation of the defining aspects of such a medium is attempted by drawing a trajectory across a number of sound pieces. The operation of exchange between form and medium I call reconfiguration; it is indicated by this trajectory.
AC-600 reactor reloading pattern optimization by using genetic algorithms
International Nuclear Information System (INIS)
Wu Hongchun; Xie Zhongsheng; Yao Dong; Li Dongsheng; Zhang Zongyao
2000-01-01
The use of genetic algorithms to optimize the reloading pattern of a nuclear power plant reactor is proposed, and a new encoding and translation method is given. Optimization results of minimizing the core power peak and maximizing the cycle length for both low-leakage and out-in loading patterns of the AC-600 reactor are obtained.
Computational performance of a projection and rescaling algorithm
Pena, Javier; Soheili, Negar
2018-01-01
This paper documents a computational implementation of a {\em projection and rescaling algorithm} for finding most interior solutions to the pair of feasibility problems \[ \text{find} \; x \in L \cap \mathbb{R}^n_{+} \quad \text{and} \quad \text{find} \; \hat x \in L^\perp \cap \mathbb{R}^n_{+}, \] where $L$ denotes a linear subspace in $\mathbb{R}^n$ and $L^\perp$ denotes its orthogonal complement. The projection and rescaling algorithm is a recently developed method that combines a {\...
Institute of Scientific and Technical Information of China (English)
WANG ShunJin; ZHANG Hua
2007-01-01
Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with the Runge-Kutta algorithm and the symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.
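The idea of truncating the Taylor series of the exact solution can be sketched for a system whose higher time-derivatives are available in closed form. The snippet below is a minimal illustration, not the authors' algorithm: the harmonic oscillator x'' = -x is a hypothetical test case whose derivatives cycle through (x, v, -x, -v, ...), so an Nth-order Taylor step can be summed directly.

```python
import math

def taylor_step(x, v, h, order=8):
    """One step of an Nth-order Taylor-series integrator for x'' = -x.

    derivs[k] holds d^k x / dt^k at the current time; for this system
    each derivative is minus the one two places earlier.
    """
    derivs = [x, v]
    for k in range(2, order + 1):
        derivs.append(-derivs[k - 2])
    new_x = sum(d * h ** k / math.factorial(k) for k, d in enumerate(derivs))
    # Differentiating the truncated series term-by-term gives the velocity.
    new_v = sum(d * h ** (k - 1) / math.factorial(k - 1)
                for k, d in enumerate(derivs) if k >= 1)
    return new_x, new_v

# Integrate over one full period; the exact solution returns to (1, 0).
x, v = 1.0, 0.0
h, steps = 2 * math.pi / 100, 100
for _ in range(steps):
    x, v = taylor_step(x, v, h)
print(abs(x - 1.0), abs(v))  # both errors are tiny at order 8
```

At order 8 with 100 steps per period the global error is far below that of a fourth-order method at the same step size, illustrating the "controllable precision" the abstract refers to.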
Directory of Open Access Journals (Sweden)
Marta Magi
2010-06-01
During 2004-2005 and 2007-2008, 189 foxes (Vulpes vulpes) and 6 badgers (Meles meles) were collected in different areas of Central-Northern Italy (Piedmont, Liguria and Tuscany) and examined for Angiostrongylus vasorum infection. The prevalence of the infection was significantly different in the areas considered, with the highest values in the district of Imperia (80%, Liguria) and in Montezemolo (70%, southern Piedmont); the prevalence in Tuscany was 7%. One badger collected in the area of Imperia turned out to be infected, representing the first report of the parasite in this species in Italy. Further studies are needed to evaluate the role played by fox populations as reservoirs of infection and the probability of its spreading to domestic dogs.
doi:10.4404/hystrix-20.2-4442
Optical flow optimization using parallel genetic algorithm
Zavala-Romero, Olmo; Botella, Guillermo; Meyer-Bäse, Anke; Meyer Base, Uwe
2011-06-01
A new approach to optimizing the parameters of a gradient-based optical flow model using a parallel genetic algorithm (GA) is proposed. The main characteristics of the optical flow algorithm are its bio-inspiration and robustness against contrast, static patterns and noise, besides working consistently with several optical illusions where other algorithms fail. This model depends on many parameters which determine the number of channels, the orientations required, and the length and shape of the kernel functions used in the convolution stage, among many more. The GA is used to find a set of parameters which improves the accuracy of the optical flow on inputs where ground-truth data is available. This set of parameters helps to understand which of them are better suited for each type of input, and can be used to estimate the parameters of the optical flow algorithm for videos that share similar characteristics. The proposed implementation takes into account the embarrassingly parallel nature of the GA and uses the OpenMP Application Programming Interface (API) to speed up the process of estimating an optimal set of parameters. The information obtained in this work can be used to dynamically reconfigure systems, with potential applications in robotics, medical imaging and tracking.
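The embarrassingly parallel structure mentioned above is the per-individual fitness evaluation. The sketch below is schematic only: a hypothetical quadratic fitness stands in for optical-flow accuracy against ground truth, and a Python thread pool stands in for the OpenMP parallel loop; `fitness`, `evolve`, and the target vector are all invented for illustration.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def fitness(params):
    # Hypothetical stand-in for optical-flow accuracy on ground-truth data:
    # negative squared distance to a known optimum keeps the sketch testable.
    target = [0.5, -1.0, 2.0]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def evolve(pop_size=40, n_params=3, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-3, 3) for _ in range(n_params)] for _ in range(pop_size)]
    with ThreadPoolExecutor() as pool:
        for _ in range(generations):
            # Fitness evaluation is embarrassingly parallel (one task per individual).
            scores = list(pool.map(fitness, pop))
            ranked = sorted(zip(scores, pop), key=lambda sp: sp[0], reverse=True)
            elite = [p for _, p in ranked[: pop_size // 4]]
            children = []
            while len(elite) + len(children) < pop_size:
                a, b = rng.sample(elite, 2)                       # crossover
                children.append([(u + w) / 2 + rng.gauss(0, 0.1)  # mutation
                                 for u, w in zip(a, b)])
            pop = elite + children
    return max(pop, key=fitness)

best = evolve()
print(best)  # converges near the hypothetical optimum [0.5, -1.0, 2.0]
```

Because the elite survive unchanged each generation, the best individual never regresses, and each generation's evaluation phase could be distributed across cores exactly as an OpenMP `parallel for` would.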
GPU accelerated population annealing algorithm
Barash, Lev Yu.; Weigel, Martin; Borovský, Michal; Janke, Wolfhard; Shchur, Lev N.
2017-11-01
Population annealing is a promising recent approach for Monte Carlo simulations in statistical physics, in particular for the simulation of systems with complex free-energy landscapes. It is a hybrid method, combining importance sampling through Markov chains with elements of sequential Monte Carlo in the form of population control. While it appears to provide algorithmic capabilities for the simulation of such systems that are roughly comparable to those of more established approaches such as parallel tempering, it is intrinsically much more suitable for massively parallel computing. Here, we tap into this structural advantage and present a highly optimized implementation of the population annealing algorithm on GPUs that promises speed-ups of several orders of magnitude as compared to a serial implementation on CPUs. While the sample code is for simulations of the 2D ferromagnetic Ising model, it should be easily adapted for simulations of other spin models, including disordered systems. Our code includes implementations of some advanced algorithmic features that have only recently been suggested, namely the automatic adaptation of temperature steps and a multi-histogram analysis of the data at different temperatures.
Program Files doi: http://dx.doi.org/10.17632/sgzt4b7b3m.1
Licensing provisions: Creative Commons Attribution license (CC BY 4.0)
Programming language: C, CUDA
External routines/libraries: NVIDIA CUDA Toolkit 6.5 or newer
Nature of problem: The program calculates the internal energy, specific heat, several magnetization moments, entropy and free energy of the 2D Ising model on square lattices of edge length L with periodic boundary conditions as a function of inverse temperature β.
Solution method: The code uses population annealing, a hybrid method combining Markov chain updates with population control. The code is implemented for NVIDIA GPUs using the CUDA language and employs advanced techniques such as multi-spin coding, adaptive temperature
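The structure of population annealing, stripped of all GPU optimizations, fits in a few dozen lines: anneal a population of replicas through a temperature schedule, resampling with Boltzmann reweighting factors at each step and then applying Metropolis sweeps. The sketch below is a serial toy version with hypothetical schedule and population-size values, not the paper's CUDA code.

```python
import math
import random

def energy(s, L):
    """Energy of an L x L Ising configuration with periodic boundaries."""
    return -sum(s[i][j] * (s[(i + 1) % L][j] + s[i][(j + 1) % L])
                for i in range(L) for j in range(L))

def population_annealing(L=4, R=200, n_beta=11, sweeps=5, seed=0):
    rng = random.Random(seed)
    betas = [k / (n_beta - 1) for k in range(n_beta)]          # 0.0 ... 1.0
    pop = [[[rng.choice([-1, 1]) for _ in range(L)] for _ in range(L)]
           for _ in range(R)]
    beta_prev = 0.0
    for beta in betas:
        # Population control: resample replicas with weights exp(-dbeta * E).
        w = [math.exp(-(beta - beta_prev) * energy(s, L)) for s in pop]
        idx = rng.choices(range(R), weights=w, k=R)
        pop = [[row[:] for row in pop[i]] for i in idx]
        # Markov-chain part: Metropolis sweeps at the new temperature.
        for s in pop:
            for _ in range(sweeps * L * L):
                i, j = rng.randrange(L), rng.randrange(L)
                dE = 2 * s[i][j] * (s[(i + 1) % L][j] + s[(i - 1) % L][j]
                                    + s[i][(j + 1) % L] + s[i][(j - 1) % L])
                if dE <= 0 or rng.random() < math.exp(-beta * dE):
                    s[i][j] = -s[i][j]
        beta_prev = beta
    return sum(energy(s, L) for s in pop) / R     # mean energy at the final beta

print(population_annealing())  # well below zero: the ordered low-T phase
```

On the GPU version, the resampling and the independent Metropolis chains are exactly the parts that parallelize across thousands of threads.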
Detection of algorithmic trading
Bogoev, Dimitar; Karam, Arzé
2017-10-01
We develop a new approach to reflect the behavior of algorithmic traders. Specifically, we provide an analytical and tractable way to infer patterns of quote volatility and price momentum consistent with different types of strategies employed by algorithmic traders, and we propose two ratios to quantify these patterns. Quote volatility ratio is based on the rate of oscillation of the best ask and best bid quotes over an extremely short period of time; whereas price momentum ratio is based on identifying patterns of rapid upward or downward movement in prices. The two ratios are evaluated across several asset classes. We further run a two-stage Artificial Neural Network experiment on the quote volatility ratio; the first stage is used to detect the quote volatility patterns resulting from algorithmic activity, while the second is used to validate the quality of signal detection provided by our measure.
Handbook of Memetic Algorithms
Cotta, Carlos; Moscato, Pablo
2012-01-01
Memetic Algorithms (MAs) are computational intelligence structures combining multiple and various operators in order to address optimization problems. The combination and interaction amongst operators evolves and promotes the diffusion of the most successful units and generates an algorithmic behavior which can handle complex objective functions and hard fitness landscapes. “Handbook of Memetic Algorithms” organizes, in a structured way, all the most important results in the field of MAs from their earliest definition until now. A broad review including various algorithmic solutions as well as successful applications is included in this book. Each class of optimization problems, such as constrained optimization, multi-objective optimization, continuous versus combinatorial problems, and uncertainties, is analysed separately and, for each problem, memetic recipes for tackling the difficulties are given with some successful examples. Although this book contains chapters written by multiple authors, ...
Algorithms in invariant theory
Sturmfels, Bernd
2008-01-01
J. Kung and G.-C. Rota, in their 1984 paper, write: "Like the Arabian phoenix rising out of its ashes, the theory of invariants, pronounced dead at the turn of the century, is once again at the forefront of mathematics". The book of Sturmfels is both an easy-to-read textbook for invariant theory and a challenging research monograph that introduces a new approach to the algorithmic side of invariant theory. The Groebner bases method is the main tool by which the central problems in invariant theory become amenable to algorithmic solutions. Students will find the book an easy introduction to this "classical and new" area of mathematics. Researchers in mathematics, symbolic computation, and computer science will get access to a wealth of research ideas, hints for applications, outlines and details of algorithms, worked out examples, and research problems.
CERN. Geneva; PUNZI, Giovanni
2015-01-01
Charged-particle reconstruction is one of the most demanding computational tasks found in HEP, and it becomes increasingly important to perform it in real time. We envision that HEP would greatly benefit from achieving a long-term goal of making track reconstruction happen transparently as part of the detector readout ("detector-embedded tracking"). We describe here a track-reconstruction approach based on a massively parallel pattern-recognition algorithm, inspired by studies of the processing of visual images by the brain as it happens in nature (the 'RETINA algorithm'). It turns out that high-quality tracking in large HEP detectors is possible with very small latencies, when this algorithm is implemented in specialized processors, based on current state-of-the-art, high-speed/high-bandwidth digital devices.
Length-weight relationship of freshwater wild fish species
African Journals Online (AJOL)
Dr Naeem
2012-06-21
Length-weight (LWR) and length-length relationships (LLR) were determined for a freshwater catfish ... Key words: Mystus bleekeri, length-weight relationship, length-length relationship, predictive equations. Mystus bleekeri (freshwater catfish; Day, 1877), locally ...
Multiple Word-Length High-Level Synthesis
Directory of Open Access Journals (Sweden)
Coussy Philippe
2008-01-01
Digital signal processing (DSP) applications are nowadays widely used and their complexity is ever growing. The design of dedicated hardware accelerators is thus still needed in system-on-chip and embedded systems. Realistic hardware implementation first requires converting the floating-point data of the initial specification into arbitrary-length (finite-precision) data while keeping an acceptable computation accuracy. Next, an optimized hardware architecture has to be designed. Considering a uniform bit-width specification allows the use of a traditional automated design flow; however, it leads to an oversized design. On the other hand, considering a non-uniform bit-width specification yields a smaller circuit but requires complex design tasks. In this paper, we propose an approach that inputs a C/C++ specification. The design flow, based on high-level synthesis (HLS) techniques, automatically generates a potentially pipelined RTL architecture described in VHDL. Both bit-accurate integer and fixed-point data types can be used in the input specification. The generated architecture uses components (operator, register, etc.) that have different widths. The design constraints are the clock period and the throughput of the application. The proposed approach considers data word-length information in all the synthesis steps by using dedicated algorithms. We show in this paper the effectiveness of the proposed approach through several design experiments in the DSP domain.
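The float-to-finite-precision conversion step described above can be illustrated with a small quantizer. This is a generic sketch, not the paper's tool: the `to_fixed` helper and its Q-format defaults are hypothetical, showing only how word length trades hardware width against accuracy.

```python
def to_fixed(x, int_bits=4, frac_bits=8):
    """Quantize x to a signed fixed-point Q(int_bits).(frac_bits) value:
    round to the nearest step of 2**-frac_bits, saturating at the range limits."""
    scale = 1 << frac_bits
    lo = -(1 << (int_bits + frac_bits - 1))
    hi = (1 << (int_bits + frac_bits - 1)) - 1
    q = max(lo, min(hi, round(x * scale)))
    return q / scale

samples = [0.1, -1.375, 3.14159, 2.71828]
for f in (4, 8, 12):
    err = max(abs(x - to_fixed(x, frac_bits=f)) for x in samples)
    print(f, err)  # worst-case error shrinks roughly as 2**-(f + 1)
```

In a non-uniform word-length flow, each operator in the datapath would carry its own `(int_bits, frac_bits)` pair, which is exactly the per-component width information the synthesis steps must propagate.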
Multiple Word-Length High-Level Synthesis
Directory of Open Access Journals (Sweden)
Dominique Heller
2008-09-01
Digital signal processing (DSP) applications are nowadays widely used and their complexity is ever growing. The design of dedicated hardware accelerators is thus still needed in system-on-chip and embedded systems. Realistic hardware implementation first requires converting the floating-point data of the initial specification into arbitrary-length (finite-precision) data while keeping an acceptable computation accuracy. Next, an optimized hardware architecture has to be designed. Considering a uniform bit-width specification allows the use of a traditional automated design flow; however, it leads to an oversized design. On the other hand, considering a non-uniform bit-width specification yields a smaller circuit but requires complex design tasks. In this paper, we propose an approach that inputs a C/C++ specification. The design flow, based on high-level synthesis (HLS) techniques, automatically generates a potentially pipelined RTL architecture described in VHDL. Both bit-accurate integer and fixed-point data types can be used in the input specification. The generated architecture uses components (operator, register, etc.) that have different widths. The design constraints are the clock period and the throughput of the application. The proposed approach considers data word-length information in all the synthesis steps by using dedicated algorithms. We show in this paper the effectiveness of the proposed approach through several design experiments in the DSP domain.
Named Entity Linking Algorithm
Directory of Open Access Journals (Sweden)
M. F. Panteleev
2017-01-01
In tasks of processing text in natural language, Named Entity Linking (NEL) is the task of identifying an entity found in the text and linking it to an entity in a knowledge base (for example, DBpedia). Currently, there is a diversity of approaches to solving this problem, but two main classes can be identified: graph-based approaches and machine-learning-based ones. An algorithm based on both graph and machine-learning approaches is proposed, following the stated assumptions about the interrelations of named entities in a sentence and in general. In the case of graph-based approaches, it is necessary to identify an optimal set of related entities according to some metric that characterizes the distance between these entities in a graph built on some knowledge base. Due to limitations in processing power, solving this task directly is impossible, so a modification is proposed. Based on machine-learning algorithms alone, an independent solution cannot be built due to the small volumes of training datasets relevant to the NEL task; however, their use can contribute to improving the quality of the algorithm. An adaptation of the Latent Dirichlet Allocation model is proposed in order to obtain a measure of the compatibility of attributes of various entities encountered in one context. The efficiency of the proposed algorithm was tested experimentally. A test dataset was independently generated, and on its basis the performance of the model using the proposed algorithm was compared with the open-source product DBpedia Spotlight, which solves the NEL problem. The mock-up based on the proposed algorithm showed a lower speed than DBpedia Spotlight but higher accuracy, which indicates the prospects for work in this direction. The main directions of development are proposed in order to increase the accuracy and productivity of the system.
Fokkinga, M.M.
1992-01-01
An algorithm is the input-output effect of a computer program; mathematically, the notion of algorithm comes close to the notion of function. Just as arithmetic is the theory and practice of calculating with numbers, so is ALGORITHMICS the theory and practice of calculating with algorithms. Just as
A cluster algorithm for graphs
S. van Dongen
2000-01-01
A cluster algorithm for graphs called the Markov Cluster algorithm (MCL algorithm) is introduced. The algorithm provides basically an interface to an algebraic process defined on stochastic matrices, called the MCL process. The graphs may be both weighted (with nonnegative weight)
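The MCL process alternates two operations on a column-stochastic matrix of the graph: expansion (a matrix power, spreading flow along longer paths) and inflation (an elementwise power followed by column renormalization, strengthening strong flows). A minimal sketch on a hypothetical toy graph:

```python
import numpy as np

def mcl(adj, expansion=2, inflation=2.0, iters=50):
    """Markov Cluster (MCL) process: alternate expansion and inflation on the
    column-stochastic matrix of the graph until a (near) fixed point."""
    M = adj.astype(float) + np.eye(len(adj))       # self-loops for stability
    M /= M.sum(axis=0)                             # make columns stochastic
    for _ in range(iters):
        M = np.linalg.matrix_power(M, expansion)   # expansion
        M = M ** inflation                         # inflation (elementwise)
        M /= M.sum(axis=0)                         # renormalize columns
    # Attractor rows of the limit matrix group the columns they attract.
    clusters = {tuple(np.nonzero(row > 1e-6)[0]) for row in M}
    return sorted(c for c in clusters if c)

# Hypothetical toy graph: two triangles joined by a single edge (2-3).
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]])
print(mcl(A))  # the two triangles emerge as separate clusters
```

Inflation is the parameter that controls granularity: raising it produces more, smaller clusters, while lowering it merges them.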
Algorithms for Reinforcement Learning
Szepesvari, Csaba
2010-01-01
Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner's predictions. Further, the predictions may have long term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms'
Animation of planning algorithms
Sun, Fan
2014-01-01
Planning is the process of creating a sequence of steps/actions that will satisfy a goal of a problem. The partial order planning (POP) algorithm is one Artificial Intelligence approach to problem planning. While studying the G52PAS module, I found that it is difficult for students to understand this planning algorithm just by reading its pseudo code and doing some written exercises. Students cannot see clearly how each actual step works and might miss some steps because of their confusion. ...
Secondary Vertex Finder Algorithm
Heer, Sebastian; The ATLAS collaboration
2017-01-01
If a jet originates from a b-quark, a b-hadron is formed during the fragmentation process. In its dominant decay modes, the b-hadron decays into a c-hadron via the electroweak interaction. Both b- and c-hadrons have lifetimes long enough, to travel a few millimetres before decaying. Thus displaced vertices from b- and subsequent c-hadron decays provide a strong signature for a b-jet. Reconstructing these secondary vertices (SV) and their properties is the aim of this algorithm. The performance of this algorithm is studied with tt̄ events, requiring at least one lepton, simulated at 13 TeV.
Parallel Algorithms and Patterns
Energy Technology Data Exchange (ETDEWEB)
Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-16
This is a PowerPoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion, which Gabe Rockefeller would like to develop.
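One of the patterns named above, the prefix scan, can be sketched as a Hillis-Steele inclusive scan: O(log n) doubling steps, each of which is a fully concurrent map over the elements (simulated serially here).

```python
def inclusive_scan(xs):
    """Hillis-Steele inclusive prefix-sum scan.

    Each while-iteration doubles the stride; the inner loop is the part that
    would run concurrently across all elements on a parallel machine."""
    out = list(xs)
    step = 1
    while step < len(out):
        nxt = out[:]                        # double-buffer, as on a GPU
        for i in range(step, len(out)):     # conceptually a parallel-for
            nxt[i] = out[i] + out[i - step]
        out = nxt
        step *= 2
    return out

print(inclusive_scan([3, 1, 7, 0, 4, 1, 6, 3]))  # [3, 4, 11, 11, 15, 16, 22, 25]
```

The same skeleton works for any associative operator (max, logical or, ...), which is why scans appear in so many of the scenarios the presentation mentions.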
Randomized Filtering Algorithms
DEFF Research Database (Denmark)
Katriel, Irit; Van Hentenryck, Pascal
2008-01-01
of AllDifferent and its generalization, the Global Cardinality Constraint. The first delayed filtering scheme is a Monte Carlo algorithm: its running time is superior, in the worst case, to that of enforcing arc consistency after every domain event, while its filtering effectiveness is analyzed ... in the expected sense. The second scheme is a Las Vegas algorithm using filtering triggers: its effectiveness is the same as enforcing arc consistency after every domain event, while in the expected case it is faster by a factor of m/n, where n and m are, respectively, the number of nodes and edges...
A space-efficient algorithm for local similarities.
Huang, X Q; Hardison, R C; Miller, W
1990-10-01
Existing dynamic-programming algorithms for identifying similar regions of two sequences require time and space proportional to the product of the sequence lengths. Often this space requirement is more limiting than the time requirement. We describe a dynamic-programming local-similarity algorithm that needs only space proportional to the sum of the sequence lengths. The method can also find repeats within a single long sequence. To illustrate the algorithm's potential, we discuss comparison of a 73,360 nucleotide sequence containing the human beta-like globin gene cluster and a corresponding 44,594 nucleotide sequence for rabbit, a problem well beyond the capabilities of other dynamic-programming software.
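The space saving comes from keeping only the current and previous rows of the dynamic-programming matrix when just the best score is needed (recovering the alignment itself requires the divide-and-conquer refinement the paper describes). A sketch of the linear-space score computation, in the style of Smith-Waterman local alignment with simple linear gap costs:

```python
def local_similarity_score(a, b, match=2, mismatch=-1, gap=-1):
    """Best local-alignment score between strings a and b using O(len(b)) space:
    only the previous DP row is retained while scanning a."""
    prev = [0] * (len(b) + 1)
    best = 0
    for ca in a:
        cur = [0] * (len(b) + 1)
        for j, cb in enumerate(b, 1):
            sub = prev[j - 1] + (match if ca == cb else mismatch)
            cur[j] = max(0, sub, prev[j] + gap, cur[j - 1] + gap)
            best = max(best, cur[j])
        prev = cur
    return best

print(local_similarity_score("ACACACTA", "AGCACACA"))  # 12
```

The same two-row trick applied to one long sequence against itself finds internal repeats, as the abstract notes.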
Subsurface imaging by electrical and EM methods
Energy Technology Data Exchange (ETDEWEB)
NONE
1998-12-01
This report consists of 3 subjects. 1) Three dimensional inversion of resistivity data with topography : In this study, we developed a 3-D inversion method based on the finite element calculation of model responses, which can effectively accommodate the irregular topography. In solving the inverse problem, the iterative least-squares approach comprising the smoothness-constraints was taken along with the reciprocity approach in the calculation of Jacobian. Furthermore the Active Constraint Balancing, which has been recently developed by ourselves to enhance the resolving power of the inverse problem, was also employed. Since our new algorithm accounts for the topography in the inversion step, topography correction is not necessary as a preliminary processing and we can expect a more accurate image of the earth. 2) Electromagnetic responses due to a source in the borehole : The effects of borehole fluid and casing on the borehole EM responses should thoroughly be analyzed since they may affect the resultant image of the earth. In this study, we developed an accurate algorithm for calculating the EM responses containing the effects of borehole fluid and casing when a current-carrying ring is located on the borehole axis. An analytic expression for primary vertical magnetic field along the borehole axis was first formulated and the fast Fourier transform is to be applied to get the EM fields at any location in whole space. 3) High frequency electromagnetic impedance survey : At high frequencies the EM impedance becomes a function of the angle of incidence or the horizontal wavenumber, so the electrical properties cannot be readily extracted without first eliminating the effect of horizontal wavenumber on the impedance. For this purpose, this paper considers two independent methods for accurately determining the horizontal wavenumber, which in turn is used to correct the impedance data. The 'apparent' electrical properties derived from the corrected impedance
Optimization of the Critical Diameter and Average Path Length of Social Networks
Directory of Open Access Journals (Sweden)
Haifeng Du
2017-01-01
Optimizing average path length (APL) by adding shortcut edges has been widely discussed in connection with social networks, but the relationship between network diameter and APL is generally ignored in the dynamic optimization of APL. In this paper, we analyze this relationship and transform the problem of optimizing APL into the problem of decreasing the diameter to 2. We propose a mathematical model based on a memetic algorithm. Experimental results show that our algorithm can efficiently solve this problem as well as optimize APL.
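Both quantities being optimized, APL and diameter, and the effect of adding a shortcut edge can be computed directly with breadth-first search. The four-node example below is hypothetical, chosen only to show the two measures dropping together when a shortcut is inserted:

```python
from collections import deque

def bfs_dists(adj, s):
    """Hop distances from s to every reachable node (adjacency dict)."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def apl_and_diameter(adj):
    """Average path length and diameter of a connected undirected graph."""
    n = len(adj)
    total, diam = 0, 0
    for s in adj:
        d = bfs_dists(adj, s)
        total += sum(d.values())
        diam = max(diam, max(d.values()))
    return total / (n * (n - 1)), diam

# Path graph 0-1-2-3; the shortcut 0-3 lowers both APL and diameter.
g = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(apl_and_diameter(g))   # (1.666..., 3)
g[0].append(3)
g[3].append(0)
print(apl_and_diameter(g))   # (1.333..., 2)
```

A memetic optimizer for this problem would use such a routine inside its fitness evaluation, scoring each candidate set of shortcut edges.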
Length expectation values in quantum Regge calculus
International Nuclear Information System (INIS)
Khatsymovsky, V.M.
2004-01-01
Regge calculus configuration superspace can be embedded into a more general superspace where the length of any edge is defined ambiguously, depending on the 4-tetrahedron containing the edge. Moreover, the latter superspace can be extended further so that even edge lengths in each 4-tetrahedron are not defined; only area tensors of the 2-faces in it are. We make use of our previous result concerning quantization of the area tensor Regge calculus, which gives finite expectation values for areas. We also use our result showing that the quantum measure in Regge calculus can be uniquely fixed once we know the quantum measure on (the space of the functionals on) the superspace of the theory with ambiguously defined edge lengths. We find that in this framework quantization of the usual Regge calculus is defined up to a parameter. The theory may possess nonzero (of the order of the Planck scale) or zero length expectation values depending on whether this parameter is larger or smaller than a certain value. Vanishing length expectation values mean that the theory is becoming continuous, here dynamically in the originally discrete framework.
Explaining the length threshold of polyglutamine aggregation
International Nuclear Information System (INIS)
De Los Rios, Paolo; Hafner, Marc; Pastore, Annalisa
2012-01-01
The existence of a length threshold, of about 35 residues, above which polyglutamine repeats can give rise to aggregation and to pathologies, is one of the hallmarks of polyglutamine neurodegenerative diseases such as Huntington’s disease. The reason why such a minimal length exists at all has remained one of the main open issues in research on the molecular origins of such classes of diseases. Following the seminal proposals of Perutz, most research has focused on the hunt for a special structure, attainable only above the minimal length, able to trigger aggregation. Such a structure has remained elusive and there is growing evidence that it might not exist at all. Here we review some basic polymer and statistical physics facts and show that the existence of a threshold is compatible with the modulation that the repeat length imposes on the association and dissociation rates of polyglutamine polypeptides to and from oligomers. In particular, their dramatically different functional dependence on the length rationalizes the very presence of a threshold and hints at the cellular processes that might be at play, in vivo, to prevent aggregation and the consequent onset of the disease. (paper)
An Ordering Linear Unification Algorithm
Institute of Scientific and Technical Information of China (English)
胡运发
1989-01-01
In this paper, we present an ordering linear unification algorithm (OLU). A new idea on substitution of the binding terms is introduced into the algorithm, which is able to overcome some drawbacks of other algorithms, e.g., the MM algorithm [1] and the RG1 and RG2 algorithms [2]. In particular, if we use directed cyclic graphs, the algorithm need not check the binding order; the OLU algorithm can then also be applied to infinite tree data structures, and higher efficiency can be expected. The paper focuses on the discussion of the OLU algorithm and a partial order structure with respect to the unification algorithm. The algorithm has been implemented in the GKD-PROLOG/VAX 780 interpreting system. Experimental results have shown that the algorithm is very simple and efficient.
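For context, the underlying problem is syntactic unification: finding a substitution that makes two terms identical. The sketch below is a baseline Robinson-style unifier with an occurs check, not the ordered-substitution OLU variant; the term encoding (uppercase strings as variables, tuples as compound terms) is an illustrative convention.

```python
def is_var(t):
    """Variables are strings starting with an uppercase letter."""
    return isinstance(t, str) and t[:1].isupper()

def walk(t, subst):
    """Follow variable bindings to the representative term."""
    while is_var(t) and t in subst:
        t = subst[t]
    return t

def occurs(v, t, subst):
    t = walk(t, subst)
    if t == v:
        return True
    return isinstance(t, tuple) and any(occurs(v, a, subst) for a in t[1:])

def bind(v, t, subst):
    if occurs(v, t, subst):        # occurs check: reject X = f(X)
        return None
    s = dict(subst)
    s[v] = t
    return s

def unify(x, y, subst=None):
    """Return a unifying substitution for terms x, y, or None on failure.
    Compound terms are tuples: (functor, arg1, arg2, ...)."""
    if subst is None:
        subst = {}
    x, y = walk(x, subst), walk(y, subst)
    if x == y:
        return subst
    if is_var(x):
        return bind(x, y, subst)
    if is_var(y):
        return bind(y, x, subst)
    if (isinstance(x, tuple) and isinstance(y, tuple)
            and len(x) == len(y) and x[0] == y[0]):
        for xa, ya in zip(x[1:], y[1:]):
            subst = unify(xa, ya, subst)
            if subst is None:
                return None
        return subst
    return None

# f(X, g(a)) unifies with f(b, g(Y)) under {X: b, Y: a}.
print(unify(("f", "X", ("g", "a")), ("f", "b", ("g", "Y"))))
```

OLU-style improvements target exactly the costly parts here: the order in which bindings are substituted and the occurs/binding-order checks that this baseline performs eagerly.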
Advanced algorithms for information science
Energy Technology Data Exchange (ETDEWEB)
Argo, P.; Brislawn, C.; Fitzgerald, T.J.; Kelley, B.; Kim, W.H.; Mazieres, B.; Roeder, H.; Strottman, D.
1998-12-31
This is the final report of a one-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). In a modern information-controlled society the importance of fast computational algorithms facilitating data compression and image analysis cannot be overemphasized. Feature extraction and pattern recognition are key to many LANL projects and the same types of dimensionality reduction and compression used in source coding are also applicable to image understanding. The authors have begun developing wavelet coding which decomposes data into different length-scale and frequency bands. New transform-based source-coding techniques offer potential for achieving better, combined source-channel coding performance by using joint-optimization techniques. They initiated work on a system that compresses the video stream in real time, and which also takes the additional step of analyzing the video stream concurrently. By using object-based compression schemes (where an object is an identifiable feature of the video signal, repeatable in time or space), they believe that the analysis is directly related to the efficiency of the compression.
New Optimization Algorithms in Physics
Hartmann, Alexander K
2004-01-01
Many physicists are not aware that they can solve their problems by applying optimization algorithms. Because the number of such algorithms is steadily increasing, many new algorithms have not yet been presented comprehensively. This presentation of recently developed algorithms applied in physics, including demonstrations of how they work and related results, aims to encourage their application; accordingly, the algorithms selected cover concepts and methods ranging from statistical physics to optimization problems emerging in theoretical computer science.
A polynomial time algorithm for checking regularity of totally normed process algebra
Yang, F.; Huang, H.
2015-01-01
A polynomial algorithm for the regularity problem of weak and branching bisimilarity on totally normed process algebra (PA) processes is given. Its time complexity is O(n^3 + mn), where n is the number of transition rules and m is the maximal length of the rules. The algorithm works for...
Faster exact algorithms for computing Steiner trees in higher dimensional Euclidean spaces
DEFF Research Database (Denmark)
Fonseca, Rasmus; Brazil, Marcus; Winter, Pawel
The Euclidean Steiner tree problem asks for a network of minimum total length interconnecting a finite set of points in d-dimensional space. For d ≥ 3, only one practical algorithmic approach exists for this problem --- proposed by Smith in 1992. A number of refinements of Smith's algorithm have...
A parallel row-based algorithm for standard cell placement with integrated error control
Sargent, Jeff S.; Banerjee, Prith
1989-01-01
A new row-based parallel algorithm for standard-cell placement targeted for execution on a hypercube multiprocessor is presented. Key features of this implementation include a dynamic simulated-annealing schedule, row-partitioning of the VLSI chip image, and two novel approaches to control error in parallel cell-placement algorithms: (1) Heuristic Cell-Coloring; (2) Adaptive Sequence Length Control.
Nuclear reactor with scrammable part length rod
International Nuclear Information System (INIS)
Bevilacqua, F.
1979-01-01
A new part length rod is provided. It may be used to control xenon-induced power oscillations but also to contribute to shutdown reactivity when a rapid shutdown of the reactor is required. The part length rod consists of a control rod with three regions. The lower control region is a longer, weaker active portion separated from an upper, stronger, shorter poison section by an intermediate section that is a relative non-absorber of neutrons. The combination of the longer, weaker control section with the upper high-worth poison section permits this part length rod to be scrammed into the core when a reactor shutdown is required, but also permits the control rod to be used as a tool to control power distribution in both the axial and radial directions during normal operation.
Resonance effects in neutron scattering lengths
Energy Technology Data Exchange (ETDEWEB)
Lynn, J.E.
1989-06-01
The nature of neutron scattering lengths is described and the nuclear effects giving rise to their variation are discussed. Some examples of the shortcomings of the available nuclear data base, particularly for heavy nuclei, are given. Methods are presented for improving this data base, in particular for obtaining the energy variation of the complex coherent scattering length from long to sub-ångström wavelengths from the available sources of slow neutron cross section data. Examples of this information are given for several of the rare earth nuclides. Some examples of the effect of resonances in neutron reflection and diffraction are discussed. This report documents a seminar given at Argonne National Laboratory in March 1989. 18 refs., 18 figs.
Aminophylline increases seizure length during electroconvulsive therapy.
Stern, L; Dannon, P N; Hirschmann, S; Schriber, S; Amytal, D; Dolberg, O T; Grunhaus, L
1999-12-01
Electroconvulsive therapy (ECT) is considered to be one of the most effective treatments for patients with major depression and persistent psychosis. Seizure characteristics probably determine the therapeutic effect of ECT; as a consequence, short seizures are accepted as one of the factors of poor outcome. During most ECT courses seizure threshold increases and seizure duration decreases. Methylxanthine preparations, caffeine, and theophylline have been used to prolong seizure duration. The use of aminophylline, more readily available than caffeine, has not been well documented. The objective of this study was to test the effects of aminophylline on seizure length. Fourteen drug-free patients with diagnoses of affective disorder or psychotic episode receiving ECT participated in this study. Seizure length was assessed clinically and per EEG. Statistical comparisons were done using paired t tests. A significant increase (p < 0.04) in seizure length was achieved and maintained on three subsequent treatments with aminophylline. No adverse events were noted from the addition of aminophylline.
Minimal Length Scale Scenarios for Quantum Gravity.
Hossenfelder, Sabine
2013-01-01
We review the question of whether the fundamental laws of nature limit our ability to probe arbitrarily short distances. First, we examine what insights can be gained from thought experiments for probes of shortest distances, and summarize what can be learned from different approaches to a theory of quantum gravity. Then we discuss some models that have been developed to implement a minimal length scale in quantum mechanics and quantum field theory. These models have entered the literature as the generalized uncertainty principle or the modified dispersion relation, and have allowed the study of the effects of a minimal length scale in quantum mechanics, quantum electrodynamics, thermodynamics, black-hole physics and cosmology. Finally, we touch upon the question of ways to circumvent the manifestation of a minimal length scale in short-distance physics.
Directory of Open Access Journals (Sweden)
César A Sánchez
2009-12-01
Full Text Available A stage-by-stage solution method is presented for the set of mass balance, equilibrium relation, composition summation and enthalpy (MESH: mass, equilibrium, summation, enthalpy) equations that represent the equilibrium model for a countercurrent arrangement of liquid-phase extraction stages. The theoretical foundation lies in thermodynamics: liquid-liquid equilibrium, the isothermal flash and the adiabatic flash. The algorithm goes beyond the scope of the graphical and isothermal methods typical in the study of extraction processes and is applicable to very common additional situations: heat transfer in the stages, adiabatic stages, and different temperatures for the feed and solvent streams. The algorithm is illustrated with three examples, the first two in isothermal operation with three components (water, acetic acid and butyl acetate) and ten stages, and a third, more elaborate one involving heat transfer with four components (water, acetic acid, butanol and butyl acetate) and fifteen stages.
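The isothermal flash mentioned in this record reduces, for fixed distribution coefficients, to solving the Rachford-Rice equation for the phase fraction; a bisection sketch (the feed composition and K-values below are hypothetical, not from the paper):

```python
def rachford_rice(z, K, tol=1e-12):
    """Solve sum_i z_i*(K_i - 1)/(1 + phi*(K_i - 1)) = 0 by bisection
    for phi, the fraction of the second (extract) phase.  The function
    is monotone decreasing in phi, so bisection on (0, 1) is safe
    whenever a two-phase solution exists."""
    def f(phi):
        return sum(zi * (Ki - 1) / (1 + phi * (Ki - 1))
                   for zi, Ki in zip(z, K))
    lo, hi = 1e-9, 1 - 1e-9
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical ternary feed and distribution coefficients:
phi = rachford_rice([0.5, 0.3, 0.2], [2.0, 0.5, 1.2])
print(round(phi, 4))
```

Each component split then follows from phi and its K-value; the full MESH solution couples many such flashes through the stage-to-stage balances.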
A propositional CONEstrip algorithm
E. Quaeghebeur (Erik); A. Laurent; O. Strauss; B. Bouchon-Meunier; R.R. Yager (Ronald)
2014-01-01
We present a variant of the CONEstrip algorithm for checking whether the origin lies in a finitely generated convex cone that can be open, closed, or neither. This variant is designed to deal efficiently with problems where the rays defining the cone are specified as linear combinations
Modular Regularization Algorithms
DEFF Research Database (Denmark)
Jacobsen, Michael
2004-01-01
The class of linear ill-posed problems is introduced along with a range of standard numerical tools and basic concepts from linear algebra, statistics and optimization. Known algorithms for solving linear inverse ill-posed problems are analyzed to determine how they can be decomposed into indepen...
Indian Academy of Sciences (India)
Shortest path problems: road networks on cities, where we want to navigate between cities. ... The rest of the talk: computing connectivities between all pairs of vertices; a good algorithm with respect to both space and time to compute the exact solution. ...
The Copenhagen Triage Algorithm
DEFF Research Database (Denmark)
Hasselbalch, Rasmus Bo; Plesner, Louis Lind; Pries-Heje, Mia
2016-01-01
is non-inferior to an existing triage model in a prospective randomized trial. METHODS: The Copenhagen Triage Algorithm (CTA) study is a prospective two-center, cluster-randomized, cross-over, non-inferiority trial comparing CTA to the Danish Emergency Process Triage (DEPT). We include patients ≥16 years...
de Casteljau's Algorithm Revisited
DEFF Research Database (Denmark)
Gravesen, Jens
1998-01-01
It is demonstrated how all the basic properties of Bezier curves can be derived swiftly and efficiently without any reference to the Bernstein polynomials and essentially with only geometric arguments. This is achieved by viewing one step in de Casteljau's algorithm as an operator (the de Casteljau...
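The algorithm the record refers to is short enough to state directly; a plain-Python sketch of one full de Casteljau evaluation (repeated linear interpolation of the control polygon):

```python
def de_casteljau(points, t):
    """One full de Casteljau evaluation: repeatedly replace the control
    polygon by pairwise linear interpolations at parameter t until a
    single point -- the curve point B(t) -- remains."""
    pts = [tuple(p) for p in points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Quadratic Bezier with control points (0,0), (1,2), (2,0):
print(de_casteljau([(0, 0), (1, 2), (2, 0)], 0.5))  # → (1.0, 1.0)
```

Viewing the inner list comprehension as a single operator on control polygons is exactly the geometric perspective the paper develops.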
Algorithms in ambient intelligence
Aarts, E.H.L.; Korst, J.H.M.; Verhaegh, W.F.J.; Weber, W.; Rabaey, J.M.; Aarts, E.
2005-01-01
We briefly review the concept of ambient intelligence and discuss its relation with the domain of intelligent algorithms. By means of four examples of ambient intelligent systems, we argue that new computing methods and quantification measures are needed to bridge the gap between the class of
General Algorithm (High level)
Indian Academy of Sciences (India)
General Algorithm (High level). Iteratively: use the tightness property to remove points of P1,...,Pi; use random sampling to get a random sample (of enough points) from the next largest cluster, Pi+1; use the random sampling procedure to approximate ci+1 using the ...
Comprehensive eye evaluation algorithm
Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.
2016-03-01
In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated in two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma, respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.
Mitsutake, Ayori; Mori, Yoshiharu; Okamoto, Yuko
2013-01-01
In biomolecular systems (especially all-atom models) with many degrees of freedom such as proteins and nucleic acids, there exist an astronomically large number of local-minimum-energy states. Conventional simulations in the canonical ensemble are of little use, because they tend to get trapped in states of these energy local minima. Enhanced conformational sampling techniques are thus in great demand. A simulation in generalized ensemble performs a random walk in potential energy space and can overcome this difficulty. From only one simulation run, one can obtain canonical-ensemble averages of physical quantities as functions of temperature by the single-histogram and/or multiple-histogram reweighting techniques. In this article we review uses of the generalized-ensemble algorithms in biomolecular systems. Three well-known methods, namely, multicanonical algorithm, simulated tempering, and replica-exchange method, are described first. Both Monte Carlo and molecular dynamics versions of the algorithms are given. We then present various extensions of these three generalized-ensemble algorithms. The effectiveness of the methods is tested with short peptide and protein systems.
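Of the three methods reviewed, the replica-exchange idea is the easiest to sketch; the toy below runs temperature replica exchange on a 1D double-well potential (the potential, move size, and swap schedule are illustrative assumptions, not taken from the review):

```python
import math, random

def replica_exchange(energy, temps, steps=2000, seed=1):
    """Toy temperature replica-exchange Monte Carlo.  Each replica does
    Metropolis moves at its own temperature; every 10 sweeps a random
    neighbouring pair attempts a swap, accepted with probability
    min(1, exp((1/T_i - 1/T_j) * (E_i - E_j)))."""
    rng = random.Random(seed)
    xs = [0.0] * len(temps)
    for step in range(steps):
        for i, T in enumerate(temps):                 # Metropolis sweep
            x_new = xs[i] + rng.uniform(-0.5, 0.5)
            dE = energy(x_new) - energy(xs[i])
            if dE <= 0 or rng.random() < math.exp(-dE / T):
                xs[i] = x_new
        if step % 10 == 0:                            # swap attempt
            i = rng.randrange(len(temps) - 1)
            d = (1 / temps[i] - 1 / temps[i + 1]) * \
                (energy(xs[i]) - energy(xs[i + 1]))
            if d >= 0 or rng.random() < math.exp(d):
                xs[i], xs[i + 1] = xs[i + 1], xs[i]
    return xs

double_well = lambda x: (x * x - 1.0) ** 2   # minima at x = -1 and x = +1
print(replica_exchange(double_well, [0.1, 0.5, 2.0]))
```

The high-temperature replica crosses the barrier at x = 0 freely, and swaps feed those crossings down to the low-temperature replica, which would otherwise stay trapped in one well.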
DEFF Research Database (Denmark)
This book constitutes the refereed proceedings of the 10th Scandinavian Workshop on Algorithm Theory, SWAT 2006, held in Riga, Latvia, in July 2006. The 36 revised full papers presented together with 3 invited papers were carefully reviewed and selected from 154 submissions. The papers address all...
Optimal Quadratic Programming Algorithms
Dostal, Zdenek
2009-01-01
Quadratic programming (QP) is one technique that allows for the optimization of a quadratic function in several variables in the presence of linear constraints. This title presents various algorithms for solving large QP problems. It is suitable as an introductory text on quadratic programming for graduate students and researchers
A quasi-Newton algorithm for large-scale nonlinear equations
Directory of Open Access Journals (Sweden)
Linghua Huang
2017-02-01
Full Text Available Abstract In this paper, the algorithm for large-scale nonlinear equations is designed by the following steps: (i) a conjugate gradient (CG) algorithm is designed as a sub-algorithm to obtain the initial points of the main algorithm, where the sub-algorithm's initial point does not have any restrictions; (ii) a quasi-Newton algorithm with the initial points given by the sub-algorithm is defined as the main algorithm, where a new nonmonotone line search technique is presented to get the step length αk. The given nonmonotone line search technique can avoid computing the Jacobian matrix. The global convergence and the (1+q)-order convergence rate of the main algorithm are established under suitable conditions. Numerical results show that the proposed method is competitive with a similar method for large-scale problems.
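The quasi-Newton idea with a line search for the step length αk can be sketched generically (this is a plain Broyden method with simple norm-decreasing backtracking on a small smooth system, not the paper's CG-initialized algorithm or its nonmonotone rule):

```python
import math

def solve_linear(A, b):
    """Tiny dense Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def broyden_solve(F, x0, tol=1e-10, max_iter=100):
    """Solve F(x) = 0 without forming the Jacobian: keep a secant
    approximation B (Broyden's rank-1 update) and pick the step length
    alpha_k by backtracking until ||F|| decreases."""
    n = len(x0)
    x, Fx = list(x0), F(x0)
    B = [[float(i == j) for j in range(n)] for i in range(n)]   # B0 = I
    for _ in range(max_iter):
        if max(abs(f) for f in Fx) < tol:
            break
        d = solve_linear(B, [-f for f in Fx])                   # B d = -F(x)
        alpha, accepted = 1.0, None
        while alpha > 1e-10:                                    # backtracking
            x_try = [xi + alpha * di for xi, di in zip(x, d)]
            F_try = F(x_try)
            if sum(f * f for f in F_try) < sum(f * f for f in Fx):
                accepted = (x_try, F_try)
                break
            alpha *= 0.5
        if accepted is None:
            break
        x_new, F_new = accepted
        s = [alpha * di for di in d]
        y = [a - b for a, b in zip(F_new, Fx)]
        Bs = [sum(B[i][j] * s[j] for j in range(n)) for i in range(n)]
        ss = sum(si * si for si in s)
        for i in range(n):                                      # rank-1 update
            for j in range(n):
                B[i][j] += (y[i] - Bs[i]) * s[j] / ss
        x, Fx = x_new, F_new
    return x

# Weakly nonlinear 2x2 system; the choice of F is purely illustrative.
F = lambda v: [v[0] + 0.1 * math.sin(v[1]) - 1.0,
               v[1] + 0.1 * math.cos(v[0]) - 2.0]
root = broyden_solve(F, [0.0, 0.0])
print([round(r, 4) for r in root])  # ≈ [0.9067, 1.9384]
```

A nonmonotone line search, as in the record, would relax the strict decrease test to a decrease against the worst of several recent residual norms, which helps on badly scaled problems.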
Stride length: measuring its instantaneous value
International Nuclear Information System (INIS)
Campiglio, G C; Mazzeo, J R
2007-01-01
Human gait has been studied from different viewpoints: kinematics, dynamics, sensibility and others. Many of its characteristics still remain open to research, both for normal gait and for pathological gait. Objective measures of some of its most significant spatial/temporal parameters are important in this context. Stride length, one of these parameters, is defined as the distance between two consecutive contacts of one foot with the ground. In this work we present a device designed to provide automatic measures of stride length. Its features make it particularly appropriate for the evaluation of pathological gait
Word length, set size, and lexical factors: Re-examining what causes the word length effect.
Guitard, Dominic; Gabel, Andrew J; Saint-Aubin, Jean; Surprenant, Aimée M; Neath, Ian
2018-04-19
The word length effect, better recall of lists of short (fewer syllables) than long (more syllables) words, has been termed a benchmark effect of working memory. Despite this, experiments on the word length effect can yield quite different results depending on set size and stimulus properties. Seven experiments are reported that address these 2 issues. Experiment 1 replicated the finding of a preserved word length effect under concurrent articulation for large stimulus sets, which contrasts with the abolition of the word length effect by concurrent articulation for small stimulus sets. Experiment 2, however, demonstrated that when the short and long words are equated on more dimensions, concurrent articulation abolishes the word length effect for large stimulus sets. Experiment 3 shows a standard word length effect when output time is equated, but Experiments 4-6 show no word length effect when short and long words are equated on increasingly more dimensions that previous demonstrations have overlooked. Finally, Experiment 7 compared recall of small- and large-neighborhood words that were equated on all the dimensions used in Experiment 6 (except for those directly related to neighborhood size) and a neighborhood size effect was still observed. We conclude that lexical factors, rather than word length per se, are better predictors of when the word length effect will occur. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Sighting optics including an optical element having a first focal length and a second focal length
Crandall, David Lynn [Idaho Falls, ID
2011-08-01
One embodiment of sighting optics according to the teachings provided herein may include a front sight and a rear sight positioned in spaced-apart relation. The rear sight includes an optical element having a first focal length and a second focal length. The first focal length is selected so that it is about equal to a distance separating the optical element and the front sight and the second focal length is selected so that it is about equal to a target distance. The optical element thus brings into simultaneous focus, for a user, images of the front sight and the target.
Cutting Whole Length or Partial Length of Internal Anal Sphincter in Management of Fissure in Ano
Directory of Open Access Journals (Sweden)
Furat Shani Aoda
2017-12-01
Full Text Available A chronic anal fissure is a common painful perianal condition. The main operative procedure to treat this painful condition is a lateral internal sphincterotomy (LIS). The aim of this study is to compare the outcome and complications of closed LIS up to the dentate line (whole length of internal sphincter) or up to the fissure apex (partial length of internal sphincter) in the treatment of anal fissure. It is a prospective comparative study including 100 patients with chronic fissure in ano. All patients were assigned to undergo closed LIS. The patients were randomly divided into two groups: 50 patients underwent LIS to the level of the dentate line (whole length) and the other 50 patients underwent LIS to the level of the fissure apex (partial length). Patients were followed up weekly in the first month, twice monthly in the second month, then monthly for the next two months and finally after one year. There was satisfactory relief of pain in all patients in both groups, and complete healing of the fissure occurred. Regarding postoperative incontinence, no major degree of incontinence occurred in either group, but a minor degree of incontinence persisted in 7 patients after whole length LIS after one year. In conclusion, both whole length and partial length LIS are associated with improvement of pain and a good chance of healing, but whole length LIS is associated with a greater chance of long-term flatus incontinence. Hence, we recommend partial length LIS as the treatment for chronic anal fissure.
Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.
2018-04-01
Filtering is a key step for most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most of them suffer from parameter setting or threshold adjustment, which is time-consuming and reduces the degree of automation of the algorithm. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization. The proposed algorithm is developed on the assumption that a point cloud can be seen as a mixture of Gaussian models. The separation of ground points and non-ground points from the point cloud can then be recast as the separation of a mixed Gaussian model. Expectation-maximization (EM) is applied to realize the separation: EM is used to calculate maximum likelihood estimates of the mixture parameters. Using the estimated parameters, the likelihood of each point belonging to ground or objects can be computed. After several iterations, each point can be labelled with the component of larger likelihood. Furthermore, intensity information is utilized to optimize the filtering results acquired using the EM method. The proposed algorithm was tested using two different datasets used in practice. Experimental results showed that the proposed method can filter non-ground points effectively. To quantitatively evaluate the proposed method, this paper adopted the dataset provided by the ISPRS for the test. The proposed algorithm obtained a 4.48% total error, which is much lower than most of the eight classical filtering algorithms reported by the ISPRS.
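The EM separation described here can be illustrated in one dimension, treating point heights as a two-component Gaussian mixture (the data and initialisation below are toy assumptions, not the paper's):

```python
import math

def em_two_gaussians(data, iters=50):
    """EM for a 1D mixture of two Gaussians -- the same idea the filtering
    record uses for separating ground from non-ground LiDAR heights.
    Returns (weights, means, variances) and a label per point."""
    mu = [min(data), max(data)]           # crude initialisation
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    def pdf(x, m, v):
        return math.exp(-(x - m) ** 2 / (2 * v)) / math.sqrt(2 * math.pi * v)
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [w[k] * pdf(x, mu[k], var[k]) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate weights, means, variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            var[k] = max(var[k], 1e-6)    # guard against collapse
    labels = [0 if r[0] > r[1] else 1 for r in resp]
    return (w, mu, var), labels

heights = [0.1, 0.2, 0.0, 0.15, 5.0, 5.2, 4.9, 5.1]   # "ground" vs "objects"
params, labels = em_two_gaussians(heights)
print(labels)  # → [0, 0, 0, 0, 1, 1, 1, 1]
```

The paper's method works on real point clouds and adds an intensity-based refinement, but the E-step/M-step alternation is the same.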
Benchmarking monthly homogenization algorithms
Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.
2011-08-01
The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data
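Of the performance metrics listed, the centered root-mean-square error is simple to state; a minimal sketch (assuming equal-length series, and removing each series' own mean so that a constant offset is not penalised):

```python
def centered_rmse(estimate, truth):
    """Centered RMSE: subtract each series' own mean first, so a constant
    offset between the series does not count as error -- only the
    mismatch in shape (variability) does."""
    n = len(truth)
    e_mean = sum(estimate) / n
    t_mean = sum(truth) / n
    return (sum(((e - e_mean) - (t - t_mean)) ** 2
                for e, t in zip(estimate, truth)) / n) ** 0.5

print(centered_rmse([1.0, 2.0, 3.0], [2.0, 3.0, 4.0]))  # → 0.0 (pure offset)
```

Trend error and contingency scores, the other metrics named in the record, probe complementary failure modes that a single RMSE number hides.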
Reactive power dispatch considering voltage stability with seeker optimization algorithm
Energy Technology Data Exchange (ETDEWEB)
Dai, Chaohua; Chen, Weirong; Zhang, Xuexia [The School of Electrical Engineering, Southwest Jiaotong University, Chengdu 610031 (China)]; Zhu, Yunfang [Department of Computer and Communication Engineering, Emei Campus, Southwest Jiaotong University, Emei 614202 (China)]
2009-10-15
Optimal reactive power dispatch (ORPD) has a growing impact on secure and economical operation of power systems. This issue is well known as a non-linear, multi-modal and multi-objective optimization problem where global optimization techniques are required in order to avoid local minima. In the last decades, computational intelligence-based techniques such as genetic algorithms (GAs), differential evolution (DE) algorithms and particle swarm optimization (PSO) algorithms, etc., have often been used for this aim. In this work, a seeker optimization algorithm (SOA) based method is proposed for ORPD considering static voltage stability and voltage deviation. The SOA is based on the concept of simulating the act of human searching, where the search direction is based on the empirical gradient obtained by evaluating the response to position changes, and the step length is based on uncertainty reasoning using a simple fuzzy rule. The algorithm's performance is studied with comparisons against two versions of GAs, three versions of DE algorithms and four versions of PSO algorithms on the IEEE 57- and 118-bus power systems. The simulation results show that the proposed approach performed better than the other listed algorithms and can be efficiently used for the ORPD problem. (author)
FPGA Hardware Acceleration of a Phylogenetic Tree Reconstruction with Maximum Parsimony Algorithm
BLOCK, Henry; MARUYAMA, Tsutomu
2017-01-01
In this paper, we present an FPGA hardware implementation for a phylogenetic tree reconstruction with a maximum parsimony algorithm. We base our approach on a particular stochastic local search algorithm that uses the Progressive Neighborhood and the Indirect Calculation of Tree Lengths method. This method is widely used for the acceleration of the phylogenetic tree reconstruction algorithm in software. In our implementation, we define a tree structure and accelerate the search by parallel an...
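The tree length that such searches minimise is the parsimony score; per character it can be computed with Fitch's classical algorithm (a standard method shown here on a toy tree, not the paper's FPGA pipeline or its indirect-calculation variant):

```python
def fitch_score(tree, leaf_states):
    """Fitch parsimony for one character on a rooted binary tree given
    as nested 2-tuples of leaf names.  Post-order pass returning
    (candidate state set, minimum number of mutations)."""
    if isinstance(tree, str):                         # leaf
        return {leaf_states[tree]}, 0
    l_set, l_cost = fitch_score(tree[0], leaf_states)
    r_set, r_cost = fitch_score(tree[1], leaf_states)
    inter = l_set & r_set
    if inter:
        return inter, l_cost + r_cost
    return l_set | r_set, l_cost + r_cost + 1         # union costs 1 mutation

tree = (('human', 'chimp'), ('mouse', 'rat'))
states = {'human': 'A', 'chimp': 'A', 'mouse': 'G', 'rat': 'G'}
print(fitch_score(tree, states)[1])  # → 1
```

A maximum parsimony search sums this score over all alignment columns and over candidate topologies, which is why incremental (indirect) length updates matter for speed.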
Python algorithms mastering basic algorithms in the Python language
Hetland, Magnus Lie
2014-01-01
Python Algorithms, Second Edition explains the Python approach to algorithm analysis and design. Written by Magnus Lie Hetland, author of Beginning Python, this book is sharply focused on classical algorithms, but it also gives a solid understanding of fundamental algorithmic problem-solving techniques. The book deals with some of the most important and challenging areas of programming and computer science in a highly readable manner. It covers both algorithmic theory and programming practice, demonstrating how theory is reflected in real Python programs. Well-known algorithms and data struc
Adaptive Step Size Gradient Ascent ICA Algorithm for Wireless MIMO Systems
Directory of Open Access Journals (Sweden)
Zahoor Uddin
2018-01-01
Full Text Available Independent component analysis (ICA) is a technique of blind source separation (BSS) used for separation of the mixed received signals. ICA algorithms are classified into adaptive and batch algorithms. Adaptive algorithms perform well in time-varying scenarios with high computational complexity, while batch algorithms have better separation performance in quasistatic channels with low computational complexity. Amongst batch algorithms, the gradient-based ICA algorithms perform well, but step size selection is critical in these algorithms. In this paper, an adaptive step size gradient ascent ICA (ASS-GAICA) algorithm is presented. The proposed algorithm is free from selection of the step size parameter, with improved convergence and separation performance. Different performance evaluation criteria are used to verify the effectiveness of the proposed algorithm. Performance of the proposed algorithm is compared with the FastICA and optimum block adaptive ICA (OBAICA) algorithms for quasistatic and time-varying wireless channels. Simulation is performed over quadrature amplitude modulation (QAM) and binary phase shift keying (BPSK) signals. Results show that the proposed algorithm outperforms the FastICA and OBAICA algorithms for a wide range of signal-to-noise ratios (SNRs) and input data block lengths.
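The step size issue the paper addresses can be illustrated with a toy adaptive rule for gradient ascent (grow the step while the objective improves, shrink it on an overshoot); the actual ASS-GAICA update is defined in the paper:

```python
def adaptive_gradient_ascent(f, grad, x0, step=0.1, iters=200):
    """Toy adaptive-step gradient ascent: grow the step size while the
    objective keeps improving, halve it when a step overshoots."""
    x, fx = x0, f(x0)
    for _ in range(iters):
        x_new = x + step * grad(x)
        f_new = f(x_new)
        if f_new > fx:
            x, fx = x_new, f_new
            step *= 1.1        # success: accelerate
        else:
            step *= 0.5        # overshoot: back off
    return x

# Maximise f(x) = -(x - 3)^2, whose maximum is at x = 3:
x_star = adaptive_gradient_ascent(lambda x: -(x - 3) ** 2,
                                  lambda x: -2 * (x - 3), 0.0)
print(round(x_star, 4))  # → 3.0
```

In gradient ICA the objective is a contrast function (e.g. negentropy or kurtosis) of the demixed signal, but the same tension holds: a fixed small step converges slowly, a fixed large step diverges, and adaptation avoids both.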
Effect of Calcium and Potassium on Antioxidant System of <em>Vicia faba</em> L. Under Cadmium Stress
Directory of Open Access Journals (Sweden)
Hayssam M. Ali
2012-05-01
Full Text Available Cadmium (Cd) in soil poses a major threat to plant growth and productivity. In the present experiment, we studied the effect of calcium (Ca²⁺) and/or potassium (K⁺) on the antioxidant system, the accumulation of proline (Pro) and malondialdehyde (MDA), and the content of photosynthetic pigments, cadmium (Cd) and nutrients, <em>i.e.</em>, Ca²⁺ and K⁺, in leaves of <em>Vicia faba</em> L. (cv. TARA) under Cd stress. Plants grown in the presence of Cd exhibited reduced growth traits [root length (RL) plant⁻¹, shoot length (SL) plant⁻¹, root fresh weight (RFW) plant⁻¹, shoot fresh weight (SFW) plant⁻¹, root dry weight (RDW) plant⁻¹ and shoot dry weight (SDW) plant⁻¹] and reduced concentrations of Ca²⁺, K⁺ and chlorophyll (Chl <em>a</em> and Chl <em>b</em>), whereas the contents of MDA, Cd and Pro increased. The antioxidant enzymes [peroxidase (POD) and superoxide dismutase (SOD)] increased slightly under Cd stress as compared to the control. However, a significant improvement was observed in all growth traits, in the contents of Ca²⁺, K⁺, Chl <em>a</em>, Chl <em>b</em> and Pro, and in the activity of the antioxidant enzymes catalase (CAT), POD and SOD in plants supplied with Ca²⁺ and/or K⁺. The maximum alleviating effect was recorded in plants grown in medium containing Ca²⁺ and K⁺ together. This study indicates that the application of Ca²⁺ and/or K⁺ had a significant and synergistic effect on plant growth. Application of Ca²⁺ and/or K⁺ was also highly effective against Cd toxicity, improving the activity of antioxidant enzymes and solutes, which led to enhanced growth of the faba bean plants.
DEFF Research Database (Denmark)
Langhelle, Audun; Lossius, Hans Morten; Silfvast, Tom
2004-01-01
Emergency medicine service (EMS) systems in the five Nordic countries have more similarities than differences. One similarity is the involvement of anaesthesiologists as pre-hospital physicians and their strong participation for all critically ill and injured patients in-hospital. Discrepancies do exist, however, especially within the ground and air ambulance service, and the EMS systems face several challenges. Main problems and challenges emphasized by the authors are: (1) Denmark: the dispatch centres are presently not under medical control and are without a national criteria-based system; access to on-line medical advice of a physician is not available; (2) Finland: the autonomy of the individual municipalities and their responsibility to cover for primary and specialised health care, as well as the EMS, and the lack of supporting or demanding legislation regarding the EMS; (3) Iceland …
A New Augmentation Based Algorithm for Extracting Maximal Chordal Subgraphs.
Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh
2015-02-01
A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In this paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. We experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
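The paper's augmentation algorithm is not spelled out in the abstract, so the following is only a naive sketch of the same idea: start from a spanning chordal subgraph (here a tree) and greedily keep each extra edge that preserves chordality, re-testing with the classic maximum cardinality search (MCS) test of Tarjan and Yannakakis. This brute-force re-test is far less efficient than the paper's method; it is meant to make the definitions concrete.

```python
def is_chordal(vertices, edges):
    """Maximum cardinality search (MCS) chordality test (Tarjan & Yannakakis)."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    weight = {v: 0 for v in vertices}
    order, seen = [], set()
    for _ in vertices:
        v = max((u for u in vertices if u not in seen), key=lambda u: weight[u])
        seen.add(v)
        order.append(v)
        for w in adj[v] - seen:
            weight[w] += 1
    pos = {v: i for i, v in enumerate(order)}
    # The reverse of an MCS order is a perfect elimination ordering iff the
    # graph is chordal: every earlier-ordered neighbour of v must be adjacent
    # to v's latest earlier-ordered neighbour.
    for v in order:
        earlier = [u for u in adj[v] if pos[u] < pos[v]]
        if earlier:
            m = max(earlier, key=lambda u: pos[u])
            if any(u != m and u not in adj[m] for u in earlier):
                return False
    return True

def greedy_maximal_chordal(vertices, tree_edges, other_edges):
    """Grow a spanning chordal subgraph (a tree) by augmentation, keeping an
    edge only if the subgraph stays chordal (brute-force re-test)."""
    kept = list(tree_edges)
    for e in other_edges:
        if is_chordal(vertices, kept + [e]):
            kept.append(e)
    return kept

# Example: a 4-cycle is chordal only once a chord is added, so augmenting a
# spanning path of the 4-cycle cannot accept the closing edge.
square = [(0, 1), (1, 2), (2, 3), (3, 0)]
kept = greedy_maximal_chordal([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3)], [(3, 0)])
```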
Directory of Open Access Journals (Sweden)
Dazhi Jiang
2015-01-01
Full Text Available At present there is a wide range of evolutionary algorithms available to researchers and practitioners. Despite the great diversity of these algorithms, virtually all of them share one feature: they have been manually designed. A fundamental question is "are there any algorithms that can design evolutionary algorithms automatically?" A more complete formulation of the question is "can a computer construct an algorithm which will generate algorithms according to the requirements of a problem?" In this paper, a novel evolutionary algorithm based on the automatic design of genetic operators is presented to address these questions. The resulting algorithm not only explores solutions in the problem space, as most traditional evolutionary algorithms do, but also automatically generates genetic operators in the operator space. In order to verify the performance of the proposed algorithm, comprehensive experiments on 23 well-known benchmark optimization problems are conducted. The results show that the proposed algorithm can outperform the standard differential evolution algorithm in terms of convergence speed and solution accuracy, which shows that an algorithm designed automatically by a computer can compete with algorithms designed by human beings.
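The baseline the paper compares against, standard differential evolution, is compact enough to sketch. Below is a minimal DE/rand/1/bin loop for box-constrained minimization; parameter values (F, CR, population size) are conventional defaults, not the paper's settings.

```python
import numpy as np

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=0):
    """Minimal DE/rand/1/bin for box-constrained minimization (illustrative)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(generations):
        for i in range(pop_size):
            # mutation: base vector plus scaled difference of two others
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)
            # binomial crossover, guaranteeing at least one mutant gene
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:                 # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = int(np.argmin(fit))
    return pop[best], fit[best]

# Sphere benchmark in 5 dimensions; the optimum is 0 at the origin.
sphere = lambda x: float(np.sum(x * x))
x_best, f_best = differential_evolution(
    sphere, (np.full(5, -5.0), np.full(5, 5.0)))
```

Mutation and crossover here are the fixed, manually designed operators; the paper's point is to make this operator choice itself subject to search.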
Neutron scattering lengths of 3He
International Nuclear Information System (INIS)
Alfimenkov, V.P.; Akopian, G.G.; Wierzbicki, J.; Govorov, A.M.; Pikelner, L.B.; Sharapov, E.I.
1976-01-01
The total neutron scattering cross-section of ³He has been measured in the neutron energy range from 20 meV to 2 eV. Together with the known value of the coherent scattering amplitude, it leads to the two sets of n-³He scattering lengths.
Phonological length, phonetic duration and aphasia
Gilbers, D.G.; Bastiaanse, Y.R.M.; van der Linde, K.J.
1997-01-01
This study discusses an error type that is expected to occur in aphasics suffering from a phonological disorder, i.e. Wernicke's and conduction aphasics, but not in aphasics suffering from a phonetic disorder, i.e. Broca's aphasics. The critical notion is 'phonological length'. It will be argued …
Information-theoretic lengths of Jacobi polynomials
Energy Technology Data Exchange (ETDEWEB)
Guerrero, A; Dehesa, J S [Departamento de Fisica Atomica, Molecular y Nuclear, Universidad de Granada, Granada (Spain); Sanchez-Moreno, P, E-mail: agmartinez@ugr.e, E-mail: pablos@ugr.e, E-mail: dehesa@ugr.e [Instituto ' Carlos I' de Fisica Teorica y Computacional, Universidad de Granada, Granada (Spain)
2010-07-30
The information-theoretic lengths of the Jacobi polynomials P_n^(α,β)(x), which are information-theoretic measures (Renyi, Shannon and Fisher) of their associated Rakhmanov probability density, are investigated. They quantify the spreading of the polynomials along the orthogonality interval [-1, 1] in a complementary but different way from the root-mean-square or standard deviation because, contrary to this measure, they do not refer to any specific point of the interval. Explicit expressions for the Fisher length are given. The Renyi lengths are found by the use of the combinatorial multivariable Bell polynomials in terms of the polynomial degree n and the parameters (α, β). The Shannon length, which cannot be exactly calculated because of its logarithmic functional form, is bounded from below by using sharp upper bounds to general densities on [-1, 1] given in terms of various expectation values; moreover, its asymptotics is also pointed out. Finally, several computational issues relative to these three quantities are carefully analyzed.
Context quantization by minimum adaptive code length
DEFF Research Database (Denmark)
Forchhammer, Søren; Wu, Xiaolin
2007-01-01
Context quantization is a technique to deal with the issue of context dilution in high-order conditional entropy coding. We investigate the problem of context quantizer design under the criterion of minimum adaptive code length. A property of such context quantizers is derived for binary symbols....
Asymptotic Translation Length in the Curve Complex
Valdivia, Aaron D.
2013-01-01
We show that when the genus and punctures of a surface are directly proportional by some rational number the minimal asymptotic translation length in the curve complex has behavior inverse to the square of the Euler characteristic. We also show that when the genus is fixed and the number of punctures varies the behavior is inverse to the Euler characteristic.
Minimum Description Length Shape and Appearance Models
DEFF Research Database (Denmark)
Thodberg, Hans Henrik
2003-01-01
The Minimum Description Length (MDL) approach to shape modelling is reviewed. It solves the point correspondence problem of selecting points on shapes defined as curves so that the points correspond across a data set. An efficient numerical implementation is presented and made available as open s...
Hydrodynamic slip length as a surface property
Ramos-Alvarado, Bladimir; Kumar, Satish; Peterson, G. P.
2016-02-01
Equilibrium and nonequilibrium molecular dynamics simulations were conducted in order to evaluate the hypothesis that the hydrodynamic slip length is a surface property. The system under investigation was water confined between two graphite layers to form nanochannels of different sizes (3-8 nm). The water-carbon interaction potential was calibrated by matching wettability experiments of graphitic-carbon surfaces free of airborne hydrocarbon contamination. Three equilibrium theories were used to calculate the hydrodynamic slip length. It was found that one of the recently reported equilibrium theories for the calculation of the slip length featured confinement effects, while the others resulted in calculations significantly hindered by the large margin of error observed between independent simulations. The hydrodynamic slip length was found to be channel-size independent using equilibrium calculations, i.e., suggesting a consistency with the definition of a surface property, for 5-nm channels and larger. The analysis of the individual trajectories of liquid particles revealed that the reason for observing confinement effects in 3-nm nanochannels is the high mobility of the bulk particles. Nonequilibrium calculations were not consistently affected by size but by noisiness in the smallest systems.
2010-04-01
23 CFR § 658.13 (Length): Highways; Federal Highway Administration, Department of Transportation; Engineering and Traffic Operations; Truck Size and Weight. … Network or in transit between these highways and terminals or service locations pursuant to § 658.19. …
Link lengths and their growth powers
International Nuclear Information System (INIS)
Huh, Youngsik; No, Sungjong; Oh, Seungsang; Rawdon, Eric J
2015-01-01
For a certain infinite family F of knots or links, we study the growth power ratios of their stick number, lattice stick number, minimum lattice length and minimum ropelength compared with their minimum crossing number c(K) for every K∈F. It is known that the stick number and lattice stick number grow between the (1/2) and linear power of the crossing number, and minimum lattice length and minimum ropelength grow with at least the (3/4) power of crossing number (which is called the four-thirds power law). Furthermore, the minimum lattice length and minimum ropelength grow at most as O(c(K)[ln(c(K))]⁵), but it is unknown whether any family exhibits superlinear growth. For any real number r between (1/2) and 1, we give an infinite family of non-splittable prime links in which the stick number and lattice stick number grow exactly as the rth power of crossing number. Furthermore, for any real number r between (3/4) and 1, we give another infinite family of non-splittable prime links in which the minimum lattice length and minimum ropelength grow exactly as the rth power of crossing number. (paper)
Exciton diffusion length in narrow bandgap polymers
Mikhnenko, O.V.; Azimi, H.; Morana, M.; Blom, P.W.M.; Loi, M.A.
2012-01-01
We developed a new method to accurately extract the singlet exciton diffusion length in organic semiconductors by blending them with a low concentration of methanofullerene [6,6]-phenyl-C61-butyric acid methyl ester (PCBM). The dependence of photoluminescence (PL) decay time on the fullerene …
Scale Length of the Galactic Thin Disk
Indian Academy of Sciences (India)
The thin disk density scale length, h_R, is rather short (2.7 ± 0.1 kpc). … The 2MASS near-infrared data provide, for the first time, deep star counts on a … peaks allows the spatial extinction law in the model to be adjusted. …
Reactive Collision Avoidance Algorithm
Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred
2010-01-01
The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation for which passive algorithms cannot. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on …
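The RCA formulation itself is not given in the abstract, but the "parameterize bang-off-bang, then search over the parameter" idea can be shown on a toy 1-D rest-to-rest maneuver. Everything below (the scenario, the function name, the grid search) is our illustration, not the flight algorithm.

```python
import numpy as np

def min_fuel_bang_off_bang(d, a_max, T, n=20000):
    """Toy 1-D bang-off-bang parameter search (illustrative only).

    Accelerate at +a_max for t_b, coast for t_c = T - 2*t_b, then brake at
    -a_max for t_b, ending at rest. Distance covered is
        a_max*t_b**2 + a_max*t_b*t_c = a_max*t_b*(T - t_b),
    which increases with t_b for t_b < T/2, so the smallest feasible burn
    time (scanned in ascending order) is also the fuel-optimal one.
    """
    for t_b in np.linspace(1e-6, T / 2, n):
        t_c = T - 2 * t_b
        dist = a_max * t_b * (T - t_b)
        if dist >= d:
            return t_b, t_c, dist
    return None  # maneuver infeasible within the deadline T

t_b, t_c, dist = min_fuel_bang_off_bang(d=100.0, a_max=1.0, T=30.0)
```

In the real algorithm this one-dimensional scan becomes an offline-tabulated search over the full collision geometry, which is what makes real-time execution possible.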
Directory of Open Access Journals (Sweden)
Hélio A. Oliveira
1994-06-01
Full Text Available The authors present a retrospective study of poliomyelitis in Sergipe, Brazil. They divide the study into three periods according to the notification rate of positive cases, and emphasize the period from 1979 to 1989 through the study of 159 consecutive cases. The following information was collected for each case: sex, age, place of origin, prior vaccination status and clinical course. Annual incidence, the relation between incidence and vaccination coverage, and geographic distribution are also evaluated. The authors comment on the epidemic outbreaks that occurred in 1984 and 1986, on changes in the circulation of wild poliovirus (from P1 to P3), and on problems related to the failure to acquire immunity in children with a complete vaccination schedule. They consider poliomyelitis controlled in the State, but emphasize the need to maintain effective epidemiological surveillance of all cases of acute flaccid paralysis in children under 14 years of age.
EM type radioactive standards. Radioaktivni etalony EM
Energy Technology Data Exchange (ETDEWEB)
1981-01-01
The standard contains technical specifications and conditions of production, testing, packing, transport and storage of EM type planar calibration standards containing the radionuclides ¹⁴C, ⁶⁰Co, ⁹⁰Sr, ¹³⁷Cs, ¹⁴⁷Pm, ²⁰⁴Tl, ²³⁹Pu, ²⁴¹Am and natural U. The terminology is explained, the related Czechoslovak standards and legal prescriptions given, and amendments to these prescriptions presented.
Discrete algorithmic mathematics
Maurer, Stephen B
2005-01-01
The exposition is self-contained, complemented by diverse exercises and also accompanied by an introduction to mathematical reasoning … this book is an excellent textbook for a one-semester undergraduate course and it includes a lot of additional material to choose from. (EMS, March 2006) In a textbook, it is necessary to select carefully the statements and difficulty of the problems … in this textbook, this is fully achieved … This review considers this book an excellent one. (The Mathematical Gazette, March 2006)
Chu, Xiaowen; Li, Bo; Chlamtac, Imrich
2002-07-01
Sparse wavelength conversion and appropriate routing and wavelength assignment (RWA) algorithms are the two key factors in improving the blocking performance in wavelength-routed all-optical networks. It has been shown that the optimal placement of a limited number of wavelength converters in an arbitrary mesh network is an NP-complete problem. Various heuristic algorithms have been proposed in the literature, most of which assume that a static routing and random wavelength assignment RWA algorithm is employed. However, existing work shows that fixed-alternate routing and dynamic routing RWA algorithms can achieve much better blocking performance. Our study in this paper further demonstrates that the wavelength converter placement and RWA algorithms are closely related, in the sense that a well designed wavelength converter placement mechanism for a particular RWA algorithm might not work well with a different RWA algorithm. Therefore, wavelength converter placement and RWA have to be considered jointly. The objective of this paper is to investigate the wavelength converter placement problem under the fixed-alternate routing algorithm and the least-loaded routing algorithm. Under the fixed-alternate routing algorithm, we propose a heuristic algorithm called Minimum Blocking Probability First (MBPF) for wavelength converter placement. Under the least-loaded routing algorithm, we propose a heuristic converter placement algorithm called Weighted Maximum Segment Length (WMSL). The objective of the converter placement algorithm is to minimize the overall blocking probability. Extensive simulation studies have been carried out over three typical mesh networks, including the 14-node NSFNET, 19-node EON and 38-node CTNET. We observe that the proposed algorithms not only outperform existing wavelength converter placement algorithms by a large margin, but also achieve almost the same performance compared with full wavelength …
Budiman, M. A.; Rachmawati, D.; Parlindungan, M. R.
2018-03-01
MDTM is a classical symmetric cryptographic algorithm. As with other classical algorithms, the MDTM Cipher algorithm is easy to implement but less secure than modern symmetric algorithms. In order to make it more secure, the stream cipher RC4A is added, so that the cryptosystem becomes a super encryption. In this process, plaintexts derived from PDFs are first encrypted with the MDTM Cipher algorithm and then encrypted once more with the RC4A algorithm. The test results show that the complexity is Θ(n²) and that the running time is linearly proportional to the length of the plaintext and the keys entered.
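The MDTM stage is not specified in the abstract, so only the RC4A stage is sketched here. RC4A (Paul and Preneel's two-state variant of RC4) keys two independent S-boxes and alternates output between them; the key values and message below are ours.

```python
def rc4_ksa(key):
    """Standard RC4 key-scheduling: permute S under the key bytes."""
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    return S

def rc4a_keystream(key1, key2, n):
    """RC4A PRGA: two RC4 states, each selecting output from the other."""
    S1, S2 = rc4_ksa(key1), rc4_ksa(key2)
    i = j1 = j2 = 0
    out = []
    while len(out) < n:
        i = (i + 1) % 256
        j1 = (j1 + S1[i]) % 256
        S1[i], S1[j1] = S1[j1], S1[i]
        out.append(S2[(S1[i] + S1[j1]) % 256])
        j2 = (j2 + S2[i]) % 256
        S2[i], S2[j2] = S2[j2], S2[i]
        out.append(S1[(S2[i] + S2[j2]) % 256])
    return out[:n]

def xor_cipher(data, key1, key2):
    """Stream encryption/decryption: XOR with the RC4A keystream."""
    ks = rc4a_keystream(key1, key2, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

pt = b"attack at dawn"
ct = xor_cipher(pt, b"key-one", b"key-two")
```

Because the stream stage is a plain XOR, applying `xor_cipher` with the same keys decrypts; in the paper's super encryption this output would additionally pass through the MDTM transposition.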
Optimized Min-Sum Decoding Algorithm for Low Density Parity Check Codes
Mohammad Rakibul Islam; Dewan Siam Shafiullah; Muhammad Mostafa Amir Faisal; Imran Rahman
2011-01-01
Low Density Parity Check (LDPC) codes approach Shannon-limit performance for the binary field and long code lengths. However, the performance of binary LDPC codes is degraded when the code word length is small. An optimized min-sum algorithm for LDPC codes is proposed in this paper. In this algorithm, unlike other decoding methods, an optimization factor has been introduced in both the check node and the bit node of the min-sum algorithm. The optimization factor is obtained before the decoding program, and the sam…
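The paper's optimization factor is not given in the abstract; the sketch below is the standard normalized min-sum decoder it builds on, with a conventional normalization factor of 0.75 (an assumption, not the paper's value), demonstrated on the (7,4) Hamming parity-check matrix correcting a single flipped bit.

```python
import numpy as np

def min_sum_decode(H, llr, alpha=0.75, max_iter=20):
    """Normalized min-sum decoding on a binary parity-check matrix.

    H     : m x n binary parity-check matrix
    llr   : channel log-likelihood ratios (positive favours bit 0)
    alpha : normalization factor scaling check-node messages
    """
    m, n = H.shape
    M = np.zeros((m, n))                       # check-to-variable messages
    hard = (llr < 0).astype(int)
    for _ in range(max_iter):
        # variable-to-check: total belief minus the incoming message
        V = (llr + M.sum(axis=0)) - M
        # check-to-variable: product of extrinsic signs times the
        # normalized minimum extrinsic magnitude
        for c in range(m):
            idx = np.flatnonzero(H[c])
            msgs = V[c, idx]
            signs = np.sign(msgs)
            signs[signs == 0] = 1.0
            mags = np.abs(msgs)
            for k, v in enumerate(idx):
                others = np.delete(np.arange(len(idx)), k)
                M[c, v] = alpha * np.prod(signs[others]) * np.min(mags[others])
        hard = ((llr + M.sum(axis=0)) < 0).astype(int)
        if not np.any((H @ hard) % 2):         # all parity checks satisfied
            break
    return hard

# (7,4) Hamming code; all-zero codeword sent, first bit received in error.
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
llr = np.array([-1.0, 2.0, 2.0, 2.0, 2.0, 2.0, 2.0])
decoded = min_sum_decode(H, llr)
```

Here the two checks touching bit 0 each push a strong positive message onto it, outweighing the negative channel LLR, so the error is corrected in the first iteration.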
Partitional clustering algorithms
2015-01-01
This book summarizes the state-of-the-art in partitional clustering. Clustering, the unsupervised classification of patterns into groups, is one of the most important tasks in exploratory data analysis. Primary goals of clustering include gaining insight into, classifying, and compressing data. Clustering has a long and rich history that spans a variety of scientific disciplines including anthropology, biology, medicine, psychology, statistics, mathematics, engineering, and computer science. As a result, numerous clustering algorithms have been proposed since the early 1950s. Among these algorithms, partitional (nonhierarchical) ones have found many applications, especially in engineering and computer science. This book provides coverage of consensus clustering, constrained clustering, large scale and/or high dimensional clustering, cluster validity, cluster visualization, and applications of clustering. Examines clustering as it applies to large and/or high-dimensional data sets commonly encountered in reali...
Treatment Algorithm for Ameloblastoma
Directory of Open Access Journals (Sweden)
Madhumati Singh
2014-01-01
Full Text Available Ameloblastoma is the second most common benign odontogenic tumour (Shafer et al. 2006), which constitutes 1-3% of all cysts and tumours of the jaw, with locally aggressive behaviour, a high recurrence rate, and a malignant potential (Chaine et al. 2009). Various treatment algorithms for ameloblastoma have been reported; however, a universally accepted approach remains unsettled and controversial (Chaine et al. 2009). The treatment algorithm to be chosen depends on size (Escande et al. 2009; Sampson and Pogrel 1999), anatomical location (Feinberg and Steinberg 1996), histologic variant (Philipsen and Reichart 1998), and anatomical involvement (Jackson et al. 1996). In this paper various such treatment modalities, which include enucleation and peripheral osteotomy, partial maxillectomy, segmental resection and reconstruction with fibula graft, and radical resection and reconstruction with rib graft, and their recurrence rates are reviewed through the study of five cases.
An Algorithmic Diversity Diet?
DEFF Research Database (Denmark)
Sørensen, Jannick Kirk; Schmidt, Jan-Hinrik
2016-01-01
With the growing influence of personalized algorithmic recommender systems on the exposure of media content to users, the relevance of discussing the diversity of recommendations increases, particularly as far as public service media (PSM) is concerned. An imagined implementation of a diversity diet system, however, triggers not only the classic discussion of the reach versus distinctiveness balance for PSM, but also shows that 'diversity' is understood very differently in algorithmic recommender system communities than it is editorially and politically in the context of PSM. The design of a diversity diet system generates questions not just about editorial power, personal freedom and techno-paternalism, but also about the embedded politics of recommender systems as well as the human skills affiliated with PSM editorial work and the nature of PSM content.
Aydemir, Bahar
2017-01-01
The Trigger and Data Acquisition (TDAQ) system of the ATLAS detector at the Large Hadron Collider (LHC) at CERN is composed of a large number of distributed hardware and software components. The TDAQ system consists of about 3000 computers and more than 25000 applications which, in a coordinated manner, provide the data-taking functionality of the overall system. A number of online services are required to configure, monitor and control ATLAS data taking. In particular, the configuration service is used to provide the configuration of the above components. The configuration of the ATLAS data acquisition system is stored in an XML-based object database named OKS. A Data Access Library (DAL) allows its information to be accessed by C++, Java and Python clients in a distributed environment. Some information has a quite complicated structure, so its extraction requires writing special algorithms. The algorithms are available in C++ and have been partially reimplemented in Java. The goal of the project …
Kramer, Oliver
2017-01-01
This book introduces readers to genetic algorithms (GAs) with an emphasis on making the concepts, algorithms, and applications discussed as easy to understand as possible. Further, it avoids a great deal of formalisms and thus opens the subject to a broader audience in comparison to manuscripts overloaded by notations and equations. The book is divided into three parts, the first of which provides an introduction to GAs, starting with basic concepts like evolutionary operators and continuing with an overview of strategies for tuning and controlling parameters. In turn, the second part focuses on solution space variants like multimodal, constrained, and multi-objective solution spaces. Lastly, the third part briefly introduces theoretical tools for GAs, the intersections and hybridizations with machine learning, and highlights selected promising applications.
Clustering performance comparison using K-means and expectation maximization algorithms.
Jung, Yong Gyu; Kang, Min Soo; Heo, Jun
2014-11-14
Clustering is an important means of data mining based on separating data categories by similar features. Unlike classification algorithms, clustering belongs to the unsupervised type of algorithms. Two representatives of the clustering algorithms are the K-means and the expectation maximization (EM) algorithms. Linear regression analysis was extended to the category-type dependent variable, while logistic regression was achieved using a linear combination of independent variables. To predict the possibility of occurrence of an event, a statistical approach is used. However, the classification of all data by means of logistic regression analysis cannot guarantee the accuracy of the results. In this paper, logistic regression analysis is applied to EM clusters and to the K-means clustering method for quality assessment of red wine, and a method is proposed for ensuring the accuracy of the classification results.
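The two clustering algorithms named above are compact enough to sketch side by side: hard assignment for K-means versus soft responsibilities for EM on a Gaussian mixture. The 1-D data, initialization scheme, and iteration counts are our illustration, not the paper's wine-quality setup.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two well-separated 1-D clusters of 200 points each
x = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(10.0, 1.0, 200)])

def kmeans_1d(x, k=2, iters=50):
    centers = np.array([x.min(), x.max()], dtype=float)  # crude init
    for _ in range(iters):
        # hard assignment: each point goes to its nearest center
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        centers = np.array([x[labels == j].mean() for j in range(k)])
    return centers

def em_gmm_1d(x, k=2, iters=100):
    mu = np.array([x.min(), x.max()], dtype=float)
    sigma = np.ones(k)
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: soft responsibilities under the current Gaussians
        dens = (pi / (sigma * np.sqrt(2 * np.pi))
                * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: responsibility-weighted parameter updates
        Nk = resp.sum(axis=0)
        pi = Nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / Nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / Nk)
    return mu, sigma, pi

centers = kmeans_1d(x)
mu, sigma, pi = em_gmm_1d(x)
```

K-means is the limiting case of this EM loop as the responsibilities harden to 0/1, which is why the two recover essentially the same centers on well-separated data.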
Boosting foundations and algorithms
Schapire, Robert E
2012-01-01
Boosting is an approach to machine learning based on the idea of creating a highly accurate predictor by combining many weak and inaccurate "rules of thumb." A remarkably rich theory has evolved around boosting, with connections to a range of topics, including statistics, game theory, convex optimization, and information geometry. Boosting algorithms have also enjoyed practical success in such fields as biology, vision, and speech processing. At various times in its history, boosting has been perceived as mysterious, controversial, even paradoxical.
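The "weak rules of thumb" idea can be made concrete with the canonical AdaBoost loop over 1-D threshold stumps. This is a generic textbook sketch (data and round count are ours), not an algorithm from the book itself.

```python
import numpy as np

def adaboost_stumps(x, y, rounds=10):
    """AdaBoost with 1-D threshold stumps as the weak 'rules of thumb'."""
    n = len(x)
    w = np.full(n, 1.0 / n)                # example weights
    ensemble = []                          # (alpha, threshold, polarity)
    thresholds = np.sort(x) - 1e-9
    for _ in range(rounds):
        best = None
        for t in thresholds:               # exhaustive weak-learner search
            for pol in (1, -1):            # stump: pol * sign(x - t)
                pred = pol * np.where(x > t, 1, -1)
                err = w[pred != y].sum()
                if best is None or err < best[0]:
                    best = (err, t, pol, pred)
        err, t, pol, pred = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        ensemble.append((alpha, t, pol))
        w *= np.exp(-alpha * y * pred)     # up-weight the mistakes
        w /= w.sum()
    return ensemble

def predict(ensemble, x):
    score = sum(a * p * np.where(x > t, 1, -1) for a, t, p in ensemble)
    return np.sign(score)

x = np.array([-2.0, -1.0, 1.0, 2.0, 3.0])
y = np.array([-1, -1, 1, 1, 1])
model = adaboost_stumps(x, y, rounds=5)
```

The reweighting line is the whole mechanism: examples the current stump gets wrong gain weight, forcing the next stump to focus on them.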
Stochastic split determinant algorithms
International Nuclear Information System (INIS)
Horvatha, Ivan
2000-01-01
I propose a large class of stochastic Markov processes associated with probability distributions analogous to that of lattice gauge theory with dynamical fermions. The construction incorporates the idea of approximate spectral split of the determinant through local loop action, and the idea of treating the infrared part of the split through explicit diagonalizations. I suggest that exact algorithms of practical relevance might be based on Markov processes so constructed
Quantum gate decomposition algorithms.
Energy Technology Data Exchange (ETDEWEB)
Slepoy, Alexander
2006-07-01
Quantum computing algorithms can be conveniently expressed in the format of quantum logical circuits. Such circuits consist of sequential coupled operations, termed "quantum gates", acting on quantum analogs of bits called qubits. We review a recently proposed method [1] for constructing general "quantum gates" operating on n qubits, as composed of a sequence of generic elementary "gates".
KAM Tori Construction Algorithms
Wiesel, W.
In this paper we evaluate and compare two algorithms for the calculation of KAM tori in Hamiltonian systems. The direct fitting of a torus Fourier series to a numerically integrated trajectory is the first method, while an accelerated finite Fourier transform is the second method. The finite Fourier transform, with Hanning window functions, is by far superior in both computational loading and numerical accuracy. Some thoughts on applications of KAM tori are offered.
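The windowed-transform step the paper favours is easy to demonstrate in isolation: taper the sampled trajectory with a Hann window before the FFT to suppress leakage, then read the dominant frequency off the peak bin. The sampling rate, tone frequency, and record length below are our toy choices.

```python
import numpy as np

fs = 256.0                               # sampling rate (Hz)
n = 256                                  # number of samples (1 s record)
t = np.arange(n) / fs
signal = np.sin(2 * np.pi * 10.0 * t)    # a pure 10 Hz tone

window = np.hanning(n)                   # Hann window tapers the record's
spectrum = np.fft.rfft(signal * window)  # ends, reducing spectral leakage
peak_bin = int(np.argmax(np.abs(spectrum)))
freq = peak_bin * fs / n                 # rfft bin k corresponds to k*fs/n Hz
```

For a torus trajectory, each fundamental frequency and its combinations would appear as such peaks, and their amplitudes supply the Fourier coefficients of the torus fit.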
Irregular Applications: Architectures & Algorithms
Energy Technology Data Exchange (ETDEWEB)
Feo, John T.; Villa, Oreste; Tumeo, Antonino; Secchi, Simone
2012-02-06
Irregular applications are characterized by irregular data structures and irregular control and communication patterns. Novel irregular high-performance applications that deal with large data sets have recently appeared. Unfortunately, current high-performance systems and software infrastructures execute irregular algorithms poorly. Only coordinated efforts by end users, area specialists and computer scientists that consider both the architecture and the software stack may be able to provide solutions to the challenges of modern irregular applications.
Large scale tracking algorithms
Energy Technology Data Exchange (ETDEWEB)
Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry [all: Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)]
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
NEUTRON ALGORITHM VERIFICATION TESTING
International Nuclear Information System (INIS)
COWGILL, M.; MOSBY, W.; ARGONNE NATIONAL LABORATORY-WEST
2000-01-01
Active well coincidence counter assays have been performed on uranium metal highly enriched in ²³⁵U. The data obtained in the present program, together with highly enriched uranium (HEU) metal data obtained in other programs, have been analyzed using two approaches, the standard approach and an alternative approach developed at BNL. Analysis of the data with the standard approach revealed that the form of the relationship between the measured reals and the ²³⁵U mass varied, being sometimes linear and sometimes a second-order polynomial. In contrast, application of the BNL algorithm, which takes the totals into consideration, consistently yielded linear relationships between the totals-corrected reals and the ²³⁵U mass. The constants in these linear relationships varied with geometric configuration and level of enrichment. This indicates that, when the BNL algorithm is used, calibration curves can be established with fewer data points and with more certainty than if a standard algorithm is used. However, this potential advantage has only been established for assays of HEU metal. In addition, the method is sensitive to the stability of the natural background in the measurement facility.
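The linear calibration implied by the BNL algorithm can be sketched numerically. The counting rates and masses below are invented for illustration, and `assay_mass` is a hypothetical helper, not part of the report:

```python
import numpy as np

# Hypothetical calibration data: totals-corrected reals rate (counts/s)
# versus 235U mass (g). Values are illustrative only, not from the report.
mass_g = np.array([250.0, 500.0, 1000.0, 2000.0, 4000.0])
corrected_reals = np.array([12.1, 24.5, 48.7, 98.0, 196.3])

# With the BNL totals correction the response is expected to be linear,
# so a first-order fit suffices for the calibration curve.
slope, intercept = np.polyfit(mass_g, corrected_reals, 1)

def assay_mass(reals_rate):
    # Invert the calibration to assay an unknown item from its measured rate.
    return (reals_rate - intercept) / slope

print(round(assay_mass(48.7), 1))
```

Fewer calibration points are needed precisely because a straight line is determined by two well-measured points, whereas a second-order polynomial needs at least three.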
Convex hull ranking algorithm for multi-objective evolutionary algorithms
Davoodi Monfrared, M.; Mohades, A.; Rezaei, J.
2012-01-01
Due to many applications of multi-objective evolutionary algorithms in real world optimization problems, several studies have been done to improve these algorithms in recent years. Since most multi-objective evolutionary algorithms are based on the non-dominated principle, and their complexity
Directory of Open Access Journals (Sweden)
Yijun Fan
2012-07-01
Full Text Available A water-soluble polysaccharide (BEBP) was extracted from <em>Boletus edulis</em> Bull. using hot water extraction followed by ethanol precipitation. The polysaccharide BEBP was further purified by chromatography on a DEAE-cellulose column, giving three major polysaccharide fractions termed BEBP-1, BEBP-2 and BEBP-3. In the next experiment, the average molecular weight (Mw), IR spectra and monosaccharide composition of the three polysaccharide fractions were determined. The evaluation of antioxidant activities both <em>in vitro</em> and <em>in vivo</em> suggested that BEBP-3 had good potential antioxidant activity, and should be explored as a novel potential antioxidant.
Synchronization in a Random Length Ring Network for SDN-Controlled Optical TDM Switching
DEFF Research Database (Denmark)
Kamchevska, Valerija; Cristofori, Valentina; Da Ros, Francesco
2016-01-01
In this paper we focus on optical time division multiplexed (TDM) switching and its main distinguishing characteristics compared with other optical subwavelength switching technologies. We review and discuss in detail the synchronization requirements that allow for proper switching operation. In addition, we propose a novel synchronization algorithm that enables automatic synchronization of software defined networking controlled all-optical TDM switching nodes connected in a ring network. Besides providing synchronization, the algorithm can also facilitate dynamic slot size change and failure detection. We experimentally validate the algorithm behavior and achieve correct operation for three different ring lengths. Moreover, we experimentally demonstrate data plane connectivity in a ring network composed of three nodes and show successful wavelength division multiplexing space division...
Directory of Open Access Journals (Sweden)
A.M. Paci
2003-10-01
Full Text Available The aim of this contribution is to provide an update on the presence of the Valais shrew <em>Sorex antinorii</em>, Miller's water shrew <em>Neomys anomalus</em> and the blind mole <em>Talpa caeca</em> in Umbria, where these species have been confirmed for some years now. To this end, the collected specimens and the known literature were re-examined. Valais shrew: recently raised to species rank by Brünner et al. (2002), otherwise considered a subspecies of the common shrew (<em>S. araneus antinorii</em>). One of three incomplete skulls (mandibles and upper incisors missing) is preserved, for the moment prudently referred to <em>Sorex</em> cfr. <em>antinorii</em>, originating from the northern Umbria-Marche Apennines (surroundings of Scalocchio - PG, 590 m a.s.l.) and identified on the basis of the red pigmentation of the hypocones of M^{1} and M^{2}. Miller's water shrew: three skulls (Breda in Paci and Romano op. cit.) and one whole specimen (Paci, unpublished) were found within a few kilometres of one another between the municipalities of Assisi and Valfabbrica, in mid-hill environments bordering the M.te Subasio Regional Park (Perugia). In the province of Terni the species is reported by Isotti (op. cit.) for the surroundings of Orvieto. Blind mole: a female and a male are known, collected in the municipality of Pietralunga (PG), respectively in a <em>Pinus nigra</em> conifer plantation (630 m a.s.l.) and near a mixed hill wood dominated by <em>Quercus cerris</em> (640 m a.s.l.). Recently a third individual was found in the municipality of Sigillo (PG), inside the M.te Cucco Regional Park, at the edge of a beech wood at 1100 m a.s.l. In both cases the range of the species turned out to be parapatric with that of <em>Talpa europaea</em>. (Translated from Italian.)
Automatic Determination of Fiber-Length Distribution in Composite Material Using 3D CT Data
Directory of Open Access Journals (Sweden)
Günther Greiner
2010-01-01
Full Text Available Determining fiber length distribution in fiber reinforced polymer components is a crucial step in quality assurance, since fiber length has a strong influence on overall strength, stiffness, and stability of the material. The approximate fiber length distribution is usually determined early in the development process, as conventional methods require a destruction of the sample component. In this paper, a novel, automatic, and nondestructive approach for the determination of fiber length distribution in fiber reinforced polymers is presented. For this purpose, high-resolution computed tomography is used as imaging method together with subsequent image analysis for evaluation. The image analysis consists of an iterative process where single fibers are detected automatically in each iteration step after having applied image enhancement algorithms. Subsequently, a model-based approach is used together with a priori information in order to guide a fiber tracing and segmentation process. Thereby, the length of the segmented fibers can be calculated and a length distribution can be deduced. The performance and the robustness of the segmentation method is demonstrated by applying it to artificially generated test data and selected real components.
Automatic Determination of Fiber-Length Distribution in Composite Material Using 3D CT Data
Teßmann, Matthias; Mohr, Stephan; Gayetskyy, Svitlana; Haßler, Ulf; Hanke, Randolf; Greiner, Günther
2010-12-01
Determining fiber length distribution in fiber reinforced polymer components is a crucial step in quality assurance, since fiber length has a strong influence on overall strength, stiffness, and stability of the material. The approximate fiber length distribution is usually determined early in the development process, as conventional methods require a destruction of the sample component. In this paper, a novel, automatic, and nondestructive approach for the determination of fiber length distribution in fiber reinforced polymers is presented. For this purpose, high-resolution computed tomography is used as imaging method together with subsequent image analysis for evaluation. The image analysis consists of an iterative process where single fibers are detected automatically in each iteration step after having applied image enhancement algorithms. Subsequently, a model-based approach is used together with a priori information in order to guide a fiber tracing and segmentation process. Thereby, the length of the segmented fibers can be calculated and a length distribution can be deduced. The performance and the robustness of the segmentation method is demonstrated by applying it to artificially generated test data and selected real components.
EGNAS: an exhaustive DNA sequence design algorithm
Directory of Open Access Journals (Sweden)
Kick Alfred
2012-06-01
Full Text Available Abstract Background The molecular recognition based on the complementary base pairing of deoxyribonucleic acid (DNA) is the fundamental principle in the fields of genetics, DNA nanotechnology and DNA computing. We present an exhaustive DNA sequence design algorithm that can generate sets containing a maximum number of sequences with defined properties. EGNAS (Exhaustive Generation of Nucleic Acid Sequences) offers the possibility of controlling both interstrand and intrastrand properties. The guanine-cytosine content can be adjusted. Sequences can be forced to start and end with guanine or cytosine. This option reduces the risk of "fraying" of DNA strands. It is possible to limit cross hybridizations of a defined length, and to adjust the uniqueness of sequences. Self-complementarity and hairpin structures of certain length can be avoided. Sequences and subsequences can optionally be forbidden. Furthermore, sequences can be designed to have minimum interactions with predefined strands and neighboring sequences. Results The algorithm is realized in a C++ program. TAG sequences can be generated and combined with primers for single-base extension reactions, which were described for multiplexed genotyping of single nucleotide polymorphisms. Thereby, possible foldback through intrastrand interaction of TAG-primer pairs can be limited. The design of sequences for specific attachment of molecular constructs to DNA origami is presented. Conclusions We developed a new software tool called EGNAS for the design of unique nucleic acid sequences. The presented exhaustive algorithm allows larger sets of sequences to be generated than with previous software under equal constraints. EGNAS is freely available for noncommercial use at http://www.chm.tu-dresden.de/pc6/EGNAS.
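Two of the constraints EGNAS enforces, GC content and avoidance of self-complementarity, can be illustrated with short checks. The function names and the 4-base word length are assumptions for this sketch, not EGNAS internals:

```python
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def gc_content(seq):
    # Fraction of G and C bases; EGNAS lets the designer constrain this.
    return (seq.count("G") + seq.count("C")) / len(seq)

def has_self_complement(seq, word_len=4):
    # A strand can fold back on itself if some word of the sequence also
    # occurs in its own reverse complement.
    rc = seq.translate(COMPLEMENT)[::-1]
    words = {seq[i:i + word_len] for i in range(len(seq) - word_len + 1)}
    rc_words = {rc[i:i + word_len] for i in range(len(rc) - word_len + 1)}
    return bool(words & rc_words)

seq = "GACTTAGCCT"
print(gc_content(seq))            # 0.5
print(has_self_complement(seq))   # no 4-mer pairs with this sequence itself
```

An exhaustive generator would run checks like these over every candidate sequence and keep only those passing all constraints.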
Relationship between photoreceptor outer segment length and visual acuity in diabetic macular edema.
Forooghian, Farzin; Stetson, Paul F; Meyer, Scott A; Chew, Emily Y; Wong, Wai T; Cukras, Catherine; Meyerle, Catherine B; Ferris, Frederick L
2010-01-01
The purpose of this study was to quantify photoreceptor outer segment (PROS) length in 27 consecutive patients (30 eyes) with diabetic macular edema using spectral domain optical coherence tomography and to describe the correlation between PROS length and visual acuity. Three spectral domain optical coherence tomography scans were performed on all eyes during each session using Cirrus HD-OCT. A prototype algorithm was developed for quantitative assessment of PROS length. Retinal thicknesses and PROS lengths were calculated for 3 parameters: macular grid (6 × 6 mm), central subfield (1 mm), and center foveal point (0.33 mm). Intrasession repeatability was assessed using the coefficient of variation and the intraclass correlation coefficient. The association of retinal thickness and PROS length with visual acuity was assessed using linear regression and Pearson correlation analyses. The main outcome measures include intrasession repeatability of macular parameters and correlation of these parameters with visual acuity. Mean retinal thickness and PROS length were 298 μm to 381 μm and 30 μm to 32 μm, respectively, for the macular parameters assessed in this study. Coefficient of variation values were 0.75% to 4.13% for retinal thickness and 1.97% to 14.01% for PROS length. Intraclass correlation coefficient values were 0.96 to 0.99 and 0.73 to 0.98 for retinal thickness and PROS length, respectively. Slopes from linear regression analyses assessing the association of retinal thickness and visual acuity were not significantly different from 0 (P > 0.20), whereas the slopes of PROS length and visual acuity were significantly different from 0 (P < 0.0005). Correlation coefficients for macular thickness and visual acuity ranged from 0.13 to 0.22, whereas coefficients for PROS length and visual acuity ranged from -0.61 to -0.81. Photoreceptor outer segment length can be quantitatively assessed using Cirrus HD-OCT. Although the intrasession repeatability of PROS
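The regression-plus-Pearson analysis is simple to reproduce on synthetic numbers. The values below are invented; only the study's sign convention (worse acuity, i.e. higher logMAR, with shorter outer segments) is preserved:

```python
import numpy as np

# Synthetic illustration: PROS length (um) versus visual acuity (logMAR).
# Numbers are invented; thinner outer segments go with worse acuity.
pros_um = np.array([22.0, 25.0, 28.0, 30.0, 33.0, 36.0])
logmar = np.array([0.70, 0.58, 0.42, 0.35, 0.22, 0.10])

slope, intercept = np.polyfit(pros_um, logmar, 1)   # linear regression
r = np.corrcoef(pros_um, logmar)[0, 1]              # Pearson correlation

# A strongly negative r mirrors the reported -0.61 to -0.81 range.
print(round(slope, 3), round(r, 2))
```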
Research on Adaptive Optics Image Restoration Algorithm by Improved Expectation Maximization Method
Zhang, Lijuan; Li, Dongming; Su, Wei; Yang, Jinhua; Jiang, Yutong
2014-01-01
To improve the restoration of adaptive optics images, we put forward a deconvolution algorithm, improved by the EM algorithm, that jointly processes multiframe adaptive optics images based on expectation-maximization theory. Firstly, we build a mathematical model of the degraded multiframe adaptive optics images. A model of the point spread function varying with time is deduced from the phase error. The AO images are denoised using the image power spectral density and support constrain...
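The EM iteration for Poisson image deconvolution is the classical Richardson-Lucy update; a minimal multiframe variant, averaging the multiplicative updates from several frames, can be sketched as follows. This is a simplification under stated assumptions (circular convolution, symmetric PSFs), not the paper's actual algorithm:

```python
import numpy as np

def psf_convolve(img, psf):
    # Circular convolution via FFT keeps the example short.
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))

def multiframe_rl(frames, psfs, n_iter=20):
    # Multiframe Richardson-Lucy: each frame y with PSF h contributes the
    # EM multiplicative factor H^T(y / H x); the factors are averaged.
    # PSFs are assumed symmetric, so each equals its own adjoint kernel,
    # and normalized to unit sum.
    est = np.full_like(frames[0], frames[0].mean())
    for _ in range(n_iter):
        update = np.zeros_like(est)
        for y, h in zip(frames, psfs):
            blurred = psf_convolve(est, h)
            ratio = y / np.maximum(blurred, 1e-12)
            update += psf_convolve(ratio, h)
        est *= update / len(frames)
    return est
```

With noiseless data this iteration progressively sharpens a blurred point source back toward the true image.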
Fatigue Crack Length Sizing Using a Novel Flexible Eddy Current Sensor Array
Directory of Open Access Journals (Sweden)
Ruifang Xie
2015-12-01
Full Text Available A flexible, array-type, highly sensitive eddy current probe capable of quantitative inspection is a practical requirement in nondestructive testing and also a research hotspot. A novel flexible planar eddy current sensor array for the inspection of microcracks in critical parts of airplanes is developed in this paper. Both exciting and sensing coils are etched on polyimide films using a flexible printed circuit board technique, thus conforming the sensor to complex geometric structures. In order to serve the needs of condition-based maintenance (CBM), the proposed sensor array is comprised of 64 elements. Its spatial resolution is only 0.8 mm, and it is not only sensitive to shallow microcracks, but also capable of sizing the length of fatigue cracks. The details and advantages of our sensor design are introduced. The working principle and the crack responses are analyzed by finite element simulation, from which a crack length sizing algorithm is proposed. Experiments based on standard specimens are implemented to verify the validity of our simulation and the efficiency of the crack length sizing algorithm. Experimental results show that the sensor array is sensitive to microcracks, and is capable of crack length sizing with an accuracy within ±0.2 mm.
Distance and Cable Length Measurement System
Hernández, Sergio Elias; Acosta, Leopoldo; Toledo, Jonay
2009-01-01
A simple, economical and successful design for distance and cable length detection is presented. The measurement system is based on the continuous repetition of a pulse that endlessly travels along the distance to be detected. There is a pulse repeater at both ends of the distance or cable to be measured. The endless repetition of the pulse generates a frequency that varies almost inversely with the distance to be measured. The resolution and the distance or cable length range can be adjusted by varying the repetition time delay introduced at both ends and the measurement time. With this design a distance can be measured with centimeter resolution using an electronic system with microsecond resolution, simplifying classical time-of-flight designs which require electronics with picosecond resolution. This design was also applied to position measurement. PMID:22303169
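The inversion from repetition frequency to length can be written in a few lines: the round-trip period satisfies 1/f = 2L/v + t_delay, where v is the propagation speed in the cable and t_delay the total delay added by the two repeaters. The velocity factor and delay values below are assumed for illustration, not taken from the paper:

```python
# Back-of-the-envelope model of the repeated-pulse length measurement.
C = 299_792_458.0          # speed of light in vacuum, m/s
VELOCITY_FACTOR = 0.66     # assumed: typical for coaxial cable
T_DELAY = 2e-6             # assumed: combined delay of both repeaters, s

def cable_length(freq_hz):
    # Invert 1/f = 2*L/v + t_delay for the cable length L.
    v = C * VELOCITY_FACTOR
    return v * (1.0 / freq_hz - T_DELAY) / 2.0

# A 100 m cable gives a period of 2*100/v + t_delay (~3 us here),
# and inverting the measured frequency recovers 100.0 m.
period = 2 * 100.0 / (C * VELOCITY_FACTOR) + T_DELAY
print(round(cable_length(1.0 / period), 6))
```

Because the period grows linearly with length, a microsecond-resolution counter averaging many repetitions reaches centimeter-level length resolution, which is the paper's central point.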
Investigations on quantum mechanics with minimal length
International Nuclear Information System (INIS)
Chargui, Yassine
2009-01-01
We consider a modified quantum mechanics where the coordinates and momenta are assumed to satisfy a non-standard commutation relation of the form $[X_i, P_j] = i\hbar\left(\delta_{ij}(1+\beta P^2) + \beta' P_i P_j\right)$. Such an algebra results in a generalized uncertainty relation which leads to the existence of a minimal observable length. Moreover, it incorporates a UV/IR mixing and a noncommutative position space. We analyse the possible representations in terms of differential operators. The latter are used to study the low energy effects of the minimal length by considering different quantum systems: the harmonic oscillator, the Klein-Gordon oscillator, the spinless Salpeter Coulomb problem, and the Dirac equation with a linear confining potential. We also discuss whether such effects are observable in precision measurements on a relativistic electron trapped in a strong magnetic field.
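For the one-dimensional case with $\beta' = 0$, the minimal length follows in a few lines; this is the standard textbook derivation for such deformed algebras, sketched here for orientation rather than taken from the thesis:

```latex
\begin{align}
  [X, P] &= i\hbar\,\bigl(1 + \beta P^2\bigr), \\
  \Delta X\,\Delta P &\ge \tfrac{1}{2}\bigl|\langle [X,P] \rangle\bigr|
      = \tfrac{\hbar}{2}\bigl(1 + \beta\,\Delta P^2 + \beta\langle P\rangle^2\bigr), \\
  \Delta X_{\min} &= \hbar\sqrt{\beta},
      \quad \text{attained at } \langle P\rangle = 0,\ \Delta P = 1/\sqrt{\beta}.
\end{align}
```

Minimizing the right-hand side over $\Delta P$ shows that $\Delta X$ can never be reduced below $\hbar\sqrt{\beta}$, which is the minimal observable length the abstract refers to.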
Aberrant leukocyte telomere length in Birdshot Uveitis.
Vazirpanah, Nadia; Verhagen, Fleurieke H; Rothova, Anna; Missotten, Tom O A R; van Velthoven, Mirjam; Den Hollander, Anneke I; Hoyng, Carel B; Radstake, Timothy R D J; Broen, Jasper C A; Kuiper, Jonas J W
2017-01-01
Birdshot Uveitis (BU) is an archetypical chronic inflammatory eye disease, with poor visual prognosis, that provides an excellent model for studying chronic inflammation. BU typically affects patients in the fifth decade of life. This suggests that it may represent an age-related chronic inflammatory disease, which has been linked to increased erosion of the telomere length of leukocytes. To study this in detail, we exploited a sensitive standardized quantitative real-time polymerase chain reaction to determine the peripheral blood leukocyte telomere length (LTL) in 91 genotyped Dutch BU patients and 150 unaffected Dutch controls. Although LTL erosion rates were very similar between BU patients and healthy controls, we observed that BU patients displayed longer LTL, with a median of log(LTL) = 4.87 (= 74131 base pairs) compared to 4.31 (= 20417 base pairs) in unaffected controls (PRTEL1. These findings suggest that BU is accompanied by significantly longer LTL.
Increasing LIGO sensitivity by feedforward subtraction of auxiliary length control noise
International Nuclear Information System (INIS)
Meadors, Grant David; Riles, Keith; Kawabe, Keita
2014-01-01
LIGO, the Laser Interferometer Gravitational-wave Observatory, has been designed and constructed to measure gravitational wave strain via differential arm length. The LIGO 4 km Michelson arms with Fabry–Perot cavities have auxiliary length control servos for suppressing Michelson motion of the beam-splitter and arm cavity input mirrors, which degrades interferometer sensitivity. We demonstrate how a post facto pipeline improves a data sample from LIGO Science Run 6 with feedforward subtraction. Dividing data into 1024 s windows, we numerically fit filter functions representing the frequency-domain transfer functions from Michelson length channels into the gravitational-wave strain data channel for each window, then subtract the filtered Michelson channel noise (witness) from the strain channel (target). In this paper we describe the algorithm, assess achievable improvements in sensitivity to astrophysical sources, and consider relevance to future interferometry. (paper)
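A toy version of the witness-subtraction step can be sketched with a time-domain least-squares FIR fit rather than the paper's per-window frequency-domain fits; the coupling filter, signal, and all numbers below are invented for illustration:

```python
import numpy as np

def fit_fir(target, witness, ntaps=8):
    # Least-squares FIR fit of the witness-to-target coupling; subtracting
    # the filtered witness then removes the coherent noise contribution.
    X = np.lib.stride_tricks.sliding_window_view(witness, ntaps)
    h, *_ = np.linalg.lstsq(X, target[ntaps - 1:], rcond=None)
    return h

rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 0.05 * np.arange(4096))    # stand-in "strain"
witness = rng.normal(size=4096)                        # auxiliary channel
coupling = np.array([0.5, -0.2, 0.1])                  # unknown to the fit
noise = np.convolve(witness, coupling, mode="full")[:4096]
target = signal + noise

h = fit_fir(target, witness, ntaps=8)
cleaned = target[7:] - np.lib.stride_tricks.sliding_window_view(witness, 8) @ h
# The residual noise should be far below the original noise level.
print(np.std(cleaned - signal[7:]) < np.std(noise))
```

The actual pipeline does this per 1024 s window with frequency-domain filter fits, which handles slowly drifting transfer functions that a single global fit would miss.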
Foundations of genetic algorithms 1991
1991-01-01
Foundations of Genetic Algorithms 1991 (FOGA 1) discusses the theoretical foundations of genetic algorithms (GA) and classifier systems. This book compiles research papers on selection and convergence, coding and representation, problem hardness, deception, classifier system design, variation and recombination, parallelization, and population divergence. Other topics include the non-uniform Walsh-schema transform; spurious correlations and premature convergence in genetic algorithms; and variable default hierarchy separation in a classifier system. The grammar-based genetic algorithm; condition
THE APPROACHING TRAIN DETECTION ALGORITHM
S. V. Bibikov
2015-01-01
The paper deals with a detection algorithm for rail vibroacoustic waves caused by an approaching train against a background of increased noise. The urgency of developing a train detection algorithm in view of increased rail noise, when railway lines are close to roads or road intersections, is justified. The algorithm is based on a method for detecting weak signals in a noisy environment. The final expression for the information statistic is adjusted. We present the results of algorithm research and t...
Combinatorial optimization algorithms and complexity
Papadimitriou, Christos H
1998-01-01
This clearly written, mathematically rigorous text includes a novel algorithmic exposition of the simplex method and also discusses the Soviet ellipsoid algorithm for linear programming; efficient algorithms for network flow, matching, spanning trees, and matroids; the theory of NP-complete problems; approximation algorithms, local search heuristics for NP-complete problems, more. All chapters are supplemented by thought-provoking problems. A useful work for graduate-level students with backgrounds in computer science, operations research, and electrical engineering.
Methyl 2-Benzamido-2-(1<em>H</em>-benzimidazol-1-ylmethoxy)acetate
Directory of Open Access Journals (Sweden)
Alami Anouar
2012-09-01
Full Text Available The heterocyclic carboxylic α-aminoester methyl 2-benzamido-2-(1<em>H</em>-benzimidazol-1-ylmethoxy)acetate is obtained by <em>O</em>-alkylation of the <em>N</em>-benzoylated methyl α-azido glycinate with 1<em>H</em>-benzimidazol-1-ylmethanol.
Directory of Open Access Journals (Sweden)
B. Azzouz
2007-01-01
Full Text Available A textile fibre mixture, as a multicomponent blend of variable fibres, requires a proper method to predict the characteristics of the final blend. The length diagram and the fibrogram of cotton are generated. Then the length distribution, the length diagram, and the fibrogram of a blend of different categories of cotton are determined. The length distributions by weight of five different categories of cotton (Egyptian, USA (Pima), Brazilian, USA (Upland), and Uzbekistani) are measured by AFIS. From these distributions, the length distribution, the length diagram, and the fibrogram by weight of four binary blends are expressed. The length parameters of these cotton blends are calculated and their variations are plotted against the mass fraction x of one component in the blend. These calculated parameters are compared to those of real blends. Finally, the selection of optimal blends using the linear programming method, based on the hypothesis that the cotton blend parameters vary linearly as a function of the component ratios, is proved insufficient.
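The linear-mixing hypothesis tested in the last sentence can be written down directly: the blend's length distribution by weight is the mass-fraction-weighted sum of the component distributions. The histograms below are invented for illustration:

```python
import numpy as np

# Length distributions by weight over common length classes (mm);
# the two component histograms are illustrative, not AFIS data.
lengths_mm = np.array([10, 20, 30, 40])
cotton_a = np.array([0.10, 0.30, 0.40, 0.20])   # a longer-staple cotton
cotton_b = np.array([0.30, 0.40, 0.25, 0.05])   # a shorter-staple cotton

def blend_distribution(x):
    # x = mass fraction of cotton A in the binary blend.
    return x * cotton_a + (1 - x) * cotton_b

def mean_length(dist):
    return float(np.dot(lengths_mm, dist))

# Under the linearity hypothesis the blend mean length interpolates
# linearly between the component means (27.0 mm and 20.5 mm here).
print(mean_length(blend_distribution(0.5)))   # 23.75
```

The paper's finding is that real blends deviate from this idealized linear interpolation, which is why the linear-programming selection based on it proves insufficient.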
Essential algorithms a practical approach to computer algorithms
Stephens, Rod
2013-01-01
A friendly and accessible introduction to the most useful algorithms Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures
Quark ensembles with infinite correlation length
Molodtsov, S. V.; Zinovjev, G. M.
2014-01-01
By studying quark ensembles with infinite correlation length we formulate the quantum field theory model that, as we show, is exactly integrable and develops an instability of its standard vacuum ensemble (the Dirac sea). We argue such an instability is rooted in high ground state degeneracy (for 'realistic' space-time dimensions) featuring a fairly specific form of energy distribution, and with the cutoff parameter going to infinity this inherent energy distribution becomes infinitely narrow...
Summary of coherent neutron scattering length
International Nuclear Information System (INIS)
Rauch, H.
1981-07-01
Experimental values of neutron-nuclei bound scattering lengths for some 354 isotopes and elements and the various spin-states are compiled in a uniform way together with their error bars as quoted in the original literature. Recommended values are also given. The definitions of the relevant quantities presented in the data tables and the basic principles of measurements are explained in the introductory chapters. The data is also available on a magnetic tape
Asymptotic safety, emergence and minimal length
International Nuclear Information System (INIS)
Percacci, Roberto; Vacca, Gian Paolo
2010-01-01
There seems to be a common prejudice that asymptotic safety is either incompatible with, or at best unrelated to, the other topics in the title. This is not the case. In fact, we show that (1) the existence of a fixed point with suitable properties is a promising way of deriving emergent properties of gravity, and (2) there is a sense in which asymptotic safety implies a minimal length. In doing so we also discuss possible signatures of asymptotic safety in scattering experiments.
Minimal length uncertainty relation and ultraviolet regularization
Kempf, Achim; Mangano, Gianpiero
1997-06-01
Studies in string theory and quantum gravity suggest the existence of a finite lower limit Δx₀ to the possible resolution of distances, at the latest on the scale of the Planck length of 10⁻³⁵ m. Within the framework of the Euclidean path integral we explicitly show ultraviolet regularization in field theory through this short distance structure. Both rotation and translation invariance can be preserved. An example is studied in detail.
Directory of Open Access Journals (Sweden)
Giovanni Ferrara
1992-07-01
Full Text Available Abstract In a number of Barn Owl pellets from the Murge plateau a specimen of <em>Sorex</em> sp. was detected. Thanks to some morphological and morphometrical features, the cranial bones can be tentatively attributed to <em>Sorex samniticus</em> Altobello, 1926. The genus <em>Sorex</em> had not previously been included in the fauna of Apulia south of the Gargano district; the origin and significance of the above record are briefly discussed, the actual presence of a natural population of <em>Sorex</em> in the Murge being not yet proved. Riassunto (translated from Italian): The finding of a specimen of <em>Sorex</em> cfr. <em>samniticus</em> in pellets of <em>Tyto alba</em> from the Murge is reported. Since the genus had not yet been recorded in Apulia south of the Gargano, the faunistic significance of the record is discussed.
A method for evaluating discoverability and navigability of recommendation algorithms.
Lamprecht, Daniel; Strohmaier, Markus; Helic, Denis
2017-01-01
Recommendations are increasingly used to support and enable discovery, browsing, and exploration of items. This is especially true for entertainment platforms such as Netflix or YouTube, where frequently, no clear categorization of items exists. Yet, the suitability of a recommendation algorithm to support these use cases cannot be comprehensively evaluated by any recommendation evaluation measures proposed so far. In this paper, we propose a method to expand the repertoire of existing recommendation evaluation techniques with a method to evaluate the discoverability and navigability of recommendation algorithms. The proposed method tackles this by means of first evaluating the discoverability of recommendation algorithms by investigating structural properties of the resulting recommender systems in terms of bow tie structure, and path lengths. Second, the method evaluates navigability by simulating three different models of information seeking scenarios and measuring the success rates. We show the feasibility of our method by applying it to four non-personalized recommendation algorithms on three data sets and also illustrate its applicability to personalized algorithms. Our work expands the arsenal of evaluation techniques for recommendation algorithms, extends from a one-click-based evaluation towards multi-click analysis, and presents a general, comprehensive method to evaluating navigability of arbitrary recommendation algorithms.
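The structural side of the method treats recommendations as a directed graph whose shortest path lengths bound what a browsing user can discover; this can be sketched minimally with a BFS. The toy graph and helper below are invented for illustration, not from the paper:

```python
from collections import deque

# Each item's recommendation list is its set of outgoing edges.
recs = {
    "A": ["B", "C"], "B": ["C"], "C": ["D"],
    "D": ["A"],
    "E": ["A"],   # nothing recommends "E": it cannot be discovered by clicks
}

def path_lengths(start):
    # Breadth-first search: minimum number of recommendation clicks
    # needed to reach each item from the starting item.
    dist = {start: 0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nxt in recs.get(node, []):
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                queue.append(nxt)
    return dist

# From "A" every item except "E" is reachable within two clicks.
print(path_lengths("A"))
```

Averaging such reachability and path-length statistics over all start items gives the discoverability measure; the navigability part then replaces exhaustive BFS with simulated information-seeking click models and measures their success rates.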
The Effective Coherence Length in Anisotropic Superconductors
International Nuclear Information System (INIS)
Polturak, E.; Koren, G.; Nesher, O
1999-01-01
If electrons are transmitted from a normal conductor (N) into a superconductor (S), common wisdom has it that the electrons are converted into Cooper pairs within a coherence length from the interface. This is true in conventional superconductors with an isotropic order parameter. We have established experimentally that the situation is rather different in high Tc superconductors having an anisotropic order parameter. We used epitaxial thin film S/N bilayers having different interface orientations in order to inject carriers from S into N along different directions. The distance to which these carriers penetrate was determined through their effect on the Tc of the bilayers. We found that the effective coherence length is 20 Å only along the a or b directions, while in other directions we find a length of 250±20 Å out of plane, and an even larger value for in-plane, off high symmetry directions. These observations can be explained using the Blonder-Tinkham-Klapwijk model adapted to anisotropic superconductivity. Several implications of our results on outstanding problems with high Tc junctions will be discussed
FTO associations with obesity and telomere length.
Zhou, Yuling; Hambly, Brett D; McLachlan, Craig S
2017-09-01
This review examines the biology of the Fat mass- and obesity-associated gene (FTO), and the implications of genetic association of FTO SNPs with obesity and genetic aging. Notably, we focus on the role of FTO in the regulation of methylation status as possible regulators of weight gain and genetic aging. We present a theoretical review of the FTO gene with a particular emphasis on associations with UCP2, AMPK, RBL2, IRX3, CUX1, mTORC1 and hormones involved in hunger regulation. These associations are important for dietary behavior regulation and cellular nutrient sensing via amino acids. We suggest that these pathways may also influence telomere regulation. Telomere length (TL) attrition may be influenced by obesity-related inflammation and oxidative stress, and FTO gene-involved pathways. There is additional emerging evidence to suggest that telomere length and obesity are bi-directionally associated. However, the role of obesity risk-related genotypes and associations with TL are not well understood. The FTO gene may influence pathways implicated in regulation of TL, which could help to explain some of the non-consistent relationship between weight phenotype and telomere length that is observed in population studies investigating obesity.
Development of the Heated Length Correction Factor
International Nuclear Information System (INIS)
Park, Ho-Young; Kim, Kang-Hoon; Nahm, Kee-Yil; Jung, Yil-Sup; Park, Eung-Jun
2008-01-01
The Critical Heat Flux (CHF) on a nuclear fuel is defined as a function of flow channel geometry and flow condition. According to the selection of the explanatory variables, there are three hypotheses to explain CHF on a uniformly heated vertical rod (the inlet condition, exit condition, and local condition hypotheses). For the inlet condition hypothesis, CHF is characterized as a function of system pressure, rod diameter, rod length, mass flow, and inlet subcooling. For the exit condition hypothesis, exit quality substitutes for inlet subcooling. Generally the heated length effect on CHF in the exit condition hypothesis is smaller than that of the other variables. Heated length is usually excluded in the local condition hypothesis, which describes the CHF with only local fluid conditions. Most commercial plants currently use empirical CHF correlations based on the local condition hypothesis. An empirical CHF correlation is developed by fitting the selected sensitive local variables to CHF test data using multiple non-linear regression. Because this kind of fit carries no physical meaning, it is difficult to reflect properly the effect of complex geometry. So the recent CHF correlation development strategy of nuclear fuel vendors is to construct first a basic CHF correlation consisting of basic flow variables (local fluid conditions), and then to compensate with additional geometrical correction factors. Because the functional forms of the correction factors are determined separately from independent test data representing the corresponding geometry, they can be applied directly to other CHF correlations with only minor coefficient modification.
Slip length crossover on a graphene surface
Energy Technology Data Exchange (ETDEWEB)
Liang, Zhi, E-mail: liangz3@rpi.edu [Rensselaer Nanotechnology Center, Rensselaer Polytechnic Institute, Troy, New York 12180 (United States); Keblinski, Pawel, E-mail: keplip@rpi.edu [Rensselaer Nanotechnology Center, Rensselaer Polytechnic Institute, Troy, New York 12180 (United States); Department of Materials Science and Engineering, Rensselaer Polytechnic Institute, Troy, New York 12180 (United States)
2015-04-07
Using equilibrium and non-equilibrium molecular dynamics simulations, we study the flow of argon fluid above the critical temperature in a planar nanochannel delimited by graphene walls. We observe that, as a function of pressure, the slip length first decreases due to the decreasing mean free path of gas molecules, reaches its minimum value when the pressure is close to the critical pressure, and then increases with further increase in pressure. We demonstrate that the slip length increase at high pressures is due to the fact that the viscosity of the fluid increases much faster with pressure than the friction coefficient between the fluid and the graphene. This behavior is clearly exhibited in the case of graphene due to a very smooth potential landscape originating from the very high atomic density of graphene planes. By contrast, on surfaces with lower atomic density, such as a (100) Au surface, the slip length for high fluid pressures is essentially zero, regardless of the nature of the interaction between the fluid and the solid wall.
Short Rayleigh length free electron lasers
Directory of Open Access Journals (Sweden)
W. B. Colson
2006-03-01
Full Text Available Conventional free electron laser (FEL oscillators minimize the optical mode volume around the electron beam in the undulator by making the resonator Rayleigh length about one third to one half of the undulator length. This maximizes gain and beam-mode coupling. In compact configurations of high-power infrared FELs or moderate power UV FELs, the resulting optical intensity can damage the resonator mirrors. To increase the spot size and thereby reduce the optical intensity at the mirrors below the damage threshold, a shorter Rayleigh length can be used, but the FEL interaction is significantly altered. We model this interaction using a coordinate system that expands with the rapidly diffracting optical mode from the ends of the undulator to the mirrors. Simulations show that the interaction of the strongly focused optical mode with a narrow electron beam inside the undulator distorts the optical wave front so it is no longer in the fundamental Gaussian mode. The simulations are used to study how mode distortion affects the single-pass gain in weak fields, and the steady-state extraction in strong fields.
Efficient GPS Position Determination Algorithms
National Research Council Canada - National Science Library
Nguyen, Thao Q
2007-01-01
... differential GPS algorithm for a network of users. The stand-alone user GPS algorithm is a direct, closed-form, and efficient new position determination algorithm that exploits the closed-form solution of the GPS trilateration equations and works...
Algorithmic approach to diagram techniques
International Nuclear Information System (INIS)
Ponticopoulos, L.
1980-10-01
An algorithmic approach to diagram techniques of elementary particles is proposed. The definition and axiomatics of the theory of algorithms are presented, followed by the list of instructions of an algorithm formalizing the construction of graphs and the assignment of mathematical objects to them. (T.A.)
Glycosylation of Vanillin and 8-Nordihydrocapsaicin by Cultured <em>Eucalyptus perriniana</em> Cells
Directory of Open Access Journals (Sweden)
Naoji Kubota
2012-05-01
Full Text Available Glycosylation of vanilloids such as vanillin and 8-nordihydrocapsaicin by cultured plant cells of <em>Eucalyptus perriniana</em> was studied. Vanillin was converted into vanillin 4-<em>O</em>-β-D-glucopyranoside, vanillyl alcohol, and 4-<em>O</em>-β-D-glucopyranosylvanillyl alcohol by <em>E. perriniana</em> cells. Incubation of cultured <em>E. perriniana</em> cells with 8-nordihydrocapsaicin gave 8-nordihydrocapsaicin 4-<em>O</em>-β-D-glucopyranoside and 8-nordihydrocapsaicin 4-<em>O</em>-β-D-gentiobioside.
ALGORITHM FOR SORTING GROUPED DATA
Evans, J. D.
1994-01-01
It is often desirable to sort data sets in ascending or descending order. This becomes more difficult for grouped data, i.e., multiple sets of data, where each set of data involves several measurements or related elements. The sort becomes increasingly cumbersome when more than a few elements exist for each data set. In order to achieve an efficient sorting process, an algorithm has been devised in which the maximum most significant element is found, and then compared to each element in succession. The program was written to handle the daily temperature readings of the Voyager spacecraft, particularly those related to the special tracking requirements of Voyager 2. By reducing each data set to a single representative number, the sorting process becomes very easy. The first step in the process is to reduce the data set of width 'n' to a data set of width '1'. This is done by representing each data set by a polynomial of length 'n' based on the differences of the maximum and minimum elements. These single numbers are then sorted and converted back to obtain the original data sets. Required input data are the name of the data file to read and sort, and the starting and ending record numbers. The package includes a sample data file, containing 500 sets of data with 5 elements in each set. This program will perform a sort of the 500 data sets in 3 - 5 seconds on an IBM PC-AT with a hard disk; on a similarly equipped IBM PC-XT the time is under 10 seconds. This program is written in BASIC (specifically the Microsoft QuickBasic compiler) for interactive execution and has been implemented on the IBM PC computer series operating under PC-DOS with a central memory requirement of approximately 40K of 8 bit bytes. A hard disk is desirable for speed considerations, but is not required. This program was developed in 1986.
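The reduction-sort idea described above can be sketched briefly. The original program is in BASIC; this is an illustrative Python sketch with invented names, using an integer base expansion in place of the abstract's polynomial representation:

```python
# Sketch of the record-reduction sort: each fixed-width record is collapsed
# to a single integer key, a base-B expansion where B exceeds the element
# range, so that ordering the keys matches ordering the records element-wise.

def reduce_record(record, lo, hi):
    """Collapse a fixed-width record to one comparable integer."""
    base = hi - lo + 1                      # base must exceed the element range
    key = 0
    for element in record:                  # most significant element first
        key = key * base + (element - lo)
    return key

def sort_grouped(records):
    lo = min(min(r) for r in records)
    hi = max(max(r) for r in records)
    # Sorting by the single key reproduces an element-wise (lexicographic) sort.
    return sorted(records, key=lambda r: reduce_record(r, lo, hi))

readings = [[71, 68, 70], [69, 72, 65], [71, 67, 80]]
print(sort_grouped(readings))   # records ordered by their leading elements
```

The single-key reduction is what makes the comparison step cheap: once each width-n record is a scalar, any standard sort applies unchanged.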
Selfish Gene Algorithm Vs Genetic Algorithm: A Review
Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed
2016-11-01
Evolutionary algorithms (EAs) are among the algorithms inspired by nature, and within little more than a decade hundreds of papers have reported their successful application. The Selfish Gene Algorithm (SFGA) is one of the latest EAs, inspired by the Selfish Gene Theory, an interpretation of Darwinian ideas from the biologist Richard Dawkins in 1989. In this paper, following a brief introduction to the SFGA, the chronology of its evolution is presented. The purpose of this paper is to present an overview of the concepts of the SFGA as well as its opportunities and challenges. Accordingly, the history and the steps involved in the algorithm are discussed, and its different applications, together with an analysis of these applications, are evaluated.
Directory of Open Access Journals (Sweden)
Luiz Carlos Oliveira Junior
Full Text Available The article addresses aspects of the relationship between cinema and the art of portraiture. We first seek an aesthetic definition of what a cinematographic portrait would be, always in tension with the formal criteria and stylistic standards that historically constituted the pictorial portrait. We then relate this question to the importance given to the representation of the facial close-up in the first decades of cinema, when films were assigned an unprecedented role in the study of physiognomy and facial expression. Finally, we present examples of self-portraits in painting and in cinema to show how self-representation puts into crisis the notions of subjectivity and identity on which the classical definition of the portrait rested.
Optimum design for rotor-bearing system using advanced genetic algorithm
International Nuclear Information System (INIS)
Kim, Young Chan; Choi, Seong Pil; Yang, Bo Suk
2001-01-01
This paper describes a combinational method to compute the global and local solutions of optimization problems. The present hybrid algorithm uses both a genetic algorithm and a local concentrated search algorithm (e.g., the simplex method). The hybrid algorithm is not only faster than the standard genetic algorithm but also supplies a more accurate solution. In addition, this algorithm can find both the global and local optimum solutions. The present algorithm can be applied to minimize the resonance response (Q factor) and to place the critical speeds as far from the operating speed as possible. These factors play very important roles in designing a rotor-bearing system under dynamic behavior constraints. In the present work, the shaft diameter, the bearing length, and the clearance are used as the design variables.
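A minimal sketch of the hybrid idea, a genetic algorithm for global search followed by a local refinement, can illustrate the two-stage structure. The objective function and all parameters below are purely illustrative, and a crude coordinate descent stands in for the paper's simplex stage:

```python
# Hybrid global/local optimization sketch (illustrative names and parameters).
import random

def f(x):                                   # toy objective to minimize
    return (x[0] - 1.5) ** 2 + (x[1] + 0.5) ** 2

def genetic_stage(pop_size=30, gens=60, bounds=(-5.0, 5.0)):
    rng = random.Random(0)
    pop = [[rng.uniform(*bounds), rng.uniform(*bounds)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f)
        parents = pop[: pop_size // 2]      # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = [(ai + bi) / 2 for ai, bi in zip(a, b)]   # crossover
            child = [c + rng.gauss(0, 0.1) for c in child]    # mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=f)

def local_stage(x, step=0.05, iters=200):
    # Crude coordinate descent standing in for the simplex refinement.
    x = list(x)
    for _ in range(iters):
        for i in range(len(x)):
            for d in (step, -step):
                if f(x[:i] + [x[i] + d] + x[i + 1:]) < f(x):
                    x[i] += d
        step *= 0.95                        # shrink the search step
    return x

best = local_stage(genetic_stage())
print(best)
```

The design point is the hand-off: the GA supplies a starting point already in the basin of the global optimum, and the local stage only needs to polish it.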
<em>In Vivo</em> Histamine Optical Nanosensors
Directory of Open Access Journals (Sweden)
Heather A. Clark
2012-08-01
Full Text Available In this communication we discuss the development of ionophore-based nanosensors for the detection and monitoring of histamine levels <em>in vivo</em>. This approach is based on the use of an amine-reactive, broad-spectrum ionophore which is capable of recognizing and binding to histamine. We pair this ionophore with our already established nanosensor platform, and demonstrate <em>in vitro</em> and <em>in vivo</em> monitoring of histamine levels. This approach enables capturing the rapid kinetics of histamine after injection, which are more difficult to measure with standard approaches such as blood sampling, especially in small research models. The coupling of <em>in vivo</em> nanosensors with ionophores such as nonactin provides a way to generate nanosensors for novel targets without the difficult process of designing and synthesizing novel ionophores.
Directory of Open Access Journals (Sweden)
Fernanda Keley Silva Pereira
2010-03-01
Full Text Available The objective of this work was to associate the digestible energy level of the diet with ovary development at the post-larval stage. Twenty-four female fish were used, with average initial weight and length of 0.33 ± 0.11 g and 2.94 ± 0.39 cm, respectively. There were four treatments: 2600, 2700, 2800, and 2900 kcal of DE/kg of ration. The digestible energy level influenced the final weight up to the estimated level of 2757.142 kcal/kg. However, there was no significant effect on total length, gonad weight, or gonadosomatic index in females. The level of digestible energy did not provide better initial development of the ovary. This information may guide new experiments to improve cultivation conditions, increase the economic value of the activity, and increase the number of fingerlings for aquaculture.
Multiple Lookup Table-Based AES Encryption Algorithm Implementation
Gong, Jin; Liu, Wenyi; Zhang, Huixin
A new AES (Advanced Encryption Standard) encryption algorithm implementation is proposed in this paper. It is based on five lookup tables, which are generated from the S-box (the substitution table in AES). The obvious advantages are reducing the code size, improving the implementation efficiency, and helping new learners to understand the AES encryption algorithm and the GF(2^8) multiplication which is necessary to correctly implement AES [1]. This method can be applied on processors with a word length of 32 bits or above, on FPGAs, and elsewhere, and correspondingly it can be implemented in VHDL, Verilog, VB and other languages.
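The flavor of table-driven AES arithmetic can be illustrated with log/antilog tables for GF(2^8) multiplication. This is not the paper's five-table construction, just a minimal sketch of the lookup idea it builds on:

```python
# GF(2^8) multiplication via precomputed log/antilog tables, the table-lookup
# idea behind fast AES MixColumns implementations (illustrative sketch only).

def xtime(a):                       # multiply by x modulo the AES polynomial
    a <<= 1
    return (a ^ 0x11B) & 0xFF if a & 0x100 else a

def gf_mul_slow(a, b):              # reference bitwise multiply in GF(2^8)
    r = 0
    while b:
        if b & 1:
            r ^= a
        a, b = xtime(a), b >> 1
    return r

# Build log/antilog tables over the generator 3 (a primitive element).
EXP, LOG = [0] * 512, [0] * 256
x = 1
for i in range(255):
    EXP[i] = x
    LOG[x] = i
    x = gf_mul_slow(x, 3)
for i in range(255, 512):
    EXP[i] = EXP[i - 255]           # doubled table avoids a modular reduction

def gf_mul(a, b):                   # one lookup-add-lookup instead of a loop
    if a == 0 or b == 0:
        return 0
    return EXP[LOG[a] + LOG[b]]

# Spot check against the FIPS-197 worked example: {57} * {83} = {c1}.
print(hex(gf_mul(0x57, 0x83)))      # 0xc1
```

Replacing the bit-by-bit loop with two table lookups and an addition is exactly the trade the paper makes at larger scale: more ROM, fewer operations per byte.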
Polarization ray tracing in anisotropic optically active media. I. Algorithms
International Nuclear Information System (INIS)
McClain, S.C.; Hillman, L.W.; Chipman, R.A.
1993-01-01
Procedures for performing polarization ray tracing through birefringent media are presented in a form compatible with the standard methods of geometrical ray tracing. The birefringent materials treated include the following: anisotropic optically active materials such as quartz, non-optically active uniaxial materials such as calcite, and isotropic optically active materials such as mercury sulfide and organic liquids. Refraction and reflection algorithms are presented that compute both ray directions and wave directions. Methods for computing polarization modes, refractive indices, optical path lengths, and Fresnel transmission and reflection coefficients are also specified. A numerical example of these algorithms is given for analyzing the field of view of a quartz rotator. 37 refs., 3 figs
Honing process optimization algorithms
Kadyrov, Ramil R.; Charikov, Pavel N.; Pryanichnikova, Valeria V.
2018-03-01
This article considers the relevance of honing processes for creating high-quality mechanical engineering products. The features of the honing process are revealed and such important concepts as the task for optimization of honing operations, the optimal structure of the honing working cycles, stepped and stepless honing cycles, simulation of processing and its purpose are emphasized. It is noted that the reliability of the mathematical model determines the quality parameters of the honing process control. An algorithm for continuous control of the honing process is proposed. The process model reliably describes the machining of a workpiece in a sufficiently wide area and can be used to operate the CNC machine CC743.
Opposite Degree Algorithm and Its Applications
Directory of Open Access Journals (Sweden)
Xiao-Guang Yue
2015-12-01
Full Text Available The Opposite Degree (OD) algorithm is an intelligent algorithm proposed by Yue Xiaoguang et al. The opposite degree algorithm is mainly based on the concept of opposite degree, combined with ideas from neural network design, genetic algorithms, and clustering analysis. The OD algorithm is divided into two sub-algorithms, namely the opposite degree numerical computation (OD-NC) algorithm and the opposite degree classification computation (OD-CC) algorithm.
Correlated evolution of sternal keel length and ilium length in birds
Directory of Open Access Journals (Sweden)
Tao Zhao
2017-07-01
Full Text Available The interplay between the pectoral module (the pectoral girdle and limbs) and the pelvic module (the pelvic girdle and limbs) plays a key role in shaping avian evolution, but prior empirical studies on trait covariation between the two modules are limited. Here we empirically test whether (size-corrected) sternal keel length and ilium length are correlated during avian evolution using phylogenetic comparative methods. Our analyses of extant birds and Mesozoic birds both recover a significantly positive correlation. The results provide new evidence regarding the integration between the pelvic and pectoral modules. The correlated evolution of sternal keel length and ilium length may serve as a mechanism to cope with the effect on performance caused by a tradeoff in muscle mass between the pectoral and pelvic modules, via changing moment arms of muscles that function in flight and in terrestrial locomotion.
Directory of Open Access Journals (Sweden)
Nawal M. Al-Musayeib
2012-09-01
Full Text Available The present study investigated the <em>in vitro</em> antiprotozoal activity of sixteen selected medicinal plants. Plant materials were extracted with methanol and screened <em>in vitro</em> against erythrocytic schizonts of <em>Plasmodium falciparum</em>, intracellular amastigotes of <em>Leishmania infantum</em> and <em>Trypanosoma cruzi</em>, and free trypomastigotes of <em>T. brucei</em>. Cytotoxic activity was determined against MRC-5 cells to assess selectivity. The criterion for activity was an IC50 < 10 µg/mL (4). Antiplasmodial activity was found in the extracts of <em>Prosopis juliflora</em> and <em>Punica granatum</em>. Antileishmanial activity against <em>L. infantum</em> was demonstrated in <em>Caralluma sinaica</em> and <em>Periploca aphylla</em>. Amastigotes of <em>T. cruzi</em> were affected by the methanol extracts of <em>Albizia lebbeck</em> pericarp, <em>Caralluma sinaica</em>, <em>Periploca aphylla</em> and <em>Prosopis juliflora</em>. Activity against <em>T. brucei</em> was obtained in <em>Prosopis juliflora</em>. Cytotoxicity (MRC-5 IC50 < 10 µg/mL) and hence non-specific activity was observed for <em>Conocarpus lancifolius</em>.
The ocular components in anisometropia
Directory of Open Access Journals (Sweden)
David Tayah
2007-06-01
Full Text Available PURPOSE: To compare the correlations of the ocular components (axial length, anterior segment length, mean corneal power, vitreous chamber depth, and equivalent refractive power) with the total refractive error in the eye with the lower and the eye with the higher ametropia of subjects with anisometropia. METHODS: An analytical survey was carried out in a population of 68 anisometropes of two or more diopters seen at the Ophthalmology Clinic of the Hospital das Clínicas da Faculdade de Medicina da Universidade de São Paulo. The anisometropes underwent objective and subjective static refraction, keratometry, and ultrasonic biometry. RESULTS: There was no significant difference between the measured ocular component values of the eyes with the lower and the higher ametropia. The eyes with the lower ametropia showed the same significant correlations observed in emmetropic eyes, namely, correlation of refraction with anterior segment length and axial length, and correlation of axial length with corneal power and vitreous chamber depth. The eyes with the higher ametropia showed a significant correlation of refraction with axial length, and of axial length with vitreous chamber depth. In both eyes, a significant correlation of lens power with anterior chamber depth was also observed. CONCLUSION: The eyes with the lower ametropia developed the correlations most frequently observed in emmetropic eyes. The eyes with the higher ametropia did not develop the same correlations as emmetropes.
Fast algorithm for Morphological Filters
International Nuclear Information System (INIS)
Lou Shan; Jiang Xiangqian; Scott, Paul J
2011-01-01
In surface metrology, morphological filters, which evolved from the envelope filtering system (E-system), work well for functional prediction of surface finish in the analysis of surfaces in contact. The naive algorithms are time consuming, especially for areal data, and not generally adopted in real practice. A fast algorithm is proposed based on the alpha shape. The hull obtained by rolling the alpha ball is equivalent to the morphological opening/closing in theory. The algorithm depends on Delaunay triangulation with time complexity O(n log n). In comparison to the naive algorithms it generates the opening and closing envelope without combining dilation and erosion. Edge distortion is corrected by reflective padding for open profiles/surfaces. Spikes in the sample data are detected and points interpolated to prevent singularities. The proposed algorithm works well both for morphological profile and areal filters. Examples are presented to demonstrate the validity and superiority in efficiency of this algorithm over the naive algorithm.
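The naive morphological filter that the abstract contrasts with the alpha-shape method can be sketched for a 1D profile. This is illustrative only; practical implementations use disk/ball structuring elements and the alpha-shape speedup rather than this direct O(n·k) form:

```python
# Naive morphological closing of a profile: dilation then erosion with a
# flat (line-segment) structuring element of half-width k.

def dilate(z, k):
    n = len(z)
    return [max(z[max(0, i - k): min(n, i + k + 1)]) for i in range(n)]

def erode(z, k):
    n = len(z)
    return [min(z[max(0, i - k): min(n, i + k + 1)]) for i in range(n)]

def closing(z, k):
    # Closing = erosion of the dilation; it fills valleys narrower than the
    # structuring element, mimicking a flat-ended stylus riding the surface.
    return erode(dilate(z, k), k)

profile = [0, 0, 0, -3, 0, 0, 0]     # a single narrow valley
print(closing(profile, 2))           # the narrow valley is filled to 0
```

Each sample scans a window of 2k+1 neighbors twice, which is exactly the cost the alpha-shape approach avoids by computing the envelope directly from a Delaunay triangulation.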
Recognition algorithms in knot theory
International Nuclear Information System (INIS)
Dynnikov, I A
2003-01-01
In this paper the problem of constructing algorithms for comparing knots and links is discussed. A survey of existing approaches and basic results in this area is given. In particular, diverse combinatorial methods for representing links are discussed, the Haken algorithm for recognizing a trivial knot (the unknot) and a scheme for constructing a general algorithm (using Haken's ideas) for comparing links are presented, an approach based on representing links by closed braids is described, the known algorithms for solving the word problem and the conjugacy problem for braid groups are described, and the complexity of the algorithms under consideration is discussed. A new method of combinatorial description of knots is given together with a new algorithm (based on this description) for recognizing the unknot by using a procedure for monotone simplification. In the conclusion of the paper several problems are formulated whose solution could help to advance towards the 'algorithmization' of knot theory
A speedup technique for (l, d)-motif finding algorithms
Directory of Open Access Journals (Sweden)
Dinh Hieu
2011-03-01
Full Text Available Abstract Background The discovery of patterns in DNA, RNA, and protein sequences has led to the solution of many vital biological problems. For instance, the identification of patterns in nucleic acid sequences has resulted in the determination of open reading frames, identification of promoter elements of genes, identification of intron/exon splicing sites, identification of SH RNAs, location of RNA degradation signals, identification of alternative splicing sites, etc. In protein sequences, patterns have proven to be extremely helpful in domain identification, location of protease cleavage sites, identification of signal peptides, protein interactions, determination of protein degradation elements, identification of protein trafficking elements, etc. Motifs are important patterns that are helpful in finding transcriptional regulatory elements, transcription factor binding sites, functional genomics, drug design, etc. As a result, numerous papers have been written to solve the motif search problem. Results Three versions of the motif search problem have been proposed in the literature: Simple Motif Search (SMS), (l, d)-motif search (or Planted Motif Search, PMS), and Edit-distance-based Motif Search (EMS). In this paper we focus on PMS. Two kinds of algorithms can be found in the literature for solving the PMS problem: exact and approximate. An exact algorithm always identifies the motifs, while an approximate algorithm may fail to identify some or all of the motifs. The exact version of the PMS problem has been shown to be NP-hard. Exact algorithms proposed in the literature for PMS take time that is exponential in some of the underlying parameters. In this paper we propose a generic technique that can be used to speed up PMS algorithms. Conclusions We present a speedup technique that can be used on any PMS algorithm. We have tested our speedup technique on a number of algorithms. These experimental results show that our speedup technique is indeed very
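To make the (l, d) problem statement concrete, a brute-force exact PMS can be sketched: enumerate every candidate l-mer over the DNA alphabet and keep those that occur in every sequence with at most d mismatches. This is exponential in l, usable only for tiny instances, and is not the speedup technique of the paper:

```python
# Brute-force exact Planted Motif Search over the DNA alphabet.
from itertools import product

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def occurs_within(seq, motif, d):
    """True if motif occurs somewhere in seq with at most d mismatches."""
    l = len(motif)
    return any(hamming(seq[i:i + l], motif) <= d for i in range(len(seq) - l + 1))

def planted_motif_search(seqs, l, d):
    # Check all 4^l candidates against every sequence: exact but exponential.
    return sorted(
        "".join(c) for c in product("ACGT", repeat=l)
        if all(occurs_within(s, "".join(c), d) for s in seqs)
    )

seqs = ["ACGTTGCA", "CCACGTTT", "TACGATGG"]
print(planted_motif_search(seqs, 4, 1))   # 4-mers within 1 mismatch of all three
```

The 4^l enumeration is the bottleneck an exact solver must tame, which is why the generic speedup the paper proposes is valuable across PMS algorithms.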
Cyberbullying among Brazilian adolescents
Wendt, Guilherme Welter
2012-01-01
Cyberbullying is understood as a form of aggressive behavior that occurs through electronic means of interaction (computers, mobile phones, social networking sites) and is carried out intentionally by a person or group against someone in an unequal position of power who also has difficulty defending themselves. The studies available to date highlight that cyberbullying is a risk factor for the development of symptoms of anxiety, depression, and suicidal...
Borromeu, Carlos
2015-01-01
Text published in 1941 in the Catholic-orientation magazine A Ordem, in Rio de Janeiro. The author considers that Nietzsche denied traditional morality, conceiving in its place another that is immoral and brutal. He ultimately accuses the philosopher of being responsible for the war then under way in Europe.
Rabideau, Gregg R.; Chien, Steve A.
2010-01-01
AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as image targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower-priority goal, and high-level goals can be added, removed, or updated at any time, and the "best" goals will be selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by the embedded system, where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion, allowing requests to be changed or added at the last minute, thereby enabling shorter response times and greater autonomy for the system under control.
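The strict-priority selection rule can be sketched with a single shared resource. The names and the one-resource model below are simplifications for illustration, not the AVA v2 interface:

```python
# Strict-priority goal selection under a shared-resource cap: goals are
# admitted in priority order, and a lower-priority goal can never displace
# a higher-priority one.  Re-running the function re-plans the whole set.

def select_goals(goals, capacity):
    """goals: list of (priority, name, resource_need); lower number = higher priority."""
    chosen, used = [], 0
    for priority, name, need in sorted(goals):
        if used + need <= capacity:        # admit only if the resource still fits
            chosen.append(name)
            used += need
    return chosen

goals = [
    (1, "downlink",    40),
    (2, "image_A",     50),
    (3, "image_B",     30),   # oversubscribes: 40 + 50 + 30 > 100
    (4, "calibration", 10),
]
print(select_goals(goals, capacity=100))   # ['downlink', 'image_A', 'calibration']
```

Because the pass is a single cheap sweep, it can be re-run whenever a goal is added, removed, or updated, which is what makes "just-in-time" re-planning feasible on a constrained embedded system.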
Algorithmic Relative Complexity
Directory of Open Access Journals (Sweden)
Daniele Cerra
2011-04-01
Full Text Available Information content and compression are tightly related concepts that can be addressed through both classical and algorithmic information theories, on the basis of Shannon entropy and Kolmogorov complexity, respectively. The definition of several entities in Kolmogorov’s framework relies upon ideas from classical information theory, and these two approaches share many common traits. In this work, we expand the relations between these two frameworks by introducing algorithmic cross-complexity and relative complexity, counterparts of the cross-entropy and relative entropy (or Kullback-Leibler divergence found in Shannon’s framework. We define the cross-complexity of an object x with respect to another object y as the amount of computational resources needed to specify x in terms of y, and the complexity of x related to y as the compression power which is lost when adopting such a description for x, compared to the shortest representation of x. Properties of analogous quantities in classical information theory hold for these new concepts. As these notions are incomputable, a suitable approximation based upon data compression is derived to enable the application to real data, yielding a divergence measure applicable to any pair of strings. Example applications are outlined, involving authorship attribution and satellite image classification, as well as a comparison to similar established techniques.
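Since Kolmogorov quantities are incomputable, the paper approximates them with a real compressor. A minimal sketch of a compression-based divergence in that spirit, using zlib rather than the authors' exact estimator:

```python
# Compression-based stand-in for algorithmic cross-complexity: how much
# harder x is to describe after y has already been seen (illustrative only).
import zlib

def C(data: bytes) -> int:
    """Approximate description length: size of the zlib-compressed data."""
    return len(zlib.compress(data, 9))

def divergence(x: bytes, y: bytes) -> float:
    # Extra bits needed for x given y, normalized by x's own compressed size.
    return (C(y + x) - C(y)) / C(x)

a = b"the quick brown fox jumps over the lazy dog " * 20
b = b"the quick brown fox jumps over the lazy dog " * 19 + b"a lazy cat sleeps "
c = bytes(range(256)) * 4

# A near-duplicate of a should be a much better "dictionary" for a than
# unrelated data, so divergence(a, b) should come out smaller.
print(divergence(a, b), divergence(a, c))
```

Any reasonable compressor works here; the measure inherits the compressor's ability to find shared structure, which is the same substitution the paper's real-data experiments rely on.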
Fatigue evaluation algorithms: Review
Energy Technology Data Exchange (ETDEWEB)
Passipoularidis, V.A.; Broendsted, P.
2009-11-15
A progressive damage fatigue simulator for variable amplitude loads named FADAS is discussed in this work. FADAS (Fatigue Damage Simulator) performs ply-by-ply stress analysis using classical lamination theory and implements adequate stiffness discount tactics based on the failure criterion of Puck to model the degradation caused by failure events at the ply level. Residual strength is incorporated as the fatigue damage accumulation metric. Once the typical fatigue and static properties of the constitutive ply are determined, the performance of an arbitrary lay-up under uniaxial and/or multiaxial load time series can be simulated. The predictions are validated against fatigue life data both from repeated block tests at a single stress ratio and from spectral fatigue tests using the WISPER, WISPERX and NEW WISPER load sequences on a Glass/Epoxy multidirectional laminate typical of wind turbine rotor blade construction. Two versions of the algorithm, one using single-step and the other using incremental application of each load cycle (in case of ply failure), are implemented and compared. Simulation results confirm the ability of the algorithm to take into account load sequence effects. In general, FADAS performs well in predicting life under both spectral and block loading fatigue. (author)