WorldWideScience

Sample records for maximization em algorithm

  1. A Linear Time Algorithm for the k Maximal Sums Problem

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Jørgensen, Allan Grønlund

    2007-01-01

    Finding the sub-vector with the largest sum in a sequence of n numbers is known as the maximum sum problem. Finding the k sub-vectors with the largest sums is a natural extension of this, and is known as the k maximal sums problem. In this paper we design an optimal O(n + k) time algorithm for the k maximal sums problem. We use this algorithm to obtain algorithms solving the two-dimensional k maximal sums problem in O(m^2·n + k) time, where the input is an m × n matrix with m ≤ n. We generalize this algorithm to solve the d-dimensional problem in O(n^(2d−1) + k) time. The space usage of all the algorithms can be reduced to O(n^(d−1) + k). This leads to the first algorithm for the k maximal sums problem in one dimension using O(n + k) time and O(k) space.
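
    For concreteness, the k = 1 case of this problem is the classic maximum sum problem solved by Kadane's algorithm, and a brute-force heap over all contiguous sums gives the k maximal sums. The Python sketch below is ours, not the authors' optimal O(n + k) construction; it can serve as a reference baseline to check an optimal implementation against.

        import heapq
        import itertools

        def kadane(a):
            # k = 1: best contiguous sub-vector sum in O(n)
            best = cur = a[0]
            for x in a[1:]:
                cur = max(x, cur + x)
                best = max(best, cur)
            return best

        def k_maximal_sums_bruteforce(a, k):
            # enumerate all O(n^2) contiguous sums via prefix sums and keep
            # the k largest -- O(n^2 log k), far from the optimal O(n + k)
            prefix = list(itertools.accumulate(a, initial=0))
            n = len(a)
            sums = (prefix[j] - prefix[i] for i in range(n) for j in range(i + 1, n + 1))
            return heapq.nlargest(k, sums)

        a = [3, -2, 5, -1, 4, -6, 2]
        print(kadane(a))                        # 9, the sub-vector [3, -2, 5, -1, 4]
        print(k_maximal_sums_bruteforce(a, 3))  # [9, 8, 6]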

  2. Weighted expectation maximization reconstruction algorithms with application to gated megavoltage tomography

    International Nuclear Information System (INIS)

    Zhang Jin; Shi Daxin; Anastasio, Mark A; Sillanpaa, Jussi; Chang Jenghwa

    2005-01-01

    We propose and investigate weighted expectation maximization (EM) algorithms for image reconstruction in x-ray tomography. The development of the algorithms is motivated by the respiratory-gated megavoltage tomography problem, in which the acquired asymmetric cone-beam projections are limited in number and unevenly sampled over view angle. In these cases, images reconstructed by use of the conventional EM algorithm can contain ring- and streak-like artefacts that are attributable to a combination of data inconsistencies and truncation of the projection data. By use of computer-simulated and clinical gated fan-beam megavoltage projection data, we demonstrate that the proposed weighted EM algorithms effectively mitigate such image artefacts. (note)

  3. An Expectation-Maximization Algorithm for Amplitude Estimation of Saturated Optical Transient Signals.

    Energy Technology Data Exchange (ETDEWEB)

    Kagie, Matthew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Lanterman, Aaron D. [Georgia Inst. of Technology, Atlanta, GA (United States)

    2017-12-01

    This paper addresses parameter estimation for an optical transient signal when the received data has been right-censored. We develop an expectation-maximization (EM) algorithm to estimate the amplitude of a Poisson intensity with a known shape in the presence of additive background counts, where the measurements are subject to saturation effects. We compare the results of our algorithm with those of an EM algorithm that is unaware of the censoring.
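
    The flavor of such an EM step can be sketched in a few lines. In the toy model below (our construction, not the authors' code), counts follow Poisson(a*s_i + b) with a known shape s and background b, the detector saturates at a cap C, and each iteration first replaces saturated counts by their conditional expectations, then splits counts between signal and background before a closed-form amplitude update.

        import numpy as np
        from scipy.stats import poisson

        def em_censored_amplitude(y, censored, s, b, cap, n_iter=200):
            # EM for a in y_i ~ Poisson(a*s_i + b), where counts >= cap are
            # recorded as cap (right-censoring by detector saturation)
            a = max(1e-6, (y.sum() - b * len(y)) / s.sum())  # crude start
            for _ in range(n_iter):
                lam = a * s + b
                y_hat = y.astype(float).copy()
                # E-step 1: E[Y | Y >= cap] = lam * P(Y >= cap-1) / P(Y >= cap)
                lc = lam[censored]
                y_hat[censored] = lc * poisson.sf(cap - 2, lc) / poisson.sf(cap - 1, lc)
                # E-step 2: expected signal portion of each count (Poisson thinning)
                n_signal = y_hat * (a * s) / lam
                # M-step: closed-form update of the amplitude
                a = n_signal.sum() / s.sum()
            return a

        rng = np.random.default_rng(0)
        s = np.exp(-0.5 * ((np.arange(100) - 50) / 8.0) ** 2)   # known pulse shape
        true_a, b, cap = 40.0, 2.0, 30
        counts = rng.poisson(true_a * s + b)
        censored = counts >= cap
        y = np.minimum(counts, cap)
        print(em_censored_amplitude(y, censored, s, b, cap))    # close to 40.0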

  4. Development of regularized expectation maximization algorithms for fan-beam SPECT data

    International Nuclear Information System (INIS)

    Kim, Soo Mee; Lee, Jae Sung; Lee, Dong Soo; Lee, Soo Jin; Kim, Kyeong Min

    2005-01-01

    SPECT using a fan-beam collimator improves spatial resolution and sensitivity. For the reconstruction from fan-beam projections, it is necessary to implement direct fan-beam reconstruction methods without transforming the data into the parallel geometry. In this study, various fan-beam reconstruction algorithms were implemented and their performances were compared. The projector for fan-beam SPECT was implemented using a ray-tracing method. The direct reconstruction algorithms implemented for fan-beam projection data were FBP (filtered backprojection), EM (expectation maximization), OS-EM (ordered subsets EM) and MAP-EM OSL (maximum a posteriori EM using the one-step late method) with membrane and thin-plate models as priors. For comparison, the fan-beam projection data were also rebinned into parallel data using various interpolation methods, such as nearest neighbor, bilinear and bicubic interpolation, and reconstructed using the conventional EM algorithm for parallel data. Noiseless and noisy projection data from the digital Hoffman brain and Shepp-Logan phantoms were reconstructed using the above algorithms. The reconstructed images were compared in terms of a percent error metric. For the fan-beam data with Poisson noise, the MAP-EM OSL algorithm with the thin-plate prior showed the best result in both percent error and stability. Bilinear interpolation was the most effective method for rebinning from the fan-beam to the parallel geometry when accuracy and computational load were considered. Direct fan-beam EM reconstructions were more accurate than the standard EM reconstructions obtained from rebinned parallel data. Direct fan-beam reconstruction algorithms were implemented, which provided significantly improved reconstructions.
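
    The EM update at the heart of these reconstructions is compact. The following sketch is ours, with a toy random system matrix standing in for the ray-tracing projector described above; it shows the basic multiplicative ML-EM iteration.

        import numpy as np

        def ml_em(A, y, n_iter=50):
            # ML-EM for emission tomography, y ~ Poisson(A @ x) with A >= 0:
            # x <- x / (A^T 1) * A^T (y / (A x))
            x = np.ones(A.shape[1])
            sens = np.maximum(A.sum(axis=0), 1e-12)   # sensitivity image A^T 1
            for _ in range(n_iter):
                proj = A @ x
                ratio = np.where(proj > 0, y / proj, 0.0)
                x *= (A.T @ ratio) / sens
            return x

        rng = np.random.default_rng(1)
        A = rng.random((200, 64))          # toy system matrix
        x_true = rng.random(64) * 10
        y = rng.poisson(A @ x_true)
        x_rec = ml_em(A, y)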

  5. Clustering performance comparison using K-means and expectation maximization algorithms.

    Science.gov (United States)

    Jung, Yong Gyu; Kang, Min Soo; Heo, Jun

    2014-11-14

    Clustering is an important means of data mining, based on separating data categories by similar features. Unlike classification, clustering belongs to the unsupervised type of algorithms. Two representative clustering algorithms are K-means and the expectation maximization (EM) algorithm. Logistic regression extends linear regression analysis to category-type dependent variables, using a linear combination of independent variables as a statistical approach to predicting the probability that an event occurs. However, classifying all data by logistic regression analysis alone cannot guarantee the accuracy of the results. In this paper, logistic regression analysis is applied to EM clusters and to K-means clusters for quality assessment of red wine, and a method is proposed for ensuring the accuracy of the classification results.
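
    To reproduce the comparison in outline, scikit-learn exposes both methods directly. The sketch below is ours, on synthetic blobs rather than the red wine data; it contrasts hard K-means labels with the posterior-based labels of an EM-fitted Gaussian mixture.

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.datasets import make_blobs
        from sklearn.mixture import GaussianMixture

        X, _ = make_blobs(n_samples=300, centers=3, cluster_std=1.5, random_state=0)

        # K-means: hard assignment to the nearest centroid
        km_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
        # EM: fit a Gaussian mixture, then take the argmax of the soft posteriors
        em_labels = GaussianMixture(n_components=3, random_state=0).fit(X).predict(X)

        print(np.bincount(km_labels), np.bincount(em_labels))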

  6. Application of the EM algorithm to radiographic images.

    Science.gov (United States)

    Brailean, J C; Little, D; Giger, M L; Chen, C T; Sullivan, B J

    1992-01-01

    The expectation maximization (EM) algorithm has received considerable attention in the area of positron emission tomography (PET) as a restoration and reconstruction technique. In this paper, the restoration capabilities of the EM algorithm when applied to radiographic images are investigated. This application does not involve reconstruction. The performance of the EM algorithm is quantitatively evaluated using a "perceived" signal-to-noise ratio (SNR) as the image quality metric. This perceived SNR is based on statistical decision theory and includes both the observer's visual response function and a noise component internal to the eye-brain system. For a variety of processing parameters, the relative SNR (ratio of the processed SNR to the original SNR) is calculated and used as a metric to compare quantitatively the effects of the EM algorithm with two other image enhancement techniques: global contrast enhancement (windowing) and unsharp mask filtering. The results suggest that the EM algorithm's performance is superior to unsharp mask filtering and global contrast enhancement for radiographic images which contain objects smaller than 4 mm.
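
    For Poisson noise and a known blur, the EM restoration iteration takes the Richardson-Lucy form. Below is a minimal sketch of ours, assuming a Gaussian point-spread function as a stand-in for the screen-film blur; it is an illustration of the technique, not the study's implementation.

        import numpy as np
        from scipy.signal import fftconvolve

        def richardson_lucy(image, psf, n_iter=30):
            # EM (Richardson-Lucy) restoration of a blurred, Poisson-noisy image
            psf_flip = psf[::-1, ::-1]
            estimate = np.full(image.shape, image.mean())
            for _ in range(n_iter):
                blurred = fftconvolve(estimate, psf, mode="same")
                ratio = image / np.maximum(blurred, 1e-12)
                estimate *= fftconvolve(ratio, psf_flip, mode="same")
            return estimate

        xx, yy = np.mgrid[-7:8, -7:8]
        psf = np.exp(-(xx**2 + yy**2) / 4.0)
        psf /= psf.sum()
        obj = np.zeros((64, 64))
        obj[28:36, 28:36] = 100.0                       # small square object
        blur = np.maximum(fftconvolve(obj, psf, mode="same"), 0)
        observed = np.random.default_rng(2).poisson(blur).astype(float)
        restored = richardson_lucy(observed, psf)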

  7. A Threshold-Free Filtering Algorithm for Airborne LiDAR Point Clouds Based on Expectation-Maximization

    Science.gov (United States)

    Hui, Z.; Cheng, P.; Ziggah, Y. Y.; Nie, Y.

    2018-04-01

    Filtering is a key step for most applications of airborne LiDAR point clouds. Although many filtering algorithms have been put forward in recent years, most of them require parameter setting or threshold adjustment, which is time-consuming and reduces the degree of automation. To overcome this problem, this paper proposes a threshold-free filtering algorithm based on expectation-maximization. The algorithm rests on the assumption that the point cloud can be modelled as a mixture of Gaussians, so that separating ground points from non-ground points becomes the problem of separating the components of a Gaussian mixture model. Expectation-maximization (EM) is applied to realize the separation: EM computes maximum likelihood estimates of the mixture parameters, from which the likelihood of each point belonging to ground or object can be evaluated. After several iterations, each point is labelled with the component of larger likelihood. Furthermore, intensity information is utilized to refine the filtering results acquired by the EM method. The proposed algorithm was tested on two practical datasets, and experimental results showed that it filters non-ground points effectively. For quantitative evaluation, this paper adopted the dataset provided by the ISPRS; the proposed algorithm obtained a 4.48% total error, which is lower than most of the eight classical filtering algorithms reported by the ISPRS.
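
    Stripped to its core, the method fits a two-component Gaussian mixture by EM and labels each point with its more likely component. Below is a one-dimensional sketch of ours over elevations, omitting the intensity-based refinement described above.

        import numpy as np

        def em_two_gaussians(z, n_iter=100):
            # EM for a 2-component 1-D Gaussian mixture over elevations z
            mu = np.percentile(z, [25.0, 75.0])
            var = np.array([z.var(), z.var()])
            w = np.array([0.5, 0.5])
            for _ in range(n_iter):
                # E-step: responsibilities of each component for each point
                pdf = w / np.sqrt(2 * np.pi * var) * np.exp(-(z[:, None] - mu) ** 2 / (2 * var))
                r = pdf / pdf.sum(axis=1, keepdims=True)
                # M-step: re-estimate weights, means and variances
                nk = r.sum(axis=0)
                w = nk / len(z)
                mu = (r * z[:, None]).sum(axis=0) / nk
                var = (r * (z[:, None] - mu) ** 2).sum(axis=0) / nk
            return r[:, np.argmin(mu)]   # responsibility of the lower (ground-like) mode

        rng = np.random.default_rng(3)
        z = np.concatenate([rng.normal(101.0, 0.3, 800),    # ground returns
                            rng.normal(109.0, 3.0, 200)])   # vegetation, buildings
        ground = em_two_gaussians(z) > 0.5
        print(ground.sum())                                 # roughly 800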

  8. Time Series Modeling of Nano-Gold Immunochromatographic Assay via Expectation Maximization Algorithm.

    Science.gov (United States)

    Zeng, Nianyin; Wang, Zidong; Li, Yurong; Du, Min; Cao, Jie; Liu, Xiaohui

    2013-12-01

    In this paper, the expectation maximization (EM) algorithm is applied to the modeling of the nano-gold immunochromatographic assay (nano-GICA) via available time series of the measured signal intensities of the test and control lines. The model for the nano-GICA is developed as the stochastic dynamic model that consists of a first-order autoregressive stochastic dynamic process and a noisy measurement. By using the EM algorithm, the model parameters, the actual signal intensities of the test and control lines, as well as the noise intensity can be identified simultaneously. Three different time series data sets concerning the target concentrations are employed to demonstrate the effectiveness of the introduced algorithm. Several indices are also proposed to evaluate the inferred models. It is shown that the model fits the data very well.

  9. Research on Adaptive Optics Image Restoration Algorithm by Improved Expectation Maximization Method

    OpenAIRE

    Zhang, Lijuan; Li, Dongming; Su, Wei; Yang, Jinhua; Jiang, Yutong

    2014-01-01

    To improve the effect of adaptive optics images’ restoration, we put forward a deconvolution algorithm improved by the EM algorithm which joints multiframe adaptive optics images based on expectation-maximization theory. Firstly, we need to make a mathematical model for the degenerate multiframe adaptive optics images. The function model is deduced for the points that spread with time based on phase error. The AO images are denoised using the image power spectral density and support constrain...

  10. Global Convergence of the EM Algorithm for Unconstrained Latent Variable Models with Categorical Indicators

    Science.gov (United States)

    Weissman, Alexander

    2013-01-01

    Convergence of the expectation-maximization (EM) algorithm to a global optimum of the marginal log likelihood function for unconstrained latent variable models with categorical indicators is presented. The sufficient conditions under which global convergence of the EM algorithm is attainable are provided in an information-theoretic context by…

  11. Continuous Analog of Accelerated OS-EM Algorithm for Computed Tomography

    Directory of Open Access Journals (Sweden)

    Kiyoko Tateishi

    2017-01-01

    The maximum-likelihood expectation-maximization (ML-EM) algorithm is used for iterative image reconstruction (IIR) and performs well with respect to the inverse problem as cross-entropy minimization in computed tomography. For accelerating the convergence rate of the ML-EM, the ordered-subsets expectation-maximization (OS-EM) with a power factor is effective. In this paper, we propose a continuous analog to the power-based accelerated OS-EM algorithm. The continuous-time image reconstruction (CIR) system is described by nonlinear differential equations with piecewise smooth vector fields given by a cyclic switching process. A numerical discretization of the differential equation, using the geometric multiplicative first-order expansion of the nonlinear vector field, leads to an iterative formula exactly equivalent to the power-based OS-EM. The convergence of nonnegatively constrained solutions to a globally stable equilibrium is guaranteed by the Lyapunov theorem for consistent inverse problems. We illustrate through numerical experiments that the convergence characteristics of the continuous system are superior to those of the discretization methods. We also clarify how closely the discretization method must approximate the solution of the CIR in order to design a better IIR method.
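
    A discrete numpy sketch of the power-accelerated OS-EM iteration that the continuous system reproduces (ours; the subset partition and the power factor p are illustrative choices, not the paper's settings):

        import numpy as np

        def os_em_power(A, y, n_subsets=5, p=1.5, n_iter=10):
            # OS-EM with a power factor: one multiplicative update per ordered
            # subset, the correction ratio raised to p >= 1 (p = 1 is plain OS-EM)
            x = np.ones(A.shape[1])
            subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
            for _ in range(n_iter):
                for idx in subsets:                 # cyclic switching over subsets
                    As, ys = A[idx], y[idx]
                    proj = As @ x
                    ratio = np.where(proj > 0, ys / proj, 1.0)
                    update = (As.T @ ratio) / np.maximum(As.sum(axis=0), 1e-12)
                    x *= update ** p
            return x

        rng = np.random.default_rng(4)
        A = rng.random((300, 64))
        y = rng.poisson(A @ (rng.random(64) * 5))
        x_rec = os_em_power(A, y)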

  12. Performance evaluation of the EM algorithm applied to radiographic images

    International Nuclear Information System (INIS)

    Brailean, J.C.; Giger, M.L.; Chen, C.T.; Sullivan, B.J.

    1990-01-01

    In this paper the authors evaluate the expectation maximization (EM) algorithm, both qualitatively and quantitatively, as a technique for enhancing radiographic images. Previous studies have qualitatively shown the usefulness of the EM algorithm but have failed to quantify and compare its performance with those of other image processing techniques. Recent studies by Loo et al., Ishida et al., and Giger et al., have explained improvements in image quality quantitatively in terms of a signal-to-noise ratio (SNR) derived from signal detection theory. In this study, we take a similar approach in quantifying the effect of the EM algorithm on detection of simulated low-contrast square objects superimposed on radiographic mottle. The SNRs of the original and processed images are calculated taking into account both the human visual system response and the screen-film transfer function, as well as a noise component internal to the eye-brain system. The EM algorithm was also implemented on digital screen-film images of test patterns and clinical mammograms.

  13. Research on Adaptive Optics Image Restoration Algorithm by Improved Expectation Maximization Method

    Directory of Open Access Journals (Sweden)

    Lijuan Zhang

    2014-01-01

    To improve the restoration of adaptive optics (AO) images, we put forward a deconvolution algorithm, improved by the EM algorithm, which jointly processes multiframe adaptive optics images on the basis of expectation-maximization theory. Firstly, we build a mathematical model for the degraded multiframe adaptive optics images; the function model is deduced for the point spread, varying with time, on the basis of phase error. The AO images are denoised using the image power spectral density and a support constraint. Secondly, the EM algorithm is improved by combining the AO imaging system parameters with a regularization technique. A cost function for the joint deconvolution of multiframe AO images is given, and the optimization model for its parameter estimates is built. Lastly, image-restoration experiments on both simulated images and real AO images are performed to verify the recovery effect of our algorithm. The experimental results show that, compared with the Wiener-IBD and RL-IBD algorithms, our algorithm reduces the number of iterations by 14.3% and improves the estimation accuracy. The model distinguishes the PSF of the AO images and recovers the observed target images clearly.

  14. A quantitative performance evaluation of the EM algorithm applied to radiographic images

    International Nuclear Information System (INIS)

    Brailean, J.C.; Sullivan, B.J.; Giger, M.L.; Chen, C.T.

    1991-01-01

    In this paper, the authors quantitatively evaluate the performance of the expectation maximization (EM) algorithm as a restoration technique for radiographic images. The perceived signal-to-noise ratios (SNRs) of simple radiographic patterns processed by the EM algorithm are calculated on the basis of a statistical decision theory model that includes both the observer's visual response function and a noise component internal to the eye-brain system. The relative SNR (ratio of the processed SNR to the original SNR) is calculated and used as a metric to quantitatively compare the effects of the EM algorithm with those of two popular image enhancement techniques: contrast enhancement (windowing) and unsharp mask filtering.

  15. EM algorithm for one-shot device testing with competing risks under exponential distribution

    International Nuclear Information System (INIS)

    Balakrishnan, N.; So, H.Y.; Ling, M.H.

    2015-01-01

    This paper provides an extension of the work of Balakrishnan and Ling [1] by introducing a competing risks model into a one-shot device testing analysis under an accelerated life test setting. An Expectation Maximization (EM) algorithm is then developed for the estimation of the model parameters. An extensive Monte Carlo simulation study is carried out to assess the performance of the EM algorithm, and the obtained results are then compared with the initial estimates obtained by the Inequality Constrained Least Squares (ICLS) method of estimation. Finally, we apply the EM algorithm to a clinical dataset, ED01, to illustrate the method of inference developed here. - Highlights: • ALT data analysis for one-shot devices with competing risks is considered. • An EM algorithm is developed for the determination of the MLEs. • Estimates of lifetime under normal operating conditions are presented. • The EM algorithm improves the convergence rate.

  16. On the use of successive data in the ML-EM algorithm in Positron Emission Tomography

    Energy Technology Data Exchange (ETDEWEB)

    Desmedt, P; Lemahieu, I [University of Ghent, ELIS Department, Sint-Pietersnieuwstraat 41, B-9000 Gent, (Belgium)

    1994-12-31

    The Maximum Likelihood-Expectation Maximization (ML-EM) algorithm is the most popular statistical reconstruction technique for Positron Emission Tomography (PET). The ML-EM algorithm is however also renowned for its long reconstruction times. An acceleration technique for this algorithm is studied in this paper. The proposed technique starts the ML-EM algorithm before the measurement process is completed. Since the reconstruction is initiated during the scan of the patient, the time elapsed before a reconstruction becomes available is reduced. Experiments with software phantoms indicate that the quality of the reconstructed image using successive data is comparable to the quality of the reconstruction with the normal ML-EM algorithm. (authors). 7 refs, 3 figs.

  17. Optimal data replication: A new approach to optimizing parallel EM algorithms on a mesh-connected multiprocessor for 3D PET image reconstruction

    International Nuclear Information System (INIS)

    Chen, C.M.; Lee, S.Y.

    1995-01-01

    The EM algorithm promises an estimated image with the maximum likelihood for 3D PET image reconstruction. However, due to its long computation time, the EM algorithm has not been widely used in practice. While several parallel implementations of the EM algorithm have been developed to make it feasible, they do not guarantee optimal parallelization efficiency. In this paper, the authors propose a new parallel EM algorithm which maximizes performance by optimizing data replication on a mesh-connected message-passing multiprocessor. To optimize data replication, the authors have formally derived the optimal allocation of shared data, group sizes, integration and broadcasting of replicated data, as well as the scheduling of shared data accesses. The proposed parallel EM algorithm has been implemented on an iPSC/860 with 16 PEs. The experimental and theoretical results, which are consistent with each other, show that the proposed parallel EM algorithm improves performance substantially over those using unoptimized data replication.

  18. Classification of Ultrasonic NDE Signals Using the Expectation Maximization (EM) and Least Mean Square (LMS) Algorithms

    International Nuclear Information System (INIS)

    Kim, Dae Won

    2005-01-01

    Ultrasonic inspection methods are widely used for detecting flaws in materials. The signal analysis step plays a crucial part in the data interpretation process. A number of signal processing methods have been proposed to classify ultrasonic flaw signals. One of the more popular methods involves the extraction of an appropriate set of features followed by the use of a neural network for the classification of the signals in the feature space. This paper describes an alternative approach which uses the least mean square (LMS) method and the expectation maximization (EM) algorithm with model-based deconvolution, employed for classifying nondestructive evaluation (NDE) signals from steam generator tubes in a nuclear power plant. The signals due to cracks and deposits are not significantly different, yet they must be discriminated to prevent disasters such as water contamination or explosion. A model-based deconvolution is described to facilitate comparison of classification results. The method uses the space alternating generalized expectation maximization (SAGE) algorithm in conjunction with the Newton-Raphson method, which uses the Hessian parameter and results in fast convergence, to estimate the time of flight and the distance between the tube wall and the ultrasonic sensor. Results using these schemes for the classification of ultrasonic signals from cracks and deposits within steam generator tubes are presented and show reasonable performance.

  19. The Orthogonally Partitioned EM Algorithm: Extending the EM Algorithm for Algorithmic Stability and Bias Correction Due to Imperfect Data.

    Science.gov (United States)

    Regier, Michael D; Moodie, Erica E M

    2016-05-01

    We propose an extension of the EM algorithm that exploits the common assumption of unique parameterization, corrects for biases due to missing data and measurement error, converges for the specified model when standard implementation of the EM algorithm has a low probability of convergence, and reduces a potentially complex algorithm to a sequence of smaller, simpler, self-contained EM algorithms. We use the theory surrounding the EM algorithm to derive the theoretical results of our proposal, showing that an optimal solution over the parameter space is obtained. A simulation study is used to explore the finite sample properties of the proposed extension when there are missing data and measurement error. We observe that partitioning the EM algorithm into simpler steps may provide better bias reduction in the estimation of model parameters. The ability to break down a complicated problem into a series of simpler, more accessible problems will permit a broader implementation of the EM algorithm, permit the use of software packages that now implement and/or automate the EM algorithm, and make the EM algorithm accessible to a wider and more general audience.

  20. Conditional probability distribution associated to the E-M image reconstruction algorithm for neutron stimulated emission tomography

    International Nuclear Information System (INIS)

    Viana, R.S.; Yoriyaz, H.; Santos, A.

    2011-01-01

    The Expectation-Maximization (E-M) algorithm is an iterative computational method for maximum likelihood (M-L) estimates, useful in a variety of incomplete-data problems. Due to its stochastic nature, one of the most relevant applications of the E-M algorithm is the reconstruction of emission tomography images. In this paper, the statistical formulation of the E-M algorithm was applied to the in vivo spectrographic imaging of stable isotopes called Neutron Stimulated Emission Computed Tomography (NSECT). In the E-M algorithm iteration, the conditional probability distribution plays a very important role in achieving a high-quality image. The present work proposes an alternative methodology for generating the conditional probability distribution associated with the E-M reconstruction algorithm, using the Monte Carlo code MCNP5 and applying the reciprocity theorem. (author)

  1. Conditional probability distribution associated to the E-M image reconstruction algorithm for neutron stimulated emission tomography

    Energy Technology Data Exchange (ETDEWEB)

    Viana, R.S.; Yoriyaz, H.; Santos, A., E-mail: rodrigossviana@gmail.com, E-mail: hyoriyaz@ipen.br, E-mail: asantos@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2011-07-01

    The Expectation-Maximization (E-M) algorithm is an iterative computational method for maximum likelihood (M-L) estimates, useful in a variety of incomplete-data problems. Due to its stochastic nature, one of the most relevant applications of the E-M algorithm is the reconstruction of emission tomography images. In this paper, the statistical formulation of the E-M algorithm was applied to the in vivo spectrographic imaging of stable isotopes called Neutron Stimulated Emission Computed Tomography (NSECT). In the E-M algorithm iteration, the conditional probability distribution plays a very important role in achieving a high-quality image. The present work proposes an alternative methodology for generating the conditional probability distribution associated with the E-M reconstruction algorithm, using the Monte Carlo code MCNP5 and applying the reciprocity theorem. (author)

  2. A New Augmentation Based Algorithm for Extracting Maximal Chordal Subgraphs.

    Science.gov (United States)

    Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh

    2015-02-01

    A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In this paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. We experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
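
    The augmentation idea can be tried out sequentially with networkx: start from a spanning forest, which is chordal, and keep adding edges as long as chordality survives, sweeping until no edge can be added. This sketch is ours and far slower than the parallel algorithm in the paper; it only illustrates the invariant being maintained.

        import networkx as nx

        def greedy_maximal_chordal_subgraph(G):
            # Start from a spanning forest (always chordal), then repeatedly try
            # to add edges of G, keeping an edge only if chordality is preserved.
            # Sweeping until a full pass adds nothing guarantees maximality.
            H = nx.Graph()
            H.add_nodes_from(G.nodes)
            H.add_edges_from(nx.minimum_spanning_edges(G, data=False))
            added = True
            while added:
                added = False
                for u, v in G.edges:
                    if not H.has_edge(u, v):
                        H.add_edge(u, v)
                        if nx.is_chordal(H):
                            added = True
                        else:
                            H.remove_edge(u, v)
            return H

        G = nx.erdos_renyi_graph(30, 0.3, seed=5)
        H = greedy_maximal_chordal_subgraph(G)
        print(H.number_of_edges(), "of", G.number_of_edges())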

  3. A Markov chain Monte Carlo Expectation Maximization Algorithm for Statistical Analysis of DNA Sequence Evolution with Neighbor-Dependent Substitution Rates

    DEFF Research Database (Denmark)

    Hobolth, Asger

    2008-01-01

    The evolution of DNA sequences can be described by discrete state continuous time Markov processes on a phylogenetic tree. We consider neighbor-dependent evolutionary models where the instantaneous rate of substitution at a site depends on the states of the neighboring sites. Neighbor-dependent substitution models are analytically intractable and must be analyzed using either approximate or simulation-based methods. We describe statistical inference of neighbor-dependent models using a Markov chain Monte Carlo expectation maximization (MCMC-EM) algorithm. In the MCMC-EM algorithm, the high-dimensional integrals required in the EM algorithm are estimated using MCMC sampling. The MCMC sampler requires simulation of sample paths from a continuous time Markov process, conditional on the beginning and ending states and the paths of the neighboring sites. An exact path sampling algorithm is developed for this purpose.

  4. Expectation-maximization algorithms for learning a finite mixture of univariate survival time distributions from partially specified class values

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Youngrok [Iowa State Univ., Ames, IA (United States)

    2013-05-15

    Heterogeneity exists in a data set when samples from different classes are merged into the data set. Finite mixture models can represent a survival time distribution on a heterogeneous patient group by the proportions of each class and by the survival time distribution within each class as well. The heterogeneous data set cannot be explicitly decomposed into homogeneous subgroups unless all the samples are precisely labeled by their origin classes; such impossibility of decomposition is a barrier to overcome for estimating finite mixture models. The expectation-maximization (EM) algorithm has been used to obtain maximum likelihood estimates of finite mixture models by soft-decomposition of heterogeneous samples without labels for a subset or the entire set of data. In medical surveillance databases we can find partially labeled data, that is, data that are not completely unlabeled but carry only imprecise information about class values. In this study we propose new EM algorithms that take advantage of such partial labels, and thus incorporate more information than traditional EM algorithms. In particular, we propose four variants of the EM algorithm named EM-OCML, EM-PCML, EM-HCML and EM-CPCML, each of which assumes a specific mechanism of missing class values. We conducted a simulation study on exponential survival trees with five classes and showed that the advantages of incorporating a substantial amount of partially labeled data can be highly significant. We also showed that model selection based on AIC values works fairly well in selecting the best proposed algorithm for each specific data set. A case study on a real-world data set of gastric cancer provided by the Surveillance, Epidemiology and End Results (SEER) program showed the superiority of EM-CPCML not only to the other proposed EM algorithms but also to conventional supervised, unsupervised and semi-supervised learning algorithms.
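
    The general shape of such algorithms can be seen in a small semi-supervised EM for a two-component exponential mixture of survival times, where known labels pin the responsibilities to 0/1 and unlabeled samples receive posterior weights. This is a generic sketch of the idea, not one of the four proposed variants.

        import numpy as np

        def semi_supervised_em(t, labels, n_iter=200):
            # EM for a 2-component exponential mixture of survival times t;
            # labels[i] in {0, 1} if known, -1 if missing
            w = np.array([0.5, 0.5])
            rate = np.array([1.5, 0.5]) / t.mean()
            known = labels >= 0
            for _ in range(n_iter):
                # E-step: posteriors for unlabeled cases, fixed 0/1 for labeled ones
                dens = w * rate * np.exp(-np.outer(t, rate))
                r = dens / dens.sum(axis=1, keepdims=True)
                r[known] = np.eye(2)[labels[known]]
                # M-step: closed-form updates for weights and rates
                nk = r.sum(axis=0)
                w = nk / len(t)
                rate = nk / (r * t[:, None]).sum(axis=0)
            return w, rate

        rng = np.random.default_rng(6)
        t = np.concatenate([rng.exponential(2.0, 300), rng.exponential(10.0, 200)])
        labels = np.full(500, -1)
        labels[:50] = 0          # a few class-0 samples carry labels
        labels[300:330] = 1      # and a few class-1 samples
        print(semi_supervised_em(t, labels))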

  5. Maximum likelihood estimation and EM algorithm of Copas-like selection model for publication bias correction.

    Science.gov (United States)

    Ning, Jing; Chen, Yong; Piao, Jin

    2017-07-01

    Publication bias occurs when the published research results are systematically unrepresentative of the population of studies that have been conducted, and is a potential threat to meaningful meta-analysis. The Copas selection model provides a flexible framework for correcting estimates and offers considerable insight into the publication bias. However, maximizing the observed likelihood under the Copas selection model is challenging because the observed data contain very little information on the latent variable. In this article, we study a Copas-like selection model and propose an expectation-maximization (EM) algorithm for estimation based on the full likelihood. Empirical simulation studies show that the EM algorithm and its associated inferential procedure perform well and avoid the non-convergence problem encountered when maximizing the observed likelihood.

  6. Statistical trajectory of an approximate EM algorithm for probabilistic image processing

    International Nuclear Information System (INIS)

    Tanaka, Kazuyuki; Titterington, D M

    2007-01-01

    We calculate analytically a statistical average of trajectories of an approximate expectation-maximization (EM) algorithm with generalized belief propagation (GBP) and a Gaussian graphical model for the estimation of hyperparameters from observable data in probabilistic image processing. A statistical average with respect to observed data corresponds to a configuration average for the random-field Ising model in spin glass theory. In the present paper, hyperparameters which correspond to interactions and external fields of spin systems are estimated by an approximate EM algorithm. A practical algorithm is described for gray-level image restoration based on a Gaussian graphical model and GBP. The GBP approach corresponds to the cluster variation method in statistical mechanics. Our main result in the present paper is to obtain the statistical average of the trajectory in the approximate EM algorithm by using loopy belief propagation and GBP with respect to degraded images generated from a probability density function with true values of hyperparameters. The statistical average of the trajectory can be expressed in terms of recursion formulas derived from some analytical calculations

  7. Improved Algorithms OF CELF and CELF++ for Influence Maximization

    Directory of Open Access Journals (Sweden)

    Jiaguo Lv

    2014-06-01

    Motivated by wide applications in fields such as viral marketing and sales promotion, influence maximization has been one of the most important and extensively studied problems in social network analysis. However, the classical KK-greedy algorithm for influence maximization is inefficient, and two major sources of this inefficiency are analyzed in this paper. As the analysis of the CELF and CELF++ algorithms shows, once a new seed u is selected, no node in the set already influenced by u can bring any further marginal gain; through this optimization strategy, many redundant nodes can be removed from the candidate set. Based on this strategy, two improved algorithms, Lv_CELF and Lv_CELF++, are proposed in this study. To evaluate the two algorithms, they were run alongside their benchmark algorithms CELF and CELF++ on several real-world datasets, with influence degree and running time employed to measure performance and efficiency, respectively. Experimental results showed that, compared with the benchmarks CELF and CELF++, the new algorithms Lv_CELF and Lv_CELF++ achieved matching effectiveness with higher efficiency. Solutions with the proposed optimization strategy can be useful for decision-making problems in scenarios related to influence maximization.
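
    The optimization these algorithms build on is the lazy-forward evaluation of CELF. The sketch below shows the lazy greedy skeleton with a placeholder spread estimator (a toy independent-cascade Monte Carlo of our own, not the paper's implementation).

        import heapq
        import random

        def spread(graph, seeds, p=0.1, n_sim=200):
            # toy Monte Carlo estimate of independent-cascade spread
            total = 0
            for _ in range(n_sim):
                active, frontier = set(seeds), list(seeds)
                while frontier:
                    u = frontier.pop()
                    for v in graph.get(u, ()):
                        if v not in active and random.random() < p:
                            active.add(v)
                            frontier.append(v)
                total += len(active)
            return total / n_sim

        def celf(graph, k):
            # lazy greedy: cached marginal gains in a max-heap; a popped node is
            # accepted only if its gain was computed for the current seed set
            seeds, base = [], 0.0
            heap = [(-spread(graph, [v]), v, 0) for v in graph]
            heapq.heapify(heap)
            while len(seeds) < k and heap:
                neg_gain, v, rnd = heapq.heappop(heap)
                if rnd == len(seeds):              # gain is up to date: select v
                    seeds.append(v)
                    base -= neg_gain
                else:                              # stale: recompute, push back
                    gain = spread(graph, seeds + [v]) - base
                    heapq.heappush(heap, (-gain, v, len(seeds)))
            return seeds

        g = {0: [1, 2], 1: [2, 3], 2: [3], 3: [4], 4: [0]}
        random.seed(7)
        print(celf(g, 2))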

  8. An efficient community detection algorithm using greedy surprise maximization

    International Nuclear Information System (INIS)

    Jiang, Yawen; Jia, Caiyan; Yu, Jian

    2014-01-01

    Community detection is an important and crucial problem in complex network analysis. Although classical modularity function optimization approaches are widely used for identifying communities, the modularity function (Q) suffers from its resolution limit. Recently, the surprise function (S) was experimentally proved to be better than the Q function. However, up until now, there has been no algorithm available to perform searches to directly determine the maximal surprise values. In this paper, considering the superiority of the S function over the Q function, we propose an efficient community detection algorithm called AGSO (algorithm based on greedy surprise optimization) and its improved version FAGSO (fast-AGSO), which are based on greedy surprise optimization and do not suffer from the resolution limit. In addition, (F)AGSO does not need the number of communities K to be specified in advance. Tests on experimental networks show that (F)AGSO is able to detect optimal partitions in both simple and even more complex networks. Moreover, algorithms based on surprise maximization perform better than those algorithms based on modularity maximization, including Blondel–Guillaume–Lambiotte–Lefebvre (BGLL), Clauset–Newman–Moore (CNM) and the other state-of-the-art algorithms such as Infomap, order statistics local optimization method (OSLOM) and label propagation algorithm (LPA). (paper)

  9. A Receiver for Differential Space-Time π/2-Shifted BPSK Modulation Based on Scalar-MSDD and the EM Algorithm

    Directory of Open Access Journals (Sweden)

    Kim Jae H

    2005-01-01

    In this paper, we consider the issue of blind detection of Alamouti-type differential space-time (ST) modulation in static Rayleigh fading channels. We focus our attention on a π/2-shifted BPSK constellation, introducing a novel transformation of the received signal such that this binary ST modulation, which has second-order transmit diversity, is equivalent to QPSK modulation with second-order receive diversity. This equivalent representation allows us to apply a low-complexity detection technique specifically designed for receive diversity, namely, scalar multiple-symbol differential detection (MSDD). To further increase receiver performance, we apply an iterative expectation-maximization (EM) algorithm which performs joint channel estimation and sequence detection. This algorithm uses minimum mean square estimation to obtain channel estimates and the maximum-likelihood principle to detect the transmitted sequence, followed by differential decoding. With receiver complexity proportional to the observation window length, our receiver can achieve the performance of a coherent maximal ratio combining receiver (with differential decoding) in as few as a single EM receiver iteration, provided that the window size of the initial MSDD is sufficiently long. To further demonstrate that the MSDD is a vital part of this receiver setup, we show that an initial conventional ST differential detector would lead to strange convergence behavior in the EM algorithm.

  10. Nonlinear impairment compensation using expectation maximization for dispersion managed and unmanaged PDM 16-QAM transmission

    DEFF Research Database (Denmark)

    Zibar, Darko; Winther, Ole; Franceschi, Niccolo

    2012-01-01

    In this paper, we show numerically and experimentally that the expectation maximization (EM) algorithm is a powerful tool in combating system impairments such as fibre nonlinearities, in-phase and quadrature (I/Q) modulator imperfections and laser linewidth. The EM algorithm is an iterative algorithm ...
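
    In outline, one such use of EM fits a Gaussian mixture to the received constellation so that the learned means and covariances absorb I/Q and nonlinear distortions before symbol decisions. Below is a toy scikit-learn sketch of ours on a synthetic 16-QAM cloud, an illustration of the idea rather than the paper's exact algorithm.

        import numpy as np
        from sklearn.mixture import GaussianMixture

        rng = np.random.default_rng(7)
        levels = np.array([-3.0, -1.0, 1.0, 3.0])
        ideal = np.array([(i, q) for i in levels for q in levels])  # 16-QAM grid

        # synthetic received points: random symbols, small rotation, AWGN
        sym = rng.integers(0, 16, 5000)
        theta = 0.05
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        rx = ideal[sym] @ rot.T + rng.normal(0.0, 0.25, (5000, 2))

        # EM: one Gaussian per constellation point, means seeded at the ideal
        # grid so that component indices track symbol indices (an assumption
        # that holds for mild distortions)
        gmm = GaussianMixture(n_components=16, means_init=ideal)
        detected = gmm.fit(rx).predict(rx)
        print((detected == sym).mean())        # symbol accuracy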

  11. Mean field theory of EM algorithm for Bayesian grey scale image restoration

    International Nuclear Information System (INIS)

    Inoue, Jun-ichi; Tanaka, Kazuyuki

    2003-01-01

    The EM algorithm for the Bayesian grey scale image restoration is investigated in the framework of the mean field theory. Our model system is identical to the infinite range random field Q-Ising model. The maximum marginal likelihood method is applied to the determination of hyper-parameters. We calculate both the data-averaged mean square error between the original image and its maximizer of posterior marginal estimate, and the data-averaged marginal likelihood function exactly. After evaluating the hyper-parameter dependence of the data-averaged marginal likelihood function, we derive the EM algorithm which updates the hyper-parameters to obtain the maximum likelihood estimate analytically. The time evolutions of the hyper-parameters and so-called Q function are obtained. The relation between the speed of convergence of the hyper-parameters and the shape of the Q function is explained from the viewpoint of dynamics

  12. A Study on GPU-based Iterative ML-EM Reconstruction Algorithm for Emission Computed Tomographic Imaging Systems

    Energy Technology Data Exchange (ETDEWEB)

    Ha, Woo Seok; Kim, Soo Mee; Park, Min Jae; Lee, Dong Soo; Lee, Jae Sung [Seoul National University, Seoul (Korea, Republic of)

    2009-10-15

    The maximum likelihood-expectation maximization (ML-EM) is a statistical reconstruction algorithm derived from a probabilistic model of the emission and detection processes. Although the ML-EM has many advantages in accuracy and utility, its use is limited by the computational burden of iterative processing on a CPU (central processing unit). In this study, we developed a parallel computing technique on a GPU (graphics processing unit) for the ML-EM algorithm. Using a GeForce 9800 GTX+ graphics card and CUDA (compute unified device architecture), the projection and backprojection steps of the ML-EM algorithm were parallelized with NVIDIA's technology. The computation times for projection, for the errors between measured and estimated data, and for backprojection in one iteration were measured; the total time included the latency of data transmission between RAM and GPU memory. The total computation times of the CPU- and GPU-based ML-EM with 32 iterations were 3.83 and 0.26 sec, respectively; in this case, the computing speed was improved about 15-fold on the GPU. When the number of iterations was increased to 1024, the CPU- and GPU-based computations took 18 min and 8 sec in total, respectively. The improvement was about 135-fold and was caused by delays in the CPU-based computing after a certain number of iterations. On the other hand, the GPU-based computation showed very little variation in time per iteration, owing to the use of shared memory. The GPU-based parallel computation significantly improved the computing speed and stability of the ML-EM. The developed GPU-based ML-EM algorithm could easily be modified for other imaging geometries.

  13. A Study on GPU-based Iterative ML-EM Reconstruction Algorithm for Emission Computed Tomographic Imaging Systems

    International Nuclear Information System (INIS)

    Ha, Woo Seok; Kim, Soo Mee; Park, Min Jae; Lee, Dong Soo; Lee, Jae Sung

    2009-01-01

    The maximum likelihood-expectation maximization (ML-EM) is a statistical reconstruction algorithm derived from a probabilistic model of the emission and detection processes. Although the ML-EM has many advantages in accuracy and utility, its use is limited by the computational burden of iterative processing on a CPU (central processing unit). In this study, we developed a parallel computing technique on a GPU (graphics processing unit) for the ML-EM algorithm. Using a GeForce 9800 GTX+ graphics card and CUDA (compute unified device architecture), the projection and backprojection steps of the ML-EM algorithm were parallelized with NVIDIA's technology. The computation times for projection, for the errors between measured and estimated data, and for backprojection in one iteration were measured; the total time included the latency of data transmission between RAM and GPU memory. The total computation times of the CPU- and GPU-based ML-EM with 32 iterations were 3.83 and 0.26 sec, respectively; in this case, the computing speed was improved about 15-fold on the GPU. When the number of iterations was increased to 1024, the CPU- and GPU-based computations took 18 min and 8 sec in total, respectively. The improvement was about 135-fold and was caused by delays in the CPU-based computing after a certain number of iterations. On the other hand, the GPU-based computation showed very little variation in time per iteration, owing to the use of shared memory. The GPU-based parallel computation significantly improved the computing speed and stability of the ML-EM. The developed GPU-based ML-EM algorithm could easily be modified for other imaging geometries.

  14. A multicenter evaluation of seven commercial ML-EM algorithms for SPECT image reconstruction using simulation data

    International Nuclear Information System (INIS)

    Matsumoto, Keiichi; Ohnishi, Hideo; Niida, Hideharu; Nishimura, Yoshihiro; Wada, Yasuhiro; Kida, Tetsuo

    2003-01-01

    The maximum likelihood expectation maximization (ML-EM) algorithm has become available as an alternative to filtered back projection in SPECT. The actual physical performance may differ depending on the manufacturer and model, because of differences in computational details. The purpose of this study was to investigate the characteristics of seven different types of ML-EM algorithms using simple simulation data. Seven ML-EM algorithm programs were used: Genie (GE), esoft (Siemens), HARP-III (Hitachi), GMS-5500UI (Toshiba), Pegasys (ADAC), ODYSSEY-FX (Marconi), and Windows-PC (original software). Projection data of a 2-pixel-wide line source in the center of the field of view were simulated without attenuation or scatter. Images were reconstructed with ML-EM by changing the number of iterations from 1 to 45 for each algorithm. Image quality was evaluated after reconstruction using the full width at half maximum (FWHM), the full width at tenth maximum (FWTM), and the total counts of the reconstructed images. At the maximum number of iterations, the FWHM values differed by up to 1.5 pixels across programs, and the FWTM values by no less than 2.0 pixels. The total counts of the reconstructed images in the initial few iterations were larger or smaller than the converged value, depending on the initial values. Our results for even the simplest simulation data suggest that each ML-EM implementation produces its own characteristic image. We should keep in mind which algorithm is being used, and its computational details, when physical and clinical usefulness are compared. (author)
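
    The FWHM and FWTM figures quoted here are straightforward to compute from a reconstructed line profile. Below is a small helper of ours, with linear interpolation at the half- and tenth-maximum crossings.

        import numpy as np

        def width_at_fraction(profile, frac):
            # width of a peaked 1-D profile at frac * max, in pixels,
            # interpolating linearly on each shoulder
            level = frac * profile.max()
            above = np.nonzero(profile >= level)[0]
            left, right = above[0], above[-1]
            if left > 0:
                lx = (left - 1) + (level - profile[left - 1]) / (profile[left] - profile[left - 1])
            else:
                lx = float(left)
            if right + 1 < len(profile):
                rx = right + (profile[right] - level) / (profile[right] - profile[right + 1])
            else:
                rx = float(right)
            return rx - lx

        x = np.arange(128)
        profile = np.exp(-0.5 * ((x - 64) / 3.0) ** 2)   # toy reconstructed line source
        print(width_at_fraction(profile, 0.5))           # FWHM, ~7.06 px for sigma = 3
        print(width_at_fraction(profile, 0.1))           # FWTM, ~12.9 px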

  15. Application and performance of an ML-EM algorithm in NEXT

    Science.gov (United States)

    Simón, A.; Lerche, C.; Monrabal, F.; Gómez-Cadenas, J. J.; Álvarez, V.; Azevedo, C. D. R.; Benlloch-Rodríguez, J. M.; Borges, F. I. G. M.; Botas, A.; Cárcel, S.; Carrión, J. V.; Cebrián, S.; Conde, C. A. N.; Díaz, J.; Diesburg, M.; Escada, J.; Esteve, R.; Felkai, R.; Fernandes, L. M. P.; Ferrario, P.; Ferreira, A. L.; Freitas, E. D. C.; Goldschmidt, A.; González-Díaz, D.; Gutiérrez, R. M.; Hauptman, J.; Henriques, C. A. O.; Hernandez, A. I.; Hernando Morata, J. A.; Herrero, V.; Jones, B. J. P.; Labarga, L.; Laing, A.; Lebrun, P.; Liubarsky, I.; López-March, N.; Losada, M.; Martín-Albo, J.; Martínez-Lema, G.; Martínez, A.; McDonald, A. D.; Monteiro, C. M. B.; Mora, F. J.; Moutinho, L. M.; Muñoz Vidal, J.; Musti, M.; Nebot-Guinot, M.; Novella, P.; Nygren, D. R.; Palmeiro, B.; Para, A.; Pérez, J.; Querol, M.; Renner, J.; Ripoll, L.; Rodríguez, J.; Rogers, L.; Santos, F. P.; dos Santos, J. M. F.; Sofka, C.; Sorel, M.; Stiegler, T.; Toledo, J. F.; Torrent, J.; Tsamalaidze, Z.; Veloso, J. F. C. A.; Webb, R.; White, J. T.; Yahlali, N.

    2017-08-01

    The goal of the NEXT experiment is the observation of neutrinoless double beta decay in 136Xe using a gaseous xenon TPC with electroluminescent amplification and specialized photodetector arrays for calorimetry and tracking. The NEXT Collaboration is exploring a number of reconstruction algorithms to exploit the full potential of the detector. This paper describes one of them: the Maximum Likelihood Expectation Maximization (ML-EM) method, a generic iterative algorithm to find maximum-likelihood estimates of parameters that has been applied to solve many different types of complex inverse problems. In particular, we discuss a bi-dimensional version of the method in which the photosensor signals integrated over time are used to reconstruct a transverse projection of the event. First results show that, when applied to detector simulation data, the algorithm achieves nearly optimal energy resolution (better than 0.5% FWHM at the Q value of 136Xe) for events distributed over the full active volume of the TPC.

  16. An application of the maximal independent set algorithm to course ...

    African Journals Online (AJOL)

    In this paper, we demonstrated one of the many applications of the Maximal Independent Set Algorithm in the area of course allocation. A program was developed in Pascal and used in implementing a modified version of the algorithm to assign teaching courses to available lecturers in any academic environment and it ...

  17. Maximization Network Throughput Based on Improved Genetic Algorithm and Network Coding for Optical Multicast Networks

    Science.gov (United States)

    Wei, Chengying; Xiong, Cuilian; Liu, Huanlin

    2017-12-01

    A maximal multicast stream algorithm based on network coding (NC) can improve network throughput for wavelength-division multiplexing (WDM) networks; that throughput, however, is still far below the network's theoretical maximum. Moreover, existing multicast stream algorithms do not determine the information distribution pattern and the routing at the same time. In this paper, an improved genetic algorithm is put forward to maximize optical multicast throughput by NC and to determine the multicast stream distribution, by hybrid chromosome construction, for multicast with a single source and multiple destinations. The proposed hybrid chromosomes are constructed from binary chromosomes and integer chromosomes: the binary chromosomes represent the optical multicast routing, while the integer chromosomes indicate the multicast stream distribution. A fitness function is designed to guarantee that each destination can receive the maximum number of decodable multicast streams. The simulation results showed that the proposed method is far superior to typical maximal multicast stream algorithms based on NC in terms of network throughput in WDM networks.

  18. High-speed computation of the EM algorithm for PET image reconstruction

    International Nuclear Information System (INIS)

    Rajan, K.; Patnaik, L.M.; Ramakrishna, J.

    1994-01-01

    The PET image reconstruction based on the EM algorithm has several attractive advantages over the conventional convolution backprojection algorithms. However, two major drawbacks have impeded the routine use of the EM algorithm, namely, the long computational time due to slow convergence and the large memory required for the storage of the image, projection data and the probability matrix. In this study, the authors attempt to solve these two problems by parallelizing the EM algorithm on a multiprocessor system. The authors have implemented an extended hypercube (EH) architecture for the high-speed computation of the EM algorithm, using commercially available fast floating point digital signal processor (DSP) chips as the processing elements (PEs). The authors discuss and compare the performance of the EM algorithm on a 386/387 machine, a CD 4360 mainframe, and the EH system. The results show that the computational speed of an EH using DSP chips as PEs executing the EM image reconstruction algorithm is about 130 times better than that of the CD 4360 mainframe. The EH topology is expandable with a larger number of PEs.

  19. A Trust Region Aggressive Space Mapping Algorithm for EM

    DEFF Research Database (Denmark)

    Bakr., M.; Bandler, J. W.; Biernacki, R.

    1998-01-01

    A robust new algorithm for electromagnetic (EM) optimization of microwave circuits is presented. The algorithm (TRASM) integrates a trust region methodology with the aggressive space mapping (ASM). The trust region ensures that each iteration results in improved alignment between the coarse....... This suggested step exploits all the available EM simulations for improving the uniqueness of parameter extraction. The new algorithm was successfully used to design a number of microwave circuits. Examples include the EM optimization of a double-folded stub filter and of a high-temperature superconducting (HTS...

  20. TH-CD-206-01: Expectation-Maximization Algorithm-Based Tissue Mixture Quantification for Perfusion MRI

    International Nuclear Information System (INIS)

    Han, H; Xing, L; Liang, Z; Li, L

    2016-01-01

    Purpose: To investigate the feasibility of estimating tissue mixture perfusions and quantifying cerebral blood flow change in arterial spin labeled (ASL) perfusion MR images. Methods: The proposed perfusion MR image analysis framework consists of 5 steps: (1) Inhomogeneity correction was performed on the T1- and T2-weighted images, which are available for each studied perfusion MR dataset. (2) We used the publicly available FSL toolbox to strip off the non-brain structures from the T1- and T2-weighted MR images. (3) We applied a multi-spectral tissue-mixture segmentation algorithm to both T1- and T2-structural MR images to roughly estimate the fraction of each tissue type - white matter, grey matter and cerebrospinal fluid - inside each image voxel. (4) The distributions of the three tissue types, or tissue mixture, across the structural image array are down-sampled and mapped onto the ASL voxel array via a co-registration operation. (5) The presented 4-dimensional expectation-maximization (4D-EM) algorithm takes the down-sampled distributions of the three tissue types in the perfusion image data and generates perfusion mean, variance and percentage images for each tissue type of interest. Results: Experimental results on three volunteer datasets demonstrated that the multi-spectral tissue-mixture segmentation algorithm was effective in initializing tissue mixtures from T1- and T2-weighted MR images. Compared with the conventional ASL image processing toolbox, the proposed 4D-EM algorithm not only generated comparable perfusion mean images, but also produced perfusion variance and percentage images, which the ASL toolbox cannot obtain. It is observed that the perfusion contribution percentages may not be the same as the corresponding tissue mixture volume fractions estimated in the structural images. Conclusion: A specific application to brain ASL images showed that the presented perfusion image analysis method is promising for detecting subtle changes in tissue perfusions

  1. TH-CD-206-01: Expectation-Maximization Algorithm-Based Tissue Mixture Quantification for Perfusion MRI

    Energy Technology Data Exchange (ETDEWEB)

    Han, H; Xing, L [Stanford University, Palo Alto, CA (United States); Liang, Z [Stony Brook University, Stony Brook, NY (United States); Li, L [City University of New York College of Staten Island, Staten Island, NY (United States)

    2016-06-15

    Purpose: To investigate the feasibility of estimating tissue mixture perfusions and quantifying cerebral blood flow change in arterial spin labeled (ASL) perfusion MR images. Methods: The proposed perfusion MR image analysis framework consists of 5 steps: (1) Inhomogeneity correction was performed on the T1- and T2-weighted images, which are available for each studied perfusion MR dataset. (2) We used the publicly available FSL toolbox to strip off the non-brain structures from the T1- and T2-weighted MR images. (3) We applied a multi-spectral tissue-mixture segmentation algorithm to both T1- and T2-structural MR images to roughly estimate the fraction of each tissue type - white matter, grey matter and cerebrospinal fluid - inside each image voxel. (4) The distributions of the three tissue types, or tissue mixture, across the structural image array are down-sampled and mapped onto the ASL voxel array via a co-registration operation. (5) The presented 4-dimensional expectation-maximization (4D-EM) algorithm takes the down-sampled distributions of the three tissue types in the perfusion image data and generates perfusion mean, variance and percentage images for each tissue type of interest. Results: Experimental results on three volunteer datasets demonstrated that the multi-spectral tissue-mixture segmentation algorithm was effective in initializing tissue mixtures from T1- and T2-weighted MR images. Compared with the conventional ASL image processing toolbox, the proposed 4D-EM algorithm not only generated comparable perfusion mean images, but also produced perfusion variance and percentage images, which the ASL toolbox cannot obtain. It is observed that the perfusion contribution percentages may not be the same as the corresponding tissue mixture volume fractions estimated in the structural images. Conclusion: A specific application to brain ASL images showed that the presented perfusion image analysis method is promising for detecting subtle changes in tissue perfusions

  2. Linear array implementation of the EM algorithm for PET image reconstruction

    International Nuclear Information System (INIS)

    Rajan, K.; Patnaik, L.M.; Ramakrishna, J.

    1995-01-01

    The PET image reconstruction based on the EM algorithm has several attractive advantages over the conventional convolution backprojection algorithms. However, the PET image reconstruction based on the EM algorithm is computationally burdensome for today's single-processor systems. In addition, a large memory is required for the storage of the image, projection data, and the probability matrix. Since the computations are easily divided into tasks executable in parallel, multiprocessor configurations are the ideal choice for fast execution of the EM algorithms. In this study, the authors attempt to overcome these two problems by parallelizing the EM algorithm on a multiprocessor system. The parallel EM algorithm has been implemented on a linear array topology using commercially available fast floating-point digital signal processor (DSP) chips as the processing elements (PEs). The performance of the EM algorithm on a 386/387 machine, an IBM 6000 RISC workstation, and the linear array system is discussed and compared. The results show that the computational speed of a linear array using 8 DSP chips as PEs executing the EM image reconstruction algorithm is about 15.5 times better than that of the IBM 6000 RISC workstation. The novelty of the scheme is its simplicity. The linear array topology is expandable to a larger number of PEs. The architecture is not dependent on the DSP chip chosen, and substitution of the latest DSP chip is straightforward and could yield better speed performance
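
The iteration being parallelized is the standard ML-EM update for emission tomography. Below is a serial NumPy sketch with a dense system matrix A; the paper's DSP partitioning and memory layout are not reproduced, but the two matrix-vector products inside the loop are exactly the work that gets distributed across the processing elements.

```python
import numpy as np

def ml_em(A, y, n_iter=20):
    """ML-EM for Poisson projection data y ~ Poisson(A @ lam).

    A : (n_bins, n_pixels) system (probability) matrix
    y : (n_bins,) measured projection counts
    """
    sens = A.sum(axis=0)                    # detector sensitivity per pixel
    lam = np.ones(A.shape[1])               # uniform initial image
    for _ in range(n_iter):
        proj = A @ lam                      # forward projection
        ratio = y / np.maximum(proj, 1e-12)
        lam *= (A.T @ ratio) / np.maximum(sens, 1e-12)  # back projection + normalize
    return lam
```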

  3. Unsupervised Cryo-EM Data Clustering through Adaptively Constrained K-Means Algorithm.

    Science.gov (United States)

    Xu, Yaofang; Wu, Jiayi; Yin, Chang-Cheng; Mao, Youdong

    2016-01-01

    In single-particle cryo-electron microscopy (cryo-EM), the K-means clustering algorithm is widely used in unsupervised 2D classification of projection images of biological macromolecules. 3D ab initio reconstruction requires accurate unsupervised classification in order to separate molecular projections of distinct orientations. Due to background noise in single-particle images and uncertainty of molecular orientations, the traditional K-means clustering algorithm may classify images into wrong classes and produce classes with a large variation in membership. Overcoming these limitations requires further development of clustering algorithms for cryo-EM data analysis. We propose a novel unsupervised data clustering method building upon the traditional K-means algorithm. By introducing an adaptive constraint term in the objective function, our algorithm not only avoids a large variation in class sizes but also produces more accurate data clustering. Applications of this approach to both simulated and experimental cryo-EM data demonstrate that our algorithm is a significantly improved alternative to the traditional K-means algorithm in single-particle cryo-EM analysis.
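
The abstract does not spell out the adaptive constraint term, so the sketch below only illustrates the general idea: a hypothetical penalty proportional to the current class size is added to the assignment cost, discouraging classes with a large variation in membership. The penalty form and weight lam are assumptions.

```python
import numpy as np

def size_constrained_kmeans(X, k, lam=1.0, n_iter=100, seed=0):
    """K-means with an illustrative class-size penalty (not the paper's exact term)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    sizes = np.full(k, len(X) / k)
    for _ in range(n_iter):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = np.argmin(d2 + lam * sizes[None, :], axis=1)  # size-aware assignment
        for j in range(k):
            members = X[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
            sizes[j] = len(members)
        # sizes feed back into the next assignment, balancing class membership
    return labels, centers
```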

  4. Positron emission tomographic images and expectation maximization: A VLSI architecture for multiple iterations per second

    International Nuclear Information System (INIS)

    Jones, W.F.; Byars, L.G.; Casey, M.E.

    1988-01-01

    A digital electronic architecture for parallel processing of the expectation maximization (EM) algorithm for Positron Emission tomography (PET) image reconstruction is proposed. Rapid (0.2 second) EM iterations on high resolution (256 x 256) images are supported. Arrays of two very large scale integration (VLSI) chips perform forward and back projection calculations. A description of the architecture is given, including data flow and partitioning relevant to EM and parallel processing. EM images shown are produced with software simulating the proposed hardware reconstruction algorithm. Projected cost of the system is estimated to be small in comparison to the cost of current PET scanners

  5. An Efficient Algorithm for Maximizing Range Sum Queries in a Road Network

    Directory of Open Access Journals (Sweden)

    Tien-Khoi Phan

    2014-01-01

    Given a set of positive-weighted points and a query rectangle r of given extents (specified by a client), the goal of a maximizing range sum (MaxRS) query is to find the optimal location of r such that the total weight of all the points covered by r is maximized. All existing methods for processing MaxRS queries assume the Euclidean distance metric. In many location-based applications, however, the motion of a client may be constrained by an underlying (spatial) road network; that is, the client cannot move freely in space. This paper addresses the problem of processing MaxRS queries in a road network. We propose an external-memory algorithm suited to large road network databases. In addition, in contrast to the existing methods, which retrieve only one optimal location, our proposed algorithm retrieves all the possible optimal locations. Through simulations, we evaluate the performance of the proposed algorithm.

  6. Applications of expectation maximization algorithm for coherent optical communication

    DEFF Research Database (Denmark)

    Carvalho, L.; Oliveira, J.; Zibar, Darko

    2014-01-01

    In this invited paper, we present powerful statistical signal processing methods used by the machine learning community and link them to current problems in optical communication. In particular, we will look into iterative maximum likelihood parameter estimation based on the expectation maximization...... algorithm and its application in coherent optical communication systems for linear and nonlinear impairment mitigation. Furthermore, the estimated parameters are used to build the probabilistic model of the system for the synthetic impairment generation....

  7. A Local Scalable Distributed Expectation Maximization Algorithm for Large Peer-to-Peer Networks

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper describes a local and distributed expectation maximization algorithm for learning parameters of Gaussian mixture models (GMM) in large peer-to-peer (P2P)...

  8. An O(n²) maximal planarization algorithm based on PQ-trees

    NARCIS (Netherlands)

    Kant, G.

    1992-01-01

    In this paper we investigate the problem of how to delete a number of edges from a nonplanar graph G such that the resulting graph G’ is maximal planar, i.e., such that we cannot add an edge e ∈ G – G’ to G’ without destroying planarity. Actually, our algorithm is a corrected and more generalized

  9. A multilevel search algorithm for the maximization of submodular functions applied to the quadratic cost partition problem

    NARCIS (Netherlands)

    Goldengorin, B.; Ghosh, D.

    Maximization of submodular functions on a ground set is an NP-hard combinatorial optimization problem. Data correcting algorithms are among the several algorithms suggested for solving this problem exactly and approximately. From the point of view of Hasse diagrams, data correcting algorithms use

  10. Energy-driven scheduling algorithm for nanosatellite energy harvesting maximization

    Science.gov (United States)

    Slongo, L. K.; Martínez, S. V.; Eiterer, B. V. B.; Pereira, T. G.; Bezerra, E. A.; Paiva, K. V.

    2018-06-01

    The number of tasks that a satellite may execute in orbit is strongly related to the amount of energy its Electrical Power System (EPS) is able to harvest and to store. The manner in which the stored energy is distributed within the satellite also has a great impact on the CubeSat's overall efficiency. Most CubeSat EPS designs do not prioritize energy constraints in their formulation. In contrast, this work proposes an innovative energy-driven scheduling algorithm based on an energy harvesting maximization policy. The energy harvesting circuit is mathematically modeled and the solar panel I-V curves are presented for different temperature and irradiance levels. Considering the models and simulations, the scheduling algorithm is designed to keep the solar panels working close to their maximum power point by triggering tasks at appropriate moments. Task execution affects the battery voltage, which is coupled to the solar panels through a protection circuit. A software-based Perturb and Observe strategy allows defining the tasks to be triggered. The scheduling algorithm is tested in FloripaSat, which is a 1U CubeSat. A test apparatus is proposed to emulate solar irradiance variation, considering the satellite's movement around the Earth. Tests have been conducted to show that the scheduling algorithm improves the CubeSat's energy harvesting capability by 4.48% in a three-orbit experiment and up to 8.46% in a single orbit cycle in comparison with the CubeSat operating without the scheduling algorithm.
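
A minimal sketch of the software Perturb and Observe loop at the heart of such a scheduler is shown below; measure_power, increase_load and decrease_load are hypothetical stand-ins for the EPS interfaces, not FloripaSat code.

```python
def perturb_and_observe(measure_power, increase_load, decrease_load, steps=1000):
    """Software P&O: nudge the triggered task load so the panels track the MPP.

    measure_power : callable returning the currently harvested power [W]
    increase_load, decrease_load : callables perturbing the task schedule
    """
    prev_power = measure_power()
    direction = +1                              # +1 = trigger more load, -1 = less
    for _ in range(steps):
        (increase_load if direction > 0 else decrease_load)()
        power = measure_power()
        if power < prev_power:                  # overshot the maximum power point
            direction = -direction              # reverse the perturbation
        prev_power = power
```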

  11. Trend analysis of the power law process using Expectation-Maximization algorithm for data censored by inspection intervals

    International Nuclear Information System (INIS)

    Taghipour, Sharareh; Banjevic, Dragan

    2011-01-01

    Trend analysis is a common statistical method used to investigate the operation and changes of a repairable system over time. This method takes historical failure data of a system or a group of similar systems and determines whether the recurrent failures exhibit an increasing or decreasing trend. Most trend analysis methods proposed in the literature assume that the failure times are known, so the failure data is statistically complete; however, in many situations, such as hidden failures, failure times are subject to censoring. In this paper we assume that the failure process of a group of similar independent repairable units follows a non-homogeneous Poisson process with a power law intensity function. Moreover, the failure data are subject to left, interval and right censoring. The paper proposes using the likelihood ratio test to check for trends in the failure data. It uses the Expectation-Maximization (EM) algorithm to find the parameters which maximize the data likelihood under the null and alternative hypotheses. A recursive procedure is used to solve the main technical problem of calculating the expected values in the Expectation step. The proposed method is applied to a hospital's maintenance data for trend analysis of the components of a general infusion pump.

  12. Improved Expectation Maximization Algorithm for Gaussian Mixed Model Using the Kernel Method

    Directory of Open Access Journals (Sweden)

    Mohd Izhan Mohd Yusoff

    2013-01-01

    Fraud activities have contributed to heavy losses suffered by telecommunication companies. In this paper, we attempt to use the Gaussian mixed model, a probabilistic model normally used in speech recognition, to identify fraud calls in the telecommunication industry. We look at several issues encountered when calculating the maximum likelihood estimates of the Gaussian mixed model using an Expectation Maximization algorithm. Firstly, we look at a mechanism for the determination of the initial number of Gaussian components and the choice of the initial values of the algorithm using the kernel method. We show via simulation that the technique improves the performance of the algorithm. Secondly, we develop a procedure for determining the order of the Gaussian mixed model using the log-likelihood function and the Akaike information criterion. Finally, for illustration, we apply the improved algorithm to real telecommunication data. The modified method will pave the way to introduce a comprehensive method for detecting fraud calls in future work.
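
One way to realize the kernel-based initialization is to place the initial component means at modes of a kernel density estimate before running EM; the grid-based mode picking below is an illustrative assumption rather than the paper's exact mechanism.

```python
import numpy as np
from scipy.stats import gaussian_kde
from sklearn.mixture import GaussianMixture

def kde_initialized_gmm(x, max_components=5):
    """Fit a 1-D GMM whose initial means are KDE modes (illustrative sketch)."""
    grid = np.linspace(x.min(), x.max(), 512)
    dens = gaussian_kde(x)(grid)
    # interior local maxima of the KDE become candidate initial means
    peaks = (dens[1:-1] > dens[:-2]) & (dens[1:-1] > dens[2:])
    modes = grid[1:-1][peaks]
    if len(modes) == 0:
        modes = np.array([x.mean()])
    k = min(len(modes), max_components)        # also suggests the component count
    gmm = GaussianMixture(n_components=k, means_init=modes[:k, None], random_state=0)
    return gmm.fit(x[:, None])
```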

  13. Finding Maximal Quasiperiodicities in Strings

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Pedersen, Christian N. S.

    2000-01-01

    Apostolico and Ehrenfeucht defined the notion of a maximal quasiperiodic substring and gave an algorithm that finds all maximal quasiperiodic substrings in a string of length n in time O(n log² n). In this paper we give an algorithm that finds all maximal quasiperiodic substrings in a string of length n in time O(n log n) and space O(n). Our algorithm uses the suffix tree as the fundamental data structure combined with efficient methods for merging and performing multiple searches in search trees. Besides finding all maximal quasiperiodic substrings, our algorithm also marks the nodes in the suffix tree that have a superprimitive path-label.

  14. An Expectation Maximization Algorithm to Model Failure Times by Continuous-Time Markov Chains

    Directory of Open Access Journals (Sweden)

    Qihong Duan

    2010-01-01

    In many applications, the failure rate function may present a bathtub-shaped curve. In this paper, an expectation maximization algorithm is proposed to construct a suitable continuous-time Markov chain which models the failure time data by the first time of reaching the absorbing state. Assume that a system is described by methods of supplementary variables, the device of stage, and so on. Given a data set, the maximum likelihood estimators of the initial distribution and the infinitesimal transition rates of the Markov chain can be obtained by our novel algorithm. Suppose that there are m transient states in the system and that there are n failure time data. The devised algorithm only needs to compute the exponential of m×m upper-triangular matrices O(nm²) times in each iteration. Finally, the algorithm is applied to two real data sets, which indicates the practicality and efficiency of our algorithm.
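
The computational kernel the abstract mentions, exponentials of the m×m upper-triangular sub-generator, appears when evaluating the phase-type (first-passage) density of the absorbing chain; a minimal SciPy sketch:

```python
import numpy as np
from scipy.linalg import expm

def phase_type_density(t, alpha, S):
    """Density of the time to absorption (failure) of a CTMC.

    alpha : (m,) initial distribution over the m transient states
    S     : (m, m) sub-generator over the transient states; upper triangular
            in the paper's setting, which keeps expm(S * t) cheap
    """
    s = -S.sum(axis=1)            # exit rates into the absorbing failure state
    return alpha @ expm(S * t) @ s
```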

  15. Preconditioned alternating projection algorithms for maximum a posteriori ECT reconstruction

    International Nuclear Information System (INIS)

    Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng

    2012-01-01

    We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators arising from the convex functions that define the TV-norm and the constraint involved in the problem. The characterization (of the solution) via the proximity operators that define two projection operators naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce to the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove convergence of the PAPA theoretically. In numerical experiments, the performance of our algorithms, with an appropriately selected preconditioning matrix, is compared with the performance of the conventional MAP expectation-maximization (MAP-EM) algorithm with TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner outperforms significantly the EM-TV in all aspects including the convergence speed, the noise in the reconstructed images and the image quality. It also outperforms the nested EM-TV in convergence speed while providing comparable image quality. (paper)

  16. Convergence and resolution recovery of block-iterative EM algorithms modeling 3D detector response in SPECT

    International Nuclear Information System (INIS)

    Lalush, D.S.; Tsui, B.M.W.; Karimi, S.S.

    1996-01-01

    We evaluate fast reconstruction algorithms including ordered subsets-EM (OS-EM) and Rescaled Block Iterative EM (RBI-EM) in fully 3D SPECT applications on the basis of their convergence and resolution recovery properties as iterations proceed. Using a 3D computer-simulated phantom consisting of 3D Gaussian objects, we simulated projection data that includes only the effects of sampling and detector response of a parallel-hole collimator. Reconstructions were performed using each of the three algorithms (ML-EM, OS-EM, and RBI-EM) modeling the 3D detector response in the projection function. Resolution recovery was evaluated by fitting Gaussians to each of the four objects in the iterated image estimates at selected intervals. Results show that OS-EM and RBI-EM behave identically in this case; their resolution recovery results are virtually indistinguishable. Their resolution behavior appears to be very similar to that of ML-EM, but accelerated by a factor of twenty. For all three algorithms, smaller objects take more iterations to converge. Next, we consider the effect noise has on convergence. For both noise-free and noisy data, we evaluate the log likelihood function at each subiteration of OS-EM and RBI-EM, and at each iteration of ML-EM. With noisy data, both OS-EM and RBI-EM give results for which the log-likelihood function oscillates. Especially for 180-degree acquisitions, RBI-EM oscillates less than OS-EM. Both OS-EM and RBI-EM appear to converge to solutions, but not to the ML solution. We conclude that both OS-EM and RBI-EM can be effective algorithms for fully 3D SPECT reconstruction. Both recover resolution similarly to ML-EM, only more quickly
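
For reference, a generic OS-EM sketch with a dense system matrix shows why the speedup is roughly the number of subsets: every subset triggers a full multiplicative image update. RBI-EM differs only in how the update is rescaled and is not reproduced here.

```python
import numpy as np

def os_em(A, y, n_subsets=8, n_iter=5):
    """Ordered-subsets EM: one EM-style update per subset of projection rows."""
    subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
    lam = np.ones(A.shape[1])
    for _ in range(n_iter):
        for idx in subsets:
            As, ys = A[idx], y[idx]
            proj = As @ lam                                    # subset forward projection
            lam *= (As.T @ (ys / np.maximum(proj, 1e-12))) / \
                   np.maximum(As.sum(axis=0), 1e-12)           # subset back projection
    return lam
```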

  17. Models and Algorithms for Tracking Target with Coordinated Turn Motion

    Directory of Open Access Journals (Sweden)

    Xianghui Yuan

    2014-01-01

    Tracking a target with coordinated turn (CT) motion is highly dependent on the models and algorithms. First, the widely used models are compared in this paper: the coordinated turn (CT) model with known turn rate, the augmented coordinated turn (ACT) model with Cartesian velocity, the ACT model with polar velocity, the CT model using a kinematic constraint, and the maneuver-centered circular motion model. Then, in the single-model tracking framework, the tracking algorithms for the last four models are compared and suggestions on the choice of models for different practical target tracking problems are given. Finally, in the multiple models (MM) framework, the algorithm based on the expectation maximization (EM) algorithm is derived, including both the batch form and the recursive form. Compared with the widely used interacting multiple model (IMM) algorithm, the EM algorithm shows its effectiveness.
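
For reference, the constant-turn-rate CT model mentioned above uses the standard transition matrix below (state [x, vx, y, vy], known turn rate ω ≠ 0); this is textbook material rather than anything specific to the paper's EM derivation.

```python
import numpy as np

def ct_transition(omega, T):
    """Coordinated-turn transition matrix for state [x, vx, y, vy]."""
    s, c = np.sin(omega * T), np.cos(omega * T)
    return np.array([
        [1.0, s / omega,         0.0, -(1.0 - c) / omega],
        [0.0, c,                 0.0, -s],
        [0.0, (1.0 - c) / omega, 1.0, s / omega],
        [0.0, s,                 0.0, c],
    ])

# one prediction step: x_next = ct_transition(0.1, 1.0) @ x
```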

  18. Metal-induced streak artifact reduction using iterative reconstruction algorithms in x-ray computed tomography image of the dentoalveolar region.

    Science.gov (United States)

    Dong, Jian; Hayakawa, Yoshihiko; Kannenberg, Sven; Kober, Cornelia

    2013-02-01

    The objective of this study was to reduce metal-induced streak artifact on oral and maxillofacial x-ray computed tomography (CT) images by developing a fast statistical image reconstruction system using iterative reconstruction algorithms. Adjacent CT images often depict similar anatomical structures in thin slices. So, first, images were reconstructed using the projection data of an artifact-free image. Second, images were processed by the successive iterative restoration method, where projection data were generated from the reconstructed image in sequence. Besides the maximum likelihood-expectation maximization (ML-EM) algorithm, the ordered subset-expectation maximization (OS-EM) algorithm was examined. Also, small region of interest (ROI) setting and reverse processing were applied to improve performance. Both algorithms reduced artifacts while slightly decreasing gray levels. The OS-EM and small ROI reduced the processing duration without apparent detriments. Sequential and reverse processing did not show apparent effects. Two alternatives in iterative reconstruction methods were effective for artifact reduction. The OS-EM algorithm and small ROI setting improved the performance.

  19. Noise properties of the EM algorithm. Pt. 1

    International Nuclear Information System (INIS)

    Barrett, H.H.; Wilson, D.W.; Tsui, B.M.W.

    1994-01-01

    The expectation-maximisation (EM) algorithm is an important tool for maximum-likelihood (ML) estimation and image reconstruction, especially in medical imaging. It is a non-linear iterative algorithm that attempts to find the ML estimate of the object that produced a data set. The convergence of the algorithm and other deterministic properties are well established, but relatively little is known about how noise in the data influences noise in the final reconstructed image. In this paper we present a detailed treatment of these statistical properties. The specific application we have in mind is image reconstruction in emission tomography, but the results are valid for any application of the EM algorithm in which the data set can be described by Poisson statistics. We show that the probability density function for the grey level at a pixel in the image is well approximated by a log-normal law. An expression is derived for the variance of the grey level and for pixel-to-pixel covariance. The variance increases rapidly with iteration number at first, but eventually saturates as the ML estimate is approached. Moreover, the variance at any iteration number has a factor proportional to the square of the mean image (though other factors may also depend on the mean image), so a map of the standard deviation resembles the object itself. Thus low-intensity regions of the image tend to have low noise. (author)

  20. Estimation of Missing Data with the Yates Method and the EM Algorithm in a Balanced Lattice Design

    Directory of Open Access Journals (Sweden)

    MADE SUSILAWATI

    2015-06-01

    Missing data often occur in agricultural and animal husbandry experiments. Missing data in an experimental design make the information obtained less complete. In this research, the missing data were estimated with the Yates method and the Expectation Maximization (EM) algorithm. The basic concept of the Yates method is to minimize the sum of squared errors (JKG), while the basic concept of the EM algorithm is to maximize the likelihood function. This research applied a balanced lattice design with 9 treatments, 4 replications and 3 groups in each replication. The estimation results showed that the Yates method performed better for two missing values located in the same treatment, in the same column, or at random positions, while the EM algorithm performed better for a single missing value and for two missing values located in the same group or the same replication. The comparison of the ANOVA results showed that the JKG of the incomplete data is larger than the JKG of the incomplete data completed with the estimated values. This suggests that the missing data should be estimated.

  1. AUC-Maximizing Ensembles through Metalearning.

    Science.gov (United States)

    LeDell, Erin; van der Laan, Mark J; Petersen, Maya

    2016-05-01

    Area Under the ROC Curve (AUC) is often used to measure the performance of an estimator in binary classification problems. An AUC-maximizing classifier can have significant advantages in cases where ranking correctness is valued or if the outcome is rare. In a Super Learner ensemble, maximization of the AUC can be achieved by the use of an AUC-maximizing metalearning algorithm. We discuss an implementation of an AUC-maximization technique that is formulated as a nonlinear optimization problem. We also evaluate the effectiveness of a large number of different nonlinear optimization algorithms to maximize the cross-validated AUC of the ensemble fit. The results provide evidence that AUC-maximizing metalearners can, and often do, out-perform non-AUC-maximizing metalearning methods, with respect to ensemble AUC. The results also demonstrate that as the level of imbalance in the training data increases, the Super Learner ensemble outperforms the top base algorithm by a larger degree.
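
The nonlinear optimization formulation can be sketched directly: choose convex-combination weights over cross-validated base-learner predictions to maximize AUC. The SLSQP choice below is illustrative; since AUC is piecewise constant in the weights, derivative-free optimizers are often preferable, which is exactly the kind of comparison the paper runs.

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.metrics import roc_auc_score

def auc_metalearner(Z, y):
    """Z : (n, m) cross-validated predictions from m base learners
       y : (n,) binary labels; returns simplex weights maximizing AUC."""
    m = Z.shape[1]

    def neg_auc(w):
        return -roc_auc_score(y, Z @ w)

    res = minimize(neg_auc, x0=np.full(m, 1.0 / m), method="SLSQP",
                   bounds=[(0.0, 1.0)] * m,
                   constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
    return res.x
```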

  2. On Data and Parameter Estimation Using the Variational Bayesian EM-algorithm for Block-fading Frequency-selective MIMO Channels

    DEFF Research Database (Denmark)

    Christensen, Lars P.B.; Larsen, Jan

    2006-01-01

    A general Variational Bayesian framework for iterative data and parameter estimation for coherent detection is introduced as a generalization of the EM-algorithm. Explicit solutions are given for MIMO channel estimation with Gaussian prior and noise covariance estimation with inverse-Wishart prior....... Simulation of a GSM-like system provides empirical proof that the VBEM-algorithm is able to provide better performance than the EM-algorithm. However, if the posterior distribution is highly peaked, the VBEM-algorithm approaches the EM-algorithm and the gain disappears. The potential gain is therefore...

  3. Polynomial algorithms for the Maximal Pairing Problem: efficient phylogenetic targeting on arbitrary trees

    Directory of Open Access Journals (Sweden)

    Stadler Peter F

    2010-06-01

    Background: The Maximal Pairing Problem (MPP) is the prototype of a class of combinatorial optimization problems that are of considerable interest in bioinformatics: given an arbitrary phylogenetic tree T and weights ωxy for the paths between any two pairs of leaves (x, y), what is the collection of edge-disjoint paths between pairs of leaves that maximizes the total weight? Special cases of the MPP for binary trees and equal weights have been described previously; algorithms to solve the general MPP are still missing, however. Results: We describe a relatively simple dynamic programming algorithm for the special case of binary trees. We then show that the general case of multifurcating trees can be treated by interleaving solutions to certain auxiliary Maximum Weighted Matching problems with an extension of this dynamic programming approach, resulting in an overall polynomial-time solution of complexity O(n^4 log n) w.r.t. the number n of leaves. The source code of a C implementation can be obtained under the GNU Public License from http://www.bioinf.uni-leipzig.de/Software/Targeting. For binary trees, we furthermore discuss several constrained variants of the MPP as well as a partition function approach to the probabilistic version of the MPP. Conclusions: The algorithms introduced here make it possible to solve the MPP also for large trees with high-degree vertices. This has practical relevance in the field of comparative phylogenetics and, for example, in the context of phylogenetic targeting, i.e., data collection with resource limitations.

  4. Similarity-regulation of OS-EM for accelerated SPECT reconstruction

    Science.gov (United States)

    Vaissier, P. E. B.; Beekman, F. J.; Goorden, M. C.

    2016-06-01

    Ordered subsets expectation maximization (OS-EM) is widely used to accelerate image reconstruction in single photon emission computed tomography (SPECT). Speedup of OS-EM over maximum likelihood expectation maximization (ML-EM) is close to the number of subsets used. Although a high number of subsets can shorten reconstruction times significantly, it can also cause severe image artifacts such as improper erasure of reconstructed activity if projections contain few counts. We recently showed that such artifacts can be prevented by using a count-regulated OS-EM (CR-OS-EM) algorithm which automatically adapts the number of subsets for each voxel based on the estimated number of counts that the voxel contributed to the projections. While CR-OS-EM reached high speed-up over ML-EM in high-activity regions of images, speed in low-activity regions could still be very slow. In this work we propose similarity-regulated OS-EM (SR-OS-EM) as a much faster alternative to CR-OS-EM. SR-OS-EM also automatically and locally adapts the number of subsets, but it uses a different criterion for subset regulation: the number of subsets that is used for updating an individual voxel depends on how similar the reconstruction algorithm would update the estimated activity in that voxel with different subsets. Reconstructions of an image quality phantom and in vivo scans show that SR-OS-EM retains all of the favorable properties of CR-OS-EM, while reconstruction speed can be up to an order of magnitude higher in low-activity regions. Moreover our results suggest that SR-OS-EM can be operated with identical reconstruction parameters (including the number of iterations) for a wide range of count levels, which can be an additional advantage from a user perspective since users would only have to post-filter an image to present it at an appropriate noise level.

  5. A New Optimization Approach for Maximizing the Photovoltaic Panel Power Based on Genetic Algorithm and Lagrange Multiplier Algorithm

    Directory of Open Access Journals (Sweden)

    Mahdi M. M. El-Arini

    2013-01-01

    In recent years, solar energy has become one of the most important alternative sources of electric energy, so it is important to operate a photovoltaic (PV) panel at its optimal point to obtain the maximum possible efficiency. This paper presents a new optimization approach to maximize the electrical power of a PV panel. The technique is based on an objective function representing the output power of the PV panel, subject to equality and inequality constraints. First, the variables that affect the output power are classified into two categories: dependent and independent. The proposed approach is a multistage one: the genetic algorithm (GA) is used to obtain the best initial population at the optimal solution, and this initial population is fed to the Lagrange multiplier algorithm (LM); then a comparison between the two algorithms, GA and LM, is performed. The proposed technique is applied to solar radiation measured at Helwan city, Egypt, at latitude 29.87°. The results showed that the proposed technique is applicable.
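
The two-stage idea (a population-based global search seeding a local optimizer) can be illustrated with SciPy; here differential evolution stands in for the paper's GA, the bounded local step stands in for the Lagrange multiplier refinement, and the single-diode-style P(V) curve with illustrative constants is an assumption.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def panel_power(v, isc=8.2, i0=1e-9, vt=1.6):
    """Simplified single-diode PV model: P(V) = V * I(V) (illustrative constants)."""
    return v * (isc - i0 * (np.exp(v / vt) - 1.0))

# Stage 1: global search over the voltage range (stand-in for the GA)
coarse = differential_evolution(lambda v: -panel_power(v[0]), bounds=[(0.0, 40.0)])
# Stage 2: local refinement from the best individual (stand-in for the LM step)
fine = minimize(lambda v: -panel_power(v[0]), x0=coarse.x, bounds=[(0.0, 40.0)])
print(f"V_mpp = {fine.x[0]:.2f} V, P_max = {-fine.fun:.1f} W")
```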

  6. Automatic Derivation of Statistical Algorithms: The EM Family and Beyond

    OpenAIRE

    Gray, Alexander G.; Fischer, Bernd; Schumann, Johann; Buntine, Wray

    2003-01-01

    Machine learning has reached a point where many probabilistic methods can be understood as variations, extensions and combinations of a much smaller set of abstract themes, e.g., as different instances of the EM algorithm. This enables the systematic derivation of algorithms customized for different models. Here, we describe the AUTOBAYES system which takes a high-level statistical model specification, uses powerful symbolic techniques based on schema-based program synthesis and computer alge...

  7. A study of reconstruction artifacts in cone beam tomography using filtered backprojection and iterative EM algorithms

    International Nuclear Information System (INIS)

    Zeng, G.L.; Gullberg, G.T.

    1990-01-01

    Reconstruction artifacts in cone beam tomography are studied for filtered backprojection (Feldkamp) and iterative EM algorithms. The filtered backprojection algorithm uses a voxel-driven, interpolated backprojection to reconstruct the cone beam data, whereas the iterative EM algorithm performs ray-driven projection and backprojection operations for each iteration. Two weighting schemes for the projection and backprojection operations in the EM algorithm are studied. One weights each voxel by the length of the ray through the voxel, and the other equates the value of a voxel to the functional value of the midpoint of the line intersecting the voxel, which is obtained by interpolating between eight neighboring voxels. Cone beam reconstruction artifacts such as rings, bright vertical extremities, and slice-to-slice cross talk are not found with parallel beam and fan beam geometries

  8. A Local Scalable Distributed EM Algorithm for Large P2P Networks

    Data.gov (United States)

    National Aeronautics and Space Administration — This paper describes a local and distributed expectation maximization algorithm for learning parameters of Gaussian mixture models (GMM) in large peer-to-peer (P2P)...

  9. Data Assimilation in Air Contaminant Dispersion Using a Particle Filter and Expectation-Maximization Algorithm

    Directory of Open Access Journals (Sweden)

    Rongxiao Wang

    2017-09-01

    The accurate prediction of air contaminant dispersion is essential to air quality monitoring and the emergency management of contaminant gas leakage incidents in chemical industry parks. Conventional atmospheric dispersion models can seldom give accurate predictions due to inaccurate input parameters. In order to improve the prediction accuracy of dispersion models, two data assimilation methods (i.e., the typical particle filter, and the combination of a particle filter and the expectation-maximization algorithm) are proposed to assimilate virtual Unmanned Aerial Vehicle (UAV) observations with measurement error into the atmospheric dispersion model. Two emission cases with different dimensions of state parameters are considered. To test the performance of the proposed methods, two numerical experiments corresponding to the two emission cases are designed and implemented. The results show that the particle filter can effectively estimate the model parameters and improve the accuracy of model predictions when the dimension of state parameters is relatively low. In contrast, when the dimension of state parameters becomes higher, the method of the particle filter combined with the expectation-maximization algorithm performs better in terms of parameter estimation accuracy. Therefore, the proposed data assimilation methods are able to effectively support air quality monitoring and emergency management in chemical industry parks.
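
A generic bootstrap particle filter of the kind used for such assimilation is sketched below; the dispersion-model propagation and UAV observation hooks are hypothetical placeholders.

```python
import numpy as np

def bootstrap_particle_filter(particles, next_obs, likelihood, propagate,
                              n_steps, rng=None):
    """particles : (N, d) samples of the source-term / model parameters
       next_obs  : callable returning the next observation vector
       likelihood(obs, particles) -> (N,) observation weights
       propagate(particles) -> (N, d) model evolution plus jitter"""
    rng = rng or np.random.default_rng()
    n = len(particles)
    for _ in range(n_steps):
        particles = propagate(particles)
        w = likelihood(next_obs(), particles)
        w = w / w.sum()
        idx = rng.choice(n, size=n, p=w)       # multinomial resampling
        particles = particles[idx]
    return particles
```

As we read the abstract, the PF+EM variant additionally re-estimates static model parameters from the weighted particles between assimilation cycles.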

  10. Quantitation of PET data with the EM reconstruction technique

    International Nuclear Information System (INIS)

    Rosenqvist, G.; Dahlbom, M.; Erikson, L.; Bohm, C.; Blomqvist, G.

    1989-01-01

    The expectation maximization (EM) algorithm offers high spatial resolution and excellent noise reduction with low-statistics PET data, since it incorporates the Poisson nature of the data. The main difficulties are long computation times, difficulty in finding appropriate criteria to terminate the reconstruction, and quantification of the resulting image data. In the present work a modified EM algorithm has been implemented on a VAX 11/780. Its capability to quantify image data has been tested in phantom studies and in two clinical cases: cerebral blood flow studies and dopamine D2-receptor studies. Data from phantom studies indicate the superiority of images reconstructed with the EM technique compared to images reconstructed with the conventional filtered back-projection (FB) technique in areas with low statistics. At higher statistics the noise characteristics of the two techniques coincide. Clinical data support these findings

  11. Algorithmic detectability threshold of the stochastic block model

    Science.gov (United States)

    Kawamoto, Tatsuro

    2018-03-01

    The assumption that the values of model parameters are known or correctly learned, i.e., the Nishimori condition, is one of the requirements for the detectability analysis of the stochastic block model in statistical inference. In practice, however, there is no example demonstrating that we can know the model parameters beforehand, and there is no guarantee that the model parameters can be learned accurately. In this study, we consider the expectation-maximization (EM) algorithm with belief propagation (BP) and derive its algorithmic detectability threshold. Our analysis is not restricted to the community structure but includes general modular structures. Because the algorithm cannot always learn the planted model parameters correctly, the algorithmic detectability threshold is qualitatively different from the one with the Nishimori condition.

  12. High-Performance Psychometrics: The Parallel-E Parallel-M Algorithm for Generalized Latent Variable Models. Research Report. ETS RR-16-34

    Science.gov (United States)

    von Davier, Matthias

    2016-01-01

    This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…
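
The parallel-E idea maps naturally onto a pool of worker processes, each computing responsibilities for a slice of the data; a generic GMM-flavored sketch (not the report's implementation) is shown below.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor
from functools import partial

def e_step_chunk(chunk, mu, var, pi):
    """Responsibilities for one data slice (runs inside a worker process)."""
    dens = np.exp(-0.5 * (chunk[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    r = pi * dens
    return r / r.sum(axis=1, keepdims=True)

def parallel_e_step(x, mu, var, pi, n_workers=4):
    chunks = np.array_split(x, n_workers)
    with ProcessPoolExecutor(max_workers=n_workers) as pool:
        parts = pool.map(partial(e_step_chunk, mu=mu, var=var, pi=pi), chunks)
    return np.vstack(list(parts))   # the M-step then aggregates sufficient statistics
```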

  13. PEG Enhancement for EM1 and EM2+ Missions

    Science.gov (United States)

    Von der Porten, Paul; Ahmad, Naeem; Hawkins, Matt

    2018-01-01

    NASA is currently building the Space Launch System (SLS) Block-1 launch vehicle for the Exploration Mission 1 (EM-1) test flight. The next evolution of SLS, the Block-1B Exploration Mission 2 (EM-2), is currently being designed. The Block-1 and Block-1B vehicles will use the Powered Explicit Guidance (PEG) algorithm. Due to the relatively low thrust-to-weight ratio of the Exploration Upper Stage (EUS), certain enhancements to the Block-1 PEG algorithm are needed to perform Block-1B missions. In order to accommodate mission design for EM-2 and beyond, PEG has been significantly improved since its use on the Space Shuttle program. The current version of PEG has the ability to switch to different targets during Core Stage (CS) or EUS flight, and can automatically reconfigure for a single Engine Out (EO) scenario, loss of communication with the Launch Abort System (LAS), and Inertial Navigation System (INS) failure. The Thrust Factor (TF) algorithm uses measured state information in addition to a priori parameters, providing PEG with an improved estimate of propulsion information. This provides robustness against unknown or undetected engine failures. A loft parameter input allows LAS jettison while maximizing payload mass. The current PEG algorithm is now able to handle various classes of missions with burn arcs much longer than were seen in the shuttle program. These missions include targeting a circular LEO orbit with a low-thrust, long-burn-duration upper stage, targeting a highly eccentric Trans-Lunar Injection (TLI) orbit, targeting a disposal orbit using the low-thrust Reaction Control System (RCS), and targeting a hyperbolic orbit. This paper will describe the design and implementation of the TF algorithm, the strategy to handle EO in various flight regimes, algorithms to cover off-nominal conditions, and other enhancements to the Block-1 PEG algorithm. This paper illustrates challenges posed by the Block-1B vehicle, and results show that the improved PEG

  14. Factor Analysis with EM Algorithm Never Gives Improper Solutions when Sample Covariance and Initial Parameter Matrices Are Proper

    Science.gov (United States)

    Adachi, Kohei

    2013-01-01

    Rubin and Thayer ("Psychometrika," 47:69-76, 1982) proposed the EM algorithm for exploratory and confirmatory maximum likelihood factor analysis. In this paper, we prove the following fact: the EM algorithm always gives a proper solution with positive unique variances and factor correlations with absolute values that do not exceed one,…

  15. PedMine – A simulated annealing algorithm to identify maximally unrelated individuals in population isolates

    OpenAIRE

    Douglas, Julie A.; Sandefur, Conner I.

    2008-01-01

    In family-based genetic studies, it is often useful to identify a subset of unrelated individuals. When such studies are conducted in population isolates, however, most if not all individuals are often detectably related to each other. To identify a set of maximally unrelated (or equivalently, minimally related) individuals, we have implemented simulated annealing, a general-purpose algorithm for solving difficult combinatorial optimization problems. We illustrate our method on data from a ge...

  16. A Comparative Study of Frequent and Maximal Periodic Pattern Mining Algorithms in Spatiotemporal Databases

    Science.gov (United States)

    Obulesu, O.; Rama Mohan Reddy, A., Dr; Mahendra, M.

    2017-08-01

    Detecting regular and efficient cyclic models is a demanding activity for data analysts due to the unstructured, dynamic and enormous raw information produced from the web. Many existing approaches generate large candidate patterns in the presence of huge and complex databases. In this work, two novel algorithms are proposed and a comparative examination is performed considering scalability and performance parameters. The first algorithm, EFPMA (Extended Regular Model Detection Algorithm), finds frequent sequential patterns from spatiotemporal datasets; the second, ETMA (Enhanced Tree-based Mining Algorithm), detects effective cyclic models with a symbolic database representation. EFPMA grows models from both ends (prefixes and suffixes) of detected patterns, which results in faster pattern growth because of fewer levels of database projection compared to existing approaches such as PrefixSpan and SPADE. ETMA uses distinct notions to store and manage transaction data horizontally, such as segments, sequences and individual symbols. ETMA exploits a partition-and-conquer method to find maximal patterns using symbolic notations. Using this algorithm, we can mine cyclic models in full-series sequential patterns, including subsection series. ETMA reduces memory consumption and makes use of efficient symbolic operations. Furthermore, ETMA records time-series instances dynamically, in terms of character, series and section approaches respectively. Evaluating the extent of the patterns and proving the efficiency of the reduction and retrieval techniques on synthetic and actual datasets remains an open and challenging mining problem. These techniques are useful in data streams, traffic risk analysis, medical diagnosis, DNA sequence mining, and earthquake prediction applications. Extensive experimental outcomes illustrate that the algorithms outperform the ECLAT, STNR and MAFIA approaches in efficiency and scalability.

  17. Generation of a statistical shape model with probabilistic point correspondences and the expectation maximization- iterative closest point algorithm

    International Nuclear Information System (INIS)

    Hufnagel, Heike; Pennec, Xavier; Ayache, Nicholas; Ehrhardt, Jan; Handels, Heinz

    2008-01-01

    Identification of point correspondences between shapes is required for statistical analysis of organ shape differences. Since manual identification of landmarks is not a feasible option in 3D, several methods were developed to automatically find one-to-one correspondences on shape surfaces. For unstructured point sets, however, one-to-one correspondences do not exist, but correspondence probabilities can be determined. A method was developed to compute a statistical shape model based on shapes which are represented by unstructured point sets with arbitrary point numbers. A fundamental problem when computing statistical shape models is the determination of correspondences between the points of the shape observations of the training data set. In the absence of landmarks, exact correspondences can only be determined between continuous surfaces, not between unstructured point sets. To overcome this problem, we introduce correspondence probabilities instead of exact correspondences. The correspondence probabilities are found by aligning the observation shapes with the affine expectation maximization-iterative closest point (EM-ICP) registration algorithm. In a second step, the correspondence probabilities are used as input to compute a mean shape (represented once again by an unstructured point set). Both steps are unified in a single optimization criterion which depends on the two parameters 'registration transformation' and 'mean shape'. In a last step, a variability model which best represents the variability in the training data set is computed. Experiments on synthetic data sets and in vivo brain structure data sets (MRI) were then designed to evaluate the performance of our algorithm. The new method was applied to brain MRI data sets, and the estimated point correspondences were compared to a statistical shape model built on exact correspondences. Based on established measures of 'generalization ability' and 'specificity', the estimates were very satisfactory

  18. An Efficient Forward-Reverse EM Algorithm for Statistical Inference in Stochastic Reaction Networks

    KAUST Repository

    Bayer, Christian

    2016-01-06

    In this work [1], we present an extension of the forward-reverse algorithm by Bayer and Schoenmakers [2] to the context of stochastic reaction networks (SRNs). We then apply this bridge-generation technique to the statistical inference problem of approximating the reaction coefficients based on discretely observed data. To this end, we introduce an efficient two-phase algorithm in which the first phase is deterministic and it is intended to provide a starting point for the second phase which is the Monte Carlo EM Algorithm.

  19. Optimal topology of urban buildings for maximization of annual solar irradiation availability using a genetic algorithm

    International Nuclear Information System (INIS)

    Conceição António, Carlos A.; Monteiro, João Brasileiro; Afonso, Clito Félix

    2014-01-01

    An approach based on the optimal placement of buildings that favors the use of solar energy is proposed. By maximizing the area of exposure to incident solar irradiation on roofs and facades of buildings, improvements in the energy performance of the urban matrix are reached, contributing decisively to reduce dependence on other, less environmentally friendly energy options. A mathematical model is proposed to optimize the annual solar irradiation availability, where the placement of the buildings in the urban environment favors the use of the solar energy resource. Improvements in the solar energy potential of the urban grid are reached by maximizing the exposure of roofs and facades of buildings to incident solar irradiation. The proposed model considers direct solar radiation as the predominant component, omitting the diffuse and reflected components of solar irradiation. The dynamic interaction of buildings on exposure to sunlight is simulated, aiming to evaluate the shadowing zones. The incident solar irradiation simulation and the dynamic shading model were integrated in an optimization approach implemented numerically. The search for optimal topological solutions for the urban grid is based on a Genetic Algorithm. The objective is to generate optimal scenarios for the placement of buildings into the urban grid in the pre-design phase, which enhances the use of solar irradiation. - Highlights: • A mathematical model is proposed to optimize annual solar irradiation availability. • Maximization of incident solar irradiation on roofs and facades of buildings. • Dynamic interaction of buildings is simulated aiming to evaluate shadowing zones. • Search for optimal topological solutions for urban grid based on genetic algorithm. • Solutions are compared with the conventional configurations for urban grid

  20. Autism spectrum disorders and fetal hypoxia in a population-based cohort: Accounting for missing exposures via Estimation-Maximization algorithm

    Directory of Open Access Journals (Sweden)

    Yasui Yutaka

    2011-01-01

    Background: Autism spectrum disorders (ASD) are associated with complications of pregnancy that implicate fetal hypoxia (FH); the excess of ASD in the male gender is poorly understood. We tested the hypothesis that risk of ASD is related to fetal hypoxia and investigated whether this effect is greater among males. Methods: Provincial delivery records (PDR) identified the cohort of all 218,890 singleton live births in the province of Alberta, Canada, between 01-01-98 and 12-31-04. These were followed up for ASD via ICD-9 diagnostic codes assigned by physician billing until 03-31-08. Maternal and obstetric risk factors, including FH determined from blood tests of acidity (pH), were extracted from the PDR. The binary FH status was missing in approximately half of the subjects. Assuming that characteristics of mothers and pregnancies would be correlated with FH, we used an Estimation-Maximization algorithm to estimate the FH-ASD association, allowing for both missing-at-random (MAR) and specific not-missing-at-random (NMAR) mechanisms. Results: Data indicated that there was excess risk of ASD among males who were hypoxic at birth, not materially affected by adjustment for potential confounding due to birth year and socio-economic status: OR 1.13, 95%CI: 0.96, 1.33 (MAR assumption). Limiting the analysis to full-term males, the adjusted OR under specific NMAR assumptions spanned a 95%CI of 1.0 to 1.6. Conclusion: Our results are consistent with a weak effect of fetal hypoxia on risk of ASD among males. The E-M algorithm is an efficient and flexible tool for modeling missing data in the studied setting.

  1. Quantum speedup in solving the maximal-clique problem

    Science.gov (United States)

    Chang, Weng-Long; Yu, Qi; Li, Zhaokai; Chen, Jiahui; Peng, Xinhua; Feng, Mang

    2018-03-01

    The maximal-clique problem, to find the maximally sized clique in a given graph, is classically an NP-complete computational problem, which has potential applications ranging from electrical engineering, computational chemistry, and bioinformatics to social networks. Here we develop a quantum algorithm to solve the maximal-clique problem for any graph G with n vertices with quadratic speedup over its classical counterparts, where the time and spatial complexities are reduced to O(√(2^n)) and O(n^2), respectively. With respect to oracle-related quantum algorithms for the NP-complete problems, we identify our algorithm as optimal. To justify the feasibility of the proposed quantum algorithm, we successfully solve a typical clique problem for a graph G with two vertices and one edge by carrying out a nuclear magnetic resonance experiment involving four qubits.

  2. Maximal lattice free bodies, test sets and the Frobenius problem

    DEFF Research Database (Denmark)

    Jensen, Anders Nedergaard; Lauritzen, Niels; Roune, Bjarke Hammersholt

    Maximal lattice free bodies are maximal polytopes without interior integral points. Scarf initiated the study of maximal lattice free bodies relative to the facet normals in a fixed matrix. In this paper we give an efficient algorithm for computing the maximal lattice free bodies of an integral m...... method is inspired by the novel algorithm by Einstein, Lichtblau, Strzebonski and Wagon and the Groebner basis approach by Roune....

  3. The Large Margin Mechanism for Differentially Private Maximization

    OpenAIRE

    Chaudhuri, Kamalika; Hsu, Daniel; Song, Shuang

    2014-01-01

    A basic problem in the design of privacy-preserving algorithms is the private maximization problem: the goal is to pick an item from a universe that (approximately) maximizes a data-dependent function, all under the constraint of differential privacy. This problem has been used as a sub-routine in many privacy-preserving algorithms for statistics and machine-learning. Previous algorithms for this problem are either range-dependent---i.e., their utility diminishes with the size of the universe...

  4. A Simple FDTD Algorithm for Simulating EM-Wave Propagation in General Dispersive Anisotropic Material

    KAUST Repository

    Al-Jabr, Ahmad Ali; Alsunaidi, Mohammad A.; Ng, Tien Khee; Ooi, Boon S.

    2013-01-01

    In this paper, a finite-difference time-domain (FDTD) algorithm for simulating propagation of EM waves in anisotropic material is presented. The algorithm is based on the auxiliary differential equation and the general polarization formulation. In anisotropic materials, electric fields are coupled and elements in the permittivity tensor are, in general, multiterm dispersive. The presented algorithm resolves the field coupling using a formulation based on electric polarizations. It also offers a simple procedure for the treatment of multiterm dispersion in the FDTD scheme. The algorithm is tested by simulating wave propagation in 1-D magnetized plasma, showing excellent agreement with analytical solutions. Extension of the algorithm to multidimensional structures is straightforward. The presented algorithm is efficient and simple compared to other algorithms found in the literature. © 2012 IEEE.
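
A minimal 1-D ADE-FDTD loop for a single-pole Drude medium conveys the structure of the scheme; the anisotropic field coupling and magnetized-plasma tensors of the paper are omitted, and the grid and material constants are illustrative only.

```python
import numpy as np

c0, eps0, mu0 = 299792458.0, 8.854e-12, 4e-7 * np.pi
nx, nt, dx = 400, 1000, 1e-3
dt = 0.5 * dx / c0                                  # Courant-stable time step
wp, gamma = 2 * np.pi * 50e9, 2e9                   # illustrative Drude parameters

E, H, J = np.zeros(nx), np.zeros(nx - 1), np.zeros(nx)
a = (1 - gamma * dt / 2) / (1 + gamma * dt / 2)     # ADE update coefficients
b = eps0 * wp**2 * dt / (1 + gamma * dt / 2)

for n in range(nt):
    H += dt / (mu0 * dx) * np.diff(E)               # update H from curl E
    J[1:-1] = a * J[1:-1] + b * E[1:-1]             # auxiliary differential equation
    E[1:-1] += dt / eps0 * (np.diff(H) / dx - J[1:-1])  # update E from curl H and J
    E[nx // 4] += np.exp(-((n - 80) / 25.0) ** 2)   # soft Gaussian source
```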

  5. A Simple FDTD Algorithm for Simulating EM-Wave Propagation in General Dispersive Anisotropic Material

    KAUST Repository

    Al-Jabr, Ahmad Ali

    2013-03-01

    In this paper, a finite-difference time-domain (FDTD) algorithm for simulating propagation of EM waves in anisotropic material is presented. The algorithm is based on the auxiliary differential equation and the general polarization formulation. In anisotropic materials, electric fields are coupled and elements in the permittivity tensor are, in general, multiterm dispersive. The presented algorithm resolves the field coupling using a formulation based on electric polarizations. It also offers a simple procedure for the treatment of multiterm dispersion in the FDTD scheme. The algorithm is tested by simulating wave propagation in 1-D magnetized plasma, showing excellent agreement with analytical solutions. Extension of the algorithm to multidimensional structures is straightforward. The presented algorithm is efficient and simple compared to other algorithms found in the literature. © 2012 IEEE.

  6. Maximizing the nurses' preferences in nurse scheduling problem: mathematical modeling and a meta-heuristic algorithm

    Science.gov (United States)

    Jafari, Hamed; Salmasi, Nasser

    2015-09-01

    The nurse scheduling problem (NSP) has received a great amount of attention in recent years. In the NSP, the goal is to assign shifts to the nurses in order to satisfy the hospital's demand during the planning horizon by considering different objective functions. In this research, we focus on maximizing the nurses' preferences for working shifts and weekends off by considering several important factors, such as hospital policies, labor laws, governmental regulations, and the status of nurses at the end of the previous planning horizon, in one of the largest hospitals in Iran, i.e., Milad Hospital. Due to the shortage of available nurses, at first, the minimum total number of required nurses is determined. Then, a mathematical programming model is proposed to solve the problem optimally. Since the proposed research problem is NP-hard, a meta-heuristic algorithm based on simulated annealing (SA) is applied to heuristically solve the problem in a reasonable time. An initial feasible solution generator and several novel neighborhood structures are applied to enhance the performance of the SA algorithm. Inspired by our observations in Milad Hospital, random test problems are generated to evaluate the performance of the SA algorithm. The results of computational experiments indicate that the applied SA algorithm provides solutions with an average percentage gap of 5.49% compared to the upper bounds obtained from the mathematical model. Moreover, the applied SA algorithm provides significantly better solutions in a reasonable time than the schedules provided by the head nurses.
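
A generic simulated annealing skeleton for such preference maximization; the neighbor and score hooks stand in for the paper's novel neighborhood structures and preference objective.

```python
import math
import random

def simulated_annealing(initial, neighbor, score, t0=100.0, cooling=0.995, steps=20000):
    """SA for maximization.

    initial  : feasible schedule (e.g. from a feasible-solution generator)
    neighbor : callable returning a perturbed schedule (shift swaps/moves)
    score    : callable returning total satisfied nurse preference (higher is better)
    """
    current, best, t = initial, initial, t0
    for _ in range(steps):
        cand = neighbor(current)
        delta = score(cand) - score(current)
        # always accept improvements; accept worse moves with Boltzmann probability
        if delta >= 0 or random.random() < math.exp(delta / t):
            current = cand
            if score(current) > score(best):
                best = current
        t *= cooling                       # geometric cooling schedule
    return best
```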

  7. The indexing ambiguity in serial femtosecond crystallography (SFX) resolved using an expectation maximization algorithm.

    Science.gov (United States)

    Liu, Haiguang; Spence, John C H

    2014-11-01

    Crystallographic auto-indexing algorithms provide crystal orientations and unit-cell parameters and assign Miller indices based on the geometric relations between the Bragg peaks observed in diffraction patterns. However, if the Bravais symmetry is higher than the space-group symmetry, there will be multiple indexing options that are geometrically equivalent, and hence many ways to merge diffraction intensities from protein nanocrystals. Structure factor magnitudes from full reflections are required to resolve this ambiguity but only partial reflections are available from each XFEL shot, which must be merged to obtain full reflections from these 'stills'. To resolve this chicken-and-egg problem, an expectation maximization algorithm is described that iteratively constructs a model from the intensities recorded in the diffraction patterns as the indexing ambiguity is being resolved. The reconstructed model is then used to guide the resolution of the indexing ambiguity as feedback for the next iteration. Using both simulated and experimental data collected at an X-ray laser for photosystem I in the P63 space group (which supports a merohedral twinning indexing ambiguity), the method is validated.

  8. Use of the AIC with the EM algorithm: A demonstration of a probability model selection technique

    Energy Technology Data Exchange (ETDEWEB)

    Glosup, J.G.; Axelrod, M.C. [Lawrence Livermore National Lab., CA (United States)]

    1994-11-15

    The problem of discriminating between two potential probability models, a Gaussian distribution and a mixture of Gaussian distributions, is considered. The focus of our interest is a case where the models are potentially non-nested and the parameters of the mixture model are estimated through the EM algorithm. The AIC, which is frequently used as a criterion for discriminating between non-nested models, is modified to work with the EM algorithm and is shown to provide a model selection tool for this situation. A particular problem involving an infinite mixture distribution known as Middleton's Class A model is used to demonstrate the effectiveness and limitations of this method.
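    To illustrate the mechanics of the selection criterion, the sketch below fits a two-component Gaussian mixture by EM and compares it with a single Gaussian via AIC = -2 log L + 2k. This is a generic illustration only; the report's modification of the AIC for EM-estimated models and its infinite-mixture Class A application are not reproduced.

    import numpy as np

    def aic_gaussian_vs_mixture(x, n_iter=200):
        n = x.size
        # single Gaussian: 2 free parameters, closed-form MLE
        mu, sig = x.mean(), x.std()
        ll1 = -0.5 * n * np.log(2 * np.pi * sig**2) \
              - ((x - mu)**2).sum() / (2 * sig**2)
        aic1 = -2 * ll1 + 2 * 2

        # two-component mixture: 5 free parameters, fitted by EM
        w = 0.5                                  # mixture weight
        m = np.percentile(x, [25.0, 75.0])       # component means
        s = np.array([sig, sig])                 # component std devs

        def comp_dens():
            d0 = w * np.exp(-(x - m[0])**2 / (2 * s[0]**2)) \
                 / np.sqrt(2 * np.pi * s[0]**2)
            d1 = (1 - w) * np.exp(-(x - m[1])**2 / (2 * s[1]**2)) \
                 / np.sqrt(2 * np.pi * s[1]**2)
            return d0, d1

        for _ in range(n_iter):
            d0, d1 = comp_dens()
            r = d0 / (d0 + d1)                   # E-step responsibilities
            w = r.mean()                         # M-step updates
            m[0] = (r * x).sum() / r.sum()
            m[1] = ((1 - r) * x).sum() / (1 - r).sum()
            s[0] = np.sqrt((r * (x - m[0])**2).sum() / r.sum())
            s[1] = np.sqrt(((1 - r) * (x - m[1])**2).sum() / (1 - r).sum())

        ll2 = np.log(np.add(*comp_dens())).sum()
        aic2 = -2 * ll2 + 2 * 5
        return {"gaussian": aic1, "mixture": aic2}   # smaller AIC wins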

  9. An iterative reconstruction method of complex images using expectation maximization for radial parallel MRI

    International Nuclear Information System (INIS)

    Choi, Joonsung; Kim, Dongchan; Oh, Changhyun; Han, Yeji; Park, HyunWook

    2013-01-01

    In MRI (magnetic resonance imaging), signal sampling along a radial k-space trajectory is preferred in certain applications due to its distinct advantages such as robustness to motion, and the radial sampling can be beneficial for reconstruction algorithms such as parallel MRI (pMRI) due to the incoherency. For radial MRI, the image is usually reconstructed from projection data using analytic methods such as filtered back-projection or Fourier reconstruction after gridding. However, the quality of the reconstructed image from these analytic methods can be degraded when the number of acquired projection views is insufficient. In this paper, we propose a novel reconstruction method based on the expectation maximization (EM) method, where the EM algorithm is remodeled for MRI so that complex images can be reconstructed. Then, to optimize the proposed method for radial pMRI, a reconstruction method that uses coil sensitivity information of multichannel RF coils is formulated. Experimental results from synthetic and in vivo data show that the proposed method produces better reconstructed images than the analytic methods, even from highly subsampled data, and provides monotonic convergence properties compared to the conjugate gradient based reconstruction method. (paper)

  10. Formula I(1) and I(2): Race Tracks for Likelihood Maximization Algorithms of I(1) and I(2) Cointegrated VAR Models

    Directory of Open Access Journals (Sweden)

    Jurgen A. Doornik

    2017-11-01

    This paper provides some test cases, called circuits, for the evaluation of Gaussian likelihood maximization algorithms of the cointegrated vector autoregressive model. Both I(1) and I(2) models are considered. The performance of algorithms is compared first in terms of effectiveness, defined as the ability to find the overall maximum. The next step is to compare their efficiency and reliability across experiments. The aim of the paper is to commence a collective learning project by the profession on the actual properties of algorithms for cointegrated vector autoregressive model estimation, in order to improve their quality and, as a consequence, also the reliability of empirical research.

  11. Dragonfly: an implementation of the expand-maximize-compress algorithm for single-particle imaging.

    Science.gov (United States)

    Ayyer, Kartik; Lan, Ti-Yen; Elser, Veit; Loh, N Duane

    2016-08-01

    Single-particle imaging (SPI) with X-ray free-electron lasers has the potential to change fundamentally how biomacromolecules are imaged. The structure would be derived from millions of diffraction patterns, each from a different copy of the macromolecule before it is torn apart by radiation damage. The challenges posed by the resultant data stream are staggering: millions of incomplete, noisy and un-oriented patterns have to be computationally assembled into a three-dimensional intensity map and then phase reconstructed. In this paper, the Dragonfly software package is described, based on a parallel implementation of the expand-maximize-compress reconstruction algorithm that is well suited for this task. Auxiliary modules to simulate SPI data streams are also included to assess the feasibility of proposed SPI experiments at the Linac Coherent Light Source, Stanford, California, USA.

  12. Using and comparing metaheuristic algorithms for optimizing bidding strategy viewpoint of profit maximization of generators

    Science.gov (United States)

    Mousavi, Seyed Hosein; Nazemi, Ali; Hafezalkotob, Ashkan

    2015-03-01

    With the formation of competitive electricity markets in the world, optimization of bidding strategies has become one of the main topics in studies related to market design. Market design is challenged by multiple objectives that need to be satisfied. The solution of those multi-objective problems is often searched over the combined strategy space, and thus requires the simultaneous optimization of multiple parameters. The problem is formulated analytically using the Nash equilibrium concept for games composed of large numbers of players having discrete and large strategy spaces. The solution methodology is based on a characterization of Nash equilibrium in terms of minima of a function and relies on a metaheuristic optimization approach to find these minima. This paper presents several metaheuristic algorithms, namely genetic algorithm (GA), simulated annealing (SA), and a hybrid simulated annealing genetic algorithm (HSAGA), to simulate how generators bid in the spot electricity market from the viewpoint of profit maximization, given the other generators' strategies, and compares their results. As both GA and SA are generic search methods, HSAGA is also a generic search method. The model, based on actual data, is implemented for a peak hour of Tehran's wholesale spot market in 2012. The results of the simulations show that GA outperforms SA and HSAGA in computing time, number of function evaluations and computing stability, and that the Nash equilibria computed by GA vary less across runs than those of the other algorithms.

  13. An analytical study of composite laminate lay-up using search algorithms for maximization of flexural stiffness and minimization of springback angle

    Science.gov (United States)

    Singh, Ranjan Kumar; Rinawa, Moti Lal

    2018-04-01

    The residual stresses arising in fiber-reinforced laminates during their curing in closed molds lead to changes in the composites after their removal from the molds and cooling. One of these dimensional changes of angle sections is called springback. Parameters such as lay-up, stacking sequence, material system, cure temperature, and thickness play an important role in it. In the present work, we attempt to optimize the lay-up and stacking sequence for maximization of flexural stiffness and minimization of springback angle. Search algorithms are employed to obtain the best sequence through a repair strategy such as swap. A new search algorithm, termed the lay-up search algorithm (LSA), is also proposed as an extension of the permutation search algorithm (PSA). The efficacy of the PSA and LSA is tested on laminates with a range of lay-ups. A computer code implementing the above schemes is developed in MATLAB. Also, strategies for multi-objective optimization using search algorithms are suggested and tested.

  14. The maximization of the efficiency in the energy conversion in isolated photovoltaic systems; Tecnicas de maxima transferencia de potencia em sistemas fotovoltaicos isolados

    Energy Technology Data Exchange (ETDEWEB)

    Machado-Neto, L. V. B.; Cabral, C. V. T.; Diniz, A. S. A. C.; Cortizo, P. C.; Oliveira-Filho, D.

    2004-07-01

    Maximizing the efficiency of energy conversion is essential to developing the technical and economic sustainability of photovoltaic solar energy systems. This paper studies a power maximization technique for photovoltaic generators, namely Maximum Power Point Tracking (MPPT). Different strategies are currently being studied; this work consists of the development of an electronic converter prototype for MPPT, including the development of the tracking algorithm, implemented in a microcontroller. A simulation of the system was also carried out, a prototype was assembled, and the first results are presented here. (Author)
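    For orientation, one widely used MPPT strategy is perturb and observe, sketched below. The read_v, read_i and set_v callbacks stand for hypothetical converter driver functions; the paper does not state which tracking algorithm its microcontroller prototype implements.

    def perturb_and_observe(read_v, read_i, set_v, v_start, dv=0.1, steps=1000):
        """Step the operating voltage and keep the direction that raises power."""
        v, p_prev, direction = v_start, 0.0, +1
        for _ in range(steps):
            p = read_v() * read_i()      # instantaneous PV power
            if p < p_prev:               # power fell: reverse the perturbation
                direction = -direction
            v += direction * dv          # perturb the operating voltage
            set_v(v)
            p_prev = p
        return v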

  15. Real-time topic-aware influence maximization using preprocessing.

    Science.gov (United States)

    Chen, Wei; Lin, Tian; Yang, Cheng

    2016-01-01

    Influence maximization is the task of finding a set of seed nodes in a social network such that the influence spread of these seed nodes based on a certain influence diffusion model is maximized. Topic-aware influence diffusion models have recently been proposed to address the issue that influence between a pair of users is often topic-dependent and that the information, ideas, innovations, etc. being propagated in networks are typically mixtures of topics. In this paper, we focus on the topic-aware influence maximization task. In particular, we study preprocessing methods to avoid redoing influence maximization for each topic mixture from scratch. We explore two preprocessing algorithms with theoretical justifications. Our empirical results on data obtained in a couple of existing studies demonstrate that one of our algorithms stands out as a strong candidate, providing microsecond online response time and competitive influence spread with reasonable preprocessing effort.
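    For context, the baseline that such preprocessing is meant to avoid re-running is the classic greedy seed selection with Monte Carlo spread estimation, sketched below under a plain (single-topic) Independent Cascade model; the paper's topic-aware models and preprocessing algorithms are not reproduced.

    import random

    def ic_spread(graph, seeds, trials=200):
        """Monte Carlo spread estimate under Independent Cascade.
        graph: dict node -> list of (neighbor, activation_prob)."""
        total = 0
        for _ in range(trials):
            active, frontier = set(seeds), list(seeds)
            while frontier:
                u = frontier.pop()
                for v, p in graph.get(u, []):
                    if v not in active and random.random() < p:
                        active.add(v)
                        frontier.append(v)
            total += len(active)
        return total / trials

    def greedy_im(graph, k):
        """Greedy seed selection: repeatedly add the node with best marginal gain."""
        seeds = []
        for _ in range(k):
            best = max((n for n in graph if n not in seeds),
                       key=lambda n: ic_spread(graph, seeds + [n]))
            seeds.append(best)
        return seeds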

  16. Expectation Maximization Algorithm for Box-Cox Transformation Cure Rate Model and Assessment of Model Misspecification Under Weibull Lifetimes.

    Science.gov (United States)

    Pal, Suvra; Balakrishnan, Narayanaswamy

    2018-05-01

    In this paper, we develop likelihood inference based on the expectation maximization algorithm for the Box-Cox transformation cure rate model assuming the lifetimes to follow a Weibull distribution. A simulation study is carried out to demonstrate the performance of the proposed estimation method. Through Monte Carlo simulations, we also study the effect of model misspecification on the estimate of cure rate. Finally, we analyze a well-known data on melanoma with the model and the inferential method developed here.

  17. A Case Study on Maximizing Aqua Feed Pellet Properties Using Response Surface Methodology and Genetic Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Tumuluru, Jaya

    2013-01-10

    Aims: The present case study is on maximizing aqua feed properties using response surface methodology and genetic algorithm. Study Design: The effects of extrusion process variables, such as screw speed, L/D ratio, barrel temperature, and feed moisture content, were analyzed to maximize aqua feed properties such as water stability, true density, and expansion ratio. Place and Duration of Study: This study was carried out in the Department of Agricultural and Food Engineering, Indian Institute of Technology, Kharagpur, India. Methodology: A variable length single screw extruder was used in the study. The process variables selected were screw speed (rpm), length-to-diameter (L/D) ratio, barrel temperature (degrees C), and feed moisture content (%). The pelletized aqua feed was analyzed for physical properties, namely water stability (WS), true density (TD), and expansion ratio (ER). Extrusion experimental data were collected based on a central composite design. The experimental data were further analyzed using response surface methodology (RSM) and genetic algorithm (GA) for maximizing feed properties. Results: Regression equations developed for the experimental data adequately described the effect of the process variables on the physical properties, with coefficient of determination values (R2) of > 0.95. RSM analysis indicated WS, ER, and TD were maximized at an L/D ratio of 12-13, screw speed of 60-80 rpm, feed moisture content of 30-40%, and barrel temperature of ≤ 80 degrees C for ER and TD and > 90 degrees C for WS. Based on GA analysis, a maximum WS of 98.10% was predicted at a screw speed of 96.71 rpm, L/D ratio of 13.67, barrel temperature of 96.26 degrees C, and feed moisture content of 33.55%. Maximum ER and TD of 0.99 and 1346.9 kg/m3 were also predicted at screw speeds of 60.37 and 90.24 rpm, L/D ratios of 12.18 and 13.52, barrel temperatures of 68.50 and 64.88 degrees C, and medium feed moisture contents of 33.61 and 38.36%. Conclusion: The present data analysis indicated

  18. A system for the 3D reconstruction of retracted-septa PET data using the EM algorithm

    International Nuclear Information System (INIS)

    Johnson, C.A.; Yan, Y.; Carson, R.E.; Martino, R.L.; Daube-Witherspoon, M.E.

    1995-01-01

    The authors have implemented the EM reconstruction algorithm for volume acquisition from current generation retracted-septa PET scanners. Although the software was designed for a GE Advance scanner, it is easily adaptable to other 3D scanners. The reconstruction software was written for an Intel iPSC/860 parallel computer with 128 compute nodes. Running on 32 processors, the algorithm requires approximately 55 minutes per iteration to reconstruct a 128 x 128 x 35 image. No projection data compression schemes or other approximations were used in the implementation. Extensive use of EM system matrix (C_ij) symmetries (including the 8-fold in-plane symmetries, 2-fold axial symmetries, and axial parallel line redundancies) reduces the storage cost by a factor of 188. The parallel algorithm operates on distributed projection data which are decomposed by base-symmetry angles. Symmetry operators copy and index the C_ij chord to the form required for the particular symmetry. The use of asynchronous reads, lookup tables, and optimized image indexing improves computational performance

  19. Estimation of parameters in Shot-Noise-Driven Doubly Stochastic Poisson processes using the EM algorithm--modeling of pre- and postsynaptic spike trains.

    Science.gov (United States)

    Mino, H

    2007-01-01

    This work addresses the estimation of the parameters of Shot-Noise-Driven Doubly Stochastic Poisson Processes (SND-DSPPs), namely the impulse response (IR) functions of the linear time-invariant systems generating the intensity processes, under the assumption that multivariate presynaptic spike trains and postsynaptic spike trains can be modeled by SND-DSPPs. An explicit formula for estimating the IR functions from observations of multivariate input processes of the linear systems and the corresponding counting process (output process) is derived utilizing the expectation maximization (EM) algorithm. The validity of the estimation formula was verified through Monte Carlo simulations in which two presynaptic spike trains and one postsynaptic spike train were assumed to be observable. The IR functions estimated on the basis of the proposed identification method were close to the true IR functions. The proposed method will play an important role in identifying the input-output relationship of pre- and postsynaptic neural spike trains in practical situations.

  20. Efficiency maximization and performance evaluation of hybrid dual channel semitransparent photovoltaic thermal module using fuzzyfied genetic algorithm

    International Nuclear Information System (INIS)

    Singh, Sonveer; Agrawal, Sanjay

    2016-01-01

    Highlights: • Thermal modeling of a novel dual channel semitransparent photovoltaic thermal hybrid module. • Efficiency maximization and performance evaluation of the dual channel photovoltaic thermal module. • Annual performance has been evaluated for Srinagar, Jodhpur, Bangalore and New Delhi (India). • There are improvements in results for the optimized system as compared to the un-optimized system. - Abstract: The work has been carried out in two steps. First, the parameters of the hybrid dual channel semitransparent photovoltaic thermal module have been optimized using a fuzzyfied genetic algorithm. During the course of optimization, overall exergy efficiency is considered as the objective function and different design parameters of the proposed module have been optimized. A fuzzy controller is used to improve the performance of the genetic algorithm, and the approach is called a fuzzyfied genetic algorithm. In the second step, the performance of the module has been analyzed for four cities of India: Srinagar, Bangalore, Jodhpur and New Delhi. The performance of the module has been evaluated for daytime 08:00 AM to 05:00 PM and annually from January to December. Notably, an average improvement occurs in the electrical efficiency of the optimized module, together with a reduction in solar cell temperature as compared to the un-optimized module.

  1. Algorithms over partially ordered sets

    DEFF Research Database (Denmark)

    Baer, Robert M.; Østerby, Ole

    1969-01-01

    in partially ordered sets, answer the combinatorial question of how many maximal chains might exist in a partially ordered set with n elements, and we give an algorithm for enumerating all maximal chains. We give (in § 3) algorithms which decide whether a partially ordered set is a (lower or upper) semi-lattice, and whether a lattice has distributive, modular, and Boolean properties. Finally (in § 4) we give Algol realizations of the various algorithms....

  2. Comparison study of reconstruction algorithms for prototype digital breast tomosynthesis using various breast phantoms.

    Science.gov (United States)

    Kim, Ye-seul; Park, Hye-suk; Lee, Haeng-Hwa; Choi, Young-Wook; Choi, Jae-Gu; Kim, Hak Hee; Kim, Hee-Joung

    2016-02-01

    Digital breast tomosynthesis (DBT) is a recently developed system for three-dimensional imaging that offers the potential to reduce the false positives of mammography by preventing tissue overlap. Many qualitative evaluations of digital breast tomosynthesis were previously performed by using a phantom with an unrealistic model and with heterogeneous background and noise, which is not representative of real breasts. The purpose of the present work was to compare reconstruction algorithms for DBT by using various breast phantoms; validation was also performed by using patient images. DBT was performed by using a prototype unit that was optimized for very low exposures and rapid readout. Three algorithms were compared: a back-projection (BP) algorithm, a filtered BP (FBP) algorithm, and an iterative expectation maximization (EM) algorithm. To compare the algorithms, three types of breast phantoms (homogeneous background phantom, heterogeneous background phantom, and anthropomorphic breast phantom) were evaluated, and clinical images were also reconstructed by using the different reconstruction algorithms. The in-plane image quality was evaluated based on the line profile and the contrast-to-noise ratio (CNR), and out-of-plane artifacts were evaluated by means of the artifact spread function (ASF). Parenchymal texture features of contrast and homogeneity were computed based on reconstructed images of an anthropomorphic breast phantom. The clinical images were studied to validate the effect of reconstruction algorithms. The results showed that the CNRs of masses reconstructed by using the EM algorithm were slightly higher than those obtained by using the BP algorithm, whereas the FBP algorithm yielded much lower CNR due to its high fluctuations of background noise. The FBP algorithm provides the best conspicuity for larger calcifications by enhancing their contrast and sharpness more than the other algorithms; however, in the case of small-size and low

  3. Acceleration of the direct reconstruction of linear parametric images using nested algorithms

    International Nuclear Information System (INIS)

    Wang Guobao; Qi Jinyi

    2010-01-01

    Parametric imaging using dynamic positron emission tomography (PET) provides important information for biological research and clinical diagnosis. Indirect and direct methods have been developed for reconstructing linear parametric images from dynamic PET data. Indirect methods are relatively simple and easy to implement because the image reconstruction and kinetic modeling are performed in two separate steps. Direct methods estimate parametric images directly from raw PET data and are statistically more efficient. However, the convergence rate of direct algorithms can be slow due to the coupling between the reconstruction and kinetic modeling. Here we present two fast gradient-type algorithms for direct reconstruction of linear parametric images. The new algorithms decouple the reconstruction and linear parametric modeling at each iteration by employing the principle of optimization transfer. Convergence speed is accelerated by running more sub-iterations of linear parametric estimation because the computation cost of the linear parametric modeling is much less than that of the image reconstruction. Computer simulation studies demonstrated that the new algorithms converge much faster than the traditional expectation maximization (EM) and the preconditioned conjugate gradient algorithms for dynamic PET.

  4. Nonlinear Impairment Compensation Using Expectation Maximization for PDM 16-QAM Systems

    DEFF Research Database (Denmark)

    Zibar, Darko; Winther, Ole; Franceschi, Niccolo

    2012-01-01

    We show experimentally that by using a nonlinear signal processing algorithm, expectation maximization, nonlinear system tolerance can be increased by 2 dB. Expectation maximization is also effective in combating I/Q modulator nonlinearities and laser linewidth....
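    A sketch of the underlying idea, re-estimating constellation cluster means and a shared noise variance with EM so that decision regions track nonlinear distortion, is given below. The received samples y and the initial 16-QAM grid are assumptions; the experimental receiver chain of the paper is not reproduced.

    import numpy as np

    def em_constellation(y, centers, n_iter=30, var0=0.02):
        """y: complex received samples; centers: ideal 16-QAM symbol grid."""
        mu = centers.astype(complex)
        var = var0
        for _ in range(n_iter):
            d2 = np.abs(y[:, None] - mu[None, :]) ** 2
            # E-step: responsibilities (row-wise max subtracted for stability)
            r = np.exp(-(d2 - d2.min(axis=1, keepdims=True)) / var)
            r /= r.sum(axis=1, keepdims=True)
            # M-step: cluster means and one shared noise variance
            mu = (r * y[:, None]).sum(axis=0) / r.sum(axis=0)
            var = (r * np.abs(y[:, None] - mu[None, :]) ** 2).sum() / y.size
        return mu, var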

  5. Maximizing Entropy over Markov Processes

    DEFF Research Database (Denmark)

    Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis

    2013-01-01

    The channel capacity of a deterministic system with confidential data is an upper bound on the amount of bits of data an attacker can learn from the system. We encode all possible attacks to a system using a probabilistic specification, an Interval Markov Chain. Then the channel capacity...... as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how...... to use Interval Markov Chains to model abstractions of deterministic systems with confidential data, and use the above results to compute their channel capacity. These results are a foundation for ongoing work on computing channel capacity for abstractions of programs derived from code....

  6. Maximizing entropy over Markov processes

    DEFF Research Database (Denmark)

    Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis

    2014-01-01

    The channel capacity of a deterministic system with confidential data is an upper bound on the amount of bits of data an attacker can learn from the system. We encode all possible attacks to a system using a probabilistic specification, an Interval Markov Chain. Then the channel capacity...... as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how...... to use Interval Markov Chains to model abstractions of deterministic systems with confidential data, and use the above results to compute their channel capacity. These results are a foundation for ongoing work on computing channel capacity for abstractions of programs derived from code. © 2014 Elsevier...

  7. Employing a Monte Carlo algorithm in Newton-type methods for restricted maximum likelihood estimation of genetic parameters.

    Directory of Open Access Journals (Sweden)

    Kaarina Matilainen

    Estimation of variance components by Monte Carlo (MC) expectation maximization (EM) restricted maximum likelihood (REML) is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the need for a large number of iterations of the EM algorithm. To decrease the computing time we explored the use of faster converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR), where the information matrix was generated via sampling; MC average information (AI), where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was searched using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with the corresponding analytical ones. MC NR REML and MC AI REML enhanced convergence compared to MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration demanded extra solving of mixed model equations by the number of parameters to be estimated. MC Broyden's method required the largest number of MC samples with our small data set and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing a MC algorithm. Overall, use of a MC algorithm with Newton-type methods proved feasible and the results encourage testing of these methods with different kinds of large-scale problem settings.

  8. Finite sample performance of the E-M algorithm for ranks data modelling

    Directory of Open Access Journals (Sweden)

    Angela D'Elia

    2007-10-01

    We check the finite sample performance of the maximum likelihood estimators of the parameters of a mixture distribution recently introduced for modelling ranks/preference data. The estimates are derived by the E-M algorithm and the performance is evaluated from both univariate and bivariate points of view. While the results are generally acceptable as far as bias is concerned, the Monte Carlo experiment shows a different behaviour of the estimators' efficiency for the two parameters of the mixture, mainly depending upon their location in the admissible parametric space. Some operative suggestions conclude the paper.

  9. Método para classificação de tipos de erros humanos: estudo de caso em acidentes em canteiros de obras (An algorithm for classifying error types of front-line workers: a case study in accidents in construction sites)

    Directory of Open Access Journals (Sweden)

    Tarcisio Abreu Saurin

    2012-04-01

    The objective of this study is to propose improvements in the algorithm for classifying error types of front-line workers. The improvements were identified on the basis of testing the algorithm in construction sites, an environment where it had not been applied before. To this end, 19 occupational accidents which occurred in a small construction company were investigated, and the error types of both injured workers and team members were classified. The results indicated that there was no error in 70.5% of the 34 times the algorithm was applied, providing evidence that the causes were strongly linked to organizational factors. Moreover, the study presents recommendations to facilitate the interpretation of the questions that constitute the algorithm, as well as changes in some questions in comparison to the previous versions of the tool.

  10. Memetic algorithms for de novo motif-finding in biomedical sequences.

    Science.gov (United States)

    Bi, Chengpeng

    2012-09-01

    The objectives of this study are to design and implement a new memetic algorithm for de novo motif discovery, which is then applied to detect important signals hidden in various biomedical molecular sequences. In this paper, memetic algorithms are developed and tested on de novo motif-finding problems. Several strategies are employed in the algorithm design, not only to efficiently explore the multiple sequence local alignment space, but also to effectively uncover the molecular signals. As a result, there are a number of key features in the implementation of the memetic motif-finding algorithm (MaMotif), including a chromosome replacement operator, a chromosome alteration-aware local search operator, a truncated local search strategy, and a stochastic operation of local search imposed on individual learning. To test the new algorithm, we compare MaMotif with a few other similar algorithms using simulated and experimental data including genomic DNA, primary microRNA sequences (let-7 family), and transmembrane protein sequences. The new memetic motif-finding algorithm is successfully implemented in C++ and exhaustively tested with various simulated and real biological sequences. In the simulations, MaMotif is the most time-efficient of the algorithms compared; that is, it runs 2 times faster than the expectation maximization (EM) method and 16 times faster than the genetic algorithm-based EM hybrid. In both simulated and experimental testing, results show that the new algorithm compares favorably with, or is superior to, the other algorithms. Notably, MaMotif is able to successfully discover the transcription factors' binding sites in chromatin immunoprecipitation followed by massively parallel sequencing (ChIP-Seq) data, correctly uncover the RNA splicing signals in gene expression, and precisely find the highly conserved helix motif in the transmembrane protein sequences, as well as rightly detect the palindromic segments in the primary microRNA sequences.

  11. Optimization of cascade hydropower system operation by genetic algorithm to maximize clean energy output

    Directory of Open Access Journals (Sweden)

    Aida Tayebiyan

    2016-06-01

    Background: Several reservoir systems have been constructed for hydropower generation around the world. Hydropower offers an economical source of electricity with reduced carbon emissions, making it a clean and renewable source of energy. Reservoirs that generate hydropower are typically operated with the goal of maximizing energy revenue. Yet, reservoir systems are often inefficiently operated and managed according to policies determined at construction time. It is worth noting that with little enhancement in the operation of a reservoir system, the efficiency of the scheme could increase for many consumers. Methods: This research develops simulation-optimization models that reflect a discrete hedging policy (DHP) to manage and operate a hydropower reservoir system, and analyses it in both single and multi-reservoir systems. Accordingly, three operational models (two single-reservoir systems and one multi-reservoir system) were constructed and optimized by a genetic algorithm (GA). Maximizing the total power generation over the time horizon is chosen as the objective function in order to improve the functional efficiency of hydropower production, with consideration of operational and physical limitations. The constructed models, which form a cascade hydropower reservoir system, have been tested and evaluated in the Cameron Highlands and Batang Padang in Malaysia. Results: According to the given results, use of DHP for hydropower reservoir system operation could increase the power generation output by nearly 13% in the studied reservoir system compared to the present operating policy (TNB operation). This substantial increase in power production will enhance economic development. Moreover, the results for the single and multi-reservoir systems affirmed that the hedging policy could manage the single system much better than the multi-reservoir system. Conclusion: It can be summarized that DHP is an efficient and feasible policy, which could be used
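    As a sketch of the optimization loop such a study relies on, here is a generic genetic algorithm skeleton. The fitness, random_sol, crossover and mutate callbacks are hypothetical placeholders (fitness would be total energy output under the hedging thresholds encoded in a solution, subject to reservoir constraints, none of which are modeled here).

    import random

    def genetic_algorithm(fitness, random_sol, crossover, mutate,
                          pop_size=50, generations=200):
        """Generic GA loop maximizing fitness(solution)."""
        pop = [random_sol() for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            elite = pop[: pop_size // 2]        # truncation selection
            children = []
            while len(children) < pop_size - len(elite):
                a, b = random.sample(elite, 2)  # pick two elite parents
                children.append(mutate(crossover(a, b)))
            pop = elite + children
        return max(pop, key=fitness)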

  12. Evaluation of 3D reconstruction algorithms for a small animal PET camera

    International Nuclear Information System (INIS)

    Johnson, C.A.; Gandler, W.R.; Seidel, J.

    1996-01-01

    The use of paired, opposing position-sensitive phototube scintillation cameras (SCs) operating in coincidence for small animal imaging with positron emitters is currently under study. Because of the low sensitivity of the system even in 3D mode and the need to produce images with high resolution, it was postulated that a 3D expectation maximization (EM) reconstruction algorithm might be well suited for this application. We investigated four reconstruction algorithms for the 3D SC PET camera: 2D filtered back-projection (FBP), 2D ordered subset EM (OSEM), 3D reprojection (3DRP), and 3D OSEM. Noise was assessed for all slices by the coefficient of variation in a simulated uniform cylinder. Resolution was assessed from a simulation of 15 point sources in the warm background of the uniform cylinder. At comparable noise levels, the resolution achieved with OSEM (0.9-mm to 1.2-mm) is significantly better than that obtained with FBP or 3DRP (1.5-mm to 2.0-mm). Images of a rat skull labeled with 18F-fluoride suggest that 3D OSEM can improve image quality of a small animal PET camera
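    For reference, what distinguishes OSEM from plain EM is a sweep of EM-style multiplicative updates over subsets of the projection data, as in this minimal sketch for a generic linear Poisson emission model (the scanner geometry and 3D system model of the study are not reproduced):

    import numpy as np

    def osem(A, y, n_subsets=4, n_iter=5):
        """A: (n_bins, n_voxels) system matrix; y: measured counts,
        modeled as y ~ Poisson(A @ x)."""
        x = np.ones(A.shape[1])
        subsets = np.array_split(np.arange(A.shape[0]), n_subsets)
        for _ in range(n_iter):
            for idx in subsets:                  # one EM-style update per subset
                As = A[idx]
                ratio = y[idx] / np.maximum(As @ x, 1e-12)
                x *= (As.T @ ratio) / np.maximum(As.sum(axis=0), 1e-12)
        return x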

  13. Image reconstruction of single photon emission computed tomography (SPECT) on a pebble bed reactor (PBR) using expectation maximization and exact inversion algorithms: Comparison study by means of numerical phantom

    Energy Technology Data Exchange (ETDEWEB)

    Razali, Azhani Mohd, E-mail: azhani@nuclearmalaysia.gov.my; Abdullah, Jaafar, E-mail: jaafar@nuclearmalaysia.gov.my [Plant Assessment Technology (PAT) Group, Industrial Technology Division, Malaysian Nuclear Agency, Bangi, 43000 Kajang (Malaysia)

    2015-04-29

    Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical application, and it is part of medical imaging modalities that made the diagnosis and treatment of disease possible. However, SPECT technique is not only limited to the medical sector. Many works are carried out to adapt the same concept by using high-energy photon emission to diagnose process malfunctions in critical industrial systems such as in chemical reaction engineering research laboratories, as well as in oil and gas, petrochemical and petrochemical refining industries. Motivated by vast applications of SPECT technique, this work attempts to study the application of SPECT on a Pebble Bed Reactor (PBR) using numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work aims to compare two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time compared to the Expectation Maximization Algorithm.

  14. Image reconstruction of single photon emission computed tomography (SPECT) on a pebble bed reactor (PBR) using expectation maximization and exact inversion algorithms: Comparison study by means of numerical phantom

    International Nuclear Information System (INIS)

    Razali, Azhani Mohd; Abdullah, Jaafar

    2015-01-01

    Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical application, and it is part of medical imaging modalities that made the diagnosis and treatment of disease possible. However, SPECT technique is not only limited to the medical sector. Many works are carried out to adapt the same concept by using high-energy photon emission to diagnose process malfunctions in critical industrial systems such as in chemical reaction engineering research laboratories, as well as in oil and gas, petrochemical and petrochemical refining industries. Motivated by vast applications of SPECT technique, this work attempts to study the application of SPECT on a Pebble Bed Reactor (PBR) using numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work aims to compare two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time compared to the Expectation Maximization Algorithm

  15. Image reconstruction of single photon emission computed tomography (SPECT) on a pebble bed reactor (PBR) using expectation maximization and exact inversion algorithms: Comparison study by means of numerical phantom

    Science.gov (United States)

    Razali, Azhani Mohd; Abdullah, Jaafar

    2015-04-01

    Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical application, and it is part of medical imaging modalities that made the diagnosis and treatment of disease possible. However, SPECT technique is not only limited to the medical sector. Many works are carried out to adapt the same concept by using high-energy photon emission to diagnose process malfunctions in critical industrial systems such as in chemical reaction engineering research laboratories, as well as in oil and gas, petrochemical and petrochemical refining industries. Motivated by vast applications of SPECT technique, this work attempts to study the application of SPECT on a Pebble Bed Reactor (PBR) using numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work aims to compare two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time compared to the Expectation Maximization Algorithm.

  16. Tracking of Multiple Moving Sources Using Recursive EM Algorithm

    Directory of Open Access Journals (Sweden)

    Böhme, Johann F.

    2005-01-01

    We deal with recursive direction-of-arrival (DOA) estimation of multiple moving sources. Based on the recursive EM algorithm, we develop two recursive procedures to estimate the time-varying DOA parameter for narrowband signals. The first procedure requires no prior knowledge about the source movement. The second procedure assumes that the motion of the moving sources is described by a linear polynomial model. The proposed recursion updates the polynomial coefficients when new data arrive. The suggested approaches have two major advantages: simple implementation and easy extension to wideband signals. Numerical experiments show that both procedures provide excellent results in a slowly changing environment. When the DOA parameter changes fast or two source directions cross each other, the procedure designed for a linear polynomial model has better performance than the general procedure. Compared to the beamforming technique based on the same parameterization, our approach is computationally favorable and has a wider range of applications.

  17. New Maximal Two-distance Sets

    DEFF Research Database (Denmark)

    Lisonek, Petr

    1996-01-01

    A two-distance set in E^d is a point set X in the d-dimensional Euclidean space such that the distances between distinct points in X assume only two different non-zero values. Based on results from classical distance geometry, we develop an algorithm to classify, for a given dimension, all maximal...... (largest possible) two-distance sets in E^d. Using this algorithm we have completed the full classification for all dimensions less than or equal to 7, and we have found one set in E^8 whose maximality follows from Blokhuis' upper bound on sizes of s-distance sets. While in the dimensions less than or equal to 6...

  18. Super-Relaxed (η)-Proximal Point Algorithms, Relaxed (η)-Proximal Point Algorithms, Linear Convergence Analysis, and Nonlinear Variational Inclusions

    Directory of Open Access Journals (Sweden)

    Agarwal, Ravi P.

    2009-01-01

    We glance at recent advances to the general theory of maximal (set-valued) monotone mappings and their role demonstrated to examine the convex programming and closely related field of nonlinear variational inequalities. We focus mostly on applications of the super-relaxed (η)-proximal point algorithm to the context of solving a class of nonlinear variational inclusion problems, based on the notion of maximal (η)-monotonicity. Investigations highlighted in this communication are greatly influenced by the celebrated work of Rockafellar (1976), while others have played a significant part as well in generalizing the proximal point algorithm considered by Rockafellar (1976) to the case of the relaxed proximal point algorithm by Eckstein and Bertsekas (1992). Even for the linear convergence analysis for the overrelaxed (or super-relaxed) (η)-proximal point algorithm, the fundamental model for Rockafellar's case does the job. Furthermore, we attempt to explore possibilities of generalizing the Yosida regularization/approximation in light of maximal (η)-monotonicity, and then applying to first-order evolution equations/inclusions.

  19. Distributed-Memory Fast Maximal Independent Set

    Energy Technology Data Exchange (ETDEWEB)

    Kanewala Appuhamilage, Thejaka Amila J.; Zalewski, Marcin J.; Lumsdaine, Andrew

    2017-09-13

    The Maximal Independent Set (MIS) graph problem arises in many applications such as computer vision, information theory, molecular biology, and process scheduling. The growing scale of MIS problems suggests the use of distributed-memory hardware as a cost-effective approach to providing necessary compute and memory resources. Luby proposed four randomized algorithms to solve the MIS problem. All those algorithms are designed focusing on shared-memory machines and are analyzed using the PRAM model. These algorithms do not have direct efficient distributed-memory implementations. In this paper, we extend two of Luby’s seminal MIS algorithms, “Luby(A)” and “Luby(B),” to distributed-memory execution, and we evaluate their performance. We compare our results with the “Filtered MIS” implementation in the Combinatorial BLAS library for two types of synthetic graph inputs.
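    For orientation, a serial sketch of the randomized round structure behind the Luby(A) variant is shown below: in each round, a vertex enters the MIS if its random draw beats those of all surviving neighbors. The distributed-memory data distribution and communication that are the paper's actual contribution are not reproduced.

    import random

    def luby_mis(adj):
        """adj: dict node -> set of neighbor nodes. Returns a maximal
        independent set."""
        remaining = set(adj)
        mis = set()
        while remaining:
            r = {v: random.random() for v in remaining}
            # a vertex wins if its draw is smaller than all surviving neighbors'
            winners = {v for v in remaining
                       if all(r[v] < r[u] for u in adj[v] if u in remaining)}
            mis |= winners
            # drop winners and their neighbors from the graph
            dropped = set(winners)
            for v in winners:
                dropped |= adj[v] & remaining
            remaining -= dropped
        return mis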

  20. An efficient forward–reverse expectation-maximization algorithm for statistical inference in stochastic reaction networks

    KAUST Repository

    Bayer, Christian

    2016-02-20

    © 2016 Taylor & Francis Group, LLC. In this work, we present an extension of the forward–reverse representation introduced by Bayer and Schoenmakers (Annals of Applied Probability, 24(5):1994–2032, 2014) to the context of stochastic reaction networks (SRNs). We apply this stochastic representation to the computation of efficient approximations of expected values of functionals of SRN bridges, that is, SRNs conditional on their values in the extremes of given time intervals. We then employ this SRN bridge-generation technique to the statistical inference problem of approximating reaction propensities based on discretely observed data. To this end, we introduce a two-phase iterative inference method in which, during phase I, we solve a set of deterministic optimization problems where the SRNs are replaced by their reaction-rate ordinary differential equations approximation; then, during phase II, we apply the Monte Carlo version of the expectation-maximization algorithm to the phase I output. By selecting a set of overdispersed seeds as initial points in phase I, the output of parallel runs from our two-phase method is a cluster of approximate maximum likelihood estimates. Our results are supported by numerical examples.

  1. An efficient forward-reverse expectation-maximization algorithm for statistical inference in stochastic reaction networks

    KAUST Repository

    Vilanova, Pedro

    2016-01-07

    In this work, we present an extension of the forward-reverse representation introduced in 'Simulation of forward-reverse stochastic representations for conditional diffusions', a 2014 paper by Bayer and Schoenmakers, to the context of stochastic reaction networks (SRNs). We apply this stochastic representation to the computation of efficient approximations of expected values of functionals of SRN bridges, i.e., SRNs conditional on their values in the extremes of given time-intervals. We then employ this SRN bridge-generation technique to the statistical inference problem of approximating reaction propensities based on discretely observed data. To this end, we introduce a two-phase iterative inference method in which, during phase I, we solve a set of deterministic optimization problems where the SRNs are replaced by their reaction-rate ordinary differential equations approximation; then, during phase II, we apply the Monte Carlo version of the Expectation-Maximization algorithm to the phase I output. By selecting a set of over-dispersed seeds as initial points in phase I, the output of parallel runs from our two-phase method is a cluster of approximate maximum likelihood estimates. Our results are supported by numerical examples.

  2. Fisher's method of scoring in statistical image reconstruction: comparison of Jacobi and Gauss-Seidel iterative schemes.

    Science.gov (United States)

    Hudson, H M; Ma, J; Green, P

    1994-01-01

    Many algorithms for medical image reconstruction adopt versions of the expectation-maximization (EM) algorithm. In this approach, parameter estimates are obtained which maximize a complete data likelihood or penalized likelihood, in each iteration. Implicitly (and sometimes explicitly) penalized algorithms require smoothing of the current reconstruction in the image domain as part of their iteration scheme. In this paper, we discuss alternatives to EM which adapt Fisher's method of scoring (FS) and other methods for direct maximization of the incomplete data likelihood. Jacobi and Gauss-Seidel methods for non-linear optimization provide efficient algorithms applying FS in tomography. One approach uses smoothed projection data in its iterations. We investigate the convergence of Jacobi and Gauss-Seidel algorithms with clinical tomographic projection data.
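    To make the scoring update concrete, here is a generic Fisher scoring iteration for a Poisson log-linear model, i.e., beta <- beta + I(beta)^{-1} score(beta), with the expected (Fisher) information in place of the observed Hessian. This illustrates FS in general, not the tomography-specific Jacobi or Gauss-Seidel schemes compared in the paper.

    import numpy as np

    def fisher_scoring_poisson(X, y, n_iter=25):
        """X: (n, p) design matrix; y: counts with E[y] = exp(X @ beta)."""
        beta = np.zeros(X.shape[1])
        for _ in range(n_iter):
            mu = np.exp(X @ beta)             # model means
            score = X.T @ (y - mu)            # gradient of the log-likelihood
            info = X.T @ (mu[:, None] * X)    # expected (Fisher) information
            beta += np.linalg.solve(info, score)
        return beta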

  3. Sum Rate Maximization using Linear Precoding and Decoding in the Multiuser MIMO Downlink

    OpenAIRE

    Tenenbaum, Adam J.; Adve, Raviraj S.

    2008-01-01

    We propose an algorithm to maximize the instantaneous sum data rate transmitted by a base station in the downlink of a multiuser multiple-input, multiple-output system. The transmitter and the receivers may each be equipped with multiple antennas and each user may receive more than one data stream. We show that maximizing the sum rate is closely linked to minimizing the product of mean squared errors (PMSE). The algorithm employs an uplink/downlink duality to iteratively design transmit-recei...

  4. Determination of Pavement Rehabilitation Activities through a Permutation Algorithm

    Directory of Open Access Journals (Sweden)

    Sangyum Lee

    2013-01-01

    This paper presents a mathematical programming model for optimal pavement rehabilitation planning. The model maximizes the rehabilitation area through a newly developed permutation algorithm, based on the procedures outlined in the harmony search (HS) algorithm. Additionally, the proposed algorithm is based on an optimal solution method for the problem of multilocation rehabilitation activities on pavement structure, using empirical deterioration and rehabilitation effectiveness models, subject to a limited maintenance budget. Thus, nonlinear pavement performance and rehabilitation activity decision models were used to maximize the objective function of the rehabilitation area within a limited budget, through the permutation algorithm. Our results showed that the heuristic permutation algorithm provided a good optimum in terms of maximizing the rehabilitation area, compared with the worst-first maintenance method currently used in Seoul.

  5. Change detection algorithms for surveillance in visual iot: a comparative study

    International Nuclear Information System (INIS)

    Akram, B.A.; Zafar, A.; Akbar, A.H.; Chaudhry, A.

    2018-01-01

    The VIoT (Visual Internet of Things) connects virtual information world with real world objects using sensors and pervasive computing. For video surveillance in VIoT, ChD (Change Detection) is a critical component. ChD algorithms identify regions of change in multiple images of the same scene recorded at different time intervals for video surveillance. This paper presents performance comparison of histogram thresholding and classification ChD algorithms using quantitative measures for video surveillance in VIoT based on salient features of datasets. The thresholding algorithms Otsu, Kapur, Rosin and classification methods k-means, EM (Expectation Maximization) were simulated in MATLAB using diverse datasets. For performance evaluation, the quantitative measures used include OSR (Overall Success Rate), YC (Yule’s Coefficient) and JC (Jaccard’s Coefficient), execution time and memory consumption. Experimental results showed that Kapur’s algorithm performed better for both indoor and outdoor environments with illumination changes, shadowing and medium to fast moving objects. However, it reflected degraded performance for small object size with minor changes. Otsu algorithm showed better results for indoor environments with slow to medium changes and nomadic object mobility. k-means showed good results in indoor environment with small object size producing slow change, no shadowing and scarce illumination changes. (author)

  6. Coupling graph perturbation theory with scalable parallel algorithms for large-scale enumeration of maximal cliques in biological graphs

    International Nuclear Information System (INIS)

    Samatova, N F; Schmidt, M C; Hendrix, W; Breimyer, P; Thomas, K; Park, B-H

    2008-01-01

    Data-driven construction of predictive models for biological systems faces challenges from data intensity, uncertainty, and computational complexity. Data-driven model inference is often considered a combinatorial graph problem where an enumeration of all feasible models is sought. The data-intensive and the NP-hard nature of such problems, however, challenges existing methods to meet the required scale of data size and uncertainty, even on modern supercomputers. Maximal clique enumeration (MCE) in a graph derived from such biological data is often a rate-limiting step in detecting protein complexes in protein interaction data, finding clusters of co-expressed genes in microarray data, or identifying clusters of orthologous genes in protein sequence data. We report two key advances that address this challenge. We designed and implemented the first (to the best of our knowledge) parallel MCE algorithm that scales linearly on thousands of processors running MCE on real-world biological networks with thousands and hundreds of thousands of vertices. In addition, we proposed and developed the Graph Perturbation Theory (GPT) that establishes a foundation for efficiently solving the MCE problem in perturbed graphs, which model the uncertainty in the data. GPT formulates necessary and sufficient conditions for detecting the differences between the sets of maximal cliques in the original and perturbed graphs and reduces the enumeration time by more than 80% compared to complete recomputation

  7. Accelerated EM-based clustering of large data sets

    NARCIS (Netherlands)

    Verbeek, J.J.; Nunnink, J.R.J.; Vlassis, N.

    2006-01-01

    Motivated by the poor performance (linear complexity) of the EM algorithm in clustering large data sets, and inspired by the successful accelerated versions of related algorithms like k-means, we derive an accelerated variant of the EM algorithm for Gaussian mixtures that: (1) offers speedups that

  8. Efficient Maximum Likelihood Estimation for Pedigree Data with the Sum-Product Algorithm.

    Science.gov (United States)

    Engelhardt, Alexander; Rieger, Anna; Tresch, Achim; Mansmann, Ulrich

    2016-01-01

    We analyze data sets consisting of pedigrees with age at onset of colorectal cancer (CRC) as phenotype. The occurrence of familial clusters of CRC suggests the existence of a latent, inheritable risk factor. We aimed to compute the probability of a family possessing this risk factor as well as the hazard rate increase for these risk factor carriers. Due to the inheritability of this risk factor, the estimation necessitates a costly marginalization of the likelihood. We propose an improved EM algorithm by applying factor graphs and the sum-product algorithm in the E-step. This reduces the computational complexity from exponential to linear in the number of family members. Our algorithm is as precise as a direct likelihood maximization in a simulation study and a real family study on CRC risk. For 250 simulated families of size 19 and 21, the runtime of our algorithm is faster by a factor of 4 and 29, respectively. On the largest family (23 members) in the real data, our algorithm is 6 times faster. We introduce a flexible and runtime-efficient tool for statistical inference in biomedical event data with latent variables that opens the door for advanced analyses of pedigree data. © 2017 S. Karger AG, Basel.

  9. Change Detection Algorithms for Surveillance in Visual IoT: A Comparative Study

    Science.gov (United States)

    Akram, Beenish Ayesha; Zafar, Amna; Akbar, Ali Hammad; Wajid, Bilal; Chaudhry, Shafique Ahmad

    2018-01-01

    The VIoT (Visual Internet of Things) connects virtual information world with real world objects using sensors and pervasive computing. For video surveillance in VIoT, ChD (Change Detection) is a critical component. ChD algorithms identify regions of change in multiple images of the same scene recorded at different time intervals for video surveillance. This paper presents performance comparison of histogram thresholding and classification ChD algorithms using quantitative measures for video surveillance in VIoT based on salient features of datasets. The thresholding algorithms Otsu, Kapur, Rosin and classification methods k-means, EM (Expectation Maximization) were simulated in MATLAB using diverse datasets. For performance evaluation, the quantitative measures used include OSR (Overall Success Rate), YC (Yule's Coefficient) and JC (Jaccard's Coefficient), execution time and memory consumption. Experimental results showed that Kapur's algorithm performed better for both indoor and outdoor environments with illumination changes, shadowing and medium to fast moving objects. However, it reflected degraded performance for small object size with minor changes. Otsu algorithm showed better results for indoor environments with slow to medium changes and nomadic object mobility. k-means showed good results in indoor environment with small object size producing slow change, no shadowing and scarce illumination changes.

  10. Polarity related influence maximization in signed social networks.

    Directory of Open Access Journals (Sweden)

    Dong Li

    Full Text Available Influence maximization in social networks has been widely studied, motivated by applications like the spread of ideas or innovations in a network and viral marketing of products. Current studies focus almost exclusively on unsigned social networks containing only positive relationships (e.g. friend or trust) between users. Influence maximization in signed social networks containing both positive relationships and negative relationships (e.g. foe or distrust) between users is still a challenging problem that has not been studied. Thus, in this paper, we propose the polarity-related influence maximization (PRIM) problem, which aims to find the seed node set with maximum positive influence or maximum negative influence in signed social networks. To address the PRIM problem, we first extend the standard Independent Cascade (IC) model to signed social networks and propose a Polarity-related Independent Cascade (named IC-P) diffusion model. We prove that the influence function of the PRIM problem under the IC-P model is monotonic and submodular. Thus, a greedy algorithm can be used to achieve an approximation ratio of 1-1/e for solving the PRIM problem in signed social networks. Experimental results on two signed social network datasets, Epinions and Slashdot, validate that our approximation algorithm for solving the PRIM problem outperforms state-of-the-art methods.
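
    Since the influence function is monotone and submodular, the greedy algorithm mentioned above carries the classic (1 - 1/e) guarantee. The sketch below shows generic greedy seed selection with Monte Carlo spread estimation under a plain independent cascade model; the polarity bookkeeping of the IC-P model is omitted, so treat this as an assumption-laden skeleton rather than the authors' method.

    ```python
    import random

    def simulate_ic(graph, seeds, p=0.1):
        """One Monte Carlo run of the independent cascade model.

        graph: dict mapping node -> list of out-neighbors.
        Returns the number of activated nodes.
        """
        active, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph.get(u, []):
                    if v not in active and random.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        return len(active)

    def greedy_im(graph, k, runs=200):
        """Greedy seed selection: repeatedly add the node giving the largest
        estimated spread; (1 - 1/e)-approximate for monotone submodular
        influence functions."""
        seeds = []
        for _ in range(k):
            best, best_spread = None, -1.0
            for v in graph:
                if v in seeds:
                    continue
                spread = sum(simulate_ic(graph, seeds + [v]) for _ in range(runs)) / runs
                if spread > best_spread:
                    best, best_spread = v, spread
            seeds.append(best)
        return seeds
    ```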

  11. Alignment of cryo-EM movies of individual particles by optimization of image translations.

    Science.gov (United States)

    Rubinstein, John L; Brubaker, Marcus A

    2015-11-01

    Direct detector device (DDD) cameras have revolutionized single particle electron cryomicroscopy (cryo-EM). In addition to an improved camera detective quantum efficiency, acquisition of DDD movies allows for correction of movement of the specimen, due to both instabilities in the microscope specimen stage and electron beam-induced movement. Unlike specimen stage drift, beam-induced movement is not always homogeneous within an image. Local correlation in the trajectories of nearby particles suggests that beam-induced motion is due to deformation of the ice layer. Algorithms have already been described that can correct movement for large regions of frames and for >1 MDa protein particles. Another algorithm allows individual images to be aligned without frame averaging or linear trajectories. The algorithm maximizes the overall correlation of the shifted frames with the sum of the shifted frames. The optimum in this single objective function is found efficiently by making use of analytically calculated derivatives of the function. To smooth estimates of particle trajectories, rapid changes in particle positions between frames are penalized in the objective function and weighted averaging of nearby trajectories ensures local correlation in trajectories. This individual particle motion correction, in combination with weighting of Fourier components to account for increasing radiation damage in later frames, can be used to improve 3-D maps from single particle cryo-EM. Copyright © 2015 Elsevier Inc. All rights reserved.

  12. Maximum likelihood reconstruction for pinhole SPECT with a displaced center-of-rotation

    International Nuclear Information System (INIS)

    Li, J.; Jaszczak, R.J.; Coleman, R.E.

    1995-01-01

    In this paper, the authors describe the implementation of a maximum likelihood (ML) algorithm using expectation maximization (EM) for pinhole SPECT with a displaced center-of-rotation. A ray-tracing technique is used in implementing the ML-EM algorithm. The proposed ML-EM algorithm is able to correct the center-of-rotation displacement, which can be characterized by two orthogonal components. The algorithm is tested using experimentally acquired data, and the results demonstrate that the pinhole ML-EM algorithm is able to correct artifacts associated with the center-of-rotation displacement
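
    For reference, the core ML-EM update used in reconstructions of this kind, written with a generic dense system matrix A; in the cited work A would encode the ray-traced pinhole geometry including the center-of-rotation displacement, which this sketch does not model.

    ```python
    import numpy as np

    def mlem(A, y, n_iter=50):
        """ML-EM reconstruction: x <- x / (A^T 1) * A^T (y / (A x)).

        A: (n_bins, n_voxels) system matrix (dense stand-in for a ray-traced
        projector); y: measured projection counts.
        """
        x = np.ones(A.shape[1])
        sens = A.T @ np.ones(A.shape[0])          # sensitivity image A^T 1
        sens = np.where(sens > 0, sens, 1.0)      # avoid division by zero
        for _ in range(n_iter):
            proj = A @ x
            ratio = y / np.where(proj > 0, proj, 1e-12)
            x = x / sens * (A.T @ ratio)
        return x
    ```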

  13. An Expectation-Maximization Method for Calibrating Synchronous Machine Models

    Energy Technology Data Exchange (ETDEWEB)

    Meng, Da; Zhou, Ning; Lu, Shuai; Lin, Guang

    2013-07-21

    The accuracy of a power system dynamic model is essential to its secure and efficient operation. Lower confidence in model accuracy usually leads to conservative operation and lowers asset usage. To improve model accuracy, this paper proposes an expectation-maximization (EM) method to calibrate the synchronous machine model using phasor measurement unit (PMU) data. First, an extended Kalman filter (EKF) is applied to estimate the dynamic states using measurement data. Then, the parameters are calculated based on the estimated states using maximum likelihood estimation (MLE) method. The EM method iterates over the preceding two steps to improve estimation accuracy. The proposed EM method’s performance is evaluated using a single-machine infinite bus system and compared with a method where both state and parameters are estimated using an EKF method. Sensitivity studies of the parameter calibration using EM method are also presented to show the robustness of the proposed method for different levels of measurement noise and initial parameter uncertainty.

  14. Gap processing for adaptive maximal poisson-disk sampling

    KAUST Repository

    Yan, Dongming; Wonka, Peter

    2013-01-01

    In this article, we study the generation of maximal Poisson-disk sets with varying radii. First, we present a geometric analysis of gaps in such disk sets. This analysis is the basis for maximal and adaptive sampling in Euclidean space and on manifolds. Second, we propose efficient algorithms and data structures to detect gaps and update gaps when disks are inserted, deleted, moved, or when their radii are changed. We build on the concepts of regular triangulations and the power diagram. Third, we show how our analysis contributes to the state-of-the-art in surface remeshing. © 2013 ACM.
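
    For contrast with the gap-based approach above, the naive route to a maximal Poisson-disk set is rejection sampling ("dart throwing"), sketched below for a single fixed radius. It only approximates maximality by giving up after many consecutive misses, whereas the cited work detects and fills remaining gaps explicitly and supports varying radii.

    ```python
    import random

    def dart_throwing(radius, width=1.0, height=1.0, max_misses=10000):
        """Naive Poisson-disk sampling in a rectangle by rejection.

        Throws darts until max_misses consecutive rejections, which
        approximates (but does not certify) maximality.
        """
        pts, misses, r2 = [], 0, radius * radius
        while misses < max_misses:
            x, y = random.uniform(0, width), random.uniform(0, height)
            if all((x - px) ** 2 + (y - py) ** 2 >= r2 for px, py in pts):
                pts.append((x, y))
                misses = 0
            else:
                misses += 1
        return pts
    ```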

  15. Gap processing for adaptive maximal poisson-disk sampling

    KAUST Repository

    Yan, Dongming

    2013-10-17

    In this article, we study the generation of maximal Poisson-disk sets with varying radii. First, we present a geometric analysis of gaps in such disk sets. This analysis is the basis for maximal and adaptive sampling in Euclidean space and on manifolds. Second, we propose efficient algorithms and data structures to detect gaps and update gaps when disks are inserted, deleted, moved, or when their radii are changed. We build on the concepts of regular triangulations and the power diagram. Third, we show how our analysis contributes to the state-of-the-art in surface remeshing. © 2013 ACM.

  16. A new approach for optimum DG placement and sizing based on voltage stability maximization and minimization of power losses

    International Nuclear Information System (INIS)

    Aman, M.M.; Jasmon, G.B.; Bakar, A.H.A.; Mokhlis, H.

    2013-01-01

    Highlights: • A new algorithm is proposed for optimum DG placement and sizing. • I²R loss minimization and voltage stability maximization are considered in the fitness function. • Bus voltage stability and line stability are considered in voltage stability maximization. • Multi-objective PSO is used to solve the problem. • The proposed method is compared with analytical and grid search algorithms. - Abstract: Distributed Generation (DG) placement on the basis of minimization of losses and maximization of system voltage stability are two different approaches discussed in research. In the proposed algorithm, a multi-objective approach is used to combine both approaches. Minimization of power losses and maximization of voltage stability, both by finding the weakest voltage bus and the weakest link in the system, are considered in the fitness function. The Particle Swarm Optimization (PSO) algorithm is used in this paper to solve the multi-objective problem. This paper also compares the proposed method with existing DG placement methods. From the results, the proposed method is found to be more advantageous than previous work in terms of voltage profile improvement, maximization of system loadability, reduction in power system losses and maximization of bus and line voltage stability. The results are validated on 12-bus, 30-bus, 33-bus and 69-bus radial distribution networks and are also discussed in detail
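
    A generic particle swarm optimizer of the kind used above is sketched below; the DG placement/sizing encoding and the power-flow-based fitness are abstracted into a user-supplied callback, so everything here is an illustrative assumption rather than the paper's implementation.

    ```python
    import numpy as np

    def pso(fitness, bounds, n_particles=30, n_iter=100,
            w=0.7, c1=1.5, c2=1.5, seed=0):
        """Minimal particle swarm optimizer (minimization).

        fitness: callable mapping a position vector to a scalar, e.g. a
        weighted sum of I^2*R losses and a voltage-stability penalty.
        bounds: (low, high) arrays defining the search box.
        """
        rng = np.random.default_rng(seed)
        low, high = map(np.asarray, bounds)
        dim = low.size
        x = rng.uniform(low, high, (n_particles, dim))
        v = np.zeros_like(x)
        pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
        g = pbest[pbest_f.argmin()].copy()            # global best position
        for _ in range(n_iter):
            r1, r2 = rng.random((2, n_particles, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, low, high)
            f = np.array([fitness(p) for p in x])
            improved = f < pbest_f
            pbest[improved], pbest_f[improved] = x[improved], f[improved]
            g = pbest[pbest_f.argmin()].copy()
        return g, pbest_f.min()
    ```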

  17. Disk Density Tuning of a Maximal Random Packing.

    Science.gov (United States)

    Ebeida, Mohamed S; Rushdi, Ahmad A; Awad, Muhammad A; Mahmoud, Ahmed H; Yan, Dong-Ming; English, Shawn A; Owens, John D; Bajaj, Chandrajit L; Mitchell, Scott A

    2016-08-01

    We introduce an algorithmic framework for tuning the spatial density of disks in a maximal random packing, without changing the sizing function or radii of disks. Starting from any maximal random packing such as a Maximal Poisson-disk Sampling (MPS), we iteratively relocate, inject (add), or eject (remove) disks, using a set of three successively more-aggressive local operations. We may achieve a user-defined density, either more dense or more sparse, almost up to the theoretical structured limits. The tuned samples are conflict-free, retain coverage maximality, and, except in the extremes, retain the blue noise randomness properties of the input. We change the density of the packing one disk at a time, maintaining the minimum disk separation distance and the maximum domain coverage distance required of any maximal packing. These properties are local, and we can handle spatially-varying sizing functions. Using fewer points to satisfy a sizing function improves the efficiency of some applications. We apply the framework to improve the quality of meshes, removing non-obtuse angles; and to more accurately model fiber reinforced polymers for elastic and failure simulations.

  18. Wobbling and LSF-based maximum likelihood expectation maximization reconstruction for wobbling PET

    International Nuclear Information System (INIS)

    Kim, Hang-Keun; Son, Young-Don; Kwon, Dae-Hyuk; Joo, Yohan; Cho, Zang-Hee

    2016-01-01

    Positron emission tomography (PET) is a widely used imaging modality; however, the PET spatial resolution is not yet satisfactory for precise anatomical localization of molecular activities. Detector size is the most important factor because it determines the intrinsic resolution, which is approximately half of the detector size and determines the ultimate PET resolution. Detector size, however, cannot be made too small because both the decreased detection efficiency and the increased septal penetration effect degrade the image quality. A wobbling and line spread function (LSF)-based maximum likelihood expectation maximization (WL-MLEM) algorithm, which combined the MLEM iterative reconstruction algorithm with wobbled sampling and LSF-based deconvolution using the system matrix, was proposed for improving the spatial resolution of PET without reducing the scintillator or detector size. The new algorithm was evaluated using a simulation, and its performance was compared with that of the existing algorithms, such as conventional MLEM and LSF-based MLEM. Simulations demonstrated that the WL-MLEM algorithm yielded higher spatial resolution and image quality than the existing algorithms. The WL-MLEM algorithm with wobbling PET yielded substantially improved resolution compared with conventional algorithms with stationary PET. The algorithm can be easily extended to other iterative reconstruction algorithms, such as maximum a posteriori (MAP) and ordered subset expectation maximization (OSEM). The WL-MLEM algorithm with wobbling PET may offer improvements in both sensitivity and resolution, the two most sought-after features in PET design. - Highlights: • This paper proposed the WL-MLEM algorithm for PET and demonstrated its performance. • The WL-MLEM algorithm effectively combined wobbling and LSF-based MLEM. • WL-MLEM provided improvements in the spatial resolution and the PET image quality. • WL-MLEM can be easily extended to other iterative reconstruction algorithms.

  19. Image reconstruction using three-dimensional compound Gauss-Markov random field in emission computed tomography

    International Nuclear Information System (INIS)

    Watanabe, Shuichi; Kudo, Hiroyuki; Saito, Tsuneo

    1993-01-01

    In this paper, we propose a new reconstruction algorithm based on the MAP (maximum a posteriori probability) estimation principle for emission tomography. To improve the noise suppression properties of the conventional ML-EM (maximum likelihood expectation maximization) algorithm, direct three-dimensional reconstruction that utilizes intensity correlations between adjacent transaxial slices is introduced. Moreover, to avoid oversmoothing of edges, a priori knowledge of the RI (radioisotope) distribution is represented by using a doubly-stochastic image model called the compound Gauss-Markov random field. The a posteriori probability is maximized by using the iterative GEM (generalized EM) algorithm. Computer simulation results are shown to demonstrate the validity of the proposed algorithm. (author)

  20. Tri-maximal vs. bi-maximal neutrino mixing

    International Nuclear Information System (INIS)

Scott, W.G.

    2000-01-01

    It is argued that data from atmospheric and solar neutrino experiments point strongly to tri-maximal or bi-maximal lepton mixing. While ('optimised') bi-maximal mixing gives an excellent a posteriori fit to the data, tri-maximal mixing is an a priori hypothesis, which is not excluded, taking account of terrestrial matter effects

  1. Fitting Nonlinear Ordinary Differential Equation Models with Random Effects and Unknown Initial Conditions Using the Stochastic Approximation Expectation-Maximization (SAEM) Algorithm.

    Science.gov (United States)

    Chow, Sy-Miin; Lu, Zhaohua; Sherwood, Andrew; Zhu, Hongtu

    2016-03-01

    The past decade has evidenced the increased prevalence of irregularly spaced longitudinal data in social sciences. Clearly lacking, however, are modeling tools that allow researchers to fit dynamic models to irregularly spaced data, particularly data that show nonlinearity and heterogeneity in dynamical structures. We consider the issue of fitting multivariate nonlinear differential equation models with random effects and unknown initial conditions to irregularly spaced data. A stochastic approximation expectation-maximization algorithm is proposed and its performance is evaluated using a benchmark nonlinear dynamical systems model, namely, the Van der Pol oscillator equations. The empirical utility of the proposed technique is illustrated using a set of 24-h ambulatory cardiovascular data from 168 men and women. Pertinent methodological challenges and unresolved issues are discussed.

  2. Grouped fuzzy SVM with EM-based partition of sample space for clustered microcalcification detection.

    Science.gov (United States)

    Wang, Huiya; Feng, Jun; Wang, Hongyu

    2017-07-20

    Detection of clustered microcalcification (MC) from mammograms plays an essential role in computer-aided diagnosis of early stage breast cancer. To tackle problems associated with the diversity of data structures of MC lesions and the variability of normal breast tissues, multi-pattern sample space learning is required. In this paper, a novel grouped fuzzy Support Vector Machine (SVM) algorithm with sample space partition based on Expectation-Maximization (EM) (called G-FSVM) is proposed for clustered MC detection. The diversified pattern of training data is partitioned into several groups based on the EM algorithm. Then a series of fuzzy SVMs are integrated for classification, with each group of samples from the MC lesions and normal breast tissues. From the DDSM database, a total of 1,064 suspicious regions are selected from 239 mammograms, and the measured Accuracy, True Positive Rate (TPR), False Positive Rate (FPR) and EVL = TPR*(1-FPR) are 0.82, 0.78, 0.14 and 0.72, respectively. The proposed method incorporates the merits of fuzzy SVM and multi-pattern sample space learning, decomposing the MC detection problem into a series of simple two-class classifications. Experimental results from synthetic data and the DDSM database demonstrate that our integrated classification framework reduces the false positive rate significantly while maintaining the true positive rate.
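
    A scikit-learn-based skeleton of the partition-then-classify idea is sketched below: a Gaussian mixture fit by EM splits the training samples into groups, and one SVM is trained per group. The fuzzy membership weighting of the actual G-FSVM is omitted, and all names are illustrative.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture
    from sklearn.svm import SVC

    def fit_grouped_svm(X, y, n_groups=3, seed=0):
        """Partition the sample space with EM, then train one SVM per group."""
        gmm = GaussianMixture(n_components=n_groups, random_state=seed).fit(X)
        groups = gmm.predict(X)
        svms = {}
        for g in range(n_groups):
            mask = groups == g
            if np.unique(y[mask]).size == 2:     # need both classes to train
                svms[g] = SVC().fit(X[mask], y[mask])
        return gmm, svms

    def predict_grouped_svm(gmm, svms, X):
        """Route each test sample to the SVM of its EM-assigned group."""
        groups = gmm.predict(X)
        out = np.zeros(len(X), dtype=int)
        for i, (x, g) in enumerate(zip(X, groups)):
            clf = svms.get(g)
            # Fall back to the majority 'normal' label (assumed 0) if a group
            # had only one class at training time.
            out[i] = clf.predict(x.reshape(1, -1))[0] if clf else 0
        return out
    ```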

  3. Robust EM Continual Reassessment Method in Oncology Dose Finding

    Science.gov (United States)

    Yuan, Ying; Yin, Guosheng

    2012-01-01

    The continual reassessment method (CRM) is a commonly used dose-finding design for phase I clinical trials. Practical applications of this method have been restricted by two limitations: (1) the requirement that the toxicity outcome needs to be observed shortly after the initiation of the treatment; and (2) the potential sensitivity to the prespecified toxicity probability at each dose. To overcome these limitations, we naturally treat the unobserved toxicity outcomes as missing data, and use the expectation-maximization (EM) algorithm to estimate the dose toxicity probabilities based on the incomplete data to direct dose assignment. To enhance the robustness of the design, we propose prespecifying multiple sets of toxicity probabilities, each set corresponding to an individual CRM model. We carry out these multiple CRMs in parallel, across which model selection and model averaging procedures are used to make more robust inference. We evaluate the operating characteristics of the proposed robust EM-CRM designs through simulation studies and show that the proposed methods satisfactorily resolve both limitations of the CRM. Besides improving the MTD selection percentage, the new designs dramatically shorten the duration of the trial, and are robust to the prespecification of the toxicity probabilities. PMID:22375092

  4. Maximal Conflict Set Enumeration Algorithm Based on Locality of Petri Nets

    Institute of Scientific and Technical Information of China (English)

    潘理; 郑红; 刘显明; 杨勃

    2016-01-01

    Conflict is an essential concept in Petri net theory. Existing research focuses on the modelling and resolution strategies of conflict problems, but pays little attention to the computational complexity of the problems themselves. In this paper, we propose the conflict set problem for Petri nets and prove that it is NP-complete. Furthermore, we present a dynamic algorithm for maximal conflict set enumeration. Based on all maximal conflict sets at the current marking, and exploiting the locality of transition firing in Petri nets, the algorithm only computes those maximal conflict sets at the next marking that are affected by the local firing, which avoids re-enumerating all maximal conflict sets at each marking. The algorithm needs O(m²n) time, where m is the number of maximal conflict sets at the current marking and n is the number of transitions. Finally, we show that the maximal conflict set enumeration problem can be solved in O(n²) time for free-choice nets and asymmetric choice nets. These results on the complexity of the conflict set problem provide a theoretical reference for solving conflict problems of Petri nets.

  5. Interval Solution for Nonlinear Programming of Maximizing the Fatigue Life of V-Belt under Polymorphic Uncertain Environment

    Directory of Open Access Journals (Sweden)

    Zhong Wan

    2013-01-01

    Full Text Available In accordance with practical engineering design conditions, a nonlinear programming model is constructed for maximizing the fatigue life of a V-belt drive in which some polymorphic uncertainties are incorporated. For a given satisfaction level and a confidence level, an equivalent formulation of this uncertain optimization model is obtained in which only interval parameters are involved. Based on the concepts of maximal and minimal range inequalities for describing interval inequality, the interval parameter model is decomposed into two standard nonlinear programming problems, and an algorithm, called the two-step based sampling algorithm, is developed to find an interval optimal solution for the original problem. A case study is employed to demonstrate the validity and practicability of the constructed model and the algorithm.

  6. A fast EM algorithm for BayesA-like prediction of genomic breeding values.

    Directory of Open Access Journals (Sweden)

    Xiaochen Sun

    Full Text Available Prediction accuracies of estimated breeding values for economically important traits are expected to benefit from genomic information. Single nucleotide polymorphism (SNP panels used in genomic prediction are increasing in density, but the Markov Chain Monte Carlo (MCMC estimation of SNP effects can be quite time consuming or slow to converge when a large number of SNPs are fitted simultaneously in a linear mixed model. Here we present an EM algorithm (termed "fastBayesA" without MCMC. This fastBayesA approach treats the variances of SNP effects as missing data and uses a joint posterior mode of effects compared to the commonly used BayesA which bases predictions on posterior means of effects. In each EM iteration, SNP effects are predicted as a linear combination of best linear unbiased predictions of breeding values from a mixed linear animal model that incorporates a weighted marker-based realized relationship matrix. Method fastBayesA converges after a few iterations to a joint posterior mode of SNP effects under the BayesA model. When applied to simulated quantitative traits with a range of genetic architectures, fastBayesA is shown to predict GEBV as accurately as BayesA but with less computing effort per SNP than BayesA. Method fastBayesA can be used as a computationally efficient substitute for BayesA, especially when an increasing number of markers bring unreasonable computational burden or slow convergence to MCMC approaches.

  7. Analysing Music with Point-Set Compression Algorithms

    DEFF Research Database (Denmark)

    Meredith, David

    2016-01-01

    Several point-set pattern-discovery and compression algorithms designed for analysing music are reviewed and evaluated. Each algorithm takes as input a point-set representation of a score in which each note is represented as a point in pitch-time space. Each algorithm computes the maximal...... and sections in pieces of classical music. On the first task, the best-performing algorithms achieved success rates of around 84%. In the second task, the best algorithms achieved mean F1 scores of around 0.49, with scores for individual pieces rising as high as 0.71....

  8. Phenomenology of maximal and near-maximal lepton mixing

    International Nuclear Information System (INIS)

    Gonzalez-Garcia, M. C.; Pena-Garay, Carlos; Nir, Yosef; Smirnov, Alexei Yu.

    2001-01-01

    The possible existence of maximal or near-maximal lepton mixing constitutes an intriguing challenge for fundamental theories of flavor. We study the phenomenological consequences of maximal and near-maximal mixing of the electron neutrino with other (x = tau and/or muon) neutrinos. We describe the deviations from maximal mixing in terms of a parameter ε ≡ 1 − 2 sin²θ_ex and quantify the present experimental status of |ε|. The strongest constraint on ν_e mixing comes from solar neutrino experiments. We find that the global analysis of solar neutrino data allows maximal mixing with confidence level better than 99% for 10⁻⁸ eV² ≲ Δm² ≲ 10⁻⁷ eV². In the mass ranges Δm² ≳ 1.5×10⁻⁵ eV² and 4×10⁻¹⁰ eV² ≲ Δm² ≲ 10⁻⁷ eV², the full interval of |ε| is allowed. We also discuss the implications of ν_e mixing for atmospheric neutrinos, supernova neutrinos, and neutrinoless double beta decay

  9. Development of an image reconstruction algorithm for a few number of projection data

    International Nuclear Information System (INIS)

    Vieira, Wilson S.; Brandao, Luiz E.; Braz, Delson

    2007-01-01

    An image reconstruction algorithm was developed for specific cases of radiotracer applications in industry (rotating cylindrical mixers), involving a very small number of projection data. The algorithm was planned for imaging radioactive isotope distributions around the center of circular planes. The method consists of adapting the original expectation maximization (EM) algorithm to solve the ill-posed emission tomography inverse problem in order to reconstruct transversal 2D images of an object with only four projections. To achieve this aim, counts of photons emitted by selected radioactive sources in the plane, after they had been simulated using the commercial software MICROSHIELD 5.05, constitute the projections, and a computational code (SPECTEM) was developed to generate activity vectors or images related to those sources. SPECTEM is flexible enough to support simultaneous changes of the detectors' geometry, the medium under investigation and the properties of the gamma radiation. Because the code correctly followed the proposed method, good results were obtained, and they encouraged us to continue to the next step of the research: the validation of SPECTEM using experimental data to check its real performance. We expect this code will improve radiotracer methodology considerably, making the diagnosis of failures in industrial processes easier. (author)

  10. Development of an image reconstruction algorithm for a few number of projection data

    Energy Technology Data Exchange (ETDEWEB)

    Vieira, Wilson S.; Brandao, Luiz E. [Instituto de Engenharia Nuclear (IEN-CNEN/RJ), Rio de Janeiro , RJ (Brazil)]. E-mails: wilson@ien.gov.br; brandao@ien.gov.br; Braz, Delson [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programa de Pos-graduacao de Engenharia (COPPE). Lab. de Instrumentacao Nuclear]. E-mail: delson@mailhost.lin.ufrj.br

    2007-07-01

    An image reconstruction algorithm was developed for specific cases of radiotracer applications in industry (rotating cylindrical mixers), involving a very small number of projection data. The algorithm was planned for imaging radioactive isotope distributions around the center of circular planes. The method consists of adapting the original expectation maximization (EM) algorithm to solve the ill-posed emission tomography inverse problem in order to reconstruct transversal 2D images of an object with only four projections. To achieve this aim, counts of photons emitted by selected radioactive sources in the plane, after they had been simulated using the commercial software MICROSHIELD 5.05, constitute the projections, and a computational code (SPECTEM) was developed to generate activity vectors or images related to those sources. SPECTEM is flexible enough to support simultaneous changes of the detectors' geometry, the medium under investigation and the properties of the gamma radiation. Because the code correctly followed the proposed method, good results were obtained, and they encouraged us to continue to the next step of the research: the validation of SPECTEM using experimental data to check its real performance. We expect this code will improve radiotracer methodology considerably, making the diagnosis of failures in industrial processes easier. (author)

  11. 2D evaluation of spectral LIBS data derived from heterogeneous materials using cluster algorithm

    Science.gov (United States)

    Gottlieb, C.; Millar, S.; Grothe, S.; Wilsch, G.

    2017-08-01

    Laser-induced Breakdown Spectroscopy (LIBS) is capable of providing spatially resolved element maps of the chemical composition of a sample. The evaluation of heterogeneous materials is often a challenging task, especially in the case of phase boundaries. In order to extract information about a particular phase of a material, a method that offers an objective evaluation is necessary. This paper introduces a cluster algorithm for heterogeneous building materials (concrete) to separate the spectral information of non-relevant aggregates from that of the cement matrix. In civil engineering, information about the quantitative ingress of harmful species like Cl-, Na+ and SO42- is of great interest for evaluating the remaining lifetime of structures (Millar et al., 2015; Wilsch et al., 2005). These species trigger different damage processes such as the alkali-silica reaction (ASR) or the chloride-induced corrosion of the reinforcement. Therefore, a discrimination between the different phases, mainly cement matrix and aggregates, is highly important (Weritz et al., 2006). For the 2D evaluation, the expectation-maximization algorithm (EM algorithm; Ester and Sander, 2000) has been tested for the application presented in this work. The method is introduced, different figures of merit are presented according to recommendations given in Haddad et al. (2014), and the advantages of the method are highlighted. After phase separation, non-relevant information can be excluded and only the desired phase displayed. Using a set of samples with known and unknown composition, the EM-clustering method has been validated according to Gustavo González and Ángeles Herrador (2007).

  12. Enumerating all maximal frequent subtrees in collections of phylogenetic trees.

    Science.gov (United States)

    Deepak, Akshay; Fernández-Baca, David

    2014-01-01

    A common problem in phylogenetic analysis is to identify frequent patterns in a collection of phylogenetic trees. The goal is, roughly, to find a subset of the species (taxa) on which all or some significant subset of the trees agree. One popular method to do so is through maximum agreement subtrees (MASTs). MASTs are also used, among other things, as a metric for comparing phylogenetic trees, computing congruence indices and to identify horizontal gene transfer events. We give algorithms and experimental results for two approaches to identify common patterns in a collection of phylogenetic trees, one based on agreement subtrees, called maximal agreement subtrees, the other on frequent subtrees, called maximal frequent subtrees. These approaches can return subtrees on larger sets of taxa than MASTs, and can reveal new common phylogenetic relationships not present in either MASTs or the majority rule tree (a popular consensus method). Our current implementation is available on the web at https://code.google.com/p/mfst-miner/. Our computational results confirm that maximal agreement subtrees and all maximal frequent subtrees can reveal a more complete phylogenetic picture of the common patterns in collections of phylogenetic trees than maximum agreement subtrees; they are also often more resolved than the majority rule tree. Further, our experiments show that enumerating maximal frequent subtrees is considerably more practical than enumerating ordinary (not necessarily maximal) frequent subtrees.

  13. Simulating Evolution of <em>Drosophila melanogaster Ebonyem> Mutants Using a Genetic Algorithm

    DEFF Research Database (Denmark)

    Helles, Glennie

    2009-01-01

    Genetic algorithms are generally quite easy to understand and work with, and they are a popular choice in many cases. One area in which genetic algorithms are widely and successfully used is artificial life where they are used to simulate evolution of artificial creatures. However, despite...... their suggestive name, simplicity and popularity in artificial life, they do not seem to have gained a footing within the field of population genetics to simulate evolution of real organisms --- possibly because genetic algorithms are based on a rather crude simplification of the evolutionary mechanisms known...

  14. The geometry of entanglement and Grover's algorithm

    International Nuclear Information System (INIS)

    Iwai, Toshihiro; Hayashi, Naoki; Mizobe, Kimitake

    2008-01-01

    A measure of entanglement with respect to a bipartite partition of n-qubit states has been defined and studied from the viewpoint of Riemannian geometry (Iwai 2007 J. Phys. A: Math. Theor. 40 12161). This paper has two aims. One is to study further the geometry of entanglement, and the other is to investigate Grover's search algorithms, both the original and the fixed-point ones, in relation to entanglement. As the distance between the maximally entangled states and the separable states is known already from the previous paper, this paper determines the set of maximally entangled states nearest to a typical separable state, which is used as an initial state in Grover's search algorithms, and finds geodesic segments which realize the above-mentioned distance. As for Grover's algorithms, it is already known that while the initial and the target states are separable, the algorithms generate sequences of entangled states. This fact is confirmed also in the entanglement measure proposed in the previous paper, and then a split Grover algorithm is proposed which generates sequences of separable states only with respect to the bipartite partition

  15. Efficient Rectangular Maximal-Volume Algorithm for Rating Elicitation in Collaborative Filtering

    KAUST Repository

    Fonarev, Alexander; Mikhalev, Alexander; Serdyukov, Pavel; Gusev, Gleb; Oseledets, Ivan

    2017-01-01

    preference information for making good recommendations. One of the most successful approaches, called Representative Based Matrix Factorization, is based on Maxvol algorithm. Unfortunately, this approach has one important limitation - a seed set of a

  16. Scheduling Algorithms for Maximizing Throughput with Zero-Forcing Beamforming in a MIMO Wireless System

    Science.gov (United States)

    Foronda, Augusto; Ohta, Chikara; Tamaki, Hisashi

    Dirty paper coding (DPC) is a strategy to achieve the capacity region of multiple input multiple output (MIMO) downlink channels, and a DPC scheduler is throughput optimal if users are selected according to their queue states and current rates. However, DPC is difficult to implement in practical systems. One solution, the zero-forcing beamforming (ZFBF) strategy, has been proposed to achieve the same asymptotic sum rate capacity as that of DPC with an exhaustive search over the entire user set. Some suboptimal user group selection schedulers with reduced complexity based on the ZFBF strategy (ZFBF-SUS) and the proportional fair (PF) scheduling algorithm (PF-ZFBF) have also been proposed to enhance the throughput and fairness among the users, respectively. However, they are not throughput optimal, and fairness and throughput decrease if the users' queue lengths differ due to different channel qualities. Therefore, we propose two different scheduling algorithms: a throughput optimal scheduling algorithm (ZFBF-TO) and a reduced complexity scheduling algorithm (ZFBF-RC). Both are based on the ZFBF strategy and, at every time slot, the scheduling algorithms have to select some users based on user channel quality, user queue length and orthogonality among users. Moreover, the proposed algorithms have to produce the rate allocation and power allocation for the selected users based on a modified water filling method. We analyze the schedulers' complexity, and numerical results show that ZFBF-RC provides throughput and fairness improvements compared to the ZFBF-SUS and PF-ZFBF scheduling algorithms.
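
    The power allocation step above builds on water filling. A textbook (unmodified) water-filling routine over parallel channel gains under a total power budget is sketched below; the paper's modified version, which also accounts for user weights and queue states, is not reproduced.

    ```python
    import numpy as np

    def water_filling(gains, p_total, noise=1.0):
        """Classic water-filling power allocation over parallel channels.

        Allocates p_k = max(0, mu - noise/gains[k]), with the water level mu
        chosen so that the active powers sum to p_total.
        """
        inv = noise / np.asarray(gains, dtype=float)   # inverse channel gains
        inv_sorted = np.sort(inv)
        k = len(inv_sorted)
        # Find how many channels stay active: with k active channels the
        # water level is mu = (p_total + sum of the k smallest inv terms) / k.
        while k > 0:
            mu = (p_total + inv_sorted[:k].sum()) / k
            if mu > inv_sorted[k - 1]:
                break
            k -= 1
        return np.maximum(0.0, mu - inv)
    ```

    Channels whose inverse gain exceeds the final water level receive zero power, which is what the loop over k determines.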

  17. Bioinformatics algorithm based on a parallel implementation of a machine learning approach using transducers

    International Nuclear Information System (INIS)

    Roche-Lima, Abiel; Thulasiram, Ruppa K

    2012-01-01

    Finite automata in which each transition is augmented with an output label, in addition to the familiar input label, are considered finite-state transducers. Transducers have been used to analyze some fundamental issues in bioinformatics. Weighted finite-state transducers have been proposed for pairwise alignments of DNA and protein sequences, as well as for developing kernels for computational biology. Machine learning algorithms for conditional transducers have been implemented and used for DNA sequence analysis. Transducer learning algorithms are based on conditional probability computation, which is calculated using techniques such as pair-database creation, normalization (with Maximum-Likelihood normalization) and parameter optimization (with Expectation-Maximization, EM). These techniques are intrinsically computationally costly, even more so when applied to bioinformatics, because the database sizes are large. In this work, we describe a parallel implementation of an algorithm to learn conditional transducers using these techniques. The algorithm is oriented to bioinformatics applications, such as alignments, phylogenetic trees, and other genome evolution studies. Several experiments were run using the parallel and sequential algorithms on Westgrid (specifically, on the Breeze cluster). The results show that our parallel algorithm is scalable, because execution times are reduced considerably when the data size parameter is increased. Another experiment varied the precision parameter; in this case, we obtain smaller execution times using the parallel algorithm. Finally, the number of threads used to execute the parallel algorithm on the Breeze cluster was varied. In this last experiment, the speedup increases considerably when more threads are used; however, it converges for numbers of threads equal to or greater than 16.

  18. A comparison of algorithms for inference and learning in probabilistic graphical models.

    Science.gov (United States)

    Frey, Brendan J; Jojic, Nebojsa

    2005-09-01

    Research into methods for reasoning under uncertainty is currently one of the most exciting areas of artificial intelligence, largely because it has recently become possible to record, store, and process large amounts of data. While impressive achievements have been made in pattern classification problems such as handwritten character recognition, face detection, speaker identification, and prediction of gene function, it is even more exciting that researchers are on the verge of introducing systems that can perform large-scale combinatorial analyses of data, decomposing the data into interacting components. For example, computational methods for automatic scene analysis are now emerging in the computer vision community. These methods decompose an input image into its constituent objects, lighting conditions, motion patterns, etc. Two of the main challenges are finding effective representations and models in specific applications and finding efficient algorithms for inference and learning in these models. In this paper, we advocate the use of graph-based probability models and their associated inference and learning algorithms. We review exact techniques and various approximate, computationally efficient techniques, including iterated conditional modes, the expectation maximization (EM) algorithm, Gibbs sampling, the mean field method, variational techniques, structured variational techniques and the sum-product algorithm ("loopy" belief propagation). We describe how each technique can be applied in a vision model of multiple, occluding objects and contrast the behaviors and performances of the techniques using a unifying cost function, free energy.

  19. Enumerating all maximal frequent subtrees in collections of phylogenetic trees

    Science.gov (United States)

    2014-01-01

    Background A common problem in phylogenetic analysis is to identify frequent patterns in a collection of phylogenetic trees. The goal is, roughly, to find a subset of the species (taxa) on which all or some significant subset of the trees agree. One popular method to do so is through maximum agreement subtrees (MASTs). MASTs are also used, among other things, as a metric for comparing phylogenetic trees, computing congruence indices and to identify horizontal gene transfer events. Results We give algorithms and experimental results for two approaches to identify common patterns in a collection of phylogenetic trees, one based on agreement subtrees, called maximal agreement subtrees, the other on frequent subtrees, called maximal frequent subtrees. These approaches can return subtrees on larger sets of taxa than MASTs, and can reveal new common phylogenetic relationships not present in either MASTs or the majority rule tree (a popular consensus method). Our current implementation is available on the web at https://code.google.com/p/mfst-miner/. Conclusions Our computational results confirm that maximal agreement subtrees and all maximal frequent subtrees can reveal a more complete phylogenetic picture of the common patterns in collections of phylogenetic trees than maximum agreement subtrees; they are also often more resolved than the majority rule tree. Further, our experiments show that enumerating maximal frequent subtrees is considerably more practical than enumerating ordinary (not necessarily maximal) frequent subtrees. PMID:25061474

  20. Use of Genetic Algorithms for Contrast and Entropy Optimization in ISAR Autofocusing

    Directory of Open Access Journals (Sweden)

    Martorella Marco

    2006-01-01

    Full Text Available Image contrast maximization and entropy minimization are two commonly used techniques for ISAR image autofocusing. When the signal phase history due to the target radial motion has to be approximated with high order polynomial models, classic optimization techniques fail when attempting to either maximize the image contrast or minimize the image entropy. In this paper, a solution to this problem based on genetic algorithms is proposed. The new genetic-algorithm-based implementations overcome the problems encountered by previous implementations based on deterministic approaches. Tests on real data of airplanes and ships confirm the insight.
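
    A minimal real-coded genetic algorithm of the kind used for such autofocusing is sketched below; the fitness callback would evaluate, for example, the contrast of the ISAR image refocused with candidate polynomial phase coefficients. The operators and parameters here are generic assumptions, not the authors' configuration.

    ```python
    import numpy as np

    def ga_maximize(fitness, dim, pop=40, gens=100, bounds=(-1.0, 1.0),
                    mut_sigma=0.05, seed=0):
        """Real-coded GA: tournament selection, blend crossover, Gaussian mutation.

        fitness: callable to maximize, e.g. image contrast as a function of
        the polynomial phase-compensation coefficients.
        """
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        P = rng.uniform(lo, hi, (pop, dim))
        for _ in range(gens):
            f = np.array([fitness(ind) for ind in P])
            children = []
            for _ in range(pop - 1):
                i, j = rng.integers(pop, size=2)
                a = P[i] if f[i] > f[j] else P[j]        # tournament parent 1
                k, l = rng.integers(pop, size=2)
                b = P[k] if f[k] > f[l] else P[l]        # tournament parent 2
                w = rng.random(dim)
                child = w * a + (1 - w) * b              # blend crossover
                child += rng.normal(0, mut_sigma, dim)   # Gaussian mutation
                children.append(np.clip(child, lo, hi))
            P = np.vstack([P[f.argmax()]] + children)    # keep the elite
        f = np.array([fitness(ind) for ind in P])
        return P[f.argmax()], f.max()
    ```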

  1. Maximum Simulated Likelihood and Expectation-Maximization Methods to Estimate Random Coefficients Logit with Panel Data

    DEFF Research Database (Denmark)

    Cherchi, Elisabetta; Guevara, Cristian

    2012-01-01

    The random coefficients logit model allows a more realistic representation of agents' behavior. However, the estimation of that model may involve simulation, which may become impractical with many random coefficients because of the curse of dimensionality. In this paper, the traditional maximum simulated likelihood (MSL) method is compared with the alternative expectation-maximization (EM) method, which does not require simulation. Previous literature had shown that for cross-sectional data, MSL outperforms the EM method in the ability to recover the true parameters and estimation time... with cross-sectional or with panel data, and (d) EM systematically attained more efficient estimators than the MSL method. The results imply that if the purpose of the estimation is only to determine the ratios of the model parameters (e.g., the value of time), the EM method should be preferred. For all...

  2. Maximization of Energy Efficiency in Wireless ad hoc and Sensor Networks With SERENA

    Directory of Open Access Journals (Sweden)

    Saoucene Mahfoudh

    2009-01-01

    Full Text Available In wireless ad hoc and sensor networks, an analysis of the node energy consumption distribution shows that the largest part is due to the time spent in the idle state. This result is at the origin of SERENA, an algorithm to SchEdule RoutEr Nodes Activity. SERENA allows router nodes to sleep, while ensuring end-to-end communication in the wireless network. It is a localized and decentralized algorithm assigning time slots to nodes. Any node stays awake only during its slot and the slots assigned to its neighbors; it sleeps the remaining time. Simulation results show that SERENA enables us to maximize network lifetime while increasing the number of user messages delivered. SERENA is based on a two-hop coloring algorithm, whose complexity in terms of colors and rounds is evaluated. We then quantify the slot reuse. Finally, we show how SERENA improves the node energy consumption distribution and maximizes the energy efficiency of wireless ad hoc and sensor networks. We compare SERENA with classical TDMA and optimized variants such as USAP in wireless ad hoc and sensor networks.
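
    The slot assignment above rests on two-hop coloring: nodes within two hops of each other must receive different colors so that their slots never collide at a common receiver. A centralized greedy sketch of that constraint follows; SERENA itself computes the coloring in a localized, distributed fashion, which this illustration does not attempt.

    ```python
    def two_hop_coloring(adj):
        """Greedy two-hop coloring; colors map one-to-one to TDMA slots.

        adj: dict node -> set of neighbors. Any two nodes at distance <= 2
        receive different colors, so their transmissions cannot collide at
        a common receiver.
        """
        colors = {}
        for u in sorted(adj, key=lambda v: -len(adj[v])):   # high degree first
            two_hop = set(adj[u])
            for w in adj[u]:
                two_hop |= adj[w]
            used = {colors[w] for w in two_hop if w in colors}
            colors[u] = next(c for c in range(len(adj)) if c not in used)
        return colors
    ```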

  3. On Maximizing the Throughput of Packet Transmission under Energy Constraints.

    Science.gov (United States)

    Wu, Weiwei; Dai, Guangli; Li, Yan; Shan, Feng

    2018-06-23

    More and more Internet of Things (IoT) wireless devices have been providing ubiquitous services in recent years. Since most of these devices are powered by batteries, a fundamental trade-off to be addressed is between the energy depleted and the data throughput achieved in wireless data transmission. By exploiting the rate-adaptive capacities of wireless devices, most existing works on energy-efficient data transmission try to design rate-adaptive transmission policies to maximize the amount of transmitted data bits under the energy constraints of devices. Such solutions, however, cannot apply to scenarios where data packets have individual deadlines and only integrally transmitted data packets contribute. Thus, this paper introduces a notion of weighted throughput, which measures the total value of the data packets that are successfully and integrally transmitted before their own deadlines. By designing efficient rate-adaptive transmission policies, this paper aims to make the best use of the energy and maximize the weighted throughput. What is more challenging but of practical significance, we consider the fading effect of wireless channels in both offline and online scenarios. In the offline scenario, we develop an optimal algorithm that computes the optimal solution in pseudo-polynomial time, which is the best possible solution as the problem undertaken is NP-hard. In the online scenario, we propose an efficient heuristic algorithm based on optimal properties derived for the optimal offline solution. Simulation results validate the efficiency of the proposed algorithm.

  4. Algorithms for Reinforcement Learning

    CERN Document Server

    Szepesvari, Csaba

    2010-01-01

    Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner's predictions. Further, the predictions may have long term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms'

  5. Maximal Bell's inequality violation for non-maximal entanglement

    International Nuclear Information System (INIS)

    Kobayashi, M.; Khanna, F.; Mann, A.; Revzen, M.; Santana, A.

    2004-01-01

    Bell's inequality violation (BIQV) for correlations of polarization is studied for a product state of two two-mode squeezed vacuum (TMSV) states. The violation allowed is shown to attain its maximal limit for all values of the squeezing parameter, ζ. We show via an explicit example that a state whose entanglement is not maximal allows maximal BIQV. The Wigner function of the state is non-negative and the average value of either polarization is nil

  6. A very fast implementation of 2D iterative reconstruction algorithms

    DEFF Research Database (Denmark)

    Toft, Peter Aundal; Jensen, Peter James

    1996-01-01

    that iterative reconstruction algorithms can be implemented and run almost as fast as direct reconstruction algorithms. The method has been implemented in a software package that is available for free, providing reconstruction algorithms using ART, EM, and the Least Squares Conjugate Gradient Method...

  7. Primal Decomposition-Based Method for Weighted Sum-Rate Maximization in Downlink OFDMA Systems

    Directory of Open Access Journals (Sweden)

    Weeraddana Chathuranga

    2010-01-01

    Full Text Available We consider the weighted sum-rate maximization problem in downlink Orthogonal Frequency Division Multiple Access (OFDMA) systems. Motivated by the increasing popularity of OFDMA in future wireless technologies, a low complexity suboptimal resource allocation algorithm is obtained for joint optimization of multiuser subcarrier assignment and power allocation. The algorithm is based on an approximated primal decomposition-based method, which is inspired by exact primal decomposition techniques. The original nonconvex optimization problem is divided into two subproblems which can be solved independently. Numerical results are provided to compare the performance of the proposed algorithm to Lagrange relaxation based suboptimal methods as well as to the optimal exhaustive search-based method. Despite its reduced computational complexity, the proposed algorithm provides close-to-optimal performance.

  8. An EM Algorithm for Double-Pareto-Lognormal Generalized Linear Model Applied to Heavy-Tailed Insurance Claims

    Directory of Open Access Journals (Sweden)

    Enrique Calderín-Ojeda

    2017-11-01

    Full Text Available Generalized linear models might not be appropriate when the probability of extreme events is higher than that implied by the normal distribution. Extending the method for estimating the parameters of a double Pareto lognormal distribution (DPLN) in Reed and Jorgensen (2004), we develop an EM algorithm for the heavy-tailed double-Pareto-lognormal generalized linear model. The DPLN distribution is obtained as a mixture of a lognormal distribution with a double Pareto distribution. In this paper, the associated generalized linear model has the location parameter equal to a linear predictor, which is used to model insurance claim amounts for various data sets. The performance is compared with those of the generalized beta (of the second kind) and lognormal distributions.

  9. Using Reversed MFCC and IT-EM for Automatic Speaker Verification

    Directory of Open Access Journals (Sweden)

    Sheeraz Memon

    2012-01-01

    Full Text Available This paper proposes a text independent automatic speaker verification system using IMFCC (Inverse/Reverse Mel Frequency Coefficients) and IT-EM (Information Theoretic Expectation Maximization). To perform speaker verification, feature extraction using the Mel scale has been widely applied and has established better results. The IMFCC is based on an inverse Mel scale. The IMFCC effectively captures information available at the high frequency formants which is ignored by the MFCC. In this paper, the fusion of MFCC and IMFCC at the input level is proposed. GMMs (Gaussian Mixture Models) based on EM (Expectation Maximization) have been widely used for classification in text independent verification. However, EM suffers from convergence issues. In this paper, we use our proposed IT-EM, which has faster convergence, to train speaker models. IT-EM uses information theory principles such as PDE (Parzen Density Estimation) and the KL (Kullback-Leibler) divergence measure. IT-EM adapts the weights, means and covariances, like EM. However, the IT-EM process is not performed on feature vector sets but on a set of centroids obtained using an IT (Information Theoretic) metric. The IT-EM process simultaneously minimizes the divergence measure between the PDE estimate of the feature distribution within a given class and the centroid distribution within the same class. The feature level fusion and IT-EM are tested for the task of speaker verification using NIST2001 and NIST2004. The experimental evaluation validates that MFCC/IMFCC gives better results than the conventional delta/MFCC feature set. The MFCC/IMFCC feature vector size is also much smaller than that of delta MFCC, thus reducing the computational burden as well. The IT-EM method also showed faster convergence than the conventional EM method, and thus it leads to higher speaker recognition scores.

  10. Generation of Referring Expressions: Assessing the Incremental Algorithm

    Science.gov (United States)

    van Deemter, Kees; Gatt, Albert; van der Sluis, Ielka; Power, Richard

    2012-01-01

    A substantial amount of recent work in natural language generation has focused on the generation of "one-shot" referring expressions whose only aim is to identify a target referent. Dale and Reiter's Incremental Algorithm (IA) is often thought to be the best algorithm for maximizing the similarity to referring expressions produced by people. We…

  11. Maximizing and customer loyalty: Are maximizers less loyal?

    Directory of Open Access Journals (Sweden)

    Linda Lai

    2011-06-01

    Full Text Available Despite their efforts to choose the best of all available solutions, maximizers seem to be more inclined than satisficers to regret their choices and to experience post-decisional dissonance. Maximizers may therefore be expected to change their decisions more frequently and hence exhibit lower customer loyalty to providers of products and services compared to satisficers. Findings from the study reported here (N = 1978) support this prediction. Maximizers reported significantly higher intentions to switch to another service provider (television provider) than satisficers. Maximizers' intentions to switch appear to be intensified and mediated by higher proneness to regret, increased desire to discuss relevant choices with others, higher levels of perceived knowledge of alternatives, and higher ego involvement in the end product, compared to satisficers. Opportunities for future research are suggested.

  12. Power maximization of a point absorber wave energy converter using improved model predictive control

    Science.gov (United States)

    Milani, Farideh; Moghaddam, Reihaneh Kardehi

    2017-08-01

    This paper considers controlling and maximizing the absorbed power of wave energy converters for irregular waves. With respect to the physical constraints of the system, a model predictive control is applied. The behavior of irregular waves is predicted with a Kalman filter. Owing to the great influence of the controller parameters on the absorbed power, these parameters are optimized by the imperialist competitive algorithm. The results illustrate the method's efficiency in maximizing the extracted power in the presence of an unknown excitation force, which must be predicted by the Kalman filter.

  13. Efficient Rectangular Maximal-Volume Algorithm for Rating Elicitation in Collaborative Filtering

    KAUST Repository

    Fonarev, Alexander

    2017-02-07

    The cold start problem in Collaborative Filtering can be solved by asking new users to rate a small seed set of representative items or by asking representative users to rate a new item. The question is how to build a seed set that can give enough preference information for making good recommendations. One of the most successful approaches, called Representative Based Matrix Factorization, is based on the Maxvol algorithm. Unfortunately, this approach has one important limitation - a seed set of a particular size requires a rating matrix factorization of fixed rank that should coincide with that size. This is not necessarily optimal in the general case. In the current paper, we introduce a fast algorithm for an analytical generalization of this approach that we call Rectangular Maxvol. It allows the rank of factorization to be lower than the required size of the seed set. Moreover, the paper includes the theoretical analysis of the method's error, the complexity analysis of the existing methods and the comparison to the state-of-the-art approaches.

  14. A Smoothing-Type Algorithm for Solving Linear Complementarity Problems with Strong Convergence Properties

    International Nuclear Information System (INIS)

    Huang Zhenghai; Gu Weizhe

    2008-01-01

    In this paper, we construct an augmented system of the standard monotone linear complementarity problem (LCP), and establish the relations between the augmented system and the LCP. We present a smoothing-type algorithm for solving the augmented system. The algorithm is shown to be globally convergent without assuming any prior knowledge of feasibility/infeasibility of the problem. In particular, if the LCP has a solution, then the algorithm either generates a maximal complementary solution of the LCP or correctly detects the solvability of the LCP; in the latter case, an existing smoothing-type algorithm can be directly applied to solve the LCP without any additional assumption, and it generates a maximal complementary solution of the LCP. If the LCP is infeasible, then the algorithm correctly detects the infeasibility of the LCP. To the best of our knowledge, such properties have not appeared in the existing literature for smoothing-type algorithms

  15. A structural dynamic factor model for the effects of monetary policy estimated by the EM algorithm

    DEFF Research Database (Denmark)

    Bork, Lasse

    This paper applies the maximum likelihood based EM algorithm to a large-dimensional factor analysis of US monetary policy. Specifically, economy-wide effects of shocks to the US federal funds rate are estimated in a structural dynamic factor model in which 100+ US macroeconomic and financial time...... series are driven by the joint dynamics of the federal funds rate and a few correlated dynamic factors. This paper contains a number of methodological contributions to the existing literature on data-rich monetary policy analysis. Firstly, the identification scheme allows for correlated factor dynamics...... as opposed to the orthogonal factors resulting from the popular principal component approach to structural factor models. Correlated factors are economically more sensible and important for a richer monetary policy transmission mechanism. Secondly, I consider both static factor loadings as well as dynamic...

  16. Hemodynamic segmentation of brain perfusion images with delay and dispersion effects using an expectation-maximization algorithm.

    Directory of Open Access Journals (Sweden)

    Chia-Feng Lu

    Full Text Available Automatic identification of various perfusion compartments from dynamic susceptibility contrast magnetic resonance brain images can assist in the clinical diagnosis and treatment of cerebrovascular diseases. Segmentation methods are based on clustering bolus transit-time profiles to discern areas of different tissues. However, cerebrovascular diseases may result in delayed and dispersed local perfusion and therefore alter the hemodynamic signal profiles. Assessing the accuracy of segmentation techniques under delayed and dispersed circumstances is critical to accurately evaluating the severity of the vascular disease. In this study, we improved the expectation-maximization segmentation method by using the results of hierarchical clustering on whitened perfusion data as initial parameters for a multivariate Gaussian mixture model. In addition, Monte Carlo simulations were conducted to evaluate the performance of the proposed method in tissue segmentation under different levels of delay, dispersion, and noise in the signal profiles. The proposed method was used to classify brain tissue types using perfusion data from five normal participants, a patient with unilateral stenosis of the internal carotid artery, and a patient with moyamoya disease. Our results showed that normal, delayed, and dispersed hemodynamics can be well differentiated in patients, so that the local arterial input function for impaired tissues can be recognized to minimize the error when estimating cerebral blood flow. Furthermore, tissue at risk of infarct, and tissue with or without complementary blood supply from the communicating arteries, can be identified.
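
    A rough sketch of the initialization strategy described above: hierarchical clustering on whitened data supplies the starting means for an EM-fitted Gaussian mixture. The synthetic 2-D points and the SciPy/scikit-learn calls are illustrative stand-ins for the perfusion time profiles and the authors' own implementation.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.mixture import GaussianMixture

# Synthetic 2-D "signal profiles": three well-separated groups.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(m, 0.3, size=(200, 2)) for m in (0.0, 2.0, 4.0)])

Xw = (X - X.mean(axis=0)) / X.std(axis=0)   # whiten (z-score) the data

k = 3
# Hierarchical (Ward) clustering provides the initial class means.
labels = fcluster(linkage(Xw, method="ward"), t=k, criterion="maxclust")
init_means = np.vstack([Xw[labels == c].mean(axis=0) for c in range(1, k + 1)])

# EM refinement of the mixture, started from the hierarchical means.
gmm = GaussianMixture(n_components=k, means_init=init_means, random_state=0)
tissue_labels = gmm.fit_predict(Xw)
```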

  17. Automatic control algorithm effects on energy production

    Science.gov (United States)

    Mcnerney, G. M.

    1981-01-01

    A computer model was developed using actual wind time series and turbine performance data to simulate the power produced by the Sandia 17-m VAWT operating in automatic control. The model was used to investigate the influence of starting algorithms on annual energy production. The results indicate that, depending on turbine and local wind characteristics, a bad choice of a control algorithm can significantly reduce overall energy production. The model can be used to select control algorithms and threshold parameters that maximize long term energy production. The results from local site and turbine characteristics were generalized to obtain general guidelines for control algorithm design.

  18. Application of Expectation Maximization Method for Purchase Decision-Making Support in Welding Branch

    Directory of Open Access Journals (Sweden)

    Kujawińska Agnieszka

    2016-06-01

    Full Text Available The article presents a study of applying the proposed method of cluster analysis to support purchasing decisions in the welding industry. The authors analyze the usefulness of the non-hierarchical Expectation Maximization (EM) method in the selection of material (212 combinations of flux and wire melt) for the SAW (Submerged Arc Welding) process. The proposed approach to cluster analysis proves useful in supporting purchase decisions.

  19. A new automatic algorithm for quantification of myocardial infarction imaged by late gadolinium enhancement cardiovascular magnetic resonance: experimental validation and comparison to expert delineations in multi-center, multi-vendor patient data.

    Science.gov (United States)

    Engblom, Henrik; Tufvesson, Jane; Jablonowski, Robert; Carlsson, Marcus; Aletras, Anthony H; Hoffmann, Pavel; Jacquier, Alexis; Kober, Frank; Metzler, Bernhard; Erlinge, David; Atar, Dan; Arheden, Håkan; Heiberg, Einar

    2016-05-04

    Late gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) using magnitude inversion recovery (IR) or phase sensitive inversion recovery (PSIR) has become clinical standard for assessment of myocardial infarction (MI). However, there is no clinical standard for quantification of MI even though multiple methods have been proposed. Simple thresholds have yielded varying results and advanced algorithms have only been validated in single center studies. Therefore, the aim of this study was to develop an automatic algorithm for MI quantification in IR and PSIR LGE images and to validate the new algorithm experimentally and compare it to expert delineations in multi-center, multi-vendor patient data. The new automatic algorithm, EWA (Expectation Maximization, weighted intensity, a priori information), was implemented using an intensity threshold by Expectation Maximization (EM) and a weighted summation to account for partial volume effects. The EWA algorithm was validated in-vivo against triphenyltetrazolium-chloride (TTC) staining (n = 7 pigs with paired IR and PSIR images) and against ex-vivo high resolution T1-weighted images (n = 23 IR and n = 13 PSIR images). The EWA algorithm was also compared to expert delineation in 124 patients from multi-center, multi-vendor clinical trials 2-6 days following first time ST-elevation myocardial infarction (STEMI) treated with percutaneous coronary intervention (PCI) (n = 124 IR and n = 49 PSIR images). Infarct size by the EWA algorithm in vivo in pigs showed a bias to ex-vivo TTC of -1 ± 4%LVM (R = 0.84) in IR and -2 ± 3%LVM (R = 0.92) in PSIR images and a bias to ex-vivo T1-weighted images of 0 ± 4%LVM (R = 0.94) in IR and 0 ± 5%LVM (R = 0.79) in PSIR images. In multi-center patient studies, infarct size by the EWA algorithm showed a bias to expert delineation of -2 ± 6 %LVM (R = 0.81) in IR images (n = 124) and 0 ± 5%LVM (R = 0.89) in
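
    A toy sketch of the two ingredients the EWA acronym names: an EM-derived intensity model plus a weighted summation for partial-volume effects. The two-component mixture, the linear partial-volume weighting and the synthetic pixel data are illustrative assumptions, not the published EWA implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def ewa_like_infarct_fraction(intensities):
    """Estimate the infarct fraction of a myocardial intensity sample.

    EM fits a two-class intensity mixture; a weighted summation lets
    each pixel contribute fractionally (a crude partial-volume model).
    """
    x = intensities.reshape(-1, 1)
    gmm = GaussianMixture(n_components=2, random_state=0).fit(x)
    # Identify the brighter (infarct-like) component.
    infarct = int(np.argmax(gmm.means_.ravel()))
    remote = 1 - infarct
    mu_r = gmm.means_.ravel()[remote]
    mu_i = gmm.means_.ravel()[infarct]
    # Weighted summation: pixels between the class means count fractionally.
    w = np.clip((intensities - mu_r) / (mu_i - mu_r), 0.0, 1.0)
    return w.sum() / len(intensities)

rng = np.random.default_rng(2)
pixels = np.concatenate([rng.normal(100, 10, 800), rng.normal(300, 20, 200)])
print(f"infarct fraction ~ {ewa_like_infarct_fraction(pixels):.2f}")
```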

  20. A Lyapunov based approach to energy maximization in renewable energy technologies

    Science.gov (United States)

    Iyasere, Erhun

    This dissertation describes the design and implementation of Lyapunov-based control strategies for maximizing the power captured by renewable energy harnessing technologies such as (i) a variable speed, variable pitch wind turbine, (ii) a variable speed wind turbine coupled to a doubly fed induction generator, and (iii) a solar power generating system charging a constant voltage battery. First, a torque control strategy is presented to maximize the wind energy captured in variable speed, variable pitch wind turbines at low to medium wind speeds. The proposed strategy applies control torque to the wind turbine pitch and rotor subsystems to simultaneously control the blade pitch and tip speed ratio, via the rotor angular speed, to an optimum point at which the capture efficiency is maximum. The control method allows for aerodynamic rotor power maximization without exact knowledge of the wind turbine model. A series of numerical results shows that the wind turbine can be controlled to achieve maximum energy capture. Next, a control strategy is proposed to maximize the wind energy captured in a variable speed wind turbine, with an internal induction generator, at low to medium wind speeds. The proposed strategy controls the tip speed ratio, via the rotor angular speed, to an optimum point at which the efficiency constant (or power coefficient) is maximal for a particular blade pitch angle and wind speed, by using the generator rotor voltage as a control input. This control method allows for aerodynamic rotor power maximization without exact wind turbine model knowledge. Representative numerical results demonstrate that the wind turbine can be controlled to achieve near maximum energy capture. Finally, a power system consisting of a photovoltaic (PV) array panel and a dc-to-dc switching converter charging a battery is considered, wherein the environmental conditions are time-varying. A backstepping PWM controller is developed to maximize the power of the solar generating

  1. Principles of maximally classical and maximally realistic quantum ...

    Indian Academy of Sciences (India)

    Principles of maximally classical and maximally realistic quantum mechanics. S M ROY. Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India. Abstract. Recently Auberson, Mahoux, Roy and Singh have proved a long standing conjecture of Roy and Singh: In 2N-dimensional phase space, ...

  2. SU-C-207B-02: Maximal Noise Reduction Filter with Anatomical Structures Preservation

    Energy Technology Data Exchange (ETDEWEB)

    Maitree, R; Guzman, G; Chundury, A; Roach, M; Yang, D [Washington University School of Medicine, St Louis, MO (United States)

    2016-06-15

    Purpose: All medical images contain noise, which can result in an undesirable appearance and can reduce the visibility of anatomical details. A variety of techniques are utilized to reduce noise, such as increasing the image acquisition time and using post-processing noise reduction algorithms. However, these techniques either increase imaging time and cost or reduce tissue contrast and effective spatial resolution, which carry useful diagnostic information. The three main aims of this study are: 1) to develop a novel approach that can adaptively and maximally reduce noise while preserving valuable details of anatomical structures, 2) to evaluate the effectiveness of available noise reduction algorithms in comparison to the proposed algorithm, and 3) to demonstrate that the proposed noise reduction approach can be used clinically. Methods: To achieve maximal noise reduction without destroying the anatomical details, the proposed approach automatically estimated the local image noise strength levels and detected the anatomical structures, i.e., tissue boundaries. This information was used to adaptively adjust the strength of the noise reduction filter. The proposed algorithm was tested on 34 repeated swine head datasets and 54 patients' MRI and CT images. The performance was quantitatively evaluated by image quality metrics and manually validated for clinical usage by two radiation oncologists and one radiologist. Results: Qualitative measurements on repeated swine head images demonstrated that the proposed algorithm efficiently removed noise while preserving the structures and tissue boundaries. In comparisons, the proposed algorithm obtained competitive noise reduction performance and outperformed other filters in preserving anatomical structures. Assessments from the manual validation indicate that the proposed noise reduction algorithm is quite adequate for some clinical usages. Conclusion: According to both clinical evaluation (human expert ranking) and

  3. SU-C-207B-02: Maximal Noise Reduction Filter with Anatomical Structures Preservation

    International Nuclear Information System (INIS)

    Maitree, R; Guzman, G; Chundury, A; Roach, M; Yang, D

    2016-01-01

    Purpose: All medical images contain noise, which can result in an undesirable appearance and can reduce the visibility of anatomical details. A variety of techniques are utilized to reduce noise, such as increasing the image acquisition time and using post-processing noise reduction algorithms. However, these techniques either increase imaging time and cost or reduce tissue contrast and effective spatial resolution, which carry useful diagnostic information. The three main aims of this study are: 1) to develop a novel approach that can adaptively and maximally reduce noise while preserving valuable details of anatomical structures, 2) to evaluate the effectiveness of available noise reduction algorithms in comparison to the proposed algorithm, and 3) to demonstrate that the proposed noise reduction approach can be used clinically. Methods: To achieve maximal noise reduction without destroying the anatomical details, the proposed approach automatically estimated the local image noise strength levels and detected the anatomical structures, i.e., tissue boundaries. This information was used to adaptively adjust the strength of the noise reduction filter. The proposed algorithm was tested on 34 repeated swine head datasets and 54 patients' MRI and CT images. The performance was quantitatively evaluated by image quality metrics and manually validated for clinical usage by two radiation oncologists and one radiologist. Results: Qualitative measurements on repeated swine head images demonstrated that the proposed algorithm efficiently removed noise while preserving the structures and tissue boundaries. In comparisons, the proposed algorithm obtained competitive noise reduction performance and outperformed other filters in preserving anatomical structures. Assessments from the manual validation indicate that the proposed noise reduction algorithm is quite adequate for some clinical usages. Conclusion: According to both clinical evaluation (human expert ranking) and

  4. Modeling Multioperator Multi-UAV Operator Attention Allocation Problem Based on Maximizing the Global Reward

    Directory of Open Access Journals (Sweden)

    Yuhang Wu

    2016-01-01

    Full Text Available This paper focuses on the attention allocation problem (AAP) in modeling multioperator multi-UAV (MOMU) operations, with the operator model and task properties taken into consideration. A model of MOMU operator AAP based on maximizing the global reward is established and used to allocate tasks to all operators and, at the same time, to set the work time and rest time for each task. The proposed model is validated in the Matlab simulation environment, using an immune algorithm and a dynamic programming algorithm to evaluate the performance of the model in terms of the reward value with regard to work time, rest time, and task allocation. The results show that the total reward of the proposed model is larger than that obtained from previously published methods using local maximization, and that the total reward of our method has an exponent-like relation with the task arrival rate. The proposed model can improve the operators' task-processing efficiency in MOMU command and control scenarios.

  5. Profit maximization mitigates competition

    DEFF Research Database (Denmark)

    Dierker, Egbert; Grodal, Birgit

    1996-01-01

    We consider oligopolistic markets in which the notion of shareholders' utility is well-defined and compare the Bertrand-Nash equilibria in case of utility maximization with those under the usual profit maximization hypothesis. Our main result states that profit maximization leads to less price...... competition than utility maximization. Since profit maximization tends to raise prices, it may be regarded as beneficial for the owners as a whole. Moreover, if profit maximization is a good proxy for utility maximization, then there is no need for a general equilibrium analysis that takes the distribution...... of profits among consumers fully into account and partial equilibrium analysis suffices...

  6. Implications of maximal Jarlskog invariant and maximal CP violation

    International Nuclear Information System (INIS)

    Rodriguez-Jauregui, E.; Universidad Nacional Autonoma de Mexico

    2001-04-01

    We argue here why the CP violating phase Φ in the quark mixing matrix is maximal, that is, Φ = 90°. In the Standard Model CP violation is related to the Jarlskog invariant J, which can be obtained from non-commuting Hermitian mass matrices. In this article we derive the conditions to have Hermitian mass matrices which give a maximal Jarlskog invariant J and a maximal CP violating phase Φ. We find that all squared moduli of the quark mixing elements have a singular point when the CP violating phase Φ takes the value Φ = 90°. This special feature of the Jarlskog invariant J and the quark mixing matrix is a clear and precise indication that the CP violating phase Φ is maximal in order to let nature treat democratically all of the quark mixing matrix moduli. (orig.)
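
    For reference, the textbook definition of the Jarlskog invariant in terms of CKM matrix elements, and its standard-parametrization form in the mixing angles; this is standard material, not taken from the article itself.

```latex
% Jarlskog invariant (s_{ij} = \sin\theta_{ij}, c_{ij} = \cos\theta_{ij});
% in \delta it is maximized when the CP phase equals 90 degrees.
J = \operatorname{Im}\!\left(V_{us}\, V_{cb}\, V_{ub}^{*}\, V_{cs}^{*}\right)
  = s_{12}\, s_{13}\, s_{23}\, c_{12}\, c_{23}\, c_{13}^{2}\, \sin\delta
```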

  7. A Multi-Hop Energy Neutral Clustering Algorithm for Maximizing Network Information Gathering in Energy Harvesting Wireless Sensor Networks.

    Science.gov (United States)

    Yang, Liu; Lu, Yinzhi; Zhong, Yuanchang; Wu, Xuegang; Yang, Simon X

    2015-12-26

    Energy resource limitation is a severe problem in traditional wireless sensor networks (WSNs) because it restricts the lifetime of the network. Recently, the emergence of energy harvesting techniques has brought with it the expectation to overcome this problem. In particular, it is possible for a sensor node with energy harvesting abilities to work perpetually in an Energy Neutral state. In this paper, a Multi-hop Energy Neutral Clustering (MENC) algorithm is proposed to construct the optimal multi-hop clustering architecture in energy harvesting WSNs, with the goal of achieving perpetual network operation. All cluster heads (CHs) in the network act as routers to transmit data to the base station (BS) cooperatively by a multi-hop communication method. In addition, by analyzing the energy consumption of intra- and inter-cluster data transmission, we give the energy neutrality constraints. Under these constraints, every sensor node can work in an energy neutral state, which in turn provides perpetual network operation. Furthermore, the minimum network data transmission cycle is mathematically derived using convex optimization techniques while the network information gathering is maximal. Simulation results show that our protocol can achieve perpetual network operation, so that consistent data delivery is guaranteed. In addition, substantial improvements in network throughput performance are also achieved as compared to the well-known traditional clustering protocol LEACH and recent energy-harvesting-aware clustering protocols.

  8. A Multi-Hop Energy Neutral Clustering Algorithm for Maximizing Network Information Gathering in Energy Harvesting Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Liu Yang

    2015-12-01

    Full Text Available Energy resource limitation is a severe problem in traditional wireless sensor networks (WSNs) because it restricts the lifetime of the network. Recently, the emergence of energy harvesting techniques has brought with it the expectation to overcome this problem. In particular, it is possible for a sensor node with energy harvesting abilities to work perpetually in an Energy Neutral state. In this paper, a Multi-hop Energy Neutral Clustering (MENC) algorithm is proposed to construct the optimal multi-hop clustering architecture in energy harvesting WSNs, with the goal of achieving perpetual network operation. All cluster heads (CHs) in the network act as routers to transmit data to the base station (BS) cooperatively by a multi-hop communication method. In addition, by analyzing the energy consumption of intra- and inter-cluster data transmission, we give the energy neutrality constraints. Under these constraints, every sensor node can work in an energy neutral state, which in turn provides perpetual network operation. Furthermore, the minimum network data transmission cycle is mathematically derived using convex optimization techniques while the network information gathering is maximal. Simulation results show that our protocol can achieve perpetual network operation, so that consistent data delivery is guaranteed. In addition, substantial improvements in network throughput performance are also achieved as compared to the well-known traditional clustering protocol LEACH and recent energy-harvesting-aware clustering protocols.

  9. Efficient sequential and parallel algorithms for finding edit distance based motifs.

    Science.gov (United States)

    Pal, Soumitra; Xiao, Peng; Rajasekaran, Sanguthevar

    2016-08-18

    Motif search is an important step in extracting meaningful patterns from biological data. The general problem of motif search is intractable, and there is a pressing need to develop efficient exact and approximation algorithms to solve it. In this paper, we present several novel, exact, sequential and parallel algorithms for solving the (l,d) Edit-distance-based Motif Search (EMS) problem: given two integers l,d and n biological strings, find all strings of length l that appear in each input string with at most d errors of the types substitution, insertion and deletion. One popular technique is to explore, for each input string, the set of all possible l-mers that belong to the d-neighborhood of any substring of that string, and to output those which are common to all input strings. We introduce a novel and provably efficient neighborhood exploration technique. We show that it is enough to consider the candidates in the neighborhood which are at a distance of exactly d. We compactly represent these candidate motifs using wildcard characters and efficiently explore them with very few repetitions. Our sequential algorithm uses a trie-based data structure to efficiently store and sort the candidate motifs. Our parallel algorithm, in a multi-core shared memory setting, uses arrays for storing and a novel modification of radix sort for sorting the candidate motifs. Algorithms for EMS are customarily evaluated on several challenging instances such as (8,1), (12,2), (16,3), (20,4), and so on. The best previously known algorithm, EMS1, is sequential and solves instances up to (16,3) in an estimated 3 days. Our sequential algorithms are more than 20 times faster on (16,3). On other hard instances such as (9,2), (11,3), and (13,4), our algorithms are much faster. Our parallel algorithm shows more than 600% scaling performance while using 16 threads. Our algorithms have pushed up the state-of-the-art of EMS solvers and we believe that the techniques introduced in

  10. Region of interest evaluation of SPECT image reconstruction methods using a realistic brain phantom

    International Nuclear Information System (INIS)

    Xia, Weishi; Glick, S.J.; Soares, E.J.

    1996-01-01

    A realistic numerical brain phantom, developed by Zubal et al., was used for a region-of-interest evaluation of the accuracy and noise variance of the following SPECT reconstruction methods: (1) Maximum-Likelihood reconstruction using the Expectation-Maximization (ML-EM) algorithm; (2) an EM algorithm using ordered subsets (OS-EM); (3) a rescaled block-iterative EM algorithm (RBI-EM); and (4) a filtered backprojection algorithm that uses a combination of the Bellini method for attenuation compensation and an iterative spatial blurring correction method using the frequency-distance principle (FDP). The Zubal phantom was made from segmented MRI slices of the brain, so that neuro-anatomical structures are well defined and indexed. Small regions-of-interest (ROIs) from the white matter, grey matter in the center of the brain, and grey matter from the peripheral area of the brain were selected for the evaluation. Photon attenuation and distance-dependent collimator blurring were modeled. Multiple independent noise realizations were generated for two different count levels. The simulation study showed that the ROI bias measured for the EM-based algorithms decreased as the iteration number increased, and that the OS-EM and RBI-EM algorithms (16 and 64 subsets were used) achieved accuracy equivalent to the ML-EM algorithm at about the same noise variance, with far fewer iterations. The Bellini-FDP restoration algorithm converged quickly and required less computation per iteration. The ML-EM algorithm had a slightly better ROI bias vs. variance trade-off than the other algorithms
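
    All three EM-based reconstructions in this comparison build on the same multiplicative ML-EM update. A minimal dense-matrix sketch follows; the toy random system matrix stands in for a real SPECT projector, and no attenuation or detector-response modeling is included.

```python
import numpy as np

def ml_em(A, y, n_iters=50, eps=1e-12):
    """Classic ML-EM update for emission tomography.

    A is the (n_bins x n_voxels) system matrix, y the measured counts.
    OS-EM and RBI-EM accelerate this update by cycling over subsets
    of the projection bins.
    """
    x = np.ones(A.shape[1])             # flat non-negative start image
    sens = A.sum(axis=0) + eps          # per-voxel sensitivity A^T 1
    for _ in range(n_iters):
        proj = A @ x + eps              # forward projection
        x *= (A.T @ (y / proj)) / sens  # multiplicative EM update
    return x

# Tiny synthetic example: 40 detector bins viewing 16 voxels.
rng = np.random.default_rng(3)
A = rng.random((40, 16))
x_true = rng.random(16)
y = rng.poisson(A @ x_true)             # Poisson-noisy measurements
x_hat = ml_em(A, y)
```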

  11. SU(2) gauge theory in the maximally Abelian gauge without monopoles

    International Nuclear Information System (INIS)

    Shmakov, S.Yu.; Zadorozhnyj, A.M.

    1995-01-01

    We present an algorithm for the simulation of SU(2) lattice gauge theory under the maximally Abelian (MA) gauge, together with first numerical results for the theory without Abelian monopoles. The results support the idea that a nonperturbative interaction arises between monopoles and the residual Abelian field, while the other interactions are perturbative. It is shown that the Gribov region for the theory with the MA gauge fixed is non-connected. 12 refs., 1 tab

  12. MAXIMIZING HYDROPOWER PRODUCTION FROM RESERVOIRS: THE CASE STUDY OF MARKABA

    International Nuclear Information System (INIS)

    Jaafar, H.

    2014-01-01

    Hydropower is a form of renewable energy that is clean and cheap. Under uncertain climatic conditions, maximization of hydropower production becomes a challenging task. Stochastic Dynamic Programming (SDP) is a promising optimization algorithm that is used for complex non-linear reservoir operational policies and strategies. In this research, a combined simulation-SDP optimization model is developed and verified for maximizing large-scale hydropower production at a monthly time step. The model is developed to generate optimal operational policies for the Qarawn reservoir in Lebanon and to test these policies in real-time conditions. The model is used to derive operational regimes for the Qarawn reservoir under varying flows using transitional probability matrices. Simulating the derived rules and the generated operational policies proved effective in maximizing the hydropower production from the Markaba power plant. The model could be successfully applied to other hydropower dams in the region. (author)

  13. Maximizers versus satisficers

    Directory of Open Access Journals (Sweden)

    Andrew M. Parker

    2007-12-01

    Full Text Available Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decisions, more avoidance of decision making, and greater tendency to experience regret. Contrary to predictions, self-reported maximizers were more likely to report spontaneous decision making. However, the relationship between self-reported maximizing and worse life outcomes is largely unaffected by controls for measures of other decision-making styles, decision-making competence, and demographic variables.

  14. Pharmacokinetic modelling of intravenous tobramycin in adolescent and adult patients with cystic fibrosis using the nonparametric expectation maximization (NPEM) algorithm.

    Science.gov (United States)

    Touw, D J; Vinks, A A; Neef, C

    1997-06-01

    The availability of personal computer programs for individualizing drug dosage regimens has stimulated interest in modelling population pharmacokinetics. Data from 82 adolescent and adult patients with cystic fibrosis (CF) who were treated with intravenous tobramycin because of an exacerbation of their pulmonary infection were analysed with a non-parametric expectation maximization (NPEM) algorithm. This algorithm estimates the entire discrete joint probability density of the pharmacokinetic parameters. It also provides traditional parametric statistics, such as the means, standard deviations, medians, covariances and correlations among the various parameters, as well as graphical 2- and 3-dimensional representations of the marginal densities of the parameters investigated. Several models for intravenous tobramycin in adolescent and adult patients with CF were compared. Covariates were total body weight (for the volume of distribution) and creatinine clearance (for the total body clearance and elimination rate). Because of a lack of data on patients with poor renal function, restricted models with the non-renal clearance and the non-renal elimination rate constant fixed at literature values of 0.15 L/h and 0.01 h-1 were also included. In this population, intravenous tobramycin was best described by a median (+/- dispersion factor) volume of distribution per unit of total body weight of 0.28 +/- 0.05 L/kg, an elimination rate constant of 0.25 +/- 0.10 h-1, and an elimination rate constant per unit of creatinine clearance of 0.0008 +/- 0.0009 h-1/(ml/min/1.73 m2). Analysis of populations of increasing size showed that, using the restricted model with the non-renal elimination rate constant fixed at 0.01 h-1, a model based on a population of only 10 to 20 patients contained parameter values similar to those of the entire population, whereas using the full model a larger population (at least 40 patients) was needed.

  15. Coverage maximization under resource constraints using a nonuniform proliferating random walk.

    Science.gov (United States)

    Saha, Sudipta; Ganguly, Niloy

    2013-02-01

    Information management services on networks, such as search and dissemination, play a key role in any large-scale distributed system. One of the most desirable features of these services is the maximization of the coverage, i.e., the number of distinctly visited nodes under constraints of network resources as well as time. However, redundant visits of nodes by different message packets (modeled, e.g., as walkers) initiated by the underlying algorithms for these services cause wastage of network resources. In this work, using results from analytical studies done in the past on a K-random-walk-based algorithm, we identify that redundancy quickly increases with an increase in the density of the walkers. Based on this postulate, we design a very simple distributed algorithm which dynamically estimates the density of the walkers and thereby carefully proliferates walkers in sparse regions. We use extensive computer simulations to test our algorithm in various kinds of network topologies whereby we find it to be performing particularly well in networks that are highly clustered as well as sparse.
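
    A small sketch of the core idea: walkers fork when they appear to be in sparsely visited territory, subject to a global walker budget. The visit-count density estimate and the toy ring topology are illustrative simplifications of the paper's distributed estimator.

```python
import random

def proliferating_walk(adj, start, steps, budget):
    """Density-aware proliferating random walk (coverage sketch).

    Walkers fork in apparently unexplored regions; local visit counts
    are a crude stand-in for the paper's density estimation.
    """
    visited = {start}
    visits = {start: 1}
    walkers = [start]
    for _ in range(steps):
        next_walkers = []
        for node in walkers:
            nxt = random.choice(adj[node])
            visited.add(nxt)
            visits[nxt] = visits.get(nxt, 0) + 1
            next_walkers.append(nxt)
            # Fork a new walker on first visits, within the budget.
            if visits[nxt] == 1 and len(next_walkers) + len(walkers) < budget:
                next_walkers.append(nxt)
        walkers = next_walkers
    return len(visited)       # coverage: number of distinct nodes seen

# Ring lattice with shortcuts as a toy clustered, sparse topology.
n = 200
adj = {i: [(i - 1) % n, (i + 1) % n, (i * 7) % n] for i in range(n)}
print(proliferating_walk(adj, start=0, steps=50, budget=40))
```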

  16. Blood detection in wireless capsule endoscopy using expectation maximization clustering

    Science.gov (United States)

    Hwang, Sae; Oh, JungHwan; Cox, Jay; Tang, Shou Jiang; Tibbals, Harry F.

    2006-03-01

    Wireless Capsule Endoscopy (WCE) is a relatively new technology (FDA approved in 2002) allowing doctors to view most of the small intestine. Other endoscopies such as colonoscopy, upper gastrointestinal endoscopy, push enteroscopy, and intraoperative enteroscopy could be used to visualize up to the stomach, duodenum, colon, and terminal ileum, but there existed no method to view most of the small intestine without surgery. With the miniaturization of wireless and camera technologies came the ability to view the entire gastrointestinal tract with little effort. A tiny disposable video capsule is swallowed, transmitting two images per second to a small data receiver worn by the patient on a belt. During an approximately 8-hour course, over 55,000 images are recorded to the worn device and then downloaded to a computer for later examination. Typically, a medical clinician spends more than two hours analyzing a WCE video. Research has attempted to automatically find abnormal regions (especially bleeding) to reduce the time needed to analyze the videos. The manufacturers also provide a software tool to detect bleeding, called the Suspected Blood Indicator (SBI), but its accuracy is not high enough to replace human examination. It was reported that the sensitivity and the specificity of SBI were about 72% and 85%, respectively. To address this problem, we propose a technique to detect bleeding regions automatically utilizing the Expectation Maximization (EM) clustering algorithm. Our experimental results indicate that the proposed bleeding detection method achieves 92% sensitivity and 98% specificity.

  17. Worst-Case Energy Efficiency Maximization in a 5G Massive MIMO-NOMA System.

    Science.gov (United States)

    Chinnadurai, Sunil; Selvaprabhu, Poongundran; Jeong, Yongchae; Jiang, Xueqin; Lee, Moon Ho

    2017-09-18

    In this paper, we examine the robust beamforming design to tackle the energy efficiency (EE) maximization problem in a 5G massive multiple-input multiple-output (MIMO)-non-orthogonal multiple access (NOMA) downlink system with imperfect channel state information (CSI) at the base station. A novel joint user pairing and dynamic power allocation (JUPDPA) algorithm is proposed to minimize inter-user interference and to enhance fairness between the users. This work assumes imperfect CSI by adding uncertainties to the channel matrices with a worst-case model, i.e., the ellipsoidal uncertainty model (EUM). A fractional non-convex optimization problem is formulated to maximize the EE subject to the transmit power constraints and the minimum rate requirement for the cell edge user. The designed problem is difficult to solve due to its nonlinear fractional objective function. We first employ the properties of fractional programming to transform the non-convex problem into its equivalent parametric form. Then, an efficient iterative algorithm based on the constrained concave-convex procedure (CCCP) is proposed, which converges to a stationary point of the above problem. Finally, Dinkelbach's algorithm is employed to determine the maximum energy efficiency. Comprehensive numerical results illustrate that the proposed scheme attains higher worst-case energy efficiency than the existing NOMA schemes and the conventional orthogonal multiple access (OMA) scheme.

  18. Worst-Case Energy Efficiency Maximization in a 5G Massive MIMO-NOMA System

    Directory of Open Access Journals (Sweden)

    Sunil Chinnadurai

    2017-09-01

    Full Text Available In this paper, we examine the robust beamforming design to tackle the energy efficiency (EE) maximization problem in a 5G massive multiple-input multiple-output (MIMO)-non-orthogonal multiple access (NOMA) downlink system with imperfect channel state information (CSI) at the base station. A novel joint user pairing and dynamic power allocation (JUPDPA) algorithm is proposed to minimize inter-user interference and to enhance fairness between the users. This work assumes imperfect CSI by adding uncertainties to the channel matrices with a worst-case model, i.e., the ellipsoidal uncertainty model (EUM). A fractional non-convex optimization problem is formulated to maximize the EE subject to the transmit power constraints and the minimum rate requirement for the cell edge user. The designed problem is difficult to solve due to its nonlinear fractional objective function. We first employ the properties of fractional programming to transform the non-convex problem into its equivalent parametric form. Then, an efficient iterative algorithm based on the constrained concave-convex procedure (CCCP) is proposed, which converges to a stationary point of the above problem. Finally, Dinkelbach's algorithm is employed to determine the maximum energy efficiency. Comprehensive numerical results illustrate that the proposed scheme attains higher worst-case energy efficiency than the existing NOMA schemes and the conventional orthogonal multiple access (OMA) scheme.
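
    Dinkelbach's algorithm, used in the final step above, turns a ratio maximization into a sequence of parametric subproblems. A generic sketch follows; the 1-D grid search is an illustrative stand-in for the paper's CCCP-based inner solver.

```python
import numpy as np

def dinkelbach(argmax_parametric, f, g, x0, tol=1e-8, max_iter=100):
    """Maximize the ratio f(x)/g(x) (g > 0) by Dinkelbach iteration.

    `argmax_parametric(lam)` must return argmax_x { f(x) - lam * g(x) };
    the caller supplies it (in the paper, a CCCP solver plays this role).
    """
    lam = f(x0) / g(x0)
    x = x0
    for _ in range(max_iter):
        x = argmax_parametric(lam)
        F = f(x) - lam * g(x)
        if abs(F) < tol:          # F(lam) = 0 exactly at the optimal ratio
            break
        lam = f(x) / g(x)         # update the ratio estimate
    return x, lam

# Toy example: maximize a rate-like f over a power-like g on a grid.
xs = np.linspace(0.01, 5.0, 1000)
f = lambda x: np.log1p(2.0 * x)   # achievable rate (concave)
g = lambda x: 1.0 + x             # total power (affine)
inner = lambda lam: xs[np.argmax(f(xs) - lam * g(xs))]
x_star, ee = dinkelbach(inner, f, g, x0=1.0)
print(f"optimal EE ~ {ee:.4f} at x ~ {x_star:.3f}")
```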

  19. Simulated annealing algorithm for optimal capital growth

    Science.gov (United States)

    Luo, Yong; Zhu, Bo; Tang, Yong

    2014-08-01

    We investigate the problem of dynamic optimal capital growth of a portfolio. A general framework is developed that strives to maximize the expected logarithmic utility of the long-term growth rate. Exact optimization algorithms run into difficulties in this framework, which motivates applying a simulated annealing algorithm to optimize the capital growth of a given portfolio. Empirical results with real financial data indicate that the approach is promising for capital growth portfolios.
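
    A compact sketch of simulated annealing applied to the expected-log-growth (Kelly-type) objective described above. The synthetic return data, proposal step and cooling schedule are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic daily returns for 4 assets; the paper uses real financial data.
R = rng.normal(loc=[0.0004, 0.0006, 0.0002, 0.0005],
               scale=[0.01, 0.02, 0.005, 0.015], size=(2500, 4))

def log_growth(w):
    """Expected logarithmic growth rate of portfolio weights w."""
    return np.mean(np.log1p(R @ w))

def anneal(n_iter=20000, T0=1e-3, cooling=0.9995):
    w = np.full(4, 0.25)                   # start fully diversified
    best_w, best_v, T = w, log_growth(w), T0
    for _ in range(n_iter):
        cand = np.clip(w + rng.normal(0, 0.02, 4), 0, None)
        cand /= cand.sum()                 # stay on the weight simplex
        dv = log_growth(cand) - log_growth(w)
        # Accept improvements always; worse moves with Boltzmann probability.
        if dv > 0 or rng.random() < np.exp(dv / T):
            w = cand
            if log_growth(w) > best_v:
                best_w, best_v = w, log_growth(w)
        T *= cooling                       # geometric cooling schedule
    return best_w, best_v

w_star, v_star = anneal()
```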

  20. Constructing Optimal Coarse-Grained Sites of Huge Biomolecules by Fluctuation Maximization.

    Science.gov (United States)

    Li, Min; Zhang, John Zenghui; Xia, Fei

    2016-04-12

    Coarse-grained (CG) models are valuable tools for the study of functions of large biomolecules on large length and time scales. The definition of CG representations for huge biomolecules is always a formidable challenge. In this work, we propose a new method called fluctuation maximization coarse-graining (FM-CG) to construct the CG sites of biomolecules. The defined residual in FM-CG converges to a maximal value as the number of CG sites increases, allowing an optimal CG model to be rigorously defined on the basis of the maximum. More importantly, we developed a robust algorithm called stepwise local iterative optimization (SLIO) to accelerate the process of coarse-graining large biomolecules. By means of the efficient SLIO algorithm, the computational cost of coarse-graining large biomolecules is reduced to within the time scale of seconds, which is far lower than that of conventional simulated annealing. The coarse-graining of two huge systems, chaperonin GroEL and lengsin, indicates that our new methods can coarse-grain huge biomolecular systems with up to 10,000 residues within the time scale of minutes. The further parametrization of CG sites derived from FM-CG allows us to construct the corresponding CG models for studies of the functions of huge biomolecular systems.

  1. Developing maximal neuromuscular power: Part 1--biological basis of maximal power production.

    Science.gov (United States)

    Cormie, Prue; McGuigan, Michael R; Newton, Robert U

    2011-01-01

    This series of reviews focuses on the most important neuromuscular function in many sport performances, the ability to generate maximal muscular power. Part 1 focuses on the factors that affect maximal power production, while part 2, which will follow in a forthcoming edition of Sports Medicine, explores the practical application of these findings by reviewing the scientific literature relevant to the development of training programmes that most effectively enhance maximal power production. The ability of the neuromuscular system to generate maximal power is affected by a range of interrelated factors. Maximal muscular power is defined and limited by the force-velocity relationship and affected by the length-tension relationship. The ability to generate maximal power is influenced by the type of muscle action involved and, in particular, the time available to develop force, storage and utilization of elastic energy, interactions of contractile and elastic elements, potentiation of contractile and elastic filaments as well as stretch reflexes. Furthermore, maximal power production is influenced by morphological factors including fibre type contribution to whole muscle area, muscle architectural features and tendon properties as well as neural factors including motor unit recruitment, firing frequency, synchronization and inter-muscular coordination. In addition, acute changes in the muscle environment (i.e. alterations resulting from fatigue, changes in hormone milieu and muscle temperature) impact the ability to generate maximal power. Resistance training has been shown to impact each of these neuromuscular factors in quite specific ways. Therefore, an understanding of the biological basis of maximal power production is essential for developing training programmes that effectively enhance maximal power production in the human.

  2. A new simple iterative reconstruction algorithm for SPECT transmission measurement

    International Nuclear Information System (INIS)

    Hwang, D.S.; Zeng, G.L.

    2005-01-01

    This paper proposes a new iterative reconstruction algorithm for transmission tomography and compares this algorithm with several other methods. The new algorithm is simple and resembles the emission ML-EM algorithm in form. Due to its simplicity, it is easy to implement and fast to compute a new update at each iteration. The algorithm also always guarantees non-negative solutions. Evaluations are performed using simulation studies and real phantom data. Comparisons with other algorithms such as convex, gradient, and logMLEM show that the proposed algorithm is as good as others and performs better in some cases

  3. A Joint Land Cover Mapping and Image Registration Algorithm Based on a Markov Random Field Model

    Directory of Open Access Journals (Sweden)

    Apisit Eiumnoh

    2013-10-01

    Full Text Available Traditionally, image registration of multi-modal and multi-temporal images is performed satisfactorily before land cover mapping. However, since multi-modal and multi-temporal images are likely to be obtained from different satellite platforms and/or acquired at different times, perfect alignment is very difficult to achieve. As a result, a proper land cover mapping algorithm must be able to correct registration errors as well as perform an accurate classification. In this paper, we propose a joint classification and registration technique based on a Markov random field (MRF) model to simultaneously align two or more images and obtain a land cover map (LCM) of the scene. The expectation maximization (EM) algorithm is employed to solve the joint image classification and registration problem by iteratively estimating the map parameters and approximate posterior probabilities. Then, the maximum a posteriori (MAP) criterion is used to produce an optimum land cover map. We conducted experiments on a set of four simulated images and one pair of remotely sensed images to investigate the effectiveness and robustness of the proposed algorithm. Our results show that, with proper selection of a critical MRF parameter, the resulting LCMs derived from an unregistered image pair can achieve an accuracy that is as high as when images are perfectly aligned. Furthermore, the registration error can be greatly reduced.

  4. Link adaptation algorithm for distributed coded transmissions in cooperative OFDMA systems

    DEFF Research Database (Denmark)

    Varga, Mihaly; Badiu, Mihai Alin; Bota, Vasile

    2015-01-01

    This paper proposes a link adaptation algorithm for cooperative transmissions in the down-link connection of an OFDMA-based wireless system. The algorithm aims at maximizing the spectral efficiency of a relay-aided communication link, while satisfying the block error rate constraints at both... adaptation algorithm has linear complexity with the number of available resource blocks, while still providing very good performance, as shown by simulation results....

  5. Teaching learning based optimization algorithm and its engineering applications

    CERN Document Server

    Rao, R Venkata

    2016-01-01

    Describing a new optimization algorithm, the “Teaching-Learning-Based Optimization (TLBO),” in a clear and lucid style, this book maximizes reader insights into how the TLBO algorithm can be used to solve continuous and discrete optimization problems involving single or multiple objectives. As the algorithm operates on the principle of teaching and learning, where teachers influence the quality of learners’ results, the elitist version of TLBO algorithm (ETLBO) is described along with applications of the TLBO algorithm in the fields of electrical engineering, mechanical design, thermal engineering, manufacturing engineering, civil engineering, structural engineering, computer engineering, electronics engineering, physics and biotechnology. The book offers a valuable resource for scientists, engineers and practitioners involved in the development and usage of advanced optimization algorithms.
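
    A bare-bones sketch of the two TLBO phases the book is organized around, teacher and learner, applied to minimizing a sphere function. The population size, teaching-factor handling and bounds treatment are generic textbook choices, not prescriptions from the book.

```python
import numpy as np

rng = np.random.default_rng(5)

def tlbo(obj, bounds, pop=20, dim=5, iters=100):
    """Minimal TLBO (minimization): the teacher phase pulls learners
    toward the current best solution; the learner phase lets random
    pairs of learners teach each other."""
    lo, hi = bounds
    X = rng.uniform(lo, hi, (pop, dim))
    f = np.apply_along_axis(obj, 1, X)
    for _ in range(iters):
        teacher = X[np.argmin(f)]
        mean = X.mean(axis=0)
        for i in range(pop):
            # Teacher phase: move toward the teacher, away from the mean.
            Tf = rng.integers(1, 3)            # teaching factor in {1, 2}
            cand = np.clip(X[i] + rng.random(dim) * (teacher - Tf * mean),
                           lo, hi)
            if obj(cand) < f[i]:
                X[i], f[i] = cand, obj(cand)
            # Learner phase: learn from a randomly chosen other learner.
            j = rng.integers(pop)
            if j != i:
                step = (X[i] - X[j]) if f[i] < f[j] else (X[j] - X[i])
                cand = np.clip(X[i] + rng.random(dim) * step, lo, hi)
                if obj(cand) < f[i]:
                    X[i], f[i] = cand, obj(cand)
    return X[np.argmin(f)], f.min()

best_x, best_f = tlbo(lambda x: np.sum(x**2), bounds=(-5.0, 5.0))
```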

  6. Maximizers versus satisficers

    OpenAIRE

    Andrew M. Parker; Wandi Bruine de Bruin; Baruch Fischhoff

    2007-01-01

    Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decisions...

  7. Energy Efficiency Maximization for WSNs with Simultaneous Wireless Information and Power Transfer.

    Science.gov (United States)

    Yu, Hongyan; Zhang, Yongqiang; Guo, Songtao; Yang, Yuanyuan; Ji, Luyue

    2017-08-18

    Recently, the simultaneous wireless information and power transfer (SWIPT) technique has been regarded as a promising approach to enhance the performance of wireless sensor networks with limited energy supply. However, from a green communication perspective, energy efficiency optimization for SWIPT system design has not been investigated in Wireless Rechargeable Sensor Networks (WRSNs). In this paper, we consider the tradeoffs between energy efficiency and three factors, namely spectral efficiency, transmit power, and outage target rate, for two different modes at the receiver, i.e., power splitting (PS) and time switching (TS). Moreover, we formulate the energy efficiency maximization problem, subject to the constraints of minimum Quality of Service (QoS), minimum harvested energy and maximum transmission power, as a non-convex optimization problem. In particular, we focus on optimizing the power control and power allocation policy in the PS and TS modes to maximize the energy efficiency of data transmission. For the PS and TS modes, we propose corresponding algorithms to characterize a non-convex optimization problem that takes into account the circuit power consumption and the harvested energy. By exploiting nonlinear fractional programming and Lagrangian dual decomposition, we propose suboptimal iterative algorithms to obtain solutions of the non-convex optimization problems. Furthermore, we derive the outage probability and effective throughput for the scenarios in which the transmitter does not know, or only partially knows, the channel state information (CSI) of the receiver. Simulation results illustrate that the proposed optimal iterative algorithm can achieve optimal solutions within a small number of iterations, and various tradeoffs between energy efficiency and spectral efficiency, transmit power, and outage target rate, respectively.

  8. Energy Efficiency Maximization for WSNs with Simultaneous Wireless Information and Power Transfer

    Science.gov (United States)

    Yu, Hongyan; Zhang, Yongqiang; Yang, Yuanyuan; Ji, Luyue

    2017-01-01

    Recently, the simultaneous wireless information and power transfer (SWIPT) technique has been regarded as a promising approach to enhance the performance of wireless sensor networks with limited energy supply. However, from a green communication perspective, energy efficiency optimization for SWIPT system design has not been investigated in Wireless Rechargeable Sensor Networks (WRSNs). In this paper, we consider the tradeoffs between energy efficiency and three factors, namely spectral efficiency, transmit power, and outage target rate, for two different modes at the receiver, i.e., power splitting (PS) and time switching (TS). Moreover, we formulate the energy efficiency maximization problem, subject to the constraints of minimum Quality of Service (QoS), minimum harvested energy and maximum transmission power, as a non-convex optimization problem. In particular, we focus on optimizing the power control and power allocation policy in the PS and TS modes to maximize the energy efficiency of data transmission. For the PS and TS modes, we propose corresponding algorithms to characterize a non-convex optimization problem that takes into account the circuit power consumption and the harvested energy. By exploiting nonlinear fractional programming and Lagrangian dual decomposition, we propose suboptimal iterative algorithms to obtain solutions of the non-convex optimization problems. Furthermore, we derive the outage probability and effective throughput for the scenarios in which the transmitter does not know, or only partially knows, the channel state information (CSI) of the receiver. Simulation results illustrate that the proposed optimal iterative algorithm can achieve optimal solutions within a small number of iterations, and various tradeoffs between energy efficiency and spectral efficiency, transmit power, and outage target rate, respectively. PMID:28820496

  9. An Efficient Approach to Mining Maximal Contiguous Frequent Patterns from Large DNA Sequence Databases

    Directory of Open Access Journals (Sweden)

    Md. Rezaul Karim

    2012-03-01

    Full Text Available Mining interesting patterns from DNA sequences is one of the most challenging tasks in bioinformatics and computational biology. Maximal contiguous frequent patterns are preferable for expressing the function and structure of DNA sequences and hence can capture the common data characteristics among related sequences. Biologists are interested in finding frequent orderly arrangements of motifs that are responsible for similar expression of a group of genes. In order to reduce mining time and complexity, however, most existing sequence mining algorithms either focus on finding short DNA sequences or require explicit specification of sequence lengths in advance. The challenge is to find longer sequences without specifying sequence lengths in advance. In this paper, we propose an efficient approach to mining maximal contiguous frequent patterns from large DNA sequence datasets. The experimental results show that our proposed approach is memory-efficient and mines maximal contiguous frequent patterns within a reasonable time.
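
    To make the mining task concrete, here is a deliberately naive reference implementation that enumerates all contiguous substrings and keeps the maximal frequent ones; the paper's contribution is precisely to avoid this kind of exhaustive enumeration on large datasets.

```python
from collections import defaultdict

def maximal_contiguous_frequent(seqs, min_support):
    """Find contiguous substrings occurring in >= min_support sequences,
    keeping only the maximal ones (no frequent proper superstring).

    O(n^3)-per-sequence brute force, usable only on tiny inputs."""
    support = defaultdict(set)
    for sid, s in enumerate(seqs):
        for i in range(len(s)):
            for j in range(i + 1, len(s) + 1):
                support[s[i:j]].add(sid)
    frequent = {p for p, ids in support.items() if len(ids) >= min_support}
    # Maximal: no other frequent pattern properly contains it.
    return {p for p in frequent
            if not any(p != q and p in q for q in frequent)}

dna = ["ATCGATCGA", "GGATCGATT", "TTATCGATC"]
print(maximal_contiguous_frequent(dna, min_support=3))  # -> {'ATCGAT'}
```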

  10. COMPARISON OF DIFFERENT SEGMENTATION ALGORITHMS FOR DERMOSCOPIC IMAGES

    Directory of Open Access Journals (Sweden)

    A.A. Haseena Thasneem

    2015-05-01

    Full Text Available This paper compares different algorithms for the segmentation of skin lesions in dermoscopic images. The basic segmentation algorithms compared are thresholding techniques (Global and Adaptive), region-based techniques (K-means, Fuzzy C-means, Expectation Maximization and Statistical Region Merging), contour models (Active Contour Model and Chan-Vese Model) and Spectral Clustering. Accuracy, sensitivity, specificity, border error, Hammoude distance, Hausdorff distance, MSE, PSNR and elapsed time metrics were used to evaluate the various segmentation techniques.

  11. Mean-variance analysis of block-iterative reconstruction algorithms modeling 3D detector response in SPECT

    Science.gov (United States)

    Lalush, D. S.; Tsui, B. M. W.

    1998-06-01

    We study the statistical convergence properties of two fast iterative reconstruction algorithms, the rescaled block-iterative (RBI) and ordered subset (OS) EM algorithms, in the context of cardiac SPECT with 3D detector response modeling. The Monte Carlo method was used to generate nearly noise-free projection data modeling the effects of attenuation, detector response, and scatter from the MCAT phantom. One thousand noise realizations were generated with an average count level approximating a typical Tl-201 cardiac study. Each noise realization was reconstructed using the RBI and OS algorithms for cases with and without detector response modeling. For each iteration up to twenty, we generated mean and variance images, as well as covariance images for six specific locations. Both OS and RBI converged in the mean to results that were close to the noise-free ML-EM result using the same projection model. When detector response was not modeled in the reconstruction, RBI exhibited considerably lower noise variance than OS for the same resolution. When 3D detector response was modeled, RBI-EM provided a small improvement in the tradeoff between noise level and resolution recovery, primarily in the axial direction, while OS required about half the number of iterations of RBI to reach the same resolution. We conclude that OS is faster than RBI, but may be sensitive to errors in the projection model. Both OS-EM and RBI-EM are effective alternatives to the ML-EM algorithm, but noise level and speed of convergence depend on the projection model used.
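
    The ordered-subsets idea that gives OS-EM its speed-up is simply the ML-EM multiplicative update applied cyclically to subsets of the projection bins. A minimal sketch follows (RBI-EM differs mainly in how each subset update is rescaled, which is omitted here); the random system matrix is a toy stand-in for a real projector.

```python
import numpy as np

def os_em(A, y, n_subsets=4, n_iters=10, eps=1e-12):
    """OS-EM: cycle the ML-EM update over subsets of projection bins.

    One pass over all subsets roughly matches one full ML-EM iteration,
    which is where the acceleration comes from.
    """
    n_bins, n_vox = A.shape
    x = np.ones(n_vox)
    subsets = np.array_split(np.arange(n_bins), n_subsets)
    for _ in range(n_iters):
        for s in subsets:
            As, ys = A[s], y[s]
            proj = As @ x + eps                          # subset forward projection
            x *= (As.T @ (ys / proj)) / (As.sum(axis=0) + eps)
    return x

rng = np.random.default_rng(6)
A = rng.random((64, 25))
y = rng.poisson(A @ rng.random(25))
x_hat = os_em(A, y)
```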

  12. Clustering algorithms for Stokes space modulation format recognition

    DEFF Research Database (Denmark)

    Boada, Ricard; Borkowski, Robert; Tafur Monroy, Idelfonso

    2015-01-01

    influences the performance of the detection process, particularly at low signal-to-noise ratios. This paper reports on an extensive study of six different clustering algorithms: k-means, expectation maximization, density-based DBSCAN and OPTICS, spectral clustering and maximum likelihood clustering, used...

  13. PEM-PCA: A Parallel Expectation-Maximization PCA Face Recognition Architecture

    Directory of Open Access Journals (Sweden)

    Kanokmon Rujirakul

    2014-01-01

    Full Text Available Principal component analysis (PCA) has traditionally been used as one of the feature extraction techniques in face recognition systems, yielding high accuracy when only a small number of features is required. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an Expectation-Maximization algorithm to reduce the determinant matrix manipulation, resulting in a reduction of the stages' complexity. To improve the computational time, a novel parallel architecture was employed to exploit the benefits of parallelizing matrix computation during the feature extraction and classification stages, including parallel preprocessing and their combinations, in a so-called Parallel Expectation-Maximization PCA architecture. Compared to traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, leading to high-speed face recognition systems with a speed-up of over nine and three times over PCA and Parallel PCA, respectively.

  14. PEM-PCA: a parallel expectation-maximization PCA face recognition architecture.

    Science.gov (United States)

    Rujirakul, Kanokmon; So-In, Chakchai; Arnonkijpanich, Banchar

    2014-01-01

    Principal component analysis (PCA) has traditionally been used as one of the feature extraction techniques in face recognition systems, yielding high accuracy when only a small number of features is required. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an Expectation-Maximization algorithm to reduce the determinant matrix manipulation, resulting in a reduction of the stages' complexity. To improve the computational time, a novel parallel architecture was employed to exploit the benefits of parallelizing matrix computation during the feature extraction and classification stages, including parallel preprocessing and their combinations, in a so-called Parallel Expectation-Maximization PCA architecture. Compared to traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, leading to high-speed face recognition systems with a speed-up of over nine and three times over PCA and Parallel PCA, respectively.
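
    One EM route to PCA that avoids the explicit covariance eigendecomposition the abstract mentions is the alternating least-squares scheme in the spirit of Roweis's EM algorithm for PCA. A serial sketch follows; the papers' parallel architecture and preprocessing are not reproduced here.

```python
import numpy as np

def em_pca(Y, n_components, n_iters=100):
    """EM for PCA: alternate least-squares estimates of the latent
    coordinates (E-step) and the projection matrix (M-step).

    Y is (n_features x n_samples) and assumed already centered; no
    covariance matrix or eigendecomposition is ever formed.
    """
    d, n = Y.shape
    rng = np.random.default_rng(7)
    W = rng.standard_normal((d, n_components))
    for _ in range(n_iters):
        # E-step: latent coordinates given the current projection W.
        X = np.linalg.solve(W.T @ W, W.T @ Y)
        # M-step: projection W given the latent coordinates X.
        W = Y @ X.T @ np.linalg.inv(X @ X.T)
    # Orthonormalize to obtain the principal directions.
    Q, _ = np.linalg.qr(W)
    return Q

# Example: leading 10 "eigenface-like" directions of 100 random images.
Y = np.random.default_rng(8).standard_normal((40 * 40, 100))
Y -= Y.mean(axis=1, keepdims=True)
components = em_pca(Y, n_components=10)
```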

  15. Runway Operations Planning: A Two-Stage Heuristic Algorithm

    Science.gov (United States)

    Anagnostakis, Ioannis; Clarke, John-Paul

    2003-01-01

    The airport runway is a scarce resource that must be shared by different runway operations (arrivals, departures and runway crossings). Given the possible sequences of runway events, careful Runway Operations Planning (ROP) is required if runway utilization is to be maximized. From the perspective of departures, ROP solutions are aircraft departure schedules developed by optimally allocating runway time for departures, given the time required for arrivals and crossings. In addition to the obvious objective of maximizing throughput, other objectives, such as guaranteeing fairness and minimizing environmental impact, can also be incorporated into the ROP solution, subject to constraints introduced by Air Traffic Control (ATC) procedures. This paper introduces a two-stage heuristic algorithm for solving the ROP problem. In the first stage, sequences of departure class slots and runway crossing slots are generated and ranked based on departure runway throughput under stochastic conditions. In the second stage, the departure class slots are populated with specific flights from the pool of available aircraft by solving an integer program with a Branch & Bound algorithm implementation. Preliminary results from this implementation of the two-stage algorithm on real-world traffic data are presented.

  16. Profit maximization with customer satisfaction control for electric vehicle charging in smart grids

    Directory of Open Access Journals (Sweden)

    Edwin Collado

    2017-05-01

    Full Text Available As the market of electric vehicles is gaining popularity, large-scale commercialized or privately-operated charging stations are expected to play a key role as a technology enabler. In this paper, we study the problem of charging electric vehicles at stations with limited charging machines and power resources. The purpose of this study is to develop a novel profit maximization framework for station operation in both offline and online charging scenarios, under certain customer satisfaction constraints. The main goal is to maximize the profit obtained by the station owner and provide a satisfactory charging service to the customers. The framework includes not only the vehicle scheduling and charging power control, but also the management of user satisfaction factors, which are defined as the percentages of finished charging targets. The profit maximization problem is proved to be NP-complete in both scenarios (NP refers to “nondeterministic polynomial time”), for which two-stage charging strategies are proposed to obtain efficient suboptimal solutions. Competitive analysis is also provided to analyze the performance of the proposed online two-stage charging algorithm against the offline counterpart under non-congested and congested charging scenarios. Finally, the simulation results show that the proposed two-stage charging strategies achieve performance close to that of exhaustive search. Also, the proposed algorithms provide remarkable performance gains compared to the other conventional charging strategies with respect to not only the unified profit, but also other practical interests, such as the computational time, the user satisfaction factor, the power consumption, and the competitive ratio.

  17. Fast algorithm of adaptive Fourier series

    Science.gov (United States)

    Gao, You; Ku, Min; Qian, Tao

    2018-05-01

    Adaptive Fourier decomposition (AFD; precisely, 1-D AFD or Core-AFD) originated with the goal of positive-frequency representations of signals. It achieved that goal and at the same time offered fast decompositions of signals. Several types of AFD then arose. AFD merged with the greedy-algorithm idea and, in particular, motivated the so-called pre-orthogonal greedy algorithm (Pre-OGA), which was proven to be the most efficient greedy algorithm. The cost of the advantages of AFD-type decompositions is, however, high computational complexity, due to the maximal selections of the dictionary parameters involved. The present paper offers a formulation of the 1-D AFD algorithm that builds the FFT algorithm into it. Accordingly, the algorithm complexity is reduced from the original $\mathcal{O}(M N^2)$ to $\mathcal{O}(M N\log_2 N)$, where $N$ denotes the number of the discretization points on the unit circle and $M$ denotes the number of points in $[0,1)$. This greatly enhances the applicability of AFD. Experiments are carried out to show the high efficiency of the proposed algorithm.
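
    The saving described above comes from pushing many inner-product evaluations through an FFT. As a hedged illustration of that generic trick, not of the paper's AFD implementation, the sketch below evaluates all N circular cross-correlation lags at once in O(N log N) rather than O(N^2):

        import numpy as np

        def all_lags_via_fft(x, y):
            # All N circular cross-correlation values at once:
            # result[k] = sum_n x[n] * y[(n - k) mod N].
            # Direct evaluation costs O(N^2); the FFT route O(N log N).
            return np.fft.ifft(np.fft.fft(x) * np.conj(np.fft.fft(y)))

        # quick check against the direct O(N^2) computation
        rng = np.random.default_rng(0)
        x, y = rng.standard_normal(8), rng.standard_normal(8)
        direct = np.array([np.dot(x, np.roll(y, k)) for k in range(8)])
        assert np.allclose(all_lags_via_fft(x, y).real, direct)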

  18. A Quantitative Three-Dimensional Image Analysis Tool for Maximal Acquisition of Spatial Heterogeneity Data.

    Science.gov (United States)

    Allenby, Mark C; Misener, Ruth; Panoskaltsis, Nicki; Mantalaris, Athanasios

    2017-02-01

    Three-dimensional (3D) imaging techniques provide spatial insight into environmental and cellular interactions and are implemented in various fields, including tissue engineering, but have been restricted by limited quantification tools that misrepresent or underutilize the cellular phenomena captured. This study develops image postprocessing algorithms pairing complex Euclidean metrics with Monte Carlo simulations to quantitatively assess cell and microenvironment spatial distributions while utilizing, for the first time, the entire 3D image captured. Although current methods only analyze a central fraction of presented confocal microscopy images, the proposed algorithms can utilize 210% more cells to calculate 3D spatial distributions that can span a 23-fold longer distance. These algorithms seek to leverage the high sample cost of 3D tissue imaging techniques by extracting maximal quantitative data throughout the captured image.

  19. Research on Improved NSGA-II Algorithm and Its Application in Emergency Management

    Directory of Open Access Journals (Sweden)

    Xi Fang

    2018-01-01

    Full Text Available This paper constructs a dynamic multiobjective location model; three objectives are considered: the first objective maximizes the total utility of relief supplies, the second objective minimizes the number of temporary facilities needed to operate, and the third objective maximizes the satisfaction of all demand points. We propose an improved NSGA-II to solve the optimization problem. The computational experiments are divided into two sections: in the first procedure, a numerical experiment is constructed from the classical functions ZDT1, ZDT2, and DTLZ2; the results show that the proposed algorithm generates the exact Pareto front, and that its convergence and uniformity are better than those of NSGA-II and MOEA/D. In the second procedure, a simulation experiment is constructed from a case in emergency management; the results show that the proposed algorithm is more reasonable than the traditional algorithms NSGA-II and MOEA/D in terms of the three objectives. This shows that the improved NSGA-II algorithm proposed in this paper is well suited to sudden-disaster response and emergency management applications.
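
    For reference, the building block shared by NSGA-II and the improved variant above is fast non-dominated sorting. A minimal sketch of the standard procedure (stated for minimization; names are illustrative, and this is not the paper's improved variant):

        import numpy as np

        def fast_nondominated_sort(F):
            # Rank solutions into Pareto fronts. F is an (n, m) array of
            # objective values; returns a list of fronts of solution indices.
            n = len(F)
            def dominates(a, b):
                return np.all(F[a] <= F[b]) and np.any(F[a] < F[b])
            S = [[] for _ in range(n)]    # j in S[i]: solution i dominates j
            counts = [0] * n              # number of solutions dominating i
            for i in range(n):
                for j in range(n):
                    if i == j:
                        continue
                    if dominates(i, j):
                        S[i].append(j)
                    elif dominates(j, i):
                        counts[i] += 1
            fronts = [[i for i in range(n) if counts[i] == 0]]
            while fronts[-1]:
                nxt = []
                for i in fronts[-1]:
                    for j in S[i]:
                        counts[j] -= 1
                        if counts[j] == 0:
                            nxt.append(j)
                fronts.append(nxt)
            return fronts[:-1]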

  20. Entropy maximization

    Indian Academy of Sciences (India)

    Abstract. It is shown that (i) every probability density is the unique maximizer of relative entropy in an appropriate class and (ii) in the class of all pdfs f that satisfy ∫ f h_i dμ = λ_i for i = 1, 2, …, k, the maximizer of entropy is an f_0 that is proportional to exp(∑ c_i h_i) for some choice of c_i. An extension of this to a continuum of ...
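
    A sketch of how result (ii) arises, by the standard Lagrange-multiplier argument (textbook derivation, not reproduced from the paper):

        % Maximize H(f) = -\int f \log f \, d\mu subject to
        % \int f h_i \, d\mu = \lambda_i (i = 1, \dots, k) and \int f \, d\mu = 1.
        % Stationarity of the Lagrangian
        %   -\int f \log f \, d\mu
        %     + \sum_i c_i \bigl( \int f h_i \, d\mu - \lambda_i \bigr)
        %     + c_0 \bigl( \int f \, d\mu - 1 \bigr)
        % in f gives -\log f - 1 + \sum_i c_i h_i + c_0 = 0, hence
        f_0 \;\propto\; \exp\Bigl( \sum_{i=1}^{k} c_i h_i \Bigr).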

  1. Algorithm comparison for schedule optimization in MR fingerprinting.

    Science.gov (United States)

    Cohen, Ouri; Rosen, Matthew S

    2017-09-01

    In MR Fingerprinting, the flip angles and repetition times are chosen according to a pseudorandom schedule. In previous work, we have shown that maximizing the discrimination between different tissue types by optimizing the acquisition schedule allows reductions in the number of measurements required. The ideal optimization algorithm for this application remains unknown, however. In this work we examine several different optimization algorithms to determine the one best suited for optimizing MR Fingerprinting acquisition schedules. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. An Evolutionary Algorithm to Mine High-Utility Itemsets

    Directory of Open Access Journals (Sweden)

    Jerry Chun-Wei Lin

    2015-01-01

    Full Text Available High-utility itemset mining (HUIM) has been a critical issue in recent years since, unlike frequent itemset mining (FIM) of association rules (ARs), it considers both quantity and profit factors to reveal profitable products. In this paper, an evolutionary algorithm is presented to efficiently mine high-utility itemsets (HUIs) based on binary particle swarm optimization. A maximal pattern tree (MP-tree) structure is further designed to handle the combinatorial problem in the evolution process. Substantial experiments on real-life datasets show that the proposed binary PSO-based algorithm achieves better results than the state-of-the-art GA-based algorithm.
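
    For context, the discrete PSO underlying the proposed miner updates a 0/1 membership vector through a sigmoid transfer function. A generic sketch of one such iteration (standard binary PSO, without the paper's MP-tree; names and constants are illustrative):

        import numpy as np

        def bpso_step(X, V, pbest, gbest, w=0.7, c1=1.5, c2=1.5, rng=None):
            # One iteration of Kennedy & Eberhart's binary PSO. X is an
            # (n_particles, n_items) 0/1 matrix; a 1 means the item is in
            # that particle's candidate itemset. pbest/gbest are the
            # personal/global best positions found so far.
            rng = rng or np.random.default_rng()
            r1, r2 = rng.random(X.shape), rng.random(X.shape)
            V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)
            prob = 1.0 / (1.0 + np.exp(-V))   # sigmoid transfer function
            X = (rng.random(X.shape) < prob).astype(int)
            return X, V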

  3. Effective noise-suppressed and artifact-reduced reconstruction of SPECT data using a preconditioned alternating projection algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Li, Si; Xu, Yuesheng, E-mail: yxu06@syr.edu [Guangdong Provincial Key Laboratory of Computational Science, School of Mathematics and Computational Sciences, Sun Yat-sen University, Guangzhou 510275 (China); Zhang, Jiahan; Lipson, Edward [Department of Physics, Syracuse University, Syracuse, New York 13244 (United States); Krol, Andrzej; Feiglin, David [Department of Radiology, SUNY Upstate Medical University, Syracuse, New York 13210 (United States); Schmidtlein, C. Ross [Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, New York 10065 (United States); Vogelsang, Levon [Carestream Health, Rochester, New York 14608 (United States); Shen, Lixin [Guangdong Provincial Key Laboratory of Computational Science, School of Mathematics and Computational Sciences, Sun Yat-sen University, Guangzhou 510275, China and Department of Mathematics, Syracuse University, Syracuse, New York 13244 (United States)

    2015-08-15

    Purpose: The authors have recently developed a preconditioned alternating projection algorithm (PAPA) with total variation (TV) regularizer for solving the penalized-likelihood optimization model for single-photon emission computed tomography (SPECT) reconstruction. This algorithm belongs to a novel class of fixed-point proximity methods. The goal of this work is to investigate how PAPA performs while dealing with realistic noisy SPECT data, to compare its performance with more conventional methods, and to address issues with TV artifacts by proposing a novel form of the algorithm invoking high-order TV regularization, denoted as HOTV-PAPA, which has been explored and studied extensively in the present work. Methods: Using Monte Carlo methods, the authors simulate noisy SPECT data from two water cylinders; one contains lumpy “warm” background and “hot” lesions of various sizes with Gaussian activity distribution, and the other is a reference cylinder without hot lesions. The authors study the performance of HOTV-PAPA and compare it with PAPA using first-order TV regularization (TV-PAPA), the Panin–Zeng–Gullberg one-step-late method with TV regularization (TV-OSL), and an expectation–maximization algorithm with Gaussian postfilter (GPF-EM). The authors select penalty-weights (hyperparameters) by qualitatively balancing the trade-off between resolution and image noise separately for TV-PAPA and TV-OSL. However, the authors arrived at the same penalty-weight value for both of them. The authors set the first penalty-weight in HOTV-PAPA equal to the optimal penalty-weight found for TV-PAPA. The second penalty-weight needed for HOTV-PAPA is tuned by balancing resolution and the severity of staircase artifacts. The authors adjust the Gaussian postfilter to approximately match the local point spread function of GPF-EM and HOTV-PAPA. The authors examine hot lesion detectability, study local spatial resolution, analyze background noise properties, estimate mean

  4. Effective noise-suppressed and artifact-reduced reconstruction of SPECT data using a preconditioned alternating projection algorithm

    International Nuclear Information System (INIS)

    Li, Si; Xu, Yuesheng; Zhang, Jiahan; Lipson, Edward; Krol, Andrzej; Feiglin, David; Schmidtlein, C. Ross; Vogelsang, Levon; Shen, Lixin

    2015-01-01

    Purpose: The authors have recently developed a preconditioned alternating projection algorithm (PAPA) with total variation (TV) regularizer for solving the penalized-likelihood optimization model for single-photon emission computed tomography (SPECT) reconstruction. This algorithm belongs to a novel class of fixed-point proximity methods. The goal of this work is to investigate how PAPA performs while dealing with realistic noisy SPECT data, to compare its performance with more conventional methods, and to address issues with TV artifacts by proposing a novel form of the algorithm invoking high-order TV regularization, denoted as HOTV-PAPA, which has been explored and studied extensively in the present work. Methods: Using Monte Carlo methods, the authors simulate noisy SPECT data from two water cylinders; one contains lumpy “warm” background and “hot” lesions of various sizes with Gaussian activity distribution, and the other is a reference cylinder without hot lesions. The authors study the performance of HOTV-PAPA and compare it with PAPA using first-order TV regularization (TV-PAPA), the Panin–Zeng–Gullberg one-step-late method with TV regularization (TV-OSL), and an expectation–maximization algorithm with Gaussian postfilter (GPF-EM). The authors select penalty-weights (hyperparameters) by qualitatively balancing the trade-off between resolution and image noise separately for TV-PAPA and TV-OSL. However, the authors arrived at the same penalty-weight value for both of them. The authors set the first penalty-weight in HOTV-PAPA equal to the optimal penalty-weight found for TV-PAPA. The second penalty-weight needed for HOTV-PAPA is tuned by balancing resolution and the severity of staircase artifacts. The authors adjust the Gaussian postfilter to approximately match the local point spread function of GPF-EM and HOTV-PAPA. The authors examine hot lesion detectability, study local spatial resolution, analyze background noise properties, estimate mean

  5. A polynomial time algorithm for checking regularity of totally normed process algebra

    NARCIS (Netherlands)

    Yang, F.; Huang, H.

    2015-01-01

    A polynomial algorithm for the regularity problem of weak and branching bisimilarity on totally normed process algebra (PA) processes is given. Its time complexity is O(n³ + mn), where n is the number of transition rules and m is the maximal length of the rules. The algorithm works for

  6. Multiuser TOA Estimation Algorithm in DS-CDMA Sparse Channel for Radiolocation

    Science.gov (United States)

    Kim, Sunwoo

    This letter considers multiuser time delay estimation in a sparse channel environment for radiolocation. The generalized successive interference cancellation (GSIC) algorithm is used to eliminate the multiple access interference (MAI). To adapt GSIC to sparse channels the alternating maximization (AM) algorithm is considered, and the continuous time delay of each path is estimated without requiring a priori known data sequences.

  7. A maximum-likelihood reconstruction algorithm for tomographic gamma-ray nondestructive assay

    International Nuclear Information System (INIS)

    Prettyman, T.H.; Estep, R.J.; Cole, R.A.; Sheppard, G.A.

    1994-01-01

    A new tomographic reconstruction algorithm for nondestructive assay with high resolution gamma-ray spectroscopy (HRGS) is presented. The reconstruction problem is formulated using a maximum-likelihood approach in which the statistical structure of both the gross and continuum measurements used to determine the full-energy response in HRGS is precisely modeled. An accelerated expectation-maximization algorithm is used to determine the optimal solution. The algorithm is applied to safeguards and environmental assays of large samples (for example, 55-gal. drums) in which high continuum levels caused by Compton scattering are routinely encountered. Details of the implementation of the algorithm and a comparative study of the algorithm's performance are presented
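
    The accelerated EM iteration referenced above builds on the classical MLEM update. A minimal sketch of that textbook update (not the authors' accelerated, HRGS-specific implementation; names are illustrative):

        import numpy as np

        def mlem(A, y, n_iter=100):
            # Textbook (unaccelerated) MLEM update for emission data:
            # x <- x * A^T(y / Ax) / A^T 1, with A the (m, n) system
            # matrix and y the (m,) measured counts.
            x = np.ones(A.shape[1])
            sens = A.sum(axis=0)            # sensitivity image A^T 1
            sens[sens == 0] = 1e-12
            for _ in range(n_iter):
                proj = A @ x
                proj[proj == 0] = 1e-12     # avoid division by zero
                x *= (A.T @ (y / proj)) / sens
            return x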

  8. Entropy Maximization

    Indian Academy of Sciences (India)

    It is shown that (i) every probability density is the unique maximizer of relative entropy in an appropriate class and (ii) in the class of all pdfs f that satisfy ∫ f h_i dμ = λ_i for i = 1, 2, …, k, the maximizer of entropy is an f_0 that is proportional to exp(∑ c_i h_i) for some choice of c_i. An extension of this to a continuum of ...

  9. Algorithmic test design using classical item parameters

    NARCIS (Netherlands)

    van der Linden, Willem J.; Adema, Jos J.

    Two optimization models for the construction of tests with a maximal value of coefficient alpha are given. Both models have a linear form and can be solved by using a branch-and-bound algorithm. The first model assumes an item bank calibrated under the Rasch model and can be used, for instance,

  10. Missing value imputation algorithm based on distance maximization and missing data clustering

    Institute of Scientific and Technical Information of China (English)

    赵星; 王逊; 黄树成

    2018-01-01

    By improving the missing-value filling algorithm based on K-means clustering, this paper proposes a filling algorithm based on distance maximization and missing-data clustering. First, to remove the original algorithm's requirement that the number of clusters be entered in advance, an improved K-means clustering algorithm is designed: it determines the cluster centers from the maximum distance between data points, generates the number of clusters automatically, and improves the clustering quality. Second, the clustering distance function is improved to a partial distance measure, so that records containing missing values can themselves be clustered, which simplifies the steps of the original filling algorithm. Experiments on the STUDENT ALCOHOL CONSUMPTION data set show that the proposed algorithm improves efficiency while effectively filling the missing data.
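
    The partial distance measure mentioned above is a standard trick for comparing records with missing entries. A minimal sketch under the usual convention that NaN marks a missing value (names are illustrative):

        import numpy as np

        def partial_distance(a, b):
            # Euclidean distance over coordinates observed in both records,
            # rescaled by the fraction of usable coordinates.
            mask = ~(np.isnan(a) | np.isnan(b))
            if not mask.any():
                return np.inf               # no comparable coordinates
            d = a[mask] - b[mask]
            return np.sqrt(np.dot(d, d) * len(a) / mask.sum())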

  11. The particle swarm optimization algorithm applied to nuclear systems surveillance test planning; Otimizacao aplicada ao planejamento de politicas de testes em sistemas nucleares por enxame de particulas

    Energy Technology Data Exchange (ETDEWEB)

    Siqueira, Newton Norat

    2006-12-15

    This work shows a new approach to solving availability maximization problems in electromechanical systems under periodic preventive scheduled tests. The approach uses Particle Swarm Optimization (PSO), an optimization tool developed by Kennedy and Eberhart (2001), integrated with a probabilistic safety analysis model. Two maintenance optimization problems are solved with the proposed technique: the first is a hypothetical electromechanical configuration and the second is a real case from a nuclear power plant (emergency diesel generators). For both problems, PSO is compared to a genetic algorithm (GA). In the experiments performed, PSO obtained results comparable to, or even slightly better than, those obtained by GA. Moreover, the PSO algorithm is simpler and converges faster, indicating that PSO is a good alternative for solving this kind of problem. (author)

  12. Antiescravismo e epidemia: "O tráfico dos negros considerado como a causa da febre amarela", de Mathieu François Maxime Audouard, e o Rio de Janeiro em 1850 Antislavery and epidemic Mathieu François Maxime Audouard's "O tráfico dos negros considerado como a causa da febre amarela" and the city of Rio de Janeiro in 1850

    Directory of Open Access Journals (Sweden)

    Kaori Kodama

    2009-06-01

    Full Text Available The article "O tráfico dos negros considerado como a causa da febre amarela" [The Negro slave trade considered as the cause of yellow fever], by French physician Mathieu François Maxime Audouard (1776-1856), was published in 1850 in the newspaper O Philantropo, an organ of anti-slave-trade propaganda that circulated in Rio de Janeiro from 1849 to 1852 and counted a number of physicians among its members. Translated from the original and published in the context of the yellow fever epidemic in the city, the text affords an opportunity to reflect on the positions about slavery held by Brazilian physicians at the moment the cessation of the slave trade was promulgated in Brazil.

  13. First results with two light flavours of quarks with maximally twisted mass

    International Nuclear Information System (INIS)

    Jansen, K.; Urbach, C.

    2006-10-01

    We report on first results of an ongoing effort to simulate lattice QCD with two degenerate flavours of quarks by means of the twisted mass formulation tuned to maximal twist. By utilising recent improvements of the HMC algorithm, pseudo-scalar masses well below 300 MeV are simulated on volumes L³·T with T=2L and L>2 fm, and at small values of the lattice spacing a. Prospects for simulations with N_f=2+1+1 flavours are discussed. (orig.)

  14. Maximally incompatible quantum observables

    Energy Technology Data Exchange (ETDEWEB)

    Heinosaari, Teiko, E-mail: teiko.heinosaari@utu.fi [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turku (Finland); Schultz, Jussi, E-mail: jussi.schultz@gmail.com [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Toigo, Alessandro, E-mail: alessandro.toigo@polimi.it [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy); Ziman, Mario, E-mail: ziman@savba.sk [RCQI, Institute of Physics, Slovak Academy of Sciences, Dúbravská cesta 9, 84511 Bratislava (Slovakia); Faculty of Informatics, Masaryk University, Botanická 68a, 60200 Brno (Czech Republic)

    2014-05-01

    The existence of maximally incompatible quantum observables in the sense of a minimal joint measurability region is investigated. Employing the universal quantum cloning device it is argued that only infinite dimensional quantum systems can accommodate maximal incompatibility. It is then shown that two of the most common pairs of complementary observables (position and momentum; number and phase) are maximally incompatible.

  15. Maximally incompatible quantum observables

    International Nuclear Information System (INIS)

    Heinosaari, Teiko; Schultz, Jussi; Toigo, Alessandro; Ziman, Mario

    2014-01-01

    The existence of maximally incompatible quantum observables in the sense of a minimal joint measurability region is investigated. Employing the universal quantum cloning device it is argued that only infinite dimensional quantum systems can accommodate maximal incompatibility. It is then shown that two of the most common pairs of complementary observables (position and momentum; number and phase) are maximally incompatible.

  16. Loss-minimal Algorithmic Trading Based on Levy Processes

    Directory of Open Access Journals (Sweden)

    Farhad Kia

    2014-08-01

    Full Text Available In this paper we optimize portfolios assuming that the value of the portfolio follows a Lévy process. First we identify the parameters of the underlying Lévy process, and then portfolio optimization is performed by maximizing the probability of positive return. The method has been tested by extensive performance analysis on Forex and S&P 500 historical time series. The proposed trading algorithm has achieved a 4.9% yearly return on average without leverage, which proves its applicability to algorithmic trading.

  17. Survival associated pathway identification with group Lp penalized global AUC maximization

    Directory of Open Access Journals (Sweden)

    Liu Zhenqiu

    2010-08-01

    Full Text Available Abstract It has been demonstrated that genes in a cell do not act independently. They interact with one another to complete certain biological processes or to implement certain molecular functions. How to incorporate biological pathways or functional groups into the model and identify survival-associated gene pathways is still a challenging problem. In this paper, we propose a novel iterative gradient-based method for survival analysis with group Lp penalized global AUC summary maximization. Unlike LASSO, the Lp (p < 1) penalty is non-convex. We first extend Lp for individual gene identification to a group Lp penalty for pathway selection, and then develop a novel iterative gradient algorithm for penalized global AUC summary maximization (IGGAUCS). This method incorporates genetic pathways into global AUC summary maximization and identifies survival-associated pathways instead of individual genes. The tuning parameters are determined using 10-fold cross-validation with training data only. The prediction performance is evaluated using test data. We apply the proposed method to survival outcome analysis with gene expression profiles and identify multiple pathways simultaneously. Experimental results with simulation and gene expression data demonstrate that the proposed procedures can be used for identifying important biological pathways that are related to survival phenotype and for building a parsimonious model for predicting survival times.

  18. Genetic algorithm based on qubits and quantum gates

    International Nuclear Information System (INIS)

    Silva, Joao Batista Rosa; Ramos, Rubens Viana

    2003-01-01

    Full text: The genetic algorithm, a computational technique based on the evolution of the species, in which a possible solution of the problem is coded in a binary string called a chromosome, has been used successfully in several kinds of problems where the search for a minimal or a maximal value is necessary, even when local minima are present. A natural generalization of a binary string is a qubit string. Hence, it is possible to use the structure of a genetic algorithm with a sequence of qubits as a chromosome, using quantum operations in the reproduction step, in order to find the best solution to some problems of quantum information. For example, given a unitary matrix U, what is the pair of qubits that, when applied at the input, provides the output state with maximal entanglement? To solve this problem, a population of chromosomes of two qubits was created. The crossover was performed by applying the quantum gates CNOT and SWAP to the pair of qubits, while the mutation was performed by applying the quantum gates Hadamard, Z and NOT to a single qubit. The result was compared with a classical genetic algorithm used to solve the same problem. A hundred simulations using the same U matrix were performed. Both algorithms, hereafter named CGA (classical) and QGA (using qubits), reached good results close to 1; however, the number of generations needed to find the best result was lower for the QGA. Another problem where the QGA can be useful is the calculation of the relative entropy of entanglement. We have tested our algorithm using 100 pure states chosen randomly. The stop criterion used was an error lower than 0.01. The main advantages of the QGA are its good precision, robustness and very easy implementation. The main disadvantage is its low speed, as happens for all kinds of genetic algorithms. (author)

  19. Unsupervised learning of mixture models based on swarm intelligence and neural networks with optimal completion using incomplete data

    Directory of Open Access Journals (Sweden)

    Ahmed R. Abas

    2012-07-01

    Full Text Available In this paper, a new algorithm is presented for unsupervised learning of finite mixture models (FMMs) from data sets with missing values. This algorithm overcomes the local-optima problem of the Expectation-Maximization (EM) algorithm by integrating EM with Particle Swarm Optimization (PSO). In addition, the proposed algorithm overcomes the problem of biased estimation due to overlapping clusters when estimating missing values in the input data set, by integrating locally-tuned general regression neural networks with the Optimal Completion Strategy (OCS). A comparison study shows the superiority of the proposed algorithm over other algorithms commonly used in the literature for unsupervised learning of FMM parameters, yielding minimum misclassification errors when clustering incomplete data sets generated from overlapping clusters of widely different sizes.
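
    As a baseline for the record above: one plain EM iteration for a Gaussian mixture on complete data, which is the step the proposed method wraps with PSO (to escape local optima) and GRNN/OCS (to complete missing values first). A hedged sketch assuming NumPy/SciPy; names are illustrative:

        import numpy as np
        from scipy.stats import multivariate_normal

        def em_step(X, weights, means, covs):
            # X: (n, d) data; weights: (k,); means: (k, d); covs: list of (d, d).
            n, k = len(X), len(weights)
            resp = np.empty((n, k))
            # E-step: responsibility of each component for each point
            for j in range(k):
                resp[:, j] = weights[j] * multivariate_normal.pdf(X, means[j], covs[j])
            resp /= resp.sum(axis=1, keepdims=True)
            # M-step: re-estimate parameters from the responsibilities
            Nk = resp.sum(axis=0)
            weights = Nk / n
            means = (resp.T @ X) / Nk[:, None]
            covs = [(resp[:, j, None] * (X - means[j])).T @ (X - means[j]) / Nk[j]
                    for j in range(k)]
            return weights, means, covs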

  20. Multi-sources model and control algorithm of an energy management system for light electric vehicles

    International Nuclear Information System (INIS)

    Hannan, M.A.; Azidin, F.A.; Mohamed, A.

    2012-01-01

    Highlights: ► An energy management system (EMS) is developed for a scooter under normal and heavy power load conditions. ► The battery, FC, SC, EMS, DC machine and vehicle dynamics are modeled and designed for the system. ► State-based logic control algorithms provide an efficient and feasible multi-source EMS for light electric vehicles. ► Vehicle’s speed and power are closely matched with the ECE-47 driving cycle under normal and heavy load conditions. ► Sources of energy changeover occurred at 50% of the battery state of charge level in heavy load conditions. - Abstract: This paper presents the multi-source energy models and rule-based feedback control algorithm of an energy management system (EMS) for light electric vehicles (LEVs), i.e., scooters. The multiple sources of energy, such as a battery, fuel cell (FC) and super-capacitor (SC), the EMS and power controller, the DC machine and the vehicle dynamics are designed and modeled using MATLAB/SIMULINK. The developed control strategies continuously support the EMS of the multiple sources of energy for a scooter under normal and heavy power load conditions. The performance of the proposed system is analyzed and compared with that of the ECE-47 test drive cycle in terms of vehicle speed and load power. The results show that the designed vehicle’s speed and load power closely match those of the ECE-47 test driving cycle under normal and heavy load conditions. This study’s results suggest that the proposed control algorithm provides an efficient and feasible EMS for LEVs.

  1. Maximal combustion temperature estimation

    International Nuclear Information System (INIS)

    Golodova, E; Shchepakina, E

    2006-01-01

    This work is concerned with the phenomenon of delayed loss of stability and the estimation of the maximal temperature of safe combustion. Using the qualitative theory of singular perturbations and canard techniques we determine the maximal temperature on the trajectories located in the transition region between the slow combustion regime and the explosive one. This approach is used to estimate the maximal temperature of safe combustion in multi-phase combustion models

  2. Developing maximal neuromuscular power: part 2 - training considerations for improving maximal power production.

    Science.gov (United States)

    Cormie, Prue; McGuigan, Michael R; Newton, Robert U

    2011-02-01

    This series of reviews focuses on the most important neuromuscular function in many sport performances: the ability to generate maximal muscular power. Part 1, published in an earlier issue of Sports Medicine, focused on the factors that affect maximal power production while part 2 explores the practical application of these findings by reviewing the scientific literature relevant to the development of training programmes that most effectively enhance maximal power production. The ability to generate maximal power during complex motor skills is of paramount importance to successful athletic performance across many sports. A crucial issue faced by scientists and coaches is the development of effective and efficient training programmes that improve maximal power production in dynamic, multi-joint movements. Such training is referred to as 'power training' for the purposes of this review. Although further research is required in order to gain a deeper understanding of the optimal training techniques for maximizing power in complex, sports-specific movements and the precise mechanisms underlying adaptation, several key conclusions can be drawn from this review. First, a fundamental relationship exists between strength and power, which dictates that an individual cannot possess a high level of power without first being relatively strong. Thus, enhancing and maintaining maximal strength is essential when considering the long-term development of power. Second, consideration of movement pattern, load and velocity specificity is essential when designing power training programmes. Ballistic, plyometric and weightlifting exercises can be used effectively as primary exercises within a power training programme that enhances maximal power. The loads applied to these exercises will depend on the specific requirements of each particular sport and the type of movement being trained. The use of ballistic exercises with loads ranging from 0% to 50% of one-repetition maximum (1RM) and

  3. Energy Aware Clustering Algorithms for Wireless Sensor Networks

    Science.gov (United States)

    Rakhshan, Noushin; Rafsanjani, Marjan Kuchaki; Liu, Chenglian

    2011-09-01

    The sensor nodes deployed in wireless sensor networks (WSNs) are extremely power constrained, so maximizing the lifetime of the entire networks is mainly considered in the design. In wireless sensor networks, hierarchical network structures have the advantage of providing scalable and energy efficient solutions. In this paper, we investigate different clustering algorithms for WSNs and also compare these clustering algorithms based on metrics such as clustering distribution, cluster's load balancing, Cluster Head's (CH) selection strategy, CH's role rotation, node mobility, clusters overlapping, intra-cluster communications, reliability, security and location awareness.

  4. Non-negative matrix factorization by maximizing correntropy for cancer clustering

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Xiaolei; Gao, Xin

    2013-01-01

    Background: Non-negative matrix factorization (NMF) has been shown to be a powerful tool for clustering gene expression data, which are widely used to classify cancers. NMF aims to find two non-negative matrices whose product closely approximates the original matrix. Traditional NMF methods minimize either the l2 norm or the Kullback-Leibler distance between the product of the two matrices and the original matrix. Correntropy was recently shown to be an effective similarity measurement due to its stability to outliers or noise. Results: We propose a maximum correntropy criterion (MCC)-based NMF method (NMF-MCC) for gene expression data-based cancer clustering. Instead of minimizing the l2 norm or the Kullback-Leibler distance, NMF-MCC maximizes the correntropy between the product of the two matrices and the original matrix. The optimization problem can be solved by an expectation conditional maximization algorithm. Conclusions: Extensive experiments on six cancer benchmark sets demonstrate that the proposed method is significantly more accurate than the state-of-the-art methods in cancer clustering. 2013 Wang et al.; licensee BioMed Central Ltd.
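
    The "traditional" l2-norm NMF that the abstract contrasts with NMF-MCC is usually fitted with the Lee-Seung multiplicative updates. A minimal sketch of that baseline (not the authors' expectation conditional maximization algorithm; names and defaults are illustrative):

        import numpy as np

        def nmf_l2(V, k, n_iter=200, eps=1e-9):
            # Lee-Seung multiplicative updates for the Frobenius objective
            # ||V - WH||^2. V: (m, n) non-negative expression matrix.
            rng = np.random.default_rng(0)
            m, n = V.shape
            W = rng.random((m, k)) + eps
            H = rng.random((k, n)) + eps
            for _ in range(n_iter):
                H *= (W.T @ V) / (W.T @ W @ H + eps)
                W *= (V @ H.T) / (W @ H @ H.T + eps)
            return W, H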

  5. Non-negative matrix factorization by maximizing correntropy for cancer clustering

    KAUST Repository

    Wang, Jim Jing-Yan

    2013-03-24

    Background: Non-negative matrix factorization (NMF) has been shown to be a powerful tool for clustering gene expression data, which are widely used to classify cancers. NMF aims to find two non-negative matrices whose product closely approximates the original matrix. Traditional NMF methods minimize either the l2 norm or the Kullback-Leibler distance between the product of the two matrices and the original matrix. Correntropy was recently shown to be an effective similarity measurement due to its stability to outliers or noise. Results: We propose a maximum correntropy criterion (MCC)-based NMF method (NMF-MCC) for gene expression data-based cancer clustering. Instead of minimizing the l2 norm or the Kullback-Leibler distance, NMF-MCC maximizes the correntropy between the product of the two matrices and the original matrix. The optimization problem can be solved by an expectation conditional maximization algorithm. Conclusions: Extensive experiments on six cancer benchmark sets demonstrate that the proposed method is significantly more accurate than the state-of-the-art methods in cancer clustering. 2013 Wang et al.; licensee BioMed Central Ltd.

  6. Computational performance of a projection and rescaling algorithm

    OpenAIRE

    Pena, Javier; Soheili, Negar

    2018-01-01

    This paper documents a computational implementation of a projection and rescaling algorithm for finding most interior solutions to the pair of feasibility problems: find $x \in L \cap \mathbb{R}^n_{+}$ and find $\hat x \in L^\perp \cap \mathbb{R}^n_{+}$, where $L$ denotes a linear subspace in $\mathbb{R}^n$ and $L^\perp$ denotes its orthogonal complement. The projection and rescaling algorithm is a recently developed method that combines a ...

  7. Techniques to maximize software reliability in radiation fields

    International Nuclear Information System (INIS)

    Eichhorn, G.; Piercey, R.B.

    1986-01-01

    Microprocessor system failures due to memory corruption by single event upsets (SEUs) and/or latch-up in RAM or ROM memory are common in environments with high radiation flux. Traditional methods to harden microcomputer systems against SEUs and memory latch-up have usually involved expensive large-scale hardware redundancy. Such systems offer higher reliability, but they tend to be more complex and non-standard. At the Space Astronomy Laboratory the authors have developed general programming techniques for producing software which is resistant to such memory failures. These techniques, which may be applied to standard off-the-shelf hardware as well as custom designs, include an implementation of the Maximally Redundant Software (MRS) model, error detection algorithms, and memory verification and management

  8. A Simulated Annealing method to solve a generalized maximal covering location problem

    Directory of Open Access Journals (Sweden)

    M. Saeed Jabalameli

    2011-04-01

    Full Text Available The maximal covering location problem (MCLP) seeks to locate a predefined number of facilities in order to maximize the number of covered demand points. In a classical sense, MCLP has three main implicit assumptions: all-or-nothing coverage, individual coverage, and a fixed coverage radius. By relaxing these assumptions, three classes of modelling formulations have been developed: the gradual cover models, the cooperative cover models, and the variable radius models. In this paper, we develop a special form of MCLP which combines the characteristics of gradual cover models, cooperative cover models, and variable radius models. The proposed problem has many applications, such as locating cell phone towers. The model is formulated as a mixed-integer non-linear program (MINLP). In addition, a simulated annealing algorithm is used to solve the resulting problem, and the performance of the proposed method is evaluated with a set of randomly generated problems.
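
    A generic simulated-annealing skeleton of the kind applied to this MCLP variant: energy(x) could be, e.g., the negative covered demand of a facility set x and neighbor(x) could relocate one facility. All names and constants here are illustrative assumptions, not the paper's implementation:

        import math, random

        def anneal(init, energy, neighbor, t0=1.0, cooling=0.995, n_iter=10000):
            x, e = init, energy(init)
            best, best_e = x, e
            t = t0
            for _ in range(n_iter):
                y = neighbor(x)
                e_y = energy(y)
                # always accept improvements; accept uphill moves with
                # Boltzmann probability exp(-delta / t)
                if e_y < e or random.random() < math.exp(-(e_y - e) / t):
                    x, e = y, e_y
                if e < best_e:
                    best, best_e = x, e
                t *= cooling                # geometric cooling schedule
            return best, best_e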

  9. Identifying multiple influential spreaders by a heuristic clustering algorithm

    International Nuclear Information System (INIS)

    Bao, Zhong-Kui; Liu, Jian-Guo; Zhang, Hai-Feng

    2017-01-01

    The problem of influence maximization in social networks has attracted much attention. However, traditional centrality indices are suited to the case where a single spreader is chosen as the spreading source. Often, the spreading process is initiated by simultaneously choosing multiple nodes as the spreading sources. In this situation, choosing the top-ranked nodes as the multiple spreaders is not an optimal strategy, since the chosen nodes are not sufficiently scattered across the network. The ideal situation for the multiple-spreaders case is therefore that the spreaders are not only influential but also dispersively distributed in the network, yet it is difficult to meet both conditions together. In this paper, we propose a heuristic clustering (HC) algorithm based on a similarity index to classify nodes into different clusters; the center nodes of the clusters are then chosen as the multiple spreaders. The HC algorithm not only ensures that the multiple spreaders are dispersively distributed in the network but also avoids selecting “negligible” nodes. Compared with traditional methods, our experimental results on synthetic and real networks indicate that the performance of the HC method on influence maximization is significantly better. - Highlights: • A heuristic clustering algorithm is proposed to identify the multiple influential spreaders in complex networks. • The algorithm not only guarantees that the selected spreaders are sufficiently scattered but also avoids selecting “insignificant” nodes. • The performance of our algorithm is generally better than that of other methods, on both real and synthetic networks.

  10. Identifying multiple influential spreaders by a heuristic clustering algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Bao, Zhong-Kui [School of Mathematical Science, Anhui University, Hefei 230601 (China); Liu, Jian-Guo [Data Science and Cloud Service Research Center, Shanghai University of Finance and Economics, Shanghai, 200133 (China); Zhang, Hai-Feng, E-mail: haifengzhang1978@gmail.com [School of Mathematical Science, Anhui University, Hefei 230601 (China); Department of Communication Engineering, North University of China, Taiyuan, Shan' xi 030051 (China)

    2017-03-18

    The problem of influence maximization in social networks has attracted much attention. However, traditional centrality indices are suited to the case where a single spreader is chosen as the spreading source. Often, the spreading process is initiated by simultaneously choosing multiple nodes as the spreading sources. In this situation, choosing the top-ranked nodes as the multiple spreaders is not an optimal strategy, since the chosen nodes are not sufficiently scattered across the network. The ideal situation for the multiple-spreaders case is therefore that the spreaders are not only influential but also dispersively distributed in the network, yet it is difficult to meet both conditions together. In this paper, we propose a heuristic clustering (HC) algorithm based on a similarity index to classify nodes into different clusters; the center nodes of the clusters are then chosen as the multiple spreaders. The HC algorithm not only ensures that the multiple spreaders are dispersively distributed in the network but also avoids selecting “negligible” nodes. Compared with traditional methods, our experimental results on synthetic and real networks indicate that the performance of the HC method on influence maximization is significantly better. - Highlights: • A heuristic clustering algorithm is proposed to identify the multiple influential spreaders in complex networks. • The algorithm not only guarantees that the selected spreaders are sufficiently scattered but also avoids selecting “insignificant” nodes. • The performance of our algorithm is generally better than that of other methods, on both real and synthetic networks.

  11. Optimization of magnet sorting in a storage ring using genetic algorithms

    International Nuclear Information System (INIS)

    Chen Jia; Wang Lin; Li Weimin; Gao Weiwei

    2013-01-01

    In this paper, genetic algorithms are applied to the optimization problem of magnet sorting in an electron storage ring; the objectives are set so that the closed-orbit distortion and beta beating are minimized and the dynamic aperture is maximized. The sorting of dipole, quadrupole and sextupole magnets is optimized, and the optimization results demonstrate the power of genetic algorithms in magnet sorting. (authors)

  12. Multiobjective Economic Load Dispatch in 3-D Space by Genetic Algorithm

    Science.gov (United States)

    Jain, N. K.; Nangia, Uma; Singh, Iqbal

    2017-10-01

    This paper presents the application of a genetic algorithm to the Multiobjective Economic Load Dispatch (MELD) problem, considering fuel cost, transmission losses and environmental pollution as objective functions. The MELD problem has been formulated using the constraint method. The non-inferior set for the IEEE 5, 14 and 30-bus systems has been generated using a genetic algorithm, and the target point has been obtained by maximizing the minimum relative attainments.

  13. Reconstruction algorithm in compressed sensing based on maximum a posteriori estimation

    International Nuclear Information System (INIS)

    Takeda, Koujin; Kabashima, Yoshiyuki

    2013-01-01

    We propose a systematic method for constructing a sparse data reconstruction algorithm in compressed sensing at relatively low computational cost for a general observation matrix. It is known that the cost of ℓ1-norm minimization using a standard linear programming algorithm is O(N³). We show that this cost can be reduced to O(N²) by applying the approach of posterior maximization. Furthermore, in principle, the algorithm from our approach is expected to achieve the widest successful reconstruction region, as evaluated from a theoretical argument. We also discuss the relation between the belief-propagation-based reconstruction algorithm introduced in preceding works and our approach
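
    For context, the O(N³) baseline referred to above is ℓ1-norm minimization cast as a standard linear program. A hedged sketch of that classical reduction (not the authors' O(N²) posterior-maximization algorithm), assuming SciPy:

        import numpy as np
        from scipy.optimize import linprog

        def l1_min(A, y):
            # Solve min ||x||_1 s.t. Ax = y via the standard split
            # x = u - v with u, v >= 0, minimizing sum(u) + sum(v).
            _, n = A.shape
            c = np.ones(2 * n)
            A_eq = np.hstack([A, -A])
            res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
            return res.x[:n] - res.x[n:]

        # small demo: recover a 2-sparse signal from random measurements
        rng = np.random.default_rng(1)
        A = rng.standard_normal((12, 24))
        x0 = np.zeros(24); x0[[3, 17]] = [1.5, -2.0]
        print(np.round(l1_min(A, A @ x0), 3))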

  14. A novel clustering algorithm based on quantum games

    International Nuclear Information System (INIS)

    Li Qiang; He Yan; Jiang Jingping

    2009-01-01

    Enormous successes have been achieved by quantum algorithms during the last decade. In this paper, we combine the quantum game with the problem of data clustering, and then develop a quantum-game-based clustering algorithm, in which data points in a dataset are considered as players who can make decisions and implement quantum strategies in quantum games. After each round of a quantum game, each player's expected payoff is calculated. He then uses a link-removing-and-rewiring (LRR) function to change his neighbors and adjust the strength of the links connecting to them in order to maximize his payoff. Further, the algorithms are discussed and analyzed for two cases of strategies, two payoff matrices and two LRR functions. The simulation results demonstrate that data points in datasets are clustered reasonably and efficiently, and that the clustering algorithms have fast rates of convergence. Moreover, the comparison with other algorithms also provides an indication of the effectiveness of the proposed approach.

  15. Sort-Mid tasks scheduling algorithm in grid computing.

    Science.gov (United States)

    Reda, Naglaa M; Tawfik, A; Marzok, Mohamed A; Khamis, Soheir M

    2015-11-01

    Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. Several researchers have aimed to develop scheduling algorithm variants that achieve optimality, and these have shown good performance in tasks scheduling with regard to resource selection. However, using the full power of resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize utilization and minimize the makespan. The strategy of the Sort-Mid algorithm is to find appropriate resources. The base step is to obtain the average value by sorting the list of completion times of each task. Then, the maximum average is obtained. Finally, the task with the maximum average is allocated to the machine that has the minimum completion time. The allocated task is deleted, and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms almost all other algorithms in terms of resource utilization and makespan.
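
    The described loop translates almost directly into code. A hedged sketch of the Sort-Mid steps as stated in the abstract (the per-machine load bookkeeping is an added assumption to keep the example self-contained):

        import numpy as np

        def sort_mid(ct):
            # ct[i, j]: estimated completion time of task i on machine j.
            # Returns a dict task -> machine.
            n_tasks, n_machines = ct.shape
            unassigned = list(range(n_tasks))
            load = np.zeros(n_machines)
            schedule = {}
            while unassigned:
                # average of each task's sorted completion-time list
                # (sorting mirrors the described step; the mean is unchanged)
                avg = np.sort(ct[unassigned], axis=1).mean(axis=1)
                t = unassigned[int(np.argmax(avg))]  # task with max average
                m = int(np.argmin(load + ct[t]))     # machine finishing earliest
                schedule[t] = m
                load[m] += ct[t, m]
                unassigned.remove(t)
            return schedule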

  16. Uplink transmit beamforming design for SINR maximization with full multiuser channel state information

    Science.gov (United States)

    Xi, Songnan; Zoltowski, Michael D.

    2008-04-01

    Multiuser multiple-input multiple-output (MIMO) systems are considered in this paper. We continue our research on uplink transmit beamforming design for multiple users under the assumption that the full multiuser channel state information, which is the collection of the channel state information between each of the users and the base station, is known not only to the receiver but also to all the transmitters. We propose an algorithm for designing optimal beamforming weights in terms of maximizing the signal-to-interference-plus-noise ratio (SINR). Through statistical modeling, we decouple the originally mathematically intractable optimization problem and achieve a closed-form solution. As in our previous work, the minimum mean-squared error (MMSE) receiver with successive interference cancellation (SIC) is adopted for multiuser detection. The proposed scheme is compared with an existing jointly optimized transceiver design, referred to as the joint transceiver in this paper, and our previously proposed eigen-beamforming algorithm. Simulation results demonstrate that our algorithm, with much less computational burden, achieves almost the same performance as the joint transceiver for spatially independent MIMO channels and even better performance for spatially correlated MIMO channels. It also consistently outperforms our previously proposed eigen-beamforming algorithm.

  17. Fisher information and statistical inference for phase-type distributions

    DEFF Research Database (Denmark)

    Bladt, Mogens; Esparza, Luz Judith R; Nielsen, Bo Friis

    2011-01-01

    This paper is concerned with statistical inference for both continuous and discrete phase-type distributions. We consider maximum likelihood estimation, where traditionally the expectation-maximization (EM) algorithm has been employed. Certain numerical aspects of this method are revised and we...

  18. A Comparison of Heuristics with Modularity Maximization Objective using Biological Data Sets

    Directory of Open Access Journals (Sweden)

    Pirim Harun

    2016-01-01

    Full Text Available Finding groups of objects exhibiting similar patterns is an important data analytics task. Many disciplines have their own terminologies, such as cluster, group, clique, community, etc., for similar objects in a set. Adopting the term community, many exact and heuristic algorithms have been developed to find the communities of interest in available data sets. Here, three heuristic algorithms for finding communities are compared using five gene expression data sets. The heuristics share a common objective function: maximizing the modularity, a quality measure of a partition and a reflection of objects’ relevance in communities. Partitions generated by the heuristics are compared with the real ones using the adjusted Rand index, one of the most commonly used external validation measures. The paper discusses the results of the partitions on the mentioned biological data sets.
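
    The shared objective of the compared heuristics is Newman's modularity Q. A minimal sketch of computing Q for a given partition (names are illustrative; the external comparison could use, e.g., sklearn.metrics.adjusted_rand_score for the adjusted Rand index):

        import numpy as np

        def modularity(A, labels):
            # Newman modularity Q of a partition. A: symmetric 0/1
            # adjacency matrix; labels[i]: community of node i.
            k = A.sum(axis=1)               # node degrees
            two_m = k.sum()                 # 2 * number of edges
            labels = np.asarray(labels)
            same = labels[:, None] == labels[None, :]
            return ((A - np.outer(k, k) / two_m) * same).sum() / two_m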

  19. Blind Multiuser Detection by Kurtosis Maximization for Asynchronous Multirate DS/CDMA Systems

    Directory of Open Access Journals (Sweden)

    Peng Chun-Hsien

    2006-01-01

    Full Text Available Chi et al. proposed a fast kurtosis maximization algorithm (FKMA for blind equalization/deconvolution of multiple-input multiple-output (MIMO linear time-invariant systems. This algorithm has been applied to blind multiuser detection of single-rate direct-sequence/code-division multiple-access (DS/CDMA systems and blind source separation (or independent component analysis. In this paper, the FKMA is further applied to blind multiuser detection for multirate DS/CDMA systems. The ideas are to properly formulate discrete-time MIMO signal models by converting real multirate users into single-rate virtual users, followed by the use of FKMA for extraction of virtual users' data sequences associated with the desired user, and recovery of the data sequence of the desired user from estimated virtual users' data sequences. Assuming that all the users' spreading sequences are given a priori, two multirate blind multiuser detection algorithms (with either a single receive antenna or multiple antennas, which also enjoy the merits of superexponential convergence rate and guaranteed convergence of the FKMA, are proposed in the paper, one based on a convolutional MIMO signal model and the other based on an instantaneous MIMO signal model. Some simulation results are then presented to demonstrate their effectiveness and to provide a performance comparison with some existing algorithms.

  20. Consumer-driven profit maximization in broiler production and processing

    Directory of Open Access Journals (Sweden)

    Ecio de Farias Costa

    2004-01-01

    Full Text Available Increased emphasis on consumer markets in broiler profit-maximizing modeling generates results that differ from those of traditional profit-maximization models. This approach reveals that the adoption of step pricing and the consideration of marketing options (examples of responsiveness to consumers) affect the optimal feed formulation levels and the types of broiler production that generate maximum profitability. The adoption of step pricing attests that higher profits can be obtained for targeted weights only if premium prices for broiler products are contracted.

  1. Hybrid Multi-Objective Optimization of Folsom Reservoir Operation to Maximize Storage in Whole Watershed

    Science.gov (United States)

    Goharian, E.; Gailey, R.; Maples, S.; Azizipour, M.; Sandoval Solis, S.; Fogg, G. E.

    2017-12-01

    Recurring droughts and growing water scarcity in California have a profound effect on human, agricultural, and environmental water needs. California has experienced multi-year droughts that have caused groundwater overdraft, dropping groundwater levels, and dwindling major reservoirs. These concerns call for a stringent evaluation of future water resources sustainability and security in the state. To answer this call, the Sustainable Groundwater Management Act (SGMA) was passed in 2014 to ensure sustainable groundwater management in California by 2042. SGMA identifies managed aquifer recharge (MAR) as a key management option, especially in areas with high intra- and inter-annual variation in water availability, to secure the refill of underground water storage and the return of groundwater quality to a desirable condition. Hybrid optimization of an integrated water resources system provides an opportunity to adapt surface reservoir operations to enhance groundwater recharge. Here, to re-operate Folsom Reservoir, the objectives are maximizing storage in the whole American-Cosumnes watershed and maximizing hydropower generation from Folsom Reservoir. While a linear programming (LP) module maximizes the total groundwater recharge by distributing and spreading water over suitable lands in the basin, a genetic-algorithm layer above it, the Non-dominated Sorting Genetic Algorithm II (NSGA-II), controls releases from the reservoir to secure hydropower generation, carry-over storage in the reservoir, water available for replenishment, and downstream water requirements. Preliminary results show additional releases from the reservoir for groundwater recharge during high-flow seasons. Moreover, tradeoffs between the objectives show that the new operation performs satisfactorily in increasing basin storage, with insignificant effects on the other objectives.

  2. CONVERGENCE OF ESTIMATORS IN MIXTURE MODELS BASED ON MISSING DATA

    Directory of Open Access Journals (Sweden)

    N Dwidayati

    2014-06-01

    Full Text Available A mixture model can estimate the proportion of patients who are cured and the survival function of patients who are not cured (uncured). In this study, the mixture model is developed for cure-rate analysis based on missing data. Several methods can be used to analyze missing data; one applicable method is the EM algorithm, which is based on two steps: (1) the Expectation step and (2) the Maximization step. The EM algorithm is an iterative approach for learning a model from data with missing values through four steps: (1) choose an initial set of parameters for the model, (2) determine the expected values of the missing data, (3) induce new model parameters from the combination of the expected values and the original data, and (4) if the parameters have not converged, repeat from step 2 using the new model. The study shows that, in the EM algorithm, the log-likelihood for the missing data increases after every iteration of the algorithm. Consequently, under the EM algorithm, the likelihood sequence converges whenever it is bounded above.
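
    The monotonicity property invoked in the abstract is the standard EM ascent argument, stated compactly below (textbook result, not reproduced from the paper):

        % With Q(\theta \mid \theta^{(t)})
        %   = \mathbb{E}_{Z \mid X, \theta^{(t)}} \bigl[ \log L(\theta; X, Z) \bigr],
        % the M-step chooses
        %   \theta^{(t+1)} = \arg\max_{\theta} Q(\theta \mid \theta^{(t)}),
        % and Jensen's inequality then yields the ascent property
        \log L(\theta^{(t+1)}; X) \;\ge\; \log L(\theta^{(t)}; X),
        % so the log-likelihood sequence is nondecreasing and converges
        % whenever it is bounded above.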

  3. Algorithm for Controlling a Centrifugal Compressor

    Science.gov (United States)

    Benedict, Scott M.

    2004-01-01

    An algorithm has been developed for controlling a centrifugal compressor that serves as the prime mover in a heat-pump system. Experimental studies have shown that the operating conditions for maximum compressor efficiency are close to the boundary beyond which surge occurs. Compressor surge is a destructive condition in which there are instantaneous reversals of flow associated with a high outlet-to-inlet pressure differential. For a given cooling load, the algorithm sets the compressor speed at the lowest possible value while adjusting the inlet guide vane angle and diffuser vane angle to maximize efficiency, subject to an overriding requirement to prevent surge. The onset of surge is detected via the onset of oscillations of the electric current supplied to the compressor motor, associated with surge-induced oscillations of the torque exerted by and on the compressor rotor. The algorithm can be implemented in any of several computer languages.
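
    The control policy can be sketched as a small loop; the efficiency surface, surge boundary, and load-to-speed rule below are invented stand-ins, not the actual compressor model.

      import numpy as np

      def efficiency(speed, igv, diffuser):
          # invented smooth surface peaking near igv = 20 deg, diffuser = 15 deg
          return 0.9 - 4e-4*(igv - 20)**2 - 6e-4*(diffuser - 15)**2 - 1e-4*(speed - 50)**2

      def near_surge(speed, igv):
          # invented surge boundary: low speed with closed vanes implies surge risk
          return speed < 30 + 0.5 * igv

      def control_step(load, speed, igv, diffuser):
          speed = max(load * 0.8, 25.0)            # lowest speed that meets the load (toy rule)
          for step in (2.0, -2.0):                 # hill-climb both vane angles for efficiency
              if efficiency(speed, igv + step, diffuser) > efficiency(speed, igv, diffuser):
                  igv += step
              if efficiency(speed, igv, diffuser + step) > efficiency(speed, igv, diffuser):
                  diffuser += step
          while near_surge(speed, igv):            # overriding requirement: prevent surge
              speed += 1.0
          return speed, igv, diffuser

      speed, igv, diffuser = 40.0, 10.0, 10.0
      for load in (30, 45, 60):
          speed, igv, diffuser = control_step(load, speed, igv, diffuser)
          print(load, speed, igv, diffuser, round(efficiency(speed, igv, diffuser), 3))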

  4. A hardware-algorithm co-design approach to optimize seizure detection algorithms for implantable applications.

    Science.gov (United States)

    Raghunathan, Shriram; Gupta, Sumeet K; Markandeya, Himanshu S; Roy, Kaushik; Irazoqui, Pedro P

    2010-10-30

    Implantable neural prostheses that deliver focal electrical stimulation upon demand are rapidly emerging as an alternate therapy for roughly a third of the epileptic patient population that is medically refractory. Seizure detection algorithms enable feedback mechanisms to provide focally and temporally specific intervention. Real-time feasibility and computational complexity often limit most reported detection algorithms to implementations using computers for bedside monitoring or external devices communicating with the implanted electrodes. A comparison of algorithms based on detection efficacy does not present a complete picture of the feasibility of the algorithm with limited computational power, as is the case with most battery-powered applications. We present a two-dimensional design optimization approach that takes into account both detection efficacy and hardware cost in evaluating algorithms for their feasibility in an implantable application. Detection features are first compared for their ability to detect electrographic seizures from micro-electrode data recorded from kainate-treated rats. Circuit models are then used to estimate the dynamic and leakage power consumption of the compared features. A score is assigned based on detection efficacy and the hardware cost for each of the features, then plotted on a two-dimensional design space. An optimal combination of compared features is used to construct an algorithm that provides maximal detection efficacy per unit hardware cost. The methods presented in this paper would facilitate the development of a common platform to benchmark seizure detection algorithms for comparison and feasibility analysis in the next generation of implantable neuroprosthetic devices to treat epilepsy. Copyright © 2010 Elsevier B.V. All rights reserved.
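
    At its core, the two-dimensional scoring reduces to ranking candidate features by detection efficacy per unit hardware (power) cost; a toy illustration with invented feature names and numbers:

      features = {
          "line_length": {"efficacy": 0.86, "power_uW": 1.2},
          "area":        {"efficacy": 0.80, "power_uW": 0.9},
          "energy":      {"efficacy": 0.89, "power_uW": 3.5},
          "coastline":   {"efficacy": 0.78, "power_uW": 0.7},
      }
      ranked = sorted(features.items(),
                      key=lambda kv: kv[1]["efficacy"] / kv[1]["power_uW"],
                      reverse=True)
      for name, f in ranked:                       # best efficacy-per-microwatt first
          print(f"{name:12s} {f['efficacy'] / f['power_uW']:.2f}")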

  5. Land-cover classification with an expert classification algorithm using digital aerial photographs

    Directory of Open Access Journals (Sweden)

    José L. de la Cruz

    2010-05-01

    Full Text Available The purpose of this study was to evaluate the usefulness of the spectral information of digital aerial sensors in determining land-cover classification using new digital techniques. The land covers evaluated were the following: (1) bare soil; (2) cereals, including maize (Zea mays L.), oats (Avena sativa L.), rye (Secale cereale L.), wheat (Triticum aestivum L.) and barley (Hordeum vulgare L.); (3) high-protein crops, such as peas (Pisum sativum L.) and beans (Vicia faba L.); (4) alfalfa (Medicago sativa L.); (5) woodlands and scrublands, including holly oak (Quercus ilex L.) and common retama (Retama sphaerocarpa L.); (6) urban soil; (7) olive groves (Olea europaea L.); and (8) burnt crop stubble. The best result was obtained using an expert classification algorithm, achieving a reliability rate of 95%. This result shows that the images of digital airborne sensors hold considerable promise for digital classification, because they contain valuable information that can be exploited from the geometric viewpoint. Moreover, new classification techniques reduce the problems encountered with high-resolution images, while achieving reliabilities better than those obtained with traditional methods.

  6. Implementation and evaluation of an ordered subsets reconstruction algorithm for transmission PET studies using median root prior and inter-update median filtering

    International Nuclear Information System (INIS)

    Bettinardi, V.; Gilardi, M.C.; Fazio, F.; Alenius, S.; Ruotsalainen, U.; Numminen, P.; Teraes, M.

    2003-01-01

    An ordered subsets (OS) reconstruction algorithm based on the median root prior (MRP) and inter-update median filtering was implemented for the reconstruction of low count statistics transmission (TR) scans. The OS-MRP-TR algorithm was evaluated using an experimental phantom, simulating positron emission tomography (PET) whole-body (WB) studies, as well as patient data. Various experimental conditions, in terms of TR scan time (from 1 h to 1 min), covering a wide range of TR count statistics were evaluated. The performance of the algorithm was assessed by comparing the mean value of the attenuation coefficient (MVAC) of known tissue types and the coefficient of variation (CV) for low-count TR images, reconstructed with the OS-MRP-TR algorithm, with reference values obtained from high-count TR images reconstructed with a filtered back-projection (FBP) algorithm. The reconstructed OS-MRP-TR images were then used for attenuation correction of the corresponding emission (EM) data. EM images reconstructed with attenuation correction generated by OS-MRP-TR images of low count statistics were compared with the EM images corrected for attenuation using reference (high statistics) TR data. In all the experimental situations considered, the OS-MRP-TR algorithm showed: (1) a tendency towards a stable solution in terms of MVAC; (2) a difference in MVAC within 5% between a 1-min TR scan reconstructed with OS-MRP-TR and a 1-h TR scan reconstructed with FBP; (3) effectiveness in noise reduction, particularly for low count statistics data [using a specific parameter configuration, the TR images reconstructed with OS-MRP-TR(1 min) had a lower CV than the corresponding TR images of a 1-h scan reconstructed with the FBP algorithm]; (4) a difference within 3% between the mean counts in the EM images attenuation-corrected using the OS-MRP-TR images of 1 min and those corrected using the OS-MRP-TR images of 1 h; (5

  7. Is CP violation maximal

    International Nuclear Information System (INIS)

    Gronau, M.

    1984-01-01

    Two ambiguities are noted in the definition of the concept of maximal CP violation. The phase convention ambiguity is overcome by introducing a CP violating phase in the quark mixing matrix U which is invariant under rephasing transformations. The second ambiguity, related to the parametrization of U, is resolved by finding a single empirically viable definition of maximal CP violation when assuming that U does not single out one generation. Considerable improvement in the calculation of nonleptonic weak amplitudes is required to test the conjecture of maximal CP violation. 21 references

  8. Shareholder, stakeholder-owner or broad stakeholder maximization

    OpenAIRE

    Mygind, Niels

    2004-01-01

    With reference to the discussion about shareholder versus stakeholder maximization it is argued that the normal type of maximization is in fact stakeholder-owner maximization. This means maximization of the sum of the value of the shares and stakeholder benefits belonging to the dominating stakeholder-owner. Maximization of shareholder value is a special case of owner-maximization, and only under quite restrictive assumptions shareholder maximization is larger or equal to stakeholder-owner...

  9. Quantitative validation of a new coregistration algorithm

    International Nuclear Information System (INIS)

    Pickar, R.D.; Esser, P.D.; Pozniakoff, T.A.; Van Heertum, R.L.; Stoddart, H.A. Jr.

    1995-01-01

    A new coregistration software package, Neuro900 Image Coregistration software, has been developed specifically for nuclear medicine. With this algorithm, the correlation coefficient is maximized between volumes generated from sets of transaxial slices. No localization markers or segmented surfaces are needed. The coregistration program was evaluated for translational and rotational registration accuracy. A Tc-99m HM-PAO split-dose study (0.53 mCi low dose, L, and 1.01 mCi high dose, H) was simulated with a Hoffman Brain Phantom with five fiducial markers. Translation error was determined by a shift in image centroid, and rotation error was determined by a simplified two-axis approach. Changes in registration accuracy were measured with respect to: (1) slice spacing, using the four different combinations LL, LH, HL, HH, (2) translational and rotational misalignment before coregistration, (3) changes in the step size of the iterative parameters. In all cases the algorithm converged with only small differences in translation offset. At 6 mm slice spacing, translational errors ranged from 0.9 to 2.8 mm (system resolution at 100 mm, 6.8 mm). The converged parameters showed little sensitivity to count density. In addition, the correlation coefficient increased with decreasing iterative step size, as expected. From these experiments, the authors found that this algorithm based on the maximization of the correlation coefficient between studies was an accurate way to coregister SPECT brain images.

  10. Dopaminergic balance between reward maximization and policy complexity

    Directory of Open Access Journals (Sweden)

    Naama Parush

    2011-05-01

    Full Text Available Previous reinforcement-learning models of the basal ganglia network have highlighted the role of dopamine in encoding the mismatch between prediction and reality. Far less attention has been paid to the computational goals and algorithms of the main axis (actor). Here, we construct a top-down model of the basal ganglia with emphasis on the role of dopamine as both a reinforcement-learning signal and a pseudo-temperature signal controlling the general level of basal ganglia excitability and the motor vigilance of the acting agent. We argue that the basal ganglia endow the thalamic-cortical networks with the optimal dynamic tradeoff between two constraints: minimizing the policy complexity (cost) and maximizing the expected future reward (gain). We show that this multi-dimensional optimization process results in an experience-modulated version of the softmax behavioral policy. Thus, as in classical softmax behavioral policies, probabilities of actions are selected according to their estimated values and the pseudo-temperature, but in addition they also vary according to the frequency of previous choices of these actions. We conclude that the computational goal of the basal ganglia is not to maximize cumulative (positive and negative) reward. Rather, the basal ganglia aim at optimization of independent gain and cost functions. Unlike previously suggested single-variable maximization processes, this multi-dimensional optimization process leads naturally to a softmax-like behavioral policy. We suggest that beyond its role in the modulation of the efficacy of the cortico-striatal synapses, dopamine directly affects striatal excitability and thus provides a pseudo-temperature signal that modulates the trade-off between gain and cost. The resulting experience- and dopamine-modulated softmax policy can then serve as a theoretical framework to account for the broad range of behaviors and clinical states governed by the basal ganglia and dopamine systems.

  11. Task-oriented maximally entangled states

    International Nuclear Information System (INIS)

    Agrawal, Pankaj; Pradhan, B

    2010-01-01

    We introduce the notion of a task-oriented maximally entangled state (TMES). This notion depends on the task for which a quantum state is used as the resource. TMESs are the states that can be used to carry out the task maximally. This concept may be more useful than that of a general maximally entangled state in the case of a multipartite system. We illustrate this idea by giving an operational definition of maximally entangled states on the basis of communication tasks of teleportation and superdense coding. We also give examples and a procedure to obtain such TMESs for n-qubit systems.

  12. Mapping quantitative trait loci in a selectively genotyped outbred population using a mixture model approach

    NARCIS (Netherlands)

    Johnson, David L.; Jansen, Ritsert C.; Arendonk, Johan A.M. van

    1999-01-01

    A mixture model approach is employed for the mapping of quantitative trait loci (QTL) for the situation where individuals, in an outbred population, are selectively genotyped. Maximum likelihood estimation of model parameters is obtained from an Expectation-Maximization (EM) algorithm facilitated by

  13. Density estimation from local structure

    CSIR Research Space (South Africa)

    Van der Walt, Christiaan M

    2009-11-01

    Full Text Available A Gaussian Mixture Model (GMM) density function of the data and the log-likelihood scores are compared to the scores of a GMM trained with the expectation maximization (EM) algorithm on 5 real-world classification datasets (from the UCI collection). They show...

  14. Iterative Mixture Component Pruning Algorithm for Gaussian Mixture PHD Filter

    Directory of Open Access Journals (Sweden)

    Xiaoxi Yan

    2014-01-01

    Full Text Available To address the increasing number of mixture components in the Gaussian mixture PHD filter, an iterative mixture component pruning algorithm is proposed. The pruning algorithm is based on maximizing the posterior probability density of the mixture weights. The entropy distribution of the mixture weights is adopted as the prior distribution of the mixture component parameters. The iterative update formulations of the mixture weights are derived via a Lagrange multiplier and the Lambert W function. Mixture components whose weights become negative during the iterative procedure are pruned by setting the corresponding mixture weights to zero. In addition, multiple mixture components with similar parameters describing the same PHD peak can be merged into one mixture component in the algorithm. Simulation results show that the proposed iterative mixture component pruning algorithm is superior to the typical pruning algorithm based on thresholds.
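
    For contrast with the iterative method proposed above, the typical threshold-based pruning and merging baseline can be sketched as follows (the truncation and merge thresholds are illustrative values, not taken from the paper):

      import numpy as np

      def prune_and_merge(weights, means, covs, trunc_thresh=1e-5, merge_thresh=4.0):
          keep = [i for i, w in enumerate(weights) if w > trunc_thresh]  # prune light components
          merged = []
          while keep:
              j = max(keep, key=lambda i: weights[i])          # heaviest remaining component
              inv = np.linalg.inv(covs[j])
              close = [i for i in keep
                       if (means[i] - means[j]) @ inv @ (means[i] - means[j]) <= merge_thresh]
              w = sum(weights[i] for i in close)               # moment-matched merge
              m = sum(weights[i] * means[i] for i in close) / w
              P = sum(weights[i] * (covs[i] + np.outer(means[i] - m, means[i] - m))
                      for i in close) / w
              merged.append((w, m, P))
              keep = [i for i in keep if i not in close]
          return merged

      out = prune_and_merge([0.6, 0.3, 1e-7],
                            [np.zeros(2), np.array([0.1, 0.0]), np.ones(2)],
                            [np.eye(2)] * 3)
      print(len(out), "components after pruning and merging")  # 1: two merged, one pruned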

  15. ITEM-QM solutions for EM problems in image reconstruction exemplary for the Compton Camera

    CERN Document Server

    Pauli, Josef; Anton, G

    2002-01-01

    Imaginary time expectation maximization (ITEM), a new algorithm for expectation-maximization problems based on quantum-mechanical energy minimization via imaginary (Euclidean) time evolution, is presented. Both the algorithm and the implementation (http://www.johannes-pauli.de/item/index.html) are published under the terms of the GNU General Public License (http://www.gnu.org/copyleft/gpl.html). Due to its generality, ITEM is applicable to various image reconstruction problems such as CT, PET, SPECT, NMR, the Compton camera and tomosynthesis, as well as to any other energy minimization problem. The choice of the optimal ITEM Hamiltonian is discussed and numerical results are presented for the Compton camera.

  16. Beam-induced motion correction for sub-megadalton cryo-EM particles.

    Science.gov (United States)

    Scheres, Sjors Hw

    2014-08-13

    In electron cryo-microscopy (cryo-EM), the electron beam that is used for imaging also causes the sample to move. This motion blurs the images and limits the resolution attainable by single-particle analysis. In a previous Research article (Bai et al., 2013) we showed that correcting for this motion by processing movies from fast direct-electron detectors allowed structure determination to near-atomic resolution from 35,000 ribosome particles. In this Research advance article, we show that an improved movie processing algorithm is applicable to a much wider range of specimens. The new algorithm estimates straight movement tracks by considering multiple particles that are close to each other in the field of view, and models the fall-off of high-resolution information content by radiation damage in a dose-dependent manner. Application of the new algorithm to four data sets illustrates its potential for significantly improving cryo-EM structures, even for particles that are smaller than 200 kDa. Copyright © 2014, Scheres.

  17. Nested sampling algorithm for subsurface flow model selection, uncertainty quantification, and nonlinear calibration

    KAUST Repository

    Elsheikh, A. H.; Wheeler, M. F.; Hoteit, Ibrahim

    2013-01-01

    Calibration of subsurface flow models is an essential step for managing ground water aquifers, designing of contaminant remediation plans, and maximizing recovery from hydrocarbon reservoirs. We investigate an efficient sampling algorithm known

  18. High-dimensional cluster analysis with the Masked EM Algorithm

    Science.gov (United States)

    Kadir, Shabnam N.; Goodman, Dan F. M.; Harris, Kenneth D.

    2014-01-01

    Cluster analysis faces two problems in high dimensions: first, the “curse of dimensionality” that can lead to overfitting and poor generalization performance; and second, the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. We describe a solution to these problems, designed for the application of “spike sorting” for next-generation high channel-count neural probes. In this problem, only a small subset of features provide information about the cluster membership of any one data vector, but this informative feature subset is not the same for all data points, rendering classical feature selection ineffective. We introduce a “Masked EM” algorithm that allows accurate and time-efficient clustering of up to millions of points in thousands of dimensions. We demonstrate its applicability to synthetic data, and to real-world high-channel-count spike sorting data. PMID:25149694

  19. FLOUTING MAXIMS IN INDONESIA LAWAK KLUB CONVERSATION

    Directory of Open Access Journals (Sweden)

    Rahmawati Sukmaningrum

    2017-04-01

    Full Text Available This study aims to identify the types of maxims flouted in the conversation in the famous comedy show Indonesia Lawak Klub. Likewise, it also tries to reveal the speakers' intentions in flouting the maxims in the conversation during the show. The writers use a descriptive qualitative method in conducting this research. The data are taken from the dialogue of Indonesia Lawak Klub and then analyzed based on Grice's cooperative principles. The researchers read the dialogue transcripts, identify the maxims, and interpret the data to find the speakers' intentions for flouting the maxims in the communication. The results show that four types of maxims are flouted in the dialogue: maxim of quality (23%), maxim of quantity (11%), maxim of manner (31%), and maxim of relevance (35%). Flouting the maxims in the conversations is intended to make the speakers feel uncomfortable with the conversation, show arrogance, show disagreement or agreement, and ridicule other speakers.

  20. Near-Optimal Resource Allocation in Cooperative Cellular Networks Using Genetic Algorithms

    OpenAIRE

    Luo, Zihan; Armour, Simon; McGeehan, Joe

    2015-01-01

    This paper shows how a genetic algorithm can be used as a method of obtaining the near-optimal solution of the resource block scheduling problem in a cooperative cellular network. An exhaustive search is initially implemented to guarantee that the optimal result, in terms of maximizing the bandwidth efficiency of the overall network, is found, and then the genetic algorithm with properly selected termination conditions is used in the same network. The simulation results show that the genet...

  1. VIOLATION OF CONVERSATION MAXIM ON TV ADVERTISEMENTS

    Directory of Open Access Journals (Sweden)

    Desak Putu Eka Pratiwi

    2015-07-01

    Full Text Available A maxim is a principle that must be obeyed by all participants, textually and interpersonally, in order to have a smooth communication process. Conversational maxims are divided into four types, namely the maxim of quality, maxim of quantity, maxim of relevance, and maxim of manner of speaking. Violation of a maxim may occur in a conversation in which the information the speaker has is not delivered well to the speaking partner. Violating a maxim in a conversation results in an awkward impression, for example when the given information is redundant, untrue, irrelevant, or convoluted. Advertisers often deliberately violate maxims to create unique and controversial advertisements. This study aims to examine the violation of maxims in conversations in TV ads. The source of data in this research is food advertisements aired on TV media. Documentation and observation methods are applied to obtain qualitative data. The theory used in this study is the maxim theory proposed by Grice (1975). The results of the data analysis are presented with an informal method. The results of this study show the interesting fact that the violation of maxims in the conversations found in the advertisements actually makes the advertisements very attractive and gives them high value.

  2. Fuzzy 2-partition entropy threshold selection based on Big Bang–Big Crunch Optimization algorithm

    Directory of Open Access Journals (Sweden)

    Baljit Singh Khehra

    2015-03-01

    Full Text Available The fuzzy 2-partition entropy approach has been widely used to select threshold values for image segmentation. This approach uses two parameterized fuzzy membership functions to form a fuzzy 2-partition of the image. The optimal threshold is selected by searching for an optimal combination of parameters of the membership functions such that the entropy of the fuzzy 2-partition is maximized. In this paper, a new fuzzy 2-partition entropy thresholding approach based on the Big Bang–Big Crunch Optimization (BBBCO) technique is proposed. The new thresholding approach is called the BBBCO-based fuzzy 2-partition entropy thresholding algorithm. BBBCO is used to search for an optimal combination of parameters of the membership functions that maximizes the entropy of the fuzzy 2-partition. BBBCO is inspired by a theory of the evolution of the universe, namely the Big Bang and Big Crunch theory. The proposed algorithm is tested on a number of standard test images. For comparison, three other approaches, Genetic Algorithm (GA)-based, Biogeography-Based Optimization (BBO)-based and recursive, are also implemented. The experimental results show that the performance of the proposed algorithm is more effective than the GA-based, BBO-based and recursion-based approaches.

  3. Analytical optimal pulse shapes obtained with the aid of genetic algorithms

    International Nuclear Information System (INIS)

    Guerrero, Rubén D.; Arango, Carlos A.; Reyes, Andrés

    2015-01-01

    We propose a methodology to design optimal pulses for achieving quantum optimal control on molecular systems. Our approach constrains pulse shapes to linear combinations of a fixed number of experimentally relevant pulse functions. Quantum optimal control is obtained by maximizing a multi-target fitness function using genetic algorithms. As a first application of the methodology, we generated an optimal pulse that successfully maximized the yield on a selected dissociation channel of a diatomic molecule. Our pulse is obtained as a linear combination of linearly chirped pulse functions. Data recorded along the evolution of the genetic algorithm contained important information regarding the interplay between radiative and diabatic processes. We performed a principal component analysis on these data to retrieve the most relevant processes along the optimal path. Our proposed methodology could be useful for performing quantum optimal control on more complex systems by employing a wider variety of pulse shape functions

  4. Analytical optimal pulse shapes obtained with the aid of genetic algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Guerrero, Rubén D., E-mail: rdguerrerom@unal.edu.co [Department of Physics, Universidad Nacional de Colombia, Bogota (Colombia); Arango, Carlos A. [Department of Chemical Sciences, Universidad Icesi, Cali (Colombia); Reyes, Andrés [Department of Chemistry, Universidad Nacional de Colombia, Bogota (Colombia)

    2015-09-28

    We propose a methodology to design optimal pulses for achieving quantum optimal control on molecular systems. Our approach constrains pulse shapes to linear combinations of a fixed number of experimentally relevant pulse functions. Quantum optimal control is obtained by maximizing a multi-target fitness function using genetic algorithms. As a first application of the methodology, we generated an optimal pulse that successfully maximized the yield on a selected dissociation channel of a diatomic molecule. Our pulse is obtained as a linear combination of linearly chirped pulse functions. Data recorded along the evolution of the genetic algorithm contained important information regarding the interplay between radiative and diabatic processes. We performed a principal component analysis on these data to retrieve the most relevant processes along the optimal path. Our proposed methodology could be useful for performing quantum optimal control on more complex systems by employing a wider variety of pulse shape functions.

  5. Evaluation of HER2 Gene Amplification in Breast Cancer Using Nuclei Microarray in situ Hybridization

    Directory of Open Access Journals (Sweden)

    Xuefeng Zhang

    2012-05-01

    Full Text Available Fluorescence in situ hybridization (FISH) assay is considered the “gold standard” in evaluating HER2/neu (HER2) gene status. However, FISH detection is costly and time consuming. Thus, we established a nuclei microarray with extracted intact nuclei from paraffin-embedded breast cancer tissues for FISH detection. The nuclei microarray FISH (NMFISH) technology serves as a useful platform for analyzing the HER2 gene/chromosome 17 centromere ratio. We examined HER2 gene status in 152 cases of invasive ductal carcinomas of the breast that were resected surgically, using FISH and NMFISH. HER2 gene amplification status was classified according to the guidelines of the American Society of Clinical Oncology and College of American Pathologists (ASCO/CAP). Comparison of the cut-off values for the HER2/chromosome 17 centromere copy number ratio obtained by NMFISH and FISH showed almost perfect agreement between the two methods (κ coefficient 0.920). The results of the two methods were almost consistent for the evaluation of HER2 gene counts. The present study proved that NMFISH is comparable with FISH for evaluating HER2 gene status. The use of nuclei microarray technology is highly efficient, time and reagent conserving, and inexpensive.

  6. Neonatal Phosphate Nutrition Alters In Vivo and In Vitro Satellite Cell Activity in Pigs

    Directory of Open Access Journals (Sweden)

    Chad H. Stahl

    2012-05-01

    Full Text Available Satellite cell activity is necessary for postnatal skeletal muscle growth. Severe phosphate (PO4) deficiency can alter satellite cell activity, however the role of neonatal PO4 nutrition in satellite cell biology remains obscure. Twenty-one piglets (1 day of age, 1.8 ± 0.2 kg BW) were pair-fed liquid diets that were either PO4-adequate (0.9% total P), supra-adequate (1.2% total P) or deficient (0.7% total P) in PO4 content for 12 days. Body weight was recorded daily and blood samples were collected every 6 days. At day 12, pigs were orally dosed with BrdU and 12 h later, satellite cells were isolated. Satellite cells were also cultured in vitro for 7 days to determine if PO4 nutrition alters their ability to proceed through their myogenic lineage. Dietary PO4 deficiency resulted in reduced (P < 0.05) serum PO4 and parathyroid hormone (PTH) concentrations, while supra-adequate dietary PO4 improved (P < 0.05) feed conversion efficiency compared with the PO4-adequate group. In vivo satellite cell proliferation was reduced (P < 0.05) among the PO4-deficient pigs, and these cells had altered in vitro expression of markers of myogenic progression. Further work to better understand early nutritional programming of satellite cells and the potential benefits of emphasizing early PO4 nutrition for future lean growth potential is warranted.

  7. Optimization Shape of Variable Capacitance Micromotor Using Differential Evolution Algorithm

    Directory of Open Access Journals (Sweden)

    A. Ketabi

    2010-01-01

    Full Text Available A new method for the optimal shape design of a variable capacitance micromotor (VCM) using Differential Evolution (DE), a stochastic search algorithm, is presented. In this optimization exercise, the objective function aims to maximize the torque value and minimize the torque ripple, where the geometric parameters are considered to be the variables. The optimization process is carried out using a combination of the DE algorithm and FEM analysis. The fitness value is calculated by FEM analysis using COMSOL 3.4, and the DE algorithm is realized in MATLAB 7.4. The proposed method is applied to a VCM with 8 poles at the stator and 6 poles at the rotor. The results show that the micromotor optimized using the DE algorithm has a higher torque value and lower torque ripple, indicating the validity of this methodology for VCM design.

  8. Sort-Mid tasks scheduling algorithm in grid computing

    Directory of Open Access Journals (Sweden)

    Naglaa M. Reda

    2015-11-01

    Full Text Available Scheduling tasks on heterogeneous resources distributed over a grid computing system is an NP-complete problem. The main aim of several researchers has been to develop variant scheduling algorithms to achieve optimality, and these have shown good performance in task scheduling with respect to resource selection. However, using the full power of the resources is still a challenge. In this paper, a new heuristic algorithm called Sort-Mid is proposed. It aims to maximize utilization and minimize the makespan. The strategy of the Sort-Mid algorithm is to find appropriate resources. The first step is to obtain the average value by sorting the list of completion times of each task. Then, the maximum average is obtained. Finally, the task with the maximum average is allocated to the machine that has the minimum completion time. The allocated task is removed, and these steps are repeated until all tasks are allocated. Experimental tests show that the proposed algorithm outperforms most other algorithms in terms of resource utilization and makespan.

  9. EM for phylogenetic topology reconstruction on nonhomogeneous data.

    Science.gov (United States)

    Ibáñez-Marcelo, Esther; Casanellas, Marta

    2014-06-17

    The reconstruction of the phylogenetic tree topology of four taxa is, still nowadays, one of the main challenges in phylogenetics. Its difficulties lie in considering not too restrictive evolutionary models, and correctly dealing with the long-branch attraction problem. The correct reconstruction of 4-taxon trees is crucial for making quartet-based methods work and being able to recover large phylogenies. We adapt the well-known expectation-maximization algorithm to evolutionary Markov models on phylogenetic 4-taxon trees. We then use this algorithm to estimate the substitution parameters, compute the corresponding likelihood, and infer the most likely quartet. In this paper we consider an expectation-maximization method for maximizing the likelihood of (time nonhomogeneous) evolutionary Markov models on trees. We study its success in reconstructing 4-taxon topologies and its performance as an input method in quartet-based phylogenetic reconstruction methods such as QFIT and QuartetSuite. Our results show that the method proposed here outperforms neighbor-joining and the usual (time-homogeneous continuous-time) maximum likelihood methods on 4-leaved trees with among-lineage instantaneous rate heterogeneity, and performs similarly to the usual continuous-time maximum likelihood when the data satisfy the assumptions of both methods. The method presented in this paper is well suited for reconstructing the topology of any number of taxa via quartet-based methods and is highly accurate, especially regarding largely divergent trees and time-nonhomogeneous data.

  10. A new hybrid metaheuristic algorithm for wind farm micrositing

    International Nuclear Information System (INIS)

    Massan, S.U.R.; Wagan, A.I.; Shaikh, M.M.

    2017-01-01

    This work focuses on proposing a new algorithm, referred to as HMA (Hybrid Metaheuristic Algorithm), for the solution of the WTO (Wind Turbine Optimization) problem. It is well documented that turbines located behind one another face a power loss due to the obstruction of the wind caused by wake loss. It is required to reduce this wake loss by the effective placement of turbines using a new HMA. This HMA is derived from two basic algorithms, i.e. the DEA (Differential Evolution Algorithm) and the FA (Firefly Algorithm). The optimization is undertaken on the N.O. Jensen model. The blending of DEA and FA into HMA is discussed, and the new HMA is implemented to maximize power and minimize cost in a WTO problem. The results of HMA have been compared with the GA (Genetic Algorithm) used in some previous studies. The total power produced and the cost per unit turbine for a wind farm calculated using HMA, and their comparison with past approaches using single algorithms, show a significant advantage of using the HMA over single algorithms. The first-time implementation of a new algorithm obtained by blending two single algorithms is a significant step towards learning the behavior of algorithms and their added advantages when used together. (author)
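
    For reference, the N.O. Jensen model on which the optimization operates reduces to a single wake-deficit formula; the thrust coefficient, decay constant, and geometry below are illustrative assumptions.

      import numpy as np

      def jensen_speed(u0, x, r0, Ct=0.88, k=0.075):
          """Wind speed a distance x directly downstream of a turbine with rotor radius r0."""
          if x <= 0:
              return u0                                        # upstream: no wake deficit
          deficit = (1.0 - np.sqrt(1.0 - Ct)) / (1.0 + k * x / r0) ** 2
          return u0 * (1.0 - deficit)

      # a turbine 400 m behind another sees a reduced free-stream speed:
      print(jensen_speed(u0=12.0, x=400.0, r0=20.0))           # about 10.7 m/s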

  11. A New Hybrid Metaheuristic Algorithm for Wind Farm Micrositing

    Directory of Open Access Journals (Sweden)

    SHAFIQ-UR-REHMAN MASSAN

    2017-07-01

    Full Text Available This work focuses on proposing a new algorithm, referred to as HMA (Hybrid Metaheuristic Algorithm), for the solution of the WTO (Wind Turbine Optimization) problem. It is well documented that turbines located behind one another face a power loss due to the obstruction of the wind caused by wake loss. It is required to reduce this wake loss by the effective placement of turbines using a new HMA. This HMA is derived from two basic algorithms, i.e. the DEA (Differential Evolution Algorithm) and the FA (Firefly Algorithm). The optimization is undertaken on the N.O. Jensen model. The blending of DEA and FA into HMA is discussed, and the new HMA is implemented to maximize power and minimize cost in a WTO problem. The results of HMA have been compared with the GA (Genetic Algorithm) used in some previous studies. The total power produced and the cost per unit turbine for a wind farm calculated using HMA, and their comparison with past approaches using single algorithms, show a significant advantage of using the HMA over single algorithms. The first-time implementation of a new algorithm obtained by blending two single algorithms is a significant step towards learning the behavior of algorithms and their added advantages when used together.

  12. PARAMETRIZATION OF INNER STRUCTURE OF AGRICULTURAL SYSTEMS ON THE BASIS OF MAXIMAL YIELDS ISOLINES (ISOCARPS

    Directory of Open Access Journals (Sweden)

    K KUDRNA

    2004-07-01

    Full Text Available On the basis of an analysis of yield time series from a ten-year period, isolines of maximal crop yields (isocarps) have been constructed, homogenized yield zones have been determined, and inner structures of the agricultural system have been calculated. Algorithms for the calculation of a normal and an optimal structure have been used, and differences in the structure of the agricultural system have been determined for every defined zone.

  13. Algorithms of maximum likelihood data clustering with applications

    Science.gov (United States)

    Giada, Lorenzo; Marsili, Matteo

    2002-12-01

    We address the problem of data clustering by introducing an unsupervised, parameter-free approach based on the maximum likelihood principle. Starting from the observation that data sets belonging to the same cluster share common information, we construct an expression for the likelihood of any possible cluster structure. The likelihood in turn depends only on the Pearson correlation coefficients of the data. We discuss clustering algorithms that provide a fast and reliable approximation to maximum likelihood configurations. Compared to standard clustering methods, our approach has the advantages that (i) it is parameter free, (ii) the number of clusters need not be fixed in advance and (iii) the interpretation of the results is transparent. In order to test our approach and compare it with standard clustering algorithms, we analyze two very different data sets: time series of financial market returns and gene expression data. We find that different maximization algorithms produce similar cluster structures whereas the outcome of standard algorithms has a much wider variability.

  14. Shareholder, stakeholder-owner or broad stakeholder maximization

    DEFF Research Database (Denmark)

    Mygind, Niels

    2004-01-01

    With reference to the discussion about shareholder versus stakeholder maximization it is argued that the normal type of maximization is in fact stakeholder-owner maximization. This means maximization of the sum of the value of the shares and stakeholder benefits belonging to the dominating...... including the shareholders of a company. Although it may be the ultimate goal for Corporate Social Responsibility to achieve this kind of maximization, broad stakeholder maximization is quite difficult to give a precise definition. There is no one-dimensional measure to add different stakeholder benefits...... not traded on the market, and therefore there is no possibility for practical application. Broad stakeholder maximization instead in practical applications becomes satisfying certain stakeholder demands, so that the practical application will be stakeholder-owner maximization under constraints defined...

  15. Optomechanical parameter estimation

    International Nuclear Information System (INIS)

    Ang, Shan Zheng; Tsang, Mankei; Harris, Glen I; Bowen, Warwick P

    2013-01-01

    We propose a statistical framework for the problem of parameter estimation from a noisy optomechanical system. The Cramér–Rao lower bound on the estimation errors in the long-time limit is derived and compared with the errors of radiometer and expectation–maximization (EM) algorithms in the estimation of the force noise power. When applied to experimental data, the EM estimator is found to have the lowest error and follow the Cramér–Rao bound most closely. Our analytic results are envisioned to be valuable to optomechanical experiment design, while the EM algorithm, with its ability to estimate most of the system parameters, is envisioned to be useful for optomechanical sensing, atomic magnetometry and fundamental tests of quantum mechanics. (paper)

  16. On the maximal superalgebras of supersymmetric backgrounds

    International Nuclear Information System (INIS)

    Figueroa-O'Farrill, Jose; Hackett-Jones, Emily; Moutsopoulos, George; Simon, Joan

    2009-01-01

    In this paper we give a precise definition of the notion of a maximal superalgebra of certain types of supersymmetric supergravity backgrounds, including the Freund-Rubin backgrounds, and propose a geometric construction extending the well-known construction of its Killing superalgebra. We determine the structure of maximal Lie superalgebras and show that there is a finite number of isomorphism classes, all related via contractions from an orthosymplectic Lie superalgebra. We use the structure theory to show that maximally supersymmetric waves do not possess such a maximal superalgebra, but that the maximally supersymmetric Freund-Rubin backgrounds do. We perform the explicit geometric construction of the maximal superalgebra of AdS4 × S7 and find that it is isomorphic to osp(1|32). We propose an algebraic construction of the maximal superalgebra of any background asymptotic to AdS4 × S7, and we test this proposal by computing the maximal superalgebra of the M2-brane in its two maximally supersymmetric limits, finding agreement.

  17. Pre-emptive resource-constrained multimode project scheduling using genetic algorithm: A dynamic forward approach

    Directory of Open Access Journals (Sweden)

    Aidin Delgoshaei

    2016-09-01

    Full Text Available Purpose: The issue of resource over-allocation is a big concern for project engineers in the process of scheduling project activities. The resource over-allocation drawback is frequently seen after a project has been scheduled in practice, which renders the schedule useless. Modifying an over-allocated schedule is very complicated and needs a lot of effort and time. In this paper, a new and fast-tracking method is proposed to schedule large-scale projects, which can help project engineers to schedule the project rapidly and with more confidence. Design/methodology/approach: In this article, a forward approach for maximizing the net present value (NPV) in the multi-mode resource-constrained project scheduling problem with discounted positive cash flows (MRCPSP-DCF) is proposed. The progress payment method is used and all resources are considered pre-emptible. The proposed approach maximizes NPV using unscheduled resources through the resource calendar in forward mode. For this purpose, a genetic algorithm is applied. Findings: The findings show that the proposed method is an effective way to maximize NPV in MRCPSP-DCF problems when activity splitting is allowed. The proposed algorithm is very fast and can schedule experimental cases with 1000 variables and 100 resources in a few seconds. The results are compared with a branch-and-bound method and a simulated annealing algorithm, and it is found that the proposed genetic algorithm provides results of better quality. The algorithm is then applied to scheduling a hospital in practice. Originality/value: The method can be used alone or as a macro in Microsoft Office Project® Software to schedule MRCPSP-DCF problems or to modify resource over-allocated activities after scheduling a project. This can help project engineers to schedule project activities rapidly and with more accuracy in practice.
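
    The discounted-cash-flow objective the genetic algorithm maximizes is an ordinary NPV over the schedule's periodic net cash flows (progress payments minus costs); a minimal illustration with invented values:

      def npv(cash_flows, rate=0.01):
          """cash_flows[t-1] is the net cash flow received at the end of period t."""
          return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

      print(npv([-100.0, 40.0, 40.0, 60.0]))  # a candidate schedule is better when this is larger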

  18. Research reactor loading pattern optimization using estimation of distribution algorithms

    International Nuclear Information System (INIS)

    Jiang, S.; Ziver, K.; Carter, J. N.; Pain, C. C.; Eaton, M. D.; Goddard, A. J. H.; Franklin, S. J.; Phillips, H. J.

    2006-01-01

    A new evolutionary-search-based approach for solving nuclear reactor loading pattern optimization problems is presented, based on Estimation of Distribution Algorithms (EDAs). The optimization technique developed is then applied to the maximization of the effective multiplication factor (Keff) of the Imperial College CONSORT research reactor (the last remaining civilian research reactor in the United Kingdom). A new elitism-guided searching strategy has been developed and applied to improve the local convergence, together with some problem-dependent information based on 'stand-alone Keff' with fuel coupling calculations. A comparison study between the EDAs and a Genetic Algorithm with a Heuristic Tie-Breaking Crossover operator has shown that the new algorithm is efficient and robust. (authors)

  19. Pareto optimization of an industrial ecosystem: sustainability maximization

    Directory of Open Access Journals (Sweden)

    J. G. M.-S. Monteiro

    2010-09-01

    Full Text Available This work investigates a procedure to design an Industrial Ecosystem for sequestering CO2 and consuming glycerol in a Chemical Complex with 15 integrated processes. The Complex is responsible for the production of methanol, ethylene oxide, ammonia, urea, dimethyl carbonate, ethylene glycol, glycerol carbonate, β-carotene, 1,2-propanediol and olefins, and is simulated using UNISIM Design (Honeywell). The process environmental impact (EI) is calculated using the Waste Reduction Algorithm, while the profit (P) is estimated using classic cost correlations. MATLAB (The MathWorks Inc.) is connected to UNISIM to enable optimization. The objective is granting maximum process sustainability, which involves finding a compromise between high profitability and low environmental impact. Sustainability maximization is therefore understood as a multi-criteria optimization problem, addressed by means of the Pareto optimization methodology for trading off P vs. EI.

  20. Algorithmic Trading with Developmental and Linear Genetic Programming

    Science.gov (United States)

    Wilson, Garnett; Banzhaf, Wolfgang

    A developmental co-evolutionary genetic programming approach (PAM DGP) and a standard linear genetic programming (LGP) stock trading system are applied to a number of stocks across market sectors. Both GP techniques were found to be robust to market fluctuations and reactive to opportunities associated with stock price rise and fall, with PAM DGP generating notably greater profit in some stock trend scenarios. Both algorithms were very accurate at buying to achieve profit and selling to protect assets, while exhibiting both moderate trading activity and the ability to maximize or minimize investment as appropriate. The content of the trading rules produced by both algorithms is also examined in relation to stock price trend scenarios.

  1. A New DG Multiobjective Optimization Method Based on an Improved Evolutionary Algorithm

    Directory of Open Access Journals (Sweden)

    Wanxing Sheng

    2013-01-01

    Full Text Available A distributed generation (DG) multiobjective optimization method based on an improved Pareto evolutionary algorithm is investigated in this paper. The improved Pareto evolutionary algorithm, which introduces a penalty factor in the objective function constraints, uses an adaptive crossover and a mutation operator in the evolutionary process and combines a simulated annealing iterative process. The proposed algorithm is utilized to optimize DG injection models to maximize DG utilization while minimizing system loss and environmental pollution. A revised IEEE 33-bus system with multiple DG units was used to test the multiobjective optimization algorithm in a distribution power system. The proposed algorithm was implemented and compared with the strength Pareto evolutionary algorithm 2 (SPEA2), a particle swarm optimization (PSO) algorithm, and the non-dominated sorting genetic algorithm II (NSGA-II). The comparison of the results demonstrates the validity and practicality of utilizing DG units in terms of economic dispatch and optimal operation in a distribution power system.

  2. Maximally multipartite entangled states

    Science.gov (United States)

    Facchi, Paolo; Florio, Giuseppe; Parisi, Giorgio; Pascazio, Saverio

    2008-06-01

    We introduce the notion of maximally multipartite entangled states of n qubits as a generalization of the bipartite case. These pure states have a bipartite entanglement that does not depend on the bipartition and is maximal for all possible bipartitions. They are solutions of a minimization problem. Examples for small n are investigated, both analytically and numerically.

  3. Maximally Symmetric Composite Higgs Models.

    Science.gov (United States)

    Csáki, Csaba; Ma, Teng; Shu, Jing

    2017-09-29

    Maximal symmetry is a novel tool for composite pseudo Goldstone boson Higgs models: it is a remnant of an enhanced global symmetry of the composite fermion sector involving a twisting with the Higgs field. Maximal symmetry has far-reaching consequences: it ensures that the Higgs potential is finite and fully calculable, and also minimizes the tuning. We present a detailed analysis of the maximally symmetric SO(5)/SO(4) model and comment on its observational consequences.

  4. Otsu Based Optimal Multilevel Image Thresholding Using Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    N. Sri Madhava Raja

    2014-01-01

    Full Text Available A histogram-based multilevel thresholding approach is proposed using a Brownian distribution (BD) guided firefly algorithm (FA). A bounded search technique is also presented to improve the optimization accuracy with fewer search iterations. Otsu's between-class variance function is maximized to obtain optimal threshold levels for gray-scale images. The performance of the proposed algorithm is demonstrated by considering twelve benchmark images and is compared with existing FA algorithms such as the Lévy flight (LF) guided FA and the random operator guided FA. The performance assessment comparison between the proposed and existing firefly algorithms is carried out using prevailing parameters such as the objective function, standard deviation, peak signal-to-noise ratio (PSNR), structural similarity (SSIM) index, and CPU search time. The results show that the BD-guided FA provides a better objective function, PSNR, and SSIM, whereas the LF-based FA provides faster convergence with relatively lower CPU time.
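
    For the single-threshold case, the quantity the firefly algorithm maximizes in the multilevel setting can be computed by an exhaustive search over the histogram; a plain Otsu sketch on synthetic bimodal data:

      import numpy as np

      def otsu_threshold(hist):
          p = hist / hist.sum()                          # gray-level probabilities
          best_t, best_var = 0, -1.0
          for t in range(1, len(p)):
              w0, w1 = p[:t].sum(), p[t:].sum()
              if w0 == 0 or w1 == 0:
                  continue
              mu0 = (np.arange(t) * p[:t]).sum() / w0
              mu1 = (np.arange(t, len(p)) * p[t:]).sum() / w1
              var_b = w0 * w1 * (mu0 - mu1) ** 2         # between-class variance
              if var_b > best_var:
                  best_t, best_var = t, var_b
          return best_t

      rng = np.random.default_rng(2)
      pixels = np.concatenate([rng.normal(60, 10, 5000),
                               rng.normal(170, 12, 5000)]).astype(int).clip(0, 255)
      print(otsu_threshold(np.bincount(pixels, minlength=256)))  # lands between the two modes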

  5. Tools to identify linear combination of prognostic factors which maximizes area under receiver operator curve.

    Science.gov (United States)

    Todor, Nicolae; Todor, Irina; Săplăcan, Gavril

    2014-01-01

    The linear combination of variables is an attractive method in many medical analyses targeting a score to classify patients. In the case of ROC curves, the most popular problem is to identify the linear combination which maximizes the area under the curve (AUC). This problem is completely solved when normality assumptions are met. With no assumption of normality, search algorithms are avoided because it is accepted that the AUC has to be evaluated n^d times, where n is the number of distinct observations and d is the number of variables. For d = 2, using particularities of the AUC formula, we describe an algorithm which lowers the number of AUC evaluations from n^2 to n(n-1) + 1. For d > 2, our proposed solution is an approximate method that considers equidistant points on the unit sphere in R^d at which the AUC is evaluated. The algorithms were applied to data from our lab to predict response to treatment by a set of molecular markers in cervical cancer patients. In order to evaluate the strength of our algorithms, a simulation was added. In the case of no normality, the presented algorithms are feasible. For many variables, computation time could increase but remains acceptable.
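
    The building block in this setting is the AUC of a linear score, which can be computed from ranks via the Mann-Whitney identity; the coefficient vector below is one arbitrary candidate, while the search strategies in the paper decide which candidates to evaluate.

      import numpy as np

      def auc(score, label):
          """AUC of `score` for binary `label` (1 = positive), via the rank-sum identity."""
          order = np.argsort(score, kind="mergesort")
          ranks = np.empty(len(score))
          ranks[order] = np.arange(1, len(score) + 1)
          pos = label == 1
          n1, n0 = pos.sum(), (~pos).sum()
          return (ranks[pos].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)

      rng = np.random.default_rng(4)
      X = rng.normal(size=(200, 2))
      y = (X @ np.array([1.0, 0.5]) + rng.normal(0, 1, 200) > 0).astype(int)
      print(auc(X @ np.array([1.0, 0.5]), y))    # AUC of this candidate linear combination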

  6. Maximizing Total Profit in Two-agent Problem of Order Acceptance and Scheduling

    Directory of Open Access Journals (Sweden)

    Mohammad Reisi-Nafchi

    2017-03-01

    Full Text Available In competitive markets, attracting potential customers and keeping current customers is a survival condition for each company, so paying attention to the requests of customers is vital. In this paper, the problem of order acceptance and scheduling is studied, in which two types of customers, or agents, compete in a single-machine environment. The objective is to maximize the sum of the total profit of the first agent's accepted orders and the total revenue of the second agent. Only the first agent incurs a penalty, and its penalty function is lateness; the second agent's orders have a common due date, and this agent does not accept any tardy order. To solve the problem, a mathematical program, a heuristic algorithm and a pseudo-polynomial dynamic programming algorithm are proposed. Computational results confirm that the dynamic programming algorithm solves all problem instances with up to 70 orders optimally, as well as 93.12% of problem instances with up to 150 orders.

  7. Direct reconstruction of the source intensity distribution of a clinical linear accelerator using a maximum likelihood expectation maximization algorithm.

    Science.gov (United States)

    Papaconstadopoulos, P; Levesque, I R; Maglieri, R; Seuntjens, J

    2016-02-07

    Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem utilizing small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined by dose profile film measurements in air using a high-density thin foil as build-up material and an appropriate point spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest field size ([Formula: see text] cm²). The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with an accuracy better than 0.12 mm in the full width at half maximum (FWHM) relative to the respective electron sources incident on the target. The estimated jaw positions agreed within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed within 0.03 mm and 0.11 mm with the commissioned electron source in the crossplane and inplane orientations, respectively. The impact of the jaw positioning, experimental and PSF uncertainties on the reconstructed source distribution was evaluated, with the former presenting the dominant effect.
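
    The core MLEM iteration adapted here has the generic multiplicative form below; the system matrix and the measured profile are random stand-ins for the ray-traced projector and the film measurements.

      import numpy as np

      rng = np.random.default_rng(5)
      A = rng.uniform(0, 1, (64, 32))             # stand-in forward (ray-tracing) operator
      x_true = np.exp(-0.5 * ((np.arange(32) - 16) / 3.0) ** 2)
      y = rng.poisson(A @ x_true * 50) / 50.0     # noisy measured fluence profile

      x = np.ones(32)                             # flat initial source estimate
      sens = A.sum(axis=0)                        # sensitivity term, A^T 1
      for _ in range(200):
          proj = A @ x                            # forward-project the current estimate
          x *= (A.T @ (y / np.maximum(proj, 1e-12))) / sens   # multiplicative MLEM update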

  8. Maximal quantum Fisher information matrix

    International Nuclear Information System (INIS)

    Chen, Yu; Yuan, Haidong

    2017-01-01

    We study the existence of the maximal quantum Fisher information matrix in the multi-parameter quantum estimation, which bounds the ultimate precision limit. We show that when the maximal quantum Fisher information matrix exists, it can be directly obtained from the underlying dynamics. Examples are then provided to demonstrate the usefulness of the maximal quantum Fisher information matrix by deriving various trade-off relations in multi-parameter quantum estimation and obtaining the bounds for the scalings of the precision limit. (paper)

  9. AC-600 reactor reloading pattern optimization by using genetic algorithms

    International Nuclear Information System (INIS)

    Wu Hongchun; Xie Zhongsheng; Yao Dong; Li Dongsheng; Zhang Zongyao

    2000-01-01

    The use of genetic algorithms to optimize the reloading pattern of a nuclear power plant reactor is proposed, and a new encoding and translation method is given. Optimization results minimizing the core power peak and maximizing the cycle length for both low-leakage and out-in loading patterns of the AC-600 reactor are obtained.

  10. Understanding Violations of Gricean Maxims in Preschoolers and Adults

    Directory of Open Access Journals (Sweden)

    Mako Okanda

    2015-07-01

    Full Text Available This study used a revised Conversational Violations Test to examine Gricean maxim violations in 4- to 6-year-old Japanese children and adults. Participants' understanding of the following maxims was assessed: be informative (first maxim of quantity), avoid redundancy (second maxim of quantity), be truthful (maxim of quality), be relevant (maxim of relation), avoid ambiguity (second maxim of manner), and be polite (maxim of politeness). Sensitivity to violations of Gricean maxims increased with age: 4-year-olds' understanding of maxims was near chance, 5-year-olds understood some maxims (the first maxim of quantity and the maxims of quality, relation, and manner), and 6-year-olds and adults understood all maxims. Preschoolers acquired the maxim of relation first and had the greatest difficulty understanding the second maxim of quantity. Children and adults differed in their comprehension of the maxim of politeness. The development of the pragmatic understanding of Gricean maxims and implications for the construction of developmental tasks from early childhood to adulthood are discussed.

  11. Combinatorial optimization theory and algorithms

    CERN Document Server

    Korte, Bernhard

    2018-01-01

    This comprehensive textbook on combinatorial optimization places special emphasis on theoretical results and algorithms with provably good performance, in contrast to heuristics. It is based on numerous courses on combinatorial optimization and specialized topics, mostly at graduate level. This book reviews the fundamentals, covers the classical topics (paths, flows, matching, matroids, NP-completeness, approximation algorithms) in detail, and proceeds to advanced and recent topics, some of which have not appeared in a textbook before. Throughout, it contains complete but concise proofs, and also provides numerous exercises and references. This sixth edition has again been updated, revised, and significantly extended. Among other additions, there are new sections on shallow-light trees, submodular function maximization, smoothed analysis of the knapsack problem, the (ln 4+ɛ)-approximation for Steiner trees, and the VPN theorem. Thus, this book continues to represent the state of the art of combinatorial optimization.

  12. Making the error-controlling algorithm of observable operator models constructive.

    Science.gov (United States)

    Zhao, Ming-Jie; Jaeger, Herbert; Thon, Michael

    2009-12-01

    Observable operator models (OOMs) are a class of models for stochastic processes that properly subsumes the class that can be modeled by finite-dimensional hidden Markov models (HMMs). One of the main advantages of OOMs over HMMs is that they admit asymptotically correct learning algorithms. A series of learning algorithms has been developed, with increasing computational and statistical efficiency, whose recent culmination was the error-controlling (EC) algorithm developed by the first author. The EC algorithm is an iterative, asymptotically correct algorithm that yields (and minimizes) an assured upper bound on the modeling error. The run time is faster by at least one order of magnitude than EM-based HMM learning algorithms and yields significantly more accurate models than the latter. Here we present a significant improvement of the EC algorithm: the constructive error-controlling (CEC) algorithm. CEC inherits from EC the main idea of minimizing an upper bound on the modeling error but is constructive where EC needs iterations. As a consequence, we obtain further gains in learning speed without loss in modeling accuracy.

  13. Application of the OS-EM method to the 123I-IMP ARG method. Comparison between FBP and OS-EM methods

    International Nuclear Information System (INIS)

    Sasaki, Kazuko; Satou, Ayuko; Oomura, Tomomi; Ono, Madoka; Hachiya, Takenori

    2004-01-01

    We investigated the application of the ordered subsets-expectation maximization (OS-EM) method to the 123I-IMP autoradiography (ARG) method to measure regional cerebral blood flow (rCBF). First, the scan time and number of subsets were fixed at 20 min and 16, respectively, and the influence of iteration number on the cross calibration factor (CCF) and the quantitative rCBF values obtained by the ARG method was investigated for iteration numbers of 2, 4, 8, 16, 32, and 90. Next, with the number of iterations set at 4, we compared the scanning times of OS-EM and filtered back projection (FBP). We determined that the CCF values remained at the same level irrespective of iteration number; quantitative rCBF values had no association with iteration number, either. Using the quantitative rCBF values obtained by 20-min scanning with FBP as a standard, the SPECT data collection time could be shortened to 10 min without sacrificing image quality or quantification. Quantitative rCBF obtained by OS-EM was estimated to be higher than that obtained by FBP. (author)
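    The reason 4 iterations with 16 subsets can match many more plain-EM iterations is that OS-EM applies the multiplicative EM correction subset-by-subset, so one pass over the data yields roughly one accelerated update per subset. A minimal illustrative sketch, assuming a dense system matrix A and sinogram counts y (not the clinical implementation):

    ```python
    import numpy as np

    def osem(A, y, n_subsets=16, n_iter=4, eps=1e-12):
        """OS-EM: the MLEM update applied subset-by-subset.
        A : (m, n) system matrix, y : (m,) measured projection counts."""
        m = A.shape[0]
        subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
        x = np.ones(A.shape[1])
        for _ in range(n_iter):
            for idx in subsets:
                As, ys = A[idx], y[idx]
                x *= (As.T @ (ys / (As @ x + eps))) / (As.sum(axis=0) + eps)
        return x
    ```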

  14. DARAL: A Dynamic and Adaptive Routing Algorithm for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Francisco José Estévez

    2016-06-01

    The evolution of Smart City projects is pushing researchers and companies to develop more efficient embedded hardware and more efficient communication technologies. These communication technologies are the focus of this work, which presents a new routing algorithm based on dynamically-allocated sub-networks and node roles. Among its features, the algorithm offers a fast set-up time, a reduced overhead, and a hierarchical organization, which allows for the application of complex management techniques. This work presents a routing algorithm based on dynamically-allocated hierarchical clustering, which uses the link quality indicator as a reference parameter, maximizing the network coverage and minimizing the control message overhead and the convergence time. The test scenario and analysis are based on network density, measured as node degree. The routing algorithm is compared with some of the best known routing algorithms for different scenario densities.

  15. Research reactor loading pattern optimization using estimation of distribution algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, S. [Dept. of Earth Science and Engineering, Applied Modeling and Computation Group AMCG, Imperial College, London, SW7 2AZ (United Kingdom); Ziver, K. [Dept. of Earth Science and Engineering, Applied Modeling and Computation Group AMCG, Imperial College, London, SW7 2AZ (United Kingdom); AMCG Group, RM Consultants, Abingdon (United Kingdom); Carter, J. N.; Pain, C. C.; Eaton, M. D.; Goddard, A. J. H. [Dept. of Earth Science and Engineering, Applied Modeling and Computation Group AMCG, Imperial College, London, SW7 2AZ (United Kingdom); Franklin, S. J.; Phillips, H. J. [Imperial College, Reactor Centre, Silwood Park, Buckhurst Road, Ascot, Berkshire, SL5 7TE (United Kingdom)

    2006-07-01

    A new evolutionary-search-based approach for solving nuclear reactor loading pattern optimization problems is presented, based on Estimation of Distribution Algorithms (EDAs). The optimization technique developed is then applied to the maximization of the effective multiplication factor (K_eff) of the Imperial College CONSORT research reactor (the last remaining civilian research reactor in the United Kingdom). A new elitism-guided searching strategy has been developed and applied to improve the local convergence, together with some problem-dependent information based on 'stand-alone' K_eff with fuel coupling calculations. A comparison study between the EDAs and a Genetic Algorithm with a Heuristic Tie Breaking Crossover operator has shown that the new algorithm is efficient and robust. (authors)
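    The paper's EDA is problem-specific, but the simplest member of the family, the Univariate Marginal Distribution Algorithm (UMDA), conveys the core idea: instead of crossover and mutation, sample candidates from an explicit probability model and re-estimate the model from the best samples. A generic binary-string sketch (all names and parameters are illustrative, not the paper's operator set):

    ```python
    import numpy as np

    def umda(fitness, n_bits, pop_size=100, n_sel=30, n_gen=60, rng=None):
        """Univariate Marginal Distribution Algorithm, the simplest EDA."""
        rng = rng or np.random.default_rng()
        p = np.full(n_bits, 0.5)                   # per-bit probability model
        best, best_f = None, -np.inf
        for _ in range(n_gen):
            pop = (rng.random((pop_size, n_bits)) < p).astype(int)
            f = np.array([fitness(ind) for ind in pop])
            sel = pop[np.argsort(f)[-n_sel:]]      # truncation selection
            p = sel.mean(axis=0).clip(0.05, 0.95)  # re-estimate, keep diversity
            if f.max() > best_f:
                best_f, best = f.max(), pop[f.argmax()].copy()
        return best, best_f
    ```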

  16. Designing lattice structures with maximal nearest-neighbor entanglement

    Energy Technology Data Exchange (ETDEWEB)

    Navarro-Munoz, J C; Lopez-Sandoval, R [Instituto Potosino de Investigacion CientIfica y Tecnologica, Camino a la presa San Jose 2055, 78216 San Luis Potosi (Mexico); Garcia, M E [Theoretische Physik, FB 18, Universitaet Kassel and Center for Interdisciplinary Nanostructure Science and Technology (CINSaT), Heinrich-Plett-Str.40, 34132 Kassel (Germany)

    2009-08-07

    In this paper, we study the numerical optimization of nearest-neighbor concurrence of bipartite one- and two-dimensional lattices, as well as non-bipartite two-dimensional lattices. These systems are described in the framework of a tight-binding Hamiltonian while the optimization of concurrence was performed using genetic algorithms. Our results show that the concurrence of the optimized lattice structures is considerably higher than that of non-optimized systems. In the case of one-dimensional chains, the concurrence increases dramatically when the system begins to dimerize, i.e., it undergoes a structural phase transition (Peierls distortion). This result is consistent with the idea that entanglement is maximal or shows a singularity near quantum phase transitions. Moreover, the optimization of concurrence in two-dimensional bipartite and non-bipartite lattices is achieved when the structures break into smaller subsystems, which are arranged in geometrically distinguishable configurations.

  17. Distributed interference alignment iterative algorithms in symmetric wireless network

    Directory of Open Access Journals (Sweden)

    YANG Jingwen

    2015-02-01

    Interference alignment is a novel interference management technique that has attracted wide attention. It overlaps interference in the same signal subspace at the receiving terminal by precoding, so as to eliminate the influence of interference on the desired signals and let each user achieve the maximum degrees of freedom. In this paper we study three typical algorithms for realizing interference alignment: minimizing the leakage interference, maximizing the signal-to-interference-plus-noise ratio (SINR), and minimizing the mean square error (MSE). All of these algorithms exploit the reciprocity of the wireless network and iterate the precoders between the original network and the reverse network so as to achieve interference alignment. We use the uplink transmit rate to analyze the performance of the three algorithms. Numerical simulation results show the advantages of these algorithms and provide a foundation for further study. The feasibility and future of interference alignment are also discussed.
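    Of the three schemes, leakage minimization is the easiest to sketch: each receiver selects the receive subspace that captures the least interference, and by reciprocity the same computation with transmitter and receiver roles swapped updates the precoders. The NumPy sketch below illustrates that alternating iteration under simplifying assumptions (equal antenna counts, unit per-stream power); it is not the paper's exact formulation.

    ```python
    import numpy as np

    def least_eigvecs(Q, d):
        """d eigenvectors of Hermitian Q with the smallest eigenvalues."""
        _, vecs = np.linalg.eigh(Q)          # eigenvalues in ascending order
        return vecs[:, :d]

    def min_leakage_ia(H, d, n_iter=100):
        """H[k][l]: channel from transmitter l to receiver k (Nr x Nt).
        Returns precoders V[k] (Nt x d) and receive filters U[k] (Nr x d)."""
        K, Nt = len(H), H[0][0].shape[1]
        V = [np.linalg.qr(np.random.randn(Nt, d))[0] for _ in range(K)]
        for _ in range(n_iter):
            # interference covariance at each receiver -> receive filters
            U = [least_eigvecs(sum(H[k][l] @ V[l] @ V[l].conj().T @ H[k][l].conj().T
                                   for l in range(K) if l != k), d)
                 for k in range(K)]
            # reverse network: the roles of U and V swap (reciprocity)
            V = [least_eigvecs(sum(H[l][k].conj().T @ U[l] @ U[l].conj().T @ H[l][k]
                                   for l in range(K) if l != k), d)
                 for k in range(K)]
        return V, U
    ```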

  18. A new iterative algorithm to reconstruct the refractive index.

    Science.gov (United States)

    Liu, Y J; Zhu, P P; Chen, B; Wang, J Y; Yuan, Q X; Huang, W X; Shu, H; Li, E R; Liu, X S; Zhang, K; Ming, H; Wu, Z Y

    2007-06-21

    The latest developments in x-ray imaging are associated with techniques based on phase contrast. However, the image reconstruction procedures demand significant improvements of the traditional methods, and/or new algorithms have to be introduced to take advantage of the high contrast and sensitivity of the new experimental techniques. In this letter, an improved iterative reconstruction algorithm based on the maximum likelihood expectation maximization technique is presented and discussed in order to reconstruct the distribution of the refractive index from data collected by an analyzer-based imaging setup. The technique considered probes the partial derivative of the refractive index with respect to an axis lying in the meridional plane and perpendicular to the propagation direction. Computer simulations confirm the reliability of the proposed algorithm. In addition, the comparison between an analytical reconstruction algorithm and the iterative method is also discussed, together with the convergence characteristics of the latter algorithm. Finally, we show how the proposed algorithm may be applied to reconstruct the distribution of the refractive index of an epoxy cylinder containing small air bubbles of about 300 μm in diameter.

  19. The logic of logistics: theory, algorithms and applications for logistics management

    Directory of Open Access Journals (Sweden)

    Claudio Barbieri da Cunha

    2010-04-01

    In this text the author presents a review of the book "The logic of logistics: theory, algorithms and applications for logistics management", by Julien Bramel and David Simchi-Levi, published by Springer-Verlag in 1997.

  20. 2-Phase NSGA II: An Optimized Reward and Risk Measurements Algorithm in Portfolio Optimization

    Directory of Open Access Journals (Sweden)

    Seyedeh Elham Eftekharian

    2017-11-01

    Portfolio optimization is a serious challenge in financial engineering and has drawn special attention among investors. It has two objectives: to maximize the reward, calculated by expected return, and to minimize the risk, with variance considered as the risk measure. Many real-world constraints, such as the cardinality constraint, ultimately lead to a non-convex search space. Consequently, parametric quadratic programming cannot be applied, and a multi-objective evolutionary algorithm (MOEA) becomes essential. In this paper, a new efficient multi-objective portfolio optimization algorithm called 2-phase NSGA II is developed, and its results are compared with the NSGA II algorithm. It was found that 2-phase NSGA II significantly outperformed the NSGA II algorithm.

  1. Reference Gene Selection in the Desert Plant Eremosparton songoricum

    Directory of Open Access Journals (Sweden)

    Dao-Yuan Zhang

    2012-06-01

    Eremosparton songoricum (Litv.) Vass. (E. songoricum) is a rare and extremely drought-tolerant desert plant that holds promise as a model organism for the identification of genes associated with water deficit stress. Here, we cloned and evaluated the expression of eight candidate reference genes using quantitative real-time reverse transcriptase polymerase chain reactions. The expression of these candidate reference genes was analyzed in a diverse set of 20 samples including various E. songoricum plant tissues exposed to multiple environmental stresses. GeNorm analysis indicated that expression stability varied between the reference genes in the different experimental conditions, but the two most stable reference genes were sufficient for normalization in most conditions. EsEF and Esα-TUB were sufficient for various stress conditions, EsEF and EsACT were suitable for samples of differing germination stages, and EsGAPDH and EsUBQ were most stable across multiple adult tissue samples. The Es18S gene was unsuitable as a reference gene in our analysis. In addition, the expression level of the drought-stress related transcription factor EsDREB2 verified the utility of E. songoricum reference genes and indicated that no single gene was adequate for normalization on its own. This is the first systematic report on the selection of reference genes in E. songoricum, and these data will facilitate future work on gene expression in this species.

  2. On maximal massive 3D supergravity

    OpenAIRE

    Bergshoeff , Eric A; Hohm , Olaf; Rosseel , Jan; Townsend , Paul K

    2010-01-01

    We construct, at the linearized level, the three-dimensional (3D) N = 4 supersymmetric "general massive supergravity" and the maximally supersymmetric N = 8 "new massive supergravity". We also construct the maximally supersymmetric linearized N = 7 topologically massive supergravity, although we expect N = 6 to be maximal at the non-linear level.

  3. A Fast Detection Algorithm for the X-Ray Pulsar Signal

    Directory of Open Access Journals (Sweden)

    Hao Liang

    2017-01-01

    The detection of the X-ray pulsar signal is important for autonomous navigation systems using X-ray pulsars. Under short observation times and a limited number of photons for detection, the noise does not obey a Gaussian distribution, a fact that has received little consideration in the existing literature. In this paper, the model of the X-ray pulsar signal is rebuilt as a nonhomogeneous Poisson process and, under the condition of a fixed false alarm rate, a fast detection algorithm based on maximizing the detection probability is proposed. Simulation results show the effectiveness of the proposed detection algorithm.

  4. Sum Rate Maximization of D2D Communications in Cognitive Radio Network Using Cheating Strategy

    Directory of Open Access Journals (Sweden)

    Yanjing Sun

    2018-01-01

    This paper focuses on a cheating algorithm for device-to-device (D2D) pairs that reuse the uplink channels of cellular users. We are concerned with how D2D pairs are matched with cellular users (CUs) to maximize their sum rate. In contrast with Munkres' algorithm, which gives the optimal matching in terms of maximum throughput, the Gale-Shapley algorithm additionally ensures the stability of the system and achieves a men-optimal stable matching. In our system, D2D pairs play the role of "men", so that each D2D pair is matched to the CU that ranks as high as possible in the D2D pair's preference list. Previous studies have found that, by unilaterally falsifying preference lists in a particular way, some men can get better partners while no man is worse off. We utilize this theory to derive the best cheating strategy for D2D pairs, and we find that acquiring such a strategy requires seeking as many, and as large, cabals as possible. To this end, we develop a cabal-finding algorithm named RHSTLC, and we prove that it reaches Pareto optimality. In comparison with algorithms proposed by related works, the results show that our algorithm can considerably improve the sum rate of D2D pairs.
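    The baseline that the cheating strategy manipulates is the ordinary proposer-optimal Gale-Shapley matching. A compact sketch with D2D pairs as the proposers follows; complete preference lists and equal numbers of pairs and CUs are assumed for simplicity, and the RHSTLC cabal-finding step is not reproduced.

    ```python
    def gale_shapley(d2d_prefs, cu_prefs):
        """Proposer-optimal stable matching (D2D pairs propose).
        d2d_prefs[d]: d's CUs ranked best-first; cu_prefs[c]: likewise.
        Returns a dict mapping CU -> D2D pair."""
        rank = {c: {d: r for r, d in enumerate(prefs)}
                for c, prefs in cu_prefs.items()}
        nxt = {d: 0 for d in d2d_prefs}           # next CU each d proposes to
        match, free = {}, list(d2d_prefs)
        while free:
            d = free.pop()
            c = d2d_prefs[d][nxt[d]]
            nxt[d] += 1
            if c not in match:
                match[c] = d
            elif rank[c][d] < rank[c][match[c]]:  # c prefers the newcomer
                free.append(match[c])
                match[c] = d
            else:
                free.append(d)                    # rejected, keep proposing
        return match
    ```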

  5. MAX-SLNR Precoding Algorithm for Massive MIMO System

    Directory of Open Access Journals (Sweden)

    Jiang Jing

    2016-01-01

    Pilot contamination significantly degrades the performance of massive MIMO systems. In this paper, a downlink precoding algorithm based on the signal-to-leakage-plus-noise ratio (SLNR) criterion is put forward. First, the impact of pilot contamination on the SLNR is analyzed; then the precoding matrix is calculated via the eigenvalue decomposition associated with the SLNR, which not only maximizes the array gain for the target user but also minimizes the impact of pilot contamination and the leakage to users in other cells. Further, a simplified solution is derived in which the impact of pilot contamination can be suppressed using only the large-scale fading coefficients. Simulation results reveal that, under serious pilot contamination, the proposed algorithm avoids the performance loss incurred by conventional massive MIMO precoding algorithms. The proposed algorithm can thus realize the performance gains of massive MIMO and has practical value, since the large-scale fading coefficients are easy to measure and feed back.
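    In its classical single-cell form, the SLNR-maximizing precoder for user k is the dominant generalized eigenvector of the matrix pair (H_k^H H_k, sigma^2 I + sum_{j != k} H_j^H H_j). Below is a minimal sketch of that textbook computation; the paper's pilot-contamination-aware and large-scale-fading simplifications are not reproduced.

    ```python
    import numpy as np
    from scipy.linalg import eigh

    def slnr_precoders(H, noise_var):
        """H[k]: (Nr x Nt) channel of user k. Returns one unit-norm
        precoding vector per user maximizing
        |H_k w|^2 / (noise_var + sum_{j != k} |H_j w|^2)."""
        K, Nt = len(H), H[0].shape[1]
        W = []
        for k in range(K):
            S = H[k].conj().T @ H[k]                 # signal covariance
            L = noise_var * np.eye(Nt) + sum(
                H[j].conj().T @ H[j] for j in range(K) if j != k)
            _, vecs = eigh(S, L)                     # generalized EVD
            w = vecs[:, -1]                          # largest eigenvalue
            W.append(w / np.linalg.norm(w))
        return W
    ```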

  6. Inclusive fitness maximization: An axiomatic approach.

    Science.gov (United States)

    Okasha, Samir; Weymark, John A; Bossert, Walter

    2014-06-07

    Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of quasi-inclusive fitness maximization can be derived from axioms on an individual's 'as if preferences' (binary choices) for the case in which phenotypic effects are additive. Our results help integrate evolutionary theory and rational choice theory, help draw out the behavioural implications of inclusive fitness maximization, and point to a possible way in which evolution could lead organisms to implement it.

  7. Effects of Sorafenib on C-Terminally Truncated Androgen Receptor Variants in Human Prostate Cancer Cells

    Directory of Open Access Journals (Sweden)

    Mark Schrader

    2012-09-01

    Recent evidence suggests that the development of castration resistant prostate cancer (CRPCa) is commonly associated with an aberrant, ligand-independent activation of the androgen receptor (AR). A putative mechanism allowing prostate cancer (PCa) cells to grow under low levels of androgens is the expression of constitutively active, C-terminally truncated ARs lacking the AR ligand-binding domain (LBD). Due to the absence of a LBD, these receptors, termed ARΔLBD, are unable to respond to any form of anti-hormonal therapy. In this study we demonstrate that the multikinase inhibitor sorafenib inhibits AR as well as ARΔLBD signalling in CRPCa cells. This inhibition was paralleled by proteasomal degradation of the AR and ARΔLBD molecules. In line with these observations, maximal antiproliferative effects of sorafenib were achieved in AR- and ARΔLBD-positive PCa cells. The present findings warrant further investigations on sorafenib as an option for the treatment of advanced AR-positive PCa.

  8. A new home energy management algorithm with voltage control in a smart home environment

    International Nuclear Information System (INIS)

    Elma, Onur; Selamogullari, Ugur Savas

    2015-01-01

    Energy management in electrical systems is one of the important issues for energy efficiency and future grid systems. On the residential consumer side, energy management takes the form of a HEM (home energy management) system, which plays a key role in residential demand response applications. In this study, a new HEM algorithm is proposed for smart home environments to reduce peak demand and increase energy efficiency. The proposed algorithm includes a VC (voltage control) methodology to reduce the power consumption of residential appliances so that the shifting of appliances is minimized. The results of a household survey are used to produce representative load profiles for a weekday and for a weekend. Case studies are then completed to test the proposed HEM algorithm in reducing the peak demand in the house. The main aim of the proposed HEM algorithm is to minimize the number of turned-off appliances while decreasing demand, so that customer comfort is maximized. The smart home laboratory at Yildiz Technical University, Istanbul, Turkey is used in the case studies. Experimental results show that the proposed HEM algorithm reduces the peak demand by 17.5% with the voltage control and by 38% with both the voltage control and appliance shifting. - Highlights: • A new HEM (home energy management) algorithm is proposed. • Voltage control in the HEM is introduced as a solution for peak load reduction. • Customer comfort is maximized by minimizing the number of turned-off appliances. • The proposed HEM algorithm is experimentally validated at a smart home laboratory. • A survey is completed to produce typical load profiles of a Turkish family.

  9. Maximal Entanglement in High Energy Physics

    Directory of Open Access Journals (Sweden)

    Alba Cervera-Lierta, José I. Latorre, Juan Rojo, Luca Rottoli

    2017-11-01

    We analyze how maximal entanglement is generated at the fundamental level in QED by studying correlations between helicity states in tree-level scattering processes at high energy. We demonstrate that two mechanisms for the generation of maximal entanglement are at work: (i) $s$-channel processes where the virtual photon carries equal overlaps of the helicities of the final state particles, and (ii) the indistinguishable superposition between $t$- and $u$-channels. We then study whether requiring maximal entanglement constrains the coupling structure of QED and the weak interactions. In the case of photon-electron interactions unconstrained by gauge symmetry, we show how this requirement allows reproducing QED. For $Z$-mediated weak scattering, the maximal entanglement principle leads to non-trivial predictions for the value of the weak mixing angle $\theta_W$. Our results are a first step towards understanding the connections between maximal entanglement and the fundamental symmetries of high-energy physics.

  10. Maximal Inequalities for Dependent Random Variables

    DEFF Research Database (Denmark)

    Hoffmann-Jorgensen, Jorgen

    2016-01-01

    Maximal inequalities play a crucial role in many probabilistic limit theorems; for instance, the law of large numbers, the law of the iterated logarithm, the martingale limit theorem and the central limit theorem. Let X_1, X_2, ... be random variables with partial sums S_k = X_1 + ... + X_k. Then a maximal inequality gives conditions ensuring that the maximal partial sum M_n = max_{1 ≤ k ≤ n} S_k (...)

  11. An ethical justification of profit maximization

    DEFF Research Database (Denmark)

    Koch, Carsten Allan

    2010-01-01

    In much of the literature on business ethics and corporate social responsibility, it is more or less taken for granted that attempts to maximize profits are inherently unethical. The purpose of this paper is to investigate whether an ethical argument can be given in support of profit maximizing behaviour. It is argued that some form of consequential ethics must be applied, and that both profit seeking and profit maximization can be defended from a rule-consequential point of view. It is noted, however, that the result does not apply unconditionally, but requires that certain forms of profit (and utility) maximizing actions are ruled out, e.g., by behavioural norms or formal institutions.

  12. Scheduling algorithms for saving energy and balancing load

    Energy Technology Data Exchange (ETDEWEB)

    Antoniadis, Antonios

    2012-08-03

    In the multi-processor case the competitive factor increases by an additive constant of 1. With respect to load balancing, we consider offline load balancing on identical machines, with the objective of minimizing the current load, for temporary unit-weight jobs. The problem can be seen as coloring n intervals with k colors, such that for each point on the line, the maximal difference between the number of intervals of any two colors is minimal. We prove that a coloring with maximal difference at most one is always possible, and develop a fast polynomial-time algorithm for generating such a coloring. Regarding the online version of the problem, we show that the maximal difference in the size of color classes can become arbitrarily high for any online algorithm. Lastly, we prove that two generalizations of the problem are NP-hard: in the first we generalize from intervals to d-dimensional boxes, while in the second we consider multiple-intervals, i.e., specific subsets of disjoint intervals that must receive the same color.

  13. Fixed Parameter Evolutionary Algorithms and Maximum Leaf Spanning Trees: A Matter of Mutations

    DEFF Research Database (Denmark)

    Kratsch, Stefan; Lehre, Per Kristian; Neumann, Frank

    2011-01-01

    Evolutionary algorithms have been shown to be very successful for a wide range of NP-hard combinatorial optimization problems. We investigate the NP-hard problem of computing a spanning tree that has a maximal number of leaves by evolutionary algorithms in the context of fixed parameter tractability. Considering two common mutation operators, we show that an operator related to spanning tree problems leads to an FPT running time, in contrast to a general mutation operator that does not have this property.

  14. Software reliability growth models with normal failure time distributions

    International Nuclear Information System (INIS)

    Okamura, Hiroyuki; Dohi, Tadashi; Osaki, Shunji

    2013-01-01

    This paper proposes software reliability growth models (SRGMs) in which the software failure time follows a normal distribution. The proposed model is mathematically tractable and has sufficient ability to fit software failure data. In particular, we consider the parameter estimation algorithm for the SRGM with normal distribution. The developed algorithm is based on an EM (expectation-maximization) algorithm and is quite simple to implement as a software application. Numerical experiments are devoted to investigating the fitting ability of the SRGMs with normal distribution using 16 sets of failure time data collected in real software projects.

  15. A probabilistic multi objective CLSC model with Genetic algorithm-ε_Constraint approach

    Directory of Open Access Journals (Sweden)

    Alireza TaheriMoghadam

    2014-05-01

    In this paper an uncertain multi-objective closed-loop supply chain is developed. The first objective function is maximizing the total profit. The second objective function is minimizing the use of raw materials; in other words, it is maximizing the amount of remanufacturing and recycling. A genetic algorithm is used for optimization, and the epsilon-constraint method is used to find the Pareto-optimal front. Finally, a numerical example is solved with the proposed approach and the performance of the model is evaluated for different problem sizes. The results show that this approach is effective and useful for managerial decisions.

  16. A specimen of Sorex cfr. samniticus in Barn Owl's pellets from the Murge plateau (Apulia, Italy) / Su di un Sorex cfr. samniticus (Insectivora, Soricidae) rinvenuto in borre di Tyto alba delle Murge (Puglia, Italia)

    Directory of Open Access Journals (Sweden)

    Giovanni Ferrara

    1992-07-01

    In a lot of Barn Owl's pellets from the Murge plateau a specimen of Sorex sp. was detected. Thanks to some morphological and morphometrical features, the cranial bones can be tentatively attributed to Sorex samniticus Altobello, 1926. The genus Sorex had not previously been recorded in the fauna of Apulia south of the Gargano district; the origin and significance of the above record is briefly discussed, the actual presence of a natural population of Sorex in the Murge being not yet proved.

  17. Confronting Decision Cliffs: Diagnostic Assessment of Multi-Objective Evolutionary Algorithms' Performance for Addressing Uncertain Environmental Thresholds

    Science.gov (United States)

    Ward, V. L.; Singh, R.; Reed, P. M.; Keller, K.

    2014-12-01

    As water resources problems typically involve several stakeholders with conflicting objectives, multi-objective evolutionary algorithms (MOEAs) are now key tools for understanding management tradeoffs. Given the growing complexity of water planning problems, it is important to establish whether an algorithm can consistently perform well on a given class of problems. This knowledge allows the decision analyst to focus on eliciting and evaluating appropriate problem formulations. This study proposes a multi-objective adaptation of the classic environmental economics "Lake Problem" as a computationally simple but mathematically challenging MOEA benchmarking problem. The lake problem abstracts a fictional town on a lake which hopes to maximize its economic benefit without degrading the lake's water quality to a eutrophic (polluted) state through excessive phosphorus loading. The problem poses the challenge of maintaining economic activity while confronting the uncertainty of potentially crossing a nonlinear and potentially irreversible pollution threshold beyond which the lake is eutrophic. Objectives for optimization are maximizing economic benefit from lake pollution, maximizing water quality, maximizing the reliability of remaining below the environmental threshold, and minimizing the probability that the town will have to drastically change pollution policies in any given year. The multi-objective formulation incorporates uncertainty with a stochastic phosphorus inflow abstracting non-point source pollution. We performed comprehensive diagnostics using six algorithms: Borg, MOEA/D, ε-MOEA, ε-NSGA-II, GDE3, and NSGA-II, to ascertain their controllability, reliability, efficiency, and effectiveness. The lake problem abstracts elements of many current water resources and climate related management applications where there is the potential for crossing irreversible, nonlinear thresholds. We show that many modern MOEAs can fail on this test problem, indicating its suitability as a benchmark.
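    The lake model used in this line of work is a one-dimensional stochastic difference equation for the phosphorus concentration, with a sigmoid recycling term that creates the irreversible threshold. The simulation sketch below is illustrative only; the parameter values (b, q, and the lognormal inflow statistics) are placeholders, not the study's calibrated settings.

    ```python
    import numpy as np

    def simulate_lake(a, b=0.42, q=2.0, mu=0.03, sd=0.003,
                      n_years=100, rng=None):
        """Carpenter-style lake dynamics (illustrative parameters):
            X[t+1] = X[t] + a[t] + Y[t] + X[t]^q/(1+X[t]^q) - b*X[t]
        a : controllable yearly phosphorus loading (length n_years)
        Y : lognormal natural inflow abstracting non-point pollution."""
        rng = rng or np.random.default_rng()
        # lognormal parameters giving arithmetic mean mu and std sd
        s2 = np.log(1 + sd**2 / mu**2)
        Y = rng.lognormal(np.log(mu) - s2 / 2, np.sqrt(s2), n_years)
        X = np.zeros(n_years + 1)
        for t in range(n_years):
            X[t + 1] = X[t] + a[t] + Y[t] + X[t]**q / (1 + X[t]**q) - b * X[t]
        return X
    ```

    Objectives such as reliability (the fraction of years spent below the eutrophic threshold) are then statistics computed over many such realizations.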

  18. Effect of plant extracts of Polygonum hydropiperoides, Solanum nigrum and Calliandra pittieri on the fall armyworm (Spodoptera frugiperda)

    Directory of Open Access Journals (Sweden)

    Lizarazo H. Karol

    2008-12-01

    The fall armyworm Spodoptera frugiperda is one of the pests that most affect crops in the Sumapaz region (Cundinamarca, Colombia). It is currently controlled mainly by applying synthetic chemical products; however, the application of plant extracts arises as an alternative with lower environmental impact. This type of control is used because plants contain secondary metabolites that can inhibit insect development. For this reason, the present study evaluated the insecticidal and antifeedant effects of plant extracts of barbasco Polygonum hydropiperoides (Polygonaceae), carbonero Calliandra pittieri (Mimosaceae) and black nightshade Solanum nigrum (Solanaceae) on S. frugiperda larvae of the maize biotype. A mass rearing of the insect was established in the laboratory using a natural diet of maize leaves. Plant extracts were then obtained using solvents of high polarity (water and ethanol) and medium polarity (dichloromethane), and these were applied to second-instar larvae. The most outstanding results were obtained with the P. hydropiperoides extracts prepared with dichloromethane at their different doses, which achieved 100% mortality 12 days after application and an antifeedant effect represented by maize foliage consumption below 4%, effects similar to those of the commercial control (Chlorpyrifos).

  19. Sensor Data Quality and Angular Rate Down-Selection Algorithms on SLS EM-1

    Science.gov (United States)

    Park, Thomas; Smith, Austin; Oliver, T. Emerson

    2018-01-01

    The NASA Space Launch System Block 1 launch vehicle is equipped with an Inertial Navigation System (INS) and multiple Rate Gyro Assemblies (RGA) that are used in the Guidance, Navigation, and Control (GN&C) algorithms. The INS provides the inertial position, velocity, and attitude of the vehicle along with both angular rate and specific force measurements. Additionally, multiple sets of co-located rate gyros supply angular rate data. The collection of angular rate data, taken along the launch vehicle, is used to separate out vehicle motion from flexible body dynamics. Since the system architecture uses redundant sensors, the capability was developed to evaluate the health (or validity) of the independent measurements. A suite of Sensor Data Quality (SDQ) algorithms is responsible for assessing the angular rate data from the redundant sensors. When failures are detected, SDQ will take the appropriate action and disqualify or remove faulted sensors from forward processing. Additionally, the SDQ algorithms contain logic for down-selecting the angular rate data used by the GNC software from the set of healthy measurements. This paper explores the trades and analyses that were performed in selecting a set of robust fault-detection algorithms included in the GN&C flight software. These trades included both an assessment of hardware-provided health and status data as well as an evaluation of different algorithms based on time-to-detection, type of failures detected, and probability of detecting false positives. We then provide an overview of the algorithms used for both fault-detection and measurement down selection. We next discuss the role of trajectory design, flexible-body models, and vehicle response to off-nominal conditions in setting the detection thresholds. Lastly, we present lessons learned from software integration and hardware-in-the-loop testing.
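    The flight algorithms themselves are not given in the abstract, but the basic redundancy-management pattern can be sketched: compare each gyro against a consensus of the currently qualified sensors, disqualify outliers, and average the survivors. The toy below is a hypothetical illustration, not the SLS flight software (real systems add persistence counters, hardware health words, and rate-group timing).

    ```python
    import numpy as np

    def select_rates(measurements, valid, threshold):
        """Toy fault detection / down-selection for redundant rate gyros.
        measurements : (n_sensors, 3) angular-rate vectors for one frame
        valid        : boolean mask of currently qualified sensors
        threshold    : max allowed deviation (rad/s) from the consensus"""
        consensus = np.median(measurements[valid], axis=0)
        dev = np.linalg.norm(measurements - consensus, axis=1)
        valid = valid & (dev <= threshold)       # disqualify outliers
        return measurements[valid].mean(axis=0), valid
    ```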

  20. Rapid Development of Microsatellite Markers with 454 Pyrosequencing in a Vulnerable Fish, the Mottled Skate, Raja pulchra

    Directory of Open Access Journals (Sweden)

    Jung-Ha Kang

    2012-06-01

    The mottled skate, Raja pulchra, is an economically valuable fish. However, due to a severe population decline, it is listed as a vulnerable species by the International Union for Conservation of Nature. To analyze its genetic structure and diversity, microsatellite markers were developed using 454 pyrosequencing. A total of 17,033 reads containing dinucleotide microsatellite repeat units (mean, 487 base pairs) were identified from 453,549 reads. Among 32 loci containing more than nine repeat units, 20 primer sets (62%) produced strong PCR products, of which 14 were polymorphic. In an analysis of 60 individuals from two R. pulchra populations, the number of alleles per locus ranged from 1 to 10, and the mean allelic richness was 4.7. No linkage disequilibrium was found between any pair of loci, indicating that the markers were independent. The Hardy–Weinberg equilibrium test showed significant deviation in two of the 28 single loci after sequential Bonferroni's correction. Using 11 primer sets, cross-species amplification was demonstrated in nine related species from four families within two classes. Among the 11 loci amplified from three other Rajidae family species, three loci were polymorphic. A monomorphic locus was amplified in all three Rajidae family species and the Dasyatidae family. Two Rajidae polymorphic loci amplified monomorphic target DNAs in four species belonging to the Carcharhiniformes, and another was polymorphic in two Carcharhiniformes species.

  1. Maximum entropy algorithm and its implementation for the neutral beam profile measurement

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seung Wook; Cho, Gyu Seong [Korea Advanced Institute of Science and Technology, Taejon (Korea, Republic of); Cho, Yong Sub [Korea Atomic Energy Research Institute, Taejon (Korea, Republic of)

    1998-12-31

    A tomography algorithm that maximizes the entropy of the image, using the Lagrangian multiplier technique and the conjugate gradient method, has been designed for the measurement of the 2D spatial distribution of intense neutral beams for the KSTAR NBI (Korea Superconducting Tokamak Advanced Research Neutral Beam Injector), which is now being designed. A possible detection system was assumed and a numerical simulation was implemented to test the reconstruction quality of given beam profiles. The algorithm has good applicability to sparse projection data and thus can be used for neutral beam tomography. 8 refs., 3 figs. (Author)
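    The paper couples a Lagrange-multiplier formulation with conjugate gradients; as a compact stand-in that conveys the same goal, the multiplicative ART (MART) update below converges, for consistent data, to the feasible image of maximum entropy. It is a sketch under assumed inputs (ray-weight matrix A with entries in [0, 1]), not the authors' reconstruction code.

    ```python
    import numpy as np

    def mart(A, y, n_iter=50, relax=1.0, eps=1e-12):
        """Multiplicative ART, a maximum-entropy-seeking reconstruction.
        A : (m, n) ray weights in [0, 1], y : (m,) measured projections."""
        x = np.ones(A.shape[1])
        for _ in range(n_iter):
            for i in range(A.shape[0]):           # one ray at a time
                ratio = y[i] / (A[i] @ x + eps)
                x *= ratio ** (relax * A[i])      # multiplicative correction
        return x
    ```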


  3. Sparse Variational Bayesian SAGE Algorithm With Application to the Estimation of Multipath Wireless Channels

    DEFF Research Database (Denmark)

    Shutin, Dmitriy; Fleury, Bernard Henri

    2011-01-01

    In this paper, we develop a sparse variational Bayesian (VB) extension of the space-alternating generalized expectation-maximization (SAGE) algorithm for the high resolution estimation of the parameters of relevant multipath components in the response of frequency and spatially selective wireless channels. The application context of the algorithm considered in this contribution is parameter estimation from channel sounding measurements for radio channel modeling purposes. The new sparse VB-SAGE algorithm extends the classical SAGE algorithm in two respects: i) by monotonically minimizing the variational free energy and ii) by incorporating parametric sparsity priors for the weights of the multipath components. We revisit the Gaussian sparsity priors within the sparse VB-SAGE framework and extend the results by considering Laplace priors. The structure of the VB-SAGE algorithm allows for an analytical stability analysis of the update expressions.


  5. Does mental exertion alter maximal muscle activation?

    Directory of Open Access Journals (Sweden)

    Vianney Rozand

    2014-09-01

    Mental exertion is known to impair endurance performance, but its effects on neuromuscular function remain unclear. The purpose of this study was to test the hypothesis that mental exertion reduces torque and muscle activation during intermittent maximal voluntary contractions of the knee extensors. Ten subjects performed, in randomized order, three separate mental exertion conditions lasting 27 minutes each: (i) high mental exertion (incongruent Stroop task), (ii) moderate mental exertion (congruent Stroop task), and (iii) low mental exertion (watching a movie). In each condition, mental exertion was combined with ten intermittent maximal voluntary contractions of the knee extensor muscles (one maximal voluntary contraction every 3 minutes). Neuromuscular function was assessed using electrical nerve stimulation. Maximal voluntary torque, maximal muscle activation and other neuromuscular parameters were similar across mental exertion conditions and did not change over time. These findings suggest that mental exertion does not affect neuromuscular function during intermittent maximal voluntary contractions of the knee extensors.

  6. Experimental validation of a real time energy management system for microgrids in islanded mode using a local day-ahead electricity market and MINLP

    International Nuclear Information System (INIS)

    Marzband, Mousa; Sumper, Andreas; Domínguez-García, José Luis; Gumara-Ferret, Ramon

    2013-01-01

    Highlights: • An algorithm is developed to enhance microgrid performance. • A local energy market cost model is proposed to obtain the cheapest price. • Several real technical and market scenarios are considered in the study. • Simulation and experimental results demonstrate a significant reduction in cost. - Abstract: Energy management systems (EMS) are vital supervisory control tools used to optimally operate and schedule microgrids (MG). In this paper, an EMS algorithm based on mixed-integer nonlinear programming (MINLP) is presented for an MG in islanded mode, considering different scenarios. A local energy market (LEM) is also proposed within this EMS to obtain the cheapest price, maximizing the utilization of distributed energy resources. The proposed energy management is based on the LEM and allows scheduling the MG generation with minimal information shared by the generation units. Load demand management is carried out using the demand response concept to improve reliability and efficiency as well as to reduce the total cost of energy (COE). Simulations are performed with real data to test the performance and accuracy of the proposed algorithm. The proposed algorithm is experimentally tested to evaluate processing speed as well as to validate the results obtained from the simulation setup on a real MG testbed. The results of the LEM-based EMS-MINLP are compared with a conventional LEM-based EMS. Simulation and experimental results show the effectiveness of the proposed algorithm, which provides a reduction of 15% in COE in comparison with the conventional EMS.

  7. Hybrid Proximal-Point Methods for Zeros of Maximal Monotone Operators, Variational Inequalities and Mixed Equilibrium Problems

    Directory of Open Access Journals (Sweden)

    Kriengsak Wattanawitoon

    2011-01-01

    We prove strong and weak convergence theorems for modified hybrid proximal-point algorithms for finding a common element of the zero-point set of a maximal monotone operator, the set of solutions of equilibrium problems, and the set of solutions of a variational inequality for an inverse strongly monotone operator in a Banach space, under different conditions. Moreover, applications to complementarity problems are given. Our results modify and improve the recently announced results of Li and Song (2008) and many other authors.

  8. Results on the neutron energy distribution measurements at the RECH-1 Chilean nuclear reactor

    Energy Technology Data Exchange (ETDEWEB)

    Aguilera, P., E-mail: paguilera87@gmail.com; Romero-Barrientos, J. [Comisión Chilena de Energía Nuclear, Nueva Bilbao 12501, La Reina, Santiago (Chile); Universidad de Chile, Dpto. de Física, Facultad de Ciencias, Las Palmeras 3425, Nuñoa, Santiago (Chile); Molina, F. [Comisión Chilena de Energía Nuclear, Nueva Bilbao 12501, La Reina, Santiago (Chile)

    2016-07-07

    Neutron activation experiments have been performed at the RECH-1 Chilean Nuclear Reactor to measure its neutron flux energy distribution. Samples of pure elements were activated to obtain the saturation activities for each reaction. Using γ-ray spectroscopy we identified and measured the activity of the reaction product nuclei, obtaining the saturation activities of 20 reactions. GEANT4 and MCNP were used to compute the self-shielding factors that correct the cross section for each element. With the Expectation-Maximization (EM) algorithm we were able to unfold the neutron flux energy distribution at the dry tube position, near the RECH-1 core. In this work, we present the unfolding results obtained using the EM algorithm.

  9. A New Natural Lactone from Dimocarpus longan Lour. Seeds

    Directory of Open Access Journals (Sweden)

    Zhongjun Li

    2012-08-01

    A new natural product named longanlactone was isolated from Dimocarpus longan Lour. seeds. Its structure was determined as 3-(2-acetyl-1H-pyrrol-1-yl)-5-(prop-2-yn-1-yl)dihydrofuran-2(3H)-one by spectroscopic methods and HRESIMS.

  10. Purification, Characterization and Antioxidant Activities in Vitro and in Vivo of the Polysaccharides from Boletus edulis Bull.

    Directory of Open Access Journals (Sweden)

    Yijun Fan

    2012-07-01

    A water-soluble polysaccharide (BEBP) was extracted from Boletus edulis Bull. using hot water extraction followed by ethanol precipitation. The polysaccharide BEBP was further purified by chromatography on a DEAE-cellulose column, giving three major polysaccharide fractions termed BEBP-1, BEBP-2 and BEBP-3. In the next experiment, the average molecular weight (Mw), IR spectrum and monosaccharide composition of the three polysaccharide fractions were determined. The evaluation of antioxidant activities both in vitro and in vivo suggested that BEBP-3 had good potential antioxidant activity, and should be explored as a novel potential antioxidant.

  11. Traffic sharing algorithms for hybrid mobile networks

    Science.gov (United States)

    Arcand, S.; Murthy, K. M. S.; Hafez, R.

    1995-01-01

    In a hybrid (terrestrial + satellite) mobile personal communications network environment, a large satellite footprint (supercell) overlays a large number of smaller, contiguous terrestrial cells. We assume that users have either a terrestrial-only single mode terminal (SMT) or a terrestrial/satellite dual mode terminal (DMT), and the ratio of DMTs to total terminals is defined as gamma. It is assumed that call assignments to, and handovers between, terrestrial cells and satellite supercells take place dynamically when necessary. The objectives of this paper are twofold: (1) to propose and define a class of traffic sharing algorithms that manage terrestrial and satellite network resources efficiently by handling call handovers dynamically, and (2) to analyze and evaluate the algorithms by maximizing the traffic load handling capability (defined in erl/cell) over a wide range of terminal ratios (gamma), given an acceptable range of blocking probabilities. Two of the algorithms (G & S) in the proposed class perform extremely well over a wide range of gamma.

  12. A comparative study on using meta-heuristic algorithms for road maintenance planning: Insights from field study in a developing country

    Directory of Open Access Journals (Sweden)

    Ali Gerami Matin

    2017-10-01

    Optimized road maintenance planning seeks solutions that minimize the life-cycle cost of a road network and concurrently maximize pavement condition. Aiming at proposing an optimal set of road maintenance solutions, robust meta-heuristic algorithms are used in this research. Two main optimization techniques are applied: single-objective and multi-objective optimization. Genetic algorithms (GA), particle swarm optimization (PSO), and a combination of genetic algorithm and particle swarm optimization (GAPSO) are used as single-objective techniques, while the non-dominated sorting genetic algorithm II (NSGAII) and multi-objective particle swarm optimization (MOPSO), which are suited to solving computationally complex, large-size optimization problems, are applied and compared as multi-objective techniques. A real case study from the rural transportation network of Iran is employed to illustrate the sufficiency of the optimum algorithm. The optimization model is formulated in such a way that a cost-effective maintenance strategy is reached while preserving the performance level of the road network at a desirable level, so the objective functions are pavement performance maximization and maintenance cost minimization. It is concluded that the multi-objective algorithms, NSGAII and MOPSO, performed better than the single-objective algorithms due to their capability to balance both objectives, and that among the multi-objective algorithms NSGAII provided the optimum solution for road maintenance planning.
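    The building block shared by NSGAII and MOPSO-type methods is non-dominated sorting of candidate plans under the two objectives (e.g., minimize cost and minimize negated performance, to fit a minimization convention). A self-contained sketch:

    ```python
    def dominates(a, b):
        """Minimization convention: a dominates b if it is no worse in all
        objectives and strictly better in at least one."""
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))

    def non_dominated_sort(objs):
        """objs: list of objective tuples, e.g. (cost, -performance).
        Returns fronts as index lists; front 0 is the Pareto set."""
        n = len(objs)
        dominated = [set() for _ in range(n)]   # solutions that i dominates
        count = [0] * n                         # how many solutions dominate i
        for i in range(n):
            for j in range(n):
                if i != j and dominates(objs[i], objs[j]):
                    dominated[i].add(j)
                    count[j] += 1
        fronts, front = [], [i for i in range(n) if count[i] == 0]
        while front:
            fronts.append(front)
            nxt = []
            for i in front:
                for j in dominated[i]:
                    count[j] -= 1
                    if count[j] == 0:
                        nxt.append(j)
            front = nxt
        return fronts
    ```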

  13. On maximal surfaces in asymptotically flat space-times

    International Nuclear Information System (INIS)

    Bartnik, R.; Chrusciel, P.T.; O Murchadha, N.

    1990-01-01

    Existence of maximal and 'almost maximal' hypersurfaces in asymptotically flat space-times is established under boundary conditions weaker than those considered previously. We show in particular that every vacuum evolution of asymptotically flat data for the Einstein equations can be foliated by slices maximal outside a spatially compact set, and that every (strictly) stationary asymptotically flat space-time can be foliated by maximal hypersurfaces. Amongst other uniqueness results, we show that maximal hypersurfaces can be used to 'partially fix' an asymptotic Poincare group. (orig.)

  14. Angiostrongylus vasorum in red foxes (Vulpes vulpes) and badgers (Meles meles) from Central and Northern Italy

    Directory of Open Access Journals (Sweden)

    Marta Magi

    2010-06-01

    During 2004-2005 and 2007-2008, 189 foxes (Vulpes vulpes) and 6 badgers (Meles meles) were collected in different areas of Central and Northern Italy (Piedmont, Liguria and Tuscany) and examined for Angiostrongylus vasorum infection. The prevalence of the infection was significantly different in the areas considered, with the highest values in the district of Imperia (80%, Liguria) and in Montezemolo (70%, southern Piedmont); the prevalence in Tuscany was 7%. One badger collected in the area of Imperia turned out to be infected, representing the first report of the parasite in this species in Italy. Further studies are needed to evaluate the role played by fox populations as reservoirs of infection and the probability of its spreading to domestic dogs.

    doi:10.4404/hystrix-20.2-4442

  15. Insulin resistance and maximal oxygen uptake

    DEFF Research Database (Denmark)

    Seibaek, Marie; Vestergaard, Henrik; Burchardt, Hans

    2003-01-01

    BACKGROUND: Type 2 diabetes, coronary atherosclerosis, and physical fitness all correlate with insulin resistance, but the relative importance of each component is unknown. HYPOTHESIS: This study was undertaken to determine the relationship between insulin resistance, maximal oxygen uptake, and the presence of either diabetes or ischemic heart disease. METHODS: The study population comprised 33 patients with and without diabetes and ischemic heart disease. Insulin resistance was measured by a hyperinsulinemic euglycemic clamp; maximal oxygen uptake was measured during a bicycle exercise test. RESULTS: There was a strong correlation between maximal oxygen uptake and insulin-stimulated glucose uptake (r = 0.7, p = 0.001), and maximal oxygen uptake was the only factor of importance for determining insulin sensitivity in a model which also included the presence of diabetes and ischemic heart disease. CONCLUSION: (...)

  16. Desempenho de cultivares de melão rendilhado em função do sistema de cultivo / Performance of net melon cultivars depending on the cultivation system

    Directory of Open Access Journals (Sweden)

    Pablo F Vargas

    2008-06-01

    The objective of this work was to evaluate the productive characteristics of five net melon cultivars under two cultivation systems. The experiment was installed in a greenhouse at UNESP-FCAV, in Jaboticabal-SP, Brazil, from November/05 to February/06. The experimental design was randomized complete blocks in a 5 x 2 factorial scheme with four replications. The factors evaluated were five net melon hybrids (Maxim, Bônus nº2, Shinju 200, Fantasy and Louis) and two cultivation systems (in soil and in a substrate based on coconut husk fiber). After harvest, the following characteristics were evaluated: production per plant (kg plant-1); transverse (DTF) and longitudinal (DLF) fruit diameter, in mm; fruit shape index (IFF); transverse (DTL) and longitudinal (DLL) locule diameter, in mm; locule shape index (IFL); peduncle insertion diameter (DIP), in mm; and mesocarp thickness (EM), in mm. No significant interaction between the evaluated factors was observed. Cultivation in substrate provided higher yield per plant than cultivation in soil (2.51 and 1.52 kg plant-1, respectively), with the cultivar Fantasy (2.44 kg plant-1) showing the best performance among the cultivars, not differing from Louis and Maxim. For the characteristics DIP, DTL and EM, better performances were observed in plants grown in substrate. For the characteristics DTF, DLF, IFF and DLL no differences were found between the cultivation systems. Thus, cultivation in substrate stood out relative to cultivation in soil, with the cultivar Fantasy showing better performance compared to Shinju 200 and Bônus nº2.

  17. Semi-supervised Learning for Phenotyping Tasks.

    Science.gov (United States)

    Dligach, Dmitriy; Miller, Timothy; Savova, Guergana K

    2015-01-01

    Supervised learning is the dominant approach to automatic electronic health records-based phenotyping, but it is expensive due to the cost of manual chart review. Semi-supervised learning takes advantage of both scarce labeled and plentiful unlabeled data. In this work, we study a family of semi-supervised learning algorithms based on Expectation Maximization (EM) in the context of several phenotyping tasks. We first experiment with the basic EM algorithm. When the modeling assumptions are violated, basic EM leads to inaccurate parameter estimation. Augmented EM attenuates this shortcoming by introducing a weighting factor that downweights the unlabeled data. Cross-validation does not always lead to the best setting of the weighting factor and other heuristic methods may be preferred. We show that accurate phenotyping models can be trained with only a few hundred labeled (and a large number of unlabeled) examples, potentially providing substantial savings in the amount of the required manual chart review.
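
    As a concrete illustration of the weighting idea, the following is a minimal Python sketch of augmented-EM training for a two-class Bernoulli naive Bayes model, where a factor lam downweights the unlabeled-data statistics. The model choice, features, and names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def augmented_em_nb(X_l, y_l, X_u, lam=0.1, n_iter=50):
    """Semi-supervised Bernoulli naive Bayes trained with augmented EM.

    Unlabeled sufficient statistics are downweighted by `lam`:
    lam=1 recovers basic EM, lam=0 ignores the unlabeled data.
    """
    n_classes = 2
    # Initialize parameters from the labeled data only.
    prior = np.array([(y_l == c).mean() for c in range(n_classes)])
    theta = np.array([X_l[y_l == c].mean(axis=0) for c in range(n_classes)])
    theta = np.clip(theta, 1e-3, 1 - 1e-3)
    for _ in range(n_iter):
        # E-step: class responsibilities for the unlabeled examples.
        log_p = (np.log(prior) + X_u @ np.log(theta).T
                 + (1 - X_u) @ np.log(1 - theta).T)
        resp = np.exp(log_p - log_p.max(axis=1, keepdims=True))
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: labeled counts plus downweighted unlabeled counts,
        # with Laplace smoothing to keep estimates off the boundary.
        for c in range(n_classes):
            w_l = (y_l == c).astype(float)
            n_c = w_l.sum() + lam * resp[:, c].sum()
            prior[c] = n_c / (len(y_l) + lam * len(X_u))
            theta[c] = (w_l @ X_l + lam * (resp[:, c] @ X_u) + 1.0) / (n_c + 2.0)
    return prior, theta
```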

  18. Maximize Minimum Utility Function of Fractional Cloud Computing System Based on Search Algorithm Utilizing the Mittag-Leffler Sum

    Directory of Open Access Journals (Sweden)

    Rabha W. Ibrahim

    2018-01-01

    Full Text Available The maximum min utility function (MMUF) problem is an important representative of a large class of cloud computing systems (CCS), with numerous applications in practice, especially in economy and industry. This paper introduces an effective solution-based search (SBS) algorithm for solving the MMUF problem. First, we suggest a new formula for the utility function in terms of the capacity of the cloud. We formulate the capacity in CCS by using a fractional diffeo-integral equation, which usually describes the flow of CCS. The new formula of the utility function modifies recent active utility functions. The suggested technique first creates a high-quality initial solution by eliminating the less promising components, and then improves the quality of the achieved solution by the summation search solution (SSS). This method uses the Mittag-Leffler sum as a hash function to determine the position of the agent. Experimental results on instances commonly utilized in the literature demonstrate that the proposed algorithm competes favorably with the state-of-the-art algorithms, both in terms of solution quality and computational efficiency.

  19. Maximal network reliability for a stochastic power transmission network

    International Nuclear Information System (INIS)

    Lin, Yi-Kuei; Yeh, Cheng-Ta

    2011-01-01

    Many studies regarded a power transmission network as a binary-state network and constructed it with several arcs and vertices to evaluate network reliability. In practice, the power transmission network should be stochastic because each arc (transmission line) combined with several physical lines is multistate. Network reliability is the probability that the network can transmit d units of electric power from a power plant (source) to a high voltage substation at a specific area (sink). This study focuses on searching for the optimal transmission line assignment to the power transmission network such that network reliability is maximized. A genetic algorithm based method integrating the minimal paths and the Recursive Sum of Disjoint Products is developed to solve this assignment problem. A real power transmission network is adopted to demonstrate the computational efficiency of the proposed method while comparing with the random solution generation approach.

  20. On Maximal Hard-Core Thinnings of Stationary Particle Processes

    Science.gov (United States)

    Hirsch, Christian; Last, Günter

    2018-02-01

    The present paper studies existence and distributional uniqueness of subclasses of stationary hard-core particle systems arising as thinnings of stationary particle processes. These subclasses are defined by natural maximality criteria. We investigate two specific criteria, one related to the intensity of the hard-core particle process, the other one being a local optimality criterion on the level of realizations. In fact, the criteria are equivalent under suitable moment conditions. We show that stationary hard-core thinnings satisfying such criteria exist and are frequently distributionally unique. More precisely, distributional uniqueness holds in subcritical and barely supercritical regimes of continuum percolation. Additionally, based on the analysis of a specific example, we argue that fluctuations in grain sizes can play an important role for establishing distributional uniqueness at high intensities. Finally, we provide a family of algorithmically constructible approximations whose volume fractions are arbitrarily close to the maximum.

  1. Assessment of Genetic Fidelity in Rauvolfia serpentina Plantlets Grown from Synthetic (Encapsulated) Seeds Following in Vitro Storage at 4 °C

    Directory of Open Access Journals (Sweden)

    Mohammad Anis

    2012-05-01

    Full Text Available An efficient method was developed for plant regeneration and establishment from alginate-encapsulated synthetic seeds of Rauvolfia serpentina. Synthetic seeds were produced using in vitro proliferated microshoots upon complexation with 3% sodium alginate prepared in Lloyd and McCown woody plant medium (WPM) and 100 mM calcium chloride. The regrowth ability of encapsulated nodal segments was evaluated after storage at 4 °C for 0, 1, 2, 4, 6 and 8 weeks and compared with non-encapsulated buds. The effects of different media, viz. Murashige and Skoog medium, Lloyd and McCown woody plant medium, Gamborg's B5 medium and Schenk and Hildebrandt medium, on conversion into plantlets were also investigated. The maximum frequency of conversion into plantlets from encapsulated nodal segments stored at 4 °C for 4 weeks was achieved on woody plant medium supplemented with 5.0 μM BA and 1.0 μM NAA. Rooting of plantlets was achieved in half-strength Murashige and Skoog liquid medium containing 0.5 μM indole-3-acetic acid (IAA) on filter paper bridges. Plantlets obtained from stored synseeds were hardened, established successfully ex vitro, and were morphologically similar to each other as well as to their mother plant. The genetic fidelity of Rauvolfia clones raised from synthetic seeds following four weeks of storage at 4 °C was assessed using random amplified polymorphic DNA (RAPD) and inter-simple sequence repeat (ISSR) markers. All the RAPD and ISSR profiles from the regenerated plantlets were monomorphic and comparable to the mother plant, which confirms the genetic stability among the clones. This synseed protocol could be useful for establishing a particular system for conservation, short-term storage and production of genetically identical and stable plants before release for commercial purposes.

  2. POLITENESS MAXIM OF MAIN CHARACTER IN SECRET FORGIVEN

    Directory of Open Access Journals (Sweden)

    Sang Ayu Isnu Maharani

    2017-06-01

    Full Text Available The maxim of politeness is an interesting subject to discuss, since politeness is instilled in us from childhood. We are obliged to be polite to everyone, whether in speaking or in acting. Somehow we manage to show politeness in our spoken expression even though our intention might not be so polite; for example, we must appreciate others' opinions although we object to them. In this article the analysis of politeness is based on the maxims proposed by Leech, who distinguished six types of politeness maxim. The discussion shows that the main characters (Kristen and Kami) use all types of maxim in their conversations. The most commonly used are the approbation maxim and the agreement maxim.

  3. On the presence of Sorex antinorii, Neomys anomalus (Insectivora, Soricidae) and Talpa caeca (Insectivora, Talpidae) in Umbria

    Directory of Open Access Journals (Sweden)

    A.M. Paci

    2003-10-01

    Full Text Available The aim of this contribution is to provide an update on the presence of the Valais shrew Sorex antinorii, Miller's water shrew Neomys anomalus and the blind mole Talpa caeca in Umbria, where these species have now been confirmed for some years. To this end, the collected specimens and the known literature were reviewed. Valais shrew: recently raised to species rank by Brünner et al. (2002), otherwise considered a subspecies of the common shrew (S. araneus antinorii). One of three incomplete skulls (mandibles and upper incisors missing), at present prudently referred to Sorex cfr. antinorii, is preserved; they come from the northern Umbria-Marche Apennines (surroundings of Scalocchio - PG, 590 m a.s.l.) and were identified on the basis of the red pigmentation of the hypocones of M1 and M2. Miller's water shrew: three skulls (Breda in Paci and Romano op. cit.) and one whole specimen (Paci, unpublished) were found a few kilometres apart between the municipalities of Assisi and Valfabbrica, in mid-hill environments bordering the Monte Subasio Regional Park (Perugia). In the province of Terni the species is reported by Isotti (op. cit.) for the surroundings of Orvieto. Blind mole: a female and a male are known, collected in the municipality of Pietralunga (PG), respectively in a Pinus nigra conifer stand (630 m a.s.l.) and near a mixed hill wood dominated by Quercus cerris (640 m a.s.l.). Recently a third individual was found in the municipality of Sigillo (PG), inside the Monte Cucco Regional Park, at the edge of a beech wood at 1100 m a.s.l. In both cases the species' range proved to be parapatric with that of Talpa europaea.

  4. A hybrid flower pollination algorithm based modified randomized location for multi-threshold medical image segmentation.

    Science.gov (United States)

    Wang, Rui; Zhou, Yongquan; Zhao, Chengyan; Wu, Haizhou

    2015-01-01

    Multi-threshold image segmentation is a powerful image processing technique that is used for the preprocessing of pattern recognition and computer vision. However, traditional multilevel thresholding methods are computationally expensive because they involve exhaustively searching the optimal thresholds to optimize the objective functions. To overcome this drawback, this paper proposes a flower pollination algorithm with a randomized location modification. The proposed algorithm is used to find optimal threshold values for maximizing Otsu's objective functions with regard to eight medical grayscale images. When benchmarked against other state-of-the-art evolutionary algorithms, the new algorithm proves itself to be robust and effective through numerical experimental results including Otsu's objective values and standard deviations.
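
    To make the maximized objective concrete, here is a minimal Python sketch of Otsu's between-class variance for an arbitrary set of thresholds; the flower pollination algorithm (or any other optimizer) would search for the threshold vector that maximizes it. The function and its names are illustrative, not the paper's code.

```python
import numpy as np

def otsu_between_class_variance(hist, thresholds):
    """Between-class variance of a gray-level histogram split at `thresholds`.

    `hist` is a length-256 (or similar) array of pixel counts; `thresholds`
    is a list of cut points. Multilevel Otsu segmentation searches for the
    thresholds that maximize this value.
    """
    p = hist / hist.sum()
    levels = np.arange(len(p))
    edges = [0] + sorted(int(t) for t in thresholds) + [len(p)]
    mu_total = (p * levels).sum()
    var_between = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = p[lo:hi].sum()                      # class probability
        if w > 0:
            mu = (p[lo:hi] * levels[lo:hi]).sum() / w
            var_between += w * (mu - mu_total) ** 2
    return var_between
```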

  5. A Hybrid Genetic Wind Driven Heuristic Optimization Algorithm for Demand Side Management in Smart Grid

    Directory of Open Access Journals (Sweden)

    Nadeem Javaid

    2017-03-01

    Full Text Available In recent years, demand side management (DSM) techniques have been designed for residential, industrial and commercial sectors. These techniques are very effective in flattening the load profile of customers in grid area networks. In this paper, a heuristic algorithms-based energy management controller is designed for a residential area in a smart grid. In essence, five heuristic algorithms (the genetic algorithm (GA), the binary particle swarm optimization (BPSO) algorithm, the bacterial foraging optimization algorithm (BFOA), the wind-driven optimization (WDO) algorithm and our proposed hybrid genetic wind-driven (GWD) algorithm) are evaluated. These algorithms are used for scheduling residential loads between peak hours (PHs) and off-peak hours (OPHs) in a real-time pricing (RTP) environment while maximizing user comfort (UC) and minimizing both electricity cost and the peak to average ratio (PAR). Moreover, these algorithms are tested in two scenarios: (i) scheduling the load of a single home and (ii) scheduling the load of multiple homes. Simulation results show that our proposed hybrid GWD algorithm performs better than the other heuristic algorithms in terms of the selected performance metrics.

  6. A Novel Spectrum Scheduling Scheme with Ant Colony Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Liping Liu

    2018-01-01

    Full Text Available Cognitive radio is a promising technology for improving spectrum utilization: it allows cognitive users to access the licensed spectrum while primary users are absent. In this paper, we design a resource allocation framework based on graph theory for spectrum assignment in cognitive radio networks. The framework takes into account the constraints of interference to primary users and possible collisions among cognitive users. Based on the proposed model, we formulate a system utility function to maximize the system benefit. Building on the model and the objective problem, we design an improved ant colony optimization algorithm (IACO) in two respects: first, we introduce a differential evolution (DE) process to accelerate convergence through a monitoring mechanism; then we design a variable neighborhood search (VNS) process to keep the algorithm from falling into a local optimum. Simulation results demonstrate that the improved algorithm achieves better performance.

  7. Maximal bite force in young adults with temporomandibular disorders and bruxism

    Directory of Open Access Journals (Sweden)

    Raquel Aparecida Pizolato

    2007-09-01

    Full Text Available Parafunctional habits, such as bruxism, are contributory factors for temporomandibular disorders (TMD). The aim of this study was to evaluate the maximal bite force (MBF) in the presence of TMD and bruxism (TMDB) in young adults. Twelve women (mean age 21.5 years) and 7 men (mean age 22.4 years) composed the TMDB group. Ten healthy women and 9 men (mean age 21.4 and 22.4 years, respectively) formed the control group. TMD symptoms were evaluated by a structured questionnaire, and clinical signs/symptoms were evaluated during clinical examination. A visual analogue scale (VAS) was applied for stress assessment. MBF was measured with a gnathodynamometer: the subjects were asked to bite twice with maximal effort, for 5 seconds, with a rest interval of about one minute, and the highest values were considered. The data were analyzed with the Shapiro-Wilks W-test, descriptive statistics, paired or unpaired t tests or Mann-Whitney tests when indicated, and Fisher's exact test (p ...

  8. Adding large EM stack support

    KAUST Repository

    Holst, Glendon

    2016-12-01

    Serial section electron microscopy (SSEM) image stacks generated using high-throughput microscopy techniques are an integral tool for investigating brain connectivity and cell morphology. FIB or 3View scanning electron microscopes easily generate gigabytes of data. In order to produce an analyzable 3D dataset from the imaged volumes, efficient and reliable image segmentation is crucial. Classical manual approaches to segmentation are time consuming and labour intensive. Semi-automatic seeded watershed segmentation algorithms, such as those implemented by the ilastik image processing software, are a very powerful alternative, substantially speeding up segmentation times. We have used ilastik effectively for small EM stacks – on a laptop, no less; however, ilastik was unable to carve the large EM stacks we needed to segment because its memory requirements grew too large – even for the biggest workstations we had available. For this reason, we refactored the carving module of ilastik to scale it up to large EM stacks on large workstations, and tested its efficiency. We modified the carving module, building on existing blockwise processing functionality to process data in manageable chunks that fit within RAM (main memory). We review this refactoring work, highlighting the software architecture, design choices, modifications, and issues encountered.

  9. Four Novel Cellulose Synthase (CESA) Genes from Birch (Betula platyphylla Suk.) Involved in Primary and Secondary Cell Wall Biosynthesis

    Directory of Open Access Journals (Sweden)

    Xuemei Liu

    2012-09-01

    Full Text Available Cellulose synthase (CESA), which is an essential catalyst for the generation of plant cell wall biomass, is mainly encoded by the CesA gene family that contains ten or more members. In this study, four full-length cDNAs encoding CESA were isolated from Betula platyphylla Suk., which is an important timber species, using RT-PCR combined with the RACE method, and were named BplCesA3, −4, −7 and −8. These deduced CESAs contained the same typical domains and regions as their Arabidopsis homologs. The cDNA lengths differed among these four genes, as did the locations of the various protein domains inferred from the deduced amino acid sequences, which shared amino acid sequence identities ranging from only 63.8% to 70.5%. Real-time RT-PCR showed that all four BplCesAs were expressed at different levels in diverse tissues. Results indicated that BplCESA8 might be involved in secondary cell wall biosynthesis and floral development. BplCESA3 appeared in a unique expression pattern and was possibly involved in primary cell wall biosynthesis and seed development; it might also be related to homogalacturonan synthesis. BplCESA7 and BplCESA4 may be related to the formation of a cellulose synthase complex and participate mainly in secondary cell wall biosynthesis. The extremely low expression abundance of the four BplCESAs in mature pollen suggested very little involvement of them in mature pollen formation in Betula. The distinct expression patterns of the four BplCesAs suggested they might participate in the development of various tissues and are possibly controlled by distinct mechanisms in Betula.

  10. Microsatellite Loci in the Gypsophyte Lepidium subulatum (Brassicaceae), and Transferability to Other Lepidieae

    Directory of Open Access Journals (Sweden)

    José Gabriel Segarra-Moragues

    2012-09-01

    Full Text Available Polymorphic microsatellite markers were developed for the Ibero-North African, strict gypsophyte Lepidium subulatum to unravel the effects of habitat fragmentation on levels of genetic diversity, genetic structure and gene flow among its populations. Using 454 pyrosequencing, 12 microsatellite loci including di- and tri-nucleotide repeats were characterized in L. subulatum. They amplified a total of 80 alleles (2–12 alleles per locus) in a sample of 35 individuals of L. subulatum, showing relatively high levels of genetic diversity, HO = 0.645, HE = 0.627. Cross-species transferability of all 12 loci was successful for the Iberian endemics Lepidium cardamines and Lepidium stylatum, the widespread Lepidium graminifolium, and one species each of two related genera, Cardaria draba and Coronopus didymus. These microsatellite primers will be useful to investigate genetic diversity and population structure and to address conservation genetics in species of Lepidium.

  11. Steam condenser optimization using Real-parameter Genetic Algorithm for Prototype Fast Breeder Reactor

    Energy Technology Data Exchange (ETDEWEB)

    Jayalal, M.L., E-mail: jayalal@igcar.gov.in [Indira Gandhi Centre for Atomic Research, Kalpakkam 603102, Tamil Nadu (India); Kumar, L. Satish, E-mail: satish@igcar.gov.in [Indira Gandhi Centre for Atomic Research, Kalpakkam 603102, Tamil Nadu (India); Jehadeesan, R., E-mail: jeha@igcar.gov.in [Indira Gandhi Centre for Atomic Research, Kalpakkam 603102, Tamil Nadu (India); Rajeswari, S., E-mail: raj@igcar.gov.in [Indira Gandhi Centre for Atomic Research, Kalpakkam 603102, Tamil Nadu (India); Satya Murty, S.A.V., E-mail: satya@igcar.gov.in [Indira Gandhi Centre for Atomic Research, Kalpakkam 603102, Tamil Nadu (India); Balasubramaniyan, V.; Chetal, S.C. [Indira Gandhi Centre for Atomic Research, Kalpakkam 603102, Tamil Nadu (India)

    2011-10-15

    Highlights: → We model design optimization of a vital reactor component using Genetic Algorithm. → Real-parameter Genetic Algorithm is used for steam condenser optimization study. → Comparison analysis done with various Genetic Algorithm related mechanisms. → The results obtained are validated with the reference study results. - Abstract: This work explores the use of Real-parameter Genetic Algorithm and analyses its performance in the steam condenser (or Circulating Water System) optimization study of a 500 MW fast breeder nuclear reactor. Choosing optimum condenser design parameters for a power plant from among a large number of technically viable combinations is a complex task. This is primarily due to the conflicting nature of the economic implications of the different system parameters for maximizing the capitalized profit. In order to find the optimum design parameters, a Real-parameter Genetic Algorithm model is developed and applied. The results obtained are validated with the reference study results.

  12. Natural maximal νμ-ντ mixing

    International Nuclear Information System (INIS)

    Wetterich, C.

    1999-01-01

    The naturalness of maximal mixing between muon and tau neutrinos is investigated. A spontaneously broken nonabelian generation symmetry can explain a small parameter which governs the deviation from maximal mixing. In many cases all three neutrino masses are almost degenerate. Maximal ν μ -ν τ mixing suggests that the leading contribution to the light neutrino masses arises from the expectation value of a heavy weak triplet rather than from the seesaw mechanism. In this scenario the deviation from maximal mixing is predicted to be less than about 1%. (Copyright (c) 1999 Elsevier Science B.V., Amsterdam. All rights reserved.)

  13. Gaussian maximally multipartite-entangled states

    Science.gov (United States)

    Facchi, Paolo; Florio, Giuseppe; Lupo, Cosmo; Mancini, Stefano; Pascazio, Saverio

    2009-12-01

    We study maximally multipartite-entangled states in the context of Gaussian continuous variable quantum systems. By considering multimode Gaussian states with constrained energy, we show that perfect maximally multipartite-entangled states, which exhibit the maximum amount of bipartite entanglement for all bipartitions, only exist for systems containing n=2 or 3 modes. We further numerically investigate the structure of these states and their frustration for n≤7.

  14. Gaussian maximally multipartite-entangled states

    International Nuclear Information System (INIS)

    Facchi, Paolo; Florio, Giuseppe; Pascazio, Saverio; Lupo, Cosmo; Mancini, Stefano

    2009-01-01

    We study maximally multipartite-entangled states in the context of Gaussian continuous variable quantum systems. By considering multimode Gaussian states with constrained energy, we show that perfect maximally multipartite-entangled states, which exhibit the maximum amount of bipartite entanglement for all bipartitions, only exist for systems containing n=2 or 3 modes. We further numerically investigate the structure of these states and their frustration for n≤7.

  15. Utility maximization and mode of payment

    NARCIS (Netherlands)

    Koning, R.H.; Ridder, G.; Heijmans, R.D.H.; Pollock, D.S.G.; Satorra, A.

    2000-01-01

    The implications of stochastic utility maximization in a model of choice of payment are examined. Three types of compatibility with utility maximization are distinguished: global compatibility, local compatibility on an interval, and local compatibility on a finite set of points.

  16. Modeling and Design of MPPT Controller Using Stepped P&O Algorithm in Solar Photovoltaic System

    OpenAIRE

    R. Prakash; B. Meenakshipriya; R. Kumaravelan

    2014-01-01

    This paper presents modeling and simulation of a grid-connected photovoltaic (PV) system using an improved mathematical model. The model is used to study the effects of different parameter variations on the PV array, including operating temperature and solar irradiation level. In this paper a stepped P&O algorithm is proposed for MPPT control. This algorithm identifies the suitable duty ratio at which the DC-DC converter should be operated to maximize the power output. Photovoltaic array with pro...
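
    For reference, here is a minimal Python sketch of the classic perturb-and-observe duty-ratio update; the "stepped" variant in the paper presumably refines the perturbation step size, which is not reproduced here. The function name and fixed step value are assumptions.

```python
def perturb_and_observe(duty, power, prev_duty, prev_power, step=0.005):
    """One P&O iteration: keep perturbing the duty ratio in the direction
    that increased the measured PV output power, reverse otherwise."""
    moved_up = duty >= prev_duty
    if power > prev_power:
        direction = 1.0 if moved_up else -1.0   # last move helped: continue
    else:
        direction = -1.0 if moved_up else 1.0   # power fell: reverse
    return min(max(duty + direction * step, 0.0), 1.0)
```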

  17. Constituents from Vigna vexillata and Their Anti-Inflammatory Activity

    Directory of Open Access Journals (Sweden)

    Guo-Feng Chen

    2012-08-01

    Full Text Available The seeds of the Vigna genus are important food resources and there have already been many reports regarding their bioactivities. In our preliminary bioassay, the chloroform layer of methanol extracts of V. vexillata demonstrated significant anti-inflammatory bioactivity. Therefore, the present research aimed to purify and identify the anti-inflammatory principles of V. vexillata. One new sterol (1) and two new isoflavones (2, 3) were reported from natural sources for the first time, and their chemical structures were determined by spectroscopic and mass spectrometric analyses. In addition, 37 known compounds were identified by comparison of their physical and spectroscopic data with those reported in the literature. Among the isolates, daidzein (23), abscisic acid (25), and quercetin (40) displayed the most significant inhibition of superoxide anion generation and elastase release.

  18. Algorithm for image retrieval based on edge gradient orientation statistical code.

    Science.gov (United States)

    Zeng, Jiexian; Zhao, Yonggang; Li, Weiye; Fu, Xiang

    2014-01-01

    The edge gradient direction of an image not only contains important shape information, but is also simple and of low computational complexity. Considering that edge gradient direction histograms and the edge direction autocorrelogram are not rotation invariant, we put forward an image retrieval algorithm based on an edge gradient orientation statistical code (hereinafter referred to as EGOSC), which carries the statistics used for eight-neighborhood edge-direction chain codes over to the statistics of the edge gradient direction. Firstly, we construct the n-direction vector and impose a maximal-summation restriction on the EGOSC to ensure that the algorithm is effectively rotation invariant. Then, we use the Euclidean distance of the edge gradient direction entropy to measure shape similarity, so that the method is not sensitive to scaling, color, and illumination changes. The experimental results and the algorithm analysis demonstrate that the algorithm can be used for content-based image retrieval and yields good retrieval results.
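
    The following Python sketch illustrates the general idea in simplified form (it is not the published EGOSC): quantize edge gradient orientations into n bins weighted by gradient magnitude, normalize for rotation by cyclically shifting the histogram to start at its largest bin (a stand-in for the maximal-summation restriction), and compare the resulting codes by Euclidean distance.

```python
import numpy as np

def orientation_code(image, n_bins=8):
    """Magnitude-weighted histogram of edge gradient orientations, cyclically
    shifted so it starts at its largest bin (simple rotation normalization)."""
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    bins = np.minimum((ang / (2 * np.pi) * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    hist /= hist.sum() + 1e-12
    return np.roll(hist, -int(hist.argmax()))

def code_distance(h1, h2):
    """Euclidean distance between orientation codes (smaller = more similar)."""
    return float(np.linalg.norm(h1 - h2))
```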

  19. Synthesis, Crystal Structure and Luminescent Property of a Cd(II) Complex with N-Benzenesulphonyl-L-leucine

    Directory of Open Access Journals (Sweden)

    Xishi Tai

    2012-09-01

    Full Text Available A new trinuclear Cd(II) complex [Cd3(L)6(2,2-bipyridine)3] [L = N-phenylsulfonyl-L-leucinato] has been synthesized and characterized by elemental analysis, IR and X-ray single crystal diffraction analysis. The results show that the complex belongs to the orthorhombic system, space group P212121, with a = 16.877(3) Å, b = 22.875(5) Å, c = 29.495(6) Å, α = β = γ = 90°, V = 11387(4) Å3, Z = 4, Dc = 1.416 Mg·m−3, μ = 0.737 mm−1, F(000) = 4992, and final R1 = 0.0390, ωR2 = 0.0989. The complex comprises two seven-coordinated Cd(II) atoms, with a N2O5 distorted pentagonal bipyramidal coordination environment, and one six-coordinated Cd(II) atom, with a N2O4 distorted octahedral coordination environment. The molecules form a one-dimensional chain structure through the interaction of bridging carboxylato groups, hydrogen bonds and the π-π interaction of 2,2-bipyridine. The luminescent properties of the Cd(II) complex and of N-benzenesulphonyl-L-leucine in the solid state and in CH3OH solution have also been investigated.

  20. Effect of filters and reconstruction algorithms on I-124 PET in Siemens Inveon PET scanner

    Science.gov (United States)

    Ram Yu, A.; Kim, Jin Su

    2015-10-01

    Purpose: To assess the effects of filtering and reconstruction on Siemens I-124 PET data. Methods: A Siemens Inveon PET was used. The spatial resolution of I-124 was measured out to a transverse offset of 50 mm from the center. FBP, 2D ordered subset expectation maximization (OSEM2D), the 3D re-projection algorithm (3DRP), and maximum a posteriori (MAP) methods were tested. Non-uniformity (NU), recovery coefficient (RC), and spillover ratio (SOR) parameterized image quality. Mini deluxe phantom data of I-124 were also assessed. Results: Volumetric resolution was 7.3 mm3 at the transverse FOV center when the FBP reconstruction algorithm with a ramp filter was used. MAP yielded minimal NU with β = 1.5. OSEM2D yielded maximal RC. SOR was below 4% for FBP with ramp, Hamming, Hanning, or Shepp-Logan filters. Based on the mini deluxe phantom results, FBP with Hanning or Parzen filters, or 3DRP with a Hanning filter, yielded feasible I-124 PET data. Conclusions: Reconstruction algorithms and filters were compared. FBP with Hanning or Parzen filters, or 3DRP with a Hanning filter, yielded feasible data for quantifying I-124 PET.

  1. SU-E-T-304: Dosimetric Comparison of Cavernous Sinus Tumors: Heterogeneity Corrected Pencil Beam (PB-Hete) Vs. X-Ray Voxel Monte Carlo (XVMC) Algorithms for Stereotactic Radiotherapy (SRT)

    Energy Technology Data Exchange (ETDEWEB)

    Pokhrel, D; Sood, S; Badkul, R; Jiang, H; Saleh, H; Wang, F [University of Kansas Hospital, Kansas City, KS (United States)

    2015-06-15

    Purpose: To compare dose distributions calculated using PB-hete vs. XVMC algorithms for SRT treatments of cavernous sinus tumors. Methods: Using PB-hete SRT, five patients with cavernous sinus tumors received a prescription dose of 25 Gy in 5 fractions for planning target volume PTV(V100%) = 95%. Gross tumor volume (GTV) and organs at risk (OARs) were delineated on T1/T2 MRI-CT-fused images. The PTV (range 2.1–84.3 cc, mean = 21.7 cc) was generated using a 5 mm uniform margin around the GTV. PB-hete SRT plans included a combination of non-coplanar conformal arcs/static beams delivered by a Novalis-TX consisting of HD-MLCs and a 6 MV-SRS (1000 MU/min) beam. Plans were re-optimized using the XVMC algorithm with identical beam geometry and MLC positions. Plan-specific PTV(V99%), maximal, mean, and isocenter doses, and total monitor units (MUs) were evaluated. Maximal doses to OARs such as brainstem, optic pathway, spinal cord, and lenses, as well as the normal tissue volume receiving 12 Gy (V12), were compared between the two algorithms. All analysis was performed using two-tailed paired t-tests with an upper-bound p-value of <0.05. Results: Using either algorithm, no dosimetrically significant differences in PTV coverage (PTV V99%, maximal, mean, isocenter doses) and total number of MUs were observed (all p-values >0.05, mean ratios within 2%). However, maximal doses to the optic chiasm and nerves were significantly under-predicted using PB-hete (p=0.04). Maximal brainstem, spinal cord, and lens doses and V12 were all comparable between the two algorithms, with the exception of one patient with the largest PTV, who exhibited 11% higher V12 with XVMC. Conclusion: Unlike lung tumors, XVMC and PB-hete treatment plans provided similar PTV coverage for cavernous sinus tumors. The majority of OAR doses were comparable between the two algorithms, except for small structures such as the optic chiasm/nerves, which could potentially receive higher doses when using the XVMC algorithm. Special attention may need to be paid on a case

  2. Activity versus outcome maximization in time management.

    Science.gov (United States)

    Malkoc, Selin A; Tonietto, Gabriela N

    2018-04-30

    Feeling time-pressed has become ubiquitous. Time management strategies have emerged to help individuals fit in more of their desired and necessary activities. We provide a review of these strategies. In doing so, we distinguish between two, often competing, motives people have in managing their time: activity maximization and outcome maximization. The emerging literature points to an important dilemma: a given strategy that maximizes the number of activities might be detrimental to outcome maximization. We discuss such factors that might hinder performance in work tasks and enjoyment in leisure tasks. Finally, we provide theoretically grounded recommendations that can help balance these two important goals in time management. Published by Elsevier Ltd.

  3. An intelligent identification algorithm for the monoclonal picking instrument

    Science.gov (United States)

    Yan, Hua; Zhang, Rongfu; Yuan, Xujun; Wang, Qun

    2017-11-01

    Traditional colony selection is mainly operated manually, which suffers from low efficiency and strong subjectivity. It is therefore important to develop an automatic monoclonal-picking instrument, and the critical stage of automatic monoclonal picking and intelligent optimal selection is the intelligent identification algorithm. An auto-screening algorithm based on the Support Vector Machine (SVM) is proposed in this paper; it uses supervised learning combined with colony morphological characteristics to classify colonies accurately. Furthermore, from the basic morphological features of a colony, the system computes a series of morphological parameters step by step. Through the establishment of a maximal-margin classifier, and based on an analysis of the growth trend of the colony, the selection of monoclonal colonies is carried out. The experimental results showed that the auto-screening algorithm could screen out regular colonies from the others, meeting the requirements on the various parameters.
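
    As a sketch of what such a maximal-margin screening step could look like, the following Python fragment trains a scikit-learn SVM on a few colony morphology features; the feature set and the toy training data are hypothetical placeholders, not the instrument's actual parameters.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical morphology features per colony: [area, circularity, mean_intensity]
X = np.array([[120, 0.95, 0.70], [300, 0.60, 0.50], [110, 0.92, 0.80],
              [280, 0.55, 0.40], [130, 0.90, 0.75], [310, 0.58, 0.45]])
y = np.array([1, 0, 1, 0, 1, 0])  # 1 = pickable monoclonal colony, 0 = reject

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X, y)
print(clf.predict([[125, 0.93, 0.72]]))  # -> [1]
```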

  4. N-Substituted 5-Chloro-6-phenylpyridazin-3(2H)-ones: Synthesis, Insecticidal Activity Against Plutella xylostella (L.) and SAR Study

    Directory of Open Access Journals (Sweden)

    Song Yang

    2012-08-01

    Full Text Available A series of N-substituted 5-chloro-6-phenylpyridazin-3(2H)-one derivatives were synthesized based on our previous work; all compounds were characterized by spectral data and tested for in vitro insecticidal activity against Plutella xylostella. The results showed that the synthesized pyridazin-3(2H)-one compounds possessed good insecticidal activities, especially compounds 4b, 4d, and 4h, which showed >90% activity at 100 mg/L. The structure-activity relationships (SAR) for these compounds are also discussed.

  5. Computational intelligence-based optimization of maximally stable extremal region segmentation for object detection

    Science.gov (United States)

    Davis, Jeremy E.; Bednar, Amy E.; Goodin, Christopher T.; Durst, Phillip J.; Anderson, Derek T.; Bethel, Cindy L.

    2017-05-01

    Particle swarm optimization (PSO) and genetic algorithms (GAs) are two optimization techniques from the field of computational intelligence (CI) for search problems where a direct solution can not easily be obtained. One such problem is finding an optimal set of parameters for the maximally stable extremal region (MSER) algorithm to detect areas of interest in imagery. Specifically, this paper describes the design of a GA and PSO for optimizing MSER parameters to detect stop signs in imagery produced via simulation for use in an autonomous vehicle navigation system. Several additions to the GA and PSO are required to successfully detect stop signs in simulated images. These additions are a primary focus of this paper and include: the identification of an appropriate fitness function, the creation of a variable mutation operator for the GA, an anytime algorithm modification to allow the GA to compute a solution quickly, the addition of an exponential velocity decay function to the PSO, the addition of an "execution best" omnipresent particle to the PSO, and the addition of an attractive force component to the PSO velocity update equation. Experimentation was performed with the GA using various combinations of selection, crossover, and mutation operators and experimentation was also performed with the PSO using various combinations of neighborhood topologies, swarm sizes, cognitive influence scalars, and social influence scalars. The results of both the GA and PSO optimized parameter sets are presented. This paper details the benefits and drawbacks of each algorithm in terms of detection accuracy, execution speed, and additions required to generate successful problem specific parameter sets.

  6. Maximizing the productivity of the microalgae Scenedesmus AMDD cultivated in a continuous photobioreactor using an online flow rate control.

    Science.gov (United States)

    McGinn, Patrick J; MacQuarrie, Scott P; Choi, Jerome; Tartakovsky, Boris

    2017-01-01

    In this study, production of the microalga Scenedesmus AMDD in a 300 L continuous flow photobioreactor was maximized using an online flow (dilution rate) control algorithm. To enable online control, biomass concentration was estimated in real time by measuring chlorophyll-related culture fluorescence. A simple microalgae growth model was developed and used to solve the optimization problem aimed at maximizing photobioreactor productivity. When optimally controlled, the Scenedesmus AMDD culture demonstrated an average volumetric biomass productivity of 0.11 g L−1 d−1 over a 25 day cultivation period, equivalent to a 70% performance improvement compared to the same photobioreactor operated as a turbidostat. The proposed approach for optimizing photobioreactor flow can be adapted to a broad range of microalgae cultivation systems.

  7. Recursive smoothers for hidden discrete-time Markov chains

    Directory of Open Access Journals (Sweden)

    Lakhdar Aggoun

    2005-01-01

    Full Text Available We consider a discrete-time Markov chain observed through another Markov chain. The proposed model extends models discussed by Elliott et al. (1995). We propose improved recursive formulae to update smoothed estimates of processes related to the model. These recursive estimates are used to update the parameters of the model via the expectation maximization (EM) algorithm.
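
    For illustration, the sketch below performs one EM (Baum-Welch) update for an ordinary discrete hidden Markov model, using smoothed state and transition probabilities in the same role as the paper's recursive smoothers; it is a simplification of the Markov-chain-observed-through-a-Markov-chain model, and uses unscaled recursions suitable only for short sequences.

```python
import numpy as np

def baum_welch_step(obs, A, B, pi):
    """One EM update for a discrete HMM with transition matrix A (K x K),
    emission matrix B (K x M) and initial distribution pi (K,)."""
    obs = np.asarray(obs)
    T, K = obs.size, pi.size
    alpha = np.zeros((T, K)); beta = np.zeros((T, K))
    alpha[0] = pi * B[:, obs[0]]
    for t in range(1, T):                       # forward pass
        alpha[t] = (alpha[t - 1] @ A) * B[:, obs[t]]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):              # backward pass
        beta[t] = A @ (B[:, obs[t + 1]] * beta[t + 1])
    gamma = alpha * beta                        # smoothed state marginals
    gamma /= gamma.sum(axis=1, keepdims=True)
    xi = np.zeros((K, K))                       # expected transition counts
    for t in range(T - 1):
        m = alpha[t][:, None] * A * (B[:, obs[t + 1]] * beta[t + 1])[None, :]
        xi += m / m.sum()
    A_new = xi / xi.sum(axis=1, keepdims=True)
    B_new = np.stack([gamma[obs == s].sum(axis=0)
                      for s in range(B.shape[1])], axis=1)
    B_new /= B_new.sum(axis=1, keepdims=True)
    return A_new, B_new, gamma[0]
```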

  8. Optimal Wind Turbines Micrositing in Onshore Wind Farms Using Fuzzy Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Jun Yang

    2015-01-01

    Full Text Available With the fast growth in the number and size of installed wind farms (WFs) around the world, optimal wind turbine (WT) micrositing has become a challenge from both technological and mathematical points of view. An appropriate layout of wind turbines is crucial to obtain adequate performance with respect to the development and operation of the wind power plant during its life span. This work presents a fuzzy genetic algorithm (FGA) for maximizing the economic profitability of the project. The algorithm considers a new WF model that includes several factors important to the design of the layout. The model consists of wake losses, terrain effects, and economic benefits, which can be calculated from the locations of the wind turbines. The results demonstrate that the algorithm performs better than a genetic algorithm, in terms of the maximum net annual value of wind power plants and the computational burden.

  9. A performance comparison of multi-objective optimization algorithms for solving nearly-zero-energy-building design problems

    NARCIS (Netherlands)

    Hamdy, M.; Nguyen, A.T. (Anh Tuan); Hensen, J.L.M.

    2016-01-01

    Integrated building design is inherently a multi-objective optimization problem where two or more conflicting objectives must be minimized and/or maximized concurrently. Many multi-objective optimization algorithms have been developed; however, few of them have been tested in solving building design

  10. Unsupervised classification of multivariate geostatistical data: Two algorithms

    Science.gov (United States)

    Romary, Thomas; Ors, Fabien; Rivoirard, Jacques; Deraisme, Jacques

    2015-12-01

    With the increasing development of remote sensing platforms and the evolution of sampling facilities in the mining and oil industries, spatial datasets are becoming increasingly large, involve a growing number of variables, and cover wider and wider areas. Therefore, it is often necessary to split the domain of study to account for radically different behaviors of the natural phenomenon over the domain and to simplify the subsequent modeling step. The definition of these areas can be seen as a problem of unsupervised classification, or clustering, where we try to divide the domain into homogeneous domains with respect to the values taken by the variables in hand. The application of classical clustering methods, designed for independent observations, does not ensure the spatial coherence of the resulting classes. Image segmentation methods, based on e.g. Markov random fields, are not adapted to irregularly sampled data. Other existing approaches, based on mixtures of Gaussian random functions estimated via the expectation-maximization algorithm, are limited to reasonable sample sizes and a small number of variables. In this work, we propose two algorithms based on adaptations of classical algorithms to multivariate geostatistical data. Both algorithms are model-free and can handle large volumes of multivariate, irregularly spaced data. The first one proceeds by agglomerative hierarchical clustering. The spatial coherence is ensured by a proximity condition imposed for two clusters to merge. This proximity condition relies on a graph organizing the data in the coordinate space. The hierarchical algorithm can then be seen as a graph-partitioning algorithm. Following this interpretation, a spatial version of the spectral clustering algorithm is also proposed. The performance of both algorithms is assessed on toy examples and a mining dataset.
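
    A minimal Python sketch of the first algorithm's key ingredient follows: agglomerative clustering in the variable space, with merges restricted to clusters that are neighbors in a spatial proximity graph. Using a k-nearest-neighbor graph on the sample coordinates, as scikit-learn's connectivity option does here, is one concrete (assumed) way to impose the merge condition, not the authors' exact construction.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.neighbors import kneighbors_graph

rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(500, 2))            # sample locations
values = np.sin(coords[:, 0] / 15) + 0.1 * rng.standard_normal(500)
values = values[:, None]                               # measured variable(s)

# Spatial proximity graph: two clusters may merge only if connected in it.
connectivity = kneighbors_graph(coords, n_neighbors=8, include_self=False)

labels = AgglomerativeClustering(
    n_clusters=4, connectivity=connectivity, linkage="ward"
).fit_predict(values)                                  # spatially coherent classes
```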

  11. Evaluation of tomographic image quality of extended and conventional parallel hole collimators using maximum likelihood expectation maximization algorithm by Monte Carlo simulations.

    Science.gov (United States)

    Moslemi, Vahid; Ashoor, Mansour

    2017-10-01

    One of the major problems associated with parallel hole collimators (PCs) is the trade-off between their resolution and sensitivity. To address this problem, a novel PC - namely, the extended parallel hole collimator (EPC) - was proposed, in which trapezoidal denticles were added to the septa on the detector side. In this study, an EPC was designed and its performance was compared with that of two PCs, PC35 and PC41, with a hole size of 1.5 mm and hole lengths of 35 and 41 mm, respectively. The Monte Carlo method was used to calculate important parameters such as resolution, sensitivity, scattering, and penetration ratio. A Jaszczak phantom was also simulated to evaluate the resolution and contrast of tomographic images, which were produced by the EPC6, PC35, and PC41 using the Monte Carlo N-particle version 5 code; the tomographic images were reconstructed using the maximum likelihood expectation maximization algorithm. The sensitivity of the EPC6 was increased by 20.3% in comparison with that of the PC41 at identical spatial resolution and full-width at tenth of maximum. Moreover, the penetration and scattering ratio of the EPC6 was 1.2% less than that of the PC41. The simulated phantom images show that the EPC6 increases contrast-resolution and contrast-to-noise ratio compared with those of the PC41 and PC35. When compared with the PC41 and PC35, the EPC6 improved the trade-off between resolution and sensitivity, reduced the penetration and scattering ratios, and produced images of higher quality. The EPC6 can be used to increase the detectability of finer details in nuclear medicine images.
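
    For reference, the standard MLEM iteration used for such reconstructions is sketched below for Poisson data y ~ Poisson(Ax); the system matrix A and the measured counts are placeholders to be supplied by the imaging model.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Maximum likelihood expectation maximization for y ~ Poisson(A @ x).

    A: (n_detector_bins, n_voxels) non-negative system matrix.
    y: measured counts. Returns the reconstructed voxel values x.
    """
    x = np.ones(A.shape[1])                 # flat non-negative start image
    sens = A.sum(axis=0)                    # sensitivity image, A^T 1
    sens[sens == 0] = 1e-12                 # guard unseen voxels
    for _ in range(n_iter):
        proj = A @ x                        # forward projection
        proj[proj == 0] = 1e-12             # avoid division by zero
        x *= (A.T @ (y / proj)) / sens      # multiplicative MLEM update
    return x
```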

  12. A Study of Wind Turbine Comprehensive Operational Assessment Model Based on EM-PCA Algorithm

    Science.gov (United States)

    Zhou, Minqiang; Xu, Bin; Zhan, Yangyan; Ren, Danyuan; Liu, Dexing

    2018-01-01

    To assess wind turbine performance accurately and provide a theoretical basis for wind farm management, a hybrid assessment model based on the Entropy Method and Principal Component Analysis (EM-PCA) was established, which takes most factors of operational performance into consideration and reaches a comprehensive result. To verify the model, six wind turbines were chosen as the research objects. The ranking obtained by the proposed method was 4#>6#>1#>5#>2#>3#, completely in conformity with the theoretical ranking, indicating that the reliability and effectiveness of the EM-PCA method are high. The method can guide state comparisons among different units and support wind farm operational assessment.
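
    To make the entropy half of EM-PCA concrete, here is a minimal Python sketch of the entropy weight method: indicators whose values vary more across units carry more information and therefore receive larger weights. The decision matrix is a hypothetical placeholder, and the PCA half is omitted.

```python
import numpy as np

def entropy_weights(X):
    """Entropy-method weights for a (units x indicators) matrix X > 0.

    Returns per-indicator weights and the resulting unit scores;
    lower-entropy (more dispersed) indicators get larger weights.
    """
    P = X / X.sum(axis=0)                       # column-wise proportions
    n = X.shape[0]
    logs = np.log(np.where(P > 0, P, 1.0))      # log(1) = 0 where P == 0
    e = -(P * logs).sum(axis=0) / np.log(n)     # entropy per indicator
    w = (1 - e) / (1 - e).sum()                 # normalized 1 - entropy
    return w, X @ w

# Hypothetical operational indicators for six turbines (rows):
X = np.array([[0.92, 310, 0.05], [0.88, 295, 0.07], [0.95, 330, 0.04],
              [0.90, 305, 0.06], [0.85, 280, 0.09], [0.93, 320, 0.05]])
weights, scores = entropy_weights(X)
print(np.argsort(-scores) + 1)   # turbine ranking, best first
```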

  13. HEALTH INSURANCE: CONTRIBUTIONS AND REIMBURSEMENT MAXIMAL

    CERN Document Server

    HR Division

    2000-01-01

    Affected by both the salary adjustment index on 1.1.2000 and the evolution of the staff members and fellows population, the average reference salary, which is used as an index for fixed contributions and reimbursement maximal, has changed significantly. An adjustment of the amounts of the reimbursement maximal and the fixed contributions is therefore necessary, as from 1 January 2000.Reimbursement maximalThe revised reimbursement maximal will appear on the leaflet summarising the benefits for the year 2000, which will soon be available from the divisional secretariats and from the AUSTRIA office at CERN.Fixed contributionsThe fixed contributions, applicable to some categories of voluntarily insured persons, are set as follows (amounts in CHF for monthly contributions):voluntarily insured member of the personnel, with complete coverage:815,- (was 803,- in 1999)voluntarily insured member of the personnel, with reduced coverage:407,- (was 402,- in 1999)voluntarily insured no longer dependent child:326,- (was 321...

  14. Online EM with weight-based forgetting

    OpenAIRE

    Celaya, Enric; Agostini, Alejandro

    2015-01-01

    In the online version of the EM algorithm introduced by Sato and Ishii (2000), a time-dependent discount factor is introduced for forgetting the effect of the old posterior values obtained with an earlier, inaccurate estimator. In their approach, forgetting is uniformly applied to the estimators of each mixture component depending exclusively on time, irrespective of the weight attributed to each unit for the observed sample. This causes excessive forgetting in the less frequently sampled...
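
    The sketch below illustrates the flavor of such an online EM step for a one-dimensional Gaussian mixture: discounted sufficient statistics are updated with a time-dependent step eta(t), and an optional flag applies forgetting in proportion to each component's responsibility. This is an illustrative reading of the weight-based idea, not the authors' exact formulation.

```python
import numpy as np

def online_em_step(x, stats, t, weight_based=False):
    """One online-EM update for a K-component 1-D Gaussian mixture.

    stats holds discounted sufficient statistics per component:
    'w' (weights), 'wx' (weighted sums), 'wxx' (weighted squared sums).
    """
    w, wx, wxx = stats["w"], stats["wx"], stats["wxx"]
    pi, mu = w / w.sum(), wx / w
    var = np.maximum(wxx / w - mu ** 2, 1e-6)
    # E-step: responsibilities of the new sample x.
    r = pi * np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    r = np.maximum(r, 1e-300)
    r /= r.sum()
    eta = 1.0 / (t + 2.0)                      # time-dependent discount
    # Uniform forgetting (Sato & Ishii) vs. responsibility-proportional.
    forget = 1.0 - eta * r if weight_based else 1.0 - eta
    stats["w"] = forget * w + eta * r
    stats["wx"] = forget * wx + eta * r * x
    stats["wxx"] = forget * wxx + eta * r * x ** 2
    return stats

# Usage: stats = {"w": np.full(2, 0.5), "wx": np.array([-0.5, 0.5]),
#                 "wxx": np.full(2, 1.0)}; then call once per incoming sample.
```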

  15. On the maximal diphoton width

    CERN Document Server

    Salvio, Alberto; Strumia, Alessandro; Urbano, Alfredo

    2016-01-01

    Motivated by the 750 GeV diphoton excess found at LHC, we compute the maximal width into $\gamma\gamma$ that a neutral scalar can acquire through a loop of charged fermions or scalars as a function of the maximal scale at which the theory holds, taking into account vacuum (meta)stability bounds. We show how an extra gauge symmetry can qualitatively weaken such bounds, and explore collider probes and connections with Dark Matter.

  16. Steam condenser optimization using Real-parameter Genetic Algorithm for Prototype Fast Breeder Reactor

    International Nuclear Information System (INIS)

    Jayalal, M.L.; Kumar, L. Satish; Jehadeesan, R.; Rajeswari, S.; Satya Murty, S.A.V.; Balasubramaniyan, V.; Chetal, S.C.

    2011-01-01

    Highlights: → We model design optimization of a vital reactor component using Genetic Algorithm. → Real-parameter Genetic Algorithm is used for steam condenser optimization study. → Comparison analysis done with various Genetic Algorithm related mechanisms. → The results obtained are validated with the reference study results. - Abstract: This work explores the use of Real-parameter Genetic Algorithm and analyses its performance in the steam condenser (or Circulating Water System) optimization study of a 500 MW fast breeder nuclear reactor. Choosing optimum condenser design parameters for a power plant from among a large number of technically viable combinations is a complex task. This is primarily due to the conflicting nature of the economic implications of the different system parameters for maximizing the capitalized profit. In order to find the optimum design parameters, a Real-parameter Genetic Algorithm model is developed and applied. The results obtained are validated with the reference study results.

  17. Massively parallel unsupervised single-particle cryo-EM data clustering via statistical manifold learning.

    Directory of Open Access Journals (Sweden)

    Jiayi Wu

    Full Text Available Structural heterogeneity in single-particle cryo-electron microscopy (cryo-EM) data represents a major challenge for high-resolution structure determination. Unsupervised classification may serve as the first step in the assessment of structural heterogeneity. However, traditional algorithms for unsupervised classification, such as K-means clustering and maximum likelihood optimization, may classify images into wrong classes with decreasing signal-to-noise-ratio (SNR) in the image data, yet demand increased computational costs. Overcoming these limitations requires further development of clustering algorithms for high-performance cryo-EM data processing. Here we introduce an unsupervised single-particle clustering algorithm derived from a statistical manifold learning framework called generative topographic mapping (GTM). We show that unsupervised GTM clustering improves classification accuracy by about 40% in the absence of input references for data with lower SNRs. Applications to several experimental datasets suggest that our algorithm can detect subtle structural differences among classes via a hierarchical clustering strategy. After code optimization over a high-performance computing (HPC) environment, our software implementation was able to generate thousands of reference-free class averages within hours in a massively parallel fashion, which allows a significant improvement on ab initio 3D reconstruction and assists in the computational purification of homogeneous datasets for high-resolution visualization.

  18. Massively parallel unsupervised single-particle cryo-EM data clustering via statistical manifold learning.

    Science.gov (United States)

    Wu, Jiayi; Ma, Yong-Bei; Congdon, Charles; Brett, Bevin; Chen, Shuobing; Xu, Yaofang; Ouyang, Qi; Mao, Youdong

    2017-01-01

    Structural heterogeneity in single-particle cryo-electron microscopy (cryo-EM) data represents a major challenge for high-resolution structure determination. Unsupervised classification may serve as the first step in the assessment of structural heterogeneity. However, traditional algorithms for unsupervised classification, such as K-means clustering and maximum likelihood optimization, may classify images into wrong classes with decreasing signal-to-noise-ratio (SNR) in the image data, yet demand increased computational costs. Overcoming these limitations requires further development of clustering algorithms for high-performance cryo-EM data processing. Here we introduce an unsupervised single-particle clustering algorithm derived from a statistical manifold learning framework called generative topographic mapping (GTM). We show that unsupervised GTM clustering improves classification accuracy by about 40% in the absence of input references for data with lower SNRs. Applications to several experimental datasets suggest that our algorithm can detect subtle structural differences among classes via a hierarchical clustering strategy. After code optimization over a high-performance computing (HPC) environment, our software implementation was able to generate thousands of reference-free class averages within hours in a massively parallel fashion, which allows a significant improvement on ab initio 3D reconstruction and assists in the computational purification of homogeneous datasets for high-resolution visualization.

  19. Optimization of Second Fault Detection Thresholds to Maximize Mission POS

    Science.gov (United States)

    Anzalone, Evan

    2018-01-01

    both magnitude and time. As such, the Navigation team is taking advantage of the INS's capability to schedule and change fault detection thresholds in flight. These values are optimized along a nominal trajectory in order to maximize probability of mission success, and reducing the probability of false positives (defined as when the INS would report a second fault condition resulting in loss of mission, but the vehicle would still meet insertion requirements within system-level margins). This paper will describe an optimization approach using Genetic Algorithms to tune the threshold parameters to maximize vehicle resilience to second fault events as a function of potential fault magnitude and time of fault over an ascent mission profile. The analysis approach, and performance assessment of the results will be presented to demonstrate the applicability of this process to second fault detection to maximize mission probability of success.

  20. STOIC INFLUENCE IN ABELARD'S CONCEPTION OF STATUS AND DICTUM AS QUASI RES (ὡσανεì τινά)

    Directory of Open Access Journals (Sweden)

    Guy Hamelin

    2011-09-01

    Full Text Available In his work, Peter Abelard (1079-1142) highlights two metaphysical notions that sustain his logical theory: the status and the dictum propositionis, which cause, respectively, the imposition (impositio) of universal terms and the truth-value of propositions. Both expressions refer to peculiar ontological natures, insofar as they are not considered things (res), even though they constitute causes. Nevertheless, neither are they nothing. Abelard calls them 'quasi-things' (quasi res). In the present article, we first explain these two essential notions of Abelardian logic and then try to find the source of this particular metaphysics. Contrary to some important commentators on Abelard's logic, who hold that there is a strong Platonic influence on this specific conception, we argue instead, with the support of significant texts and in keeping with Abelardian nominalism, that the greatest influence on our author's metaphysics is that of Stoicism, above all ancient Stoicism.

  1. Comparison of turbulence mitigation algorithms

    Science.gov (United States)

    Kozacik, Stephen T.; Paolini, Aaron; Sherman, Ariel; Bonnett, James; Kelmelis, Eric

    2017-07-01

    When capturing imagery over long distances, atmospheric turbulence often degrades the data, especially when observation paths are close to the ground or in hot environments. These issues manifest as time-varying scintillation and warping effects that decrease the effective resolution of the sensor and reduce actionable intelligence. In recent years, several image processing approaches to turbulence mitigation have shown promise. Each of these algorithms has different computational requirements, usability demands, and degrees of independence from camera sensors. They also produce different degrees of enhancement when applied to turbulent imagery. Additionally, some of these algorithms are applicable to real-time operational scenarios while others may only be suitable for postprocessing workflows. EM Photonics has been developing image-processing-based turbulence mitigation technology since 2005. We will compare techniques from the literature with our commercially available, real-time, GPU-accelerated turbulence mitigation software. These comparisons will be made using real (not synthetic), experimentally obtained data for a variety of conditions, including varying optical hardware, imaging range, subjects, and turbulence conditions. Comparison metrics will include image quality, video latency, computational complexity, and potential for real-time operation. Additionally, we will present a technique for quantitatively comparing turbulence mitigation algorithms using real images of radial resolution targets.

  2. Joint Temperature-Lasing Mode Compensation for Time-of-Flight LiDAR Sensors

    Directory of Open Access Journals (Sweden)

    Anas Alhashimi

    2015-12-01

    Full Text Available We propose an expectation maximization (EM strategy for improving the precision of time of flight (ToF light detection and ranging (LiDAR scanners. The novel algorithm statistically accounts not only for the bias induced by temperature changes in the laser diode, but also for the multi-modality of the measurement noises that is induced by mode-hopping effects. Instrumental to the proposed EM algorithm, we also describe a general thermal dynamics model that can be learned either from just input-output data or from a combination of simple temperature experiments and information from the laser’s datasheet. We test the strategy on a SICK LMS 200 device and improve its average absolute error by a factor of three.
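
    As an illustration of the kind of EM machinery involved, the sketch below fits a two-component Gaussian mixture to synthetic range residuals, with the component means playing the role of mode-hopping offsets. The data and model are illustrative only, not the SICK LMS 200 calibration itself.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic range residuals from two lasing modes (offsets in mm).
x = np.concatenate([rng.normal(-2.0, 1.0, 300), rng.normal(3.0, 1.0, 700)])

# Initial guesses for weights, means, and variances of the two modes.
w, mu, var = np.array([0.5, 0.5]), np.array([-1.0, 1.0]), np.array([1.0, 1.0])

for _ in range(50):
    # E-step: posterior responsibility of each mode for each sample.
    pdf = np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
    r = w * pdf
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the responsibilities.
    n = r.sum(axis=0)
    w = n / len(x)
    mu = (r * x[:, None]).sum(axis=0) / n
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n

print("mode offsets:", mu, "weights:", w)
```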

  3. An Interval Type-2 Fuzzy System with a Species-Based Hybrid Algorithm for Nonlinear System Control Design

    Directory of Open Access Journals (Sweden)

    Chung-Ta Li

    2014-01-01

    Full Text Available We propose a species-based hybrid of the electromagnetism-like mechanism (EM) and back-propagation algorithms (SEMBP) for the design of an interval type-2 fuzzy neural system with asymmetric membership functions (AIT2FNS). Interval type-2 asymmetric fuzzy membership functions (IT2 AFMFs) and a TSK-type consequent part are adopted to implement the network structure of the AIT2FNS. In addition, the type reduction procedure is integrated into an adaptive network structure to reduce computational complexity. Hence, the AIT2FNS can enhance approximation accuracy effectively while using fewer fuzzy rules. The AIT2FNS is trained by the SEMBP algorithm, which comprises the steps of uniform initialization, species determination, local search, total force calculation, movement, and evaluation. It combines the advantages of the EM and back-propagation (BP) algorithms to attain faster convergence and lower computational complexity. The proposed SEMBP algorithm adopts the uniform method (which evenly scatters solution agents over the feasible solution region) and the species technique to improve the algorithm's ability to find the global optimum. Finally, two illustrative examples of nonlinear system control are presented to demonstrate the performance and effectiveness of the proposed AIT2FNS with the SEMBP algorithm.

  4. Multi-objective optimal design of sandwich panels using a genetic algorithm

    Science.gov (United States)

    Xu, Xiaomei; Jiang, Yiping; Pueh Lee, Heow

    2017-10-01

    In this study, an optimization problem concerning sandwich panels is investigated by simultaneously considering the two objectives of minimizing the panel mass and maximizing the sound insulation performance. First of all, the acoustic model of sandwich panels is discussed, which provides a foundation to model the acoustic objective function. Then the optimization problem is formulated as a bi-objective programming model, and a solution algorithm based on the non-dominated sorting genetic algorithm II (NSGA-II) is provided to solve the proposed model. Finally, taking an example of a sandwich panel that is expected to be used as an automotive roof panel, numerical experiments are carried out to verify the effectiveness of the proposed model and solution algorithm. Numerical results demonstrate in detail how the core material, geometric constraints and mechanical constraints impact the optimal designs of sandwich panels.
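
    The core of NSGA-II is Pareto non-dominated sorting. The sketch below shows that comparison for the two objectives above, with mass to be minimized and sound insulation negated so that both objectives are minimized; the design points are invented.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_front(points):
    """Extract the first non-dominated front, as NSGA-II does each generation."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Each tuple: (panel mass [kg], negated sound insulation [-dB]).
designs = [(12.0, -30.0), (10.0, -28.0), (14.0, -35.0), (11.0, -27.0), (10.5, -31.0)]
print(non_dominated_front(designs))
```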

  5. Control and EMS of a Grid-Connected Microgrid with Economical Analysis

    Directory of Open Access Journals (Sweden)

    Mohamed El-Hendawi

    2018-01-01

    Full Text Available Recently, significant development has occurred in the field of microgrids and renewable energy systems (RESs). Integrating microgrids and renewable energy sources facilitates a sustainable energy future. This paper proposes a control algorithm and an optimal energy management system (EMS) for a grid-connected microgrid to minimize its operating cost. The microgrid includes photovoltaic (PV), wind turbine (WT), and energy storage systems (ESS). The interior search algorithm (ISA) optimization technique determines the optimal hour-by-hour scheduling for the microgrid system, while meeting the required load demand, based on 24-h-ahead forecast data. The control system consists of three stages: EMS, supervisory control and local control. The EMS is responsible for providing the control system with the optimum day-ahead scheduled power flow between the microgrid (MG) sources, batteries, loads and the main grid, based on an economic analysis. The supervisory control stage is responsible for compensating the mismatch between the scheduled power and the real microgrid power. In addition, this paper presents the local control design to regulate the local power, current and DC voltage of the microgrid. For verification, the proposed model was applied to a real case study in Oshawa (Ontario, Canada) with various load conditions.

  6. Parallel optimization of IDW interpolation algorithm on multicore platform

    Science.gov (United States)

    Guan, Xuefeng; Wu, Huayi

    2009-10-01

    Due to increasing power consumption, heat dissipation, and other physical issues, the architecture of the central processing unit (CPU) has been turning to multicore rapidly in recent years. A multicore processor packages multiple processor cores in the same chip, which not only offers increased performance, but also presents significant challenges to application developers. As a matter of fact, in the GIS field most current GIS algorithms were implemented serially and cannot fully exploit the parallelism potential of such multicore platforms. In this paper, we choose the Inverse Distance Weighted spatial interpolation algorithm (IDW) as an example to study how to optimize current serial GIS algorithms on multicore platforms in order to maximize performance speedup. With the help of OpenMP, a threading methodology is introduced to split and share the whole interpolation work among processor cores. After parallel optimization, the execution time of the interpolation algorithm is greatly reduced and good performance speedup is achieved. For example, the performance speedup on an Intel Xeon 5310 is 1.943 with 2 execution threads and 3.695 with 4 execution threads, respectively. An additional output comparison between pre-optimization and post-optimization is carried out and shows that parallel optimization does not affect the final interpolation result.
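
    The paper's OpenMP code is not reproduced in the abstract; the following Python sketch mirrors the same idea of splitting IDW interpolation work among threads, with NumPy doing the per-chunk arithmetic (and releasing the interpreter lock during it). The sample points and power parameter are made up.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

rng = np.random.default_rng(1)
pts = rng.uniform(0, 100, (500, 2))      # known sample locations
vals = rng.uniform(0, 10, 500)           # known sample values
grid = np.stack(np.meshgrid(np.arange(100), np.arange(100)), -1).reshape(-1, 2)

def idw_chunk(targets, power=2.0):
    # Inverse-distance-weighted average of sample values at each target.
    d = np.linalg.norm(targets[:, None, :] - pts[None, :, :], axis=2)
    w = 1.0 / np.maximum(d, 1e-9) ** power
    return (w @ vals) / w.sum(axis=1)

# Split the grid among 4 workers, mirroring the OpenMP work sharing.
chunks = np.array_split(grid, 4)
with ThreadPoolExecutor(max_workers=4) as ex:
    result = np.concatenate(list(ex.map(idw_chunk, chunks)))
print(result.shape)
```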

  7. An approach for maximizing the smallest eigenfrequency of structure vibration based on piecewise constant level set method

    Science.gov (United States)

    Zhang, Zhengfang; Chen, Weifeng

    2018-05-01

    Maximization of the smallest eigenfrequency of the linearized elasticity system with an area constraint is investigated. The elasticity system is extended into a large background domain, but the void is vacuum and not filled with ersatz material. The piecewise constant level set (PCLS) method is applied to represent the two regions, the original material region and the void region. A quadratic PCLS function is proposed to represent the characteristic function. Consequently, the functional derivative of the smallest eigenfrequency with respect to the PCLS function takes nonzero values in the original material region and zero in the void region. A penalty gradient algorithm is proposed, which initializes the whole background domain with the original material and decreases the area of the original material region until the area constraint is satisfied. 2D and 3D numerical examples are presented, illustrating the validity of the proposed algorithm.
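
    The abstract gives no formulas; in the notation standard for such problems (assumed here, not taken from the paper), the objective is the smallest eigenvalue of a generalized eigenvalue problem, with a Rayleigh-quotient characterization:

```latex
K(\phi)\,u = \lambda\, M(\phi)\,u, \qquad
\max_{\phi}\ \lambda_{\min}(\phi) \;=\; \max_{\phi}\ \min_{u \neq 0}\
\frac{u^{\top} K(\phi)\,u}{u^{\top} M(\phi)\,u}
\quad \text{s.t.} \quad \int_{\Omega} \chi(\phi)\,\mathrm{d}x \;\le\; A_{0},
```

    where K and M are the stiffness and mass matrices assembled over the material region encoded by the PCLS function φ, χ(φ) is the characteristic function of that region, and A₀ is the area bound.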

  8. Subsurface imaging by electrical and EM methods

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-12-01

    This report consists of 3 subjects. 1) Three-dimensional inversion of resistivity data with topography: In this study, we developed a 3-D inversion method based on the finite element calculation of model responses, which can effectively accommodate irregular topography. In solving the inverse problem, the iterative least-squares approach comprising smoothness constraints was taken, along with the reciprocity approach in the calculation of the Jacobian. Furthermore, Active Constraint Balancing, which we recently developed to enhance the resolving power of the inverse problem, was also employed. Since our new algorithm accounts for the topography in the inversion step, topography correction is not necessary as a preliminary processing step and we can expect a more accurate image of the earth. 2) Electromagnetic responses due to a source in the borehole: The effects of borehole fluid and casing on borehole EM responses should be thoroughly analyzed since they may affect the resultant image of the earth. In this study, we developed an accurate algorithm for calculating the EM responses containing the effects of borehole fluid and casing when a current-carrying ring is located on the borehole axis. An analytic expression for the primary vertical magnetic field along the borehole axis was first formulated, and the fast Fourier transform is applied to get the EM fields at any location in the whole space. 3) High frequency electromagnetic impedance survey: At high frequencies the EM impedance becomes a function of the angle of incidence or the horizontal wavenumber, so the electrical properties cannot be readily extracted without first eliminating the effect of the horizontal wavenumber on the impedance. For this purpose, this paper considers two independent methods for accurately determining the horizontal wavenumber, which in turn is used to correct the impedance data. The 'apparent' electrical properties derived from the corrected impedance

  9. Atmospheric dispersion prediction and source estimation of hazardous gas using artificial neural network, particle swarm optimization and expectation maximization

    Science.gov (United States)

    Qiu, Sihang; Chen, Bin; Wang, Rongxiao; Zhu, Zhengqiu; Wang, Yuan; Qiu, Xiaogang

    2018-04-01

    Hazardous gas leak accidents pose a potential threat to human beings. Predicting atmospheric dispersion and estimating its source have become increasingly important in emergency management. Current dispersion prediction and source estimation models cannot satisfy the requirements of emergency management because they do not provide both high efficiency and high accuracy at the same time. In this paper, we develop a fast and accurate dispersion prediction and source estimation method based on an artificial neural network (ANN), particle swarm optimization (PSO) and expectation maximization (EM). The novel method uses a large number of pre-determined scenarios to train the ANN for dispersion prediction, so that the ANN can predict the concentration distribution accurately and efficiently. PSO and EM are applied for estimating the source parameters, which can effectively accelerate convergence. The method is verified against the Indianapolis field study with an SF6 release source. The results demonstrate the effectiveness of the method.
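
    A minimal PSO loop of the kind used for the source-estimation step is sketched below; the cost function is a stand-in (in the paper, predicted concentrations come from the trained ANN surrogate), and the parameter bounds are invented.

```python
import numpy as np

rng = np.random.default_rng(2)

def cost(params):
    # Hypothetical mismatch between predicted and observed concentrations;
    # in the paper this prediction comes from the trained ANN surrogate.
    target = np.array([50.0, 120.0, 8.0])   # (x, y, release rate), made up
    return np.sum((params - target) ** 2)

N, DIM, W, C1, C2 = 30, 3, 0.7, 1.5, 1.5
x = rng.uniform([0, 0, 0], [200, 200, 20], (N, DIM))   # particle positions
v = np.zeros((N, DIM))
pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])

for _ in range(200):
    gbest = pbest[pbest_cost.argmin()]
    r1, r2 = rng.random((N, DIM)), rng.random((N, DIM))
    v = W * v + C1 * r1 * (pbest - x) + C2 * r2 * (gbest - x)
    x = x + v
    c = np.array([cost(p) for p in x])
    better = c < pbest_cost
    pbest[better], pbest_cost[better] = x[better], c[better]

print("estimated source parameters:", pbest[pbest_cost.argmin()])
```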

  10. Accelerated median root prior reconstruction for pinhole single-photon emission tomography (SPET)

    Energy Technology Data Exchange (ETDEWEB)

    Sohlberg, Antti [Department of Clinical Physiology and Nuclear Medicine, Kuopio University Hospital, PO Box 1777 FIN-70211, Kuopio (Finland); Ruotsalainen, Ulla [Institute of Signal Processing, DMI, Tampere University of Technology, PO Box 553 FIN-33101, Tampere (Finland); Watabe, Hiroshi [National Cardiovascular Center Research Institute, 5-7-1 Fujisihro-dai, Suita City, Osaka 565-8565 (Japan); Iida, Hidehiro [National Cardiovascular Center Research Institute, 5-7-1 Fujisihro-dai, Suita City, Osaka 565-8565 (Japan); Kuikka, Jyrki T [Department of Clinical Physiology and Nuclear Medicine, Kuopio University Hospital, PO Box 1777 FIN-70211, Kuopio (Finland)

    2003-07-07

    Pinhole collimation can be used to improve spatial resolution in SPET. However, the resolution improvement is achieved at the cost of reduced sensitivity, which leads to projection images with poor statistics. Images reconstructed from these projections using the maximum likelihood expectation maximization (ML-EM) algorithm, which has been used to reduce the artefacts generated by filtered backprojection (FBP) based reconstruction, suffer from a noise/bias trade-off: noise contaminates the images at high iteration numbers, whereas early abortion of the algorithm produces images that are excessively smooth and biased towards the initial estimate of the algorithm. To limit the noise accumulation we propose the use of the pinhole median root prior (PH-MRP) reconstruction algorithm. MRP is a Bayesian reconstruction method that has already been used in PET imaging and shown to possess good noise reduction and edge preservation properties. In this study the PH-MRP algorithm was accelerated with the ordered subsets (OS) procedure and compared to the FBP, OS-EM and conventional Bayesian reconstruction methods in terms of noise reduction, quantitative accuracy, edge preservation and visual quality. The results showed that the accelerated PH-MRP algorithm was very robust. It provided visually pleasing images with a lower noise level than FBP or OS-EM and with smaller bias and sharper edges than the conventional Bayesian methods.
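
    For reference, the ML-EM update that PH-MRP builds on has a compact multiplicative form; the sketch below applies it to a toy 1-D system matrix. The pinhole geometry, the OS acceleration, and the median root prior itself are omitted.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.uniform(0, 1, (64, 32))          # toy system matrix (detector x image)
x_true = rng.uniform(0, 5, 32)
y = rng.poisson(A @ x_true)              # noisy projection data

x = np.ones(32)                          # uniform initial estimate
sens = A.sum(axis=0)                     # sensitivity image (column sums)
for _ in range(100):
    # ML-EM: multiply by the back-projected ratio of measured to modeled data.
    x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / sens

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```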

  11. Natural Products from Antarctic Colonial Ascidians of the Genera Aplidium and Synoicum: Variability and Defensive Role

    Directory of Open Access Journals (Sweden)

    Conxita Avila

    2012-08-01

    Full Text Available Ascidians have developed multiple defensive strategies mostly related to physical, nutritional or chemical properties of the tunic. One such strategy is chemical defense based on secondary metabolites. We analyzed a series of colonial Antarctic ascidians from deep-water collections belonging to the genera Aplidium and Synoicum to evaluate the incidence of organic deterrents and their variability. The ether fractions from 15 samples including specimens of the species A. falklandicum, A. fuegiense, A. meridianum, A. millari and S. adareanum were subjected to feeding assays towards two relevant sympatric predators: the starfish Odontaster validus and the amphipod Cheirimedon femoratus. All samples revealed repellency. Nonetheless, some colonies concentrated defensive chemicals in internal body-regions rather than in the tunic. Four ascidian-derived meroterpenoids, rossinone B and the three derivatives 2,3-epoxy-rossinone B, 3-epi-rossinone B and 5,6-epoxy-rossinone B, and the indole alkaloids meridianins A-G, along with other minor meridianin compounds, were isolated from several samples. Some purified metabolites were tested in feeding assays, exhibiting potent unpalatabilities and thus revealing their role in predation avoidance. Ascidian extracts and purified compound-fractions were further assessed in antibacterial tests against a marine Antarctic bacterium. Only the meridianins showed inhibition activity, demonstrating a multifunctional defensive role. According to their occurrence in nature and within our colonial specimens, the possible origin of both types of metabolites is discussed.

  12. The index-based subgraph matching algorithm (ISMA): fast subgraph enumeration in large networks using optimized search trees.

    Directory of Open Access Journals (Sweden)

    Sofie Demeyer

    Full Text Available Subgraph matching algorithms are designed to find all instances of predefined subgraphs in a large graph or network and play an important role in the discovery and analysis of so-called network motifs, subgraph patterns which occur more often than expected by chance. We present the index-based subgraph matching algorithm (ISMA, a novel tree-based algorithm. ISMA realizes a speedup compared to existing algorithms by carefully selecting the order in which the nodes of a query subgraph are investigated. In order to achieve this, we developed a number of data structures and maximally exploited symmetry characteristics of the subgraph. We compared ISMA to a naive recursive tree-based algorithm and to a number of well-known subgraph matching algorithms. Our algorithm outperforms the other algorithms, especially on large networks and with large query subgraphs. An implementation of ISMA in Java is freely available at http://sourceforge.net/projects/isma/.

  13. The Index-Based Subgraph Matching Algorithm (ISMA): Fast Subgraph Enumeration in Large Networks Using Optimized Search Trees

    Science.gov (United States)

    Demeyer, Sofie; Michoel, Tom; Fostier, Jan; Audenaert, Pieter; Pickavet, Mario; Demeester, Piet

    2013-01-01

    Subgraph matching algorithms are designed to find all instances of predefined subgraphs in a large graph or network and play an important role in the discovery and analysis of so-called network motifs, subgraph patterns which occur more often than expected by chance. We present the index-based subgraph matching algorithm (ISMA), a novel tree-based algorithm. ISMA realizes a speedup compared to existing algorithms by carefully selecting the order in which the nodes of a query subgraph are investigated. In order to achieve this, we developed a number of data structures and maximally exploited symmetry characteristics of the subgraph. We compared ISMA to a naive recursive tree-based algorithm and to a number of well-known subgraph matching algorithms. Our algorithm outperforms the other algorithms, especially on large networks and with large query subgraphs. An implementation of ISMA in Java is freely available at http://sourceforge.net/projects/isma/. PMID:23620730

  14. A Scalable Gaussian Process Analysis Algorithm for Biomass Monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Chandola, Varun [ORNL]; Vatsavai, Raju [ORNL]

    2011-01-01

    Biomass monitoring is vital for studying the carbon cycle of earth's ecosystem and has several significant implications, especially in the context of understanding climate change and its impacts. Recently, several change detection methods have been proposed to identify land cover changes in temporal profiles (time series) of vegetation collected using remote sensing instruments, but they do not satisfy one or both of the two requirements of the biomass monitoring problem, i.e., operating in online mode and handling periodic time series. In this paper, we adapt Gaussian process regression to detect changes in such time series in an online fashion. While Gaussian processes (GPs) have been widely used as a kernel based learning method for regression and classification, their applicability to massive spatio-temporal data sets, such as remote sensing data, has been limited owing to the high computational costs involved. We focus on addressing the scalability issues associated with the proposed GP based change detection algorithm. This paper makes several significant contributions. First, we propose a GP based online time series change detection algorithm and demonstrate its effectiveness in detecting different types of changes in Normalized Difference Vegetation Index (NDVI) data obtained from a study area in Iowa, USA. Second, we propose an efficient Toeplitz matrix based solution which significantly improves the computational complexity and memory requirements of the proposed GP based method. Specifically, the proposed solution can analyze a time series of length t in O(t^2) time while maintaining an O(t) memory footprint, compared to the O(t^3) time and O(t^2) memory requirement of standard matrix manipulation based methods. Third, we describe a parallel version of the proposed solution which can be used to simultaneously analyze a large number of time series. We study three different parallel implementations: using threads, MPI, and a
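
    The Toeplitz observation can be reproduced in a few lines: for a stationary kernel on a regularly sampled grid, the covariance matrix is fully described by its first column, so a Levinson-type solver replaces the dense Cholesky factorization. The kernel choice and hyperparameters below are assumptions, not the paper's settings.

```python
import numpy as np
from scipy.linalg import matmul_toeplitz, solve_toeplitz

t = np.arange(200.0)                     # regularly sampled time series
y = np.sin(2 * np.pi * t / 46) + 0.1 * np.random.default_rng(4).standard_normal(200)

def kernel_col(dt, ls=8.0, var=1.0, noise=0.0):
    # Squared-exponential kernel; on a regular grid the covariance matrix
    # is Toeplitz, so only its first column is needed.
    return var * np.exp(-0.5 * (dt / ls) ** 2) + noise * (dt == 0)

col_noisy = kernel_col(t, noise=0.01)    # first column of K + sigma^2 I
col_clean = kernel_col(t)                # first column of K

alpha = solve_toeplitz(col_noisy, y)     # Levinson solve: O(t^2) time, O(t) memory
mean = matmul_toeplitz(col_clean, alpha) # GP posterior mean at the inputs
print("mean absolute residual:", np.abs(y - mean).mean())
```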

  15. A New Adaptive Hungarian Mating Scheme in Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Chanju Jung

    2016-01-01

    Full Text Available In genetic algorithms, the selection or mating scheme is one of the important operations. In this paper, we suggest an adaptive mating scheme using previously suggested Hungarian mating schemes. The Hungarian mating schemes consist of maximizing the sum of mating distances, minimizing the sum, and random matching. We propose an algorithm to select one of these Hungarian mating schemes: every mated pair of solutions votes for the next generation's mating scheme, considering both the distance between parents and the distance between parent and offspring. Well-known combinatorial optimization problems, the traveling salesperson problem and the graph bisection problem, are used as the test bed for our method. Our adaptive strategy showed better results than not only pure and previous hybrid schemes but also existing distance-based mating schemes.
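
    The maximize-sum scheme reduces to a standard assignment problem; a sketch using SciPy's Hungarian solver on a toy binary-coded population follows (pass maximize=False for the minimize-sum scheme).

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(5)
group_a = rng.integers(0, 2, (6, 20))     # two halves of a binary-coded population
group_b = rng.integers(0, 2, (6, 20))

# Pairwise Hamming distances between candidate parents.
dist = (group_a[:, None, :] != group_b[None, :, :]).sum(axis=2)

# Hungarian assignment maximizing the total mating distance (one of the
# three schemes described above).
rows, cols = linear_sum_assignment(dist, maximize=True)
print("mating pairs:", list(zip(rows, cols)))
print("total mating distance:", dist[rows, cols].sum())
```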

  16. A General Constant Power Generation Algorithm for Photovoltaic Systems

    DEFF Research Database (Denmark)

    Tafti, Hossein Dehghani; Maswood, Ali Iftekhar; Konstantinou, Georgios

    2018-01-01

    Photovoltaic power plants (PVPPs) typically operate by tracking the maximum power point in order to maximize conversion efficiency. However, with the continuous increase of installed grid-connected PVPPs, power system operators have been experiencing new challenges, like overloading, overvoltages and operation during grid voltage disturbances. Consequently, constant power generation (CPG) is imposed by grid codes. An algorithm for the calculation of the photovoltaic panel voltage reference, which generates a constant power from the PVPP, is introduced in this paper. The key novelty of the proposed algorithm is that it is based on a hysteresis band controller in order to obtain fast dynamic response under transients and low power oscillation during steady-state operation. The performance of the proposed algorithm for both single- and two-stage PVPPs is examined on a 50-kVA simulation setup of these topologies. Moreover, experimental ...

  17. Exergetic optimization of turbofan engine with genetic algorithm method

    Energy Technology Data Exchange (ETDEWEB)

    Turan, Onder [Anadolu University, School of Civil Aviation (Turkey)], e-mail: onderturan@anadolu.edu.tr

    2011-07-01

    With the growth of passenger numbers, emissions from the aeronautics sector are increasing, and the industry is now working on improving engine efficiency to reduce fuel consumption. The aim of this study is to present the use of genetic algorithms, an optimization method based on biological principles, to optimize the exergetic performance of turbofan engines. The optimization was carried out using exergy efficiency, overall efficiency and specific thrust of the engine as evaluation criteria, with fan pressure ratio, bypass ratio, turbine inlet temperature and flight altitude as design variables. Results showed that exergy efficiency can be maximized with higher altitudes, fan pressure ratio and turbine inlet temperature; the turbine inlet temperature is the most important parameter for increased exergy efficiency. This study demonstrated that genetic algorithms are effective in optimizing complex systems in a short time.

  18. An efficient simulated annealing algorithm for the redundancy allocation problem with a choice of redundancy strategies

    International Nuclear Information System (INIS)

    Chambari, Amirhossain; Najafi, Amir Abbas; Rahmati, Seyed Habib A.; Karimi, Aida

    2013-01-01

    The redundancy allocation problem (RAP) is an important reliability optimization problem. This paper studies a specific RAP in which redundancy strategies are chosen. To do so, the choice of the redundancy strategies among active and cold standby is considered as a decision variable. The goal is to select the redundancy strategy, component, and redundancy level for each subsystem such that the system reliability is maximized. Since the RAP is an NP-hard problem, we propose an efficient simulated annealing algorithm (SA) to solve it. In addition, to evaluate the performance of the proposed algorithm, it is compared with well-known algorithms in the literature on different test problems. The results of the performance analysis show a relatively satisfactory efficiency of the proposed SA algorithm

  19. Fumigant Antifungal Activity of Myrtaceae Essential Oils and Constituents from Leptospermum petersonii against Three Aspergillus Species

    Directory of Open Access Journals (Sweden)

    Il-Kwon Park

    2012-09-01

    Full Text Available Commercial plant essential oils obtained from 11 Myrtaceae plant species were tested for their fumigant antifungal activity against Aspergillus ochraceus, A. flavus, and A. niger. Essential oils extracted from Leptospermum petersonii at air concentrations of 56 × 10−3 mg/mL and 28 × 10−3 mg/mL completely inhibited the growth of the three Aspergillus species. However, at an air concentration of 14 × 10−3 mg/mL, inhibition rates of L. petersonii essential oils were reduced to 20.2% and 18.8% in the case of A. flavus and A. niger, respectively. The other Myrtaceae essential oils (56 × 10−3 mg/mL) only weakly inhibited the fungi or had no detectable effect. Gas chromatography-mass spectrometry analysis identified 16 compounds in L. petersonii essential oil. The antifungal activity of the identified compounds was tested individually by using standard or synthesized compounds. Of these, neral and geranial inhibited growth by 100% at an air concentration of 56 × 10−3 mg/mL, whereas the activity of citronellol was somewhat lower (80%). The other compounds exhibited only moderate or weak antifungal activity. The antifungal activities of blends of constituents identified in L. petersonii oil indicated that neral and geranial were the major contributors to the fumigant antifungal activity.

  20. Faster algorithms for RNA-folding using the Four-Russians method.

    Science.gov (United States)

    Venkatachalam, Balaji; Gusfield, Dan; Frid, Yelena

    2014-03-06

    The secondary structure that maximizes the number of non-crossing matchings between complementary bases of an RNA sequence of length n can be computed in O(n^3) time using Nussinov's dynamic programming algorithm. The Four-Russians method is a technique that reduces the running time for certain dynamic programming algorithms by a multiplicative factor after a preprocessing step where solutions to all smaller subproblems of a fixed size are exhaustively enumerated and solved. Frid and Gusfield designed an O(n^3/log n) algorithm for RNA folding using the Four-Russians technique. In their algorithm the preprocessing is interleaved with the algorithm computation. We simplify the algorithm and the analysis by doing the preprocessing once prior to the algorithm computation; we call this the two-vector method. We also show variants where, instead of exhaustive preprocessing, we only solve the subproblems encountered in the main algorithm once and memoize the results. We give a simple proof of correctness and explore the practical advantages over the earlier method. The Nussinov algorithm admits an O(n^2) time parallel algorithm. We show a parallel algorithm using the two-vector idea that improves the time bound to O(n^2/log n). We have implemented the parallel algorithm on graphics processing units using the CUDA platform. We discuss the organization of the data structures to exploit coalesced memory access for fast running times. The ideas used to organize the data structures also help in improving the running time of the serial algorithms. For sequences of length up to 6000 bases the parallel algorithm takes only about 2.5 seconds, and the two-vector serial method takes about 57 seconds on a desktop and 15 seconds on a server. Among the serial algorithms, the two-vector and memoized versions are faster than the Frid-Gusfield algorithm by a factor of 3, and are faster than Nussinov by up to a factor of 20. The source code for the algorithms is available at http://github.com/ijalabv/FourRussiansRNAFolding.
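
    For orientation, the O(n^3) Nussinov baseline that the Four-Russians and two-vector methods accelerate is sketched below, without any speedup; the minimum loop length and the test sequence are arbitrary choices.

```python
# Nussinov dynamic program: maximum number of non-crossing base pairings.
PAIRS = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"), ("G", "U"), ("U", "G")}

def nussinov(seq, min_loop=3):
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):          # increasing subsequence length
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                  # case: j unpaired
            for k in range(i, j - min_loop):     # case: j paired with some k
                if (seq[k], seq[j]) in PAIRS:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]

print(nussinov("GGGAAAUCC"))   # toy sequence
```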

  1. A Criterion to Identify Maximally Entangled Four-Qubit State

    International Nuclear Information System (INIS)

    Zha Xinwei; Song Haiyang; Feng Feng

    2011-01-01

    Paolo Facchi, et al. [Phys. Rev. A 77 (2008) 060304(R)] presented a maximally multipartite entangled state (MMES). Here, we give a criterion for the identification of maximally entangled four-qubit states. Using this criterion, we not only identify some existing maximally entangled four-qubit states in the literature, but also find several new maximally entangled four-qubit states as well. (general)

  2. Effect of stage development of mining operations on maximization the net present value in long-term planning of open pits

    OpenAIRE

    Kržanović, Daniel; Rajković, Radmilo; Mikić, Miomir; Ljubojev, Milenko

    2014-01-01

    Long-term planning in the mining industry has one main goal: maximizing the value realized by excavation and processing of mineral resources. When designing open pits, determining the stages of development of the mining operations (pushbacks) is one of the important factors in the process of long-term production planning. Using different scientific methods and mathematical algorithms underlying the operation of modern software for strategic production planning, it is poss...

  3. Online Planning Algorithm

    Science.gov (United States)

    Rabideau, Gregg R.; Chien, Steve A.

    2010-01-01

    AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as imaging targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower-priority goal, and high-level goals can be added, removed, or updated at any time, with the "best" goals selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by an embedded system with constrained computational resources. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion, allowing requests to be changed or added at the last minute, thereby enabling shorter response times and greater autonomy for the system under control.
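
    A toy sketch of strict-priority goal selection under a shared resource cap follows; the goal structure and the single scalar resource are simplifications of what VML actually tracks.

```python
def select_goals(goals, capacity):
    """Greedy strict-priority selection: a goal is included only if it fits
    in the remaining resource; lower-priority goals can never displace it."""
    selected, used = [], 0.0
    for g in sorted(goals, key=lambda g: g["priority"]):   # 1 = highest
        if used + g["resource"] <= capacity:
            selected.append(g["name"])
            used += g["resource"]
    return selected

goals = [
    {"name": "image_target_A", "priority": 1, "resource": 40.0},
    {"name": "downlink_pass",  "priority": 2, "resource": 35.0},
    {"name": "image_target_B", "priority": 3, "resource": 30.0},
]
print(select_goals(goals, capacity=80.0))   # -> ['image_target_A', 'downlink_pass']
```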

  4. Clinical Relevance of CDH1 and CDH13 DNA-Methylation in Serum of Cervical Cancer Patients

    Directory of Open Access Journals (Sweden)

    Günther K. Bonn

    2012-07-01

    Full Text Available This study was designed to investigate the DNA-methylation status of E-cadherin (CDH1) and H-cadherin (CDH13) in serum samples of cervical cancer patients and control patients with no malignant diseases, and to evaluate the clinical utility of these markers. The DNA-methylation status of CDH1 and CDH13 was analyzed by means of MethyLight technology in serum samples from 49 cervical cancer patients and 40 patients with diseases other than cancer. To compare this methylation analysis with another technique, we analyzed the samples with a denaturing high performance liquid chromatography (DHPLC) PCR method. The specificity and sensitivity of CDH1 DNA-methylation measured by MethyLight was 75% and 55%, and for CDH13 DNA-methylation 95% and 10%. We identified a specificity of 92.5% and a sensitivity of only 27% for the CDH1 DHPLC-PCR analysis. Multivariate analysis showed that serum CDH1 methylation-positive patients had a 7.8-fold risk for death (95% CI: 2.2-27.7; p = 0.001) and a 92.8-fold risk for relapse (95% CI: 3.9-2207.1; p = 0.005). We concluded that the serological detection of CDH1 and CDH13 DNA-hypermethylation is not an ideal diagnostic tool due to low diagnostic specificity and sensitivity. However, it was validated that CDH1 methylation analysis in serum samples may be of potential use as a prognostic marker for cervical cancer patients.

  5. A SAEM ALGORITHM FOR MATRIX COMPLETION PROBLEMS

    Directory of Open Access Journals (Sweden)

    Anaís Frangeline Acuña Sosa

    2015-06-01

    Full Text Available In this work we deal with the matrix completion problem, which arises in different fields such as systems and control theory, image processing, and collaborative filtering. Given a probabilistic matrix factorization model, we present an approach based on Bayesian statistics and a stochastic expectation maximization (SAEM) algorithm to recover a data matrix from a sample of its entries. The proposed method does not require regularization parameters and estimates the rank of the matrix, in contrast to the BPMF method. Our results show that the proposed method outperforms an augmented Lagrangian algorithm in its ability to find the rank of the matrix, and is more efficient than the BPMF method.

  6. Analysis and enumeration algorithms for biological graphs

    CERN Document Server

    Marino, Andrea

    2015-01-01

    In this work we review the main techniques for enumeration algorithms and show four examples of enumeration algorithms that can be applied to efficiently deal with some biological problems modelled by using biological networks: enumerating central and peripheral nodes of a network, enumerating stories, enumerating paths or cycles, and enumerating bubbles. Notice that the corresponding computational problems we define are of more general interest and our results hold in the case of arbitrary graphs. Enumerating all the most and least central vertices in a network according to their eccentricity is an example of an enumeration problem whose solutions are polynomial and can be listed in polynomial time, very often in linear or almost linear time in practice. Enumerating stories, i.e. all maximal directed acyclic subgraphs of a graph G whose sources and targets belong to a predefined subset of the vertices, is on the other hand an example of an enumeration problem with an exponential number of solutions...

  7. Informed consent and competence in pediatrics: opinions of a sample of Romanian physicians in training

    Directory of Open Access Journals (Sweden)

    Sorin Hostiuc

    2012-12-01

    Full Text Available OBJECTIVES: To analyze the views of physicians in training regarding informed consent as autonomous authorization in pediatrics, and to discuss the limiting effects of the competence requirement in this field. METHODS: A multi-institutional study was conducted with 158 medical residents to analyze the views of physicians in training regarding informed consent as autonomous authorization in pediatrics. Participation in the study was voluntary, and participants came from a limited geographic area (Bucharest and its surroundings). RESULTS: Most respondents fully agreed that a patient between 16 and 18 years of age should make informed medical decisions about any type of procedure (including those concerning reproductive choices), whereas patients between 14 and 16 should be allowed to make informed medical decisions only about minor procedures. Most agreed that bone marrow transplants between siblings should be permitted if approved by both, whereas most did not agree with solid organ transplants. Participation of children in clinical trials should be permitted only if the child agrees. CONCLUSIONS: The responses obtained in our study bring informed consent closer to the sense of autonomous authorization than to that of effective authorization. The moral intuition of the participants is therefore more bioethical and less juridical, which, while maximizing patient benefits, is associated with an increased liability risk. Nevertheless, since younger generations mature ever earlier, traditional dogmas regarding competence need to be reassessed.

  8. Extraction of Dihydroquercetin from Larix gmelinii with Ultrasound-Assisted and Microwave-Assisted Alternant Digestion

    Directory of Open Access Journals (Sweden)

    Yuangang Zu

    2012-07-01

    Full Text Available An ultrasound- and microwave-assisted alternant extraction method (UMAE) was applied for extracting dihydroquercetin (DHQ) from Larix gmelinii wood. The investigation was conducted using 60% ethanol as solvent, a 1:12 solid-to-liquid ratio, and 3 h soaking time. The optimum treatment time was 40 min of ultrasound and 20 min of microwave, respectively, and the extraction was performed once. Under the optimized conditions, a satisfactory extraction yield of the target analyte was obtained. Relative to the ultrasound-assisted or microwave-assisted method alone, the proposed approach provides a higher extraction yield. The effect of DHQ at different concentrations and of synthetic antioxidants on oxidative stability in soy bean oil stored for 20 days at different temperatures (25 °C and 60 °C) was compared. DHQ was more effective in restraining soy bean oil oxidation, and a dose-response relationship was observed. The antioxidant activity of DHQ was a little stronger than that of BHA and BHT. Soy bean oil supplemented with 0.08 mg/g DHQ exhibited favorable antioxidant effects and is preferable for effectively avoiding oxidation. The L. gmelinii wood samples before and after extraction were characterized by scanning electron microscopy. The results showed that the UMAE method is a simple and efficient technique for sample preparation.

  9. GASPACHO: a generic automatic solver using proximal algorithms for convex huge optimization problems

    Science.gov (United States)

    Goossens, Bart; Luong, Hiêp; Philips, Wilfried

    2017-08-01

    Many inverse problems (e.g., demosaicking, deblurring, denoising, image fusion, HDR synthesis) share various similarities: degradation operators are often modeled by a specific data fitting function while image prior knowledge (e.g., sparsity) is incorporated by additional regularization terms. In this paper, we investigate automatic algorithmic techniques for evaluating proximal operators. These algorithmic techniques also enable efficient calculation of adjoints from linear operators in a general matrix-free setting. In particular, we study the simultaneous-direction method of multipliers (SDMM) and the parallel proximal algorithm (PPXA) solvers and show that the automatically derived implementations are well suited for both single-GPU and multi-GPU processing. We demonstrate this approach for an Electron Microscopy (EM) deconvolution problem.
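
    As an illustration of the proximal operators such solvers evaluate, the sketch below gives the closed-form prox of the ℓ1 regularizer (soft-thresholding), a standard building block in these splitting methods; the choice of regularizer is illustrative, not a detail taken from the paper.

```python
import numpy as np

def prox_l1(v, lam):
    """Proximal operator of lam*||x||_1: soft-thresholding.
    Solves argmin_x 0.5*||x - v||^2 + lam*||x||_1 elementwise."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

v = np.array([-1.5, -0.2, 0.0, 0.4, 2.0])
print(prox_l1(v, lam=0.5))   # -> [-1. -0.  0.  0.  1.5]
```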

  10. Vacua of maximal gauged D=3 supergravities

    International Nuclear Information System (INIS)

    Fischbacher, T; Nicolai, H; Samtleben, H

    2002-01-01

    We analyse the scalar potentials of maximal gauged three-dimensional supergravities, which reveal a surprisingly rich structure. In contrast to maximal supergravities in dimensions D≥4, all these theories possess a maximally supersymmetric (N=16) ground state with negative cosmological constant Λ < 0, except for the SO(4,4)² gauged theory, whose maximally supersymmetric ground state has Λ = 0. We compute the mass spectra of bosonic and fermionic fluctuations around these vacua and identify the unitary irreducible representations of the relevant background (super)isometry groups to which they belong. In addition, we find several stationary points which are not maximally supersymmetric, and determine their complete mass spectra as well. In particular, we show that there are analogues of all stationary points found in higher dimensions, among them de Sitter (dS) vacua in the theories with noncompact gauge groups SO(5,3)² and SO(4,4)², as well as anti-de Sitter (AdS) vacua in the compact gauged theory preserving 1/4 and 1/8 of the supersymmetries. All the dS vacua have tachyonic instabilities, whereas there do exist nonsupersymmetric AdS vacua which are stable, again in contrast to the D≥4 theories

  11. Solving the wind farm layout optimization problem using random search algorithm

    DEFF Research Database (Denmark)

    Feng, Ju; Shen, Wen Zhong

    2015-01-01

    Wind farm (WF) layout optimization is to find the optimal positions of wind turbines (WTs) inside a WF, so as to maximize and/or minimize a single objective or multiple objectives, while satisfying certain constraints. In this work, a random search (RS) algorithm based on continuous formulation is developed. It is first applied to a test case, in which better results than the genetic algorithm (GA) and the old version of the RS algorithm are obtained. Second, it is applied to the Horns Rev 1 WF, and the optimized layouts obtain a higher power production than its original layout, both for the real scenario and for two constructed scenarios. In this application, it is also found that in order to get consistent and reliable optimization results, up to 360 or more sectors for wind direction have to be used. Finally, considering the inevitable inter-annual variations in the wind conditions, the robustness of the optimized layouts against wind condition variations is also examined.

  12. Improved Solutions for the Optimal Coordination of DOCRs Using Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Muhammad Sulaiman

    2018-01-01

    Full Text Available Nature-inspired optimization techniques are useful tools for minimizing or maximizing an objective function in electrical engineering problems. In this paper, we use the firefly algorithm to improve the optimal solution of the directional overcurrent relay (DOCR) coordination problem, a complex and highly nonlinear constrained optimization problem. In this problem there are two types of design variables: the plug settings (PSs) and the time dial settings (TDSs) of each relay in the circuit. The objective function is to minimize the total operating time of all the basic relays to avoid unnecessary delays. We consider four models in this paper: the IEEE 3-bus, 4-bus, 6-bus, and 8-bus models. The numerical results show that the firefly algorithm with certain parameter settings performs better than other state-of-the-art algorithms.
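
    The core firefly move, attraction weighted by distance plus a random walk, is sketched below on a stand-in objective; the DOCR constraint handling and the paper's parameter settings are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(6)

def objective(x):
    return np.sum(x ** 2)     # stand-in for total relay operating time

N, DIM, BETA0, GAMMA, ALPHA = 20, 4, 1.0, 1.0, 0.1
x = rng.uniform(-2, 2, (N, DIM))

for _ in range(100):
    f = np.array([objective(p) for p in x])   # brightness, evaluated per sweep
    for i in range(N):
        for j in range(N):
            if f[j] < f[i]:   # firefly i moves toward brighter (better) j
                r2 = np.sum((x[i] - x[j]) ** 2)
                beta = BETA0 * np.exp(-GAMMA * r2)      # distance-damped attraction
                x[i] += beta * (x[j] - x[i]) + ALPHA * (rng.random(DIM) - 0.5)

best = min(x, key=objective)
print("best solution:", best, "objective:", objective(best))
```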

  13. How Varroa Parasitism Affects the Immunological and Nutritional Status of the Honey Bee, Apis mellifera

    Directory of Open Access Journals (Sweden)

    Katherine A. Aronstein

    2012-06-01

    Full Text Available We investigated the effect of the parasitic mite Varroa destructor on the immunological and nutritional condition of honey bees, Apis mellifera, from the perspective of the individual bee and the colony. Pupae, newly-emerged adults and foraging adults were sampled from honey bee colonies at one site in S. Texas, USA. Varroa-infested bees displayed elevated titers of Deformed Wing Virus (DWV), suggestive of a depressed capacity to limit viral replication. Expression of genes coding three anti-microbial peptides (defensin1, abaecin, hymenoptaecin) was either not significantly different between Varroa-infested and uninfested bees or was significantly elevated in Varroa-infested bees, varying with sampling date and bee developmental age. The effect of Varroa on nutritional indices of the bees was complex, with protein, triglyceride, glycogen and sugar levels strongly influenced by life-stage of the bee and individual colony. Protein content was depressed and free amino acid content elevated in Varroa-infested pupae, suggesting that protein synthesis, and consequently growth, may be limited in these insects. No simple relationship between the values of nutritional and immune-related indices was observed, and colony-scale effects were indicated by the reduced weight of pupae in colonies with high Varroa abundance, irrespective of whether the individual pupa bore Varroa.

  14. Enhanced intelligent water drops algorithm for multi-depot vehicle routing problem.

    Science.gov (United States)

    Ezugwu, Absalom E; Akutsah, Francis; Olusanya, Micheal O; Adewumi, Aderemi O

    2018-01-01

    The intelligent water drop algorithm is a swarm-based metaheuristic inspired by the behavior of water drops in a river and the environmental changes resulting from the action of the flowing water. Since its appearance as an alternative stochastic optimization method, the algorithm has found applications in solving a wide range of combinatorial and functional optimization problems. This paper presents an improved intelligent water drop algorithm for solving multi-depot vehicle routing problems. A simulated annealing algorithm was introduced into the proposed algorithm as a local search metaheuristic to prevent the intelligent water drop algorithm from getting trapped in local minima and to improve its solution quality. In addition, some potential problematic issues associated with using simulated annealing, including high computational runtime and the exponential calculation of the acceptance probability, are investigated. The exponential calculation of the acceptance probability in simulated-annealing-based techniques is computationally expensive. Therefore, in order to maximize the performance of the intelligent water drop algorithm using simulated annealing, a cheaper way of calculating the acceptance probability is considered. The performance of the proposed hybrid algorithm is evaluated on 33 standard test problems, and the results obtained are compared with the solutions offered by four well-known techniques from the literature. Experimental results and statistical tests show that the new method possesses outstanding performance in terms of solution quality and runtime consumed. In addition, the proposed algorithm is suitable for solving large-scale problems.
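
    The acceptance step being optimized is the standard Metropolis criterion; the abstract does not spell out the cheaper reformulation, so only the textbook form is sketched here. Note that short-circuiting on improving moves already avoids the exponential in the most common case.

```python
import math
import random

def metropolis_accept(delta, temp):
    """Metropolis criterion: accept improving moves outright; accept a
    worsening move with probability exp(-delta/temp). The early return on
    delta <= 0 skips the exponential for improving moves."""
    if delta <= 0:
        return True
    return random.random() < math.exp(-delta / temp)

# Example: a worsening move of +2.0 at temperature 1.0 is accepted
# with probability exp(-2) ~ 0.135.
print(sum(metropolis_accept(2.0, 1.0) for _ in range(10000)) / 10000)
```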

  15. Mining the National Career Assessment Examination Result Using Clustering Algorithm

    Science.gov (United States)

    Pagudpud, M. V.; Palaoag, T. T.; Padirayon, L. M.

    2018-03-01

    Education is an essential process today, which prompts authorities to discover and establish innovative strategies for educational improvement. This study applied data mining using clustering techniques for knowledge extraction from the National Career Assessment Examination (NCAE) results in the Division of Quirino. The NCAE is an examination given to all grade 9 students in the Philippines to assess their aptitudes in different domains. Clustering the students is helpful in identifying students' learning needs. With the use of the RapidMiner tool, clustering algorithms such as Density-Based Spatial Clustering of Applications with Noise (DBSCAN), k-means, k-medoids, expectation maximization clustering, and support vector clustering were analyzed. The silhouette indices of these clustering algorithms were compared, and the result showed that the k-means algorithm with k = 3 and a silhouette index of 0.196 is the most appropriate clustering algorithm for grouping the students. Three groups were formed, with 477 students in the determined group (cluster 0), 310 proficient students (cluster 1) and 396 developing students (cluster 2). The data mining technique used in this study is essential for extracting useful information from the NCAE results to better understand the abilities of students, which in turn is a good basis for adopting teaching strategies.
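
    A minimal reproduction of the model-selection step, comparing cluster counts by silhouette index with scikit-learn, is sketched below on synthetic stand-in scores (the row count matches the study's 1,183 students, but the data are random).

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(7)
# Stand-in for NCAE aptitude-domain scores (rows: students, cols: domains).
scores = rng.normal(50, 15, (1183, 6)).clip(0, 100)

# Compare cluster counts by silhouette index, as in the study.
for k in (2, 3, 4, 5):
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(scores)
    print(k, round(silhouette_score(scores, labels), 3))
```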

  16. Maximizing policy learning in international committees

    DEFF Research Database (Denmark)

    Nedergaard, Peter

    2007-01-01

    In the voluminous literature on the European Union's open method of coordination (OMC), no one has hitherto analysed on the basis of scholarly examination the question of what contributes to the learning processes in the OMC committees. On the basis of a questionnaire sent to all participants, th......-qualified and engaged participants, and a neutral presidency should be present in order to act as an authoritative persuader....

  17. Avoiding Optimal Mean ℓ2,1-Norm Maximization-Based Robust PCA for Reconstruction.

    Science.gov (United States)

    Luo, Minnan; Nie, Feiping; Chang, Xiaojun; Yang, Yi; Hauptmann, Alexander G; Zheng, Qinghua

    2017-04-01

    Robust principal component analysis (PCA) is one of the most important dimension-reduction techniques for handling high-dimensional data with outliers. However, most of the existing robust PCA presupposes that the mean of the data is zero and incorrectly utilizes the average of the data as the optimal mean of robust PCA. In fact, this assumption holds only for the squared ℓ2-norm-based traditional PCA. In this letter, we equivalently reformulate the objective of conventional PCA and learn the optimal projection directions by maximizing the sum of projected differences between each pair of instances based on the ℓ2,1-norm. The proposed method is robust to outliers and also invariant to rotation. More important, the reformulated objective not only automatically avoids the calculation of the optimal mean and makes the assumption of centered data unnecessary, but also theoretically connects to the minimization of reconstruction error. To solve the proposed nonsmooth problem, we exploit an efficient optimization algorithm that softens the contributions from outliers by reweighting each data point iteratively. We theoretically analyze the convergence and computational complexity of the proposed algorithm. Extensive experimental results on several benchmark data sets illustrate the effectiveness and superiority of the proposed method.
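
    The iterative reweighting idea mentioned at the end can be sketched generically: rows with large residuals receive small weights in the next subspace estimate. The half-quadratic update below is an assumed generic form, not the paper's exact algorithm, and (in the spirit of the method) no mean is estimated.

```python
import numpy as np

rng = np.random.default_rng(8)
X = rng.normal(0, 1, (200, 10))
X[:10] += rng.normal(0, 20, (10, 10))      # plant 10 outlier rows

k, w = 3, np.ones(len(X)) / len(X)
for _ in range(20):
    # Weighted PCA step: rows scaled by the square root of their weights.
    U = np.linalg.svd(X * np.sqrt(w)[:, None], full_matrices=False)[2][:k].T
    resid = np.linalg.norm(X - (X @ U) @ U.T, axis=1)
    # Half-quadratic reweighting: large residual -> small weight.
    w = 1.0 / (2.0 * np.maximum(resid, 1e-8))
    w /= w.sum()

print("rows judged most outlying:", np.argsort(w)[:10])
```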

  18. Electromyographic study of the masseter muscle during maximal voluntary clenching and habitual chewing in adults with normal occlusion

    Directory of Open Access Journals (Sweden)

    Adriana Rahal

    2009-01-01

    Full Text Available PURPOSE: To analyze the difference between both sides of the face in the electromyographic activity of the masseter muscle in adults with normal occlusion. METHODS: Thirty healthy individuals with ages ranging from 21 to 30 years were selected. Surface electromyography was performed on the right and left masseter muscles during maximal voluntary clenching and habitual chewing with raisins. The mean values of three teeth clenchings and of fifteen seconds of habitual chewing were calculated for each subject. The analysis considered the sides with higher and lower electromyographic activity. RESULTS: During maximal voluntary clenching, the mean difference between sides was 20.0 microvolts (μV), with a confidence interval (95%) between 14.0 and 26.0 μV. During habitual chewing, the mean difference between sides was 10.3 μV, with a confidence interval (95%) between 6.7 and 13.8 μV. CONCLUSION: There was a statistically significant difference between sides, with a relative difference of 24% for maximal voluntary clenching and 27% for habitual chewing, in healthy adult individuals.

  19. A Multi-objective PMU Placement Method Considering Observability and Measurement Redundancy using ABC Algorithm

    Directory of Open Access Journals (Sweden)

    KULANTHAISAMY, A.

    2014-05-01

    Full Text Available This paper presents a Multi-objective Optimal Placement of Phasor Measurement Units (MOPP) method for large electric transmission systems. It simultaneously minimizes the number of Phasor Measurement Units (PMUs) needed for complete system observability and maximizes the measurement redundancy of the system. Measurement redundancy here means the number of times a bus is monitored by the PMU set; a higher level of measurement redundancy can maximize total system observability and is desirable for reliable power system state estimation. Simultaneous optimization of the two conflicting objectives is therefore performed using a binary-coded Artificial Bee Colony (ABC) algorithm. The complete observability of the power system is first formulated, and then the single-line-loss contingency condition is added to the main model. The efficiency of the proposed method is validated on the IEEE 14, 30, 57 and 118 bus test systems. The value of the ABC algorithm is demonstrated by finding the optimal number of PMUs and their locations, and by comparing the performance with earlier works.

  20. Maximal heart rate in exercise tests on a treadmill and on a lower-limb cycle ergometer

    Directory of Open Access Journals (Sweden)

    Claudio Gil Soares de Araújo

    2005-07-01

    Full Text Available OBJECTIVE: To compare, retrospectively, the values of maximal heart rate (MHR) and the heart rate decrease in the first minute of recovery (dHR) obtained in exercise tests (ET) performed on two ergometers at different moments. METHODS: Sixty individuals (29 to 80 years old) underwent cardiopulmonary ET on a lower-limb cycle ergometer (CLL) in our laboratory and had a previous ET (up to 36 months earlier) on a treadmill (TRM) in other laboratories, under identical conditions regarding medications with negative chronotropic action. RESULTS: MHR was similar on the CLL (156±3 bpm) and the TRM (154±2 bpm) (p=0.125), whereas dHR was higher on the CLL (33±2 bpm) than on the TRM (26±3 bpm) (mean ± standard error of the mean) (p<0.001). Among the hemodynamic variables studied, systolic blood pressure and the double product were higher in the ET-CLL (p<0.001). The electrocardiogram (ECG) was similar in both ETs, except for more frequent supraventricular arrhythmias on the CLL. CONCLUSION: a) with some diligence from the examiner, and with knowledge of the MHR from a previous ET, it is possible to obtain high MHR levels in an ET-CLL; b) interrupting an ET based on MHR predicted from equations tends to lead to submaximal efforts; c) dHR differs in active and passive

  1. Expectation-Maximization Tensor Factorization for Practical Location Privacy Attacks

    Directory of Open Access Journals (Sweden)

    Murakami Takao

    2017-10-01

    Full Text Available Location privacy attacks based on a Markov chain model have been widely studied to de-anonymize or de-obfuscate mobility traces. An adversary can perform various kinds of location privacy attacks using a personalized transition matrix, which is trained for each target user. However, the amount of training data available to the adversary can be very small, since many users do not disclose much location information in their daily lives. In addition, many locations can be missing from the training traces, since many users do not disclose their locations continuously but rather sporadically. In this paper, we show that the Markov chain model can be a threat even in this realistic situation. Specifically, we focus on a training phase (i.e., the mobility profile building phase) and propose Expectation-Maximization Tensor Factorization (EMTF), which alternates between computing a distribution of missing locations (E-step) and computing personalized transition matrices via tensor factorization (M-step). Since the time complexity of EMTF is exponential in the number of missing locations, we propose two approximate learning methods, one of which uses the Viterbi algorithm while the other uses the Forward Filtering Backward Sampling (FFBS) algorithm. We apply our learning methods to a de-anonymization attack and a localization attack, and evaluate them using three real datasets. The results show that our learning methods significantly outperform a random guess, even when there is only one training trace composed of 10 locations per user, and each location is missing with probability 80% (i.e., even when users hardly ever disclose two temporally continuous locations).

  2. Trophic systems and chorology: data from shrews, moles and voles of Italy preyed upon by the barn owl, Tyto alba (Scopoli, 1769)

    Directory of Open Access Journals (Sweden)

    Longino Contoli

    1986-12-01

    Full Text Available Abstract In small-mammal biogeography, the available data are still far too scanty to elucidate the distribution of many taxa, especially with regard to absence from a given area. In this respect, standardized quantitative sampling techniques such as owl pellet analysis can not only enhance faunistic knowledge, but also estimate the probability that a given taxon "m" is actually absent when it is lacking from the diet of an individual raptor. For the latter purpose, the frequencies of "m" in the diets at ecologically similar sites of the same raptor species are averaged ($\bar{F}_m$); the relevant standard error $E$, multiplied by a coefficient $a$ chosen according to the desired degree of accuracy (with reference to the integral of probabilities), is subtracted ($\bar{F}_m - aE$); the probability that a single specimen does not belong to "m" is then obtained ($P_0 = 1 - \bar{F}_m + aE$); lastly, the desired accuracy probability ($P_d$) is chosen. The number "$N$" of prey individuals of all species needed at a single site to obtain, with the desired probability, at least one specimen of "m" is then $$N = \frac{\ln P_d}{\ln P_0}$$ Obviously, every site diet with more than "$N$" preyed individuals and without any "m" specimen is considered to lack that taxon. A "usefulness index" for the above purposes is outlined and checked for three raptors. Some examples of the usefulness of the owl pellet analysis method in biogeography are given, concerning Tyto alba diets in peninsular Italy: Sorex minutus, lacking in some quite insulated areas; Sorex araneus (sensu stricto, after GRAF et al., 1979), present also in lowland areas of Emilia-Romagna; Crocidura suaveolens and Suncus etruscus, present also in the southernmost part of Calabria (Reggio province); Talpa caeca, present also in the Antiapennines of Latium (Cimini mounts); Talpa romana
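
    A small numeric sketch of the sample-size rule above (function and variable names are mine, and $P_d$ is interpreted here as the tolerated probability of observing no specimen of "m"):

```python
import math

def min_prey_count(freqs, a=1.96, p_d=0.05):
    """Number of prey individuals needed at a site so that, if taxon m is
    present at its typical frequency, the probability of seeing no specimen
    at all drops to p_d (the formula N = ln P_d / ln P_0 from the abstract).

    freqs -- frequencies of m in the diets at comparable sites
    a     -- coefficient on the standard error (accuracy level)
    p_d   -- tolerated probability of observing no specimen of m
    """
    n = len(freqs)
    f_bar = sum(freqs) / n                                  # averaged frequency
    sd = (sum((f - f_bar) ** 2 for f in freqs) / (n - 1)) ** 0.5
    se = sd / n ** 0.5                                      # standard error E
    p0 = 1.0 - f_bar + a * se                               # P(one specimen is not m)
    return math.ceil(math.log(p_d) / math.log(p0))

print(min_prey_count([0.04, 0.05, 0.06]))                   # -> 76 for this toy data
```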

  3. A Multifactorial, Criteria-based Progressive Algorithm for Hamstring Injury Treatment.

    Science.gov (United States)

    Mendiguchia, Jurdan; Martinez-Ruiz, Enrique; Edouard, Pascal; Morin, Jean-Benoît; Martinez-Martinez, Francisco; Idoate, Fernando; Mendez-Villanueva, Alberto

    2017-07-01

    Given the prevalence of hamstring injuries in football, a rehabilitation program that effectively promotes muscle tissue repair and functional recovery is paramount to minimize reinjury risk and optimize player performance and availability. This study aimed to assess the effectiveness of administering an individualized, multifactorial, criteria-based algorithm (rehabilitation algorithm [RA]) for hamstring injury rehabilitation in comparison with a general rehabilitation protocol (RP). Using a double-blind randomized controlled trial design, 48 football players were assigned in two equal groups (n = 24 each) to complete either the RA or a validated RP, starting 5 d after an acute hamstring injury. Within 6 months after return to sport, six hamstring reinjuries occurred in RP versus one injury in RA (relative risk = 6, 90% confidence interval = 1-35; clinical inference: very likely beneficial effect). The average duration of return to sport was possibly quicker (effect size = 0.34 ± 0.42) in RP (23.2 ± 11.7 d) compared with RA (25.5 ± 7.8 d) (-13.8%, 90% confidence interval = -34.0% to 3.4%; clinical inference: possibly small effect). At the time of return to sport, RA players showed substantially better 10-m time and maximal sprinting speed, and greater mechanical variables related to speed (i.e., maximum theoretical speed and maximal horizontal power), than the RP players. Although return to sport was slower, male football players who underwent an individualized, multifactorial, criteria-based algorithm, with a performance- and primary-risk-factor-oriented training program from the early stages of the process, markedly decreased the risk of reinjury compared with a general protocol in which long-length strength training exercises were prioritized.

  4. Sex differences in autonomic function following maximal exercise.

    Science.gov (United States)

    Kappus, Rebecca M; Ranadive, Sushant M; Yan, Huimin; Lane-Cordova, Abbi D; Cook, Marc D; Sun, Peng; Harvey, I Shevon; Wilund, Kenneth R; Woods, Jeffrey A; Fernhall, Bo

    2015-01-01

    Heart rate variability (HRV), blood pressure variability (BPV), and heart rate recovery (HRR) are measures that provide insight regarding autonomic function. Maximal exercise can affect autonomic function, and it is unknown whether there are sex differences in autonomic recovery following exercise. Therefore, the purpose of this study was to determine sex differences in several measures of autonomic function and in the response following maximal exercise. Seventy-one (31 male and 40 female) healthy, nonsmoking, sedentary, normotensive subjects between the ages of 18 and 35 underwent measurements of HRV and BPV at rest and following a maximal exercise bout. HRR was measured at minutes one and two following maximal exercise. Males had significantly greater HRR following maximal exercise at both minutes one and two; however, the significance between sexes was eliminated when controlling for VO2 peak. Males had significantly higher resting BPV low-frequency (LF) values compared to females, which did not significantly change following exercise, whereas females had significantly increased BPV-LF values following acute maximal exercise. Although males and females exhibited a significant decrease in both HRV-LF and HRV high-frequency (HF) power with exercise, females had significantly higher HRV-HF values following exercise. Males had a significantly higher HRV-LF/HF ratio at rest; however, both males and females significantly increased their HRV-LF/HF ratio following exercise. Premenopausal females exhibit a cardioprotective autonomic profile compared to age-matched males due to lower resting sympathetic activity and faster vagal reactivation following maximal exercise. Acute maximal exercise is a sufficient autonomic stressor to demonstrate sex differences in the critical post-exercise recovery period.

  5. ANTQ evolutionary algorithm applied to nuclear fuel reload problem

    International Nuclear Information System (INIS)

    Machado, Liana; Schirru, Roberto

    2000-01-01

    Nuclear fuel reload optimization is an NP-complete combinatorial optimization problem in which the aim is to find the fuel-rod configuration that maximizes burnup or minimizes the power peak factor. For decades this problem was solved exclusively using an expert's knowledge. From the eighties onward, however, there have been efforts to automate fuel reload. The first relevant effort used simulated annealing, but more recent publications show the efficiency of genetic algorithms (GA) on this problem. Following this direction, our aim is to optimize nuclear fuel reload using Ant-Q, a reinforcement learning algorithm based on the cellular computing paradigm. Ant-Q's results on the Traveling Salesman Problem, which is conceptually similar to fuel reload, are better than the GA's. Ant-Q was tested on fuel reload by simulating the first-cycle in-out reload of Biblis, a 193-fuel-element PWR. Comparing Ant-Q's results with the GA's, it can be seen that even without a local heuristic, this evolutionary algorithm can be used to solve the nuclear fuel reload problem. (author)
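
    For reference, a sketch of the core Ant-Q value update and action choice in the style of Gambardella and Dorigo, which such a reload optimizer builds on (the data structures are my own, and the fuel-reload state encoding is not shown):

```python
import random
from collections import defaultdict

# AQ[r][s] is the Ant-Q value of moving from state r to state s (defaults to 0).
AQ = defaultdict(lambda: defaultdict(float))

def ant_q_update(r, s, alpha=0.1, gamma=0.3, delta_aq=0.0):
    """Ant-Q value update:
    AQ(r,s) <- (1-alpha)*AQ(r,s) + alpha*(delta_aq + gamma*max_z AQ(s,z))."""
    best_next = max(AQ[s].values(), default=0.0)
    AQ[r][s] = (1.0 - alpha) * AQ[r][s] + alpha * (delta_aq + gamma * best_next)

def choose_next(r, allowed, q0=0.9):
    """Pseudo-random-proportional choice without a local heuristic, matching
    the 'no local heuristics' setting noted in the abstract."""
    if random.random() < q0:                            # exploit best-known move
        return max(allowed, key=lambda s: AQ[r][s])
    total = sum(AQ[r][s] for s in allowed)
    if total <= 0.0:                                    # untried moves: uniform
        return random.choice(allowed)
    return random.choices(allowed, weights=[AQ[r][s] for s in allowed], k=1)[0]
```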

  6. Expert System and Heuristics Algorithm for Cloud Resource Scheduling

    Directory of Open Access Journals (Sweden)

    Mamatha E

    2017-03-01

    Full Text Available Rule-based scheduling algorithms have been widely used in cloud computing systems, and there is still plenty of room to improve their performance. This paper proposes an expert system that allocates cloud resources using a rule-based algorithm and measures system performance by letting the system adopt new rules based on feedback. The performance of each action helps make better resource allocations, improving quality of service, scalability, and flexibility. The performance measure is based on how resource allocation is dynamically optimized and how properly the resources are utilized; the aim is to maximize resource utilization. Data and resources are given to the algorithm, which allocates the data to resources, and an output is obtained based on the action taken. Once an action is completed, its performance is measured, including how the resources were allocated and how efficiently they worked. In addition to performance, resource allocation in the cloud environment is also considered.

  7. Eccentric exercise decreases maximal insulin action in humans

    DEFF Research Database (Denmark)

    Asp, Svend; Daugaard, J R; Kristiansen, S

    1996-01-01

    … subjects participated in two euglycaemic clamps, performed in random order. One clamp was preceded 2 days earlier by one-legged eccentric exercise (post-eccentric exercise clamp (PEC)) and one was without the prior exercise (control clamp (CC)). 2. During PEC the maximal insulin-stimulated glucose uptake … for all three clamp steps used (P … maximal activity of glycogen synthase was identical in the two thighs for all clamp steps. 3. The glucose infusion rate (GIR) … necessary to maintain euglycaemia during maximal insulin stimulation was lower during PEC compared with CC (15.7%; 81.3 ± 3.2 vs. 96.4 ± 8.8 μmol kg⁻¹ min⁻¹, P … maximal …

  8. Optimization of thermal performance of a smooth flat-plate solar air heater using teaching–learning-based optimization algorithm

    Directory of Open Access Journals (Sweden)

    R. Venkata Rao

    2015-12-01

    Full Text Available This paper presents the performance of the teaching–learning-based optimization (TLBO) algorithm in obtaining the optimum set of design and operating parameters for a smooth flat-plate solar air heater (SFPSAH). The TLBO algorithm is a recently proposed population-based algorithm that simulates the teaching–learning process of the classroom. Maximization of thermal efficiency is considered as the objective function for the thermal performance of the SFPSAH. The number of glass plates, irradiance, and the Reynolds number are considered as the design parameters, and wind velocity, tilt angle, ambient temperature, and emissivity of the plate are considered as the operating parameters for obtaining the thermal performance of the SFPSAH using the TLBO algorithm. The computational results show that the TLBO algorithm is better than, or competitive with, other optimization algorithms recently reported in the literature for the considered problem.
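
    A generic one-generation TLBO sketch for a minimization objective (to maximize thermal efficiency as in the paper one would negate the objective; array shapes and the teaching factor follow the standard algorithm, not the paper's code):

```python
import numpy as np

def tlbo_step(pop, fitness, rng):
    """One TLBO generation (teacher + learner phases) for minimization.
    pop: (n, d) array of learners; fitness: callable on a d-vector."""
    n, d = pop.shape
    scores = np.array([fitness(x) for x in pop])
    teacher = pop[scores.argmin()]
    mean = pop.mean(axis=0)
    # Teacher phase: move every learner toward the teacher, away from the mean
    tf = rng.integers(1, 3)                         # teaching factor, 1 or 2
    cand = pop + rng.random((n, d)) * (teacher - tf * mean)
    improve = np.array([fitness(x) for x in cand]) < scores
    pop[improve] = cand[improve]
    # Learner phase: pairwise interaction between random learners
    for i in range(n):
        j = rng.choice([k for k in range(n) if k != i])
        step = pop[i] - pop[j] if fitness(pop[i]) < fitness(pop[j]) else pop[j] - pop[i]
        cand_i = pop[i] + rng.random(d) * step
        if fitness(cand_i) < fitness(pop[i]):
            pop[i] = cand_i
    return pop
```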

  9. Modeling the ultrasonic testing echoes by a combination of particle swarm optimization and Levenberg–Marquardt algorithms

    International Nuclear Information System (INIS)

    Gholami, Ali; Honarvar, Farhang; Moghaddam, Hamid Abrishami

    2017-01-01

    This paper presents an accurate and easy-to-implement algorithm for estimating the parameters of the asymmetric Gaussian chirplet model (AGCM) used for modeling echoes measured in ultrasonic nondestructive testing (NDT) of materials. The proposed algorithm is a combination of the particle swarm optimization (PSO) and Levenberg–Marquardt (LM) algorithms. PSO does not need an accurate initial guess and quickly converges to a reasonable output, while LM needs a good initial guess in order to provide an accurate output. In the combined algorithm, PSO is run first to provide a rough estimate, and this result is then fed to the LM algorithm for more accurate parameter estimation. To apply the algorithm to signals with multiple echoes, the space-alternating generalized expectation-maximization (SAGE) algorithm is used. The proposed combined algorithm is robust and accurate. To examine its performance, it is applied to a number of simulated echoes having various signal-to-noise ratios. The combined algorithm is also applied to a number of experimental ultrasonic signals. The results corroborate the accuracy and reliability of the proposed combined algorithm. (paper)
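
    A sketch of the PSO-then-refine idea on a simplified Gaussian echo model (not the full AGCM; SciPy's unbounded Levenberg-Marquardt solver plays the LM role, and all constants are illustrative):

```python
import numpy as np
from scipy.optimize import least_squares

def gaussian_echo(t, tau, fc, alpha, beta):
    """Simplified Gaussian echo: amplitude-modulated tone burst."""
    return beta * np.exp(-alpha * (t - tau) ** 2) * np.cos(2 * np.pi * fc * (t - tau))

def fit_echo(t, y, bounds, n_particles=30, iters=100, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds[0], float), np.array(bounds[1], float)
    x = lo + rng.random((n_particles, lo.size)) * (hi - lo)     # positions
    v = np.zeros_like(x)
    pbest = x.copy()
    pcost = np.array([np.sum((y - gaussian_echo(t, *p)) ** 2) for p in x])
    gbest = pbest[pcost.argmin()]
    for _ in range(iters):                                      # plain global-best PSO
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        cost = np.array([np.sum((y - gaussian_echo(t, *p)) ** 2) for p in x])
        better = cost < pcost
        pbest[better], pcost[better] = x[better], cost[better]
        gbest = pbest[pcost.argmin()]
    # Refine the rough PSO estimate with a Levenberg-Marquardt local solver
    res = least_squares(lambda p: y - gaussian_echo(t, *p), gbest, method="lm")
    return res.x
```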

  10. Estimating model error covariances in nonlinear state-space models using Kalman smoothing and the expectation-maximisation algorithm

    KAUST Repository

    Dreano, Denis; Tandeo, P.; Pulido, M.; Ait-El-Fquih, Boujemaa; Chonavel, T.; Hoteit, Ibrahim

    2017-01-01

    Specification and tuning of errors from dynamical models are important issues in data assimilation. In this work, we propose an iterative expectation-maximisation (EM) algorithm to estimate the model error covariances using classical extended Kalman smoothing.

  11. Maximize x(a - x)

    Science.gov (United States)

    Lange, L. H.

    1974-01-01

    Five different methods for determining the maximizing condition for x(a - x) are presented. Included are the ancient Greek version and a method attributed to Fermat. None of the proofs use calculus. (LS)
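
    For context, one calculus-free argument of the sort the note surveys, via the AM-GM inequality (my reconstruction; not necessarily one of the five methods presented):

```latex
% For 0 < x < a both factors are positive, so AM-GM applies:
%   sqrt(x(a-x)) <= (x + (a-x))/2 = a/2.
\[
  x(a-x) \;\le\; \left(\frac{x + (a-x)}{2}\right)^{2} = \frac{a^{2}}{4},
  \qquad \text{with equality iff } x = a - x, \text{ i.e. } x = \frac{a}{2}.
\]
```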

  12. The algorithmic level is the bridge between computation and brain.

    Science.gov (United States)

    Love, Bradley C

    2015-04-01

    Every scientist chooses a preferred level of analysis and this choice shapes the research program, even determining what counts as evidence. This contribution revisits Marr's (1982) three levels of analysis (implementation, algorithmic, and computational) and evaluates the prospect of making progress at each individual level. After reviewing limitations of theorizing within a level, two strategies for integration across levels are considered. One is top-down in that it attempts to build a bridge from the computational to algorithmic level. Limitations of this approach include insufficient theoretical constraint at the computation level to provide a foundation for integration, and that people are suboptimal for reasons other than capacity limitations. Instead, an inside-out approach is forwarded in which all three levels of analysis are integrated via the algorithmic level. This approach maximally leverages mutual data constraints at all levels. For example, algorithmic models can be used to interpret brain imaging data, and brain imaging data can be used to select among competing models. Examples of this approach to integration are provided. This merging of levels raises questions about the relevance of Marr's tripartite view. Copyright © 2015 Cognitive Science Society, Inc.

  13. Optimization in the utility maximization framework for conservation planning: a comparison of solution procedures in a study of multifunctional agriculture

    Science.gov (United States)

    Kreitler, Jason R.; Stoms, David M.; Davis, Frank W.

    2014-01-01

    Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management.
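
    For illustration, the greedy-heuristic side of the comparison in miniature (parcel data and field names are invented; the integer-programming approach would solve the same selection exactly):

```python
def greedy_select(parcels, budget):
    """Greedy heuristic for a budgeted utility-maximization cover:
    repeatedly take the parcel with the best utility-per-cost ratio.

    parcels: list of (name, utility, cost) tuples -- illustrative fields.
    """
    chosen, total_u, spent = [], 0.0, 0.0
    for name, u, c in sorted(parcels, key=lambda p: p[1] / p[2], reverse=True):
        if spent + c <= budget:
            chosen.append(name)
            total_u += u
            spent += c
    return chosen, total_u

# Toy data: an exact integer program may beat the greedy pick by a few percent,
# consistent with the up-to-12% optimization gains reported above.
parcels = [("A", 9.0, 3.0), ("B", 5.0, 1.0), ("C", 4.0, 1.0), ("D", 8.0, 4.0)]
print(greedy_select(parcels, budget=5.0))
```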

  14. Improved Approximation Algorithms for Item Pricing with Bounded Degree and Valuation

    Science.gov (United States)

    Hamane, Ryoso; Itoh, Toshiya

    When a store sells items to customers, the store wishes to set the prices of the items to maximize its profit. If the store sells the items at low (resp. high) prices, the customers buy more (resp. fewer) items, which provides less profit to the store, so it is hard for the store to decide the prices of the items. Assume that a store has a set V of n items and there is a set C of m customers who wish to buy those items. The goal of the store is to decide the price of each item so as to maximize its profit. We refer to this maximization problem as an item pricing problem. We classify item pricing problems according to how many items the store can sell and how the customers value the items. If the store can sell every item i in unlimited (resp. limited) amounts, we refer to this as unlimited supply (resp. limited supply). We say that the item pricing problem is single-minded if each customer j∈C wishes to buy a set ej⊆V of items and assigns it a valuation w(ej)≥0. For the single-minded item pricing problems (in unlimited supply), Balcan and Blum regarded them as weighted k-hypergraphs and gave several approximation algorithms. In this paper, we focus on the (pseudo) degree of k-hypergraphs and the valuation ratio, i.e., the ratio between the smallest and the largest valuations. Then, for the single-minded item pricing problems (in unlimited supply), we show improved approximation algorithms (for k-hypergraphs, general graphs, bipartite graphs, etc.) with respect to the maximum (pseudo) degree and the valuation ratio.

  15. Planning the FUSE Mission Using the SOVA Algorithm

    Science.gov (United States)

    Lanzi, James; Heatwole, Scott; Ward, Philip R.; Civeit, Thomas; Calvani, Humberto; Kruk, Jeffrey W.; Suchkov, Anatoly

    2011-01-01

    Three documents discuss the Sustainable Objective Valuation and Attainability (SOVA) algorithm and software as used to plan tasks (principally, scientific observations and associated maneuvers) for the Far Ultraviolet Spectroscopic Explorer (FUSE) satellite. SOVA is a means of managing risk in a complex system, based on a concept of computing the expected return value of a candidate ordered set of tasks as a product of pre-assigned task values and assessments of attainability made against qualitatively defined strategic objectives. For the FUSE mission, SOVA autonomously assembles a week-long schedule of target observations and associated maneuvers so as to maximize the expected scientific return value while keeping the satellite stable, managing the angular momentum of spacecraft attitude- control reaction wheels, and striving for other strategic objectives. A six-degree-of-freedom model of the spacecraft is used in simulating the tasks, and the attainability of a task is calculated at each step by use of strategic objectives as defined by use of fuzzy inference systems. SOVA utilizes a variant of a graph-search algorithm known as the A* search algorithm to assemble the tasks into a week-long target schedule, using the expected scientific return value to guide the search.

  16. Observations on muscle activity in REM sleep behavior disorder assessed with a semi-automated scoring algorithm

    DEFF Research Database (Denmark)

    Jeppesen, Jesper; Otto, Marit; Frederiksen, Yoon

    2018-01-01

    OBJECTIVES: Rapid eye movement (REM) sleep behavior disorder (RBD) is defined by dream enactment due to a failure of normal muscle atonia. Visual assessment of this muscle activity is time consuming and rater-dependent. METHODS: An EMG computer algorithm for scoring 'tonic', 'phasic' and 'any' submental muscle activity during REM sleep was evaluated and compared with human visual ratings. Subsequently, 52 subjects were analyzed with the algorithm. Duration and maximal amplitude of muscle activity, and self-awareness of RBD symptoms, were assessed. RESULTS: The computer algorithm showed high congruency … sleep without atonia. CONCLUSIONS: Our proposed algorithm was able to detect and rate REM sleep without atonia, allowing identification of RBD. Increased duration and amplitude of muscle activity bouts were characteristics of RBD. Quantification of REM sleep without atonia represents a marker of RBD …

  17. Statistical complexity is maximized in a small-world brain.

    Directory of Open Access Journals (Sweden)

    Teck Liang Tan

    Full Text Available In this paper, we study a network of Izhikevich neurons to explore what it means for a brain to be at the edge of chaos. To do so, we first constructed the phase diagram of a single Izhikevich excitatory neuron, and identified a small region of the parameter space where we find a large number of phase boundaries to serve as our edge of chaos. We then couple the outputs of these neurons directly to the parameters of other neurons, so that the neuron dynamics can drive transitions from one phase to another on an artificial energy landscape. Finally, we measure the statistical complexity of the parameter time series, while the network is tuned from a regular network to a random network using the Watts-Strogatz rewiring algorithm. We find that the statistical complexity of the parameter dynamics is maximized when the neuron network is most small-world-like. Our results suggest that the small-world architecture of neuron connections in brains is not accidental, but may be related to the information processing that they do.
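
    A minimal sketch of the regular-to-random tuning step using networkx (n, k, and the p grid are arbitrary; the paper measures the statistical complexity of neuron-parameter dynamics rather than these two graph statistics):

```python
import networkx as nx

# Sweep the Watts-Strogatz rewiring probability from regular (p=0) to random (p=1);
# in the small-world regime clustering stays high while path length drops.
for p in [0.0, 0.01, 0.05, 0.1, 0.5, 1.0]:
    g = nx.connected_watts_strogatz_graph(n=200, k=6, p=p, seed=42)
    print(f"p={p:<4} C={nx.average_clustering(g):.3f} "
          f"L={nx.average_shortest_path_length(g):.2f}")
```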

  18. Managing the innovation supply chain to maximize personalized medicine.

    Science.gov (United States)

    Waldman, S A; Terzic, A

    2014-02-01

    Personalized medicine epitomizes an evolving model of care tailored to the individual patient. This emerging paradigm harnesses radical technological advances to define each patient's molecular characteristics and decipher his or her unique pathophysiological processes. Translated into individualized algorithms, personalized medicine aims to predict, prevent, and cure disease without producing therapeutic adverse events. Although the transformative power of personalized medicine is generally recognized by physicians, patients, and payers, the complexity of translating discoveries into new modalities that transform health care is less appreciated. We often consider the flow of innovation and technology along a continuum of discovery, development, regulation, and application bridging the bench with the bedside. However, this process also can be viewed through a complementary prism, as a necessary supply chain of services and providers, each making essential contributions to the development of the final product to maximize value to consumers. Considering personalized medicine in this context of supply chain management highlights essential points of vulnerability and/or scalability that can ultimately constrain translation of the biological revolution or potentiate it into individualized diagnostics and therapeutics for optimized value creation and delivery.

  19. Contributions of leaf photosynthetic capacity, leaf angle and self-shading to the maximization of net photosynthesis in Acer saccharum: a modelling assessment.

    Science.gov (United States)

    Posada, Juan M; Sievänen, Risto; Messier, Christian; Perttunen, Jari; Nikinmaa, Eero; Lechowicz, Martin J

    2012-08-01

    Plants are expected to maximize their net photosynthetic gains and efficiently use available resources, but the fundamental principles governing trade-offs in suites of traits related to resource-use optimization remain uncertain. This study investigated whether Acer saccharum (sugar maple) saplings could maximize their net photosynthetic gains through a combination of crown structure and foliar characteristics that let all leaves maximize their photosynthetic light-use efficiency (ε). A functional-structural model, LIGNUM, was used to simulate individuals of different leaf area index (LAI_ind) together with a genetic algorithm to find distributions of leaf angle (L_A) and leaf photosynthetic capacity (A_max) that maximized net carbon gain at the whole-plant level. Saplings grown either in the open or in a forest gap were simulated with A_max either unconstrained or constrained to an upper value consistent with reported values for A_max in A. saccharum. It was found that total net photosynthetic gain was highest when whole-plant PPFD absorption and leaf ε were simultaneously maximized. Maximization of ε required simultaneous adjustments in L_A and A_max along gradients of PPFD in the plants. When A_max was constrained to a maximum, plants growing in the open maximized their PPFD absorption but not ε, because the PPFD incident on leaves was higher than the PPFD at which ε_max was attainable. Average leaf ε in constrained plants nonetheless improved with increasing LAI_ind because of an increase in self-shading. It is concluded that there are selective pressures for plants to simultaneously maximize both PPFD absorption at the scale of the whole individual and ε at the scale of leaves, which requires a highly integrated response between L_A, A_max and LAI_ind. The results also suggest that to maximize ε plants have evolved mechanisms that co-ordinate the L_A and A_max of individual leaves with PPFD availability.

  20. A Novel Approach for Bi-Level Segmentation of Tuberculosis Bacilli Based on Meta-Heuristic Algorithms

    Directory of Open Access Journals (Sweden)

    AYAS, S.

    2018-02-01

    Full Text Available Image thresholding is the most crucial step in microscopic image analysis for distinguishing the bacilli objects that cause tuberculosis. Therefore, several bi-level thresholding algorithms are widely used to increase bacilli segmentation accuracy. However, the bi-level microscopic image thresholding problem had not previously been solved using optimization algorithms. This paper introduces a novel approach to the segmentation problem using heuristic algorithms and presents visual and quantitative comparisons of heuristic and state-of-the-art thresholding algorithms. In this study, well-known heuristic algorithms such as the Firefly Algorithm, Particle Swarm Optimization, Cuckoo Search, and Flower Pollination are used to solve the bi-level microscopic image thresholding problem, and the results are compared with state-of-the-art thresholding algorithms such as K-Means, Fuzzy C-Means, and Fast Marching. Kapur's entropy is chosen as the entropy measure to be maximized. Experiments are performed to make comparisons in terms of evaluation metrics and execution time. The quantitative results are calculated based on ground-truth segmentation. According to the visual results, the heuristic algorithms have better performance, and the quantitative results are in accord with the visual results. Furthermore, experimental time comparisons show the superiority and effectiveness of the heuristic algorithms over traditional thresholding algorithms.
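
    For context, the bi-level Kapur objective that the heuristics maximize can be scanned exhaustively on a 256-bin histogram (a baseline sketch; the paper's point is to replace this scan with Firefly, PSO, Cuckoo Search or Flower Pollination):

```python
import numpy as np

def kapur_threshold(hist):
    """Exhaustive bi-level Kapur's entropy thresholding on a 256-bin histogram:
    pick the threshold that maximizes the sum of the entropies of the two
    resulting classes."""
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 256):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 == 0 or w1 == 0:
            continue
        p0, p1 = p[:t] / w0, p[t:] / w1
        h = (-np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
             - np.sum(p1[p1 > 0] * np.log(p1[p1 > 0])))
        if h > best_h:
            best_t, best_h = t, h
    return best_t
```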

  1. Leads Detection Using Mixture Statistical Distribution Based CRF Algorithm from Sentinel-1 Dual Polarization SAR Imagery

    Science.gov (United States)

    Zhang, Yu; Li, Fei; Zhang, Shengkai; Zhu, Tingting

    2017-04-01

    Synthetic Aperture Radar (SAR) is significantly important for polar remote sensing since it can provide continuous observations in all weather, day and night. SAR can be used to extract surface roughness information, characterized by the variance of dielectric properties and by different polarization channels, which makes it possible to observe different ice types and surface structure for deformation analysis. In November 2016, the 33rd Chinese National Antarctic Research Expedition (CHINARE) cruise set sail into the Antarctic sea ice zone. An accurate spatial distribution of leads in the sea ice zone is essential for routine planning of ship navigation. In this study, the semantic relationship between leads and sea ice categories is described by a Conditional Random Field (CRF) model, and lead characteristics are modeled by statistical distributions of SAR imagery. In the proposed algorithm, a mixture-statistical-distribution-based CRF is developed by considering the contextual information and the statistical characteristics of sea ice, to improve leads detection in Sentinel-1A dual-polarization SAR imagery. The unary and pairwise potentials in the CRF model are constructed by integrating the posterior probabilities estimated from the statistical distributions. For parameter estimation of the mixture statistical distribution, the Method of Logarithmic Cumulants (MoLC) is exploited to estimate the parameters of each single distribution, and an iterative Expectation-Maximization (EM) algorithm is investigated to calculate the parameters of the mixture-distribution-based CRF model. For posterior probability inference, a graph-cut energy minimization method is adopted for the initial leads detection. Post-processing procedures, including an aspect-ratio constraint and spatial smoothing, are utilized to improve the visual result. The proposed method is validated on Sentinel-1A SAR C-band Extra Wide Swath (EW) Ground Range Detected (GRD) imagery with a

  2. Comparative study between ultrahigh spatial frequency algorithm and high spatial frequency algorithm in high-resolution CT of the lungs

    International Nuclear Information System (INIS)

    Oh, Yu Whan; Kim, Jung Kyuk; Suh, Won Hyuck

    1994-01-01

    To date, the high spatial frequency algorithm (HSFA), which reduces image smoothing and increases spatial resolution, has been used for the evaluation of parenchymal lung diseases in thin-section high-resolution CT. In this study, we compared the ultrahigh spatial frequency algorithm (UHSFA) with the high spatial frequency algorithm in the assessment of thin-section images of the lung parenchyma. Three radiologists compared UHSFA and HSFA on identical CT images of a line-pair resolution phantom, one lung specimen, 2 patients with normal lungs and 18 patients with abnormal lung parenchyma. Scanning of the line-pair resolution phantom demonstrated no difference in resolution between the two techniques, but the outer lines of the line pairs at maximal resolution looked thicker with UHSFA than with HSFA. Lung parenchymal detail with UHSFA was judged equal or superior to that with HSFA in 95% of images. Lung parenchymal sharpness was improved with UHSFA in all images. Although UHSFA resulted in an increase in visible noise, observers did not find that image noise interfered with image interpretation. The visual CT attenuation of normal lung parenchyma was minimally increased in images with HSFA. The overall visual preference for images reconstructed with UHSFA was judged equal to or greater than that for images reconstructed with HSFA in 78% of cases. The ultrahigh spatial frequency algorithm improved the overall visual quality of images in pulmonary parenchymal high-resolution CT.

  3. Proximate Composition, Nutritional Attributes and Mineral Composition of Peperomia pellucida L. (Ketumpangan Air) Grown in Malaysia

    Directory of Open Access Journals (Sweden)

    Maznah Ismail

    2012-09-01

    Full Text Available This study presents the proximate and mineral composition of Peperomia pellucida L., an underexploited weed plant in Malaysia. Proximate analysis was performed using standard AOAC methods and mineral contents were determined using atomic absorption spectrometry. The results indicated Peperomia pellucida to be rich in crude protein, carbohydrate and total ash contents. The high amount of total ash (31.22%) suggests a high-value mineral composition comprising potassium, calcium and iron as the main elements. The present study inferred that Peperomia pellucida would serve as a good source of protein and energy as well as micronutrients in the form of a leafy vegetable for human consumption.

  4. Utility Maximization in Nonconvex Wireless Systems

    CERN Document Server

    Brehmer, Johannes

    2012-01-01

    This monograph formulates a framework for modeling and solving utility maximization problems in nonconvex wireless systems. First, a model for utility optimization in wireless systems is defined. The model is general enough to encompass a wide array of system configurations and performance objectives. Based on the general model, a set of methods for solving utility maximization problems is developed. The development is based on a careful examination of the properties that are required for the application of each method. The focus is on problems whose initial formulation does not allow for a solution by standard convex methods. Solution approaches that take into account the nonconvexities inherent to wireless systems are discussed in detail. The monograph concludes with two case studies that demonstrate the application of the proposed framework to utility maximization in multi-antenna broadcast channels.

  5. Empirical and Statistical Evaluation of the Effectiveness of Four Lossless Data Compression Algorithms

    Directory of Open Access Journals (Sweden)

    N. A. Azeez

    2017-04-01

    Full Text Available Data compression is the process of reducing the size of a file to effectively reduce storage space and communication cost. The evolution of technology and the digital age have led to an unparalleled usage of digital files in the current decade. This increased usage of data has resulted in an increase in the amount of data being transmitted via various channels of data communication, which has prompted the need to examine current lossless data compression algorithms and check their level of effectiveness, so as to maximally reduce the bandwidth required to communicate and transfer data. Four lossless data compression algorithms were selected for implementation: the Lempel-Ziv-Welch algorithm, the Shannon-Fano algorithm, the Adaptive Huffman algorithm and Run-Length Encoding. The choice of these algorithms was based on their similarities, particularly in application areas. Their level of efficiency and effectiveness was evaluated using a set of predefined performance evaluation metrics, namely compression ratio, compression factor, compression time, saving percentage, entropy and code efficiency. The algorithms were implemented in the NetBeans Integrated Development Environment using Java as the programming language. Through the statistical analysis performed using Boxplot and ANOVA and comparison made on the four algorithms
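
    As an illustration of the evaluation metrics named above, computed here with zlib's DEFLATE as a stand-in compressor (the four algorithms in the study would each be implemented and timed separately):

```python
import zlib

def compression_metrics(data: bytes):
    """Compression ratio, compression factor and saving percentage for one
    compressor run (DEFLATE via zlib as a stand-in)."""
    comp = zlib.compress(data, level=9)
    ratio = len(comp) / len(data)            # compression ratio
    factor = len(data) / len(comp)           # compression factor (its inverse)
    saving = 1.0 - ratio                     # saving percentage
    return {"ratio": ratio, "factor": factor, "saving_%": 100 * saving}

print(compression_metrics(b"abracadabra " * 1000))
```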

  6. On the Structure of a Best Possible Crossover Selection Strategy in Genetic Algorithms

    Science.gov (United States)

    Lässig, Jörg; Hoffmann, Karl Heinz

    The paper considers the problem of selecting individuals in the current population in genetic algorithms for crossover to find a solution with high fitness for a given optimization problem. Many different schemes have been described in the literature as possible strategies for this task but so far comparisons have been predominantly empirical. It is shown that if one wishes to maximize any linear function of the final state probabilities, e.g. the fitness of the best individual in the final population of the algorithm, then a best probability distribution for selecting an individual in each generation is a rectangular distribution over the individuals sorted in descending sequence by their fitness values. This means uniform probabilities have to be assigned to a group of the best individuals of the population but probabilities equal to zero to individuals with lower fitness, assuming that the probability distribution to choose individuals from the current population can be chosen independently for each iteration and each individual. This result is then generalized also to typical practically applied performance measures, such as maximizing the expected fitness value of the best individual seen in any generation.
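
    A minimal sketch of the rectangular selection distribution described above (the cutoff k, the width of the rectangle, is left as a free parameter):

```python
import random

def rectangular_select(population, fitness, k):
    """Select a parent uniformly from the k fittest individuals and never
    from the rest -- the 'rectangular' distribution over the population
    sorted in descending order of fitness."""
    ranked = sorted(population, key=fitness, reverse=True)
    return random.choice(ranked[:k])
```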

  7. Evaluation of Antioxidant Activities of Aqueous Extracts and Fractionation of Different Parts of Elsholtzia ciliata

    Directory of Open Access Journals (Sweden)

    Yuangang Zu

    2012-05-01

    Full Text Available The aim of this study was to investigate the antioxidant and free-radical scavenging activity of extracts and fractions from various parts of Elsholtzia ciliata. The inflorescences, leaves, stems and roots of E. ciliata were extracted separately, and two phenolic component enrichment methods, ethyl acetate-water liquid-liquid extraction and macroporous resin adsorption-desorption, were adopted in this study. The antioxidant activities of the water extracts and fractions of E. ciliata were examined using different assay model systems in vitro. The fraction root E (purified by HPD300 macroporous resin) exhibited the highest total phenolics content (497.2 ± 24.9 mg GAE/g), accompanied by the highest antioxidant activity against various antioxidant systems in vitro compared to the other fractions. On the basis of the results obtained, E. ciliata extracts can potentially be used as a readily accessible and valuable bioactive source of natural antioxidants.

  8. Optimum oil production planning using infeasibility driven evolutionary algorithm.

    Science.gov (United States)

    Singh, Hemant Kumar; Ray, Tapabrata; Sarker, Ruhul

    2013-01-01

    In this paper, we discuss a practical oil production planning optimization problem. For oil wells with insufficient reservoir pressure, gas is usually injected to artificially lift oil, a practice commonly referred to as enhanced oil recovery (EOR). The total gas that can be used for oil extraction is constrained by daily availability limits. The oil extracted from each well is known to be a nonlinear function of the gas injected into the well and varies between wells. The problem is to identify the optimal amount of gas that needs to be injected into each well to maximize the amount of oil extracted, subject to the constraint on total daily gas availability. The problem has long been of practical interest to all major oil exploration companies as it has the potential to deliver large financial benefits. In this paper, an infeasibility-driven evolutionary algorithm is used to solve a 56-well reservoir problem, which demonstrates its efficiency in solving constrained optimization problems. Furthermore, a multi-objective formulation of the problem is posed and solved using a number of algorithms, which eliminates the need for solving the (single-objective) problem on a regular basis. Lastly, a modified single-objective formulation of the problem is also proposed, which aims to maximize profit instead of the quantity of oil. It is shown that even with a lesser amount of oil extracted, more economic benefit can be achieved through the modified formulation.
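
    For illustration, the single-objective core of the problem on invented concave well-response curves, solved here with SciPy's SLSQP rather than the paper's infeasibility-driven evolutionary algorithm:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical concave well-response curves q_i(g) = a_i * (1 - exp(-b_i * g));
# the per-well curves in the paper are measured, not analytic.
a = np.array([60.0, 45.0, 80.0])        # asymptotic oil rates (invented)
b = np.array([0.20, 0.35, 0.10])        # response steepness (invented)
G_TOTAL = 25.0                          # daily gas availability limit

def neg_oil(g):
    return -np.sum(a * (1.0 - np.exp(-b * g)))

res = minimize(
    neg_oil,
    x0=np.full(3, G_TOTAL / 3),
    bounds=[(0.0, G_TOTAL)] * 3,
    constraints=[{"type": "ineq", "fun": lambda g: G_TOTAL - g.sum()}],
)
print(res.x, -res.fun)                  # per-well gas injection and total oil
```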

  9. Classification of Knee Joint Vibration Signals Using Bivariate Feature Distribution Estimation and Maximal Posterior Probability Decision Criterion

    Directory of Open Access Journals (Sweden)

    Fang Zheng

    2013-04-01

    Full Text Available Analysis of knee joint vibration or vibroarthrographic (VAG) signals using signal processing and machine learning algorithms possesses high potential for the noninvasive detection of articular cartilage degeneration, which may reduce unnecessary exploratory surgery. Feature representation of knee joint VAG signals helps characterize the pathological condition of degenerative articular cartilage in the knee. This paper used a kernel-based probability density estimation method to model the distributions of VAG signals recorded from healthy subjects and patients with knee joint disorders. The estimated densities of the VAG signals showed explicit distributions of the normal and abnormal signal groups, along with the corresponding contours in the bivariate feature space. The signal classifications were performed using Fisher's linear discriminant analysis, a support vector machine with polynomial kernels, and the maximal posterior probability decision criterion. The maximal posterior probability decision criterion provided a total classification accuracy of 86.67% and an area (Az) of 0.9096 under the receiver operating characteristic curve, superior to the results obtained by either Fisher's linear discriminant analysis (accuracy: 81.33%, Az: 0.8564) or the support vector machine with polynomial kernels (accuracy: 81.33%, Az: 0.8533). These results demonstrate the merits of bivariate feature distribution estimation and the superiority of the maximal posterior probability decision criterion for the analysis of knee joint VAG signals.
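
    A minimal sketch of the decision rule: kernel density estimates for each class combined with a maximal-posterior comparison (SciPy's Gaussian KDE stands in for the paper's estimator; array shapes and empirical priors are my assumptions):

```python
import numpy as np
from scipy.stats import gaussian_kde

def make_map_classifier(x_normal, x_abnormal):
    """Maximal-posterior-probability decision rule over kernel density
    estimates of two classes of 2-D feature vectors (arrays of shape (2, n))."""
    kde_n, kde_a = gaussian_kde(x_normal), gaussian_kde(x_abnormal)
    prior_n = x_normal.shape[1] / (x_normal.shape[1] + x_abnormal.shape[1])

    def classify(x):
        post_n = kde_n(x) * prior_n              # unnormalized posteriors
        post_a = kde_a(x) * (1.0 - prior_n)
        return np.where(post_n >= post_a, "normal", "abnormal")

    return classify
```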

  10. Quantitative SPECT reconstruction of iodine-123 data

    International Nuclear Information System (INIS)

    Gilland, D.R.; Jaszczak, R.J.; Greer, K.L.; Coleman, R.E.

    1991-01-01

    Many clinical and research studies in nuclear medicine require quantitation of iodine-123 (¹²³I) distribution for the determination of kinetics or localization. The objective of this study was to implement several reconstruction methods designed for single-photon emission computed tomography (SPECT) using ¹²³I and to evaluate their performance in terms of quantitative accuracy, image artifacts, and noise. The methods consisted of four attenuation and scatter compensation schemes incorporated into both the filtered backprojection/Chang (FBP) and maximum likelihood-expectation maximization (ML-EM) reconstruction algorithms. The methods were evaluated on data acquired of a phantom containing a hot sphere of ¹²³I activity in a lower-level background ¹²³I distribution and nonuniform density media. For both reconstruction algorithms, nonuniform attenuation compensation combined with either scatter subtraction or Metz filtering produced images that were quantitatively accurate to within 15% of the true value. The ML-EM algorithm demonstrated quantitative accuracy comparable to FBP and smaller relative noise magnitude for all compensation schemes.
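
    For reference, the classic ML-EM update that such reconstructions iterate, as a minimal numpy sketch (the system matrix A and projections y are placeholders; attenuation and scatter compensation are omitted):

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Classic ML-EM iteration for emission tomography:
    x_{k+1} = x_k / (A^T 1) * A^T (y / (A x_k)).
    A: (n_bins, n_voxels) system matrix; y: measured projection counts."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])            # sensitivity image A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)    # avoid division by zero
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```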

  11. Using a latent variable model with non-constant factor loadings to examine PM2.5 constituents related to secondary inorganic aerosols.

    Science.gov (United States)

    Zhang, Zhenzhen; O'Neill, Marie S; Sánchez, Brisa N

    2016-04-01

    Factor analysis is a commonly used method of modelling correlated multivariate exposure data. Typically, the measurement model is assumed to have constant factor loadings. However, from our preliminary analyses of the Environmental Protection Agency's (EPA's) PM2.5 fine speciation data, we have observed that the factor loadings for four constituents change considerably in stratified analyses. Since invariance of factor loadings is a prerequisite for valid comparison of the underlying latent variables, we propose a factor model that includes non-constant factor loadings, which change over time and space, using P-splines penalized with the generalized cross-validation (GCV) criterion. The model is implemented using the Expectation-Maximization (EM) algorithm, and we select the multiple spline smoothing parameters by minimizing the GCV criterion with Newton's method during each iteration of the EM algorithm. The algorithm is applied to a one-factor model that includes four constituents. Through bootstrap confidence bands, we find that the factor loading for total nitrate changes across seasons and geographic regions.

  12. A new modified artificial bee colony algorithm for the economic dispatch problem

    International Nuclear Information System (INIS)

    Secui, Dinu Calin

    2015-01-01

    Highlights: • A new modified ABC algorithm (MABC) is proposed to solve the EcD/EmD problem. • Valve-point effects, ramp-rate limits, POZ, transmission losses were considered. • The algorithm is tested on four systems having 6, 13, 40 and 52 thermal units. • MABC algorithm outperforms several optimization techniques. - Abstract: In this paper a new modified artificial bee colony algorithm (MABC) is proposed to solve the economic dispatch problem by taking into account the valve-point effects, the emission pollutions and various operating constraints of the generating units. The MABC algorithm introduces a new relation to update the solutions within the search space, in order to increase the algorithm ability to avoid premature convergence and to find stable and high quality solutions. Moreover, to strengthen the MABC algorithm performance, it is endowed with a chaotic sequence generated by both a cat map and a logistic map. The MABC algorithm behavior is investigated for several combinations resulting from three generating modalities of the chaotic sequences and two selection schemes of the solutions. The performance of the MABC variants is tested on four systems having six units, thirteen units, forty units and fifty-two thermal generating units. The comparison of the results shows that the MABC variants have a better performance than the classical ABC algorithm and other optimization techniques
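
    For illustration, minimal generators for the two chaotic sequences the paper embeds into MABC (initial values and parameters are arbitrary choices):

```python
def logistic_map(x0=0.7, r=4.0, n=100):
    """Logistic-map chaotic sequence in (0, 1), usable to perturb the
    solution-update relation in place of uniform random numbers."""
    seq, x = [], x0
    for _ in range(n):
        x = r * x * (1.0 - x)
        seq.append(x)
    return seq

def cat_map(x0=0.3, y0=0.6, n=100):
    """Arnold's cat map on the unit torus; either coordinate serves as a
    chaotic sequence."""
    seq, x, y = [], x0, y0
    for _ in range(n):
        x, y = (x + y) % 1.0, (x + 2.0 * y) % 1.0
        seq.append(x)
    return seq
```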

  13. On the use of harmony search algorithm in the training of wavelet neural networks

    Science.gov (United States)

    Lai, Kee Huong; Zainuddin, Zarita; Ong, Pauline

    2015-10-01

    Wavelet neural networks (WNNs) are a class of feedforward neural networks that have been used in a wide range of industrial and engineering applications to model the complex relationships between the given inputs and outputs. The training of WNNs involves the configuration of the weight values between neurons. The backpropagation training algorithm, which is a gradient-descent method, can be used for this training purpose. Nonetheless, the solutions found by this algorithm often get trapped at local minima. In this paper, a harmony search-based algorithm is proposed for the training of WNNs. The training of WNNs, thus can be formulated as a continuous optimization problem, where the objective is to maximize the overall classification accuracy. Each candidate solution proposed by the harmony search algorithm represents a specific WNN architecture. In order to speed up the training process, the solution space is divided into disjoint partitions during the random initialization step of harmony search algorithm. The proposed training algorithm is tested onthree benchmark problems from the UCI machine learning repository, as well as one real life application, namely, the classification of electroencephalography signals in the task of epileptic seizure detection. The results obtained show that the proposed algorithm outperforms the traditional harmony search algorithm in terms of overall classification accuracy.

  14. Polyphonic pitch detection and instrument separation

    Science.gov (United States)

    Bay, Mert; Beauchamp, James W.

    2005-09-01

    An algorithm for polyphonic pitch detection and musical instrument separation is presented. Each instrument is represented as a time-varying harmonic series. Spectral information is obtained from a monaural input signal using a spectral peak tracking method. Fundamental frequencies (F0s) for each time frame are estimated from the spectral data using an Expectation Maximization (EM) algorithm with a Gaussian mixture model representing the harmonic series. The method first estimates the most predominant F0, suppresses its series in the input, and then the EM algorithm is run iteratively to estimate each next F0. Collisions between instrument harmonics, which frequently occur, are predicted from the estimated F0s, and the resulting corrupted harmonics are ignored. The amplitudes of these corrupted harmonics are replaced by harmonics taken from a library of spectral envelopes for different instruments, where the spectrum which most closely matches the important characteristics of each extracted spectrum is chosen. Finally, each voice is separately resynthesized by additive synthesis. This algorithm is demonstrated for a trio piece that consists of 3 different instruments.

  15. Ensemble bayesian model averaging using markov chain Monte Carlo sampling

    Energy Technology Data Exchange (ETDEWEB)

    Vrugt, Jasper A [Los Alamos National Laboratory; Diks, Cees G H [NON LANL; Clark, Martyn P [NON LANL

    2008-01-01

    Bayesian model averaging (BMA) has recently been proposed as a statistical method to calibrate forecast ensembles from numerical weather models. Successful implementation of BMA, however, requires accurate estimates of the weights and variances of the individual competing models in the ensemble. In their seminal paper, Raftery et al. (Mon Weather Rev 133:1155-1174, 2005) recommended the Expectation-Maximization (EM) algorithm for BMA model training, even though global convergence of this algorithm cannot be guaranteed. In this paper, we compare the performance of the EM algorithm and the recently developed Differential Evolution Adaptive Metropolis (DREAM) Markov Chain Monte Carlo (MCMC) algorithm for estimating the BMA weights and variances. Simulation experiments using 48-hour ensemble data of surface temperature and multi-model stream-flow forecasts show that both methods produce similar results, and that their performance is unaffected by the length of the training data set. However, MCMC simulation with DREAM is capable of efficiently handling a wide variety of BMA predictive distributions, and provides useful information about the uncertainty associated with the estimated BMA weights and variances.
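
    A compact EM sketch for BMA weight and variance training under Gaussian member densities (a simplified reading of the Raftery et al. scheme: bias correction omitted and one shared variance; function names are mine):

```python
import numpy as np
from scipy.stats import norm

def bma_em(F, y, n_iter=200, tol=1e-8):
    """EM for BMA weights and a common variance, assuming Gaussian member
    forecast densities. F: (T, K) ensemble forecasts; y: (T,) observations."""
    T, K = F.shape
    w = np.full(K, 1.0 / K)
    var = np.var(y - F.mean(axis=1))
    ll_old = -np.inf
    for _ in range(n_iter):
        # E-step: responsibility of member k for observation t
        dens = norm.pdf(y[:, None], loc=F, scale=np.sqrt(var)) * w
        dens = np.maximum(dens, 1e-300)                # numerical floor
        z = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights and the common variance
        w = z.mean(axis=0)
        var = np.sum(z * (y[:, None] - F) ** 2) / T
        ll = np.sum(np.log(dens.sum(axis=1)))
        if ll - ll_old < tol:                          # log-likelihood converged
            break
        ll_old = ll
    return w, var
```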

  16. A parallel graded-mesh FDTD algorithm for human-antenna interaction problems.

    Science.gov (United States)

    Catarinucci, Luca; Tarricone, Luciano

    2009-01-01

    The finite difference time domain (FDTD) method is frequently used for the numerical solution of a wide variety of electromagnetic (EM) problems, among them those concerning human exposure to EM fields. In many practical cases related to the assessment of occupational EM exposure, large simulation domains are modeled and high spatial resolution is adopted, so that strong memory and central processing unit power requirements have to be satisfied. To cope with this computational effort, parallel computing is a winning approach; alternatively, subgridding techniques are often implemented. However, the simultaneous use of subgridding schemes and parallel algorithms is very new. In this paper, an easy-to-implement and highly efficient parallel graded-mesh (GM) FDTD scheme is proposed and applied to human-antenna interaction problems, demonstrating its appropriateness in dealing with complex occupational tasks and showing its capability to guarantee the advantages of a traditional subgridding technique without affecting parallel FDTD performance.

  17. The Forward-Reverse Algorithm for Stochastic Reaction Networks

    KAUST Repository

    Bayer, Christian

    2015-01-07

    In this work, we present an extension of the forward-reverse algorithm by Bayer and Schoenmakers [2] to the context of stochastic reaction networks (SRNs). We then apply this bridge-generation technique to the statistical inference problem of approximating the reaction coefficients based on discretely observed data. To this end, we introduce a two-phase iterative inference method in which we solve a set of deterministic optimization problems where the SRNs are replaced by the classical ODE rates; then, during the second phase, the Monte Carlo version of the EM algorithm is applied starting from the output of the previous phase. Starting from a set of over-dispersed seeds, the output of our two-phase method is a cluster of maximum likelihood estimates obtained by using convergence assessment techniques from the theory of Markov chain Monte Carlo.

  18. An Enhanced PSO-Based Clustering Energy Optimization Algorithm for Wireless Sensor Network.

    Science.gov (United States)

    Vimalarani, C; Subramanian, R; Sivanandam, S N

    2016-01-01

    A Wireless Sensor Network (WSN) is a network formed from a large number of sensor nodes positioned in an application environment to monitor physical entities in a target area, for example temperature, water level, pressure, and health care, as well as various military applications. Sensor nodes are mostly equipped with self-supported battery power, through which they can perform adequate operations and communicate with neighboring nodes. To maximize the lifetime of a wireless sensor network, energy conservation measures are essential for improving its performance. This paper proposes an Enhanced PSO-Based Clustering Energy Optimization (EPSO-CEO) algorithm for wireless sensor networks, in which clustering and cluster-head selection are performed using the Particle Swarm Optimization (PSO) algorithm so as to minimize power consumption in the WSN. The performance metrics are evaluated and the results are compared with a competitive clustering algorithm to validate the reduction in energy consumption.

  19. Multiobjective genetic algorithm conjunctive use optimization for production, cost, and energy with dynamic return flow

    Science.gov (United States)

    Peralta, Richard C.; Forghani, Ali; Fayad, Hala

    2014-04-01

    Many real water resources optimization problems involve conflicting objectives for which the main goal is to find a set of optimal solutions on, or near to, the Pareto front. The ε-constraint and weighting multiobjective optimization techniques have shortcomings, especially as the number of objectives increases. Multiobjective Genetic Algorithms (MGA) have been previously proposed to overcome these difficulties. Here, an MGA derives a set of optimal solutions for multiobjective multiuser conjunctive use of reservoir, stream, and (un)confined groundwater resources. The proposed methodology is applied to a hydraulically and economically nonlinear system in which all significant flows, including stream-aquifer-reservoir-diversion-return flow interactions, are simulated and optimized simultaneously for multiple periods. Neural networks represent constrained state variables. The objectives that can be optimized simultaneously in the coupled simulation-optimization model are: (1) maximizing water provided from sources, (2) maximizing hydropower production, and (3) minimizing the operation costs of transporting water from sources to destinations. Results show the efficiency of multiobjective genetic algorithms for generating Pareto optimal sets for complex nonlinear multiobjective optimization problems.
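
    At the heart of any MGA is the Pareto dominance test; a minimal filter that extracts the non-dominated set from a matrix of objective values (all objectives cast as minimizations, so maximization objectives such as supply and hydropower would be negated) might look like this sketch:

        import numpy as np

        def pareto_front(F):
            """Indices of non-dominated rows of F, all objectives to be minimized."""
            keep = np.ones(F.shape[0], dtype=bool)
            for i in range(F.shape[0]):
                # j dominates i if it is no worse everywhere and better somewhere
                dom = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
                keep[i] = not dom.any()
            return np.flatnonzero(keep)

        # Toy objective matrix: (-supply, -hydropower, cost), maximizations negated
        F = np.random.default_rng(1).random((200, 3))
        front = pareto_front(F)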

  20. An Algorithmic Information Calculus for Causal Discovery and Reprogramming Systems

    KAUST Repository

    Zenil, Hector

    2017-09-08

    We introduce a conceptual framework and an interventional calculus to steer and manipulate systems based on their intrinsic algorithmic probability, using the universal principles of the theory of computability and algorithmic information. By applying sequences of controlled interventions to systems and networks, we estimate how changes in their algorithmic information content are reflected in positive/negative shifts towards and away from randomness. The strong connection between approximations to algorithmic complexity (the size of the shortest generating mechanism) and causality induces a sequence of perturbations ranking the network elements by their steering capability. This new dimension unmasks a separation between causal and non-causal components, providing a suite of powerful parameter-free algorithms of wide applicability, ranging from optimal dimension reduction and maximal randomness analysis to system control. We introduce methods for reprogramming systems that do not require full knowledge of, or access to, the system's actual kinetic equations or any probability distributions. A causal interventional analysis of synthetic and regulatory biological networks reveals how algorithmic reprogramming qualitatively reshapes the system's dynamic landscape. For example, during cellular differentiation we find a decrease in the number of elements corresponding to a transition away from randomness, and a combination of the system's intrinsic properties and its capability to be algorithmically reprogrammed can reconstruct an epigenetic landscape. The interventional calculus is broadly applicable to predictive causal inference in systems such as networks, and is of relevance to a variety of machine and causal learning techniques driving model-based approaches to better understanding and manipulating complex systems.
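
    The paper works with estimates of algorithmic probability; as a loose, computable stand-in, the sketch below uses zlib-compressed length as a crude upper-bound proxy for complexity and ranks each edge of a network by the shift its removal causes, which is the flavor of the perturbation calculus (the proxy choice is an assumption for illustration, not the authors' estimator):

        import zlib
        import numpy as np

        def c_approx(adj):
            """Crude complexity proxy: zlib length of the packed adjacency matrix."""
            return len(zlib.compress(np.packbits(adj).tobytes(), 9))

        def rank_edges(adj):
            """Rank each edge by the complexity shift its removal causes; negative
            shifts move the network towards simplicity, positive towards randomness."""
            base = c_approx(adj)
            shifts = []
            for i, j in zip(*np.nonzero(adj)):
                pert = adj.copy()
                pert[i, j] = 0                     # single-edge deletion perturbation
                shifts.append(((int(i), int(j)), c_approx(pert) - base))
            return sorted(shifts, key=lambda t: t[1])

        adj = (np.random.default_rng(2).random((32, 32)) < 0.1).astype(np.uint8)
        print(rank_edges(adj)[:5])                 # most "simplifying" edges first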

  1. Aging and loss decision making: increased risk aversion and decreased use of maximizing information, with correlated rationality and value maximization.

    Science.gov (United States)

    Kurnianingsih, Yoanna A; Sim, Sam K Y; Chee, Michael W L; Mullette-Gillman, O'Dhaniel A

    2015-01-01

    We investigated how adult aging specifically alters economic decision-making, focusing on examining alterations in uncertainty preferences (willingness to gamble) and choice strategies (what gamble information influences choices) within both the gains and losses domains. Within each domain, participants chose between certain monetary outcomes and gambles with uncertain outcomes. We examined preferences by quantifying how uncertainty modulates choice behavior as if altering the subjective valuation of gambles. We explored age-related preferences for two types of uncertainty: risk and ambiguity. Additionally, we explored how aging may alter what information participants utilize to make their choices by comparing the relative utilization of maximizing and satisficing information types through a choice strategy metric. Maximizing information was the ratio of the expected value of the two options, while satisficing information was the probability of winning. We found age-related alterations of economic preferences within the losses domain, but no alterations within the gains domain. Older adults (OA; 61-80 years old) were significantly more uncertainty averse for both risky and ambiguous choices. OA also exhibited choice strategies with decreased use of maximizing information. Within OA, we found a significant correlation between risk preferences and choice strategy. This linkage between preferences and strategy appears to derive from a convergence to risk neutrality driven by greater use of the effortful maximizing strategy. As utility maximization and value maximization intersect at risk neutrality, this result suggests that OA are exhibiting a relationship between enhanced rationality and enhanced value maximization. While there was variability in economic decision-making measures within OA, these individual differences were unrelated to variability within examined measures of cognitive ability. Our results demonstrate that aging alters economic decision-making for

  2. Aging and loss decision making: increased risk aversion and decreased use of maximizing information, with correlated rationality and value maximization

    Directory of Open Access Journals (Sweden)

    Yoanna Arlina Kurnianingsih

    2015-05-01

    We investigated how adult aging specifically alters economic decision-making, focusing on examining alterations in uncertainty preferences (willingness to gamble) and choice strategies (what gamble information influences choices) within both the gains and losses domains. Within each domain, participants chose between certain monetary outcomes and gambles with uncertain outcomes. We examined preferences by quantifying how uncertainty modulates choice behavior as if altering the subjective valuation of gambles. We explored age-related preferences for two types of uncertainty: risk and ambiguity. Additionally, we explored how aging may alter what information participants utilize to make their choices by comparing the relative utilization of maximizing and satisficing information types through a choice strategy metric. Maximizing information was the ratio of the expected value of the two options, while satisficing information was the probability of winning. We found age-related alterations of economic preferences within the losses domain, but no alterations within the gains domain. Older adults (OA; 61 to 80 years old) were significantly more uncertainty averse for both risky and ambiguous choices. OA also exhibited choice strategies with decreased use of maximizing information. Within OA, we found a significant correlation between risk preferences and choice strategy. This linkage between preferences and strategy appears to derive from a convergence to risk neutrality driven by greater use of the effortful maximizing strategy. As utility maximization and value maximization intersect at risk neutrality, this result suggests that OA are exhibiting a relationship between enhanced rationality and enhanced value maximization. While there was variability in economic decision-making measures within OA, these individual differences were unrelated to variability within examined measures of cognitive ability. Our results demonstrate that aging alters economic

  3. Effective Iterated Greedy Algorithm for Flow-Shop Scheduling Problems with Time lags

    Science.gov (United States)

    ZHAO, Ning; YE, Song; LI, Kaidian; CHEN, Siyu

    2017-05-01

    The flow-shop scheduling problem with time lags is a practical scheduling problem that has attracted many studies. The permutation problem (PFSP with time lags) has received concentrated attention, but the non-permutation problem (non-PFSP with time lags) seems to have been neglected. With the aim of minimizing the makespan while satisfying time-lag constraints, efficient algorithms for the PFSP and non-PFSP problems are proposed: an iterated greedy algorithm for the permutation case (IGTLP) and an iterated greedy algorithm for the non-permutation case (IGTLNP). The proposed algorithms are verified using well-known simple and complex instances of permutation and non-permutation problems with various time-lag ranges. The permutation results indicate that the proposed IGTLP can reach a near-optimal solution within roughly 11% of the computational time of the traditional GA approach. The non-permutation results indicate that the proposed IG can reach nearly the same solution within less than 1% of the computational time of the traditional GA approach. The proposed research treats the PFSP and non-PFSP together, with both minimal and maximal time lags considered, which provides an interesting viewpoint for industrial implementation.
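
    A plain iterated greedy loop for the permutation flow shop, with random destruction and best-insertion reconstruction, conveys the structure of IGTLP; note the time-lag constraints themselves are omitted from this sketch, so the makespan routine below is the standard PFSP recursion only:

        import random

        def makespan(perm, p):
            """Completion time of the last job; p[j][m] is job j's time on machine m."""
            C = [0.0] * len(p[0])
            for j in perm:
                C[0] += p[j][0]
                for k in range(1, len(C)):
                    C[k] = max(C[k], C[k - 1]) + p[j][k]
            return C[-1]

        def iterated_greedy(p, d=2, iters=2000, seed=0):
            random.seed(seed)
            best = list(range(len(p)))
            perm = best[:]
            for _ in range(iters):
                removed = random.sample(perm, d)            # destruction phase
                partial = [j for j in perm if j not in removed]
                for j in removed:                           # greedy reinsertion
                    partial = min((partial[:i] + [j] + partial[i:]
                                   for i in range(len(partial) + 1)),
                                  key=lambda s: makespan(s, p))
                if makespan(partial, p) <= makespan(best, p):
                    best = partial[:]
                perm = best[:]                              # restart from incumbent
            return best

        p = [[3, 2, 4], [1, 4, 2], [2, 1, 3], [4, 3, 1], [2, 2, 2]]  # 5 jobs, 3 machines
        best = iterated_greedy(p)
        print(best, makespan(best, p))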

  4. Automated spike sorting algorithm based on Laplacian eigenmaps and k-means clustering.

    Science.gov (United States)

    Chah, E; Hok, V; Della-Chiesa, A; Miller, J J H; O'Mara, S M; Reilly, R B

    2011-02-01

    This study presents a new automatic spike sorting method based on feature extraction by Laplacian eigenmaps combined with k-means clustering. The performance of the proposed method was compared against previously reported algorithms such as principal component analysis (PCA) and amplitude-based feature extraction. Two types of classifier (namely k-means and classification expectation-maximization) were incorporated within the spike sorting algorithms, in order to find a suitable classifier for the feature sets. Simulated data sets and in-vivo tetrode multichannel recordings were employed to assess the performance of the spike sorting algorithms. The results show that the proposed algorithm yields significantly improved performance, with a mean sorting accuracy of 73% and a sorting error of 10%, compared to PCA, which combined with k-means had a sorting accuracy of 58% and a sorting error of 10%.
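
    A compact approximation of this pipeline using off-the-shelf components (scikit-learn's SpectralEmbedding is an implementation of Laplacian eigenmaps) might read as follows; the synthetic waveforms and parameter choices are illustrative, not the study's data or settings:

        import numpy as np
        from sklearn.manifold import SpectralEmbedding    # Laplacian eigenmaps
        from sklearn.cluster import KMeans

        rng = np.random.default_rng(3)
        # Synthetic "spikes": three waveform templates plus noise, 48 samples each
        t = np.linspace(0, 1, 48)
        templates = [np.sin(2 * np.pi * f * t) * np.exp(-4 * t) for f in (3, 5, 8)]
        X = np.vstack([tpl + 0.1 * rng.standard_normal((100, 48)) for tpl in templates])

        # Feature extraction: Laplacian eigenmaps over a nearest-neighbour graph
        emb = SpectralEmbedding(n_components=3, n_neighbors=10, random_state=0)
        features = emb.fit_transform(X)

        # Classification: k-means in the low-dimensional embedded space
        labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(features)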

  5. A Genetic Algorithm Approach to the Optimization of a Radioactive Waste Treatment System

    International Nuclear Information System (INIS)

    Yang, Yeongjin; Lee, Kunjai; Koh, Y.; Mun, J.H.; Kim, H.S.

    1998-01-01

    This study concerns the application of goal programming and genetic algorithm techniques to the analysis of management and operational problems in a radioactive waste treatment system (RWTS). A typical RWTS is modeled and solved by goal programming and by a genetic algorithm, in order to study and resolve the effects of conflicting objectives such as cost, the limit on radioactivity released to the environment, equipment utilization, and the total treatable radioactive waste volume before discharge and disposal. The developed model is validated and verified using actual data obtained from the RWTS at Kyoto University in Japan. The goal-programming and genetic-algorithm solutions indicate the optimal operating point, which maximizes the total treatable radioactive waste volume and minimizes the released radioactivity of liquid waste, even under restricted resources. A comparison of the two methods shows very similar results. (author)

  6. A Biologically-Inspired Power Control Algorithm for Energy-Efficient Cellular Networks

    Directory of Open Access Journals (Sweden)

    Hyun-Ho Choi

    2016-03-01

    Most of the energy used to operate a cellular network is consumed by the base station (BS), and reducing the transmission power of a BS can therefore afford a substantial reduction in the amount of energy used in a network. In this paper, we propose a distributed transmit power control (TPC) algorithm inspired by bird flocking behavior as a means of improving the energy efficiency of a cellular network. Just as each bird in a flock attempts to match its velocity with the average velocity of adjacent birds, in the proposed algorithm each mobile station (MS) in a cell matches its rate with the average rate of the co-channel MSs in adjacent cells by controlling the transmit power of its serving BS. We verify that this bio-inspired TPC algorithm, using a local rate-averaging process, achieves exponential convergence and maximizes the minimum rate of the MSs concerned. Simulation results show that the proposed TPC algorithm follows the same convergence properties as the flocking algorithm and also effectively reduces the power consumption at the BSs while maintaining a low outage probability as the inter-cell interference increases; in so doing, it significantly improves the energy efficiency of a cellular network.

  7. Designing a solar powered Stirling heat engine based on multiple criteria: Maximized thermal efficiency and power

    International Nuclear Information System (INIS)

    Ahmadi, Mohammad Hossein; Sayyaadi, Hoseyn; Dehghani, Saeed; Hosseinzade, Hadi

    2013-01-01

    Highlights: • Thermodynamic model of a solar-dish Stirling engine was presented. • Thermal efficiency and output power of the engine were simultaneously maximized. • A final optimal solution was selected using several decision-making methods. • An optimal solution with least deviation from the ideal design was obtained. • Optimal solutions showed high sensitivity against variation of system parameters. - Abstract: A solar-powered high temperature differential Stirling engine was considered for optimization using multiple criteria. A thermal model was developed so that the output power and thermal efficiency of the solar Stirling system with a finite rate of heat transfer, regenerative heat loss, conductive thermal bridging loss, finite regeneration process time, and imperfect performance of the dish collector could be obtained. The output power and overall thermal efficiency were considered for simultaneous maximization. Multi-objective evolutionary algorithms (MOEAs) based on the NSGA-II algorithm were employed, with the solar absorber temperature and the highest and lowest temperatures of the working fluid as the decision variables. The Pareto optimal frontier was obtained and a final optimal solution was selected using various decision-making methods, including fuzzy Bellman-Zadeh, LINMAP and TOPSIS. It was found that multi-objective optimization could yield results with a relatively low deviation from the ideal solution in comparison to the conventional single-objective approach. Furthermore, it was shown that if the weight of thermal efficiency as an objective function is taken to be greater than the weight of the power objective, a lower absorber temperature and a lower temperature ratio should be considered in the design of the Stirling engine.
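
    Of the decision-making methods named above, TOPSIS is the most compact to sketch: normalize the Pareto set, locate the ideal and anti-ideal points, and rank alternatives by relative closeness. The numbers and weights below are invented for illustration:

        import numpy as np

        def topsis(F, weights, benefit):
            """Rank alternatives by closeness to the ideal point. F is the (n, m)
            decision matrix; benefit[i] is True if objective i is maximized."""
            V = F / np.linalg.norm(F, axis=0) * weights      # normalize, then weight
            ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
            anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
            d_pos = np.linalg.norm(V - ideal, axis=1)
            d_neg = np.linalg.norm(V - anti, axis=1)
            return d_neg / (d_pos + d_neg)                   # larger is better

        # Invented Pareto set: columns are (output power [W], thermal efficiency)
        pareto = np.array([[6500.0, 0.32], [5800.0, 0.34], [5100.0, 0.36]])
        score = topsis(pareto, weights=np.array([0.5, 0.5]),
                       benefit=np.array([True, True]))
        best = pareto[score.argmax()]

    Raising the efficiency weight pulls the selection toward the low-power, high-efficiency end of the front, which is exactly the weight sensitivity the abstract describes.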

  8. Optimal pacing for right ventricular and biventricular devices: minimizing, maximizing, and right ventricular/left ventricular site considerations.

    Science.gov (United States)

    Gillis, Anne M

    2014-10-01

    The results from numerous clinical studies provide guidance for optimizing outcomes related to RV or biventricular pacing in the pacemaker and ICD populations. (1) Programming algorithms to minimize RV pacing is imperative in patients with dual-chamber pacemakers who have intrinsic AV conduction or intermittent AV conduction block. (2) Dual-chamber ICDs should be avoided in candidates without an indication for bradycardia pacing. (3) Alternate RV septal pacing sites may be considered at the time of pacemaker implantation. (4) Biventricular pacing may be beneficial in some patients with mild LV dysfunction. (5) LV lead placement at the site of latest LV activation is desirable. (6) Programming CRT systems to achieve biventricular/LV pacing >98.5% is important. (7) Protocols for AV and VV optimization in patients with CRT are not recommended after device implantation but may be considered for CRT nonresponders. (8) Novel algorithms to maximize the benefit of CRT continue to evolve.

  9. Allometric and Isometric variations in the Italian Apodemus sylvaticus and Apodemus flavicollis with respect to the conditions of allopatry and sympatry

    Directory of Open Access Journals (Sweden)

    Giovanni Amori

    1986-12-01

    Abstract In Italy there are two species of Apodemus (Sylvaemus): Apodemus sylvaticus on the mainland and the main island, and Apodemus flavicollis only on the mainland. The trend of some morphometric characters of the skull (incisive foramen length = FI; interorbital breadth = IO; length of palatal bridge = PP; upper alveolar length = $M^1M^3$) was analyzed and some theoretical models verified for A. sylvaticus. If one considers the sympatric populations of A. sylvaticus and A. flavicollis simultaneously, the characters PP, IO and $M^1M^3$ appear significantly isometric, being directly correlated ($P \leq 0.01$), while the FI character is allometric with respect to the previous ones, as expected. If one considers the sympatric populations of each of the species separately, the scenario is different. For A. sylvaticus only PP and $M^1M^3$ are isometric ($P \leq 0.05$). For A. flavicollis only $M^1M^3$ and FI appear to be correlated, although not as significantly as for A. sylvaticus ($P \leq 0.05$; one-tailed). The insular populations of A. sylvaticus do not show significant correlations, except for FI and $M^1M^3$ ($P \leq 0.05$). On the contrary, considering all populations, sympatric and allopatric, of A. sylvaticus at the same time, there are significant correlations ($P \leq 0.05$) in all combinations of characters, except for those involving IO. We suggest that the isometric relations in sympatric assemblages are confined within a morphological range available to the genus Apodemus. In such a space, the two species are split into two different and innerly homogeneous distributions. We found no evidence to confirm the niche variation hypothesis. On the contrary, the variability, expressed as SD or CV, appears higher in the sympatric populations than in the allopatric ones for three of the four characters, confirming previous results.

  10. Maximally Informative Observables and Categorical Perception

    OpenAIRE

    Tsiang, Elaine

    2012-01-01

    We formulate the problem of perception in the framework of information theory, and prove that categorical perception is equivalent to the existence of an observable that has the maximum possible information on the target of perception. We call such an observable maximally informative. Regardless of whether categorical perception is real, maximally informative observables can form the basis of a theory of perception. We conclude with the implications of such a theory for the problem of speech per...

  11. A Novel Apoptosis Correlated Molecule: Expression and Characterization of Protein Latcripin-1 from Lentinula edodes C91-3

    Directory of Open Access Journals (Sweden)

    Min Huang

    2012-05-01

    An apoptosis-correlated molecule, protein Latcripin-1 of Lentinula edodes C91-3, was expressed and characterized in Pichia pastoris GS115. The total RNA was obtained from Lentinula edodes C91-3. According to the transcriptome, the full-length gene of Latcripin-1 was isolated with 3'-Full Rapid Amplification of cDNA Ends (RACE) and 5'-Full RACE methods. The full-length gene was inserted into the secretory expression vector pPIC9K. The protein Latcripin-1 was expressed in Pichia pastoris GS115 and analyzed by sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE) and Western blot. The Western blot showed that the protein was expressed successfully. The biological function of protein Latcripin-1 on A549 cells was studied with flow cytometry and the 3-(4,5-dimethylthiazol-2-yl)-2,5-diphenyltetrazolium bromide (MTT) method. The toxic effect of protein Latcripin-1 was detected with the MTT method by co-culturing the characterized protein with chick embryo fibroblasts. The MTT assay results showed that there was a great difference between the protein Latcripin-1 groups and the control group (p < 0.05). There was no toxic effect of the characterized protein on chick embryo fibroblasts. The flow cytometry showed that there was a significant difference between the protein groups of interest and the control group with respect to apoptosis (p < 0.05). At the same time, cell ultrastructure observed by transmission electron microscopy supported the results of flow cytometry. The work demonstrates that protein Latcripin-1 can induce apoptosis of human lung cancer A549 cells and brings new insights into and advantages to finding anti-tumor proteins.

  12. Solving Flexible Job-Shop Scheduling Problem Using Gravitational Search Algorithm and Colored Petri Net

    Directory of Open Access Journals (Sweden)

    Behnam Barzegar

    2012-01-01

    A scheduled production system helps avoid stock accumulation, reduce losses, decrease or even eliminate idle machines, and make better use of machines, so that customer orders are answered on time and requested materials are supplied when needed. In flexible job-shop scheduling production systems, time and costs can be reduced by transferring and delivering operations among existing machines; this is an NP-hard problem. The scheduling objective is to minimize the maximal completion time of all the operations, denoted the makespan. Different methods and algorithms have been presented for solving this problem, and having a reasonably scheduled production system has significant influence on improving effectiveness and attaining organizational goals. In this paper, a new algorithm is proposed for flexible job-shop scheduling problem systems (FJSSP-GSPN) that is based on the gravitational search algorithm (GSA). In the proposed method, the flexible job-shop scheduling problem is modeled by a colored Petri net using the CPN tool, and the schedule is then computed by the GSA. The experimental results show that the proposed method has reasonable performance in comparison with other algorithms.
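
    The continuous GSA core, separate from the Petri-net encoding of schedules (which is problem-specific and not reproduced here), can be sketched on a toy objective; the constants G0 and alpha follow common GSA practice and are assumptions:

        import numpy as np

        def gsa(f, dim=5, n_agents=30, iters=300, lo=-10.0, hi=10.0, seed=0):
            rng = np.random.default_rng(seed)
            X = rng.uniform(lo, hi, (n_agents, dim))
            V = np.zeros_like(X)
            G0, alpha = 100.0, 20.0                          # common GSA constants
            for t in range(iters):
                fit = np.array([f(x) for x in X])
                best, worst = fit.min(), fit.max()
                m = (worst - fit) / (worst - best + 1e-12)   # better fitness -> mass
                M = m / (m.sum() + 1e-12)
                G = G0 * np.exp(-alpha * t / iters)          # decaying gravity
                elite = np.argsort(fit)[:max(1, int(n_agents * (1 - t / iters)))]
                A = np.zeros_like(X)
                for j in elite:                              # forces from Kbest agents
                    diff = X[j] - X
                    R = np.linalg.norm(diff, axis=1, keepdims=True)
                    A += rng.random((n_agents, 1)) * G * M[j] * diff / (R + 1e-12)
                V = rng.random(X.shape) * V + A
                X = np.clip(X + V, lo, hi)
            fit = np.array([f(x) for x in X])
            return X[fit.argmin()], fit.min()

        x_best, f_best = gsa(lambda x: float(np.sum(x ** 2)))  # toy sphere objective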

  13. Algorithms for selecting informative marker panels for population assignment.

    Science.gov (United States)

    Rosenberg, Noah A

    2005-11-01

    Given a set of potential source populations, genotypes of an individual of unknown origin at a collection of markers can be used to predict the correct source population of the individual. For improved efficiency, informative markers can be chosen from a larger set of markers to maximize the accuracy of this prediction. However, selecting the loci that are individually most informative does not necessarily produce the optimal panel. Here, using genotypes from eight species--carp, cat, chicken, dog, fly, grayling, human, and maize--this univariate accumulation procedure is compared to new multivariate "greedy" and "maximin" algorithms for choosing marker panels. The procedures generally suggest similar panels, although the greedy method often recommends inclusion of loci that are not chosen by the other algorithms. In seven of the eight species, when applied to five or more markers, all methods achieve at least 94% assignment accuracy on simulated individuals, with one species--dog--producing this level of accuracy with only three markers, and the eighth species--human--requiring approximately 13-16 markers. The new algorithms produce substantial improvements over use of randomly selected markers; where differences among the methods are noticeable, the greedy algorithm leads to slightly higher probabilities of correct assignment. Although none of the approaches necessarily chooses the panel with optimal performance, the algorithms all likely select panels with performance near enough to the maximum that they all are suitable for practical use.
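
    The greedy procedure reduces to forward selection around an assignment-accuracy oracle; the sketch below assumes a placeholder accuracy(panel) evaluator, since the real one would refit the population-assignment model for each candidate panel:

        def greedy_panel(markers, accuracy, panel_size):
            """Forward selection: repeatedly add the marker that most improves the
            assignment accuracy of the current panel (ties broken arbitrarily)."""
            panel, remaining = [], set(markers)
            while len(panel) < panel_size and remaining:
                best = max(remaining, key=lambda m: accuracy(panel + [m]))
                panel.append(best)
                remaining.remove(best)
            return panel

        # Hypothetical stand-in oracle; a real evaluator is not additive like this
        info = {"m1": 0.30, "m2": 0.25, "m3": 0.10, "m4": 0.05}
        acc = lambda panel: min(1.0, sum(info[m] for m in panel))
        print(greedy_panel(list(info), acc, panel_size=2))      # -> ['m1', 'm2']

    With an additive oracle like this toy one, greedy selection is optimal; the abstract's point is precisely that real assignment accuracy is not additive across loci, which is why the greedy, maximin, and univariate rankings can disagree.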

  14. Learning to maximize reward rate: a model based on semi-Markov decision processes.

    Science.gov (United States)

    Khodadadi, Arash; Fakhari, Pegah; Busemeyer, Jerome R

    2014-01-01

    When animals have to make a number of decisions during a limited time interval, they face a fundamental problem: how much time should they spend on each decision in order to achieve the maximum possible total outcome? Deliberating more on one decision usually leads to more outcome, but less time will remain for other decisions. In the framework of sequential sampling models, the question is how animals learn to set their decision threshold such that the total expected outcome achieved during a limited time is maximized. The aim of this paper is to provide a theoretical framework for answering this question. To this end, we consider an experimental design in which each trial can come from one of several possible "conditions." A condition specifies the difficulty of the trial, the reward, the penalty and so on. We show that to maximize the expected reward during a limited time, the subject should set a separate value of the decision threshold for each condition. We propose a model of learning the optimal values of decision thresholds based on the theory of semi-Markov decision processes (SMDP). In our model, the experimental environment is modeled as an SMDP with each "condition" being a "state" and the values of the decision thresholds being the "actions" taken in those states. The problem of finding the optimal decision thresholds is then cast as the stochastic optimal control problem of taking actions in each state of the corresponding SMDP such that the average reward rate is maximized. Our model utilizes a biologically plausible learning algorithm to solve this problem. The simulation results show that at the beginning of learning the model chooses high values of the decision threshold, which lead to sub-optimal performance. With experience, however, the model learns to lower the value of the decision thresholds until it finally finds the optimal values.
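
    The paper's SMDP learning model is not reproduced in the abstract; the following generic average-reward sketch conveys the setup, with conditions as states, candidate thresholds as actions, and each trial returning a (reward, duration) pair. The update rules are a simple stochastic-approximation variant, not necessarily the authors' algorithm:

        import math
        import random
        from collections import defaultdict

        def reward_rate_learning(env_step, conditions, thresholds,
                                 trials=50000, alpha=0.1, beta=0.01, eps=0.1, seed=0):
            """env_step(cond, thr) runs one decision trial and returns (reward, duration)."""
            random.seed(seed)
            Q = defaultdict(float)                 # rate-adjusted value of (state, action)
            rho = 0.0                              # running reward-rate estimate
            for _ in range(trials):
                s = random.choice(conditions)      # condition drawn by the experiment
                if random.random() < eps:          # epsilon-greedy threshold choice
                    a = random.choice(thresholds)
                else:
                    a = max(thresholds, key=lambda th: Q[s, th])
                r, tau = env_step(s, a)
                Q[s, a] += alpha * ((r - rho * tau) - Q[s, a])
                rho += beta * (r - rho * tau)      # fixed point: rho = E[r] / E[tau]
            return Q, rho

        # Hypothetical trial simulator: accuracy saturates as the threshold grows,
        # while trial duration grows linearly, so an intermediate threshold wins.
        def env_step(cond, thr):
            p_correct = 1.0 - 0.5 * math.exp(-thr / cond)
            return (1.0 if random.random() < p_correct else 0.0), 0.3 + thr

        Q, rho = reward_rate_learning(env_step, conditions=[1.0, 2.0],
                                      thresholds=[0.5, 1.0, 2.0, 4.0])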

  15. Maximally Entangled Multipartite States: A Brief Survey

    International Nuclear Information System (INIS)

    Enríquez, M; Wintrowicz, I; Życzkowski, K

    2016-01-01

    The problem of identifying maximally entangled quantum states of composite quantum systems is analyzed. We review some states of multipartite systems distinguished with respect to certain measures of quantum entanglement. Numerical results obtained for 4-qubit pure states illustrate the fact that the notion of a maximally entangled state depends on the measure used. (paper)

  16. Corporate Social Responsibility and Profit Maximizing Behaviour

    OpenAIRE

    Becchetti, Leonardo; Giallonardo, Luisa; Tessitore, Maria Elisabetta

    2005-01-01

    We examine the behavior of a profit-maximizing monopolist in a horizontal differentiation model in which consumers differ in their degree of social responsibility (SR) and consumers' SR is dynamically influenced by habit persistence. The model outlines parametric conditions under which (consumer-driven) corporate social responsibility is an optimal choice compatible with profit-maximizing behavior.

  17. Automatic determination of pressurized water reactor core loading patterns which maximize end-of-cycle reactivity within power peaking and burnup constraints

    International Nuclear Information System (INIS)

    Hobson, G.H.

    1985-01-01

    An automated procedure for determining the optimal core loading pattern for a pressurized water reactor which maximizes end-of-cycle $k_{eff}$ while satisfying constraints on power peaking and discharge burnup has been developed. The optimization algorithm combines a two-energy-group, two-dimensional coarse-mesh finite difference diffusion theory neutronics model to simulate core conditions, a perturbation theory approach to determine reactivity, flux, power and burnup changes as a function of assembly shuffling, and Monte Carlo integer programming to select the optimal loading pattern solution. The core examined was a typical Cycle 2 reload with no burnable poisons. Results indicate that the core loading pattern that maximizes end-of-cycle $k_{eff}$ results in a 5.4% decrease in fuel cycle costs compared with the core loading pattern that minimizes the maximum relative radial power peak.

  18. Guinea pig maximization test

    DEFF Research Database (Denmark)

    Andersen, Klaus Ejner

    1985-01-01

    Guinea pig maximization tests (GPMT) with chlorocresol were performed to ascertain whether the sensitization rate was affected by minor changes in the Freund's complete adjuvant (FCA) emulsion used. Three types of emulsion were evaluated: the oil phase was mixed with propylene glycol, saline...

  19. Inverse Ising problem in continuous time: A latent variable approach

    Science.gov (United States)

    Donner, Christian; Opper, Manfred

    2017-12-01

    We consider the inverse Ising problem: the inference of network couplings from observed spin trajectories for a model with continuous time Glauber dynamics. By introducing two sets of auxiliary latent random variables we render the likelihood into a form which allows for simple iterative inference algorithms with analytical updates. The variables are (1) Poisson variables to linearize an exponential term which is typical for point process likelihoods and (2) Pólya-Gamma variables, which make the likelihood quadratic in the coupling parameters. Using the augmented likelihood, we derive an expectation-maximization (EM) algorithm to obtain the maximum likelihood estimate of network parameters. Using a third set of latent variables we extend the EM algorithm to sparse couplings via L1 regularization. Finally, we develop an efficient approximate Bayesian inference algorithm using a variational approach. We demonstrate the performance of our algorithms on data simulated from an Ising model. For data which are simulated from a more biologically plausible network with spiking neurons, we show that the Ising model captures well the low order statistics of the data and how the Ising couplings are related to the underlying synaptic structure of the simulated network.
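
    The quadratic form mentioned above rests on the Pólya-Gamma integral identity of Polson, Scott and Windle (2013); writing $\psi$ for the term that is linear in the couplings,

        \frac{(e^{\psi})^{a}}{(1+e^{\psi})^{b}}
          = 2^{-b}\, e^{\kappa \psi} \int_{0}^{\infty} e^{-\omega \psi^{2}/2}\; p_{\mathrm{PG}}(\omega \mid b, 0)\, \mathrm{d}\omega,
          \qquad \kappa = a - \tfrac{b}{2},

    where $p_{\mathrm{PG}}(\omega \mid b, 0)$ is the Pólya-Gamma PG(b, 0) density. Conditioned on the auxiliary variable $\omega$, the exponent is quadratic (hence Gaussian) in the couplings, which is what makes the EM updates analytical in the abstract's scheme.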

  20. Generator scheduling under competitive environment using Memory Management Algorithm

    Directory of Open Access Journals (Sweden)

    A. Amudha

    2013-09-01

    This paper presents a new approach to the real-time application of profit-based unit commitment using a Memory Management Algorithm. The main objective of the restructured system is to maximize its own profit without the responsibility of satisfying the forecasted demand. The Profit Based Unit Commitment (PBUC) problem is solved by a Memory Management Algorithm (MMA) in a real-time application. The MMA approach introduced in this paper considers both power and reserve generation. The proposed MMA uses best-fit and worst-fit allocation for generator scheduling in order to obtain the maximum profit under a softened demand constraint. The method also gives an idea of how much power and reserve should be sold in the markets. The proposed approach has been tested on power systems with 2, 3, and 10 generating units. Simulation results of the proposed approach have been compared with those of existing methods.