Parallel Expectation-Maximization Algorithm for Large Databases
HUANG Hao; SONG Han-tao; LU Yu-chang
2006-01-01
A new parallel expectation-maximization (EM) algorithm is proposed for large databases, with the aim of accelerating EM computation. As a well-known algorithm for estimation in generic statistical problems, the EM algorithm is widely used in many domains, but it often requires significant computational resources, so more elaborate methods are needed to handle databases with many records or high dimensionality. The parallel EM algorithm is based on partial E-steps, which retain the standard convergence guarantee of EM, and it takes full advantage of parallel computation. Applied to large databases, the algorithm achieved a speedup of about 2.6 over the standard EM algorithm, and its running time decreases nearly linearly as the number of processors increases.
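The partial E-step idea can be illustrated on a two-component 1-D Gaussian mixture: each data block computes its E-step and local sufficient statistics independently (the part that parallelizes), and only the combined statistics enter the global M-step. This is a minimal single-process sketch, not the paper's implementation; the model, the blocking scheme, and the initialization are assumptions.

```python
import math, random

def em_gmm_partitioned(data, n_parts=4, iters=50):
    """EM for a 1-D two-component Gaussian mixture with the E-step computed
    block-by-block, mimicking a partitioned/parallel E-step: each block's
    sufficient statistics are accumulated independently and combined
    before the global M-step."""
    mu = [min(data), max(data)]          # crude initialisation from the data range
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    blocks = [data[i::n_parts] for i in range(n_parts)]  # static partition
    for _ in range(iters):
        # accumulators for sufficient statistics: N_k, sum_k, sum-of-squares_k
        N = [0.0, 0.0]; S = [0.0, 0.0]; Q = [0.0, 0.0]
        for block in blocks:             # each block could run on its own processor
            for x in block:
                p = []
                for k in range(2):
                    coef = pi[k] / math.sqrt(2 * math.pi * var[k])
                    p.append(coef * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])))
                z = sum(p)
                for k in range(2):
                    r = p[k] / z         # responsibility of component k for x
                    N[k] += r; S[k] += r * x; Q[k] += r * x * x
        n = float(len(data))
        for k in range(2):               # global M-step from combined statistics
            pi[k] = N[k] / n
            mu[k] = S[k] / N[k]
            var[k] = max(Q[k] / N[k] - mu[k] ** 2, 1e-6)
    return mu, var, pi

random.seed(0)
data = [random.gauss(0, 1) for _ in range(300)] + [random.gauss(5, 1) for _ in range(300)]
mu, var, pi = em_gmm_partitioned(data)
```

Because the per-point E-step computations are independent, the inner loop over blocks is exactly the part that distributes across processors; only the small statistic vectors need to be reduced.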
Image fusion based on expectation maximization algorithm and steerable pyramid
Gang Liu(刘刚); Zhongliang Jing(敬忠良); Shaoyuan Sun(孙韶媛); Jianxun Li(李建勋); Zhenhua Li(李振华); Henry Leung
2004-01-01
In this paper, a novel image fusion method based on the expectation maximization (EM) algorithm and the steerable pyramid is proposed. The registered images are first decomposed using the steerable pyramid. The EM algorithm is used to fuse the image components in the low-frequency band, while a selection method based on an informative importance measure is applied to those in the high-frequency band. The final fused image is then computed by taking the inverse transform of the composite coefficient representations. Experimental results show that the proposed method outperforms conventional image fusion methods.
Additive Approximation Algorithms for Modularity Maximization
Kawase, Yasushi; Matsui, Tomomi; Miyauchi, Atsushi
2016-01-01
Modularity is a quality function for community detection introduced by Newman and Girvan (2004). Community detection in graphs is now often conducted through modularity maximization: given an undirected graph $G=(V,E)$, we are asked to find a partition $\mathcal{C}$ of $V$ that maximizes the modularity. Although numerous algorithms have been developed to date, most of them have no theoretical approximation guarantee. Recently, to overcome this issue, the design of modularity max...
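The quantity being maximized is Q = sum over communities c of [e_c/m - (d_c/2m)^2], where e_c is the number of intra-community edges, d_c the total degree inside c, and m the edge count. A minimal sketch of evaluating it (the adjacency-dict encoding is an assumption):

```python
def modularity(adj, communities):
    """Newman-Girvan modularity Q = sum_c [ e_c/m - (d_c/(2m))^2 ].
    adj maps each vertex to its set of neighbours; vertices must be
    orderable (the u < v trick counts each intra edge once)."""
    m = sum(len(nb) for nb in adj.values()) / 2      # total number of edges
    Q = 0.0
    for c in communities:
        cset = set(c)
        e_c = sum(1 for u in cset for v in adj[u] if v in cset and u < v)
        d_c = sum(len(adj[u]) for u in cset)
        Q += e_c / m - (d_c / (2 * m)) ** 2
    return Q

# two triangles joined by one bridge edge: a natural 2-community split
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
q = modularity(adj, [[0, 1, 2], [3, 4, 5]])
```

Modularity maximization searches over all partitions for the one maximizing this score; the hardness discussed in the abstract lies in that search, not in the evaluation above.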
Acceleration of Expectation-Maximization algorithm for length-biased right-censored data.
Chan, Kwun Chuen Gary
2017-01-01
Vardi's Expectation-Maximization (EM) algorithm is frequently used for computing the nonparametric maximum likelihood estimator of length-biased right-censored data, which does not admit a closed-form representation. The EM algorithm may converge slowly, particularly for heavily censored data. We studied two algorithms for accelerating the convergence of the EM algorithm, based on iterative convex minorant and Aitken's delta squared process. Numerical simulations demonstrate that the acceleration algorithms converge more rapidly than the EM algorithm in terms of number of iterations and actual timing. The acceleration method based on a modification of Aitken's delta squared performed the best under a variety of settings.
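Aitken's delta-squared process, which underlies the best-performing accelerator in the study, can be sketched on a scalar fixed-point iteration standing in for the EM update map (illustrative only; Vardi's actual estimator and update map are not reproduced here):

```python
def aitken(F, x, tol=1e-10, max_iter=200):
    """Aitken's delta-squared acceleration of the fixed-point iteration
    x <- F(x): extrapolate x* ~= x - (F(x)-x)^2 / (F(F(x)) - 2 F(x) + x).
    Returns (estimate, iterations used)."""
    for it in range(max_iter):
        x1 = F(x)
        x2 = F(x1)
        denom = x2 - 2.0 * x1 + x
        if abs(denom) < 1e-15:          # already (numerically) at the fixed point
            return x2, it + 1
        x_acc = x - (x1 - x) ** 2 / denom
        if abs(x_acc - x) < tol:
            return x_acc, it + 1
        x = x_acc
    return x, max_iter

# a slowly contracting map with fixed point 3: plain iteration needs
# hundreds of steps for 1e-10 accuracy, Aitken lands on it immediately
F = lambda x: 0.95 * x + 0.05 * 3.0
x_star, n_acc = aitken(F, 0.0)
```

For a linear map the extrapolation is exact in one step; for EM, whose updates are only asymptotically linear, the gain is the large reduction in iteration count the abstract reports.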
Operational Modal Analysis using Expectation Maximization Algorithm
Cara Cañas, Francisco Javier; Carpio Huertas, Jaime; Juan Ruiz, Jesús; Alarcón Álvarez, Enrique
2011-01-01
This paper presents a time-domain stochastic system identification method based on Maximum Likelihood Estimation and the Expectation Maximization algorithm. The effectiveness of this structural identification method is evaluated through numerical simulation in the context of the ASCE benchmark problem on structural health monitoring. Modal parameters (eigenfrequencies, damping ratios and mode shapes) of the benchmark structure have been estimated applying the proposed identification method...
Xianbin Wen; Hua Zhang; Jianguang Zhang; Xu Jiao; Lei Wang
2009-01-01
A novel method that hybridizes the genetic algorithm (GA) and the expectation maximization (EM) algorithm for the classification of synthetic aperture radar (SAR) imagery is proposed, based on the finite Gaussian mixture model (GMM) and the multiscale autoregressive (MAR) model. This algorithm improves the global optimality and consistency of the classification performance. Experiments on SAR images show that the proposed algorithm significantly outperforms the standard EM method in classification accuracy.
Solving Maximal Clique Problem through Genetic Algorithm
Rajawat, Shalini; Hemrajani, Naveen; Menghani, Ekta
2010-11-01
The genetic algorithm is one of the most interesting heuristic search techniques. It depends on three basic operations: selection, crossover and mutation. The outcome of the three operations is a new population for the next generation, and these operations are repeated until the termination condition is reached. All the operations in the algorithm are accessible with today's molecular biotechnology. The simulations show that with this new computing algorithm, it is possible to obtain a solution from a very small initial data pool, avoiding enumeration of all candidate solutions. For randomly generated problems, the genetic algorithm can give a correct solution within a few cycles with high probability.
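The selection/crossover/mutation loop described above can be sketched for the maximal clique problem as follows. This is a toy illustration under assumed parameters (population size, mutation rate, greedy repair operator), not the authors' implementation:

```python
import random

def ga_max_clique(adj, n, pop_size=30, gens=40, pmut=0.05, seed=1):
    """Toy genetic algorithm for the maximum clique problem: individuals
    are vertex subsets encoded as bit lists, greedily repaired into cliques."""
    rng = random.Random(seed)

    def repair(bits):
        chosen = [v for v in range(n) if bits[v]]
        while True:  # drop the vertex with most non-adjacencies until a clique remains
            conflicts = {u: sum(1 for v in chosen if v != u and v not in adj[u])
                         for u in chosen}
            worst = max(conflicts, key=conflicts.get, default=None)
            if worst is None or conflicts[worst] == 0:
                return chosen
            chosen.remove(worst)

    def fitness(bits):
        return len(repair(bits))

    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[:pop_size // 2]              # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, n)                # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if rng.random() < pmut else g for g in child]  # mutation
            children.append(child)
        pop = survivors + children
    return sorted(repair(max(pop, key=fitness)))

# a 4-clique {0,1,2,3} hidden among extra vertices and edges
edges = [(0,1),(0,2),(0,3),(1,2),(1,3),(2,3),(3,4),(4,5),(5,6),(6,7),(2,5)]
adj = {v: set() for v in range(8)}
for u, v in edges:
    adj[u].add(v); adj[v].add(u)
clique = ga_max_clique(adj, 8)
```

The repair step guarantees every individual decodes to a clique, so fitness is simply clique size; selection then drives the population toward larger cliques.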
A New Augmentation Based Algorithm for Extracting Maximal Chordal Subgraphs.
Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh
2015-02-01
A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In this paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. We experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
Estimating Rigid Transformation Between Two Range Maps Using Expectation Maximization Algorithm
Zeng, Shuqing
2012-01-01
We address the problem of estimating a rigid transformation between two point sets, a key module in target tracking systems using Light Detection And Ranging (LiDAR). A fast implementation of the Expectation-Maximization (EM) algorithm is presented whose complexity is $O(N)$, where $N$ is the number of scan points.
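In an EM registration scheme of this kind, the E-step assigns soft correspondence weights between the two point sets and the M-step solves a weighted rigid alignment. In 2-D the M-step has a closed form; a minimal sketch with known correspondences and uniform weights (a simplified stand-in, not the paper's 3-D LiDAR implementation):

```python
import math

def rigid_mstep_2d(src, dst, w=None):
    """Closed-form weighted 2-D rigid alignment (rotation + translation).
    In an EM registration loop this is the M-step; the weights w would be
    the soft correspondences from the E-step (uniform here)."""
    n = len(src)
    w = w or [1.0] * n
    W = sum(w)
    sx = sum(wi * p[0] for wi, p in zip(w, src)) / W   # weighted centroids
    sy = sum(wi * p[1] for wi, p in zip(w, src)) / W
    dx = sum(wi * q[0] for wi, q in zip(w, dst)) / W
    dy = sum(wi * q[1] for wi, q in zip(w, dst)) / W
    num = den = 0.0
    for wi, p, q in zip(w, src, dst):
        ax, ay = p[0] - sx, p[1] - sy                  # centred source point
        bx, by = q[0] - dx, q[1] - dy                  # centred target point
        num += wi * (ax * by - ay * bx)                # cross terms
        den += wi * (ax * bx + ay * by)                # dot terms
    theta = math.atan2(num, den)                       # optimal rotation angle
    c, s = math.cos(theta), math.sin(theta)
    t = (dx - (c * sx - s * sy), dy - (s * sx + c * sy))
    return theta, t

# synthetic check: rotate/translate a point set and recover the transform
src = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0), (5.0, 5.0)]
th, tr = 0.3, (1.0, -2.0)
c, s = math.cos(th), math.sin(th)
dst = [(c * x - s * y + tr[0], s * x + c * y + tr[1]) for x, y in src]
theta, t = rigid_mstep_2d(src, dst)
```

The linear complexity the abstract claims is visible here: one pass over the points suffices for the M-step.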
An approach to operational modal analysis using the expectation maximization algorithm
Cara, F. Javier; Carpio, Jaime; Juan, Jesús; Alarcón, Enrique
2012-08-01
This paper presents the Expectation Maximization (EM) algorithm applied to operational modal analysis of structures. The EM algorithm is a general-purpose method for maximum likelihood estimation (MLE) that in this work is used to estimate state space models. As is well known, the MLE enjoys some optimal properties from a statistical point of view, which make it very attractive in practice. However, the EM algorithm has two main drawbacks: its slow convergence and the dependence of the solution on the initial values used. This paper proposes two different strategies for choosing initial values for the EM algorithm when used for operational modal analysis: beginning with the parameters estimated by the Stochastic Subspace Identification (SSI) method, and starting from random points. The effectiveness of the proposed identification method has been evaluated through numerical simulation and measured vibration data in the context of a benchmark problem. Modal parameters (natural frequencies, damping ratios and mode shapes) of the benchmark structure have been estimated using SSI and the EM algorithm. On the whole, the results show that applying the EM algorithm from the solution given by SSI is very useful for identifying the vibration modes of a structure, discarding the spurious modes that appear in high order models and discovering other hidden modes. Similar results are obtained using random starting values, although this strategy allows the solutions from several starting points to be analyzed, which overcomes the dependence on the initial values used.
Improved Algorithms OF CELF and CELF++ for Influence Maximization
Jiaguo Lv
2014-06-01
Motivated by wide applications in fields such as viral marketing and sales promotion, influence maximization has become one of the most important and extensively studied problems in social networks. However, the classical KK-Greedy algorithm for influence maximization is inefficient, and this paper analyzes two major sources of its inefficiency. Analysis of the CELF and CELF++ algorithms shows that, once a new seed u is selected, no node in the set influenced by u can ever contribute any further marginal gain; through this optimization strategy, many redundant nodes can be removed from the candidate set. Based on this strategy, two improved algorithms, Lv_CELF and Lv_CELF++, are proposed in this study. To evaluate them, the two algorithms were compared with the benchmark algorithms CELF and CELF++ on several real-world datasets, using influence degree and running time to measure performance and efficiency, respectively. Experimental results showed that the new algorithms Lv_CELF and Lv_CELF++ achieve influence spread matching CELF and CELF++ with higher efficiency. Solutions based on the proposed optimization strategy can be useful for decision-making problems in scenarios related to the influence maximization problem.
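The lazy-forward idea that CELF and CELF++ build on keeps marginal gains in a max-heap and re-evaluates a node only when a stale gain reaches the top, which is valid because the spread function is submodular. A minimal sketch with a toy coverage-style spread function (an illustration of the CELF principle, not of the paper's Lv_CELF variants):

```python
import heapq

def celf(spread, nodes, k):
    """CELF lazy-forward greedy for a monotone submodular spread function.
    Heap entries are (negated gain, node, round in which the gain was last
    evaluated); stale entries are lazily re-evaluated only at the top."""
    heap = []
    for v in nodes:
        heapq.heappush(heap, (-spread({v}), v, 0))
    seeds, sigma, evals = set(), 0.0, len(nodes)
    for rnd in range(1, k + 1):
        while True:
            neg_gain, v, last = heapq.heappop(heap)
            if last == rnd:                      # gain is current: take this node
                seeds.add(v)
                sigma += -neg_gain
                break
            gain = spread(seeds | {v}) - sigma   # lazy re-evaluation
            evals += 1
            heapq.heappush(heap, (-gain, v, rnd))
    return seeds, sigma, evals

# toy spread: each node deterministically influences a fixed set of nodes
reach = {1: {1, 2, 3}, 2: {2, 4}, 3: {3, 5, 6, 7}, 4: {4, 8}}
spread = lambda S: float(len(set().union(*(reach[v] for v in S)))) if S else 0.0
seeds, sigma, evals = celf(spread, list(reach), 2)
```

Submodularity guarantees a node's marginal gain can only shrink as the seed set grows, so any node whose stale gain is already below the top of the heap can be skipped without re-evaluation.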
The threshold EM algorithm for parameter learning in bayesian network with incomplete data
Lamine, Fradj Ben; Mahjoub, Mohamed Ali
2012-01-01
Bayesian networks (BN) are used in a wide range of applications, but parameter learning remains an issue: in real applications, training data are often incomplete or some nodes are hidden. To deal with this problem, several parameter-learning algorithms have been suggested, notably EM, Gibbs sampling and RBE. In order to limit the search space and escape the local maxima produced by the EM algorithm, this paper presents a parameter-learning algorithm that fuses the EM and RBE algorithms. The algorithm incorporates the range of each parameter into the EM algorithm; this range is calculated in the first step of the RBE algorithm, allowing a regularization of each parameter of the Bayesian network after the maximization step of EM. The threshold EM algorithm is applied to brain tumor diagnosis and shows some advantages and disadvantages over the EM algorithm.
An Efficient Algorithm for Mining Maximal Frequent Item Sets
A. M.J.M.Z. Rahman
2008-01-01
Problem Statement: The mining of frequent patterns is a basic problem in data mining applications, and the algorithms used to generate these frequent patterns must perform efficiently. The objective was to propose an effective algorithm that generates frequent patterns in less time. Approach: We proposed an algorithm based on a hashing technique that combines a vertical tidset representation of the database with effective pruning mechanisms. It removes all non-maximal frequent itemsets to obtain the exact set of MFI directly, and works efficiently when the number of itemsets and tidsets is large. Results: The performance of our algorithm was compared with the recently developed MAFIA algorithm, and the results show that our algorithm gives better performance. Conclusions: Hence, the proposed algorithm performs effectively and generates frequent patterns faster.
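The vertical tidset representation mentioned above makes support counting an intersection of transaction-id sets. A minimal sketch of maximal frequent itemset mining on that layout (illustrative only; the paper's hashing and pruning mechanisms are not reproduced):

```python
def maximal_frequent_itemsets(transactions, minsup):
    """Maximal frequent itemset mining on a vertical (item -> tidset)
    layout: supports come from tidset intersections; non-maximal frequent
    itemsets are pruned at the end."""
    tidsets = {}
    for tid, items in enumerate(transactions):
        for it in items:
            tidsets.setdefault(it, set()).add(tid)
    items = sorted(it for it, t in tidsets.items() if len(t) >= minsup)
    frequent = []

    # depth-first extension: an itemset's tidset is the intersection of
    # its members' tidsets, so support is just len(t)
    def extend(prefix, tids, start):
        for i in range(start, len(items)):
            t = tids & tidsets[items[i]] if prefix else tidsets[items[i]]
            if len(t) >= minsup:
                cand = prefix + [items[i]]
                frequent.append(frozenset(cand))
                extend(cand, t, i + 1)

    extend([], None, 0)
    # keep only itemsets with no frequent proper superset (the MFI)
    return {f for f in frequent if not any(f < g for g in frequent)}

T = [{'a', 'b', 'c'}, {'a', 'b', 'c'}, {'a', 'b', 'd'}, {'b', 'c'}, {'a', 'c'}]
mfi = maximal_frequent_itemsets(T, minsup=3)
```

Because the MFI implicitly encode all frequent itemsets (every frequent itemset is a subset of some maximal one), returning only them keeps the output compact.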
Approximation algorithms for indefinite complex quadratic maximization problems
(no author listed)
2010-01-01
In this paper, we consider the following indefinite complex quadratic maximization problem: maximize $z^H Q z$, subject to $z_k \in \mathbb{C}$ and $z_k^m = 1$, $k = 1, \ldots, n$, where $Q$ is a Hermitian matrix with $\operatorname{tr} Q = 0$, $z \in \mathbb{C}^n$ is the decision vector, and $m \geq 3$. An $\Omega(1/\log n)$ approximation algorithm is presented for this problem. Furthermore, we consider the above problem where the objective matrix $Q$ is in bilinear form, in which case a $0.7118 \cos^2 \frac{\pi}{m}$-approximation algorithm can be constructed. In the context of quadratic optimization, various extensions and connections of the model are discussed.
Computing a Clique Tree with the Algorithm Maximal Label Search
Anne Berry
2017-01-01
The algorithm MLS (Maximal Label Search) is a graph search algorithm that generalizes the algorithms Maximum Cardinality Search (MCS), Lexicographic Breadth-First Search (LexBFS), Lexicographic Depth-First Search (LexDFS) and Maximal Neighborhood Search (MNS). On a chordal graph, MLS computes a PEO (perfect elimination ordering) of the graph. We show how the algorithm MLS can be modified to compute a PMO (perfect moplex ordering), as well as a clique tree and the minimal separators of a chordal graph. We give a necessary and sufficient condition on the labeling structure of MLS for the beginning of a new clique in the clique tree to be detected by a condition on labels. MLS is also used to compute a clique tree of the complement graph, and new cliques in the complement graph can be detected by a condition on labels for any labeling structure. We provide a linear time algorithm computing a PMO and the corresponding generators of the maximal cliques and minimal separators of the complement graph. On a non-chordal graph, the algorithm MLSM, a graph search algorithm computing an MEO and a minimal triangulation of the graph, is used to compute an atom tree of the clique minimal separator decomposition of any graph.
The Parallel Maximal Cliques Algorithm for Protein Sequence Clustering
Khalid Jaber
2009-01-01
Problem statement: Protein sequence clustering is a method used to discover relations between proteins. This method groups the proteins based on their common features. It is a core process in protein sequence classification. Graph theory has been used in protein sequence clustering as a means of partitioning the data into groups, where each group constitutes a cluster. Mohseni-Zadeh introduced a maximal cliques algorithm for protein clustering. Approach: In this study we adapted the maximal cliques algorithm of Mohseni-Zadeh to find cliques in protein sequences and we then parallelized the algorithm to improve computation times and allowed large protein databases to be processed. We used the N-Gram Hirschberg approach proposed by Abdul Rashid to calculate the distance between protein sequences. The task farming parallel program model was used to parallelize the enhanced cliques algorithm. Results: Our parallel maximal cliques algorithm was implemented on the stealth cluster using the C programming language and a hybrid approach that includes both the Message Passing Interface (MPI) library and POSIX threads (PThread) to accelerate protein sequence clustering. Conclusion: Our results showed a good speedup over sequential algorithms for cliques in protein sequences.
Hong, Hunsop; Schonfeld, Dan
2008-06-01
In this paper, we propose a maximum-entropy expectation-maximization (MEEM) algorithm. We use the proposed algorithm for density estimation. The maximum-entropy constraint is imposed for smoothness of the estimated density function. The derivation of the MEEM algorithm requires determination of the covariance matrix in the framework of the maximum-entropy likelihood function, which is difficult to solve analytically. We, therefore, derive the MEEM algorithm by optimizing a lower-bound of the maximum-entropy likelihood function. We note that the classical expectation-maximization (EM) algorithm has been employed previously for 2-D density estimation. We propose to extend the use of the classical EM algorithm for image recovery from randomly sampled data and sensor field estimation from randomly scattered sensor networks. We further propose to use our approach in density estimation, image recovery and sensor field estimation. Computer simulation experiments are used to demonstrate the superior performance of the proposed MEEM algorithm in comparison to existing methods.
A Constrained EM Algorithm for Independent Component Analysis
Welling, Max; Weber, Markus
2001-01-01
We introduce a novel way of performing independent component analysis using a constrained version of the expectation-maximization (EM) algorithm. The source distributions are modeled as D one-dimensional mixtures of gaussians. The observed data are modeled as linear mixtures of the sources with additive, isotropic noise. This generative model is fit to the data using constrained EM. The simpler “soft-switching” approach is introduced, which uses only one parameter to decide on the sub- or sup...
Makram KRIT
2016-01-01
This paper presents several iterative methods based on the Stochastic Expectation-Maximization (EM) methodology for estimating parametric reliability models from random lifetime data. The methodology is related to Maximum Likelihood Estimation (MLE) in the case of missing data. A bathtub-shaped failure intensity formulation of repairable system reliability is presented, and the estimation of its parameters is carried out through the EM algorithm. Field failure data from an industrial site are used to fit the model. Finally, large-sample interval estimation from the literature is discussed, and the actual coverage probabilities of these confidence intervals are examined using the Monte Carlo simulation method.
Accurate and efficient maximal ball algorithm for pore network extraction
Arand, Frederick; Hesser, Jürgen
2017-04-01
The maximal ball (MB) algorithm is a well established method for the morphological analysis of porous media. It extracts a network of pores and throats from volumetric data. This paper describes structural modifications to the algorithm, while the basic concepts are preserved. Substantial improvements to accuracy and efficiency are achieved as follows: First, all calculations are performed on a subvoxel accurate distance field, and no approximations to discretize balls are made. Second, data structures are simplified to keep memory usage low and improve algorithmic speed. Third, small and reasonable adjustments increase speed significantly. In volumes with high porosity, memory usage is improved compared to classic MB algorithms. Furthermore, processing is accelerated more than three times. Finally, the modified MB algorithm is verified by extracting several network properties from reference as well as real data sets. Runtimes are measured and compared to literature.
Hobolth, Asger
2008-01-01
Neighbor-dependent substitution models are analytically intractable and must be analyzed using either approximate or simulation-based methods. We describe statistical inference of neighbor-dependent models using a Markov chain Monte Carlo expectation maximization (MCMC-EM) algorithm. In the MCMC-EM algorithm, the high...
A New Algorithm to Optimize Maximal Information Coefficient.
Yuan Chen
The maximal information coefficient (MIC) captures dependences between paired variables, including both functional and non-functional relationships. In this paper, we develop a new method, ChiMIC, to calculate MIC values. The ChiMIC algorithm uses the chi-square test to terminate grid optimization and thereby removes the maximal grid size limitation of the original ApproxMaxMI algorithm. Computational experiments show that ChiMIC maintains the same MIC values for noiseless functional relationships but gives much smaller MIC values for independent variables. For noisy functional relationships, ChiMIC reaches the optimal partition much faster. Furthermore, the MCN values based on MIC calculated by ChiMIC capture the complexity of functional relationships in a better way, and the statistical power of MIC calculated by ChiMIC is higher than that calculated by ApproxMaxMI. Moreover, the computational cost of ChiMIC is much lower than that of ApproxMaxMI. We apply the MIC values to feature selection and obtain better classification accuracy using features selected by MIC values from ChiMIC.
Weissman, Alexander
2013-01-01
Convergence of the expectation-maximization (EM) algorithm to a global optimum of the marginal log likelihood function for unconstrained latent variable models with categorical indicators is presented. The sufficient conditions under which global convergence of the EM algorithm is attainable are provided in an information-theoretic context by…
Influence Maximization in Social Networks: Towards an Optimal Algorithmic Solution
Borgs, Christian; Chayes, Jennifer; Lucier, Brendan
2012-01-01
Diffusion is a fundamental graph process, underpinning such phenomena as epidemic disease contagion and the spread of innovation by word-of-mouth. We address the algorithmic problem of finding a set of k initial seed nodes in a network so that the expected size of the resulting cascade is maximized, under the standard independent cascade model of network diffusion. Our main result is an algorithm for the influence maximization problem that obtains the near-optimal approximation factor of (1 - 1/e - epsilon), for any epsilon > 0, in time O((m+n)log(n) / epsilon^3) where n and m are the number of vertices and edges in the network. Our algorithm is nearly runtime-optimal (up to a logarithmic factor) as we establish a lower bound of Omega(m+n) on the runtime required to obtain a constant approximation. Our method also allows a provable tradeoff between solution quality and runtime: we obtain an O(1/beta)-approximation in time O(n log^3(n) * a(G) / beta) for any beta > 1, where a(G) denotes the arboricity of the d...
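The near-optimal runtime of Borgs et al. rests on sampling "reverse reachable" (RR) sets: a node that appears in many RR sets is likely to be influential. A toy sketch of that polling idea under the independent cascade model (a simplified reading with an assumed edge probability and sample count, not the paper's exact procedure or thresholds):

```python
import random

def rr_set(adj_rev, p, root, rng):
    """One reverse-reachable set under the independent cascade model:
    from a random root, traverse incoming edges, keeping each with
    probability p."""
    seen, stack = {root}, [root]
    while stack:
        v = stack.pop()
        for u in adj_rev.get(v, ()):
            if u not in seen and rng.random() < p:
                seen.add(u)
                stack.append(u)
    return seen

def greedy_seeds(nodes, adj_rev, p, k, n_samples=2000, seed=0):
    """Pick k seeds greedily by RR-set coverage: each chosen seed removes
    the RR sets it covers, mirroring the greedy max-coverage step."""
    rng = random.Random(seed)
    sets = [rr_set(adj_rev, p, rng.choice(nodes), rng) for _ in range(n_samples)]
    seeds = []
    for _ in range(k):
        counts = {}
        for s in sets:
            for v in s:
                counts[v] = counts.get(v, 0) + 1
        best = max(nodes, key=lambda v: counts.get(v, 0))
        seeds.append(best)
        sets = [s for s in sets if best not in s]   # covered sets are removed
    return seeds

# star graph: hub 0 points to leaves 1..5, so 0 should be the top seed
nodes = list(range(6))
adj_rev = {v: [0] for v in range(1, 6)}             # reverse adjacency: leaf <- hub
seeds = greedy_seeds(nodes, adj_rev, p=0.5, k=1)
```

The key property is that the probability a node appears in a random RR set is proportional to its expected influence, which turns influence estimation into counting.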
A Novel Multithreaded Algorithm For Extracting Maximal Chordal Subgraphs
Halappanavar, Mahantesh; Feo, John T.; Dempsey, Kathryn; Ali, Hesham; Bhowmick, Sanjukta
2012-10-25
Chordal graphs are triangulated graphs where any cycle larger than three is bisected by a chord. Many combinatorial optimization problems such as computing the minimum fill-in, the size of the maximum clique and the chromatic number are NP-hard on general graphs but have polynomial time solutions on chordal graphs. In this paper, we present a novel multithreaded algorithm to extract a maximal chordal subgraph from a general graph. Our algorithm is based on an iterative approach where each thread can asynchronously update a subset of edges that are dynamically assigned to it. We implement our algorithm on two different multithreaded architectures – Cray XMT, a massively multithreaded platform, and AMD Magny-Cours, a shared memory multicore platform. In addition to the proof of correctness, we present the performance of our algorithm using a test set of carefully generated synthetic graphs with up to half-a-billion edges and real world networks from gene correlation studies. We demonstrate that our algorithm achieves high scalability for all inputs on both types of architectures.
Takahashi, Yasuyuki; Murase, Kenya [Osaka Medical Coll., Takatsuki (Japan). Graduate School; Higashino, Hiroshi; Sogabe, Ichiro; Sakamoto, Kana
2001-12-01
The quality of images reconstructed by means of the maximum likelihood-expectation maximization (ML-EM) and ordered subset (OS)-EM algorithms was examined with respect to parameters such as the number of iterations and subsets, and compared with the quality of images reconstructed by the filtered back projection method. Phantoms showing signals inside signals, which mimicked single-photon emission computed tomography (SPECT) images of cerebral blood flow and myocardial perfusion, and phantoms showing signals around the signals obtained by SPECT of bone and tumor were used for experiments. To determine signals for recognition, SPECT images in which the signals could be appropriately recognized with a combination of fewer iterations and subsets of different sizes and densities were evaluated by receiver operating characteristic (ROC) analysis. The results of ROC analysis were applied to myocardial phantom experiments and scintigraphy of myocardial perfusion. Taking the image processing time into consideration, good SPECT images were obtained by OS-EM with 10 iterations and 5 subsets. This study will be helpful for selecting parameters such as the number of iterations and subsets when using the ML-EM or OS-EM algorithms. (author)
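The ML-EM update the study iterates has a compact multiplicative form, and OS-EM applies the same update restricted to ordered subsets of the projection rows. A minimal sketch on a toy 2-pixel, 2-projection system (illustrative only, not the authors' SPECT implementation):

```python
def mlem(A, y, n_iter=300):
    """ML-EM update for emission tomography:
    lambda_j <- lambda_j / sum_i A[i][j] * sum_i A[i][j] * y[i] / (A lambda)_i
    A is the system matrix (row i = projection bin, column j = pixel)."""
    m, n = len(A), len(A[0])
    lam = [1.0] * n                       # positive initial image
    sens = [sum(A[i][j] for i in range(m)) for j in range(n)]  # sensitivity
    for _ in range(n_iter):
        proj = [sum(A[i][j] * lam[j] for j in range(n)) for i in range(m)]
        ratio = [y[i] / proj[i] for i in range(m)]  # measured / forward-projected
        lam = [lam[j] / sens[j] * sum(A[i][j] * ratio[i] for i in range(m))
               for j in range(n)]
    return lam

A = [[0.8, 0.2], [0.3, 0.7]]
true_lam = [2.0, 5.0]
y = [sum(a * t for a, t in zip(row, true_lam)) for row in A]   # noise-free data
lam = mlem(A, y)
```

With consistent noise-free data the iterates approach the exact solution; OS-EM accelerates this by cycling the same update over subsets of rows, which is why the iteration/subset trade-off studied in the abstract matters.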
Three penalized EM-type algorithms for PET image reconstruction.
Teng, Yueyang; Zhang, Tie
2012-06-01
Based on Bayes theory, Green introduced the maximum a posteriori (MAP) algorithm to obtain smoothed reconstructions for positron emission tomography. This algorithm is flexible and convenient for most penalties, but its convergence is hard to guarantee. With a similar goal, Fessler penalized a weighted least squares (WLS) estimator with a quadratic penalty and solved it with the successive over-relaxation (SOR) algorithm; however, that algorithm is time-consuming and difficult to parallelize. Anderson proposed another WLS estimator for faster convergence, for which few regularization methods have been studied. For the three regularized estimators above, we develop three new expectation maximization (EM) type algorithms. Unlike MAP and SOR, the proposed algorithms derive update rules by minimizing auxiliary functions constructed on the previous iterations, which ensures that the cost functions decrease monotonically. Experimental results demonstrate the robustness and effectiveness of the proposed algorithms.
Algorithms for k-Colouring and Finding Maximal Independent Sets
Byskov, Jesper Makholm
2003-01-01
In this extended abstract, we construct algorithms that decide for a graph with n vertices whether there exists a 4-, 5- or 6-colouring of the vertices running in time O(1.7504^n), O(2.1592^n) and O(2.3289^n), respectively, using polynomial space. For 6- or 7-colouring we construct algorithms running in time O(2.2680^n) and O(2.4023^n), respectively, using exponential space. To do this, we prove that the number of maximal independent sets of size at most k (k-MIS's) in a graph is at most (d-1)^(dk-n) d^(n-(d-1)k) for any d > 4. Eppstein shows the same bound for d = 4.
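The colouring algorithms rest on enumerating maximal independent sets. One standard way to enumerate them is to run maximal-clique enumeration (Bron-Kerbosch) on the complement graph, since independent sets of G are exactly cliques of its complement; a minimal sketch (this enumeration route is our illustration, not the paper's bounded k-MIS procedure):

```python
def maximal_independent_sets(adj, vertices):
    """Enumerate all maximal independent sets of a graph by running
    Bron-Kerbosch (maximal-clique enumeration) on the complement graph."""
    comp = {v: set(vertices) - adj[v] - {v} for v in vertices}

    def bk(R, P, X, out):
        # R: current clique (in the complement), P: candidates, X: excluded
        if not P and not X:
            out.append(frozenset(R))
            return
        for v in list(P):
            bk(R | {v}, P & comp[v], X & comp[v], out)
            P = P - {v}
            X = X | {v}

    out = []
    bk(set(), set(vertices), set(), out)
    return out

# 5-cycle: every maximal independent set is a pair of non-adjacent vertices
V = range(5)
adj = {i: {(i - 1) % 5, (i + 1) % 5} for i in V}
mis = maximal_independent_sets(adj, set(V))
```

The paper's contribution is bounding how many such sets of size at most k can exist, which controls the running time of algorithms that iterate over them.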
Improved Algorithm for Throughput Maximization in MC-CDMA
Hema Kale
2012-08-01
Multi-Carrier Code Division Multiple Access (MC-CDMA) is becoming a very significant downlink multiple access technique for high-rate data transmission in fourth generation wireless communication systems. By means of efficient resource allocation, a higher data rate (throughput) can be achieved. This paper evaluates the performance of group (sub-channel) allocation criteria employed in downlink transmission, which results in throughput maximization. The proposed algorithm gives a modified technique of sub-channel allocation in the downlink transmission of MC-CDMA systems. Simulations were carried out for all three combining schemes; the results show that for a given power and BER, the proposed algorithm gives considerably better results.
Minimum-distortion isometric shape correspondence using EM algorithm.
Sahillioğlu, Yusuf; Yemez, Yücel
2012-11-01
We present a purely isometric method that establishes 3D correspondence between two (nearly) isometric shapes. Our method evenly samples high-curvature vertices from the given mesh representations, and then seeks an injective mapping from one vertex set to the other that minimizes the isometric distortion. We formulate the problem of shape correspondence as combinatorial optimization over the domain of all possible mappings, which then reduces in a probabilistic setting to a log-likelihood maximization problem that we solve via the Expectation-Maximization (EM) algorithm. The EM algorithm is initialized in the spectral domain by transforming the sampled vertices via classical Multidimensional Scaling (MDS). Minimization of the isometric distortion, and hence maximization of the log-likelihood function, is then achieved in the original 3D Euclidean space, for each iteration of the EM algorithm, in two steps: by first using bipartite perfect matching, and then a greedy optimization algorithm. The optimal mapping obtained at convergence can be one-to-one or many-to-one, depending on the choice. We demonstrate the performance of our method on various isometric (or nearly isometric) pairs of shapes, for some of which the ground-truth correspondence is available.
Partial AUC maximization for essential gene prediction using genetic algorithms.
Hwang, Kyu-Baek; Ha, Beom-Yong; Ju, Sanghun; Kim, Sangsoo
2013-01-01
Identifying genes indispensable for an organism's life and their characteristics is one of the central questions in current biological research, and hence it would be helpful to develop computational approaches towards the prediction of essential genes. The performance of a predictor is usually measured by the area under the receiver operating characteristic curve (AUC). We propose a novel method by implementing genetic algorithms to maximize the partial AUC that is restricted to a specific interval of lower false positive rate (FPR), the region relevant to follow-up experimental validation. Our predictor uses various features based on sequence information, protein-protein interaction network topology, and gene expression profiles. A feature selection wrapper was developed to alleviate the over-fitting problem and to weigh each feature's relevance to prediction. We evaluated our method using the proteome of budding yeast. Our implementation of genetic algorithms maximizing the partial AUC below 0.05 or 0.10 of FPR outperformed other popular classification methods.
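The partial AUC that the genetic algorithm maximizes is the area under the ROC curve restricted to a low-FPR interval, computed by walking the score-sorted predictions and integrating by trapezoids. A minimal sketch of the objective itself (the GA wrapper is not reproduced; distinct scores are assumed):

```python
def partial_auc(labels, scores, fpr_max=0.1):
    """Area under the ROC curve restricted to FPR in [0, fpr_max].
    labels are 0/1, scores are real-valued; higher score = more positive."""
    pairs = sorted(zip(scores, labels), reverse=True)
    P = sum(labels)
    N = len(labels) - P
    tp = fp = 0
    area, prev_fpr, prev_tpr = 0.0, 0.0, 0.0
    for s, y in pairs:
        if y:
            tp += 1
        else:
            fp += 1
        fpr, tpr = fp / N, tp / P
        if fpr > fpr_max:
            # interpolate the final sliver of the curve up to fpr_max
            tpr_at = prev_tpr + (tpr - prev_tpr) * (fpr_max - prev_fpr) / (fpr - prev_fpr)
            area += (fpr_max - prev_fpr) * (prev_tpr + tpr_at) / 2
            return area
        area += (fpr - prev_fpr) * (prev_tpr + tpr) / 2
        prev_fpr, prev_tpr = fpr, tpr
    return area

labels = [1, 1, 0, 0]
scores = [0.9, 0.8, 0.2, 0.1]          # a perfect ranking of this toy data
p = partial_auc(labels, scores, fpr_max=0.5)
full = partial_auc(labels, scores, fpr_max=1.0)
```

Restricting the integral to FPR below a threshold is what ties the objective to follow-up validation budgets: only the leftmost part of the ROC curve matters when few predictions can be tested.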
Lee, Youngrok [Iowa State Univ., Ames, IA (United States)
2013-05-15
Heterogeneity exists in a data set when samples from different classes are merged into it. Finite mixture models can represent the survival time distribution of a heterogeneous patient group through the proportion of each class and the survival time distribution within each class. The heterogeneous data set cannot be explicitly decomposed into homogeneous subgroups unless all samples are precisely labeled by their origin classes; this impossibility of decomposition is a barrier to overcome for estimating finite mixture models. The expectation-maximization (EM) algorithm has been used to obtain maximum likelihood estimates of finite mixture models by soft decomposition of heterogeneous samples without labels for a subset or the entire set of data. In medical surveillance databases we can find partially labeled data, that is, data that are not completely unlabeled but carry only imprecise information about class values. In this study we propose new EM algorithms that take advantage of such partial labels, and thus incorporate more information than traditional EM algorithms. We propose four variants of the EM algorithm, named EM-OCML, EM-PCML, EM-HCML and EM-CPCML, each of which assumes a specific mechanism of missing class values. We conducted a simulation study on exponential survival trees with five classes and showed that the advantages of incorporating a substantial amount of partially labeled data can be highly significant. We also showed that model selection based on AIC values works fairly well for selecting the best proposed algorithm for each specific data set. A case study on a real-world data set of gastric cancer provided by the Surveillance, Epidemiology and End Results (SEER) program showed the superiority of EM-CPCML not only to the other proposed EM algorithms but also to conventional supervised, unsupervised and semi-supervised learning algorithms.
Farzinfar, Mahshid; Teoh, Eam Khwang; Xue, Zhong
2011-11-01
This study proposes an expectation-maximization (EM)-based curve evolution algorithm for segmentation of magnetic resonance brain images. In the proposed algorithm, the evolution curve is constrained not only by a shape-based statistical model but also by a hidden variable model from image observation. The hidden variable model herein is defined by the local voxel labeling, which is unknown and estimated by the expected likelihood function derived from the image data and prior anatomical knowledge. In the M-step, the shapes of the structures are estimated jointly by encoding the hidden variable model and the statistical prior model obtained from the training stage. In the E-step, the expected observation likelihood and the prior distribution of the hidden variables are estimated. In experiments, the proposed automatic segmentation algorithm is applied to multiple gray nuclei structures, such as the caudate, putamen and thalamus, in three-dimensional magnetic resonance images of volunteers and patients. In terms of robustness and accuracy, the proposed EM-joint shape-based algorithm outperformed the statistical shape model-based techniques in the same framework and a current state-of-the-art region competition level set method.
Retinal Microaneurysm Detection using Maximally Stable External Region Algorithm
Diana Tri Susetianingtias
2016-10-01
The number of diabetics worldwide has increased drastically. Diabetes can cause blindness due to diabetic retinopathy. Patients with diabetic retinopathy often do not experience signs or symptoms in the early stages, or even in the severe stages when bleeding starts to occur. One indicator of diabetic retinopathy is blood vessels that exhibit microaneurysms and hemorrhages due to swelling of the blood vessels in the retina. The study in this paper implements the Maximally Stable External Region (MSER) algorithm to detect microaneurysms, one of the main indicators of diabetic retinopathy. This study uses the HRF dataset. The results are expected to improve the accuracy of microaneurysm detection.
A new approximate proximal point algorithm for maximal monotone operator
HE; Bingsheng(何炳生); LIAO; Lizhi(廖立志); YANG; Zhenhua(杨振华)
2003-01-01
The problem concerned in this paper is the set-valued equation 0 ∈ T(z), where T is a maximal monotone operator. For given x^k and β_k > 0, some existing approximate proximal point algorithms take x^{k+1} = x̃^k such that x̃^k + e^k ∈ x^k + β_k T(x̃^k) and ‖e^k‖ ≤ η_k ‖x̃^k − x^k‖, where {η_k} is a non-negative summable sequence. Instead of x^{k+1} = x̃^k, the new iterate of the proposed method is given by x^{k+1} = P_Ω[x̃^k − e^k], where Ω is the domain of T and P_Ω(·) denotes the projection onto Ω. The convergence is proved under a significantly relaxed restriction sup_{k>0} η_k < 1.
Meena Prakash, R; Shantha Selva Kumari, R
2017-01-01
The Fuzzy C-Means (FCM) and Expectation Maximization (EM) algorithms are the most prevalent methods for automatic segmentation of MR brain images into three classes: Gray Matter (GM), White Matter (WM) and Cerebrospinal Fluid (CSF). The major difficulties associated with these conventional methods for MR brain image segmentation are Intensity Non-uniformity (INU) and noise. In this paper, EM and FCM with spatial information and bias correction are proposed to overcome these effects. The spatial information is incorporated by convolving the posterior probability during the E-step of the EM algorithm with a mean filter. Also, a method of pixel re-labeling is included to improve the segmentation accuracy. The proposed method is validated by extensive experiments on both simulated and real brain images from standard databases. Quantitative and qualitative results show that the method is superior to the conventional methods by around 25% and to the state-of-the-art method by 8%.
A spatially constrained generative model and an EM algorithm for image segmentation.
Diplaros, Aristeidis; Vlassis, Nikos; Gevers, Theo
2007-05-01
In this paper, we present a novel spatially constrained generative model and an expectation-maximization (EM) algorithm for model-based image segmentation. The generative model assumes that the unobserved class labels of neighboring pixels in the image are generated by prior distributions with similar parameters, where similarity is defined by entropic quantities relating to the neighboring priors. In order to estimate model parameters from observations, we derive a spatially constrained EM algorithm that iteratively maximizes a lower bound on the data log-likelihood, where the penalty term is data-dependent. Our algorithm is very easy to implement and is similar to the standard EM algorithm for Gaussian mixtures, with the main difference that the label posteriors are "smoothed" over pixels between each E- and M-step by a standard image filter. Experiments on synthetic and real images show that our algorithm achieves competitive segmentation results compared to other Markov-based methods, and is in general faster.
State-space models - from the EM algorithm to a gradient approach
Olsson, Rasmus Kongsgaard; Petersen, Kaare Brandt; Lehn-Schiøler, Tue
2007-01-01
Slow convergence is observed in the EM algorithm for linear state-space models. We propose to circumvent the problem by applying any off-the-shelf quasi-Newton-type optimizer, which operates on the gradient of the log-likelihood function. Such an algorithm is a practical alternative due to the fact that the exact gradient of the log-likelihood function can be computed by recycling components of the expectation-maximization (EM) algorithm. We demonstrate the efficiency of the proposed method in three relevant instances of the linear state-space model. In high signal-to-noise ratios, where EM is particularly...
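The gradient-based alternative can be illustrated on the simplest possible likelihood. Below is our sketch, not the authors' code: plain gradient ascent on a Gaussian log-likelihood in (mu, log sigma); a quasi-Newton routine would consume exactly the same gradient, and the learning rate and step count are illustrative assumptions.

```python
import math

def fit_gaussian_grad(xs, steps=1000, lr=0.05):
    # Gradient ascent on the Gaussian log-likelihood. This hand-rolled
    # loop stands in for the off-the-shelf quasi-Newton optimizer the
    # record mentions; both consume the same exact gradient.
    mu, log_s = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        s2 = math.exp(2 * log_s)
        g_mu = sum(x - mu for x in xs) / s2              # d logL / d mu
        g_ls = -n + sum((x - mu) ** 2 for x in xs) / s2  # d logL / d log sigma
        mu += lr * g_mu / n
        log_s += lr * g_ls / n
    return mu, math.exp(log_s)
```

Parameterizing the scale as log sigma keeps the ascent unconstrained, which is what makes generic optimizers directly applicable.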
Maximizing bandgaps in two-dimensional photonic crystals: a variational algorithm
Paul, Prabasaj; Ndi, Francis C.
2002-01-01
We present an algorithm for the maximization of photonic bandgaps in two-dimensional crystals. Once the translational symmetries of the underlying structure have been imposed, our algorithm finds a globally maximal (and complete, if one exists) bandgap. Additionally, we prove two remarkable results related to maximal bandgaps: the so-called `maximum contrast' rule, and a result on the location of band edges in the Brillouin zone.
Maximizing influence in a social network: Improved results using a genetic algorithm
Zhang, Kaiqi; Du, Haifeng; Feldman, Marcus W.
2017-07-01
The influence maximization problem focuses on finding a small subset of nodes in a social network that maximizes the spread of influence. While the greedy algorithm and some improvements to it have been applied to solve this problem, the long solution time remains an obstacle. Stochastic optimization algorithms, such as simulated annealing, are other choices for solving this problem, but they often become trapped in local optima. We propose a genetic algorithm to solve the influence maximization problem. Through multi-population competition, this algorithm obtains high-quality results while maintaining diversity of the solutions. We tested our method with real networks, and our genetic algorithm performed slightly worse than the greedy algorithm but better than the other algorithms.
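The shape of such a genetic algorithm can be sketched end to end. The code below is our illustrative version, not the authors': the true objective (expected spread under a diffusion model, usually estimated by Monte Carlo simulation) is replaced by a cheap one-hop coverage surrogate so the sketch stays self-contained, and the crossover, mutation rate and elitist selection are assumptions.

```python
import random

def coverage(graph, seeds):
    # Surrogate "spread": the seeds plus their out-neighbors. A cheap
    # stand-in for the Monte Carlo influence estimate used in practice.
    covered = set(seeds)
    for s in seeds:
        covered.update(graph.get(s, ()))
    return len(covered)

def ga_seed_selection(graph, k, pop=30, gens=60, seed=0):
    rng = random.Random(seed)
    nodes = list(graph)
    population = [frozenset(rng.sample(nodes, k)) for _ in range(pop)]
    for _ in range(gens):
        ranked = sorted(population, key=lambda s: coverage(graph, s), reverse=True)
        survivors = ranked[: pop // 2]                # elitist selection
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            child = set(rng.sample(list(a | b), k))   # crossover: resample the union
            if rng.random() < 0.3:                    # mutation: swap one seed
                child.discard(rng.choice(sorted(child)))
                while len(child) < k:
                    child.add(rng.choice(nodes))
            children.append(frozenset(child))
        population = survivors + children
    return max(population, key=lambda s: coverage(graph, s))
```

Because the fitness function is pluggable, the surrogate could be swapped for a Monte Carlo spread estimate without touching the evolutionary loop.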
Applications of expectation maximization algorithm for coherent optical communication
Carvalho, L.; Oliveira, J.; Zibar, Darko
2014-01-01
In this invited paper, we present powerful statistical signal processing methods used by the machine learning community and link them to current problems in optical communication. In particular, we will look into iterative maximum likelihood parameter estimation based on expectation maximization al...
A Local Scalable Distributed Expectation Maximization Algorithm for Large Peer-to-Peer Networks
National Aeronautics and Space Administration — This paper describes a local and distributed expectation maximization algorithm for learning parameters of Gaussian mixture models (GMM) in large peer-to-peer (P2P)...
Maximizing Submodular Functions under Matroid Constraints by Evolutionary Algorithms.
Friedrich, Tobias; Neumann, Frank
2015-01-01
Many combinatorial optimization problems have underlying goal functions that are submodular. The classical goal is to find a good solution for a given submodular function f under a given set of constraints. In this paper, we investigate the runtime of a simple single-objective evolutionary algorithm called the (1 + 1) EA and a multiobjective evolutionary algorithm called GSEMO until they have obtained a good approximation for submodular functions. For the case of monotone submodular functions and uniform cardinality constraints, we show that the GSEMO achieves a (1 - 1/e)-approximation in expected polynomial time. For the case of monotone functions where the constraints are given by the intersection of k ≥ 2 matroids, we show that the (1 + 1) EA achieves a 1/(k + δ)-approximation in expected polynomial time for any constant δ > 0. Turning to nonmonotone symmetric submodular functions with k ≥ 1 matroid intersection constraints, we show that the GSEMO achieves a 1/((k + 2)(1 + ε))-approximation in expected time O(n^(k+6) log(n)/ε).
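The (1 + 1) EA studied above is short enough to sketch in full. Here is our minimal version (not the paper's analysis code) run on a toy monotone submodular function, set coverage, under a uniform cardinality constraint; the set system and iteration budget are illustrative assumptions.

```python
import random

def one_plus_one_ea(n, f, feasible, iters=2000, seed=0):
    # (1+1) EA: flip each bit independently with probability 1/n and
    # accept the offspring if it is feasible and not worse under f.
    rng = random.Random(seed)
    x = [0] * n
    fx = f(x)
    for _ in range(iters):
        y = [b ^ (rng.random() < 1.0 / n) for b in x]
        fy = f(y)
        if feasible(y) and fy >= fx:
            x, fx = y, fy
    return x, fx

# Toy monotone submodular goal: coverage of a small set system.
SETS = [{0, 1, 2}, {2, 3}, {4}, {0, 4}]

def cover(x):
    covered = set()
    for s, b in zip(SETS, x):
        if b:
            covered |= s
    return len(covered)
```

Accepting offspring of equal fitness lets the EA drift across plateaus, which matters for the runtime bounds the record establishes.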
Khan, Zia; Bloom, Joshua S.; Kruglyak, Leonid; Singh, Mona
2009-01-01
Motivation: High-throughput sequencing technologies place ever increasing demands on existing algorithms for sequence analysis. Algorithms for computing maximal exact matches (MEMs) between sequences appear in two contexts where high-throughput sequencing will vastly increase the volume of sequence data: (i) seeding alignments of high-throughput reads for genome assembly and (ii) designating anchor points for genome–genome comparisons.
Robust Mean Change-Point Detecting through Laplace Linear Regression Using EM Algorithm
Fengkai Yang
2014-01-01
normal distribution, we developed the expectation-maximization (EM) algorithm to estimate the position of the mean change-point. We investigated the performance of the algorithm through different simulations, finding that our method is robust to the distribution of the errors and effective at estimating the position of the mean change-point. Finally, we applied our method to the classical Holbert data and detected a change-point.
Hierarchical trie packet classification algorithm based on expectation-maximization clustering
Bi, Xia-an; Zhao, Junxia
2017-01-01
With the development of computer network bandwidth, packet classification algorithms which are able to deal with large-scale rule sets are in urgent need. Among the existing algorithms, research on packet classification algorithms based on hierarchical tries has become an important branch of packet classification research because of their wide practical use. Although the hierarchical trie saves considerable storage space, it has several shortcomings, such as the existence of backtracking and empty nodes. This paper proposes a new packet classification algorithm, Hierarchical Trie Algorithm Based on Expectation-Maximization Clustering (HTEMC). Firstly, this paper uses the formalization method to deal with the packet classification problem by mapping the rules and data packets into a two-dimensional space. Secondly, this paper uses the expectation-maximization algorithm to cluster the rules based on their aggregate characteristics, and thereby diversified clusters are formed. Thirdly, this paper proposes a hierarchical trie based on the results of the expectation-maximization clustering. Finally, this paper conducts simulation experiments and real-environment experiments to compare the performance of our algorithm with other typical algorithms, and analyzes the results of the experiments. The hierarchical trie structure in our algorithm not only adopts trie path compression to eliminate backtracking, but also solves the problem of low efficiency of trie updates, which greatly improves the performance of the algorithm.
An extended EM algorithm for subspace clustering
Lifei CHEN; Qingshan JIANG
2008-01-01
Clustering high dimensional data has become a challenge in data mining due to the curse of dimensionality. To solve this problem, subspace clustering has been defined as an extension of traditional clustering that seeks to find clusters in subspaces spanned by different combinations of dimensions within a dataset. This paper presents a new subspace clustering algorithm that calculates the local feature weights automatically in an EM-based clustering process. In the algorithm, the features are locally weighted by using a new unsupervised weighting method, as a means to minimize a proposed clustering criterion that takes into account both the average intra-cluster compactness and the average inter-cluster separation for subspace clustering. For the purpose of capturing accurate subspace information, an additional outlier detection process is presented to identify the possible local outliers of subspace clusters, and is embedded between the E-step and M-step of the algorithm. The method has been evaluated in clustering real-world gene expression data and high dimensional artificial data with outliers, and the experimental results have shown its effectiveness.
Miguel G. Villarreal-Cervantes
2012-10-01
Mobile robots with omnidirectional wheels are expected to perform a wide variety of movements in a narrow space. However, kinematic mobility and dexterity have not been clearly identified as objectives to be considered when designing omnidirectional redundant robots. In light of this fact, this article proposes to maximize the dexterity of the mobile robot by properly locating the omnidirectional wheels. In addition, four hybrid differential evolution (DE) algorithms based on the synergetic integration of different kinds of mutation and crossover are presented. A comparison of metaheuristic and gradient-based algorithms for kinematic dexterity maximization is also presented.
Genetic algorithm in DNA computing:A solution to the maximal clique problem
LI Yuan; FANG Chen; OUYANG Qi
2004-01-01
The genetic algorithm is one possible way to break the limit of the brute-force method in DNA computing. Using the idea of Darwinian evolution, we introduce a genetic DNA computing algorithm to solve the maximal clique problem. All the operations in the algorithm are accessible with today's molecular biotechnology. Our computer simulations show that with this new computing algorithm, it is possible to get a solution from a very small initial data pool, avoiding enumerating all candidate solutions. For randomly generated problems, the genetic algorithm can give the correct solution within a few cycles with high probability. Although the current speed of a DNA computer is slow compared with silicon computers, our simulation indicates that the number of cycles needed in this genetic algorithm is approximately a linear function of the number of vertices in the network. This may make DNA computers more powerful in attacking some hard computational problems.
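An in-silico analogue of such a genetic clique search is easy to sketch. The code below is our illustration, not the paper's DNA protocol: candidate vertex subsets evolve under a fitness that rewards size and penalizes missing edges, so cliques dominate; the penalty weight, crossover and mutation scheme are assumptions.

```python
import random
from itertools import combinations

def clique_fitness(adj, members):
    # Reward size, penalize missing edges so that cliques dominate.
    missing = sum(1 for u, v in combinations(sorted(members), 2) if v not in adj[u])
    return len(members) - 2 * missing

def ga_clique(adj, pop=40, gens=100, seed=0):
    rng = random.Random(seed)
    nodes = list(adj)
    population = [frozenset(rng.sample(nodes, rng.randint(1, len(nodes))))
                  for _ in range(pop)]
    for _ in range(gens):
        ranked = sorted(population, key=lambda m: clique_fitness(adj, m), reverse=True)
        survivors = ranked[: pop // 2]
        children = []
        while len(survivors) + len(children) < pop:
            a, b = rng.sample(survivors, 2)
            child = {v for v in a | b if rng.random() < 0.5}   # uniform crossover
            child ^= {rng.choice(nodes)}                       # mutation: toggle a vertex
            children.append(frozenset(child))
        population = survivors + children
    return max(population, key=lambda m: clique_fitness(adj, m))
```

The penalty of 2 per missing edge ensures that adding a non-adjacent vertex always lowers fitness, so the best individuals converge toward genuine cliques.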
A batch algorithm for estimating trajectories of point targets using expectation maximization
Rahmathullah, Abu; Raghavendra, Selvan; Svensson, Lennart
2016-01-01
In this paper, we propose a strategy that is based on expectation maximization for tracking multiple point targets. The algorithm is similar to probabilistic multi-hypothesis tracking (PMHT), but does not relax the point target model assumptions. According to the point target models, a target can...
An efficient algorithm for maximizing range sum queries in a road network.
Phan, Tien-Khoi; Jung, HaRim; Kim, Ung-Mo
2014-01-01
Given a set of positive-weighted points and a query rectangle r (specified by a client) of given extents, the goal of a maximizing range sum (MaxRS) query is to find the optimal location of r such that the total weight of the points covered by r is maximized. All existing methods for processing MaxRS queries assume the Euclidean distance metric. In many location-based applications, however, the motion of a client may be constrained by an underlying (spatial) road network; that is, the client cannot move freely in space. This paper addresses the problem of processing MaxRS queries in a road network. We propose an external-memory algorithm that is suited to a large road network database. In addition, in contrast to the existing methods, which retrieve only one optimal location, our proposed algorithm retrieves all the possible optimal locations. Through simulations, we evaluate the performance of the proposed algorithm.
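The Euclidean baseline that this record generalizes to road networks can be stated in a few lines. This is our brute-force sketch, not the paper's external-memory algorithm: some optimal axis-aligned w-by-h rectangle has a covered point on its left edge and one on its bottom edge, so trying every (x_i, y_j) as the lower-left corner suffices.

```python
def max_rs(points, w, h):
    # Brute-force Euclidean MaxRS over candidate lower-left corners.
    # points is a list of (x, y, weight) triples with positive weights.
    best = (0, None)
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    for x0 in xs:
        for y0 in ys:
            weight = sum(wt for x, y, wt in points
                         if x0 <= x <= x0 + w and y0 <= y <= y0 + h)
            if weight > best[0]:
                best = (weight, (x0, y0))
    return best
```

This O(n^3) enumeration is only a correctness reference; the point of the record is that neither the candidate set nor the metric carries over once movement is restricted to a road network.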
PMCR-Miner: parallel maximal confident association rules miner algorithm for microarray data set.
Zakaria, Wael; Kotb, Yasser; Ghaleb, Fayed F M
2015-01-01
The MCR-Miner algorithm aims to mine all maximal high-confidence association rules from the microarray up/down-expressed genes data set. This paper introduces two new algorithms: IMCR-Miner and PMCR-Miner. The IMCR-Miner algorithm is an extension of the MCR-Miner algorithm with some improvements. These improvements implement a novel way to store the samples of each gene in a list of unsigned integers in order to benefit from bitwise operations. In addition, the IMCR-Miner algorithm overcomes the drawbacks faced by the MCR-Miner algorithm by setting some restrictions to avoid repeated comparisons. The PMCR-Miner algorithm is a parallel version of the newly proposed IMCR-Miner algorithm. The PMCR-Miner algorithm is based on shared-memory systems and task parallelism, where no time is needed for sharing and combining data between processors. The experimental results on real microarray data sets show that the PMCR-Miner algorithm is more efficient and scalable than its counterparts.
An Expectation Maximization Algorithm to Model Failure Times by Continuous-Time Markov Chains
Qihong Duan
2010-01-01
In many applications, the failure rate function may present a bathtub-shaped curve. In this paper, an expectation maximization algorithm is proposed to construct a suitable continuous-time Markov chain which models the failure time data by the first time of reaching the absorbing state. Assume that a system is described by the method of supplementary variables, the device of stages, and so on. Given a data set, the maximum likelihood estimators of the initial distribution and the infinitesimal transition rates of the Markov chain can be obtained by our novel algorithm. Suppose that there are m transient states in the system and n failure time data points. The devised algorithm only needs to compute the exponential of m×m upper triangular matrices O(nm^2) times in each iteration. Finally, the algorithm is applied to two real data sets, which indicates the practicality and efficiency of our algorithm.
Video segmentation using multiple features based on EM algorithm
张风超; 杨杰; 刘尔琦
2004-01-01
Object-based video segmentation is an important issue for many multimedia applications. A video segmentation method based on the EM algorithm is proposed. We consider video segmentation as an unsupervised classification problem and apply the EM algorithm to obtain the maximum-likelihood estimates of the Gaussian model parameters for model-based segmentation. We simultaneously combine multiple features (motion, color) within a maximum likelihood framework to obtain accurate segmentation results. We also use the temporal consistency among video frames to improve the speed of the EM algorithm. Experimental results on typical MPEG-4 sequences and real scene sequences show that our method has attractive accuracy and robustness.
A Heuristic Optimal Discrete Bit Allocation Algorithm for Margin Maximization in DMT Systems
Dong Shi-Wei
2007-01-01
A heuristic optimal discrete bit allocation algorithm is proposed for solving the margin maximization problem in discrete multitone (DMT) systems. Starting from an initial equal-power-assignment bit distribution, the proposed algorithm employs a multistage bit rate allocation scheme to meet the target rate. If the total bit rate is far from the target rate, a multiple-bits loading procedure is used to obtain a bit allocation close to the target rate. When close to the target rate, a parallel bit-loading procedure is used to achieve the target rate; this is computationally more efficient than the conventional greedy bit-loading algorithm. Finally, the target bit rate distribution is checked: if it is efficient, then it is also the optimal solution; otherwise, the optimal bit distribution can be obtained with only a few bit swaps. Simulation results using the standard asymmetric digital subscriber line (ADSL) test loops show that the proposed algorithm is efficient for practical DMT transmissions.
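The conventional greedy bit-loading baseline that the record compares against can be sketched directly. This is our illustration, not the paper's multistage algorithm, and the cost model power(b) = (2^b - 1) / gain is a standard textbook assumption rather than anything taken from the paper.

```python
import heapq

def greedy_bit_loading(gains, target_bits):
    # Greedy bit loading: repeatedly grant one bit to the subchannel
    # whose next bit costs the least extra power, until the target
    # total bit rate is reached.
    bits = [0] * len(gains)
    # heap entries: (incremental power of the channel's next bit, channel)
    heap = [((2 ** 1 - 2 ** 0) / g, i) for i, g in enumerate(gains)]
    heapq.heapify(heap)
    for _ in range(target_bits):
        _, i = heapq.heappop(heap)
        bits[i] += 1
        nxt = (2 ** (bits[i] + 1) - 2 ** bits[i]) / gains[i]
        heapq.heappush(heap, (nxt, i))
    return bits
```

Each granted bit is the cheapest available one, so the final allocation is power-optimal for the target rate under this cost model; the drawback the record addresses is that the bits must be granted strictly one at a time.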
Relaxation of the EM Algorithm via Quantum Annealing
Miyahara, Hideyuki
2016-01-01
The EM algorithm is a numerical method to obtain maximum likelihood estimates and is often used for practical calculations. However, many maximum likelihood estimation problems are nonconvex, and it is known that the EM algorithm can fail to give the optimal estimate by becoming trapped in local optima. To deal with this difficulty, we propose a deterministic quantum annealing EM algorithm that introduces the mathematical mechanism of quantum fluctuations into the conventional EM algorithm, because quantum fluctuations induce the tunnel effect and are expected to relax the difficulty of nonconvex optimization in maximum likelihood estimation problems. We show a theorem that guarantees its convergence and give numerical experiments to verify its efficiency.
A tailored ML-EM algorithm for reconstruction of truncated projection data using few view angles
Mao, Yanfei; Zeng, Gengsheng L.
2013-06-01
Dedicated cardiac single photon emission computed tomography (SPECT) systems have the advantage of high speed and sensitivity at no loss, or even a gain, in resolution. The potential drawbacks of these dedicated systems are data truncation by the small field of view (FOV) and the lack of view angles. Serious artifacts, including streaks outside the FOV and distortion in the FOV, are introduced to the reconstruction when using the traditional emission data maximum-likelihood expectation-maximization (ML-EM) algorithm to reconstruct images from the truncated data with a small number of views. In this note, we propose a tailored ML-EM algorithm to suppress the artifacts caused by data truncation and insufficient angular sampling by reducing the image updating step sizes for the pixels outside the FOV. As a consequence, the convergence speed for the pixels outside the FOV is decelerated. We applied the proposed algorithm to truncated analytical data, Monte Carlo simulation data and real emission data with different numbers of views. The computer simulation results show that the tailored ML-EM algorithm outperforms the conventional ML-EM algorithm in terms of streak artifacts and distortion suppression for reconstruction from truncated projection data with a small number of views.
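The tailored update in this record can be sketched on a 1-D toy problem. The code below is our illustration, not the authors' reconstruction software: a plain ML-EM iteration for y = A x, where out-of-FOV pixels get their multiplicative correction blended toward 1 with weight alpha; this blending rule is an assumed stand-in for the paper's reduced step sizes.

```python
def ml_em(A, y, n_iters=200, in_fov=None, alpha=0.2):
    # ML-EM for emission data y = A x, with a damped multiplicative
    # update for pixels flagged as outside the field of view.
    m, n = len(A), len(A[0])
    x = [1.0] * n
    if in_fov is None:
        in_fov = [True] * n
    sens = [sum(A[i][j] for i in range(m)) for j in range(n)]   # A^T 1
    for _ in range(n_iters):
        proj = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]       # A x
        ratio = [y[i] / p if p > 0 else 0.0 for i, p in enumerate(proj)]       # y / Ax
        back = [sum(A[i][j] * ratio[i] for i in range(m)) for j in range(n)]   # A^T (y/Ax)
        for j in range(n):
            corr = back[j] / sens[j] if sens[j] > 0 else 1.0
            # Damp the correction outside the FOV: slower convergence there,
            # which is the intended artifact-suppression mechanism.
            x[j] *= corr if in_fov[j] else 1.0 + alpha * (corr - 1.0)
    return x
```

Damping only slows the out-of-FOV pixels; their fixed point is unchanged, which mirrors the record's claim that convergence is decelerated rather than altered.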
Anonymous
2007-01-01
This paper addresses the problem of parameter estimation of multivariable stationary stochastic systems on the basis of observed output data. The main contribution is to employ the expectation-maximization (EM) method as a means for computing the maximum-likelihood (ML) parameter estimates of the system. A closed form of the expectation of the studied system subject to Gaussian noise is derived, and the parameter choice that maximizes the expectation is also proposed. This results in an iterative algorithm for parameter estimation, and a robust algorithm implementation based on QR factorization and Cholesky factorization is also discussed. Moreover, algorithmic properties such as the non-decreasing likelihood value, necessary and sufficient conditions for the algorithm to arrive at a local stationary parameter, the convergence rate and the factors affecting it are analyzed. A simulation study shows that the proposed algorithm has attractive properties such as numerical stability and avoidance of difficult initial conditions.
Distributed Matching Algorithms: Maximizing Secrecy in the Presence of Untrusted Relay
B. Ali
2017-06-01
In this paper, we propose a secrecy sum-rate maximization based matching algorithm between primary transmitters and secondary cooperative jammers in the presence of an eavesdropper. More explicitly, we consider an untrusted relay scenario, where the relay is a potential eavesdropper. We first show the achievable secrecy regions when employing a friendly jammer in a cooperative scenario with an untrusted relay. Then, we provide results for the secrecy regions in two scenarios: in the first case we consider no direct transmission between the source and the destination, while in the second case we include a source-to-destination direct link in our communication system. Furthermore, a friendly jammer helps to send a noise signal during the first phase of the cooperative transmission to secure the information transmitted from the source. In our matching algorithm, the selected cooperative jammer, or secondary user, is rewarded with spectrum allocation for a fraction of a time slot from the source, which is the primary user. The Conventional Distributed Algorithm (CDA) and the Pragmatic Distributed Algorithm (PDA), which were originally designed for maximizing the users' sum rate, are modified and adapted for maximizing the secrecy sum-rate for the primary user. Instead of assuming perfect modulation and/or perfect channel coding, we have also investigated our proposed schemes when practical channel coding and modulation schemes are invoked.
UMR: A utility-maximizing routing algorithm for delay-sensitive service in LEO satellite networks
Lu Yong
2015-04-01
This paper develops a routing algorithm for delay-sensitive packet transmission in a low earth orbit multi-hop satellite network consisting of micro-satellites. The micro-satellite low earth orbit (MS-LEO) network endures unstable link connections and frequent link congestion due to the uneven user distribution and link capacity variations. The proposed routing algorithm, referred to as the utility-maximizing routing (UMR) algorithm, improves the network utility of the MS-LEO network when carrying flows with strict end-to-end delay bound requirements. In UMR, a link state parameter is first defined to capture the reliability of a link in keeping the end-to-end delay within its bound; then, on the basis of this parameter, a routing metric is formulated and a routing scheme is designed to balance the reliability of the delay bound guarantee among paths and to build a path maximizing the expected network utility. While the UMR algorithm has many advantages, it may result in a higher blocking rate for new calls. This phenomenon is discussed, and a weight factor is introduced into UMR to provide a flexible performance option for the network operator. A set of simulations is conducted to verify the good performance of UMR in terms of balancing the traffic distribution on inter-satellite links, reducing the flow interruption rate, and improving the network utility.
A Trust Region Aggressive Space Mapping Algorithm for EM
Bakr, M.; Bandler, J. W.; Biernacki, R.
1998-01-01
A robust new algorithm for electromagnetic (EM) optimization of microwave circuits is presented. The algorithm (TRASM) integrates a trust region methodology with the aggressive space mapping (ASM). The trust region ensures that each iteration results in improved alignment between the coarse and f...
A Local Scalable Distributed Expectation Maximization Algorithm for Large Peer-to-Peer Networks
Bhaduri, Kanishka; Srivastava, Ashok N.
2009-01-01
This paper offers a local distributed algorithm for expectation maximization in large peer-to-peer environments. The algorithm can be used for a variety of well-known data mining tasks in a distributed environment such as clustering, anomaly detection, and target tracking, to name a few. This technology is crucial for many emerging peer-to-peer applications for bioinformatics, astronomy, social networking, sensor networks and web mining. Centralizing all or some of the data for building global models is impractical in such peer-to-peer environments because of the large number of data sources, the asynchronous nature of the peer-to-peer networks, and the dynamic nature of the data/network. The distributed algorithm we have developed in this paper is provably correct, i.e., it converges to the same result as a similar centralized algorithm, and can automatically adapt to changes in the data and the network. We show that the communication overhead of the algorithm is very low due to its local nature. This monitoring algorithm is then used as a feedback loop to sample data from the network and rebuild the model when it is outdated. We present thorough experimental results to verify our theoretical claims.
Maximal use of minimal libraries through the adaptive substituent reordering algorithm.
Liang, Fan; Feng, Xiao-jiang; Lowry, Michael; Rabitz, Herschel
2005-03-31
This paper describes an adaptive algorithm for interpolation over a library of molecules subjected to synthesis and property assaying. Starting with a coarse sampling of the library compounds, the algorithm finds the optimal substituent orderings on all of the functionalized scaffold sites to allow for accurate property interpolation over all remaining compounds in the full library space. A previous paper introduced the concept of substituent reordering and a smoothness-based criterion to search for optimal orderings (Shenvi, N.; Geremia, J. M.; Rabitz, H. J. Phys. Chem. A 2003, 107, 2066). Here, we propose a data-driven root-mean-squared (RMS) criteria and a combined RMS/smoothness criteria as alternative methods for the discovery of optimal substituent orderings. Error propagation from the property measurements of the sampled compounds is determined to provide confidence intervals on the interpolated molecular property values, and a substituent rescaling technique is introduced to manage poorly designed/sampled libraries. Finally, various factors are explored that can influence the applicability and interpolation quality of the algorithm. An adaptive methodology is proposed to iteratively and efficiently use laboratory experiments to optimize these algorithmic factors, so that the accuracy of property predictions is maximized. The enhanced algorithm is tested on copolymer and transition metal complex libraries, and the results demonstrate the capability of the algorithm to accurately interpolate various properties of both molecular libraries.
An EM algorithm for mapping segregation distortion loci.
Zhu, Chengsong; Zhang, Yuan-Ming
2007-11-29
A chromosomal region that causes distorted segregation ratios is referred to as a segregation distortion locus (SDL). The distortion is caused either by differential representation of SDL genotypes in gametes before fertilization or by viability differences among SDL genotypes after fertilization but before genotype scoring. In both cases, observable phenotypes are distorted for marker loci in the chromosomal region close to the SDL. Under the quantitative genetics model for viability selection, by proposing a continuous liability controlling the viability of individuals, a simplex algorithm has been used to search for the solution in SDL mapping. However, previous studies did not consider the effects of the SDL on the construction of linkage maps. We propose a multipoint maximum-likelihood method to estimate the position and the effects of the SDL under the liability model, together with both the selection coefficients of marker genotypes and the recombination fractions. The method was implemented via an expectation-maximization (EM) algorithm. The superiority of the proposed method under the liability model over previous methods was verified by a series of Monte Carlo simulation experiments, together with a working example derived from the MAPMAKER/QTL software. Our results suggest that the new method can serve as a powerful alternative to existing methods for SDL mapping. Under the liability model, the new method can simultaneously estimate the position and effects of the SDL as well as the recombination fractions between adjacent markers, and it can also be used to probe the genetic mechanism behind the bias of uncorrected map distances and to elucidate the relationship between viability selection and genetic linkage.
An Efficient Algorithm for Maximizing Range Sum Queries in a Road Network
Tien-Khoi Phan
2014-01-01
Given a set of positive-weighted points and a query rectangle r of given extents (specified by a client), the goal of a maximizing range sum (MaxRS) query is to find the optimal location of r such that the total weight of the points covered by r is maximized. All existing methods for processing MaxRS queries assume the Euclidean distance metric. In many location-based applications, however, the motion of a client may be constrained by an underlying (spatial) road network; that is, the client cannot move freely in space. This paper addresses the problem of processing MaxRS queries in a road network. We propose an external-memory algorithm that is suited for a large road network database. In addition, in contrast to the existing methods, which retrieve only one optimal location, our proposed algorithm retrieves all the possible optimal locations. Through simulations, we evaluate the performance of the proposed algorithm.
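As a point of reference for the MaxRS problem described above, the Euclidean (network-free) version admits a simple brute-force solution: an optimal axis-aligned rectangle can always be translated left and down until its left edge passes through some point's x-coordinate and its bottom edge through some point's y-coordinate, without losing any covered point. A minimal sketch of that observation (illustrative only; the paper's external-memory, road-network algorithm is far more involved):

```python
def max_rs(points, w, h):
    """Brute-force Euclidean MaxRS for a w-by-h axis-aligned rectangle.

    points: list of (x, y, weight) tuples with positive weights.
    Returns (best total weight, lower-left corner of a best placement).
    O(n^3); real MaxRS algorithms use sweeps or external-memory indexes.
    """
    best, best_pos = 0.0, None
    xs = [p[0] for p in points]  # candidate left edges
    ys = [p[1] for p in points]  # candidate bottom edges
    for x0 in xs:
        for y0 in ys:
            total = sum(wt for (px, py, wt) in points
                        if x0 <= px <= x0 + w and y0 <= py <= y0 + h)
            if total > best:
                best, best_pos = total, (x0, y0)
    return best, best_pos

# Toy example: a 2-by-1 rectangle at (0, 0) covers all three points.
best_weight, corner = max_rs([(0, 0, 1), (1, 0, 2), (1.5, 0.5, 4)], w=2, h=1)
```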
Constructing quantum circuits for maximally entangled multi-qubit states using the genetic algorithm
Fan, Zheyong; Goertzel, Ben; Ren, Zhongzhou; Zeng, Huabi
2010-01-01
Numerical optimization methods such as hill climbing and simulated annealing have been applied to search for highly entangled multi-qubit states. Here the genetic algorithm is applied to this optimization problem: to search not only for highly entangled states, but also for the corresponding quantum circuits creating these states. Simple quantum circuits for maximally (highly) entangled states are discovered for 3-, 4-, 5-, and 6-qubit systems, and extension of the method to systems with more qubits is discussed. Among other results we have found explicit quantum circuits for maximally entangled 5- and 6-qubit states, with only 8 and 13 quantum gates respectively. One significant advantage of our method over previous ones is that it allows very simple construction of quantum circuits based on the quantum states found.
A Tabu Search DSA Algorithm for Reward Maximization in Cellular Networks
Kamal, Hany; Coupechoux, Marceau; Godlewski, Philippe
2010-01-01
In this paper, we present and analyze a Tabu Search (TS) algorithm for DSA (Dynamic Spectrum Access) in cellular networks. We study a mono-operator case where the operator is providing packet services to the end-users. The objective of the cellular operator is to maximize its reward while taking into account the trade-off between the spectrum cost and the revenues obtained from end-users. These revenues are modeled here as an increasing function of the achieved throughput.
Comparison of parametric FBP and OS-EM reconstruction algorithm images for PET dynamic study
Oda, Keiichi; Uemura, Koji; Kimura, Yuichi; Senda, Michio [Tokyo Metropolitan Inst. of Gerontology (Japan). Positron Medical Center; Toyama, Hinako; Ikoma, Yoko
2001-10-01
An ordered subsets expectation maximization (OS-EM) algorithm is used for image reconstruction to suppress image noise and to produce non-negative-value images. We have applied OS-EM to a digital brain phantom and to human brain ¹⁸F-FDG PET kinetic studies to generate parametric images. A 45 min dynamic scan was performed starting at the injection of FDG with a 2D PET scanner. The images were reconstructed with OS-EM (6 iterations, 16 subsets) and with filtered backprojection (FBP), and K1, k2 and k3 images were created by the Marquardt non-linear least squares method based on the 3-parameter kinetic model. The OS-EM activity images correlated fairly well with those obtained by FBP. Although the pixel correlations were poor for the k2 and k3 parametric images, the plots were scattered along the line of identity, and the mean values for K1, k2 and k3 obtained by OS-EM were almost equal to those by FBP. The kinetic fitting error for OS-EM was no smaller than that for FBP. The results suggest that OS-EM is not necessarily superior to FBP for creating parametric images. (author)
A modified EM algorithm for estimation in generalized mixed models.
Steele, B M
1996-12-01
Application of the EM algorithm for estimation in the generalized mixed model has been largely unsuccessful because the E-step cannot be determined in most instances. The E-step computes the conditional expectation of the complete data log-likelihood and when the random effect distribution is normal, this expectation remains an intractable integral. The problem can be approached by numerical or analytic approximations; however, the computational burden imposed by numerical integration methods and the absence of an accurate analytic approximation have limited the use of the EM algorithm. In this paper, Laplace's method is adapted for analytic approximation within the E-step. The proposed algorithm is computationally straightforward and retains much of the conceptual simplicity of the conventional EM algorithm, although the usual convergence properties are not guaranteed. The proposed algorithm accommodates multiple random factors and random effect distributions besides the normal, e.g., the log-gamma distribution. Parameter estimates obtained for several data sets and through simulation show that this modified EM algorithm compares favorably with other generalized mixed model methods.
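The Laplace approximation underlying this modified E-step replaces an intractable one-dimensional integral of exp(h(u)) by exp(h(û)) · sqrt(2π / −h''(û)), where û is the mode of h. A minimal numerical sketch of that formula follows (a toy quadratic h, not the generalized mixed model likelihood; for a quadratic h the approximation happens to be exact, which makes it easy to check):

```python
import math

def laplace_approx(h, dh, d2h, u0, steps=200, lr=0.1):
    """Approximate the integral of exp(h(u)) du by Laplace's method.

    The mode of h is located by simple gradient ascent (a toy optimizer);
    the integral is then exp(h(u*)) * sqrt(2*pi / -h''(u*)).
    """
    u = u0
    for _ in range(steps):
        u += lr * dh(u)  # gradient ascent toward the mode
    return math.exp(h(u)) * math.sqrt(2 * math.pi / -d2h(u))

# Toy integrand h(u) = -u^2/2 + u; completing the square gives the exact
# value exp(1/2) * sqrt(2*pi), and Laplace is exact for quadratic h.
h = lambda u: -0.5 * u * u + u
dh = lambda u: 1.0 - u
d2h = lambda u: -1.0
approx = laplace_approx(h, dh, d2h, u0=0.0)
exact = math.exp(0.5) * math.sqrt(2 * math.pi)
```

In the mixed-model E-step, h would be the log of the complete-data likelihood contribution as a function of the random effect; the quadratic case above only verifies the bookkeeping.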
Uamporn Witthayarat
2012-01-01
The aim of this paper is to introduce an iterative algorithm for finding a common solution of the sets (A + M2)^(-1)(0) and (B + M1)^(-1)(0), where M1 and M2 are maximal accretive operators in a Banach space, and, by using the proposed algorithm, to establish some strong convergence theorems for common solutions of the two sets above in a uniformly convex and 2-uniformly smooth Banach space. The results obtained in this paper extend and improve the corresponding results of Qin et al. (2011), from Hilbert spaces to Banach spaces, and of Petrot et al. (2011). Moreover, we also apply our results to some applications for solving convex feasibility problems.
Mousavi, Seyed Hosein; Nazemi, Ali; Hafezalkotob, Ashkan
2015-12-01
With the formation of competitive electricity markets around the world, optimization of bidding strategies has become one of the main topics in studies related to market design. Market design is challenged by multiple objectives that need to be satisfied. The solution of such multi-objective problems is often searched over the combined strategy space, and thus requires the simultaneous optimization of multiple parameters. The problem is formulated analytically using the Nash equilibrium concept for games composed of large numbers of players having discrete and large strategy spaces. The solution methodology is based on a characterization of Nash equilibrium in terms of minima of a function and relies on a metaheuristic optimization approach to find these minima. This paper presents several metaheuristic algorithms, namely the genetic algorithm (GA), simulated annealing (SA) and a hybrid simulated annealing-genetic algorithm (HSAGA), to simulate how generators bid in the spot electricity market from the viewpoint of profit maximization given the other generators' strategies, and compares their results. As both GA and SA are generic search methods, HSAGA is also a generic search method. The model, based on actual data, is implemented for a peak hour of Tehran's wholesale spot market in 2012. The results of the simulations show that GA outperforms SA and HSAGA in computing time, number of function evaluations and computing stability, and that the Nash equilibria computed by GA vary less from each other than those of the other algorithms.
Pal, Suvra; Balakrishnan, N
2017-05-16
In this paper, we develop likelihood inference based on the expectation maximization (EM) algorithm for the Box-Cox transformation cure rate model, assuming the lifetimes to follow a Weibull distribution. A simulation study is carried out to demonstrate the performance of the proposed estimation method. Through Monte Carlo simulations, we also study the effect of model mis-specification on the estimate of the cure rate. Finally, we analyze well-known data on melanoma with the model and the inferential method developed here.
Mahdi M. M. El-Arini
2013-01-01
In recent years, solar energy has become one of the most important alternative sources of electric energy, so it is important to operate a photovoltaic (PV) panel at its optimal point to obtain the maximum possible efficiency. This paper presents a new optimization approach to maximize the electrical power of a PV panel. The technique is based on an objective function representing the output power of the PV panel, subject to equality and inequality constraints. First, the variables that affect the output power are classified into two categories: dependent and independent. The proposed approach is a multistage one: the genetic algorithm (GA) is used to obtain the best initial population at the optimal solution, and this initial population is fed to the Lagrange multiplier algorithm (LM); a comparison between the two algorithms, GA and LM, is then performed. The proposed technique is applied to solar radiation measured at Helwan city, Egypt, at latitude 29.87°. The results showed that the proposed technique is applicable.
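For readers unfamiliar with the GA stage of such an approach, a minimal real-coded genetic algorithm maximizing a one-dimensional objective can be sketched as follows. The toy objective and all parameter values here are illustrative stand-ins, not the paper's PV power model:

```python
import random

def genetic_maximize(f, lo, hi, pop_size=40, gens=60, seed=1):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, clamped to [lo, hi]."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        new_pop = []
        for _ in range(pop_size):
            # Tournament selection: best of 3 random individuals, twice.
            a = max(rng.sample(pop, 3), key=f)
            b = max(rng.sample(pop, 3), key=f)
            # Blend crossover plus small Gaussian mutation.
            child = 0.5 * (a + b) + rng.gauss(0.0, 0.1 * (hi - lo))
            new_pop.append(min(hi, max(lo, child)))
        pop = new_pop
    return max(pop, key=f)

# Toy concave objective with its maximum at x = 2 (a stand-in for a
# power-versus-operating-point curve).
f = lambda x: 5.0 - (x - 2.0) ** 2
best = genetic_maximize(f, 0.0, 5.0)
```

In the paper's setting, the GA's final population would seed a Lagrange multiplier refinement step rather than be used directly.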
On-line EM algorithm for the normalized gaussian network.
Sato, M; Ishii, S
2000-02-01
A normalized Gaussian network (NGnet) (Moody & Darken, 1989) is a network of local linear regression units. The model softly partitions the input space by normalized Gaussian functions, and each local unit linearly approximates the output within the partition. In this article, we propose a new on-line EM algorithm for the NGnet, which is derived from the batch EM algorithm (Xu, Jordan, & Hinton, 1995) by introducing a discount factor. We show that the on-line EM algorithm is equivalent to the batch EM algorithm if a specific scheduling of the discount factor is employed. In addition, we show that the on-line EM algorithm can be considered as a stochastic approximation method to find the maximum likelihood estimator. A new regularization method is proposed in order to deal with a singular input distribution. In order to manage dynamic environments, where the input-output distribution of data changes over time, unit manipulation mechanisms such as unit production, unit deletion, and unit division are also introduced, based on a probabilistic interpretation. Experimental results show that our approach is suitable for function approximation problems in dynamic environments. We also apply our on-line EM algorithm to robot dynamics problems and compare our algorithm with the mixtures-of-experts family.
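The discount-factor idea can be illustrated on a plain 1-D Gaussian mixture rather than the NGnet itself: after each sample, the sufficient statistics are updated as S ← (1 − η)S + η s(x), and the M-step re-estimates the parameters from the discounted statistics. A minimal sketch under those assumptions (two components, fixed discount η; not the authors' algorithm, which also handles regularization and unit manipulation):

```python
import math
import random

def online_em_gmm(stream, mu, var, pi, eta=0.02):
    """On-line EM for a 1-D K-component Gaussian mixture.

    Sufficient statistics <r>, <r*x>, <r*x^2> per component are
    exponentially discounted with factor (1 - eta), mimicking the
    discount-factor idea; parameters are re-estimated after each sample.
    """
    K = len(mu)
    s0 = [pi[k] for k in range(K)]
    s1 = [pi[k] * mu[k] for k in range(K)]
    s2 = [pi[k] * (var[k] + mu[k] ** 2) for k in range(K)]
    for x in stream:
        # E-step: responsibilities under the current parameters.
        dens = [pi[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                / math.sqrt(2 * math.pi * var[k]) for k in range(K)]
        z = sum(dens)
        r = [d / z for d in dens]
        # Discounted statistics update, then per-component M-step.
        for k in range(K):
            s0[k] = (1 - eta) * s0[k] + eta * r[k]
            s1[k] = (1 - eta) * s1[k] + eta * r[k] * x
            s2[k] = (1 - eta) * s2[k] + eta * r[k] * x * x
            pi[k] = s0[k]
            mu[k] = s1[k] / s0[k]
            var[k] = max(s2[k] / s0[k] - mu[k] ** 2, 1e-3)
    return mu, var, pi

rng = random.Random(0)
# Stream from an equal-weight mixture of N(-3, 1) and N(3, 1).
stream = [rng.gauss(-3, 1) if rng.random() < 0.5 else rng.gauss(3, 1)
          for _ in range(5000)]
mu, var, pi = online_em_gmm(stream, mu=[-1.0, 1.0], var=[1.0, 1.0],
                            pi=[0.5, 0.5])
```

A constant η gives a tracking estimator suited to drifting distributions; the paper's equivalence to batch EM requires a particular decaying schedule for the discount factor instead.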
Hamada, Michiaki; Asai, Kiyoshi
2012-05-01
Many estimation problems in bioinformatics are formulated as point estimation problems in a high-dimensional discrete space. In general, it is difficult to design reliable estimators for this type of problem, because the number of possible solutions is immense, which leads to an extremely low probability for every solution, even for the one with the highest probability. Therefore, maximum score and maximum likelihood estimators do not work well in this situation, although they are widely employed in a number of applications. Maximizing expected accuracy (MEA) estimation, in which accuracy measures of the target problem and the entire distribution of solutions are considered, is a more successful approach. In this review, we provide an extensive discussion of algorithms and software based on MEA. We describe how a number of algorithms used in previous studies can be classified from the viewpoint of MEA. We believe that this review will be useful not only for users wishing to utilize software to solve the estimation problems appearing in this article, but also for developers wishing to design algorithms on the basis of MEA.
Nonlinear Random Effects Mixture Models: Maximum Likelihood Estimation via the EM Algorithm.
Wang, Xiaoning; Schumitzky, Alan; D'Argenio, David Z
2007-08-15
Nonlinear random effects models with finite mixture structures are used to identify polymorphism in pharmacokinetic/pharmacodynamic phenotypes. An EM algorithm for maximum likelihood estimation is developed that uses sampling-based methods to implement the expectation step, which results in an analytically tractable maximization step. A benefit of the approach is that no model linearization is performed and the estimation precision can be arbitrarily controlled by the sampling process. A detailed simulation study illustrates the feasibility of the estimation approach and evaluates its performance. Applications of the proposed nonlinear random effects mixture model approach to other population pharmacokinetic/pharmacodynamic problems will be of interest for future investigation.
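A sampling-based E-step of this kind can be illustrated on a far simpler model than the nonlinear mixed-effects setting: estimating the mean of a unit-variance normal from right-censored data, where the E-step expectation over each censored value is approximated by Monte Carlo draws from the truncated normal, and the M-step is a closed-form average. A hedged sketch (toy model, not the authors' estimator):

```python
import random
import statistics

def mcem_censored_normal(obs, n_cens, c, iters=20, draws=100, seed=0):
    """Monte Carlo EM for the mean of N(mu, 1) with right-censoring at c.

    obs: fully observed values (all < c); n_cens: number of values known
    only to satisfy y >= c. The E-step replaces each censored value by a
    Monte Carlo average over the truncated normal tail (via rejection
    sampling); the M-step is the mean of the completed data.
    """
    rng = random.Random(seed)
    mu = statistics.mean(obs)  # start from the naive (downward-biased) mean
    for _ in range(iters):
        filled = []
        for _ in range(n_cens):
            total, k = 0.0, 0
            while k < draws:
                y = rng.gauss(mu, 1.0)
                if y >= c:  # rejection sampling from the tail y >= c
                    total += y
                    k += 1
            filled.append(total / draws)
        mu = (sum(obs) + sum(filled)) / (len(obs) + n_cens)
    return mu

rng = random.Random(1)
ys = [rng.gauss(2.0, 1.0) for _ in range(500)]  # true mean 2.0
obs = [y for y in ys if y < 3.0]
mu_hat = mcem_censored_normal(obs, len(ys) - len(obs), c=3.0)
```

Increasing `draws` tightens the Monte Carlo approximation of the E-step, which is the "estimation precision can be arbitrarily controlled by the sampling process" property in miniature.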
Dragonfly: an implementation of the expand-maximize-compress algorithm for single-particle imaging.
Ayyer, Kartik; Lan, Ti-Yen; Elser, Veit; Loh, N Duane
2016-08-01
Single-particle imaging (SPI) with X-ray free-electron lasers has the potential to change fundamentally how biomacromolecules are imaged. The structure would be derived from millions of diffraction patterns, each from a different copy of the macromolecule before it is torn apart by radiation damage. The challenges posed by the resultant data stream are staggering: millions of incomplete, noisy and un-oriented patterns have to be computationally assembled into a three-dimensional intensity map and then phase reconstructed. In this paper, the Dragonfly software package is described, based on a parallel implementation of the expand-maximize-compress reconstruction algorithm that is well suited for this task. Auxiliary modules to simulate SPI data streams are also included to assess the feasibility of proposed SPI experiments at the Linac Coherent Light Source, Stanford, California, USA.
Liu, Haiguang; Spence, John C.H.
2014-01-01
Crystallographic auto-indexing algorithms provide crystal orientations and unit-cell parameters and assign Miller indices based on the geometric relations between the Bragg peaks observed in diffraction patterns. However, if the Bravais symmetry is higher than the space-group symmetry, there will be multiple indexing options that are geometrically equivalent, and hence many ways to merge diffraction intensities from protein nanocrystals. Structure factor magnitudes from full reflections are required to resolve this ambiguity but only partial reflections are available from each XFEL shot, which must be merged to obtain full reflections from these ‘stills’. To resolve this chicken-and-egg problem, an expectation maximization algorithm is described that iteratively constructs a model from the intensities recorded in the diffraction patterns as the indexing ambiguity is being resolved. The reconstructed model is then used to guide the resolution of the indexing ambiguity as feedback for the next iteration. Using both simulated and experimental data collected at an X-ray laser for photosystem I in the P63 space group (which supports a merohedral twinning indexing ambiguity), the method is validated.
Vargas Cardona, Hernán Darío; Orozco, Álvaro Ángel; Álvarez, Mauricio A
2013-01-01
Automatic identification of biosignals is one of the most studied fields in biomedical engineering. In this paper, we present an approach for the unsupervised recognition of biomedical signals: microelectrode recordings (MER) and electrocardiography (ECG) signals. The unsupervised learning is based on classical and Bayesian estimation theory. We employ Gaussian mixture models with two estimation methods. The first is derived from frequentist estimation theory and is known as the expectation-maximization (EM) algorithm. The second is obtained from Bayesian probabilistic estimation and is called variational inference. In this framework, both methods are used for parameter estimation of Gaussian mixtures. The mixture models are used for unsupervised pattern classification through the responsibility matrix. The algorithms are applied to two real databases acquired in Parkinson's disease surgeries and electrocardiograms. The results show an accuracy of over 85% on MER and 90% on ECG for the identification of two classes. These results are statistically equal to or even better than those of parametric (naive Bayes) and nonparametric (K-nearest neighbor) classifiers.
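A minimal batch EM for a two-component 1-D Gaussian mixture makes the role of the responsibility matrix concrete: the E-step fills the matrix with posterior component probabilities, the M-step re-estimates the parameters from it, and unsupervised classification takes the argmax of each row. A sketch on synthetic data (illustrative only; the paper works with MER and ECG feature vectors, not 1-D samples):

```python
import math
import random

def em_gmm(xs, iters=50):
    """Batch EM for a two-component 1-D Gaussian mixture.

    Returns the fitted means and hard labels obtained by taking the
    argmax of each row of the responsibility matrix.
    """
    mu = [min(xs), max(xs)]   # crude but well-separated initialization
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility r[i][k] = P(component k | x_i).
        r = []
        for x in xs:
            dens = [pi[k] / math.sqrt(2 * math.pi * var[k])
                    * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                    for k in range(2)]
            z = sum(dens)
            r.append([d / z for d in dens])
        # M-step: re-estimate parameters from the weighted data.
        for k in range(2):
            nk = sum(ri[k] for ri in r)
            mu[k] = sum(ri[k] * x for ri, x in zip(r, xs)) / nk
            var[k] = max(sum(ri[k] * (x - mu[k]) ** 2
                             for ri, x in zip(r, xs)) / nk, 1e-6)
            pi[k] = nk / len(xs)
    labels = [0 if ri[0] > ri[1] else 1 for ri in r]
    return mu, labels

rng = random.Random(42)
xs = ([rng.gauss(0, 1) for _ in range(200)]
      + [rng.gauss(6, 1) for _ in range(200)])
mu, labels = em_gmm(xs)
```

Variational inference, the paper's second method, would instead maintain distributions over the mixture parameters, but it consumes the same responsibility-style quantities.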
Tumuluru, Jaya
2013-01-10
Aims: The present case study is on maximizing aqua feed properties using response surface methodology and a genetic algorithm. Study Design: The effects of extrusion process variables such as screw speed, L/D ratio, barrel temperature, and feed moisture content were analyzed to maximize aqua feed properties such as water stability, true density, and expansion ratio. Place and Duration of Study: This study was carried out in the Department of Agricultural and Food Engineering, Indian Institute of Technology, Kharagpur, India. Methodology: A variable-length single screw extruder was used in the study. The process variables selected were screw speed (rpm), length-to-diameter (L/D) ratio, barrel temperature (degrees C), and feed moisture content (%). The pelletized aqua feed was analyzed for physical properties such as water stability (WS), true density (TD), and expansion ratio (ER). Extrusion experimental data were collected based on a central composite design. The experimental data were further analyzed using response surface methodology (RSM) and a genetic algorithm (GA) to maximize the feed properties. Results: Regression equations developed for the experimental data adequately described the effect of the process variables on the physical properties, with coefficient of determination values (R2) of > 0.95. RSM analysis indicated that WS, ER, and TD were maximized at an L/D ratio of 12-13, screw speed of 60-80 rpm, feed moisture content of 30-40%, and barrel temperature of ≤ 80 degrees C for ER and TD and > 90 degrees C for WS. Based on GA analysis, a maximum WS of 98.10% was predicted at a screw speed of 96.71 rpm, L/D ratio of 13.67, barrel temperature of 96.26 degrees C, and feed moisture content of 33.55%. Maximum ER and TD of 0.99 and 1346.9 kg/m3 were also predicted at screw speeds of 60.37 and 90.24 rpm, L/D ratios of 12.18 and 13.52, barrel temperatures of 68.50 and 64.88 degrees C, and medium feed moisture contents of 33.61 and 38.36%. Conclusion: The present data analysis indicated
Aida Tayebiyan
2016-06-01
Background: Several reservoir systems have been constructed for hydropower generation around the world. Hydropower offers an economical source of electricity with reduced carbon emissions, making it a clean and renewable source of energy. Reservoirs that generate hydropower are typically operated with the goal of maximizing energy revenue, yet many reservoir systems are operated and managed inefficiently, according to policies fixed at construction time. Even a small enhancement in the operation of a reservoir system could increase the efficiency of the scheme for many consumers. Methods: This research develops simulation-optimization models that reflect a discrete hedging policy (DHP) to manage and operate a hydropower reservoir system, and analyses it in both single and multi-reservoir settings. Accordingly, three operational models (two single-reservoir systems and one multi-reservoir system) were constructed and optimized by a genetic algorithm (GA). Maximizing the total power generation over the time horizon was chosen as the objective function, in order to improve the functional efficiency of hydropower production subject to operational and physical limitations. The constructed models, which form a cascade hydropower reservoir system, were tested and evaluated in the Cameron Highlands and Batang Padang in Malaysia. Results: According to the results, using DHP for hydropower reservoir system operation could increase the power generation output by nearly 13% in the studied reservoir system compared to the present operating policy (TNB operation). This substantial increase in power production would enhance economic development. Moreover, the results for the single and multi-reservoir systems confirmed that the hedging policy managed the single system much better than the multi-reservoir system. Conclusion: It can be summarized that DHP is an efficient and feasible policy, which could be used
Shepherd, Ross K; Meuwissen, Theo H E; Woolliams, John A
2010-10-22
The information provided by dense genome-wide markers using high throughput technology is of considerable potential in human disease studies and livestock breeding programs. Genome-wide association studies relate individual single nucleotide polymorphisms (SNP) from dense SNP panels to individual measurements of complex traits, with the underlying assumption being that any association is caused by linkage disequilibrium (LD) between SNP and quantitative trait loci (QTL) affecting the trait. Often SNP are in genomic regions of no trait variation. Whole genome Bayesian models are an effective way of incorporating this and other important prior information into modelling. However a full Bayesian analysis is often not feasible due to the large computational time involved. This article proposes an expectation-maximization (EM) algorithm called emBayesB which allows only a proportion of SNP to be in LD with QTL and incorporates prior information about the distribution of SNP effects. The posterior probability of being in LD with at least one QTL is calculated for each SNP along with estimates of the hyperparameters for the mixture prior. A simulated example of genomic selection from an international workshop is used to demonstrate the features of the EM algorithm. The accuracy of prediction is comparable to a full Bayesian analysis but the EM algorithm is considerably faster. The EM algorithm was accurate in locating QTL which explained more than 1% of the total genetic variation. A computational algorithm for very large SNP panels is described. emBayesB is a fast and accurate EM algorithm for implementing genomic selection and predicting complex traits by mapping QTL in genome-wide dense SNP marker data. Its accuracy is similar to Bayesian methods but it takes only a fraction of the time.
Hufnagel, Heike [Institut National de Recherche en Informatique et en Automatique (INRIA), Asclepios Project, Sophia Antipolis (France); University Medical Center Hamburg-Eppendorf, Department of Medical Informatics, Hamburg (Germany); Pennec, Xavier; Ayache, Nicholas [Institut National de Recherche en Informatique et en Automatique (INRIA), Asclepios Project, Sophia Antipolis (France); Ehrhardt, Jan; Handels, Heinz [University Medical Center Hamburg-Eppendorf, Department of Medical Informatics, Hamburg (Germany)
2008-03-15
Identification of point correspondences between shapes is required for statistical analysis of organ shape differences. Since manual identification of landmarks is not a feasible option in 3D, several methods were developed to automatically find one-to-one correspondences on shape surfaces. For unstructured point sets, however, one-to-one correspondences do not exist but correspondence probabilities can be determined. A method was developed to compute a statistical shape model based on shapes which are represented by unstructured point sets with arbitrary point numbers. A fundamental problem when computing statistical shape models is the determination of correspondences between the points of the shape observations of the training data set. In the absence of landmarks, exact correspondences can only be determined between continuous surfaces, not between unstructured point sets. To overcome this problem, we introduce correspondence probabilities instead of exact correspondences. The correspondence probabilities are found by aligning the observation shapes with the affine expectation maximization-iterative closest point (EM-ICP) registration algorithm. In a second step, the correspondence probabilities are used as input to compute a mean shape (represented once again by an unstructured point set). Both steps are unified in a single optimization criterion which depends on the two parameters 'registration transformation' and 'mean shape'. In a last step, a variability model which best represents the variability in the training data set is computed. Experiments on synthetic data sets and in vivo brain structure data sets (MRI) are then designed to evaluate the performance of our algorithm. The new method was applied to brain MRI data sets, and the estimated point correspondences were compared to a statistical shape model built on exact correspondences. Based on established measures of 'generalization ability' and
Bayer, Christian
2016-02-20
In this work, we present an extension of the forward–reverse representation introduced by Bayer and Schoenmakers (Annals of Applied Probability, 24(5):1994–2032, 2014) to the context of stochastic reaction networks (SRNs). We apply this stochastic representation to the computation of efficient approximations of expected values of functionals of SRN bridges, that is, SRNs conditional on their values in the extremes of given time intervals. We then employ this SRN bridge-generation technique to the statistical inference problem of approximating reaction propensities based on discretely observed data. To this end, we introduce a two-phase iterative inference method in which, during phase I, we solve a set of deterministic optimization problems where the SRNs are replaced by their reaction-rate ordinary differential equations approximation; then, during phase II, we apply the Monte Carlo version of the expectation-maximization algorithm to the phase I output. By selecting a set of overdispersed seeds as initial points in phase I, the output of parallel runs from our two-phase method is a cluster of approximate maximum likelihood estimates. Our results are supported by numerical examples.
DeSutter, John; Francoeur, Mathieu
2016-01-01
Optimal radiator thermal emission spectra maximizing thermophotovoltaic (TPV) conversion efficiency and output power density are determined when temperature effects in the cell are considered. To do this, a framework is designed in which a TPV model that accounts for radiative, electrical and thermal losses is coupled with a genetic algorithm. The TPV device under study involves a spectrally selective radiator at a temperature of 2000 K, a gallium antimonide cell, and a cell thermal management system characterized by a fluid temperature of 293 K and a heat transfer coefficient of 600 W m^-2 K^-1. It is shown that a maximum conversion efficiency of 38.8% is achievable with an emission spectrum that has an emissivity of unity between 0.719 eV and 0.763 eV and zero elsewhere. This optimal spectrum is less than half the width of those obtained when thermal losses are neglected. A maximum output power density of 41708 W m^-2 is achievable with a spectrum having emissivity values of unity between 0.684 eV and 1.082 eV and zero elsewhere.
Maximization of induction motor torque in the zone of high speed of rotor using a genetic algorithm
2013-01-01
The problem of improving the quality of vector-controlled induction motor drives is studied. Using a genetic algorithm, a law for forming the rotor flux linkage is obtained that maximizes the torque of an induction motor under constraints on the voltage and stator current. Numerical studies have shown that the proposed law can significantly increase the motor torque at high rotor speeds.
EXTREME: an online EM algorithm for motif discovery
Quang, Daniel; Xie, Xiaohui
2014-01-01
Motivation: Identifying regulatory elements is a fundamental problem in the field of gene transcription. Motif discovery—the task of identifying the sequence preference of transcription factor proteins, which bind to these elements—is an important step in this challenge. MEME is a popular motif discovery algorithm. Unfortunately, MEME’s running time scales poorly with the size of the dataset. Experiments such as ChIP-Seq and DNase-Seq are providing a rich amount of information on the binding preference of transcription factors. MEME cannot discover motifs in data from these experiments in a practical amount of time without a compromising strategy such as discarding a majority of the sequences. Results: We present EXTREME, a motif discovery algorithm designed to find DNA-binding motifs in ChIP-Seq and DNase-Seq data. Unlike MEME, which uses the expectation-maximization algorithm for motif discovery, EXTREME uses the online expectation-maximization algorithm to discover motifs. EXTREME can discover motifs in large datasets in a practical amount of time without discarding any sequences. Using EXTREME on ChIP-Seq and DNase-Seq data, we discover many motifs, including some novel and infrequent motifs that can only be discovered by using the entire dataset. Conservation analysis of one of these novel infrequent motifs confirms that it is evolutionarily conserved and possibly functional. Availability and implementation: All source code is available at the Github repository http://github.com/uci-cbcl/EXTREME. Contact: xhx@ics.uci.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:24532725
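The batch-vs-online distinction above can be illustrated with a minimal stepwise online EM sketch (in the spirit of Cappé and Moulines' online EM, not the EXTREME implementation itself): sufficient statistics for a two-component 1-D Gaussian mixture are updated one observation at a time with a decaying step size, so a single pass over a large dataset replaces repeated full E-steps. The component count, shared unit variance, and step-size schedule are illustrative assumptions.

```python
import math

def online_em(stream, mus=(-1.0, 1.0), sigma=1.0, weights=(0.5, 0.5), decay=0.6):
    """Stepwise online EM for a two-component 1-D Gaussian mixture with a
    known, shared sigma: running sufficient statistics are blended with the
    contribution of each new observation instead of recomputed in full passes."""
    s_w = list(weights)                                 # running component masses
    s_x = [weights[0] * mus[0], weights[1] * mus[1]]    # running weighted sums
    for t, x in enumerate(stream, start=1):
        eta = (t + 1) ** (-decay)             # decaying step size in (0, 1)
        # E-step for this single point: responsibilities under current means
        dens = [s_w[k] * math.exp(-0.5 * ((x - s_x[k] / s_w[k]) / sigma) ** 2)
                for k in range(2)]
        total = sum(dens)
        resp = [d / total for d in dens]
        # stochastic-approximation update of the sufficient statistics
        for k in range(2):
            s_w[k] = (1 - eta) * s_w[k] + eta * resp[k]
            s_x[k] = (1 - eta) * s_x[k] + eta * resp[k] * x
    # M-step: component means recovered from the running statistics
    return [s_x[k] / s_w[k] for k in range(2)]
```

On a well-separated stream the recovered means approach the true component centers after a single pass, which is what makes this family of updates attractive for ChIP-Seq-scale datasets.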
EM algorithm and its application to testing hypotheses
房祥忠; 陈家鼎
2003-01-01
The conventional method for testing hypotheses is to find an exact or asymptotic distribution of a test statistic. But when the model is complex and the sample size is small, difficulty often arises. This paper aims to present a method for finding maximum probability with the help of the EM algorithm. For any fixed sample size, this method can be used not only to obtain an accurate test but also to check the real level of a test which is built by large-sample theory. Especially, while doing this, one needs neither the exact nor the asymptotic distribution of the test statistic. So the method is easily performed and is especially useful for small samples.
Naparstek, Oshri; Leshem, Amir
2013-01-01
In this paper we analyze the expected time complexity of the auction algorithm for the matching problem on random bipartite graphs. We prove that the expected time complexity of the auction algorithm for bipartite matching is $O\left(\frac{N\log^2(N)}{\log(Np)}\right)$ on sequential machines. This is equivalent to other augmenting path algorithms such as the HK algorithm. Furthermore, we show that the algorithm can be implemented on parallel machines with $O(\log(N))$ processors an...
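For readers unfamiliar with the auction algorithm analyzed above, a minimal sequential sketch (Bertsekas' forward auction for the assignment problem; variable names are illustrative) looks like this: each unassigned bidder bids its margin over the second-best object plus a small eps, which guarantees termination and eps-optimality.

```python
def auction_assignment(benefit, eps=0.01):
    """Forward auction for the n-by-n assignment problem: unassigned bidders
    repeatedly bid for their most valuable object at current prices, raising
    that object's price, until every bidder holds an object.
    Returns assigned[i] = object held by bidder i."""
    n = len(benefit)
    price = [0.0] * n
    owner = [None] * n          # owner[j] = bidder currently holding object j
    assigned = [None] * n       # assigned[i] = object held by bidder i
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        values = [benefit[i][j] - price[j] for j in range(n)]
        j_best = max(range(n), key=values.__getitem__)
        v_best = values[j_best]
        # the second-best value determines how far the price can be raised
        v_second = max((v for j, v in enumerate(values) if j != j_best),
                       default=v_best)
        price[j_best] += v_best - v_second + eps
        if owner[j_best] is not None:       # outbid the previous holder
            assigned[owner[j_best]] = None
            unassigned.append(owner[j_best])
        owner[j_best] = i
        assigned[i] = j_best
    return assigned
```

With integer benefits and n·eps < 1, the eps-optimal assignment the auction returns is exactly optimal.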
Dexter, F; Macario, A; Traub, R D
1999-11-01
The algorithm to schedule add-on elective cases that maximizes operating room (OR) suite utilization is unknown. The goal of this study was to use computer simulation to evaluate 10 scheduling algorithms described in the management sciences literature to determine their relative performance at scheduling as many hours of add-on elective cases as possible into open OR time. From a surgical services information system for two separate surgical suites, the authors collected these data: (1) hours of open OR time available for add-on cases in each OR each day and (2) duration of each add-on case. These empirical data were used in computer simulations of case scheduling to compare algorithms appropriate for "variable-sized bin packing with bounded space." "Variable size" refers to differing amounts of open time in each "bin," or OR. The end point of the simulations was OR utilization (time an OR was used divided by the time the OR was available). Each day there were 0.24 +/- 0.11 and 0.28 +/- 0.23 simulated cases (mean +/- SD) scheduled to each OR in each of the two surgical suites. The algorithm that maximized OR utilization, Best Fit Descending with fuzzy constraints, achieved OR utilizations 4% larger than the algorithm with poorest performance. We identified the algorithm for scheduling add-on elective cases that maximizes OR utilization for surgical suites that usually have zero or one add-on elective case in each OR. The ease of implementation of the algorithm, either manually or in an OR information system, needs to be studied.
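As a sketch of the winning heuristic above (plain Best Fit Descending, without the fuzzy-constraint extension the study used; the numbers in the example are made up), variable-size bin packing of add-on cases into open OR time can be written as:

```python
def best_fit_descending(open_times, case_durations):
    """Best Fit Descending for variable-size bin packing: sort add-on cases
    longest first, then place each case into the OR whose remaining open
    time fits it most tightly. Returns (schedule, utilization)."""
    remaining = list(open_times)
    schedule = [[] for _ in open_times]
    for dur in sorted(case_durations, reverse=True):
        # candidate ORs with enough open time left for this case
        fits = [i for i, r in enumerate(remaining) if r >= dur]
        if not fits:
            continue                      # case cannot be scheduled today
        i = min(fits, key=lambda k: remaining[k] - dur)   # tightest fit
        schedule[i].append(dur)
        remaining[i] -= dur
    used = sum(open_times) - sum(remaining)
    return schedule, used / sum(open_times)
```

The descending sort is what distinguishes this from plain Best Fit: committing the longest cases first leaves the small leftover gaps for the short cases, which is why it tends to maximize utilization.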
Maximum-Likelihood Semiblind Equalization of Doubly Selective Channels Using the EM Algorithm
Gideon Kutz
2010-01-01
Maximum-likelihood semi-blind joint channel estimation and equalization for doubly selective channels and single-carrier systems is proposed. We model the doubly selective channel as an FIR filter where each filter tap is modeled as a linear combination of basis functions. This channel description is then integrated in an iterative scheme based on the expectation-maximization (EM) principle that converges to the channel description vector estimation. We discuss the selection of the basis functions and compare various function sets. To alleviate the problem of convergence to a local maximum, we propose an initialization scheme for the EM iterations based on a small number of pilot symbols. We further derive a pilot positioning scheme targeted to reduce the probability of convergence to a local maximum. Our pilot positioning analysis reveals that for high Doppler rates it is better to spread the pilots evenly throughout the data block (and not to group them), even for frequency-selective channels. The resulting equalization algorithm is shown to be superior to previously proposed equalization schemes and to perform in many cases close to the maximum-likelihood equalizer with perfect channel knowledge. Our proposed method is also suitable for coded systems and as a building block for Turbo equalization algorithms.
Saurabh Ghosh; Partha P. Majumder
2000-08-01
Mapping a locus controlling a quantitative genetic trait (e.g. blood pressure) to a specific genomic region is of considerable contemporary interest. Data on the quantitative trait under consideration and several codominant genetic markers with known genomic locations are collected from members of families and statistically analysed to estimate the recombination fraction between the putative quantitative trait locus and a genetic marker. One of the major complications in estimating the recombination fraction for a quantitative trait in humans is the lack of haplotype information on members of families. We have devised a computationally simple two-stage method of estimating the recombination fraction in the absence of haplotypic information using the expectation-maximization (EM) algorithm. In the first stage, parameters of the quantitative trait locus (QTL) are estimated on the basis of data from a sample of unrelated individuals, and Bayes' rule is used to classify each parent into a QTL genotypic class. In the second stage, we have proposed an EM algorithm for obtaining the maximum-likelihood estimate of the recombination fraction based on data from informative families (which are identified upon inferring parental QTL genotypes in the first stage). The purpose of this paper is to investigate whether, instead of using genotypically 'classified' data of parents, the use of posterior probabilities of QT genotypes of parents at the second stage yields better estimators. We show, using simulated data, that the proposed procedure using posterior probabilities is statistically more efficient than our earlier classification procedure, although it is computationally heavier.
Gallo A. S.
2005-01-01
We investigate the application of the Bayesian expectation-maximization (BEM) technique to the design of soft-in soft-out (SISO) detection algorithms for wireless communication systems operating over channels affected by parametric uncertainty. First, the BEM algorithm is described in detail and its relationship with the well-known expectation-maximization (EM) technique is explained. Then, some of its applications are illustrated. In particular, the problems of SISO detection of spread-spectrum, single-carrier and multicarrier space-time block coded signals are analyzed. Numerical results show that BEM-based detectors perform closely to the maximum-likelihood (ML) receivers endowed with perfect channel state information as long as channel variations are not too fast.
Application of k-person and k-task maximal efficiency assignment algorithm to water piping repair
Su-juan ZHENG; Xiu-ming YU; Li-qing CAO
2009-01-01
Solving the absent assignment problem of the shortest time limit in a weighted bipartite graph with the minimal weighted k-matching algorithm is unsuitable for situations in which large numbers of problems need to be addressed by large numbers of parties. This paper simplifies the algorithm of searching for the even alternating path that contains a maximal element using the minimal weighted k-matching theorem and intercept graph. A program for solving the maximal efficiency assignment problem was compiled. As a case study, the program was used to solve the assignment problem of water piping repair in the case of a large number of companies and broken pipes, and the validity of the program was verified.
Lemaire, H.; Barat, E.; Carrel, F.; Dautremer, T.; Dubos, S.; Limousin, O.; Montagu, T.; Normand, S.; Schoepff, V. [CEA, Gif-sur-Yvette, F-91191 (France); Amgarou, K.; Menaa, N. [CANBERRA, 1, rue des Herons, Saint Quentin en Yvelines, F-78182 (France); Angelique, J.-C. [LPC, 6, boulevard du Marechal Juin, F-14050 (France); Patoz, A. [CANBERRA, 10, route de Vauzelles, Loches, F-37600 (France)
2015-07-01
In this work, we tested maximum-likelihood expectation-maximization (MLEM) algorithms optimized for gamma imaging applications on two recent coded-mask gamma cameras. We respectively took advantage of the characteristics of the GAMPIX and Caliste HD-based gamma cameras: noise reduction thanks to the mask/anti-mask procedure but limited energy resolution for GAMPIX, and high energy resolution for Caliste HD. One of our short-term perspectives is to test MAPEM algorithms that integrate prior values, adapted to the gamma imaging context, for the data to be reconstructed. (authors)
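The MLEM reconstruction tested in this work rests on a classic multiplicative update; a minimal, library-free sketch for a generic linear system matrix (the tiny 3x2 system in the test is illustrative, not a coded-mask response model) is:

```python
def mlem(counts, system, n_iter=100):
    """Maximum-likelihood expectation-maximization (MLEM) reconstruction,
    the classic multiplicative scheme used in emission tomography and
    coded-mask imaging:
        x_j <- x_j * (sum_i a_ij * y_i / (A x)_i) / (sum_i a_ij)
    counts: measured counts y per detector bin; system: matrix A (one row
    per detector bin, one column per image pixel)."""
    n_pix = len(system[0])
    x = [1.0] * n_pix                                     # flat initial image
    sens = [sum(row[j] for row in system) for j in range(n_pix)]  # A^T 1
    for _ in range(n_iter):
        forward = [sum(a * xj for a, xj in zip(row, x)) for row in system]
        ratio = [y / max(f, 1e-12) for y, f in zip(counts, forward)]
        back = [sum(row[j] * r for row, r in zip(system, ratio))
                for j in range(n_pix)]
        x = [xj * b / max(s, 1e-12) for xj, b, s in zip(x, back, sens)]
    return x
```

The update preserves non-negativity and increases the Poisson likelihood at every iteration; MAP variants (such as the MAPEM algorithms mentioned above) multiply in a prior term at the same step.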
Chang Luo
2015-01-01
The many-objective optimization performance of the Kriging-surrogate-based evolutionary algorithm (EA), which maximizes expected hypervolume improvement (EHVI) for updating the Kriging model, is investigated and compared with those using expected improvement (EI) and estimation (EST) updating criteria in this paper. Numerical experiments are conducted on 3- to 15-objective DTLZ1-7 problems. In the experiments, an exact hypervolume calculating algorithm is used for the problems with fewer than six objectives; an approximate hypervolume calculating algorithm based on Monte Carlo sampling is adopted for the problems with more objectives. The results indicate that, in the nonconstrained case, EHVI is a highly competitive updating criterion for the Kriging model and EA-based many-objective optimization, especially when the test problem is complex and the number of objectives or design variables is large.
A pathway EM-algorithm for estimating vaccine efficacy with a non-monotone validation set.
Yang, Yang; Halloran, M Elizabeth; Chen, Yanjun; Kenah, Eben
2014-09-01
Here, we consider time-to-event data where individuals can experience two or more types of events that are not distinguishable from one another without further confirmation, perhaps by laboratory test. The event type of primary interest can occur only once. The other types of events can recur. If the type of a portion of the events is identified, this forms a validation set. However, even if a random sample of events are tested, confirmations can be missing nonmonotonically, creating uncertainty about whether an individual is still at risk for the event of interest. For example, in a study to estimate efficacy of an influenza vaccine, an individual may experience a sequence of symptomatic respiratory illnesses caused by various pathogens over the season. Often only a limited number of these episodes are confirmed in the laboratory to be influenza-related or not. We propose two novel methods to estimate covariate effects in this survival setting, and subsequently vaccine efficacy. The first is a pathway expectation-maximization (EM) algorithm that takes into account all pathways of event types in an individual compatible with that individual's test outcomes. The pathway EM iteratively estimates baseline hazards that are used to weight possible event types. The second method is a non-iterative pathway piecewise validation method that does not estimate the baseline hazards. These methods are compared with a previous simpler method. Simulation studies suggest mean squared error is lower in the efficacy estimates when the baseline hazards are estimated, especially at higher hazard rates. We use the pathway EM-algorithm to reevaluate the efficacy of a trivalent live-attenuated influenza vaccine during the 2003-2004 influenza season in Temple-Belton, Texas, and compare our results with a previously published analysis. © 2014, The International Biometric Society.
Zibar, Darko; Winther, Ole; Franceschi, Niccolo
2012-01-01
In this paper, we show numerically and experimentally that the expectation maximization (EM) algorithm is a powerful tool in combating system impairments such as fibre nonlinearities, inphase and quadrature (I/Q) modulator imperfections and laser linewidth. The EM algorithm is an iterative algorithm...
Maximum Likelihood in a Generalized Linear Finite Mixture Model by Using the EM Algorithm
Jansen, R.C.
A generalized linear finite mixture model and an EM algorithm to fit the model to data are described. By this approach the finite mixture model is embedded within the general framework of generalized linear models (GLMs). Implementation of the proposed EM algorithm can be readily done in statistical
Cai, Li; Lee, Taehun
2009-01-01
We apply the Supplemented EM algorithm (Meng & Rubin, 1991) to address a chronic problem with the "two-stage" fitting of covariance structure models in the presence of ignorable missing data: the lack of an asymptotically chi-square distributed goodness-of-fit statistic. We show that the Supplemented EM algorithm provides a…
Maximizing the brightness of an electron beam by means of a genetic algorithm
Bacci, A.; Maroli, C. [Sezione di Milano INFN, Via Celoria, 16, 20133 Milano (Italy); Petrillo, V. [Universita degli Studi di Milano, Via Celoria, 16, 20133 Milan (Italy)], E-mail: petrillo@mi.infn.it; Rossi, A.; Serafini, L. [Sezione di Milano INFN, Via Celoria, 16, 20133 Milan (Italy)
2007-10-15
We present the architecture and some applications of a genetic algorithm developed for solving problems in beam dynamics of high brightness beams, where highly non-linear behaviour of the beam characteristics are optimized through a multi-dimensional variation technique based on genetic evolution criteria.
A Local Scalable Distributed EM Algorithm for Large P2P Networks
National Aeronautics and Space Administration — This paper describes a local and distributed expectation maximization algorithm for learning parameters of Gaussian mixture models (GMM) in large peer-to-peer (P2P)...
Algorithmic and Complexity Results for Cutting Planes Derived from Maximal Lattice-Free Convex Sets
Basu, Amitabh; Köppe, Matthias
2011-01-01
We study a mixed integer linear program with m integer variables and k non-negative continuous variables in the form of the relaxation of the corner polyhedron that was introduced by Andersen, Louveaux, Weismantel and Wolsey [Inequalities from two rows of a simplex tableau, Proc. IPCO 2007, LNCS, vol. 4513, Springer, pp. 1--15]. We describe the facets of this mixed integer linear program via the extreme points of a well-defined polyhedron. We then utilize this description to give polynomial time algorithms to derive valid inequalities with optimal l_p norm for arbitrary, but fixed m. For the case of m=2, we give a refinement and a new proof of a characterization of the facets by Cornuejols and Margot [On the facets of mixed integer programs with two integer variables and two constraints, Math. Programming 120 (2009), 429--456]. The key point of our approach is that the conditions are much more explicit and can be tested in a more direct manner, removing the need for a reduction algorithm. These results allow ...
Efficient Rectangular Maximal-Volume Algorithm for Rating Elicitation in Collaborative Filtering
Fonarev, Alexander
2017-02-07
Cold start problem in Collaborative Filtering can be solved by asking new users to rate a small seed set of representative items or by asking representative users to rate a new item. The question is how to build a seed set that can give enough preference information for making good recommendations. One of the most successful approaches, called Representative Based Matrix Factorization, is based on Maxvol algorithm. Unfortunately, this approach has one important limitation - a seed set of a particular size requires a rating matrix factorization of fixed rank that should coincide with that size. This is not necessarily optimal in the general case. In the current paper, we introduce a fast algorithm for an analytical generalization of this approach that we call Rectangular Maxvol. It allows the rank of factorization to be lower than the required size of the seed set. Moreover, the paper includes the theoretical analysis of the method's error, the complexity analysis of the existing methods and the comparison to the state-of-the-art approaches.
Efficient Retrieval of Text for Biomedical Domain using Expectation Maximization Algorithm
Sumit Vashishtha
2011-11-01
Data mining, a branch of computer science [1], is the process of extracting patterns from large data sets by combining methods from statistics and artificial intelligence with database management. Data mining is seen as an increasingly important tool by modern business to transform data into business intelligence, giving an informational advantage. Biomedical text retrieval refers to text retrieval techniques applied to biomedical resources and literature of the biomedical and molecular biology domain. The volume of published biomedical research, and therefore the underlying biomedical knowledge base, is expanding at an increasing rate. Biomedical text retrieval is a way to aid researchers in coping with information overload. By discovering predictive relationships between different pieces of extracted data, data-mining algorithms can be used to improve the accuracy of information extraction. However, textual variation due to typos, abbreviations, and other sources can prevent the productive discovery and utilization of hard-matching rules. Recent methods of soft clustering can exploit predictive relationships in textual data. This paper presents a technique for using a soft clustering data mining algorithm to increase the accuracy of biomedical text extraction. Experimental results demonstrate that this approach improves text extraction more effectively than hard keyword-matching rules.
Flamant, Julien; Le Bihan, Nicolas; Martin, Andrew V; Manton, Jonathan H
2016-05-01
In three-dimensional (3D) single particle imaging with x-ray free-electron lasers, particle orientation is not recorded during measurement but is instead recovered as a necessary step in the reconstruction of a 3D image from the diffraction data. Here we use harmonic analysis on the sphere to cleanly separate the angular and radial degrees of freedom of this problem, providing new opportunities to efficiently use data and computational resources. We develop the expansion-maximization-compression algorithm into a shell-by-shell approach and implement an angular bandwidth limit that can be gradually raised during the reconstruction. We study the minimum number of patterns and minimum rotation sampling required for a desired angular and radial resolution. These extensions provide new avenues to improve computational efficiency and speed of convergence, which are critically important considering the very large datasets expected from experiment.
Yasui Yutaka
2011-01-01
Abstract Background Autism spectrum disorders (ASD) are associated with complications of pregnancy that implicate fetal hypoxia (FH); the excess of ASD in the male gender is poorly understood. We tested the hypothesis that risk of ASD is related to fetal hypoxia and investigated whether this effect is greater among males. Methods Provincial delivery records (PDR) identified the cohort of all 218,890 singleton live births in the province of Alberta, Canada, between 01-01-98 and 12-31-04. These were followed up for ASD via ICD-9 diagnostic codes assigned by physician billing until 03-31-08. Maternal and obstetric risk factors, including FH determined from blood tests of acidity (pH), were extracted from the PDR. The binary FH status was missing in approximately half of the subjects. Assuming that characteristics of mothers and pregnancies would be correlated with FH, we used an Expectation-Maximization algorithm to estimate the FH-ASD association, allowing for both missing-at-random (MAR) and specific not-missing-at-random (NMAR) mechanisms. Results Data indicated that there was excess risk of ASD among males who were hypoxic at birth, not materially affected by adjustment for potential confounding due to birth year and socio-economic status: OR 1.13, 95%CI: 0.96, 1.33 (MAR assumption). Limiting the analysis to full-term males, the adjusted OR under specific NMAR assumptions spanned a 95%CI of 1.0 to 1.6. Conclusion Our results are consistent with a weak effect of fetal hypoxia on risk of ASD among males. The EM algorithm is an efficient and flexible tool for modeling missing data in the studied setting.
González, M; Gutiérrez, C; Martínez, R
2012-09-01
A two-dimensional bisexual branching process has recently been presented for the analysis of the generation-to-generation evolution of the number of carriers of a Y-linked gene. In this model, preference of females for males with a specific genetic characteristic is assumed to be determined by an allele of the gene. It has been shown that the behavior of this kind of Y-linked gene is strongly related to the reproduction law of each genotype. In practice, the corresponding offspring distributions are usually unknown, and it is necessary to develop their estimation theory in order to determine the natural selection of the gene. Here we deal with the estimation problem for the offspring distribution of each genotype of a Y-linked gene when the only observable data are each generation's total numbers of males of each genotype and of females. We set out the problem in a nonparametric framework and obtain the maximum likelihood estimators of the offspring distributions using an expectation-maximization algorithm. From these estimators, we also derive the estimators for the reproduction mean of each genotype and forecast the distribution of the future population sizes. Finally, we check the accuracy of the algorithm by means of a simulation study.
The PX-EM algorithm for fast stable fitting of Henderson's mixed model
Van Dyk David A
2000-03-01
Abstract This paper presents procedures for implementing the PX-EM algorithm of Liu, Rubin and Wu to compute REML estimates of variance-covariance components in Henderson's linear mixed models. The class of models considered encompasses several correlated random factors having the same vector length, e.g., as in random regression models for longitudinal data analysis and in sire-maternal grandsire models for genetic evaluation. Numerical examples are presented to illustrate the procedures. Much better results in terms of convergence characteristics (number of iterations and time required for convergence) are obtained for PX-EM relative to the basic EM algorithm in the random regression case.
Riediger Michael L. B.
2005-01-01
In this paper, we consider the issue of blind detection of Alamouti-type differential space-time (ST) modulation in static Rayleigh fading channels. We focus our attention on a π/2-shifted BPSK constellation, introducing a novel transformation to the received signal such that this binary ST modulation, which has second-order transmit diversity, is equivalent to QPSK modulation with second-order receive diversity. This equivalent representation allows us to apply a low-complexity detection technique specifically designed for receive diversity, namely, scalar multiple-symbol differential detection (MSDD). To further increase receiver performance, we apply an iterative expectation-maximization (EM) algorithm which performs joint channel estimation and sequence detection. This algorithm uses minimum mean square estimation to obtain channel estimates and the maximum-likelihood principle to detect the transmitted sequence, followed by differential decoding. With receiver complexity proportional to the observation window length, our receiver can achieve the performance of a coherent maximal ratio combining receiver (with differential decoding) in as few as a single EM receiver iteration, provided that the window size of the initial MSDD is sufficiently long. To further demonstrate that the MSDD is a vital part of this receiver setup, we show that an initial ST conventional differential detector would lead to strange convergence behavior in the EM algorithm.
Machado-Neto, L. V. B.; Cabral, C. V. T.; Diniz, A. S. A. C.; Cortizo, P. C.; Oliveira-Filho, D.
2004-07-01
Maximizing energy-conversion efficiency is essential to the technical and economic sustainability of photovoltaic solar energy systems. This paper studies a power-maximization technique for photovoltaic generators, namely Maximum Power Point Tracking (MPPT). Several MPPT strategies are currently under study; this work consists of the development of an electronic converter prototype for MPPT, including the tracking algorithm implemented in a microcontroller. A simulation of the system was also carried out, a prototype was assembled, and the first results are presented here. (Author)
Chia-Feng Lu
Automatic identification of various perfusion compartments from dynamic susceptibility contrast magnetic resonance brain images can assist in clinical diagnosis and treatment of cerebrovascular diseases. The principle of segmentation methods is the clustering of bolus transit-time profiles to discern areas of different tissues. However, cerebrovascular diseases may result in delayed and dispersed local perfusion and therefore alter the hemodynamic signal profiles. Assessing the accuracy of the segmentation technique under delayed/dispersed circumstances is critical to accurately evaluating the severity of the vascular disease. In this study, we improved the segmentation method of the expectation-maximization algorithm by using the results of hierarchical clustering on whitened perfusion data as initial parameters for a mixture of multivariate Gaussians model. In addition, Monte Carlo simulations were conducted to evaluate the performance of the proposed method under different levels of delay, dispersion, and noise in signal profiles during tissue segmentation. The proposed method was used to classify brain tissue types using perfusion data from five normal participants, a patient with unilateral stenosis of the internal carotid artery, and a patient with moyamoya disease. Our results showed that normal, delayed, or dispersed hemodynamics can be well differentiated for patients, and therefore the local arterial input function for impaired tissues can be recognized to minimize the error when estimating cerebral blood flow. Furthermore, tissue at risk of infarct and tissue with or without complementary blood supply from the communicating arteries can be identified.
Yang, Liu; Lu, Yinzhi; Zhong, Yuanchang; Wu, Xuegang; Yang, Simon X
2015-12-26
Energy resource limitation is a severe problem in traditional wireless sensor networks (WSNs) because it restricts the lifetime of network. Recently, the emergence of energy harvesting techniques has brought with them the expectation to overcome this problem. In particular, it is possible for a sensor node with energy harvesting abilities to work perpetually in an Energy Neutral state. In this paper, a Multi-hop Energy Neutral Clustering (MENC) algorithm is proposed to construct the optimal multi-hop clustering architecture in energy harvesting WSNs, with the goal of achieving perpetual network operation. All cluster heads (CHs) in the network act as routers to transmit data to base station (BS) cooperatively by a multi-hop communication method. In addition, by analyzing the energy consumption of intra- and inter-cluster data transmission, we give the energy neutrality constraints. Under these constraints, every sensor node can work in an energy neutral state, which in turn provides perpetual network operation. Furthermore, the minimum network data transmission cycle is mathematically derived using convex optimization techniques while the network information gathering is maximal. Simulation results show that our protocol can achieve perpetual network operation, so that the consistent data delivery is guaranteed. In addition, substantial improvements on the performance of network throughput are also achieved as compared to the famous traditional clustering protocol LEACH and recent energy harvesting aware clustering protocols.
A Hybrid Aggressive Space Mapping Algorithm for EM Optimization
Bakr, M.; Bandler, J. W.; Georgieva, N.;
1999-01-01
We present a novel, Hybrid Aggressive Space Mapping (HASM) optimization algorithm. HASM is a hybrid approach exploiting both the Trust Region Aggressive Space Mapping (TRASM) algorithm and direct optimization. It does not assume that the final space-mapped design is the true optimal design and is...
Al-Jabr, Ahmad Ali
2013-03-01
In this paper, a finite-difference time-domain (FDTD) algorithm for simulating propagation of EM waves in anisotropic material is presented. The algorithm is based on the auxiliary differential equation and the general polarization formulation. In anisotropic materials, electric fields are coupled and elements in the permittivity tensor are, in general, multiterm dispersive. The presented algorithm resolves the field coupling using a formulation based on electric polarizations. It also offers a simple procedure for the treatment of multiterm dispersion in the FDTD scheme. The algorithm is tested by simulating wave propagation in 1-D magnetized plasma, showing excellent agreement with analytical solutions. Extension of the algorithm to multidimensional structures is straightforward. The presented algorithm is efficient and simple compared to other algorithms found in the literature. © 2012 IEEE.
Use of the AIC with the EM algorithm: A demonstration of a probability model selection technique
Glosup, J.G.; Axelrod, M.C. [Lawrence Livermore National Lab., CA (United States)]
1994-11-15
The problem of discriminating between two potential probability models, a Gaussian distribution and a mixture of Gaussian distributions, is considered. The focus of our interest is a case where the models are potentially non-nested and the parameters of the mixture model are estimated through the EM algorithm. The AIC, which is frequently used as a criterion for discriminating between non-nested models, is modified to work with the EM algorithm and is shown to provide a model selection tool for this situation. A particular problem involving an infinite mixture distribution known as Middleton's Class A model is used to demonstrate the effectiveness and limitations of this method.
Papaconstadopoulos, P; Levesque, I R; Maglieri, R; Seuntjens, J
2016-02-07
Direct determination of the source intensity distribution of clinical linear accelerators is still a challenging problem for small field beam modeling. Current techniques most often involve special equipment and are difficult to implement in the clinic. In this work we present a maximum-likelihood expectation-maximization (MLEM) approach to the source reconstruction problem utilizing small fields and a simple experimental set-up. The MLEM algorithm iteratively ray-traces photons from the source plane to the exit plane and extracts corrections based on photon fluence profile measurements. The photon fluence profiles were determined by dose profile film measurements in air using a high density thin foil as build-up material and an appropriate point spread function (PSF). The effect of other beam parameters and scatter sources was minimized by using the smallest field size ([Formula: see text] cm^2). The source occlusion effect was reproduced by estimating the position of the collimating jaws during this process. The method was first benchmarked against simulations for a range of typical accelerator source sizes. The sources were reconstructed with an accuracy better than 0.12 mm in the full width at half maximum (FWHM) relative to the respective electron sources incident on the target. The estimated jaw positions agreed within 0.2 mm with the expected values. The reconstruction technique was also tested against measurements on a Varian Novalis Tx linear accelerator and compared to a previously commissioned Monte Carlo model. The reconstructed FWHM of the source agreed within 0.03 mm and 0.11 mm with the commissioned electron source in the crossplane and inplane orientations, respectively. The impact of the jaw positioning, experimental, and PSF uncertainties on the reconstructed source distribution was evaluated, with the former presenting the dominant effect.
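The MLEM fixed-point iteration at the heart of reconstructions like the one above has a compact generic form, x ← x · Aᵀ(y / Ax) / Aᵀ1. The sketch below is a toy stand-in, not the paper's ray-tracing geometry: the system matrix A is an assumed Gaussian blur and the "source" is a synthetic 1-D spike of my own construction.

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """Generic MLEM reconstruction: x <- x * A^T(y / Ax) / A^T 1.

    A: (m, n) nonnegative system matrix; y: (m,) measured data.
    The multiplicative update keeps the estimate nonnegative.
    """
    x = np.ones(A.shape[1])           # flat nonnegative start
    sens = A.sum(axis=0)              # sensitivity term A^T 1
    for _ in range(n_iter):
        proj = A @ x                  # forward projection
        ratio = y / np.maximum(proj, 1e-12)
        x = x * (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# toy example: blur a 1-D point-like "source" and recover it
n = 20
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
A = np.exp(-0.5 * (i - j) ** 2 / 2.0)   # assumed Gaussian blur kernel
x_true = np.zeros(n)
x_true[8] = 1.0                          # point-like source at index 8
y = A @ x_true                           # noiseless measurements
x_hat = mlem(A, y)
```

On noiseless data the iteration steadily sharpens the flat start back toward the spike; with noisy data one would stop early or regularize, as the abstract's benchmarking against simulations suggests.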
Adachi, Kohei
2013-01-01
Rubin and Thayer ("Psychometrika," 47:69-76, 1982) proposed the EM algorithm for exploratory and confirmatory maximum likelihood factor analysis. In this paper, we prove the following fact: the EM algorithm always gives a proper solution with positive unique variances and factor correlations with absolute values that do not exceed one,…
A Hybrid Aggressive Space Mapping Algorithm for EM Optimization
Bakr, Mohamed H.; Bandler, John W.; Georgieva, N.;
1999-01-01
We propose a novel hybrid aggressive space-mapping (HASM) optimization algorithm. HASM exploits both the trust-region aggressive space-mapping (TRASM) strategy and direct optimization. Severe differences between the coarse and fine models and nonuniqueness of the parameter extraction procedure ma...
An Efficient Forward-Reverse EM Algorithm for Statistical Inference in Stochastic Reaction Networks
Bayer, Christian
2016-01-06
In this work [1], we present an extension of the forward-reverse algorithm by Bayer and Schoenmakers [2] to the context of stochastic reaction networks (SRNs). We then apply this bridge-generation technique to the statistical inference problem of approximating the reaction coefficients based on discretely observed data. To this end, we introduce an efficient two-phase algorithm in which the first phase is deterministic and intended to provide a starting point for the second phase, the Monte Carlo EM algorithm.
Unsupervised classification algorithm based on EM method for polarimetric SAR images
Fernández-Michelli, J. I.; Hurtado, M.; Areta, J. A.; Muravchik, C. H.
2016-07-01
In this work we develop an iterative classification algorithm using complex Gaussian mixture models for polarimetric complex SAR data. It is an unsupervised algorithm that requires neither training data nor an initial set of classes. Additionally, it determines the model order from the data, which allows representing the data structure with minimum complexity. The algorithm consists of four steps: initialization, model selection, refinement, and smoothing. After a simple initialization stage, the EM algorithm is iteratively applied in the model selection step to compute the model order and an initial classification for the refinement step. The refinement step uses Classification EM (CEM) to reach the final classification, and the smoothing stage improves the results by means of non-linear filtering. The algorithm is applied to both simulated and real Single Look Complex data from the EMISAR mission and compared with the Wishart classification method. We use the confusion matrix and the kappa statistic for the comparison on simulated data, whose ground truth is known, and apply the Davies-Bouldin index to compare the two classifications on real data. The results obtained for both types of data validate our algorithm and show that its performance is comparable to Wishart's in terms of classification quality.
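The Classification EM (CEM) refinement mentioned in this abstract replaces the soft E-step with a hard assignment. A minimal sketch for scalar data (the paper works with complex polarimetric covariance models; this only shows the generic mechanism, on synthetic data of my own):

```python
import numpy as np

def cem_1d(x, mu, n_iter=50):
    """Classification EM: a hard C-step (assign each point to its most
    likely class) followed by an M-step on the resulting partition."""
    mu = np.asarray(mu, dtype=float)
    var = np.full(mu.shape, x.var())
    labels = np.zeros(len(x), dtype=int)
    for _ in range(n_iter):
        # C-step: maximum-likelihood hard assignment per point
        logp = -0.5 * np.log(var) - (x[:, None] - mu) ** 2 / (2 * var)
        labels = np.argmax(logp, axis=1)
        # M-step: per-class maximum-likelihood parameter updates
        for k in range(len(mu)):
            sel = x[labels == k]
            if len(sel) > 1:
                mu[k] = sel.mean()
                var[k] = sel.var() + 1e-9
    return labels, mu

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 0.5, 200), rng.normal(5, 0.5, 200)])
labels, mu = cem_1d(x, mu=[x.min(), x.max()])
```

Because the C-step commits to a single class, CEM converges quickly and yields a classification directly, at the cost of a slight bias relative to full EM, which is why it suits the refinement stage rather than the model-selection stage.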
Plubtieng Somyot
2009-01-01
We introduce an iterative scheme for finding a common element of the solution set of a maximal monotone operator and the solution set of the variational inequality problem for an inverse strongly-monotone operator in a uniformly smooth and uniformly convex Banach space, and then we prove weak and strong convergence theorems by using the notion of generalized projection. The results presented in this paper extend and improve the corresponding results of Kamimura et al. (2004) and Iiduka and Takahashi (2008). Finally, we apply our convergence theorem to the convex minimization problem, the problem of finding a zero point of a maximal monotone operator, and the complementarity problem.
A New Hybrid Algorithm for Influence Maximization in Social Networks
田家堂; 王轶彤; 冯小军
2011-01-01
Influence maximization is the problem of finding a small subset of nodes (target set) in a social network that maximizes the spread of influence. This optimization problem is NP-hard under several of the most widely studied diffusion models and is challenging even for current online social networks, which contain both positive and negative relations. Kempe and Kleinberg proposed a natural hill-climbing greedy algorithm that chooses the nodes providing a good marginal influence. This greedy algorithm achieves a large spread of influence, but it is very costly, cannot be applied to large social networks, and cannot guarantee the best influence spread. In this paper, we propose a framework on the linear threshold model and a hybrid potential-influence greedy algorithm (HPG) that makes good use of the "influence accumulation" property of the linear threshold model. Considering the network structure and propagation characteristics, the HPG algorithm first heuristically chooses half of the initial seeds with the biggest potential influence (PI) and then greedily chooses the other half of the initial seeds with the most influence. Experiments are conducted comprehensively on different real datasets (including weighted social
Model Selection Criteria for Missing-Data Problems Using the EM Algorithm.
Ibrahim, Joseph G; Zhu, Hongtu; Tang, Niansheng
2008-12-01
We consider novel methods for the computation of model selection criteria in missing-data problems based on the output of the EM algorithm. The methodology is very general and can be applied to numerous situations involving incomplete data within an EM framework, from covariates missing at random in arbitrary regression models to nonignorably missing longitudinal responses and/or covariates. Toward this goal, we develop a class of information criteria for missing-data problems, called IC(H,Q), which yields the Akaike information criterion and the Bayesian information criterion as special cases. The computation of IC(H,Q) requires an analytic approximation to a complicated function, called the H-function, along with output from the EM algorithm used in obtaining maximum likelihood estimates. The approximation to the H-function leads to a large class of information criteria, called IC(H̃(k),Q). Theoretical properties of IC(H̃(k),Q), including consistency, are investigated in detail. To eliminate the analytic approximation to the H-function, a computationally simpler approximation to IC(H,Q), called IC(Q), is proposed, whose computation depends solely on the Q-function of the EM algorithm. Advantages and disadvantages of IC(H̃(k),Q) and IC(Q) are discussed and examined in detail in the context of missing-data problems. Extensive simulations are given to demonstrate the methodology and examine the small-sample and large-sample performance of IC(H̃(k),Q) and IC(Q) in missing-data problems. An AIDS data set also is presented to illustrate the proposed methodology.
Clustering dynamic textures with the hierarchical EM algorithm for modeling video.
Mumtaz, Adeel; Coviello, Emanuele; Lanckriet, Gert R G; Chan, Antoni B
2013-07-01
Dynamic texture (DT) is a probabilistic generative model, defined over space and time, that represents a video as the output of a linear dynamical system (LDS). The DT model has been applied to a wide variety of computer vision problems, such as motion segmentation, motion classification, and video registration. In this paper, we derive a new algorithm for clustering DT models that is based on the hierarchical EM algorithm. The proposed clustering algorithm is capable of both clustering DTs and learning novel DT cluster centers that are representative of the cluster members in a manner that is consistent with the underlying generative probabilistic model of the DT. We also derive an efficient recursive algorithm for sensitivity analysis of the discrete-time Kalman smoothing filter, which is used as the basis for computing expectations in the E-step of the HEM algorithm. Finally, we demonstrate the efficacy of the clustering algorithm on several applications in motion analysis, including hierarchical motion clustering, semantic motion annotation, and learning bag-of-systems (BoS) codebooks for dynamic texture recognition.
Maximal Clique Percolation Algorithm Based on Neighboring Information
陈端兵; 周玉林; 傅彦
2011-01-01
The maximal clique problem (MCP) is a classical and important combinatorial optimization problem with many prominent applications, for example in information retrieval, signal transmission, computer vision, social networks, and bioinformatics. Researchers have presented many algorithms to solve it using various strategies, such as branch-and-bound, genetic algorithms, simulated annealing, cross entropy, and DNA methods. In this paper, a new clique percolation algorithm is presented based on the neighboring vertices and edges of a clique. Starting from a given clique (initially a single vertex), at each step the algorithm examines all neighboring vertices of the clique and expands it to a larger clique through a neighboring edge. Two large-scale author collaboration networks are used to test the performance of the proposed algorithm, and the clique distribution in large-scale social networks is also discussed. Experimental results demonstrate that the presented algorithm efficiently extracts the maximal cliques in a network.
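The expansion step this abstract describes, growing a clique by repeatedly adding a neighboring vertex adjacent to every current member, is easy to sketch. The adjacency structure below is a toy graph of my own, not from the paper:

```python
def expand_clique(adj, clique):
    """Greedily grow a clique: repeatedly add any vertex adjacent to
    every current member, until no such vertex exists. The result is a
    maximal (not necessarily maximum) clique containing the seed."""
    clique = set(clique)
    grown = True
    while grown:
        grown = False
        for v in sorted(set(adj) - clique):
            if all(v in adj[u] for u in clique):
                clique.add(v)
                grown = True
                break       # restart the scan with the enlarged clique
    return clique

# toy graph: {0,1,2,3} form a clique; 4 hangs off vertex 3
adj = {0: {1, 2, 3}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {0, 1, 2, 4}, 4: {3}}
max_clique = expand_clique(adj, {0})
```

Starting from the single vertex 0, the loop absorbs 1, 2, and 3 in turn, and stops because vertex 4 is not adjacent to all members; the returned set is maximal in the sense that no further vertex can extend it.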
Paul, P.; Bhattacharyya, D.; Turton, R.; Zitney, S.
2012-01-01
Future integrated gasification combined cycle (IGCC) power plants with CO{sub 2} capture will face stricter operational and environmental constraints. Accurate values of relevant states/outputs/disturbances are needed to satisfy these constraints and to maximize the operational efficiency. Unfortunately, a number of these process variables cannot be measured, while a number of them can be measured but have low precision, reliability, or signal-to-noise ratio. In this work, a sensor placement (SP) algorithm is developed for optimal selection of sensor location, number, and type that can maximize the plant efficiency and result in a desired precision of the relevant measured/unmeasured states. The SP algorithm is developed for a selective, dual-stage Selexol-based acid gas removal (AGR) unit for an IGCC plant with pre-combustion CO{sub 2} capture. A comprehensive nonlinear dynamic model of the AGR unit is developed in Aspen Plus Dynamics® (APD) and used to generate a linear state-space model that is used in the SP algorithm. The SP algorithm assumes that an optimal Kalman filter will be implemented in the plant for state and disturbance estimation, with steady-state Kalman filtering and steady-state operation of the plant. The control system is considered to operate based on the estimated states and thereby captures the effects of the SP algorithm on the overall plant efficiency. The optimization problem is solved by a genetic algorithm (GA) considering both linear and nonlinear equality and inequality constraints. Due to the very large number of candidate sets available for sensor placement, and because of the long time it takes to solve the constrained optimization problem, which includes more than 1000 states, the solution of this problem is computationally expensive. To reduce the computation time, parallel computing is performed using the Distributed Computing Server (DCS®) and the Parallel
McLachlan GM
2005-01-01
QTL detection experiments in livestock species commonly use the half-sib design. Each male is mated to a number of females, each female producing a limited number of progeny. Analysis consists of attempting to detect associations between phenotype and genotype measured on the progeny. When family sizes are limiting, experimenters may wish to incorporate as much information as possible into a single analysis. However, combining information across sires is problematic because of incomplete linkage disequilibrium between the markers and the QTL in the population. This study describes formulae for obtaining MLEs via the expectation-maximization (EM) algorithm for use in a multiple-trait, multiple-family analysis. A model specifying a QTL with only two alleles and a common within-sire error variance is assumed. Compared to single-family analyses, power can be improved up to fourfold with multi-family analyses. The accuracy and precision of QTL location estimates are also substantially improved. With small family sizes, the multi-family, multi-trait analyses substantially reduce, but do not totally remove, biases in QTL effect estimates. In situations where multiple QTL alleles are segregating, the multi-family analysis will average out the effects of the different QTL alleles.
Koichi, Shungo; Arisaka, Masaki; Koshino, Hiroyuki; Aoki, Atsushi; Iwata, Satoru; Uno, Takeaki; Satoh, Hiroko
2014-04-28
Computer-assisted chemical structure elucidation has been intensively studied since the first use of computers in chemistry in the 1960s. Most of the existing elucidators use a structure-spectrum database to obtain clues about the correct structure. Such a structure-spectrum database is expected to grow on a daily basis. Hence, the necessity to develop an efficient structure elucidation system that can adapt to the growth of a database has been also growing. Therefore, we have developed a new elucidator using practically efficient graph algorithms, including the convex bipartite matching, weighted bipartite matching, and Bron-Kerbosch maximal clique algorithms. The utilization of the two matching algorithms especially is a novel point of our elucidator. Because of these sophisticated algorithms, the elucidator exactly produces a correct structure if all of the fragments are included in the database. Even if not all of the fragments are in the database, the elucidator proposes relevant substructures that can help chemists to identify the actual chemical structures. The elucidator, called the CAST/CNMR Structure Elucidator, plays a complementary role to the CAST/CNMR Chemical Shift Predictor, and together these two functions can be used to analyze the structures of organic compounds.
Greedy Maximal Scheduling in Wireless Networks
Li, Qiao
2010-01-01
In this paper we consider greedy scheduling algorithms in wireless networks, i.e., the schedules are computed by adding links greedily based on some priority vector. Two special cases are considered: 1) Longest Queue First (LQF) scheduling, where the priorities are computed using queue lengths, and 2) Static Priority (SP) scheduling, where the priorities are pre-assigned. We first propose a closed-form lower bound stability region for LQF scheduling, and discuss the tightness result in some scenarios. We then propose a lower bound stability region for SP scheduling with multiple priority vectors, as well as a heuristic priority assignment algorithm, which is related to the well-known Expectation-Maximization (EM) algorithm. The performance gain of the proposed heuristic algorithm is finally confirmed by simulations.
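The LQF rule this abstract studies, adding links in decreasing queue-length order and skipping any link that conflicts with one already scheduled, fits in a few lines. The conflict graph below is a toy instance of my own, not from the paper:

```python
def lqf_schedule(queues, conflicts):
    """Longest-Queue-First greedy schedule.

    queues: {link: queue length}; conflicts: set of frozenset pairs of
    links that cannot be activated together (interference constraints).
    Returns the set of links scheduled for this slot.
    """
    schedule = set()
    # visit links in decreasing queue-length order
    for link in sorted(queues, key=queues.get, reverse=True):
        no_conflict = all(
            frozenset((link, s)) not in conflicts for s in schedule)
        if queues[link] > 0 and no_conflict:
            schedule.add(link)
    return schedule

queues = {"a": 7, "b": 5, "c": 3, "d": 1}
conflicts = {frozenset(("a", "b")), frozenset(("c", "d"))}
sched = lqf_schedule(queues, conflicts)   # -> {"a", "c"}
```

Here "a" (the longest queue) is scheduled first, "b" is skipped because it conflicts with "a", and "c" is then added; the SP variant the paper analyzes differs only in using a fixed priority vector in place of the queue lengths.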
Finding Maximal Quasiperiodicities in Strings
Brodal, Gerth Stølting; Pedersen, Christian N. S.
2000-01-01
Apostolico and Ehrenfeucht defined the notion of a maximal quasiperiodic substring and gave an algorithm that finds all maximal quasiperiodic substrings in a string of length n in time O(n log² n). In this paper we give an algorithm that finds all maximal quasiperiodic substrings in a string of length n in time O(n log n) and space O(n). Our algorithm uses the suffix tree as the fundamental data structure combined with efficient methods for merging and performing multiple searches in search trees. Besides finding all maximal quasiperiodic substrings, our algorithm also marks the nodes…
Finite sample performance of the E-M algorithm for ranks data modelling
Angela D'Elia
2007-10-01
We check the finite sample performance of the maximum likelihood estimators of the parameters of a mixture distribution recently introduced for modelling ranks/preference data. The estimates are derived by the E-M algorithm and the performance is evaluated from both univariate and bivariate points of view. While the results are generally acceptable as far as bias is concerned, the Monte Carlo experiment shows different behaviour in the estimators' efficiency for the two parameters of the mixture, mainly depending upon their location in the admissible parametric space. Some operative suggestions conclude the paper.
Tiansong Cui
2016-01-01
Dynamic energy pricing provides a promising solution for utility companies to incentivize energy users to perform demand side management in order to minimize their electric bills. Moreover, the emerging decentralized smart grid, which is a likely infrastructure scenario for future electrical power networks, allows energy consumers to select their energy provider from among multiple utility companies in any billing period. This paper thus starts by considering an oligopolistic energy market with multiple non-cooperative (competitive) utility companies, and addresses the problem of determining dynamic energy prices for every utility company in this market based on a modified Bertrand competition model of user behaviors. Two methods of dynamic energy pricing are proposed for a utility company to maximize its total profit. The first method finds the greatest lower bound on the total profit that can be achieved by the utility company, whereas the second method finds the best response of a utility company to the dynamic pricing policies that the other companies have adopted in previous billing periods. To exploit the advantages of each method while compensating for their shortcomings, an adaptive dynamic pricing policy is proposed based on a machine learning technique, which finds a good balance between invocations of the two aforesaid methods. Experimental results show that the adaptive policy results in consistently high profit for the utility company no matter what policies are employed by the other companies.
王璐; 李光春; 乔相伟; 王兆龙; 马涛
2012-01-01
In order to solve the state estimation problem of nonlinear systems when prior noise statistical characteristics are unknown, an adaptive unscented Kalman filter (UKF) based on the maximum likelihood principle and the expectation maximization algorithm is proposed in this paper. In our algorithm, the maximum likelihood principle is used to construct a log likelihood function containing the noise statistical characteristics. The problem of noise estimation then becomes one of maximizing the mean of the log likelihood function, which can be achieved using the expectation maximization algorithm. Finally, an adaptive UKF algorithm with a suboptimal, recursive noise statistics estimator is obtained. Simulation analysis shows that, compared with the traditional UKF, the proposed adaptive UKF algorithm overcomes the decline in filtering accuracy that occurs when prior noise statistical characteristics are unknown, and that the algorithm can estimate the noise statistical parameters online.
Goerg, Georg M
2011-01-01
In this work I propose a frequency domain adaptation of the Expectation Maximization (EM) algorithm to separate a family of sequential observations into classes of similar dynamic structure, which can mean either non-stationary signals of similar shape or stationary signals with a similar auto-covariance function. It does this by viewing the magnitude of the discrete Fourier transform (DFT) of the signals (or the power spectrum) as a probability density/mass function (pdf/pmf) on the unit circle: signals with similar dynamics have similar pdfs; distinct patterns have distinct pdfs. An advantage of this approach is that it does not rely on any parametric form of the dynamic structure, but can be used for non-parametric, robust and model-free classification. Applications to neural spike sorting (non-stationary) and pattern recognition in socio-economic time series (stationary) demonstrate the usefulness and wide applicability of the proposed method.
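The central trick here, normalizing the DFT magnitude into a pmf so that signals can be compared as distributions, is easy to demonstrate. The distance measure and the test signals below are my own illustrative choices, not the paper's EM machinery:

```python
import numpy as np

def spectral_pmf(x):
    """Treat the magnitude of the (real) DFT as a probability mass
    function over frequency bins, after removing the mean."""
    p = np.abs(np.fft.rfft(x - x.mean()))
    return p / p.sum()

def tv_distance(p, q):
    """Total-variation distance between two spectral pmfs."""
    return 0.5 * np.abs(p - q).sum()

t = np.arange(256)
slow1 = np.sin(2 * np.pi * 5 * t / 256)        # 5 cycles
slow2 = np.sin(2 * np.pi * 5 * t / 256 + 0.7)  # same dynamics, phase-shifted
fast = np.sin(2 * np.pi * 40 * t / 256)        # different dynamics
d_same = tv_distance(spectral_pmf(slow1), spectral_pmf(slow2))
d_diff = tv_distance(spectral_pmf(slow1), spectral_pmf(fast))
```

Because the magnitude spectrum discards phase, the two phase-shifted signals have essentially identical pmfs (d_same ≈ 0), while the signal with different dynamics is nearly maximally distant, which is exactly the invariance the abstract exploits for model-free classification.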
Unsupervised Classification of SAR Images using Hierarchical Agglomeration and EM
Kayabol, K.; Krylov, V.; Zerubia, J.; Salerno, E.; Cetin, A.E.; Salvetti, O.
2012-01-01
We implement an unsupervised classification algorithm for high resolution Synthetic Aperture Radar (SAR) images. The algorithm is founded on Classification Expectation-Maximization (CEM). To overcome two drawbacks of EM-type algorithms, namely the initialization and the model order sel
Chow, Sy-Miin; Lu, Zhaohua; Sherwood, Andrew; Zhu, Hongtu
2016-03-01
The past decade has evidenced the increased prevalence of irregularly spaced longitudinal data in social sciences. Clearly lacking, however, are modeling tools that allow researchers to fit dynamic models to irregularly spaced data, particularly data that show nonlinearity and heterogeneity in dynamical structures. We consider the issue of fitting multivariate nonlinear differential equation models with random effects and unknown initial conditions to irregularly spaced data. A stochastic approximation expectation-maximization algorithm is proposed and its performance is evaluated using a benchmark nonlinear dynamical systems model, namely, the Van der Pol oscillator equations. The empirical utility of the proposed technique is illustrated using a set of 24-h ambulatory cardiovascular data from 168 men and women. Pertinent methodological challenges and unresolved issues are discussed.
Molina, F; Aguilera, P; Romero-Barrientos, J; Arellano, H F; Agramunt, J; Medel, J; Morales, J R; Zambra, M
2017-08-04
We present a methodology to obtain the energy distribution of the neutron flux of an experimental nuclear reactor, using multi-foil activation measurements and the Expectation Maximization unfolding algorithm, which is presented as an alternative to well-known unfolding methods such as GRAVEL. Self-shielding flux corrections for energy bin groups were obtained using MCNP6 Monte Carlo simulations. We have made studies at the Dry Tube of RECH-1, obtaining fluxes of 1.5(4)×10^13 cm^-2 s^-1 for the thermal neutron energy region, 1.9(5)×10^12 cm^-2 s^-1 for the epithermal neutron energy region, and 4.3(11)×10^11 cm^-2 s^-1 for the fast neutron energy region.
S.S.L. Rao, A. Shaija, S. Jayaraj
2014-01-01
A mathematical model was developed to investigate water accumulation at the cathode membrane interface by varying different operating parameters such as fuel cell operating temperature and pressure, cathode and anode humidification temperatures, and cathode stoichiometry. The Taguchi optimization methodology is then combined with this model to determine the optimal combination of operating parameters to maximize current density without flooding. Results of analysis of variance (ANOVA) show that fuel cell operating temperature and cathode humidification temperature are the two most significant parameters, contributing 56.07% and 27.89% respectively, and also that a higher fuel cell temperature and a lower cathode humidification temperature are favourable for obtaining the maximum current draw without flooding at the cathode membrane interface. The global optimum of the operating parameters for maximizing current density without flooding was obtained by formulating an optimization problem solved with a genetic algorithm (GA). These results were compared with those obtained using the Taguchi method and were found to be similar and slightly better.
Aher, Sunita B.
2014-01-01
Recommendation systems have been widely used in internet activities, with the aim of presenting important and useful information to the user with little effort. A course recommendation system recommends to students the best combination of courses in an engineering education system; for example, if a student is interested in a course like system programming, then he would likely want to learn the course entitled compiler construction. An algorithm combining two data mining methods, Expectation Maximization clustering and the Apriori association rule algorithm, has been developed. The results of this algorithm are compared with the Apriori association rule algorithm, an existing algorithm in the open source data mining tool Weka.
Brandes, U; Gaertler, M; Goerke, R; Hoefer, M; Nikoloski, Z; Wagner, D
2006-01-01
Several algorithms have been proposed to compute partitions of networks into communities that score high on a graph clustering index called modularity. While publications on these algorithms typically contain experimental evaluations to emphasize the plausibility of results, none of these algorithms has been shown to actually compute optimal partitions. We here settle the unknown complexity status of modularity maximization by showing that the corresponding decision version is NP-complete in the strong sense. As a consequence, any efficient, i.e. polynomial-time, algorithm is only heuristic and yields suboptimal partitions on many instances.
Brüstle, Thomas; Pérotin, Matthieu
2012-01-01
Maximal green sequences are particular sequences of quiver mutations which were introduced by Keller in the context of quantum dilogarithm identities and independently by Cecotti-Cordova-Vafa in the context of supersymmetric gauge theory. Our aim is to initiate a systematic study of these sequences from a combinatorial point of view. Interpreting maximal green sequences as paths in various natural posets arising in representation theory, we prove the finiteness of the number of maximal green sequences for cluster finite quivers, affine quivers and acyclic quivers with at most three vertices. We also give results concerning the possible numbers and lengths of these maximal green sequences. Finally we describe an algorithm for computing maximal green sequences for arbitrary valued quivers which we used to obtain numerous explicit examples that we present.
A structural dynamic factor model for the effects of monetary policy estimated by the EM algorithm
Bork, Lasse
This paper applies the maximum likelihood based EM algorithm to a large-dimensional factor analysis of US monetary policy. Specifically, economy-wide effects of shocks to the US federal funds rate are estimated in a structural dynamic factor model in which 100+ US macroeconomic and financial time...... series are driven by the joint dynamics of the federal funds rate and a few correlated dynamic factors. This paper contains a number of methodological contributions to the existing literature on data-rich monetary policy analysis. Firstly, the identification scheme allows for correlated factor dynamics...... as opposed to the orthogonal factors resulting from the popular principal component approach to structural factor models. Correlated factors are economically more sensible and important for a richer monetary policy transmission mechanism. Secondly, I consider both static factor loadings as well as dynamic...
Discriminative variable selection for clustering with the sparse Fisher-EM algorithm
Bouveyron, Charles
2012-01-01
The interest in variable selection for clustering has increased recently due to the growing need in clustering high-dimensional data. Variable selection allows in particular to ease both the clustering and the interpretation of the results. Existing approaches have demonstrated the efficiency of variable selection for clustering but turn out to be either very time consuming or not sparse enough in high-dimensional spaces. This work proposes to perform a selection of the discriminative variables by introducing sparsity in the loading matrix of the Fisher-EM algorithm. This clustering method has been recently proposed for the simultaneous visualization and clustering of high-dimensional data. It is based on a latent mixture model which fits the data into a low-dimensional discriminative subspace. Three different approaches are proposed in this work to introduce sparsity in the orientation matrix of the discriminative subspace through $\\ell_{1}$-type penalizations. Experimental comparisons with existing approach...
Brendle, Joerg
2016-01-01
We show that, consistently, there can be maximal subtrees of P(ω) and P(ω)/fin of arbitrary regular uncountable size below the size of the continuum. We also show that there are no maximal subtrees of P(ω)/fin with countable levels. Our results answer several questions of Campero, Cancino, Hrusak, and Miranda.
Implementing EM and Viterbi algorithms for Hidden Markov Model in linear memory
Winters-Hilt Stephen
2008-04-01
Background: The Baum-Welch learning procedure for Hidden Markov Models (HMMs) provides a powerful tool for tailoring HMM topologies to data for use in knowledge discovery and clustering. A linear memory procedure recently proposed by Miklós, I. and Meyer, I.M. describes a memory-sparse version of the Baum-Welch algorithm with modifications to the original probabilistic table topologies to make memory use independent of sequence length (and linearly dependent on state number). The original description of the technique has some errors that we amend. We then compare the corrected implementation on a variety of data sets with conventional and checkpointing implementations. Results: We provide a correct recurrence relation for the emission parameter estimate and extend it to parameter estimates of the Normal distribution. To accelerate estimation of the prior state probabilities, and decrease memory use, we reverse the originally proposed forward sweep. We describe the different scaling strategies necessary in all real implementations of the algorithm to prevent underflow. In this paper we also describe our approach to a linear memory implementation of the Viterbi decoding algorithm (with linearity in the sequence length, while memory use is approximately independent of state number). We demonstrate the use of the linear memory implementation on an extended Duration Hidden Markov Model (DHMM) and on an HMM with a spike detection topology. Comparing the various implementations of the Baum-Welch procedure, we find that the checkpointing algorithm produces the best overall tradeoff between memory use and speed. In cases where sequence length is very large (for Baum-Welch) or state number is very large (for Viterbi), the linear memory methods outlined may offer some utility. Conclusion: Our performance-optimized Java implementations of the Baum-Welch algorithm are available at http://logos.cs.uno.edu/~achurban. The described method and implementations will aid
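The scaling strategy the abstract calls necessary in all real implementations to prevent underflow can be illustrated on the forward pass of a discrete HMM. This is a generic textbook sketch (toy two-state model, not the authors' Java code): each alpha column is normalized to sum to one, and the log of the normalizers accumulates into the log-likelihood.

```python
import math

def forward_scaled(obs, A, B, init):
    """Scaled forward pass for a discrete HMM; returns the log-likelihood.

    A[i][j]: transition probability, B[i][k]: probability of emitting
    symbol k in state i, init[i]: initial state probability. Normalizing
    each alpha column keeps the values in [0, 1], avoiding the underflow
    that unscaled products suffer on long sequences.
    """
    n = len(init)
    alpha = [init[i] * B[i][obs[0]] for i in range(n)]
    c = sum(alpha)
    alpha = [a / c for a in alpha]
    log_lik = math.log(c)
    for t in range(1, len(obs)):
        alpha = [B[j][obs[t]] * sum(alpha[i] * A[i][j] for i in range(n))
                 for j in range(n)]
        c = sum(alpha)
        alpha = [a / c for a in alpha]
        log_lik += math.log(c)
    return log_lik

# Toy two-state, two-symbol HMM.
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
init = [0.5, 0.5]
ll = forward_scaled([0, 1, 0, 0, 1], A, B, init)
```

The same column-wise normalizers are reused in the backward pass of Baum-Welch, which is exactly where the strategies compared in the paper differ.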
K B Athreya
2009-09-01
It is shown that (i) every probability density is the unique maximizer of relative entropy in an appropriate class, and (ii) in the class of all pdfs $f$ that satisfy $\int f h_i \, d\mu = \lambda_i$ for $i=1,2,\ldots,k$, the maximizer of entropy is an $f_0$ that is proportional to $\exp(\sum c_i h_i)$ for some choice of $c_i$. An extension of this to a continuum of constraints and many examples are presented.
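A classic instance of result (ii), stated here for concreteness (the notation is an assumption, not the paper's): constraining the mean and the second moment, i.e. taking $h_1(x)=x$ and $h_2(x)=x^2$, forces the entropy maximizer to be Gaussian:

```latex
\[
  f_0(x) \;\propto\; \exp\bigl(c_1 x + c_2 x^2\bigr), \qquad c_2 < 0,
\]
which after normalization is
\[
  f_0(x) \;=\; \frac{1}{\sqrt{2\pi\sigma^2}}
               \exp\!\left(-\frac{(x-\mu)^2}{2\sigma^2}\right),
\]
so the normal law is the maximum-entropy density among all densities with a
given mean $\mu$ and variance $\sigma^2$.
```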
Zanetti, Massimo; Bovolo, Francesca; Bruzzone, Lorenzo
2015-12-01
The problem of estimating the parameters of a Rayleigh-Rice mixture density is often encountered in image analysis (e.g., remote sensing and medical image processing). In this paper, we address this general problem in the framework of change detection (CD) in multitemporal and multispectral images. One widely used approach to CD in multispectral images is based on the change vector analysis. Here, the distribution of the magnitude of the difference image can be theoretically modeled by a Rayleigh-Rice mixture density. However, given the complexity of this model, in applications, a Gaussian-mixture approximation is often considered, which may affect the CD results. In this paper, we present a novel technique for parameter estimation of the Rayleigh-Rice density that is based on a specific definition of the expectation-maximization algorithm. The proposed technique, which is characterized by good theoretical properties, iteratively updates the parameters and does not depend on specific optimization routines. Several numerical experiments on synthetic data demonstrate the effectiveness of the method, which is general and can be applied to any image processing problem involving the Rayleigh-Rice mixture density. In the CD context, the Rayleigh-Rice model (which is theoretically derived) outperforms other empirical models. Experiments on real multitemporal and multispectral remote sensing images confirm the validity of the model by returning significantly higher CD accuracies than those obtained by using the state-of-the-art approaches.
An R function for imputation of missing cells in two-way data sets by EM-AMMI algorithm
Jakub Paderewski
2014-06-01
Various statistical methods for two-way classification data sets (including AMMI and GGE analyses), used in crop science for interpreting genotype-by-environment interaction, require the data to be complete, that is, to have no missing cells. If there are missing cells, however, one might impute them. The paper offers R code for imputing missing values by the EM-AMMI algorithm. In addition, a function to check the repeatability of this algorithm is proposed. This function can be used to evaluate whether the missing data were imputed reliably (unambiguously), which is especially important for small data sets.
Sá, Ana Cravo; Coelho, Carina Marques; Monsanto, Fátima
2014-01-01
Objective of the study: to compare the performance of the Pencil Beam Convolution (PBC) algorithm and the Analytical Anisotropic Algorithm (AAA) in 3D conformal radiotherapy treatment planning for breast tumors.
Zak, Michail
2008-01-01
A report discusses an algorithm for a new kind of dynamics based on a quantum-classical hybrid, quantum-inspired maximizer. The model is represented by a modified Madelung equation in which the quantum potential is replaced by a different, specially chosen 'computational' potential. As a result, the dynamics attains both quantum and classical properties: it preserves superposition and entanglement of random solutions, while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for quantum-inspired computing. As an application, an algorithm for the global maximum of an arbitrary integrable function is proposed. The idea of the proposed algorithm is very simple: based upon the Quantum-inspired Maximizer (QIM), introduce the positive function to be maximized as the probability density to which the solution is attracted. Then larger values of this function have a higher probability of appearing. Special attention is paid to simulation of integer programming and NP-complete problems. It is demonstrated that the global maximum of an integrable function can be found in polynomial time by using the proposed quantum-classical hybrid. The result is extended to constrained maxima, with applications to integer programming and the TSP (Traveling Salesman Problem).
Maximizing Entropy over Markov Processes
Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis
2013-01-01
computation reduces to finding a model of a specification with highest entropy. Entropy maximization for probabilistic process specifications has not been studied before, even though it is well known in Bayesian inference for discrete distributions. We give a characterization of global entropy of a process...... as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how......
Maximizing entropy over Markov processes
Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis
2014-01-01
computation reduces to finding a model of a specification with highest entropy. Entropy maximization for probabilistic process specifications has not been studied before, even though it is well known in Bayesian inference for discrete distributions. We give a characterization of global entropy of a process...... as a reward function, a polynomial algorithm to verify the existence of a system maximizing entropy among those respecting a specification, a procedure for the maximization of reward functions over Interval Markov Chains and its application to synthesize an implementation maximizing entropy. We show how...
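For a single fixed ergodic Markov chain (rather than the interval-valued specifications treated in the two papers above), the quantity being maximized has the closed form H = Σ_i π_i Σ_j −P_ij log2 P_ij, with π the stationary distribution. A small illustrative sketch (the power-iteration solver and the toy chains are assumptions of this example, not the papers' synthesis procedure):

```python
import math

def entropy_rate(P, iters=200):
    """Entropy rate (bits/step) of an ergodic Markov chain with matrix P.

    H = sum_i pi_i * sum_j -P[i][j] * log2(P[i][j]), where pi is the
    stationary distribution, found here by simple power iteration.
    """
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(iters):  # power iteration toward the stationary vector
        pi = [sum(pi[i] * P[i][j] for i in range(n)) for j in range(n)]
    return sum(pi[i] * sum(-p * math.log2(p) for p in P[i] if p > 0)
               for i in range(n))

# A fair-coin chain attains the maximal rate of 1 bit per step.
H = entropy_rate([[0.5, 0.5], [0.5, 0.5]])
```

An entropy-maximizing implementation in the papers' sense is one whose transition probabilities, chosen within the allowed intervals, make this quantity as large as possible.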
EM-CDKF algorithm and its application to SINS initial alignment
丁国强; 徐洁; 周卫东; 张志艳
2014-01-01
To address the problem that the a priori noise statistics of a nonlinear strapdown inertial navigation system (SINS) are unknown, an adaptive expectation-maximization central difference Kalman filter (EM-CDKF) for SINS initial alignment is constructed on the basis of the central difference Kalman filter (CDKF), using an expectation-maximization steepest-descent gradient method under the maximum likelihood criterion to estimate the unknown noise statistics online. The algorithm builds the log-likelihood function of the system noise statistics from the maximum likelihood criterion and, via the expectation-maximization steepest-descent method, converts the estimation of the noise statistics into maximization of the expected log-likelihood, so that the process and measurement noises can be evaluated in an online recursive fashion. Simulation experiments on SINS initial alignment with a large azimuth misalignment angle indicate that, compared with the basic CDKF, the adaptive EM-CDKF effectively overcomes the degradation, and even divergence, of filtering accuracy that the basic algorithm suffers when the noise priors are unknown, and achieves online recursive estimation of the system noise statistics.
Razali, Azhani Mohd, E-mail: azhani@nuclearmalaysia.gov.my; Abdullah, Jaafar, E-mail: jaafar@nuclearmalaysia.gov.my [Plant Assessment Technology (PAT) Group, Industrial Technology Division, Malaysian Nuclear Agency, Bangi, 43000 Kajang (Malaysia)
2015-04-29
Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical applications, and is one of the medical imaging modalities that have made the diagnosis and treatment of disease possible. However, the SPECT technique is not limited to the medical sector. Much work has been carried out to adapt the same concept, using high-energy photon emission, to diagnose process malfunctions in critical industrial systems such as chemical reaction engineering research laboratories, as well as the oil and gas, petrochemical and petrochemical refining industries. Motivated by the vast applications of the SPECT technique, this work studies the application of SPECT to a Pebble Bed Reactor (PBR) using a numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work compares two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time, compared to the Expectation Maximization Algorithm.
Garbarino, Sara; Massone, Anna Maria; Sannino, Alessia; Boselli, Antonella; Wang, Xuan; Spinelli, Nicola; Piana, Michele
2016-01-01
We consider the problem of retrieving the aerosol extinction coefficient from Raman lidar measurements. This is an ill-posed inverse problem that needs regularization, and we propose to use the Expectation-Maximization (EM) algorithm to provide stable solutions. Indeed, EM is an iterative algorithm that imposes a positivity constraint on the solution, and provides regularization if iterations are stopped early enough. We describe the algorithm and propose a stopping criterion inspired by a statistical principle. We then discuss its properties concerning the spatial resolution. Finally, we validate the proposed approach by using both synthetic data and experimental measurements; we compare the reconstructions obtained by EM with those obtained by the Tikhonov method, by the Levenberg-Marquardt method, as well as those obtained by combining data smoothing and numerical derivation.
Garbarino, Sara; Sorrentino, Alberto; Massone, Anna Maria; Sannino, Alessia; Boselli, Antonella; Wang, Xuan; Spinelli, Nicola; Piana, Michele
2016-09-19
We consider the problem of retrieving the aerosol extinction coefficient from Raman lidar measurements. This is an ill-posed inverse problem that needs regularization, and we propose to use the Expectation-Maximization (EM) algorithm to provide stable solutions. Indeed, EM is an iterative algorithm that imposes a positivity constraint on the solution, and provides regularization if iterations are stopped early enough. We describe the algorithm and propose a stopping criterion inspired by a statistical principle. We then discuss its properties concerning the spatial resolution. Finally, we validate the proposed approach by using both synthetic data and experimental measurements; we compare the reconstructions obtained by EM with those obtained by the Tikhonov method, by the Levenberg-Marquardt method, as well as those obtained by combining data smoothing and numerical derivation.
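For a linear model y ≈ Ax with x ≥ 0, the positivity-preserving EM iteration with early stopping described in the two abstracts above takes the classic multiplicative (Richardson-Lucy-type) form. The following is a hedged sketch on a toy operator; the matrix, data, and iteration count are illustrative, not the authors' lidar setup:

```python
import numpy as np

def em_nonneg(A, y, n_iter=100):
    """Multiplicative EM iteration for y ~ A x with x >= 0.

    Each update multiplies x by a nonnegative factor, so positivity is
    preserved automatically; stopping the iterations early acts as
    regularization, as the abstracts describe.
    """
    x = np.ones(A.shape[1])
    s = A.sum(axis=0)  # column sums: the EM normalization factor
    for _ in range(n_iter):
        x *= (A.T @ (y / (A @ x))) / s
    return x

# Toy smoothing operator and a known positive source.
A = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
x_true = np.array([1.0, 2.0, 0.5])
x_hat = em_nonneg(A, A @ x_true, n_iter=500)
```

With noisy data the iterates first approach the true solution and then drift toward a noise-fitting one, which is why the stopping rule proposed in the papers matters.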
Langlois, Robert; Frank, Joachim
2011-09-01
Many cryo-EM datasets are heterogeneous, stemming from molecules undergoing conformational changes. The need to characterize each of the substates with sufficient resolution entails a large increase in the data flow and motivates the development of more effective automated particle selection algorithms. Concepts and procedures from the machine-learning field are increasingly employed toward this end. However, a review of recent literature has revealed a discrepancy in the terminology of the performance scores used to compare particle selection algorithms, which has subsequently led to ambiguities in the meaning of claimed performance. In an attempt to curtail the perpetuation of this confusion and to disentangle past mistakes, we review the performance of published particle selection efforts with a set of explicitly defined performance scores, using the terminology established and accepted within the field of machine learning.
Multiscale Unsupervised Segmentation of SAR Imagery Using the Genetic Algorithm.
Wen, Xian-Bin; Zhang, Hua; Jiang, Ze-Tao
2008-03-12
A valid unsupervised and multiscale segmentation of synthetic aperture radar (SAR) imagery is proposed through a combination (GA-EM) of the expectation-maximization (EM) algorithm with the genetic algorithm (GA). The mixture multiscale autoregressive (MMAR) model is introduced to characterize and exploit the scale-to-scale statistical variations, and the statistical variations within the same scale, in SAR imagery due to radar speckle, and a segmentation method is given by combining the GA with the EM algorithm. This algorithm is capable of selecting the number of components of the model using the minimum description length (MDL) criterion. Our approach benefits from the properties of both the genetic and the EM algorithm by combining them into a single procedure. The population-based stochastic search of the genetic algorithm explores the search space more thoroughly than the EM method. Therefore, our algorithm enables escaping from locally optimal solutions, since it becomes less sensitive to its initialization. Experimental results based on the proposed approach are given and compared to those of the EM algorithm. The experiments on SAR images show that GA-EM outperforms the EM method.
王继霞; 刘次华
2009-01-01
In this paper, a geometric response and normal covariance model for the missing data is assumed. We fit the model using the Monte Carlo EM (expectation-maximization) algorithm: the E-step uses the Metropolis-Hastings algorithm to generate a sample for the missing data, and the M-step uses Newton-Raphson iteration to maximize the likelihood function. Asymptotic variances and the standard errors of the maximum likelihood estimates of the parameters are derived using the observed Fisher information.
Image Fusion Based on the EM Algorithm and Discrete Wavelet Frame
刘刚; 敬忠良; 孙韶媛
2005-01-01
The discrete wavelet transform has become an attractive tool for fusing multisensor images. This paper investigates the discrete wavelet frame transform. A major advantage of this method over the discrete wavelet transform is that it is aliasing-free and translation-invariant. The discrete wavelet frame (DWF) transform is used to decompose the registered images into a multiscale representation with low-frequency and high-frequency bands. The low-frequency band is normalized and fused using the expectation-maximization (EM) algorithm. The informative importance measure is applied to the high-frequency band. The final fused image is obtained by taking the inverse transform on the composite coefficient representations. Experiments show that the proposed method is more effective than conventional image fusion methods.
Expectation-Maximization Method for EEG-Based Continuous Cursor Control
Yixiao Wang
2007-01-01
Developing effective learning algorithms for continuous prediction of cursor movement from EEG signals is a challenging research issue in brain-computer interfaces (BCI). In this paper, we propose a novel statistical approach based on the expectation-maximization (EM) method to learn the parameters of a classifier for EEG-based cursor control. To train a classifier for continuous prediction, trials in the training dataset are first divided into segments. The difficulty is that the actual intention (label) at each time interval (segment) is unknown. To handle the uncertainty of the segment labels, we treat the unknown labels as hidden variables in a lower bound on the log posterior and maximize this lower bound via an EM-like algorithm. Experimental results show that the averaged accuracy of the proposed method is among the best.
Gonzaga, Adilson [Sao Paulo Univ., Sao Carlos, SP (Brazil). Escola de Engenharia. Dept. de Engenharia Eletrica; Franca, Celso Aparecido de [Sao Paulo Univ., SP (Brazil). Inst. de Fisica. Dept. de Fisica e Informatica
1996-12-31
Edge detection techniques applied to digital radiographic images are discussed. Several algorithms have been implemented, and the results are displayed to enhance boundaries or hide details. An algorithm applied to a pre-processed image with enhanced contrast is proposed and the results are discussed.
Description and comparison of algorithms for correcting anisotropic magnification in cryo-EM images
Zhao, Jianhua; Benlekbir, Samir; Rubinstein, John L
2015-01-01
Single particle electron cryomicroscopy (cryo-EM) allows for structures of proteins and protein complexes to be determined from images of non-crystalline specimens. Cryo-EM data analysis requires electron microscope images of randomly oriented ice-embedded protein particles to be rotated and translated to allow for coherent averaging when calculating three-dimensional (3-D) structures. Rotation of 2-D images is usually done with the assumption that the magnification of the electron microscope is the same in all directions, a condition that has been found to be untrue with some electron microscopes when used with the settings necessary for cryo-EM with a direct detector device (DDD) camera (Grant and Grigorieff, in preparation). Correction of images by linear interpolation in real space has allowed high-resolution structures to be calculated from cryo-EM images for symmetric particles (Grant and Grigorieff, in preparation). Here we describe and compare a simple real space method and a somewhat more sophisticat...
Image segmentation algorithm combining PCNN with the maximal correlation criterion
邓成锦; 聂仁灿; 周冬明; 赵东风
2011-01-01
Pulse Coupled Neural Network (PCNN) is a new generation of artificial neural network with a biological background that shows excellent performance in image segmentation. However, the problems of parameter estimation and threshold iteration in the PCNN model remain to be resolved. This paper combines the one-dimensional and the two-dimensional maximal correlation criteria to estimate the neuron parameters, automating image segmentation and reducing the computational complexity. Simulation results show that, compared with the related literature, the proposed method improves both the segmentation result and the computational complexity, and has good usability.
Understanding maximal repetitions in strings
Crochemore, Maxime
2008-01-01
The cornerstone of any algorithm computing all repetitions in a string of length n in O(n) time is the fact that the number of runs (or maximal repetitions) is O(n). We give a simple proof of this result. As a consequence of our approach, the stronger result concerning the linearity of the sum of exponents of all runs follows easily.
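For small strings, the runs (maximal repetitions) counted in this result can be enumerated naively; the quadratic-or-worse sketch below is purely illustrative, since the linear-time algorithms the abstract refers to are far more involved:

```python
def runs(s):
    """Naive enumeration of the runs (maximal repetitions) in s.

    A run is a maximal substring s[i:j] whose smallest period p satisfies
    j - i >= 2 * p. The O(n) algorithms rest on the theorem, proved in the
    paper above, that the number of such runs is O(n).
    """
    n = len(s)
    found = set()
    for i in range(n):
        for p in range(1, (n - i) // 2 + 1):
            # Extend the periodicity with period p as far right as it goes.
            j = i + p
            while j < n and s[j] == s[j - p]:
                j += 1
            if j - i >= 2 * p:
                # Left-maximality: s[i-1] must break the period.
                if i == 0 or s[i - 1] != s[i - 1 + p]:
                    # Record each interval only with its smallest period.
                    if not any((i, j, q) in found for q in range(1, p)):
                        found.add((i, j, p))
    return sorted(found)

# "mississippi" contains four runs: three doubled letters and "ississi".
rs = runs("mississippi")
```

Each triple is (start, end, period); for "mississippi" the period-3 run is the substring "ississi" at positions 1 through 7.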
Genetic Algorithm in Chemistry
Paulo Augusto da Costa Filho
1999-06-01
Genetic algorithm is an optimization technique based on Darwin's theory of evolution. In recent years its application in chemistry has increased significantly owing to its special suitability for the optimization of complex systems. The basic principles, and some further modifications implemented to improve its performance, are presented, as well as a historical overview. A numerical example of function optimization is also shown to demonstrate how the algorithm works in an optimization process. Finally, several chemistry applications realized to date are discussed, to serve as a reference for future applications in this field.
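A minimal real-coded genetic algorithm of the kind the review's numerical example illustrates might look as follows; the operator choices (tournament selection, blend crossover, Gaussian mutation) and all parameter values are illustrative assumptions, not taken from the paper:

```python
import random

def ga_maximize(f, lo, hi, pop_size=40, gens=60, seed=1):
    """Maximize f on [lo, hi] with a tiny real-coded genetic algorithm."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        new = []
        for _ in range(pop_size):
            # Tournament selection: the fittest of three random individuals.
            a = max(rng.sample(pop, 3), key=f)
            b = max(rng.sample(pop, 3), key=f)
            w = rng.random()
            child = w * a + (1 - w) * b              # blend crossover
            child += rng.gauss(0, 0.05 * (hi - lo))  # Gaussian mutation
            new.append(min(max(child, lo), hi))      # clip to the bounds
        pop = new
    return max(pop, key=f)

# Maximize a smooth unimodal fitness on [0, 10]; the optimum is at x = 3.
best = ga_maximize(lambda x: -(x - 3.0) ** 2, 0.0, 10.0)
```

In chemistry applications the individual would encode a vector of experimental or model parameters rather than a single real number, but the selection-crossover-mutation loop is the same.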
Numerical Stability Improvements for the Pseudo-Spectral EM PIC Algorithm
Godfrey, Brendan B; Haber, Irving
2013-01-01
The pseudo-spectral analytical time-domain (PSATD) particle-in-cell (PIC) algorithm solves the vacuum Maxwell's equations exactly, has no Courant time-step limit (as conventionally defined), and offers substantial flexibility in plasma and particle beam simulations. It is, however, not free of the usual numerical instabilities, including the numerical Cherenkov instability, when applied to relativistic beam simulations. This paper presents several approaches that, when combined with digital filtering, almost completely eliminate the numerical Cherenkov instability. The paper also investigates the numerical stability of the PSATD algorithm at low beam energies.
王泽飞; 熊晓燕; 苏帅团
2015-01-01
Modal analysis is an important technique in engineering vibration. Compared with traditional experimental modal analysis, operational modal analysis has the advantage of not requiring measurement of the system input. This paper studies the application of the expectation-maximization (EM) algorithm to operational modal analysis; as a full-likelihood method, the EM algorithm possesses certain optimal properties in the statistical sense. An experimental study on a free beam shows that the EM algorithm is effective for operational modal analysis: compared with the stochastic subspace method, it can discard spurious modes and yields a clearer stabilization diagram.
Christensen, Lars P.B.; Larsen, Jan
2006-01-01
A general Variational Bayesian framework for iterative data and parameter estimation for coherent detection is introduced as a generalization of the EM algorithm. Explicit solutions are given for MIMO channel estimation with a Gaussian prior and noise covariance estimation with an inverse-Wishart prior...
CHEN Fei; LUO Lin; JIN Yaqiu
2004-01-01
To automatically detect and analyze surface change in urban areas from multi-temporal SAR images, an algorithm combining two-threshold expectation maximization (EM) and a Markov random field (MRF) is developed. The difference of the SAR images shows the variation of backscattering caused by surface change over the image pixels. Two thresholds are obtained by the EM iterative process and categorize the pixels into three classes: enhanced scattering, reduced scattering, and unchanged regimes. Initialized from the EM result, the iterated conditional modes (ICM) algorithm of the MRF is then used to analyze the detection of contextual change in the urban area. As an example, two ERS-2 SAR images from 1996 and 2002 over Shanghai City are studied.
Bayesian tracking of multiple point targets using expectation maximization
Selvan, Raghavendra
The range of applications where target tracking is useful has grown well beyond the classical military and radar-based tracking applications. With the increasing enthusiasm in autonomous solutions for vehicular and robotics navigation, much of the maneuverability can be provided based on solutions...... the measurements from sensors to choose the best data association hypothesis, from which the estimates of target trajectories can be obtained. In an ideal world, we could maintain all possible data association hypotheses from observing all measurements, and pick the best hypothesis. But, it turns out the number...... joint density is maximized over the data association variables, or over the target state variables, two EM-based algorithms for tracking multiple point targets are derived, implemented and evaluated. In the first algorithm, the data association variable is integrated out, and the target states...
Enders, Craig K.; Peugh, James L.
2004-01-01
Two methods, direct maximum likelihood (ML) and the expectation maximization (EM) algorithm, can be used to obtain ML parameter estimates for structural equation models with missing data (MD). Although the 2 methods frequently produce identical parameter estimates, it may be easier to satisfy missing at random assumptions using EM. However, no…
Models and Algorithms for Tracking Target with Coordinated Turn Motion
Xianghui Yuan
2014-01-01
Tracking a target with coordinated turn (CT) motion is highly dependent on the models and algorithms. First, the widely used models are compared in this paper: the coordinated turn (CT) model with known turn rate, the augmented coordinated turn (ACT) model with Cartesian velocity, the ACT model with polar velocity, the CT model using a kinematic constraint, and the maneuver-centered circular motion model. Then, in the single-model tracking framework, the tracking algorithms for the last four models are compared and suggestions on the choice of models for different practical target tracking problems are given. Finally, in the multiple-model (MM) framework, an algorithm based on the expectation-maximization (EM) algorithm is derived, in both batch and recursive forms. Compared with the widely used interacting multiple model (IMM) algorithm, the EM algorithm shows its effectiveness.
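For reference, the known-turn-rate CT model listed first has a closed-form state-transition matrix over the state [x, vx, y, vy]. The sketch below is the standard textbook form (not taken from the paper); it reduces to the constant-velocity model as the turn rate goes to zero:

```python
import numpy as np

def ct_transition(omega, T):
    """Constant-turn-rate transition matrix for state [x, vx, y, vy] (sketch).

    omega: known turn rate [rad/s]; T: sampling period [s].
    The omega -> 0 limit is the constant-velocity (CV) model."""
    if abs(omega) < 1e-9:
        s, c1, c, s2 = T, 0.0, 1.0, 0.0          # CV limit
    else:
        s = np.sin(omega * T) / omega            # sin(wT)/w
        c1 = (1 - np.cos(omega * T)) / omega     # (1-cos(wT))/w
        c = np.cos(omega * T)
        s2 = np.sin(omega * T)
    return np.array([[1, s,  0, -c1],
                     [0, c,  0, -s2],
                     [0, c1, 1,  s],
                     [0, s2, 0,  c]])
```

A pure turn rotates the velocity vector without changing its magnitude, which is a quick sanity check on the matrix.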
Suppression of EM Fields using Active Control Algorithms and MIMO Antenna System
A. Mohammed
2004-09-01
Active methods for attenuating acoustic pressure fields have been successfully used in many applications. In this paper we investigate some of these active control methods in combination with a MIMO antenna system in order to assess their validity and performance when applied to electromagnetic fields. The application evaluated in this paper is a model of a mobile phone equipped with one ordinary transmitting antenna and two actuator-antennas whose purpose is to reduce the electromagnetic field in a specific region of space (e.g. at the human head). Simulation results show the promise of using adaptive active control algorithms and a MIMO system to attenuate the electromagnetic field power density.
Geoff McLachlan
2013-11-01
The usefulness of the proposed algorithm is demonstrated in three applications to real datasets. The first example illustrates the use of the main function fmmst in the package by fitting an MST distribution to a bivariate unimodal flow cytometric sample. The second example fits a mixture of MST distributions to the Australian Institute of Sport (AIS) data, and demonstrates that EMMIXuskew can provide better clustering results than mixtures with restricted MST components. In the third example, EMMIXuskew is applied to classify cells in a trivariate flow cytometric dataset. Comparisons with some other available methods suggest that EMMIXuskew achieves a lower misclassification rate with respect to the labels given by benchmark gating analysis.
Qin, Jing; Garcia, Tanya P; Ma, Yanyuan; Tang, Ming-Xin; Marder, Karen; Wang, Yuanjia
2014-01-01
In certain genetic studies, clinicians and genetic counselors are interested in estimating the cumulative risk of a disease for individuals with and without a rare deleterious mutation. Estimating the cumulative risk is difficult, however, when the estimates are based on family history data. Often, the genetic mutation status in many family members is unknown; instead, only estimated probabilities of a patient having a certain mutation status are available. Also, ages of disease-onset are subject to right censoring. Existing methods to estimate the cumulative risk using such family-based data only provide estimation at individual time points, and are not guaranteed to be monotonic, nor non-negative. In this paper, we develop a novel method that combines Expectation-Maximization and isotonic regression to estimate the cumulative risk across the entire support. Our estimator is monotonic, satisfies self-consistent estimating equations, and has high power in detecting differences between the cumulative risks of different populations. Application of our estimator to a Parkinson's disease (PD) study provides the age-at-onset distribution of PD in PARK2 mutation carriers and non-carriers, and reveals a significant difference between the distribution in compound heterozygous carriers compared to non-carriers, but not between heterozygous carriers and non-carriers.
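The isotonic-regression half of the estimator above is typically computed with the pool-adjacent-violators algorithm (PAVA), which enforces monotonicity by repeatedly averaging adjacent violating blocks. A generic least-squares sketch (not the authors' implementation, which additionally handles the EM weights and censoring):

```python
def pava(y, w=None):
    """Pool Adjacent Violators: weighted least-squares nondecreasing fit (sketch)."""
    n = len(y)
    w = [1.0] * n if w is None else list(w)
    blocks = []  # each block: [mean, total weight, number of points]
    for yi, wi in zip(y, w):
        blocks.append([yi, wi, 1])
        # Merge backwards while the monotonicity constraint is violated.
        while len(blocks) > 1 and blocks[-2][0] > blocks[-1][0]:
            m2, w2, c2 = blocks.pop()
            m1, w1, c1 = blocks.pop()
            wt = w1 + w2
            blocks.append([(m1 * w1 + m2 * w2) / wt, wt, c1 + c2])
    out = []
    for m, _, c in blocks:
        out.extend([m] * c)  # expand each block back to its points
    return out
```

For example, `pava([1, 3, 2, 4])` pools the violating pair (3, 2) into their average, yielding `[1, 2.5, 2.5, 4]`.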
A Community Mining Algorithm Based on Expansion of Maximal-complete Graph
刘井莲; 赵卫绩; 佟良
2015-01-01
Community mining is an important task in complex network analysis. Although many good community mining algorithms exist, most discover cohesive social groups based only on the connection relationships between nodes, whereas the nodes of real networks generally have different behaviors and influences. Taking full account of the tight interconnection of nodes within a community and of the differing influence of nodes, we propose a two-stage mining algorithm based on the expansion of maximal complete graphs. In the first stage, starting from sub-group cohesion and the degree centrality of nodes, we select k dispersed, cohesive and influential maximal complete graphs as the initial communities. In the second stage, based on a local community modularity expansion method, overlapping nodes and nodes outside the initial communities are assigned to the communities to which they are most closely connected. Finally, experimental results show that the method is effective in detecting community structure.
Profit maximization mitigates competition
Dierker, Egbert; Grodal, Birgit
1996-01-01
We consider oligopolistic markets in which the notion of shareholders' utility is well-defined and compare the Bertrand-Nash equilibria in case of utility maximization with those under the usual profit maximization hypothesis. Our main result states that profit maximization leads to less price...... competition than utility maximization. Since profit maximization tends to raise prices, it may be regarded as beneficial for the owners as a whole. Moreover, if profit maximization is a good proxy for utility maximization, then there is no need for a general equilibrium analysis that takes the distribution...... of profits among consumers fully into account and partial equilibrium analysis suffices...
Fu, J C; Chen, C C; Chai, J W; Wong, S T C; Li, I C
2010-06-01
We propose an automatic hybrid image segmentation model that integrates the statistical expectation maximization (EM) model and the spatial pulse coupled neural network (PCNN) for brain magnetic resonance imaging (MRI) segmentation. In addition, an adaptive mechanism is developed to fine-tune the PCNN parameters. The EM model serves two functions: evaluation of the PCNN image segmentation and adaptive adjustment of the PCNN parameters for optimal segmentation. To evaluate the performance of the adaptive EM-PCNN, we use it to segment MR brain images into gray matter (GM), white matter (WM) and cerebrospinal fluid (CSF). The performance of the adaptive EM-PCNN is compared with that of the non-adaptive EM-PCNN, EM, and Bias Corrected Fuzzy C-Means (BCFCM) algorithms. The result is four sets of boundaries for the GM and the brain parenchyma (GM+WM), the two regions of most interest in medical research and clinical applications. Each set of boundaries is compared with the gold standard to evaluate the segmentation performance. The adaptive EM-PCNN significantly outperforms the non-adaptive EM-PCNN, EM, and BCFCM algorithms in gray matter segmentation. In brain parenchyma segmentation, the adaptive EM-PCNN significantly outperforms the BCFCM only. However, the adaptive EM-PCNN is better than the non-adaptive EM-PCNN and EM on average. We conclude that of the three approaches, the adaptive EM-PCNN yields the best results for gray matter and brain parenchyma segmentation.
Algebraic curves of maximal cyclicity
Caubergh, Magdalena; Dumortier, Freddy
2006-01-01
The paper deals with analytic families of planar vector fields, studying methods to detect the cyclicity of a non-isolated closed orbit, i.e. the maximum number of limit cycles that can locally bifurcate from it. It is known that this multi-parameter problem can be reduced to a single-parameter one, in the sense that there exist analytic curves in parameter space along which the maximal cyclicity can be attained. In that case one speaks about a maximal cyclicity curve (mcc) in case only the number is considered and of a maximal multiplicity curve (mmc) in case the multiplicity is also taken into account. In view of obtaining efficient algorithms for detecting the cyclicity, we investigate whether such mcc or mmc can be algebraic or even linear depending on certain general properties of the families or of their associated Bautin ideal. In any case by well chosen examples we show that prudence is appropriate.
Maximal lattice free bodies, test sets and the Frobenius problem
Jensen, Anders Nedergaard; Lauritzen, Niels; Roune, Bjarke Hammersholt
Maximal lattice free bodies are maximal polytopes without interior integral points. Scarf initiated the study of maximal lattice free bodies relative to the facet normals in a fixed matrix. In this paper we give an efficient algorithm for computing the maximal lattice free bodies of an integral...... method is inspired by the novel algorithm by Einstein, Lichtblau, Strzebonski and Wagon and the Groebner basis approach by Roune....
L. F. A. Campos
2007-12-01
The use of experimental design for the study of mixtures finds a wide range of applications, both in laboratory research and in industrial development work. This work therefore aims to apply experimental design for mixtures to evaluate the potential of jointly using kaolin processing waste and granite sawing waste for the production of ceramic bricks and tiles. Using the experimental design, formulations were prepared from the raw materials, which were mixed in set proportions, and specimens were produced by extrusion and by uniaxial pressing. The specimens were fired, and their water absorption and flexural rupture modulus were then determined. Mathematical regression models were fitted relating water absorption and rupture modulus to the proportions of the raw materials. The results showed that the experimental design procedure makes it possible to maximize the amount of waste incorporated into formulations for ceramic bricks and tiles: waste contents of up to 50% can be incorporated into formulations for the production of ceramic bricks and of up to 40% into compositions for the production of ceramic tiles.
Walker, Matthew G; Olszewski, Edward W; Sen, Bodhisattva; Woodroofe, Michael
2008-01-01
(abridged) We develop an algorithm for estimating parameters of a distribution sampled with contamination, employing a statistical technique known as ``expectation maximization'' (EM). Given models for both member and contaminant populations, the EM algorithm iteratively evaluates the membership probability of each discrete data point, then uses those probabilities to update parameter estimates for member and contaminant distributions. The EM approach has wide applicability to the analysis of astronomical data. Here we tailor an EM algorithm to operate on spectroscopic samples obtained with the Michigan-MIKE Fiber System (MMFS) as part of our Magellan survey of stellar radial velocities in nearby dwarf spheroidal (dSph) galaxies. These samples are presented in a companion paper and contain discrete measurements of line-of-sight velocity, projected position, and Mg index for ~1000 - 2500 stars per dSph, including some fraction of contamination by foreground Milky Way stars. The EM algorithm quantifies both dSp...
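The iteration described above, alternating between per-point membership probabilities and parameter updates, can be sketched for the simplest case: a Gaussian member population plus a uniform contaminant on a known velocity window. This is a toy stand-in for the paper's foreground model, not the authors' code; the uniform contaminant and all variable names are illustrative assumptions:

```python
import numpy as np

def em_membership(v, a, b, iters=200):
    """EM for a Gaussian member population plus a uniform contaminant on [a, b]
    (illustrative sketch). Returns membership probabilities and member (mu, sigma)."""
    mu, sig, f = np.median(v), v.std(), 0.5   # f = member fraction
    unif = 1.0 / (b - a)                      # contaminant density (fixed)
    for _ in range(iters):
        # E-step: membership probability of each data point.
        g = np.exp(-(v - mu) ** 2 / (2 * sig ** 2)) / (sig * np.sqrt(2 * np.pi))
        p = f * g / (f * g + (1 - f) * unif)
        # M-step: weighted updates of fraction, mean, dispersion.
        f = p.mean()
        mu = (p * v).sum() / p.sum()
        sig = np.sqrt((p * (v - mu) ** 2).sum() / p.sum())
    return p, mu, sig
```

On well-separated data the membership probabilities converge to values near 0 or 1, which is how the algorithm "quantifies" membership without a hard velocity cut.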
Approximations for Monotone and Non-monotone Submodular Maximization with Knapsack Constraints
Kulik, Ariel; Tamir, Tami
2011-01-01
Submodular maximization generalizes many fundamental problems in discrete optimization, including Max-Cut in directed/undirected graphs, maximum coverage, maximum facility location and marketing over social networks. In this paper we consider the problem of maximizing any submodular function subject to $d$ knapsack constraints, where $d$ is a fixed constant. We establish a strong relation between the discrete problem and its continuous relaxation, obtained through {\em extension by expectation} of the submodular function. Formally, we show that, for any non-negative submodular function, an $\alpha$-approximation algorithm for the continuous relaxation implies a randomized $(\alpha - \varepsilon)$-approximation algorithm for the discrete problem. We use this relation to improve the best known approximation ratio for the problem to $1/4 - \varepsilon$, for any $\varepsilon > 0$, and to obtain a nearly optimal $(1 - e^{-1} - \varepsilon)$-approximation ratio for the monotone case, for any $\varepsilon > 0$. We further show that the probabilistic domain ...
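The monotone case mentioned above is the setting of the classical greedy algorithm, which already attains the $1 - e^{-1}$ ratio under a single cardinality constraint (the knapsack constraints treated in the paper require the continuous relaxation). A sketch of the cardinality-constrained special case, on the canonical max k-coverage instance:

```python
def greedy_max_cover(sets, k):
    """Greedy for max k-coverage, a canonical monotone submodular problem.
    Picks the set with the largest marginal coverage gain k times; this
    achieves a (1 - 1/e) approximation (sketch, cardinality constraint only)."""
    covered, chosen = set(), []
    for _ in range(k):
        best = max(range(len(sets)), key=lambda i: len(sets[i] - covered))
        if not (sets[best] - covered):
            break  # no remaining set adds anything
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered
```

The marginal gain `len(sets[i] - covered)` is exactly the submodular increment f(S ∪ {i}) − f(S) for the coverage function.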
Raqueli Biscayno Viecili
2011-12-01
OBJECTIVE: To identify the role of bronchodilators in the maximal voluntary breath-hold time in patients with obstructive lung disease (OLD). METHODS: We conducted a case-control study including patients with OLD and a control group. Spirometric tests were performed before and after the use of a bronchodilator, as were breath-hold tests, using an electronic microprocessor and a pneumotachograph as a flow transducer. Respiratory flow curves were displayed in real time on a portable computer, and the maximal voluntary inspiratory and expiratory breath-hold times (TAVIM and TAVEM, respectively) were determined from the acquired signal. RESULTS: A total of 35 patients with OLD and 16 controls were included in the study. Without bronchodilator use, the TAVIM was significantly lower in the OLD group than in the control group (22.27 ± 11.81 s vs. 31.45 ± 15.73 s; p = 0.025), but this difference was not significant after bronchodilator use (24.94 ± 12.89 s vs. 31.67 ± 17.53 s). The TAVEM values were significantly lower in the OLD group than in the control group both before (16.88 ± 6.58 s vs. 22.09 ± 7.95 s; p = 0.017) and after bronchodilator use (21.22 ± 9.37 s vs. 28.53 ± 12.46 s; p = 0.024). CONCLUSIONS: These results provide additional evidence of the clinical usefulness of the breath-hold test in the assessment of pulmonary function and of the role of the bronchodilator in this test.
Raquel Aparecida Pizolato
2007-09-01
Parafunctional habits, such as bruxism, are contributory factors for temporomandibular disorders (TMD). The aim of this study was to evaluate the maximal bite force (MBF) in the presence of TMD and bruxism (TMDB) in young adults. Twelve women (mean age 21.5 years) and 7 men (mean age 22.4 years) composed the TMDB group. Ten healthy women and 9 men (mean ages 21.4 and 22.4 years, respectively) formed the control group. TMD symptoms were evaluated by a structured questionnaire and clinical signs/symptoms were evaluated during clinical examination. A visual analogue scale (VAS) was applied for stress assessment. MBF was measured with a gnathodynamometer: the subjects were asked to bite twice with maximal effort, for 5 seconds, with a rest interval of about one minute, and the highest values were considered. The data were analyzed with the Shapiro-Wilk W-test, descriptive statistics, paired or unpaired t tests or Mann-Whitney tests when indicated, and Fisher's exact test.
孙彦景; 田红; 王迎
2012-01-01
To address the problem of energy holes caused by unbalanced energy consumption in wireless sensor networks (WSNs), this paper proposes a cooperative mobility optimization algorithm for multiple sinks that maximizes the network lifetime. The algorithm divides the region of interest into a number of virtual cells and coordinates the movement of the multiple sinks with ant colony optimization (ACO) based on network conditions. The sojourn times of the sinks at candidate sites are formulated as a linear program (LP), extending the lifetime of the network. Simulation results indicate that LP-ACO (Linear Program-Ant Colony Optimization) effectively balances energy consumption across sensor nodes: it not only makes the network lifetime significantly longer than static deployment (STATIC) and random movement (RDM) of the sinks, but is also more scalable.
An Online Algorithm for Maximizing Submodular Functions
2007-12-20
... for any ε > 0, achieving an approximation ratio of 1 − 1/e + ε for MAX k-COVERAGE is NP-hard [Feige, Journal of the ACM, 45(4):634-652, 1998]. Recently, Feige, Lovász, and Tetali [Approximating min sum set cover, Algorithmica, 40(4)] introduced MIN ...
Maximally incompatible quantum observables
Heinosaari, Teiko, E-mail: teiko.heinosaari@utu.fi [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turku (Finland); Schultz, Jussi, E-mail: jussi.schultz@gmail.com [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Toigo, Alessandro, E-mail: alessandro.toigo@polimi.it [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy); Ziman, Mario, E-mail: ziman@savba.sk [RCQI, Institute of Physics, Slovak Academy of Sciences, Dúbravská cesta 9, 84511 Bratislava (Slovakia); Faculty of Informatics, Masaryk University, Botanická 68a, 60200 Brno (Czech Republic)
2014-05-01
The existence of maximally incompatible quantum observables in the sense of a minimal joint measurability region is investigated. Employing the universal quantum cloning device it is argued that only infinite dimensional quantum systems can accommodate maximal incompatibility. It is then shown that two of the most common pairs of complementary observables (position and momentum; number and phase) are maximally incompatible.
STEME: efficient EM to find motifs in large data sets
Reid, John E.; Wernisch, Lorenz
2011-01-01
MEME and many other popular motif finders use the expectation–maximization (EM) algorithm to optimize their parameters. Unfortunately, the running time of EM is linear in the length of the input sequences. This can prohibit its application to data sets of the size commonly generated by high-throughput biological techniques. A suffix tree is a data structure that can efficiently index a set of sequences. We describe an algorithm, Suffix Tree EM for Motif Elicitation (STEME), that approximates EM using suffix trees. To the best of our knowledge, this is the first application of suffix trees to EM. We provide an analysis of the expected running time of the algorithm and demonstrate that STEME runs an order of magnitude more quickly than the implementation of EM used by MEME. We give theoretical bounds for the quality of the approximation and show that, in practice, the approximation has a negligible effect on the outcome. We provide an open source implementation of the algorithm that we hope will be used to speed up existing and future motif search algorithms. PMID:21785132
Fabiana Andrade Machado
2002-02-01
The purpose of this study was to determine the influence of chronological age and biological maturation on maximal oxygen uptake (VO2max) and on the running velocity corresponding to VO2max in Brazilian male children and adolescents aged 10 to 15 years who did not practice systematic physical activity. Forty volunteers took part, divided into two groups according to chronological age (GC1: n = 20; 11.4 ± 0.6 years; 38.8 ± 8.6 kg; 143.6 ± 8.2 cm; and GC2: n = 20; 14.1 ± 0.6 years; 55.9 ± 14.2 kg; 163.3 ± 10.2 cm) and to biological maturation (GB1: n = 20, stages 1, 2 and 3; GB2: n = 20, stages 4 and 5). VO2max was measured in a progressive, intermittent treadmill running test, with three-minute stages and 20-second pauses, in 1 km/h increments starting from 9 km/h, until voluntary exhaustion. The velocity corresponding to VO2max (vVO2max) was taken as the lowest velocity at which the highest VO2 value was observed. The maximal aerobic velocity (Va max) was calculated with the formula proposed by di Prampero (1986). There were significant differences in VO2max (l/min), Va max (km/h) and vVO2max (km/h) between groups GC1 and GC2 (1.84 ± 0.41 vs. 2.81 ± 0.61; 11.8 ± 1.2 vs. 12.6 ± 1.2; 12.1 ± 1.2 vs. 12.9 ± 1.1, respectively) and between GB1 and GB2 (1.80 ± 0.37 vs. 2.87 ± 0.56; 12.1 ± 1.2 vs. 12.9 ± 1.1; 11.8 ± 1.2 vs. 12.5 ± 1.1, respectively), but not in VO2max expressed in ml.kg-1.min-1 for any group (GC1 and GC2: 47.9 ± 6.8 vs. 50.4 ± 5.5; GB1 and GB2: 47.9 ± 6.8 vs. 50.3 ± 5.5, respectively). Based on these results, it can be concluded that VO2max (l/min), Va max and vVO2max increase as a probable effect of growth and development, and may also express improved movement economy, even in individuals who do not practice systematic physical activity.
Parker, Andrew M.; Wandi Bruine de Bruin; Baruch Fischhoff
2007-01-01
Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decisions...
Siqueira, Newton Norat
2006-12-15
This work presents a new approach to solving availability maximization problems in electromechanical systems under periodic preventive scheduled tests. The approach uses Particle Swarm Optimization (PSO), an optimization tool developed by Kennedy and Eberhart (2001), integrated with a probabilistic safety analysis model. Two maintenance optimization problems are solved with the proposed technique: the first is a hypothetical electromechanical configuration and the second is a real case from a nuclear power plant (emergency diesel generators). For both problems, PSO is compared to a genetic algorithm (GA). In the experiments, PSO obtained results comparable to, or even slightly better than, those obtained by GA, while being simpler and converging faster, which indicates that PSO is a good alternative for solving this kind of problem.
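The PSO update referred to above is simple enough to sketch. Below is a minimal global-best variant on a continuous test function; the inertia (0.7) and acceleration constants (1.5) are common illustrative choices, not the thesis's settings, and the real application would evaluate the probabilistic safety model instead of `f`:

```python
import random

def pso(f, dim, n=30, iters=200, lo=-5.0, hi=5.0, seed=1):
    """Minimal global-best particle swarm minimization (illustrative sketch)."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]  # positions
    V = [[0.0] * dim for _ in range(n)]                                # velocities
    P = [x[:] for x in X]                  # personal-best positions
    pf = [f(x) for x in X]                 # personal-best values
    g = min(range(n), key=lambda i: pf[i])
    G, gf = P[g][:], pf[g]                 # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                # inertia + cognitive pull (own best) + social pull (swarm best)
                V[i][d] = (0.7 * V[i][d]
                           + 1.5 * rng.random() * (P[i][d] - X[i][d])
                           + 1.5 * rng.random() * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
            fx = f(X[i])
            if fx < pf[i]:
                P[i], pf[i] = X[i][:], fx
                if fx < gf:
                    G, gf = X[i][:], fx
    return G, gf
```

On a smooth unimodal objective such as the sphere function the swarm contracts quickly onto the optimum, which illustrates the fast convergence the author reports relative to a GA.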
Ming Yi WANG; Guo ZHAO
2005-01-01
A right R-module E over a ring R is said to be maximally injective in case for any maximal right ideal m of R, every R-homomorphism f : m → E can be extended to an R-homomorphism f' : R → E. In this paper, we first construct an example to show that maximal injectivity is a proper generalization of injectivity. Then we prove that any right R-module over a left perfect ring R is maximally injective if and only if it is injective. We also give a partial affirmative answer to Faith's conjecture by further investigating the property of maximally injective rings. Finally, we get an approximation to Faith's conjecture, which asserts that every injective right R-module over any left perfect right self-injective ring R is the injective hull of a projective submodule.
Andrew M. Parker
2007-12-01
Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decisions, more avoidance of decision making, and greater tendency to experience regret. Contrary to predictions, self-reported maximizers were more likely to report spontaneous decision making. However, the relationship between self-reported maximizing and worse life outcomes is largely unaffected by controls for measures of other decision-making styles, decision-making competence, and demographic variables.
A new hybrid imperialist competitive algorithm on data clustering
Taher Niknam; Elahe Taherian Fard; Shervin Ehrampoosh; Alireza Rousta
2011-06-01
Clustering is a process for partitioning datasets and is very useful for finding optimal solutions. K-means is one of the simplest and most famous methods, based on a squared-error criterion; the algorithm depends on its initial state and converges to local optima. Recent research shows that the K-means algorithm has been successfully applied to combinatorial optimization problems for clustering. In this paper, we propose a novel algorithm based on combining two clustering algorithms, K-means and a modified imperialist competitive algorithm, named hybrid K-MICA. In addition, we use a method called modified expectation maximization (EM) to determine the number of clusters. The experimental results show that the new method produces better results than ACO, PSO, simulated annealing (SA), genetic algorithm (GA), tabu search (TS), honey bee mating optimization (HBMO) and K-means.
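The K-means half of the hybrid can be sketched as plain Lloyd iterations. This is the generic algorithm only, not the K-MICA variant; in the paper the imperialist-competitive search effectively replaces the sensitive random initialization shown here:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd-style K-means (sketch): assign to nearest center, re-average."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        # Squared distance of every point to every center -> nearest-center labels.
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():          # leave empty clusters unchanged
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers
```

Because each iteration can only reduce the squared-error criterion, the procedure converges, but only to a local optimum determined by the initial centers, which is exactly the weakness the metaheuristic hybrid targets.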
Renato Aparecido Corrêa Caritá
2009-10-01
Full Text Available O principal objetivo deste estudo foi comparar a intensidade correspondente à máxima fase estável de lactato (MLSS e a potência crítica (PC durante o ciclismo em indivíduos bem treinados. Seis ciclistas do sexo masculino (25,5 ± 4,4 anos, 68,8 ± 3,0kg, 173,0 ± 4,0cm realizaram em diferentes dias os seguintes testes: exercício incremental até a exaustão para a determinação do pico de consumo de oxigênio (VO2pico e sua respectiva intensidade (IVO2pico; cinco a sete testes de carga constante para a determinação da MLSS e da PC; e um exercício até a exaustão na PC. A MLSS foi considerada com a maior intensidade de exercício onde a concentração de lactato não aumentou mais do que 1mM entre o 10º e o 30º min de exercício. Os valores individuais de potência (95, 100 e 110% IVO2pico e seu respectivo tempo máximo de exercício (Tlim foram ajustados a partir do modelo hiperbólico de dois parâmetros para a determinação da PC. Embora altamente correlacionadas (r = 0,99; p = 0,0001, a PC (313,5 ± 32,3W foi significantemente maior do que a MLLS (287,0 ± 37,8W (p = 0,0002. A diferença percentual da PC em relação à MLSS foi de 9,5 ± 3,1%. No exercício realizado na PC, embora tenha existido componente lento do VO2 (CL = 400,8 ± 267,0 ml.min-1, o VO2pico não foi alcançado (91,1 ± 3,3 %. Com base nesses resultados pode-se concluir que a PC e a MLSS identificam diferentes intensidades de exercício, mesmo em atletas com elevada aptidão aeróbia. Entretanto, o percentual da diferença entre a MLLS e PC (9% indica que relação entre esses dois índices pode depender da aptidão aeróbia. Durante o exercício realizado até a exaustão na PC, o CL que é desenvolvido não permite que o VO2pico seja alcançado.The main objective of this study was to compare the intensity corresponding to the maximal lactate steady state (MLSS and critical power (CP in well-trained cyclists. Six male cyclists (25.5 ± 4.4 years, 68.8 ± 3
Sakashita, Makiko; Kitasaka, Takayuki; Mori, Kensaku; Suenaga, Yasuhito; Nawano, Shigeru
2007-03-01
This paper presents a method for extracting multiple organs from four-phase contrasted CT images taken at different contrast timings (non-contrast, early, portal, and late phases). First, we apply a median filter to each CT image and align the four-phase CT images by performing non-rigid volumetric image registration. Then, a three-dimensional joint histogram of CT values is computed from the three contrasted-phase (early, portal, and late) CT images. We assume that this histogram is a mixture of normal distributions corresponding to the liver, spleen, kidney, vein, artery, muscle, and bone regions. The EM algorithm is employed to estimate each normal distribution. Organ labels are assigned to each voxel using the Mahalanobis distance measure. Connected component analysis is applied to correct the shape of each organ region. After that, the pancreas region is extracted from the non-contrasted CT images, with the other extracted organ and vessel regions excluded. The EM algorithm is also employed for estimating the distribution of CT values inside the pancreas. We applied this method to seven cases of four-phase CT images. Extraction results show that the proposed method extracted the multiple organs satisfactorily.
Entropy Maximization and the Spatial Distribution of Species
Haegeman, Bart; Etienne, Rampal S.
2010-01-01
Entropy maximization (EM, also known as MaxEnt) is a general inference procedure that originated in statistical mechanics. It has been applied recently to predict ecological patterns, such as species abundance distributions and species-area relationships. It is well known in physics that the EM resu
Maximal Frequent Itemset Generation Using Segmentation Approach
M.Rajalakshmi
2011-09-01
Full Text Available Finding frequent itemsets in a data source is a fundamental operation behind Association Rule Mining. Generally, many algorithms use either the bottom-up or the top-down approach for finding these frequent itemsets. When the length of the frequent itemsets to be found is large, the traditional algorithms find all frequent itemsets from length 1 to length n, which is a costly process. This problem can be solved by mining only the Maximal Frequent Itemsets (MFS). Maximal frequent itemsets are frequent itemsets which have no proper frequent superset. Thus, generating only the maximal frequent itemsets reduces the number of itemsets and also the time needed for the generation of all frequent itemsets, as each maximal itemset of length m implies the presence of 2^m - 2 further frequent itemsets. Furthermore, mining only maximal frequent itemsets is sufficient in many data mining applications such as minimal key discovery and theory extraction. In this paper, we suggest a novel method for finding the maximal frequent itemsets from huge data sources using the concept of segmentation of the data source and prioritization of segments. Empirical evaluation shows that this method outperforms various other known methods.
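To make the MFS notion concrete, here is a small illustrative sketch (not the paper's segmentation method): it enumerates frequent itemsets bottom-up, Apriori-style, and then filters out every itemset that has a frequent proper superset. The toy transaction database is invented.

```python
def frequent_itemsets(transactions, min_support):
    """Bottom-up (Apriori-style) enumeration of all frequent itemsets."""
    items = sorted({i for t in transactions for i in t})
    freq, level = [], [frozenset([i]) for i in items]
    while level:
        counts = {s: sum(1 for t in transactions if s <= t) for s in level}
        survivors = [s for s, c in counts.items() if c >= min_support]
        freq.extend(survivors)
        # join step: candidates one item longer than the current level
        level = list({a | b for a in survivors for b in survivors
                      if len(a | b) == len(a) + 1})
    return freq

def maximal_frequent_itemsets(transactions, min_support):
    """A frequent itemset is maximal iff no frequent proper superset exists."""
    freq = frequent_itemsets(transactions, min_support)
    return [s for s in freq if not any(s < t for t in freq)]

# Hypothetical toy transaction database
txns = [frozenset(t) for t in
        ({"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}, {"a", "b", "c"})]
mfs = maximal_frequent_itemsets(txns, min_support=3)
```

With these five transactions and support threshold 3, the maximal frequent itemsets are {a,b}, {a,c} and {b,c}: the singletons are frequent but not maximal, and {a,b,c} appears only twice.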
Rudiger Bubner
1998-12-01
Full Text Available Even though the theory of maxims is not at the center of Kant's ethics, it is the unavoidable basis of the formulation of the categorical imperative. Kant leans on the transmitted representations of modern moral theory. During the last decades, the notion of maxims has received more attention, due to the philosophy of language's debates on rules and to action theory's interest in this notion. I hereby briefly expound my views on these discussions.
DNA solution of the maximal clique problem.
Ouyang, Q; Kaplan, P D; Liu, S; Libchaber, A
1997-10-17
The maximal clique problem has been solved by means of molecular biology techniques. A pool of DNA molecules corresponding to the total ensemble of six-vertex cliques was built, followed by a series of selection processes. The algorithm is highly parallel and has satisfactory fidelity. This work represents further evidence for the ability of DNA computing to solve NP-complete search problems.
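For comparison with the molecular approach, the same search can be phrased in a few lines of conventional code. The sketch below exhaustively tests every vertex subset of a small graph, which is feasible only because n is tiny, exactly the regime of the six-vertex DNA experiment; the example graph is invented.

```python
from itertools import combinations

def maximal_cliques(n, edges):
    """Exhaustively enumerate the maximal cliques of an n-vertex graph,
    the in-silico analogue of screening a DNA pool of all vertex subsets."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def is_clique(s):
        return all(v in adj[u] for u, v in combinations(s, 2))

    cliques = [set(c) for r in range(1, n + 1)
               for c in combinations(range(n), r) if is_clique(c)]
    # keep only cliques with no proper superset that is also a clique
    return [c for c in cliques if not any(c < d for d in cliques)]

# Hypothetical 6-vertex graph: two triangles joined by the edge (2, 3)
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (4, 5), (3, 5)]
cliques = maximal_cliques(6, edges)
```

For this graph the maximal cliques are {0,1,2}, {3,4,5} and {2,3}; the brute-force pool has 2^6 - 1 = 63 candidate subsets, which is why the approach, like the DNA selection, only scales to small instances.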
Bo LI; Sung-kwon PARK
2016-01-01
In the IEEE 802.16e/m standard, three power saving classes (PSCs) are defined to save the energy of a mobile subscriber station (MSS). However, how to set the parameters of the PSCs to maximize the power saving while guaranteeing quality of service is not specified in the standard. Thus, many algorithms have been proposed to set the PSCs in IEEE 802.16 networks. However, most of the proposed algorithms consider only the power saving for a single MSS, and the algorithms designed for multiple MSSs do not consider the power overhead of transitions into and out of the sleep state. Here, the PSC setting for real-time connections in multiple MSSs is studied with the state transition overhead taken into account. The problem is non-deterministic polynomial time hard (NP-hard), and a suboptimal algorithm for it is proposed. Simulation results demonstrate that the energy saving of the proposed algorithm is higher than that of state-of-the-art algorithms and approaches the optimal limit.
Marco Antônio Mota Gomes
2008-09-01
Full Text Available BACKGROUND: National and international guidelines emphasize the importance of effective treatment of arterial hypertension. Nevertheless, low rates of control and of attainment of the recommended goals are observed, indicating that it is important to plan and implement better treatment strategies. OBJECTIVE: To evaluate the efficacy of a dose-escalation treatment algorithm based on olmesartan medoxomil. METHODS: This is an open, national, multicenter, prospective study of 144 patients with stage 1 or 2 primary hypertension, either treatment-naïve or after a two- to three-week washout for those under ineffective treatment. The use of olmesartan medoxomil was evaluated in a four-phase treatment algorithm: (i) monotherapy (20 mg); (ii-iii) combined with hydrochlorothiazide (20/12.5 mg and 40/25 mg); and (iv) with amlodipine besylate added (40/25 mg + 5 mg). RESULTS: At the end of the dose-escalation treatment, 86% of the subjects reached the blood pressure (BP) goal; the proportion of patients with systolic BP reductions greater than 20 mmHg was 87.5%, and with diastolic BP reductions greater than 10 mmHg, 92.4%. CONCLUSION: The study was based on a treatment scheme similar to the therapeutic approach of daily clinical practice and showed that olmesartan medoxomil, as monotherapy or in combination with hydrochlorothiazide and amlodipine, was effective in reaching the goals for stage 1 and 2 hypertensive patients.
Hoogerheide, L.F.; Opschoor, A.; van Dijk, Nico M.
2012-01-01
This discussion paper was published in the Journal of Econometrics (2012). Vol. 171(2), 101-120. A class of adaptive sampling methods is introduced for efficient posterior and predictive simulation. The proposed methods are robust in the sense that they can handle target distributions that exhibit non-elliptical shapes such as multimodality and skewness. The basic method makes use of sequences of importance weighted Expectation Maximization steps in order to efficiently construct a mixture of...
Submodular Function Maximization via the Multilinear Relaxation and Contention Resolution Schemes
Chekuri, Chandra; Zenklusen, Rico
2011-01-01
We consider the problem of maximizing a non-negative submodular set function $f:2^N \\rightarrow \\mathbb{R}_+$ over a ground set $N$ subject to a variety of packing type constraints. In this paper we develop a general framework leading to a number of new results, in particular when $f$ may be a {\\em non-monotone} function. Our algorithms are based on (approximately) maximizing the multilinear extension $F$ of $f$ \\cite{CCPV07} over a polytope $P$ that represents the constraints, and then effectively rounding the fractional solution. Although this approach has been used quite successfully in some settings \\cite{CCPV09,KulikST09,LeeMNS09,CVZ10,BansalKNS10}, it has been limited in some important ways. We overcome these limitations as follows. First, we give constant factor approximation algorithms to maximize $F$ over any down-closed polytope $P$ that has an efficient separation oracle. Previously this was known only for monotone functions \\cite{Vondrak08}. For non-monotone functions, a constant factor was known ...
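The paper's multilinear-relaxation machinery targets general constraints and non-monotone f. For the special case of a monotone submodular function under a cardinality constraint, the textbook baseline is the greedy algorithm with its (1 - 1/e) guarantee, sketched here on a max-coverage instance; the example data are invented and this is not the paper's method.

```python
def greedy_max_coverage(named_sets, k):
    """Greedy for max coverage: repeatedly pick the set with the largest
    marginal gain; a (1 - 1/e)-approximation for this monotone
    submodular objective."""
    chosen, covered = [], set()
    for _ in range(k):
        gain, best = max((len(s - covered), name)
                         for name, s in named_sets.items()
                         if name not in chosen)
        if gain == 0:
            break
        chosen.append(best)
        covered |= named_sets[best]
    return chosen, covered

# Hypothetical instance: pick k = 2 of these 4 sets
sets = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5, 6}, "D": {1, 6}}
chosen, covered = greedy_max_coverage(sets, 2)
```

Here greedy picks one of the two 3-element sets first and the other second, covering all six elements; the fractional rounding framework of the paper generalizes this beyond monotone objectives and cardinality constraints.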
Knowledge discovery by accuracy maximization.
Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo
2014-04-01
Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, the peculiarity of KODAMA is that it is driven by an integrated procedure of cross-validation of the results. The discovery of a local manifold's topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy. Briefly, our approach differs from previous methods in that it has an integrated procedure of validation of the results. In this way, the method ensures the highest robustness of the obtained solution. This robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan's presidency and not from its beginning.
王继霞; 刘次华
2011-01-01
Maximum likelihood estimation of the parameters of multivariate normal models with missing data is studied. An iterative solution for the parameters is obtained via the Monte Carlo EM algorithm; the iterates are proved to converge to the optimal solution, and the convergence rate is shown to be second order.
Le, Thanh; Altman, Tom; Gardiner, Katheleen
2010-02-01
Identification of motifs in biological sequences is a challenging problem because such motifs are often short, degenerate, and may contain gaps. Most algorithms that have been developed for motif-finding use the expectation-maximization (EM) algorithm iteratively. Although EM algorithms can converge quickly, they depend strongly on initialization parameters and can converge to local sub-optimal solutions. In addition, they cannot generate gapped motifs. The effectiveness of EM algorithms in motif finding can be improved by incorporating methods that choose different sets of initial parameters to enable escape from local optima, and that allow gapped alignments within motif models. We have developed HIGEDA, an algorithm that uses the hierarchical gene-set genetic algorithm (HGA) with EM to initiate and search for the best parameters for the motif model. In addition, HIGEDA can identify gapped motifs using a position weight matrix and dynamic programming to generate an optimal gapped alignment of the motif model with sequences from the dataset. We show that HIGEDA outperforms MEME and other motif-finding algorithms on both DNA and protein sequences. Source code and test datasets are available for download at http://ouray.cudenver.edu/~tnle/, implemented in C++ and supported on Linux and MS Windows.
IMRank: Influence Maximization via Finding Self-Consistent Ranking
Cheng, Suqi; Shen, Hua-Wei; Huang, Junming; Chen, Wei; Cheng, Xue-Qi
2014-01-01
Influence maximization, fundamental for word-of-mouth marketing and viral marketing, aims to find a set of seed nodes maximizing influence spread on a social network. Early methods mainly fall into two paradigms with certain benefits and drawbacks: (1) greedy algorithms, selecting seed nodes one by one, give a guaranteed accuracy relying on an accurate approximation of influence spread, at high computational cost; (2) heuristic algorithms, estimating influence spread using efficient heuristics,...
The Fuzzy Modeling Algorithm for Complex Systems Based on Stochastic Neural Network
李波; 张世英; 李银惠
2002-01-01
A fuzzy modeling method for complex systems is studied. The notion of a general stochastic neural network (GSNN) is presented, and a new modeling method is given based on the combination of the modified Takagi-Sugeno (MTS) fuzzy model and a first-order GSNN. Using the expectation-maximization (EM) algorithm, parameter estimation and model selection procedures are given. This avoids the shortcomings of other methods such as the BP algorithm, which, when the number of parameters is large, is difficult to apply directly without fine tuning and subjective tinkering. Finally, a simulated example demonstrates the effectiveness of the method.
Katia Abbas
2008-12-01
Full Text Available Along with the growth of the services sector comes the principle of scarcity: it is necessary to make choices, not only about what to do but also about what not to do, including the hierarchy of the objectives to be reached. The use of resources should take into consideration the maximization of the quality of the services offered to customers, and this allocation of resources must acknowledge the value of intangible assets in the result perceived by the service's customers. The objective of this article is to propose a system for allocating resources to intangible assets so as to maximize the quality perceived by service customers. To this end, intangible assets are related to the perceived service attributes, thus establishing the role of intangible assets in the formation of the attributes perceived as priorities. To reinforce this finding, a causal loop diagram is used to capture the influence of the intangible assets, their behavior, the interaction among them, and how one can influence another. Based on this knowledge, allocating resources to these intangible assets enables improvements in other intangible assets, since they are interrelated, and in the attributes considered priorities in the most relevant services.
Biswas, Abhishek; Ranjan, Desh; Zubair, Mohammad; He, Jing
2015-09-01
The determination of secondary structure topology is a critical step in deriving atomic structures from protein density maps obtained with electron cryo-microscopy. This step often relies on matching the secondary structure traces detected in the protein density map to the secondary structure sequence segments predicted from the amino acid sequence. Due to inaccuracies in both sources of information, a pool of possible secondary structure positions needs to be sampled. One way to approach the problem is to first derive a small number of possible topologies using existing matching algorithms, and then find the optimal placement for each possible topology. We present a dynamic programming method that runs in Θ(Nq²h) time to find the optimal placement for a secondary structure topology. We show that our algorithm requires significantly less computational time than the brute-force method, which is of order Θ(q^N h).
Al-Jabr, Ahmad Ali
2013-01-01
This paper presents methods of simulating gain media in the finite difference time-domain (FDTD) algorithm utilizing a generalized polarization formulation. The gain can be static or dynamic. For static gain, Lorentzian and non-Lorentzian models are presented and tested. For the dynamic gain, rate equations for two-level and four-level models are incorporated in the FDTD scheme. The simulation results conform with the expected behavior of wave amplification and dynamic population inversion.
Abed, Azher M.; Abed, Issa Ahmed; Majdi, Hasan Sh.; Al-Shamani, Ali Najah; Sopian, K.
2016-12-01
This study proposes a new procedure for the optimal design of shell-and-tube heat exchangers. The electromagnetism-like algorithm is applied to save on heat exchanger capital cost and to design a compact, high-performance heat exchanger with effective use of the allowable pressure drop (cost of the pump). An optimization algorithm is then utilized to determine the optimal values of both the geometric design parameters and the maximum allowable pressure drop by minimizing a total cost function. A computer code is developed for the optimal design of shell-and-tube heat exchangers. Different test cases are solved to demonstrate the effectiveness and ability of the proposed algorithm. Results are also compared with those obtained by other approaches available in the literature. The comparisons indicate that the proposed design procedure can be successfully applied to the optimal design of shell-and-tube heat exchangers. In particular, in the examined cases reductions of total costs of up to 30, 29 and 56.15% compared with the original design, and of up to 18, 5.5 and 7.4% compared with other approaches, are observed for case studies 1, 2 and 3, respectively. The economic optimization resulting from the proposed design procedure is relevant especially when size/volume is critical and a high-performance, compact unit with moderate volume and cost is needed.
Janusz Brzozowski
2014-05-01
Full Text Available The atoms of a regular language are non-empty intersections of complemented and uncomplemented quotients of the language. Tight upper bounds on the number of atoms of a language and on the quotient complexities of atoms are known. We introduce a new class of regular languages, called the maximally atomic languages, consisting of all languages meeting these bounds. We prove the following result: if L is a regular language of quotient complexity n and G is the subgroup of permutations in the transition semigroup T of the minimal DFA of L, then L is maximally atomic if and only if G is transitive on k-subsets of {1, ..., n} for 0 <= k <= n and T contains a transformation of rank n-1.
Andersen, Klaus Ejner
1985-01-01
Guinea pig maximization tests (GPMT) with chlorocresol were performed to ascertain whether the sensitization rate was affected by minor changes in the Freund's complete adjuvant (FCA) emulsion used. Three types of emulsion were evaluated: the oil phase was mixed with propylene glycol, saline with 30% (v/v) ethanol, or saline, respectively. Relative viscosity was used as one measure of the physical properties of the emulsions. Higher degrees of sensitization (but not rates) were obtained at the 48 h challenge reading with the oil/propylene glycol and oil/saline + ethanol emulsions compared to the saline/oil emulsion. Placing of the challenge patches affected the response, as simultaneous chlorocresol challenge on the flank located 2 cm closer to the abdomen than the usual challenge site gave decreased reactions.
Nonlinear Impairment Compensation Using Expectation Maximization for PDM 16-QAM Systems
Zibar, Darko; Winther, Ole; Franceschi, Niccolo
2012-01-01
We show experimentally that by using non-linear signal processing based algorithm, expectation maximization, nonlinear system tolerance can be increased by 2 dB. Expectation maximization is also effective in combating I/Q modulator nonlinearities and laser linewidth.
Claudio Gil Soares de Araújo
2005-07-01
Full Text Available OBJECTIVE: To compare, retrospectively, the values of maximum heart rate (MHR) and the decrease of heart rate at the first minute of recovery (dHR) obtained in exercise tests (ET) performed on two different ergometers and at different moments. METHODS: Sixty individuals (29 to 80 years old) underwent a cardiopulmonary ET on a lower-limb cycle (CLL) in our laboratory and had a previous ET (up to 36 months earlier) on a treadmill (TRM) in other laboratories, under identical conditions of medications with negative chronotropic action. RESULTS: MHR was similar on the CLL: 156±3 vs. TRM: 154±2 bpm (p=0.125), whereas dHR was higher on the CLL: 33±2 vs. TRM: 26±3 bpm (mean ± standard error of the mean) (p<0.001). Among the hemodynamic variables studied, systolic blood pressure and the double product were higher in the ET-CLL (p<0.001). The electrocardiogram (ECG) was similar in both ETs, except for more frequent supraventricular arrhythmias on the CLL. CONCLUSION: a) with some diligence from the examiner and previous knowledge of MHR from an earlier ET, it is possible to obtain high levels of MHR in an ET-CLL; b) interrupting the ET based on MHR predicted by equations tends to lead to sub-maximal efforts; c) dHR differs between active and passive recovery.
StaticGreedy: solving the scalability-accuracy dilemma in influence maximization
Cheng, Suqi; Shen, Huawei; Huang, Junming; Zhang, Guoqing; Cheng, Xueqi
2012-01-01
Influence maximization, defined as a problem of finding a set of seed nodes to trigger a maximized spread of influence, is crucial to viral marketing on social networks. For practical viral marketing on large scale social networks, it is required that influence maximization algorithms should have both guaranteed accuracy and high scalability. However, existing algorithms suffer a scalability-accuracy dilemma: conventional greedy algorithms guarantee the accuracy with expensive computation, wh...
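The "conventional greedy algorithm" referred to above can be sketched as follows: each candidate seed set is scored by Monte Carlo simulation of the independent cascade model, which is exactly the expensive step that StaticGreedy-style methods aim to cut down. The toy graph and parameters below are invented for illustration.

```python
import random

def simulate_ic(graph, seeds, p, rng):
    """One Monte Carlo run of the Independent Cascade (IC) model;
    returns the number of nodes activated."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return len(active)

def greedy_im(graph, k, p=0.1, runs=200, seed=0):
    """Conventional greedy seed selection: add, one node at a time, the
    node with the largest Monte Carlo estimate of marginal spread."""
    rng = random.Random(seed)
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    seeds = []
    for _ in range(k):
        def spread(s):
            return sum(simulate_ic(graph, s, p, rng) for _ in range(runs)) / runs
        seeds.append(max((n for n in nodes if n not in seeds),
                         key=lambda n: spread(seeds + [n])))
    return seeds

# Toy directed graph: node 0 is the obvious best single seed
graph = {0: [1, 2, 3, 4], 1: [2], 5: [6]}
seeds = greedy_im(graph, k=1, p=0.5)
```

The cost is evident even at this scale: every candidate seed triggers `runs` fresh cascade simulations, which is the redundancy that reusing static simulation snapshots removes.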
Social group utility maximization
Gong, Xiaowen; Yang, Lei; Zhang, Junshan
2014-01-01
This SpringerBrief explains how to leverage mobile users' social relationships to improve the interactions of mobile devices in mobile networks. It develops a social group utility maximization (SGUM) framework that captures diverse social ties of mobile users and diverse physical coupling of mobile devices. Key topics include random access control, power control, spectrum access, and location privacy. The brief also investigates the SGUM-based power control game and random access control game, for each of which it establishes the socially-aware Nash equilibrium (SNE). It then examines the critical SGUM-b
Leandro dos Santos Afonso
2006-12-01
Full Text Available As many measures of human performance show circadian variations that seem to follow the rhythm of body temperature, the objective of this study was to compare the maximal heart rate (HRmax) in the Bruce test (TBruce) at different times of the day. Eleven physically active male individuals of intermediate chronotype (22.0 ± 1.6 years) were studied. Resting HR (HRrest), maximal HR (HRmax), perceived exertion (PE) and time to exhaustion (TBruce) were recorded. HR was measured with a Polar Vantage NV heart rate monitor; PE was obtained with the Borg scale (6-20). The Bruce treadmill protocol was applied to exhaustion at six different times: 9:00, 12:00, 15:00, 18:00, 21:00 and 24:00 hours. The results were submitted to analysis of variance for repeated measures, followed by the Tukey test.
Ricardo Augusto Cassel
2006-08-01
Full Text Available This paper addresses the problem of cost calculation in joint-production industries. The traditional approach to the subject, the joint-cost method, presents limitations regarding the model adopted and the results obtained in the profitability analysis of the different final products. As an alternative, an approach based on the Theory of Constraints and supported by Operational Research techniques is presented. To demonstrate the differences between the results of the two methods, a case study was carried out in a poultry slaughtering and processing unit. The results obtained, in terms of the overall profitability of the operation, show significant differences. The Theory of Constraints approach, applied with Operational Research techniques, proves more effective from the decision-making standpoint.
Extension of EM Algorithm for Finite Mixture in IRT for Missing Response Data
张淑梅; 辛涛; 曾莉; 孙佳楠
2011-01-01
The Item Response Theory (IRT) model is an important model in educational and psychological measurement. It contains two kinds of parameters: item parameters and ability parameters. A commonly used method for estimating the item parameters of an IRT model was given by Woodruff and Hanson (1997): they treated the ability parameter θ as missing data and applied the EM algorithm for finite mixtures to estimate the item parameters, under the condition that the examinees' responses are complete. Here, we extend Woodruff's method to deal with incomplete response data. The basic idea is to keep the incomplete response cases, regard the missing responses as "missing" data like θ, and then apply the EM algorithm. In a simulation study, we compare the relative performance of our missing-data treatment with that of the software BILOG-MG under different sample sizes and missing ratios. The simulation results show that the new method obtains better estimates than BILOG-MG in most cases.
Maximizing without difficulty: A modified maximizing scale and its correlates
Linda Lai
2010-01-01
This article presents several studies that replicate and extend previous research on maximizing. A modified scale for measuring individual maximizing tendency is introduced. The scale has adequate psychometric properties and reflects maximizers' aspirations for high standards and their preference for extensive alternative search, but not the decision difficulty aspect included in several previous studies. Based on this scale, maximizing is positively correlated with optimism, need for cogniti...
Hugo Fort
2015-11-01
Full Text Available Mutualistic networks in nature are widespread and play a key role in generating the diversity of life on Earth. They constitute an interdisciplinary field where physicists, biologists and computer scientists work together. Plant-pollinator mutualisms in particular form complex networks of interdependence between often hundreds of species. Understanding the architecture of these networks is of paramount importance for assessing the robustness of the corresponding communities to global change and management strategies. Advances in this problem are currently limited mainly due to the lack of methodological tools to deal with the intrinsic complexity of mutualisms, as well as the scarcity and incompleteness of available empirical data. One way to uncover the structure underlying complex networks is to employ information-theoretic statistical inference methods, such as the expectation maximization (EM) algorithm. In particular, such an approach can be used to cluster the nodes of a network based on the similarity of their node neighborhoods. Here, we show how to connect network theory with the classical ecological niche theory for mutualistic plant-pollinator webs by using the EM algorithm. We apply EM to classify the nodes of an extensive collection of mutualistic plant-pollinator networks according to their connection similarity. We find that EM largely recovers the same clustering of the species as an alternative recently proposed method based on resource overlap, where one considers each party as a consuming resource for the other party (plants providing food to animals, while animals assist the reproduction of plants). Furthermore, using the EM algorithm, we can obtain a sequence of successively refined classifications that enables us to identify the fine structure of the ecological network and understand better the niche distribution both for plants and animals. This is an example of how information-theoretic methods help to systematize and
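As an illustration of the idea of clustering nodes by neighborhood similarity with EM, the sketch below fits a Bernoulli mixture to the rows of a binary (bi)adjacency matrix. This is a generic schematic, not the specific model used in the study; the toy incidence matrix is invented.

```python
import numpy as np

def em_bernoulli_clusters(A, k, iters=50):
    """Cluster the rows of a binary (bi)adjacency matrix with a Bernoulli
    mixture fitted by EM: rows with similar neighborhoods share a component."""
    n, d = A.shape
    # farthest-point initialisation: pick k mutually dissimilar rows
    centers = [0]
    while len(centers) < k:
        dists = np.abs(A[:, None, :] - A[centers][None]).sum(-1).min(1)
        centers.append(int(dists.argmax()))
    mu = np.clip(A[centers].astype(float), 0.05, 0.95)  # link probabilities
    pi = np.full(k, 1.0 / k)                            # mixing weights
    for _ in range(iters):
        # E-step: responsibilities from log-likelihoods (max-stabilised)
        log_r = (np.log(pi) + A @ np.log(mu).T
                 + (1 - A) @ np.log(1 - mu).T)
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: update mixing weights and link probabilities
        nk = r.sum(axis=0)
        pi = nk / n
        mu = np.clip((r.T @ A) / nk[:, None], 1e-6, 1 - 1e-6)
    return r.argmax(axis=1)

# Toy plant-pollinator incidence matrix with two obvious guilds
A = np.zeros((8, 6))
A[:4, :3] = 1   # pollinators 0-3 visit plants 0-2
A[4:, 3:] = 1   # pollinators 4-7 visit plants 3-5
labels = em_bernoulli_clusters(A, k=2)
```

On this block-structured toy matrix the two pollinator guilds are recovered exactly; on real webs the fitted link probabilities mu play the role of the inferred niche profiles.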
Blood detection in wireless capsule endoscopy using expectation maximization clustering
Hwang, Sae; Oh, JungHwan; Cox, Jay; Tang, Shou Jiang; Tibbals, Harry F.
2006-03-01
Wireless Capsule Endoscopy (WCE) is a relatively new technology (FDA approved in 2002) allowing doctors to view most of the small intestine. Other endoscopies such as colonoscopy, upper gastrointestinal endoscopy, push enteroscopy, and intraoperative enteroscopy could be used to visualize up to the stomach, duodenum, colon, and terminal ileum, but there existed no method to view most of the small intestine without surgery. With the miniaturization of wireless and camera technologies came the ability to view the entire gastrointestinal tract with little effort. A tiny disposable video capsule is swallowed, transmitting two images per second to a small data receiver worn by the patient on a belt. Over an approximately 8-hour course, more than 55,000 images are recorded to the worn device and then downloaded to a computer for later examination. Typically, a medical clinician spends more than two hours analyzing a WCE video. Research has therefore attempted to automatically find abnormal regions (especially bleeding) to reduce the time needed to analyze the videos. The manufacturers also provide a software tool to detect bleeding, called the Suspected Blood Indicator (SBI), but its accuracy is not high enough to replace human examination: the sensitivity and specificity of SBI were reported to be about 72% and 85%, respectively. To address this problem, we propose a technique to detect the bleeding regions automatically utilizing the Expectation Maximization (EM) clustering algorithm. Our experimental results indicate that the proposed bleeding detection method achieves 92% sensitivity and 98% specificity.
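The two figures of merit quoted above are computed from confusion counts in the usual way. The sketch below uses invented counts that merely reproduce the reported 92%/98% operating point; they are not the study's raw data.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical confusion counts matching the reported operating point:
# 92 of 100 bleeding frames flagged, 2 of 100 clean frames falsely flagged.
sens, spec = sensitivity_specificity(tp=92, fn=8, tn=98, fp=2)
```

Comparing operating points this way makes the improvement over SBI (about 72% sensitivity, 85% specificity) directly quantifiable.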
HEMI: Hyperedge Majority Influence Maximization
Gangal, Varun; Narayanam, Ramasuri
2016-01-01
In this work, we consider the problem of influence maximization on a hypergraph. We first extend the Independent Cascade (IC) model to hypergraphs, and prove that the traditional influence maximization problem remains submodular. We then present a variant of the influence maximization problem (HEMI) where one seeks to maximize the number of hyperedges, a majority of whose nodes are influenced. We prove that HEMI is non-submodular under the diffusion model proposed.
Andersen, Klaus Ejner
1985-01-01
Guinea pig maximization tests (GPMT) with chlorocresol were performed to ascertain whether the sensitization rate was affected by minor changes in the Freund's complete adjuvant (FCA) emulsion used. Three types of emulsion were evaluated: the oil phase was mixed with propylene glycol, saline with 30% (v/v) ethanol, or saline, respectively. Relative viscosity was used as one measure of the physical properties of the emulsions. Higher degrees of sensitization (but not rates) were obtained at the 48 h challenge reading with the oil/propylene glycol and oil/saline + ethanol emulsions compared to the saline/oil emulsion. Placement of the challenge patches affected the response, as simultaneous chlorocresol challenge on the flank, located 2 cm closer to the abdomen than the usual challenge site, gave decreased reactions.
Maximal switchability of centralized networks
Vakulenko, Sergei; Morozov, Ivan; Radulescu, Ovidiu
2016-08-01
We consider continuous-time Hopfield-like recurrent networks as dynamical models for gene regulation and neural networks. We are interested in networks that contain n high-degree nodes preferably connected to a large number of N_s weakly connected satellites, a property that we call n/N_s-centrality. If the hub dynamics is slow, we find that the large-time network dynamics is completely determined by the hub dynamics. Moreover, such networks are maximally flexible and switchable, in the sense that they can switch from a globally attractive rest state to any structurally stable dynamics when the response time of a special controller hub is changed. In particular, we show that a decrease of the controller hub response time can lead to a sharp variation in the network attractor structure: we can obtain a set of new local attractors, whose number can increase exponentially with N, the total number of nodes of the network. These new attractors can be periodic or even chaotic. We provide an algorithm which allows us to design networks with the desired switching properties, or to learn them from time series, by adjusting the interactions between hubs and satellites. Such switchable networks could be used as models for context-dependent adaptation in functional genetics or as models for cognitive functions in neuroscience.
Distributed Maximality based CTL Model Checking
Djamel Eddine Saidouni
2010-05-01
In this paper we investigate an approach to distributed CTL model checking on a network of workstations using Kleene three-valued logic. The state space is partitioned among the network nodes. We represent the incomplete state spaces as Maximality-based Labeled Transition Systems (MLTS), which are able to express true concurrency. The same algorithm is executed in parallel on each node: for a given property on an incomplete MLTS, each node computes the set of states that satisfy the property, the set that fail it, and assigns the third truth value to the rest. The third value means it is unknown whether the property is true or false, because the partial state space lacks the information needed for a precise answer concerning the complete state space. To solve this problem, the nodes exchange the information needed to conclude the result for the complete state space. An experimental version of the algorithm is currently being implemented in the functional programming language Erlang.
Kim, Ye-seul; Park, Hye-suk; Lee, Haeng-Hwa; Choi, Young-Wook; Choi, Jae-Gu; Kim, Hak Hee; Kim, Hee-Joung
2016-02-01
Digital breast tomosynthesis (DBT) is a recently developed system for three-dimensional imaging that offers the potential to reduce the false positives of mammography by preventing tissue overlap. Many qualitative evaluations of digital breast tomosynthesis were previously performed by using phantoms with an unrealistic model and with heterogeneous background and noise, which are not representative of real breasts. The purpose of the present work was to compare reconstruction algorithms for DBT by using various breast phantoms; validation was also performed by using patient images. DBT was performed by using a prototype unit that was optimized for very low exposures and rapid readout. Three algorithms were compared: a back-projection (BP) algorithm, a filtered BP (FBP) algorithm, and an iterative expectation maximization (EM) algorithm. To compare the algorithms, three types of breast phantoms (homogeneous background phantom, heterogeneous background phantom, and anthropomorphic breast phantom) were evaluated, and clinical images were also reconstructed by using the different reconstruction algorithms. The in-plane image quality was evaluated based on the line profile and the contrast-to-noise ratio (CNR), and out-of-plane artifacts were evaluated by means of the artifact spread function (ASF). Parenchymal texture features of contrast and homogeneity were computed based on reconstructed images of an anthropomorphic breast phantom. The clinical images were studied to validate the effect of the reconstruction algorithms. The results showed that the CNRs of masses reconstructed by using the EM algorithm were slightly higher than those obtained by using the BP algorithm, whereas the FBP algorithm yielded a much lower CNR due to its high fluctuations of background noise. The FBP algorithm provides the best conspicuity for larger calcifications by enhancing their contrast and sharpness more than the other algorithms; however, in the case of small-size and low
孙聚波; 徐平峰
2015-01-01
Graphical models are a powerful tool for dealing with high-dimensional data, and graphical model selection is an important aspect of statistical inference. In this paper, the E-MS algorithm is used to select an appropriate graphical model in the presence of missing data. The E-MS algorithm is based on the idea of EM iteration: it combines the parameters of the model and the model selection parameters into a new set of parameters, so that model selection becomes part of the E-M iterative process. Finally, a simulation study for the cases of 3, 4 and 5 variables is given.
MAXIMS VIOLATIONS IN LITERARY WORK
Widya Hanum Sari Pertiwi
2015-12-01
This study was a qualitative study that focused on finding the flouting of Gricean maxims, and the functions of this flouting, in the tales included in the collection of children's literature entitled My Giant Treasury of Stories and Rhymes. The objective of the study was to identify violations of the maxims of quantity, quality, relevance, and manner in the data sources, and to analyze the use of flouting in the tales included in the book. A qualitative design using categorizing strategies, specifically a coding strategy, was applied. Thus, the researcher, as the instrument in this investigation, selected the tales, read them, and gathered every item reflecting a violation of Gricean maxims, based on the conditions for flouting maxims. On the basis of the data analysis, it was found that some utterances in the tales, both narration and conversation, flout the four maxims of conversation, namely the maxim of quality, the maxim of quantity, the maxim of relevance, and the maxim of manner. The researcher also found that the flouting of maxims has one basic function, namely to stimulate the readers' imagination toward the tales. This basic function is developed by six other functions: (1) generating specific situations, (2) developing the plot, (3) enlivening the characters' utterances, (4) implicating messages, (5) indirectly characterizing characters, and (6) creating ambiguous settings. Keywords: children literature, tales, flouting maxims
EM- ja MCEM-algoritmi apuvälineenä suurimman uskottavuuden estimoinnissa
Kuismin, M. (Markku)
2013-01-01
This thesis studies the expectation-maximization (EM) algorithm, which is based on the maximum likelihood method. The main emphasis of the work is on a theoretical examination of the algorithm's properties; it does not address real research problems or empirical data sets. First, the algorithm is examined mathematically, in the manner of the maximum likelihood method. This theoretical part is mainly based on McLachlan and Krishnan's book The EM Algorithm and Extensions (1997). The algorithm is then used to study two norma...
Swanepoel, Konrad J
2011-01-01
A subset of a normed space X is called equilateral if the distance between any two points is the same. Let m(X) be the smallest possible size of an equilateral subset of X that is maximal with respect to inclusion. We first observe that Petty's construction of a d-dimensional X with m(X)=4, for any finite dimension d >= 4, can be generalised to show that m(X\oplus_1\R)=4 for any X of dimension at least 2 which has a smooth point on its unit sphere. By a construction involving Hadamard matrices we then show that both m(\ell_p) and m(\ell_p^d) are finite and bounded above by a function of p, for all 1 <= p < 2. Also, for each p > 1 there is a c > 1 such that m(X) <= d+1 for all d-dimensional X with Banach-Mazur distance less than c from \ell_p^d. Using Brouwer's fixed-point theorem we show that m(X) <= d+1 for all d-dimensional X with Banach-Mazur distance less than 3/2 from \ell_\infty^d. A graph-theoretical argument furthermore shows that m(\ell_\infty^d)=d+1. The above results lead us to conjecture that m(X) <= 1+\dim X.
Unified Maximally Natural Supersymmetry
Huang, Junwu
2016-01-01
Maximally Natural Supersymmetry, an unusual weak-scale supersymmetric extension of the Standard Model based upon the inherently higher-dimensional mechanism of Scherk-Schwarz supersymmetry breaking (SSSB), possesses remarkably good fine tuning given present LHC limits. Here we construct a version with precision $SU(2)_{\\rm L} \\times U(1)_{\\rm Y} $ unification: $\\sin^2 \\theta_W(M_Z) \\simeq 0.231$ is predicted to $\\pm 2\\%$ by unifying $SU(2)_{\\rm L} \\times U(1)_{\\rm Y} $ into a 5D $SU(3)_{\\rm EW}$ theory at a Kaluza-Klein scale of $1/R_5 \\sim 4.4\\,{\\rm TeV}$, where SSSB is simultaneously realised. Full unification with $SU(3)_{\\rm C}$ is accommodated by extending the 5D theory to a $N=4$ supersymmetric $SU(6)$ gauge theory on a 6D rectangular orbifold at $1/R_6 \\sim 40 \\,{\\rm TeV}$. TeV-scale states beyond the SM include exotic charged fermions implied by $SU(3)_{\\rm EW}$ with masses lighter than $\\sim 1.2\\,{\\rm TeV}$, and squarks in the mass range $1.4\\,{\\rm TeV} - 2.3\\,{\\rm TeV}$, providing distinct signature...
A comparison of two estimation algorithms for Samejima's continuous IRT model.
Zopluoglu, Cengiz
2013-03-01
This study compares two algorithms, as implemented in two different software packages, that have appeared in the literature for estimating the item parameters of Samejima's continuous response model (CRM) in a simulation environment. In addition to the simulation study, a real-data illustration is provided, and CRM is used as a potential psychometric tool for analyzing measurement outcomes in the context of curriculum-based measurement (CBM) in the field of education. The results indicate that a simplified expectation-maximization (EM) algorithm is as effective and efficient as the traditional EM algorithm for estimating the CRM item parameters. The results also show promise for using this psychometric model to analyze CBM outcomes, although more research is needed in order to recommend CRM as a standard practice in the CBM context.
Predicting Contextual Sequences via Submodular Function Maximization
Dey, Debadeepta; Hebert, Martial; Bagnell, J Andrew
2012-01-01
Sequence optimization, where the items in a list are ordered to maximize some reward, has many applications such as web advertisement placement, search, and control libraries in robotics. Previous work in sequence optimization produces a static ordering that does not take any features of the items or the context of the problem into account. In this work, we propose a general approach to ordering the items within the sequence based on the context (e.g., perceptual information, environment description, and goals). We take a simple, efficient, reduction-based approach where the choice and order of the items is established by repeatedly learning simple classifiers or regressors for each "slot" in the sequence. Our approach leverages recent work on submodular function maximization to provide a formal regret reduction from submodular sequence optimization to simple cost-sensitive prediction. We apply our contextual sequence prediction algorithm to optimize control libraries and demonstrate results on two robotics problems: ...
廖福蓉; 王成良
2012-01-01
The mining of frequent itemsets is limited by the large number of resulting itemsets as well as the high computational cost. In many application domains, however, it is often sufficient to mine only maximum-length frequent itemsets. An ordered-FP-tree-based algorithm is proposed for this mining problem. A max-level field is added to the header table to record the greatest height of each item in the ordered FP-tree. In the mining process, only items whose max-level value is equal to or greater than the length of the current maximum-length frequent itemset are traversed. Neither producing conditional pattern bases nor recursively constructing conditional FP-trees is needed, and the support of the maximum-length frequent itemsets is calculated. The experimental results show that the algorithm accelerates tree traversal and improves mining efficiency.
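As a point of reference for the quantity being mined, a brute-force baseline for maximum-length frequent itemsets (deliberately naive, and not the ordered-FP-tree algorithm of the paper) might look like:

```python
from itertools import combinations

def max_length_frequent(transactions, minsup):
    """Return the longest itemsets appearing in at least `minsup` transactions."""
    items = sorted({i for t in transactions for i in t})
    for k in range(len(items), 0, -1):          # try the longest sizes first
        frequent = [set(c) for c in combinations(items, k)
                    if sum(set(c) <= t for t in transactions) >= minsup]
        if frequent:                            # first non-empty level is maximal
            return frequent
    return []

tx = [{"a", "b", "c"}, {"a", "b", "c"}, {"a", "b"}, {"b", "c"}]
result = max_length_frequent(tx, minsup=2)
```

The FP-tree method avoids this exponential candidate enumeration; the baseline only pins down what "maximum-length frequent itemset" means.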
Algorithms over partially ordered sets
Baer, Robert M.; Østerby, Ole
1969-01-01
in partially ordered sets, answer the combinatorial question of how many maximal chains might exist in a partially ordered set with n elements, and we give an algorithm for enumerating all maximal chains. We give (in § 3) algorithms which decide whether a partially ordered set is a (lower or upper) semi-lattice, and whether a lattice has distributive, modular, and Boolean properties. Finally (in § 4) we give Algol realizations of the various algorithms.
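The enumeration of all maximal chains described above can be sketched as a depth-first search over the cover (Hasse) digraph; the divisibility poset and the function names here are illustrative, not the paper's Algol procedures:

```python
def maximal_chains(covers, bottoms):
    """Enumerate maximal chains: start at a minimal element and extend along
    covering relations until an element with no upper cover is reached."""
    chains = []
    def extend(chain):
        succs = covers.get(chain[-1], [])
        if not succs:                 # no upper cover: the chain is maximal
            chains.append(chain)
        for s in succs:
            extend(chain + [s])
    for b in bottoms:
        extend([b])
    return chains

# divisibility order on {1, 2, 3, 4, 6}: 1 < 2 < 4, 1 < 2 < 6, 1 < 3 < 6
covers = {1: [2, 3], 2: [4, 6], 3: [6]}
chains = maximal_chains(covers, bottoms=[1])
```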
Online Learning of Assignments that Maximize Submodular Functions
Golovin, Daniel; Streeter, Matthew
2009-01-01
Which ads should we display in sponsored search in order to maximize our revenue? How should we dynamically rank information sources to maximize value of information? These applications exhibit strong diminishing returns: Selection of redundant ads and information sources decreases their marginal utility. We show that these and other problems can be formalized as repeatedly selecting an assignment of items to positions to maximize a sequence of monotone submodular functions that arrive one by one. We present an efficient algorithm for this general problem and analyze it in the no-regret model. Our algorithm possesses strong theoretical guarantees, such as a performance ratio that converges to the optimal constant of 1-1/e. We empirically evaluate our algorithm on two real-world online optimization problems on the web: ad allocation with submodular utilities, and dynamically ranking blogs to detect information cascades.
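The no-regret algorithm builds on the classical greedy rule for monotone submodular maximization, whose offline form attains the 1-1/e guarantee. The sketch below uses a coverage objective as the submodular function, purely as an illustration rather than the authors' online algorithm:

```python
def greedy_max_coverage(sets, k):
    """Pick k sets greedily, each time maximizing the marginal coverage gain."""
    chosen, covered = [], set()
    for _ in range(k):
        best = max(sets, key=lambda name: len(sets[name] - covered))
        if not sets[best] - covered:      # no remaining marginal gain
            break
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5, 6, 7}, "d": {1, 7}}
chosen, covered = greedy_max_coverage(sets, k=2)
```

Because coverage exhibits diminishing returns (adding a set helps less once many elements are covered), the greedy value is at least (1-1/e) of the optimum for this family of objectives.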
Maximal subgroups of finite groups
S. Srinivasan
1990-01-01
In finite groups, maximal subgroups play a very important role. Results in the literature show that if a maximal subgroup has very small index in the whole group, then it influences the structure of the group itself. In this paper we study the case when the indices of the maximal subgroups of a group have a special type of relation with the Fitting subgroup of the group.
Leandro Ricardo Altimari
2006-06-01
This study investigated the effect of long-term supplementation with creatine monohydrate (Cr m) on relative total work (RTW) in intermittent maximal efforts on the cycle ergometer in trained men. Twenty-six individuals were randomly divided into a creatine group (CR, n=13) and a placebo group (PL, n=13). In a double-blind manner, the subjects received doses of Cr m or placebo-maltodextrin (20 g.d-1 for 5 days and 3 g.d-1 for the 51 subsequent days). The groups had their dietary habits and physical fitness controlled beforehand. For determination of the RTW, the subjects followed an exercise protocol on the cycle ergometer comprising three 30 s anaerobic Wingate tests interspersed with two minutes of recovery, before and after the supplementation period. ANOVA, followed by the Tukey post hoc test when p < 0.05, was used for data treatment. There was a significant time effect for RTW (F1,24 = 8.00; p < 0.05), with the CR group demonstrating significantly greater (3%) RTW production compared to the PL group after the supplementation period (690.54 ± 46.83 vs 655.71 ± 74.34 J.kg-1, respectively; p < 0.05). The results of the present study
Gonzalez-Sanchez, Jon
2010-01-01
Let $w = w(x_1,..., x_n)$ be a word, i.e. an element of the free group $F = \langle x_1,..., x_n\rangle$ on $n$ generators $x_1,..., x_n$. The verbal subgroup $w(G)$ of a group $G$ is the subgroup generated by the set $\{w(g_1,...,g_n)^{\pm 1} \mid g_i \in G, 1\leq i\leq n\}$ of all $w$-values in $G$. We say that a (finite) group $G$ is $w$-maximal if $|G:w(G)| > |H:w(H)|$ for all proper subgroups $H$ of $G$, and that $G$ is hereditarily $w$-maximal if every subgroup of $G$ is $w$-maximal. In this text we study $w$-maximal and hereditarily $w$-maximal (finite) groups.
Marco Antônio Mota Gomes; Audes Diógenes de Magalhães Feitosa; Wille Oigman; José Márcio Ribeiro; Emílio Hideyuki Moriguchi; José Francisco Kerr Saraiva; Dalton Bertolim Précoma; Artur Beltrame Ribeiro; Celso Amodeo; Andréa Araujo Brandão
2008-01-01
... after a washout period of two to three weeks for those under ineffective treatment. The use of olmesartan medoxomil was evaluated in a four-phase treatment algorithm: (i) monotherapy (20 mg), (ii-iii) in association with hydrochlorothiazide...
Maximizing without difficulty: A modified maximizing scale and its correlates
Lai, Linda
2010-01-01
... included in several previous studies. Based on this scale, maximizing is positively correlated with optimism, need for cognition, desire for consistency, risk aversion, intrinsic motivation, self-efficacy and perceived workload, whereas...
Maximizing and customer loyalty: Are maximizers less loyal?
Linda Lai
2011-06-01
Despite their efforts to choose the best of all available solutions, maximizers seem to be more inclined than satisficers to regret their choices and to experience post-decisional dissonance. Maximizers may therefore be expected to change their decisions more frequently and hence exhibit lower customer loyalty to providers of products and services compared to satisficers. Findings from the study reported here (N = 1978) support this prediction. Maximizers reported significantly higher intentions to switch to another service provider (television provider) than satisficers. Maximizers' intentions to switch appear to be intensified and mediated by higher proneness to regret, increased desire to discuss relevant choices with others, higher levels of perceived knowledge of alternatives, and higher ego involvement in the end product, compared to satisficers. Opportunities for future research are suggested.
Are maximizers really unhappy? The measurement of maximizing tendency,
Dalia L. Diab
2008-06-01
Recent research suggesting that people who maximize are less happy than those who satisfice has received considerable fanfare. The current study investigates whether this conclusion reflects the construct itself or rather how it is measured. We developed an alternative measure of maximizing tendency that is theory-based, has good psychometric properties, and predicts behavioral outcomes. In contrast to the existing maximization measure, our new measure did not correlate with life (dis)satisfaction, nor with most maladaptive personality and decision-making traits. We conclude that the interpretation of maximizers as unhappy may be due to poor measurement of the construct. We present a more reliable and valid measure for future researchers to use.
Principles of maximally classical and maximally realistic quantum mechanics
S M Roy
2002-08-01
Recently Auberson, Mahoux, Roy and Singh have proved a long-standing conjecture of Roy and Singh: in 2N-dimensional phase space, a maximally realistic quantum mechanics can have quantum probabilities of no more than N+1 complete commuting sets (CCS) of observables coexisting as marginals of one positive phase space density. Here I formulate a stationary principle which gives a nonperturbative definition of a maximally classical as well as maximally realistic phase space density. I show that the maximally classical trajectories are in fact exactly classical in the simple examples of coherent states and bound states of an oscillator and Gaussian free-particle states. In contrast, it is known that the de Broglie-Bohm realistic theory gives highly nonclassical trajectories.
Finding the Maximizers of the Information Divergence from an Exponential Family
Rauh, Johannes
2009-01-01
This paper investigates maximizers of the information divergence from an exponential family $E$. It is shown that the $rI$-projection of a maximizer $P$ to $E$ is a convex combination of $P$ and a probability measure $P_-$ with disjoint support and the same value of the sufficient statistics $A$. This observation can be used to transform the original problem of maximizing $D(\cdot||E)$ over the set of all probability measures into the maximization of a function $\bar{D}$ over a convex subset of $\ker A$. The global maximizers of both problems correspond to each other. Furthermore, finding all local maximizers of $\bar{D}$ yields all local maximizers of $D(\cdot||E)$. This paper also proposes two algorithms to find the maximizers of $\bar{D}$ and applies them to two examples, where the maximizers of $D(\cdot||E)$ were not known before.
Fúlvia de Barros Manchado
2006-10-01
The maximal lactate steady state (MLSS) is considered the gold standard for determining the aerobic-anaerobic metabolic transition intensity in continuous exercise, but in humans the blood lactate response at this intensity depends on the ergometer used in the evaluation. An important tool for studies in physiology and related areas is the application of experimental models using animals. However, research aimed at investigating evaluation protocols in rats is still limited. The objective of this study was to verify whether the MLSS depends on the ergometer used for the aerobic evaluation of rats. For this, 40 adult Wistar rats were evaluated in two different exercises: swimming and treadmill running. In both, the MLSS was determined after the application of four continuous tests at different intensities, each lasting 25 minutes and separated by 48-hour intervals. In all tests, blood was collected from the animals' tails every five minutes of exercise for blood lactate analysis. The swimming tests took place in a deep cylindrical tank with the water temperature at 31 ± 1°C. The loads adopted for the tests were 4.5, 5.0, 5.5 and 6.0% of body weight, attached to the animals' backs. For the determination of the MLSS in running, runner rats were selected and the test speeds were 15, 20, 25 and 30 m.min-1. The MLSS was interpreted as the highest exercise intensity at which the increase in blood lactate was equal to or less than 1 mM from the 10th to the 25th minute. One-way ANOVA identified differences between blood lactate concentrations across the various exercise times and ergometers. The MLSS in swimming occurred at 5.0% of body weight, at a lactate concentration of 5.20 ± 0.22 mM. For treadmill exercise, the MLSS was observed at 20 m.min-1, at a concentration of 3.87 ± 0.33 mM. Thus, it is possible to conclude that the MLSS
赵越; 李红
2016-01-01
To address the problems of poor convergence and low word-segmentation accuracy of the standard EM algorithm when applied to Chinese word segmentation, this paper proposes a Chinese word segmentation cognitive model based on an EM algorithm optimized with the maximum likelihood estimation rule. First, the probability of the current word is used to compute the likelihood of each possible segmentation, the segmentation likelihoods are normalized, and a word count is kept for each segmentation. The estimates obtained by the standard EM algorithm are only guaranteed to converge to a stationary point of the likelihood function, not to a global or even a local maximum. The maximum likelihood estimation rule is therefore used to optimize the algorithm, so that effective methods from nonlinear optimization can be applied to accelerate convergence. Simulation results show that the proposed cognitive model based on the optimized EM algorithm has better convergence and higher accuracy in Chinese word segmentation.
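The baseline that such work starts from, EM over a unigram word model for unsegmented text, can be sketched on a toy corpus. The corpus, the maximum word length, and the iteration count below are illustrative assumptions, not the paper's optimized variant:

```python
import math

def segmentations(s, maxlen=3):
    """Yield every segmentation of s into words of length <= maxlen."""
    if not s:
        yield []
        return
    for k in range(1, min(maxlen, len(s)) + 1):
        for rest in segmentations(s[k:], maxlen):
            yield [s[:k]] + rest

def em_segment(corpus, iters=10, maxlen=3):
    """E-step: weight each segmentation by the product of current word
    probabilities. M-step: renormalize the expected word counts."""
    vocab = {s[i:j] for s in corpus for i in range(len(s))
             for j in range(i + 1, min(i + maxlen, len(s)) + 1)}
    p = {w: 1.0 / len(vocab) for w in vocab}
    for _ in range(iters):
        counts = {w: 0.0 for w in vocab}
        for s in corpus:
            segs = list(segmentations(s, maxlen))
            weights = [math.prod(p[w] for w in seg) for seg in segs]
            total = sum(weights)
            for seg, wt in zip(segs, weights):
                for word in seg:
                    counts[word] += wt / total
        z = sum(counts.values())
        p = {w: c / z for w, c in counts.items()}
    return p

p = em_segment(["thecat", "thedog", "thecow"])
best = max(segmentations("thecat"),
           key=lambda seg: math.prod(p[w] for w in seg))
```

On this toy corpus the shared prefix accumulates most of the expected counts, so the most probable segmentation of "thecat" splits off "the". Note that EM here converges only to a stationary point of the likelihood, the weakness the paper addresses.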
Maximizing ROI with yield management
Neil Snyder
2001-01-01
.... the technology is based on the concept of yield management, which aims to sell the right product to the right customer at the right price and the right time therefore maximizing revenue, or yield...
Are CEOs Expected Utility Maximizers?
John List; Charles Mason
2009-01-01
Are individuals expected utility maximizers? This question represents much more than academic curiosity. In a normative sense, at stake are the fundamental underpinnings of the bulk of the last half-century's models of choice under uncertainty. From a positive perspective, the ubiquitous use of benefit-cost analysis across government agencies renders the expected utility maximization paradigm literally the only game in town. In this study, we advance the literature by exploring CEO's preferen...
Gaussian maximally multipartite entangled states
Facchi, Paolo; Lupo, Cosmo; Mancini, Stefano; Pascazio, Saverio
2009-01-01
We introduce the notion of maximally multipartite entangled states (MMES) in the context of Gaussian continuous variable quantum systems. These are bosonic multipartite states that are maximally entangled over all possible bipartitions of the system. By considering multimode Gaussian states with constrained energy, we show that perfect MMESs, which exhibit the maximum amount of bipartite entanglement for all bipartitions, only exist for systems containing n=2 or 3 modes. We further numerically investigate the structure of MMESs and their frustration for n <= 7.
All maximally entangling unitary operators
Cohen, Scott M. [Department of Physics, Duquesne University, Pittsburgh, Pennsylvania 15282 (United States); Department of Physics, Carnegie-Mellon University, Pittsburgh, Pennsylvania 15213 (United States)
2011-11-15
We characterize all maximally entangling bipartite unitary operators, acting on systems A and B of arbitrary finite dimensions d_A <= d_B, when ancillary systems are available to both parties. Several useful and interesting consequences of this characterization are discussed, including an understanding of why the entangling and disentangling capacities of a given (maximally entangling) unitary can differ and a proof that these capacities must be equal when d_A = d_B.
Salvio, Alberto; Strumia, Alessandro; Urbano, Alfredo
2016-01-01
Motivated by the 750 GeV diphoton excess found at LHC, we compute the maximal width into $\\gamma\\gamma$ that a neutral scalar can acquire through a loop of charged fermions or scalars as function of the maximal scale at which the theory holds, taking into account vacuum (meta)stability bounds. We show how an extra gauge symmetry can qualitatively weaken such bounds, and explore collider probes and connections with Dark Matter.
Two Time Point MS Lesion Segmentation in Brain MRI: An Expectation-Maximization Framework.
Jain, Saurabh; Ribbens, Annemie; Sima, Diana M; Cambron, Melissa; De Keyser, Jacques; Wang, Chenyu; Barnett, Michael H; Van Huffel, Sabine; Maes, Frederik; Smeets, Dirk
2016-01-01
Purpose: Lesion volume is a meaningful measure in multiple sclerosis (MS) prognosis. Manual lesion segmentation for computing volume in a single or multiple time points is time consuming and suffers from intra and inter-observer variability. Methods: In this paper, we present MSmetrix-long: a joint expectation-maximization (EM) framework for two time point white matter (WM) lesion segmentation. MSmetrix-long takes as input a 3D T1-weighted and a 3D FLAIR MR image and segments lesions in three steps: (1) cross-sectional lesion segmentation of the two time points; (2) creation of difference image, which is used to model the lesion evolution; (3) a joint EM lesion segmentation framework that uses output of step (1) and step (2) to provide the final lesion segmentation. The accuracy (Dice score) and reproducibility (absolute lesion volume difference) of MSmetrix-long is evaluated using two datasets. Results: On the first dataset, the median Dice score between MSmetrix-long and expert lesion segmentation was 0.63 and the Pearson correlation coefficient (PCC) was equal to 0.96. On the second dataset, the median absolute volume difference was 0.11 ml. Conclusions: MSmetrix-long is accurate and consistent in segmenting MS lesions. Also, MSmetrix-long compares favorably with the publicly available longitudinal MS lesion segmentation algorithm of Lesion Segmentation Toolbox.
PENDUGAAN DATA HILANG DENGAN METODE YATES DAN ALGORITMA EM PADA RANCANGAN LATTICE SEIMBANG
MADE SUSILAWATI
2015-06-01
Missing data often occur in agriculture and animal husbandry experiments. Missing data in an experimental design make the information obtained less complete. In this research, missing data were estimated with the Yates method and the Expectation Maximization (EM) algorithm. The basic concept of the Yates method is to minimize the sum of squared errors (JKG), while the basic concept of the EM algorithm is to maximize the likelihood function. This research applied a balanced lattice design with 9 treatments, 4 replications and 3 groups in each replication. The estimation results showed that the Yates method was better for two missing values positioned in a treatment, in a column, or at random, while the EM algorithm was better for estimating one missing value, or two missing values positioned in a group or a replication. A comparison of the ANOVA results showed that the JKG of the incomplete data was larger than the JKG of the incomplete data augmented with the estimated values. This suggests that the missing data need to be estimated.
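For the simplest case the Yates idea reduces to a closed-form estimate. The sketch below applies the classical single-missing-value formula for a randomized complete block design, a simpler layout than the balanced lattice above, used here only for illustration:

```python
# Yates single-missing-value estimate for a randomized complete block design:
# x = (t*T + b*B - G) / ((t-1)*(b-1)) minimizes the error sum of squares,
# where t = treatments, b = blocks, T = total of the missing cell's treatment,
# B = total of its block, G = grand total (all sums over observed cells).
def yates_missing(data, miss_row, miss_col):
    """data: t x b table (rows = treatments, cols = blocks) with one None cell."""
    t, b = len(data), len(data[0])
    T = sum(v for v in data[miss_row] if v is not None)
    B = sum(row[miss_col] for row in data if row[miss_col] is not None)
    G = sum(v for row in data for v in row if v is not None)
    return (t * T + b * B - G) / ((t - 1) * (b - 1))

data = [[10.0, 12.0, 11.0],
        [13.0, None, 14.0],
        [ 9.0, 11.0, 10.0]]
x = yates_missing(data, 1, 1)
```

After imputing x, the usual ANOVA proceeds on the completed table (with one degree of freedom subtracted for the estimated cell).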
Marco Antônio Mota Gomes; Audes Diógenes de Magalhães Feitosa; Wille Oigman; José Márcio Ribeiro; Emílio Hideyuki Moriguchi; José Francisco Kerr Saraiva; Dalton Bertolim Précoma; Artur Beltrame Ribeiro; Celso Amodeo; Andréa Araujo Brandão
2008-01-01
... it is important to plan and implement better treatment strategies. OBJECTIVE: To evaluate the efficacy of a dose-escalation treatment based on olmesartan medoxomil. METHODS: This is...
Message-Passing Algorithms for Quadratic Programming Formulations of MAP Estimation
Kumar, Akshat
2012-01-01
Computing maximum a posteriori (MAP) estimation in graphical models is an important inference problem with many applications. We present message-passing algorithms for quadratic programming (QP) formulations of MAP estimation for pairwise Markov random fields. In particular, we use the concave-convex procedure (CCCP) to obtain a locally optimal algorithm for the non-convex QP formulation. A similar technique is used to derive a globally convergent algorithm for the convex QP relaxation of MAP. We also show that a recently developed expectation-maximization (EM) algorithm for the QP formulation of MAP can be derived from the CCCP perspective. Experiments on synthetic and real-world problems confirm that our new approach is competitive with max-product and its variations. Compared with CPLEX, we achieve more than an order-of-magnitude speedup in solving optimally the convex QP relaxation.
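As a concrete illustration of the CCCP idea the abstract relies on (not the paper's QP message-passing itself), the sketch below minimizes a one-dimensional non-convex function by splitting it into convex and concave parts and repeatedly minimizing the convex surrogate obtained by linearizing the concave part:

```python
import numpy as np

def cccp_double_well(x0, iters=60):
    """CCCP for f(x) = x**4 - 2*x**2, split as convex u(x) = x**4 plus
    concave v(x) = -2*x**2. Linearizing v at x_t gives the surrogate
    x**4 - 4*x_t*x, whose minimizer solves 4*x**3 = 4*x_t, i.e. the
    cube root of x_t. Each step provably does not increase f."""
    x = x0
    for _ in range(iters):
        x = np.cbrt(x)   # closed-form minimizer of the convex surrogate
    return x

x_star = cccp_double_well(0.5)
print(x_star)   # converges to the local minimum at x = 1
```

Starting from any positive point, the iterates climb monotonically toward the minimizer at 1; this local-descent behavior is exactly the guarantee CCCP gives for the non-convex QP in the abstract.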
Ana Paula Iannoni
2006-04-01
Full Text Available O modelo hipercubo, conhecido na literatura de problemas de localização de sistemas servidor para cliente, é um modelo baseado em teoria de filas espacialmente distribuídas e aproximações Markovianas. O modelo pode ser modificado para analisar os sistemas de atendimentos emergenciais (SAEs em rodovias, considerando as particularidades da política de despacho destes sistemas. Neste estudo, combinou-se o modelo hipercubo com um algoritmo genético para otimizar a configuração e operação de SAEs em rodovias. A abordagem é efetiva para apoiar decisões relacionadas ao planejamento e operação destes sistemas, por exemplo, em determinar o tamanho ideal para as áreas de cobertura de cada ambulância, de forma a minimizar o tempo médio de resposta aos usuários e o desbalanceamento das cargas de trabalho das ambulâncias. Os resultados computacionais desta abordagem foram analisados utilizando dados reais do sistema Anjos do Asfalto (rodovia Presidente Dutra.The hypercube model, well-known in the literature on problems of server-to-customer localization systems, is based on the spatially distributed queuing theory and Markovian analysis approximations. The model can be modified to analyze Emergency Medical Systems (EMSs on highways, considering the particularities of these systems' dispatching policies. In this study, we combine the hypercube model with a genetic algorithm to optimize the configuration and operation of EMSs on highways. This approach is effective to support planning and operation decisions, such as determining the ideal size of the area each ambulance should cover to minimize not only the average time of response to the user but also ambulance workload imbalances, as well as generating a Pareto efficient boundary between these measures. The computational results of this approach were analyzed using real data Anjos do Asfalto EMS (which covers the Presidente Dutra highway.
Entropy Message Passing Algorithm
Ilic, Velimir M; Todorovic, Branimir T
2009-01-01
Message passing over a factor graph can be considered a generalization of many well-known algorithms for efficient marginalization of a multivariate function. A specific instance of the algorithm is obtained by choosing an appropriate commutative semiring for the range of the function to be marginalized. Examples are the Viterbi algorithm, obtained on the max-product semiring, and the forward-backward algorithm, obtained on the sum-product semiring. In this paper, the Entropy Message Passing (EMP) algorithm is developed. It operates over the entropy semiring, previously introduced in automata theory. It is shown how EMP extends the use of message passing over factor graphs to probabilistic model algorithms such as the expectation-maximization algorithm, gradient methods, and the computation of model entropy, unifying the work of different authors.
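The entropy semiring can be sketched directly. In the toy code below (an illustration, not the paper's factor-graph formulation), each positive edge weight w is lifted to the pair (w, -w*log w); semiring addition adds pairs componentwise, and semiring multiplication is (p1, r1)*(p2, r2) = (p1*p2, p1*r2 + p2*r1). A forward pass over a small weighted chain then yields the partition function Z and a quantity R such that the entropy of the path distribution equals log Z + R/Z, which the code checks against brute-force enumeration:

```python
import math
from itertools import product

def lift(w):
    """Lift a positive edge weight into the entropy semiring."""
    return (w, -w * math.log(w))

def s_add(a, b):                     # semiring addition
    return (a[0] + b[0], a[1] + b[1])

def s_mul(a, b):                     # semiring multiplication
    return (a[0] * b[0], a[0] * b[1] + b[0] * a[1])

def chain_entropy(T):
    """Forward pass over a chain of binary variables with transition weight
    matrices T; returns the entropy of the induced path distribution."""
    msg = [lift(1.0), lift(1.0)]     # multiplicative identity per start state
    for W in T:
        new = []
        for j in range(2):
            acc = (0.0, 0.0)         # additive identity
            for i in range(2):
                acc = s_add(acc, s_mul(msg[i], lift(W[i][j])))
            new.append(acc)
        msg = new
    Z, R = s_add(msg[0], msg[1])
    return math.log(Z) + R / Z

def brute_entropy(T):
    """Reference: enumerate every path and compute the entropy directly."""
    n = len(T) + 1
    ws = []
    for path in product(range(2), repeat=n):
        w = 1.0
        for k, W in enumerate(T):
            w *= W[path[k]][path[k + 1]]
        ws.append(w)
    Z = sum(ws)
    return -sum(w / Z * math.log(w / Z) for w in ws)

T = [[[0.6, 0.4], [0.3, 0.7]], [[0.5, 0.5], [0.2, 0.8]]]
print(chain_entropy(T), brute_entropy(T))   # the two values agree
```

The key property is that `lift` is a homomorphism for semiring multiplication, so a product of lifted edge weights equals the lifted path weight, and the forward recursion sums these path terms exactly.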
LIU Yuan-feng; ZHAO Mei
2005-01-01
An algorithm based on the data-adaptive filtering characteristics of singular spectrum analysis (SSA) is proposed to denoise chaotic data. First, the empirical orthogonal functions (EOFs) and principal components (PCs) of the signal are calculated; the signal is then reconstructed using the EOFs and PCs, and the optimal reconstruction order is chosen based on the singular spectrum to obtain the denoised signal. Noise in the signal degrades the precision of maximal Lyapunov exponent calculations. The proposed denoising algorithm was applied to the maximal Lyapunov exponent calculations of two chaotic systems, the Henon map and the Logistic map. Numerical results show that this denoising algorithm improves the calculation precision of the maximal Lyapunov exponent.
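The basic SSA filtering step described above (embedding, decomposition, truncated reconstruction) can be sketched as follows; the singular-spectrum order-selection rule of the paper is replaced here by a fixed number of components, and the function and test signal are illustrative:

```python
import numpy as np

def ssa_denoise(x, L, k):
    """Basic singular spectrum analysis: embed the series into an L x K
    trajectory matrix, keep the k leading singular components, and recover
    a series by anti-diagonal (Hankel) averaging."""
    N = len(x)
    K = N - L + 1
    X = np.column_stack([x[i:i + L] for i in range(K)])   # trajectory matrix
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xk = (U[:, :k] * s[:k]) @ Vt[:k]                      # rank-k approximation
    y = np.zeros(N)
    cnt = np.zeros(N)
    for j in range(K):                                    # anti-diagonal averaging
        y[j:j + L] += Xk[:, j]
        cnt[j:j + L] += 1
    return y / cnt

rng = np.random.default_rng(1)
t = np.linspace(0, 4 * np.pi, 400)
clean = np.sin(t)
noisy = clean + 0.3 * rng.standard_normal(t.size)
den = ssa_denoise(noisy, L=60, k=2)
print(np.mean((den - clean) ** 2) < np.mean((noisy - clean) ** 2))   # True
```

A sinusoid concentrates in two singular components of the trajectory matrix, so a rank-2 reconstruction removes most of the broadband noise.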
Maria da Cruz CL Moura
2010-06-01
Estimating the genetic variability in a germplasm bank is important not only for the conservation of genetic resources, but also for their use in plant breeding. Accessions in a bank are studied on the basis of quantitative and qualitative descriptors; however, these data are not always analyzed simultaneously. The present work aimed to study the genetic divergence among 56 Capsicum chinense accessions from the Germplasm Collection of the Universidade Estadual do Norte Fluminense Darcy Ribeiro, based on 44 morpho-agronomic descriptors, 37 qualitative and seven quantitative, using joint analysis based on the Gower algorithm. A completely randomized design was used, with three replications and three plants per plot; the plants studied were grown in 5 L pots. There was phenotypic variability among the pepper accessions studied, mainly in the fruits, which showed marked differences in size, shape, color, total soluble solids and vitamin C content. UPGMA was used as the clustering method because it had the highest cophenetic correlation coefficient (r = 0.82). The accessions were divided into six groups. Clustering based on the Gower distance was more efficient at separating the genotypes when the qualitative variables were used, in comparison with the quantitative ones, indicating a greater contribution of the former to explaining the groupings. The joint analysis of quantitative and qualitative data was more efficient in determining the genetic divergence among the accessions evaluated, and is a viable alternative and an important tool for characterizing the variability in germplasm banks.
Lee, N.Y. [Radioisotope Research Division, Korea Atomic Energy Research Institute, P.O. Box 105, Yuseong, Daejeon 305-353 (Korea, Republic of)], E-mail: nayoung4493@gmail.com; Jung, S.H. [Radioisotope Research Division, Korea Atomic Energy Research Institute, P.O. Box 105, Yuseong, Daejeon 305-353 (Korea, Republic of)], E-mail: shjung3@kaeri.re.kr; Kim, J.B. [Radioisotope Research Division, Korea Atomic Energy Research Institute, P.O. Box 105, Yuseong, Daejeon 305-353 (Korea, Republic of)], E-mail: jong@kaeri.re.kr
2009-07-15
In this paper, we evaluated measurement geometries and data processing algorithms for industrial gamma tomography. Several phantoms simulating industrial objects were tested under various conditions with the gamma-ray CT system developed at KAERI (Korea Atomic Energy Research Institute). Radiation was measured with 24 lead-shielded 1x1 in. NaI detectors. For the parallel beam geometry, the EM algorithm showed the best resolution among the algebraic reconstruction technique (ART), the simultaneous iterative reconstruction technique (SIRT) and expectation maximization (EM). However, fan beam scanning was more time efficient than the parallel projection for a similar quality of reconstructed image. Future development of industrial gamma-ray CT will focus on large-scale applications, which are more practical for diagnosis in the petrochemical industry.
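The EM (ML-EM) reconstruction compared above has a compact multiplicative update, x ← x · Aᵀ(y / Ax) / Aᵀ1. The sketch below applies it to a toy system matrix rather than a real CT geometry; the matrix, sizes and iteration count are illustrative assumptions:

```python
import numpy as np

def mlem(A, y, iters=5000):
    """Multiplicative ML-EM update: x <- x * A^T(y / Ax) / (A^T 1)."""
    x = np.ones(A.shape[1])                  # strictly positive start
    sens = A.T @ np.ones(A.shape[0])         # sensitivity image, A^T 1
    for _ in range(iters):
        x = x * (A.T @ (y / (A @ x))) / sens
    return x

rng = np.random.default_rng(2)
A = rng.uniform(0.1, 1.0, size=(12, 4))      # 12 rays, 4 pixels (toy geometry)
x_true = np.array([1.0, 0.2, 0.7, 1.5])
y = A @ x_true                               # noiseless, consistent projections
x_hat = mlem(A, y)
print(np.round(x_hat, 4))                    # approaches x_true
```

The update preserves nonnegativity automatically and, for consistent data, drives the forward projection Ax toward the measurements y, which is why EM is attractive for emission-style gamma data.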
Benvenuto, Federico
2012-01-01
In this paper we propose a new statistical stopping rule for constrained maximum likelihood iterative algorithms applied to ill-posed inverse problems. To this aim we extend the definition of Tikhonov regularization to a statistical framework and prove that applying the proposed stopping rule to the Iterative Space Reconstruction Algorithm (ISRA) in the Gaussian case and to Expectation Maximization (EM) in the Poisson case leads to well-defined regularization methods according to the given definition. We also prove that, if an inverse problem is genuinely ill-posed in the sense of Tikhonov, the same definition is not satisfied when ISRA and EM are stopped by classical rules such as Morozov's discrepancy principle, Pearson's test and the Poisson discrepancy principle. The stopping rule is illustrated in the case of image reconstruction from data recorded by the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI). First, by using a simulated image consisting of structures analogous to those ...
A. Garmroodi Asil
2017-09-01
To further reduce the sulfur dioxide emission of the entire refining process, two scenarios, acid gas preheat and air preheat, are investigated when either is used simultaneously with the third enrichment scheme. The maximum overall sulfur recovery efficiency and the highest combustion chamber temperature are slightly higher with acid gas preheat, but air preheat is more favorable because it is more benign. To the best of our knowledge, optimization of the entire GTU + enrichment section and SRU processes has not been addressed previously.
Ricardo Vinicius Ledesma Contarteze
2007-06-01
INTRODUCTION: The level of stress reached during acute/chronic exercise is relevant, since high levels of stress may impair animal welfare. The concentrations of the hormones adrenocorticotropin (ACTH) and corticosterone, as well as the ascorbic acid and cholesterol concentrations of the adrenal glands, are important stress biomarkers. PURPOSE: To analyze the sensitivity of different stress biomarkers in rats during acute swimming exercise at different intensities. METHODS: Male Wistar adult rats (n = 18), previously adapted to swimming, were submitted to three 25-minute tests supporting loads of 5.0, 5.5 and 6.0% of body weight (BW) for determination of the maximal lactate steady state (MLSS). The animals were then divided into two groups: M (n = 9), sacrificed after 25 minutes of exercise at the MLSS intensity, and S (n = 9), sacrificed after exhaustive exercise at an intensity 25% above the MLSS. For comparison, a control group C (n = 10) was sacrificed at rest. RESULTS: Serum ACTH and corticosterone concentrations were higher after exercise at both intensities compared with the control group (P ...
An Affinity Propagation-Based DNA Motif Discovery Algorithm.
Sun, Chunxiao; Huo, Hongwei; Yu, Qiang; Guo, Haitao; Sun, Zhigang
2015-01-01
The planted (l, d) motif search (PMS) is one of the fundamental problems in bioinformatics, which plays an important role in locating transcription factor binding sites (TFBSs) in DNA sequences. Nowadays, identifying weak motifs and reducing the effect of local optimum are still important but challenging tasks for motif discovery. To solve the tasks, we propose a new algorithm, APMotif, which first applies the Affinity Propagation (AP) clustering in DNA sequences to produce informative and good candidate motifs and then employs Expectation Maximization (EM) refinement to obtain the optimal motifs from the candidate motifs. Experimental results both on simulated data sets and real biological data sets show that APMotif usually outperforms four other widely used algorithms in terms of high prediction accuracy.
BOUNDEDNESS OF MAXIMAL SINGULAR INTEGRALS
CHEN JIECHENG; ZHU XIANGRONG
2005-01-01
The authors study singular integrals under the Hörmander condition with a measure not satisfying the doubling condition. First, if the corresponding singular integral is bounded from L2 to itself, it is proved that the maximal singular integral is bounded from L∞ to RBMO unless it is infinite μ-a.e. on R^d. A sufficient condition and a necessary condition for the maximal singular integral to be bounded from L2 to itself are also obtained. There is a small gap between the two conditions.
Dispatch Scheduling to Maximize Exoplanet Detection
Johnson, Samson; McCrady, Nate; MINERVA
2016-01-01
MINERVA is a dedicated exoplanet detection telescope array using radial velocity measurements of nearby stars to detect planets. MINERVA will be a completely robotic facility, with a goal of maximizing the number of exoplanets detected. MINERVA requires a unique application of queue scheduling due to its automated nature and the requirement of high cadence observations. A dispatch scheduling algorithm is employed to create a dynamic and flexible selector of targets to observe, in which stars are chosen by assigning values through a weighting function. I designed and have begun testing a simulation which implements the functions of a dispatch scheduler and records observations based on target selections through the same principles that will be used at the commissioned site. These results will be used in a larger simulation that incorporates weather, planet occurrence statistics, and stellar noise to test the planet detection capabilities of MINERVA. This will be used to heuristically determine an optimal observing strategy for the MINERVA project.
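A dispatch scheduler of the kind described can be sketched as a weighting function applied to every currently visible target at each decision time. The weights and coefficients below (cadence urgency, altitude, slew cost) are illustrative assumptions, not MINERVA's actual function:

```python
# Each target gets a score from a weighting function; the scheduler observes
# the visible target with the highest score. Weights/coefficients are made up.
def score(target, now):
    urgency = (now - target["last_obs"]) / target["cadence"]  # want ~1 obs/cadence
    return 2.0 * urgency + target["altitude"] / 90.0 - 0.5 * target["slew"]

def pick_target(targets, now):
    visible = [t for t in targets if t["altitude"] > 20.0]    # simple horizon cut
    return max(visible, key=lambda t: score(t, now)) if visible else None

targets = [
    {"name": "HD1", "last_obs": 0.0, "cadence": 1.0, "altitude": 55.0, "slew": 0.1},
    {"name": "HD2", "last_obs": 3.0, "cadence": 1.0, "altitude": 70.0, "slew": 0.2},
    {"name": "HD3", "last_obs": 1.0, "cadence": 2.0, "altitude": 15.0, "slew": 0.0},
]
best = pick_target(targets, now=4.0)
print(best["name"])   # HD1: the longest-overdue visible target wins
```

Because targets are re-scored at every decision time, the queue adapts automatically when weather or slews delay observations, which is the appeal of dispatch scheduling for a robotic facility.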
Symmetry and approximability of submodular maximization problems
Vondrak, Jan
2011-01-01
A number of recent results on optimization problems involving submodular functions have made use of the multilinear relaxation of the problem. These results hold typically in the value oracle model, where the objective function is accessible via a black box returning f(S) for a given S. We present a general approach to deriving inapproximability results in the value oracle model, based on the notion of symmetry gap. Our main result is that for any fixed instance that exhibits a certain symmetry gap in its multilinear relaxation, there is a naturally related class of instances for which a better approximation factor than the symmetry gap would require exponentially many oracle queries. This unifies several known hardness results for submodular maximization, and implies several new ones. In particular, we prove that there is no constant-factor approximation for the problem of maximizing a non-negative submodular function over the bases of a matroid. We also provide a closely matching approximation algorithm for...
Gap processing for adaptive maximal Poisson-disk sampling
Yan, Dongming
2013-09-01
In this article, we study the generation of maximal Poisson-disk sets with varying radii. First, we present a geometric analysis of gaps in such disk sets. This analysis is the basis for maximal and adaptive sampling in Euclidean space and on manifolds. Second, we propose efficient algorithms and data structures to detect gaps and update gaps when disks are inserted, deleted, moved, or when their radii are changed. We build on the concepts of regular triangulations and the power diagram. Third, we show how our analysis contributes to the state of the art in surface remeshing.
Greedy SINR Maximization in Collaborative Multibase Wireless Systems
Popescu Otilia
2004-01-01
We present a codeword adaptation algorithm for collaborative multibase wireless systems. The system is modeled with multiple inputs and multiple outputs (MIMO) in which information is transmitted using multicode CDMA, and codewords are adapted based on greedy maximization of the signal-to-interference-plus-noise ratio. The procedure monotonically increases the sum capacity and, when repeated iteratively for all codewords in the system, converges to a fixed point. Fixed-point properties and a connection with sum capacity maximization, along with a discussion of simulations that corroborate the basic analytic results, are included in the paper.
Camila Coelho Greco
2010-04-01
Endurance athletes frequently perform intermittent exercise with the aim of increasing training intensity. A very important index for evaluating these athletes is the maximal lactate steady state (MLSS), which is usually determined with a continuous protocol. However, the rest periods during intermittent exercise may modify its metabolic conditions. The objective of this study was to compare the swimming speed corresponding to the MLSS determined with continuous (MLSSc) and intermittent (MLSSi) protocols in athletes of different aerobic performance levels. Twelve swimmers (22 ± 8 years; 69.9 ± 7.63 kg; 1.76 ± 0.07 m) and eight male triathletes (22 ± 9 years; 69.5 ± 10.4 kg; 1.76 ± 0.13 m) performed the following tests on different days in a 25 m pool: (1) a maximal 400 m test (v400); (2) two to four 30-minute repetitions at different intensities, for the determination of MLSSc; and (3) two to four sets of 12 x 150 s with 30 s rest intervals (5:1) at different intensities, for the determination of MLSSi. The swimmers showed higher values than the triathletes for v400 (1.38 ± 0.05 vs. 1.26 ± 0.06 m.s-1), MLSSc (1.23 ± 0.05 vs. 1.08 ± 0.04 m.s-1) and MLSSi (1.26 ± 0.05 vs. 1.11 ± 0.05 m.s-1). However, the percentage difference between MLSSc and MLSSi was statistically similar between groups (3%). There was no significant difference between the lactate concentration at MLSSc and MLSSi in either group. Based on these results, it can be concluded that the interval exercise used allows an increase in the exercise intensity corresponding to the MLSS, without a change in lactate concentration, regardless of aerobic performance level.
Extension of the SAEM algorithm for nonlinear mixed models with 2 levels of random effects.
Panhard, Xavière; Samson, Adeline
2009-01-01
This article focuses on parameter estimation of multilevel nonlinear mixed-effects models (MNLMEMs). These models are used to analyze data presenting multiple hierarchical levels of grouping (cluster data, clinical trials with several observation periods, ...). The variability of the individual parameters of the regression function is thus decomposed as a between-subject variability and higher levels of variability (e.g. within-subject variability). We propose maximum likelihood estimates of the parameters of those MNLMEMs with 2 levels of random effects, using an extension of the stochastic approximation version of the expectation-maximization (SAEM)-Markov chain Monte Carlo algorithm. The extended SAEM algorithm is split into an explicit direct expectation-maximization (EM) algorithm and a stochastic EM part. Compared to the original algorithm, additional sufficient statistics have to be approximated by relying on the conditional distribution of the second level of random effects. This estimation method is evaluated on pharmacokinetic crossover simulated trials mimicking theophylline concentration data. Results obtained on those data sets with either the SAEM algorithm or the first-order conditional estimates (FOCE) algorithm (implemented in the nlme function of the R software) are compared: the biases and root mean square errors of almost all the SAEM estimates are smaller than the FOCE ones. Finally, we apply the extended SAEM algorithm to analyze the pharmacokinetic interaction of tenofovir on atazanavir, a novel protease inhibitor, from the Agence Nationale de Recherche sur le Sida 107-Puzzle 2 study. A significant decrease of the area under the curve of atazanavir is found in patients receiving both treatments.
Budget Allocation for Maximizing Viral Advertising in Social Networks
Bo-Lei Zhang; Zhu-Zhong Qian; Wen-Zhong Li; Bin Tang; Xiaoming Fu
2016-01-01
Viral advertising in social networks has arisen as one of the most promising ways to increase brand awareness and product sales. By distributing a limited budget, we can incentivize a set of users as initial adopters so that the advertising can start from the initial adopters and spread via social links to become viral. Despite extensive research on how to target the most influential users, a key issue is often neglected: how to incentivize the initial adopters. In the influence maximization problem, the assumption is that each user has a fixed cost for being an initial adopter, while in practice, user decisions about accepting the budget to become initial adopters are often probabilistic rather than deterministic. In this paper, we study optimal budget allocation in social networks to maximize the spread of viral advertising. In particular, a concave probability model is introduced to characterize each user's utility for being an initial adopter. Under this model, we show that it is NP-hard to find an optimal budget allocation for maximizing the spread of viral advertising. We then present a novel discrete greedy algorithm with near-optimal performance, and further propose scaling-up techniques to improve the time efficiency of our algorithm. Extensive experiments on real-world social graphs validate the effectiveness of our algorithm in practice. The results show that our algorithm significantly outperforms other intuitive heuristics in almost all cases.
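The concave adoption model and the discrete greedy allocation can be sketched as follows. Here spending b_i units on user i is assumed to yield adoption probability 1 - exp(-a_i*b_i), and each adopter contributes an influence value v_i; both the parameters and the unit-increment greedy loop are illustrative, not the paper's exact formulation:

```python
import math

def greedy_allocate(a, v, budget):
    """Hand out the budget in unit chunks, each time to the user whose next
    unit buys the largest expected-influence gain v_i * d(1 - exp(-a_i*b_i))."""
    alloc = [0] * len(a)

    def gain(i):
        p_now = 1 - math.exp(-a[i] * alloc[i])
        p_next = 1 - math.exp(-a[i] * (alloc[i] + 1))
        return v[i] * (p_next - p_now)

    for _ in range(budget):
        alloc[max(range(len(a)), key=gain)] += 1
    return alloc

a = [0.5, 1.0, 0.1]      # responsiveness of each user to budget (assumed)
v = [10.0, 6.0, 20.0]    # influence value of each user (assumed)
alloc = greedy_allocate(a, v, budget=6)
print(alloc, sum(alloc))   # all 6 units allocated
```

Because the per-user utilities are concave, the marginal gains are non-increasing, so this unit-increment greedy is optimal for the separable toy objective; the paper's harder setting couples users through the diffusion process.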
Comparison with reconstruction algorithms in magnetic induction tomography.
Han, Min; Cheng, Xiaolin; Xue, Yuyan
2016-05-01
Magnetic induction tomography (MIT) is an imaging technology that uses the principle of electromagnetic detection to measure the conductivity distribution. In this research, we aim to improve the quality of image reconstruction through analysis of the two parts of MIT imaging: solving the forward problem and the image reconstruction itself. For the forward problem, the variational finite element method is adopted. We transform the solution of a nonlinear partial differential equation into linear equations by using field subdivision and appropriate interpolation functions, so that the voltage data of the sensing coils can be calculated. For the image reconstruction, a modified iterative Newton-Raphson (NR) algorithm is presented in order to improve image quality. In the iterative NR, a weighting matrix and L1-norm regularization are introduced to overcome the drawbacks of large estimation errors and poor stability of the reconstructed image. In addition, within the incomplete-data framework of the expectation-maximization (EM) algorithm, the image reconstruction can be converted to an EM problem through the likelihood function, improving the under-determined problem. In the EM, missing data are introduced, and the measurement data and the sensitivity matrix are compensated to overcome the drawback that the number of measured voltages is far smaller than the number of unknowns. Beyond these two aspects, image segmentation is also used to make the lesion more flexible and adaptive to patients' real conditions, which provides a theoretical reference for the application of the MIT technique in clinical settings. The results show that solving the forward problem with the variational finite element method can provide the measurement voltage data for image reconstruction, and that the improved iterative NR method and the EM algorithm can enhance the image
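The regularized Newton-Raphson step described above can be sketched on a toy nonlinear forward model. For brevity the weighting matrix is the identity and the L1 term is replaced by a plain Tikhonov (L2) term, so this shows the shape of the iteration, not the paper's method:

```python
import numpy as np

def forward(x):
    """Toy nonlinear 'coil voltage' model (assumption: 3 readings, 2 unknowns)."""
    return np.array([x[0] ** 2 + x[1], x[0] + x[1] ** 2, x[0] * x[1]])

def jacobian(x):
    return np.array([[2 * x[0], 1.0],
                     [1.0, 2 * x[1]],
                     [x[1], x[0]]])

def reconstruct(v, x0, lam=1e-6, iters=50):
    """Tikhonov-regularized Gauss-Newton / NR iteration:
       dx = (J^T J + lam*I)^{-1} J^T (v - F(x))."""
    x = x0.copy()
    for _ in range(iters):
        J = jacobian(x)
        r = v - forward(x)
        x = x + np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
    return x

x_true = np.array([0.8, 0.3])
v = forward(x_true)                       # simulated measurements
x_hat = reconstruct(v, x0=np.array([1.0, 0.5]))
print(np.round(x_hat, 6))                 # close to x_true
```

The regularization term keeps the normal-equation matrix invertible when J is ill-conditioned, which is the role the weighting matrix and L1 penalty play in the full MIT reconstruction.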
Jailton Gregório Pelarigo
2007-06-01
The main objective of this study was to verify the effect of aerobic performance level on the relationship between the technical indices corresponding to critical velocity (VC) and the maximal 30-minute velocity (V30) in swimmers. Twenty-three male swimmers with similar anthropometric characteristics participated, divided according to aerobic performance level into groups G1 (higher performance, n = 13) and G2 (lower performance, n = 10). The individuals had at least four years of experience in the sport and trained a weekly volume of 30,000 to 45,000 m. VC was determined from the slope of the linear regression between the distances (200 and 400 m) and their respective times. V30 was determined from the maximal distance covered in a 30-minute test. All variables were determined in front crawl. VC was significantly higher than V30 in group G1 (1.30 ± 0.04 vs. 1.23 ± 0.06 m.s-1) and in G2 (1.17 ± 0.08 vs. 1.07 ± 0.06 m.s-1). Both variables were higher in group G1. The stroke rates corresponding to VC (TBVC) and V30 (TBV30) obtained in groups G1 (33.07 ± 4.34 vs. 31.38 ± 4.15 cycles.min-1) and G2 (35.57 ± 6.52 vs. 33.54 ± 5.89 cycles.min-1) were similar to each other. TBVC was significantly lower in group G1 than in group G2, while TBV30 did not differ between the groups. The stroke lengths corresponding to VC (CBVC) and V30 (CBV30) were significantly greater in group G1 (2.41 ± 0.33 vs. 2.38 ± 0.30 m.cycle-1) than in G2 (2.04 ± 0.43 vs. 1.97 ± 0.40 m.cycle-1), and similar to each other within both groups. The correlations (r) between VC and V30 and the technical variables corresponding to the two velocities were significant in all comparisons (0.68 to 0.91). Therefore, the relationship between velocity and the technical variables corresponding to VC and V30 is not modified by aerobic performance level.
Danilo Marcelo Leite do Prado
2010-04-01
BACKGROUND: Little is known about the cardiorespiratory and metabolic responses of healthy children during a maximal progressive exercise test. OBJECTIVE: To test the hypothesis that children show different cardiorespiratory and metabolic responses during a maximal progressive exercise test compared to adults. METHODS: Twenty-five healthy children (sex, 15M/10F; age, 10.2 ± 0.2) and 20 healthy adults (sex, 11M/9F; age, 27.5 ± 0.4) underwent a progressive cardiopulmonary treadmill test until exhaustion to determine maximal aerobic capacity and the ventilatory anaerobic threshold (VAT). RESULTS: Peak workload (5.9 ± 0.1 vs. 5.6 ± 0.1 mph; p > 0.05), exercise time (9.8 ± 0.4 vs. 10.2 ± 0.4 min; p > 0.05) and cardiorespiratory fitness (VO2peak, 39.4 ± 2.1 vs. 39.1 ± 2.0 ml.kg-1.min-1; p > 0.05) were similar in children and adults. At the ventilatory anaerobic threshold, heart rate, VO2 (ml.kg-1.min-1), respiratory rate (RR), estimated functional dead space (VD/VT), the ventilatory equivalent for oxygen (VE/VO2) and end-tidal oxygen pressure (PETO2) were higher in children, while tidal volume, O2 pulse and end-tidal carbon dioxide pressure (PETCO2) were lower. At peak exercise, children showed higher RR and VD/VT; however, O2 pulse, tidal volume, pulmonary ventilation, PETCO2 and the respiratory exchange ratio were lower in children than in adults. CONCLUSION: Cardiorespiratory and metabolic responses during a progressive exercise test differ in children compared to adults. Specifically, these differences suggest that children have lower cardiovascular and respiratory efficiency. However, children showed higher metabolic efficiency during the test.
Vinicius Minatel
2012-10-01
BACKGROUND: The measurement of maximal expiratory pressure (MEP) has some contraindications, as it is believed that the responses obtained in this measure are similar to those found in the Valsalva maneuver (VM). OBJECTIVE: The main purpose of this study was to evaluate the heart rate (HR) responses during the MEP and VM measures in healthy young men in different postures, aiming to identify whether, and in which condition, the MEP reproduces the responses obtained in the VM, and additionally to estimate the work performed during the maneuvers. METHOD: Twelve healthy young men were evaluated, instructed and familiarized with the maneuvers. The VM consisted of an expiratory effort (40 mmHg) against a manometer for 15 seconds. The MEP measure was performed according to the American Thoracic Society. Both measures were performed in the supine and sitting postures. For the analysis of heart rate variation (∆HR), the Valsalva index (VI), the MEP index (IPEmáx) and the estimated work of the maneuvers (Wtotal, Wisotime, Wtotal/∆HRtotal and Wisotime/∆HRisotime), two-way ANOVA with Holm-Sidak post-hoc tests was used (p ...
Note on maximal distance separable codes
YANG Jian-sheng; WANG De-xiu; JIN Qing-fang
2009-01-01
In this paper, the maximal length of maximal distance separable (MDS) codes is studied, and a new upper bound formula for the maximal length of MDS codes is obtained. In particular, the exact values of the maximal length of MDS codes for some parameters are given.
李旭超
2012-01-01
Expectation maximization (EM) algorithms for parameter estimation of image statistical models have been one of the most active research topics in recent years. Based on an analysis of the EM algorithm, and drawing on its current applications in parameter estimation for image statistical models, three schemes for modifying the standard EM algorithm are compared and analyzed. Covering image restoration, segmentation, object tracking and the fusion of other optimization algorithms, the advantages and disadvantages of the EM algorithm are reviewed from three aspects: the selection of the missing data set, the construction of statistical models for the missing and incomplete data sets, and the estimation of the statistical model parameters. The selection of the missing data and the representation of the incomplete data directly determine the structure and computational complexity of the EM algorithm, and ultimately its success or failure. Finally, open problems and likely future directions of the EM algorithm are discussed, and its wide applicability to parameter estimation for statistical models with missing data is pointed out.
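The E-step/M-step structure surveyed above can be illustrated with a minimal two-component 1-D Gaussian mixture. This is a generic sketch of the standard EM algorithm, not code from the reviewed work; all names and parameters are illustrative.

```python
import math
import random

def em_gmm_1d(data, iters=50):
    """Minimal EM for a two-component 1-D Gaussian mixture."""
    mu = [min(data), max(data)]      # crude initialization from the data range
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in data:
            p = [w[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means and variances from responsibilities.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            w[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, data)) / nk, 1e-6)
    return w, mu, var

random.seed(0)
data = ([random.gauss(0.0, 1.0) for _ in range(300)]
        + [random.gauss(5.0, 1.0) for _ in range(300)])
weights, means, variances = em_gmm_1d(data)
```

The "missing data" here are the component labels; different choices of missing data, as the survey stresses, lead to different EM structures and costs.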
Maximization, learning, and economic behavior.
Erev, Ido; Roth, Alvin E
2014-07-22
The rationality assumption that underlies mainstream economic theory has proved to be a useful approximation, despite the fact that systematic violations to its predictions can be found. That is, the assumption of rational behavior is useful in understanding the ways in which many successful economic institutions function, although it is also true that actual human behavior falls systematically short of perfect rationality. We consider a possible explanation of this apparent inconsistency, suggesting that mechanisms that rest on the rationality assumption are likely to be successful when they create an environment in which the behavior they try to facilitate leads to the best payoff for all agents on average, and most of the time. Review of basic learning research suggests that, under these conditions, people quickly learn to maximize expected return. This review also shows that there are many situations in which experience does not increase maximization. In many cases, experience leads people to underweight rare events. In addition, the current paper suggests that it is convenient to distinguish between two behavioral approaches to improve economic analyses. The first, and more conventional approach among behavioral economists and psychologists interested in judgment and decision making, highlights violations of the rational model and proposes descriptive models that capture these violations. The second approach studies human learning to clarify the conditions under which people quickly learn to maximize expected return. The current review highlights one set of conditions of this type and shows how the understanding of these conditions can facilitate market design.
Range Condition and ML-EM Checkerboard Artifacts.
You, Jiangsheng; Wang, Jing; Liang, Zhengrong
2007-10-01
The expectation maximization (EM) algorithm for the maximum likelihood (ML) image reconstruction criterion generates severe checkerboard artifacts in the presence of noise. A classical remedy is to impose an a priori constraint for a penalized ML or maximum a posteriori probability solution. The penalty reduces the checkerboard artifacts and also introduces uncertainty because a priori information is usually unknown in clinic. Recent theoretical investigation reveals that the noise can be divided into two components: one is called null-space noise and the other is range-space noise. The null-space noise can be numerically estimated using filtered backprojection (FBP) algorithm. By the FBP algorithm, the null-space noise annihilates in the reconstruction while the range-space noise propagates into the reconstructed image. The aim of this work is to investigate the relation between the null-space noise and the checkerboard artifacts in the ML-EM reconstruction from noisy projection data. Our study suggests that removing the null-space noise from the projection data could improve the signal-to-noise ratio of the projection data and, therefore, reduce the checkerboard artifacts in the ML-EM reconstructed images. This study reveals an in-depth understanding of the different noise propagations in analytical and iterative image reconstructions, which may be useful to single photon emission computed tomography, where the noise has been a major factor for image degradation. The reduction of the ML-EM checkerboard artifacts by removing the null-space noise avoids the uncertainty of using a priori penalty.
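A minimal sketch of the multiplicative ML-EM update for y ≈ Ax, the iteration whose checkerboard artifacts the paper analyzes, run on a tiny noiseless system; the matrix and image below are illustrative, not from the paper.

```python
def ml_em(A, y, iters=1000):
    """ML-EM (Richardson-Lucy style) multiplicative update for y ≈ A x."""
    m, n = len(A), len(A[0])
    x = [1.0] * n                               # uniform positive start
    col_sum = [sum(A[i][j] for i in range(m)) for j in range(n)]
    for _ in range(iters):
        fwd = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        for j in range(n):
            # Backproject the ratio of measured to predicted data.
            back = sum(A[i][j] * y[i] / fwd[i] for i in range(m) if fwd[i] > 0)
            x[j] *= back / col_sum[j]
    return x

# Tiny noiseless system: with exact data the iteration recovers the image.
A = [[1.0, 1.0, 0.0],
     [0.0, 1.0, 1.0],
     [1.0, 0.0, 1.0]]
x_true = [2.0, 3.0, 1.0]
y = [sum(a * t for a, t in zip(row, x_true)) for row in A]
x_hat = ml_em(A, y)
```

With noisy y, the range-space component of the noise propagates through exactly this update, which is what produces the artifacts discussed above.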
[An adaptive scaling hybrid algorithm for reduction of CT artifacts caused by metal objects].
Chen, Yu; Luo, Hai; Zhou, He-qin
2009-03-01
A new adaptive hybrid filtering algorithm is proposed to reduce the artifacts caused by metal objects in CT images. First, the projection data of the metal region are preprocessed and reconstructed by the filtered back-projection (FBP) method. Then the expectation maximization (EM) algorithm is applied iteratively to the original metal projection data. Finally, a compensation procedure is applied to the reconstructed metal region. Simulation results demonstrate that the proposed algorithm removes the metal artifacts while effectively preserving the structural information of the metal object, and it ensures that the tissues around the metal are not distorted. The method is also computationally efficient and remains effective for CT images that contain several metal objects.
Document Classification Using Expectation Maximization with Semi Supervised Learning
Nigam, Bhawna; Salve, Sonal; Vamney, Swati
2011-01-01
As the amount of online documents increases, the demand for document classification to aid the analysis and management of documents is increasing. Text is cheap, but information, in the form of knowing what classes a document belongs to, is expensive. The main purpose of this paper is to explain the expectation maximization technique of data mining for classifying documents and to show how accuracy improves under a semi-supervised approach. The expectation maximization algorithm is applied in both supervised and semi-supervised settings, and the semi-supervised approach is found to be more accurate and effective. Its main advantage is the dynamic generation of new classes. The algorithm first trains a classifier using the labeled documents and then probabilistically classifies the unlabeled documents. The dataset used for evaluation is the car dataset from the UCI repository, with some modifications made by the authors.
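The train-then-probabilistically-relabel loop described above can be sketched with a tiny Bernoulli naive Bayes classifier. The toy word-presence vectors, class structure and smoothing are invented for illustration and are not the paper's setup.

```python
def train_nb(docs, weights):
    """Bernoulli naive Bayes with Laplace smoothing.
    weights[d][c] = soft membership of document d in class c."""
    n_feat = len(docs[0])
    prior, cond = [], []
    for c in range(2):
        nc = sum(w[c] for w in weights)
        prior.append((nc + 1) / (len(docs) + 2))
        cond.append([(sum(w[c] * d[f] for d, w in zip(docs, weights)) + 1)
                     / (nc + 2) for f in range(n_feat)])
    return prior, cond

def posterior(doc, prior, cond):
    """Normalized class posterior for one binary word-presence vector."""
    score = []
    for c in range(2):
        p = prior[c]
        for f, bit in enumerate(doc):
            p *= cond[c][f] if bit else (1 - cond[c][f])
        score.append(p)
    s = sum(score)
    return [p / s for p in score]

# Two labeled documents, four unlabeled ones (binary word-presence vectors).
labeled = [([1, 1, 0, 0], 0), ([0, 0, 1, 1], 1)]
unlabeled = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
docs = [d for d, _ in labeled] + unlabeled
weights = [[1.0, 0.0], [0.0, 1.0]] + [[0.5, 0.5]] * len(unlabeled)

for _ in range(10):  # EM: retrain, then softly re-label the unlabeled docs
    prior, cond = train_nb(docs, weights)
    weights = weights[:2] + [posterior(d, prior, cond) for d in unlabeled]

pred = [max(range(2), key=lambda c, w=w: w[c]) for w in weights[2:]]
```

Each unlabeled document sharing a word with a labeled one is pulled to that class, and the soft labels sharpen over the iterations.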
Maximizing the Probability of Detecting an Electromagnetic Counterpart of Gravitational-wave Events
Coughlin, Michael W
2016-01-01
Compact binary coalescences are a promising source of gravitational waves for second-generation interferometric gravitational-wave detectors such as advanced LIGO and advanced Virgo. These are among the most promising sources for joint detection of electromagnetic (EM) and gravitational-wave (GW) emission. To maximize the science performed with these objects, it is essential to undertake a followup observing strategy that maximizes the likelihood of detecting the EM counterpart. We present a follow-up strategy that maximizes the counterpart detection probability, given a fixed investment of telescope time. We show how the prior assumption on the luminosity function of the electro-magnetic counterpart impacts the optimized followup strategy. Our results suggest that if the goal is to detect an EM counterpart from among a succession of GW triggers, the optimal strategy is to perform long integrations in the highest likelihood regions, with a time investment that is proportional to the $2/3$ power of the surface...
A new solution for maximal clique problem based sticker model.
Darehmiraki, Majid
2009-02-01
In this paper, we use stickers to construct a DNA solution space for the maximal clique problem (MCP). We also apply the DNA operations of the sticker-based model to develop a DNA algorithm. The results show that the MCP can be resolved with biological operations in the sticker-based model over this solution space. Moreover, this work presents clear evidence of the ability of DNA computing to solve an NP-complete problem. The potential of DNA computing for the MCP is promising, given the operational time complexity of O(n × k).
Maximizing residual capacity in connection-oriented networks
Krzysztof Walkowiak
2006-07-01
Full Text Available The following problem arises in the study of survivable connection-oriented networks. Given a demand matrix to be routed between nodes, we want to route all demands so that the residual capacity, given by the difference between link capacity and link flow, is maximized. Each demand can use only one path; therefore, the flow is modeled as a nonbifurcated multicommodity flow. We call the considered problem the nonbifurcated congestion (NBC) problem. Solving the NBC problem enables robust restoration of failed connections in the case of a network failure. We propose a new heuristic algorithm for the NBC problem and compare its performance with existing algorithms.
Synthetic-aperture radar autofocus by maximizing sharpness.
Fienup, J R
2000-02-15
To focus a synthetic-aperture radar image that is suffering from phase errors, a phase-error estimate is found that, when it is applied, maximizes the sharpness of the image. Closed-form expressions are derived for the gradients of a sharpness metric with respect to phase-error parameters, including both a point-by-point (nonparametric) phase function and coefficients of a polynomial expansion. Use of these expressions allows for a highly efficient gradient-search algorithm for high-order phase errors. The effectiveness of the algorithm is demonstrated with an example.
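The principle of choosing the phase-error estimate that maximizes image sharpness can be sketched in one dimension with a quadratic phase error. The paper uses an efficient gradient search with closed-form gradients; the grid search below is a deliberate simplification, and all parameters are illustrative.

```python
import numpy as np

def sharpness(img):
    """Intensity-squared sharpness metric (higher = better focused)."""
    return float(np.sum(np.abs(img) ** 4))

rng = np.random.default_rng(1)
n = 128
scene = np.zeros(n)
scene[rng.choice(n, 5, replace=False)] = 1.0   # sparse point targets

k = np.fft.fftfreq(n)
true_a = 40.0                                   # quadratic phase-error coefficient
spectrum = np.fft.fft(scene) * np.exp(1j * true_a * k ** 2)  # defocused data

# Search over the phase-error parameter, maximizing image sharpness;
# the correct coefficient refocuses the spikes and maximizes the metric.
candidates = np.linspace(0.0, 80.0, 161)
best_a = max(candidates,
             key=lambda a: sharpness(np.fft.ifft(spectrum * np.exp(-1j * a * k ** 2))))
```

Because the total image energy is fixed, the quartic metric is largest when that energy is concentrated, i.e., when the residual phase error is zero.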
Statistical mechanics of influence maximization with thermal noise
Lynn, Christopher W.; Lee, Daniel D.
2017-03-01
The problem of optimally distributing a budget of influence among individuals in a social network, known as influence maximization, has typically been studied in the context of contagion models and deterministic processes, which fail to capture stochastic interactions inherent in real-world settings. Here, we show that by introducing thermal noise into influence models, the dynamics exactly resemble spins in a heterogeneous Ising system. In this way, influence maximization in the presence of thermal noise has a natural physical interpretation as maximizing the magnetization of an Ising system given a budget of external magnetic field. Using this statistical mechanical formulation, we demonstrate analytically that for small external-field budgets, the optimal influence solutions exhibit a highly non-trivial temperature dependence, focusing on high-degree hub nodes at high temperatures and on easily influenced peripheral nodes at low temperatures. For the general problem, we present a projected gradient ascent algorithm that uses the magnetic susceptibility to calculate locally optimal external-field distributions. We apply our algorithm to synthetic and real-world networks, demonstrating that our analytic results generalize qualitatively. Our work establishes a fruitful connection with statistical mechanics and demonstrates that influence maximization depends crucially on the temperature of the system, a fact that has not been appreciated by existing research.
Marco Antônio Mota Gomes; Audes Diógenes de Magalhães Feitosa; Wille Oigman; José Márcio Ribeiro; Emílio Hideyuki Moriguchi; José Francisco Kerr Saraiva; Dalton Bertolim Précoma; Artur Beltrame Ribeiro; Celso Amodeo; Andréa Araujo Brandão
2008-01-01
BACKGROUND: National and international guidelines emphasize the importance of effective treatment of arterial hypertension. Despite this, low rates of control and of achievement of the recommended targets are observed, indicating the importance of planning and implementing better treatment strategies. OBJECTIVE: To evaluate the efficacy of a dose-escalation treatment regimen based on olmesartan medoxomil. METHODS: This is an open, national, multicenter, prospective study of 144 patients...
Asymptotics of robust utility maximization
Knispel, Thomas
2012-01-01
For a stochastic factor model we maximize the long-term growth rate of robust expected power utility with parameter $\\lambda\\in(0,1)$. Using duality methods the problem is reformulated as an infinite time horizon, risk-sensitive control problem. Our results characterize the optimal growth rate, an optimal long-term trading strategy and an asymptotic worst-case model in terms of an ergodic Bellman equation. With these results we propose a duality approach to a "robust large deviations" criterion for optimal long-term investment.
Multivariate residues and maximal unitarity
Søgaard, Mads; Zhang, Yang
2013-12-01
We extend the maximal unitarity method to amplitude contributions whose cuts define multidimensional algebraic varieties. The technique is valid to all orders and is explicitly demonstrated at three loops in gauge theories with any number of fermions and scalars in the adjoint representation. Deca-cuts realized by replacement of real slice integration contours by higher-dimensional tori encircling the global poles are used to factorize the planar triple box onto a product of trees. We apply computational algebraic geometry and multivariate complex analysis to derive unique projectors for all master integral coefficients and obtain compact analytic formulae in terms of tree-level data.
Beeping a Maximal Independent Set
Afek, Yehuda; Alon, Noga; Bar-Joseph, Ziv; Cornejo, Alejandro; Haeupler, Bernhard; Kuhn, Fabian
2012-01-01
We consider the problem of computing a maximal independent set (MIS) in an extremely harsh broadcast model that relies only on carrier sensing. The model consists of an anonymous broadcast network in which nodes have no knowledge about the topology of the network or even an upper bound on its size. Furthermore, it is assumed that an adversary chooses at which time slot each node wakes up. At each time slot a node can either beep, that is, emit a signal, or be silent. At a particular time slot...
Maximal Congruences on Some Semigroups
Jintana Sanwong; R.P. Sullivan
2007-01-01
In 1976 Howie proved that a finite congruence-free semigroup is a simple group if it has at least three elements but no zero element. Infinite congruence-free semigroups are far more complicated to describe, but some have been constructed using semigroups of transformations (for example, by Howie in 1981 and by Marques in 1983). Here, for certain semigroups S of numbers and of transformations, we determine all congruences p on S such that S/p is congruence-free; that is, we describe all maximal congruences on such semigroups S.
Souza, Claudio Eduardo Scriptori de
1996-02-01
In the operating centers of electrical energy systems, understanding the behavior of electrical power has become increasingly important. For adequate operation of the system, the state estimation process is essential; however, before performing state estimation one needs to know whether the system is observable, otherwise the estimation will not be possible. The main objective of this work is to develop software that allows one to visualize the whole network, when the network is observable, or the observable islands of the network otherwise. As theoretical background, the theory and algorithms based on triangular factorization of the gain matrix, together with the concept of factorization paths developed by Bretas et al., were used. The algorithm was adapted to the Windows graphical environment so that the results of the network observability analysis are shown on the computer screen in graphical form; it is essentially topological rather than purely numerical, unlike the one based on factorization of the gain matrix alone. To implement the algorithm, the Borland C++ compiler for Windows, version 4.0, was used, owing to the facilities it offers for source generation. The tests on networks with 6, 14 and 30 buses lead to: (1) simplification of the observability analysis, using sparse vectors and triangular factorization of the gain matrix; (2) similar behavior across the three tested systems, with strong indications that the developed routine works well for any system, especially systems with larger numbers of buses and lines; (3) an alternative way of presenting the numerical results in graphical form using the program developed here. (author)
Fast Deterministic Distributed Maximal Independent Set Computation on Growth-Bounded Graphs
Kuhn, Fabian; Moscibroda, Thomas; Nieberg, Tim; Wattenhofer, Roger; Fraigniaud, Pierre
2005-01-01
The distributed complexity of computing a maximal independent set in a graph is of both practical and theoretical importance. While there exists an elegant O(log n) time randomized algorithm for general graphs, no deterministic polylogarithmic algorithm is known. In this paper, we study the problem
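For contrast with the distributed setting studied above, the sequential greedy routine that defines a maximal independent set can be sketched in a few lines; this is a generic textbook algorithm, not the paper's distributed one.

```python
def greedy_mis(adj):
    """Sequential greedy maximal independent set.
    adj: dict mapping node -> set of neighbours."""
    mis, blocked = set(), set()
    for v in sorted(adj):          # any fixed scan order works
        if v not in blocked:
            mis.add(v)             # take v, then block v and its neighbours
            blocked |= adj[v] | {v}
    return mis

# 5-cycle: greedy scanning 0..4 picks nodes 0 and 2.
adj = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
mis = greedy_mis(adj)
```

The distributed question is how to reach such a set when no node knows the scan order, the topology, or even the network size.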
A VARIATIONAL EXPECTATION-MAXIMIZATION METHOD FOR THE INVERSE BLACK BODY RADIATION PROBLEM
Jiantao Cheng; Tie Zhou
2008-01-01
The inverse black body radiation problem, which is to reconstruct the area temperature distribution from the measurement of the power spectrum distribution, is a well-known ill-posed problem. In this paper, a variational expectation-maximization (EM) method is developed and its convergence is studied. Numerical experiments demonstrate that the variational EM method is more efficient and accurate than the traditional methods, including the Tikhonov regularization method, the Landweber method and the conjugate gradient method.
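Of the traditional baselines named above, the Landweber method is the easiest to sketch: it is plain gradient descent on ‖Ax − b‖². The tiny well-posed operator and step size below are illustrative, not the paper's radiation operator.

```python
def landweber(A, b, iters=500, tau=0.1):
    """Landweber iteration x_{k+1} = x_k + tau * A^T (b - A x_k).
    Converges for tau < 2 / ||A^T A||."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        for j in range(n):
            x[j] += tau * sum(A[i][j] * r[i] for i in range(m))
    return x

# 2x + y = 5 and x + 3y = 10, with exact solution x = 1, y = 3.
A = [[2.0, 1.0], [1.0, 3.0]]
b = [5.0, 10.0]
x = landweber(A, b)
```

For ill-posed problems the iteration count itself acts as the regularization parameter, which is why it is a standard baseline here.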
Inapproximability of maximal strip recovery
Jiang, Minghui
2009-01-01
In comparative genomics, the first step of sequence analysis is usually to decompose two or more genomes into syntenic blocks, that is, segments of homologous chromosomes. For the reliable recovery of syntenic blocks, noise and ambiguities in the genomic maps need to be removed first. Maximal Strip Recovery (MSR) is an optimization problem proposed by Zheng, Zhu, and Sankoff for reliably recovering syntenic blocks from genomic maps in the midst of noise and ambiguities. Given $d$ genomic maps as sequences of gene markers, the objective of MSR-$d$ is to find $d$ subsequences, one from each genomic map, such that the total length of syntenic blocks in these subsequences is maximized. For any constant $d \ge 2$, a polynomial-time 2d-approximation for MSR-$d$ was previously known. In this paper, we show that for any $d \ge 2$, MSR-$d$ is APX-hard, even for the most basic version of the problem in which all gene markers are distinct and appear in positive orientation in each genomic map. Moreover, we provi...
Maximal right smooth extension chains
Huang, Yun Bao
2010-01-01
If $w=u\\alpha$ for $\\alpha\\in \\Sigma=\\{1,2\\}$ and $u\\in \\Sigma^*$, then $w$ is said to be a \\textit{simple right extension}of $u$ and denoted by $u\\prec w$. Let $k$ be a positive integer and $P^k(\\epsilon)$ denote the set of all $C^\\infty$-words of height $k$. Set $u_{1},\\,u_{2},..., u_{m}\\in P^{k}(\\epsilon)$, if $u_{1}\\prec u_{2}\\prec ...\\prec u_{m}$ and there is no element $v$ of $P^{k}(\\epsilon)$ such that $v\\prec u_{1}\\text{or} u_{m}\\prec v$, then $u_{1}\\prec u_{2}\\prec...\\prec u_{m}$ is said to be a \\textit{maximal right smooth extension (MRSE) chains}of height $k$. In this paper, we show that \\textit{MRSE} chains of height $k$ constitutes a partition of smooth words of height $k$ and give the formula of the number of \\textit{MRSE} chains of height $k$ for each positive integer $k$. Moreover, since there exist the minimal height $h_1$ and maximal height $h_2$ of smooth words of length $n$ for each positive integer $n$, we find that \\textit{MRSE} chains of heights $h_1-1$ and $h_2+1$ are good candidates t...
Bruno Honorato da Silveira
2012-03-01
Full Text Available The aim of the study was to compare the heart rate deflection point (HRDP), determined by the visual and DMAX methods, with the maximal lactate steady state (MLSS). Thirteen runners carried out the incremental Vameval test and constant load tests (CLT). The velocity at the visual HRDP (14.3 ± 1.13 km.h-1) was significantly higher than that at DMAX (13.2 ± 1.35 km.h-1), and the two were not significantly correlated. However, neither velocity differed from the MLSS (13.8 ± 0.90 km.h-1), although only the visual HRDP was significantly correlated with the MLSS (r = 0.75). In eight runners, the blood lactate concentration did not stabilize during the CLT at the visual HRDP velocity, which leads us to conclude that the HRDP is not a reliable index for estimating the MLSS. However, it may be used as an indicator of aerobic capacity.
Wireless multi-hop network scenario emulation by controlling maximal error
Huizhou ZHAO; Xiaoming LI; Wei YAN
2009-01-01
A fundamental problem in ad-hoc network emulation is how to emulate a wireless multi-hop network scenario in a fixed testbed, an issue scarcely discussed in depth by existing radio-frequency (RF) controlled wireless network emulators. Therefore, in this article, we first formulate the wireless multi-hop scenario emulation problem based on RF control, adopt control of the maximal error as the solution approach, and present an algorithm called the matching exact-solution condition algorithm. The experimental results show that the algorithm can solve the wireless multi-hop network scenario emulation problem and that the maximal error bound can be achieved.
Matilainen, Kaarina; Mäntysaari, Esa A; Lidauer, Martin H; Strandén, Ismo; Thompson, Robin
2013-01-01
Estimation of variance components by Monte Carlo (MC) expectation maximization (EM) restricted maximum likelihood (REML) is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the need for a large number of iterations of the EM algorithm. To decrease the computing time we explored the use of faster converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR), where the information matrix was generated via sampling; MC average information (AI), where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was sought using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with the corresponding analytical ones. MC NR REML and MC AI REML enhanced convergence compared to MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration demanded extra solving of mixed model equations by the number of parameters to be estimated. MC Broyden's method required the largest number of MC samples with our small data and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing a MC algorithm. Overall, use of a MC algorithm with Newton-type methods proved feasible and the results encourage testing of these methods with different kinds of large-scale problem settings.
Listing All Maximal Cliques in Large Sparse Real-World Graphs
Eppstein, David
2011-01-01
We implement a new algorithm for listing all maximal cliques in sparse graphs due to Eppstein, L\\"offler, and Strash (ISAAC 2010) and analyze its performance on a large corpus of real-world graphs. Our analysis shows that this algorithm is the first to offer a practical solution to listing all maximal cliques in large sparse graphs. All other theoretically-fast algorithms for sparse graphs have been shown to be significantly slower than the algorithm of Tomita et al. (Theoretical Computer Science, 2006) in practice. However, the algorithm of Tomita et al. uses an adjacency matrix, which requires too much space for large sparse graphs. Our new algorithm opens the door for fast analysis of large sparse graphs whose adjacency matrix will not fit into working memory.
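The classical Bron-Kerbosch algorithm with pivoting, which underlies this line of work (the Eppstein-Löffler-Strash algorithm refines it with a degeneracy ordering for sparse graphs), can be sketched as follows; the adjacency-dict representation is illustrative.

```python
def bron_kerbosch(adj):
    """List all maximal cliques via Bron-Kerbosch with pivoting.
    adj: dict mapping node -> set of neighbours."""
    cliques = []

    def expand(r, p, x):
        # r: current clique, p: candidates, x: already-processed nodes.
        if not p and not x:
            cliques.append(frozenset(r))
            return
        # Pivot on a node covering many candidates to prune the recursion.
        pivot = max(p | x, key=lambda u: len(adj[u] & p))
        for v in list(p - adj[pivot]):
            expand(r | {v}, p & adj[v], x & adj[v])
            p.discard(v)
            x.add(v)

    expand(set(), set(adj), set())
    return cliques

# Two triangles sharing an edge: the maximal cliques are {0,1,2} and {1,2,3}.
adj = {0: {1, 2}, 1: {0, 2, 3}, 2: {0, 1, 3}, 3: {1, 2}}
cliques = bron_kerbosch(adj)
```

The set-based representation here is exactly what lets such implementations avoid the adjacency matrix that makes the Tomita et al. algorithm impractical for large sparse graphs.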
Speckle reduction for medical ultrasound images with an expectation-maximization framework
HOU Tao; WANG Yuanyuan; GUO Yi
2011-01-01
In view of the inherent speckle noise in medical ultrasound images, a speckle reduction method based on an expectation-maximization (EM) framework is proposed. First, the real component of the in-phase/quadrature (I/Q) ultrasound image is extracted. Then, it is used to blindly estimate the point spread function (PSF) of the imaging system. Finally, based on the EM framework, an iterative algorithm alternating between the Wiener filter and anisotropic diffusion (AD) is exploited to produce despeckled images. Comparison experiments are carried out on both simulated and in vivo ultrasound images. With respect to the I/Q image, the proposed method improves the speckle signal-to-noise ratio (S-SNR) and the edge preservation index (β) by average factors of 1.94 and 7.52, respectively, while reducing the normalized mean-squared error (NMSE) by an average factor of 3.95. The simulated and in vivo results indicate that the proposed method has a better overall performance than existing ones.
The maximal D = 4 supergravities
Wit, Bernard de [Institute for Theoretical Physics and Spinoza Institute, Utrecht University, Postbus 80.195, NL-3508 TD Utrecht (Netherlands); Samtleben, Henning [Laboratoire de Physique, ENS Lyon, 46 allee d' Italie, F-69364 Lyon CEDEX 07 (France); Trigiante, Mario [Dept. of Physics, Politecnico di Torino, Corso Duca degli Abruzzi 24, I-10129 Turin (Italy)
2007-06-15
All maximal supergravities in four space-time dimensions are presented. The ungauged Lagrangians can be encoded in an E_{7(7)}-Sp(56; R)/GL(28) matrix associated with the freedom of performing electric/magnetic duality transformations. The gauging is defined in terms of an embedding tensor θ which encodes the subgroup of E_{7(7)} that is realized as a local invariance. This embedding tensor may imply the presence of magnetic charges which require corresponding dual gauge fields. The latter can be incorporated by using a recently proposed formulation that involves tensor gauge fields in the adjoint representation of E_{7(7)}. In this formulation the results take a universal form irrespective of the electric/magnetic duality basis. We present the general class of supersymmetric and gauge invariant Lagrangians and discuss a number of applications.
Maximizing profit using recommender systems
Das, Aparna; Ricketts, Daniel
2009-01-01
Traditional recommendation systems make recommendations based solely on the customer's past purchases, product ratings and demographic data, without considering the profitability of the items being recommended. In this work we study the question of how a vendor can directly incorporate the profitability of items into its recommender so as to maximize its expected profit while still providing accurate recommendations. Our approach takes the output of any traditional recommender system and adjusts it according to item profitabilities. The approach is parameterized so that the vendor can control how much the profit-aware recommendations can deviate from the traditional ones. We study our approach under two settings and show that it achieves approximately 22% more profit than traditional recommendations.
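One simple way to realize the parameterized profit adjustment described above is a convex blend of relevance score and normalized profit. The blend form and the α parameter are an assumption for illustration, not the paper's exact scheme.

```python
def rerank(scores, profits, alpha=0.3):
    """Blend predicted relevance with normalized item profit.
    alpha = 0 reproduces the traditional ranking; larger alpha
    lets the ranking deviate further toward profitable items."""
    max_p = max(profits.values())
    adjusted = {item: (1 - alpha) * s + alpha * profits[item] / max_p
                for item, s in scores.items()}
    return sorted(adjusted, key=adjusted.get, reverse=True)

scores = {"a": 0.9, "b": 0.85, "c": 0.4}   # traditional recommender output
profits = {"a": 1.0, "b": 8.0, "c": 2.0}   # vendor profit per item (toy values)
ranking = rerank(scores, profits)
```

Here the highly profitable item "b" overtakes "a" at α = 0.3, while at α = 0 the pure relevance order is returned unchanged.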
The maximal D=5 supergravities
de Wit, Bernard; Trigiante, M; Wit, Bernard de; Samtleben, Henning; Trigiante, Mario
2007-01-01
The general Lagrangian for maximal supergravity in five spacetime dimensions is presented with vector potentials in the \\bar{27} and tensor fields in the 27 representation of E_6. This novel tensor-vector system is subject to an intricate set of gauge transformations, describing 3(27-t) massless helicity degrees of freedom for the vector fields and 3t massive spin degrees of freedom for the tensor fields, where the (even) value of t depends on the gauging. The kinetic term of the tensor fields is accompanied by a unique Chern-Simons coupling which involves both vector and tensor fields. The Lagrangians are completely encoded in terms of the embedding tensor which defines the E_6 subgroup that is gauged by the vectors. The embedding tensor is subject to two constraints which ensure the consistency of the combined vector-tensor gauge transformations and the supersymmetry of the full Lagrangian. This new formulation encompasses all possible gaugings.
Constraint Propagation as Information Maximization
Abdallah, A Nait
2012-01-01
Dana Scott used the partial order among partial functions for his mathematical model of recursively defined functions. He interpreted the partial order as one of information content. In this paper we elaborate on Scott's suggestion of regarding computation as a process of information maximization by applying it to the solution of constraint satisfaction problems. Here the method of constraint propagation can be interpreted as decreasing uncertainty about the solution -- that is, as gain in information about the solution. As illustrative example we choose numerical constraint satisfaction problems to be solved by interval constraints. To facilitate this approach to constraint solving we formulate constraint satisfaction problems as formulas in predicate logic. This necessitates extending the usual semantics for predicate logic so that meaning is assigned not only to sentences but also to formulas with free variables.
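The interval-constraint propagation described in this abstract can be illustrated with a minimal sketch (Python; the single sum constraint and the intervals are invented for illustration, not taken from the paper). Narrowing never widens an interval, so each propagation step can only gain information about the solution:

```python
def narrow_sum(x, y, z):
    """Propagate the constraint x + y == z over closed intervals (lo, hi).
    Each interval is narrowed using the other two; uncertainty can only
    shrink, i.e. information about the solution only increases."""
    (xl, xh), (yl, yh), (zl, zh) = x, y, z
    zl, zh = max(zl, xl + yl), min(zh, xh + yh)
    xl, xh = max(xl, zl - yh), min(xh, zh - yl)
    yl, yh = max(yl, zl - xh), min(yh, zh - xl)
    return (xl, xh), (yl, yh), (zl, zh)

# Invented example: x, y in [0, 10] with x + y = 12 exactly.
x, y, z = (0.0, 10.0), (0.0, 10.0), (12.0, 12.0)
for _ in range(5):   # iterate narrowing to a fixed point
    x, y, z = narrow_sum(x, y, z)
```

Here propagation tightens both unknowns to [2, 10]: each variable must be at least 12 minus the other's maximum.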
A NOVEL THRESHOLD BASED EDGE DETECTION ALGORITHM
Y. RAMADEVI
2011-06-01
Image segmentation is the process of partitioning a digital image into multiple meaningful regions, or sets of pixel regions, with respect to a particular application. Edge detection is one of the most frequently used techniques in digital image processing. The level to which the subdivision is carried depends on the problem being viewed. Edges characterize boundaries and are therefore a problem of fundamental importance in image processing. There are many ways to perform edge detection. In this paper, different edge detection methods such as Sobel, Prewitt, Roberts, Canny, and Laplacian of Gaussian (LoG) are used for segmenting the image. The expectation-maximization (EM) algorithm, Otsu thresholding, and genetic algorithms are also used. A new edge detection technique is proposed which detects sharp and accurate edges that are not attainable with the existing techniques. The proposed method is applied with different threshold values, ranging between 0 and 1, for a given input image; it is observed that with a threshold value of 0.68 the sharp edges are recognised properly.
张智晟; 龚文杰; 于强; 常德政
2012-01-01
A wind power forecasting model based on a diagonal recursive neural network optimized by the electromagnetism-like mechanism algorithm is proposed. The diagonal recursive neural network is a dynamic recursive neural network and possesses good dynamic performance. The electromagnetism-like mechanism algorithm simulates the attraction and repulsion between charged particles in an electromagnetic field; it performs global optimization and has good convergence properties. In the model, the diagonal recursive neural network is optimized by the electromagnetism-like mechanism algorithm, which keeps the network training from being trapped in local minima and improves the forecasting precision. Simulation results show that the model effectively reduces the forecasting error and achieves satisfactory precision.
Ortiz-Rosario, Alexis; Adeli, Hojjat; Buford, John A
2017-01-15
Researchers often rely on simple methods to identify involvement of neurons in a particular motor task. The historical approach has been to inspect large groups of neurons and subjectively separate neurons into groups based on the expertise of the investigator. In cases where neuron populations are small it is reasonable to inspect these neuronal recordings and their firing rates carefully to avoid data omissions. In this paper, a new methodology is presented for automatic objective classification of neurons recorded in association with behavioral tasks into groups. By identifying characteristics of neurons in a particular group, the investigator can then identify functional classes of neurons based on their relationship to the task. The methodology is based on integration of a multiple signal classification (MUSIC) algorithm to extract relevant features from the firing rate and an expectation-maximization Gaussian mixture algorithm (EM-GMM) to cluster the extracted features. The methodology is capable of identifying and clustering similar firing rate profiles automatically based on specific signal features. An empirical wavelet transform (EWT) was used to validate the features found in the MUSIC pseudospectrum and the resulting signal features captured by the methodology. Additionally, this methodology was used to inspect behavioral elements of neurons to physiologically validate the model. This methodology was tested using a set of data collected from awake behaving non-human primates. Copyright © 2016 Elsevier B.V. All rights reserved.
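The EM Gaussian-mixture clustering stage described above can be illustrated with a minimal one-dimensional sketch (pure Python, two components, synthetic data; the paper's MUSIC feature extraction and EWT validation stages are not reproduced here):

```python
import math, random

def em_gmm_1d(xs, iters=50):
    """Fit a 2-component 1-D Gaussian mixture by EM and return hard
    cluster labels (for the sorted copy of the data) and the two means."""
    xs = sorted(xs)
    # Crude initialization: means at the lower and upper quartiles.
    mu = [xs[len(xs) // 4], xs[3 * len(xs) // 4]]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point.
        resp = []
        for x in xs:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means and variances.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(1e-6, sum(r[k] * (x - mu[k]) ** 2
                                   for r, x in zip(resp, xs)) / nk)
    return [0 if r[0] >= r[1] else 1 for r in resp], mu

# Invented synthetic data: two well-separated firing-rate-like clusters.
random.seed(0)
data = [random.gauss(0.0, 0.5) for _ in range(100)] + \
       [random.gauss(5.0, 0.5) for _ in range(100)]
labels, means = em_gmm_1d(data)
```

With well-separated clusters the recovered means land near 0 and 5; real feature vectors would of course be multi-dimensional.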
Barbosa, Diego R.; Silva, Alessandro L. da; Luciano, Edson Jose Rezende; Nepomuceno, Leonardo [Universidade Estadual Paulista (UNESP), Bauru, SP (Brazil). Dept. de Engenharia Eletrica], Emails: diego_eng.eletricista@hotmail.com, alessandrolopessilva@uol.com.br, edson.joserl@uol.com.br, leo@feb.unesp.br
2009-07-01
Problems of DC Optimal Power Flow (OPF) have been solved by various conventional optimization methods. When the DC OPF model involves discontinuous or non-differentiable functions, solution methods based on conventional optimization are often inapplicable because of the difficulty of calculating gradient vectors at the points of discontinuity or non-differentiability of these functions. This paper proposes a method for solving the DC OPF based on Genetic Algorithms (GA) with real coding. The proposed GA has specific genetic operators to improve the quality and feasibility of the solution. The results are analyzed for an IEEE test system, and its solutions are compared, when possible, with those obtained by a primal-dual logarithmic-barrier interior point method. The results highlight the robustness of the method and the feasibility of obtaining solutions for real systems.
Algorithms for Reinforcement Learning
Szepesvari, Csaba
2010-01-01
Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner's predictions. Further, the predictions may have long term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms'
Polarity related influence maximization in signed social networks.
Li, Dong; Xu, Zhi-Ming; Chakraborty, Nilanjan; Gupta, Anika; Sycara, Katia; Li, Sheng
2014-01-01
Influence maximization in social networks has been widely studied motivated by applications like spread of ideas or innovations in a network and viral marketing of products. Current studies focus almost exclusively on unsigned social networks containing only positive relationships (e.g. friend or trust) between users. Influence maximization in signed social networks containing both positive relationships and negative relationships (e.g. foe or distrust) between users is still a challenging problem that has not been studied. Thus, in this paper, we propose the polarity-related influence maximization (PRIM) problem which aims to find the seed node set with maximum positive influence or maximum negative influence in signed social networks. To address the PRIM problem, we first extend the standard Independent Cascade (IC) model to the signed social networks and propose a Polarity-related Independent Cascade (named IC-P) diffusion model. We prove that the influence function of the PRIM problem under the IC-P model is monotonic and submodular. Thus, a greedy algorithm can be used to achieve an approximation ratio of 1-1/e for solving the PRIM problem in signed social networks. Experimental results on two signed social network datasets, Epinions and Slashdot, validate that our approximation algorithm for solving the PRIM problem outperforms state-of-the-art methods.
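The greedy template this abstract appeals to can be sketched as follows (illustrative Python, not the authors' code: plain unsigned IC model with Monte Carlo spread estimation; the IC-P polarity extension is not reproduced, and the toy graph and parameters are invented):

```python
import random

def simulate_ic(graph, seeds, p, rng):
    """One Monte Carlo cascade under the Independent Cascade (IC) model;
    returns the set of nodes activated by the end of the cascade."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

def greedy_seeds(graph, k, p=0.2, trials=200, seed=0):
    """Greedy seed selection: add, k times, the node with the largest
    estimated marginal gain in expected spread. For monotone submodular
    spread functions this achieves the (1 - 1/e) approximation guarantee."""
    rng = random.Random(seed)
    nodes = set(graph) | {v for vs in graph.values() for v in vs}
    chosen = []
    for _ in range(k):
        best, best_spread = None, -1.0
        for u in sorted(nodes - set(chosen)):
            est = sum(len(simulate_ic(graph, chosen + [u], p, rng))
                      for _ in range(trials)) / trials
            if est > best_spread:
                best, best_spread = u, est
        chosen.append(best)
    return chosen

# Invented toy graph: node 0 is a hub and should be chosen first.
g = {0: [1, 2, 3, 4], 1: [5], 2: [5], 3: [6], 4: [6], 5: [], 6: []}
seeds = greedy_seeds(g, 2)
```

The hub node 0 has the largest expected spread and is selected first; the second seed is whichever remaining node the Monte Carlo estimates favor.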
黄宁宁; 贾振红; 杨杰; 庞韶宁
2011-01-01
Based on the features of texture images, a new texture image segmentation algorithm is proposed that combines the gray-level co-occurrence matrix (GLCM) with a fast expectation-maximization (EM) algorithm. To obtain good segmentation results, the algorithm uses three common GLCM features averaged over four directions, thereby removing directional effects. A Euclidean distance function measures the distance between pairs of feature vectors. Clustering the distance matrix with an improved EM algorithm yields an initial segmentation of the texture image, and morphological operations are then used to locate the texture boundaries precisely.
EM algorithm to fill missing values based on Naive Bayes
邹薇; 王会进
2011-01-01
Datasets with missing values are quite common in real applications; they cause a loss of information and complicate analysis, so handling missing values has become a hot research issue in classification. Because EM chooses initial cluster centers randomly, its clustering can be unstable. This paper therefore uses the classification produced by a Naive Bayes algorithm to initialize the EM algorithm, then iterates the E and M steps to refine the estimates, and fills the missing values with the resulting maximized values. Experimental results show that the algorithm strengthens the stability of the clustering and fills missing data more effectively.
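A minimal stand-in for the idea of class-informed filling can be sketched as follows (Python; this only shows per-class mean imputation as a proxy for the Naive-Bayes-initialized step, with the E/M refinement loop omitted; the toy table and labels are invented):

```python
def impute_by_class(X, y):
    """Fill missing entries (None) with the per-class mean of the observed
    values in that column. This is only a stand-in for the paper's idea of
    using a Naive Bayes classification to localize EM-based filling."""
    ncols = len(X[0])
    means = {}
    for c in set(y):
        means[c] = []
        for j in range(ncols):
            vals = [x[j] for x, yi in zip(X, y) if yi == c and x[j] is not None]
            means[c].append(sum(vals) / len(vals))
    # Replace each None with the mean for that row's class and column.
    return [[means[yi][j] if v is None else v for j, v in enumerate(x)]
            for x, yi in zip(X, y)]

# Invented toy table: rows are samples, None marks a missing value.
X = [[1.0, None], [3.0, 4.0], [None, 10.0], [7.0, 12.0]]
y = ['a', 'a', 'b', 'b']
filled = impute_by_class(X, y)
```

Each missing cell is filled from its own class only, which is why class-aware initialization is more stable than a single global mean.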
Almeida, Adino Americo Heimlich
2009-07-01
Graphics Processing Units (GPU) are high-performance co-processors intended, originally, to improve the use and quality of computer graphics applications. Once researchers and practitioners realized the potential of using GPUs for general purposes, their application was extended to other fields beyond computer graphics. The main objective of this work is to evaluate the impact of using GPUs in two typical problems of the nuclear area: neutron transport simulation by the Monte Carlo method, and solving the heat equation in a two-dimensional domain by the finite-difference method. To achieve this, we developed parallel algorithms for GPU and CPU for the two problems described above. The comparison showed that the GPU-based approach is faster than the CPU on a computer with two quad-core processors, without loss of precision. (author)
Beeping a Maximal Independent Set
Afek, Yehuda; Bar-Joseph, Ziv; Cornejo, Alejandro; Haeupler, Bernhard; Kuhn, Fabian
2012-01-01
We consider the problem of computing a maximal independent set (MIS) in an extremely harsh broadcast model that relies only on carrier sensing. The model consists of an anonymous broadcast network in which nodes have no knowledge about the topology of the network or even an upper bound on its size. Furthermore, it is assumed that an adversary chooses at which time slot each node wakes up. At each time slot a node can either beep, that is, emit a signal, or be silent. At a particular time slot, beeping nodes receive no feedback, while silent nodes can only differentiate between none of its neighbors beeping, or at least one of its neighbors beeping. We start by proving a lower bound that shows that in this model, it is not possible to locally converge to an MIS in sub-polynomial time. We then study four different relaxations of the model which allow us to circumvent the lower bound and find an MIS in polylogarithmic time. First, we show that if a polynomial upper bound on the network size is known, it is possi...
A Maximally Supersymmetric Kondo Model
Harrison, Sarah; Kachru, Shamit; Torroba, Gonzalo; /Stanford U., Phys. Dept. /SLAC
2012-02-17
We study the maximally supersymmetric Kondo model obtained by adding a fermionic impurity to N = 4 supersymmetric Yang-Mills theory. While the original Kondo problem describes a defect interacting with a free Fermi liquid of itinerant electrons, here the ambient theory is an interacting CFT, and this introduces qualitatively new features into the system. The model arises in string theory by considering the intersection of a stack of M D5-branes with a stack of N D3-branes, at a point in the D3 worldvolume. We analyze the theory holographically, and propose a dictionary between the Kondo problem and antisymmetric Wilson loops in N = 4 SYM. We perform an explicit calculation of the D5 fluctuations in the D3 geometry and determine the spectrum of defect operators. This establishes the stability of the Kondo fixed point together with its basic thermodynamic properties. Known supergravity solutions for Wilson loops allow us to go beyond the probe approximation: the D5s disappear and are replaced by three-form flux piercing a new topologically non-trivial S3 in the corrected geometry. This describes the Kondo model in terms of a geometric transition. A dual matrix model reflects the basic properties of the corrected gravity solution in its eigenvalue distribution.
A Distributed Spanning Tree Algorithm
Johansen, Karl Erik; Jørgensen, Ulla Lundin; Nielsen, Sven Hauge
We present a distributed algorithm for constructing a spanning tree for connected undirected graphs. Nodes correspond to processors and edges correspond to two-way channels. Each processor has initially a distinct identity and all processors perform the same algorithm. Computation as well as communication is asynchronous. The total number of messages sent during a construction of a spanning tree is at most 2E+3NlogN. The maximal message size is loglogN+log(maxid)+3, where maxid is the maximal processor identity.
A distributed spanning tree algorithm
Johansen, Karl Erik; Jørgensen, Ulla Lundin; Nielsen, Svend Hauge
1988-01-01
We present a distributed algorithm for constructing a spanning tree for connected undirected graphs. Nodes correspond to processors and edges correspond to two-way channels. Each processor has initially a distinct identity and all processors perform the same algorithm. Computation as well as communication is asynchronous. The total number of messages sent during a construction of a spanning tree is at most 2E+3NlogN. The maximal message size is loglogN+log(maxid)+3, where maxid is the maximal processor identity.
A Revenue Maximization Approach for Provisioning Services in Clouds
Li Pan
2015-01-01
With the increased reliability, security, and reduced cost of cloud services, more and more users are attracted to having their jobs and applications outsourced to IaaS data centers. For a cloud provider, deciding how to provision services to clients is far from trivial. The objective of this decision is to maximize the provider's revenue while respecting its IaaS resource constraints. This problem is defined as the IaaS cloud provider revenue maximization (ICPRM) problem in this paper. We formulate a service provision approach that helps a cloud provider determine which combination of clients to admit, and at what Quality-of-Service (QoS) levels, so as to maximize the provider's revenue given its available resources. We show that the overall problem is NP-hard and develop metaheuristic solutions based on the genetic algorithm to achieve revenue maximization. Experimental simulations and numerical results show that the proposed approach is both effective and efficient in solving ICPRM problems.
Mining Maximal Frequent Patterns in a Unidirectional FP-tree
SONG Jing-jing; LIU Rui-xin; WANG Yan; JIANG Bao-qing
2006-01-01
Because mining the complete set of frequent patterns from a dense database can be impractical, an interesting alternative has been proposed recently. Instead of mining the complete set of frequent patterns, the new model finds only the maximal frequent patterns, from which all frequent patterns can be generated. The FP-growth algorithm is one of the most efficient frequent-pattern mining methods published so far. However, because the FP-tree and conditional FP-trees must be two-way traversable, a great deal of memory is needed during mining. This paper proposes an efficient algorithm, Unid_FP-Max, for mining maximal frequent patterns based on a unidirectional FP-tree. Owing to the way the unidirectional FP-tree and conditional unidirectional FP-trees are generated, the algorithm reduces space consumption to the fullest extent. With two further techniques, single-path pruning and header-table pruning, which cut down the number of conditional unidirectional FP-trees generated recursively during mining, Unid_FP-Max further lowers the cost in both time and space.
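The notion of a maximal frequent pattern can be pinned down with a tiny brute-force sketch (Python; this reproduces only the output contract of algorithms like Unid_FP-Max, not the unidirectional FP-tree itself, and the toy database is invented):

```python
from itertools import combinations

def maximal_frequent(transactions, minsup):
    """Brute-force the maximal frequent itemsets: frequent itemsets with no
    frequent proper superset. Every frequent itemset is a subset of some
    maximal one, which is why the maximal set is a compact summary."""
    items = sorted({i for t in transactions for i in t})
    frequent = []
    for k in range(1, len(items) + 1):
        for combo in combinations(items, k):
            s = set(combo)
            # Support = number of transactions containing the itemset.
            if sum(s <= t for t in transactions) >= minsup:
                frequent.append(s)
    return [s for s in frequent if not any(s < t for t in frequent)]

# Invented toy database of five transactions.
db = [{'a', 'b', 'c'}, {'a', 'b'}, {'a', 'c'}, {'b', 'c'}, {'a', 'b', 'c'}]
maximal = maximal_frequent(db, minsup=2)
```

Here every singleton and pair is frequent, but all are subsumed by the frequent triple, so the maximal answer is just {a, b, c}.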
On the Furthest Hyperplane Problem and Maximal Margin Clustering
Liberty, Edo; Weinstein, Omri
2011-01-01
This paper introduces the Furthest Hyperplane Problem (FHP). Given a set of $n$ points in $\\R^d$, the objective is to produce the hyperplane (through the origin) which maximizes the separation margin, that is, the minimal distance between the hyperplane and an input point. We prove that FHP is NP-hard to approximate to within some small (multiplicative) constant, by presenting a gap preserving reduction from a particular version of the PCP theorem. We also present an algorithm which runs in time $O(n^{\\tilde{O}(1/\\theta^2)})$ where $\\theta$ is the optimal margin. It is based on a dimension reduction technique combined with an $\\epsilon$-net argument in the reduced dimension. As a consequence, we obtain the first polynomial time algorithm for Maximal Margin Clustering (MMC), which is the unsupervised counterpart of Support Vector Machines (SVM), for the case where the margin is a constant factor of the point cloud diameter. Indeed, this is our main motivation. Our algorithm's running time dependence on the margin ...
Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.
2016-05-01
X-ray computed tomography reconstruction for medical, security and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse-view scanners in nuclear medicine, low data rate scanners in Positron Emission Tomography (PET) [5, 7, 10] and more recently to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however, is that strict adherence to the correction limits of convergent algorithms extends the number of iterations and the ultimate timeline to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9] and spatially variant image updates. In this paper we describe the development of an AM reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream of commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15, when comparing images from accelerated and strictly convergent algorithms.
Methodology and basic algorithms of the Livermore Economic Modeling System
Bell, R.B.
1981-03-17
The methodology and the basic pricing algorithms used in the Livermore Economic Modeling System (EMS) are described. The report explains the derivations of the EMS equations in detail; however, it could also serve as a general introduction to the modeling system. A brief but comprehensive explanation of what EMS is and does, and how it does it is presented. The second part examines the basic pricing algorithms currently implemented in EMS. Each algorithm's function is analyzed and a detailed derivation of the actual mathematical expressions used to implement the algorithm is presented. EMS is an evolving modeling system; improvements in existing algorithms are constantly under development and new submodels are being introduced. A snapshot of the standard version of EMS is provided and areas currently under study and development are considered briefly.
Speed-up for N-FINDR algorithm
WANG Li-guo; ZHANG Ye
2008-01-01
N-FINDR is a very popular algorithm for endmember (EM) extraction owing to its automation and high efficiency. Unfortunately, innumerable volume calculations, random initial selection of EMs, and blind searching for EMs slow the algorithm down and limit its applications. So in this paper two measures are proposed to speed up the algorithm. The first substitutes distance calculations for volume calculations; avoiding the volume calculation greatly decreases the computational cost. The second re-sorts the dataset in terms of pixel-purity likelihood, based on the pixel purity index (PPI) concept; initial EMs can then be selected on a sound basis and a fast search for EMs is achieved. Numerical experiments show that the two measures speed up the original algorithm hundreds of times when the number of EMs exceeds ten.
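The spirit of replacing expensive simplex-volume tests with cheap distance tests can be sketched with greedy farthest-point selection (Python; this is only an illustrative analogue, not the actual N-FINDR update rule, and the toy pixel set is invented):

```python
def dist2(a, b):
    """Squared Euclidean distance between two pixel vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def farthest_point_selection(pixels, k):
    """Pick k extreme pixels greedily: each new pick maximizes its distance
    to the already-chosen set. Extreme points of the data cloud play the
    role of endmember candidates here."""
    # Start from the pixel farthest from an arbitrary reference pixel.
    chosen = [max(pixels, key=lambda p: dist2(p, pixels[0]))]
    while len(chosen) < k:
        chosen.append(max(pixels,
                          key=lambda p: min(dist2(p, c) for c in chosen)))
    return chosen

# Invented toy 2-band scene: corner points should be picked as "endmembers".
pts = [(0, 0), (1, 0), (0, 1), (10, 10), (10, 0), (0, 10)]
ems = farthest_point_selection(pts, 3)
```

Only pairwise distances are evaluated, never a simplex volume, which is the cost saving the abstract describes.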
Maximal Neighbor Similarity Reveals Real Communities in Networks
Žalik, Krista Rizman
2015-12-01
An important problem in the analysis of network data is the detection of groups of densely interconnected nodes, also called modules or communities. Community structure reveals functions and organizations of networks. Currently used algorithms for community detection in large-scale real-world networks are computationally expensive, or require a priori information such as the number or sizes of communities, or are not able to give the same resulting partition in multiple runs. In this paper we investigate a simple and fast algorithm that uses the network structure alone and requires neither optimization of a pre-defined objective function nor information about the number of communities. We propose a bottom-up community detection algorithm in which, starting from communities consisting of adjacent pairs of nodes and their maximally similar neighbors, we find real communities. We show that the overall advantage of the proposed algorithm compared to other community detection algorithms is its simple nature, low computational cost, and very high accuracy in detecting communities of different sizes, also in networks with a blurred modularity structure consisting of poorly separated communities. All communities identified by the proposed method for the Facebook network and the E. coli transcriptional regulatory network have strong structural and functional coherence.
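A simplified reading of the bottom-up, similarity-to-neighbors idea can be sketched as follows (Python; attaching each node to its most similar neighbor is an illustrative interpretation, not the paper's exact procedure, and the toy graph is invented):

```python
def jaccard(a, b):
    """Jaccard similarity of two node sets."""
    return len(a & b) / len(a | b)

def neighbor_similarity_communities(adj):
    """Attach every node to its most-similar neighbor (Jaccard similarity
    of closed neighborhoods); connected components of these attachment
    edges are returned as communities."""
    closed = {u: adj[u] | {u} for u in adj}
    parent = {u: u for u in adj}

    def find(u):                      # union-find with path halving
        while parent[u] != u:
            parent[u] = parent[parent[u]]
            u = parent[u]
        return u

    for u in adj:
        if adj[u]:
            best = max(adj[u], key=lambda v: jaccard(closed[u], closed[v]))
            parent[find(u)] = find(best)
    comms = {}
    for u in adj:
        comms.setdefault(find(u), set()).add(u)
    return list(comms.values())

# Invented graph: two triangles joined by a single bridge edge (2-3).
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
comms = neighbor_similarity_communities(adj)
```

The bridge endpoints are more similar to their own triangle than to each other, so the two triangles come out as the communities.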
Global preferential consistency for the topological sorting-based maximal spanning tree problem
Joseph, Rémy-Robert
2012-01-01
We introduce a new type of fully computable problem for DSS dedicated to maximal spanning tree problems, based on deduction and choice: preferential consistency problems. To show its interest, we describe a new compact representation of preferences specific to spanning trees, identifying an efficient maximal spanning tree sub-problem. Next, we compare this problem with the Pareto-based multiobjective one. Finally, we propose an efficient algorithm solving the associated preferential consistency problem.
Maximal inequalities for demimartingales and their applications
WANG XueJun; HU ShuHe
2009-01-01
In this paper, we establish some maximal inequalities for demimartingales which generalize and improve the results of Christofides. The maximal inequalities for demimartingales are used as key inequalities to establish other results, including Doob's type maximal inequality for demimartingales, strong laws of large numbers and growth rates for demimartingales and associated random variables. At last, we give an equivalent condition of uniform integrability for demisubmartingales.
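For orientation, the baseline Doob-type bound that this line of results generalizes can be written down explicitly. For a demisubmartingale $S_1, S_2, \ldots$ the classical maximal inequality reads, for every $\lambda > 0$ (stated here in its standard submartingale form; the paper's generalized constants and conditions may differ):

```latex
\lambda \, P\!\Big(\max_{1 \le k \le n} S_k \ge \lambda\Big)
\;\le\; E\!\left[\, S_n \, \mathbf{1}\!\left\{\max_{1 \le k \le n} S_k \ge \lambda\right\} \right]
\;\le\; E\, S_n^{+}.
```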
SUN Churen
2005-01-01
It is difficult to judge whether a given point is a global maximizer of an unconstrained optimization problem. This paper deals with this problem by considering global information via integration and gives a necessary and sufficient condition for judging whether a given point is a global maximizer of an unconstrained optimization problem. An algorithm is offered under such a condition, and finally two test problems are verified via the offered algorithm.
Task-oriented maximally entangled states
Agrawal, Pankaj; Pradhan, B, E-mail: agrawal@iopb.res.i, E-mail: bpradhan@iopb.res.i [Institute of Physics, Sachivalaya Marg, Bhubaneswar, Orissa 751 005 (India)
2010-06-11
We introduce the notion of a task-oriented maximally entangled state (TMES). This notion depends on the task for which a quantum state is used as the resource. TMESs are the states that can be used to carry out the task maximally. This concept may be more useful than that of a general maximally entangled state in the case of a multipartite system. We illustrate this idea by giving an operational definition of maximally entangled states on the basis of communication tasks of teleportation and superdense coding. We also give examples and a procedure to obtain such TMESs for n-qubit systems.
Inflation in maximal gauged supergravities
Kodama, Hideo [Theory Center, KEK,Tsukuba 305-0801 (Japan); Department of Particles and Nuclear Physics,The Graduate University for Advanced Studies,Tsukuba 305-0801 (Japan); Nozawa, Masato [Dipartimento di Fisica, Università di Milano, and INFN, Sezione di Milano,Via Celoria 16, 20133 Milano (Italy)
2015-05-18
We discuss the dynamics of multiple scalar fields and the possibility of realistic inflation in the maximal gauged supergravity. In this paper, we address this problem in the framework of recently discovered 1-parameter deformation of SO(4,4) and SO(5,3) dyonic gaugings, for which the base point of the scalar manifold corresponds to an unstable de Sitter critical point. In the gauge-field frame where the embedding tensor takes the value in the sum of the 36 and 36’ representations of SL(8), we present a scheme that allows us to derive an analytic expression for the scalar potential. With the help of this formalism, we derive the full potential and gauge coupling functions in analytic forms for the SO(3)×SO(3)-invariant subsectors of SO(4,4) and SO(5,3) gaugings, and argue that there exist no new critical points in addition to those discovered so far. For the SO(4,4) gauging, we also study the behavior of 6-dimensional scalar fields in this sector near the Dall’Agata-Inverso de Sitter critical point at which the negative eigenvalue of the scalar mass square with the largest modulus goes to zero as the deformation parameter s approaches a critical value s{sub c}. We find that when the deformation parameter s is taken sufficiently close to the critical value, inflation lasts more than 60 e-folds even if the initial point of the inflaton allows an O(0.1) deviation in Planck units from the Dall’Agata-Inverso critical point. It turns out that the spectral index n{sub s} of the curvature perturbation at the time of the 60 e-folding number is always about 0.96 and within the 1σ range n{sub s}=0.9639±0.0047 obtained by Planck, irrespective of the value of the η parameter at the critical saddle point. The tensor-scalar ratio predicted by this model is around 10{sup −3} and is close to the value in the Starobinsky model.
Luiz Guilherme Antonacci Guglielmo
2005-02-01
The objective of this study was to analyze the relationship of maximal aerobic power and muscular strength (maximal isotonic strength and vertical jump explosive power) with running economy (RE) in endurance athletes. Twenty-six male runners (27.9 ± 6.4 years; 62.7 ± 4.3 kg; 168.6 ± 6.1 cm; 6.6 ± 3.1% body fat) performed the following tests on different days: a) an incremental test to determine maximal oxygen uptake (VO2max) and its corresponding intensity (IVO2max); b) a submaximal constant-velocity test to determine RE; c) a maximal load test on the leg press; and d) maximal countermovement jump height (SV). VO2max (63.8 ± 8.3 ml/kg/min) was significantly correlated (r = 0.63) ...
Generation of Maximally Generalized Rules
徐如燕; 鲁汉榕; 郭齐胜
2001-01-01
In this paper, the generation of maximally generalized rules in the course of classification knowledge discovery based on rough set theory is discussed. Firstly, an algorithm is introduced. Secondly, we propose the information-based J-measure as another measure of attribute significance; this measure is used for heuristically selecting the conditions to be removed in the process of extracting a set of maximally generalized rules. Finally, we present an example to illustrate the process of the algorithm.
Solutions of Maximal Compatible Granules and Approximations in Rough Set Models
Chen Wu
2015-06-01
This paper studies a new approach to rough set theory that takes granules given by maximal compatible classes, in which any two objects are mutually compatible, as the primitive granules. It proposes upper and lower approximation computations that extend rough set models toward multi-granulation rough set theory in incomplete information systems, discusses the properties and relationships of granules and approximations, and designs algorithms to solve for maximal compatible classes and the lower and upper approximations. The correctness of the algorithms is verified by an example.
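The granules in question can be illustrated concretely. In the sketch below (our toy encoding, not the paper's algorithm), objects are attribute tuples in which None marks a missing value; two objects are compatible when they agree on every attribute known in both, and maximal compatible classes are maximal sets of pairwise-compatible objects, found here by brute force over subsets:

```python
from itertools import combinations

# Objects with missing values (None). Compatibility: agreement on every
# attribute that is known in both objects. Maximal compatible classes are
# the inclusion-maximal sets of pairwise-compatible objects.

def compatible(x, y):
    return all(a is None or b is None or a == b for a, b in zip(x, y))

def maximal_compatible_classes(objs):
    n = len(objs)
    classes = []
    for r in range(n, 0, -1):              # largest candidate sets first
        for combo in combinations(range(n), r):
            pairwise = all(compatible(objs[i], objs[j])
                           for i, j in combinations(combo, 2))
            if pairwise and not any(set(combo) <= c for c in classes):
                classes.append(set(combo))  # keep only maximal sets
    return classes

objs = [(1, None), (1, 2), (None, 2), (0, 2)]
print(maximal_compatible_classes(objs))    # [{0, 1, 2}, {2, 3}]
```

Note that, unlike equivalence classes, these granules can overlap (object 2 belongs to both classes), which is exactly why incomplete information systems need the extended approximation operators.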
Coverage maximization under resource constraints using proliferating random walks
Sudipta Saha; Niloy Ganguly; Abhijit Guria
2015-02-01
Dissemination of information has been one of the prime needs in almost every kind of communication network. Existing algorithms for this service try to maximize the coverage, i.e., the number of distinct nodes to which a given piece of information can be conveyed under constraints of time and energy. However, the problem becomes challenging in unstructured and decentralized environments. Due to its simplicity and adaptability, the random walk (RW) has been a very useful tool for such environments, and different variants of this technique have been studied. In this paper, we study a history-based non-uniform proliferating random strategy in which new walkers are dynamically introduced in the sparse regions of the network. We also study the breadth-first characteristics of random walk-based algorithms through appropriately designed metrics.
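The proliferation idea can be sketched in a few lines. This is a toy of our own making, not the paper's exact strategy: walkers on a ring graph clone themselves when most of the current node's neighbours are still unvisited, which pushes extra walkers into sparse regions:

```python
import random

# Toy proliferating random walk on an n-node ring: a walker spawns a clone
# when at least half of its node's neighbours are unvisited (a crude proxy
# for "sparse region"), capped at 32 concurrent walkers.

def proliferating_walk(n, steps, seed=0):
    rng = random.Random(seed)
    nbrs = {v: [(v - 1) % n, (v + 1) % n] for v in range(n)}
    visited, walkers = {0}, [0]
    for _ in range(steps):
        nxt = []
        for w in walkers:
            w = rng.choice(nbrs[w])
            visited.add(w)
            unexplored = sum(u not in visited for u in nbrs[w]) / 2
            if unexplored >= 0.5 and len(walkers) + len(nxt) < 32:
                nxt.append(w)          # proliferate in a sparse region
            nxt.append(w)
        walkers = nxt
    return len(visited)                # coverage: distinct nodes reached

cov = proliferating_walk(200, 50)
print(cov)
```

Coverage here is exactly the metric the abstract describes: the number of distinct nodes reached under a fixed step (time/energy) budget.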
Computing Maximally Supersymmetric Scattering Amplitudes
Stankowicz, James Michael, Jr.
This dissertation reviews work in computing N = 4 super-Yang--Mills (sYM) and N = 8 maximally supersymmetric gravity (mSUGRA) scattering amplitudes in D = 4 spacetime dimensions in novel ways. After a brief introduction and overview in Ch. 1, the various techniques used to construct amplitudes in the remainder of the dissertation are discussed in Ch. 2. This includes several new concepts such as d log and pure integrand bases, as well as how to construct the amplitude using exactly one kinematic point where it vanishes. Also included in this chapter is an outline of the Mathematica package on shell diagrams and numerics.m (osdn) that was developed for the computations herein. The rest of the dissertation is devoted to explicit examples. In Ch. 3, the starting point is tree-level sYM amplitudes that have integral representations with residues that obey amplitude relations. These residues are shown to have corresponding residue numerators that allow a double copy prescription that results in mSUGRA residues. In Ch. 4, the two-loop four-point sYM amplitude is constructed in several ways, showcasing many of the techniques of Ch. 2; this includes an example of how to use osdn. The two-loop five-point amplitude is also presented in a pure integrand representation with comments on how it was constructed from one homogeneous cut of the amplitude. On-going work on the two-loop n-point amplitude is presented at the end of Ch. 4. In Ch. 5, the three-loop four-point amplitude is presented in the d log representation and in the pure integrand representation. In Ch. 6, there are several examples of four- through seven-loop planar diagrams that illustrate how considerations of the singularity structure of the amplitude underpin dual-conformal invariance. Taken with the previous examples, this is additional evidence that the structure known to exist in the planar sector extends to the full theory. At the end of this chapter is a proof that all mSUGRA amplitudes have a pole at
Are all maximally entangled states pure?
Cavalcanti, D; Terra-Cunha, M O
2005-01-01
In this Letter we study whether all maximally entangled states are pure through several entanglement monotones. Our conclusions allow us to generalize the idea of monogamy of entanglement. We then propose a polygamy of entanglement, which expresses that if a general multipartite state is maximally entangled, it is necessarily factorized with respect to any other system.
Sampling and Representation Complexity of Revenue Maximization
Dughmi, Shaddin; Han, Li; Nisan, Noam
2014-01-01
We consider (approximate) revenue maximization in auctions where the distribution on input valuations is given via "black box" access to samples from the distribution. We observe that the number of samples required -- the sample complexity -- is tightly related to the representation complexity of an approximately revenue-maximizing auction. Our main results are upper bounds and an exponential lower bound on these complexities.
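A minimal single-bidder illustration of sample-based revenue maximization (our toy, not the paper's mechanism): with only black-box samples of the valuation distribution, a natural posted price is the sample value maximizing empirical revenue, price times the fraction of samples at or above it:

```python
# Empirical revenue maximization from samples: among observed valuations,
# pick the posted price p maximizing p * Pr_hat[v >= p].

def empirical_best_price(samples):
    best_p, best_rev = None, -1.0
    for p in sorted(set(samples)):
        rev = p * sum(v >= p for v in samples) / len(samples)
        if rev > best_rev:
            best_p, best_rev = p, rev
    return best_p, best_rev

samples = [1, 2, 3, 4, 10]
print(empirical_best_price(samples))  # (10, 2.0)
```

The sample-complexity question the abstract raises is how many such samples are needed before this empirical choice is close to the true revenue-optimal price.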
Lisonek, Petr
1996-01-01
… our classifications confirm the maximality of previously known sets; the results in E^7 and E^8 are new. Their counterpart in dimension larger than 10 is a set of unit vectors with only two values of inner products in the Lorentz space R^{d,1}. The maximality of this set again follows from a bound due...
An ethical justification of profit maximization
Koch, Carsten Allan
2010-01-01
In much of the literature on business ethics and corporate social responsibility, it is more or less taken for granted that attempts to maximize profits are inherently unethical. The purpose of this paper is to investigate whether an ethical argument can be given in support of profit maximizing b...
Alternative trailer configurations for maximizing payloads
Jason D. Thompson; Dana Mitchell; John Klepac
2017-01-01
In order for harvesting contractors to stay ahead of increasing costs, it is imperative that they employ all options to maximize productivity and efficiency. Transportation can account for half the cost to deliver wood to a mill. Contractors seek to maximize truck payload to increase productivity. The Forest Operations Research Unit, Southern Research Station, USDA...
Cohomology of Weakly Reducible Maximal Triangular Algebras
董浙; 鲁世杰
2000-01-01
In this paper, we introduce the concept of weakly reducible maximal triangular algebras φ, which form a large class of maximal triangular algebras. Let B be a weakly closed algebra containing φ; we prove that the cohomology spaces H^n(φ, B) (n ≥ 1) are trivial.
Global haplotype partitioning for maximal associated SNP pairs
Pezeshk Hamid
2009-08-01
Background: Global partitioning based on pairwise associations of SNPs has not previously been used to define haplotype blocks within genomes. Here, we define an association index based on LD between SNP pairs. We use Fisher's exact test to assess the statistical significance of the LD estimator. By this test, each SNP pair is characterized as associated, independent, or not statistically significant. We set limits on the maximum acceptable proportion of independent pairs within all blocks and search for the partitioning with the maximal proportion of associated SNP pairs. Essentially, this model reduces to a constrained optimization problem, the solution of which is obtained by iterating a dynamic programming algorithm. Results: In comparison with other methods, our algorithm reports blocks of larger average size. Nevertheless, the haplotype diversity within the blocks is captured by a small number of tagSNPs. Resampling HapMap haplotypes under a block-based model of recombination showed that our algorithm is robust in reproducing the same partitioning for recombinant samples. Our algorithm performed better than previously reported models in a case-control association study aimed at mapping a single-locus trait, based on simulation results evaluated by a block-based statistical test. Compared to other methods of haplotype block partitioning, we performed best on detection of recombination hotspots. Conclusion: Our proposed method divides chromosomes into regions within which allelic associations of SNP pairs are maximized. This approach presents a native design for dimension reduction in genome-wide association studies. Our results show that the pairwise allelic association of SNPs can describe various features of genomic variation, in particular recombination hotspots.
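The constrained dynamic program can be sketched as follows (a simplified stand-in for the paper's formulation; the matrix encoding and the threshold value are ours): partition a sequence of SNP positions into contiguous blocks so that the number of "associated" pairs inside blocks is maximal, while each block keeps its proportion of "independent" pairs below a limit:

```python
# DP over prefixes: score[k] = best total of associated within-block pairs
# over the first k positions. assoc[i][j] is 1 (associated), 0 (independent)
# or None (not statistically significant).

def best_partition(assoc, max_indep=0.2):
    n = len(assoc)

    def block_score(i, j):          # pairs within positions i..j inclusive
        pairs = [(a, b) for a in range(i, j + 1) for b in range(a + 1, j + 1)]
        if not pairs:
            return 0
        indep = sum(assoc[a][b] == 0 for a, b in pairs)
        if indep / len(pairs) > max_indep:
            return None             # block violates the independence limit
        return sum(assoc[a][b] == 1 for a, b in pairs)

    score = [0] * (n + 1)
    for k in range(1, n + 1):
        best = score[k - 1]         # position k-1 as a singleton block
        for i in range(k - 1):
            s = block_score(i, k - 1)
            if s is not None:
                best = max(best, score[i] + s)
        score[k] = best
    return score[n]

A = [[None, 1, 1, 0],
     [1, None, 1, 0],
     [1, 1, None, 0],
     [0, 0, 0, None]]
print(best_partition(A))  # 3: block {0,1,2} keeps all three associated pairs
```

Position 3 is left as a singleton because merging it would push the independent-pair proportion above the limit, mirroring how the method localizes block boundaries.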
Expectation-Maximization Binary Clustering for Behavioural Annotation.
Garriga, Joan; Palmer, John R B; Oltra, Aitana; Bartumeus, Frederic
2016-01-01
The growing capacity to process and store animal tracks has spurred the development of new methods to segment animal trajectories into elementary units of movement. Key challenges for movement trajectory segmentation are to (i) minimize the need of supervision, (ii) reduce computational costs, (iii) minimize the need of prior assumptions (e.g. simple parametrizations), and (iv) capture biologically meaningful semantics, useful across a broad range of species. We introduce the Expectation-Maximization binary Clustering (EMbC), a general purpose, unsupervised approach to multivariate data clustering. The EMbC is a variant of the Expectation-Maximization Clustering (EMC), a clustering algorithm based on the maximum likelihood estimation of a Gaussian mixture model. This is an iterative algorithm with a closed form step solution and hence a reasonable computational cost. The method looks for a good compromise between statistical soundness and ease and generality of use (by minimizing prior assumptions and favouring the semantic interpretation of the final clustering). Here we focus on the suitability of the EMbC algorithm for behavioural annotation of movement data. We show and discuss the EMbC outputs in both simulated trajectories and empirical movement trajectories including different species and different tracking methodologies. We use synthetic trajectories to assess the performance of EMbC compared to classic EMC and Hidden Markov Models. Empirical trajectories allow us to explore the robustness of the EMbC to data loss and data inaccuracies, and assess the relationship between EMbC output and expert label assignments. Additionally, we suggest a smoothing procedure to account for temporal correlations among labels, and a proper visualization of the output for movement trajectories. Our algorithm is available as an R-package with a set of complementary functions to ease the analysis.
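The EM loop underlying EMC, of which EMbC is a variant, can be shown compactly. The sketch below is our own toy (a 1-D two-component Gaussian mixture; it is not the EMbC R-package and omits the binary-delimiter refinement): responsibilities in the E-step alternate with closed-form parameter updates in the M-step:

```python
import math, random

# EM for a two-component 1-D Gaussian mixture: E-step computes the
# responsibility of each component for each point; M-step re-estimates
# means, variances and weights in closed form.

def em_gmm(xs, iters=50):
    mu = [min(xs), max(xs)]          # crude but well-separated initialization
    var = [1.0, 1.0]
    w = [0.5, 0.5]
    for _ in range(iters):
        resp = []
        for x in xs:                 # E-step
            dens = [w[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                    / math.sqrt(2 * math.pi * var[k]) for k in range(2)]
            s = sum(dens)
            resp.append([d / s for d in dens])
        for k in range(2):           # M-step (closed form)
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(1e-6, sum(r[k] * (x - mu[k]) ** 2
                                   for r, x in zip(resp, xs)) / nk)
            w[k] = nk / len(xs)
    return mu

rng = random.Random(1)
xs = [rng.gauss(0, 0.5) for _ in range(200)] + [rng.gauss(5, 0.5) for _ in range(200)]
print(sorted(em_gmm(xs)))            # means near 0 and 5
```

The closed-form M-step is what gives this family of algorithms the "reasonable computational cost" the abstract refers to; behavioural annotation then amounts to labelling each point by its most responsible component.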
Kleinberg, Jon
2006-01-01
Algorithm Design introduces algorithms by looking at the real-world problems that motivate them. The book teaches students a range of design and analysis techniques for problems that arise in computing applications. The text encourages an understanding of the algorithm design process and an appreciation of the role of algorithms in the broader field of computer science.
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts are introduced, genetic algorithm applications are described, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
Finding the Maximal Empty Rectangle Containing a Query Point
Kaplan, Haim
2011-01-01
Let $P$ be a set of $n$ points in an axis-parallel rectangle $B$ in the plane. We present an $O(n\\alpha(n)\\log^4 n)$-time algorithm to preprocess $P$ into a data structure of size $O(n\\alpha(n)\\log^3 n)$, such that, given a query point $q$, we can find, in $O(\\log^4 n)$ time, the largest-area axis-parallel rectangle that is contained in $B$, contains $q$, and its interior contains no point of $P$. This is a significant improvement over the previous solution of Augustine {\\em et al.} \\cite{qmex}, which uses slightly superquadratic preprocessing and storage.
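For intuition, the query the paper preprocesses for can be answered by brute force in roughly quadratic time per query (our illustrative baseline, far from the paper's polylogarithmic query structure): since each side of a maximal empty rectangle touches a point of P or the boundary of B, it suffices to enumerate candidate left/right edges and clip the vertical extent by the points in the strip:

```python
# Brute force: largest-area axis-parallel rectangle inside B=[0,W]x[0,H]
# that contains the query point q and no input point in its interior.

def largest_empty_rect(points, W, H, q):
    qx, qy = q
    xs = sorted({0, W} | {x for x, _ in points})
    lefts = [x for x in xs if x <= qx]
    rights = [x for x in xs if x >= qx]
    best = 0
    for l in lefts:
        for r in rights:
            if r <= l:
                continue
            lo, hi = 0, H
            for (px, py) in points:
                if l < px < r:        # point strictly inside the strip
                    if py <= qy:
                        lo = max(lo, py)
                    else:
                        hi = min(hi, py)
            if hi > lo:
                best = max(best, (r - l) * (hi - lo))
    return best

pts = [(2, 2), (4, 4)]
print(largest_empty_rect(pts, 6, 6, (3, 3)))  # 16
```

For the example, the optimum is the 4x4 rectangle [0,4] x [2,6] (or its mirror), with the two input points on its boundary rather than in its interior.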
Inclusive fitness maximization: An axiomatic approach.
Okasha, Samir; Weymark, John A; Bossert, Walter
2014-06-07
Kin selection theorists argue that evolution in social contexts will lead organisms to behave as if maximizing their inclusive, as opposed to personal, fitness. The inclusive fitness concept allows biologists to treat organisms as akin to rational agents seeking to maximize a utility function. Here we develop this idea and place it on a firm footing by employing a standard decision-theoretic methodology. We show how the principle of inclusive fitness maximization and a related principle of quasi-inclusive fitness maximization can be derived from axioms on an individual's 'as if preferences' (binary choices) for the case in which phenotypic effects are additive. Our results help integrate evolutionary theory and rational choice theory, help draw out the behavioural implications of inclusive fitness maximization, and point to a possible way in which evolution could lead organisms to implement it.
Maximal Hypersurfaces in Spacetimes with Translational Symmetry
Bulawa, Andrew
2016-01-01
We consider four-dimensional vacuum spacetimes which admit a free isometric spacelike R-action. Taking a quotient with respect to the R-action produces a three-dimensional quotient spacetime. We establish several results regarding maximal hypersurfaces (spacelike hypersurfaces of zero mean curvature) in quotient spacetimes. First, we show that complete noncompact maximal hypersurfaces must either be flat cylinders S^1 x R or conformal to the Euclidean plane. Second, we establish a positive mass theorem for certain maximal hypersurfaces. Finally, while it is meaningful to use a bounded lapse when adopting the maximal hypersurface gauge condition in the four-dimensional (asymptotically flat) setting, it is shown here that nontrivial quotient spacetimes admit the maximal hypersurface gauge only with an unbounded lapse.
CLIMP: Clustering Motifs via Maximal Cliques with Parallel Computing Design.
Zhang, Shaoqiang; Chen, Yong
2016-01-01
A set of conserved binding sites recognized by a transcription factor is called a motif, which can be found by many applications of comparative genomics for identifying over-represented segments. Moreover, when numerous putative motifs are predicted from a collection of genome-wide data, their similarity data can be represented as a large graph, where these motifs are connected to one another. However, an efficient clustering algorithm is desired for clustering the motifs that belong to the same groups and separating the motifs that belong to different groups, or even deleting an amount of spurious ones. In this work, a new motif clustering algorithm, CLIMP, is proposed by using maximal cliques and sped up by parallelizing its program. When a synthetic motif dataset from the database JASPAR, a set of putative motifs from a phylogenetic foot-printing dataset, and a set of putative motifs from a ChIP dataset are used to compare the performances of CLIMP and two other high-performance algorithms, the results demonstrate that CLIMP mostly outperforms the two algorithms on the three datasets for motif clustering, so that it can be a useful complement of the clustering procedures in some genome-wide motif prediction pipelines. CLIMP is available at http://sqzhang.cn/climp.html.
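The maximal-clique enumeration at the heart of clique-based motif clustering is the classic Bron-Kerbosch recursion on the undirected similarity graph. The sketch below is illustrative only (a tiny serial version on a hand-built adjacency map; CLIMP's contribution is the clustering pipeline and its parallelization):

```python
# Bron-Kerbosch without pivoting: R = current clique, P = candidates that
# extend it, X = already-processed vertices. A maximal clique is reported
# when no vertex can extend R (P and X both empty).

def bron_kerbosch(R, P, X, adj, out):
    if not P and not X:
        out.append(set(R))
        return
    for v in list(P):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, out)
        P = P - {v}
        X = X | {v}

adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2, 4}, 4: {3}}
cliques = []
bron_kerbosch(set(), set(adj), set(), adj, cliques)
print(sorted(map(sorted, cliques)))  # [[0, 1, 2], [2, 3], [3, 4]]
```

In the motif setting, vertices are putative motifs, edges encode similarity above a threshold, and each maximal clique becomes a candidate motif cluster.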
Fast Local Computation Algorithms
Rubinfeld, Ronitt; Vardi, Shai; Xie, Ning
2011-01-01
For input $x$, let $F(x)$ denote the set of outputs that are the "legal" answers for a computational problem $F$. Suppose $x$ and members of $F(x)$ are so large that there is not time to read them in their entirety. We propose a model of {\\em local computation algorithms} which for a given input $x$, support queries by a user to values of specified locations $y_i$ in a legal output $y \\in F(x)$. When more than one legal output $y$ exists for a given $x$, the local computation algorithm should output in a way that is consistent with at least one such $y$. Local computation algorithms are intended to distill the common features of several concepts that have appeared in various algorithmic subfields, including local distributed computation, local algorithms, locally decodable codes, and local reconstruction. We develop a technique, based on known constructions of small sample spaces of $k$-wise independent random variables and Beck's analysis in his algorithmic approach to the Lov{\\'{a}}sz Local Lemma, which und...
Chen, Zhe; Vijayan, Sujith; Barbieri, Riccardo; Wilson, Matthew A; Brown, Emery N
2009-07-01
UP and DOWN states, the periodic fluctuations between increased and decreased spiking activity of a neuronal population, are a fundamental feature of cortical circuits. Understanding UP-DOWN state dynamics is important for understanding how these circuits represent and transmit information in the brain. To date, limited work has been done on characterizing the stochastic properties of UP-DOWN state dynamics. We present a set of Markov and semi-Markov discrete- and continuous-time probability models for estimating UP and DOWN states from multiunit neural spiking activity. We model multiunit neural spiking activity as a stochastic point process, modulated by the hidden (UP and DOWN) states and the ensemble spiking history. We estimate jointly the hidden states and the model parameters by maximum likelihood using an expectation-maximization (EM) algorithm and a Monte Carlo EM algorithm that uses reversible-jump Markov chain Monte Carlo sampling in the E-step. We apply our models and algorithms in the analysis of both simulated multiunit spiking activity and actual multiunit spiking activity recorded from primary somatosensory cortex in a behaving rat during slow-wave sleep. Our approach provides a statistical characterization of UP-DOWN state dynamics that can serve as a basis for verifying and refining mechanistic descriptions of this process.
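The hidden-state estimation half of this problem can be illustrated with a deliberately simplified model (ours, not the paper's): a two-state HMM over binned spike counts with Poisson emissions and fixed, hand-chosen rates and transition probabilities, decoded by Viterbi. The paper's harder task is estimating those parameters themselves via EM and Monte Carlo EM:

```python
import math

# Two-state HMM (0 = DOWN, 1 = UP) over spike counts per time bin.
# Emissions are Poisson with rates (0.5, 5.0); transitions favour staying.

def viterbi_updown(counts, rates=(0.5, 5.0), stay=0.9):
    trans = [[stay, 1 - stay], [1 - stay, stay]]

    def logpois(k, lam):
        return k * math.log(lam) - lam - math.lgamma(k + 1)

    V = [[logpois(counts[0], rates[s]) + math.log(0.5) for s in (0, 1)]]
    back = []
    for c in counts[1:]:
        row, ptr = [], []
        for s in (0, 1):
            cand = [V[-1][p] + math.log(trans[p][s]) for p in (0, 1)]
            p = max((0, 1), key=lambda i: cand[i])
            row.append(cand[p] + logpois(c, rates[s]))
            ptr.append(p)
        V.append(row)
        back.append(ptr)
    s = max((0, 1), key=lambda i: V[-1][i])
    path = [s]
    for ptr in reversed(back):
        s = ptr[s]
        path.append(s)
    return list(reversed(path))

counts = [0, 0, 1, 6, 5, 4, 0, 0]
print(viterbi_updown(counts))  # [0, 0, 0, 1, 1, 1, 0, 0]
```

The sticky transition matrix plays the role of the state-duration structure in the paper's semi-Markov variants: isolated spikes in a DOWN stretch do not flip the decoded state.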
Iterative Schemes for Generalized Equilibrium Problem and Two Maximal Monotone Operators
Yao JC
2009-01-01
The purpose of this paper is to introduce and study two new hybrid proximal-point algorithms for finding a common element of the set of solutions to a generalized equilibrium problem and the sets of zeros of two maximal monotone operators in a uniformly smooth and uniformly convex Banach space. We establish strong and weak convergence theorems for these two modified hybrid proximal-point algorithms, respectively.
Claudionor Ribeiro da Silva
2010-07-01
The objective of this paper is the semi-automatic extraction of local (vicinal) roads. The research is divided into two distinct phases. In the first, a method to determine road widths is proposed; in the second, a fitness function for genetic algorithms is proposed that makes possible the detection of candidate segments for local road axes. Three digital images are used in conducting the experiments. The validation of the obtained results was accomplished against a reference image created through manual vectoring, and accuracy was computed based on the following indexes: completeness, correctness, and RMS. The obtained results point out the proposed methodology as promising.
Acceleration of the direct reconstruction of linear parametric images using nested algorithms.
Wang, Guobao; Qi, Jinyi
2010-03-01
Parametric imaging using dynamic positron emission tomography (PET) provides important information for biological research and clinical diagnosis. Indirect and direct methods have been developed for reconstructing linear parametric images from dynamic PET data. Indirect methods are relatively simple and easy to implement because the image reconstruction and kinetic modeling are performed in two separate steps. Direct methods estimate parametric images directly from raw PET data and are statistically more efficient. However, the convergence rate of direct algorithms can be slow due to the coupling between the reconstruction and kinetic modeling. Here we present two fast gradient-type algorithms for direct reconstruction of linear parametric images. The new algorithms decouple the reconstruction and linear parametric modeling at each iteration by employing the principle of optimization transfer. Convergence speed is accelerated by running more sub-iterations of linear parametric estimation because the computation cost of the linear parametric modeling is much less than that of the image reconstruction. Computer simulation studies demonstrated that the new algorithms converge much faster than the traditional expectation maximization (EM) and the preconditioned conjugate gradient algorithms for dynamic PET.
Are all maximally entangled states pure?
Cavalcanti, D.; Brandão, F. G. S. L.; Terra Cunha, M. O.
2005-10-01
We study if all maximally entangled states are pure through several entanglement monotones. In the bipartite case, we find that the same conditions which lead to the uniqueness of the entropy of entanglement as a measure of entanglement exclude the existence of maximally mixed entangled states. In the multipartite scenario, our conclusions allow us to generalize the idea of the monogamy of entanglement: we establish the polygamy of entanglement, expressing that if a general state is maximally entangled with respect to some kind of multipartite entanglement, then it is necessarily factorized of any other system.
An ethical justification of profit maximization
Koch, Carsten Allan
2010-01-01
In much of the literature on business ethics and corporate social responsibility, it is more or less taken for granted that attempts to maximize profits are inherently unethical. The purpose of this paper is to investigate whether an ethical argument can be given in support of profit maximizing behaviour. It is argued that some form of consequential ethics must be applied, and that both profit seeking and profit maximization can be defended from a rule-consequential point of view. It is noted, however, that the result does not apply unconditionally, but requires that certain form of profit (and...
Robust utility maximization in a discontinuous filtration
Jeanblanc, Monique; Ngoupeyou, Armand
2012-01-01
We study a problem of utility maximization under model uncertainty with information including jumps. We prove first that the value process of the robust stochastic control problem is described by the solution of a quadratic-exponential backward stochastic differential equation with jumps. Then, we establish a dynamic maximum principle for the optimal control of the maximization problem. The characterization of the optimal model and the optimal control (consumption-investment) is given via a forward-backward system which generalizes the result of Duffie and Skiadas (1994) and El Karoui, Peng and Quenez (2001) in the case of maximization of recursive utilities including model with jumps.
Joux, Antoine
2009-01-01
Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic
A NEW RECURSIVE ALGORITHM FOR MULTIUSER DETECTION
Wang Lei; Zheng Baoyu; Li Lei; Chen Chao
2009-01-01
Based on the synthesis and analysis of recursive receivers, a new algorithm, namely the partial grouping maximum likelihood algorithm, is proposed to achieve satisfactory performance with moderate computational complexity. During the analysis, some interesting properties shared by the proposed procedures are described. Finally, the performance assessment shows that the new scheme is superior to the linear detector and the ordinary grouping algorithm, and achieves a bit-error rate close to that of the optimum receiver.
HEALTH INSURANCE: CONTRIBUTIONS AND REIMBURSEMENT MAXIMAL
HR Division
2000-01-01
Affected by both the salary adjustment index on 1.1.2000 and the evolution of the staff members and fellows population, the average reference salary, which is used as an index for fixed contributions and reimbursement maxima, has changed significantly. An adjustment of the amounts of the reimbursement maxima and the fixed contributions is therefore necessary, as from 1 January 2000. Reimbursement maxima: the revised reimbursement maxima will appear on the leaflet summarising the benefits for the year 2000, which will soon be available from the divisional secretariats and from the AUSTRIA office at CERN. Fixed contributions: the fixed contributions, applicable to some categories of voluntarily insured persons, are set as follows (amounts in CHF for monthly contributions): voluntarily insured member of the personnel, with complete coverage: 815,- (was 803,- in 1999); voluntarily insured member of the personnel, with reduced coverage: 407,- (was 402,- in 1999); voluntarily insured no longer dependent child: 326,- (was 321...
Maximizing throughput by evaluating critical utilization paths
Weeda, P.J.
1991-01-01
Recently the relationship between batch structure, bottleneck machine, and maximum throughput has been explored for serial, convergent, and divergent process configurations consisting of two machines and three processes. In three of the seven possible configurations, a multiple batch structure maximizes throughput.
Relationship between maximal exercise parameters and individual ...
Relationship between maximal exercise parameters and individual time trial ... It is widely accepted that the ventilatory threshold (VT) is an important ... This study investigated whether the physiological responses during a 20km time trial (TT) ...
Simple technique for maximal thoracic muscle harvest.
Marshall, M Blair; Kaiser, Larry R; Kucharczuk, John C
2004-04-01
We present a modification of the technique for standard muscle flap harvest: the placement of cutaneous traction sutures. This technique allows for maximal dissection of the thoracic muscles even through minimal incisions. Through improved exposure and traction, complete dissection of the muscle bed can be performed and the amount of tissue obtained maximized. Because more muscle bulk is obtained with this technique, the need for a second muscle may be avoided.
MAXIMAL POINTS OF A REGULAR TRUTH FUNCTION
Every canonical linearly separable truth function is a regular function, but not every regular truth function is linearly separable. The most promising method of determining which of the regular truth functions are linearly separable requires finding their maximal and minimal points. In this report, a quick, systematic method is developed for finding the maximal points of any regular truth function in terms of its arithmetic invariants.
Maximal Subgroups of Skew Linear Groups
M. Mahdavi-Hezavehi
2002-01-01
Let D be an infinite division algebra of finite dimension over its centre Z(D) = F, and n a positive integer. The structure of maximal subgroups of skew linear groups is investigated. In particular, assume N is a normal subgroup of GLn(D) and M is a maximal subgroup of N containing Z(N). It is shown that if M/Z(N) is finite, then N is central.
Feature Extraction Using Supervised Independent Component Analysis by Maximizing Class Distance
Sakaguchi, Yoshinori; Ozawa, Seiichi; Kotani, Manabu
Recently, Independent Component Analysis (ICA) has been applied not only to problems of blind signal separation, but also to feature extraction from patterns. However, the effectiveness of pattern features extracted by conventional ICA algorithms depends on the pattern sets; that is, on how patterns are distributed in the feature space. As one of the reasons, we have pointed out that ICA features are obtained by increasing only their independence, even if class information is available. In this context, we can expect that higher-performance features can be obtained by introducing class information into conventional ICA algorithms. In this paper, we propose a supervised ICA (SICA) that maximizes the Mahalanobis distance between features of different classes as well as maximizing their independence. In the first experiment, two-dimensional artificial data are applied to the proposed SICA algorithm to see how well maximizing the Mahalanobis distance works in feature extraction. As a result, we demonstrate that the proposed SICA algorithm gives good features with high separability compared with principal component analysis and a conventional ICA. In the second experiment, the recognition performance of features extracted by the proposed SICA is evaluated using three data sets from the UCI Machine Learning Repository. From the results, we show that better recognition accuracy is obtained using our proposed SICA. Furthermore, we show that pattern features extracted by SICA are better than those extracted by only maximizing the Mahalanobis distance.
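The class-separation criterion itself is easy to compute. The sketch below (our simplification: a pooled, diagonal covariance rather than the full covariance matrix) evaluates the squared Mahalanobis distance between two class means, the quantity such a supervised objective drives upward alongside independence:

```python
# Squared Mahalanobis distance between two class means under a pooled,
# per-dimension (diagonal) variance estimate.

def mahalanobis_sq(xs, ys):
    n_x, n_y = len(xs), len(ys)
    dim = len(xs[0])
    mx = [sum(v[d] for v in xs) / n_x for d in range(dim)]
    my = [sum(v[d] for v in ys) / n_y for d in range(dim)]
    pooled = []
    for d in range(dim):                       # pooled per-dimension variance
        sx = sum((v[d] - mx[d]) ** 2 for v in xs)
        sy = sum((v[d] - my[d]) ** 2 for v in ys)
        pooled.append((sx + sy) / (n_x + n_y - 2))
    return sum((mx[d] - my[d]) ** 2 / pooled[d] for d in range(dim))

a = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
b = [(4.0, 4.0), (5.0, 4.0), (4.0, 5.0), (5.0, 5.0)]
print(mahalanobis_sq(a, b))  # 96.0
```

Unlike raw Euclidean distance between means, this criterion rewards projections along which the classes are both far apart and tightly concentrated, which is why it is a natural supervision signal for feature extraction.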
Hougardy, Stefan
2016-01-01
Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.
Consumer-driven profit maximization in broiler production and processing
Ecio de Farias Costa
2004-01-01
Increased emphasis on consumer markets in broiler profit-maximizing modeling generates results that differ from those of traditional profit-maximization models. This approach reveals that the adoption of step pricing and the consideration of marketing options (examples of responsiveness to consumers) affect the optimal feed formulation levels and the types of broiler production that generate maximum profitability. The adoption of step pricing attests that higher profits can be obtained for targeted weights only if premium prices for broiler products are contracted.
Multiple Sink Positioning and Routing to Maximize the Lifetime of Sensor Networks
Kim, Haeyong; Kwon, Taekyoung; Mah, Pyeongsoo
In wireless sensor networks, the sensor nodes collect data, which are routed to a sink node. Most of the existing proposals address the routing problem to maximize network lifetime in the case of a single sink node. In this paper, we extend this problem to the case of multiple sink nodes. To maximize network lifetime, we consider two problems: (i) how to position multiple sink nodes in the area, and (ii) how to route traffic flows from sensor nodes to sink nodes. In this paper, the solutions to these problems are formulated as a Mixed Integer Linear Programming (MILP) model. However, it is computationally difficult to solve the MILP formulation as the size of the sensor network grows, because MILP is NP-hard. Thus, we propose a heuristic algorithm, which produces a solution in polynomial time. From our experiments, we found that the proposed heuristic algorithm provides a near-optimal solution for maximizing network lifetime in dense sensor networks.
Natural selection and the maximization of fitness.
Birch, Jonathan
2016-08-01
The notion that natural selection is a process of fitness maximization gets a bad press in population genetics, yet in other areas of biology the view that organisms behave as if attempting to maximize their fitness remains widespread. Here I critically appraise the prospects for reconciliation. I first distinguish four varieties of fitness maximization. I then examine two recent developments that may appear to vindicate at least one of these varieties. The first is the 'new' interpretation of Fisher's fundamental theorem of natural selection, on which the theorem is exactly true for any evolving population that satisfies some minimal assumptions. The second is the Formal Darwinism project, which forges links between gene frequency change and optimal strategy choice. In both cases, I argue that the results fail to establish a biologically significant maximization principle. I conclude that it may be a mistake to look for universal maximization principles justified by theory alone. A more promising approach may be to find maximization principles that apply conditionally and to show that the conditions were satisfied in the evolution of particular traits.
He, Xin; Cheng, Lishui; Fessler, Jeffrey A; Frey, Eric C
2011-06-01
In simultaneous dual-isotope myocardial perfusion SPECT (MPS) imaging, data are simultaneously acquired to determine the distributions of two radioactive isotopes. The goal of this work was to develop penalized maximum likelihood (PML) algorithms for a novel cross-tracer prior that exploits the fact that the two images reconstructed from simultaneous dual-isotope MPS projection data are perfectly registered in space. We first formulated the simultaneous dual-isotope MPS reconstruction problem as a joint estimation problem. A cross-tracer prior that couples voxel values on both images was then proposed. We developed an iterative algorithm to reconstruct the MPS images that converges to the maximum a posteriori solution for this prior based on separable surrogate functions. To accelerate the convergence, we developed a fast algorithm for the cross-tracer prior based on the complete data OS-EM (COSEM) framework. The proposed algorithm was compared qualitatively and quantitatively to a single-tracer version of the prior that did not include the cross-tracer term. Quantitative evaluations included comparisons of mean and standard deviation images as well as assessment of image fidelity using the mean square error. We also evaluated the cross tracer prior using a three-class observer study with respect to the three-class MPS diagnostic task, i.e., classifying patients as having either no defect, reversible defect, or fixed defects. For this study, a comparison with conventional ordered subsets-expectation maximization (OS-EM) reconstruction with postfiltering was performed. The comparisons to the single-tracer prior demonstrated similar resolution for areas of the image with large intensity changes and reduced noise in uniform regions. The cross-tracer prior was also superior to the single-tracer version in terms of restoring image fidelity. Results of the three-class observer study showed that the proposed cross-tracer prior and the convergent algorithms improved the
Angelis, G I; Kotasidis, F A; Matthews, J C [Imaging, Proteomics and Genomics, MAHSC, University of Manchester, Wolfson Molecular Imaging Centre, Manchester (United Kingdom); Reader, A J [Montreal Neurological Institute, McGill University, Montreal (Canada); Lionheart, W R, E-mail: georgios.angelis@mmic.man.ac.uk [School of Mathematics, University of Manchester, Alan Turing Building, Manchester (United Kingdom)
2011-07-07
Iterative expectation maximization (EM) techniques have been extensively used to solve maximum likelihood (ML) problems in positron emission tomography (PET) image reconstruction. Although EM methods offer a robust approach to solving ML problems, they usually suffer from slow convergence rates. The ordered subsets EM (OSEM) algorithm provides significant improvements in the convergence rate, but it can cycle between estimates converging towards the ML solution of each subset. In contrast, gradient-based methods, such as the recently proposed non-monotonic maximum likelihood (NMML) and the more established preconditioned conjugate gradient (PCG), offer a globally convergent, yet equally fast, alternative to OSEM. Reported results showed that NMML provides faster convergence compared to OSEM; however, it has never been compared to other fast gradient-based methods, like PCG. Therefore, in this work we evaluate the performance of two gradient-based methods (NMML and PCG) and investigate their potential as an alternative to the fast and widely used OSEM. All algorithms were evaluated using 2D simulations, as well as a single [11C]DASB clinical brain dataset. Results on simulated 2D data show that both PCG and NMML achieve orders of magnitude faster convergence to the ML solution compared to MLEM and exhibit comparable performance to OSEM. Equally fast performance is observed between OSEM and PCG for clinical 3D data, but NMML seems to perform poorly. However, with the addition of a preconditioner term to the gradient direction, the convergence behaviour of NMML can be substantially improved. Although PCG is a fast convergent algorithm, the use of a (bent) line search increases the complexity of the implementation, as well as the computational time involved per iteration. Contrary to previous reports, NMML offers no clear advantage over OSEM or PCG for noisy PET data. Therefore, we conclude that there is little evidence to replace OSEM as the algorithm of choice.
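As context for the comparison above, the basic MLEM update that OSEM and the gradient methods aim to accelerate can be sketched in a few lines. This is a minimal NumPy illustration on a toy system matrix; the matrix, dimensions, and data are invented for the sketch, not taken from the paper:

```python
import numpy as np

def mlem(A, y, n_iters=50):
    """Basic MLEM for Poisson data y ~ Poisson(A @ x).

    Multiplicative update: x <- x / (A^T 1) * A^T (y / (A x)).
    """
    x = np.ones(A.shape[1])           # uniform initial image
    sens = A.T @ np.ones(A.shape[0])  # sensitivity image, A^T 1
    for _ in range(n_iters):
        proj = A @ x                  # forward projection
        ratio = y / np.maximum(proj, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Toy problem: 4 detector bins, 3 image voxels, noiseless data
rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(4, 3))
x_true = np.array([1.0, 5.0, 2.0])
y = A @ x_true
x_hat = mlem(A, y, n_iters=2000)
```

The update preserves non-negativity by construction, which is one reason EM methods are considered robust despite their slow convergence.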
Abrams, D.; Williams, C.
1999-01-01
This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases for which all known classical algorithms require exponential time.
Tel, G.
1993-01-01
We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of distributed algorithms.
2010-01-01
BACKGROUND: Little is known about the cardiorespiratory and metabolic response of healthy children during a progressive maximal exercise test. OBJECTIVE: To test the hypothesis that children show different cardiorespiratory and metabolic responses during a progressive maximal exercise test compared with adults. METHODS: Twenty-five healthy children (sex, 15M/10F; age, 10.2 ± 0.2) and 20 healthy adults (sex, 11M/9F; age, 27.5 ± 0.4) were submitted to a...
Excap: maximization of haplotypic diversity of linked markers.
André Kahles
Genetic markers, defined as variable regions of DNA, can be utilized for distinguishing individuals or populations. As long as markers are independent, it is easy to combine the information they provide. For nonrecombinant sequences like mtDNA, choosing the right set of markers for forensic applications can be difficult and requires careful consideration. In particular, one wants to maximize the utility of the markers. Until now, this has mainly been done by hand. We propose an algorithm that finds the most informative subset of a set of markers. The algorithm uses a depth first search combined with a branch-and-bound approach. Since the worst case complexity is exponential, we also propose some data-reduction techniques and a heuristic. We implemented the algorithm and applied it to two forensic caseworks using mitochondrial DNA, which resulted in marker sets with significantly improved haplotypic diversity compared to previous suggestions. Additionally, we evaluated the quality of the estimation with an artificial dataset of mtDNA. The heuristic is shown to provide extensive speedup at little cost in accuracy.
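The objective of such a marker-selection search can be illustrated with a minimal sketch: Nei's haplotypic diversity of a marker subset, maximized here by brute force over all subsets of a fixed size. The paper's algorithm prunes this search with branch and bound; the haplotypes below are made up for illustration:

```python
from itertools import combinations

def diversity(haplotypes, markers):
    """Nei's haplotypic diversity, 1 - sum(p_i^2), of the haplotypes
    restricted to the chosen marker positions."""
    counts = {}
    for h in haplotypes:
        key = tuple(h[m] for m in markers)
        counts[key] = counts.get(key, 0) + 1
    n = len(haplotypes)
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_marker_subset(haplotypes, k):
    """Exhaustive search over all k-subsets of marker positions;
    a branch-and-bound version prunes most of these candidates."""
    n_markers = len(haplotypes[0])
    return max(combinations(range(n_markers), k),
               key=lambda s: diversity(haplotypes, s))

# Five hypothetical haplotypes over three markers
haps = ["AAC", "AAT", "GAC", "GAT", "GAC"]
best = best_marker_subset(haps, 2)   # markers 0 and 2 distinguish most
```

Here markers 0 and 2 yield four distinct haplotypes out of five individuals, the highest diversity any pair achieves on this toy data.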
Martin-Fernandez, Manuel; Revuelta, Javier
2017-01-01
This study compares the performance of two recently introduced estimation algorithms, the Metropolis-Hastings Robins-Monro (MHRM) and the Hamiltonian MCMC (HMC), with two algorithms consolidated in the psychometric literature, marginal likelihood via the EM algorithm (MML-EM) and Markov chain Monte Carlo (MCMC), in the estimation of multidimensional…
Kujawińska Agnieszka
2016-06-01
The article presents a study of applying the proposed method of cluster analysis to support purchasing decisions in the welding industry. The authors analyze the usefulness of the non-hierarchical Expectation Maximization (EM) method in the selection of material (212 combinations of flux and wire melt) for the Submerged Arc Welding (SAW) process. The proposed approach to cluster analysis proves useful in supporting purchase decisions.
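For readers unfamiliar with the EM method applied here, a minimal one-dimensional two-component Gaussian-mixture EM can be written from scratch; the synthetic data and initialisation below are illustrative, not the article's welding data:

```python
import numpy as np

def em_gmm_1d(x, n_iters=200):
    """EM for a two-component 1-D Gaussian mixture (minimal sketch)."""
    mu = np.array([x.min(), x.max()])       # crude but effective initialisation
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iters):
        # E-step: responsibility of each component for each point
        d = x[:, None] - mu[None, :]
        pdf = np.exp(-0.5 * (d / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
        r = pi * pdf
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and standard deviations
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        d = x[:, None] - mu[None, :]        # recompute with the new means
        sigma = np.sqrt((r * d ** 2).sum(axis=0) / nk)
    return pi, mu, sigma

# Two well-separated synthetic clusters
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0.0, 0.5, 200), rng.normal(4.0, 0.5, 200)])
pi, mu, sigma = em_gmm_1d(x)
```

Each iteration is guaranteed not to decrease the data likelihood, which is the property that makes EM attractive for this kind of unsupervised grouping.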
Welfare-maximizing and revenue-maximizing tariffs with a few domestic firms
Bruno Larue; Jean-Philippe Gervais
2002-01-01
In this paper we compare the orthodox optimal tariff formula with the appropriate welfare-maximizing tariff when there are a few producing or importing firms. The welfare-maximizing tariff can be very low, even negative in some cases, while in others it can even exceed the maximum-revenue tariff. The relationship between the welfare-maximizing tariff and the number of firms need not be monotonically increasing, because the tariff is not strictly used to internalize the terms-of-trade externality...
Maximizing Complementary Quantities by Projective Measurements
M. Souza, Leonardo A.; Bernardes, Nadja K.; Rossi, Romeu
2017-04-01
In this work, we study the so-called quantitative complementarity quantities. We focus on the following physical situation: two qubits (q_A and q_B) are initially in a maximally entangled state. One of them (q_B) interacts with an N-qubit system (R). After the interaction, projective measurements are performed on each of the qubits of R, in a basis that is chosen after independent optimization procedures: maximization of the visibility, the concurrence, and the predictability. For a specific maximization procedure, we study in detail how each of the complementary quantities behaves, conditioned on the intensity of the coupling between q_B and the N qubits. We show that, if the coupling is sufficiently "strong," independent of the maximization procedure, the concurrence tends to decay quickly. Interestingly enough, the behavior of the concurrence in this model is similar to the entanglement dynamics of a two-qubit system subjected to a thermal reservoir, despite the fact that we consider finite N. However, the visibility shows a different behavior: its maximization is more efficient for stronger coupling constants. Moreover, we investigate how the distinguishability, or the information stored in different parts of the system, is distributed for different couplings.
Maximizing the probability of detecting an electromagnetic counterpart of gravitational-wave events
Coughlin, Michael; Stubbs, Christopher
2016-10-01
Compact binary coalescences are a promising source of gravitational waves for second-generation interferometric gravitational-wave detectors such as advanced LIGO and advanced Virgo. These are among the most promising sources for joint detection of electromagnetic (EM) and gravitational-wave (GW) emission. To maximize the science performed with these objects, it is essential to undertake a follow-up observing strategy that maximizes the likelihood of detecting the EM counterpart. We present a follow-up strategy that maximizes the counterpart detection probability, given a fixed investment of telescope time. We show how the prior assumption on the luminosity function of the electromagnetic counterpart impacts the optimized follow-up strategy. Our results suggest that if the goal is to detect an EM counterpart from among a succession of GW triggers, the optimal strategy is to perform long integrations in the highest likelihood regions. For certain assumptions about source luminosity and mass distributions, we find that the optimal time investment is proportional to the 2/3 power of the surface density of the GW location probability on the sky. In the future, this analysis framework will benefit significantly from the 3-dimensional localization probability.
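The 2/3-power scaling admits a direct sketch: allocate a fixed telescope-time budget across sky fields in proportion to the 2/3 power of each field's localization probability. The field probabilities and budget below are hypothetical:

```python
def allocate_time(probs, total_time):
    """Split a telescope-time budget across sky fields in proportion
    to probability^(2/3), following the scaling described above."""
    weights = [p ** (2.0 / 3.0) for p in probs]
    total_weight = sum(weights)
    return [total_time * w / total_weight for w in weights]

# Three hypothetical fields covering 50%, 30%, and 20% of the GW
# localization probability, with 10 hours of telescope time to spend
times = allocate_time([0.5, 0.3, 0.2], 10.0)
```

Relative to allocating time linearly in probability, the sublinear exponent shifts some integration time toward lower-probability fields while still favoring the most likely region.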
Convergent algorithms for protein structural alignment
Martínez José
2007-08-01
Background: Many algorithms exist for protein structural alignment, based on internal protein coordinates or on explicit superposition of the structures. These methods are usually successful for detecting structural similarities. However, current practical methods are seldom supported by convergence theories. In particular, although the goal of each algorithm is to maximize some scoring function, there is no practical method that theoretically guarantees score maximization. A practical algorithm with solid convergence properties would be useful for the refinement of protein folding maps, and for the development of new scores designed to be correlated with functional similarity. Results: In this work, the maximization of scoring functions in protein alignment is interpreted as a Low Order Value Optimization (LOVO) problem. The new interpretation provides a framework for the development of algorithms based on well established methods of continuous optimization. The resulting algorithms are convergent and increase the scoring functions at every iteration. The solutions obtained are critical points of the scoring functions. Two algorithms are introduced: one is based on the maximization of the scoring function with Dynamic Programming followed by the continuous maximization of the same score, with respect to the protein position, using a smooth Newtonian method. The second algorithm replaces the Dynamic Programming step by a fast procedure for computing the correspondence between Cα atoms. The algorithms are shown to be very effective for the maximization of the STRUCTAL score. Conclusion: The interpretation of protein alignment as a LOVO problem provides a new theoretical framework for the development of convergent protein alignment algorithms. These algorithms are shown to be very reliable for the maximization of the STRUCTAL score, and other distance-dependent scores may be optimized with the same strategy. The improved score optimization
Polyploidy Induction of Pteroceltis tatarinowii Maxim
Lin ZHANG; Feng WANG; Zhongkui SUN; Cuicui ZHU; Rongwei CHEN
2015-01-01
[Objective] This study was conducted to obtain tetraploid Pteroceltis tatarinowii Maxim. with excellent ornamental traits. [Method] The stem apex growing points of Pteroceltis tatarinowii Maxim. were treated with different concentrations of colchicine solution for different durations to determine a proper method and obtain polyploids. [Result] The most effective induction was obtained by treatment with 0.6%-0.8% colchicine for 72 h, with a 34.2% mutation rate. Flow cytometry and chromosome observation of the stem apex growing point of P. tatarinowii Maxim. proved that tetraploid plants were successfully obtained, with chromosome number 2n=4x=36. [Conclusion] The result not only fills the blank of polyploid breeding of P. tatarinowii, but also provides an effective way to broaden the methods for cultivating fast-growing, high-quality, disease-resistant new varieties of Pteroceltis.
Quantum theory allows for absolute maximal contextuality
Amaral, Barbara; Cunha, Marcelo Terra; Cabello, Adán
2015-12-01
Contextuality is a fundamental feature of quantum theory and a necessary resource for quantum computation and communication. It is therefore important to investigate how large contextuality can be in quantum theory. Linear contextuality witnesses can be expressed as a sum S of n probabilities, and the independence number α and the Tsirelson-like number ϑ of the corresponding exclusivity graph are, respectively, the maximum of S for noncontextual theories and for the theory under consideration. A theory allows for absolute maximal contextuality if it has scenarios in which ϑ /α approaches n . Here we show that quantum theory allows for absolute maximal contextuality despite what is suggested by the examination of the quantum violations of Bell and noncontextuality inequalities considered in the past. Our proof is not constructive and does not single out explicit scenarios. Nevertheless, we identify scenarios in which quantum theory allows for almost-absolute-maximal contextuality.
The maximal process of nonlinear shot noise
Eliazar, Iddo; Klafter, Joseph
2009-05-01
In the nonlinear shot noise system model, shots' statistics are governed by general Poisson processes, and shots' decay dynamics are governed by general nonlinear differential equations. In this research we consider a nonlinear shot noise system and explore the process tracking, along time, the system's maximal shot magnitude. This 'maximal process' is a stationary Markov process following a decay-surge evolution; it is highly robust, and it is capable of displaying both a wide spectrum of statistical behaviors and a rich variety of random decay-surge sample-path trajectories. A comprehensive analysis of the maximal process is conducted, including its Markovian structure, its decay-surge structure, and its correlation structure. All results are obtained analytically and in closed form.
Energy Band Calculations for Maximally Even Superlattices
Krantz, Richard; Byrd, Jason
2007-03-01
Superlattices are multiple-well, semiconductor heterostructures that can be described by one-dimensional potential wells separated by potential barriers. We refer to a distribution of wells and barriers based on the theory of maximally even sets as a maximally even superlattice. The prototypical example of a maximally even set is the distribution of white and black keys on a piano keyboard. Black keys may represent wells and the white keys represent barriers. As the number of wells and barriers increase, efficient and stable methods of calculation are necessary to study these structures. We have implemented a finite-element method using the discrete variable representation (FE-DVR) to calculate E versus k for these superlattices. Use of the FE-DVR method greatly reduces the amount of calculation necessary for the eigenvalue problem.
Land-cover classification with an expert classification algorithm using digital aerial photographs
José L. de la Cruz
2010-05-01
The purpose of this study was to evaluate the usefulness of the spectral information of digital aerial sensors in determining land-cover classification using new digital techniques. The land covers evaluated are the following: (1) bare soil; (2) cereals, including maize (Zea mays L.), oats (Avena sativa L.), rye (Secale cereale L.), wheat (Triticum aestivum L.) and barley (Hordeum vulgare L.); (3) high-protein crops, such as peas (Pisum sativum L.) and beans (Vicia faba L.); (4) alfalfa (Medicago sativa L.); (5) woodlands and scrublands, including holly oak (Quercus ilex L.) and common retama (Retama sphaerocarpa L.); (6) urban soil; (7) olive groves (Olea europaea L.); and (8) burnt crop stubble. The best result was obtained using an expert classification algorithm, achieving a reliability rate of 95%. This result showed that the images of digital airborne sensors hold considerable promise for the future in the field of digital classification, because these images contain valuable information that takes advantage of the geometric viewpoint. Moreover, new classification techniques reduce the problems encountered with high-resolution images, while achieving reliabilities better than those achieved with traditional methods.
Alam, Muhammad Mahbub; Hamid, Md. Abdul; Razzaque, Md. Abdur; Hong, Choong Seon
Broadband wireless access networks are promising technology for providing better end user services. For such networks, designing a scheduling algorithm that fairly allocates the available bandwidth to the end users and maximizes the overall network throughput is a challenging task. In this paper, we develop a centralized fair scheduling algorithm for IEEE 802.16 mesh networks that exploits the spatio-temporal bandwidth reuse to further enhance the network throughput. The proposed mechanism reduces the length of a transmission round by increasing the number of non-contending links that can be scheduled simultaneously. We also propose a greedy algorithm that runs in polynomial time. Performance of the proposed algorithms is evaluated by extensive simulations. Results show that our algorithms achieve higher throughput than that of the existing ones and reduce the computational complexity.
Alexandre Tachard Passos
2013-01-01
Abstract: In natural language processing, and in machine learning in general, the use of probabilistic graphical models is common. Although these models are very convenient, making it possible to express complex relations among the several variables one wishes to predict given a sentence or a document, common learning and prediction algorithms using these models are often inefficient. For this reason, recent work has explored the use of ...
Absence of parasympathetic reactivation after maximal exercise.
de Oliveira, Tiago Peçanha; de Alvarenga Mattos, Raphael; da Silva, Rhenan Bartels Ferreira; Rezende, Rafael Andrade; de Lima, Jorge Roberto Perrout
2013-03-01
The ability of the human organism to recover its autonomic balance soon after physical exercise cessation has an important impact on the individual's health status. Although the dynamics of heart rate recovery after maximal exercise has been studied, little is known about heart rate variability after this type of exercise. The aim of this study is to analyse the dynamics of heart rate and heart rate variability recovery after maximal exercise in healthy young men. Fifteen healthy male subjects (21.7 ± 3.4 years; 24.0 ± 2.1 kg m^-2) participated in the study. The experimental protocol consisted of an incremental maximal exercise test on a cycle ergometer, until maximal voluntary exhaustion. After the test, recovery R-R intervals were recorded for 5 min. From the absolute differences between peak heart rate values and the heart rate values at 1 and 5 min of the recovery, the heart rate recovery was calculated. Postexercise heart rate variability was analysed from calculations of the SDNN and RMSSD indexes, in 30-s windows (SDNN(30s) and RMSSD(30s)) throughout recovery. One and 5 min after maximal exercise cessation, the heart rate recovered 34.7 (±6.6) and 75.5 (±6.1) bpm, respectively. With regard to HRV recovery, while the SDNN(30s) index had a slight increase, the RMSSD(30s) index remained totally suppressed throughout the recovery, suggesting an absence of vagal modulation reactivation and, possibly, a discrete sympathetic withdrawal. Therefore, it is possible that the main mechanism associated with the fall of HR after maximal exercise is sympathetic withdrawal or a vagal tone restoration without vagal modulation recovery. © 2012 The Authors Clinical Physiology and Functional Imaging © 2012 Scandinavian Society of Clinical Physiology and Nuclear Medicine.
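The indices used in this study are simple to compute from an R-R interval series. A minimal sketch follows; the interval values are invented, and SDNN here uses the population form of the standard deviation:

```python
import math

def sdnn(rr_ms):
    """Standard deviation of R-R intervals (SDNN), in ms (population form)."""
    m = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((x - m) ** 2 for x in rr_ms) / len(rr_ms))

def rmssd(rr_ms):
    """Root mean square of successive R-R differences (RMSSD), in ms."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d ** 2 for d in diffs) / len(diffs))

def heart_rate_recovery(peak_hr, hr_at_t):
    """Absolute HR recovery: peak HR minus HR at a given recovery time (bpm)."""
    return peak_hr - hr_at_t

# Hypothetical 30-s window of R-R intervals (ms) during recovery
rr = [820, 810, 830, 805, 825, 815]
```

RMSSD reflects beat-to-beat (vagally mediated) variability, while SDNN captures overall variability in the window, which is why the two indices can diverge during recovery as the abstract describes.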
A Novel Robust Scene Change Detection Algorithm for Autonomous Robots Using Mixtures of Gaussians
Luis J. Manso
2014-02-01
Interest in change detection techniques has considerably increased during recent years in the field of autonomous robotics. This is partly because changes in a robot's working environment are useful for several robotic skills (e.g., spatial cognition, modelling or navigation) and applications (e.g., surveillance or guidance robots). Changes are usually detected by comparing current data provided by the robot's sensors with a previously known map or model of the environment. When the data consist of a large point cloud, dealing with it is a computationally expensive task, mainly due to the number of points and the redundancy. Using Gaussian Mixture Models (GMMs) instead of raw point clouds leads to a more compact feature space that can be used to efficiently process the input data. This allows us to successfully segment the set of 3D points acquired by the sensor and reduce the computational load of the change detection algorithm. However, the segmentation of the environment as a mixture of Gaussians has some problems that need to be properly addressed. In this paper, a novel change detection algorithm is described in order to improve the robustness and computational cost of previous approaches. The proposal is based on the classic Expectation Maximization (EM) algorithm, for which different selection criteria are evaluated. As demonstrated in the experimental results section, the proposed change detection algorithm achieves the detection of changes in the robot's working environment faster and more accurately than similar approaches.
Generalized linear model for mapping discrete trait loci implemented with LASSO algorithm.
Jun Xing
The generalized estimating equation (GEE) algorithm under a heterogeneous residual variance model is an extension of the iteratively reweighted least squares (IRLS) method for continuous traits to discrete traits. In contrast to the mixture model-based expectation-maximization (EM) algorithm, the GEE algorithm can detect quantitative trait loci (QTLs) well, especially large-effect QTLs located in large marker intervals, at high computing speed. Based on a single-QTL model, however, the GEE algorithm has very limited statistical power to detect multiple QTLs because it ignores other linked QTLs. In this study, the fast least absolute shrinkage and selection operator (LASSO) is derived for the generalized linear model (GLM) with all possible link functions. Under a heterogeneous residual variance model, the LASSO for GLM is used to iteratively estimate the non-zero genetic effects of loci over the entire genome. The iteratively reweighted LASSO is thereby extended to mapping QTLs for discrete traits, such as ordinal, binary, and Poisson traits. Simulated and real data analyses are conducted to demonstrate the efficiency of the proposed method in simultaneously identifying multiple QTLs for binary and Poisson traits as examples.
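The flavour of L1-penalised GLM estimation can be conveyed with a small sketch: proximal-gradient (ISTA) logistic regression with a soft-threshold step, standing in for the iteratively reweighted LASSO described above. The data, penalty, and step size are all illustrative:

```python
import numpy as np

def l1_logistic(X, y, lam=0.05, step=0.1, n_iters=2000):
    """Proximal-gradient (ISTA) sketch of L1-penalised logistic regression:
    a gradient step on the logistic loss followed by soft-thresholding,
    which drives small coefficients exactly to zero."""
    w = np.zeros(X.shape[1])
    for _ in range(n_iters):
        p = 1.0 / (1.0 + np.exp(-X @ w))   # predicted probabilities
        grad = X.T @ (p - y) / len(y)      # gradient of mean logistic loss
        w -= step * grad
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)
    return w

# Synthetic binary trait driven by two of five loci (sparse effects)
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))
true_w = np.array([2.0, 0.0, 0.0, -1.5, 0.0])
y = (rng.uniform(size=200) < 1.0 / (1.0 + np.exp(-X @ true_w))).astype(float)
w_hat = l1_logistic(X, y)
```

The shrinkage step is what lets a single fit screen all loci at once: effects at unlinked positions collapse to zero while the genuinely associated loci retain non-zero estimates.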
Maximizing band gaps in plate structures
Halkjær, Søren; Sigmund, Ole; Jensen, Jakob Søndergaard
2006-01-01
Band gaps, i.e., frequency ranges in which waves cannot propagate, can be found in elastic structures for which there is a certain periodic modulation of the material properties or structure. In this paper, we maximize the band gap size for bending waves in a Mindlin plate. We analyze an infinite periodic plate using Bloch theory, which conveniently reduces the maximization problem to that of a single base cell. Secondly, we construct a finite periodic plate using a number of the optimized base cells in a postprocessed version. The dynamic properties of the finite plate are investigated...
Maximal and Minimal Congruences on Some Semigroups
Jintana SANWONG; Boorapa SINGHA; R.P.SULLIVAN
2009-01-01
In 2006, Sanwong and Sullivan described the maximal congruences on the semigroup N consisting of all non-negative integers under standard multiplication, and on the semigroup T(X) consisting of all total transformations of an infinite set X under composition. Here, we determine all maximal congruences on the semigroup Zn under multiplication modulo n. And, when Y ⊆ X, we do the same for the semigroup T(X,Y) consisting of all elements of T(X) whose range is contained in Y. We also characterise the minimal congruences on T(X,Y).
Maximizing oil yields may not optimize economics
1987-03-01
The Los Alamos National Laboratory has used the ASPEN computer code to calculate the economics of different hydroretorting conditions. When the oil yield was maximized and an oil shale plant was designed around this process, the costs turned out much higher than expected. However, calculations based on runs at less than maximum yields showed lower cost estimates. It is recommended that future efforts be concentrated on minimizing production costs rather than maximizing yields. An oil shale plant has been designed around minimum production cost, but it has not yet been tested experimentally.
Maximal Inequalities for Dependent Random Variables
Hoffmann-Jorgensen, Jorgen
2016-01-01
Maximal inequalities play a crucial role in many probabilistic limit theorems; for instance, the law of large numbers, the law of the iterated logarithm, the martingale limit theorem and the central limit theorem. Let X_1, X_2, ... be random variables with partial sums S_k = X_1 + ... + X_k.
Singularity Structure of Maximally Supersymmetric Scattering Amplitudes
Arkani-Hamed, Nima; Bourjaily, Jacob L.; Cachazo, Freddy
2014-01-01
We present evidence that loop amplitudes in maximally supersymmetric (N=4) Yang-Mills theory (SYM) beyond the planar limit share some of the remarkable structures of the planar theory. In particular, we show that through two loops, the four-particle amplitude in full N=4 SYM has only logarithmic singularities and is free of any poles at infinity, properties closely related to uniform transcendentality and the UV finiteness of the theory. We also briefly comment on implications for maximal (N=8) supergravity theory (SUGRA).
Designing lattice structures with maximal nearest-neighbor entanglement
Navarro-Munoz, J C; Lopez-Sandoval, R [Instituto Potosino de Investigacion CientIfica y Tecnologica, Camino a la presa San Jose 2055, 78216 San Luis Potosi (Mexico); Garcia, M E [Theoretische Physik, FB 18, Universitaet Kassel and Center for Interdisciplinary Nanostructure Science and Technology (CINSaT), Heinrich-Plett-Str.40, 34132 Kassel (Germany)
2009-08-07
In this paper, we study the numerical optimization of nearest-neighbor concurrence of bipartite one- and two-dimensional lattices, as well as non-bipartite two-dimensional lattices. These systems are described in the framework of a tight-binding Hamiltonian while the optimization of concurrence was performed using genetic algorithms. Our results show that the concurrence of the optimized lattice structures is considerably higher than that of non-optimized systems. In the case of one-dimensional chains, the concurrence increases dramatically when the system begins to dimerize, i.e., it undergoes a structural phase transition (Peierls distortion). This result is consistent with the idea that entanglement is maximal or shows a singularity near quantum phase transitions. Moreover, the optimization of concurrence in two-dimensional bipartite and non-bipartite lattices is achieved when the structures break into smaller subsystems, which are arranged in geometrically distinguishable configurations.
Maximizing sparse matrix vector product performance in MIMD computers
McLay, R.T.; Kohli, H.S.; Swift, S.L.; Carey, G.F.
1994-12-31
A considerable component of the computational effort involved in conjugate gradient solution of structured sparse matrix systems is expended during the matrix-vector product (MVP), and hence it is the focus of most efforts at improving performance. Such efforts are hindered on MIMD machines by constraints on memory, cache and the speed of memory-CPU data transfer. This paper describes a strategy for maximizing the performance of the local computations associated with the MVP. The method focuses on single-stride memory access and the efficient use of cache by pre-loading it with data that are re-used while bypassing it for other data. The algorithm is designed to behave optimally for varying grid sizes and numbers of unknowns per gridpoint. Results from an assembly language implementation of the strategy on the iPSC/860 show a significant improvement over the performance using FORTRAN.
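The local computation the entry above optimizes, a sparse matrix-vector product, can be sketched in compressed sparse row (CSR) form; the inner loop below reads `values` with unit stride, the access pattern the paper targets. A plain-Python illustration, not the paper's assembly implementation:

```python
def csr_matvec(values, col_idx, row_ptr, x):
    """y = A @ x for A stored in CSR form: `values` holds the nonzeros
    row by row, `col_idx` their column indices, and `row_ptr[i]` the
    offset where row i begins. The inner loop is a unit-stride sweep."""
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        acc = 0.0
        for j in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[j] * x[col_idx[j]]
        y[i] = acc
    return y

# A = [[2, 0, 1],
#      [0, 3, 0],
#      [4, 0, 5]]
values  = [2.0, 1.0, 3.0, 4.0, 5.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
y = csr_matvec(values, col_idx, row_ptr, [1.0, 1.0, 1.0])  # [3.0, 3.0, 9.0]
```

In a conjugate gradient solve this product dominates the per-iteration cost, which is why cache behavior of exactly this loop matters.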
Community Detection via Maximization of Modularity and Its Variants
Chen, Mingming; Szymanski, Boleslaw K
2015-01-01
In this paper, we first discuss the definition of modularity (Q) used as a metric for community quality, and then we review the modularity maximization approaches which were used for community detection in the last decade. Then, we discuss two opposite yet coexisting problems of modularity optimization: in some cases it tends to favor small communities over large ones, while in others, large communities over small ones (the so-called resolution limit problem). Next, we overview several community quality metrics proposed to solve the resolution limit problem and discuss Modularity Density (Qds), which simultaneously avoids the two problems of modularity. Finally, we introduce two novel fine-tuned community detection algorithms that iteratively attempt to improve the community quality measurements by splitting and merging the given network community structure. The first of them, referred to as Fine-tuned Q, is based on modularity (Q), while the second one is based on Modularity Density (Qds) and denoted as Fine-tuned Qds.
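The quantity Q reviewed above can be computed directly from its community-level form, Q = Σ_c [e_c/m − (d_c/2m)²], where e_c counts intra-community edges, d_c sums the degrees in community c, and m is the total edge count. A minimal sketch on a toy graph (not taken from the paper):

```python
def modularity(edges, community):
    """Newman-Girvan modularity Q of a partition. `edges` is a list of
    undirected (u, v) pairs; `community` maps node -> community label."""
    m = len(edges)
    intra = {}   # e_c: edges with both endpoints inside community c
    deg = {}     # d_c: summed degree of the nodes in community c
    for u, v in edges:
        cu, cv = community[u], community[v]
        deg[cu] = deg.get(cu, 0) + 1
        deg[cv] = deg.get(cv, 0) + 1
        if cu == cv:
            intra[cu] = intra.get(cu, 0) + 1
    return sum(intra.get(c, 0) / m - (deg[c] / (2 * m)) ** 2 for c in deg)

# Two triangles joined by a single bridge edge; the natural split into
# the two triangles should score positively.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
q = modularity(edges, {0: 'a', 1: 'a', 2: 'a', 3: 'b', 4: 'b', 5: 'b'})
# q = 2 * (3/7 - (7/14)**2) = 5/14
```

Modularity maximization searches over partitions for the one maximizing this score, which is NP-hard in general, hence the heuristics and approximation results the entries here discuss.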
Pareto optimization of an industrial ecosystem: sustainability maximization
J. G. M.-S. Monteiro
2010-09-01
This work investigates a procedure to design an Industrial Ecosystem for sequestrating CO2 and consuming glycerol in a Chemical Complex with 15 integrated processes. The Complex is responsible for the production of methanol, ethylene oxide, ammonia, urea, dimethyl carbonate, ethylene glycol, glycerol carbonate, β-carotene, 1,2-propanediol and olefins, and is simulated using UNISIM Design (Honeywell). The process environmental impact (EI) is calculated using the Waste Reduction Algorithm, while profit (P) is estimated using classic cost correlations. MATLAB (The MathWorks Inc.) is connected to UNISIM to enable optimization. The objective is granting maximum process sustainability, which involves finding a compromise between high profitability and low environmental impact. Sustainability maximization is therefore understood as a multi-criteria optimization problem, addressed by means of the Pareto optimization methodology for trading off P vs. EI.
Expectation Maximization for Hard X-ray Count Modulation Profiles
Benvenuto, Federico; Piana, Michele; Massone, Anna Maria
2013-01-01
This paper is concerned with the image reconstruction problem when the measured data are solar hard X-ray modulation profiles obtained from the Reuven Ramaty High Energy Solar Spectroscopic Imager (RHESSI) instrument. Our goal is to demonstrate that a statistical iterative method classically applied to the image deconvolution problem is very effective when utilized for the analysis of count modulation profiles in solar hard X-ray imaging based on Rotating Modulation Collimators. The algorithm described in this paper solves the maximum likelihood problem iteratively, encoding a positivity constraint into the iterative optimization scheme. The result is therefore a classical Expectation Maximization method, this time applied not to an image deconvolution problem but to image reconstruction from count modulation profiles. The technical reason that makes our implementation particularly effective in this application is the use of a very reliable stopping rule which is able to regularize the solution providing, ...
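The multiplicative ML-EM update with its built-in positivity constraint, which this family of methods shares, can be sketched on a toy dense system. This is the generic Richardson-Lucy-style scheme for Poisson likelihoods, not the RHESSI pipeline itself:

```python
def ml_em(A, y, iters=500):
    """Multiplicative ML-EM iteration for y ≈ A x with x >= 0.
    A is a non-negative dense matrix given as a list of rows; starting
    from a positive guess, every iterate stays positive by construction."""
    m, n = len(A), len(A[0])
    sens = [sum(A[i][j] for i in range(m)) for j in range(n)]  # column sums
    x = [1.0] * n                                              # positive start
    for _ in range(iters):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(m)]
        ratio = [y[i] / Ax[i] for i in range(m)]               # data / model
        x = [x[j] / sens[j] * sum(A[i][j] * ratio[i] for i in range(m))
             for j in range(n)]
    return x

A = [[0.8, 0.2],
     [0.3, 0.7]]
y = [0.8 * 2.0 + 0.2 * 5.0, 0.3 * 2.0 + 0.7 * 5.0]  # noiseless data for x = [2, 5]
x = ml_em(A, y)   # converges toward [2.0, 5.0]
```

With noisy data the iterates eventually fit the noise, which is why a reliable stopping rule, as the entry stresses, acts as the regularizer.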
How to Maximize the Likelihood Function for a DSGE Model
Andreasen, Martin Møller
This paper extends two optimization routines to deal with objective functions for DSGE models. The optimization routines are i) a version of Simulated Annealing developed by Corana, Marchesi & Ridella (1987), and ii) the evolutionary algorithm CMA-ES developed by Hansen, Müller & Koumoutsakos (2003). Following these extensions, we examine the ability of the two routines to maximize the likelihood function for a sequence of test economies. Our results show that the CMA-ES routine clearly outperforms Simulated Annealing in its ability to find the global optimum and in efficiency. With 10 unknown structural parameters in the likelihood function, the CMA-ES routine finds the global optimum in 95% of our test economies compared to 89% for Simulated Annealing. When the number of unknown structural parameters in the likelihood function increases to 20 and 35, then the CMA-ES routine finds the global...
Dopaminergic balance between reward maximization and policy complexity
Naama eParush
2011-05-01
Previous reinforcement-learning models of the basal ganglia network have highlighted the role of dopamine in encoding the mismatch between prediction and reality. Far less attention has been paid to the computational goals and algorithms of the main axis (actor). Here, we construct a top-down model of the basal ganglia with emphasis on the role of dopamine as both a reinforcement learning signal and a pseudo-temperature signal controlling the general level of basal ganglia excitability and the motor vigilance of the acting agent. We argue that the basal ganglia endow the thalamic-cortical networks with the optimal dynamic tradeoff between two constraints: minimizing the policy complexity (cost) and maximizing the expected future reward (gain). We show that this multi-dimensional optimization process results in an experience-modulated version of the softmax behavioral policy. Thus, as in classical softmax behavioral policies, probabilities of actions are selected according to their estimated values and the pseudo-temperature, but in addition they also vary according to the frequency of previous choices of these actions. We conclude that the computational goal of the basal ganglia is not to maximize cumulative (positive and negative) reward. Rather, the basal ganglia aim at optimization of independent gain and cost functions. Unlike previously suggested single-variable maximization processes, this multi-dimensional optimization process leads naturally to a softmax-like behavioral policy. We suggest that beyond its role in the modulation of the efficacy of the cortico-striatal synapses, dopamine directly affects striatal excitability and thus provides a pseudo-temperature signal that modulates the trade-off between gain and cost. The resulting experience- and dopamine-modulated softmax policy can then serve as a theoretical framework to account for the broad range of behaviors and clinical states governed by the basal ganglia and dopamine systems.
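The pseudo-temperature idea above has a standard concrete form: a softmax over action values whose temperature parameter trades exploitation against exploration. A generic illustration of that building block, not the paper's full experience-modulated policy:

```python
import math

def softmax_policy(values, temperature):
    """Softmax action selection. High temperature flattens the
    distribution (exploration); low temperature concentrates mass on
    the highest-valued action (exploitation)."""
    z = [v / temperature for v in values]
    m = max(z)                         # subtract max for numerical stability
    exps = [math.exp(v - m) for v in z]
    s = sum(exps)
    return [e / s for e in exps]

p_cold = softmax_policy([1.0, 2.0, 3.0], temperature=0.1)   # nearly greedy
p_hot  = softmax_policy([1.0, 2.0, 3.0], temperature=10.0)  # nearly uniform
```

In the paper's framing, dopamine level plays the role of this temperature knob, modulating striatal excitability and hence the gain/cost trade-off.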
Maximizing Information Diffusion in the Cyber-physical Integrated Network
Hongliang Lu
2015-11-01
Nowadays, our living environment has been embedded with smart objects, such as smart sensors, smart watches and smart phones. They make cyberspace and physical space integrated through their abundant abilities of sensing, communication and computation, forming a cyber-physical integrated network. In order to maximize information diffusion in such a network, a group of objects are selected as the forwarding points. To optimize the selection, a minimum connected dominating set (CDS) strategy is adopted. However, existing approaches focus on minimizing the size of the CDS, neglecting an important factor: the weight of links. In this paper, we propose a distributed maximizing the probability of information diffusion (DMPID) algorithm in the cyber-physical integrated network. Unlike previous approaches that only consider the size of the CDS selection, DMPID also considers the information spread probability that depends on the weight of links. To weaken the effects of excessively-weighted links, we also present an optimization strategy that can properly balance the two factors. The results of extensive simulation show that DMPID can nearly double the information diffusion probability, while keeping a reasonable size of selection with low overhead in different distributed networks.
Cycle-maximal triangle-free graphs
Durocher, Stephane; Gunderson, David S.; Li, Pak Ching;
2015-01-01
We conjecture that the balanced complete bipartite graph $K_{\lfloor n/2 \rfloor, \lceil n/2 \rceil}$ contains more cycles than any other $n$-vertex triangle-free graph, and we make some progress toward proving this. We give equivalent conditions for cycle-maximal triangle-free graphs; show bounds...
Gradient dynamics and entropy production maximization
Janečka, Adam
2016-01-01
Gradient dynamics describes irreversible evolution by means of a dissipation potential, which leads to several advantageous features such as Maxwell-Onsager relations, a distinction between thermodynamic forces and fluxes, and a geometrical interpretation of the dynamics. Entropy production maximization is a powerful tool for predicting constitutive relations in engineering. In this paper, both approaches are compared and their shortcomings and advantages are discussed.
Robust Utility Maximization Under Convex Portfolio Constraints
Matoussi, Anis, E-mail: anis.matoussi@univ-lemans.fr [Université du Maine, Risk and Insurance institut of Le Mans Laboratoire Manceau de Mathématiques (France); Mezghani, Hanen, E-mail: hanen.mezghani@lamsin.rnu.tn; Mnif, Mohamed, E-mail: mohamed.mnif@enit.rnu.tn [University of Tunis El Manar, Laboratoire de Modélisation Mathématique et Numérique dans les Sciences de l’Ingénieur, ENIT (Tunisia)
2015-04-15
We study a robust maximization problem of terminal wealth and consumption under convex constraints on the portfolio. We state the existence and uniqueness of the consumption-investment strategy by studying the associated quadratic backward stochastic differential equation. We characterize the optimal control by using the duality method and deriving a dynamic maximum principle.
Maximizing the Motivated Mind for Emergent Giftedness.
Rea, Dan
2001-01-01
This article explains how the theory of the motivated mind conceptualizes the productive interaction of intelligence, creativity, and achievement motivation and how this theory can help educators to maximize students' emergent potential for giftedness. It discusses the integration of cold-order thinking and hot-chaotic thinking into fluid-adaptive…
The Winning Edge: Maximizing Success in College.
Schmitt, David E.
This book offers college students ideas on how to maximize their success in college by examining the personal management techniques a student needs to succeed. Chapters are as follows: "Getting and Staying Motivated"; "Setting Goals and Tapping Your Resources"; "Conquering Time"; "Think Yourself to College Success"; "Understanding and Remembering…
MAXIMAL ELEMENTS AND EQUILIBRIUM OF ABSTRACT ECONOMY
刘心歌; 蔡海涛
2001-01-01
An existence theorem of maximal elements for a new type of preference correspondences which are Qθ-majorized is given. Then some existence theorems of equilibrium for abstract economy and qualitative game in which the constraint or preference correspondences are Qθ-majorized are obtained in locally convex topological vector spaces.
Maximal workload capacity on moving platforms
Heus, R.; Wertheim, A.H.
1996-01-01
Physical tasks on a moving platform required more energy than the same tasks on a non-moving platform. In this study the maximum aerobic performance (defined as V_O2max) of people working on a moving floor was established compared to the maximal aerobic performance on a non-moving floor. The main
Maximizing Resource Utilization in Video Streaming Systems
Alsmirat, Mohammad Abdullah
2013-01-01
Video streaming has recently grown dramatically in popularity over the Internet, Cable TV, and wire-less networks. Because of the resource demanding nature of video streaming applications, maximizing resource utilization in any video streaming system is a key factor to increase the scalability and decrease the cost of the system. Resources to…
Maximizing throughput in an automated test system
朱君
2007-01-01
This guide is a collection of whitepapers designed to help you develop test systems that lower your cost, increase your test throughput, and can scale with future requirements. This whitepaper provides strategies for maximizing system throughput. To download the complete developers guide (120 pages), visit ni.com/automatedtest.
The gaugings of maximal D=6 supergravity
Bergshoeff, E.; Samtleben, H.; Sezgin, E.
2008-01-01
We construct the most general gaugings of the maximal D = 6 supergravity. The theory is (2, 2) supersymmetric, and possesses an on-shell SO(5, 5) duality symmetry which plays a key role in determining its couplings. The field content includes 16 vector fields that carry a chiral spinor representation...
WEIGHTED BOUNDEDNESS OF A ROUGH MAXIMAL OPERATOR
Anonymous
2000-01-01
In this note the authors give the weighted Lp-boundedness for a class of maximal singular integral operators with rough kernel. The result in this note is an improvement and extension of the result obtained by Chen and Lin in 1990.
Maximizing the Range of a Projectile.
Brown, Ronald A.
1992-01-01
Discusses solutions to the problem of maximizing the range of a projectile. Presents three references that solve the problem with and without the use of calculus. Offers a fourth solution suitable for introductory physics courses that relies more on trigonometry and the geometry of the problem. (MDH)
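The textbook result behind the entry above: for launch speed v0 on flat ground with no air resistance, the range is R(θ) = v0² sin(2θ)/g, maximized at θ = 45° where sin(2θ) = 1. A quick numerical check:

```python
import math

def projectile_range(v0, theta_deg, g=9.81):
    """Range of a projectile launched and landing at the same height,
    neglecting drag: R = v0^2 * sin(2*theta) / g."""
    return v0 ** 2 * math.sin(2 * math.radians(theta_deg)) / g

# Scan integer launch angles; the maximum sits at 45 degrees.
best = max(range(0, 91), key=lambda a: projectile_range(20.0, a))
```

The geometric/trigonometric solutions the article surveys arrive at the same 45° answer without differentiating R(θ).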
Ehrenfest's Lottery--Time and Entropy Maximization
Ashbaugh, Henry S.
2010-01-01
Successful teaching of the Second Law of Thermodynamics suffers from limited simple examples linking equilibrium to entropy maximization. I describe a thought experiment connecting entropy to a lottery that mixes marbles amongst a collection of urns. This mixing obeys diffusion-like dynamics. Equilibrium is achieved when the marble distribution is…
Testing maximality in muon neutrino flavor mixing
Choubey, Sandhya; Roy, Probir
2003-01-01
The small difference between the survival probabilities of muon neutrino and antineutrino beams, traveling through earth matter in a long baseline experiment such as MINOS, is shown to be an important measure of any possible deviation from maximality in the flavor mixing of those states.
Average utility maximization: A preference foundation
A.V. Kothiyal (Amit); V. Spinu (Vitalie); P.P. Wakker (Peter)
2014-01-01
This paper provides necessary and sufficient preference conditions for average utility maximization over sequences of variable length. We obtain full generality by using a new algebraic technique that exploits the rich structure naturally provided by the variable length of the sequences.
On the Hardy-Littlewood maximal theorem
Shinji Yamashita
1982-01-01
The Hardy-Littlewood maximal theorem is extended to functions of class PL in the sense of E. F. Beckenbach and T. Radó, with a more precise expression of the absolute constant in the inequality. As applications we deduce some results on hyperbolic Hardy classes in terms of the non-Euclidean hyperbolic distance in the unit disk.
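For context, the classical statement the entry extends, in standard notation: the Hardy-Littlewood maximal operator averages |f| over balls centered at x, and the maximal theorem bounds it on L^p:

```latex
Mf(x) = \sup_{r>0} \frac{1}{|B(x,r)|} \int_{B(x,r)} |f(y)|\, dy,
\qquad
\|Mf\|_{L^p} \le C_p \|f\|_{L^p} \quad (1 < p \le \infty),
```

together with a weak-type $(1,1)$ bound at the endpoint $p=1$; the entry's contribution concerns a sharper form of the constant $C_p$ in the PL setting.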
Maximal Cartel Pricing and Leniency Programs
Houba, H.E.D.; Motchenkova, E.; Wen, Q.
2008-01-01
For a general class of oligopoly models with price competition, we analyze the impact of ex-ante leniency programs in antitrust regulation on the endogenous maximal-sustainable cartel price. This impact depends upon industry characteristics including its cartel culture. Our analysis disentangles the
How to Generate Good Profit Maximization Problems
Davis, Lewis
2014-01-01
In this article, the author considers the merits of two classes of profit maximization problems: those involving perfectly competitive firms with quadratic and cubic cost functions. While relatively easy to develop and solve, problems based on quadratic cost functions are too simple to address a number of important issues, such as the use of…
Maximally entangled mixed states made easy
Aiello, A; Voigt, D; Woerdman, J P
2006-01-01
We show that, contrary to a recent claim [M. Ziman and V. Bužek, Phys. Rev. A 72, 052325 (2005)], it is possible to achieve maximally entangled mixed states of two qubits from the singlet state via the action of local nonunital quantum channels. Moreover, we present a simple, feasible linear optical implementation of one such channel.
Maximizing scientific knowledge from randomized clinical trials
Gustafsson, Finn; Atar, Dan; Pitt, Bertram
2010-01-01
Trialists have an ethical and financial responsibility to plan and conduct clinical trials in a manner that will maximize the scientific knowledge gained from the trial. However, the amount of scientific information generated by randomized clinical trials in cardiovascular medicine is highly...
Maximal Heat Generation in Nanoscale Systems
ZHOU Li-Ling; LI Shu-Shen; ZENG Zhao-Yang
2009-01-01
We investigate the heat generation in a nanoscale system coupled to normal leads and find that it is maximal when the average occupation of the electrons in the nanoscale system is 0.5, no matter what mechanism induces the heat generation.
Understanding violations of Gricean maxims in preschoolers and adults.
Okanda, Mako; Asada, Kosuke; Moriguchi, Yusuke; Itakura, Shoji
2015-01-01
This study used a revised Conversational Violations Test to examine Gricean maxim violations in 4- to 6-year-old Japanese children and adults. Participants' understanding of the following maxims was assessed: be informative (first maxim of quantity), avoid redundancy (second maxim of quantity), be truthful (maxim of quality), be relevant (maxim of relation), avoid ambiguity (second maxim of manner), and be polite (maxim of politeness). Sensitivity to violations of Gricean maxims increased with age: 4-year-olds' understanding of maxims was near chance, 5-year-olds understood some maxims (first maxim of quantity and maxims of quality, relation, and manner), and 6-year-olds and adults understood all maxims. Preschoolers acquired the maxim of relation first and had the greatest difficulty understanding the second maxim of quantity. Children and adults differed in their comprehension of the maxim of politeness. The development of the pragmatic understanding of Gricean maxims and implications for the construction of developmental tasks from early childhood to adulthood are discussed.
Linear scaling calculation of maximally localized Wannier functions with atomic basis set.
Xiang, H J; Li, Zhenyu; Liang, W Z; Yang, Jinlong; Hou, J G; Zhu, Qingshi
2006-06-21
We have developed a linear scaling algorithm for calculating maximally localized Wannier functions (MLWFs) using an atomic orbital basis. An O(N) ground state calculation is carried out to get the density matrix (DM). Through a projection of the DM onto atomic orbitals and a subsequent O(N) orthogonalization, we obtain initial orthogonal localized orbitals. These orbitals can be maximally localized in linear scaling by simple Jacobi sweeps. Our O(N) method is validated by applying it to a water molecule and wurtzite ZnO. The linear scaling behavior of the new method is demonstrated by computing the MLWFs of boron nitride nanotubes.
A data-driven model for maximization of methane production in a wastewater treatment plant.
Kusiak, Andrew; Wei, Xiupeng
2012-01-01
A data-driven approach for maximization of methane production in a wastewater treatment plant is presented. Industrial data collected on a daily basis was used to build the model. Temperature, total solids, volatile solids, detention time and pH value were selected as parameters for the model construction. First, a prediction model of methane production was built by a multi-layer perceptron neural network. Then a particle swarm optimization algorithm was used to maximize methane production based on the model developed in this research. The model resulted in a 5.5% increase in methane production.
Power maximization of a point absorber wave energy converter using improved model predictive control
Milani, Farideh; Moghaddam, Reihaneh Kardehi
2017-08-01
This paper considers controlling and maximizing the absorbed power of wave energy converters for irregular waves. Subject to the physical constraints of the system, model predictive control is applied. The irregular waves' behavior is predicted by a Kalman filter. Owing to the great influence of the controller parameters on the absorbed power, these parameters are optimized by an imperialist competitive algorithm. The results illustrate the method's efficiency in maximizing the extracted power in the presence of an unknown excitation force, which is predicted by the Kalman filter.
Automatic control algorithm effects on energy production
Mcnerney, G. M.
1981-01-01
A computer model was developed using actual wind time series and turbine performance data to simulate the power produced by the Sandia 17-m VAWT operating in automatic control. The model was used to investigate the influence of starting algorithms on annual energy production. The results indicate that, depending on turbine and local wind characteristics, a bad choice of a control algorithm can significantly reduce overall energy production. The model can be used to select control algorithms and threshold parameters that maximize long term energy production. The results from local site and turbine characteristics were generalized to obtain general guidelines for control algorithm design.
Karakatsanis, Nicolas A.; Casey, Michael E.; Lodge, Martin A.; Rahmim, Arman; Zaidi, Habib
2016-08-01
Whole-body (WB) dynamic PET has recently demonstrated its potential in translating the quantitative benefits of parametric imaging to the clinic. Post-reconstruction standard Patlak (sPatlak) WB graphical analysis utilizes multi-bed multi-pass PET acquisition to produce quantitative WB images of the tracer influx rate K_i as a complementary metric to the semi-quantitative standardized uptake value (SUV). The resulting K_i images may suffer from high noise due to the need for short acquisition frames. Meanwhile, a generalized Patlak (gPatlak) WB post-reconstruction method had been suggested to limit K_i bias of sPatlak analysis at regions with non-negligible 18F-FDG uptake reversibility; however, gPatlak analysis is non-linear and thus can further amplify noise. In the present study, we implemented, within the open-source software for tomographic image reconstruction platform, a clinically adoptable 4D WB reconstruction framework enabling efficient estimation of sPatlak and gPatlak images directly from dynamic multi-bed PET raw data with substantial noise reduction. Furthermore, we employed the optimization transfer methodology to accelerate 4D expectation-maximization (EM) convergence by nesting the fast image-based estimation of Patlak parameters within each iteration cycle of the slower projection-based estimation of dynamic PET images. The novel gPatlak 4D method was initialized from an optimized set of sPatlak ML-EM iterations to facilitate EM convergence. Initially, realistic simulations were conducted utilizing published 18F-FDG kinetic parameters coupled with the XCAT phantom. Quantitative analyses illustrated enhanced K_i target-to-background ratio (TBR) and especially contrast-to-noise ratio (CNR) performance for the 4D versus the indirect methods and static SUV. Furthermore, considerable convergence acceleration was observed for the nested algorithms involving 10-20 sub-iterations. Moreover, systematic reduction in K_i % bias and improved TBR were
ITAC volume assessment through a Gaussian hidden Markov random field model-based algorithm.
Passera, Katia M; Potepan, Paolo; Brambilla, Luca; Mainardi, Luca T
2008-01-01
In this paper, a semi-automatic segmentation method for volume assessment of intestinal-type adenocarcinoma (ITAC) is presented and validated. The method is based on a Gaussian hidden Markov random field (GHMRF) model that represents an advanced version of a finite Gaussian mixture (FGM) model, as it encodes spatial information through the mutual influences of neighboring sites. To fit the GHMRF model an expectation maximization (EM) algorithm is used. We applied the method to magnetic resonance data sets (each composed of T1-weighted, contrast-enhanced T1-weighted and T2-weighted images) for a total of 49 tumor-containing slices. We tested GHMRF performance with respect to FGM by both a numerical and a clinical evaluation. Results show that the proposed method has a higher accuracy in quantifying lesion area than FGM and it can be applied in the evaluation of tumor response to therapy.
A RELATIVE BENEFIT ALGORITHM FOR BASIC ECONOMIC LOT SIZE PROBLEM
马辉民; 张子刚; 周少甫; 黄卫来
2001-01-01
The paper develops an algorithm that solves the economic lot size problem in O(n²) time in the Wagner-Whitin case. The algorithm is based on the standard dynamic programming approach, which requires computing the maximal relative benefit for certain possible subplans of the production plan. The authors study the forward and decomposition properties of the problem, which simplify this computation. The proposed algorithm appears to perform quite reasonably in practical applications.
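The relative-benefit algorithm itself is not reproduced here, but the classical O(n²) dynamic program it builds on can be sketched as follows (a minimal illustration of the Wagner-Whitin recursion with constant setup and linear holding costs; the function name and cost model are my own, not the authors'):

```python
def wagner_whitin(demand, setup_cost, holding_cost):
    """O(n^2) dynamic program for the uncapacitated economic lot size problem.

    F[t] = minimal cost of covering the demand of periods 1..t.
    For each candidate production period j, the cost of one setup plus the
    holding cost of carrying each later period's demand is accumulated
    incrementally, which keeps the whole computation at O(n^2).
    """
    n = len(demand)
    INF = float("inf")
    F = [0.0] + [INF] * n          # F[0] = 0: no periods, no cost
    for j in range(1, n + 1):      # last production run starts in period j
        cost = setup_cost
        for t in range(j, n + 1):  # run covers periods j..t
            cost += holding_cost * (t - j) * demand[t - 1]
            F[t] = min(F[t], F[j - 1] + cost)
    return F[n]
```

For instance, with demands [10, 20, 30], setup cost 100, and unit holding cost 1, producing everything in period 1 (cost 100 + 20 + 60 = 180) beats splitting into multiple runs.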
Parametric image alignment using enhanced correlation coefficient maximization.
Evangelidis, Georgios D; Psarakis, Emmanouil Z
2008-10-01
In this work we propose the use of a modified version of the correlation coefficient as a performance criterion for the image alignment problem. The proposed modification has the desirable characteristic of being invariant with respect to photometric distortions. Since the resulting similarity measure is a nonlinear function of the warp parameters, we develop two iterative schemes for its maximization, one based on the forward additive approach and the second on the inverse compositional method. As is customary in iterative optimization, in each iteration the nonlinear objective function is approximated by an alternative expression for which the corresponding optimization is simple. In our case we propose an efficient approximation that leads to a closed-form solution (per iteration) of low computational complexity, the latter property being particularly pronounced in our inverse version. The proposed schemes are tested against the Forward Additive Lucas-Kanade and the Simultaneous Inverse Compositional (SIC) algorithms through simulations. Under noisy conditions and photometric distortions, our forward version achieves more accurate alignments and exhibits faster convergence, whereas our inverse version performs similarly to the SIC algorithm but at a lower computational complexity.
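The photometric invariance the abstract refers to comes from using the zero-mean, unit-norm correlation coefficient as the similarity measure. A minimal sketch of that measure (just the criterion, not the iterative warp estimation; the function name is my own):

```python
import numpy as np

def ecc(image, template):
    """Zero-mean normalized correlation coefficient between two images.

    Subtracting each image's mean and dividing by its norm makes the value
    invariant to additive (bias) and multiplicative (gain) photometric
    distortions -- the property the enhanced correlation coefficient
    criterion is built on.
    """
    a = image.ravel() - image.mean()
    b = template.ravel() - template.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
```

The measure equals 1 for any gain/bias transform of the same image (e.g. `ecc(x, 2*x + 5) == 1`); OpenCV's `findTransformECC` is a widely used implementation of alignment under this criterion.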
Automated quantum conductance calculations using maximally-localised Wannier functions
Shelley, Matthew; Mostofi, Arash A; Marzari, Nicola
2011-01-01
A robust, user-friendly, and automated method to determine quantum conductance in disordered quasi-one-dimensional systems is presented. The scheme relies upon an initial density-functional theory calculation in a specific geometry, after which the ground-state eigenfunctions are transformed to a maximally-localised Wannier function (MLWF) basis. In this basis, our novel algorithms manipulate and partition the Hamiltonian for the calculation of coherent electronic transport properties within the Landauer-Buttiker formalism. Furthermore, we describe how short-ranged Hamiltonians in the MLWF basis can be combined to build model Hamiltonians of large (>10,000 atom) disordered systems without loss of accuracy. These automated algorithms have been implemented in the Wannier90 code [Mostofi et al, Comput. Phys. Commun. 178, 685 (2008)], which is interfaced to a number of electronic structure codes such as Quantum-ESPRESSO, AbInit, Wien2k, SIESTA and FLEUR. We apply our methods to an Al atomic chain with a Na defect...
Torque Optimization Algorithm for SRM Drives Using a Robust Predictive Strategy
Argeseanu, Alin; Ritchie, Ewen; Leban, Krisztina Monika
2010-01-01
This paper presents a new torque optimization algorithm to maximize the torque generated by an SRM drive. The new algorithm uses a predictive strategy. The behaviour of the SRM demands a sequential algorithm. To preserve the advantages of SRM drives (simple and rugged topology) the new algorithm...
Measurable Maximal Energy and Minimal Time Interval
Dahab, Eiman Abou El
2014-01-01
The possibility of finding the measurable maximal energy and the minimal time interval is discussed in different quantum aspects. It is found that the linear generalized uncertainty principle (GUP) approach gives a non-physical result. Based on the large-scale Schwarzschild solution, the quadratic GUP approach is utilized. The calculations are performed at the shortest distance, at which general relativity is assumed to be a good approximation for quantum gravity, and at larger distances as well. It is found that both the maximal energy and the minimal time have the order of the Planck time. Then, the uncertainties in both quantities are accordingly bounded. Some physical insights are addressed. Also, the implications for the physics of the early Universe and for quantized mass are outlined. The results are related to the existence of a finite cosmological constant and a minimum mass (mass quanta).
Maximal temperature in a simple thermodynamical system
Dai, De-Chang
2016-01-01
Temperature in a simple thermodynamical system is not limited from above. It is also widely believed that it does not make sense to talk about temperatures higher than the Planck temperature in the absence of a full theory of quantum gravity. Here, we demonstrate that there exists a maximal achievable temperature in a system where particles obey the laws of quantum mechanics and classical gravity, before we reach the realm of quantum gravity. Namely, if two particles with a given center of mass energy come closer than the Schwarzschild diameter apart, according to classical gravity they will form a black hole. It is possible to calculate that a simple thermodynamical system will be dominated by black holes at a critical temperature which is about three times lower than the Planck temperature. That represents the maximal achievable temperature in a simple thermodynamical system.
Hamiltonian formalism and path entropy maximization
Davis, Sergio; González, Diego
2015-10-01
Maximization of the path information entropy is a clear prescription for constructing models in non-equilibrium statistical mechanics. Here it is shown that, following this prescription under the assumption of arbitrary instantaneous constraints on position and velocity, a Lagrangian emerges which determines the most probable trajectory. Deviations from the probability maximum can be consistently described as slices in time by a Hamiltonian, according to a nonlinear Langevin equation and its associated Fokker-Planck equation. The connections unveiled between the maximization of path entropy and the Langevin/Fokker-Planck equations imply that missing information about the phase space coordinate never decreases in time, a purely information-theoretical version of the second law of thermodynamics. All of these results are independent of any physical assumptions, and thus valid for any generalized coordinate as a function of time, or any other parameter. This reinforces the view that the second law is a fundamental property of plausible inference.
Nonlinear trading models through Sharpe Ratio maximization.
Choey, M; Weigend, A S
1997-08-01
While many trading strategies are based on price prediction, traders in financial markets are typically interested in optimizing risk-adjusted performance such as the Sharpe Ratio, rather than the price predictions themselves. This paper introduces an approach which generates a nonlinear strategy that explicitly maximizes the Sharpe Ratio. It is expressed as a neural network model whose output is the position size between a risky and a risk-free asset. The iterative parameter update rules are derived and compared to alternative approaches. The resulting trading strategy is evaluated and analyzed on both computer-generated data and real world data (DAX, the daily German equity index). Trading based on Sharpe Ratio maximization compares favorably to both profit optimization and probability matching (through cross-entropy optimization). The results show that the goal of optimizing out-of-sample risk-adjusted profit can indeed be achieved with this nonlinear approach.
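The objective the abstract optimizes is the Sharpe Ratio of the realized strategy returns rather than prediction error. A minimal sketch of that objective for a position-sized strategy (the neural network and its update rules from the paper are not reproduced; function name, the zero risk-free rate, and the omission of annualization are my own simplifications):

```python
import numpy as np

def sharpe_ratio(returns, positions, risk_free=0.0):
    """Sharpe Ratio of a strategy that holds positions[t] (fraction of
    wealth) in the risky asset at time t and the rest at the risk-free rate.

    Sharpe = mean(excess return) / std(excess return); annualization omitted.
    """
    strat = positions * returns + (1.0 - positions) * risk_free
    excess = strat - risk_free
    return float(excess.mean() / excess.std())
```

In the paper's setting, a model's parameters are adjusted by gradient ascent on this quantity, with `positions` produced by the network.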
Maximally Symmetric Spacetimes emerging from thermodynamic fluctuations
Bravetti, A; Quevedo, H
2015-01-01
In this work we prove that the maximally symmetric vacuum solutions of General Relativity emerge from the geometric structure of statistical mechanics and thermodynamic fluctuation theory. To present our argument, we begin by showing that the pseudo-Riemannian structure of the Thermodynamic Phase Space is a solution to the vacuum Einstein-Gauss-Bonnet theory of gravity with a cosmological constant. Then, we use the geometry of equilibrium thermodynamics to demonstrate that the maximally symmetric vacuum solutions of Einstein's Field Equations -- Minkowski, de-Sitter and Anti-de-Sitter spacetimes -- correspond to thermodynamic fluctuations. Moreover, we argue that these might be the only possible solutions that can be derived in this manner. Thus, the results presented here are the first concrete examples of spacetimes effectively emerging from the thermodynamic limit over an unspecified microscopic theory without any further assumptions.
Consistent 4-form fluxes for maximal supergravity
Godazgar, Hadi; Krueger, Olaf; Nicolai, Hermann
2015-01-01
We derive new ansaetze for the 4-form field strength of D=11 supergravity corresponding to uplifts of four-dimensional maximal gauged supergravity. In particular, the ansaetze directly yield the components of the 4-form field strength in terms of the scalars and vectors of the four-dimensional maximal gauged supergravity---in this way they provide an explicit uplift of all four-dimensional consistent truncations of D=11 supergravity. The new ansaetze provide a substantially simpler method for uplifting d=4 flows compared to the previously available method using the 3-form and 6-form potential ansaetze. The ansatz for the Freund-Rubin term allows us to conjecture a `master formula' for the latter in terms of the scalar potential of d=4 gauged supergravity and its first derivative. We also resolve a long-standing puzzle concerning the antisymmetry of the flux obtained from uplift ansaetze.
Modularity maximization using completely positive programming
Yazdanparast, Sakineh; Havens, Timothy C.
2017-04-01
Community detection is one of the most prominent problems of social network analysis. In this paper, a novel method for Modularity Maximization (MM) for community detection is presented which exploits the Alternating Direction Augmented Lagrangian (ADAL) method for maximizing a generalized form of Newman's modularity function. We first transform Newman's modularity function into a quadratic program and then use Completely Positive Programming (CPP) to map the quadratic program to a linear program, which provides the globally optimal maximum modularity partition. In order to solve the proposed CPP problem, a closed-form solution using the ADAL merged with a rank minimization approach is proposed. The performance of the proposed method is evaluated on several real-world data sets used as benchmarks for community detection. Simulation results show that the proposed technique provides outstanding results in terms of modularity value for crisp partitions.
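The quantity being maximized is Newman's modularity, Q = (1/2m) Σ_ij (A_ij − k_i k_j / 2m) δ(c_i, c_j). A direct evaluation of Q for a given partition can be sketched as follows (only the objective, not the ADAL/CPP solver; the function name is my own):

```python
import numpy as np

def modularity(A, communities):
    """Newman-Girvan modularity Q of a partition of an undirected graph.

    A: symmetric adjacency matrix; communities: community label per node.
    Q = (1/2m) * sum_ij (A_ij - k_i * k_j / 2m) * [c_i == c_j].
    """
    A = np.asarray(A, dtype=float)
    k = A.sum(axis=1)            # node degrees
    two_m = A.sum()              # 2m: sum of all degrees
    c = np.asarray(communities)
    same = c[:, None] == c[None, :]   # indicator of same community
    return float(((A - np.outer(k, k) / two_m) * same).sum() / two_m)
```

For two disjoint edges grouped into their natural two communities, Q = 0.5, while the trivial one-community partition scores Q = 0.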
Utility maximization in incomplete markets with default
Lim, Thomas
2008-01-01
We address the maximization problem of expected utility from terminal wealth. The special feature of this paper is that we consider a financial market where the price process of risky assets can have a default time. Using dynamic programming, we characterize the value function with a backward stochastic differential equation and the optimal portfolio policies. We treat separately the cases of exponential, power and logarithmic utility.
Revenue Maximizing Head Starts in Contests
Franke, Jörg; Leininger, Wolfgang; Wasser, Cédric
2014-01-01
We characterize revenue maximizing head starts for all-pay auctions and lottery contests with many heterogeneous players. We show that under optimal head starts all-pay auctions revenue-dominate lottery contests for any degree of heterogeneity among players. Moreover, all-pay auctions with optimal head starts induce higher revenue than any multiplicatively biased all-pay auction or lottery contest. While head starts are more effective than multiplicative biases in all-pay auctions, they are l...
Approximate Revenue Maximization in Interdependent Value Settings
Chawla, Shuchi; Fu, Hu; Karlin, Anna
2014-01-01
We study revenue maximization in settings where agents' values are interdependent: each agent receives a signal drawn from a correlated distribution and agents' values are functions of all of the signals. We introduce a variant of the generalized VCG auction with reserve prices and random admission, and show that this auction gives a constant approximation to the optimal expected revenue in matroid environments. Our results do not require any assumptions on the signal distributions, however, ...
Maximal supersymmetry and B-mode targets
Kallosh, Renata; Linde, Andrei; Wrase, Timm; Yamada, Yusuke
2017-04-01
Extending the work of Ferrara and one of the authors [1], we present dynamical cosmological models of α-attractors with plateau potentials for 3α = 1, 2, 3, 4, 5, 6, 7. These models are motivated by geometric properties of maximally supersymmetric theories: M-theory, superstring theory, and maximal N = 8 supergravity. After a consistent truncation of maximal to minimal supersymmetry in a seven-disk geometry, we perform a two-step procedure: 1) we introduce a superpotential, which stabilizes the moduli of the seven-disk geometry in a supersymmetric minimum, 2) we add a cosmological sector with a nilpotent stabilizer, which breaks supersymmetry spontaneously and leads to a desirable class of cosmological attractor models. These models, with n_s consistent with observational data and with tensor-to-scalar ratio r ≈ 10^-2 - 10^-3, provide natural targets for future B-mode searches. We relate the issue of stability of inflationary trajectories in these models to tessellations of a hyperbolic geometry.
Maximal respiratory pressures among adolescent swimmers.
Rocha Crispino Santos, M A; Pinto, M L; Couto Sant'Anna, C; Bernhoeft, M
2011-01-01
Maximal inspiratory pressures (MIP) and maximal expiratory pressures (MEP) are useful indices of respiratory muscle strength in athletes. The aims of this study were: to describe the strength of the respiratory muscles of an Olympic junior swim team, at baseline and after standard physical training; and to determine if there is a differential inspiratory and expiratory pressure response to the physical training. A cross-sectional study evaluated 28 international-level swimmers with ages ranging from 15 to 17 years, 19 (61%) being males. At baseline, MIP was found to be lower in females (P = .001). The mean values reached by males and females were: MIP (cmH2O) = M: 100.4 (± 26.5)/F: 67.8 (± 23.2); MEP (cmH2O) = M: 87.4 (± 20.7)/F: 73.9 (± 17.3). After the physical training they reached: MIP (cmH2O) = M: 95.3 (± 30.3)/F: 71.8 (± 35.6); MEP (cmH2O) = M: 82.8 (± 26.2)/F: 70.4 (± 8.3). No differential pressure responses were observed in either males or females. These results suggest that swimmers can sustain the magnitude of the initial maximal pressures. Other studies should be developed to clarify whether MIP and MEP could be used as markers of an athlete's performance.
Fitting a mixture model by expectation maximization to discover motifs in biopolymers
Bailey, T.L.; Elkan, C. [Univ. of California, La Jolla, CA (United States)]
1994-12-31
The algorithm described in this paper discovers one or more motifs in a collection of DNA or protein sequences by using the technique of expectation maximization to fit a two-component finite mixture model to the set of sequences. Multiple motifs are found by fitting a mixture model to the data, probabilistically erasing the occurrences of the motif thus found, and repeating the process to find successive motifs. The algorithm requires only a set of unaligned sequences and a number specifying the width of the motifs as input. It returns a model of each motif and a threshold which together can be used as a Bayes-optimal classifier for searching for occurrences of the motif in other databases. The algorithm estimates how many times each motif occurs in each sequence in the dataset and outputs an alignment of the occurrences of the motif. The algorithm is capable of discovering several different motifs with differing numbers of occurrences in a single dataset.
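The core of the approach above is expectation maximization on a two-component mixture (motif vs. background). A minimal numeric analogue, EM for a two-component 1-D Gaussian mixture, illustrates the E-step/M-step cycle (this is a generic textbook sketch, not the sequence-model version used for motifs; the function name and initialization are my own):

```python
import numpy as np

def em_two_gaussians(x, iters=200):
    """EM for a two-component 1-D Gaussian mixture.

    E-step: posterior probability (responsibility) of each component for
    each point. M-step: weighted maximum-likelihood updates of the means,
    standard deviations, and mixing proportions.
    """
    x = np.asarray(x, dtype=float)
    mu = np.array([x.min(), x.max()])           # crude but separated init
    sigma = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: component densities and responsibilities
        dens = (pi / (sigma * np.sqrt(2 * np.pi))
                * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the responsibilities
        nk = resp.sum(axis=0)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
        pi = nk / len(x)
    return mu, sigma, pi
```

The motif-discovery algorithm replaces the Gaussian densities with sequence likelihoods under motif and background models, but the alternation is the same.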
Next-to-leading Order Calculation for Jets Defined by a Maximized Jet Function
Kaufmann, Tom; Vogelsang, Werner
2014-01-01
We present a next-to-leading order QCD calculation for the single-inclusive production of collimated jets at hadron colliders, when the jet is defined by maximizing a suitable jet function that depends on the momenta of final-state particles in the event. A jet algorithm of this type was initially proposed by Georgi and subsequently further developed into the class of "$J_{E_T}$ algorithms". Our calculation establishes the infrared safety of the algorithms at this perturbative order. We derive analytical results for the relevant partonic cross sections. We discuss similarities and differences with respect to jets defined by cone or (anti-)$k_t$ algorithms and present numerical results for the Tevatron and the LHC.
Primal Decomposition-Based Method for Weighted Sum-Rate Maximization in Downlink OFDMA Systems
Weeraddana Chathuranga
2010-01-01
We consider the weighted sum-rate maximization problem in downlink Orthogonal Frequency Division Multiple Access (OFDMA) systems. Motivated by the increasing popularity of OFDMA in future wireless technologies, a low-complexity suboptimal resource allocation algorithm is obtained for joint optimization of multiuser subcarrier assignment and power allocation. The algorithm is based on an approximated primal decomposition-based method, which is inspired by exact primal decomposition techniques. The original nonconvex optimization problem is divided into two subproblems which can be solved independently. Numerical results are provided to compare the performance of the proposed algorithm to Lagrange relaxation based suboptimal methods as well as to the optimal exhaustive search-based method. Despite its reduced computational complexity, the proposed algorithm provides close-to-optimal performance.
Hromkovic, Juraj
2009-01-01
Explores the science of computing. This book starts with the development of computer science, algorithms and programming, and then explains and shows how to exploit the concepts of infinity, computability, computational complexity, nondeterminism and randomness.
Non-Cancellation Multistage Kurtosis Maximization with Prewhitening for Blind Source Separation
Xiang Chen
2009-01-01
Chi et al. recently proposed two effective non-cancellation multistage (NCMS) blind source separation algorithms, one using the turbo source extraction algorithm (TSEA), called the NCMS-TSEA, and the other using the fast kurtosis maximization algorithm (FKMA), called the NCMS-FKMA. Their computational complexity and performance heavily depend on the dimension of the multisensor data, that is, the number of sensors. This paper proposes the inclusion of prewhitening processing in the NCMS-TSEA and NCMS-FKMA prior to source extraction. We come up with four improved algorithms, referred to as the PNCMS-TSEA, the PNCMS-FKMA, the PNCMS-TSEA(p), and the PNCMS-FKMA(p). Compared with the existing NCMS-TSEA and NCMS-FKMA, the former two algorithms achieve a significant reduction in computational complexity along with some performance improvements. The latter two algorithms are generalized counterparts of the former two, with the single source extraction module replaced by a bank of source extraction modules running in parallel at each stage. Although the PNCMS-TSEA and PNCMS-TSEA(p) (likewise the PNCMS-FKMA and PNCMS-FKMA(p)) perform identically, the merit of this parallel source extraction structure lies in its much shorter processing latency, making the PNCMS-TSEA(p) and PNCMS-FKMA(p) well suited to software and hardware implementations. Simulation results are presented to verify the efficacy and computational efficiency of the proposed algorithms.
Multi-scaled license plate detection based on the label-moveable maximal MSER clique
Gu, Qin; Yang, Jianyu; Kong, Lingjiang; Cui, Guolong
2015-08-01
In this paper, we consider a robust vehicle license plate detection problem for intelligent transportation systems in the presence of various illumination conditions. We propose a robust and fast multi-scaled license plate detection and location algorithm, which exploits a label-moveable maximal MSER clique. Specifically, first, we extract the candidate character regions using Maximally Stable Extremal Region (MSER) features. Second, we divide each candidate character region into four types and extract the suspected initial node (the top-left character) based on its neighbor MSER distribution characteristic. Third, we label each candidate character region to accomplish license plate detection and location based on the detected suspected initial node and the corresponding label-moveable maximal MSER clique. The robustness of license plate detection, the accuracy of character labeling for plate location, and the improvement in computational efficiency are evaluated on real data.
Maximizing Influence in an Ising Network: A Mean-Field Optimal Solution
Lynn, Christopher
2016-01-01
The problem of influence maximization in social networks has typically been studied in the context of contagion models and irreversible processes. In this paper, we consider an alternate model that treats individual opinions as spins in an Ising network at dynamic equilibrium. We formalize the Ising influence maximization (IIM) problem, which has a physical interpretation as the maximization of the magnetization given a budget of external magnetic field. Under the mean-field (MF) approximation, we develop a number of sufficient conditions for when the problem is convex and exactly solvable, and we provide a gradient ascent algorithm that efficiently achieves an $\\epsilon$-approximation to the optimal solution. We show that optimal strategies exhibit a phase transition from focusing influence on high-degree individuals at high interaction strengths to spreading influence among low-degree individuals at low interaction strengths. We also establish a number of novel results about the structure of steady-states i...
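The mean-field approximation underlying the IIM formulation replaces the exact Ising dynamics with a self-consistent fixed point for the magnetizations. A minimal sketch of that fixed-point iteration (only the forward model whose output the external-field budget is allocated to maximize; the function name and the simple synchronous iteration are my own choices):

```python
import numpy as np

def mean_field_magnetization(J, h, beta=1.0, iters=500):
    """Self-consistent mean-field magnetizations of an Ising network.

    Iterates m_i <- tanh(beta * (sum_j J_ij m_j + h_i)) to a fixed point.
    J: symmetric coupling matrix; h: external field per node; beta: inverse
    temperature. Influence maximization then asks how to allocate a budget
    of field h so as to maximize sum_i m_i.
    """
    m = np.zeros(len(h))
    for _ in range(iters):
        m = np.tanh(beta * (J @ m + h))
    return m
```

With no couplings (J = 0) the fixed point reduces to m_i = tanh(beta * h_i), which is a convenient sanity check.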
Simulated annealing algorithm for optimal capital growth
Luo, Yong; Zhu, Bo; Tang, Yong
2014-08-01
We investigate the problem of dynamic optimal capital growth of a portfolio. A general framework was developed in which one strives to maximize the expected logarithmic utility of the long-term growth rate. Exact optimization algorithms run into difficulties in this framework, and this motivates the investigation of applying a simulated annealing algorithm to optimize the capital growth of a given portfolio. Empirical results with real financial data indicate that the approach is promising for capital growth portfolios.
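A stripped-down version of the idea, simulated annealing over a single risky-asset fraction to maximize the average log growth rate, can be sketched as follows (a one-dimensional toy of the Kelly-style objective, not the paper's multi-asset procedure; all names and tuning constants are my own):

```python
import math
import random

def anneal_growth(returns, steps=5000, t0=1.0, seed=1):
    """Simulated annealing over a fraction w in [0, 1] of wealth in the
    risky asset, maximizing the average log growth rate of the portfolio.

    Proposal: Gaussian perturbation of w, clipped to [0, 1]. Worse moves
    are accepted with probability exp(delta / temperature), and the
    temperature is lowered linearly to near zero.
    """
    rng = random.Random(seed)

    def growth(w):
        return sum(math.log(1.0 + w * r) for r in returns) / len(returns)

    w = 0.5
    best_w, best_g = w, growth(w)
    for k in range(steps):
        temp = t0 * (1.0 - k / steps) + 1e-9
        cand = min(1.0, max(0.0, w + rng.gauss(0.0, 0.1)))
        delta = growth(cand) - growth(w)
        if delta > 0 or rng.random() < math.exp(delta / temp):
            w = cand
            if growth(w) > best_g:
                best_w, best_g = w, growth(w)
    return best_w
```

For a two-outcome return stream of +60% / −40%, the analytic optimum of the log-growth objective is w = 0.1/0.24 ≈ 0.417, which the annealer should approach.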
Cardiorespiratory Coordination in Repeated Maximal Exercise.
Garcia-Retortillo, Sergi; Javierre, Casimiro; Hristovski, Robert; Ventura, Josep L; Balagué, Natàlia
2017-01-01
Increases in cardiorespiratory coordination (CRC) after training with no differences in performance and physiological variables have recently been reported using a principal component analysis approach. However, no research has yet evaluated the short-term effects of exercise on CRC. The aim of this study was to delineate the behavior of CRC under different physiological initial conditions produced by repeated maximal exercises. Fifteen participants performed 2 consecutive graded and maximal cycling tests. Test 1 was performed without any previous exercise, and Test 2 6 min after Test 1. Both tests started at 0 W and the workload was increased by 25 W/min in males and 20 W/min in females, until they were not able to maintain the prescribed cycling frequency of 70 rpm for more than 5 consecutive seconds. A principal component (PC) analysis of selected cardiovascular and cardiorespiratory variables (expired fraction of O2, expired fraction of CO2, ventilation, systolic blood pressure, diastolic blood pressure, and heart rate) was performed to evaluate the CRC defined by the number of PCs in both tests. In order to quantify the degree of coordination, the information entropy was calculated and the eigenvalues of the first PC (PC1) were compared between tests. Although no significant differences were found between the tests with respect to the performed maximal workload (Wmax), maximal oxygen consumption (VO2 max), or ventilatory threshold (VT), an increase in the number of PCs and/or a decrease of eigenvalues of PC1 (t = 2.95; p = 0.01; d = 1.08) was found in Test 2 compared to Test 1. Moreover, entropy was significantly higher (Z = 2.33; p = 0.02; d = 1.43) in the last test. In conclusion, despite the fact that no significant differences were observed in the conventionally explored maximal performance and physiological variables (Wmax, VO2 max, and VT) between tests, a reduction of CRC was observed in Test 2. These results emphasize the interest of CRC evaluation in
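The two quantities used above to characterize coordination, the number of principal components and the entropy of the eigenvalue spectrum, can be computed generically as follows (a sketch of the PCA bookkeeping only, not the study's full protocol; the function name and the 90% variance threshold are my own choices):

```python
import numpy as np

def pc_structure(X, var_threshold=0.9):
    """Number of principal components needed to reach a cumulative variance
    threshold, plus the Shannon entropy of the normalized eigenvalue
    spectrum. Higher entropy means variance spread over more components,
    i.e. weaker coordination among the variables (columns of X)."""
    Xc = X - X.mean(axis=0)
    cov = np.cov(Xc, rowvar=False)
    eig = np.sort(np.linalg.eigvalsh(cov))[::-1]   # descending eigenvalues
    p = eig / eig.sum()                            # variance proportions
    n_pcs = int(np.searchsorted(np.cumsum(p), var_threshold) + 1)
    entropy = float(-(p[p > 0] * np.log(p[p > 0])).sum())
    return n_pcs, entropy
```

Perfectly coordinated variables (all columns proportional to one signal) collapse onto a single PC with near-zero entropy, while independent variables need more PCs and score higher entropy, mirroring the reduction of CRC reported in Test 2.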
Determination of Pavement Rehabilitation Activities through a Permutation Algorithm
Sangyum Lee
2013-01-01
This paper presents a mathematical programming model for optimal pavement rehabilitation planning. The model maximizes the rehabilitation area through a newly developed permutation algorithm, based on the procedures outlined in the harmony search (HS) algorithm. Additionally, the proposed algorithm was based on an optimal solution method for the problem of multilocation rehabilitation activities on pavement structure, using empirical deterioration and rehabilitation effectiveness models, subject to a limited maintenance budget. Thus, nonlinear pavement performance and rehabilitation activity decision models were used to maximize the objective function of rehabilitation area within a limited budget, through the permutation algorithm. Our results showed that the heuristic permutation algorithm provided a good optimum in terms of maximizing the rehabilitation area, compared with the worst-first maintenance method currently used in Seoul.
An Applied Method for Designing Maximally Decimating Non-uniform Filter Banks
Anonymous
2003-01-01
Assembling individual linear-phase filters to form a multi-channel filter bank allows the synthesis filters to be similar to the corresponding analysis filters, and the design calculation can be simple. Appropriate relations between the synthesis and analysis filters eliminate most of the aliasing resulting from decimation in non-uniform maximally decimating filter banks, and the LS algorithm and Remez algorithm are used to optimize the composite characteristic. This design method can achieve approximate perfect reconstruction. An example is given in which general parameter filters with approximately linear phase are used as the units of a filter bank.
Maximization of sums of quotients of quadratic forms and some generalizations
Kiers, Henk A.L.
Monotonically convergent algorithms are described for maximizing six (constrained) functions of vectors x, or matrices X with columns x_1, ..., x_r. These functions are $h_1(x) = \sum_k (x'A_k x)(x'C_k x)^{-1}$, $H_1(X) = \sum_k \mathrm{tr}\,[(X'A_k X)(X'C_k X)^{-1}]$, $\tilde{h}_1(X) = \sum_k$ ...
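The one-term building block of these objectives, maximizing a single quotient $x'Ax / x'Cx$, is solved exactly by a generalized eigenproblem; the sums of quotients in the paper require the iterative majorization schemes it describes. A sketch of the building block (the Cholesky reduction is a standard trick; the function name is my own):

```python
import numpy as np

def max_rayleigh(A, C):
    """Maximize x'Ax / x'Cx for symmetric A and symmetric positive
    definite C via the generalized eigenproblem A x = lambda C x.

    With C = L L', the substitution y = L' x turns the problem into a
    standard symmetric eigenproblem for M = L^{-1} A L^{-T}; the maximum
    quotient is the largest eigenvalue and x is the back-transformed
    top eigenvector.
    """
    L = np.linalg.cholesky(C)
    Linv = np.linalg.inv(L)
    M = Linv @ A @ Linv.T            # symmetric reduced problem
    vals, vecs = np.linalg.eigh(M)   # eigenvalues in ascending order
    x = Linv.T @ vecs[:, -1]         # back-transform the top eigenvector
    return float(vals[-1]), x
```

For A = diag(1, 4) and C = I the maximum quotient is 4, attained along the second coordinate axis.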
Maximizing Networking Capacity in Multi-Channel Multi-Radio Wireless Networks
万鹏俊; 万志国
2014-01-01
Providing each node with one or more multi-channel radios offers a promising avenue for enhancing the network capacity by simultaneously exploiting multiple non-overlapping channels through different radio interfaces and mitigating interference through proper channel assignment. However, it is quite challenging to effectively utilize multiple channels and/or multiple radios to maximize throughput capacity. The National Natural Science Foundation of China (NSFC) Project 61128005 conducted comprehensive algorithmic-theoretic and queuing-theoretic studies of maximizing wireless networking capacity in multi-channel multi-radio (MC-MR) wireless networks under the protocol interference model and fundamentally advanced the state of the art. In addition, under the notoriously hard physical interference model, this project has taken initial algorithmic steps toward maximizing the network capacity, with or without power control. We expect the new techniques and tools developed in this project will have wide applications in capacity planning, resource allocation and sharing, and protocol design for wireless networks, and will serve as the basis for future algorithm developments in wireless networks with advanced features, such as multi-input multi-output (MIMO) wireless networks.
Albino, Lucas D.; Santos, Gabriela R.; Ribeiro, Victor A.B.; Rodrigues, Laura N., E-mail: lucasdelbem1@gmail.com [Universidade de Sao Paulo (USP), Sao Paulo, SP (Brazil). Faculdade de Medicina. Instituto de Radiologia]; Weltman, Eduardo; Braga, Henrique F. [Instituto do Cancer do Estado de Sao Paulo, Sao Paulo, SP (Brazil). Servico de Radioterapia]
2013-12-15
The dose accuracy calculated by a treatment planning system is directly related to the chosen algorithm. Nowadays, several dose calculation algorithms are commercially available, and they differ in calculation time and accuracy, especially when individual tissue densities are taken into account. The aim of this study was to compare two different calculation algorithms from iPlan®, BrainLAB, in the treatment of pituitary gland tumors with intensity-modulated radiation therapy (IMRT). These tumors are located in a region with tissues of variable electronic density. The deviations from the plan with no heterogeneity correction were evaluated. For initial validation of the data entered into the planning system, an IMRT plan was simulated in an anthropomorphic phantom and the dose distribution was measured with a radiochromic film. A gamma analysis was performed on the film, comparing it with dose distributions calculated with the X-ray Voxel Monte Carlo (XVMC) algorithm and the pencil beam convolution (PBC) algorithm. Next, 33 patient plans, initially calculated with the PBC algorithm, were recalculated with the XVMC algorithm. The treatment volume and organ-at-risk dose-volume histograms were compared. No relevant differences were found in dose-volume histograms between XVMC and PBC. However, differences were obtained when comparing each plan with the plan without heterogeneity correction. (author)
Postactivation Potentiation Biases Maximal Isometric Strength Assessment
Leonardo Coelho Rabello Lima
2014-01-01
Postactivation potentiation (PAP) is known to enhance force production. Maximal isometric strength assessment protocols usually consist of two or more maximal voluntary isometric contractions (MVCs). The objective of this study was to determine whether PAP would influence isometric strength assessment. Healthy male volunteers (n=23) performed two five-second MVCs separated by a 180-second interval. Changes in isometric peak torque (IPT), time to achieve it (tPTI), contractile impulse (CI), root mean square of the electromyographic signal during tPTI (RMS), and rate of torque development (RTD), in different intervals, were measured. Significant increases in IPT (240.6 ± 55.7 N·m versus 248.9 ± 55.1 N·m), RTD (746 ± 152 N·m·s−1 versus 727 ± 158 N·m·s−1), and RMS (59.1 ± 12.2% RMSMAX versus 54.8 ± 9.4% RMSMAX) were found on the second MVC. tPTI decreased significantly on the second MVC (2373 ± 1200 ms versus 2784 ± 1226 ms). We conclude that a first MVC leads to PAP that elicits significant enhancements in strength-related variables of a second MVC performed 180 seconds later. If disregarded, this phenomenon might bias maximal isometric strength assessment, overestimating some of these variables.
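Two of the variables in this abstract have simple standard definitions: EMG RMS is the root mean square of the signal over a window, and average RTD is the net torque change divided by elapsed time. A small illustrative sketch; the function names and sample data are assumptions, not the study's actual processing pipeline:

```python
import numpy as np

def emg_rms(signal):
    """Root mean square of an EMG window: sqrt(mean(x^2))."""
    x = np.asarray(signal, dtype=float)
    return float(np.sqrt(np.mean(x ** 2)))

def rate_of_torque_development(torque, time_s):
    """Average RTD over a window: net torque change divided by
    elapsed time, in N.m/s."""
    torque = np.asarray(torque, dtype=float)
    time_s = np.asarray(time_s, dtype=float)
    return float((torque[-1] - torque[0]) / (time_s[-1] - time_s[0]))
```

For example, a torque ramp from 0 to 100 N·m over 0.5 s gives an average RTD of 200 N·m·s−1; studies typically also report RTD over fixed early windows (e.g. 0 to 50 ms), which is the same slope computation restricted to that interval.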
Maximizing versus satisficing: happiness is a matter of choice.
Schwartz, Barry; Ward, Andrew; Monterosso, John; Lyubomirsky, Sonja; White, Katherine; Lehman, Darrin R
2002-11-01
Can people feel worse off as the options they face increase? The present studies suggest that some people--maximizers--can. Study 1 reported a Maximization Scale, which measures individual differences in desire to maximize. Seven samples revealed negative correlations between maximization and happiness, optimism, self-esteem, and life satisfaction, and positive correlations between maximization and depression, perfectionism, and regret. Study 2 found maximizers less satisfied than nonmaximizers (satisficers) with consumer decisions, and more likely to engage in social comparison. Study 3 found maximizers more adversely affected by upward social comparison. Study 4 found maximizers more sensitive to regret and less satisfied in an ultimatum bargaining game. The interaction between maximizing and choice is discussed in terms of regret, adaptation, and self-blame.
Cycle-maximal triangle-free graphs
Durocher, Stephane; Gunderson, David S.; Li, Pak Ching
2015-01-01
We conjecture that the balanced complete bipartite graph K⌊n/2⌋,⌈n/2⌉ contains more cycles than any other n-vertex triangle-free graph, and we make some progress toward proving this. We give equivalent conditions for cycle-maximal triangle-free graphs; show bounds on the numbers of cycles in graphs depending on numbers of vertices and edges, girth, and homomorphisms to small fixed graphs; and use the bounds to show that among regular graphs, the conjecture holds. We also consider graphs that are close to being regular, with the minimum and maximum degrees differing...
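For small n, the conjectured extremal behavior can be checked by brute force. The sketch below counts simple cycles by exhaustive DFS and compares the balanced complete bipartite graph K2,3 with the 5-cycle, both triangle-free on five vertices; the enumeration strategy is an illustrative choice, not the paper's method:

```python
def count_cycles(n, edges):
    """Count simple cycles (length >= 3) in an undirected graph on
    vertices 0..n-1 by brute-force DFS; only practical for small n.
    Each cycle is anchored at its smallest vertex and traversed in
    both directions, hence the final division by two."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    count = 0

    def dfs(start, current, visited):
        nonlocal count
        for nxt in adj[current]:
            if nxt == start and len(visited) >= 3:
                count += 1                       # closed a cycle
            elif nxt > start and nxt not in visited:
                dfs(start, nxt, visited | {nxt})

    for s in range(n):
        dfs(s, s, {s})
    return count // 2

# Two triangle-free graphs on five vertices:
k23 = [(a, b) for a in (0, 1) for b in (2, 3, 4)]   # K_{2,3}
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]       # the 5-cycle
```

Here `count_cycles(5, k23)` returns 3 (three 4-cycles) while `count_cycles(5, c5)` returns 1, consistent with the conjecture that the balanced complete bipartite graph is cycle-maximal among triangle-free graphs.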