Applications of expectation maximization algorithm for coherent optical communication
DEFF Research Database (Denmark)
Carvalho, L.; Oliveira, J.; Zibar, Darko
2014-01-01
In this invited paper, we present powerful statistical signal processing methods used by the machine learning community and link them to current problems in optical communication. In particular, we look into iterative maximum likelihood parameter estimation based on the expectation maximization … algorithm and its application in coherent optical communication systems for linear and nonlinear impairment mitigation. Furthermore, the estimated parameters are used to build a probabilistic model of the system for synthetic impairment generation. …
A Local Scalable Distributed Expectation Maximization Algorithm for Large Peer-to-Peer Networks
National Aeronautics and Space Administration — This paper describes a local and distributed expectation maximization algorithm for learning parameters of Gaussian mixture models (GMM) in large peer-to-peer (P2P)...
Hierarchical trie packet classification algorithm based on expectation-maximization clustering.
Bi, Xia-An; Zhao, Junxia
2017-01-01
With the growth of computer network bandwidth, packet classification algorithms that can handle large-scale rule sets are urgently needed. Among existing approaches, packet classification algorithms based on hierarchical tries have become an important research branch because of their wide practical use. Although the hierarchical trie saves substantial storage space, it has several shortcomings, such as backtracking and empty nodes. This paper proposes a new packet classification algorithm, the Hierarchical Trie Algorithm Based on Expectation-Maximization Clustering (HTEMC). Firstly, the paper uses a formalization method to treat the packet classification problem by mapping the rules and data packets into a two-dimensional space. Secondly, it uses the expectation-maximization algorithm to cluster the rules based on their aggregate characteristics, thereby forming diversified clusters. Thirdly, it proposes a hierarchical trie built on the results of the expectation-maximization clustering. Finally, it conducts simulation and real-environment experiments to compare the performance of our algorithm with other typical algorithms, and analyzes the experimental results. The hierarchical trie structure in our algorithm not only adopts trie path compression to eliminate backtracking, but also solves the problem of inefficient trie updates, which greatly improves the performance of the algorithm.
Directory of Open Access Journals (Sweden)
Jiechang Wen
2012-01-01
Within the learning framework of maximum weighted likelihood (MWL) proposed by Cheung (2004, 2005), this paper develops a batch Rival Penalized Expectation-Maximization (RPEM) algorithm for density mixture clustering, provided that all observations are available before the learning process. Compared to the adaptive RPEM algorithm of Cheung (2004, 2005), the batch RPEM need not assign a learning rate, analogous to the Expectation-Maximization (EM) algorithm (Dempster et al., 1977), but still preserves the capability of automatic model selection. Further, the convergence of the batch RPEM is faster than that of the EM and the adaptive RPEM in general. Experiments show the superior performance of the proposed algorithm on synthetic data and color image segmentation.
A batch algorithm for estimating trajectories of point targets using expectation maximization
DEFF Research Database (Denmark)
Rahmathullah, Abu; Raghavendra, Selvan; Svensson, Lennart
2016-01-01
In this paper, we propose a strategy based on expectation maximization for tracking multiple point targets. The algorithm is similar to probabilistic multi-hypothesis tracking (PMHT), but does not relax the point target model assumptions. According to the point target models, a target can … Extensive simulations comparing the mean optimal sub-pattern assignment (MOSPA) performance of the algorithm for different scenarios, averaged over several Monte Carlo iterations, show that the proposed algorithm performs better than JPDA and PMHT. We also compare it to a benchmarking algorithm: N-scan pruning … based track-oriented multiple hypothesis tracking (TOMHT). The proposed algorithm shows a good trade-off between computational complexity and MOSPA performance. …
Improved Expectation Maximization Algorithm for Gaussian Mixed Model Using the Kernel Method
Directory of Open Access Journals (Sweden)
Mohd Izhan Mohd Yusoff
2013-01-01
Fraud activities have contributed to heavy losses suffered by telecommunication companies. In this paper, we attempt to use the Gaussian mixed model, a probabilistic model commonly used in speech recognition, to identify fraud calls in the telecommunication industry. We look at several issues encountered when calculating the maximum likelihood estimates of the Gaussian mixed model using an expectation maximization algorithm. Firstly, we look at a mechanism for determining the initial number of Gaussian components and the initial values of the algorithm using the kernel method. We show via simulation that this technique improves the performance of the algorithm. Secondly, we develop a procedure for determining the order of the Gaussian mixed model using the log-likelihood function and the Akaike information criterion. Finally, for illustration, we apply the improved algorithm to real telecommunication data. The modified method will pave the way to a comprehensive method for detecting fraud calls in future work.
An adaptive Expectation-Maximization algorithm with GPU implementation for electron cryomicroscopy.
Tagare, Hemant D; Barthel, Andrew; Sigworth, Fred J
2010-09-01
Maximum-likelihood (ML) estimation has very desirable properties for reconstructing 3D volumes from noisy cryo-EM images of single macromolecular particles. Current implementations of ML estimation make use of the Expectation-Maximization (EM) algorithm or its variants. However, the EM algorithm is notoriously computation-intensive, as it involves integrals over all orientations and positions for each particle image. We present a strategy to speedup the EM algorithm using domain reduction. Domain reduction uses a coarse grid to evaluate regions in the integration domain that contribute most to the integral. The integral is evaluated with a fine grid in these regions. In the simulations reported in this paper, domain reduction gives speedups which exceed a factor of 10 in early iterations and which exceed a factor of 60 in terminal iterations. Copyright 2010 Elsevier Inc. All rights reserved.
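Domain reduction can be illustrated on a 1-D sharply peaked integrand: evaluate a coarse grid, keep only the cells that contribute non-negligibly, and spend the fine grid there. A toy sketch with our own names and thresholds, far simpler than the orientation/position integrals in cryo-EM:

```python
import numpy as np

def integrate_domain_reduction(f, a, b, n_coarse=64, refine=16, keep_frac=1e-3):
    """Approximate the integral of f over [a, b]: a coarse midpoint pass flags the
    cells that matter, and only those cells get a fine midpoint rule."""
    edges = np.linspace(a, b, n_coarse + 1)
    mid = 0.5 * (edges[:-1] + edges[1:])
    coarse = f(mid)
    keep = coarse >= keep_frac * coarse.max()      # cells contributing non-negligibly
    h = (b - a) / n_coarse
    total = 0.0
    for i in np.nonzero(keep)[0]:
        # midpoints of `refine` sub-cells inside the kept coarse cell
        xs = np.linspace(edges[i], edges[i + 1], refine, endpoint=False) + h / (2 * refine)
        total += f(xs).sum() * h / refine
    return total, keep.mean()                      # integral estimate, fraction refined

f = lambda t: np.exp(50.0 * (np.cos(t) - 1.0))     # sharply peaked near t = 0 (mod 2*pi)
approx, frac = integrate_domain_reduction(f, 0.0, 2 * np.pi)
full = f(np.linspace(0, 2 * np.pi, 100000)).mean() * 2 * np.pi   # brute-force reference
print(approx, full, frac)
```

The two estimates agree closely while only a small fraction of the domain is ever touched by the fine grid, which is the source of the reported speedups.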
A Local Scalable Distributed Expectation Maximization Algorithm for Large Peer-to-Peer Networks
Bhaduri, Kanishka; Srivastava, Ashok N.
2009-01-01
This paper offers a local distributed algorithm for expectation maximization in large peer-to-peer environments. The algorithm can be used for a variety of well-known data mining tasks in a distributed setting, such as clustering, anomaly detection and target tracking, to name a few. This technology is crucial for many emerging peer-to-peer applications in bioinformatics, astronomy, social networking, sensor networks and web mining. Centralizing all or some of the data to build global models is impractical in such peer-to-peer environments because of the large number of data sources, the asynchronous nature of peer-to-peer networks, and the dynamic nature of the data and network. The distributed algorithm developed in this paper is provably correct, i.e., it converges to the same result as a comparable centralized algorithm, and can automatically adapt to changes in the data and the network. We show that the communication overhead of the algorithm is very low due to its local nature. This monitoring algorithm is then used as a feedback loop to sample data from the network and rebuild the model when it becomes outdated. We present thorough experimental results to verify our theoretical claims.
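The paper's algorithm is provably correct and adapts to churn; the sketch below shows only the randomized-gossip averaging primitive that such P2P schemes build on for combining per-peer statistics (e.g., local GMM sufficient statistics), with all names ours:

```python
import random

def gossip_average(values, rounds=2000, seed=0):
    """Pairwise randomized gossip: each exchange replaces both peers' values by
    their mean. The global average is invariant, and every peer converges to it
    without any central coordinator."""
    rng = random.Random(seed)
    v = list(values)
    n = len(v)
    for _ in range(rounds):
        i, j = rng.sample(range(n), 2)     # a random pair of communicating peers
        m = 0.5 * (v[i] + v[j])
        v[i] = v[j] = m
    return v

local_estimates = [1.0, 5.0, 9.0, 3.0, 7.0]   # e.g. one local statistic per peer
out = gossip_average(local_estimates)
print(out)   # every entry near the global mean 5.0
```

Convergence is geometric in the number of exchanges, which is why communication stays local and cheap; the published algorithm adds correctness guarantees and change detection on top of this kind of primitive.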
Vilanova, Pedro
2016-01-07
In this work, we present an extension of the forward-reverse representation introduced in "Simulation of forward-reverse stochastic representations for conditional diffusions", a 2014 paper by Bayer and Schoenmakers, to the context of stochastic reaction networks (SRNs). We apply this stochastic representation to the computation of efficient approximations of expected values of functionals of SRN bridges, i.e., SRNs conditional on their values at the extremes of given time intervals. We then employ this SRN bridge-generation technique for the statistical inference problem of approximating reaction propensities based on discretely observed data. To this end, we introduce a two-phase iterative inference method in which, during phase I, we solve a set of deterministic optimization problems where the SRNs are replaced by their reaction-rate ordinary differential equation approximations; then, during phase II, we apply the Monte Carlo version of the Expectation-Maximization algorithm to the phase I output. By selecting a set of over-dispersed seeds as initial points in phase I, the output of parallel runs from our two-phase method is a cluster of approximate maximum likelihood estimates. Our results are supported by numerical examples.
Bayer, Christian
2016-02-20
© 2016 Taylor & Francis Group, LLC. In this work, we present an extension of the forward–reverse representation introduced by Bayer and Schoenmakers (Annals of Applied Probability, 24(5):1994–2032, 2014) to the context of stochastic reaction networks (SRNs). We apply this stochastic representation to the computation of efficient approximations of expected values of functionals of SRN bridges, that is, SRNs conditional on their values at the extremes of given time intervals. We then employ this SRN bridge-generation technique for the statistical inference problem of approximating reaction propensities based on discretely observed data. To this end, we introduce a two-phase iterative inference method in which, during phase I, we solve a set of deterministic optimization problems where the SRNs are replaced by their reaction-rate ordinary differential equation approximations; then, during phase II, we apply the Monte Carlo version of the expectation-maximization algorithm to the phase I output. By selecting a set of overdispersed seeds as initial points in phase I, the output of parallel runs from our two-phase method is a cluster of approximate maximum likelihood estimates. Our results are supported by numerical examples.
Inoue, Jun-ichi; Tanaka, Kazuyuki
2002-01-01
Dynamical properties of image restoration and hyperparameter estimation are investigated by means of statistical mechanics. We introduce an exactly solvable model for image restoration and derive differential equations with respect to macroscopic quantities. From these equations, we evaluate relaxation processes of the system to the equilibrium state. Our statistical mechanical approach also enables us to investigate hyperparameter estimation by maximization of the marginal likelihood, using gradient descent and the expectation-maximization algorithm, from the dynamical point of view.
Salvo, Koen; Defrise, Michel
2017-11-01
The ‘simultaneous maximum-likelihood attenuation correction factors’ (sMLACF) algorithm presented here is an iterative algorithm to calculate the maximum-likelihood estimate of the activity λ and the attenuation factors a in time-of-flight positron emission tomography, from emission data only. Hence sMLACF is an alternative to the MLACF algorithm. sMLACF is derived using the generalized expectation-maximization principle by introducing an appropriate set of complete data. The resulting iteration step yields a simultaneous update of λ and a which, in addition, enforces in a natural way the constraints …
Energy Technology Data Exchange (ETDEWEB)
Lee, Youngrok [Iowa State Univ., Ames, IA (United States)]
2013-05-15
Heterogeneity exists in a data set when samples from different classes are merged into it. Finite mixture models can represent a survival time distribution in a heterogeneous patient group by the proportions of each class and by the survival time distribution within each class. A heterogeneous data set cannot be explicitly decomposed into homogeneous subgroups unless all samples are precisely labeled by their origin classes; this impossibility of decomposition is a barrier to estimating finite mixture models. The expectation-maximization (EM) algorithm has been used to obtain maximum likelihood estimates of finite mixture models by soft decomposition of heterogeneous samples without labels for a subset or the entire set of data. In medical surveillance databases we can find partially labeled data; that is, while not completely unlabeled, there is only imprecise information about class values. In this study we propose new EM algorithms that take advantage of such partial labels, and thus incorporate more information than traditional EM algorithms. We propose four variants of the EM algorithm, named EM-OCML, EM-PCML, EM-HCML and EM-CPCML, each of which assumes a specific mechanism of missing class values. We conducted a simulation study on exponential survival trees with five classes and showed that the advantages of incorporating a substantial amount of partially labeled data can be highly significant. We also showed that model selection based on AIC values works fairly well for selecting the best proposed algorithm on each specific data set. A case study on a real-world gastric cancer data set provided by the Surveillance, Epidemiology and End Results (SEER) program showed the superiority of EM-CPCML not only to the other proposed EM algorithms but also to conventional supervised, unsupervised and semi-supervised learning algorithms.
Guo, Chao-Yu; DeStefano, Anita L; Lunetta, Kathryn L; Dupuis, Josée; Cupples, L Adrienne
2005-01-01
The Haplotype Relative Risk (HRR) was first proposed [Falk et al., Ann Hum Genet 1987] to test for Linkage Disequilibrium (LD) between a marker and a putative disease locus using case-parent trios. Spurious association does not appear in such family-based studies under population admixture. In this paper, we extend the HRR to accommodate incomplete trios via the Expectation-Maximization (EM) algorithm [Dempster et al., J R Stat Soc Ser B, 1977]. In addition to triads and dyads (parent-offspring pairs), the EM-HRR easily incorporates individuals with no parental genotype information available, who are excluded from the one-parent Transmission/Disequilibrium Test (1-TDT) [Sun et al., Am J Epidemiol 1999]. Owing to the data structure of the EM-HRR, transmitted alleles are always available regardless of the number of missing parental genotypes. As a result of the larger sample size, computer simulations reveal that the EM-HRR is more powerful in detecting LD than the 1-TDT in a population under Hardy-Weinberg Equilibrium (HWE). If admixture is not extreme, the EM-HRR remains more powerful. When a large degree of admixture exists, the EM-HRR performs better than the 1-TDT when the association is strong, though not as well when it is weak. We illustrate the proposed method with an application to the Framingham Heart Study. Copyright (c) 2005 S. Karger AG, Basel.
Directory of Open Access Journals (Sweden)
Chia-Feng Lu
Automatic identification of various perfusion compartments from dynamic susceptibility contrast magnetic resonance brain images can assist in the clinical diagnosis and treatment of cerebrovascular diseases. The principle of segmentation methods is the clustering of bolus transit-time profiles to discern areas of different tissues. However, cerebrovascular diseases may result in delayed and dispersed local perfusion and therefore alter the hemodynamic signal profiles. Assessing the accuracy of the segmentation technique under delayed/dispersed circumstances is critical for accurately evaluating the severity of the vascular disease. In this study, we improved the expectation-maximization segmentation method by using the results of hierarchical clustering on whitened perfusion data as initial parameters for a mixture of multivariate Gaussians model. In addition, Monte Carlo simulations were conducted to evaluate the performance of the proposed method under different levels of delay, dispersion, and noise in the signal profiles during tissue segmentation. The proposed method was used to classify brain tissue types using perfusion data from five normal participants, a patient with unilateral stenosis of the internal carotid artery, and a patient with moyamoya disease. Our results showed that normal, delayed or dispersed hemodynamics can be well differentiated in patients, and therefore the local arterial input function for impaired tissues can be recognized to minimize the error when estimating cerebral blood flow. Furthermore, tissue at risk of infarction and tissue with or without complementary blood supply from the communicating arteries can be identified.
Ting, Chee-Ming; Samdin, S Balqis; Salleh, Sh-Hussain; Omar, M Hafizi; Kamarulafizam, I
2012-01-01
This paper applies an expectation-maximization (EM) based Kalman smoother (KS) approach for single-trial event-related potential (ERP) estimation. Existing studies assume a Markov diffusion process for the dynamics of ERP parameters, which is recursively estimated by optimal filtering approaches such as the Kalman filter (KF). However, these studies only consider estimation of the ERP state parameters while the model parameters are pre-specified by manual tuning, which is time-consuming for practical usage besides giving suboptimal estimates. We extend the KF approach by adding EM-based maximum likelihood estimation of the model parameters to obtain more accurate ERP estimates automatically. We also introduce different model variants by allowing flexibility in the covariance structure of the model noises. Optimal model selection is performed based on the Akaike Information Criterion (AIC). The method is applied to the estimation of chirp-evoked auditory brainstem responses (ABRs) for detection of wave V, critical for the assessment of hearing loss. Results show that the use of more complex covariance structures yields better estimates of inter-trial variability.
Pal, Suvra; Balakrishnan, N
2017-05-16
In this paper, we develop likelihood inference based on the expectation maximization (EM) algorithm for the Box-Cox transformation cure rate model, assuming the lifetimes follow a Weibull distribution. A simulation study is carried out to demonstrate the performance of the proposed estimation method. Through Monte Carlo simulations, we also study the effect of model mis-specification on the estimate of the cure rate. Finally, we analyze a well-known melanoma data set with the model and the inferential method developed here.
Mat Jafri, Mohd. Zubir; Abdulbaqi, Hayder Saad; Mutter, Kussay N.; Mustapha, Iskandar Shahrim; Omar, Ahmad Fairuz
2017-06-01
A brain tumour is an abnormal growth of tissue in the brain. Most tumour volume measurements are carried out manually by the radiographer and radiologist without any automated support. This manual method is time-consuming and may give inaccurate results. Treatment, diagnosis, and the signs and symptoms of brain tumours depend mainly on the tumour volume and its location. In this paper, an approach is proposed to improve volume measurement of brain tumours, together with a new method to determine the tumour location. The current study presents a hybrid of two methods. The first is hidden Markov random field - expectation maximization (HMRFEM), which provides an initial classification of the image. The second employs thresholding, which enables the final segmentation. In this method, the tumour volume is calculated using voxel dimension measurements. The brain tumour location was determined accurately in T2-weighted MRI images using a new algorithm. According to the results, this process proved more useful than the manual method. Thus, it provides the possibility of calculating the volume and determining the location of a brain tumour.
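The voxel-based volume computation mentioned above is straightforward once a binary segmentation mask exists; a minimal sketch (our own naming, assuming voxel dimensions are given in millimetres):

```python
def tumour_volume_ml(mask, voxel_dims_mm):
    """Volume of a binary segmentation mask in millilitres, given voxel
    dimensions (dx, dy, dz) in mm. 1 ml = 1 cm^3 = 1000 mm^3."""
    dx, dy, dz = voxel_dims_mm
    n_voxels = sum(sum(sum(1 for v in row if v) for row in sl) for sl in mask)
    return n_voxels * dx * dy * dz / 1000.0

# a toy 2-slice "segmentation": 6 tumour voxels of 1.0 x 1.0 x 5.0 mm each
mask = [[[1, 1, 0], [1, 0, 0]],
        [[1, 1, 0], [1, 0, 0]]]
print(tumour_volume_ml(mask, (1.0, 1.0, 5.0)))   # 6 voxels * 5 mm^3 = 0.03 ml
```

The segmentation quality, not this arithmetic, is what determines accuracy, which is why the hybrid HMRFEM/threshold step matters.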
Nonlinear Impairment Compensation Using Expectation Maximization for PDM 16-QAM Systems
DEFF Research Database (Denmark)
Zibar, Darko; Winther, Ole; Franceschi, Niccolo
2012-01-01
We show experimentally that by using a nonlinear signal processing algorithm, expectation maximization, the nonlinear system tolerance can be increased by 2 dB. Expectation maximization is also effective in combating I/Q modulator nonlinearities and laser linewidth. …
DEFF Research Database (Denmark)
Zibar, Darko; Winther, Ole; Franceschi, Niccolo
2012-01-01
In this paper, we show numerically and experimentally that the expectation maximization (EM) algorithm is a powerful tool for combating system impairments such as fibre nonlinearities, in-phase and quadrature (I/Q) modulator imperfections and laser linewidth. The EM algorithm is an iterative algorithm …
Joint Iterative Carrier Synchronization and Signal Detection Employing Expectation Maximization
DEFF Research Database (Denmark)
Zibar, Darko; de Carvalho, Luis Henrique Hecker; Estaran Tolosa, Jose Manuel
2014-01-01
In this paper, joint estimation of carrier frequency, phase, signal means and noise variance, in a maximum likelihood sense, is performed iteratively by employing expectation maximization. The parameter estimation is soft-decision driven and allows joint carrier synchronization and data detection. … The algorithm is tested in a mixed line rate optical transmission scenario employing a dual polarization 448 Gb/s 16-QAM signal surrounded by eight on-off keying channels in a 50 GHz grid. It is shown that joint carrier synchronization and data detection are more robust towards optical transmitter impairments … and nonlinear phase noise, compared to a digital phase-locked loop (PLL) followed by hard decisions. Additionally, soft-decision driven joint carrier synchronization and detection offers an improvement of 0.5 dB in terms of input power compared to hard-decision digital PLL based carrier synchronization …
Statistical Inference of DNA Translocation using Parallel Expectation Maximization
Emmett, Kevin; Rosenstein, Jacob; Pfau, David; Bamberger, Akiva; Shepard, Ken; Wiggins, Chris
2013-03-01
DNA translocation through a nanopore is an attractive candidate for a next-generation DNA sequencing platform; however, the stochastic motion of the molecule within the pore, allowing both forward and backward movement, prevents easy inference of the true sequence from observed data. We model diffusion of an input DNA sequence through a nanopore as a biased random walk with noise, and describe an algorithm for efficient statistical reconstruction of the input sequence given data consisting of a set of time-series traces. The data are modeled with a hidden Markov model, and parallel expectation maximization is used to learn the most probable input sequence generating the observed traces. Bounds on inference accuracy are analyzed as a function of model parameters, including forward bias, error rate, and the number of traces. The number of traces is shown to have the strongest influence on algorithm performance, allowing high inference accuracy even in extremely noisy environments. Incorrectly identified state transitions account for the majority of inference errors, and we introduce entropy-based metaheuristics for identifying and eliminating these errors. Inference is robust, fast, and scales to input sequences on the order of several kilobases.
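Stripped of read noise and the HMM emission layer, the transition model above is just a biased random walk over sequence positions. A toy sketch (our parameter names, boundaries ignored) simulating such a walk and recovering the forward bias by maximum likelihood:

```python
import random

def simulate_walk(p_fwd, n_steps, seed=0):
    """Idealized translocation: a biased random walk over sequence positions.
    Boundaries and observation noise are ignored in this sketch."""
    rng = random.Random(seed)
    pos, trace = 0, [0]
    for _ in range(n_steps):
        pos += 1 if rng.random() < p_fwd else -1   # forward with prob p_fwd
        trace.append(pos)
    return trace

def estimate_bias(trace):
    """MLE of the forward-step probability from a noise-free position trace:
    simply the fraction of observed steps that moved forward."""
    fwd = sum(1 for a, b in zip(trace, trace[1:]) if b > a)
    return fwd / (len(trace) - 1)

trace = simulate_walk(p_fwd=0.7, n_steps=5000)
print(estimate_bias(trace))
```

With noisy emissions this closed-form count is no longer available, which is exactly where the Baum-Welch/EM machinery over many parallel traces comes in.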
Expectation-Maximization Tensor Factorization for Practical Location Privacy Attacks
Directory of Open Access Journals (Sweden)
Murakami Takao
2017-10-01
Location privacy attacks based on a Markov chain model have been widely studied to de-anonymize or de-obfuscate mobility traces. An adversary can perform various kinds of location privacy attacks using a personalized transition matrix trained for each target user. However, the amount of training data available to the adversary can be very small, since many users do not disclose much location information in their daily lives. In addition, many locations can be missing from the training traces, since many users disclose their locations not continuously but sporadically. In this paper, we show that the Markov chain model can be a threat even in this realistic situation. Specifically, we focus on the training phase (i.e., the mobility profile building phase) and propose Expectation-Maximization Tensor Factorization (EMTF), which alternates between computing a distribution of missing locations (E-step) and computing personalized transition matrices via tensor factorization (M-step). Since the time complexity of EMTF is exponential in the number of missing locations, we propose two approximate learning methods, one using the Viterbi algorithm and the other using the Forward Filtering Backward Sampling (FFBS) algorithm. We apply our learning methods to a de-anonymization attack and a localization attack, and evaluate them using three real datasets. The results show that our learning methods significantly outperform a random guess, even when there is only one training trace composed of 10 locations per user and each location is missing with probability 80% (i.e., even when users hardly disclose two temporally continuous locations).
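One of the two approximate learning methods relies on the Viterbi algorithm. Below is a generic sketch (not the EMTF procedure itself, and all names ours) of using Viterbi to fill the missing locations in a trace under a known transition matrix, which is the core subroutine such an approximation builds on:

```python
import numpy as np

def viterbi_fill(trans, trace):
    """Most probable completion of a location trace under a Markov chain.
    `trace[t]` is a state index, or None where the location is missing."""
    n = trans.shape[0]
    logp = np.log(trans + 1e-12)
    T = len(trace)
    # delta[t, s]: best log-prob of a completion of trace[:t+1] ending in state s
    delta = np.full((T, n), -np.inf)
    back = np.zeros((T, n), dtype=int)
    for s in ([trace[0]] if trace[0] is not None else range(n)):
        delta[0, s] = 0.0
    for t in range(1, T):
        allowed = [trace[t]] if trace[t] is not None else range(n)
        for s in allowed:
            scores = delta[t - 1] + logp[:, s]
            back[t, s] = int(np.argmax(scores))
            delta[t, s] = scores[back[t, s]]
    path = [int(np.argmax(delta[-1]))]        # backtrack from the best final state
    for t in range(T - 1, 0, -1):
        path.append(back[t, path[-1]])
    return path[::-1]

# toy chain over 3 locations with strong self-transitions
trans = np.array([[0.8, 0.1, 0.1],
                  [0.1, 0.8, 0.1],
                  [0.1, 0.1, 0.8]])
print(viterbi_fill(trans, [0, None, None, 2, None, 2]))
```

The observed entries are kept fixed while the gaps are filled with the jointly likeliest states; EMTF alternates this kind of missing-location inference with re-estimating the transition tensor.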
A New Augmentation Based Algorithm for Extracting Maximal Chordal Subgraphs.
Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh
2015-02-01
A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In this paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. We experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
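A minimal sketch of the augmentation idea, assuming a chordality test based on Maximum Cardinality Search. The published algorithm starts from a spanning chordal subgraph and is designed for parallel execution, whereas this toy version (all names ours) starts from the empty subgraph and loops sequentially until no further edge can be added:

```python
def mcs_order(adj):
    """Maximum Cardinality Search visit order: repeatedly pick the vertex with
    the most already-visited neighbours."""
    weight = {v: 0 for v in adj}
    order = []
    while weight:
        v = max(weight, key=lambda u: weight[u])
        order.append(v)
        del weight[v]
        for u in adj[v]:
            if u in weight:
                weight[u] += 1
    return order

def is_chordal(adj):
    """A graph is chordal iff the reverse MCS order is a perfect elimination
    ordering: each vertex's later neighbours (minus the nearest) must all be
    adjacent to that nearest later neighbour."""
    peo = list(reversed(mcs_order(adj)))
    pos = {v: i for i, v in enumerate(peo)}
    for v in peo:
        later = [u for u in adj[v] if pos[u] > pos[v]]
        if later:
            w = min(later, key=pos.get)
            if any(x != w and x not in adj[w] for x in later):
                return False
    return True

def maximal_chordal_subgraph(adj):
    """Naive augmentation: add each edge that keeps the subgraph chordal, and
    repeat passes until no edge can be added, guaranteeing maximality."""
    sub = {v: set() for v in adj}
    edges = [(u, v) for u in adj for v in adj[u] if u < v]
    changed = True
    while changed:
        changed = False
        for u, v in edges:
            if v in sub[u]:
                continue
            sub[u].add(v); sub[v].add(u)
            if is_chordal(sub):
                changed = True
            else:
                sub[u].discard(v); sub[v].discard(u)
    return sub

# 4-cycle a-b-c-d: not chordal; a maximal chordal subgraph keeps 3 of 4 edges
adj = {'a': {'b', 'd'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'a', 'c'}}
sub = maximal_chordal_subgraph(adj)
print(sum(len(s) for s in sub.values()) // 2)   # 3 edges survive
```

Re-testing chordality per candidate edge is far more expensive than the paper's targeted augmentation, but it makes the maximality invariant easy to see.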
Fitting a mixture model by expectation maximization to discover motifs in biopolymers
Energy Technology Data Exchange (ETDEWEB)
Bailey, T.L.; Elkan, C. [Univ. of California, La Jolla, CA (United States)]
1994-12-31
The algorithm described in this paper discovers one or more motifs in a collection of DNA or protein sequences by using the technique of expectation maximization to fit a two-component finite mixture model to the set of sequences. Multiple motifs are found by fitting a mixture model to the data, probabilistically erasing the occurrences of the motif thus found, and repeating the process to find successive motifs. The algorithm requires only a set of unaligned sequences and a number specifying the width of the motifs as input. It returns a model of each motif and a threshold which together can be used as a Bayes-optimal classifier for searching for occurrences of the motif in other databases. The algorithm estimates how many times each motif occurs in each sequence in the dataset and outputs an alignment of the occurrences of the motif. The algorithm is capable of discovering several different motifs with differing numbers of occurrences in a single dataset.
Expectation-Maximization Method for EEG-Based Continuous Cursor Control
Directory of Open Access Journals (Sweden)
Yixiao Wang
2007-01-01
To develop effective learning algorithms for continuous prediction of cursor movement using EEG signals is a challenging research issue in brain-computer interfaces (BCI). In this paper, we propose a novel statistical approach based on the expectation-maximization (EM) method to learn the parameters of a classifier for EEG-based cursor control. To train a classifier for continuous prediction, trials in the training data set are first divided into segments. The difficulty is that the actual intention (label) at each time interval (segment) is unknown. To handle the uncertainty of the segment labels, we treat the unknown labels as hidden variables in a lower bound on the log posterior and maximize this lower bound via an EM-like algorithm. Experimental results show that the averaged accuracy of the proposed method is among the best.
Improved Algorithms OF CELF and CELF++ for Influence Maximization
Directory of Open Access Journals (Sweden)
Jiaguo Lv
2014-06-01
Motivated by wide applications in fields such as viral marketing and sales promotion, influence maximization has been one of the most important and extensively studied problems in social networks. However, the classical KK-Greedy algorithm for influence maximization is inefficient. Two major sources of the algorithm's inefficiency are analyzed in this paper. Following the analysis of the CELF and CELF++ algorithms, once a new seed u is selected, the nodes already in the influence set of u can never contribute any marginal gain; through this optimization strategy, many redundant nodes are removed from the candidate set. Based on this strategy, two improved algorithms, Lv_CELF and Lv_CELF++, are proposed in this study. To evaluate them, the two algorithms and their benchmarks, CELF and CELF++, were run on several real-world datasets. Influence degree and running time were employed to measure performance and efficiency, respectively. Experimental results showed that, compared with the benchmark algorithms CELF and CELF++, the new algorithms Lv_CELF and Lv_CELF++ achieve matching influence with higher efficiency. Solutions with the proposed optimization strategy can be useful for decision-making problems in scenarios related to the influence maximization problem.
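The CELF-style lazy evaluation these variants build on is independent of how spread is estimated: stale marginal-gain bounds are kept in a priority queue and only re-evaluated when they reach the top, which is sound because the objective is monotone submodular. A sketch (all names ours) over a simple coverage surrogate for spread; real CELF estimates spread by Monte Carlo simulation under a diffusion model:

```python
import heapq

def lazy_greedy(universe, gain_sets, k):
    """CELF-style lazy greedy for the monotone submodular objective
    sigma(S) = |union of gain_sets[v] for v in S| (a coverage stand-in for
    expected influence spread)."""
    covered = set()
    # max-heap entries: (-upper_bound, node, round when the bound was computed)
    heap = [(-len(gain_sets[v]), v, 0) for v in universe]
    heapq.heapify(heap)
    seeds = []
    while len(seeds) < k and heap:
        neg_gain, v, stamp = heapq.heappop(heap)
        if stamp == len(seeds):            # bound is current: safe to commit
            seeds.append(v)
            covered |= gain_sets[v]
        else:                              # stale bound: refresh and push back
            fresh = len(gain_sets[v] - covered)
            heapq.heappush(heap, (-fresh, v, len(seeds)))
    return seeds, len(covered)

neigh = {1: {1, 2, 3}, 2: {2, 3}, 3: {3, 4}, 4: {4, 5, 6}, 5: {5}}
seeds, spread = lazy_greedy(neigh.keys(), neigh, k=2)
print(seeds, spread)   # → [1, 4] 6
```

Submodularity guarantees a cached gain only shrinks as seeds are added, so a node whose cached bound still tops the heap after a refresh must be the true argmax; the Lv_ variants additionally prune nodes already inside a seed's influence set.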
PEM-PCA: A Parallel Expectation-Maximization PCA Face Recognition Architecture
Directory of Open Access Journals (Sweden)
Kanokmon Rujirakul
2014-01-01
Principal component analysis (PCA) has traditionally been used as a feature extraction technique in face recognition systems, yielding high accuracy while requiring only a small number of features. However, the covariance matrix and eigenvalue decomposition stages cause high computational complexity, especially for a large database. Thus, this research presents an alternative approach utilizing an expectation-maximization algorithm to reduce the determinant matrix manipulation, lowering the complexity of these stages. To improve the computational time, a novel parallel architecture was employed to exploit parallelization of matrix computation during the feature extraction and classification stages, including parallel preprocessing and their combinations, the so-called Parallel Expectation-Maximization PCA architecture. Compared to traditional PCA and its derivatives, the results indicate lower complexity with an insignificant difference in recognition precision, leading to high-speed face recognition systems; that is, speed-ups of over nine and three times over PCA and parallel PCA, respectively.
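The EM route to PCA avoids forming the covariance matrix or its eigendecomposition explicitly, which is what makes it attractive for large databases. A serial sketch in the style of Roweis's EM algorithm for PCA (not the parallel architecture described above; names ours):

```python
import numpy as np

def em_pca(Y, k, n_iter=100, seed=0):
    """EM for PCA: alternate between inferring latent coordinates X and
    re-fitting the basis C, never forming the d x d covariance matrix.
    Y is (d, n) and assumed centered; returns an orthonormal basis of the
    estimated k-dimensional principal subspace."""
    rng = np.random.default_rng(seed)
    C = rng.standard_normal((Y.shape[0], k))
    for _ in range(n_iter):
        X = np.linalg.solve(C.T @ C, C.T @ Y)    # E-step: latent coordinates
        C = Y @ X.T @ np.linalg.inv(X @ X.T)     # M-step: refit the basis
    Q, _ = np.linalg.qr(C)                       # orthonormalize the span
    return Q

rng = np.random.default_rng(1)
# data with dominant variance along direction u = (3, 1, 0) / sqrt(10)
u = np.array([3.0, 1.0, 0.0]) / np.sqrt(10.0)
Y = np.outer(u, rng.standard_normal(500) * 5.0) + 0.1 * rng.standard_normal((3, 500))
Y -= Y.mean(axis=1, keepdims=True)
Q = em_pca(Y, k=1)
print(abs(Q[:, 0] @ u))   # near 1: the recovered direction aligns with u
```

Each iteration costs only O(dnk) matrix products plus small k x k solves, and both the E- and M-step products are exactly the kind of matrix work the parallel architecture distributes.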
Directory of Open Access Journals (Sweden)
Huang Yufei
2007-01-01
Full Text Available We investigate in this paper reverse engineering of gene regulatory networks from time-series microarray data. We apply dynamic Bayesian networks (DBNs) for modeling cell cycle regulations. In developing a network inference algorithm, we focus on soft solutions that can provide a posteriori probability (APP) of network topology. In particular, we propose a variational Bayesian structural expectation maximization algorithm that can learn the posterior distribution of the network model parameters and topology jointly. We also show how the obtained APPs of the network topology can be used in a Bayesian data integration strategy to integrate two different microarray data sets. The proposed VBSEM algorithm has been tested on yeast cell cycle data sets. To evaluate the confidence of the inferred networks, we apply a moving block bootstrap method. The inferred network is validated by comparing it to the KEGG pathway map.
Computing a Clique Tree with the Algorithm Maximal Label Search
Directory of Open Access Journals (Sweden)
Anne Berry
2017-01-01
Full Text Available The algorithm MLS (Maximal Label Search) is a graph search algorithm that generalizes the algorithms Maximum Cardinality Search (MCS), Lexicographic Breadth-First Search (LexBFS), Lexicographic Depth-First Search (LexDFS) and Maximal Neighborhood Search (MNS). On a chordal graph, MLS computes a PEO (perfect elimination ordering) of the graph. We show how the algorithm MLS can be modified to compute a PMO (perfect moplex ordering), as well as a clique tree and the minimal separators of a chordal graph. We give a necessary and sufficient condition on the labeling structure of MLS for the beginning of a new clique in the clique tree to be detected by a condition on labels. MLS is also used to compute a clique tree of the complement graph, and new cliques in the complement graph can be detected by a condition on labels for any labeling structure. We provide a linear time algorithm computing a PMO and the corresponding generators of the maximal cliques and minimal separators of the complement graph. On a non-chordal graph, the algorithm MLSM, a graph search algorithm computing an MEO and a minimal triangulation of the graph, is used to compute an atom tree of the clique minimal separator decomposition of any graph.
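A minimal sketch of plain MCS, the best-known special case of MLS: repeatedly visit the unvisited vertex with the most visited neighbours; on a chordal graph the reverse visit order is a PEO. This is illustrative only and omits the general labeling structures, clique-tree and moplex machinery of the paper.

```python
def mcs(adj):
    """Maximum Cardinality Search over an adjacency dict {v: set_of_neighbours}.
    On a chordal graph, the reversed visit order is a perfect
    elimination ordering (PEO)."""
    label = {v: 0 for v in adj}     # number of already-visited neighbours
    order = []
    unvisited = set(adj)
    while unvisited:
        v = max(unvisited, key=lambda u: label[u])   # max cardinality choice
        order.append(v)
        unvisited.remove(v)
        for w in adj[v]:
            if w in unvisited:
                label[w] += 1
    return order[::-1]
```

The defining PEO property, used in the test below, is that each vertex's neighbours appearing later in the ordering form a clique.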
Improved Algorithm for Throughput Maximization in MC-CDMA
Hema Kale; C.G. Dethe; M.M. Mushrif
2012-01-01
The Multi-Carrier Code Division Multiple Access (MC-CDMA) is becoming a very significant downlink multiple access technique for high-rate data transmission in the fourth generation wireless communication systems. By means of efficient resource allocation higher data rate i.e. throughput can be achieved. This paper evaluates the performance of criteria used for group (subchannel) allocation employed in downlink transmission, which results in throughput maximization. Proposed algorithm gives th...
AREM: Aligning Short Reads from ChIP-Sequencing by Expectation Maximization
Newkirk, Daniel; Biesinger, Jacob; Chon, Alvin; Yokomori, Kyoko; Xie, Xiaohui
High-throughput sequencing coupled to chromatin immunoprecipitation (ChIP-Seq) is widely used in characterizing genome-wide binding patterns of transcription factors, cofactors, chromatin modifiers, and other DNA binding proteins. A key step in ChIP-Seq data analysis is to map short reads from high-throughput sequencing to a reference genome and identify peak regions enriched with short reads. Although several methods have been proposed for ChIP-Seq analysis, most existing methods only consider reads that can be uniquely placed in the reference genome, and therefore have low power for detecting peaks located within repeat sequences. Here we introduce a probabilistic approach for ChIP-Seq data analysis which utilizes all reads, providing a truly genome-wide view of binding patterns. Reads are modeled using a mixture model corresponding to K enriched regions and a null genomic background. We use maximum likelihood to estimate the locations of the enriched regions, and implement an expectation-maximization (E-M) algorithm, called AREM (aligning reads by expectation maximization), to update the alignment probabilities of each read to different genomic locations. We apply the algorithm to identify genome-wide binding events of two proteins: Rad21, a component of cohesin and a key factor involved in chromatid cohesion, and Srebp-1, a transcription factor important for lipid/cholesterol homeostasis. Using AREM, we were able to identify 19,935 Rad21 peaks and 1,748 Srebp-1 peaks in the mouse genome with high confidence, including 1,517 (7.6%) Rad21 peaks and 227 (13%) Srebp-1 peaks that were missed using only uniquely mapped reads. The open source implementation of our algorithm is available at http://sourceforge.net/projects/arem
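The core E/M loop for fractionally assigning multi-mapping reads can be illustrated with a toy model. This is a deliberate simplification of AREM: candidate regions per read are given up front, and the null genomic background component is omitted; the function name and interface are illustrative.

```python
import numpy as np

def multiread_em(candidates, n_regions, iters=50):
    """Toy EM in the spirit of AREM: each read carries a list of candidate
    regions. E-step: split each read fractionally across its candidates in
    proportion to current region weights. M-step: re-estimate the weights."""
    pi = np.full(n_regions, 1.0 / n_regions)       # region enrichment weights
    for _ in range(iters):
        counts = np.zeros(n_regions)
        for regions in candidates:                 # E-step, one read at a time
            w = pi[regions]
            counts[regions] += w / w.sum()         # fractional assignment
        pi = counts / counts.sum()                 # M-step
    return pi
```

Uniquely mapping reads anchor the weights, and those weights in turn pull ambiguous reads toward the more enriched region, which is how repeat-region peaks become detectable.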
Partial AUC maximization for essential gene prediction using genetic algorithms.
Hwang, Kyu-Baek; Ha, Beom-Yong; Ju, Sanghun; Kim, Sangsoo
2013-01-01
Identifying genes indispensable for an organism's life and their characteristics is one of the central questions in current biological research, and hence it would be helpful to develop computational approaches towards the prediction of essential genes. The performance of a predictor is usually measured by the area under the receiver operating characteristic curve (AUC). We propose a novel method by implementing genetic algorithms to maximize the partial AUC that is restricted to a specific interval of lower false positive rate (FPR), the region relevant to follow-up experimental validation. Our predictor uses various features based on sequence information, protein-protein interaction network topology, and gene expression profiles. A feature selection wrapper was developed to alleviate the over-fitting problem and to weigh each feature's relevance to prediction. We evaluated our method using the proteome of budding yeast. Our implementation of genetic algorithms maximizing the partial AUC below 0.05 or 0.10 of FPR outperformed other popular classification methods.
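The partial AUC below an FPR cutoff, the fitness the genetic algorithm maximizes, can be computed directly from classifier scores. A small NumPy sketch under the assumption of binary 0/1 labels, normalized so that a perfect predictor scores 1.0 (the paper's GA wrapper and feature selection are not shown):

```python
import numpy as np

def partial_auc(y_true, scores, max_fpr=0.1):
    """Area under the ROC curve restricted to FPR in [0, max_fpr],
    divided by max_fpr so a perfect classifier scores 1.0."""
    y = np.asarray(y_true, dtype=float)
    s = np.asarray(scores, dtype=float)
    y = y[np.argsort(-s)]                                   # descending score
    tpr = np.concatenate(([0.0], np.cumsum(y) / y.sum()))
    fpr = np.concatenate(([0.0], np.cumsum(1 - y) / (1 - y).sum()))
    tpr_cut = np.interp(max_fpr, fpr, tpr)                  # curve at the cutoff
    keep = fpr <= max_fpr
    x = np.concatenate((fpr[keep], [max_fpr]))
    t = np.concatenate((tpr[keep], [tpr_cut]))
    area = np.sum((x[1:] - x[:-1]) * (t[1:] + t[:-1]) / 2.0)  # trapezoid rule
    return area / max_fpr
```

Restricting the integral to small FPR rewards rankings that are accurate at the top of the list, the region relevant to follow-up experimental validation.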
Liang, Liang; Shen, Hongying; De Camilli, Pietro; Toomre, Derek K; Duncan, James S
2011-01-01
Multi-angle total internal reflection fluorescence microscopy (MA-TIRFM) is a new generation of TIRF microscopy to study cellular processes near dorsal cell membrane in 4 dimensions (3D+t). To perform quantitative analysis using MA-TIRFM, it is necessary to track subcellular particles in these processes. In this paper, we propose a method based on a MAP framework for automatic particle tracking and apply it to track clathrin coated pits (CCPs). The expectation maximization (EM) algorithm is employed to solve the MAP problem. To provide the initial estimations for the EM algorithm, we develop a forward filter based on the most probable trajectory (MPT) filter. Multiple linear models are used to model particle dynamics. For CCP tracking, we use two linear models to describe constrained Brownian motion and fluorophore variation according to CCP properties. The tracking method is evaluated on synthetic data and results show that it has high accuracy. The result on real data confirmed by human expert cell biologists is also presented.
Awate, Suyash P; Radhakrishnan, Thyagarajan
2015-01-01
In microscopy imaging, colocalization between two biological entities (e.g., protein-protein or protein-cell) refers to the (stochastic) dependencies between the spatial locations of the two entities in the biological specimen. Measuring colocalization between two entities relies on fluorescence imaging of the specimen using two fluorescent chemicals, each of which indicates the presence/absence of one of the entities at any pixel location. State-of-the-art methods for estimating colocalization rely on post-processing image data using an ad hoc sequence of algorithms with many free parameters that are tuned visually, which leads to a loss of reproducibility of the results. This paper proposes a new framework for estimating the nature and strength of colocalization directly from corrupted image data by solving a single unified optimization problem that automatically deals with noise, object labeling, and parameter tuning. The proposed framework relies on probabilistic graphical image modeling and a novel inference scheme using variational Bayesian expectation maximization for estimating all model parameters, including colocalization, from data. Results on simulated and real-world data demonstrate improved performance over the state of the art.
Two Time Point MS Lesion Segmentation in Brain MRI: An Expectation-Maximization Framework.
Jain, Saurabh; Ribbens, Annemie; Sima, Diana M; Cambron, Melissa; De Keyser, Jacques; Wang, Chenyu; Barnett, Michael H; Van Huffel, Sabine; Maes, Frederik; Smeets, Dirk
2016-01-01
Purpose: Lesion volume is a meaningful measure in multiple sclerosis (MS) prognosis. Manual lesion segmentation for computing volume in a single or multiple time points is time consuming and suffers from intra and inter-observer variability. Methods: In this paper, we present MSmetrix-long: a joint expectation-maximization (EM) framework for two time point white matter (WM) lesion segmentation. MSmetrix-long takes as input a 3D T1-weighted and a 3D FLAIR MR image and segments lesions in three steps: (1) cross-sectional lesion segmentation of the two time points; (2) creation of difference image, which is used to model the lesion evolution; (3) a joint EM lesion segmentation framework that uses output of step (1) and step (2) to provide the final lesion segmentation. The accuracy (Dice score) and reproducibility (absolute lesion volume difference) of MSmetrix-long is evaluated using two datasets. Results: On the first dataset, the median Dice score between MSmetrix-long and expert lesion segmentation was 0.63 and the Pearson correlation coefficient (PCC) was equal to 0.96. On the second dataset, the median absolute volume difference was 0.11 ml. Conclusions: MSmetrix-long is accurate and consistent in segmenting MS lesions. Also, MSmetrix-long compares favorably with the publicly available longitudinal MS lesion segmentation algorithm of Lesion Segmentation Toolbox.
Reichman, Daniel; Morton, Kenneth D.; Collins, Leslie M.; Torrione, Peter A.
2014-05-01
Ground Penetrating Radar (GPR) is a very promising technology for subsurface threat detection. A successful algorithm employing GPR should achieve high detection rates at a low false-alarm rate and do so at operationally relevant speeds. GPRs measure reflections at dielectric boundaries that occur at the interfaces between different materials. These boundaries may occur at any depth, within the sensor's range, and furthermore, the dielectric changes could be such that they induce a 180 degree phase shift in the received signal relative to the emitted GPR pulse. As a result of these time-of-arrival and phase variations, extracting robust features from target responses in GPR is not straightforward. In this work, a method to mitigate polarity and alignment variations based on an expectation-maximization (EM) principal-component analysis (PCA) approach is proposed. This work demonstrates how model-based target alignment can significantly improve detection performance. Performance is measured according to the improvement in the receiver operating characteristic (ROC) curve for classification before and after the data is properly aligned and phase-corrected.
Karakatsanis, Nicolas A; Tsoumpas, Charalampos; Zaidi, Habib
2017-09-01
Bulk body motion may randomly occur during PET acquisitions, introducing blurring, attenuation-emission mismatches and, in dynamic PET, discontinuities in the measured time activity curves between consecutive frames. Meanwhile, dynamic PET scans are longer, thus increasing the probability of bulk motion. In this study, we propose a streamlined 3D PET motion-compensated image reconstruction (3D-MCIR) framework, capable of robustly deconvolving intra-frame motion from a static or dynamic 3D sinogram. The presented 3D-MCIR methods need not partition the data into multiple gates, as 4D MCIR algorithms do, or access list-mode (LM) data, as LM MCIR methods do, both of which require increased computation or memory resources. The proposed algorithms can support compensation for any periodic and non-periodic motion, such as cardio-respiratory or bulk motion, the latter including rolling, twisting or drifting. Inspired by the widely adopted point-spread function (PSF) deconvolution 3D PET reconstruction techniques, here we introduce an image-based 3D generalized motion deconvolution method within the standard 3D maximum-likelihood expectation-maximization (ML-EM) reconstruction framework. In particular, we initially integrate a motion blurring kernel, accounting for every tracked motion within a frame, as an additional ML-EM modeling component in the image space (integrated 3D-MCIR). Subsequently, we replace the integrated model component with a nested iterative Richardson-Lucy (RL) image-based deconvolution method to accelerate the ML-EM algorithm convergence rate (RL-3D-MCIR). The final method was evaluated with realistic simulations of whole-body dynamic PET data employing the XCAT phantom and real human bulk motion profiles, the latter estimated from volunteer dynamic MRI scans. In addition, metabolic uptake rate Ki parametric images were generated with the standard Patlak method. Our results demonstrate significant improvement in contrast-to-noise ratio (CNR) and
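The nested Richardson-Lucy step can be illustrated in 1-D with a known blur kernel. This is the generic RL multiplicative update for a normalized, symmetric-boundary-safe kernel, not the paper's full 3D PET pipeline; the function name is illustrative.

```python
import numpy as np

def richardson_lucy(blurred, psf, iters=50):
    """1-D Richardson-Lucy deconvolution: the multiplicative update that
    the paper nests inside ML-EM to undo a known (motion) blur kernel."""
    est = np.full_like(blurred, blurred.mean())     # flat nonnegative start
    psf_flip = psf[::-1]
    for _ in range(iters):
        conv = np.convolve(est, psf, mode='same')           # forward model
        ratio = blurred / np.maximum(conv, 1e-12)           # data / prediction
        est = est * np.convolve(ratio, psf_flip, mode='same')  # RL correction
    return est
```

Each update multiplies the estimate by a back-projected data-to-prediction ratio, so nonnegativity is preserved automatically, which is why RL slots naturally into an ML-EM reconstruction loop.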
MotifHyades: expectation maximization for de novo DNA motif pair discovery on paired sequences.
Wong, Ka-Chun
2017-10-01
In higher eukaryotes, protein-DNA binding interactions are the central activities in gene regulation. In particular, DNA motifs such as transcription factor binding sites are the key components in gene transcription. Harnessing the recently available chromatin interaction data, computational methods are desired for systematically identifying the coupled DNA motif pairs enriched on long-range chromatin-interacting sequence pairs (e.g. promoter-enhancer pairs). To fill the void, a novel probabilistic model (namely, MotifHyades) is proposed and developed for de novo DNA motif pair discovery on paired sequences. In particular, two expectation maximization algorithms are derived for efficient model training with linear computational complexity. Under diverse scenarios, MotifHyades is demonstrated to be faster and more accurate than the existing ad hoc computational pipeline. In addition, MotifHyades is applied to discover thousands of DNA motif pairs with a higher gold-standard motif matching ratio, higher DNase accessibility and higher evolutionary conservation than the previous ones in the human K562 cell line. Lastly, it has been run on five other human cell lines (i.e. GM12878, HeLa-S3, HUVEC, IMR90, and NHEK), revealing thousands more novel DNA motif pairs characterized across a broad spectrum of genomic features on long-range promoter-enhancer pairs. The matrix-algebra-optimized versions of MotifHyades and the discovered DNA motif pairs can be found at http://bioinfo.cs.cityu.edu.hk/MotifHyades. kc.w@cityu.edu.hk. Supplementary data are available at Bioinformatics online.
Directory of Open Access Journals (Sweden)
Kujawińska Agnieszka
2016-06-01
Full Text Available The article presents a study applying the proposed method of cluster analysis to support purchasing decisions in the welding industry. The authors analyze the usefulness of the non-hierarchical Expectation Maximization (EM) method in the selection of material (212 combinations of flux and wire melt) for the SAW (Submerged Arc Welding) process. The proposed approach to cluster analysis proves useful in supporting purchase decisions.
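For the non-hierarchical EM clustering used here, a minimal 1-D Gaussian-mixture EM conveys the idea; this is an illustrative sketch (quantile initialization is an assumption of the sketch), and production use would reach for an established implementation such as scikit-learn's GaussianMixture.

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=100):
    """Minimal EM for a 1-D Gaussian mixture: returns component means,
    variances and mixing weights."""
    x = np.asarray(x, dtype=float)
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))    # spread-out initial means
    var = np.full(k, x.var())
    pi = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        r = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the responsibilities.
        n_k = r.sum(axis=0)
        mu = (r * x[:, None]).sum(axis=0) / n_k
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / n_k
        pi = n_k / len(x)
    return mu, var, pi
```

Unlike k-means, the soft responsibilities let points sit partially in several clusters, which is what makes EM clustering attractive for overlapping material-property groups.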
Algorithms for k-Colouring and Finding Maximal Independent Sets
DEFF Research Database (Denmark)
Byskov, Jesper Makholm
2003-01-01
In this extended abstract, we construct algorithms that decide for a graph with n vertices whether there exists a 4-, 5- or 6-colouring of the vertices running in time O(1.7504^n), O(2.1592^n) and O(2.3289^n), respectively, using polynomial space. For 6- or 7-colouring we construct algorithms running...
Maximizing influence in a social network: Improved results using a genetic algorithm
Zhang, Kaiqi; Du, Haifeng; Feldman, Marcus W.
2017-07-01
The influence maximization problem focuses on finding a small subset of nodes in a social network that maximizes the spread of influence. While the greedy algorithm and some improvements to it have been applied to solve this problem, the long solution time remains a problem. Stochastic optimization algorithms, such as simulated annealing, are other choices for solving this problem, but they often become trapped in local optima. We propose a genetic algorithm to solve the influence maximization problem. Through multi-population competition, using this algorithm we achieve an optimal result while maintaining diversity of the solution. We tested our method with actual networks, and our genetic algorithm performed slightly worse than the greedy algorithm but better than other algorithms.
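A generic GA over fixed-size seed sets (tournament selection, one-point crossover with a repair step, point mutation) can stand in for the paper's multi-population scheme. This is a sketch under stated assumptions: `fitness` is any set-quality function supplied by the caller (the test uses a toy coverage function, not a true influence-spread simulation), and all parameter defaults are illustrative.

```python
import random

def ga_maximize(fitness, n_items, k, pop=40, gens=60, pmut=0.2, seed=1):
    """Genetic algorithm maximizing fitness(seed_set) over k-subsets of
    range(n_items). Returns (best_set_sorted, best_fitness)."""
    rng = random.Random(seed)
    def repair(genes):                       # enforce k distinct items
        s = list(dict.fromkeys(genes))
        while len(s) < k:
            g = rng.randrange(n_items)
            if g not in s:
                s.append(g)
        return s[:k]
    popn = [rng.sample(range(n_items), k) for _ in range(pop)]
    best = max(popn, key=fitness)
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop:
            a = max(rng.sample(popn, 3), key=fitness)   # tournament selection
            b = max(rng.sample(popn, 3), key=fitness)
            cut = rng.randrange(1, k) if k > 1 else 0
            child = repair(a[:cut] + b[cut:])           # one-point crossover
            if rng.random() < pmut:                     # point mutation
                child[rng.randrange(k)] = rng.randrange(n_items)
                child = repair(child)
            nxt.append(child)
        popn = nxt
        gen_best = max(popn, key=fitness)
        if fitness(gen_best) > fitness(best):
            best = gen_best
    return sorted(best), fitness(best)
```

The repair step keeps every chromosome a valid size-k seed set, which is the main wrinkle when applying a GA to subset-selection problems like influence maximization.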
Ng, C M
2013-10-01
The development of a population PK/PD model, an essential component for model-based drug development, is both time- and labor-intensive. Graphical-processing unit (GPU) computing technology has been proposed and used to accelerate many scientific computations. The objective of this study was to develop a hybrid GPU-CPU implementation of the parallelized Monte Carlo parametric expectation maximization (MCPEM) estimation algorithm for population PK data analysis. A hybrid GPU-CPU implementation of the MCPEM algorithm (MCPEMGPU) and an identical algorithm designed for a single CPU (MCPEMCPU) were developed using MATLAB on a single computer equipped with dual Xeon 6-Core E5690 CPUs and a NVIDIA Tesla C2070 GPU parallel computing card that contained 448 stream processors. Two different PK models with rich/sparse sampling design schemes were used to simulate population data in assessing the performance of MCPEMCPU and MCPEMGPU. Results were analyzed by comparing the parameter estimation and model computation times. A speedup factor was used to assess the relative benefit of the parallelized MCPEMGPU over MCPEMCPU in shortening model computation time. The MCPEMGPU consistently achieved shorter computation times than the MCPEMCPU and can offer more than 48-fold speedup using a single GPU card. The novel hybrid GPU-CPU implementation of the parallelized MCPEM algorithm developed in this study holds great promise as the core of the next generation of modeling software for population PK/PD analysis.
Two time point MS lesion segmentation in brain MRI : an expectation-maximization framework
Jain, Saurabh; Ribbens, Annemie; Sima, Diana M.; Cambron, Melissa; De Keyser, Jacques; Wang, Chenyu; Barnett, Michael H.; van Huffel, Sabine; Maes, Frederik; Smeets, Dirk
2016-01-01
Abstract: Purpose: Lesion volume is a meaningful measure in multiple sclerosis (MS) prognosis. Manual lesion segmentation for computing volume in a single or multiple time points is time consuming and suffers from intra and inter-observer variability. Methods: In this paper, we present MSmetrix-long: a joint expectation-maximization (EM) framework for two time point white matter (WM) lesion segmentation. MSmetrix-long takes as input a 3D T1-weighted and a 3D FLAIR MR image and segments lesion...
Maximizing microbial perchlorate degradation using a genetic algorithm: consortia optimization.
Kucharzyk, Katarzyna H; Soule, Terence; Hess, Thomas F
2013-09-01
Microorganisms in consortia perform many tasks more effectively than individual organisms and in addition grow more rapidly and in greater abundance. In this work, experimental datasets were assembled consisting of all possible selected combinations of perchlorate-reducing strains of microorganisms, and their perchlorate degradation rates were evaluated. A genetic algorithm (GA) methodology was successfully applied to define sets of microbial strains that achieve maximum rates of perchlorate degradation. Over the course of twenty generations of optimization using a GA, we saw statistically significant 2.06- and 4.08-fold increases in average perchlorate degradation rates by consortia constructed using solely the perchlorate reducing bacteria (PRB) and by consortia consisting of PRB and accompanying organisms that did not degrade perchlorate, respectively. The comparison of kinetic rate constants in the two types of microbial consortia additionally showed marked increases.
Li, Xingfeng; Coyle, Damien; Maguire, Liam; McGinnity, Thomas Martin
2014-02-01
The trust region method, which originated from the Levenberg-Marquardt (LM) algorithm for mixed effect model estimation, is considered in the context of second-level functional magnetic resonance imaging (fMRI) data analysis. We first present the mathematical and optimization details of the method for mixed effect model analysis, then we compare the proposed methods with the conventional expectation-maximization (EM) algorithm on a series of synthetic and real human fMRI datasets. From simulation studies, we found that a higher damping factor for the LM algorithm performs better than a lower damping factor for fMRI data analysis. More importantly, in most cases the expectation trust region algorithm is superior to the EM algorithm in terms of accuracy if the random effect variance is large. We also compare these algorithms on real human datasets comprising repeated measures of fMRI in phase-encoded and random block experiment designs. We observed that the proposed method is faster in computation and robust to Gaussian noise for fMRI analysis. The advantages and limitations of the suggested methods are discussed. © 2013.
Nor, Shahdiba Binti Md; Mahmud, Zamalia
2016-10-01
The analysis of sports data has always aroused great interest among statisticians, and sports data have been investigated from different perspectives, often with the aim of forecasting results. The study focuses on the 12 teams that joined the Malaysian Super League (MSL) for season 2015. This paper used Bayesian expectation maximization for the generalized Bradley-Terry model to estimate all the football teams' rankings. With the generalized Bradley-Terry model it is possible to find the maximum likelihood (ML) estimate of the skill ratings λ using a simple iterative procedure. In order to maximize the likelihood, a Bayesian inferential method is needed to obtain the posterior distribution, which can be computed quickly. Each team's ability was estimated from the previous year's game results by calculating the probability of winning based on the final scores for each team. It was found that a model with tie scores does make a difference in estimating a football team's ability to win the next match. Moreover, a team with better results in the previous year has a better chance of scoring in the next game.
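The simple iterative ML procedure for Bradley-Terry skills can be sketched with the classic minorize-maximize update (Hunter, 2004). Ties and the Bayesian prior used in the paper are omitted; the win-matrix interface is an assumption of this sketch.

```python
import numpy as np

def bradley_terry(wins, iters=200):
    """Iterative ML fit of Bradley-Terry skill ratings.
    wins[i][j] = number of times team i beat team j.
    Returns skills lam normalized to sum to 1; the model gives
    P(i beats j) = lam[i] / (lam[i] + lam[j])."""
    W = np.asarray(wins, dtype=float)
    n = W.shape[0]
    N = W + W.T                                   # games played per pair
    lam = np.ones(n)
    for _ in range(iters):
        denom = N / (lam[:, None] + lam[None, :])
        np.fill_diagonal(denom, 0.0)
        lam = W.sum(axis=1) / denom.sum(axis=1)   # MM update
        lam /= lam.sum()                          # fix the arbitrary scale
    return lam
```

Each pass is guaranteed not to decrease the likelihood, so the iteration converges to the ML ratings whenever every team has at least one win and one loss in a connected comparison graph.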
Ortiz-Rosario, Alexis; Adeli, Hojjat; Buford, John A
2017-01-15
Researchers often rely on simple methods to identify involvement of neurons in a particular motor task. The historical approach has been to inspect large groups of neurons and subjectively separate neurons into groups based on the expertise of the investigator. In cases where neuron populations are small it is reasonable to inspect these neuronal recordings and their firing rates carefully to avoid data omissions. In this paper, a new methodology is presented for automatic objective classification of neurons recorded in association with behavioral tasks into groups. By identifying characteristics of neurons in a particular group, the investigator can then identify functional classes of neurons based on their relationship to the task. The methodology is based on integration of a multiple signal classification (MUSIC) algorithm to extract relevant features from the firing rate and an expectation-maximization Gaussian mixture algorithm (EM-GMM) to cluster the extracted features. The methodology is capable of identifying and clustering similar firing rate profiles automatically based on specific signal features. An empirical wavelet transform (EWT) was used to validate the features found in the MUSIC pseudospectrum and the resulting signal features captured by the methodology. Additionally, this methodology was used to inspect behavioral elements of neurons to physiologically validate the model. This methodology was tested using a set of data collected from awake behaving non-human primates. Copyright © 2016 Elsevier B.V. All rights reserved.
A Linear Time Algorithm for the k Maximal Sums Problem
DEFF Research Database (Denmark)
Brodal, Gerth Stølting; Jørgensen, Allan Grønlund
2007-01-01
Finding the sub-vector with the largest sum in a sequence of n numbers is known as the maximum sum problem. Finding the k sub-vectors with the largest sums is a natural extension of this, and is known as the k maximal sums problem. In this paper we design an optimal O(n + k) time algorithm for the k maximal sums problem. We use this algorithm to obtain algorithms solving the two-dimensional k maximal sums problem in O(m^2 · n + k) time, where the input is an m × n matrix with m ≤ n. We generalize this algorithm to solve the d-dimensional problem in O(n^(2d−1) + k) time. The space usage of all the algorithms can be reduced to O(n^(d−1) + k). This leads to the first algorithm for the k maximal sums problem in one dimension using O(n + k) time and O(k) space.
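For k = 1 the problem is the classic maximum sum subvector problem, solvable by Kadane's linear scan; the paper's O(n + k) algorithm generalizes this by organizing candidate sums in heap-ordered structures. A sketch of the k = 1 base case:

```python
def max_sum_subvector(a):
    """Kadane's O(n) scan for the maximum sum problem (the k = 1 case).
    Returns (best_sum, start, end) with a[start:end] the best sub-vector."""
    best, best_lo, best_hi = a[0], 0, 1
    cur, lo = a[0], 0
    for i in range(1, len(a)):
        if cur < 0:                  # a negative running prefix only hurts
            cur, lo = a[i], i        # restart the window at i
        else:
            cur += a[i]
        if cur > best:
            best, best_lo, best_hi = cur, lo, i + 1
    return best, best_lo, best_hi
```

The invariant is that `cur` is the best sum of a sub-vector ending at position i, which is exactly the quantity the k-best extension must track for all k candidates at once.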
Directory of Open Access Journals (Sweden)
Ye Ping
2005-12-01
Full Text Available Abstract Background Synthetic lethality experiments identify pairs of genes with complementary function. More direct functional associations (for example greater probability of membership in a single protein complex) may be inferred between genes that share synthetic lethal interaction partners than genes that are directly synthetic lethal. Probabilistic algorithms that identify gene modules based on motif discovery are highly appropriate for the analysis of synthetic lethal genetic interaction data and have great potential in integrative analysis of heterogeneous datasets. Results We have developed Genetic Interaction Motif Finding (GIMF), an algorithm for unsupervised motif discovery from synthetic lethal interaction data. Interaction motifs are characterized by position weight matrices and optimized through expectation maximization. Given a seed gene, GIMF performs a nonlinear transform on the input genetic interaction data and automatically assigns genes to the motif or non-motif category. We demonstrate the capacity to extract known and novel pathways for Saccharomyces cerevisiae (budding yeast). Annotations suggested for several uncharacterized genes are supported by recent experimental evidence. GIMF is efficient in computation, requires no training and automatically down-weights promiscuous genes with high degrees. Conclusion GIMF effectively identifies pathways from synthetic lethality data with several unique features. It is mostly suitable for building gene modules around seed genes. Optimal choice of one single model parameter allows construction of gene networks with different levels of confidence. The impact of hub genes the generic probabilistic framework of GIMF may be used to group other types of biological entities such as proteins based on stochastic motifs. Analysis of the strongest motifs discovered by the algorithm indicates that synthetic lethal interactions are depleted between genes within a motif, suggesting that synthetic
Directory of Open Access Journals (Sweden)
Hugo Fort
2015-11-01
Full Text Available Mutualistic networks in nature are widespread and play a key role in generating the diversity of life on Earth. They constitute an interdisciplinary field where physicists, biologists and computer scientists work together. Plant-pollinator mutualisms in particular form complex networks of interdependence between often hundreds of species. Understanding the architecture of these networks is of paramount importance for assessing the robustness of the corresponding communities to global change and management strategies. Advances in this problem are currently limited mainly due to the lack of methodological tools to deal with the intrinsic complexity of mutualisms, as well as the scarcity and incompleteness of available empirical data. One way to uncover the structure underlying complex networks is to employ information theoretical statistical inference methods, such as the expectation maximization (EM) algorithm. In particular, such an approach can be used to cluster the nodes of a network based on the similarity of their node neighborhoods. Here, we show how to connect network theory with the classical ecological niche theory for mutualistic plant-pollinator webs by using the EM algorithm. We apply EM to classify the nodes of an extensive collection of mutualistic plant-pollinator networks according to their connection similarity. We find that EM recovers largely the same clustering of the species as an alternative recently proposed method based on resource overlap, where one considers each party as a consuming resource for the other party (plants providing food to animals, while animals assist the reproduction of plants). Furthermore, using the EM algorithm, we can obtain a sequence of successively refined classifications that enables us to identify the fine-structure of the ecological network and understand better the niche distribution both for plants and animals. This is an example of how information theoretical methods help to systematize and
A maximal clique based multiobjective evolutionary algorithm for overlapping community detection
Wen, Xuyun; Chen, Wei-Neng; Lin, Ying; Gu, Tianlong; Zhang, Huaxiang; Li, Yun; Yin, Yilong; Zhang, Jun
2016-01-01
Detecting community structure has become one important technique for studying complex networks. Although many community detection algorithms have been proposed, most of them focus on separated communities, where each node can belong to only one community. However, in many real-world networks, communities are often overlapped with each other. Developing overlapping community detection algorithms thus becomes necessary. Along this avenue, this paper proposes a maximal clique based multiob...
Robust Recursive Algorithm under Uncertainties via Worst-Case SINR Maximization
Directory of Open Access Journals (Sweden)
Xin Song
2015-01-01
Full Text Available The performance of the traditional constrained-LMS (CLMS) algorithm is known to degrade seriously in the presence of small training data sizes and mismatches between the assumed array response and the true array response. In this paper, we develop a robust constrained-LMS (RCLMS) algorithm based on worst-case SINR maximization. Our algorithm belongs to the class of diagonal loading techniques, in which the diagonal loading factor is obtained in a simple form that decreases the computational cost. The updated weight vector is derived by the descent gradient method and the Lagrange multiplier method. We demonstrate that our proposed recursive algorithm provides excellent robustness against signal steering vector mismatches and small training data sizes, has a fast convergence rate, and keeps the mean output array signal-to-interference-plus-noise ratio (SINR) consistently close to the optimal one. Some simulation results are presented to compare the performance of our robust algorithm with the traditional CLMS algorithm.
PMCR-Miner: parallel maximal confident association rules miner algorithm for microarray data set.
Zakaria, Wael; Kotb, Yasser; Ghaleb, Fayed F M
2015-01-01
The MCR-Miner algorithm aims to mine all maximal high-confidence association rules from the microarray up/down-expressed genes data set. This paper introduces two new algorithms: IMCR-Miner and PMCR-Miner. The IMCR-Miner algorithm is an extension of the MCR-Miner algorithm with some improvements. These improvements implement a novel way to store the samples of each gene as a list of unsigned integers in order to benefit from bitwise operations. In addition, the IMCR-Miner algorithm overcomes the drawbacks of the MCR-Miner algorithm by setting restrictions that ignore repeated comparisons. The PMCR-Miner algorithm is a parallel version of the newly proposed IMCR-Miner algorithm. It is based on shared-memory systems and task parallelism, where no time is needed to share and combine data between processors. The experimental results on real microarray data sets show that the PMCR-Miner algorithm is more efficient and scalable than its counterparts.
Huda, Shamsul; Yearwood, John; Togneri, Roberto
2014-10-01
The expectation-maximization (EM) algorithm is the standard training algorithm for the hidden Markov model (HMM). However, EM faces a local-convergence problem in HMM estimation. This paper attempts to overcome this problem of EM and proposes hybrid metaheuristic approaches to EM for HMM. In our earlier research, a hybrid of a constraint-based evolutionary learning approach and EM (CEL-EM) improved HMM estimation. In this paper, we propose a hybrid simulated annealing stochastic version of EM (SASEM) that combines simulated annealing (SA) with EM. The novelty of our approach is that we develop a mathematical reformulation of HMM estimation by introducing a stochastic step between the EM steps and combine SA with EM to provide better control over the acceptance of stochastic and EM steps for better HMM estimation. We also extend our earlier work and propose a second hybrid, a combination of an EA and the proposed SASEM (EA-SASEM). The proposed EA-SASEM uses the best constraint-based EA strategies from CEL-EM and the stochastic reformulation of HMM. The complementary properties of EA and SA and the stochastic reformulation of HMM in SASEM provide EA-SASEM with sufficient potential to find better estimates for HMM. To the best of our knowledge, this type of hybridization and mathematical reformulation has not been explored in the context of EM and HMM training. The proposed approaches have been evaluated through comprehensive experiments to justify their effectiveness in signal modeling using the TIMIT speech corpus. Experimental results show that the proposed approaches obtain higher recognition accuracies than both the EM algorithm and CEL-EM.
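The acceptance of a stochastic step between EM iterations, as in SASEM, typically follows a Metropolis-style simulated-annealing rule. A minimal sketch (the function name and temperature handling are assumptions, not the paper's implementation):

```python
import math
import random

def accept_step(delta_loglik, temperature, rng=random.random):
    """Simulated-annealing acceptance: always take a step that improves
    the model log-likelihood; take a worsening step with probability
    exp(delta / T), which shrinks as the temperature T cools."""
    if delta_loglik >= 0:
        return True
    return rng() < math.exp(delta_loglik / temperature)

# Improving steps are always accepted; worsening steps only sometimes.
print(accept_step(0.7, 1.0))                     # True
print(accept_step(-2.0, 0.5, rng=lambda: 0.9))   # exp(-4) ~ 0.018 < 0.9 -> False
```

Cooling the temperature over the run is what gradually turns the hybrid from a global sampler back into plain greedy EM.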
Khan, Zia; Bloom, Joshua S.; Kruglyak, Leonid; Singh, Mona
2009-01-01
Motivation: High-throughput sequencing technologies place ever-increasing demands on existing algorithms for sequence analysis. Algorithms for computing maximal exact matches (MEMs) between sequences appear in two contexts where high-throughput sequencing will vastly increase the volume of sequence data: (i) seeding alignments of high-throughput reads for genome assembly and (ii) designating anchor points for genome–genome comparisons. Results: We introduce a new algorithm for finding MEMs. The algorithm leverages a sparse suffix array (SA), a text index that stores every K-th position of the text. In contrast to a full text index that stores every position of the text, a sparse SA occupies much less memory. Even though we use a sparse index, the output of our algorithm is the same as that of a full-text-index algorithm as long as the spacing between the indexed suffixes is not greater than the minimum length of a MEM. By relying on partial matches and additional text scanning between indexed positions, the algorithm trades memory for extra computation. The reduced memory usage makes it possible to determine MEMs between significantly longer sequences. Availability: Source code for the algorithm is available under a BSD open source license at http://compbio.cs.princeton.edu/mems. The implementation can serve as a drop-in replacement for the MEMs algorithm in MUMmer 3. Contact: zkhan@cs.princeton.edu; mona@cs.princeton.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:19389736
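A naive quadratic-time scan makes the MEM definition concrete and can serve as a reference oracle when testing an index-based implementation; the sparse suffix array in the paper replaces this brute-force search, which is shown here purely for illustration:

```python
def maximal_exact_matches(s, t, min_len):
    """Naive O(|s|*|t|) MEM finder: report (i, j, length) triples where
    s[i:i+length] == t[j:j+length], length >= min_len, and the match can
    be extended in neither direction."""
    mems = []
    for i in range(len(s)):
        for j in range(len(t)):
            # Skip positions that lie inside a longer, left-extendable match.
            if i > 0 and j > 0 and s[i - 1] == t[j - 1]:
                continue
            k = 0
            while i + k < len(s) and j + k < len(t) and s[i + k] == t[j + k]:
                k += 1
            if k >= min_len:
                mems.append((i, j, k))
    return mems

print(maximal_exact_matches("ACGTACGT", "TTACGTT", 3))  # [(0, 2, 4), (3, 1, 5)]
```

Any index-based MEM finder with the same minimum-length parameter should reproduce exactly this output on small inputs.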
A Heuristic Optimal Discrete Bit Allocation Algorithm for Margin Maximization in DMT Systems
Directory of Open Access Journals (Sweden)
Dong Shi-Wei
2007-01-01
Full Text Available A heuristic optimal discrete bit allocation algorithm is proposed for solving the margin maximization problem in discrete multitone (DMT) systems. Starting from an initial equal-power-assignment bit distribution, the proposed algorithm employs a multistage bit rate allocation scheme to meet the target rate. If the total bit rate is far from the target rate, a multiple-bits loading procedure is used to obtain a bit allocation close to the target rate. When close to the target rate, a parallel bit-loading procedure is used to achieve the target rate; this is computationally more efficient than the conventional greedy bit-loading algorithm. Finally, the target bit rate distribution is checked: if it is efficient, it is also the optimal solution; otherwise, the optimal bit distribution can be obtained with only a few bit swaps. Simulation results using the standard asymmetric digital subscriber line (ADSL) test loops show that the proposed algorithm is efficient for practical DMT transmissions.
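The conventional greedy bit-loading baseline mentioned above can be sketched as a Levin-Campello-style loop that always spends the next bit on the cheapest subchannel; the SNR-gap value and toy channel gains are illustrative assumptions, not parameters from the paper:

```python
import heapq

def greedy_bit_loading(gains, target_bits, gamma=9.8):
    """Greedy loading: repeatedly add one bit to the subchannel whose
    incremental power cost is smallest, until the target bit rate is
    reached. `gains` are subchannel SNRs per unit power; `gamma` is an
    illustrative SNR gap in linear scale."""
    bits = [0] * len(gains)
    # Incremental power to go from b to b+1 bits: gamma * (2^(b+1) - 2^b) / g
    heap = [(gamma * (2 ** 1 - 2 ** 0) / g, i) for i, g in enumerate(gains)]
    heapq.heapify(heap)
    total = 0
    while total < target_bits:
        cost, i = heapq.heappop(heap)
        bits[i] += 1
        total += 1
        b = bits[i]
        heapq.heappush(heap, (gamma * (2 ** (b + 1) - 2 ** b) / gains[i], i))
    return bits

print(greedy_bit_loading([10.0, 5.0, 1.0], 4))  # [3, 1, 0]
```

Because each step pops and re-pushes a heap entry, this baseline costs O(B log N) for B bits over N subchannels, which is exactly the per-bit overhead the paper's multistage scheme tries to avoid.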
Guo, Jingyu; Tian, Dehua; McKinney, Brett A.; Hartman, John L.
2010-06-01
Interactions between genetic and/or environmental factors are ubiquitous, affecting the phenotypes of organisms in complex ways. Knowledge about such interactions is becoming rate-limiting for our understanding of human disease and other biological phenomena. Phenomics refers to the integrative analysis of how all genes contribute to phenotype variation, entailing genome and organism level information. A systems biology view of gene interactions is critical for phenomics. Unfortunately the problem is intractable in humans; however, it can be addressed in simpler genetic model systems. Our research group has focused on the concept of genetic buffering of phenotypic variation, in studies employing the single-cell eukaryotic organism, S. cerevisiae. We have developed a methodology, quantitative high throughput cellular phenotyping (Q-HTCP), for high-resolution measurements of gene-gene and gene-environment interactions on a genome-wide scale. Q-HTCP is being applied to the complete set of S. cerevisiae gene deletion strains, a unique resource for systematically mapping gene interactions. Genetic buffering is the idea that comprehensive and quantitative knowledge about how genes interact with respect to phenotypes will lead to an appreciation of how genes and pathways are functionally connected at a systems level to maintain homeostasis. However, extracting biologically useful information from Q-HTCP data is challenging, due to the multidimensional and nonlinear nature of gene interactions, together with a relative lack of prior biological information. Here we describe a new approach for mining quantitative genetic interaction data called recursive expectation-maximization clustering (REMc). We developed REMc to help discover phenomic modules, defined as sets of genes with similar patterns of interaction across a series of genetic or environmental perturbations. Such modules are reflective of buffering mechanisms, i.e., genes that play a related role in the maintenance
Distributed Matching Algorithms: Maximizing Secrecy in the Presence of Untrusted Relay
Directory of Open Access Journals (Sweden)
B. Ali
2017-06-01
Full Text Available In this paper, we propose a secrecy sum-rate maximization based matching algorithm between primary transmitters and secondary cooperative jammers in the presence of an eavesdropper. More explicitly, we consider an untrusted relay scenario, where the relay is a potential eavesdropper. We first show the achievable secrecy regions when employing a friendly jammer in a cooperative scenario with an untrusted relay. Then, we provide results for the secrecy regions in two scenarios: in the first we consider no direct transmission between the source and the destination, while in the second we include a source-to-destination direct link in our communication system. Furthermore, a friendly jammer helps by sending a noise signal during the first phase of the cooperative transmission to secure the information transmitted from the source. In our matching algorithm, the selected cooperative jammer, or secondary user, is rewarded with spectrum allocation for a fraction of a time slot from the source, which is the primary user. The Conventional Distributed Algorithm (CDA) and the Pragmatic Distributed Algorithm (PDA), which were originally designed for maximizing the users' sum rate, are modified and adapted for maximizing the secrecy sum-rate of the primary user. Instead of assuming perfect modulation and/or perfect channel coding, we have also investigated our proposed schemes when practical channel coding and modulation schemes are invoked.
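As a hedged illustration of the underlying metric, the secrecy rate for a single link is commonly computed as the legitimate channel's capacity minus the capacity leaked to the eavesdropper, floored at zero; the SNR values below are arbitrary, not from the paper:

```python
import math

def secrecy_rate(snr_dest, snr_eve):
    """Illustrative secrecy rate in bits/s/Hz: capacity to the legitimate
    receiver minus capacity leaked to the eavesdropper (here, the
    untrusted relay), floored at zero."""
    return max(0.0, math.log2(1 + snr_dest) - math.log2(1 + snr_eve))

print(secrecy_rate(15.0, 3.0))  # log2(16) - log2(4) = 2.0
```

A secrecy *sum*-rate objective, as maximized by the matching algorithm, would sum this quantity over the matched primary-user links.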
Very slow search and reach: failure to maximize expected gain in an eye-hand coordination task.
Directory of Open Access Journals (Sweden)
Hang Zhang
Full Text Available We examined an eye-hand coordination task where optimal visual search and hand movement strategies were inter-related. Observers were asked to find and touch a target among five distractors on a touch screen. Their reward for touching the target was reduced by an amount proportional to how long they took to locate and reach to it. Coordinating the eye and the hand appropriately would markedly reduce the search-reach time. Using statistical decision theory we derived the sequence of interrelated eye and hand movements that would maximize expected gain and we predicted how hand movements should change as the eye gathered further information about target location. We recorded human observers' eye movements and hand movements and compared them with the optimal strategy that would have maximized expected gain. We found that most observers failed to adopt the optimal search-reach strategy. We analyze and describe the strategies they did adopt.
Obulesu, O.; Rama Mohan Reddy, A., Dr; Mahendra, M.
2017-08-01
Detecting regular and efficient cyclic models is a demanding activity for data analysts because of the unstructured, dynamic, and enormous raw information produced from the web. Many existing approaches generate large candidate patterns on huge and complex databases. In this work, two novel algorithms are proposed and a comparative examination is performed considering scalability and performance parameters. The first algorithm, EFPMA (Extended Regular Model Detection Algorithm), is used to find frequent sequential patterns from spatiotemporal datasets, and the second, ETMA (Enhanced Tree-based Mining Algorithm), detects effective cyclic models with a symbolic database representation. EFPMA grows patterns from both ends (prefixes and suffixes) of detected patterns, which results in faster pattern growth because fewer levels of database projection are needed compared to existing approaches such as PrefixSpan and SPADE. ETMA uses distinct notions to store and manage transaction data horizontally, such as segments, sequences, and individual symbols. ETMA exploits a partition-and-conquer method to find maximal patterns using symbolic notations. Using this algorithm, we can mine cyclic models in full-series sequential patterns, including subsection series. ETMA reduces memory consumption and makes use of efficient symbolic operations. Furthermore, ETMA records time-series instances dynamically in terms of character, series, and section approaches, respectively. Determining the extent of patterns and proving the efficiency of the reduction and retrieval techniques on synthetic and actual datasets remain open and challenging mining problems. These techniques are useful in data streams, traffic risk analysis, medical diagnosis, DNA sequence mining, and earthquake prediction applications. Extensive experimental results illustrate that the algorithms outperform the ECLAT, STNR, and MAFIA approaches in efficiency and scalability.
Directory of Open Access Journals (Sweden)
Uamporn Witthayarat
2012-01-01
Full Text Available The aim of this paper is to introduce an iterative algorithm for finding a common solution of the sets (A + M₂)⁻¹(0) and (B + M₁)⁻¹(0), where M₁ and M₂ are maximal accretive operators in a Banach space, and, by using the proposed algorithm, to establish strong convergence theorems for common solutions of the two sets above in a uniformly convex and 2-uniformly smooth Banach space. The results obtained in this paper extend and improve the corresponding results of Qin et al. (2011) from Hilbert spaces to Banach spaces, as well as those of Petrot et al. (2011). Moreover, we also apply our results to solving convex feasibility problems.
Karakatsanis, Nicolas A.; Casey, Michael E.; Lodge, Martin A.; Rahmim, Arman; Zaidi, Habib
2016-08-01
Whole-body (WB) dynamic PET has recently demonstrated its potential in translating the quantitative benefits of parametric imaging to the clinic. Post-reconstruction standard Patlak (sPatlak) WB graphical analysis utilizes multi-bed multi-pass PET acquisition to produce quantitative WB images of the tracer influx rate Ki as a complementary metric to the semi-quantitative standardized uptake value (SUV). The resulting Ki images may suffer from high noise due to the need for short acquisition frames. Meanwhile, a generalized Patlak (gPatlak) WB post-reconstruction method had been suggested to limit the Ki bias of sPatlak analysis in regions with non-negligible 18F-FDG uptake reversibility; however, gPatlak analysis is non-linear and can thus further amplify noise. In the present study, we implemented, within an open-source tomographic image reconstruction platform, a clinically adoptable 4D WB reconstruction framework enabling efficient estimation of sPatlak and gPatlak images directly from dynamic multi-bed PET raw data with substantial noise reduction. Furthermore, we employed the optimization transfer methodology to accelerate 4D expectation-maximization (EM) convergence by nesting the fast image-based estimation of Patlak parameters within each iteration cycle of the slower projection-based estimation of dynamic PET images. The novel gPatlak 4D method was initialized from an optimized set of sPatlak ML-EM iterations to facilitate EM convergence. Initially, realistic simulations were conducted utilizing published 18F-FDG kinetic parameters coupled with the XCAT phantom. Quantitative analyses illustrated enhanced Ki target-to-background ratio (TBR) and especially contrast-to-noise ratio (CNR) performance for the 4D versus the indirect methods and static SUV. Furthermore, considerable convergence acceleration was observed for the nested algorithms involving 10-20 sub-iterations. Moreover, systematic reduction in Ki % bias and improved TBR were
Seret, Alain; Forthomme, Julien
2009-09-01
The aim of this study was to compare the performance of filtered backprojection (FBP) and ordered-subset expectation maximization (OSEM) reconstruction algorithms available in several types of commercial SPECT software. Numeric simulations of SPECT acquisitions of 2 phantoms were used: the National Electrical Manufacturers Association line phantom used for the assessment of SPECT resolution and a phantom with uniform, hot-rod, and cold-rod compartments. For FBP, no filtering and filtering of the projections with either a Butterworth filter (order 3 or 6) or a Hanning filter at various cutoff frequencies were considered. For OSEM, the number of subsets was 1, 4, 8, or 16, and the number of iterations was chosen to obtain a product number of iterations times the number of subsets equal to 16, 32, 48, or 64. The line phantom enabled us to obtain the reconstructed central, radial, and tangential full width at half maximum. The uniform compartment of the second phantom delivered the reconstructed mean pixel counts and SDs from which the coefficients of variation were calculated. Hot contrast and cold contrast were obtained from its rod compartments. For FBP, the full width at half maximum, mean pixel count, coefficient of variation, and contrast were almost software independent. The only exceptions were a smaller (by 0.5 mm) full width at half maximum for one of the software types, higher mean pixel counts for 2 of the software types, and better contrast for 2 of the software types under some filtering conditions. For OSEM, the full width at half maximum differed by 0.1-2.5 mm with the different types of software but was almost independent of the number of subsets or iterations. There was a marked dependence of the mean pixel count on the type of software used, and there was a moderate dependence of the coefficient of variation. Contrast was almost software independent. The mean pixel count varied greatly with the number of iterations for 2 of the software types, and
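The OSEM algorithm compared above is built from the basic ML-EM multiplicative update; below is a toy sketch on a hand-made system matrix (sizes and values are illustrative, and OSEM with a single subset reduces to exactly this loop):

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Plain ML-EM for emission tomography on a toy system matrix A:
    x <- x * A^T(y / Ax) / A^T 1.  OSEM splits the rows of A into
    ordered subsets and applies this update once per subset, which is
    why iterations x subsets is the quantity compared in the study."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])   # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x
        proj[proj == 0] = 1e-12        # guard against divide-by-zero
        x *= (A.T @ (y / proj)) / sens
    return x

A = np.array([[1.0, 0.5], [0.2, 1.0], [0.7, 0.7]])
x_true = np.array([2.0, 3.0])
y = A @ x_true                         # noise-free projections
print(np.round(mlem(A, y, 500), 3))
```

With noise-free, consistent data the multiplicative updates converge to the true activity; on noisy data the iteration count becomes a regularization knob, which is why the phantom comparisons fix the product of iterations and subsets.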
Mousavi, Seyed Hosein; Nazemi, Ali; Hafezalkotob, Ashkan
2015-12-01
With the formation of competitive electricity markets around the world, optimization of bidding strategies has become one of the main topics in studies related to market design. Market design is challenged by multiple objectives that need to be satisfied. The solution of such multi-objective problems is often searched over the combined strategy space, and thus requires the simultaneous optimization of multiple parameters. The problem is formulated analytically using the Nash equilibrium concept for games composed of large numbers of players having discrete and large strategy spaces. The solution methodology is based on a characterization of Nash equilibrium in terms of minima of a function and relies on a metaheuristic optimization approach to find these minima. This paper presents some metaheuristic algorithms to simulate how generators bid in the spot electricity market from the viewpoint of profit maximization given the other generators' strategies, namely the genetic algorithm (GA), simulated annealing (SA), and a hybrid simulated annealing genetic algorithm (HSAGA), and compares their results. As both GA and SA are generic search methods, HSAGA is also a generic search method. The model, based on actual data, is implemented for a peak hour of Tehran's wholesale spot market in 2012. The results of the simulations show that GA outperforms SA and HSAGA in computing time, number of function evaluations, and computing stability, and that the Nash equilibria calculated by GA vary less from one another than those of the other algorithms.
Directory of Open Access Journals (Sweden)
Mahdi M. M. El-Arini
2013-01-01
Full Text Available In recent years, solar energy has become one of the most important alternative sources of electric energy, so it is important to operate a photovoltaic (PV) panel at its optimal point to obtain the maximum possible efficiency. This paper presents a new optimization approach to maximize the electrical power of a PV panel. The technique is based on an objective function representing the output power of the PV panel, subject to equality and inequality constraints. First, the variables that affect the output power are classified into two categories: dependent and independent. The proposed approach is a multistage one: the genetic algorithm (GA) is used to obtain the best initial population at the optimal solution, this initial population is fed to the Lagrange multiplier algorithm (LM), and then a comparison between the two algorithms, GA and LM, is performed. The proposed technique is applied to solar radiation measured at Helwan city, Egypt, at latitude 29.87°. The results showed that the proposed technique is applicable.
Maximizing Expected Achievable Rates for Block-Fading Buffer-Aided Relay Channels
Shaqfeh, Mohammad
2016-05-25
In this paper, the long-term average achievable rate over block-fading buffer-aided relay channels is maximized using a hybrid scheme that combines three essential transmission strategies, which are decode-and-forward, compress-and-forward, and direct transmission. The proposed hybrid scheme is dynamically adapted based on the channel state information. The integration and optimization of these three strategies provide a more generic and fundamental solution and give better achievable rates than the known schemes in the literature. Despite the large number of optimization variables, the proposed hybrid scheme can be optimized using simple closed-form formulas that are easy to apply in practical relay systems. This includes adjusting the transmission rate and compression when compress-and-forward is the selected strategy based on the channel conditions. Furthermore, in this paper, the hybrid scheme is applied to three different models of the Gaussian block-fading buffer-aided relay channels, depending on whether the relay is half or full duplex and whether the source and the relay have orthogonal or non-orthogonal channel access. Several numerical examples are provided to demonstrate the achievable rate results and compare them to the upper bounds of the ergodic capacity for each one of the three channel models under consideration.
Huda, Shamsul; Yearwood, John; Togneri, Roberto
2009-02-01
This paper attempts to overcome the tendency of the expectation-maximization (EM) algorithm to locate a local rather than global maximum when applied to estimate the hidden Markov model (HMM) parameters in speech signal modeling. We propose a hybrid algorithm for estimation of the HMM in automatic speech recognition (ASR) using a constraint-based evolutionary algorithm (EA) and EM, the CEL-EM. The novelty of our hybrid algorithm (CEL-EM) is that it is applicable for estimation of the constraint-based models with many constraints and large numbers of parameters (which use EM) like HMM. Two constraint-based versions of the CEL-EM with different fusion strategies have been proposed using a constraint-based EA and the EM for better estimation of HMM in ASR. The first one uses a traditional constraint-handling mechanism of EA. The other version transforms a constrained optimization problem into an unconstrained problem using Lagrange multipliers. Fusion strategies for the CEL-EM use a staged-fusion approach where EM has been plugged with the EA periodically after the execution of EA for a specific period of time to maintain the global sampling capabilities of EA in the hybrid algorithm. A variable initialization approach (VIA) has been proposed using a variable segmentation to provide a better initialization for EA in the CEL-EM. Experimental results on the TIMIT speech corpus show that CEL-EM obtains higher recognition accuracies than the traditional EM algorithm as well as a top-standard EM (VIA-EM, constructed by applying the VIA to EM).
Energy Technology Data Exchange (ETDEWEB)
Tumuluru, Jaya
2013-01-10
Aims: The present case study concerns maximizing aqua feed properties using response surface methodology and a genetic algorithm. Study Design: The effects of extrusion process variables, namely screw speed, L/D ratio, barrel temperature, and feed moisture content, were analyzed to maximize aqua feed properties such as water stability, true density, and expansion ratio. Place and Duration of Study: This study was carried out in the Department of Agricultural and Food Engineering, Indian Institute of Technology, Kharagpur, India. Methodology: A variable-length single screw extruder was used in the study. The process variables selected were screw speed (rpm), length-to-diameter (L/D) ratio, barrel temperature (degrees C), and feed moisture content (%). The pelletized aqua feed was analyzed for physical properties, namely water stability (WS), true density (TD), and expansion ratio (ER). Extrusion experimental data were collected based on a central composite design. The experimental data were further analyzed using response surface methodology (RSM) and a genetic algorithm (GA) to maximize feed properties. Results: Regression equations developed for the experimental data adequately described the effect of the process variables on the physical properties, with coefficient of determination values (R2) of > 0.95. RSM analysis indicated that WS, ER, and TD were maximized at an L/D ratio of 12-13, screw speed of 60-80 rpm, feed moisture content of 30-40%, and barrel temperature of <= 80 degrees C for ER and TD and > 90 degrees C for WS. Based on GA analysis, a maximum WS of 98.10% was predicted at a screw speed of 96.71 rpm, L/D ratio of 13.67, barrel temperature of 96.26 degrees C, and feed moisture content of 33.55%. Maximum ER and TD of 0.99 and 1346.9 kg/m3 were also predicted at screw speeds of 60.37 and 90.24 rpm, L/D ratios of 12.18 and 13.52, barrel temperatures of 68.50 and 64.88 degrees C, and medium feed moisture contents of 33.61 and 38.36%. Conclusion: The present data analysis indicated
Directory of Open Access Journals (Sweden)
Aida Tayebiyan
2016-06-01
Full Text Available Background: Several reservoir systems have been constructed for hydropower generation around the world. Hydropower offers an economical source of electricity with reduced carbon emissions; it is therefore a clean and renewable source of energy. Reservoirs that generate hydropower are typically operated with the goal of maximizing energy revenue. Yet reservoir systems are often inefficiently operated and managed according to policies determined at construction time. It is worth noting that with little enhancement in the operation of a reservoir system, the efficiency of the scheme could increase for many consumers. Methods: This research develops simulation-optimization models that reflect a discrete hedging policy (DHP) to manage and operate a hydropower reservoir system, and analyses it in both single- and multi-reservoir systems. Accordingly, three operational models (two single-reservoir systems and one multi-reservoir system) were constructed and optimized by a genetic algorithm (GA). Maximizing the total power generation over the time horizon is chosen as the objective function in order to improve the functional efficiency of hydropower production, subject to operational and physical limitations. The constructed models, which represent a cascade hydropower reservoir system, have been tested and evaluated in the Cameron Highlands and Batang Padang in Malaysia. Results: According to the given results, use of DHP for hydropower reservoir system operation could increase the power generation output by nearly 13% in the studied reservoir system compared to the present operating policy (TNB operation). This substantial increase in power production will enhance economic development. Moreover, the results for the single- and multi-reservoir systems affirmed that the hedging policy could manage the single system much better than the multi-reservoir system. Conclusion: It can be summarized that DHP is an efficient and feasible policy, which could be used
Yang, Liu; Lu, Yinzhi; Zhong, Yuanchang; Wu, Xuegang; Yang, Simon X
2015-12-26
Energy resource limitation is a severe problem in traditional wireless sensor networks (WSNs) because it restricts the lifetime of network. Recently, the emergence of energy harvesting techniques has brought with them the expectation to overcome this problem. In particular, it is possible for a sensor node with energy harvesting abilities to work perpetually in an Energy Neutral state. In this paper, a Multi-hop Energy Neutral Clustering (MENC) algorithm is proposed to construct the optimal multi-hop clustering architecture in energy harvesting WSNs, with the goal of achieving perpetual network operation. All cluster heads (CHs) in the network act as routers to transmit data to base station (BS) cooperatively by a multi-hop communication method. In addition, by analyzing the energy consumption of intra- and inter-cluster data transmission, we give the energy neutrality constraints. Under these constraints, every sensor node can work in an energy neutral state, which in turn provides perpetual network operation. Furthermore, the minimum network data transmission cycle is mathematically derived using convex optimization techniques while the network information gathering is maximal. Simulation results show that our protocol can achieve perpetual network operation, so that the consistent data delivery is guaranteed. In addition, substantial improvements on the performance of network throughput are also achieved as compared to the famous traditional clustering protocol LEACH and recent energy harvesting aware clustering protocols.
Directory of Open Access Journals (Sweden)
Guang Deng
2008-05-01
Full Text Available A fundamental problem in signal processing is to estimate a signal from noisy observations. This is usually formulated as an optimization problem. Optimizations based on the variational lower bound and minorization-maximization (MM) have been widely used in machine learning research, signal processing, and statistics. In this paper, we study iterative algorithms based on the conjugate function lower bound (CFLB) and MM for a class of objective functions. We propose a generalized version of these two algorithms and show that they are equivalent when the objective function is convex and differentiable. We then develop a CFLB/MM algorithm for solving MAP estimation problems under a linear Gaussian observation model. We modify this algorithm for wavelet-domain image denoising. Experimental results show that, using a single wavelet representation, the proposed algorithms perform better than the bishrinkage algorithm, which is arguably one of the best in recent publications. Using complex wavelet representations, the performance of the proposed algorithm is very competitive with that of state-of-the-art algorithms.
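As a minimal illustration of the MM idea studied in the paper, the scalar MAP problem min_x 0.5(x - y)^2 + lam*|x| can be solved by repeatedly majorizing |x| with a quadratic; the fixed point is the soft-threshold of y. This sketch is illustrative only, not the authors' wavelet-domain algorithm:

```python
def mm_l1_denoise(y, lam, n_iter=100, eps=1e-12):
    """Majorize-minimize sketch for min_x 0.5*(x - y)^2 + lam*|x|.
    The |x| term is majorized at x_k by x^2/(2|x_k|) + |x_k|/2, so each
    MM step solves a quadratic in closed form:
        x_{k+1} = y / (1 + lam/|x_k|).
    The iterates converge to the soft-threshold of y at lam."""
    x = y
    for _ in range(n_iter):
        w = lam / (abs(x) + eps)   # curvature of the current majorizer
        x = y / (1.0 + w)
    return x

x_hat = mm_l1_denoise(2.0, lam=0.5)
print(round(x_hat, 4))  # -> 1.5, the soft-threshold of 2.0 at 0.5
```

Each update never increases the true objective because the majorizer touches it at the current iterate and lies above it everywhere else, which is the defining MM property.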
Expectation Value Calculation of Grid QoS Parameters Based on Algorithm Prim
Kaijian Liang; Linfeng Bai; Xilong Qu
2011-01-01
From the perspective of selecting services by QoS attributes, a computation method for QoS expectation values, based on Algorithm Prim, was presented to support service selection. On the basis of the capabilities of service providers, this method uses Algorithm Prim to calculate a set of balanced QoS expectation values. Selection of services based on these QoS values would be beneficial to the optimization of system resources and the protection of the users of those servic...
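Algorithm Prim itself is the classic minimum-spanning-tree construction; a compact sketch follows (the QoS interpretation in the docstring is an assumption about how per-link expectation values might be aggregated, not a detail from the record):

```python
import heapq

def prim_mst_weight(n, edges):
    """Prim's algorithm on an undirected weighted graph given as
    (u, v, w) triples over nodes 0..n-1; returns total MST weight.
    Aggregating per-link QoS costs over such a tree is one plausible
    way to combine provider metrics into a balanced expectation value."""
    adj = [[] for _ in range(n)]
    for u, v, w in edges:
        adj[u].append((w, v))
        adj[v].append((w, u))
    visited = [False] * n
    heap = [(0, 0)]                 # (edge weight, node), grow from node 0
    total = 0
    while heap:
        w, u = heapq.heappop(heap)
        if visited[u]:
            continue                # stale entry for an already-added node
        visited[u] = True
        total += w
        for wv, v in adj[u]:
            if not visited[v]:
                heapq.heappush(heap, (wv, v))
    return total

print(prim_mst_weight(4, [(0, 1, 1), (1, 2, 2), (0, 2, 4), (2, 3, 3)]))  # 6
```

The lazy-deletion heap keeps the sketch short; a decrease-key priority queue would make it O(E log V) with tighter constants.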
Yousefi, Siamak; Balasubramanian, Madhusudhanan; Goldbaum, Michael H; Medeiros, Felipe A; Zangwill, Linda M; Weinreb, Robert N; Liebmann, Jeffrey M; Girkin, Christopher A; Bowd, Christopher
2016-05-01
To validate Gaussian mixture-model with expectation maximization (GEM) and variational Bayesian independent component analysis mixture-models (VIM) for detecting glaucomatous progression along visual field (VF) defect patterns (GEM-progression of patterns (POP) and VIM-POP). To compare GEM-POP and VIM-POP with other methods. GEM and VIM models separated cross-sectional abnormal VFs from 859 eyes and normal VFs from 1117 eyes into abnormal and normal clusters. Clusters were decomposed into independent axes. The confidence limit (CL) of stability was established for each axis with a set of 84 stable eyes. Sensitivity for detecting progression was assessed in a sample of 83 eyes with known progressive glaucomatous optic neuropathy (PGON). Eyes were classified as progressed if any defect pattern progressed beyond the CL of stability. Performance of GEM-POP and VIM-POP was compared to point-wise linear regression (PLR), permutation analysis of PLR (PoPLR), and linear regression (LR) of mean deviation (MD), and visual field index (VFI). Sensitivity and specificity for detecting glaucomatous VFs were 89.9% and 93.8%, respectively, for GEM and 93.0% and 97.0%, respectively, for VIM. Receiver operating characteristic (ROC) curve areas for classifying progressed eyes were 0.82 for VIM-POP, 0.86 for GEM-POP, 0.81 for PoPLR, 0.69 for LR of MD, and 0.76 for LR of VFI. GEM-POP was significantly more sensitive to PGON than PoPLR and linear regression of MD and VFI in our sample, while providing localized progression information. Detection of glaucomatous progression can be improved by assessing longitudinal changes in localized patterns of glaucomatous defect identified by unsupervised machine learning.
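The GEM component rests on standard Gaussian-mixture EM; below is a self-contained one-dimensional, two-component sketch (the initialization and toy data are illustrative, not the study's visual-field feature pipeline):

```python
import math

def em_gmm_1d(data, n_iter=200):
    """Two-component 1-D Gaussian-mixture EM: alternate soft assignment
    of points to components (E-step) with weighted re-estimation of the
    means, variances, and mixing weights (M-step)."""
    mu = [min(data), max(data)]
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each point.
        resp = []
        for x in data:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: responsibility-weighted means, variances, mixing weights.
        for k in range(2):
            nk = sum(r[k] for r in resp)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk
            var[k] = max(var[k], 1e-6)   # floor to avoid variance collapse
            pi[k] = nk / len(data)
    return mu, var, pi

data = [0.0, 0.2, -0.1, 0.1, 5.0, 5.2, 4.9, 5.1]
mu, var, pi = em_gmm_1d(data)
print(sorted(round(m, 2) for m in mu))  # component means near 0.05 and 5.05
```

In the study the same E/M alternation runs over high-dimensional visual-field feature vectors rather than scalars, and the fitted components define the defect-pattern axes.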
DEFF Research Database (Denmark)
depend on the reader’s own experiences, individual feelings, personal associations or on conventions of reading, interpretive communities and cultural conditions? This volume brings together narrative theory, fictionality theory and speech act theory to address such questions of expectations...
Barbee, David L; Flynn, Ryan T; Holden, James E; Nickles, Robert J; Jeraj, Robert
2010-01-01
Tumor heterogeneities observed in positron emission tomography (PET) imaging are frequently compromised by partial volume effects which may affect treatment prognosis, assessment, or future implementations such as biologically optimized treatment planning (dose painting). This paper presents a method for partial volume correction of PET-imaged heterogeneous tumors. A point source was scanned on a GE Discovery LS at positions of increasing radii from the scanner’s center to obtain the spatially varying point spread function (PSF). PSF images were fit in three dimensions to Gaussian distributions using least squares optimization. Continuous expressions were devised for each Gaussian width as a function of radial distance, allowing for generation of the system PSF at any position in space. A spatially varying partial volume correction (SV-PVC) technique was developed using expectation maximization (EM) and a stopping criterion based on the method’s correction matrix generated for each iteration. The SV-PVC was validated using a standard tumor phantom and a tumor heterogeneity phantom, and was applied to a heterogeneous patient tumor. SV-PVC results were compared to results obtained from spatially invariant partial volume correction (SINV-PVC), which used directionally uniform three dimensional kernels. SV-PVC of the standard tumor phantom increased the maximum observed sphere activity by 55 and 40% for 10 and 13 mm diameter spheres, respectively. Tumor heterogeneity phantom results demonstrated that as net changes in the EM correction matrix decreased below 35%, further iterations improved overall quantitative accuracy by less than 1%. SV-PVC of clinically observed tumors frequently exhibited changes of ±30% in regions of heterogeneity. The SV-PVC method implemented spatially varying kernel widths and automatically determined the number of iterations for optimal restoration, parameters which are arbitrarily chosen in SINV-PVC. Comparing SV-PVC to SINV
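The EM-based correction loop described here is, in spirit, a Richardson-Lucy deconvolution. A one-dimensional sketch with a fixed (not spatially varying) symmetric PSF; the kernel and signal values are invented for illustration:

```python
def convolve(signal, kernel):
    """Direct convolution with an odd-length kernel, zero-padded at edges.
    (Written as a correlation; identical for a symmetric kernel.)"""
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, wgt in enumerate(kernel):
            k = i + j - half
            if 0 <= k < len(signal):
                acc += wgt * signal[k]
        out.append(acc)
    return out

def em_deconvolve(observed, psf, iters=50):
    """EM (Richardson-Lucy) deconvolution for a symmetric, normalized PSF:
    estimate <- estimate * correlate(psf, observed / blur(estimate))."""
    estimate = [1.0] * len(observed)          # flat non-negative start
    for _ in range(iters):
        blurred = convolve(estimate, psf)
        ratio = [o / b if b > 1e-12 else 0.0 for o, b in zip(observed, blurred)]
        estimate = [e * c for e, c in zip(estimate, convolve(ratio, psf))]
    return estimate

psf = [0.25, 0.5, 0.25]                       # normalized Gaussian-like blur
truth = [0.0, 0.0, 10.0, 0.0, 0.0, 8.0, 0.0, 0.0]
observed = convolve(truth, psf)               # simulated blurred measurement
restored = em_deconvolve(observed, psf)
```

The paper's method additionally varies the kernel width with radial position and stops iterating from the correction-matrix criterion rather than a fixed count.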
Multi-objective Evolutionary Algorithms for Influence Maximization in Social Networks
Bucur, Doina; Iacca, Giovanni; Marcelli, Andrea; Squillero, Giovanni; Tonda, Alberto; Squillero, Giovanni; Sim, Kevin
As the pervasiveness of social networks increases, new related NP-hard problems become interesting for the optimization community. The objective of influence maximization is to contact the largest possible number of nodes in a network, starting from a small set of seed nodes, and assuming a model
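The propagation model named in the (truncated) abstract is not specified here, so the widely used independent cascade model is assumed for illustration; the expected spread of a seed set is then estimated by Monte Carlo simulation:

```python
import random

def independent_cascade(graph, seeds, p, rng):
    """One Monte Carlo run of the independent cascade (IC) model: each
    newly activated node gets a single chance to activate each inactive
    neighbour with probability p. Returns the final cascade size."""
    active = set(seeds)
    frontier = list(seeds)
    while frontier:
        nxt = []
        for node in frontier:
            for nb in graph.get(node, []):
                if nb not in active and rng.random() < p:
                    active.add(nb)
                    nxt.append(nb)
        frontier = nxt
    return len(active)

def expected_spread(graph, seeds, p=0.1, runs=1000, seed=0):
    """Influence of a seed set = average cascade size over many runs."""
    rng = random.Random(seed)
    return sum(independent_cascade(graph, seeds, p, rng)
               for _ in range(runs)) / runs

# A toy 5-node network; an optimizer would search over candidate seed sets
graph = {0: [1, 2], 1: [3], 2: [3], 3: [4], 4: []}
print(expected_spread(graph, [0], p=0.5))
```

A multi-objective evolutionary algorithm would use this spread estimate as one objective (e.g. alongside seed-set size) when evolving seed sets.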
Haiyun Zhou; Shin Min Kang; Yeol Je Cho
2008-01-01
Abstract Let be a real Hilbert space, a nonempty closed convex subset of , and a maximal monotone operator with . Let be the metric projection of onto . Suppose that, for any given , , and , there exists satisfying the following set-valued mapping equation: for all , where with as and is regarded as an error sequence such that . Let be a real sequence such that as and . For any fixed , define a sequence iteratively as for all . Then converges stron...
Power backup Density based Clustering Algorithm for Maximizing Lifetime of Wireless Sensor Networks
DEFF Research Database (Denmark)
Wagh, Sanjeev; Prasad, Ramjee
2014-01-01
WSNs consist of several nodes spread over experimental fields temporarily for specific applications. The spatially distributed sensor nodes sense and gather information on intended parameters such as temperature, sound, and vibrations for the particular application. In this paper, we evaluate...... the impact of different algorithms, i.e., clustering for densely populated field applications, energy backup by adding an energy harvesting node in the field, positioning the energy harvesting node smartly in the field, and also positioning the base station in the sensor field to optimize the communication between cluster head...... algorithm can be applied for many sensitive applications such as the military, for hostile and remote areas, or environmental monitoring where human intervention is not possible....
Directory of Open Access Journals (Sweden)
VV Juli
2013-12-01
Full Text Available Wireless sensor networks extend the capability to monitor and control far-flung environments. However, sensor nodes must be deployed appropriately to reach an adequate coverage level for the successful acquisition of data. Modern sensing devices are able to move from one place to another for different purposes and constitute the mobile sensor network. This mobile sensor capability could be used to enhance the coverage of the sensor network. Since mobile sensor nodes have limited capabilities and power constraints, the algorithms which drive the sensors to optimal locations should extend the coverage. They should also reduce the power needed to move the sensors efficiently. In this paper, a genetic algorithm (GA) based sensor deployment scheme is proposed to maximize network coverage, and the performance was studied against random deployment using a MATLAB simulation.
Yang, Liu; Lu, Yinzhi; Zhong, Yuanchang; Wu, Xuegang; Yang, Simon X.
2015-01-01
Energy resource limitation is a severe problem in traditional wireless sensor networks (WSNs) because it restricts the lifetime of the network. Recently, the emergence of energy harvesting techniques has brought the expectation of overcoming this problem. In particular, it is possible for a sensor node with energy harvesting abilities to work perpetually in an Energy Neutral state. In this paper, a Multi-hop Energy Neutral Clustering (MENC) algorithm is proposed to construct the optimal m...
Efficient Rectangular Maximal-Volume Algorithm for Rating Elicitation in Collaborative Filtering
Fonarev, Alexander
2017-02-07
Cold start problem in Collaborative Filtering can be solved by asking new users to rate a small seed set of representative items or by asking representative users to rate a new item. The question is how to build a seed set that can give enough preference information for making good recommendations. One of the most successful approaches, called Representative Based Matrix Factorization, is based on the Maxvol algorithm. Unfortunately, this approach has one important limitation: a seed set of a particular size requires a rating matrix factorization of fixed rank that should coincide with that size. This is not necessarily optimal in the general case. In the current paper, we introduce a fast algorithm for an analytical generalization of this approach that we call Rectangular Maxvol. It allows the rank of the factorization to be lower than the required size of the seed set. Moreover, the paper includes the theoretical analysis of the method's error, the complexity analysis of the existing methods and the comparison to the state-of-the-art approaches.
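Rectangular Maxvol itself is more involved, but the underlying idea of picking rows that maximize spanned volume can be sketched with a greedy pivoted Gram-Schmidt selection (a simplification for illustration, not the paper's algorithm):

```python
def greedy_select(rows, k):
    """Greedy volume-maximizing row selection: repeatedly take the row
    with the largest residual norm after projecting out the rows already
    chosen (a pivoted Gram-Schmidt sweep)."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    residuals = [list(r) for r in rows]
    chosen = []
    for _ in range(k):
        i = max(range(len(rows)), key=lambda j: dot(residuals[j], residuals[j]))
        nrm2 = dot(residuals[i], residuals[i])
        if nrm2 < 1e-12:
            break                      # remaining rows are (numerically) dependent
        chosen.append(i)
        q = [x / nrm2 ** 0.5 for x in residuals[i]]
        for j in range(len(rows)):     # deflate every residual against q
            c = dot(residuals[j], q)
            residuals[j] = [x - c * qx for x, qx in zip(residuals[j], q)]
    return chosen

# Toy ratings-style matrix: rows are items described by latent features;
# the two selected rows would form the seed set shown to new users
rows = [[1.0, 0.0, 0.0], [0.9, 0.1, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 2.0]]
print(greedy_select(rows, 2))   # picks the longest row (index 3) first
```

Rectangular Maxvol generalizes this by allowing the number of selected rows to exceed the factorization rank, which the greedy sketch does not attempt.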
Maximizing microbial degradation of perchlorate using a genetic algorithm: Media optimization.
Kucharzyk, Katarzyna H; Crawford, Ronald L; Paszczynski, Andrzej J; Soule, Terence; Hess, Thomas F
2012-01-01
Microbial communities are under constant influence of physical and chemical components in ecosystems. Shifts in conditions such as pH, temperature or carbon source concentration can translate into shifts in overall ecosystem functioning. These conditions can be manipulated in a laboratory setup using evolutionary computation methods such as genetic algorithms (GAs). In the work described here, a GA methodology was successfully applied to define sets of environmental conditions for microbial enrichments and pure cultures to achieve maximum rates of perchlorate degradation. Over the course of 11 generations of optimization using a GA, we saw statistically significant 16.45- and 16.76-fold increases in average perchlorate degradation rates by Dechlorosoma sp. strain KJ and Dechloromonas sp. strain Miss R, respectively. For two bacterial consortia, Pl6 and Cw3, 5.79- and 5.75-fold increases in average perchlorate degradation were noted. Comparison of zero-order kinetic rate constants for environmental conditions in GA-determined first and last generations of all bacterial cultures additionally showed marked increases. Copyright © 2011 Elsevier B.V. All rights reserved.
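A GA of the kind used for such media optimization can be sketched as follows. The fitness function here is a hypothetical stand-in for a measured degradation rate, and the parameter ranges (pH, carbon-source concentration) are invented for illustration:

```python
import random

def genetic_algorithm(fitness, bounds, pop_size=30, generations=40, seed=1):
    """Minimal real-coded GA: elitist truncation selection, uniform
    crossover, and Gaussian mutation clipped to the allowed ranges."""
    rng = random.Random(seed)
    def clip(v, lo, hi):
        return max(lo, min(hi, v))
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        elite = sorted(pop, key=fitness, reverse=True)[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            i = rng.randrange(len(child))          # mutate one gene
            lo, hi = bounds[i]
            child[i] = clip(child[i] + rng.gauss(0, 0.1 * (hi - lo)), lo, hi)
            children.append(child)
        pop = elite + children                     # elitism: best survives
    return max(pop, key=fitness)

# Hypothetical fitness: degradation rate peaking at pH 7.2, 20 mM substrate
bounds = [(5.0, 9.0), (0.0, 50.0)]
def rate(x):
    return -(x[0] - 7.2) ** 2 - 0.01 * (x[1] - 20.0) ** 2

best = genetic_algorithm(rate, bounds)
```

In the experimental setting, evaluating `fitness` means actually growing the cultures under each candidate condition set and measuring perchlorate loss, which is why only 11 generations were run.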
DEFF Research Database (Denmark)
Hove, Jens D; Rasmussen, Rune; Freiberg, Jacob
2008-01-01
BACKGROUND: The purpose of this study was to investigate the quantitative properties of ordered-subset expectation maximization (OSEM) on kinetic modeling with nitrogen 13 ammonia compared with filtered backprojection (FBP) in healthy subjects. METHODS AND RESULTS: Cardiac N-13 ammonia positron...... and OSEM flow values were observed with a flow underestimation of 45% (rest/dipyridamole) in the septum and of 5% (rest) and 15% (dipyridamole) in the lateral myocardial wall. CONCLUSIONS: OSEM reconstruction of myocardial perfusion images with N-13 ammonia and PET produces high-quality images for visual...
Dubois, Anne; Lavielle, Marc; Gsteiger, Sandro; Pigeolet, Etienne; Mentré, France
2011-09-20
In this work, we develop a bioequivalence analysis using nonlinear mixed effects models (NLMEM) that mimics the standard noncompartmental analysis (NCA). We estimate NLMEM parameters, including between-subject and within-subject variability and treatment, period and sequence effects. We explain how to perform a Wald test on a secondary parameter, and we propose an extension of the likelihood ratio test for bioequivalence. We compare these NLMEM-based bioequivalence tests with standard NCA-based tests. We evaluate by simulation the NCA and NLMEM estimates and the type I error of the bioequivalence tests. For NLMEM, we use the stochastic approximation expectation maximisation (SAEM) algorithm implemented in Monolix. We simulate crossover trials under H(0) using different numbers of subjects and of samples per subject. We simulate with different settings for between-subject and within-subject variability and for the residual error variance. The simulation study illustrates the accuracy of NLMEM-based geometric means estimated with the SAEM algorithm, whereas the NCA estimates are biased for sparse design. NCA-based bioequivalence tests show good type I error except for high variability. For a rich design, type I errors of NLMEM-based bioequivalence tests (Wald test and likelihood ratio test) do not differ from the nominal level of 5%. Type I errors are inflated for sparse design. We apply the bioequivalence Wald test based on NCA and NLMEM estimates to a three-way crossover trial, showing that Omnitrope® (Sandoz GmbH, Kundl, Austria) powder and solution are bioequivalent to Genotropin® (Pfizer Pharma GmbH, Karlsruhe, Germany). NLMEM-based bioequivalence tests are an alternative to standard NCA-based tests. However, caution is needed for small sample sizes and highly variable drugs. Copyright © 2011 John Wiley & Sons, Ltd.
Lan, Zhou; Zhao, Chen; Guo, Weiqun; Guan, Xiong; Zhang, Xiaolin
2015-01-01
Spinosyns, products of the secondary metabolic pathway of Saccharopolyspora spinosa, show high insecticidal activity, but difficulty in enhancing the spinosad yield limits wide application. The fermentation process is a key factor in this case. Response surface methodology (RSM) and artificial neural network (ANN) modeling were applied to optimize medium components for spinosad production using S. spinosa strain CGMCC4.1365. Experiments were performed using a rotatable central composite design, and the data obtained were used to construct an ANN model and an RSM model. Using a genetic algorithm (GA), the input space of the ANN model was optimized to obtain optimal values of medium component concentrations. The regression coefficients (R²) for the ANN and RSM models were 0.9866 and 0.9458, respectively, indicating that the fitness of the ANN model was higher. The maximal spinosad yield (401.26 mg/l) was obtained using ANN/GA-optimized concentrations. The hybrid ANN/GA approach provides a viable alternative to the conventional RSM approach for the modeling and optimization of fermentation processes. © 2015 S. Karger AG, Basel.
DEFF Research Database (Denmark)
Hove, Jens Dahlgaard; Rasmussen, R.; Freiberg, J.
2008-01-01
BACKGROUND: The purpose of this study was to investigate the quantitative properties of ordered-subset expectation maximization (OSEM) on kinetic modeling with nitrogen 13 ammonia compared with filtered backprojection (FBP) in healthy subjects. METHODS AND RESULTS: Cardiac N-13 ammonia positron...... emission tomography (PET) studies from 20 normal volunteers at rest and during dipyridamole stimulation were analyzed. Image data were reconstructed with either FBP or OSEM. FBP- and OSEM-derived input functions and tissue curves were compared together with the myocardial blood flow and spillover values...... and OSEM flow values were observed with a flow underestimation of 45% (rest/dipyridamole) in the septum and of 5% (rest) and 15% (dipyridamole) in the lateral myocardial wall. CONCLUSIONS: OSEM reconstruction of myocardial perfusion images with N-13 ammonia and PET produces high-quality images for visual...
ADEMA: an algorithm to determine expected metabolite level alterations using mutual information.
Directory of Open Access Journals (Sweden)
A Ercument Cicek
Full Text Available Metabolomics is a relatively new "omics" platform, which analyzes a discrete set of metabolites detected in bio-fluids or tissue samples of organisms. It has been used in a diverse array of studies to detect biomarkers and to determine activity rates for pathways based on changes due to disease or drugs. Recent improvements in analytical methodology and large sample throughput allow for creation of large datasets of metabolites that reflect changes in metabolic dynamics due to disease or a perturbation in the metabolic network. However, current methods of comprehensive analyses of large metabolic datasets (metabolomics are limited, unlike other "omics" approaches where complex techniques for analyzing coexpression/coregulation of multiple variables are applied. This paper discusses the shortcomings of current metabolomics data analysis techniques, and proposes a new multivariate technique (ADEMA based on mutual information to identify expected metabolite level changes with respect to a specific condition. We show that ADEMA better predicts De Novo Lipogenesis pathway metabolite level changes in samples with Cystic Fibrosis (CF than prediction based on the significance of individual metabolite level changes. We also applied ADEMA's classification scheme on three different cohorts of CF and wildtype mice. ADEMA was able to predict whether an unknown mouse has a CF or a wildtype genotype with 1.0, 0.84, and 0.9 accuracy for each respective dataset. ADEMA results had up to 31% higher accuracy as compared to other classification algorithms. In conclusion, ADEMA advances the state-of-the-art in metabolomics analysis, by providing accurate and interpretable classification results.
Directory of Open Access Journals (Sweden)
Alomair O.
2015-11-01
Full Text Available Miscible gas injection is one of the most important enhanced oil recovery (EOR) approaches for increasing oil recovery. Due to the massive cost associated with this approach, a high degree of accuracy is required for predicting the outcome of the process. Such accuracy includes the preliminary screening parameters for gas miscible displacement: the “Minimum Miscibility Pressure” (MMP) and the availability of the gas. All conventional and state-of-the-art MMP measurement methods are either time-consuming or decidedly cost-demanding processes. Therefore, in order to address immediate industry demands, a nonparametric approach, Alternating Conditional Expectation (ACE), is used in this study to estimate MMP. This algorithm of Breiman and Friedman [Breiman L., Friedman J.H. (1985) J. Am. Stat. Assoc. 80, 391, 580-619] estimates the transformations of a set of predictors (here C1, C2, C3, C4, C5, C6, C7+, CO2, H2S, N2, Mw5+, Mw7+ and T) and a response (here MMP) that produce the maximum linear effect between these transformed variables. One hundred thirteen MMP data points are considered, both from the relevant published literature and from the experimental work. Five MMP measurements for Kuwaiti oil are included as part of the testing data. The proposed model is validated using detailed statistical analysis; a reasonably good correlation coefficient of 0.956 is obtained compared to the existing correlations. Similarly, the standard deviation and average absolute error values are the lowest, at 139 psia (8.55 bar) and 4.68%, respectively. Hence, the results are more reliable than the existing correlations for pure CO2 injection to enhance oil recovery. In addition to its accuracy, the ACE approach is more powerful, quicker, and can handle huge data sets.
Dreano, Denis
2017-04-05
Specification and tuning of errors from dynamical models are important issues in data assimilation. In this work, we propose an iterative expectation-maximisation (EM) algorithm to estimate the model error covariances using classical extended and ensemble versions of the Kalman smoother. We show that, for additive model errors, the estimate of the error covariance converges. We also investigate other forms of model error, such as parametric or multiplicative errors. We show that additive Gaussian model error is able to compensate for non-additive sources of error in the algorithms we propose. We also demonstrate the limitations of the extended version of the algorithm and recommend the use of the more robust and flexible ensemble version. This article is a proof of concept of the methodology with the Lorenz-63 attractor. We developed an open-source Python library to enable future users to apply the algorithm to their own nonlinear dynamical models.
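For a scalar linear-Gaussian toy model, EM estimation of an additive model-error variance can be sketched with a Kalman filter, an RTS smoother, and the classical Shumway-Stoffer update. This is a deliberately simplified illustration; the paper's setting (Lorenz-63, extended/ensemble smoothers, its Python library) is far more general:

```python
import random

def em_model_error(y, R=1.0, Q0=1.0, iters=30):
    """EM estimate of the model-error variance Q in the scalar random walk
    x_t = x_{t-1} + w_t, w_t ~ N(0,Q), with observations y_t = x_t + v_t,
    v_t ~ N(0,R) and R assumed known."""
    n, Q = len(y), Q0
    for _ in range(iters):
        # Kalman filter (crudely initialized at the first observation)
        xf, Pf, xp, Pp = [y[0]], [R], [y[0]], [R]
        for t in range(1, n):
            xpred, Ppred = xf[-1], Pf[-1] + Q
            K = Ppred / (Ppred + R)
            xp.append(xpred); Pp.append(Ppred)
            xf.append(xpred + K * (y[t] - xpred))
            Pf.append((1 - K) * Ppred)
        # RTS smoother
        xs, Ps, J = [0.0] * n, [0.0] * n, [0.0] * n
        xs[-1], Ps[-1] = xf[-1], Pf[-1]
        for t in range(n - 2, -1, -1):
            J[t] = Pf[t] / Pp[t + 1]
            xs[t] = xf[t] + J[t] * (xs[t + 1] - xp[t + 1])
            Ps[t] = Pf[t] + J[t] ** 2 * (Ps[t + 1] - Pp[t + 1])
        # M-step: expected squared state increment under the smoother
        Q = sum((xs[t] - xs[t - 1]) ** 2 + Ps[t] + Ps[t - 1]
                - 2 * J[t - 1] * Ps[t] for t in range(1, n)) / (n - 1)
    return Q

random.seed(42)
x, y = 0.0, []
for _ in range(500):
    x += random.gauss(0, 0.5 ** 0.5)   # true model-error variance Q = 0.5
    y.append(x + random.gauss(0, 1.0)) # observation-error variance R = 1
Q_hat = em_model_error(y)
print(round(Q_hat, 3))
```

Each EM iteration re-runs the smoother under the current Q and then re-estimates Q from the smoothed state increments, exactly the alternation the abstract describes for the covariance case.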
Zafrir, Nili; Bental, Tamir; Solodky, Alejandro; Ben-Shlomo, Avi; Mats, Israel; Hassid, Yosef; Belzer, Doron; Battler, Alexander; Gutstein, Ariel
2013-02-01
We previously described the feasibility of myocardial perfusion imaging (MPI) with nearly half the radiation dose using ordered-subset expectation maximization with resolution recovery (OSEM-RR) processing. This study sought to determine if the findings can be expanded to obese patients. Fifty obese patients (>100 kg) referred for MPI underwent stress-rest or rest-stress studies with a half dose of Tc-99m sestamibi in a 1-day protocol using OSEM-RR processing. Image quality and clinical results were compared with matched patients (by age, sex, weight, presence/probability of coronary artery disease) evaluated with standard "full-dose" Tc-99m sestamibi, mostly in a 2-day protocol. Dose activities were adjusted individually by weight. Mean Tc-99m activity was 33.4 ± 13.9 mCi in the half-dose group and 60 ± 10 mCi in the full-dose group (P half-dose group and 80% of the full-dose group (P half the radiation dose is feasible in obese patients. Image quality is better than for full-dose MPI, and the procedure can be performed in 1 day.
Holden, J E
2013-01-01
We introduce a method for denoising dynamic PET data, spatio-temporal expectation-maximization (STEM) filtering, which combines four-dimensional Gaussian filtering with EM deconvolution. The initial Gaussian filter suppresses noise at a broad range of spatial and temporal frequencies and EM deconvolution quickly restores the frequencies most important to the signal. We aim to demonstrate that STEM filtering can improve variance in both individual time frames and in parametric images without introducing significant bias. We evaluate STEM filtering with a dynamic phantom study, and with simulated and human dynamic PET studies of a tracer with reversible binding behaviour, [C-11]raclopride, and a tracer with irreversible binding behaviour, [F-18]FDOPA. STEM filtering is compared to a number of established three- and four-dimensional denoising methods. STEM filtering provides substantial improvements in variance in both individual time frames and in parametric images generated with a number of kinetic analysis techniques while introducing little bias. STEM filtering does bias early frames, but this does not affect quantitative parameter estimates. STEM filtering is shown to be superior to the other simple denoising methods studied. STEM filtering is a simple and effective denoising method that could be valuable for a wide range of dynamic PET applications. PMID:23370699
Floberg, J M; Holden, J E
2013-02-21
We introduce a method for denoising dynamic PET data, spatio-temporal expectation-maximization (STEM) filtering, which combines four-dimensional Gaussian filtering with EM deconvolution. The initial Gaussian filter suppresses noise at a broad range of spatial and temporal frequencies and EM deconvolution quickly restores the frequencies most important to the signal. We aim to demonstrate that STEM filtering can improve variance in both individual time frames and in parametric images without introducing significant bias. We evaluate STEM filtering with a dynamic phantom study, and with simulated and human dynamic PET studies of a tracer with reversible binding behaviour, [C-11]raclopride, and a tracer with irreversible binding behaviour, [F-18]FDOPA. STEM filtering is compared to a number of established three- and four-dimensional denoising methods. STEM filtering provides substantial improvements in variance in both individual time frames and in parametric images generated with a number of kinetic analysis techniques while introducing little bias. STEM filtering does bias early frames, but this does not affect quantitative parameter estimates. STEM filtering is shown to be superior to the other simple denoising methods studied. STEM filtering is a simple and effective denoising method that could be valuable for a wide range of dynamic PET applications.
Noise-enhanced clustering and competitive learning algorithms.
Osoba, Osonde; Kosko, Bart
2013-01-01
Noise can provably speed up convergence in many centroid-based clustering algorithms. This includes the popular k-means clustering algorithm. The clustering noise benefit follows from the general noise benefit for the expectation-maximization algorithm because many clustering algorithms are special cases of the expectation-maximization algorithm. Simulations show that noise also speeds up convergence in stochastic unsupervised competitive learning, supervised competitive learning, and differential competitive learning. Copyright © 2012 Elsevier Ltd. All rights reserved.
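The noise benefit for centroid-based clustering can be illustrated by injecting annealed noise into the k-means centroid update (a toy 1-D sketch written for this note, not the authors' code):

```python
import random

def noisy_kmeans(data, k, iters=20, noise0=0.5, seed=0):
    """1-D k-means whose centroid update is perturbed by annealed noise,
    illustrating the 'noise benefit': small decaying perturbations can
    help the iteration escape poor configurations before settling down."""
    rng = random.Random(seed)
    data_sorted = sorted(data)
    # deterministic quantile initialization of the k centroids
    centroids = [data_sorted[(2 * j + 1) * len(data) // (2 * k)]
                 for j in range(k)]
    for it in range(iters):
        clusters = [[] for _ in range(k)]
        for x in data:                 # assignment step
            j = min(range(k), key=lambda c: abs(x - centroids[c]))
            clusters[j].append(x)
        scale = noise0 * (1 - it / iters)   # anneal the noise to zero
        for j in range(k):             # noisy update step
            if clusters[j]:
                mean = sum(clusters[j]) / len(clusters[j])
                centroids[j] = mean + rng.gauss(0, scale)
    return sorted(centroids)

random.seed(1)
data = ([random.gauss(0.0, 0.5) for _ in range(100)]
        + [random.gauss(5.0, 0.5) for _ in range(100)])
print(noisy_kmeans(data, 2))
```

Because k-means is a special case of EM on a spherical Gaussian mixture, this perturbed update is the clustering analogue of the noise-boosted EM iteration the paper analyzes.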
1983-09-01
diagram consisting of specific block types in which each block type represents some basic system action. This visual representation permits other people to... MOE selected: (a) percent of CAS attack sorties for which an expected target kill is achieved at or below a specified weapon weight. ... killed or not. Manual wargames and stochastic or deterministic simulations are examples of this kind of modeling [Ref. 37: p. 12]. The important aspect
Barbee, David L; Flynn, Ryan T; Holden, James E; Nickles, Robert J; Jeraj, Robert
2010-01-07
Tumor heterogeneities observed in positron emission tomography (PET) imaging are frequently compromised by partial volume effects which may affect treatment prognosis, assessment or future implementations such as biologically optimized treatment planning (dose painting). This paper presents a method for partial volume correction of PET-imaged heterogeneous tumors. A point source was scanned on a GE Discovery LS at positions of increasing radii from the scanner's center to obtain the spatially varying point spread function (PSF). PSF images were fit in three dimensions to Gaussian distributions using least squares optimization. Continuous expressions were devised for each Gaussian width as a function of radial distance, allowing for generation of the system PSF at any position in space. A spatially varying partial volume correction (SV-PVC) technique was developed using expectation maximization (EM) and a stopping criterion based on the method's correction matrix generated for each iteration. The SV-PVC was validated using a standard tumor phantom and a tumor heterogeneity phantom and was applied to a heterogeneous patient tumor. SV-PVC results were compared to results obtained from spatially invariant partial volume correction (SINV-PVC), which used directionally uniform three-dimensional kernels. SV-PVC of the standard tumor phantom increased the maximum observed sphere activity by 55 and 40% for 10 and 13 mm diameter spheres, respectively. Tumor heterogeneity phantom results demonstrated that as net changes in the EM correction matrix decreased below 35%, further iterations improved overall quantitative accuracy by less than 1%. SV-PVC of clinically observed tumors frequently exhibited changes of +/-30% in regions of heterogeneity. The SV-PVC method implemented spatially varying kernel widths and automatically determined the number of iterations for optimal restoration, parameters which are arbitrarily chosen in SINV-PVC. Comparing SV-PVC to SINV-PVC demonstrated
Directory of Open Access Journals (Sweden)
Rutledge John
2011-05-01
Full Text Available Abstract Background Standard mean imputation for missing values in the Western Ontario and McMaster Universities (WOMAC) Osteoarthritis Index limits the use of collected data and may lead to bias. Probability model-based imputation methods overcome such limitations but were never before applied to the WOMAC. In this study, we compare imputation results for the Expectation Maximization method (EM) and the mean imputation method for the WOMAC in a cohort of total hip replacement patients. Methods WOMAC data on a consecutive cohort of 2062 patients scheduled for surgery were analyzed. Rates of missing values in each of the WOMAC items from this large cohort were used to create missing patterns in the subset of patients with complete data. EM and the WOMAC's method of imputation are then applied to fill the missing values. Summary score statistics for both methods are then described through box-plots and contrasted with the complete case (CC) analysis and the true score (TS). This process is repeated using a smaller sample size of 200 randomly drawn patients with a higher missing rate (5 times the rates of missing values observed in the 2062 patients, capped at 45%). Results The rate of missing values per item ranged from 2.9% to 14.5%, and 1339 patients had complete data. Probability model-based EM imputed a score for all subjects while the WOMAC's imputation method did not. Mean subscale scores were very similar for both imputation methods and were similar to the true score; however, the EM method results were more consistent with the TS after simulation. This difference became more pronounced as the number of items in a subscale increased and the sample size decreased. Conclusions The EM method provides a better alternative to the WOMAC imputation method. The EM method is more accurate and imputes data to create a complete data set. These features are very valuable for patient-reported outcomes research in which resources are limited and the WOMAC score is used in a multivariate
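The contrast between mean imputation and model-based EM imputation can be illustrated with a toy regression version of the E- and M-steps (a sketch under a bivariate normal assumption, not the study's WOMAC implementation):

```python
def em_impute(pairs, iters=25):
    """Impute missing second components (None) of (x, y) pairs: alternate
    filling each missing y with its conditional mean given x (E-step) and
    refitting the least-squares regression of y on x from the completed
    data (M-step). Plain mean imputation would ignore x entirely."""
    xs = [p[0] for p in pairs]
    ys = [p[1] if p[1] is not None else 0.0 for p in pairs]
    missing = [i for i, p in enumerate(pairs) if p[1] is None]
    for _ in range(iters):
        # M-step: fit y ~ a + b*x on the current completed data
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        b = sxy / sxx
        a = my - b * mx
        # E-step: replace each missing y by its conditional expectation
        for i in missing:
            ys[i] = a + b * xs[i]
    return ys

# y = 2*x with one missing value: EM recovers the conditional mean
pairs = [(0, 0.0), (1, 2.0), (2, 4.0), (3, None), (4, 8.0)]
print(em_impute(pairs)[3])   # ≈ 6.0 (= 2*3)
```

A full EM treatment would also carry the conditional variances into the M-step; the sketch keeps only the conditional means to show why the imputed values respect the covariate, which mean imputation cannot.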
Directory of Open Access Journals (Sweden)
Rabha W. Ibrahim
2018-01-01
Full Text Available The maximum min utility function (MMUF) problem is an important representative of a large class of cloud computing systems (CCS), with numerous applications in practice, especially in economy and industry. This paper introduces an effective solution-based search (SBS) algorithm for solving the MMUF problem. First, we suggest a new formula for the utility function in terms of the capacity of the cloud. We formulate the capacity in CCS by using a fractional diffeo-integral equation; this equation usually describes the flow of CCS. The new formula of the utility function modifies recent active utility functions. The suggested technique first creates a high-quality initial solution by eliminating the less promising components, and then improves the quality of the achieved solution by the summation search solution (SSS). This method employs the Mittag-Leffler sum as a hash function to determine the position of the agent. Experimental results on instances commonly utilized in the literature demonstrate that the proposed algorithm competes favorably with the state-of-the-art algorithms, both in terms of solution quality and computational efficiency.
Directory of Open Access Journals (Sweden)
Tiansong Cui
2016-01-01
Full Text Available Dynamic energy pricing provides a promising solution for the utility companies to incentivize energy users to perform demand side management in order to minimize their electric bills. Moreover, the emerging decentralized smart grid, which is a likely infrastructure scenario for future electrical power networks, allows energy consumers to select their energy provider from among multiple utility companies in any billing period. This paper thus starts by considering an oligopolistic energy market with multiple non-cooperative (competitive utility companies, and addresses the problem of determining dynamic energy prices for every utility company in this market based on a modified Bertrand Competition Model of user behaviors. Two methods of dynamic energy pricing are proposed for a utility company to maximize its total profit. The first method finds the greatest lower bound on the total profit that can be achieved by the utility company, whereas the second method finds the best response of a utility company to dynamic pricing policies that the other companies have adopted in previous billing periods. To exploit the advantages of each method while compensating their shortcomings, an adaptive dynamic pricing policy is proposed based on a machine learning technique, which finds a good balance between invocations of the two aforesaid methods. Experimental results show that the adaptive policy results in consistently high profit for the utility company no matter what policies are employed by the other companies.
Directory of Open Access Journals (Sweden)
Fernando Scarpati
2013-09-01
Full Text Available A number of scholars of private equity (“PE”) have attempted to assess the ex-post returns, or performance, of PEs by adopting an ex-post perspective of asset pricing. In doing so a set of phenomena has been recognized that is thought to be specific to the PE sector, such as the “money-chasing deal phenomenon” (Gompers and Lerner, 2000) and “performance persistence” (Lerner and Schoar, 2005). However, based on their continuing use of an ex-post perspective, few scholars have paid attention to the possible extent to which these and other PE phenomena may affect expected returns from PE investments. To address this problem this article draws on an ex-ante perspective of investment decision-making in suggesting how a number of drivers and factors of PE phenomena may produce “abnormal returns”, and that each of those drivers and factors should therefore be considered in accurately assessing the required risk premium and expected abnormal returns of PE investments. In making these contributions we examined a private equity investment of a regional PE in Italy and administered a telephone questionnaire to 40 PEs in Italy and the UK and found principally that while size is the most important driver in producing abnormal returns, illiquidity alone cannot explain the expected returns of PE investments (cf. Franzoni et al., 2012). Based on our findings we developed a predictive model of PE decision-making that draws on an ex-ante perspective of asset pricing and takes into account PE phenomena and abnormal returns. This model extends the work of Franzoni et al. (2012), Jegadeesh et al. (2009), and Korteweg and Sorensen (2010), who did not consider the possible influence of PE phenomena in decision-making, and will also help PE managers in making better-informed decisions.
Directory of Open Access Journals (Sweden)
Yasui Yutaka
2011-01-01
Full Text Available Abstract Background Autism spectrum disorders (ASD) are associated with complications of pregnancy that implicate fetal hypoxia (FH); the excess of ASD in the male gender is poorly understood. We tested the hypothesis that risk of ASD is related to fetal hypoxia and investigated whether this effect is greater among males. Methods Provincial delivery records (PDR) identified the cohort of all 218,890 singleton live births in the province of Alberta, Canada, between 01-01-98 and 12-31-04. These were followed up for ASD via ICD-9 diagnostic codes assigned by physician billing until 03-31-08. Maternal and obstetric risk factors, including FH determined from blood tests of acidity (pH), were extracted from the PDR. The binary FH status was missing in approximately half of subjects. Assuming that characteristics of mothers and pregnancies would be correlated with FH, we used an Expectation-Maximization (EM) algorithm to estimate the FH-ASD association, allowing for both missing-at-random (MAR) and specific not-missing-at-random (NMAR) mechanisms. Results Data indicated that there was excess risk of ASD among males who were hypoxic at birth, not materially affected by adjustment for potential confounding due to birth year and socio-economic status: OR 1.13, 95% CI: 0.96, 1.33 (MAR assumption). Limiting analysis to full-term males, the adjusted OR under specific NMAR assumptions spanned a 95% CI of 1.0 to 1.6. Conclusion Our results are consistent with a weak effect of fetal hypoxia on risk of ASD among males. The EM algorithm is an efficient and flexible tool for modeling missing data in the studied setting.
Van Neerven, J.M.A.M.; Veraar, M.C.; Weis, L.
2015-01-01
In this paper, we prove maximal regularity estimates in “square function spaces” which are commonly used in harmonic analysis, spectral theory, and stochastic analysis. In particular, they lead to a new class of maximal regularity results for both deterministic and stochastic equations in Lp-spaces.
Brotnow, Line; Reiss, David; Stover, Carla S; Ganiban, Jody; Leve, Leslie D; Neiderhiser, Jenae M; Shaw, Daniel S; Stevens, Hanna E
2015-01-01
Mothers' stress in pregnancy is considered an environmental risk factor in child development. Multiple stressors may combine to increase risk, and maternal personal characteristics may offset the effects of stress. This study aimed to test the effect of 1) multifactorial prenatal stress, integrating objective "stressors" and subjective "distress," and 2) the moderating effects of maternal characteristics (perceived social support, self-esteem and specific personality traits) on infant birthweight. Hierarchical regression modeling was used to examine cross-sectional data on 403 birth mothers and their newborns from an adoption study. Distress during pregnancy showed a statistically significant association with birthweight (R2 = 0.032, F(2, 398) = 6.782, p = .001). The hierarchical regression model revealed an almost two-fold increase in variance of birthweight predicted by stressors as compared with distress measures (R2Δ = 0.049, F(4, 394) = 5.339, p < .001). Maternal characteristics moderated this association (R2Δ = 0.031, F(4, 389) = 3.413, p = .009). Specifically, the expected benefit to birthweight as a function of higher SES was observed only for mothers with lower levels of harm-avoidance and higher levels of perceived social support. Importantly, the results were not better explained by prematurity, pregnancy complications, exposure to drugs, alcohol or environmental toxins. The findings support multidimensional theoretical models of prenatal stress. Although both objective stressors and subjectively measured distress predict birthweight, they should be considered distinct and cumulative components of stress. This study further highlights that jointly considering risk factors and protective factors in pregnancy improves the ability to predict birthweight.
Indian Academy of Sciences (India)
positive numbers. The word 'algorithm' was most often associated with this algorithm till 1950. It may however be pointed out that several non-trivial algorithms such as synthetic (polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used.
Indian Academy of Sciences (India)
In the description of algorithms and programming languages, what is the role of control abstraction? • What are the inherent limitations of the algorithmic processes? In future articles in this series, we will show that these constructs are powerful and can be used to encode any algorithm. In the next article, we will discuss ...
DEFF Research Database (Denmark)
Hobolth, Asger
2008-01-01
The evolution of DNA sequences can be described by discrete state continuous time Markov processes on a phylogenetic tree. We consider neighbor-dependent evolutionary models where the instantaneous rate of substitution at a site depends on the states of the neighboring sites. Neighbor...
Indian Academy of Sciences (India)
, i is referred to as the loop-index, 'stat-body' is any sequence of ... while i ≤ N do stat-body; i := i+1; endwhile. The algorithm for sorting the numbers is described in Table 1 and the algorithmic steps on a list of 4 numbers shown in Figure 1.
Zak, Michail
2008-01-01
A report discusses an algorithm for a new kind of dynamics based on a quantum-classical hybrid, a quantum-inspired maximizer. The model is represented by a modified Madelung equation in which the quantum potential is replaced by a different, specially chosen 'computational' potential. As a result, the dynamics attains both quantum and classical properties: it preserves superposition and entanglement of random solutions, while allowing one to measure its state variables using classical methods. Such an optimal combination of characteristics is a perfect match for quantum-inspired computing. As an application, an algorithm for the global maximum of an arbitrary integrable function is proposed. The idea of the proposed algorithm is very simple: based upon the Quantum-inspired Maximizer (QIM), introduce a positive function to be maximized as the probability density to which the solution is attracted. Then the larger values of this function will have the higher probability to appear. Special attention is paid to simulation of integer programming and NP-complete problems. It is demonstrated that the global maximum of an integrable function can be found in polynomial time by using the proposed quantum-classical hybrid. The result is extended to a constrained maximum with applications to integer programming and the TSP (Traveling Salesman Problem).
Finding Maximal Quasiperiodicities in Strings
DEFF Research Database (Denmark)
Brodal, Gerth Stølting; Pedersen, Christian N. S.
2000-01-01
of length n in time O(n log n) and space O(n). Our algorithm uses the suffix tree as the fundamental data structure combined with efficient methods for merging and performing multiple searches in search trees. Besides finding all maximal quasiperiodic substrings, our algorithm also marks the nodes...... in the suffix tree that have a superprimitive path-label....
Indian Academy of Sciences (India)
Algorithms. 3. Procedures and Recursion. R K Shyamasundar. In this article we introduce procedural abstraction and illustrate its uses. Further, we illustrate the notion of recursion which is one of the most useful features of procedural abstraction. Procedures. Let us consider a variation of the problem of summing the first M.
Indian Academy of Sciences (India)
number of elements. We shall illustrate the widely used matrix multiplication algorithm using two-dimensional arrays in the following. Consider two matrices A and B of integer type with dimensions m × n and n × p respectively. Then, multiplication of A by B, denoted A × B, is defined by matrix C of dimension m × p where.
A Local Scalable Distributed EM Algorithm for Large P2P Networks
National Aeronautics and Space Administration — This paper describes a local and distributed expectation maximization algorithm for learning parameters of Gaussian mixture models (GMM) in large peer-to-peer (P2P)...
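The distributed algorithm the record describes builds on the standard E and M steps for a Gaussian mixture model. As a reference point, here is a minimal centralized sketch of those steps in plain NumPy; the 1-D data, quantile initialization, and iteration count are illustrative choices, not part of the cited P2P protocol.

```python
import numpy as np

def em_gmm_1d(x, k=2, iters=100):
    """Centralized EM for a 1-D Gaussian mixture model (GMM).
    Illustrative only: the cited work distributes these same E/M
    steps over a P2P network; here everything runs on one node."""
    n = len(x)
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))   # deterministic init
    var = np.full(k, np.var(x))
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E step: responsibilities r[i, j] = P(component j | x_i)
        dens = (w / np.sqrt(2 * np.pi * var)) * \
               np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M step: responsibility-weighted maximum-likelihood updates
        nk = r.sum(axis=0)
        w = nk / n
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-2, 0.5, 500), rng.normal(3, 0.8, 500)])
w, mu, var = em_gmm_1d(x)
```

On this well-separated synthetic mixture the estimated means land close to the true component means; the P2P variant's contribution is computing the same sufficient statistics (`nk`, weighted sums) by local aggregation instead of centrally.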
Adaptive Subgradient Methods for Online AUC Maximization
Ding, Yi; Zhao, Peilin; Hoi, Steven C. H.; Ong, Yew-Soon
2016-01-01
Learning for maximizing AUC performance is an important research problem in Machine Learning and Artificial Intelligence. Unlike traditional batch learning methods for maximizing AUC which often suffer from poor scalability, recent years have witnessed some emerging studies that attempt to maximize AUC by single-pass online learning approaches. Despite their encouraging results reported, the existing online AUC maximization algorithms often adopt simple online gradient descent approaches that...
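The single-pass idea sketched above can be illustrated with the simplest baseline the abstract mentions: online (sub)gradient steps on a pairwise hinge surrogate of AUC. This is a toy plain-SGD sketch, not the paper's adaptive subgradient method, and the data, dimensions, and step size are all invented for illustration.

```python
import numpy as np

def online_auc_step(w, x_pos, x_neg, lr=0.1):
    """One subgradient step on the pairwise hinge surrogate of AUC:
    loss = max(0, 1 - w.(x_pos - x_neg)). Pushing the positive score
    above the negative score by a margin is what ranks pairs correctly."""
    diff = x_pos - x_neg
    if 1.0 - w @ diff > 0:          # margin violated -> nonzero subgradient
        w = w + lr * diff
    return w

def auc(w, X, y):
    """Empirical AUC: fraction of (positive, negative) pairs ranked correctly."""
    s = X @ w
    pos, neg = s[y == 1], s[y == 0]
    return (pos[:, None] > neg[None, :]).mean()

rng = np.random.default_rng(0)
Xp = rng.normal(1.0, 1.0, (200, 5))    # positive class
Xn = rng.normal(-1.0, 1.0, (200, 5))   # negative class
X = np.vstack([Xp, Xn])
y = np.array([1] * 200 + [0] * 200)

w = np.zeros(5)
for i in range(200):                    # single pass over sampled pairs
    w = online_auc_step(w, Xp[i], Xn[i])
```

The adaptive methods of the paper replace the fixed `lr` with per-coordinate step sizes, but the pairwise-surrogate structure is the same.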
Su, Kuan-Hao; Chen, Jay S; Lee, Jih-Shian; Hu, Chi-Min; Chang, Chi-Wei; Chou, Yuan-Hwa; Liu, Ren-Shyan; Chen, Jyh-Cheng
2011-07-01
The objective of this study was to use a mixture of Poisson (MOP) model expectation maximization (EM) algorithm for segmenting microPET images. Simulated rat phantoms with partial volume effect and different noise levels were generated to evaluate the performance of the method. The partial volume correction was performed using an EM deblurring method before the segmentation. The EM-MOP outperforms the conventional EM in terms of estimated spatial accuracy, quantitative accuracy, robustness and computing efficiency. To conclude, the proposed EM-MOP method is a reliable and accurate approach for estimating uptake levels and spatial distributions across target tissues in microPET (11)C-raclopride imaging studies. Copyright © 2011 Elsevier Ltd. All rights reserved.
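The clustering core of a mixture-of-Poissons (MOP) model can be sketched with a small EM on raw pixel counts, where each component is one Poisson uptake level and the responsibilities give a soft segmentation. This sketch omits the abstract's partial-volume EM deblurring step, and the counts are synthetic, not microPET data.

```python
import numpy as np

def em_poisson_mixture(counts, k=2, iters=200):
    """EM for a mixture of Poissons over pixel counts: each region
    is modeled as Poisson(lam_j). The c! term of the Poisson pmf is
    identical across components for a given pixel, so it cancels in
    the responsibilities and is omitted."""
    counts = np.asarray(counts, dtype=float)
    lam = np.quantile(counts, np.linspace(0.25, 0.75, k))  # deterministic init
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E step: log of unnormalized component densities
        logd = np.log(w) + counts[:, None] * np.log(lam) - lam
        logd -= logd.max(axis=1, keepdims=True)            # numerical stability
        r = np.exp(logd)
        r /= r.sum(axis=1, keepdims=True)
        # M step: weighted means are the Poisson MLEs
        nk = r.sum(axis=0)
        w = nk / len(counts)
        lam = (r * counts[:, None]).sum(axis=0) / nk
    return w, lam

rng = np.random.default_rng(0)
counts = np.concatenate([rng.poisson(3, 1000), rng.poisson(20, 1000)])
w, lam = em_poisson_mixture(counts)
# hard segmentation: assign each pixel to its most probable component
labels = (counts[:, None] * np.log(lam) - lam + np.log(w)).argmax(axis=1)
```

With two well-separated uptake levels, the recovered rates sit near the true values and the hard labels recover the two regions almost perfectly.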
Maximizing and customer loyalty: Are maximizers less loyal?
Directory of Open Access Journals (Sweden)
Linda Lai
2011-06-01
Full Text Available Despite their efforts to choose the best of all available solutions, maximizers seem to be more inclined than satisficers to regret their choices and to experience post-decisional dissonance. Maximizers may therefore be expected to change their decisions more frequently and hence exhibit lower customer loyalty to providers of products and services compared to satisficers. Findings from the study reported here (N = 1978) support this prediction. Maximizers reported significantly higher intentions to switch to another service provider (television provider) than satisficers. Maximizers' intentions to switch appear to be intensified and mediated by higher proneness to regret, increased desire to discuss relevant choices with others, higher levels of perceived knowledge of alternatives, and higher ego involvement in the end product, compared to satisficers. Opportunities for future research are suggested.
Simulated annealing algorithm for optimal capital growth
Luo, Yong; Zhu, Bo; Tang, Yong
2014-08-01
We investigate the problem of dynamic optimal capital growth of a portfolio. A general framework was developed in which one strives to maximize the expected logarithmic utility of the long-term growth rate. Exact optimization algorithms run into difficulties in this framework, and this motivates the investigation of applying a simulated annealing algorithm to optimize the capital growth of a given portfolio. Empirical results with real financial data indicate that the approach is promising for capital growth portfolios.
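The approach above can be pictured with a tiny simulated annealing loop that maximizes the empirical expected log growth rate over long-only portfolio weights. The return scenarios, cooling schedule, and proposal scale below are all invented for this sketch; nothing here reproduces the paper's data or tuning.

```python
import numpy as np

rng = np.random.default_rng(0)

# toy return scenarios for 3 assets (rows = scenarios) -- an assumption
R = rng.normal([0.05, 0.02, 0.0], [0.2, 0.1, 0.01], size=(5000, 3))

def log_growth(w):
    """Empirical expected log growth rate of a fully invested portfolio w."""
    return np.mean(np.log1p(R @ w))

def project(w):
    """Keep weights nonnegative and summing to 1 (long-only, no leverage)."""
    w = np.clip(w, 0.0, None)
    return w / w.sum()

def anneal(steps=3000, t0=0.01):
    w = np.full(3, 1 / 3)
    f = log_growth(w)
    best_w, best_f = w, f
    for s in range(steps):
        t = t0 * (1 - s / steps) + 1e-6           # linear cooling schedule
        cand = project(w + rng.normal(0, 0.05, 3))  # random local proposal
        fc = log_growth(cand)
        # accept uphill moves always, downhill moves with Boltzmann probability
        if fc > f or rng.random() < np.exp((fc - f) / t):
            w, f = cand, fc
            if f > best_f:
                best_w, best_f = w, f
    return best_w, best_f

w_star, g_star = anneal()
```

By construction the annealer never returns a portfolio worse than the equal-weight start, and the occasional downhill acceptances are what let it escape local optima of the growth-rate surface.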
Combination therapy design for maximizing sensitivity and minimizing toxicity.
Matlock, Kevin; Berlow, Noah; Keller, Charles; Pal, Ranadip
2017-03-22
Design of personalized targeted therapies involves modeling of patient sensitivity to various drugs and drug combinations. The majority of studies evaluate the sensitivity of tumor cells to targeted drugs without modeling the effect of the drugs on normal cells. In this article, we consider the individual modeling of drug responses of tumor and normal cells and utilize them to design targeted combination therapies that maximize sensitivity over tumor cells and minimize toxicity over normal cells. The problem is formulated as maximizing sensitivity over tumor cell models while maintaining sensitivity below a threshold over normal cell models. We utilize the constrained structure of tumor proliferation models to design an accelerated lexicographic search algorithm for generating the optimal solution. For comparison purposes, we also designed two suboptimal search algorithms based on evolutionary algorithms and hill-climbing techniques. Results over synthetic models and models generated from the Genomics of Drug Sensitivity in Cancer database show the ability of the proposed algorithms to arrive at optimal or close-to-optimal solutions in a significantly lower number of steps as compared to exhaustive search. We also present a theoretical analysis of the expected number of comparisons required for the proposed lexicographic search, which compares favorably with the observed number of computations. The proposed algorithms provide a framework for design of combination therapy that tackles tumor heterogeneity while satisfying toxicity constraints.
Profit maximization mitigates competition
DEFF Research Database (Denmark)
Dierker, Egbert; Grodal, Birgit
1996-01-01
competition than utility maximization. Since profit maximization tends to raise prices, it may be regarded as beneficial for the owners as a whole. Moreover, if profit maximization is a good proxy for utility maximization, then there is no need for a general equilibrium analysis that takes the distribution...
Maximal lattice free bodies, test sets and the Frobenius problem
DEFF Research Database (Denmark)
Jensen, Anders Nedergaard; Lauritzen, Niels; Roune, Bjarke Hammersholt
Maximal lattice free bodies are maximal polytopes without interior integral points. Scarf initiated the study of maximal lattice free bodies relative to the facet normals in a fixed matrix. In this paper we give an efficient algorithm for computing the maximal lattice free bodies of an integral...... method is inspired by the novel algorithm by Einstein, Lichtblau, Strzebonski and Wagon and the Groebner basis approach by Roune....
Influence Maximization in Ising Networks
Lynn, Christopher; Lee, Daniel
In the analysis of social networks, a fundamental problem is influence maximization: Which individuals should be influenced to maximally impact the collective opinions of an entire population? Traditionally, influence maximization has been studied in the context of contagion models and irreversible processes. However, by including stochastic noise in the opinion formation process, repeated interactions between individuals give rise to complex macroscopic patterns that are observed, for example, in the formation of political opinions. Here we map influence maximization in the presence of stochastic noise onto the Ising model, and the resulting problem has a natural physical interpretation as maximizing the magnetization given a budget of external magnetic field. Using the susceptibility matrix, we provide a gradient ascent algorithm for calculating optimal external fields in real-world social networks. Remarkably, we find that the optimal external field solutions undergo a phase transition from intuitively focusing on high-degree individuals at high temperatures to counterintuitively focusing on low-degree individuals at low temperatures, a feature previously neglected under the viral paradigm. We acknowledge support from the U.S. National Science Foundation, the Air Force Office of Scientific Research, and the Department of Transportation.
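The gradient-ascent idea in the abstract can be illustrated under a naive mean-field approximation of the Ising model: the susceptibility matrix follows from differentiating the self-consistency equations, and projected gradient ascent allocates a fixed field budget. Everything below (the star graph, temperature, step schedule) is a toy construction, not the authors' implementation; on this graph at high temperature the budget should concentrate on the high-degree hub, matching the regime the abstract describes.

```python
import numpy as np

def mean_field_m(J, h, beta, iters=500):
    """Solve the naive mean-field equations m_i = tanh(beta * (J m + h)_i)
    by damped fixed-point iteration."""
    m = np.zeros(len(h))
    for _ in range(iters):
        m = 0.5 * m + 0.5 * np.tanh(beta * (J @ m + h))
    return m

def susceptibility(J, h, beta):
    """chi_ij = dm_i/dh_j, from differentiating the mean-field equations:
    (I - beta * D J) chi = beta * D, with D = diag(1 - m^2)."""
    m = mean_field_m(J, h, beta)
    D = np.diag(1.0 - m ** 2)
    return beta * np.linalg.solve(np.eye(len(h)) - beta * D @ J, D)

def optimal_fields(J, beta, budget, steps=200, lr=0.5):
    """Projected gradient ascent on total magnetization under sum(h) = budget."""
    n = len(J)
    h = np.full(n, budget / n)                       # uniform start
    for _ in range(steps):
        grad = susceptibility(J, h, beta).sum(axis=0)  # d(sum_i m_i)/dh_j
        h = np.clip(h + lr * grad, 0.0, None)
        h *= budget / h.sum()                        # project onto the budget
    return h

# toy graph: a star -- node 0 is the hub, nodes 1..4 are leaves
J = np.zeros((5, 5))
for i in range(1, 5):
    J[0, i] = J[i, 0] = 1.0

h_hot = optimal_fields(J, beta=0.2, budget=1.0)      # high-temperature regime
```

In this high-temperature run the largest field lands on the hub; the paper's low-temperature reversal toward low-degree nodes would require exploring larger `beta`, where the naive mean-field fixed point itself becomes less trustworthy.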
Inferring the structure of latent class models using a genetic algorithm
van der Maas, H.L.J.; Raijmakers, M.E.J.; Visser, I.
2005-01-01
Present optimization techniques in latent class analysis apply the expectation maximization algorithm or the Newton-Raphson algorithm for optimizing the parameter values of a prespecified model. These techniques can be used to find maximum likelihood estimates of the parameters, given the specified
Weissman, Alexander
2013-01-01
Convergence of the expectation-maximization (EM) algorithm to a global optimum of the marginal log likelihood function for unconstrained latent variable models with categorical indicators is presented. The sufficient conditions under which global convergence of the EM algorithm is attainable are provided in an information-theoretic context by…
Principles of maximally classical and maximally realistic quantum ...
Indian Academy of Sciences (India)
While these equations hold at t = 0 by definition, in general we expect them to break down at nonzero time since classical motion may not ensure agreement with quantum marginal conditions. Instead we define maximal classicality to mean that h − h_cl is just a sum of a function of (x, t) and a function of (p, t), and λ(t) is as close ...
Maximally incompatible quantum observables
Energy Technology Data Exchange (ETDEWEB)
Heinosaari, Teiko, E-mail: teiko.heinosaari@utu.fi [Turku Centre for Quantum Physics, Department of Physics and Astronomy, University of Turku, FI-20014 Turku (Finland); Schultz, Jussi, E-mail: jussi.schultz@gmail.com [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Toigo, Alessandro, E-mail: alessandro.toigo@polimi.it [Dipartimento di Matematica, Politecnico di Milano, Piazza Leonardo da Vinci 32, I-20133 Milano (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Milano, Via Celoria 16, I-20133 Milano (Italy); Ziman, Mario, E-mail: ziman@savba.sk [RCQI, Institute of Physics, Slovak Academy of Sciences, Dúbravská cesta 9, 84511 Bratislava (Slovakia); Faculty of Informatics, Masaryk University, Botanická 68a, 60200 Brno (Czech Republic)
2014-05-01
The existence of maximally incompatible quantum observables in the sense of a minimal joint measurability region is investigated. Employing the universal quantum cloning device it is argued that only infinite dimensional quantum systems can accommodate maximal incompatibility. It is then shown that two of the most common pairs of complementary observables (position and momentum; number and phase) are maximally incompatible.
Iteration Capping For Discrete Choice Models Using the EM Algorithm
Kabatek, J.
2013-01-01
The Expectation-Maximization (EM) algorithm is a well-established estimation procedure which is used in many domains of econometric analysis. Recent application in a discrete choice framework (Train, 2008) facilitated estimation of latent class models allowing for very flexible treatment of unobserved
Paretti, Nicholas V.; Kennedy, Jeffrey R.; Cohn, Timothy A.
2014-01-01
Flooding is among the costliest natural disasters in terms of loss of life and property in Arizona, which is why the accurate estimation of flood frequency and magnitude is crucial for proper structural design and accurate floodplain mapping. Current guidelines for flood frequency analysis in the United States are described in Bulletin 17B (B17B), yet since B17B’s publication in 1982 (Interagency Advisory Committee on Water Data, 1982), several improvements have been proposed as updates for future guidelines. Two proposed updates are the Expected Moments Algorithm (EMA) to accommodate historical and censored data, and a generalized multiple Grubbs-Beck (MGB) low-outlier test. The current guidelines use a standard Grubbs-Beck (GB) method to identify low outliers, changing the determination of the moment estimators because B17B uses a conditional probability adjustment to handle low outliers while EMA censors the low outliers. B17B and EMA estimates are identical if no historical information or censored or low outliers are present in the peak-flow data. EMA with MGB (EMA-MGB) test was compared to the standard B17B (B17B-GB) method for flood frequency analysis at 328 streamgaging stations in Arizona. The methods were compared using the relative percent difference (RPD) between annual exceedance probabilities (AEPs), goodness-of-fit assessments, random resampling procedures, and Monte Carlo simulations. The AEPs were calculated and compared using both station skew and weighted skew. Streamgaging stations were classified by U.S. Geological Survey (USGS) National Water Information System (NWIS) qualification codes, used to denote historical and censored peak-flow data, to better understand the effect that nonstandard flood information has on the flood frequency analysis for each method. Streamgaging stations were also grouped according to geographic flood regions and analyzed separately to better understand regional differences caused by physiography and climate. The B
Maximization, learning, and economic behavior.
Erev, Ido; Roth, Alvin E
2014-07-22
The rationality assumption that underlies mainstream economic theory has proved to be a useful approximation, despite the fact that systematic violations to its predictions can be found. That is, the assumption of rational behavior is useful in understanding the ways in which many successful economic institutions function, although it is also true that actual human behavior falls systematically short of perfect rationality. We consider a possible explanation of this apparent inconsistency, suggesting that mechanisms that rest on the rationality assumption are likely to be successful when they create an environment in which the behavior they try to facilitate leads to the best payoff for all agents on average, and most of the time. Review of basic learning research suggests that, under these conditions, people quickly learn to maximize expected return. This review also shows that there are many situations in which experience does not increase maximization. In many cases, experience leads people to underweight rare events. In addition, the current paper suggests that it is convenient to distinguish between two behavioral approaches to improve economic analyses. The first, and more conventional approach among behavioral economists and psychologists interested in judgment and decision making, highlights violations of the rational model and proposes descriptive models that capture these violations. The second approach studies human learning to clarify the conditions under which people quickly learn to maximize expected return. The current review highlights one set of conditions of this type and shows how the understanding of these conditions can facilitate market design.
Kirk, James J.
This paper describes five maxims for an effective faculty mentoring program developed at Western Carolina University (North Carolina) designed to increase retention of new faculty. The first maxim, "ask what the program will do for the school," emphasizes that a program should not be undertaken until this question has been specifically…
Directory of Open Access Journals (Sweden)
Andrew M. Parker
2007-12-01
Full Text Available Our previous research suggests that people reporting a stronger desire to maximize obtain worse life outcomes (Bruine de Bruin et al., 2007). Here, we examine whether this finding may be explained by the decision-making styles of self-reported maximizers. Expanding on Schwartz et al. (2002), we find that self-reported maximizers are more likely to show problematic decision-making styles, as evidenced by self-reports of less behavioral coping, greater dependence on others when making decisions, more avoidance of decision making, and greater tendency to experience regret. Contrary to predictions, self-reported maximizers were more likely to report spontaneous decision making. However, the relationship between self-reported maximizing and worse life outcomes is largely unaffected by controls for measures of other decision-making styles, decision-making competence, and demographic variables.
Distributed Maximality based CTL Model Checking
Djamel Eddine Saidouni; Zine EL Abidine Bouneb
2010-01-01
In this paper we investigate an approach to perform a distributed CTL model-checking algorithm on a network of workstations using Kleene three-valued logic. The state space is partitioned among the network nodes. We represent the incomplete state spaces as Maximality-based Labeled Transition Systems (MLTS), which are able to express true concurrency. We execute the same algorithm in parallel in each node, for a certain property, on an incomplete MLTS; the latter computes the set of states which satisfy o...
Accurate and Robust Ego-Motion Estimation using Expectation Maximization
Dubbelman, G.; Mark, W. van der; Groen, F.C.A.
2008-01-01
A novel robust visual-odometry technique, called EM-SE(3), is presented and compared against using the random sample consensus (RANSAC) for ego-motion estimation. In this contribution, stereo-vision is used to generate a number of minimal-set motion hypotheses. By using EM-SE(3), which involves
Performance of different population pharmacokinetic algorithms.
Colucci, Philippe; Grenier, Julie; Yue, Corinne Seng; Turgeon, Jacques; Ducharme, Murray P
2011-10-01
There has been an increased focus on population pharmacokinetics (PK) to improve the drug development process since the "Critical Path" paper by the Food and Drug Administration. This increased interest has given rise to additional algorithms. The purpose of this exercise was to compare the new algorithms iterative-2-stage (ITS) and maximum likelihood expectation maximization (MLEM) available in ADAPT 5 with other methods. A total of 29 clinical trials with different study designs were simulated. Different algorithms were used to fit the simulated data, and the estimated parameters were compared with the true values. The algorithms ITS and MLEM were compared with the standard-2-stage method, the Iterative-2-Stage (IT2S) method in the IT2S package, and the first-order conditional estimate (FOCE) method in NONMEM version VI. Imprecision and bias for the population PK parameters, variances, and individual PK parameters were used to compare the methods. Population PK parameters were well estimated and bias was low for all nonlinear mixed-effect modeling approaches. These approaches were superior to the standard-2-stage analyses. The algorithm MLEM was better than IT2S and ITS in predicting the PK and variability parameters. Residual variability was better estimated using MLEM and FOCE. A difference in the estimation of the variance exists between FOCE and the other methods. Variances estimated with FOCE often had shrinkage issues, whereas MLEM in ADAPT 5 had practically no shrinkage problems. Using MLEM, a reduction from 3000 to 1000 samples in the expectation maximization step had no impact on the results. The new algorithm MLEM in ADAPT 5 was consistently better than IT2S and ITS in its prediction of PK parameters, variances, and the residual variability. It was comparable with the FOCE method, with significantly fewer shrinkage issues in the estimation of variance. The number of samples used in the expectation maximization step with MLEM did not influence the results.
Utility maximization under solvency constraints and unhedgeable risks
Kleinow, T.; Pelsser, A.
2008-01-01
We consider the utility maximization problem for an investor who faces a solvency or risk constraint in addition to a budget constraint. The investor wishes to maximize her expected utility from terminal wealth subject to a bound on her expected solvency at maturity. We measure solvency using a
DEFF Research Database (Denmark)
Karlson, Kristian Bernt
outlines how the expectation-based explanation of IEO complements explanations stressing family resources as an important cause of IEO; it carefully defines "expectation," the core concept underlying the dissertation; it places the methodological contributions of the dissertation in the debate over...... for their educational futures. Focusing on the causes rather than the consequences of educational expectations, I argue that students shape their expectations in response to the signals about their academic performance they receive from institutionalized performance indicators in schools. Chapter II considers...... strongly suggest that students rely on information about their academic performances when considering their educational prospects. The two chapters thus highlight that educational expectations are subject to change over the educational career, and that educational systems play a prominent role in students...
DEFF Research Database (Denmark)
Karlson, Kristian Bernt
of the relation between the self and educational prospects; evaluations that are socially bounded in that students take their family's social position into consideration when forming their educational expectations. One important consequence of this learning process is that equally talented students tend to make...... different educational choices according to their family background. IEO thus appears to be mediated by the expectations students hold for their futures. Taken together, this research agenda argues that both researchers and policy-makers need to consider the expectation-based origin of educational...... outlines how the expectation-based explanation of IEO complements explanations stressing family resources as an important cause of IEO; it carefully defines "expectation," the core concept underlying the dissertation; it places the methodological contributions of the dissertation in the debate over...
Galica, G. E.; Dichter, B. K.; Tsui, S.; Golightly, M. J.; Lopate, C.; Connell, J. J.
2016-05-01
The space weather instruments (Space Environment In-Situ Suite - SEISS) on the soon-to-be-launched NOAA GOES-R series spacecraft offer significant space weather measurement performance advances over the previous GOES N-P series instruments. The specifications require that the instruments ensure proper operation under the most stressful high-flux conditions, corresponding to the largest solar particle event expected during the program, while maintaining high sensitivity at low flux levels. Since the performance of remote sensing instruments is sensitive to local space weather conditions, the SEISS data will be of use to a broad community of users. The SEISS suite comprises five individual sensors and a data processing unit: Magnetospheric Particle Sensor-Low (0.03-30 keV electrons and ions), Magnetospheric Particle Sensor-High (0.05-4 MeV electrons, 0.08-12 MeV protons), two Solar And Galactic Proton Sensors (1 to >500 MeV protons), and the Energetic Heavy Ion Sensor (10-200 MeV for H, H to Fe with single-element resolution). We present comparisons between the enhanced GOES-R instruments and the current GOES space weather measurement capabilities. We provide an overview of the sensor configurations and performance. Results of extensive sensor modeling with GEANT, FLUKA and SIMION are compared with calibration data measured over nearly the entire energy range of the instruments. Combination of the calibration results and models is used to calculate the geometric factors of the various energy channels. The calibrated geometric factors and typical and extreme space weather environments are used to calculate the expected on-orbit performance.
DEFF Research Database (Denmark)
Karlson, Kristian Bernt
outlines how the expectation-based explanation of IEO complements explanations stressing family resources as an important cause of IEO; it carefully defines "expectation," the core concept underlying the dissertation; it places the methodological contributions of the dissertation in the debate over......' expectation formation. Chapters IV and V constitute the methodological contribution of the dissertation. Chapter IV develops a general method for decomposing total effects into its direct and indirect counterparts in nonlinear probability models such as the logistic response model. The method forms a solution...
von Davier, Matthias
2016-01-01
This report presents results on a parallel implementation of the expectation-maximization (EM) algorithm for multidimensional latent variable models. The developments presented here are based on code that parallelizes both the E step and the M step of the parallel-E parallel-M algorithm. Examples presented in this report include item response…
DEFF Research Database (Denmark)
Karlson, Kristian Bernt
In this dissertation I examine the relationship between subjective beliefs about the outcomes of educational choices and the generation of inequality of educational opportunity (IEO) in post-industrial society. Taking my departure in the rational action turn in the sociology of educational...... different educational choices according to their family background. IEO thus appears to be mediated by the expectations students hold for their futures. Taken together, this research agenda argues that both researchers and policy-makers need to consider the expectation-based origin of educational...... for their educational futures. Focusing on the causes rather than the consequences of educational expectations, I argue that students shape their expectations in response to the signals about their academic performance they receive from institutionalized performance indicators in schools. Chapter II considers...
DEFF Research Database (Denmark)
Karlson, Kristian Bernt
different educational choices according to their family background. IEO thus appears to be mediated by the expectations students hold for their futures. Taken together, this research agenda argues that both researchers and policy-makers need to consider the expectation-based origin of educational...... inequalities if educational reform is to promote educational and social mobility in post-industrial society. I pursue my research agenda in five chapters. In the introductory Chapter I I situate my research contributions in the tradition of the sociology of educational stratification. This chapter also...... outlines how the expectation-based explanation of IEO complements explanations stressing family resources as an important cause of IEO; it carefully defines "expectation," the core concept underlying the dissertation; it places the methodological contributions of the dissertation in the debate over...
DEFF Research Database (Denmark)
Nash, Ulrik William
2014-01-01
cognitive bounds will perceive business opportunities identically. In addition, because cues provide information about latent causal structures of the environment, changes in causality must be accompanied by changes in cognitive representations if adaptation is to be maintained. The concept of evolutionary......, they are correlated among people who share environments because these individuals satisfice within their cognitive bounds by using cues in order of validity, as opposed to using cues arbitrarily. Any difference in expectations thereby arises from differences in cognitive ability, because two individuals with identical......The concept of evolutionary expectations descends from cue learning psychology, synthesizing ideas on rational expectations with ideas on bounded rationality, to provide support for these ideas simultaneously. Evolutionary expectations are rational, but within cognitive bounds. Moreover...
DEFF Research Database (Denmark)
Karlson, Kristian Bernt
strongly suggest that students rely on information about their academic performances when considering their educational prospects. The two chapters thus highlight that educational expectations are subject to change over the educational career, and that educational systems play a prominent role in students...... stratification, I argue that students facing significant educational transitions form their educational expectations by taking into account the foreseeable, yet inherently uncertain, consequences of potential educational pathways. This process of expectation formation, I posit, involves evaluations...... of the relation between the self and educational prospects; evaluations that are socially bounded in that students take their family's social position into consideration when forming their educational expectations. One important consequence of this learning process is that equally talented students tend to make...
DEFF Research Database (Denmark)
Karlson, Kristian Bernt
stratification, I argue that students facing significant educational transitions form their educational expectations by taking into account the foreseeable, yet inherently uncertain, consequences of potential educational pathways. This process of expectation formation, I posit, involves evaluations...... of the relation between the self and educational prospects; evaluations that are socially bounded in that students take their family's social position into consideration when forming their educational expectations. One important consequence of this learning process is that equally talented students tend to make...... the role of causal inference in social science; and it discusses the potential of the findings of the dissertation to inform educational policy. In Chapters II and III, constituting the substantive contribution of the dissertation, I examine the process through which students form expectations...
Robert Lapson
1992-01-01
A procedure for decision-making under risk is developed and axiomatized. It provides another explanation for the Allais paradox as well as justification for some other preference patterns that cannot be represented by the expected utility model, but it includes expected utility representation of preferences as a particular case. The idea of the procedure is that evaluation of the lotteries takes two steps. First, a decision maker classifies a lottery as a "bad," "good" or "medium" one. Then ...
DEFF Research Database (Denmark)
Karlson, Kristian Bernt
In this dissertation I examine the relationship between subjective beliefs about the outcomes of educational choices and the generation of inequality of educational opportunity (IEO) in post-industrial society. Taking my departure in the rational action turn in the sociology of educational...... different educational choices according to their family background. IEO thus appears to be mediated by the expectations students hold for their futures. Taken together, this research agenda argues that both researchers and policy-makers need to consider the expectation-based origin of educational...... strongly suggest that students rely on information about their academic performances when considering their educational prospects. The two chapters thus highlight that educational expectations are subject to change over the educational career, and that educational systems play a prominent role in students...
DEFF Research Database (Denmark)
Andersen, Klaus Ejner
1985-01-01
Guinea pig maximization tests (GPMT) with chlorocresol were performed to ascertain whether the sensitization rate was affected by minor changes in the Freund's complete adjuvant (FCA) emulsion used. Three types of emulsion were evaluated: the oil phase was mixed with propylene glycol, saline...
Finding all maximal cliques in dynamic graphs
Stix, Volker
2002-01-01
Clustering applications dealing with perception-based or biased data lead to models with non-disjoint clusters. There, objects to be clustered are allowed to belong to several clusters at the same time, which results in a fuzzy clustering. It can be shown that this is equivalent to searching all maximal cliques in dynamic graphs G_t=(V,E_t), where E_{t-1} ⊆ E_t, t=1,...,T; E_0=∅. In this article algorithms are provided to track all maximal cliques in a fully dynamic graph. It is natura...
Dickens, Charles
2005-01-01
One of Dickens's most renowned and enjoyable novels, Great Expectations tells the story of Pip, an orphan boy who wishes to transcend his humble origins and finds himself unexpectedly given the opportunity to live a life of wealth and respectability. Over the course of the tale, in which Pip
DEFF Research Database (Denmark)
Holm, Claus
2015-01-01
Young Australians’ post-school futures are uncertain, insecure and fluid in relation to working life. But if you think that this is the recipe for a next generation of depressed young Australians, you may be wrong. A new book documents that young people are characterised by optimism, but their expectations of the future differ from those of their parents....
Models and Algorithms for Tracking Target with Coordinated Turn Motion
Directory of Open Access Journals (Sweden)
Xianghui Yuan
2014-01-01
Tracking target with coordinated turn (CT) motion is highly dependent on the models and algorithms. First, the widely used models are compared in this paper—coordinated turn (CT) model with known turn rate, augmented coordinated turn (ACT) model with Cartesian velocity, ACT model with polar velocity, CT model using a kinematic constraint, and maneuver-centered circular motion model. Then, in the single-model tracking framework, the tracking algorithms for the last four models are compared and suggestions on the choice of models for different practical target tracking problems are given. Finally, in the multiple-model (MM) framework, an algorithm based on the expectation maximization (EM) algorithm is derived, including both the batch form and the recursive form. Compared with the widely used interacting multiple model (IMM) algorithm, the EM algorithm shows its effectiveness.
Principles of maximally classical and maximally realistic quantum ...
Indian Academy of Sciences (India)
Home; Journals; Pramana – Journal of Physics; Volume 59; Issue 2. Principles of maximally classical and maximally realistic quantum mechanics. S M Roy. Volume 59 Issue 2 August ... Keywords. Maximally realistic quantum theory; phase space Bell inequalities; maximally classical trajectories in realistic quantum theory.
Directory of Open Access Journals (Sweden)
Janusz Brzozowski
2014-05-01
The atoms of a regular language are non-empty intersections of complemented and uncomplemented quotients of the language. Tight upper bounds on the number of atoms of a language and on the quotient complexities of atoms are known. We introduce a new class of regular languages, called the maximally atomic languages, consisting of all languages meeting these bounds. We prove the following result: if L is a regular language of quotient complexity n and G is the subgroup of permutations in the transition semigroup T of the minimal DFA of L, then L is maximally atomic if and only if G is transitive on k-subsets of {1,...,n} for 0 <= k <= n and T contains a transformation of rank n-1.
Robust Mean Change-Point Detecting through Laplace Linear Regression Using EM Algorithm
Directory of Open Access Journals (Sweden)
Fengkai Yang
2014-01-01
normal distribution, we developed the expectation maximization (EM) algorithm to estimate the position of the mean change-point. We investigated the performance of the algorithm through different simulations, finding that our method is robust to the distributions of errors and is effective in estimating the position of the mean change-point. Finally, we applied our method to the classical Holbert data and detected a change-point.
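The robustness idea behind the EM estimate above can be sketched without the full algorithm: because the Laplace maximum-likelihood location estimate is the segment median, a single mean change-point can be located by minimizing the total absolute deviation about the two segment medians. The sketch below uses a plain grid search rather than the article's EM iteration, and the data are made up:

```python
from statistics import median

def laplace_changepoint(y):
    """Estimate a single mean change-point by minimizing the total
    absolute deviation about segment medians (the Laplace MLE of
    location is the median, which gives robustness to outliers)."""
    n = len(y)
    best_k, best_cost = None, float("inf")
    for k in range(1, n):  # candidate change-point after index k-1
        left, right = y[:k], y[k:]
        cost = sum(abs(v - median(left)) for v in left) \
             + sum(abs(v - median(right)) for v in right)
        if cost < best_cost:
            best_k, best_cost = k, cost
    return best_k

data = [0.1, -0.2, 0.0, 0.3, 5.2, 4.8, 5.1, 4.9]
print(laplace_changepoint(data))  # → 4 (mean shifts between index 3 and 4)
```

The grid search is O(n^2) as written; the EM formulation in the article achieves the same robustness within a likelihood framework.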
Social group utility maximization
Gong, Xiaowen; Yang, Lei; Zhang, Junshan
2014-01-01
This SpringerBrief explains how to leverage mobile users' social relationships to improve the interactions of mobile devices in mobile networks. It develops a social group utility maximization (SGUM) framework that captures diverse social ties of mobile users and diverse physical coupling of mobile devices. Key topics include random access control, power control, spectrum access, and location privacy.This brief also investigates SGUM-based power control game and random access control game, for which it establishes the socially-aware Nash equilibrium (SNE). It then examines the critical SGUM-b
Dopaminergic balance between reward maximization and policy complexity
Directory of Open Access Journals (Sweden)
Naama eParush
2011-05-01
Previous reinforcement-learning models of the basal ganglia network have highlighted the role of dopamine in encoding the mismatch between prediction and reality. Far less attention has been paid to the computational goals and algorithms of the main axis (actor). Here, we construct a top-down model of the basal ganglia with emphasis on the role of dopamine as both a reinforcement learning signal and a pseudo-temperature signal controlling the general level of basal ganglia excitability and motor vigilance of the acting agent. We argue that the basal ganglia endow the thalamo-cortical networks with the optimal dynamic tradeoff between two constraints: minimizing the policy complexity (cost) and maximizing the expected future reward (gain). We show that this multi-dimensional optimization process results in an experience-modulated version of the softmax behavioral policy. Thus, as in classical softmax behavioral policies, probabilities of actions are selected according to their estimated values and the pseudo-temperature, but in addition they also vary according to the frequency of previous choices of these actions. We conclude that the computational goal of the basal ganglia is not to maximize cumulative (positive and negative) reward. Rather, the basal ganglia aim at optimization of independent gain and cost functions. Unlike previously suggested single-variable maximization processes, this multi-dimensional optimization process leads naturally to a softmax-like behavioral policy. We suggest that beyond its role in the modulation of the efficacy of the cortico-striatal synapses, dopamine directly affects striatal excitability and thus provides a pseudo-temperature signal that modulates the trade-off between gain and cost. The resulting experience- and dopamine-modulated softmax policy can then serve as a theoretical framework to account for the broad range of behaviors and clinical states governed by the basal ganglia and dopamine systems.
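The experience-modulated softmax described above can be illustrated with a small sketch. The pseudo-temperature divides the action values as in a classical softmax; the `kappa`-weighted choice-frequency term is an assumed, simplified stand-in for the article's experience modulation, not its exact form:

```python
import math

def softmax_policy(values, counts, temperature, kappa=0.1):
    """Pseudo-temperature softmax over action values, with an assumed
    illustrative term that grows with how often each action was chosen."""
    scores = [v / temperature + kappa * math.log(c + 1)
              for v, c in zip(values, counts)]
    m = max(scores)                       # shift for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# Higher value and higher past frequency both raise an action's probability;
# a larger temperature flattens the distribution.
probs = softmax_policy([1.0, 0.5, 0.2], [10, 2, 1], temperature=0.5)
print(probs)
```

Lowering the temperature (higher dopamine tone in the model's terms) sharpens the policy toward the greedy action, which is the gain/cost trade-off the abstract describes.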
Maximal avalanches in the Bak-Sneppen model
Gillett, A.J.; Meester, R.W.J.; van der Wal, P.
2006-01-01
We study the durations of the avalanches in the maximal avalanche decomposition of the Bak-Sneppen evolution model. We show that all the avalanches in this maximal decomposition have infinite expectation, but only 'barely', in the sense that if we made the appropriate threshold a tiny amount smaller
On utility maximization in discrete-time financial market models
Miklos Rasonyi; Lukasz Stettner
2005-01-01
We consider a discrete-time financial market model with finite time horizon and give conditions which guarantee the existence of an optimal strategy for the problem of maximizing expected terminal utility. Equivalent martingale measures are constructed using optimal strategies.
Quantum stochastic calculus with maximal operator domains
Lindsay, J. Martin; Attal, Stéphane
2004-01-01
Quantum stochastic calculus is extended in a new formulation in which its stochastic integrals achieve their natural and maximal domains. Operator adaptedness, conditional expectations and stochastic integrals are all defined simply in terms of the orthogonal projections of the time filtration of Fock space, together with sections of the adapted gradient operator. Free from exponential vector domains, our stochastic integrals may be satisfactorily composed yielding quantum Itô formulas for op...
Dichter, B. K.; Galica, G. E.; Tsui, S.; Golightly, M. J.
2015-12-01
The space weather instruments (Space Environment In-Situ Suite - SEISS) on the soon-to-be-launched NOAA GOES-R spacecraft offer significant performance advances over the previous GOES N-O series instruments. The medium- and high-energy particle instruments, MPS-HI and SGPS, measure differential proton spectra from 80 keV to 500 MeV in a total of 21 logarithmically spaced channels, and electrons from 50 keV to 10 MeV in 10 logarithmically spaced channels. These instruments use solid-state silicon detectors as sensors. Their designs feature multi-detector coincidence telescopes, combined with degrader material, tungsten shielding, and data processing algorithms to optimize the signal-to-noise ratio. Details of the mechanical and electronic design will be presented. Key aspects of data processing, including background subtraction techniques and a novel method to distinguish high-energy rear-entry particles from front-entry ones, will be described. Results of extensive modeling with GEANT4 will be compared with calibration data measured over nearly the entire energy range of the instruments. The two will be combined to calculate the geometric factors of the various energy channels. A listing of the channels and their properties will be presented. The calibrated geometric factors and typical and extreme space weather environments will be used to calculate the expected on-orbit performance. The specifications that the instruments met ensure proper operation under the most stressful high-flux conditions, corresponding to the largest solar particle event expected during the program, and high sensitivity at low flux levels. Comparisons will be made between the enhanced GOES-R instruments and the current GOES space weather measurement capabilities.
Maximal unbordered factors of random strings
DEFF Research Database (Denmark)
Cording, Patrick Hagge; Knudsen, Mathias Bæk Tejs
2016-01-01
A border of a string is a non-empty proper prefix of the string that is also a suffix of the string, and a string is unbordered if it has no border. Loptev, Kucherov, and Starikovskaya [CPM 2015] conjectured the following: If we pick a string of length n from a fixed alphabet uniformly at random......, then the expected length of the maximal unbordered factor is n − O(1). We prove that this conjecture is true by proving that the expected value is in fact n − Θ(σ^(-1)), where σ is the size of the alphabet. We discuss some of the consequences of this theorem....
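The definitions above are easy to make concrete. The brute-force sketch below checks borders directly and scans substrings from longest to shortest; it is only meant to illustrate the quantity whose expectation the theorem bounds, not the paper's analysis:

```python
def is_unbordered(s):
    """A string is unbordered if no non-empty proper prefix equals a suffix."""
    return not any(s[:k] == s[-k:] for k in range(1, len(s)))

def max_unbordered_factor(s):
    """Length of the longest unbordered factor (substring), brute force."""
    n = len(s)
    for length in range(n, 0, -1):          # longest candidates first
        for i in range(n - length + 1):
            if is_unbordered(s[i:i + length]):
                return length
    return 0

# "abaab" is bordered (prefix/suffix "ab"), and its longest
# unbordered factor is "baa", of length 3.
print(max_unbordered_factor("abaab"))  # → 3
```

Sampling random strings over an alphabet of size σ and averaging this quantity gives an empirical check of the n − Θ(σ^(-1)) behaviour for small n.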
Algorithms and Algorithmic Languages.
Veselov, V. M.; Koprov, V. M.
This paper is intended as an introduction to a number of problems connected with the description of algorithms and algorithmic languages, particularly the syntaxes and semantics of algorithmic languages. The terms "letter, word, alphabet" are defined and described. The concept of the algorithm is defined and the relation between the algorithm and…
Song, Hairong; Ferrer, Emilio
2009-01-01
This article presents a state-space modeling (SSM) technique for fitting process factor analysis models directly to raw data. The Kalman smoother, within the expectation-maximization algorithm, is used to obtain maximum likelihood parameter estimates. To examine the finite sample properties of the estimates in SSM when common factors are involved, a…
MAXIMS VIOLATIONS IN LITERARY WORK
Directory of Open Access Journals (Sweden)
Widya Hanum Sari Pertiwi
2015-12-01
This study was qualitative action research that focused on the flouting of Gricean maxims, and the functions of that flouting, in the tales included in the collection of children's literature entitled My Giant Treasury of Stories and Rhymes. The objective of the study is to identify violations of the maxims of quantity, quality, relevance, and manner in the data sources, and to analyze the use of flouting in the tales included in the book. A qualitative design using categorizing strategies, specifically a coding strategy, was applied. Thus the researcher, as the instrument of this investigation, selected the tales, read them, and gathered every item reflecting a violation of the Gricean maxims under the conditions for flouting maxims. On the basis of the data analysis, it was found that some utterances in the tales, in both narration and conversation, flout the four maxims of conversation, namely the maxim of quality, the maxim of quantity, the maxim of relevance, and the maxim of manner. The researcher also found that the flouting of maxims has one basic function, namely to encourage the readers' imagination of the tales. This basic function is developed by six other functions: (1) generating a specific situation, (2) developing the plot, (3) enlivening the characters' utterances, (4) implicating a message, (5) indirectly characterizing characters, and (6) creating an ambiguous setting. Keywords: children literature, tales, flouting maxims
Maximizing relationship possibilities: relational maximization in romantic relationships.
Mikkelson, Alan C; Pauley, Perry M
2013-01-01
Using Rusbult's (1980) investment model and Schwartz's (2000) conceptualization of decision maximization, we sought to understand how an individual's propensity to maximize his or her decisions factored into investment, satisfaction, and awareness of alternatives in romantic relationships. In study one, 275 participants currently involved in romantic relationships completed measures of maximization, satisfaction, investment size, quality of alternatives, and commitment. In study two, 343 participants were surveyed as part of the creation of a scale of relational maximization. Results from both studies revealed that the tendency to maximize (in general and in relationships specifically) was negatively correlated with satisfaction, investment, and commitment, and positively correlated with quality of alternatives. Furthermore, we found that satisfaction and investments mediated the relationship between maximization and relationship commitment.
Demeyer, Sofie; Michoel, Tom; Fostier, Jan; Audenaert, Pieter; Pickavet, Mario; Demeester, Piet
2013-01-01
Subgraph matching algorithms are designed to find all instances of predefined subgraphs in a large graph or network and play an important role in the discovery and analysis of so-called network motifs, subgraph patterns which occur more often than expected by chance. We present the index-based subgraph matching algorithm (ISMA), a novel tree-based algorithm. ISMA realizes a speedup compared to existing algorithms by carefully selecting the order in which the nodes of a query subgraph are investigated. In order to achieve this, we developed a number of data structures and maximally exploited symmetry characteristics of the subgraph. We compared ISMA to a naive recursive tree-based algorithm and to a number of well-known subgraph matching algorithms. Our algorithm outperforms the other algorithms, especially on large networks and with large query subgraphs. An implementation of ISMA in Java is freely available at http://sourceforge.net/projects/isma/. PMID:23620730
Real-time topic-aware influence maximization using preprocessing.
Chen, Wei; Lin, Tian; Yang, Cheng
2016-01-01
Influence maximization is the task of finding a set of seed nodes in a social network such that the influence spread of these seed nodes, based on a certain influence diffusion model, is maximized. Topic-aware influence diffusion models have recently been proposed to address the issue that influence between a pair of users is often topic-dependent, and that the information, ideas, innovations, etc. being propagated in networks are typically mixtures of topics. In this paper, we focus on the topic-aware influence maximization task. In particular, we study preprocessing methods to avoid redoing influence maximization for each mixture from scratch. We explore two preprocessing algorithms with theoretical justifications. Our empirical results on data obtained in a couple of existing studies demonstrate that one of our algorithms stands out as a strong candidate, providing microsecond online response time and competitive influence spread, with reasonable preprocessing effort.
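Topic-aware preprocessing aside, the baseline attack on influence maximization is the classic greedy algorithm over Monte-Carlo estimates of spread. A minimal sketch under the independent cascade model follows; this is the standard baseline, not the paper's preprocessing method, and the toy graph and probability `p` are made up:

```python
import random

def ic_spread(graph, seeds, p=0.1, trials=200, rng=None):
    """Monte-Carlo estimate of expected spread under the independent
    cascade model: each newly activated node gets one chance to
    activate each inactive neighbour with probability p."""
    rng = rng or random.Random(0)           # fixed seed for repeatability
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            nxt = []
            for u in frontier:
                for v in graph.get(u, []):
                    if v not in active and rng.random() < p:
                        active.add(v)
                        nxt.append(v)
            frontier = nxt
        total += len(active)
    return total / trials

def greedy_im(graph, k, p=0.1):
    """Greedy seed selection: repeatedly add the node with the
    largest estimated marginal gain in spread."""
    seeds = []
    for _ in range(k):
        best = max((n for n in graph if n not in seeds),
                   key=lambda n: ic_spread(graph, seeds + [n], p))
        seeds.append(best)
    return seeds

graph = {0: [1, 2], 1: [2, 3], 2: [3], 3: [4], 4: []}
print(greedy_im(graph, 2))
```

The repeated Monte-Carlo simulation inside the greedy loop is exactly the cost that motivates preprocessing: redoing it for every topic mixture from scratch is what the paper's algorithms avoid.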
Mining λ-Maximal Cliques from a Fuzzy Graph
Directory of Open Access Journals (Sweden)
Fei Hao
2016-06-01
The depletion of natural resources in the last century now threatens our planet and the life of future generations. For the sake of sustainable development, this paper pioneers an interesting and practical problem: mining dense substructures (i.e., maximal cliques) in a fuzzy graph whose edges are weighted by a degree of membership. For a parameter 0 ≤ λ ≤ 1 (also called a fuzzy cut in fuzzy logic), a newly defined concept, the λ-maximal clique, is introduced for fuzzy graphs. In order to detect the λ-maximal cliques of a fuzzy graph, an efficient mining algorithm based on Fuzzy Formal Concept Analysis (FFCA) is proposed. Extensive experimental evaluations are conducted to demonstrate the feasibility of the algorithm. In addition, a novel recommendation service based on λ-maximal cliques is provided to illustrate the sustainable usability of the problem addressed.
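The λ-cut step can be sketched directly: threshold the fuzzy edges at λ to obtain a crisp graph, then enumerate its maximal cliques, here with a plain Bron-Kerbosch recursion rather than the paper's FFCA-based algorithm. The fuzzy graph below is made up:

```python
def lambda_cut(fuzzy_edges, lam):
    """Crisp adjacency keeping only edges with membership degree >= lambda."""
    adj = {}
    for (u, v), mu in fuzzy_edges.items():
        adj.setdefault(u, set()); adj.setdefault(v, set())
        if mu >= lam:
            adj[u].add(v); adj[v].add(u)
    return adj

def maximal_cliques(adj):
    """Bron-Kerbosch enumeration of maximal cliques (no pivoting)."""
    cliques = []
    def bk(r, p, x):
        if not p and not x:
            cliques.append(sorted(r))
            return
        for v in list(p):
            bk(r | {v}, p & adj[v], x & adj[v])
            p.remove(v); x.add(v)
    bk(set(), set(adj), set())
    return cliques

fuzzy = {("a", "b"): 0.9, ("b", "c"): 0.8, ("a", "c"): 0.7, ("c", "d"): 0.4}
# At lambda = 0.6 the edge (c, d) drops out, leaving the triangle a-b-c.
print(sorted(maximal_cliques(lambda_cut(fuzzy, 0.6))))
```

Raising λ prunes weak edges, so the surviving maximal cliques shrink; the paper's λ-maximal cliques formalize exactly this dependence on the fuzzy cut.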
Inverting Monotonic Nonlinearities by Entropy Maximization.
Directory of Open Access Journals (Sweden)
Jordi Solé-Casals
This paper proposes a new method for blind inversion of a monotonic nonlinear map applied to a sum of random variables. Such mixtures of random variables are found, for example, in source separation and Wiener system inversion problems. The importance of our proposed method lies in the fact that it permits decoupling the estimation of the nonlinear part (nonlinear compensation) from the estimation of the linear one (source separation matrix or deconvolution filter), which can then be solved by applying any convenient linear algorithm. Our new nonlinear compensation algorithm, the MaxEnt algorithm, generalizes the idea of Gaussianization of the observation by maximizing its entropy instead. We developed two versions of our algorithm, based on either a polynomial or a neural network parameterization of the nonlinear function. We provide a sufficient condition on the nonlinear function and the probability distribution that guarantees that the MaxEnt method succeeds in compensating the distortion. Through an extensive set of simulations, MaxEnt is compared with existing algorithms for blind approximation of nonlinear maps. Experiments show that MaxEnt is able to successfully compensate monotonic distortions, outperforming other methods in terms of the obtained signal-to-noise ratio in many important cases, for example when the number of variables in a mixture is small. Besides its ability to compensate nonlinearities, MaxEnt is very robust, i.e., it shows small variability in the results.
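The Gaussianization baseline that MaxEnt generalizes is easy to sketch: map each observation through its empirical CDF onto standard-normal quantiles, which inverts a monotonic distortion of a Gaussian variable up to an affine map. This is the baseline only, not the MaxEnt algorithm itself, and the exponential distortion below is a made-up example:

```python
from statistics import NormalDist
import math
import random

def gaussianize(observations):
    """Map each observation to the standard-normal quantile of its
    empirical CDF value (a plotting-position estimate rank/(n+1)).
    Preserves the rank order of the observations."""
    n = len(observations)
    order = sorted(range(n), key=lambda i: observations[i])
    nd = NormalDist()
    z = [0.0] * n
    for rank, i in enumerate(order, start=1):
        z[i] = nd.inv_cdf(rank / (n + 1))
    return z

rng = random.Random(1)
x = [rng.gauss(0, 1) for _ in range(500)]   # latent Gaussian variable
y = [math.exp(v) for v in x]                # monotonic distortion
z = gaussianize(y)
# z recovers x up to small quantile-estimation error, despite the
# method never seeing the distortion exp().
```

Because Gaussianization only uses ranks, any strictly monotonic distortion is removed; MaxEnt replaces the Gaussian target with an entropy-maximization criterion, which is what handles sums of non-Gaussian variables.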
A Fast and Efficient Algorithm for Mining Top-k Nodes in Complex Networks
Liu, Dong; Jing, Yun; Zhao, Jing; Wang, Wenjun; Song, Guojie
2017-02-01
One of the key problems in social network analysis is influence maximization, which has great significance both in theory and in practical applications. Given a complex network and a positive integer k, the task is to select k nodes that trigger the largest expected number of activations among the remaining nodes. Most mature algorithms fall into two classes: propagation-based algorithms and topology-based algorithms. Propagation-based algorithms optimize the influence spread process directly, so their influence spread significantly outperforms that of topology-based algorithms; however, they can still take days to complete on large networks. Topology-based algorithms, by contrast, rely on intuitive parameter statistics and static topological properties. Their running times are extremely short, but their influence spread results are unstable. In this paper, we propose a novel topology-based algorithm using a local index rank (LIR). The influence spread of our algorithm is close to that of the propagation-based algorithms and sometimes exceeds it. Moreover, the running time of our algorithm is millions of times shorter than that of propagation-based algorithms. Our experimental results show that our algorithm has good and stable performance under the IC and LT models.
Maximally Symmetric Composite Higgs Models
Csáki, Csaba; Ma, Teng; Shu, Jing
2017-09-01
Maximal symmetry is a novel tool for composite pseudo Goldstone boson Higgs models: it is a remnant of an enhanced global symmetry of the composite fermion sector involving a twisting with the Higgs field. Maximal symmetry has far-reaching consequences: it ensures that the Higgs potential is finite and fully calculable, and also minimizes the tuning. We present a detailed analysis of the maximally symmetric SO(5)/SO(4) model and comment on its observational consequences.
Maximal cuts in arbitrary dimension
Bosma, Jorrit; Sogaard, Mads; Zhang, Yang
2017-08-01
We develop a systematic procedure for computing maximal unitarity cuts of multiloop Feynman integrals in arbitrary dimension. Our approach is based on the Baikov representation, in which the structure of the cuts is particularly simple. We examine several planar and nonplanar integral topologies and demonstrate that the maximal cut inherits IBPs and dimension shift identities satisfied by the uncut integral. Furthermore, for the examples we calculated, we find that the maximal cut functions from different allowed regions form the Wronskian matrix of the differential equations on the maximal cut.
State-space models - from the EM algorithm to a gradient approach
DEFF Research Database (Denmark)
Olsson, Rasmus Kongsgaard; Petersen, Kaare Brandt; Lehn-Schiøler, Tue
2007-01-01
Slow convergence is observed in the EM algorithm for linear state-space models. We propose to circumvent the problem by applying any off-the-shelf quasi-Newton-type optimizer, which operates on the gradient of the log-likelihood function. Such an algorithm is a practical alternative due to the fact...... that the exact gradient of the log-likelihood function can be computed by recycling components of the expectation-maximization (EM) algorithm. We demonstrate the efficiency of the proposed method in three relevant instances of the linear state-space model. In high signal-to-noise ratios, where EM is particularly...
Dynamic robust duality in utility maximization
Øksendal, Bernt; Sulem, Agnès
2013-01-01
A celebrated financial application of convex duality theory gives an explicit relation between the following two quantities: (i) the optimal terminal wealth X^*(T) := X_{\varphi^*}(T) of the problem to maximize the expected U-utility of the terminal wealth X_{\varphi}(T) generated by admissible portfolios \varphi(t), 0 \le t \le T, in a market with the risky asset price process modeled as a semimartingale; (ii) the optimal scenario \frac{dQ^*}{dP} of the dual problem to minimize the ...
Influence Maximization in Social Networks with Genetic Algorithms
Bucur, Doina; Iacca, Giovanni; Squillero, Giovanni; Burelli, Paolo
We live in a world of social networks. Our everyday choices are often influenced by social interactions. Word of mouth, meme diffusion on the Internet, and viral marketing are all examples of how social networks can affect our behaviour. In many practical applications, it is of great interest to
Siting samplers to minimize expected time to detection.
Walter, Travis; Lorenzetti, David M; Sohn, Michael D
2012-12-01
We present a probabilistic approach to designing an indoor sampler network for detecting an accidental or intentional chemical or biological release, and demonstrate it for a real building. In an earlier article, Sohn and Lorenzetti developed a proof of concept algorithm that assumed samplers could return measurements only slowly (on the order of hours). This led to optimal "detect to treat" architectures that maximize the probability of detecting a release. This article develops a more general approach and applies it to samplers that can return measurements relatively quickly (in minutes). This leads to optimal "detect to warn" architectures that minimize the expected time to detection. Using a model of a real, large, commercial building, we demonstrate the approach by optimizing networks against uncertain release locations, source terms, and sampler characteristics. Finally, we speculate on rules of thumb for general sampler placement. © 2012 Society for Risk Analysis.
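The "detect to warn" objective can be sketched as a greedy placement over precomputed release scenarios. The scenario data and sampler names below are made up; the expected-time objective mirrors the one described in the abstract, with undetected scenarios penalized by a large horizon:

```python
def expected_detection_time(scenarios, chosen, horizon=1e9):
    """Average, over release scenarios, of the earliest time at which any
    chosen sampler sees the release; undetected scenarios cost `horizon`."""
    total = 0.0
    for times in scenarios:  # times: dict sampler id -> detection time
        hits = [times[s] for s in chosen if s in times]
        total += min(hits) if hits else horizon
    return total / len(scenarios)

def greedy_place(scenarios, candidates, k):
    """Greedy 'detect to warn' placement: repeatedly add the sampler
    that most reduces the expected time to detection."""
    chosen = []
    for _ in range(k):
        best = min((c for c in candidates if c not in chosen),
                   key=lambda c: expected_detection_time(scenarios, chosen + [c]))
        chosen.append(best)
    return chosen

# toy scenarios: each maps sampler id -> minutes until that sampler detects
scenarios = [{"lobby": 5, "atrium": 12},
             {"atrium": 3, "lab": 7},
             {"lobby": 9, "lab": 4}]
print(greedy_place(scenarios, ["lobby", "atrium", "lab"], 2))  # → ['lab', 'lobby']
```

The large horizon makes coverage dominate early picks (a sampler that misses a scenario is heavily penalized), after which the greedy step trades off detection speed, exactly the shift from "detect to treat" to "detect to warn" described above.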
DEFF Research Database (Denmark)
Mahnke, Martina; Uprichard, Emma
2014-01-01
changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...... would like to emphasize another side to the algorithmic everyday life. We argue that algorithms can instigate and facilitate imagination, creativity, and frivolity, while saying something that is simultaneously old and new, always almost repeating what was before but never quite returning. We show...... this by threading together stimulating quotes and screenshots from Google’s autocomplete algorithms. In doing so, we invite the reader to re-explore Google’s autocomplete algorithms in a creative, playful, and reflexive way, thereby rendering more visible some of the excitement and frivolity that comes from being...
Directory of Open Access Journals (Sweden)
A. Garmroodi Asil
2017-09-01
To further reduce the sulfur dioxide emission of the entire refining process, two scenarios, acid gas preheat and air preheat, are investigated when either of them is used simultaneously with the third enrichment scheme. The maximum overall sulfur recovery efficiency and the highest combustion chamber temperature are slightly higher for acid gas preheat, but air preheat is more favorable because it is more benign. To the best of our knowledge, optimization of the entire GTU + enrichment section and SRU processes has not been addressed previously.
Kreitler, Jason R.; Stoms, David M.; Davis, Frank W.
2014-01-01
Quantitative methods of spatial conservation prioritization have traditionally been applied to issues in conservation biology and reserve design, though their use in other types of natural resource management is growing. The utility maximization problem is one form of a covering problem where multiple criteria can represent the expected social benefits of conservation action. This approach allows flexibility with a problem formulation that is more general than typical reserve design problems, though the solution methods are very similar. However, few studies have addressed optimization in utility maximization problems for conservation planning, and the effect of solution procedure is largely unquantified. Therefore, this study mapped five criteria describing elements of multifunctional agriculture to determine a hypothetical conservation resource allocation plan for agricultural land conservation in the Central Valley of CA, USA. We compared solution procedures within the utility maximization framework to determine the difference between an open source integer programming approach and a greedy heuristic, and find gains from optimization of up to 12%. We also model land availability for conservation action as a stochastic process and determine the decline in total utility compared to the globally optimal set using both solution algorithms. Our results are comparable to other studies illustrating the benefits of optimization for different conservation planning problems, and highlight the importance of maximizing the effectiveness of limited funding for conservation and natural resource management.
Profit maximization with customer satisfaction control for electric vehicle charging in smart grids
Directory of Open Access Journals (Sweden)
Edwin Collado
2017-05-01
Full Text Available As the market of electric vehicles is gaining popularity, large-scale commercialized or privately-operated charging stations are expected to play a key role as a technology enabler. In this paper, we study the problem of charging electric vehicles at stations with limited charging machines and power resources. The purpose of this study is to develop a novel profit maximization framework for station operation in both offline and online charging scenarios, under certain customer satisfaction constraints. The main goal is to maximize the profit obtained by the station owner and provide a satisfactory charging service to the customers. The framework includes not only the vehicle scheduling and charging power control, but also the management of user satisfaction factors, which are defined as the percentages of finished charging targets. The profit maximization problem is proved to be NP-complete (NP refers to "nondeterministic polynomial time") in both scenarios, for which two-stage charging strategies are proposed to obtain efficient suboptimal solutions. Competitive analysis is also provided to analyze the performance of the proposed online two-stage charging algorithm against the offline counterpart under non-congested and congested charging scenarios. Finally, the simulation results show that the proposed two-stage charging strategies achieve performance close to that with exhaustive search. Also, the proposed algorithms provide remarkable performance gains compared to the other conventional charging strategies with respect to not only the unified profit, but also other practical interests, such as the computational time, the user satisfaction factor, the power consumption, and the competitive ratio.
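A single-period toy version of the profit/satisfaction trade-off can be sketched as follows. This greedy allocation is not the paper's two-stage algorithm, and the vehicle data, prices, and minimum-satisfaction constraint are hypothetical:

```python
# Hypothetical single-period data: (name, requested_kwh, price_per_kwh).
vehicles = [
    ("ev1", 30.0, 0.40),
    ("ev2", 50.0, 0.25),
    ("ev3", 20.0, 0.35),
]
BUDGET_KWH = 60.0        # limited station energy for the period
MIN_SATISFACTION = 0.2   # guarantee every user at least 20% of their target

def allocate(vs, budget):
    # Stage 1: reserve the minimum-satisfaction energy for every vehicle
    # (assumes the budget covers these guarantees).
    alloc = {name: MIN_SATISFACTION * req for name, req, _ in vs}
    budget -= sum(alloc.values())
    # Stage 2: spend the rest greedily on the highest-paying demand first.
    for name, req, price in sorted(vs, key=lambda v: -v[2]):
        extra = min(req - alloc[name], budget)
        alloc[name] += extra
        budget -= extra
    return alloc

alloc = allocate(vehicles, BUDGET_KWH)
revenue = sum(alloc[n] * p for n, r, p in vehicles)
satisfaction = {n: alloc[n] / r for n, r, p in vehicles}  # satisfaction factors
```

Here ev2 pays the lowest rate, so after its 20% guarantee it receives no extra energy; the station's revenue-first ordering is exactly the tension the satisfaction constraints are meant to control.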
High Throughput and Acceptance Ratio Multipath Routing Algorithm in Cognitive Wireless Mesh Network
Directory of Open Access Journals (Sweden)
Zhufang Kuang
2017-11-01
Full Text Available Link failures caused by secondary users vacating licensed channels when primary users reoccupy them are an important issue in cognitive wireless mesh networks (CWMNs). A multipath routing and spectrum allocation algorithm based on channel interference and reusability with Quality of Service (QoS) constraints in CWMNs (MRIR) was proposed. The objective of the MRIR is to maximize the throughput and the acceptance ratio of the wireless service. First, a resource-conserving primary path with QoS constraints was constructed; then, a resource-conserving backup path based on channel interference and reusability with QoS constraints was constructed. The MRIR algorithm comprises a primary-path routing and spectrum allocation algorithm and a backup-path routing and spectrum allocation algorithm. The simulation results showed that the MRIR algorithm achieves the expected goals and attains a higher throughput and acceptance ratio.
A Rician Mixture Model Classification Algorithm for Magnetic Resonance Images
Roy, Snehashis; Carass, Aaron; Bazin, Pierre-Louis; Prince, Jerry L.
2009-01-01
Tissue classification algorithms developed for magnetic resonance images commonly assume a Gaussian model on the statistics of noise in the image. While this is approximately true for voxels having large intensities, it is less true as the underlying intensity becomes smaller. In this paper, the Gaussian model is replaced with a Rician model, which is a better approximation to the observed signal. A new classification algorithm based on a finite mixture model of Rician signals is presented wherein the expectation maximization algorithm is used to find the joint maximum likelihood estimates of the unknown mixture parameters. Improved accuracy of tissue classification is demonstrated on several sample data sets. It is also shown that classification repeatability for the same subject under different MR acquisitions is improved using the new method. PMID:20126426
DOA estimation and mutual coupling calibration with the SAGE algorithm
Directory of Open Access Journals (Sweden)
Xiong Kunlai
2014-12-01
Full Text Available In this paper, a novel algorithm is presented for direction of arrival (DOA estimation and array self-calibration in the presence of unknown mutual coupling. In order to highlight the relationship between the array output and mutual coupling coefficients, we present a novel model of the array output with the unknown mutual coupling coefficients. Based on this model, we use the space alternating generalized expectation-maximization (SAGE algorithm to jointly estimate the DOA parameters and the mutual coupling coefficients. Unlike many existing counterparts, our method requires neither calibration sources nor initial calibration information. At the same time, our proposed method inherits the characteristics of good convergence and high estimation precision of the SAGE algorithm. By numerical experiments we demonstrate that our proposed method outperforms the existing method for DOA estimation and mutual coupling calibration.
Algorithms Introduction to Algorithms
Indian Academy of Sciences (India)
R K Shyamasundar. Series Article. Resonance – Journal of Science Education, Volume 1, Issue 1, January 1996, pp. 20–27. Permanent link: http://www.ias.ac.in/article/fulltext/reso/001/01/0020-0027
Aher, Sunita B.
2014-01-01
Recommendation systems have been widely used in internet activities, with the aim of presenting important and useful information to the user with little effort. A Course Recommendation System recommends to students the best combination of courses in an engineering education system; e.g., if a student is interested in a course like system programming, then he would likely want to take the course entitled compiler construction. An algorithm combining two data mining techniques, Expectation Maximization clustering and the Apriori association rule algorithm, has been developed. The results of this algorithm are compared with the Apriori association rule algorithm alone, an existing algorithm in the open source data mining tool Weka.
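The Expectation Maximization clustering half of such a hybrid can be sketched with a minimal two-component Gaussian mixture in one dimension. The synthetic "marks" data and the fixed component count are assumptions for illustration:

```python
import math
import random

random.seed(0)
# Synthetic 1-D marks from two hypothetical student groups.
data = ([random.gauss(40, 5) for _ in range(100)] +
        [random.gauss(75, 5) for _ in range(100)])

def em_two_gaussians(xs, iters=50):
    mu = [min(xs), max(xs)]      # crude initialisation at the data extremes
    sigma = [10.0, 10.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        resp = []
        for x in xs:
            w = [pi[k] / (sigma[k] * math.sqrt(2 * math.pi))
                 * math.exp(-0.5 * ((x - mu[k]) / sigma[k]) ** 2) for k in (0, 1)]
            s = sum(w)
            resp.append([wk / s for wk in w])
        # M-step: re-estimate mixing weights, means, standard deviations.
        for k in (0, 1):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            sigma[k] = math.sqrt(
                sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, xs)) / nk)
    return mu, sigma, pi

mu, sigma, pi = em_two_gaussians(data)
```

In the recommendation setting, the cluster responsibilities would then feed the association-rule stage; that second stage is omitted here.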
Learning to maximize reward rate: a model based on semi-Markov decision processes.
Khodadadi, Arash; Fakhari, Pegah; Busemeyer, Jerome R
2014-01-01
When animals have to make a number of decisions during a limited time interval, they face a fundamental problem: how much time should they spend on each decision in order to achieve the maximum possible total outcome? Deliberating more on one decision usually leads to more outcome, but less time will remain for other decisions. In the framework of sequential sampling models, the question is how animals learn to set their decision threshold such that the total expected outcome achieved during a limited time is maximized. The aim of this paper is to provide a theoretical framework for answering this question. To this end, we consider an experimental design in which each trial can come from one of several possible "conditions." A condition specifies the difficulty of the trial, the reward, the penalty and so on. We show that to maximize the expected reward during a limited time, the subject should set a separate value of decision threshold for each condition. We propose a model of learning the optimal value of decision thresholds based on the theory of semi-Markov decision processes (SMDP). In our model, the experimental environment is modeled as an SMDP with each "condition" being a "state" and the values of decision thresholds being the "actions" taken in those states. The problem of finding the optimal decision thresholds is then cast as the stochastic optimal control problem of taking actions in each state in the corresponding SMDP such that the average reward rate is maximized. Our model utilizes a biologically plausible learning algorithm to solve this problem. The simulation results show that at the beginning of learning the model chooses high values of decision threshold, which lead to sub-optimal performance. With experience, however, the model learns to lower the value of decision thresholds until finally it finds the optimal values.
Performance Modeling of Maximal Sharing
M.J. Steindorfer (Michael); J.J. Vinju (Jurgen)
2016-01-01
textabstractIt is noticeably hard to predict the effect of optimization strategies in Java without implementing them. "Maximal sharing" (a.k.a. "hash-consing") is one of these strategies that may have great benefit in terms of time and space, or may have detrimental overhead. It all depends on the
Greedy SINR Maximization in Collaborative Multibase Wireless Systems
Directory of Open Access Journals (Sweden)
Popescu Otilia
2004-01-01
Full Text Available We present a codeword adaptation algorithm for collaborative multibase wireless systems. The system is modeled with multiple inputs and multiple outputs (MIMO in which information is transmitted using multicode CDMA, and codewords are adapted based on greedy maximization of the signal-to-interference-plus-noise ratio. The procedure monotonically increases the sum capacity and, when repeated iteratively for all codewords in the system, converges to a fixed point. Fixed-point properties and a connection with sum capacity maximization, along with a discussion of simulations that corroborate the basic analytic results, are included in the paper.
Gap processing for adaptive maximal Poisson-disk sampling
Yan, Dongming
2013-09-01
In this article, we study the generation of maximal Poisson-disk sets with varying radii. First, we present a geometric analysis of gaps in such disk sets. This analysis is the basis for maximal and adaptive sampling in Euclidean space and on manifolds. Second, we propose efficient algorithms and data structures to detect gaps and update gaps when disks are inserted, deleted, moved, or when their radii are changed. We build on the concepts of regular triangulations and the power diagram. Third, we show how our analysis contributes to the state-of-the-art in surface remeshing. © 2013 ACM.
Enumerating all maximal frequent subtrees in collections of phylogenetic trees
2014-01-01
Background A common problem in phylogenetic analysis is to identify frequent patterns in a collection of phylogenetic trees. The goal is, roughly, to find a subset of the species (taxa) on which all or some significant subset of the trees agree. One popular method to do so is through maximum agreement subtrees (MASTs). MASTs are also used, among other things, as a metric for comparing phylogenetic trees, computing congruence indices and to identify horizontal gene transfer events. Results We give algorithms and experimental results for two approaches to identify common patterns in a collection of phylogenetic trees, one based on agreement subtrees, called maximal agreement subtrees, the other on frequent subtrees, called maximal frequent subtrees. These approaches can return subtrees on larger sets of taxa than MASTs, and can reveal new common phylogenetic relationships not present in either MASTs or the majority rule tree (a popular consensus method). Our current implementation is available on the web at https://code.google.com/p/mfst-miner/. Conclusions Our computational results confirm that maximal agreement subtrees and all maximal frequent subtrees can reveal a more complete phylogenetic picture of the common patterns in collections of phylogenetic trees than maximum agreement subtrees; they are also often more resolved than the majority rule tree. Further, our experiments show that enumerating maximal frequent subtrees is considerably more practical than enumerating ordinary (not necessarily maximal) frequent subtrees. PMID:25061474
Breaking the Ceiling of Human Maximal Lifespan.
Ben-Haim, Moshe Shay; Kanfi, Yariv; Mitchel, Sarah J; Maoz, Noam; Vaughan, Kelli; Amariglio, Ninette; Lerrer, Batia; de Cabo, Rafael; Rechavi, Gideon; Cohen, Haim Y
2017-11-07
While average human life expectancy has increased dramatically in the last century, the maximum lifespan has only modestly increased. These observations prompted the notion that human lifespan might have reached its maximal natural limit of ~115 years. To evaluate this hypothesis, we conducted a systematic analysis of all-cause human mortality throughout the 20th century. Our analyses revealed that, once cause of death is accounted for, there is a proportional increase in both median age of death and maximum lifespan. To examine whether pathway-targeted aging interventions affected both median and maximum lifespan, we analyzed hundreds of interventions performed in multiple organisms (yeast, worms, flies, and rodents). All three criteria (median, maximum, and last-survivor lifespan) were significantly extended, and to a similar extent. Altogether, these findings suggest that targeting the biological/genetic causes of aging can allow breaking the currently observed ceiling of human maximal lifespan. © The Author 2017. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Strategy to maximize maintenance operation
Espinoza, Michael
2005-01-01
This project presents a strategic analysis to maximize maintenance operations in Alcan Kitimat Works in British Columbia. The project studies the role of maintenance in improving its overall maintenance performance. It provides strategic alternatives and specific recommendations addressing Kitimat Works key strategic issues and problems. A comprehensive industry and competitive analysis identifies the industry structure and its competitive forces. In the mature aluminium industry, the bargain...
Non-negative matrix factorization by maximizing correntropy for cancer clustering
Wang, Jim Jing-Yan
2013-03-24
Background: Non-negative matrix factorization (NMF) has been shown to be a powerful tool for clustering gene expression data, which are widely used to classify cancers. NMF aims to find two non-negative matrices whose product closely approximates the original matrix. Traditional NMF methods minimize either the l2 norm or the Kullback-Leibler distance between the product of the two matrices and the original matrix. Correntropy was recently shown to be an effective similarity measurement due to its stability to outliers or noise.Results: We propose a maximum correntropy criterion (MCC)-based NMF method (NMF-MCC) for gene expression data-based cancer clustering. Instead of minimizing the l2 norm or the Kullback-Leibler distance, NMF-MCC maximizes the correntropy between the product of the two matrices and the original matrix. The optimization problem can be solved by an expectation conditional maximization algorithm.Conclusions: Extensive experiments on six cancer benchmark sets demonstrate that the proposed method is significantly more accurate than the state-of-the-art methods in cancer clustering. 2013 Wang et al.; licensee BioMed Central Ltd.
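As a baseline for the NMF piece (the classical l2 objective with multiplicative updates, not the correntropy variant the paper proposes), the factorization can be sketched in pure Python; the toy "expression matrix" and rank are invented:

```python
import random

random.seed(1)

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(c) for c in zip(*A)]

def nmf(X, r, iters=500, eps=1e-9):
    """Lee-Seung multiplicative updates for min ||X - WH||^2, W,H >= 0."""
    n, m = len(X), len(X[0])
    W = [[random.random() for _ in range(r)] for _ in range(n)]
    H = [[random.random() for _ in range(m)] for _ in range(r)]
    for _ in range(iters):
        Wt = transpose(W)
        num, den = matmul(Wt, X), matmul(Wt, matmul(W, H))
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)]
             for i in range(r)]
        Ht = transpose(H)
        num, den = matmul(X, Ht), matmul(matmul(W, H), Ht)
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(r)]
             for i in range(n)]
    return W, H

# Toy matrix: 4 samples x 3 genes, exact rank-2 non-negative structure.
X = [[1, 2, 0], [2, 4, 0], [0, 0, 3], [0, 0, 6]]
W, H = nmf(X, 2)
err = sum((x - y) ** 2 for rx, ry in zip(X, matmul(W, H)) for x, y in zip(rx, ry))
```

The NMF-MCC method replaces this squared-error objective with a correntropy criterion and solves it by expectation conditional maximization, which reweights entries to downweight outliers.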
Statistical mechanics of influence maximization with thermal noise
Lynn, Christopher W.; Lee, Daniel D.
2017-03-01
The problem of optimally distributing a budget of influence among individuals in a social network, known as influence maximization, has typically been studied in the context of contagion models and deterministic processes, which fail to capture stochastic interactions inherent in real-world settings. Here, we show that by introducing thermal noise into influence models, the dynamics exactly resemble spins in a heterogeneous Ising system. In this way, influence maximization in the presence of thermal noise has a natural physical interpretation as maximizing the magnetization of an Ising system given a budget of external magnetic field. Using this statistical mechanical formulation, we demonstrate analytically that for small external-field budgets, the optimal influence solutions exhibit a highly non-trivial temperature dependence, focusing on high-degree hub nodes at high temperatures and on easily influenced peripheral nodes at low temperatures. For the general problem, we present a projected gradient ascent algorithm that uses the magnetic susceptibility to calculate locally optimal external-field distributions. We apply our algorithm to synthetic and real-world networks, demonstrating that our analytic results generalize qualitatively. Our work establishes a fruitful connection with statistical mechanics and demonstrates that influence maximization depends crucially on the temperature of the system, a fact that has not been appreciated by existing research.
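The flavor of the result can be illustrated with a toy mean-field computation on a star network. The couplings, budget, and temperature are invented, and this is a sketch of the underlying physics, not the authors' susceptibility-based gradient-ascent algorithm:

```python
import math

# Star network: node 0 is a hub coupled (J = 1) to four peripheral nodes.
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}

def magnetization(h, beta, iters=500):
    """Mean-field fixed point of m_i = tanh(beta * (sum_j m_j + h_i))."""
    m = [0.0] * len(adj)
    for _ in range(iters):
        m = [math.tanh(beta * (sum(m[j] for j in adj[i]) + h[i]))
             for i in range(len(adj))]
    return m

budget = 1.0
h_hub = [budget, 0.0, 0.0, 0.0, 0.0]     # entire field budget on the hub
h_periph = [0.0] + [budget / 4] * 4      # budget spread over peripheral nodes

# At high temperature (small beta) concentrating the field on the hub
# yields more total magnetization than spreading it over the periphery.
hot_hub = sum(magnetization(h_hub, beta=0.3))
hot_periph = sum(magnetization(h_periph, beta=0.3))
```

At low temperature the picture reverses in the paper's analysis: spins align strongly on their own, so field spent on the easily saturated hub is wasted relative to nudging weakly coupled peripheral nodes.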
A new iterative algorithm to reconstruct the refractive index.
Liu, Y J; Zhu, P P; Chen, B; Wang, J Y; Yuan, Q X; Huang, W X; Shu, H; Li, E R; Liu, X S; Zhang, K; Ming, H; Wu, Z Y
2007-06-21
The latest developments in x-ray imaging are associated with techniques based on phase contrast. However, the image reconstruction procedures demand significant improvements of the traditional methods, and/or new algorithms have to be introduced to take advantage of the high contrast and sensitivity of the new experimental techniques. In this letter, an improved iterative reconstruction algorithm based on the maximum likelihood expectation maximization technique is presented and discussed in order to reconstruct the distribution of the refractive index from data collected by an analyzer-based imaging setup. The technique considered probes the partial derivative of the refractive index with respect to an axis lying in the meridional plane and perpendicular to the propagation direction. Computer simulations confirm the reliability of the proposed algorithm. In addition, the comparison between an analytical reconstruction algorithm and the iterative method is also discussed, together with the convergence characteristics of the latter algorithm. Finally, we show how the proposed algorithm may be applied to reconstruct the distribution of the refractive index of an epoxy cylinder containing small air bubbles of about 300 μm in diameter.
Algorithms for Reinforcement Learning
Szepesvari, Csaba
2010-01-01
Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner's predictions. Further, the predictions may have long term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms'
Maximizing entropy over Markov processes
DEFF Research Database (Denmark)
Biondi, Fabrizio; Legay, Axel; Nielsen, Bo Friis
2014-01-01
computation reduces to finding a model of a specification with highest entropy. Entropy maximization for probabilistic process specifications has not been studied before, even though it is well known in Bayesian inference for discrete distributions. We give a characterization of global entropy of a process...... to use Interval Markov Chains to model abstractions of deterministic systems with confidential data, and use the above results to compute their channel capacity. These results are a foundation for ongoing work on computing channel capacity for abstractions of programs derived from code. © 2014 Elsevier...
Fast Deterministic Distributed Maximal Independent Set Computation on Growth-Bounded Graphs
Kuhn, Fabian; Moscibroda, Thomas; Nieberg, T.; Wattenhofer, Roger; Fraigniaud, Pierre
2005-01-01
The distributed complexity of computing a maximal independent set in a graph is of both practical and theoretical importance. While there exists an elegant O(log n) time randomized algorithm for general graphs, no deterministic polylogarithmic algorithm is known. In this paper, we study
Kadyshevskij, V G; Rodionov, R N; Sorin, A S
2007-01-01
We investigate the possibility to construct a generalization of the Standard Model, which we call the Maximal Mass Model because it contains a limiting mass $M$ for its fundamental constituents. The parameter $M$ is considered as a new universal physical constant of Nature and therefore is called the fundamental mass. It is introduced in a purely geometrical way, like the velocity of light as a maximal velocity in the special relativity. If one chooses the Euclidean formulation of quantum field theory, the adequate realization of the limiting mass hypothesis is reduced to the choice of the de Sitter geometry as the geometry of the 4-momentum space. All fields, defined in de Sitter p-space in configurational space obey five dimensional Klein-Gordon type equation with fundamental mass $M$ as a mass parameter. The role of dynamical field variables is played by the Cauchy initial conditions given at $x_5 = 0$, guarantying the locality and gauge invariance principles. The corresponding to the geometrical requireme...
The ML-EM Algorithm is Not Optimal for Poisson Noise.
Zeng, Gengsheng L
2015-01-01
The ML-EM (maximum likelihood expectation maximization) algorithm is the most popular image reconstruction method when the measurement noise is Poisson distributed. This short paper considers the problem that for a given noisy projection data set, whether the ML-EM algorithm is able to provide an approximate solution that is close to the true solution. It is well-known that the ML-EM algorithm at early iterations converges towards the true solution and then in later iterations diverges away from the true solution. Therefore a potential good approximate solution can only be obtained by early termination. This short paper argues that the ML-EM algorithm is not optimal in providing such an approximate solution. In order to show that the ML-EM algorithm is not optimal, it is only necessary to provide a different algorithm that performs better. An alternative algorithm is suggested in this paper and this alternative algorithm is able to outperform the ML-EM algorithm.
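The ML-EM iteration itself is compact: x ← x · A^T(y / Ax) / A^T 1. A minimal sketch for a toy 2x2 noiseless system (hypothetical matrix and data) is:

```python
# Toy 2x2 system: y = A x_true with x_true = [4, 24]; values are hypothetical.
A = [[0.7, 0.3],
     [0.2, 0.8]]
y = [10.0, 20.0]

def mlem(A, y, iters=500):
    n = len(A[0])
    x = [1.0] * n                                            # positive start
    col_sum = [sum(row[j] for row in A) for j in range(n)]   # A^T 1
    for _ in range(iters):
        proj = [sum(a * xi for a, xi in zip(row, x)) for row in A]  # A x
        ratio = [yi / pi for yi, pi in zip(y, proj)]                # y / Ax
        back = [sum(A[i][j] * ratio[i] for i in range(len(A)))      # A^T (y/Ax)
                for j in range(n)]
        x = [xi * bj / cj for xi, bj, cj in zip(x, back, col_sum)]
    return x

x = mlem(A, y)
```

With noiseless, consistent data the iterates converge to the exact non-negative solution; the paper's point concerns noisy data, where the early-iteration behavior and stopping rule dominate reconstruction quality.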
Cui, Jingyu; Pratx, Guillem; Meng, Bowen; Levin, Craig S
2013-05-01
The processing speed for positron emission tomography (PET) image reconstruction has been greatly improved in recent years by simply dividing the workload to multiple processors of a graphics processing unit (GPU). However, if this strategy is generalized to a multi-GPU cluster, the processing speed does not improve linearly with the number of GPUs. This is because large data transfer is required between the GPUs after each iteration, effectively reducing the parallelism. This paper proposes a novel approach to reformulate the maximum likelihood expectation maximization (MLEM) algorithm so that it can scale up to many GPU nodes with less frequent inter-node communication. While being mathematically different, the new algorithm maximizes the same convex likelihood function as MLEM, thus converges to the same solution. Experiments on a multi-GPU cluster demonstrate the effectiveness of the proposed approach.
Volume versus value maximization illustrated for Douglas-fir with thinning
Kurt H. Riitters; J. Douglas Brodie; Chiang Kao
1982-01-01
Economic and physical criteria for selecting even-aged rotation lengths are reviewed with examples of their optimizations. To demonstrate the trade-off between physical volume, economic return, and stand diameter, examples of thinning regimes for maximizing volume, forest rent, and soil expectation are compared with an example of maximizing volume without thinning. The...
Antonides, G.; Maital, S.
2002-01-01
Compelling evidence exists that behavior is inconsistent with the assumptions of expected-utility maximization. However, if learning occurs, then maximization may take place asymptotically (albeit slowly). But a series of experiments by Herrnstein and his associates show that under very general
Xie, Zhenwei; Zhu, Qi
2017-01-01
In this study, an optimal power allocation algorithm that maximizes the sum-throughput in energy harvesting cognitive radio networks is proposed. Under the causality constraints on the energy harvested (from solar radiation, electromagnetic waves, and so on) by the two secondary users (SUs), and the interference constraint at the primary user (PU), the sum-throughput maximization problem is formulated. The algorithm decomposes the interference threshold constraint into power upper bounds for the two SUs. The power allocation problems of the two SUs can then be solved by a directional water-filling algorithm (DWA) subject to those power upper bounds. The paper gives the algorithm steps and simulation results, which verify that the proposed algorithm has clear advantages over the other two algorithms.
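Directional water-filling builds on classical water-filling over parallel channels. A minimal single-slot sketch, without the energy-causality dimension of the paper (the channel gains, power budget, and bisection search are illustrative assumptions), is:

```python
import math

def water_filling(gains, P, tol=1e-9):
    """p_i = max(mu - 1/g_i, 0), with the water level mu found by bisection."""
    lo, hi = 0.0, P + max(1.0 / g for g in gains)
    while hi - lo > tol:
        mu = (lo + hi) / 2
        used = sum(max(mu - 1.0 / g, 0.0) for g in gains)
        lo, hi = (mu, hi) if used < P else (lo, mu)
    return [max(hi - 1.0 / g, 0.0) for g in gains]

gains = [2.0, 1.0, 0.25]           # illustrative channel gains
p = water_filling(gains, P=2.0)    # -> roughly [1.25, 0.75, 0.0]
rate = sum(math.log2(1 + g * pi) for g, pi in zip(gains, p))
```

The directional variant lets unused "water" flow only forward in time, which is how the harvested-energy causality constraint enters.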
Planning the FUSE Mission Using the SOVA Algorithm
Lanzi, James; Heatwole, Scott; Ward, Philip R.; Civeit, Thomas; Calvani, Humberto; Kruk, Jeffrey W.; Suchkov, Anatoly
2011-01-01
Three documents discuss the Sustainable Objective Valuation and Attainability (SOVA) algorithm and software as used to plan tasks (principally, scientific observations and associated maneuvers) for the Far Ultraviolet Spectroscopic Explorer (FUSE) satellite. SOVA is a means of managing risk in a complex system, based on a concept of computing the expected return value of a candidate ordered set of tasks as a product of pre-assigned task values and assessments of attainability made against qualitatively defined strategic objectives. For the FUSE mission, SOVA autonomously assembles a week-long schedule of target observations and associated maneuvers so as to maximize the expected scientific return value while keeping the satellite stable, managing the angular momentum of spacecraft attitude- control reaction wheels, and striving for other strategic objectives. A six-degree-of-freedom model of the spacecraft is used in simulating the tasks, and the attainability of a task is calculated at each step by use of strategic objectives as defined by use of fuzzy inference systems. SOVA utilizes a variant of a graph-search algorithm known as the A* search algorithm to assemble the tasks into a week-long target schedule, using the expected scientific return value to guide the search.
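The A* core that SOVA adapts can be sketched generically; the graph and heuristic here are toy values, not FUSE tasks:

```python
import heapq

def a_star(graph, h, start, goal):
    """graph: {node: [(neighbor, cost), ...]}; h: admissible lower bound to goal."""
    frontier = [(h[start], 0.0, start, [start])]   # (f = g + h, g, node, path)
    best = {}                                      # cheapest g seen per node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        if best.get(node, float("inf")) <= g:
            continue                               # already expanded cheaper
        best[node] = g
        for nxt, cost in graph.get(node, []):
            heapq.heappush(frontier,
                           (g + cost + h[nxt], g + cost, nxt, path + [nxt]))
    return None

graph = {"A": [("B", 1), ("C", 4)], "B": [("C", 2), ("D", 5)], "C": [("D", 1)]}
h = {"A": 3, "B": 2, "C": 1, "D": 0}               # consistent heuristic
cost, path = a_star(graph, h, "A", "D")            # -> (4.0, ['A', 'B', 'C', 'D'])
```

In SOVA the "heuristic" role is played by fuzzy attainability assessments against strategic objectives, guiding the search over week-long task orderings rather than graph nodes.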
Utility-maximizing Server Selection
Truong, K. P.; Griffin, D.; Maini, E.; Rio, M.
2016-01-01
This paper presents a new method for selecting among replicated servers distributed over a wide area, allowing application and network providers to trade off cost against quality of service for their users. First, we create a novel utility framework that factors in quality-of-service metrics. Then we design a polynomial optimization algorithm to allocate user service requests to servers based on utility while satisfying a transit cost constraint. We then describe an efficient, low-overhead...
Differential Evolution for Lifetime Maximization of Heterogeneous Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Yulong Xu
2013-01-01
Maximizing the lifetime of wireless sensor networks (WSNs) is a significant and actively studied issue, yet differential evolution (DE) has not previously been applied to it. This paper proposes a DE-based approach that maximizes the lifetime of a WSN by finding the largest number of disjoint sets of sensors, with every set able to completely cover the targets. Unlike other methods in the literature, we first introduce a common method to generate the test data set and then propose an algorithm that uses differential evolution to solve disjoint set covers (DEDSC) problems. The proposed algorithm includes a recombining operation, performed after initialization, which guarantees that the sensors of at least one critical target are divided into different disjoint sets. Moreover, the fitness computation in DEDSC accounts for both the number of complete cover subsets and the coverage percentage of incomplete cover subsets. Applications for sensing a number of target points, named point-coverage, were used to evaluate the effectiveness of the algorithm. Results show that the proposed DEDSC algorithm is promising and simple; it matches or outperforms other existing approaches in both optimization speed and solution quality.
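For readers unfamiliar with DE, a bare-bones DE/rand/1/bin minimizer (shown here on a sphere function, not on the paper's set-cover fitness or its recombining operation) looks like:

```python
import numpy as np

def differential_evolution(f, bounds, pop=20, gens=200, F=0.8, CR=0.9, seed=0):
    """Minimal DE/rand/1/bin minimizer. The paper's DEDSC adds a
    recombining step and a disjoint-set-cover fitness on top of this scheme."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    x = rng.uniform(lo, hi, size=(pop, len(lo)))
    fx = np.apply_along_axis(f, 1, x)
    for _ in range(gens):
        for i in range(pop):
            a, b, c = rng.choice([j for j in range(pop) if j != i], 3, replace=False)
            mutant = np.clip(x[a] + F * (x[b] - x[c]), lo, hi)   # rand/1 mutation
            cross = rng.random(len(lo)) < CR                      # binomial crossover
            cross[rng.integers(len(lo))] = True    # keep at least one mutant gene
            trial = np.where(cross, mutant, x[i])
            ft = f(trial)
            if ft <= fx[i]:                        # greedy one-to-one selection
                x[i], fx[i] = trial, ft
    best = fx.argmin()
    return x[best], fx[best]

xbest, fbest = differential_evolution(lambda v: float(np.sum(v ** 2)),
                                      [(-5, 5)] * 3)
```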
DEFF Research Database (Denmark)
Medford, Anthony
2017-01-01
Background: Whereas the rise in human life expectancy has been extensively studied, the evolution of maximum life expectancies, i.e., the rise in best-practice life expectancy in a group of populations, has not been examined to the same extent. The linear rise in best-practice life expectancy has...... been reported previously by various authors. Though remarkable, this is simply an empirical observation. Objective: We examine best-practice life expectancy more formally by using extreme value theory. Methods: Extreme value distributions are fit to the time series (1900 to 2012) of maximum life...... expectancies at birth and age 65, for both sexes, using data from the Human Mortality Database and the United Nations. Conclusions: Generalized extreme value distributions offer a theoretically justified way to model best-practice life expectancies. Using this framework one can straightforwardly obtain......
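The fitting step described in Methods can be sketched with synthetic data, since the Human Mortality Database series is not reproduced here; the drift and noise levels below are made-up stand-ins:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
# stand-in for the series of annual best-practice life expectancies:
# yearly maxima over 50 hypothetical populations, drifting upward linearly
years = np.arange(1900, 2013)
maxima = np.array([rng.normal(40 + 0.2 * (y - 1900), 1.0, 50).max() for y in years])

# detrend, then fit a generalized extreme value distribution to the residuals
trend = np.polyfit(years, maxima, 1)
resid = maxima - np.polyval(trend, years)
shape, loc, scale = genextreme.fit(resid)
```

The fitted slope recovers the linear rise, while the GEV residual model quantifies the spread around the best-practice frontier.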
An Affinity Propagation-Based DNA Motif Discovery Algorithm
Directory of Open Access Journals (Sweden)
Chunxiao Sun
2015-01-01
The planted (l, d) motif search (PMS) is one of the fundamental problems in bioinformatics, which plays an important role in locating transcription factor binding sites (TFBSs) in DNA sequences. Nowadays, identifying weak motifs and reducing the effect of local optima are still important but challenging tasks for motif discovery. To solve these tasks, we propose a new algorithm, APMotif, which first applies Affinity Propagation (AP) clustering in DNA sequences to produce informative and good candidate motifs and then employs Expectation Maximization (EM) refinement to obtain the optimal motifs from the candidate motifs. Experimental results both on simulated data sets and real biological data sets show that APMotif usually outperforms four other widely used algorithms in terms of high prediction accuracy.
An Affinity Propagation-Based DNA Motif Discovery Algorithm.
Sun, Chunxiao; Huo, Hongwei; Yu, Qiang; Guo, Haitao; Sun, Zhigang
2015-01-01
The planted (l, d) motif search (PMS) is one of the fundamental problems in bioinformatics, which plays an important role in locating transcription factor binding sites (TFBSs) in DNA sequences. Nowadays, identifying weak motifs and reducing the effect of local optimum are still important but challenging tasks for motif discovery. To solve the tasks, we propose a new algorithm, APMotif, which first applies the Affinity Propagation (AP) clustering in DNA sequences to produce informative and good candidate motifs and then employs Expectation Maximization (EM) refinement to obtain the optimal motifs from the candidate motifs. Experimental results both on simulated data sets and real biological data sets show that APMotif usually outperforms four other widely used algorithms in terms of high prediction accuracy.
Acceleration of MAP-EM algorithm via over-relaxation.
Tsai, Yu-Jung; Huang, Hsuan-Ming; Fang, Yu-Hua Dean; Chang, Shi-Ing; Hsiao, Ing-Tsung
2015-03-01
To improve the convergence rate of the effective maximum a posteriori expectation-maximization (MAP-EM) algorithm in tomographic reconstructions, this study proposes a modified MAP-EM which uses an over-relaxation factor to accelerate image reconstruction. The proposed method, called MAP-AEM, is evaluated and compared with the results for MAP-EM and for an ordered-subset algorithm, in terms of the convergence rate and noise properties. The results show that the proposed method converges numerically much faster than MAP-EM and with a speed that is comparable to that for an ordered-subset type method. The proposed method is effective in accelerating MAP-EM tomographic reconstruction.
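One simple way to realize over-relaxation of an EM-type multiplicative update (a sketch of the general idea, not the paper's exact MAP-AEM, which also carries a prior term) is to raise the MLEM correction factor to a power omega > 1:

```python
import numpy as np

def mlem(A, y, n_iter=50, omega=1.0):
    """MLEM update x <- x * (A^T (y / Ax) / A^T 1) ** omega.
    omega = 1 is plain MLEM; omega > 1 takes a longer multiplicative
    step, one simple acceleration scheme (not the paper's MAP-AEM)."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                         # sensitivity image A^T 1
    for _ in range(n_iter):
        ratio = y / np.maximum(A @ x, 1e-12)     # guard against division by zero
        x = x * (A.T @ ratio / sens) ** omega
    return x

# tiny toy system: 3 "projections" of a 2-pixel object
A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
y = A @ x_true                                   # noiseless data
x_hat = mlem(A, y, n_iter=200)
```

With noiseless data the iteration recovers the true activity; with noisy data and large omega the update can overshoot, which is why over-relaxed schemes need care in choosing the factor.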
Krol, Andrzej; Bowsher, James E.; Feiglin, David H.; Gagne, George M.; Hellwig, Bradford J.; Tornai, Martin P.; Thomas, Frank D.
2001-07-01
The purpose of this study was to evaluate performance of the EM-IntraSPECT (EMIS) algorithm for non-uniform attenuation correction in the chest. EMIS is a maximum-likelihood expectation maximization (MLEM) algorithm for simultaneously estimating SPECT emission and attenuation parameters from emission data alone. EMIS uses the activity within the patient as transmission tomography sources, with which attenuation coefficients can be estimated. A thorax phantom with a normal heart was used. The activity images reconstructed by EMIS were compared to images reconstructed using a conventional MLEM with a fixed uniform attenuation map. Uniformity of normal heart was improved with EMIS as compared to a conventional MLEM.
Kleinberg, Jon
2006-01-01
Algorithm Design introduces algorithms by looking at the real-world problems that motivate them. The book teaches students a range of design and analysis techniques for problems that arise in computing applications. The text encourages an understanding of the algorithm design process and an appreciation of the role of algorithms in the broader field of computer science.
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem-solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
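A bare-bones generational GA illustrating the basic concepts (tournament selection, one-point crossover, bit-flip mutation; the operators and parameters are textbook defaults, not those of the NASA tool):

```python
import random

def genetic_algorithm(fitness, n_bits, pop_size=40, gens=100,
                      p_cross=0.9, p_mut=0.02, seed=1):
    """Minimal generational GA: size-2 tournament selection, one-point
    crossover, and bit-flip mutation over fixed-length bitstrings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        def pick():                                  # size-2 tournament
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick()[:], pick()[:]
            if rng.random() < p_cross:               # one-point crossover
                cut = rng.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):                   # bit-flip mutation
                nxt.append([b ^ 1 if rng.random() < p_mut else b for b in child])
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

best = genetic_algorithm(sum, n_bits=30)   # "one-max": maximize the number of 1s
```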
EXTREME: an online EM algorithm for motif discovery.
Quang, Daniel; Xie, Xiaohui
2014-06-15
Identifying regulatory elements is a fundamental problem in the field of gene transcription. Motif discovery, the task of identifying the sequence preference of the transcription factor proteins that bind to these elements, is an important step in this challenge. MEME is a popular motif discovery algorithm. Unfortunately, MEME's running time scales poorly with the size of the dataset. Experiments such as ChIP-Seq and DNase-Seq are providing a rich amount of information on the binding preferences of transcription factors. MEME cannot discover motifs in data from these experiments in a practical amount of time without a compromising strategy such as discarding a majority of the sequences. We present EXTREME, a motif discovery algorithm designed to find DNA-binding motifs in ChIP-Seq and DNase-Seq data. Unlike MEME, which uses the batch expectation-maximization algorithm for motif discovery, EXTREME uses the online expectation-maximization algorithm to discover motifs. EXTREME can discover motifs in large datasets in a practical amount of time without discarding any sequences. Using EXTREME on ChIP-Seq and DNase-Seq data, we discover many motifs, including some novel and infrequent motifs that can only be discovered by using the entire dataset. Conservation analysis of one of these novel infrequent motifs confirms that it is evolutionarily conserved and possibly functional. All source code is available at the Github repository http://github.com/uci-cbcl/EXTREME.
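The batch-versus-online distinction can be illustrated on a toy model. The sketch below runs online EM on a two-component 1-D Gaussian mixture, updating running sufficient statistics after every observation with a decaying step size; it is an analogue of the idea, not EXTREME's motif model:

```python
import numpy as np

def online_em_gmm(data):
    """One-pass online EM for a two-component 1-D Gaussian mixture:
    each observation blends its expected sufficient statistics into
    running totals (step size eta_t), then the M-step is re-solved in
    closed form. A toy analogue of the online-EM idea in EXTREME."""
    mu = np.array([data.min(), data.max()])      # crude but deterministic init
    var = np.full(2, data.var())
    w = np.array([0.5, 0.5])
    s0, s1, s2 = w.copy(), w * mu, w * (mu ** 2 + var)
    for t, x in enumerate(data, start=1):
        # E-step for a single point: posterior responsibilities
        p = w * np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(var)
        p /= p.sum()
        eta = (t + 2) ** -0.6                    # decaying step size
        s0 = (1 - eta) * s0 + eta * p            # blend sufficient statistics
        s1 = (1 - eta) * s1 + eta * p * x
        s2 = (1 - eta) * s2 + eta * p * x ** 2
        w, mu = s0, s1 / s0                      # closed-form M-step
        var = np.maximum(s2 / s0 - mu ** 2, 1e-6)
    return w, mu, var

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-3, 1, 2000), rng.normal(3, 1, 2000)])
rng.shuffle(data)
w, mu, var = online_em_gmm(data)
```

Unlike batch EM, no full pass over the dataset is needed per update, which is what makes the approach attractive for very large sequence collections.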
Directory of Open Access Journals (Sweden)
Qin Zhao
2016-04-01
Two experiments were conducted to investigate the effects of expecting immediate grades on numerical and verbal reasoning performance and the moderating role of achievement goals. Anticipated grade proximity (immediate vs. 1 week later) and goal orientation (approach vs. avoidance) were manipulated with instructions. Experiment 1 showed that expecting immediate grades yielded lower numerical performance than expecting delayed feedback, regardless of participants' goal orientation. Neither grade proximity nor goal orientation impacted verbal performance. In Experiment 2, we used a stronger goal manipulation and included measures of motivation. Expecting immediate grades increased task anxiety, lowered task involvement, and lowered task effort among participants with avoidance goals, compared with expecting delayed grades. The effects on performance were not replicated in Experiment 2, however. The findings demonstrate that expecting immediate grades may have negative consequences under certain conditions, including demotivation and performance impairment.
Gholami, Ali; Honarvar, Farhang; Abrishami Moghaddam, Hamid
2017-06-01
This paper presents an accurate and easy-to-implement algorithm for estimating the parameters of the asymmetric Gaussian chirplet model (AGCM) used for modeling echoes measured in ultrasonic nondestructive testing (NDT) of materials. The proposed algorithm is a combination of particle swarm optimization (PSO) and Levenberg-Marquardt (LM) algorithms. PSO does not need an accurate initial guess and quickly converges to a reasonable output while LM needs a good initial guess in order to provide an accurate output. In the combined algorithm, PSO is run first to provide a rough estimate of the output and this result is consequently inputted to the LM algorithm for more accurate estimation of parameters. To apply the algorithm to signals with multiple echoes, the space alternating generalized expectation maximization (SAGE) is used. The proposed combined algorithm is robust and accurate. To examine the performance of the proposed algorithm, it is applied to a number of simulated echoes having various signal to noise ratios. The combined algorithm is also applied to a number of experimental ultrasonic signals. The results corroborate the accuracy and reliability of the proposed combined algorithm.
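The PSO-then-LM pipeline can be sketched as follows on a simplified Gaussian echo model (the full AGCM's chirp and asymmetry terms, and the SAGE loop for multiple echoes, are omitted; all parameter values are illustrative):

```python
import numpy as np
from scipy.optimize import least_squares

def echo(t, p):
    """Simplified Gaussian echo A*exp(-a*(t-tau)^2)*cos(2*pi*f*(t-tau));
    the full AGCM adds chirp and asymmetry terms omitted here."""
    A, a, tau, f = p
    return A * np.exp(-a * (t - tau) ** 2) * np.cos(2 * np.pi * f * (t - tau))

def pso(obj, lo, hi, n=30, iters=60, seed=0):
    """Minimal global-best PSO, used only to land a rough starting point."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    x = rng.uniform(lo, hi, (n, len(lo)))
    v = np.zeros_like(x)
    pbest = x.copy()
    pval = np.array([obj(q) for q in x])
    g = pbest[pval.argmin()]
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.array([obj(q) for q in x])
        better = val < pval
        pbest[better], pval[better] = x[better], val[better]
        g = pbest[pval.argmin()]
    return g

t = np.linspace(0.0, 10.0, 400)
y = echo(t, [1.0, 2.0, 4.0, 1.0])               # noiseless synthetic echo
obj = lambda p: float(np.sum((echo(t, p) - y) ** 2))
p0 = pso(obj, lo=[0.5, 0.5, 0.0, 0.5], hi=[2.0, 5.0, 10.0, 2.0])
fit = least_squares(lambda p: echo(t, p) - y, p0, method='lm')  # LM refinement
```

The division of labor mirrors the paper: PSO supplies a rough estimate without a good initial guess, and the Levenberg-Marquardt stage polishes it locally.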
Determining health expectancies
National Research Council Canada - National Science Library
Robine, Jean-Marie
2003-01-01
Contents (excerpt): Jean-Marie Robine, 9; 1. Increase in Life Expectancy and Concentration of Ages at Death, France Meslé and Jacques Vallin, 13; 2. Compression of Morbidity...
Brañas-Garza, Pablo; Rodríguez-Lara, Ismael; Sánchez, Angel
2017-02-01
Mechanisms supporting human ultra-cooperativeness are very much subject to debate. One psychological feature likely to be relevant is the formation of expectations, particularly about receiving cooperative or generous behavior from others. Without such expectations, social life will be seriously impeded and, in turn, expectations leading to satisfactory interactions can become norms and institutionalize cooperation. In this paper, we assess people’s expectations of generosity in a series of controlled experiments using the dictator game. Despite differences in respective roles, involvement in the game, degree of social distance or variation of stakes, the results are conclusive: subjects seldom predict that dictators will behave selfishly (by choosing the Nash equilibrium action, namely giving nothing). The majority of subjects expect that dictators will choose the equal split. This implies that generous behavior is not only observed in the lab, but also expected by subjects. In addition, expectations are accurate, matching closely the donations observed and showing that as a society we have a good grasp of how we interact. Finally, correlation between expectations and actual behavior suggests that expectations can be an important ingredient of generous or cooperative behavior.
Polarity related influence maximization in signed social networks.
Directory of Open Access Journals (Sweden)
Dong Li
Influence maximization in social networks has been widely studied, motivated by applications like the spread of ideas or innovations in a network and viral marketing of products. Current studies focus almost exclusively on unsigned social networks containing only positive relationships (e.g., friend or trust) between users. Influence maximization in signed social networks containing both positive relationships and negative relationships (e.g., foe or distrust) between users is still a challenging problem that has not been studied. Thus, in this paper, we propose the polarity-related influence maximization (PRIM) problem, which aims to find the seed node set with maximum positive influence or maximum negative influence in signed social networks. To address the PRIM problem, we first extend the standard Independent Cascade (IC) model to signed social networks and propose a polarity-related Independent Cascade (IC-P) diffusion model. We prove that the influence function of the PRIM problem under the IC-P model is monotonic and submodular. Thus, a greedy algorithm can be used to achieve an approximation ratio of 1-1/e for solving the PRIM problem in signed social networks. Experimental results on two signed social network datasets, Epinions and Slashdot, validate that our approximation algorithm for solving the PRIM problem outperforms state-of-the-art methods.
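The greedy algorithm enabled by monotonicity and submodularity can be sketched on the plain (unsigned) IC model, estimating spread by Monte-Carlo simulation; the graph and parameters below are toy assumptions, and the polarity-related IC-P model is not implemented:

```python
import random

def ic_spread(graph, seeds, p=0.2, trials=2000, rng=random.Random(0)):
    """Monte-Carlo estimate of expected spread under the (unsigned)
    Independent Cascade model: each newly activated node gets one
    chance to activate each out-neighbor with probability p.
    The shared fixed-seed rng keeps the sketch reproducible."""
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            u = frontier.pop()
            for v in graph.get(u, []):
                if v not in active and rng.random() < p:
                    active.add(v)
                    frontier.append(v)
        total += len(active)
    return total / trials

def greedy_im(graph, k):
    """Greedy seed selection; for a monotone submodular spread function
    this attains the (1 - 1/e) approximation guarantee cited above."""
    seeds = []
    for _ in range(k):
        best = max((n for n in graph if n not in seeds),
                   key=lambda n: ic_spread(graph, seeds + [n]))
        seeds.append(best)
    return seeds

# toy directed graph: node 0 is the hub
graph = {0: [1, 2, 3], 1: [0, 4], 2: [0], 3: [0], 4: [1, 5], 5: [4]}
seeds = greedy_im(graph, k=2)
```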
Polarity related influence maximization in signed social networks.
Li, Dong; Xu, Zhi-Ming; Chakraborty, Nilanjan; Gupta, Anika; Sycara, Katia; Li, Sheng
2014-01-01
Influence maximization in social networks has been widely studied motivated by applications like spread of ideas or innovations in a network and viral marketing of products. Current studies focus almost exclusively on unsigned social networks containing only positive relationships (e.g. friend or trust) between users. Influence maximization in signed social networks containing both positive relationships and negative relationships (e.g. foe or distrust) between users is still a challenging problem that has not been studied. Thus, in this paper, we propose the polarity-related influence maximization (PRIM) problem which aims to find the seed node set with maximum positive influence or maximum negative influence in signed social networks. To address the PRIM problem, we first extend the standard Independent Cascade (IC) model to the signed social networks and propose a Polarity-related Independent Cascade (named IC-P) diffusion model. We prove that the influence function of the PRIM problem under the IC-P model is monotonic and submodular. Thus, a greedy algorithm can be used to achieve an approximation ratio of 1-1/e for solving the PRIM problem in signed social networks. Experimental results on two signed social network datasets, Epinions and Slashdot, validate that our approximation algorithm for solving the PRIM problem outperforms state-of-the-art methods.
Preconditioned Alternating Projection Algorithms for Maximum a Posteriori ECT Reconstruction.
Krol, Andrzej; Li, Si; Shen, Lixin; Xu, Yuesheng
2012-11-01
We propose a preconditioned alternating projection algorithm (PAPA) for solving the maximum a posteriori (MAP) emission computed tomography (ECT) reconstruction problem. Specifically, we formulate the reconstruction problem as a constrained convex optimization problem with total variation (TV) regularization. We then characterize the solution of the constrained convex optimization problem and show that it satisfies a system of fixed-point equations defined in terms of two proximity operators raised from the convex functions that define the TV norm and the constraint involved in the problem. This characterization of the solution via proximity operators, which define two projection operators, naturally leads to an alternating projection algorithm for finding the solution. For efficient numerical computation, we introduce into the alternating projection algorithm a preconditioning matrix (the EM-preconditioner) for the dense system matrix involved in the optimization problem. We prove convergence of the preconditioned alternating projection algorithm theoretically. In numerical experiments, the performance of our algorithm, with an appropriately selected preconditioning matrix, is compared with that of the conventional MAP expectation-maximization (MAP-EM) algorithm with a TV regularizer (EM-TV) and that of the recently developed nested EM-TV algorithm for ECT reconstruction. Based on the numerical experiments performed in this work, we observe that the alternating projection algorithm with the EM-preconditioner significantly outperforms EM-TV in all aspects, including convergence speed, noise in the reconstructed images, and image quality. It also outperforms nested EM-TV in convergence speed while providing comparable image quality.
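As a one-line illustration of what a proximity operator is (using the l1 norm, whose prox has a closed form, rather than the TV norm and constraint actually used in PAPA):

```python
import numpy as np

def prox_l1(v, lam):
    """Proximity operator of lam*||.||_1, i.e. the closed-form solution of
    argmin_x 0.5*||x - v||^2 + lam*||x||_1 (soft-thresholding). PAPA
    alternates analogous proximity/projection steps for the TV norm and
    the data constraint, for which no such closed form exists."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

x = prox_l1(np.array([3.0, -0.5, 1.2]), lam=1.0)   # -> [2.0, 0.0, 0.2] (up to rounding)
```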
Unsupervised classification of multivariate geostatistical data: Two algorithms
Romary, Thomas; Ors, Fabien; Rivoirard, Jacques; Deraisme, Jacques
2015-12-01
With the increasing development of remote sensing platforms and the evolution of sampling facilities in the mining and oil industries, spatial datasets are becoming increasingly large, include a growing number of variables, and cover ever wider areas. Therefore, it is often necessary to split the domain of study to account for radically different behaviors of the natural phenomenon over the domain and to simplify the subsequent modeling step. The definition of these areas can be seen as a problem of unsupervised classification, or clustering, where we try to divide the domain into homogeneous domains with respect to the values taken by the variables at hand. The application of classical clustering methods, designed for independent observations, does not ensure the spatial coherence of the resulting classes. Image segmentation methods, based on, e.g., Markov random fields, are not adapted to irregularly sampled data. Other existing approaches, based on mixtures of Gaussian random functions estimated via the expectation-maximization algorithm, are limited to reasonable sample sizes and a small number of variables. In this work, we propose two algorithms based on adaptations of classical algorithms to multivariate geostatistical data. Both algorithms are model-free and can handle large volumes of multivariate, irregularly spaced data. The first proceeds by agglomerative hierarchical clustering, where spatial coherence is ensured by a proximity condition imposed for two clusters to merge. This proximity condition relies on a graph organizing the data in the coordinate space. The hierarchical algorithm can then be seen as a graph-partitioning algorithm, and following this interpretation, a spatial version of the spectral clustering algorithm is also proposed. The performance of both algorithms is assessed on toy examples and a mining dataset.
A Maximally Supersymmetric Kondo Model
Energy Technology Data Exchange (ETDEWEB)
Harrison, Sarah; Kachru, Shamit; Torroba, Gonzalo; /Stanford U., Phys. Dept. /SLAC
2012-02-17
We study the maximally supersymmetric Kondo model obtained by adding a fermionic impurity to N = 4 supersymmetric Yang-Mills theory. While the original Kondo problem describes a defect interacting with a free Fermi liquid of itinerant electrons, here the ambient theory is an interacting CFT, and this introduces qualitatively new features into the system. The model arises in string theory by considering the intersection of a stack of M D5-branes with a stack of N D3-branes, at a point in the D3 worldvolume. We analyze the theory holographically, and propose a dictionary between the Kondo problem and antisymmetric Wilson loops in N = 4 SYM. We perform an explicit calculation of the D5 fluctuations in the D3 geometry and determine the spectrum of defect operators. This establishes the stability of the Kondo fixed point together with its basic thermodynamic properties. Known supergravity solutions for Wilson loops allow us to go beyond the probe approximation: the D5s disappear and are replaced by three-form flux piercing a new topologically non-trivial S3 in the corrected geometry. This describes the Kondo model in terms of a geometric transition. A dual matrix model reflects the basic properties of the corrected gravity solution in its eigenvalue distribution.
Conormal Geometry of Maximal Minors
Kleiman, S L
1997-01-01
Let A be a Noetherian local domain, N be a finitely generated torsion-free module, and M a proper submodule that is generically equal to N. Let A[N] be an arbitrary graded overdomain of A generated as an A-algebra by N placed in degree 1. Let A[M] be the subalgebra generated by M. Set C:=Proj(A[M]) and r:=dim C. Form the (closed) subset W of Spec(A) of primes p where A[N]_p is not a finitely generated module over A[M]_p, and denote the preimage of W in C by E. We prove this: (1) dim E=r-1 if either (a) N is free and A[N] is the symmetric algebra, or (b) W is nonempty and A is universally catenary, and (2) E is equidimensional if (a) holds and A is universally catenary. Our proof was inspired by some recent work of Gaffney and Massey, which we sketch; they proved (2) when A is the ring of germs of a complex-analytic variety, and applied it to perfect a characterization of Thom's A_f-condition in equisingularity theory. From (1), we recover, with new proofs, the usual height inequality for maximal minors and ...
A Revenue Maximization Approach for Provisioning Services in Clouds
Directory of Open Access Journals (Sweden)
Li Pan
2015-01-01
With the increased reliability, security, and reduced cost of cloud services, more and more users are attracted to having their jobs and applications outsourced to IaaS data centers. For a cloud provider, deciding how to provision services to clients is far from trivial. The objective of this decision is to maximize the provider's revenue while fulfilling its IaaS resource constraints. This problem is defined as the IaaS cloud provider revenue maximization (ICPRM) problem in this paper. We formulate a service provision approach to help a cloud provider determine which combination of clients to admit, and at what Quality-of-Service (QoS) levels, to maximize the provider's revenue given its available resources. We show that the overall problem is NP-hard and develop metaheuristic solutions based on the genetic algorithm to achieve revenue maximization. The experimental simulations and numerical results show that the proposed approach is both effective and efficient in solving ICPRM problems.
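For intuition about the admission trade-off, a single-resource, single-QoS toy version reduces to 0/1 knapsack and can be solved exactly by dynamic programming; the genetic algorithm becomes attractive for the full multi-resource, multi-QoS problem, which is NP-hard. The revenue/demand numbers below are invented:

```python
def admit_clients(requests, capacity):
    """0/1 knapsack DP over client requests (revenue, resource demand):
    returns the maximum revenue obtainable within the resource capacity.
    A toy stand-in for ICPRM, not the paper's genetic-algorithm approach."""
    best = [0] * (capacity + 1)
    for revenue, demand in requests:
        # iterate capacity downward so each client is admitted at most once
        for c in range(capacity, demand - 1, -1):
            best[c] = max(best[c], best[c - demand] + revenue)
    return best[capacity]

revenue = admit_clients([(60, 10), (100, 20), (120, 30)], capacity=50)  # -> 220
```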
Communicating expectancies about others
Wigboldus, Daniel H. J.; Semin, Gun R.; Spears, Russell
2006-01-01
The linguistic expectancy bias hypothesis predicts that, in general, person impressions are shared with others via subtle differences in the level of linguistic abstraction that is used to communicate expected and unexpected information about an individual. In a two-part communication experiment, we
Marijuana: College Students' Expectations.
Rumstein, Regina
1980-01-01
Focused on college students' expectations about marijuana. Undergraduates (N=210) expected marijuana to have sedating effects; they largely discounted psychological consequences. Students considered marijuana to be an educational issue and favored decriminalization of the drug. Users, occasional users, and nonusers differed significantly in…
Wagener, F.
2014-01-01
The rational expectations hypothesis is one of the cornerstones of current economic theorizing. This review discusses a number of experiments that focus on expectation formation by human subjects in a number of learning-to-forecast experiments and analyzes the implications for the rational
A Rational Expectations Experiment.
Peterson, Norris A.
1990-01-01
Presents a simple classroom simulation of the Lucas supply curve mechanism with rational expectations. Concludes that the exercise has proved very useful as an introduction to the concepts of rational and adaptive expectations, the Lucas supply curve, the natural rate hypothesis, and random supply shocks. (DB)
DEFF Research Database (Denmark)
Mcneill, Ilona M.; Dunlop, Patrick D.; Heath, Jonathan B.
2013-01-01
People who live in wildfire-prone communities tend to form their own hazard-related expectations, which may influence their willingness to prepare for a fire. Past research has already identified two important expectancy-based factors associated with people's intentions to prepare for a natural......) and measured actual rather than intended preparedness. In addition, we tested the relation between preparedness and two additional threat-related expectations: the expectation that one can rely on an official warning and the expectation of encountering obstacles (e.g., the loss of utilities) during a fire....... A survey completed by 1,003 residents of wildfire-prone areas in Perth, Australia, revealed that perceived risk (especially risk severity) and perceived protection responsibility were both positively associated with all types of preparedness, but the latter did not significantly predict preparedness after...
Joux, Antoine
2009-01-01
Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic
Theije, P.A.M. de
2002-01-01
A new adaptive method is presented to display large amounts of data on, for example, a computer screen. The algorithm reduces a set of N samples to a single value, using the statistics of the background and comparing the true peak value in the set of N samples to the expected peak value of this
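The abstract is truncated, but the flavor of such a statistic can be guessed at: compare the observed peak of N background samples with the expected peak of N i.i.d. Gaussian draws, roughly sigma*sqrt(2 ln N). The sketch below is a speculative reading of that idea, not the algorithm of the paper:

```python
import numpy as np

def peak_score(samples, sigma):
    """Compare the observed peak of N samples to the expected peak of
    N iid Gaussian background draws (approx. sigma * sqrt(2 ln N)); a
    clearly positive score flags a block worth highlighting on screen.
    This is a guessed illustration, not the paper's actual statistic."""
    n = len(samples)
    expected_peak = sigma * np.sqrt(2.0 * np.log(n))
    return (np.max(samples) - expected_peak) / sigma

rng = np.random.default_rng(0)
background = rng.normal(0.0, 1.0, 1000)
noise_score = peak_score(background, sigma=1.0)              # stays small for pure noise
signal_score = peak_score(np.append(background, 8.0), 1.0)   # clearly positive
```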
Hougardy, Stefan
2016-01-01
Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.
Kim, Ye-seul; Park, Hye-suk; Lee, Haeng-Hwa; Choi, Young-Wook; Choi, Jae-Gu; Kim, Hak Hee; Kim, Hee-Joung
2016-02-01
Digital breast tomosynthesis (DBT) is a recently developed system for three-dimensional imaging that offers the potential to reduce the false positives of mammography by preventing tissue overlap. Many qualitative evaluations of digital breast tomosynthesis were previously performed by using a phantom with an unrealistic model and with heterogeneous background and noise, which is not representative of real breasts. The purpose of the present work was to compare reconstruction algorithms for DBT by using various breast phantoms; validation was also performed by using patient images. DBT was performed by using a prototype unit that was optimized for very low exposures and rapid readout. Three algorithms were compared: a back-projection (BP) algorithm, a filtered BP (FBP) algorithm, and an iterative expectation maximization (EM) algorithm. To compare the algorithms, three types of breast phantoms (homogeneous background phantom, heterogeneous background phantom, and anthropomorphic breast phantom) were evaluated, and clinical images were also reconstructed by using the different reconstruction algorithms. The in-plane image quality was evaluated based on the line profile and the contrast-to-noise ratio (CNR), and out-of-plane artifacts were evaluated by means of the artifact spread function (ASF). Parenchymal texture features of contrast and homogeneity were computed based on reconstructed images of an anthropomorphic breast phantom. The clinical images were studied to validate the effect of reconstruction algorithms. The results showed that the CNRs of masses reconstructed by using the EM algorithm were slightly higher than those obtained by using the BP algorithm, whereas the FBP algorithm yielded much lower CNR due to its high fluctuations of background noise. The FBP algorithm provides the best conspicuity for larger calcifications by enhancing their contrast and sharpness more than the other algorithms; however, in the case of small-size and low
Does mental exertion alter maximal muscle activation?
Directory of Open Access Journals (Sweden)
Vianney eRozand
2014-09-01
Mental exertion is known to impair endurance performance, but its effects on neuromuscular function remain unclear. The purpose of this study was to test the hypothesis that mental exertion reduces torque and muscle activation during intermittent maximal voluntary contractions of the knee extensors. Ten subjects performed, in randomized order, three separate mental exertion conditions lasting 27 minutes each: (i) high mental exertion (incongruent Stroop task), (ii) moderate mental exertion (congruent Stroop task), and (iii) low mental exertion (watching a movie). In each condition, mental exertion was combined with ten intermittent maximal voluntary contractions of the knee extensor muscles (one maximal voluntary contraction every 3 minutes). Neuromuscular function was assessed using electrical nerve stimulation. Maximal voluntary torque, maximal muscle activation and other neuromuscular parameters were similar across mental exertion conditions and did not change over time. These findings suggest that mental exertion does not affect neuromuscular function during intermittent maximal voluntary contractions of the knee extensors.
Maximal Inequalities for Dependent Random Variables
DEFF Research Database (Denmark)
Hoffmann-Jorgensen, Jorgen
2016-01-01
Maximal inequalities play a crucial role in many probabilistic limit theorems; for instance, the law of large numbers, the law of the iterated logarithm, the martingale limit theorem and the central limit theorem. Let X_1, X_2, ... be random variables with partial sums S_k = X_1 + ... + X_k. Then a maximal inequality gives conditions ensuring that the maximal partial sum M_n = max_{1 <= k <= n} S_k ...
Maximizing Barber's bipartite modularity is also hard
Miyauchi, Atsushi; Sukegawa, Noriyoshi
2013-01-01
Modularity introduced by Newman and Girvan [Phys. Rev. E 69, 026113 (2004)] is a quality function for community detection. Numerous methods for modularity maximization have been developed so far. In 2007, Barber [Phys. Rev. E 76, 066102 (2007)] introduced a variant of modularity called bipartite modularity which is appropriate for bipartite networks. Although maximizing the standard modularity is known to be NP-hard, the computational complexity of maximizing bipartite modularity has yet to b...
Tel, G.
We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of
DEFF Research Database (Denmark)
Canudas-Romo, Vladimir; DuGoff, Eva H; Wu, Albert W.
2016-01-01
We use expert clinical and public health opinion to estimate likely changes in the prevention and treatment of important disease conditions and how they will affect future life expectancy. Focus groups were held including clinical and public health faculty with expertise in the six leading causes of death in the United States. Mortality rates and life tables for 2040 were derived by sex and age. Life expectancy at age 20 and 65 was compared to figures published by the Social Security Administration and to estimates from the Lee-Carter method. There was agreement among all three approaches that life expectancy at age 20 will increase by approximately one year per decade for females and males between now and 2040. According to the clinical experts, 70% of the improvement in life expectancy will occur in cardiovascular disease and cancer, while in the last 30 years most of the improvement has occurred ...
Application and performance of an ML-EM algorithm in NEXT
Simón, A.; Lerche, C.; Monrabal, F.; Gómez-Cadenas, J. J.; Álvarez, V.; Azevedo, C. D. R.; Benlloch-Rodríguez, J. M.; Borges, F. I. G. M.; Botas, A.; Cárcel, S.; Carrión, J. V.; Cebrián, S.; Conde, C. A. N.; Díaz, J.; Diesburg, M.; Escada, J.; Esteve, R.; Felkai, R.; Fernandes, L. M. P.; Ferrario, P.; Ferreira, A. L.; Freitas, E. D. C.; Goldschmidt, A.; González-Díaz, D.; Gutiérrez, R. M.; Hauptman, J.; Henriques, C. A. O.; Hernandez, A. I.; Hernando Morata, J. A.; Herrero, V.; Jones, B. J. P.; Labarga, L.; Laing, A.; Lebrun, P.; Liubarsky, I.; López-March, N.; Losada, M.; Martín-Albo, J.; Martínez-Lema, G.; Martínez, A.; McDonald, A. D.; Monteiro, C. M. B.; Mora, F. J.; Moutinho, L. M.; Muñoz Vidal, J.; Musti, M.; Nebot-Guinot, M.; Novella, P.; Nygren, D. R.; Palmeiro, B.; Para, A.; Pérez, J.; Querol, M.; Renner, J.; Ripoll, L.; Rodríguez, J.; Rogers, L.; Santos, F. P.; dos Santos, J. M. F.; Sofka, C.; Sorel, M.; Stiegler, T.; Toledo, J. F.; Torrent, J.; Tsamalaidze, Z.; Veloso, J. F. C. A.; Webb, R.; White, J. T.; Yahlali, N.
2017-08-01
The goal of the NEXT experiment is the observation of neutrinoless double beta decay in 136Xe using a gaseous xenon TPC with electroluminescent amplification and specialized photodetector arrays for calorimetry and tracking. The NEXT Collaboration is exploring a number of reconstruction algorithms to exploit the full potential of the detector. This paper describes one of them: the Maximum Likelihood Expectation Maximization (ML-EM) method, a generic iterative algorithm to find maximum-likelihood estimates of parameters that has been applied to solve many different types of complex inverse problems. In particular, we discuss a bi-dimensional version of the method in which the photosensor signals integrated over time are used to reconstruct a transverse projection of the event. First results show that, when applied to detector simulation data, the algorithm achieves nearly optimal energy resolution (better than 0.5% FWHM at the Q value of 136Xe) for events distributed over the full active volume of the TPC.
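The ML-EM method described above can be illustrated on a toy linear inverse problem. The sketch below is the standard ML-EM multiplicative update for a Poisson model, not the NEXT Collaboration's implementation; the system matrix, image size, and iteration count are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear forward model y = A @ x_true; A plays the role of the
# (assumed known) detector response, x_true the unknown "image".
n_pix, n_det = 16, 40
A = rng.random((n_det, n_pix))          # nonnegative sensitivities
x_true = rng.random(n_pix) + 0.1        # nonnegative source distribution
y = A @ x_true                          # noiseless measurements for simplicity

# ML-EM iteration for a Poisson likelihood:
#   x <- (x / sum_i a_ij) * A^T (y / (A x))
x = np.ones(n_pix)                      # uniform nonnegative start
sens = A.sum(axis=0)                    # per-pixel sensitivity
for _ in range(500):
    x *= (A.T @ (y / (A @ x))) / sens

# The update preserves nonnegativity and drives A @ x toward y.
print(np.linalg.norm(A @ x - y))
```

The multiplicative form is why ML-EM is popular for emission tomography: a nonnegative start stays nonnegative, and each iteration increases the Poisson likelihood.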
A hybrid learning scheme combining EM and MASMOD algorithms for fuzzy local linearization modeling.
Gan, Q; Harris, C J
2001-01-01
Fuzzy local linearization (FLL) is a useful divide-and-conquer method for coping with complex problems such as modeling unknown nonlinear systems from data for state estimation and control. Based on a probabilistic interpretation of FLL, the paper proposes a hybrid learning scheme for FLL modeling, which uses a modified adaptive spline modeling (MASMOD) algorithm to construct the antecedent parts (membership functions) in the FLL model, and an expectation-maximization (EM) algorithm to parameterize the consequent parts (local linear models). The hybrid method not only has an approximation ability as good as most neuro-fuzzy network models, but also produces a parsimonious network structure (gain from MASMOD) and provides covariance information about the model error (gain from EM) which is valuable in applications such as state estimation and control. Numerical examples on nonlinear time-series analysis and nonlinear trajectory estimation using FLL models are presented to validate the derived algorithm.
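The EM half of such a hybrid scheme can be sketched generically. The code below fits a two-component mixture of linear regressions by EM, illustrating only the consequent-part (local linear model) estimation, not the MASMOD antecedent construction; all data, initial values, and component counts are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data from two hidden local linear models y = a_k*x + b_k + noise.
n = 200
x = rng.uniform(-1, 1, n)
z = rng.integers(0, 2, n)                       # hidden component labels
a_true, b_true, sigma = np.array([2.0, -1.0]), np.array([0.0, 1.0]), 0.1
y = a_true[z] * x + b_true[z] + sigma * rng.normal(size=n)

# EM for a 2-component mixture of linear regressions.
a = np.array([1.0, -0.5])                       # initial slopes
b = np.array([-0.5, 0.5])                       # initial intercepts
s2 = np.array([1.0, 1.0])                       # initial noise variances
w = np.array([0.5, 0.5])                        # initial mixing weights
ll_trace = []
for _ in range(100):
    # E-step: posterior responsibility of each component for each point.
    resid = y[:, None] - (a * x[:, None] + b)   # shape (n, 2)
    dens = w * np.exp(-0.5 * resid**2 / s2) / np.sqrt(2 * np.pi * s2)
    ll_trace.append(np.log(dens.sum(axis=1)).sum())
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: weighted least squares per component, then variance/weight updates.
    X = np.column_stack([x, np.ones(n)])
    for k in range(2):
        W = r[:, k]
        coef = np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (W * y))
        a[k], b[k] = coef
        s2[k] = (W * (y - X @ coef)**2).sum() / W.sum()
    w = r.mean(axis=0)

print(sorted(a))   # estimated slopes, one per local linear model
```

A useful sanity check for any EM implementation is that the observed-data log-likelihood (here `ll_trace`) never decreases across iterations.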
Expected Classification Accuracy
Directory of Open Access Journals (Sweden)
Lawrence M. Rudner
2005-08-01
Every time we make a classification based on a test score, we should expect some number of misclassifications. Some examinees whose true ability is within a score range will have observed scores outside of that range. A procedure for providing a classification table of true and expected scores is developed for polytomously scored items under item response theory and applied to state assessment data. A simplified procedure for estimating the table entries is also presented.
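The idea of an expected classification table can be illustrated with a deliberately simplified classical-test-theory stand-in. The ability grid, cutscore, and constant standard error of measurement below are invented; Rudner's actual procedure works with polytomous IRT models rather than this normal-error model:

```python
import numpy as np
from math import erf, sqrt

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Hypothetical setup: true scores on a 0-100 scale, pass/fail cutscore at 60,
# observed score = true score + normal error with SEM = 5.
thetas = np.arange(40, 81)                      # grid of true scores
p_theta = np.ones(len(thetas)) / len(thetas)    # assumed uniform ability distribution
cut, sem = 60, 5.0

# Expected 2x2 table over (true category, observed category).
table = np.zeros((2, 2))
for t, p in zip(thetas, p_theta):
    p_pass_obs = 1.0 - Phi((cut - t) / sem)     # P(observed >= cut | true = t)
    true_cat = int(t >= cut)
    table[true_cat, 1] += p * p_pass_obs
    table[true_cat, 0] += p * (1.0 - p_pass_obs)

expected_accuracy = table[0, 0] + table[1, 1]   # probability of consistent classification
print(np.round(table, 3), round(expected_accuracy, 3))
```

The off-diagonal cells are the expected misclassification rates the abstract refers to; they are largest for examinees whose true score sits near the cutscore.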
Nearly maximally predictive features and their dimensions
Marzen, Sarah E.; Crutchfield, James P.
2017-05-01
Scientific explanation often requires inferring maximally predictive features from a given data set. Unfortunately, the collection of minimal maximally predictive features for most stochastic processes is uncountably infinite. In such cases, one compromises and instead seeks nearly maximally predictive features. Here, we derive upper bounds on the rates at which the number and the coding cost of nearly maximally predictive features scale with desired predictive power. The rates are determined by the fractal dimensions of a process' mixed-state distribution. These results, in turn, show how widely used finite-order Markov models can fail as predictors and that mixed-state predictive features can offer a substantial improvement.
NATURE RESERVE SITE SELECTION TO MAXIMIZE EXPECTED SPECIES COVERED. (R825311)
The perspectives, information and conclusions conveyed in research project abstracts, progress reports, final reports, journal abstracts and journal publications convey the viewpoints of the principal investigator and may not represent the views and policies of ORD and EPA. Concl...
Two Time Point MS Lesion Segmentation in Brain MRI: An Expectation-Maximization Framework
National Research Council Canada - National Science Library
Jain, Saurabh; Ribbens, Annemie; Sima, Diana M; Cambron, Melissa; De Keyser, Jacques; Wang, Chenyu; Barnett, Michael H; Van Huffel, Sabine; Maes, Frederik; Smeets, Dirk
2016-01-01
Purpose: Lesion volume is a meaningful measure in multiple sclerosis (MS) prognosis. Manual lesion segmentation for computing volume in a single or multiple time points is time consuming and suffers from intra- and inter-observer variability...
DEFF Research Database (Denmark)
Cherchi, Elisabetta; Guevara, Cristian
2012-01-01
The random coefficients logit model allows a more realistic representation of agents' behavior. However, the estimation of that model may involve simulation, which may become impractical with many random coefficients because of the curse of dimensionality. In this paper, the traditional maximum ... with cross-sectional or with panel data, and (d) EM systematically attained more efficient estimators than the MSL method. The results imply that if the purpose of the estimation is only to determine the ratios of the model parameters (e.g., the value of time), the EM method should be preferred. For all ...
Two Time Point MS Lesion Segmentation in Brain MRI : An Expectation-Maximization Framework
Jain, Saurabh; Ribbens, Annemie; Sima, Diana M.; Cambron, Melissa; De Keyser, Jacques; Wang, Chenyu; Barnett, Michael H.; Van Huffel, Sabine; Maes, Frederik; Smeets, Dirk
2016-01-01
Purpose: Lesion volume is a meaningful measure in multiple sclerosis (MS) prognosis. Manual lesion segmentation for computing volume in a single or multiple time points is time consuming and suffers from intra- and inter-observer variability. Methods: In this paper, we present MSmetrix-long: a joint
The Ultimatum Game and Expected Utility Maximization – In View of Attachment Theory
Almakias, Shaul; Weiss, Avi
2010-01-01
In this paper we import a mainstream psychological theory, known as attachment theory, into economics and show the implications of this theory for economic behavior by individuals in the ultimatum bargaining game. Attachment theory examines the psychological tendency to seek proximity to another person, to feel secure when that person is present, and to feel anxious when that person is absent. An individual's attachment style can be classified along two-dimensional axes, one representing attac...
Light Microscopy at Maximal Precision
Bierbaum, Matthew; Leahy, Brian D.; Alemi, Alexander A.; Cohen, Itai; Sethna, James P.
2017-10-01
Microscopy is the workhorse of the physical and life sciences, producing crisp images of everything from atoms to cells well beyond the capabilities of the human eye. However, the analysis of these images is frequently little more accurate than manual marking. Here, we revolutionize the analysis of microscopy images, extracting all the useful information theoretically contained in a complex microscope image. Using a generic, methodological approach, we extract the information by fitting experimental images with a detailed optical model of the microscope, a method we call parameter extraction from reconstructing images (PERI). As a proof of principle, we demonstrate this approach with a confocal image of colloidal spheres, improving measurements of particle positions and radii by 10-100 times over current methods and attaining the maximum possible accuracy. With this unprecedented accuracy, we measure nanometer-scale colloidal interactions in dense suspensions solely with light microscopy, a previously impossible feat. Our approach is generic and applicable to imaging methods from brightfield to electron microscopy, where we expect accuracies of 1 nm and 0.1 pm, respectively.
Le, Thanh; Altman, Tom; Gardiner, Katheleen
2010-02-01
Identification of motifs in biological sequences is a challenging problem because such motifs are often short, degenerate, and may contain gaps. Most algorithms that have been developed for motif-finding use the expectation-maximization (EM) algorithm iteratively. Although EM algorithms can converge quickly, they depend strongly on initialization parameters and can converge to local sub-optimal solutions. In addition, they cannot generate gapped motifs. The effectiveness of EM algorithms in motif finding can be improved by incorporating methods that choose different sets of initial parameters to enable escape from local optima, and that allow gapped alignments within motif models. We have developed HIGEDA, an algorithm that uses the hierarchical gene-set genetic algorithm (HGA) with EM to initiate and search for the best parameters for the motif model. In addition, HIGEDA can identify gapped motifs using a position weight matrix and dynamic programming to generate an optimal gapped alignment of the motif model with sequences from the dataset. We show that HIGEDA outperforms MEME and other motif-finding algorithms on both DNA and protein sequences. Source code and test datasets are available for download at http://ouray.cudenver.edu/~tnle/, implemented in C++ and supported on Linux and MS Windows.
Seifarth, Joshua E; McGowan, Cheri L; Milne, Kevin J
2012-12-01
A sexual dimorphism in human life expectancy has existed in almost every country for as long as records have been kept. Although human life expectancy has increased each year, females still live longer, on average, than males. Undoubtedly, the reasons for the sex gap in life expectancy are multifaceted, and it has been discussed from both sociological and biological perspectives. However, even if biological factors make up only a small percentage of the determinants of the sex difference in this phenomenon, parity in average life expectancy should not be anticipated. The aim of this review is to highlight biological mechanisms that may underlie the sexual dimorphism in life expectancy. Using PubMed, ISI Web of Knowledge, and Google Scholar, as well as cited and citing reference histories of articles through August 2012, English-language articles were identified, read, and synthesized into categories that could account for biological sex differences in human life expectancy. The examination of biological mechanisms accounting for the female-based advantage in human life expectancy has been an active area of inquiry; however, it is still difficult to prove the relative importance of any 1 factor. Nonetheless, biological differences between the sexes do exist and include differences in genetic and physiological factors such as progressive skewing of X chromosome inactivation, telomere attrition, mitochondrial inheritance, hormonal and cellular responses to stress, immune function, and metabolic substrate handling among others. These factors may account for at least a part of the female advantage in human life expectancy. Despite noted gaps in sex equality, higher body fat percentages and lower physical activity levels globally at all ages, a sex-based gap in life expectancy exists in nearly every country for which data exist. There are several biological mechanisms that may contribute to explaining why females live longer than men on average, but the complexity of the
Allocating dissipation across a molecular machine cycle to maximize flux.
Brown, Aidan I; Sivak, David A
2017-10-17
Biomolecular machines consume free energy to break symmetry and make directed progress. Nonequilibrium ATP concentrations are the typical free energy source, with one cycle of a molecular machine consuming a certain number of ATP, providing a fixed free energy budget. Since evolution is expected to favor rapid-turnover machines that operate efficiently, we investigate how this free energy budget can be allocated to maximize flux. Unconstrained optimization eliminates intermediate metastable states, indicating that flux is enhanced in molecular machines with fewer states. When maintaining a set number of states, we show that, in contrast to previous findings, the flux-maximizing allocation of dissipation is not even. This result is consistent with the coexistence of both "irreversible" and reversible transitions in molecular machine models that successfully describe experimental data, which suggests that, in evolved machines, different transitions differ significantly in their dissipation.
Financial Management Practices, Wealth Maximization Criterion and ...
African Journals Online (AJOL)
In the field of financial management, shareholders' wealth maximization is often seen as the desirable goal not only from the shareholders' perspective but for society at large, with the firm's primary goal aimed mainly at maximizing the wealth of its shareholders. This study thus aimed at determining the impact of the core ...
Corporate Social Responsibility and Profit Maximizing Behaviour
Becchetti, Leonardo; Giallonardo, Luisa; Tessitore, Maria Elisabetta
2005-01-01
We examine the behavior of a profit maximizing monopolist in a horizontal differentiation model in which consumers differ in their degree of social responsibility (SR) and consumers' SR is dynamically influenced by habit persistence. The model outlines parametric conditions under which (consumer-driven) corporate social responsibility is an optimal choice compatible with profit maximizing behavior.
Maximal Entanglement in High Energy Physics
Cervera-Lierta, Alba; Latorre, José I.; Rojo, Juan; Rottoli, Luca
2017-01-01
We analyze how maximal entanglement is generated at the fundamental level in QED by studying correlations between helicity states in tree-level scattering processes at high energy. We demonstrate that two mechanisms for the generation of maximal entanglement are at work: i) $s$-channel processes
Alternative trailer configurations for maximizing payloads
Jason D. Thompson; Dana Mitchell; John Klepac
2017-01-01
In order for harvesting contractors to stay ahead of increasing costs, it is imperative that they employ all options to maximize productivity and efficiency. Transportation can account for half the cost to deliver wood to a mill. Contractors seek to maximize truck payload to increase productivity. The Forest Operations Research Unit, Southern Research Station, USDA...
Iterative reconstruction of transcriptional regulatory networks: an algorithmic approach.
Directory of Open Access Journals (Sweden)
Christian L Barrett
2006-05-01
The number of complete, publicly available genome sequences is now greater than 200, and this number is expected to rapidly grow in the near future as metagenomic and environmental sequencing efforts escalate and the cost of sequencing drops. In order to make use of this data for understanding particular organisms and for discerning general principles about how organisms function, it will be necessary to reconstruct their various biochemical reaction networks. Principal among these will be transcriptional regulatory networks. Given the physical and logical complexity of these networks, the various sources of (often noisy) data that can be utilized for their elucidation, the monetary costs involved, and the huge number of potential experiments (approximately 10^12) that can be performed, experiment design algorithms will be necessary for synthesizing the various computational and experimental data to maximize the efficiency of regulatory network reconstruction. This paper presents an algorithm for experimental design to systematically and efficiently reconstruct transcriptional regulatory networks. It is meant to be applied iteratively in conjunction with an experimental laboratory component. The algorithm is presented here in the context of reconstructing transcriptional regulation for metabolism in Escherichia coli, and, through a retrospective analysis with previously performed experiments, we show that the produced experiment designs conform to how a human would design experiments. The algorithm is able to utilize probability estimates based on a wide range of computational and experimental sources to suggest experiments with the highest potential of discovering the greatest amount of new regulatory knowledge.
Distributed interference alignment iterative algorithms in symmetric wireless network
Directory of Open Access Journals (Sweden)
YANG Jingwen
2015-02-01
Interference alignment is a novel interference management technique that has attracted wide attention. Interference alignment confines interference to the same signal subspace at the receiving terminal by precoding, so as to eliminate the influence of interference on the expected signals, thus allowing the desired user to achieve the maximum degrees of freedom. In this paper we study three typical algorithms for realizing interference alignment: minimizing the leakage interference, maximizing the signal-to-interference-plus-noise ratio (SINR), and minimizing the mean square error (MSE). All of these algorithms exploit the reciprocity of the wireless network and iterate the precoders between the original network and the reverse network so as to achieve interference alignment. We use the uplink transmit rate to analyze the performance of these three algorithms. Numerical simulation results show the advantages of these algorithms and provide a foundation for further study. The feasibility and future of interference alignment are also discussed.
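The first of these algorithms, iterative leakage minimization over the forward and reverse (reciprocal) networks, can be sketched as follows. The user/antenna/stream counts, the i.i.d. channel model, and the iteration count are arbitrary illustration choices, not the paper's simulation setup:

```python
import numpy as np

rng = np.random.default_rng(2)

# K-user MIMO interference channel: M antennas per node, d=1 stream per user.
K, M, d = 3, 2, 1
H = {(k, l): (rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))) / np.sqrt(2)
     for k in range(K) for l in range(K)}          # H[(k,l)]: from Tx l to Rx k

def min_eigvecs(Q, d):
    # Eigenvectors of Hermitian Q with the d smallest eigenvalues
    # (np.linalg.eigh returns eigenvalues in ascending order).
    return np.linalg.eigh(Q)[1][:, :d]

def update(filters, chan):
    # For each receiver, choose the subspace with least interference leakage.
    out = []
    for k in range(K):
        Q = sum(chan[(k, l)] @ filters[l] @ filters[l].conj().T @ chan[(k, l)].conj().T
                for l in range(K) if l != k)
        out.append(min_eigvecs(Q, d))
    return out

def leakage(V, U):
    # Total interference power leaking into the desired signal subspaces.
    return sum(np.linalg.norm(U[k].conj().T @ H[(k, l)] @ V[l]) ** 2
               for k in range(K) for l in range(K) if l != k)

# Random unitary precoders; reverse-network channels via reciprocity.
V = [np.linalg.qr(rng.normal(size=(M, d)) + 1j * rng.normal(size=(M, d)))[0]
     for _ in range(K)]
H_rev = {(k, l): H[(l, k)].conj().T for k in range(K) for l in range(K)}

init = leakage(V, update(V, H))
for _ in range(1000):
    U = update(V, H)          # forward network: receive filters suppress leakage
    V = update(U, H_rev)      # reverse network: precoders suppress leakage
final = leakage(V, U)
print(init, final)
```

Each half-step can only reduce total leakage, which is why the alternation converges; for this 3-user, 2-antenna, single-stream configuration an exact alignment exists and the leakage is driven toward zero.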
DEFF Research Database (Denmark)
Buraschi, Andrea; Piatti, Ilaria; Whelan, Paul
This paper studies the properties of bond risk premia in the cross-section of subjective expectations. We exploit an extensive dataset of yield curve forecasts from financial institutions and document a number of novel findings. First, contrary to evidence presented for stock markets but consistent ... hypothesis. ... -primary dealers. Third, we reject the null hypothesis that subjective expected bond returns are constant. When predicting long term rates, however, primary dealers have no information advantage. This suggests that a key source of variation in long-term bonds are risk premia and not short-term rate variation ... Finally, we use ex-ante spanned subjective beliefs to evaluate several reduced-form and structural models. We find support for heterogeneous beliefs models and also uncover a number of statistically significant relationships in favour of alternative rational expectations models once the effect ...
The Expected Time Complexity of Parallel Graph and Digraph Algorithms.
1982-04-01
random input graphs. This includes the work of [Angluin, Valiant, 79], [Karp, 76], [Karp, Sipser, 81], [Schnorr, 78], [Karp, Tarjan, 80], [Reif... Acad. Press, New York, 1976, pp. 1-19. R. M. Karp and M. Sipser, "Maximum Matchings in Sparse Random Graphs," Foundations of Computer Science, 1981.
Purification of Gaussian maximally mixed states
Energy Technology Data Exchange (ETDEWEB)
Jeong, Kabgyun [Center for Macroscopic Quantum Control, Department of Physics and Astronomy, Seoul National University, Seoul 08826 (Korea, Republic of); School of Computational Sciences, Korea Institute for Advanced Study, Seoul 02455 (Korea, Republic of); Lim, Youngrong, E-mail: sshaep@gmail.com [Center for Macroscopic Quantum Control, Department of Physics and Astronomy, Seoul National University, Seoul 08826 (Korea, Republic of)
2016-10-23
We find that the purifications of several Gaussian maximally mixed states (GMMSs) correspond to some Gaussian maximally entangled states (GMESs) in the continuous-variable regime. Here, we consider a two-mode squeezed vacuum (TMSV) state as a purification of the thermal state and construct a general formalism of the Gaussian purification process. Moreover, we introduce other kinds of GMESs via the process. All of our purified states of the GMMSs exhibit Gaussian profiles; thus, the states show maximal quantum entanglement in the Gaussian regime. - Highlights: • Candidates of Gaussian maximally mixed states are proposed. • Gaussian maximally entangled states are obtained using the purification process. • The suggested states can be applicable for the test of the capacity problem in the Gaussian regime.
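The TMSV-as-purification statement can be written explicitly. A standard textbook sketch in the Fock basis (the squeezing parameter $r$ is conventional notation, not taken from the paper):

```latex
% Two-mode squeezed vacuum as a purification of the single-mode thermal state:
\begin{align}
|\mathrm{TMSV}\rangle_{AB} &= \sqrt{1-\lambda^{2}} \sum_{n=0}^{\infty} \lambda^{n}\,
  |n\rangle_{A}|n\rangle_{B}, \qquad \lambda = \tanh r, \\
\rho_{A} = \operatorname{Tr}_{B} |\mathrm{TMSV}\rangle\langle\mathrm{TMSV}|
 &= (1-\lambda^{2}) \sum_{n=0}^{\infty} \lambda^{2n}\, |n\rangle\langle n|,
\end{align}
```

Tracing out mode $B$ thus yields a thermal state with mean photon number $\bar{n}=\sinh^{2} r$, which is the sense in which the TMSV state purifies the thermal (Gaussian maximally mixed) state.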
Maximal Entanglement in High Energy Physics
Directory of Open Access Journals (Sweden)
Alba Cervera-Lierta, José I. Latorre, Juan Rojo, Luca Rottoli
2017-11-01
We analyze how maximal entanglement is generated at the fundamental level in QED by studying correlations between helicity states in tree-level scattering processes at high energy. We demonstrate that two mechanisms for the generation of maximal entanglement are at work: (i) $s$-channel processes where the virtual photon carries equal overlaps of the helicities of the final state particles, and (ii) the indistinguishable superposition between $t$- and $u$-channels. We then study whether requiring maximal entanglement constrains the coupling structure of QED and the weak interactions. In the case of photon-electron interactions unconstrained by gauge symmetry, we show how this requirement allows reproducing QED. For $Z$-mediated weak scattering, the maximal entanglement principle leads to non-trivial predictions for the value of the weak mixing angle $\theta_W$. Our results are a first step towards understanding the connections between maximal entanglement and the fundamental symmetries of high-energy physics.
Rational Expectations in Games
Robert J. Aumann; Jacques H. Dreze
2008-01-01
A player i's actions in a game are determined by her beliefs about other players; these depend on the game's real-life context, not only its formal description. Define a game situation as a game together with such beliefs; call the beliefs— and i's resulting expectation—rational if there is common knowledge of rationality and a common prior. In two-person zero-sum games, i's only rational expectation is the game’s value. In an arbitrary game G, we characterize i's rational expectations in ter...
The Qualitative Expectations Hypothesis
DEFF Research Database (Denmark)
Frydman, Roman; Johansen, Søren; Rahbek, Anders
2017-01-01
We introduce the Qualitative Expectations Hypothesis (QEH) as a new approach to modeling macroeconomic and financial outcomes. Building on John Muth's seminal insight underpinning the Rational Expectations Hypothesis (REH), QEH represents the market's forecasts to be consistent with the predictions of an economist's model. However, by assuming that outcomes lie within stochastic intervals, QEH, unlike REH, recognizes the ambiguity faced by an economist and market participants alike. Moreover, QEH leaves the model open to ambiguity by not specifying a mechanism determining specific values that outcomes take...
Energy Technology Data Exchange (ETDEWEB)
Ray, P.E.
1998-09-04
This document outlines the significant accomplishments of fiscal year 1998 for the Tank Waste Remediation System (TWRS) Project Hanford Management Contract (PHMC) team. Opportunities for improvement to better meet some performance expectations have been identified. The PHMC has performed at an excellent level in administration of leadership, planning, and technical direction. The contractor has met and made notable improvement in attaining customer satisfaction in mission execution. This document includes the team's recommendation that the PHMC TWRS Performance Expectation Plan evaluation rating for fiscal year 1998 be an Excellent.
DEFF Research Database (Denmark)
Shutin, Dmitriy; Fleury, Bernard Henri
2011-01-01
In this paper, we develop a sparse variational Bayesian (VB) extension of the space-alternating generalized expectation-maximization (SAGE) algorithm for the high resolution estimation of the parameters of relevant multipath components in the response of frequency and spatially selective wireless ... the variational free energy, distributions of the multipath component parameters can be obtained instead of parameter point estimates and ii) the estimation of the number of relevant multipath components and the estimation of the component parameters are implemented jointly. The sparsity is achieved by defining ...
Jois, Manjunath Holaykoppa Nanjunda
The conventional Influence Maximization problem is the problem of finding such a team (a small subset) of seed nodes in a social network that would maximize the spread of influence over the whole network. This paper considers a lottery system aimed at maximizing the awareness spread to promote energy conservation behavior as a stochastic Influence Maximization problem with the constraints ensuring lottery fairness. The resulting Multi-Team Influence Maximization problem involves assigning the probabilities to multiple teams of seeds (interpreted as lottery winners) to maximize the expected awareness spread. Such a variation of the Influence Maximization problem is modeled as a Linear Program; however, enumerating all the possible teams is a hard task considering that the feasible team count grows exponentially with the network size. In order to address this challenge, we develop a column generation based approach to solve the problem with a limited number of candidate teams, where new candidates are generated and added to the problem iteratively. We adopt a piecewise linear function to model the impact of including a new team so as to pick only such teams which can improve the existing solution. We demonstrate that with this approach we can solve such influence maximization problems to optimality, and perform computational study with real-world social network data sets to showcase the efficiency of the approach in finding lottery designs for optimal awareness spread. Lastly, we explore other possible scenarios where this model can be utilized to optimally solve the otherwise hard to solve influence maximization problems.
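The expected-spread objective underlying the team-selection LP can be illustrated with the classic greedy heuristic under the independent cascade model. This is not the thesis's column generation formulation; the toy graph, activation probabilities, and Monte Carlo settings below are invented:

```python
import random

random.seed(0)

# Hypothetical toy network: adjacency list with edge activation probabilities.
graph = {
    0: [(1, 0.9), (2, 0.9)],
    1: [(3, 0.9)],
    2: [(3, 0.9)],
    3: [(4, 0.9)],
    4: [],
    5: [(6, 0.1)],   # small, weakly connected component
    6: [],
}

def cascade(seeds):
    """One independent-cascade simulation; returns the number of activated nodes."""
    active, frontier = set(seeds), list(seeds)
    while frontier:
        u = frontier.pop()
        for v, p in graph[u]:
            if v not in active and random.random() < p:
                active.add(v)
                frontier.append(v)
    return len(active)

def expected_spread(seeds, trials=500):
    """Monte Carlo estimate of the expected awareness spread of a seed set."""
    return sum(cascade(seeds) for _ in range(trials)) / trials

def greedy_seeds(k):
    """Classic greedy: repeatedly add the node with the best marginal gain."""
    seeds = []
    for _ in range(k):
        best = max((n for n in graph if n not in seeds),
                   key=lambda n: expected_spread(seeds + [n]))
        seeds.append(best)
    return seeds

print(greedy_seeds(2))
```

Note how the second seed lands in the otherwise unreachable component rather than next to the first seed: the marginal gain of reinforcing an already well-covered region is small, which is the same diminishing-returns structure the column generation approach exploits when pricing new candidate teams.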
DEFF Research Database (Denmark)
Buraschi, Andrea; Piatti, Ilaria; Whelan, Paul
dynamics. The consensus is not a sufficient statistic of the cross-section of expectations, and we propose an alternative real-time aggregate measure of risk premia consistent with Friedman's market selection hypothesis. We then use this measure to evaluate structural models and find support...
Great Expectations. [Lesson Plan].
Devine, Kelley
Based on Charles Dickens' novel "Great Expectations," this lesson plan presents activities designed to help students understand the differences between totalitarianism and democracy, and that a writer of a story considers theme, plot, characters, setting, and point of view. The main activity of the lesson involves students working in groups to…
Williams, Roger; Williams, Sherry
2014-01-01
Author and husband, Roger Williams, is hearing and signs fluently, and author and wife, Sherry Williams, is deaf and uses both speech and signs, although she is most comfortable signing. As parents of six children--deaf and hearing--they are determined to encourage their children to do their best, and they always set their expectations high. They…
Independent component analysis for brain FMRI does indeed select for maximal independence.
Directory of Open Access Journals (Sweden)
Vince D Calhoun
A recent paper by Daubechies et al. claims that two independent component analysis (ICA) algorithms, Infomax and FastICA, which are widely used for functional magnetic resonance imaging (fMRI) analysis, select for sparsity rather than independence. The argument was supported by a series of experiments on synthetic data. We show that these experiments fall short of proving this claim and that the ICA algorithms are indeed doing what they are designed to do: identify maximally independent sources.
Weber, James Daniel
1999-11-01
This dissertation presents a new algorithm that allows a market participant to maximize its individual welfare in the electricity spot market. The use of such an algorithm in determining market equilibrium points, called Nash equilibria, is also demonstrated. The start of the algorithm is a spot market model that uses the optimal power flow (OPF), with a full representation of the transmission system. The OPF is also extended to model consumer behavior, and a thorough mathematical justification for the inclusion of the consumer model in the OPF is presented. The algorithm utilizes price and dispatch sensitivities, available from the Hessian matrix of the OPF, to help determine an optimal change in an individual's bid. The algorithm is shown to be successful in determining local welfare maxima, and the prospects for scaling the algorithm up to realistically sized systems are very good. Assuming a market in which all participants maximize their individual welfare, economic equilibrium points, called Nash equilibria, are investigated. This is done by iteratively solving the individual welfare maximization algorithm for each participant until a point is reached where all individuals stop modifying their bids. It is shown that these Nash equilibria can be located in this manner. However, it is also demonstrated that equilibria do not always exist, and are not always unique when they do exist. It is also shown that individual welfare is a highly nonconcave function resulting in many local maxima. As a result, a more global optimization technique, using a genetic algorithm (GA), is investigated. The genetic algorithm is successfully demonstrated on several systems. It is also shown that a GA can be developed using special niche methods, which allow a GA to converge to several local optima at once. Finally, the last chapter of this dissertation covers the development of a new computer visualization routine for power system analysis: contouring. The contouring algorithm is
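The iterated best-response procedure used above to locate Nash equilibria can be illustrated on a much simpler market, a textbook Cournot duopoly. The demand and cost numbers are invented, and this sketch omits the OPF-based spot market and transmission constraints of the dissertation:

```python
# Iterated best response for a Cournot duopoly with inverse demand
# P = a - b*(q1 + q2) and constant marginal cost c (hypothetical numbers).
a, b, c = 100.0, 1.0, 10.0

def best_response(q_other):
    # Maximizes profit (a - b*(q + q_other) - c) * q, giving
    # q = (a - c - b*q_other) / (2b), truncated at zero.
    return max(0.0, (a - c - b * q_other) / (2 * b))

# Each firm repeatedly re-optimizes against the other's latest bid,
# mirroring the iterative welfare-maximization scheme described above.
q1 = q2 = 0.0
for _ in range(100):
    q1 = best_response(q2)
    q2 = best_response(q1)

q_star = (a - c) / (3 * b)   # known Cournot-Nash equilibrium quantity
print(q1, q2, q_star)
```

Here the best-response map is a contraction, so the iteration converges to the unique equilibrium; as the dissertation notes, in richer settings equilibria may fail to exist or to be unique, in which case this simple iteration can cycle or stall.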
On Time with Minimal Expected Cost!
DEFF Research Database (Denmark)
David, Alexandre; Jensen, Peter Gjøl; Larsen, Kim Guldstrand
2014-01-01
) timed game essentially defines an infinite-state Markov (reward) decision process. In this setting the objective is classically to find a strategy that will minimize the expected reachability cost, but with no guarantees on worst-case behaviour. In this paper, we provide efficient methods for computing...... reachability strategies that will both ensure worst-case time bounds as well as provide (near-) minimal expected cost. Our method extends the synthesis algorithms of the synthesis tool Uppaal-Tiga with suitably adapted reinforcement learning techniques, which exhibit several orders of magnitude improvements w...
Maximizing Weapon System Availability With a Multi-Echelon Supply Network
2014-06-01
that system availability is maximized. This thesis develops a stochastic optimization model that prescribes optimal investments in spare parts for... stochastic optimization to maximize the expected number of time periods a weapon system is available for use. The key decisions in our model ...stationary demands over time. Simchi-Levi and Zhao provide methods for evaluating stochastic, multi-echelon inventory systems, specifically, the queuing
DEFF Research Database (Denmark)
Hansen, Casper Worm; Strulik, Holger
2017-01-01
This paper exploits the unexpected decline in the death rate from cardiovascular diseases since the 1970s as a large positive health shock that affected predominantly old-age mortality; i.e. the fourth stage of the epidemiological transition. Using a difference-in-differences estimation strategy......, we find that US states with higher mortality rates from cardiovascular disease prior to the 1970s experienced greater increases in adult life expectancy and higher education enrollment. Our estimates suggest that a one-standard deviation higher treatment intensity is associated with an increase...... in adult life expectancy of 0.37 years and 0.07–0.15 more years of higher education....
Spiking the expectancy profiles
DEFF Research Database (Denmark)
Hansen, Niels Chr.; Loui, Psyche; Vuust, Peter
statistical learning, causing comparatively sharper key profiles in musicians, we hypothesised that musical learning can be modelled as a process of entropy reduction through experience. Specifically, implicit learning of statistical regularities allows reduction in the relative entropy (i.e. symmetrised...... Kullback-Leibler or Jensen-Shannon Divergence) between listeners’ prior expectancy profiles and probability distributions of a musical style or of stimuli used in short-term experiments. Five previous probe-tone experiments with musicians and non-musicians were revisited. In Experiments 1-2 participants...... and relevance of musical training and within-participant decreases after short-term exposure to novel music. Thus, whereas inexperienced listeners make high-entropy predictions, statistical learning over varying timescales enables listeners to generate melodic expectations with reduced entropy...
Spiking the expectancy profiles
DEFF Research Database (Denmark)
Hansen, Niels Chr.; Loui, Psyche; Vuust, Peter
Melodic expectations have long been quantified using expectedness ratings. Motivated by statistical learning and sharper key profiles in musicians, we model musical learning as a process of reducing the relative entropy between listeners' prior expectancy profiles and probability distributions...... of a given musical style or of stimuli used in short-term experiments. Five previous probe-tone experiments with musicians and non-musicians are revisited. Exp. 1-2 used jazz, classical and hymn melodies. Exp. 3-5 collected ratings before and after exposure to 5, 15 or 400 novel melodies generated from...... a finite-state grammar using the Bohlen-Pierce scale. We find group differences in entropy corresponding to degree and relevance of musical training and within-participant decreases after short-term exposure. Thus, whereas inexperienced listeners make high-entropy predictions by default, statistical...
Genetic enhancements and expectations.
Sorensen, K
2009-07-01
Some argue that genetic enhancements and environmental enhancements are not importantly different: environmental enhancements such as private schools and chess lessons are simply the old-school way to have a designer baby. I argue that there is an important distinction between the two practices--a distinction that makes state restrictions on genetic enhancements more justifiable than state restrictions on environmental enhancements. The difference is that parents have no settled expectations about genetic enhancements.
Reputation and Rational Expectations
Andersen, Torben; Risager, Ole
1987-01-01
The paper considers the importance of reputation in relation to disinflationary policies in a continuous-time rational expectations model, where the private sector has incomplete information about the true preferences of the government. It is proved that there is a unique equilibrium with the important property that the costs of disinflation arise at the start of the game, where the policy has not yet gained credibility. Published in connection with a visit at the IIES.
Subjective Life Expectancy Among College Students.
Rodemann, Alyssa E; Arigo, Danielle
2017-09-14
Establishing healthy habits in college is important for long-term health. Despite existing health promotion efforts, many college students fail to meet recommendations for behaviors such as healthy eating and exercise, which may be due to low perceived risk for health problems. The goals of this study were to examine: (1) the accuracy of life expectancy predictions, (2) potential individual differences in accuracy (i.e., gender and conscientiousness), and (3) potential change in accuracy after inducing awareness of current health behaviors. College students from a small northeastern university completed an electronic survey, including demographics, initial predictions of their life expectancy, and their recent health behaviors. At the end of the survey, participants were asked to predict their life expectancy a second time. Their health data were then submitted to a validated online algorithm to generate calculated life expectancy. Participants significantly overestimated their initial life expectancy, and neither gender nor conscientiousness was related to the accuracy of these predictions. Further, subjective life expectancy decreased from initial to final predictions. These findings suggest that life expectancy perceptions present a unique, and potentially modifiable, psychological process that could influence college students' self-care.
HEALTH INSURANCE: CONTRIBUTIONS AND REIMBURSEMENT MAXIMAL
HR Division
2000-01-01
Affected by both the salary adjustment index on 1.1.2000 and the evolution of the staff members and fellows population, the average reference salary, which is used as an index for fixed contributions and reimbursement maxima, has changed significantly. An adjustment of the amounts of the reimbursement maxima and the fixed contributions is therefore necessary, as from 1 January 2000. Reimbursement maxima: The revised reimbursement maxima will appear on the leaflet summarising the benefits for the year 2000, which will soon be available from the divisional secretariats and from the AUSTRIA office at CERN. Fixed contributions: The fixed contributions, applicable to some categories of voluntarily insured persons, are set as follows (amounts in CHF for monthly contributions): voluntarily insured member of the personnel, with complete coverage: 815,- (was 803,- in 1999); voluntarily insured member of the personnel, with reduced coverage: 407,- (was 402,- in 1999); voluntarily insured no longer dependent child: 326,- (was 321...
Independent Component Analysis by Entropy Maximization (INFOMAX)
National Research Council Canada - National Science Library
Garvey, Jennie H
2007-01-01
... (BSS). The Infomax method separates unknown source signals from a number of signal mixtures by maximizing the entropy of a transformed set of signal mixtures and is accomplished by performing gradient ascent in MATLAB...
Insulin resistance and maximal oxygen uptake
DEFF Research Database (Denmark)
Seibaek, Marie; Vestergaard, Henrik; Burchardt, Hans
2003-01-01
BACKGROUND: Type 2 diabetes, coronary atherosclerosis, and physical fitness all correlate with insulin resistance, but the relative importance of each component is unknown. HYPOTHESIS: This study was undertaken to determine the relationship between insulin resistance, maximal oxygen uptake......, and the presence of either diabetes or ischemic heart disease. METHODS: The study population comprised 33 patients with and without diabetes and ischemic heart disease. Insulin resistance was measured by a hyperinsulinemic euglycemic clamp; maximal oxygen uptake was measured during a bicycle exercise test. RESULTS......: There was a strong correlation between maximal oxygen uptake and insulin-stimulated glucose uptake (r = 0.7, p = 0.001), and maximal oxygen uptake was the only factor of importance for determining insulin sensitivity in a model, which also included the presence of diabetes and ischemic heart disease. CONCLUSION...
Maximizing biogas production from the anaerobic digestion
Ghouali, A.; Sari, T.; Harmand, J.
2015-01-01
This paper presents an optimal control law policy for maximizing biogas production of anaerobic digesters. In particular, using a simple model of the anaerobic digestion process, we derive a control law to maximize the biogas production over a period T using the dilution rate as the control variable. Depending on initial conditions and constraints on the actuator (the dilution rate D(·)), the search for a solution to the optimal control problem reveals very different levels of difficulty. In ...
Maximizing biogas production from the anaerobic digestion
Ghouali, Amel; Sari, Tewfik; Harmand, Jérôme
2015-01-01
In press; International audience; This paper presents an optimal control law policy for maximizing biogas production of anaerobic digesters. In particular, using a simple model of the anaerobic digestion process, we derive a control law to maximize the biogas production over a period T using the dilution rate as the control variable. Depending on initial conditions and constraints on the actuator (the dilution rate D(·)), the search for a solution to the optimal control problem reveals very d...
Adaptive Influence Maximization in Dynamic Social Networks
Tong, Guangmo; Wu, Weili; Tang, Shaojie; Du, Ding-Zhu
2015-01-01
For the purpose of propagating information and ideas through a social network, a seeding strategy aims to find a small set of seed users able to maximize the spread of influence; this is termed the influence maximization problem. Although a large number of works have studied this problem, the existing seeding strategies are limited to static social networks. In fact, due to high-speed data transmission and the large population of participants, the diffusion processes in re...
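The static-network baseline that this adaptive work extends is typically a greedy marginal-gain heuristic over Monte Carlo estimates of spread. A minimal sketch under the independent cascade model follows; the adjacency-dict encoding, propagation probability `p`, and trial count are assumptions for the example, and the paper's adaptive dynamic-network strategy is not reproduced:

```python
import random

def spread_ic(graph, seeds, p=0.3, trials=200):
    """Monte Carlo estimate of expected spread under independent cascade."""
    rng = random.Random(0)                     # fixed seed for repeatability
    total = 0
    for _ in range(trials):
        active, frontier = set(seeds), list(seeds)
        while frontier:
            node = frontier.pop()
            for nb in graph.get(node, []):
                if nb not in active and rng.random() < p:
                    active.add(nb)             # nb activates and may spread further
                    frontier.append(nb)
        total += len(active)
    return total / trials

def greedy_seeds(graph, k, p=0.3):
    """Greedy seed selection: repeatedly add the node with largest marginal gain."""
    nodes = set(graph) | {v for nbs in graph.values() for v in nbs}
    seeds = []
    for _ in range(k):
        best = max(nodes - set(seeds),
                   key=lambda v: spread_ic(graph, seeds + [v], p))
        seeds.append(best)
    return seeds
```

On a star graph `{0: [1, 2, 3], 1: [], 2: [], 3: []}`, the hub is picked first, since seeding a leaf never reaches anyone else.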
Maximizing Interconnectedness and Availability in Directional Airborne RangeExtension Networks
2017-10-25
surveyed advances in directional networking, noting much work in topology management, medium access control algorithms, aircraft-based directional... networks. Other issues, such as topology management and spectrum management, also require further research and development to enable high-quality... Maximizing Interconnectedness and Availability in Directional Airborne Range Extension Networks. Thomas Shake, Rahul Amin, MIT Lincoln Laboratory
Maximal sfermion flavour violation in super-GUTs
Ellis, John; Olive, Keith A.; Velasco-Sevilla, L.
2016-10-01
We consider supersymmetric grand unified theories with soft supersymmetry-breaking scalar masses m_0 specified above the GUT scale (super-GUTs) and patterns of Yukawa couplings motivated by upper limits on flavour-changing interactions beyond the Standard Model. If the scalar masses are smaller than the gaugino masses m_{1/2}, as is expected in no-scale models, the dominant effects of renormalisation between the input scale and the GUT scale are generally expected to be those due to the gauge couplings, which are proportional to m_{1/2} and generation independent. In this case, the input scalar masses m_0 may violate flavour maximally, a scenario we call MaxSFV, and there is no supersymmetric flavour problem. We illustrate this possibility within various specific super-GUT scenarios that are deformations of no-scale gravity.
Maximal sfermion flavour violation in super-GUTs
AUTHOR|(CDS)2108556; Velasco-Sevilla, Liliana
2016-01-01
We consider supersymmetric grand unified theories with soft supersymmetry-breaking scalar masses $m_0$ specified above the GUT scale (super-GUTs) and patterns of Yukawa couplings motivated by upper limits on flavour-changing interactions beyond the Standard Model. If the scalar masses are smaller than the gaugino masses $m_{1/2}$, as is expected in no-scale models, the dominant effects of renormalization between the input scale and the GUT scale are generally expected to be those due to the gauge couplings, which are proportional to $m_{1/2}$ and generation-independent. In this case, the input scalar masses $m_0$ may violate flavour maximally, a scenario we call MaxSFV, and there is no supersymmetric flavour problem. We illustrate this possibility within various specific super-GUT scenarios that are deformations of no-scale gravity.
A Monte Carlo EM algorithm for de novo motif discovery in biomolecular sequences.
Bi, Chengpeng
2009-01-01
Motif discovery methods play pivotal roles in deciphering the genetic regulatory codes (i.e., motifs) in genomes as well as in locating conserved domains in protein sequences. The Expectation Maximization (EM) algorithm is one of the most popular methods used in de novo motif discovery. Based on the position weight matrix (PWM) updating technique, this paper presents a Monte Carlo version of the EM motif-finding algorithm that carries out stochastic sampling in local alignment space to overcome the conventional EM's main drawback of being trapped in a local optimum. The newly implemented algorithm is named the Monte Carlo EM Motif Discovery Algorithm (MCEMDA). MCEMDA starts from an initial model, and then iteratively performs Monte Carlo simulation and parameter updates until convergence. A log-likelihood profiling technique together with a top-k strategy is introduced to cope with the phase-shift and multiple-mode issues in the motif discovery problem. A novel grouping motif alignment (GMA) algorithm is designed to select motifs by clustering a population of candidate local alignments and is successfully applied to subtle motif discovery. MCEMDA compares favorably to other popular PWM-based and word-enumerative motif algorithms tested using simulated (l, d)-motif cases as well as documented prokaryotic and eukaryotic DNA motif sequences. Finally, MCEMDA is applied to detect large blocks of conserved domains using protein benchmarks and exhibits excellent capacity compared with other multiple sequence alignment methods.
The Robust EM-type Algorithms for Log-concave Mixtures of Regression Models.
Hu, Hao; Yao, Weixin; Wu, Yichao
2017-07-01
Finite mixture of regression (FMR) models can be reformulated as incomplete-data problems and estimated via the expectation-maximization (EM) algorithm. The main drawback is the strong parametric assumption, such as FMR models with normally distributed residuals; the estimation might be biased if the model is misspecified. To relax the parametric assumption about the component error densities, a new method is proposed to estimate the mixture regression parameters by only assuming that the components have log-concave error densities, while the specific parametric family is unknown. Two EM-type algorithms for mixtures of regression models with log-concave error densities are proposed. Numerical studies are conducted to compare the performance of our algorithms with the normal mixture EM algorithms. When the component error densities are not normal, the new methods have much smaller MSEs when compared with the standard normal mixture EM algorithms. When the underlying component error densities are normal, the new methods have comparable performance to the normal EM algorithm.
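For context on the EM machinery these variants build on, here is a minimal sketch of the classical EM iteration for a two-component one-dimensional Gaussian mixture — the normal-mixture baseline the paper compares against (the log-concave M-step is not reproduced). The deterministic min/max initialisation and iteration count are illustrative choices, not taken from the paper:

```python
import numpy as np

def em_gmm_1d(x, n_iter=50):
    """Minimal EM for a two-component 1-D Gaussian mixture."""
    w = np.array([0.5, 0.5])                 # mixing weights
    mu = np.array([x.min(), x.max()])        # crude deterministic init
    var = np.array([x.var(), x.var()])
    for _ in range(n_iter):
        # E-step: responsibilities (posterior component memberships).
        dens = (np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
                / np.sqrt(2 * np.pi * var))
        resp = w * dens
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: weighted maximum-likelihood re-estimates.
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return w, mu, var
```

On well-separated data the estimated means converge to the true component means within a few dozen iterations.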
Modelling and Forecasting Health Expectancy
I.M. Májer (István)
2012-01-01
textabstractLife expectancy of a human population measures the expected (or average) remaining years of life at a given age. Life expectancy can be defined by two forms of measurement: the period and the cohort life expectancy. The period life expectancy represents the mortality conditions at a
Chinese students' great expectations
DEFF Research Database (Denmark)
Thøgersen, Stig
2013-01-01
to interpret their own educational histories and prior experiences, while at the same time making use of imaginaries of 'Western' education to redefine themselves as independent individuals in an increasingly globalised and individualised world. Through a case study of prospective pre-school teachers preparing...... to study abroad, the article shows how personal, professional and even national goals are closely interwoven. Students expect education abroad to be a personally transformative experience, but rather than defining their goals of individual freedom and creativity in opposition to the authoritarian political...... system, they think of themselves as having a role in the transformation of Chinese attitudes to education and parent-child relations....
Zaidi, Habib; Ruest, Torsten; Schoenahl, Frederic; Montandon, Marie-Louise
2006-10-01
Magnetic resonance imaging (MRI)-guided partial volume effect correction (PVC) in brain positron emission tomography (PET) is now a well-established approach to compensate for the large bias in the estimate of regional radioactivity concentration, especially for small structures. The accuracy of the algorithms developed so far is, however, largely dependent on the performance of segmentation methods partitioning MRI brain data into its main classes, namely gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF). A comparative evaluation of three brain MRI segmentation algorithms using simulated and clinical brain MR data was performed, and subsequently their impact on PVC in 18F-FDG and 18F-DOPA brain PET imaging was assessed. Two algorithms, the first bundled in the Statistical Parametric Mapping (SPM2) package and the other the Expectation Maximization Segmentation (EMS) algorithm, incorporate a priori probability images derived from MR images of a large number of subjects. The third, here referred to as the HBSA algorithm, is a histogram-based segmentation algorithm incorporating an Expectation Maximization approach to model a four-Gaussian mixture for both global and local histograms. MR brain phantoms with known true volumes for the different brain classes were simulated under different combinations of noise and intensity non-uniformity. The algorithms' performance was checked by calculating the kappa index assessing similarities with the "ground truth" as well as multiclass type I and type II errors including misclassification rates. The impact of image segmentation algorithms on PVC was then quantified using clinical data. The segmented tissues of patients' brain MRI were given as input to the region of interest (RoI)-based geometric transfer matrix (GTM) PVC algorithm, and quantitative comparisons were made. The results of digital MRI phantom studies suggest that the use of HBSA produces the best performance for WM classification
Road environment perception algorithm based on object semantic probabilistic model
Liu, Wei; Wang, XinMei; Tian, Jinwen; Wang, Yong
2015-12-01
This article seeks to discover the object categories' semantic probabilistic model (OSPM) based on a statistical test analysis method. We applied this model to a road forward-environment perception algorithm, including on-road object recognition and detection. First, the image was represented by a set composed of words (local feature regions). Then, the probability distribution among the image, local regions and object semantic categories was derived based on the new model. In training, the parameters of the object model are estimated using expectation-maximization in a maximum-likelihood setting. In recognition, this model is used to classify images in a Bayesian manner. In detection, the posterior is calculated to detect typical on-road objects. Experiments show good performance on object recognition and detection against an urban street background.
Directory of Open Access Journals (Sweden)
Susana A. Eisenchlas
2013-09-01
Full Text Available One consequence of the advent of cyber communication is that increasing numbers of people go online to ask for, obtain, and presumably act upon advice dispensed by unknown peers. Just as advice seekers may not have access to information about the identities, ideologies, and other personal characteristics of advice givers, advice givers are equally ignorant about their interlocutors except for the bits of demographic information that the latter may offer freely. In the present study, that information concerns sex. As the sex of the advice seeker may be the only, or the predominant, contextual variable at hand, it is expected that that identifier will guide advice givers in formulating their advice. The aim of this project is to investigate whether and how the sex of advice givers and receivers affects the type of advice, through the empirical analysis of a corpus of web-based Spanish language forums on personal relationship difficulties. The data revealed that, in the absence of individuating information beyond that implicit in the advice request, internalized gender expectations along the lines of agency and communality are the sources from which advice givers draw to guide their counsel. This is despite the trend in discursive practices used in formulating advice, suggesting greater language convergence across sexes.
ATLAS: Exceeding all expectations
CERN Bulletin
2010-01-01
“One year ago it would have been impossible for us to guess that the machine and the experiments could achieve so much so quickly”, says Fabiola Gianotti, ATLAS spokesperson. The whole chain – from collision to data analysis – has worked remarkably well in ATLAS. The first LHC proton run undoubtedly exceeded expectations for the ATLAS experiment. “ATLAS has worked very well since the beginning. Its overall data-taking efficiency is greater than 90%”, says Fabiola Gianotti. “The quality and maturity of the reconstruction and simulation software turned out to be better than we expected for this initial stage of the experiment. The Grid is a great success, and right from the beginning it has allowed members of the collaboration all over the world to participate in the data analysis in an effective and timely manner, and to deliver physics results very quickly”. In just a few months of data taking, ATLAS has observed t...
Whittle, Peter
1992-01-01
This book is a complete revision of the earlier work Probability which appeared in 1970. While revised so radically and incorporating so much new material as to amount to a new text, it preserves both the aim and the approach of the original. That aim was stated as the provision of a 'first text in probability, demanding a reasonable but not extensive knowledge of mathematics, and taking the reader to what one might describe as a good intermediate level'. In doing so it attempted to break away from stereotyped applications, and consider applications of a more novel and significant character. The particular novelty of the approach was that expectation was taken as the prime concept, and the concept of expectation axiomatized rather than that of a probability measure. In the preface to the original text of 1970 (reproduced below, together with that to the Russian edition of 1982) I listed what I saw as the advantages of the approach in as unlaboured a fashion as I could. I also took the view that the text...
Automatic control algorithm effects on energy production
Mcnerney, G. M.
1981-01-01
A computer model was developed using actual wind time series and turbine performance data to simulate the power produced by the Sandia 17-m VAWT operating in automatic control. The model was used to investigate the influence of starting algorithms on annual energy production. The results indicate that, depending on turbine and local wind characteristics, a bad choice of a control algorithm can significantly reduce overall energy production. The model can be used to select control algorithms and threshold parameters that maximize long term energy production. The results from local site and turbine characteristics were generalized to obtain general guidelines for control algorithm design.
Continuous subjective expected utility with non-additive probabilities
P.P. Wakker (Peter)
1989-01-01
textabstractA well-known theorem of Debreu about additive representations of preferences is applied in a non-additive context, to characterize continuous subjective expected utility maximization for the case where the probability measures may be non-additive. The approach of this paper does not need
2009-01-01
Abstract Genome-wide analyses of protein binding sites generate large amounts of data; a ChIP dataset might contain 10,000 sites. Unbiased motif discovery in such datasets is not generally feasible using current methods that employ probabilistic models. We propose an efficient method, gadem, which combines spaced dyads and an expectation-maximization (EM) algorithm. Candidate words (four to six nucleotides) for constructing spaced dyads are prioritized by their degree of overrepresentation in the input sequence data. Spaced dyads are converted into starting position weight matrices (PWMs). gadem then employs a genetic algorithm (GA), with an embedded EM algorithm to improve starting PWMs, to guide the evolution of a population of spaced dyads toward one whose entropy scores are more statistically significant. Spaced dyads whose entropy scores reach a pre-specified significance threshold are declared motifs. gadem performed comparably with meme on 500 sets of simulated “ChIP” sequences with embedded known P53 binding sites. The major advantage of gadem is its computational efficiency on large ChIP datasets compared to competitors. We applied gadem to six genome-wide ChIP datasets. Approximately, 15 to 30 motifs of various lengths were identified in each dataset. Remarkably, without any prior motif information, the expected known motif (e.g., P53 in P53 data) was identified every time. gadem discovered motifs of various lengths (6–40 bp) and characteristics in these datasets containing from 0.5 to >13 million nucleotides with run times of 5 to 96 h. gadem can be viewed as an extension of the well-known meme algorithm and is an efficient tool for de novo motif discovery in large-scale genome-wide data. The gadem software is available at www.niehs.nih.gov/research/resources/software/GADEM/. PMID:19193149
A comparative study of expectant parents' childbirth expectations.
Kao, Bi-Chin; Gau, Meei-Ling; Wu, Shian-Feng; Kuo, Bih-Jaw; Lee, Tsorng-Yeh
2004-09-01
The purpose of this study was to understand childbirth expectations and differences in childbirth expectations among expectant parents. For convenience sampling, 200 couples willing to participate in this study were chosen from two hospitals in central Taiwan. Inclusion criteria were at least 36 weeks of gestation, aged 18 and above, no prenatal complications, and willing to consent to participate in this study. Instruments used to collect data included basic demographic data and the Childbirth Expectations Questionnaire. Findings of the study revealed that (1) five factors were identified by expectant parents regarding childbirth expectations including the caregiving environment, expectation of labor pain, spousal support, control and participation, and medical and nursing support; (2) no general differences were identified in the childbirth expectations between expectant fathers and expectant mothers; and (3) expectant fathers with a higher socioeconomic status and who had received prenatal (childbirth) education had higher childbirth expectations, whereas mothers displayed no differences in demographic characteristics. The study results may help clinical healthcare providers better understand differences in expectations during labor and birth and childbirth expectations by expectant parents in order to improve the medical and nursing system and promote positive childbirth experiences and satisfaction for expectant parents.
Mapping chemicals in air using an environmental CAT scanning system: evaluation of algorithms
Samanta, A.; Todd, L. A.
A new technique is being developed which creates near real-time maps of chemical concentrations in air for environmental and occupational environmental applications. This technique, which we call Environmental CAT Scanning, combines the real-time measuring technique of open-path Fourier transform infrared spectroscopy with the mapping capabilities of computed tomography to produce two-dimensional concentration maps. With this system, a network of open-path measurements is obtained over an area; measurements are then processed using a tomographic algorithm to reconstruct the concentrations. This research focussed on the process of evaluating and selecting appropriate reconstruction algorithms, for use in the field, by using test concentration data from both computer simulation and laboratory chamber studies. Four algorithms were tested using three types of data: (1) experimental open-path data from studies that used a prototype open-path Fourier transform/computed tomography system in an exposure chamber; (2) synthetic open-path data generated from maps created by kriging point samples taken in the chamber studies (in 1); and (3) synthetic open-path data generated using a chemical dispersion model to create time series maps. The iterative algorithms used to reconstruct the concentration data were: Algebraic Reconstruction Technique without Weights (ART1), Algebraic Reconstruction Technique with Weights (ARTW), Maximum Likelihood with Expectation Maximization (MLEM) and Multiplicative Algebraic Reconstruction Technique (MART). Maps were evaluated quantitatively and qualitatively. In general, MART and MLEM performed best, followed by ARTW and ART1. However, algorithm performance varied under different contaminant scenarios. This study showed the importance of using a variety of maps, particularly those generated using dispersion models. The time series maps provided a more rigorous test of the algorithms and allowed distinctions to be made among the algorithms. A
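Of the four iterative algorithms compared, MLEM admits a particularly compact statement: each iteration forward-projects the current concentration map, compares it with the measured ray integrals, and applies a multiplicative correction. A minimal sketch of this standard update follows; the system matrix, measurements, and iteration count are illustrative assumptions, and the chamber geometry and evaluation protocol of the study are not modeled:

```python
import numpy as np

def mlem(A, y, n_iter=2000):
    """Maximum Likelihood with Expectation Maximization (MLEM) reconstruction.

    A : (n_rays, n_pixels) non-negative system matrix (path length of
        each open-path ray through each grid pixel).
    y : (n_rays,) measured path-integrated concentrations.
    Returns the reconstructed per-pixel concentrations.
    """
    x = np.ones(A.shape[1])                    # flat, strictly positive start
    sens = A.sum(axis=0)                       # per-pixel sensitivity
    for _ in range(n_iter):
        proj = A @ x                           # forward projection of estimate
        ratio = y / np.maximum(proj, 1e-12)    # measured / predicted
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)  # multiplicative update
    return x
```

The multiplicative form preserves non-negativity of the map, which is why MLEM (and MART) are natural fits for concentration data.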
Mikhaylova, E.; Kolstein, M.; De Lorenzo, G.; Chmeissani, M.
2014-01-01
A novel positron emission tomography (PET) scanner design based on a room-temperature pixelated CdTe solid-state detector is being developed within the framework of the Voxel Imaging PET (VIP) Pathfinder project [1]. The simulation results show a great potential of the VIP to produce high-resolution images even in extremely challenging conditions such as the screening of a human head [2]. With an unprecedentedly high channel density (450 channels/cm3), image reconstruction is a challenge; optimization is therefore needed to find the algorithm that best exploits the detector's promising potential. The following reconstruction algorithms are evaluated: 2-D Filtered Backprojection (FBP), Ordered Subset Expectation Maximization (OSEM), List-Mode OSEM (LM-OSEM), and the Origin Ensemble (OE) algorithm. The evaluation is based on the comparison of a true image phantom with a set of reconstructed images obtained by each algorithm. This is achieved by calculation of image quality merit parameters such as the bias, the variance and the mean square error (MSE). A systematic optimization of each algorithm is performed by varying the reconstruction parameters, such as the cutoff frequency of the noise filters and the number of iterations. The region of interest (ROI) analysis of the reconstructed phantom is also performed for each algorithm and the results are compared. Additionally, the performance of the image reconstruction methods is compared by calculating the modulation transfer function (MTF). The reconstruction time is also taken into account to choose the optimal algorithm. The analysis is based on GAMOS [3] simulation including the expected CdTe and electronic specifics. PMID:25018777
Maximal frequent sequence based test suite reduction through DU-pairs
Directory of Open Access Journals (Sweden)
Narendra Kumar Rao Bangole
2014-08-01
The current paper illustrates the importance of clustering the frequent items of code coverage during test suite reduction. A modular maximal-frequent-sequence clustering algorithm is used along with a requirement-residue-based test case reduction process. DU-pairs form the basic code-coverage requirement under consideration for test suite reduction. This algorithm fared well in covering all the DU-pairs when compared with other algorithms such as Harrold, Gupta and Soffa (HGS), Bi-Objective Greedy (BOG) and Greedy algorithms. The coverage achieved is 100% in many cases, except for a few insufficient and incomplete test suites. DOI: http://dx.doi.org/10.15181/csat.v2i1.396
[Chemical constituents from Salvia przewalskii Maxim].
Yang, Li-Xin; Li, Xing-Cui; Liu, Chao; Xiao, Lei; Qin, De-Hua; Chen, Ruo-Yun
2011-07-01
An investigation of Salvia przewalskii Maxim. was carried out to find the relationship between its constituents and their pharmacological activities. Isolation and purification were performed by various chromatographic methods, such as silica gel, Sephadex LH-20, and RP-C18 column chromatography. Further investigation of the fraction of the 95% ethanol extract of Salvia przewalskii Maxim. yielded przewalskin Y-1 (1), anhydride of tanshinone-IIA (2), sugiol (3), epicryptoacetalide (4), cryptoacetalide (5), arucadiol (6), 1-dehydromiltirone (7), miltirone (8), cryptotanshinone (9), tanshinone IIA (10) and isotanshinone-I (11). Their structures were elucidated by spectral analysis, such as NMR (nuclear magnetic resonance) and MS (mass spectrometry). Compound 1 is a new compound. Compounds 4 and 5 are mirror-image (optical) isomers (1:3). Compounds 4, 5, 6, 8 and 11 were isolated from Salvia przewalskii Maxim. for the first time.
Maximizing band gaps in plate structures
DEFF Research Database (Denmark)
Halkjær, Søren; Sigmund, Ole; Jensen, Jakob Søndergaard
2006-01-01
Band gaps, i.e., frequency ranges in which waves cannot propagate, can be found in elastic structures for which there is a certain periodic modulation of the material properties or structure. In this paper, we maximize the band gap size for bending waves in a Mindlin plate. We analyze an infinite periodic plate using Bloch theory, which conveniently reduces the maximization problem to that of a single base cell. Secondly, we construct a finite periodic plate using a number of the optimized base cells in a postprocessed version. The dynamic properties of the finite plate are investigated......
Continuous Analog of Accelerated OS-EM Algorithm for Computed Tomography
Directory of Open Access Journals (Sweden)
Kiyoko Tateishi
2017-01-01
The maximum-likelihood expectation-maximization (ML-EM) algorithm is used for iterative image reconstruction (IIR) and performs well with respect to the inverse problem, posed as cross-entropy minimization, in computed tomography. To accelerate the convergence rate of ML-EM, the ordered-subsets expectation-maximization (OS-EM) algorithm with a power factor is effective. In this paper, we propose a continuous analog of the power-based accelerated OS-EM algorithm. The continuous-time image reconstruction (CIR) system is described by nonlinear differential equations with piecewise smooth vector fields driven by a cyclic switching process. A numerical discretization of the differential equations using the geometric multiplicative first-order expansion of the nonlinear vector field leads to an iterative formula exactly equivalent to the power-based OS-EM. The convergence of nonnegatively constrained solutions to a globally stable equilibrium is guaranteed by the Lyapunov theorem for consistent inverse problems. We illustrate through numerical experiments that the continuous system has better convergence characteristics than its discretizations. We also clarify how closely the discretization method must approximate the solution of the CIR in order to design a better IIR method.
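As a hedged sketch of the unaccelerated version (power factor 1; not the authors' continuous formulation), OS-EM cycles an EM-style multiplicative update over disjoint subsets of the measurement rows, which is what gives the speed-up over plain ML-EM:

```python
import numpy as np

def osem(A, y, n_subsets=2, n_iter=50, eps=1e-12):
    """Ordered-subsets EM: one multiplicative update per subset per pass.

    A : (n_rays, n_pixels) nonnegative system matrix
    y : (n_rays,) measurements; rows are split round-robin into subsets
    """
    m, n = A.shape
    subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
    x = np.ones(n)                        # strictly positive start
    for _ in range(n_iter):
        for idx in subsets:               # cyclic switching over subsets
            As, ys = A[idx], y[idx]
            sens = As.T @ np.ones(len(idx))
            ratio = ys / np.maximum(As @ x, eps)
            x *= (As.T @ ratio) / np.maximum(sens, eps)
    return x
```

Each pass makes `n_subsets` updates instead of one, which is the source of the acceleration for consistent data.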
Pareto optimization of an industrial ecosystem: sustainability maximization
Directory of Open Access Journals (Sweden)
J. G. M.-S. Monteiro
2010-09-01
This work investigates a procedure to design an Industrial Ecosystem for sequestrating CO2 and consuming glycerol in a Chemical Complex with 15 integrated processes. The Complex is responsible for the production of methanol, ethylene oxide, ammonia, urea, dimethyl carbonate, ethylene glycol, glycerol carbonate, β-carotene, 1,2-propanediol and olefins, and is simulated using UNISIM Design (Honeywell). The process environmental impact (EI) is calculated using the Waste Reduction Algorithm, while Profit (P) is estimated using classic cost correlations. MATLAB (The MathWorks, Inc.) is connected to UNISIM to enable optimization. The objective is to achieve maximum process sustainability, which involves finding a compromise between high profitability and low environmental impact. Sustainability maximization is therefore understood as a multi-criteria optimization problem, addressed by means of the Pareto optimization methodology for trading off P vs. EI.
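The Pareto methodology keeps only non-dominated (P, EI) designs. A minimal generic filter, assuming profit is to be maximized and environmental impact minimized (illustrative only, not the paper's UNISIM/MATLAB toolchain):

```python
def pareto_front(points):
    """Return the non-dominated (profit, impact) pairs.

    A point is dominated if some other point has profit >= and impact <=
    with at least one value differing (weak dominance; duplicates survive).
    """
    front = []
    for p, ei in points:
        dominated = any(
            p2 >= p and e2 <= ei and (p2, e2) != (p, ei)
            for p2, e2 in points
        )
        if not dominated:
            front.append((p, ei))
    return front
```

The decision maker then trades off along the surviving front rather than optimizing a single weighted objective.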
Maximizing sparse matrix vector product performance in MIMD computers
Energy Technology Data Exchange (ETDEWEB)
McLay, R.T.; Kohli, H.S.; Swift, S.L.; Carey, G.F.
1994-12-31
A considerable component of the computational effort involved in conjugate gradient solution of structured sparse matrix systems is expended during the Matrix-Vector Product (MVP), and hence it is the focus of most efforts at improving performance. Such efforts are hindered on MIMD machines due to constraints on memory, cache and the speed of memory-CPU data transfer. This paper describes a strategy for maximizing the performance of the local computations associated with the MVP. The method focuses on single-stride memory access, and the efficient use of cache by pre-loading it with data that is re-used while bypassing it for other data. The algorithm is designed to behave optimally for varying grid sizes and numbers of unknowns per gridpoint. Results from an assembly language implementation of the strategy on the iPSC/860 show a significant improvement over the performance using FORTRAN.
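The single-stride access pattern described above is exactly what a compressed sparse row (CSR) layout provides. A minimal Python/NumPy sketch for intuition (the paper's implementation was hand-tuned iPSC/860 assembly, not this):

```python
import numpy as np

def csr_matvec(data, indices, indptr, x):
    """y = A @ x for a CSR-stored sparse matrix.

    data    : nonzero values, rows concatenated (unit-stride sweep)
    indices : column index of each stored value
    indptr  : indptr[i]:indptr[i+1] bounds row i within data/indices
    """
    n_rows = len(indptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):
        lo, hi = indptr[i], indptr[i + 1]
        # contiguous pass over row i's nonzeros; x is gathered by column
        y[i] = np.dot(data[lo:hi], x[indices[lo:hi]])
    return y
```

The contiguous sweep over `data`/`indices` is cache-friendly; the gather on `x` is the irregular access that a cache pre-loading strategy like the paper's targets.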
Designing lattice structures with maximal nearest-neighbor entanglement
Energy Technology Data Exchange (ETDEWEB)
Navarro-Munoz, J C; Lopez-Sandoval, R [Instituto Potosino de Investigacion CientIfica y Tecnologica, Camino a la presa San Jose 2055, 78216 San Luis Potosi (Mexico); Garcia, M E [Theoretische Physik, FB 18, Universitaet Kassel and Center for Interdisciplinary Nanostructure Science and Technology (CINSaT), Heinrich-Plett-Str.40, 34132 Kassel (Germany)
2009-08-07
In this paper, we study the numerical optimization of nearest-neighbor concurrence of bipartite one- and two-dimensional lattices, as well as non-bipartite two-dimensional lattices. These systems are described in the framework of a tight-binding Hamiltonian while the optimization of concurrence was performed using genetic algorithms. Our results show that the concurrence of the optimized lattice structures is considerably higher than that of non-optimized systems. In the case of one-dimensional chains, the concurrence increases dramatically when the system begins to dimerize, i.e., it undergoes a structural phase transition (Peierls distortion). This result is consistent with the idea that entanglement is maximal or shows a singularity near quantum phase transitions. Moreover, the optimization of concurrence in two-dimensional bipartite and non-bipartite lattices is achieved when the structures break into smaller subsystems, which are arranged in geometrically distinguishable configurations.
On Maximal Hard-Core Thinnings of Stationary Particle Processes
Hirsch, Christian; Last, Günter
2017-12-01
The present paper studies existence and distributional uniqueness of subclasses of stationary hard-core particle systems arising as thinnings of stationary particle processes. These subclasses are defined by natural maximality criteria. We investigate two specific criteria, one related to the intensity of the hard-core particle process, the other one being a local optimality criterion on the level of realizations. In fact, the criteria are equivalent under suitable moment conditions. We show that stationary hard-core thinnings satisfying such criteria exist and are frequently distributionally unique. More precisely, distributional uniqueness holds in subcritical and barely supercritical regimes of continuum percolation. Additionally, based on the analysis of a specific example, we argue that fluctuations in grain sizes can play an important role for establishing distributional uniqueness at high intensities. Finally, we provide a family of algorithmically constructible approximations whose volume fractions are arbitrarily close to the maximum.
[Retinal implants. Patients' expectations].
Gusseck, H
2005-10-01
The "Pro Retina" Society and the "Retina Implant" Foundation, two patient associations with the goal of "preventing blindness," view the "Retina Implant" project as a possibility for providing blind individuals with a modicum of restored vision. Both patient associations cultivated a cooperative relationship with researchers and policy makers from the research phase onward, introducing the wishes and concerns of patients into the deliberations and providing information and the groundwork for acceptance in society and among those who may potentially benefit from the method. An initial survey of patients, the visually impaired, and blind people revealed that recovery of sight not only represents a medical and technical problem but also involves numerous psychosocial implications. By adhering to ethical standards in implantations, in particular by taking patient autonomy into consideration, anxieties and fears can be reduced. Early positive results in a short-term clinical study suggest that successful chronic retinal implantation can soon be expected. The dedication displayed by physicians, researchers, and the industry, as well as the willingness of the Federal Ministry for Research to take the risk, are appreciated and gratefully accepted by the patients and their relatives.
Expectations and speech intelligibility.
Babel, Molly; Russell, Jamie
2015-05-01
Socio-indexical cues and paralinguistic information are often beneficial to speech processing as this information assists listeners in parsing the speech stream. Associations that particular populations speak in a certain speech style can, however, make it such that socio-indexical cues have a cost. In this study, native speakers of Canadian English who identify as Chinese Canadian and White Canadian read sentences that were presented to listeners in noise. Half of the sentences were presented with a visual-prime in the form of a photo of the speaker and half were presented in control trials with fixation crosses. Sentences produced by Chinese Canadians showed an intelligibility cost in the face-prime condition, whereas sentences produced by White Canadians did not. In an accentedness rating task, listeners rated White Canadians as less accented in the face-prime trials, but Chinese Canadians showed no such change in perceived accentedness. These results suggest a misalignment between an expected and an observed speech signal for the face-prime trials, which indicates that social information about a speaker can trigger linguistic associations that come with processing benefits and costs.
Online Nonlinear AUC Maximization for Imbalanced Data Sets.
Hu, Junjie; Yang, Haiqin; Lyu, Michael R; King, Irwin; So, Anthony Man-Cho
2017-01-27
Classifying binary imbalanced streaming data is a significant task in both machine learning and data mining. Previously, online area under the receiver operating characteristic (ROC) curve (AUC) maximization has been proposed to seek a linear classifier. However, it is not well suited for handling nonlinearity and heterogeneity of the data. In this paper, we propose the kernelized online imbalanced learning (KOIL) algorithm, which produces a nonlinear classifier for the data by maximizing the AUC score while minimizing a functional regularizer. We address four major challenges that arise from our approach. First, to control the number of support vectors without sacrificing model performance, we introduce two buffers with fixed budgets to capture the global information on the decision boundary by storing the corresponding learned support vectors. Second, to restrict the fluctuation of the learned decision function and achieve smooth updating, we confine the influence of a new support vector to its k-nearest opposite support vectors. Third, to avoid information loss, we propose an effective compensation scheme that is applied after a replacement is conducted when either buffer is full. With such a compensation scheme, the performance of the learned model is comparable to one learned with infinite budgets. Fourth, to determine good kernels for data similarity representation, we exploit the multiple kernel learning framework to automatically learn a set of kernels. Extensive experiments on both synthetic and real-world benchmark data sets demonstrate the efficacy of our proposed approach.
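As a heavily simplified linear sketch of the buffered pairwise idea (not KOIL itself: kernels, budget eviction, compensation and k-nearest confinement are all omitted), each arriving example is paired with buffered examples of the opposite class and a hinge-type ranking gradient is applied:

```python
import numpy as np
from collections import deque

def online_auc_step(w, x, y, pos_buf, neg_buf, lr=0.1, margin=1.0):
    """One online pairwise-AUC update for a linear scorer w.

    Pairs (x, y) with every buffered opposite-class example and nudges w
    whenever a positive does not outscore a negative by `margin`.
    """
    opposite = neg_buf if y == 1 else pos_buf
    for z in opposite:
        diff = x - z if y == 1 else z - x   # positive-minus-negative features
        if w @ diff < margin:               # pair mis-ranked or inside margin
            w = w + lr * diff               # push the positive score upward
    (pos_buf if y == 1 else neg_buf).append(x)  # fixed-budget buffers
    return w
```

Using `deque(maxlen=B)` for the buffers gives the fixed-budget behavior: old examples are evicted automatically once the budget is reached.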
Maximizing Information Diffusion in the Cyber-physical Integrated Network
Directory of Open Access Journals (Sweden)
Hongliang Lu
2015-11-01
Nowadays, our living environment is embedded with smart objects, such as smart sensors, smart watches and smart phones. Their abundant abilities of sensing, communication and computation integrate cyberspace with physical space, forming a cyber-physical integrated network. In order to maximize information diffusion in such a network, a group of objects is selected as the forwarding points. To optimize the selection, a minimum connected dominating set (CDS) strategy is adopted. However, existing approaches focus on minimizing the size of the CDS, neglecting an important factor: the weight of links. In this paper, we propose a distributed maximizing-the-probability-of-information-diffusion (DMPID) algorithm for the cyber-physical integrated network. Unlike previous approaches that only consider the size of the CDS selection, DMPID also considers the information-spread probability, which depends on the weight of links. To weaken the effects of excessively weighted links, we also present an optimization strategy that properly balances the two factors. The results of extensive simulation show that DMPID can nearly double the information diffusion probability, while keeping a reasonable selection size with low overhead in different distributed networks.
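For intuition only, a plain greedy dominating-set heuristic (ignoring both the connectivity requirement and the link weights that DMPID additionally balances) repeatedly picks the node covering the most still-uncovered nodes:

```python
def greedy_dominating_set(adj):
    """Greedy dominating set for an undirected graph.

    adj : dict mapping node -> set of neighbor nodes
    Repeatedly selects the node that covers the most uncovered nodes
    (itself plus its neighbors) until everything is covered.
    """
    uncovered = set(adj)
    chosen = []
    while uncovered:
        best = max(adj, key=lambda v: len(({v} | adj[v]) & uncovered))
        chosen.append(best)
        uncovered -= {best} | adj[best]
    return chosen
```

A weighted variant in the spirit of DMPID would score each candidate by the diffusion probability over its incident links rather than by raw coverage count.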
Effect of filters and reconstruction algorithms on I-124 PET in Siemens Inveon PET scanner
Ram Yu, A.; Kim, Jin Su
2015-10-01
Purpose: To assess the effects of filtering and reconstruction on Siemens I-124 PET data. Methods: A Siemens Inveon PET scanner was used. Spatial resolution of I-124 was measured up to a transverse offset of 50 mm from the center. FBP, 2D ordered subset expectation maximization (OSEM2D), the 3D re-projection algorithm (3DRP), and maximum a posteriori (MAP) methods were tested. Non-uniformity (NU), recovery coefficient (RC), and spillover ratio (SOR) parameterized image quality. Mini deluxe phantom data of I-124 were also assessed. Results: Volumetric resolution was 7.3 mm3 at the transverse FOV center when the FBP reconstruction algorithm with a ramp filter was used. MAP yielded minimal NU with β = 1.5. OSEM2D yielded maximal RC. SOR was below 4% for FBP with ramp, Hamming, Hanning, or Shepp-Logan filters. Based on the mini deluxe phantom results, FBP with Hanning or Parzen filters, or 3DRP with a Hanning filter, yielded feasible I-124 PET data. Conclusions: Reconstruction algorithms and filters were compared. FBP with Hanning or Parzen filters, or 3DRP with a Hanning filter, yielded feasible data for quantifying I-124 PET.
Energy Technology Data Exchange (ETDEWEB)
Lee, C.G.; Chen, C.H. [Univ. of Massachusetts, North Dartmouth, MA (United States)
1996-12-31
In this paper a novel multiresolution wavelet analysis (MWA) and non-stationary Gaussian Markov random field (GMRF) technique is introduced for the identification of microcalcifications with high accuracy. The hierarchical multiresolution wavelet information, in conjunction with the contextual information of the images extracted from the GMRF, provides a highly efficient technique for microcalcification detection. A Bayesian learning paradigm, realized via the expectation maximization (EM) algorithm, was also introduced for edge detection or segmentation of larger lesions recorded on the mammograms. The effectiveness of the approach has been extensively tested with a number of mammographic images provided by a local hospital.
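As a generic illustration of the EM algorithm used here for segmentation (a textbook two-component 1-D Gaussian mixture over pixel intensities, not the paper's wavelet/GMRF model):

```python
import numpy as np

def gmm_em_1d(x, n_iter=100):
    """EM for a two-component 1-D Gaussian mixture model."""
    mu = np.array([x.min(), x.max()])            # spread-out initial means
    var = np.array([x.var(), x.var()]) + 1e-6    # broad shared initial variance
    pi = np.array([0.5, 0.5])                    # equal initial weights
    for _ in range(n_iter):
        # E-step: responsibility of each component for each point
        like = pi * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) \
               / np.sqrt(2 * np.pi * var)
        r = like / like.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and variances
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return pi, mu, var
```

For segmentation, each pixel would be assigned to the component with the larger responsibility, separating lesion from background intensities.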
Directory of Open Access Journals (Sweden)
Bal Sanghera
2016-02-01
Objective: Application of distinct positron emission tomography (PET) scan reconstruction algorithms can lead to statistically significant differences in measuring lesion functional properties. We looked at the influence of two-dimensional filtered back projection (2D FBP), two-dimensional ordered subset expectation maximization (2D OSEM), and three-dimensional ordered subset expectation maximization without (3D OSEM) and with maximum a posteriori (3D OSEM MAP) on lesion hypoxia tracer uptake using a pre-clinical PET scanner. Methods: Reconstructed images of a rodent tumor model bearing P22 carcinosarcoma injected with the hypoxia tracer copper-64-diacetyl-bis(N4-methylthiosemicarbazone) (Cu-64 ATSM) were analyzed at 10-minute intervals up to 60 minutes post injection. Lesion maximum standardized uptake values (SUVmax) and SUVmax/background SUVmean (T/B) were recorded and investigated after application of multiple algorithm and reconstruction parameters to assess their influence on Cu-64 ATSM measurements and associated trends over time. Results: SUVmax exhibited convergence for OSEM reconstructions, while ANOVA results showed a significant difference in SUVmax or T/B between 2D FBP, 2D OSEM, 3D OSEM and 3D OSEM MAP reconstructions across all time frames. SUVmax and T/B were greatest in magnitude for 2D OSEM, followed by 3D OSEM MAP, 3D OSEM and then 2D FBP, at all time frames. Similarly, SUVmax and T/B standard deviations (SD) were lowest for 2D OSEM in comparison with the other algorithms. Conclusion: Significantly higher-magnitude lesion SUVmax and T/B combined with lower SD were observed using 2D OSEM reconstruction in comparison with the 2D FBP, 3D OSEM and 3D OSEM MAP algorithms at all time frames. Results are consistent with other published studies; however, more specimens are required for full validation.
Hu, T C
2002-01-01
Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9
Maximizing the Spectacle of Water Fountains
Simoson, Andrew J.
2009-01-01
For a given initial speed of water from a spigot or jet, what angle of the jet will maximize the visual impact of the water spray in the fountain? This paper focuses on fountains whose spigots are arranged in circular fashion, and couches the measurement of the visual impact in terms of the surface area and the volume under the fountain's natural…
An ethical justification of profit maximization
DEFF Research Database (Denmark)
Koch, Carsten Allan
2010-01-01
behaviour. It is argued that some form of consequential ethics must be applied, and that both profit seeking and profit maximization can be defended from a rule-consequential point of view. It is noted, however, that the result does not apply unconditionally, but requires that certain form of profit (and...
Maximization of eigenvalues using topology optimization
DEFF Research Database (Denmark)
Pedersen, Niels Leergaard
2000-01-01
to localized modes in low density areas. The topology optimization problem is formulated using the SIMP method. Special attention is paid to a numerical method for removing localized eigenmodes in low density areas. The method is applied to numerical examples of maximizing the first eigenfrequency, One example...
Robust Utility Maximization Under Convex Portfolio Constraints
Energy Technology Data Exchange (ETDEWEB)
Matoussi, Anis, E-mail: anis.matoussi@univ-lemans.fr [Université du Maine, Risk and Insurance institut of Le Mans Laboratoire Manceau de Mathématiques (France); Mezghani, Hanen, E-mail: hanen.mezghani@lamsin.rnu.tn; Mnif, Mohamed, E-mail: mohamed.mnif@enit.rnu.tn [University of Tunis El Manar, Laboratoire de Modélisation Mathématique et Numérique dans les Sciences de l’Ingénieur, ENIT (Tunisia)
2015-04-15
We study a robust utility maximization problem for terminal wealth and consumption under convex constraints on the portfolio. We state the existence and uniqueness of the consumption-investment strategy by studying the associated quadratic backward stochastic differential equation. We characterize the optimal control by using the duality method and deriving a dynamic maximum principle.
How to Generate Good Profit Maximization Problems
Davis, Lewis
2014-01-01
In this article, the author considers the merits of two classes of profit maximization problems: those involving perfectly competitive firms with quadratic and cubic cost functions. While relatively easy to develop and solve, problems based on quadratic cost functions are too simple to address a number of important issues, such as the use of…
Maximizing scientific knowledge from randomized clinical trials
DEFF Research Database (Denmark)
Gustafsson, Finn; Atar, Dan; Pitt, Bertram
2010-01-01
Trialists have an ethical and financial responsibility to plan and conduct clinical trials in a manner that will maximize the scientific knowledge gained from the trial. However, the amount of scientific information generated by randomized clinical trials in cardiovascular medicine is highly...
Definable maximal discrete sets in forcing extensions
DEFF Research Database (Denmark)
Törnquist, Asger Dag; Schrittesser, David
2017-01-01
that in the Sacks and Miller extensions there is a Π¹₁ maximal orthogonal family ("mof") of Borel probability measures on Cantor space. A similar result is also obtained for Π¹₁ mad families. By contrast, we show that if there is a Mathias real over L then there are no Σ¹₂ mofs....
Maximizing Learning Potential in the Communicative Classroom.
Kumaravadivelu, B.
1993-01-01
A classroom observational study is presented to assess whether a macrostrategies framework helps teachers of communicative language teaching to maximize learner potential in the classroom. Analysis of two classroom episodes revealed that one episode was evidently more communicative than the other. (seven references) (VWL)
Maximizing Resource Utilization in Video Streaming Systems
Alsmirat, Mohammad Abdullah
2013-01-01
Video streaming has recently grown dramatically in popularity over the Internet, Cable TV, and wire-less networks. Because of the resource demanding nature of video streaming applications, maximizing resource utilization in any video streaming system is a key factor to increase the scalability and decrease the cost of the system. Resources to…
A THEORY OF MAXIMIZING SENSORY INFORMATION
Hateren, J.H. van
1992-01-01
A theory is developed on the assumption that early sensory processing aims at maximizing the information rate in the channels connecting the sensory system to more central parts of the brain, where it is assumed that these channels are noisy and have a limited dynamic range. Given a stimulus power
Ehrenfest's Lottery--Time and Entropy Maximization
Ashbaugh, Henry S.
2010-01-01
Successful teaching of the Second Law of Thermodynamics suffers from limited simple examples linking equilibrium to entropy maximization. I describe a thought experiment connecting entropy to a lottery that mixes marbles amongst a collection of urns. This mixing obeys diffusion-like dynamics. Equilibrium is achieved when the marble distribution is…
Relationship between maximal exercise parameters and individual ...
African Journals Online (AJOL)
... predicted 83% of the variance when performance was measured as 20km average watts and was the only significant variable, amongst all VT and maximal variables, included in the stepwise multiple regression model. These results suggest that the self-selected exercise intensity of cyclists with physical disabilities during ...
Singularity Structure of Maximally Supersymmetric Scattering Amplitudes
DEFF Research Database (Denmark)
Arkani-Hamed, Nima; Bourjaily, Jacob L.; Cachazo, Freddy
2014-01-01
We present evidence that loop amplitudes in maximally supersymmetric (N=4) Yang-Mills theory (SYM) beyond the planar limit share some of the remarkable structures of the planar theory. In particular, we show that through two loops, the four-particle amplitude in full N=4 SYM has only logarithmic ...
Understanding Violations of Gricean Maxims in Preschoolers and Adults
Directory of Open Access Journals (Sweden)
Mako Okanda
2015-07-01
This study used a revised Conversational Violations Test to examine Gricean maxim violations in 4- to 6-year-old Japanese children and adults. Participants' understanding of the following maxims was assessed: be informative (first maxim of quantity), avoid redundancy (second maxim of quantity), be truthful (maxim of quality), be relevant (maxim of relation), avoid ambiguity (second maxim of manner), and be polite (maxim of politeness). Sensitivity to violations of Gricean maxims increased with age: 4-year-olds' understanding of maxims was near chance, 5-year-olds understood some maxims (first maxim of quantity and maxims of quality, relation, and manner), and 6-year-olds and adults understood all maxims. Preschoolers acquired the maxim of relation first and had the greatest difficulty understanding the second maxim of quantity. Children and adults differed in their comprehension of the maxim of politeness. The development of the pragmatic understanding of Gricean maxims and implications for the construction of developmental tasks from early childhood to adulthood are discussed.
Understanding violations of Gricean maxims in preschoolers and adults.
Okanda, Mako; Asada, Kosuke; Moriguchi, Yusuke; Itakura, Shoji
2015-01-01
This study used a revised Conversational Violations Test to examine Gricean maxim violations in 4- to 6-year-old Japanese children and adults. Participants' understanding of the following maxims was assessed: be informative (first maxim of quantity), avoid redundancy (second maxim of quantity), be truthful (maxim of quality), be relevant (maxim of relation), avoid ambiguity (second maxim of manner), and be polite (maxim of politeness). Sensitivity to violations of Gricean maxims increased with age: 4-year-olds' understanding of maxims was near chance, 5-year-olds understood some maxims (first maxim of quantity and maxims of quality, relation, and manner), and 6-year-olds and adults understood all maxims. Preschoolers acquired the maxim of relation first and had the greatest difficulty understanding the second maxim of quantity. Children and adults differed in their comprehension of the maxim of politeness. The development of the pragmatic understanding of Gricean maxims and implications for the construction of developmental tasks from early childhood to adulthood are discussed.
Change Detection Algorithms for Surveillance in Visual IoT: A Comparative Study
Akram, Beenish Ayesha; Zafar, Amna; Akbar, Ali Hammad; Wajid, Bilal; Chaudhry, Shafique Ahmad
2018-01-01
The VIoT (Visual Internet of Things) connects the virtual information world with real-world objects using sensors and pervasive computing. For video surveillance in the VIoT, ChD (Change Detection) is a critical component. ChD algorithms identify regions of change in multiple images of the same scene recorded at different time intervals. This paper presents a performance comparison of histogram-thresholding and classification ChD algorithms using quantitative measures for video surveillance in the VIoT, based on salient features of the datasets. The thresholding algorithms Otsu, Kapur and Rosin, and the classification methods k-means and EM (Expectation Maximization), were simulated in MATLAB using diverse datasets. For performance evaluation, the quantitative measures used include OSR (Overall Success Rate), YC (Yule's Coefficient), JC (Jaccard's Coefficient), execution time and memory consumption. Experimental results showed that Kapur's algorithm performed better for both indoor and outdoor environments with illumination changes, shadowing, and medium-to-fast moving objects. However, it showed degraded performance for small object sizes with minor changes. The Otsu algorithm showed better results for indoor environments with slow-to-medium changes and nomadic object mobility. k-means showed good results in indoor environments with small object sizes producing slow change, no shadowing and scarce illumination changes.
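Of the thresholding methods compared, Otsu's is the most compact: it selects the gray level that maximizes the between-class variance of the image histogram. A generic 8-bit sketch, not the paper's MATLAB code:

```python
import numpy as np

def otsu_threshold(img):
    """Otsu's threshold for an 8-bit image: maximize between-class variance."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()                 # gray-level probabilities
    levels = np.arange(256)
    w0 = np.cumsum(p)                     # class-0 weight up to each level
    m = np.cumsum(p * levels)             # cumulative first moment
    mt = m[-1]                            # global mean gray level
    w1 = 1.0 - w0
    valid = (w0 > 0) & (w1 > 0)
    between = np.zeros(256)
    between[valid] = (mt * w0[valid] - m[valid]) ** 2 / (w0[valid] * w1[valid])
    return int(np.argmax(between))
```

For change detection, the threshold would typically be applied to the absolute difference of two frames to produce a binary change mask.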
Determination of Pavement Rehabilitation Activities through a Permutation Algorithm
Directory of Open Access Journals (Sweden)
Sangyum Lee
2013-01-01
This paper presents a mathematical programming model for optimal pavement rehabilitation planning. The model maximizes the rehabilitation area through a newly developed permutation algorithm, based on the procedures outlined in the harmony search (HS) algorithm. The proposed algorithm provides an optimal solution method for the problem of multilocation rehabilitation activities on a pavement structure, using empirical deterioration and rehabilitation-effectiveness models, under a limited maintenance budget. Thus, nonlinear pavement performance and rehabilitation activity decision models were used to maximize the objective function of rehabilitation area within a limited budget, through the permutation algorithm. Our results showed that the heuristic permutation algorithm provided a good optimum in terms of maximizing the rehabilitation area, compared with the worst-first maintenance method currently used in Seoul.
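A toy illustration of budget-constrained activity selection by permutation sampling (a loose analog only; the paper's HS-based algorithm, deterioration models and effectiveness models are not reproduced, and `areas`/`costs` are hypothetical inputs):

```python
import random

def permutation_search(areas, costs, budget, n_trials=200, seed=0):
    """Random-permutation heuristic for budget-limited rehabilitation.

    Funds sections in permuted order until the budget runs out and keeps
    the ordering that repairs the largest total area.
    """
    rng = random.Random(seed)
    order = list(range(len(areas)))
    best_area, best_plan = 0.0, []
    for _ in range(n_trials):
        rng.shuffle(order)
        remaining, area, plan = budget, 0.0, []
        for i in order:
            if costs[i] <= remaining:     # fund section i if affordable
                remaining -= costs[i]
                area += areas[i]
                plan.append(i)
        if area > best_area:
            best_area, best_plan = area, plan.copy()
    return best_area, sorted(best_plan)
```

Harmony search would additionally recombine and mutate good orderings rather than sampling them independently.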
Directory of Open Access Journals (Sweden)
Anna Bourmistrova
2011-02-01
The autodriver algorithm is an intelligent method to eliminate the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel-steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on making the actual vehicle center of rotation coincide with the road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed to be prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle, using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increasing forward speed, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.
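The kinematic center of rotation referred to above is fixed by the steering geometry. For a conventional front-steered (TWS) vehicle, a standard Ackermann relation shows how the steering angles place that center (hypothetical wheelbase/track parameters; this is not the paper's 4WS controller):

```python
import math

def ackermann_angles(wheelbase, track, radius):
    """Inner/outer front-wheel angles (rad) that place the kinematic
    turning center on the rear-axle line at lateral distance `radius`."""
    inner = math.atan(wheelbase / (radius - track / 2.0))
    outer = math.atan(wheelbase / (radius + track / 2.0))
    return inner, outer
```

Matching this kinematic center to the road's center of curvature, and correcting the residual position/orientation error in closed loop, is the essence of the approach described above.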
DEFF Research Database (Denmark)
Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy
2007-01-01
variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n² variables). Included are subroutines for rearranging a matrix whose upper or lower-triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel...
Macro Expectations, Aggregate Uncertainty, and Expected Term Premia
DEFF Research Database (Denmark)
Dick, Christian D.; Schmeling, Maik; Schrimpf, Andreas
2013-01-01
as well as aggregate macroeconomic uncertainty at the level of individual forecasters. We find that expected term premia are (i) time-varying and reasonably persistent, (ii) strongly related to expectations about future output growth, and (iii) positively affected by uncertainty about future output growth......, and that curvature is related to subjective term premium expectations themselves. Finally, an aggregate measure of forecasters' term premium expectations has predictive power for bond excess returns over horizons of up to one year....
Social gradient in life expectancy and health expectancy in Denmark
DEFF Research Database (Denmark)
Brønnum-Hansen, Henrik; Andersen, Otto; Kjøller, Mette
2004-01-01
Health status of a population can be evaluated by health expectancy expressed as average lifetime in various states of health. The purpose of the study was to compare health expectancy in population groups at high, medium and low educational levels....
Familiarity changes expectations about fullness.
Brunstrom, Jeffrey M; Shakeshaft, Nicholas G; Alexander, Erin
2010-06-01
Expected satiation (the extent to which a food is expected to deliver fullness) is an excellent predictor of self-selected portion size (kcal). Here, we explored the prospect that expected satiation changes over time. Fifty-eight participants evaluated expected satiation in eight test foods (including two 'candidate' foods: sushi and muesli) and reported how often they consumed each food. In one of the candidate foods (sushi), and across other test foods, expected satiation increased with familiarity. Together, these findings are considered in the context of 'satiation drift' - the hypothesis that foods are expected to deliver poor satiation until experience teaches us otherwise. Copyright 2010 Elsevier Ltd. All rights reserved.
Gomi, Tsutomu; Sakai, Rina; Goto, Masami; Hara, Hidetake; Watanabe, Yusuke; Umeda, Tokuo
2017-10-01
To investigate methods to reduce metal artifacts during digital tomosynthesis for arthroplasty, we evaluated five algorithms with and without metal artifact reduction (MAR) processing, tested under different radiation doses (0.54, 0.47, and 0.33 mSv): adaptive steepest descent projection onto convex sets (ASD-POCS), simultaneous algebraic reconstruction technique with total variation (SART-TV), filtered back projection (FBP), maximum likelihood expectation maximization (MLEM), and SART. The algorithms were assessed by determining the artifact index (AI) and artifact spread function (ASF) on a prosthesis phantom. The AI data were statistically analyzed by two-way analysis of variance. Without MAR-processing, the MLEM algorithm was the most effective at reducing prosthetic phantom-related metal artifacts, as quantified using the AI (MLEM vs. ASD-POCS, SART-TV, SART, and FBP; all P < 0.05). With MAR-processing, the greatest effectiveness was achieved by the ASD-POCS, SART-TV, and SART algorithms, as quantified using the AI (MLEM, ASD-POCS, SART-TV, and SART vs. FBP; all P < 0.05). In terms of the ASF, the metal artifact reduction effect was always greater at reduced radiation doses, regardless of which reconstruction algorithm, with or without MAR-processing, was used. In this phantom study, the MLEM algorithm without MAR-processing and the ASD-POCS, SART-TV, and SART algorithms with MAR-processing gave improved metal artifact reduction. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Matilainen, Kaarina; Mäntysaari, Esa A; Lidauer, Martin H; Strandén, Ismo; Thompson, Robin
2013-01-01
Estimation of variance components by Monte Carlo (MC) expectation maximization (EM) restricted maximum likelihood (REML) is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the need for a large number of iterations of the EM algorithm. To decrease the computing time we explored the use of faster converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR), where the information matrix was generated via sampling; MC average information (AI), where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was searched using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with corresponding analytical ones. MC NR REML and MC AI REML enhanced convergence compared to MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration required solving the mixed model equations an additional number of times equal to the number of parameters to be estimated. MC Broyden's method required the largest number of MC samples with our small data set and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing a MC algorithm. Overall, use of a MC algorithm with Newton-type methods proved feasible and the results encourage testing of these methods with different kinds of large-scale problem settings.
Directory of Open Access Journals (Sweden)
Kaarina Matilainen
Full Text Available Estimation of variance components by Monte Carlo (MC) expectation maximization (EM) restricted maximum likelihood (REML) is computationally efficient for large data sets and complex linear mixed effects models. However, efficiency may be lost due to the need for a large number of iterations of the EM algorithm. To decrease the computing time we explored the use of faster converging Newton-type algorithms within MC REML implementations. The implemented algorithms were: MC Newton-Raphson (NR), where the information matrix was generated via sampling; MC average information (AI), where the information was computed as an average of observed and expected information; and MC Broyden's method, where the zero of the gradient was searched using a quasi-Newton-type algorithm. Performance of these algorithms was evaluated using simulated data. The final estimates were in good agreement with corresponding analytical ones. MC NR REML and MC AI REML enhanced convergence compared to MC EM REML and gave standard errors for the estimates as a by-product. MC NR REML required a larger number of MC samples, while each MC AI REML iteration required solving the mixed model equations an additional number of times equal to the number of parameters to be estimated. MC Broyden's method required the largest number of MC samples with our small data set and did not give standard errors for the parameters directly. We studied the performance of three different convergence criteria for the MC AI REML algorithm. Our results indicate the importance of defining a suitable convergence criterion and critical value in order to obtain an efficient Newton-type method utilizing a MC algorithm. Overall, use of a MC algorithm with Newton-type methods proved feasible and the results encourage testing of these methods with different kinds of large-scale problem settings.
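The slow fixed-point iteration that these Newton-type variants accelerate can be illustrated with a minimal, exact (non-Monte-Carlo) EM for a balanced one-way random-effects model — a sketch only, not the authors' MC REML implementation; the simulated data and true variances below are invented:

```python
import random

def em_variance_components(groups, iters=200):
    """EM for y_ij = mu + a_i + e_ij with a_i ~ N(0, s2a), e_ij ~ N(0, s2e),
    treating the random group effects a_i as missing data."""
    q = len(groups)
    n_tot = sum(len(g) for g in groups)
    mu, s2a, s2e = sum(sum(g) for g in groups) / n_tot, 1.0, 1.0
    for _ in range(iters):
        # E-step: posterior mean/variance of each group effect a_i
        post = []
        for g in groups:
            n = len(g)
            shrink = n * s2a / (n * s2a + s2e)
            m = shrink * (sum(g) / n - mu)
            v = s2a * s2e / (n * s2a + s2e)
            post.append((m, v))
        # M-step: re-estimate the mean and both variance components
        mu = sum(y - m for g, (m, v) in zip(groups, post) for y in g) / n_tot
        s2a = sum(m * m + v for m, v in post) / q
        s2e = sum((y - mu - m) ** 2 + v
                  for g, (m, v) in zip(groups, post) for y in g) / n_tot
    return mu, s2a, s2e

# Simulated data: 200 groups of 5, true mu = 10, s2a = 4, s2e = 1
random.seed(1)
groups = []
for _ in range(200):
    a = random.gauss(0.0, 2.0)
    groups.append([10.0 + a + random.gauss(0.0, 1.0) for _ in range(5)])
mu_hat, s2a_hat, s2e_hat = em_variance_components(groups)
```

In the MC REML setting the E-step expectations are replaced by sampling, and each of these cheap but slowly converging updates is what NR- or AI-style steps replace.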
M-Theory and Maximally Supersymmetric Gauge Theories
Lambert, Neil
2012-01-01
In this informal review for non-specialists we discuss the construction of maximally supersymmetric gauge theories that arise on the worldvolumes of branes in String Theory and M-Theory. Particular focus is placed on the relatively recent construction of M2-brane worldvolume theories. In a formal sense, the existence of these quantum field theories can be viewed as predictions of M-Theory. Their construction is therefore a reinforcement of the ideas underlying String Theory and M-Theory. We also briefly discuss the six-dimensional conformal field theory that is expected to arise on M5-branes. The construction of this theory is not only an important open problem for M-Theory but also a significant challenge to our current understanding of quantum field theory more generally.
On a nonstandard Brownian motion and its maximal function
Andrade, Bernardo B. de
2015-07-01
This article uses Radically Elementary Probability Theory (REPT) to prove results about the Wiener walk (the radically elementary Brownian motion) without the technical apparatus required by stochastic integration. The techniques used replace measure-theoretic tools by discrete probability and the rigorous use of infinitesimals. Specifically, REPT is applied to the results in Palacios (The American Statistician, 2008) to calculate certain expectations related to the Wiener walk and its maximal function. Because Palacios uses mostly combinatorics and no measure theory, his results carry over through REPT with minimal changes. The paper also presents a construction of the Wiener walk which is intended to mimic the construction of Brownian motion from "continuous" white noise. A brief review of the nonstandard model on which REPT is based is given in the Appendix in order to minimize the need for previous exposure to the subject.
Dilworth's Theorem Revisited, an Algorithmic Proof
W.H.L.M. Pijls (Wim); R. Potharst (Rob)
2011-01-01
Dilworth's theorem establishes a link between a minimal path cover and a maximal antichain in a digraph. A new proof for Dilworth's theorem is given. Moreover an algorithm to find both the path cover and the antichain, as considered in the theorem, is presented.
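One standard algorithmic route to both objects (a sketch of the classical matching-based construction, not necessarily the authors' new proof) reduces the minimum chain cover to bipartite matching on the transitive closure and verifies the antichain by brute force on a small example:

```python
from itertools import combinations

def reach_sets(n, edges):
    """Transitive closure: reach[u] = set of vertices reachable from u."""
    adj = {u: [] for u in range(n)}
    for u, v in edges:
        adj[u].append(v)
    def dfs(u, seen):
        for v in adj[u]:
            if v not in seen:
                seen.add(v)
                dfs(v, seen)
    out = []
    for u in range(n):
        seen = set()
        dfs(u, seen)
        out.append(seen)
    return out

def min_chain_cover(n, edges):
    """Fulkerson's construction: a maximum bipartite matching on the
    transitive closure yields a minimum chain cover of size n - |matching|."""
    reach = reach_sets(n, edges)
    match_r = {}
    def augment(u, seen):
        for v in reach[u]:
            if v not in seen:
                seen.add(v)
                if v not in match_r or augment(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False
    matching = sum(augment(u, set()) for u in range(n))
    return n - matching

def max_antichain(n, edges):
    """Brute force: largest set of mutually unreachable vertices (small n)."""
    reach = reach_sets(n, edges)
    best = 1
    for k in range(2, n + 1):
        for s in combinations(range(n), k):
            if all(b not in reach[a] for a in s for b in s):
                best = k
    return best

# Chain 0 -> 1 -> 2 plus an isolated vertex 3
edges = [(0, 1), (1, 2)]
cover = min_chain_cover(4, edges)
antichain = max_antichain(4, edges)
```

Dilworth's theorem guarantees the two quantities coincide, which the example confirms.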
Algorithmic test design using classical item parameters
van der Linden, Willem J.; Adema, Jos J.
Two optimization models for the construction of tests with a maximal value of coefficient alpha are given. Both models have a linear form and can be solved by using a branch-and-bound algorithm. The first model assumes an item bank calibrated under the Rasch model and can be used, for instance,
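The objective these models maximize is coefficient alpha, computed from classical item statistics. A minimal sketch of the standard formula (the item scores below are invented):

```python
def cronbach_alpha(items):
    """Coefficient alpha for a list of item-score columns (one list per item):
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))."""
    k = len(items)
    n = len(items[0])
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Three perfectly parallel items -> alpha = 1
alpha_parallel = cronbach_alpha([[1, 2, 3, 4]] * 3)
# Two correlated but not identical items -> alpha between 0 and 1
alpha_mixed = cronbach_alpha([[1, 2, 3, 4], [2, 2, 4, 4]])
```

A branch-and-bound item selector would evaluate this quantity (via a linearized surrogate) over candidate item subsets.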
Formation of Rationally Heterogeneous Expectations
Pfajfar, D.
2012-01-01
Abstract: This paper models expectation formation by taking into account that agents produce heterogeneous expectations due to model uncertainty, informational frictions and different capacities for processing information. We show that there are two general classes of steady states within this
[Chemical constituents of Trichosanthes kirilowii Maxim].
Sun, Xiao-Ye; Wu, Hong-Hua; Fu, Ai-Zhen; Zhang, Peng
2012-07-01
To study the chemical constituents of Trichosanthes kirilowii Maxim., chromatographic methods such as D101 macroporous resin, silica gel column chromatographic technology, Sephadex LH-20, octadecylsilyl (ODS) column chromatographic technique and preparative HPLC were used, and nine compounds were isolated from a 95% (v/v) ethanol extract of the plant. By using spectroscopic techniques including 1H NMR, 13C NMR, 1H-1H COSY, HSQC and HMBC, these compounds were identified as 5-ethoxymethyl-1-carboxyl propyl-1H-pyrrole-2-carbaldehyde (1), 5-hydroxymethyl-2-furfural (2), chrysoeriol (3), 4'-hydroxyscutellarin (4), vanillic acid (5), alpha-spinasterol (6), beta-D-glucopyranosyl-alpha-spinasterol (7), stigmast-7-en-3beta-ol (8), and adenosine (9), respectively. Among them, compound 1 is a new compound, and compounds 3, 4 and 5 are isolated from Trichosanthes kirilowii Maxim. for the first time.
Maximal temperature in a simple thermodynamical system
Dai, De-Chang; Stojkovic, Dejan
2016-06-01
Temperature in a simple thermodynamical system is not limited from above. It is also widely believed that it does not make sense to talk about temperatures higher than the Planck temperature in the absence of the full theory of quantum gravity. Here, we demonstrate that there exists a maximal achievable temperature in a system where particles obey the laws of quantum mechanics and classical gravity, before we reach the realm of quantum gravity. Namely, if two particles with a given center of mass energy come closer than the Schwarzschild diameter apart, according to classical gravity they will form a black hole. It is possible to calculate that a simple thermodynamical system will be dominated by black holes at a critical temperature which is about three times lower than the Planck temperature. That represents the maximal achievable temperature in a simple thermodynamical system.
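The scale in question is the Planck temperature, T_P = sqrt(ħc⁵/G)/k_B ≈ 1.4 × 10³² K. A quick numerical check, with the factor of three taken from the abstract's claim rather than derived here:

```python
import math

# SI constants (CODATA-style values)
hbar = 1.054571817e-34   # J s
c = 2.99792458e8         # m/s
G = 6.67430e-11          # m^3 kg^-1 s^-2
k_B = 1.380649e-23       # J/K

# Planck temperature: sqrt(hbar * c^5 / G) / k_B, roughly 1.4e32 K
T_planck = math.sqrt(hbar * c**5 / G) / k_B

# Rough scale of the critical temperature cited in the abstract
T_max = T_planck / 3
```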
Modularity maximization using completely positive programming
Yazdanparast, Sakineh; Havens, Timothy C.
2017-04-01
Community detection is one of the most prominent problems of social network analysis. In this paper, a novel method for Modularity Maximization (MM) for community detection is presented which exploits the Alternating Direction Augmented Lagrangian (ADAL) method for maximizing a generalized form of Newman's modularity function. We first transform Newman's modularity function into a quadratic program and then use Completely Positive Programming (CPP) to map the quadratic program to a linear program, which provides the globally optimal maximum modularity partition. In order to solve the proposed CPP problem, a closed form solution using the ADAL merged with a rank minimization approach is proposed. The performance of the proposed method is evaluated on several real-world benchmark data sets for community detection. Simulation results show that the proposed technique provides outstanding results in terms of modularity value for crisp partitions.
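The quantity being maximized is Newman's modularity, Q = (1/2m) Σ_ij (A_ij − k_i k_j / 2m) δ(c_i, c_j). A direct evaluation on a toy graph (a sketch of the objective only, not the paper's CPP/ADAL solver; the graph is invented):

```python
def modularity(n, edges, labels):
    """Newman's modularity of a labeled partition of an undirected graph."""
    m = len(edges)
    deg = [0] * n
    A = [[0] * n for _ in range(n)]
    for u, v in edges:
        A[u][v] = A[v][u] = 1
        deg[u] += 1
        deg[v] += 1
    q = 0.0
    for i in range(n):
        for j in range(n):
            if labels[i] == labels[j]:
                q += A[i][j] - deg[i] * deg[j] / (2 * m)
    return q / (2 * m)

# Two triangles joined by a single bridge edge (2 - 3)
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
q_split = modularity(6, edges, [0, 0, 0, 1, 1, 1])   # one community per triangle
q_whole = modularity(6, edges, [0, 0, 0, 0, 0, 0])   # everything in one community
```

Putting all vertices in one community always gives Q = 0, so the triangle partition (Q = 5/14) is the better one, as any maximizer must detect.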
Coulomb's law in maximally symmetric spaces
Vakili, B.; Gorji, M. A.
2012-01-01
We study the modifications to the Coulomb's law when the background geometry is a $n$-dimensional maximally symmetric space, by using the $n$-dimensional version of the Gauss' theorem. It is shown that some extra terms are added to the usual expression of the Coulomb electric field due to the curvature of the background space. Also, we consider the problem of existence of magnetic monopoles in such spaces and present analytical expressions for the corresponding magnetic fields and vector p...
DEFF Research Database (Denmark)
Markham, Annette
This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows.
Algorithms on ensemble quantum computers.
Boykin, P Oscar; Mor, Tal; Roychowdhury, Vwani; Vatan, Farrokh
2010-06-01
In ensemble (or bulk) quantum computation, all computations are performed on an ensemble of computers rather than on a single computer. Measurements of qubits in an individual computer cannot be performed; instead, only expectation values (over the complete ensemble of computers) can be measured. As a result of this limitation on the model of computation, many algorithms cannot be processed directly on such computers, and must be modified, as the common strategy of delaying the measurements usually does not resolve this ensemble-measurement problem. Here we present several new strategies for resolving this problem. Based on these strategies we provide new versions of some of the most important quantum algorithms, versions that are suitable for implementing on ensemble quantum computers, e.g., on liquid NMR quantum computers. These algorithms are Shor's factorization algorithm, Grover's search algorithm (with several marked items), and an algorithm for quantum fault-tolerant computation. The first two algorithms are simply modified using randomizing and sorting strategies. For the last algorithm, we develop a classical-quantum hybrid strategy for removing measurements. We use it to present a novel quantum fault-tolerant scheme. More explicitly, we present schemes for fault-tolerant measurement-free implementation of Toffoli and σz^(1/4), as these operations cannot be implemented "bitwise", and their standard fault-tolerant implementations require measurement.
Placebo effects of caffeine on maximal voluntary concentric force of the knee flexors and extensors.
Tallis, Jason; Muhammad, Bilal; Islam, Mohammed; Duncan, Michael J
2016-09-01
We examined the placebo effect of caffeine and the combined effect of caffeine and caffeine expectancy on maximal voluntary strength. Fourteen men completed 4 randomized, single-blind experimental trials: (1) told caffeine, given caffeine (5 mg/kg) (CC); (2) told caffeine, given placebo (CP); (3) told placebo, given placebo (PP); and (4) told placebo, given caffeine (PC). Maximal voluntary concentric force and fatigue resistance of the knee flexors and extensors were measured using isokinetic dynamometry. A significant and equal improvement in peak concentric force was found in the CC and PC trials. Despite participants believing caffeine would evoke a performance benefit, there was no effect of CP. Caffeine caused an improvement in some aspects of muscle strength, but there was no additional effect of expectancy. Performance was poorer in participants who believed caffeine would have the greatest benefit, which highlights a link between expected ergogenicity, motivation, and personality characteristics. Muscle Nerve 54: 479-486, 2016. © 2015 Wiley Periodicals, Inc.
Rabideau, Gregg R.; Chien, Steve A.
2010-01-01
AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as image targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower priority goal, and high-level goals can be added, removed, or updated at any time, and the "best" goals will be selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by the embedded system where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion, allowing requests to be changed or added at the last minute, thereby enabling shorter response times and greater autonomy for the system under control.
Ant Colony Optimization for Social Utility Maximization in a Multiuser Communication System
Directory of Open Access Journals (Sweden)
Ming-Hua Lin
2013-01-01
Full Text Available In a multiuser communication system such as cognitive radio or digital subscriber lines, the transmission rate of each user is affected by the channel background noise and the crosstalk interference from other users. This paper presents an efficient ant colony optimization algorithm to allocate each user’s limited power on different channels for maximizing social utility (i.e., the sum of all individual utilities). The proposed algorithm adopts an initial solution that allocates more power on the channel with a lower background noise level. Besides, the cooling concept of simulated annealing is integrated into the proposed method to improve the convergence rate during the local search of the ant colony optimization algorithm. A number of experiments are conducted to validate the effectiveness of the proposed algorithm.
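The initial-solution heuristic — put more power on quieter channels — is the classic water-filling idea, which can be sketched as follows (a generic water-filling routine by bisection, not the paper's ACO algorithm; noise levels and the power budget are invented):

```python
def water_filling(noise, budget, tol=1e-10):
    """Allocate a total power budget across channels as p_k = max(0, mu - noise_k),
    with the water level mu found by bisection so allocations sum to the budget."""
    lo, hi = min(noise), min(noise) + budget + max(noise)
    while hi - lo > tol:
        mu = (lo + hi) / 2
        used = sum(max(0.0, mu - nk) for nk in noise)
        if used > budget:
            hi = mu
        else:
            lo = mu
    mu = (lo + hi) / 2
    return [max(0.0, mu - nk) for nk in noise]

# Three channels; the quietest channel should receive the most power
alloc = water_filling(noise=[0.1, 0.5, 1.0], budget=1.0)
```

The ACO search would then perturb and recombine such allocations, using the pheromone mechanism and an annealing-style cooling schedule to refine them.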
Directory of Open Access Journals (Sweden)
Eduardo Castañeda
2014-01-01
Full Text Available We present in this work a low-complexity algorithm to solve the sum rate maximization problem in multiuser MIMO broadcast channels with downlink beamforming. Our approach decouples the user selection problem from the resource allocation problem and its main goal is to create a set of quasiorthogonal users. The proposed algorithm exploits physical metrics of the wireless channels that can be easily computed in such a way that a null space projection power can be approximated efficiently. Based on the derived metrics we present a mathematical model that describes the dynamics of the user selection process which renders the user selection problem into an integer linear program. Numerical results show that our approach is highly efficient to form groups of quasiorthogonal users when compared to previously proposed algorithms in the literature. Our user selection algorithm achieves a large portion of the optimum user selection sum rate (90%) for a moderate number of active users.
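The null-space projection power that drives such user selection measures how much of a candidate's channel survives after projecting out the already-selected users' channels. A minimal numpy sketch (the channel vectors are invented; the exact approximation used in the paper may differ):

```python
import numpy as np

def null_space_projection_power(h, selected):
    """Squared norm of the component of channel vector h orthogonal to the
    span of the already-selected users' channel vectors."""
    if not selected:
        return float(np.dot(h, h))
    V = np.array(selected).T            # columns span the selected subspace
    P = V @ np.linalg.pinv(V)           # orthogonal projector onto that span
    residual = h - P @ h
    return float(residual @ residual)

sel = [np.array([1.0, 0.0, 0.0])]
# A channel orthogonal to the selected user keeps its full power ...
p_orth = null_space_projection_power(np.array([0.0, 2.0, 0.0]), sel)
# ... while a parallel channel contributes nothing after projection.
p_par = null_space_projection_power(np.array([3.0, 0.0, 0.0]), sel)
```

Candidates with large residual power are quasiorthogonal to the current group and are good additions to the transmit set.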
Cardiorespiratory Coordination in Repeated Maximal Exercise
Directory of Open Access Journals (Sweden)
Sergi Garcia-Retortillo
2017-06-01
Full Text Available Increases in cardiorespiratory coordination (CRC) after training with no differences in performance and physiological variables have recently been reported using a principal component analysis approach. However, no research has yet evaluated the short-term effects of exercise on CRC. The aim of this study was to delineate the behavior of CRC under different physiological initial conditions produced by repeated maximal exercises. Fifteen participants performed 2 consecutive graded and maximal cycling tests. Test 1 was performed without any previous exercise, and Test 2 6 min after Test 1. Both tests started at 0 W and the workload was increased by 25 W/min in males and 20 W/min in females, until they were not able to maintain the prescribed cycling frequency of 70 rpm for more than 5 consecutive seconds. A principal component (PC) analysis of selected cardiovascular and cardiorespiratory variables (expired fraction of O2, expired fraction of CO2, ventilation, systolic blood pressure, diastolic blood pressure, and heart rate) was performed to evaluate the CRC, defined by the number of PCs, in both tests. In order to quantify the degree of coordination, the information entropy was calculated and the eigenvalues of the first PC (PC1) were compared between tests. Although no significant differences were found between the tests with respect to the performed maximal workload (Wmax), maximal oxygen consumption (VO2 max), or ventilatory threshold (VT), an increase in the number of PCs and/or a decrease of eigenvalues of PC1 (t = 2.95; p = 0.01; d = 1.08) was found in Test 2 compared to Test 1. Moreover, entropy was significantly higher (Z = 2.33; p = 0.02; d = 1.43) in the last test. In conclusion, despite the fact that no significant differences were observed in the conventionally explored maximal performance and physiological variables (Wmax, VO2 max, and VT) between tests, a reduction of CRC was observed in Test 2. These results emphasize the interest of CRC...
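The coordination measure rests on how variance concentrates in the leading principal components: strongly coupled variables load onto one PC (low eigenvalue entropy), while weakly coordinated variables spread variance across many PCs (high entropy). A sketch with synthetic data standing in for the physiological signals (the coupling structure and sample sizes are invented):

```python
import numpy as np

def pc_entropy(X):
    """Eigenvalues of the correlation matrix and the Shannon entropy of the
    normalized eigenvalue distribution: higher entropy means variance spread
    over more components, i.e. weaker coordination among the variables."""
    R = np.corrcoef(X, rowvar=False)
    eig = np.linalg.eigvalsh(R)
    p = eig / eig.sum()
    p = p[p > 1e-12]
    return eig, float(-(p * np.log(p)).sum())

rng = np.random.default_rng(0)
z = rng.standard_normal((1000, 1))
coordinated = z + 0.1 * rng.standard_normal((1000, 6))   # six coupled signals
independent = rng.standard_normal((1000, 6))             # six uncoupled signals
_, h_coord = pc_entropy(coordinated)
_, h_indep = pc_entropy(independent)
```

In the study's terms, a post-exercise rise in entropy (and in the number of retained PCs) signals reduced cardiorespiratory coordination.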
Directory of Open Access Journals (Sweden)
Md. Rezaul Karim
2012-03-01
Full Text Available Mining interesting patterns from DNA sequences is one of the most challenging tasks in bioinformatics and computational biology. Maximal contiguous frequent patterns are preferable for expressing the function and structure of DNA sequences and hence can capture the common data characteristics among related sequences. Biologists are interested in finding frequent orderly arrangements of motifs that are responsible for similar expression of a group of genes. In order to reduce mining time and complexity, however, most existing sequence mining algorithms either focus on finding short DNA sequences or require explicit specification of sequence lengths in advance. The challenge is to find longer sequences without specifying sequence lengths in advance. In this paper, we propose an efficient approach to mining maximal contiguous frequent patterns from large DNA sequence datasets. The experimental results show that our proposed approach is memory-efficient and mines maximal contiguous frequent patterns within a reasonable time.
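The notion of a maximal contiguous frequent pattern can be made concrete with a brute-force miner (a didactic sketch only — the paper's contribution is doing this efficiently on large datasets; the sequences and support threshold are invented):

```python
def maximal_contiguous_frequent(seqs, min_support):
    """Brute-force miner: a pattern is frequent if it occurs as a contiguous
    substring in at least min_support sequences; keep only patterns that are
    not contained in a longer frequent pattern."""
    candidates = {s[i:j] for s in seqs
                  for i in range(len(s)) for j in range(i + 1, len(s) + 1)}
    frequent = {p for p in candidates
                if sum(p in s for s in seqs) >= min_support}
    return {p for p in frequent
            if not any(p != q and p in q for q in frequent)}

# "ATCG" is shared by sequences 1 and 3, "TCGA" by sequences 1 and 2
patterns = maximal_contiguous_frequent(["ATCGA", "TCGAT", "ATCG"], 2)
```

Note that no sequence length is specified in advance: maximality is decided by containment among the frequent patterns themselves, which is exactly the property the proposed algorithm exploits at scale.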
Generation of Referring Expressions: Assessing the Incremental Algorithm
van Deemter, Kees; Gatt, Albert; van der Sluis, Ielka; Power, Richard
2012-01-01
A substantial amount of recent work in natural language generation has focused on the generation of "one-shot" referring expressions whose only aim is to identify a target referent. Dale and Reiter's Incremental Algorithm (IA) is often thought to be the best algorithm for maximizing the similarity to referring expressions produced by people. We…
Expectations on Track? High School Tracking and Adolescent Educational Expectations
DEFF Research Database (Denmark)
Karlson, Kristian Bernt
2015-01-01
This paper examines the role of adaptation in expectation formation processes by analyzing how educational tracking in high schools affects adolescents' educational expectations. I argue that adolescents view track placement as a signal about their academic abilities and respond to it in terms of modifying their educational expectations. Applying a difference-in-differences approach to the National Educational Longitudinal Study of 1988, I find that being placed in an advanced or honors class in high school positively affects adolescents’ expectations, particularly if placement is consistent across...
Algorithms, complexity, and the sciences.
Papadimitriou, Christos
2014-11-11
Algorithms, perhaps together with Moore's law, compose the engine of the information technology revolution, whereas complexity--the antithesis of algorithms--is one of the deepest realms of mathematical investigation. After introducing the basic concepts of algorithms and complexity, and the fundamental complexity classes P (polynomial time) and NP (nondeterministic polynomial time, or search problems), we discuss briefly the P vs. NP problem. We then focus on certain classes between P and NP which capture important phenomena in the social and life sciences, namely the Nash equilibrium and other equilibria in economics and game theory, and certain processes in population genetics and evolution. Finally, an algorithm known as multiplicative weights update (MWU) provides an algorithmic interpretation of the evolution of allele frequencies in a population under sex and weak selection. All three of these equivalences are rife with domain-specific implications: The concept of Nash equilibrium may be less universal--and therefore less compelling--than has been presumed; selection on gene interactions may entail the maintenance of genetic variation for longer periods than selection on single alleles predicts; whereas MWU can be shown to maximize, for each gene, a convex combination of the gene's cumulative fitness in the population and the entropy of the allele distribution, an insight that may be pertinent to the maintenance of variation in evolution.
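The MWU rule mentioned here has a very compact generic form: each option's weight is scaled multiplicatively by its observed gain, then renormalized. A minimal sketch (the gains and learning rate are invented, and this is the generic experts-style update, not the population-genetics derivation):

```python
def mwu(gains, eta=0.1):
    """Multiplicative weights update: after each round, scale every option's
    weight by (1 + eta * gain) and renormalize to a probability distribution."""
    w = [1.0] * len(gains[0])
    for round_gains in gains:
        w = [wi * (1 + eta * g) for wi, g in zip(w, round_gains)]
        total = sum(w)
        w = [wi / total for wi in w]
    return w

# Three options; option 0 consistently earns the highest gain
gains = [[1.0, 0.2, 0.5]] * 50
weights = mwu(gains)
```

Weight mass concentrates exponentially on the options with the highest cumulative gain, while the normalization keeps a full distribution alive — the fitness/entropy trade-off alluded to in the abstract.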
Creating real network with expected degree distribution: A statistical simulation
WenJun Zhang; GuangHua Liu
2012-01-01
The degree distribution of known networks is one of the focuses in network analysis. However, its inverse problem, i.e., to create a network from a known degree distribution, has not yet been reported. In the present study, a statistical simulation algorithm was developed to create a real network with an expected degree distribution. It is an iteration procedure in which a real network, with the least deviation of actual degree distribution from expected degree distribution, was created. Random assignment was...
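A commonly used point of comparison for this inverse problem is the Chung-Lu random graph, which matches expected degrees in a single sampling pass rather than by iterative refinement (this is a standard alternative sketch, not the authors' simulation algorithm; the target degrees are invented):

```python
import random

def chung_lu(expected_degrees, seed=0):
    """Sample a simple graph in which node i's expected degree is roughly
    expected_degrees[i]: connect i and j with probability k_i * k_j / sum(k)."""
    random.seed(seed)
    n = len(expected_degrees)
    s = sum(expected_degrees)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if random.random() < min(1.0, expected_degrees[i] * expected_degrees[j] / s)]

target = [3] * 50                      # every node should have degree ~3
edges = chung_lu(target)
mean_degree = 2 * len(edges) / len(target)
```

The iterative procedure described in the abstract goes further by explicitly minimizing the deviation between the realized and expected degree distributions, rather than matching them only in expectation.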
Casanova, Henri; Robert, Yves
2008-01-01
"…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi
Buśko, Krzysztof; Nowak, Anna
2008-01-01
Purpose. The aim of the study was to follow changes of the maximal muscle torque and maximal power output of lower extremities in male judoists during pre-competition training (PCT). The original hypothesis assumed that different training loads would cause changes of the maximal muscle torque and maximal power output of legs in male judoists during pre-competition training, but not changes of the topography of the maximal muscle torque in all muscle groups. Basic procedures. The study sample ...
Directory of Open Access Journals (Sweden)
Y. Tang
2006-01-01
Full Text Available This study provides a comprehensive assessment of state-of-the-art evolutionary multiobjective optimization (EMO) tools' relative effectiveness in calibrating hydrologic models. The relative computational efficiency, accuracy, and ease-of-use of the following EMO algorithms are tested: Epsilon Dominance Nondominated Sorted Genetic Algorithm-II (ε-NSGAII), the Multiobjective Shuffled Complex Evolution Metropolis algorithm (MOSCEM-UA), and the Strength Pareto Evolutionary Algorithm 2 (SPEA2). This study uses three test cases to compare the algorithms' performances: (1) a standardized test function suite from the computer science literature, (2) a benchmark hydrologic calibration test case for the Leaf River near Collins, Mississippi, and (3) a computationally intensive integrated surface-subsurface model application in the Shale Hills watershed in Pennsylvania. One challenge and contribution of this work is the development of a methodology for comprehensively comparing EMO algorithms that have different search operators and randomization techniques. Overall, SPEA2 attained competitive to superior results for most of the problems tested in this study. The primary strengths of the SPEA2 algorithm lie in its search reliability and its diversity preservation operator. The biggest challenge in maximizing the performance of SPEA2 lies in specifying an effective archive size without a priori knowledge of the Pareto set. In practice, this would require significant trial-and-error analysis, which is problematic for more complex, computationally intensive calibration applications. ε-NSGAII appears to be superior to MOSCEM-UA and competitive with SPEA2 for hydrologic model calibration. ε-NSGAII's primary strength lies in its ease-of-use due to its dynamic population sizing and archiving which lead to rapid convergence to very high quality solutions with minimal user input. MOSCEM-UA is best suited for hydrologic model calibration applications that have small
Bosch, Carl; Degirmenci, Soysal; Barlow, Jason; Mesika, Assaf; Politte, David G.; O'Sullivan, Joseph A.
2016-05-01
X-ray computed tomography reconstruction for medical, security and industrial applications has evolved through 40 years of experience with rotating gantry scanners using analytic reconstruction techniques such as filtered back projection (FBP). In parallel, research into statistical iterative reconstruction algorithms has evolved to apply to sparse view scanners in nuclear medicine, low data rate scanners in Positron Emission Tomography (PET) [5, 7, 10] and more recently to reduce exposure to ionizing radiation in conventional X-ray CT scanners. Multiple approaches to statistical iterative reconstruction have been developed based primarily on variations of expectation maximization (EM) algorithms. The primary benefit of EM algorithms is the guarantee of convergence that is maintained when iterative corrections are made within the limits of convergent algorithms. The primary disadvantage, however, is that strict adherence to the correction limits of convergent algorithms extends the number of iterations and the ultimate timeline to complete a 3D volumetric reconstruction. Researchers have studied methods to accelerate convergence through more aggressive corrections [1], ordered subsets [1, 3, 4, 9] and spatially variant image updates. In this paper we describe the development of an alternating minimization (AM) reconstruction algorithm with accelerated convergence for use in a real-time explosive detection application for aviation security. By judiciously applying multiple acceleration techniques and advanced GPU processing architectures, we are able to perform 3D reconstruction of scanned passenger baggage at a rate of 75 slices per second. Analysis of the results on stream-of-commerce passenger bags demonstrates accelerated convergence by factors of 8 to 15 when comparing images from accelerated and strictly convergent algorithms.
Maximization of instantaneous wind penetration using particle ...
African Journals Online (AJOL)
The developed algorithm has been tested on modified IEEE 14-bus test system. The results have shown the maximum instantaneous wind energy penetration limit in percentage and also maximum bus loading point explicitly beyond which system drives into instability. Keywords: Wind power generation, wind penetration, ...
Algorithms for Graph Rigidity and Scene Analysis
DEFF Research Database (Denmark)
Berg, Alex Rune; Jordán, Tibor
2003-01-01
We investigate algorithmic questions and structural problems concerning graph families defined by `edge-counts'. Motivated by recent developments in the unique realization problem of graphs, we give an efficient algorithm to compute the rigid, redundantly rigid, M-connected, and globally rigid...... components of a graph. Our algorithm is based on (and also extends and simplifies) the idea of Hendrickson and Jacobs, as it uses orientations as the main algorithmic tool. We also consider families of bipartite graphs which occur in parallel drawings and scene analysis. We verify a conjecture of Whiteley...... by showing that 2d-connected bipartite graphs are d-tight. We give a new algorithm for finding a maximal d-sharp subgraph. We also answer a question of Imai and show that finding a maximum size d-sharp subgraph is NP-hard....
Using molecular biology to maximize concurrent training.
Baar, Keith
2014-11-01
Very few sports use only endurance or strength. Outside of running long distances on a flat surface and power-lifting, practically all sports require some combination of endurance and strength. Endurance and strength can be developed simultaneously to some degree. However, the development of a high level of endurance seems to prohibit the development or maintenance of muscle mass and strength. This interaction between endurance and strength is called the concurrent training effect. This review specifically defines the concurrent training effect, discusses the potential molecular mechanisms underlying this effect, and proposes strategies to maximize strength and endurance in the high-level athlete.
Process Improvement for Maximized Therapeutic Innovation Outcome.
Waldman, Scott A; Terzic, Andre
2018-01-01
Deconvoluting key biological mechanisms forms the framework for therapeutic discovery. Strategies that enable effective translation of those insights along the development and regulatory path ultimately drive validated clinical application in patients and populations. Accordingly, parity in What vs. How we transform novel mechanistic insights into therapeutic paradigms is essential in achieving success. Aligning molecular discovery with innovations in structures and processes along the discovery-development-regulation-utilization continuum maximizes the return on public and private investments for next-generation solutions in managing health and disease. © 2017 ASCPT.
Relaxation dynamics of maximally clustered networks
Klaise, Janis; Johnson, Samuel
2018-01-01
We study the relaxation dynamics of fully clustered networks (maximal number of triangles) to an unclustered state under two different edge dynamics—the double-edge swap, corresponding to degree-preserving randomization of the configuration model, and single edge replacement, corresponding to full randomization of the Erdős-Rényi random graph. We derive expressions for the time evolution of the degree distribution, edge multiplicity distribution and clustering coefficient. We show that under both dynamics networks undergo a continuous phase transition in which a giant connected component is formed. We calculate the position of the phase transition analytically using the Erdős-Rényi phenomenology.
Intraoperative MRI and Maximizing Extent of Resection.
Rao, Ganesh
2017-10-01
Intraoperative MRI (iMRI) is a neurosurgical adjunct used to maximize the removal of glioma, the most common primary brain tumor. Increased extent of resection of gliomas has been shown to correlate with longer survival times. iMRI units are variable in design and magnet strength, which can affect patient selection and image quality. Multiple studies have shown that surgical resection of gliomas using iMRI results in increased extent of resection and survival time. Level II evidence supports the use of iMRI in the surgical treatment of glioma. Copyright © 2017 Elsevier Inc. All rights reserved.
Moving multiple sinks through wireless sensor networks for lifetime maximization.
Energy Technology Data Exchange (ETDEWEB)
Petrioli, Chiara (Universita di Roma); Carosi, Alessio (Universita di Roma); Basagni, Stefano (Northeastern University); Phillips, Cynthia Ann
2008-01-01
Unattended sensor networks typically watch for some phenomena such as volcanic events, forest fires, pollution, or movements in animal populations. Sensors report to a collection point periodically or when they observe reportable events. When sensors are too far from the collection point to communicate directly, other sensors relay messages for them. If the collection point location is static, sensor nodes that are closer to the collection point relay far more messages than those on the periphery. Assuming all sensor nodes have roughly the same capabilities, those with high relay burden experience battery failure much faster than the rest of the network. However, since their death disconnects the live nodes from the collection point, the whole network is then dead. We consider the problem of moving a set of collectors (sinks) through a wireless sensor network to balance the energy used for relaying messages, maximizing the lifetime of the network. We show how to compute an upper bound on the lifetime for any instance using linear and integer programming. We present a centralized heuristic that produces sink movement schedules yielding network lifetimes within 1.4% of the upper bound for realistic settings. We also present a distributed heuristic that produces lifetimes at most 25.3% below the upper bound. More specifically, we formulate a linear program (LP) that is a relaxation of the scheduling problem. The variables are naturally continuous, but the LP relaxes some constraints. The LP has an exponential number of constraints, but we can satisfy them all by enforcing only a polynomial number using a separation algorithm. This separation algorithm is a p-median facility location problem, which we can solve efficiently in practice for huge instances using integer programming technology. This LP selects a set of good sensor configurations. Given the solution to the LP, we can find a feasible schedule by selecting a subset of these configurations, ordering them
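The core of the relaxation above, allocating time to each sink configuration subject to per-node energy budgets, can be illustrated on a toy instance. This is a minimal sketch with made-up numbers, not the paper's full formulation with its p-median separation algorithm, and it assumes SciPy's `linprog` is available:

```python
import numpy as np
from scipy.optimize import linprog

# drain[i][c] = energy node i burns per unit time while the sinks are in
# configuration c (hypothetical numbers for a 2-node, 2-configuration instance).
drain = np.array([[2.0, 1.0],
                  [1.0, 2.0]])
battery = np.array([10.0, 10.0])   # energy budget per node

# Maximize total time t1 + t2; linprog minimizes, so negate the objective.
res = linprog(c=[-1.0, -1.0], A_ub=drain, b_ub=battery, bounds=[(0, None)] * 2)
lifetime = -res.fun                # optimum: t1 = t2 = 10/3, lifetime = 20/3
```

Balancing time across the two complementary configurations spreads the relay load, which is exactly the effect the sink-movement schedules exploit.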
Limited angle C-arm tomosynthesis reconstruction algorithms
Malalla, Nuhad A. Y.; Xu, Shiyu; Chen, Ying
2015-03-01
In this paper, C-arm tomosynthesis with a digital detector was investigated as a novel three dimensional (3D) imaging technique. Digital tomosynthesis is an imaging technique that provides 3D information of the object by reconstructing slices passing through the object, based on a series of angular projection views with respect to the object. C-arm tomosynthesis provides two dimensional (2D) X-ray projection images with rotation (±20° angular range) of both X-ray source and detector. In this paper, four representative reconstruction algorithms including point by point back projection (BP), filtered back projection (FBP), simultaneous algebraic reconstruction technique (SART) and maximum likelihood expectation maximization (MLEM) were investigated. A dataset of 25 projection views of a 3D spherical object located at the center of the C-arm imaging space was simulated from 25 angular locations over a total view angle of 40 degrees. With the reconstructed images, a 3D mesh plot and a 2D line profile of normalized pixel intensities on the in-focus reconstruction plane crossing the center of the object were studied for each reconstruction algorithm. Results demonstrated the capability to generate 3D information from limited angle C-arm tomosynthesis. Since C-arm tomosynthesis is relatively compact, portable and can avoid moving patients, it has been investigated for different clinical applications ranging from tumor surgery to interventional radiology, making it important to evaluate its performance for such applications.
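Of the four algorithms compared, MLEM is the easiest to sketch: the estimate is updated multiplicatively by the backprojected ratio of measured to forward-projected data. The following toy example uses a made-up random system matrix rather than any real C-arm geometry:

```python
import numpy as np

# Hypothetical 8-detector x 4-voxel system matrix and noise-free projections.
rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, size=(8, 4))
x_true = np.array([1.0, 3.0, 0.5, 2.0])
y = A @ x_true

x = np.ones(4)                 # MLEM requires a strictly positive start
sens = A.sum(axis=0)           # sensitivity image, A^T 1
for _ in range(500):
    ratio = y / (A @ x)        # measured / forward-projected data
    x *= (A.T @ ratio) / sens  # multiplicative EM update (preserves positivity)
```

The multiplicative form is why MLEM keeps the image non-negative without any explicit constraint handling.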
Accelerating Popular Tomographic Reconstruction Algorithms on Commodity PC Graphics Hardware
Xu, Fang; Mueller, K.
2005-06-01
The task of reconstructing an object from its projections via tomographic methods is a time-consuming process due to the vast complexity of the data. For this reason, manufacturers of equipment for medical computed tomography (CT) rely mostly on application-specific integrated circuits (ASICs) to obtain the fast reconstruction times required in clinical settings. Although modern CPUs have gained sufficient power in recent years to be competitive for two-dimensional (2D) reconstruction, this is not the case for three-dimensional (3D) reconstructions, especially not when iterative algorithms must be applied. The recent evolution of commodity PC computer graphics boards (GPUs) has the potential to change this picture in a very dramatic way. In this paper we will show how the new floating point GPUs can be exploited to perform both analytical and iterative reconstruction from X-ray and functional imaging data. For this purpose, we decompose three popular three-dimensional (3D) reconstruction algorithms (Feldkamp filtered backprojection, the simultaneous algebraic reconstruction technique, and expectation maximization) into a common set of base modules, which all can be executed on the GPU and their output linked internally. Visualization of the reconstructed object is easily achieved since the object already resides in the graphics hardware, allowing one to run a visualization module at any time to view the reconstruction results. Our implementation allows speedups of over an order of magnitude with respect to CPU implementations, at comparable image quality.
Babadi, Behtash; Ba, Demba; Purdon, Patrick L; Brown, Emery N
2013-10-30
In this paper, we study the theoretical properties of a class of iteratively re-weighted least squares (IRLS) algorithms for sparse signal recovery in the presence of noise. We demonstrate a one-to-one correspondence between this class of algorithms and a class of Expectation-Maximization (EM) algorithms for constrained maximum likelihood estimation under a Gaussian scale mixture (GSM) distribution. The IRLS algorithms we consider are parametrized by 0 < ν ≤ 1 and ε > 0. The EM formalism, as well as the connection to GSMs, allow us to establish that the IRLS(ν, ε) algorithms minimize ε-smooth versions of the ℓν 'norms'. We leverage EM theory to show that, for each 0 < ν ≤ 1, the limit points of the sequence of IRLS(ν, ε) iterates are stationary points of the ε-smooth ℓν 'norm' minimization problem on the constraint set. Finally, we employ techniques from compressive sampling (CS) theory to show that the class of IRLS(ν, ε) algorithms is stable for each 0 < ν ≤ 1, if the limit point of the iterates coincides with the global minimizer. For the case ν = 1, we show that the algorithm converges exponentially fast to a neighborhood of the stationary point, and outline its generalization to super-exponential convergence for ν < 1. We demonstrate our claims via simulation experiments. The simplicity of IRLS, along with the theoretical guarantees provided in this contribution, make a compelling case for its adoption as a standard tool for sparse signal recovery.
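A minimal sketch of the IRLS(ν, ε) iteration for the equality-constrained problem min Σ(x_i² + ε²)^(ν/2) subject to Ax = y: each step solves a weighted least-squares problem with weights derived from the current iterate. The data here are made up; the monotone decrease of the ε-smooth objective is the property the EM correspondence guarantees.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 10))
x_sparse = np.zeros(10); x_sparse[3] = 2.0     # sparse ground truth
y = A @ x_sparse

nu, eps = 1.0, 1e-3                            # IRLS(nu, eps) parameters
smooth_norm = lambda v: np.sum((v**2 + eps**2) ** (nu / 2))

x = np.linalg.lstsq(A, y, rcond=None)[0]       # least-norm feasible start
obj0 = smooth_norm(x)
for _ in range(100):
    d = (x**2 + eps**2) ** (1 - nu / 2)        # diagonal of the weight matrix D
    G = (A * d) @ A.T                          # A D A^T
    x = d * (A.T @ np.linalg.solve(G, y))      # weighted least-squares step
obj = smooth_norm(x)
```

Every iterate satisfies Ax = y by construction, and the smoothed objective never increases, which is the majorization-minimization behavior the EM view explains.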
Maximizing versus satisficing: happiness is a matter of choice.
Schwartz, Barry; Ward, Andrew; Monterosso, John; Lyubomirsky, Sonja; White, Katherine; Lehman, Darrin R
2002-11-01
Can people feel worse off as the options they face increase? The present studies suggest that some people--maximizers--can. Study 1 reported a Maximization Scale, which measures individual differences in desire to maximize. Seven samples revealed negative correlations between maximization and happiness, optimism, self-esteem, and life satisfaction, and positive correlations between maximization and depression, perfectionism, and regret. Study 2 found maximizers less satisfied than nonmaximizers (satisficers) with consumer decisions, and more likely to engage in social comparison. Study 3 found maximizers more adversely affected by upward social comparison. Study 4 found maximizers more sensitive to regret and less satisfied in an ultimatum bargaining game. The interaction between maximizing and choice is discussed in terms of regret, adaptation, and self-blame.
Gompertz-Makeham Life Expectancies
DEFF Research Database (Denmark)
Missov, Trifon I.; Lenart, Adam; Vaupel, James W.
We study the Gompertz and Gompertz-Makeham mortality models. We prove that the resulting life expectancy can be expressed in terms of a hypergeometric function if the population is heterogeneous with gamma-distributed individual frailty, or an incomplete gamma function if the study population...... is homogeneous. We use the properties of hypergeometric and incomplete gamma functions to construct approximations that allow calculating the respective life expectancy with high accuracy and interpreting the impact of model parameters on life expectancy....
Efficient Algorithms and Data Structures for Massive Data Sets
Alka
2010-05-01
For many algorithmic problems, traditional algorithms that optimise on the number of instructions executed prove expensive on I/Os. Novel and very different design techniques, when applied to these problems, can produce algorithms that are I/O efficient. This thesis adds to the growing chorus of such results. The computational models we use are the external memory model and the W-Stream model. On the external memory model, we obtain the following results. (1) An I/O efficient algorithm for computing minimum spanning trees of graphs that improves on the performance of the best known algorithm. (2) The first external memory version of soft heap, an approximate meldable priority queue. (3) Hard heap, the first meldable external memory priority queue that matches the amortised I/O performance of the known external memory priority queues, while allowing a meld operation at the same amortised cost. (4) I/O efficient exact, approximate and randomised algorithms for the minimum cut problem, which has not been explored before on the external memory model. (5) Some lower and upper bounds on I/Os for interval graphs. On the W-Stream model, we obtain the following results. (1) Algorithms for various tree problems and list ranking that match the performance of the best known algorithms and are easier to implement. (2) Pass-efficient algorithms for sorting and the maximal independent set problem that improve on the best known algorithms. (3) Pass-efficient algorithms for the graph problems of vertex colouring, approximate single source shortest paths, maximal matching, and approximate weighted vertex cover. (4) Lower bounds on passes for list ranking and maximal matching. We propose two variants of the W-Stream model, and design algorithms for the maximal independent set, vertex-colouring, and planar graph single source shortest paths problems on those models.
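The flavor of I/O-efficient design, processing data in memory-sized runs and then merging them with sequential passes, can be illustrated with a two-phase merge sort. This in-memory sketch only mimics the access pattern; a real external-memory implementation would stream the runs from disk:

```python
import heapq

def external_sort(items, run_size=4):
    # Phase 1: read the input in memory-sized chunks and sort each run.
    runs = [sorted(items[i:i + run_size]) for i in range(0, len(items), run_size)]
    # Phase 2: k-way merge of the sorted runs, one sequential scan per run.
    return list(heapq.merge(*runs))

data = [9, 1, 7, 3, 8, 2, 6, 5, 4, 0]
print(external_sort(data))  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

With run size M and fan-in B, the on-disk version sorts N items in O((N/B) log_{M/B}(N/B)) I/Os, the classic external-memory sorting bound.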
Decomposing change in life expectancy
DEFF Research Database (Denmark)
Vaupel, James W.; Canudas Romo, Vladimir
2003-01-01
We extend Nathan Keyfitz's research on continuous change in life expectancy over time by presenting and proving a new formula for decomposing such change. The formula separates change in life expectancy over time into two terms. The first term captures the general effect of reduction in death rates at all ages, and the second term captures the effect of heterogeneity in the pace of improvement in mortality at different ages. We extend the formula to decompose change in life expectancy into age-specific and cause-specific components, and apply the methods to analyze changes in life expectancy......
SMV⊥: Simplex of maximal volume based upon the Gram-Schmidt process
Salazar-Vazquez, Jairo; Mendez-Vazquez, Andres
2015-10-01
In recent years, different algorithms for Hyperspectral Image (HI) analysis have been introduced. The high spectral resolution of these images allows developing different algorithms for target detection, material mapping, and material identification for applications in Agriculture, Security and Defense, Industry, etc. Therefore, from the computer science's point of view, there is a fertile field of research for improving and developing algorithms in HI analysis. In some applications, the spectral pixels of a HI can be classified using laboratory spectral signatures. Nevertheless, for many others, there is not enough prior information or spectral signatures available, making any analysis a difficult task. One of the most popular algorithms for HI analysis is the N-FINDR because it is easy to understand and provides a way to unmix the original HI into the respective material compositions. The N-FINDR is computationally expensive and its performance depends on a random initialization process. This paper proposes a novel idea to reduce the complexity of the N-FINDR by implementing a bottom-up approach based on an observation from linear algebra and the use of the Gram-Schmidt process. Therefore, the Simplex of Maximal Volume Perpendicular (SMV⊥) algorithm is proposed for fast endmember extraction in hyperspectral imagery. This novel algorithm has complexity O(n) with respect to the number of pixels. In addition, the evidence shows that SMV⊥ calculates a bigger volume and has lower computational time complexity than other popular algorithms in synthetic and real scenarios.
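The link between Gram-Schmidt and simplex volume is that the volume equals the product of the perpendicular residual norms of the edge vectors divided by k!. A hedged sketch of that computation, using QR as a numerically stable Gram-Schmidt, not the authors' SMV⊥ implementation:

```python
import math
import numpy as np

def simplex_volume(vertices):
    """Volume of the simplex spanned by the rows of `vertices`.
    The diagonal of R in a QR factorization holds the Gram-Schmidt
    perpendicular residual norms of the edge vectors."""
    edges = (vertices[1:] - vertices[0]).T            # d x k edge matrix
    r = np.linalg.qr(edges, mode="r")                 # triangular Gram-Schmidt factor
    k = edges.shape[1]
    return float(np.prod(np.abs(np.diag(r))) / math.factorial(k))

tri = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])       # area 1/2
tet = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)  # volume 1/6
```

Growing the simplex one vertex at a time only multiplies in one new perpendicular component, which is the observation a bottom-up endmember search can exploit.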
Generalized linear model for mapping discrete trait loci implemented with LASSO algorithm.
Directory of Open Access Journals (Sweden)
Jun Xing
Full Text Available The generalized estimating equation (GEE) algorithm under a heterogeneous residual variance model is an extension of the iteratively reweighted least squares (IRLS) method for continuous traits to discrete traits. In contrast to the mixture model-based expectation-maximization (EM) algorithm, the GEE algorithm can well detect quantitative trait loci (QTL), especially large effect QTLs located in large marker intervals, at high computing speed. Based on a single QTL model, however, the GEE algorithm has very limited statistical power to detect multiple QTLs because of ignoring other linked QTLs. In this study, the fast least absolute shrinkage and selection operator (LASSO) is derived for the generalized linear model (GLM) with all possible link functions. Under a heterogeneous residual variance model, the LASSO for GLM is used to iteratively estimate the non-zero genetic effects of those loci over the entire genome. The iteratively reweighted LASSO is therefore extended to mapping QTL for discrete traits, such as ordinal, binary, and Poisson traits. The simulated and real data analyses are conducted to demonstrate the efficiency of the proposed method to simultaneously identify multiple QTLs for binary and Poisson traits as examples.
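As a hedged illustration of the LASSO inner step in the linear (identity-link) special case, the following proximal-gradient (ISTA) sketch minimizes 0.5·||y − Xb||² + λ||b||₁ by soft thresholding; it omits the GEE reweighting and heterogeneous residual variances of the actual method:

```python
import numpy as np

def ista_lasso(X, y, lam, n_iter=500):
    """Proximal gradient (ISTA) for 0.5*||y - Xb||^2 + lam*||b||_1."""
    L = np.linalg.norm(X, 2) ** 2              # Lipschitz constant of the gradient
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        z = b - X.T @ (X @ b - y) / L          # gradient step on the smooth part
        b = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return b

# Toy QTL-style selection: one truly non-zero effect among ten markers.
rng = np.random.default_rng(2)
X = rng.standard_normal((20, 10))
b_true = np.zeros(10); b_true[0] = 3.0
y = X @ b_true
b_hat = ista_lasso(X, y, lam=0.1)
```

The soft-threshold step is what zeroes out whole coefficients, giving the simultaneous selection of loci that a single-QTL scan cannot.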
Maximal lactate steady state in Judo.
de Azevedo, Paulo Henrique Silva Marques; Pithon-Curi, Tania; Zagatto, Alessandro Moura; Oliveira, João; Perez, Sérgio
2014-04-01
The purpose of this study was to verify the validity of the respiratory compensation threshold (RCT) measured during a new single judo-specific incremental test (JSIT) for aerobic demand evaluation. To test the validity of the new test, the JSIT was compared with the Maximal Lactate Steady State (MLSS), which is the gold-standard procedure for measuring aerobic demand. Eight well-trained male competitive judo players (24.3 ± 7.9 years; height of 169.3 ± 6.7 cm; fat mass of 12.7 ± 3.9%) performed a maximal incremental specific test for judo to assess the RCT and performed a 30-minute MLSS test, with both tests mimicking the Uchi-komi drills. The intensity at RCT measured on the JSIT was not significantly different from that at MLSS (p=0.40). In addition, a high and significant correlation between MLSS and RCT was observed (r=0.90, p=0.002), as well as high agreement. RCT measured during the JSIT is a valid procedure to measure aerobic demand, respecting the ecological validity of judo.
Spiders Tune Glue Viscosity to Maximize Adhesion.
Amarpuri, Gaurav; Zhang, Ci; Diaz, Candido; Opell, Brent D; Blackledge, Todd A; Dhinojwala, Ali
2015-11-24
Adhesion in humid conditions is a fundamental challenge to both natural and synthetic adhesives. Yet, glue from most spider species becomes stickier as humidity increases. We find that the adhesion of spider glue, from five diverse spider species, maximizes at very different humidities that match their foraging habitats. By using high-speed imaging and a spreading power law, we find that the glue viscosity varies over 5 orders of magnitude with humidity for each species, yet the viscosity at maximal adhesion for each species is nearly identical, 10^5-10^6 cP. Many natural systems take advantage of viscosity to improve functional response, but spider glue's humidity responsiveness is a novel adaptation that makes the glue stickiest in each species' preferred habitat. This tuning is achieved by a combination of proteins and hygroscopic organic salts that determines water uptake in the glue. We therefore anticipate that manipulation of polymer-salt interactions to control viscosity can provide a simple mechanism to design humidity responsive smart adhesives.
Gaitanis, Anastasios; Kontaxakis, George; Spyrou, George; Panayiotakis, George; Tzanakos, George
2010-09-01
We have studied the properties of the pixel updating coefficients in the 2D ordered subsets expectation maximization (OSEM) algorithm for iterative image reconstruction in positron emission tomography, in order to address the problem of image quality degradation, a known property of the technique after a number of iterations. The behavior of the updating coefficients has been extensively analyzed on synthetic coincidence data, using the necessary software tools. The experiments showed that the statistical properties of these coefficients can be correlated with the quality of the reconstructed images as a function of the activity distribution in the source and the number of subsets used. Considering the fact that these properties can be quantified during the reconstruction process of data from real scans, where the activity distribution in the source is unknown, the results of this study might be useful for the development of a stopping criterion for the OSEM algorithm. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.
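The updating coefficients in question are the per-subset multiplicative factors c = A_sᵀ(y_s/(A_s x)) / A_sᵀ1. A toy OSEM sketch (a made-up noise-free 12×4 system, not the 2D PET setup of the study) showing how the spread of these coefficients shrinks as the reconstruction settles:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.uniform(0.1, 1.0, size=(12, 4))      # hypothetical system matrix
x_true = np.array([4.0, 1.0, 2.0, 3.0])
y = A @ x_true                               # noise-free projections

subsets = np.array_split(np.arange(12), 3)   # 3 ordered subsets of projections
x = np.ones(4)
spread = []                                  # std of the updating coefficients
for _ in range(20):                          # 20 full passes over the subsets
    for s in subsets:
        c = (A[s].T @ (y[s] / (A[s] @ x))) / A[s].sum(axis=0)
        spread.append(float(np.std(c)))      # one candidate stopping statistic
        x *= c
```

With noisy data the coefficients stop shrinking and start fluctuating once the noise is being fitted, which is the behavior a coefficient-based stopping criterion would monitor.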
Constrained Multiobjective Biogeography Optimization Algorithm
Directory of Open Access Journals (Sweden)
Hongwei Mo
2014-01-01
Full Text Available Multiobjective optimization involves minimizing or maximizing multiple objective functions subject to a set of constraints. In this study, a novel constrained multiobjective biogeography optimization algorithm (CMBOA) is proposed. It is the first biogeography optimization algorithm for constrained multiobjective optimization. In CMBOA, a disturbance migration operator is designed to generate diverse feasible individuals in order to promote the diversity of individuals on the Pareto front. Infeasible individuals near the feasible region are evolved toward feasibility by recombining with their nearest nondominated feasible individuals. The convergence of CMBOA is proved by using probability theory. The performance of CMBOA is evaluated on a set of 6 benchmark problems and experimental results show that CMBOA performs better than or comparably to the classical NSGA-II and IS-MOEA.
Fiscal Consolidations and Heterogeneous Expectations
Hommes, C.; Lustenhouwer, J.; Mavromatis, K.
2015-01-01
We analyze fiscal consolidations using a New Keynesian model where agents have heterogeneous expectations and are uncertain about the composition of consolidations. Heterogeneity in expectations may amplify expansions, thus stabilizing the debt-to-GDP ratio faster under tax-based consolidations, in
On maximal eigenfrequency separation in two-material structures: the 1D and 2D scalar cases
DEFF Research Database (Denmark)
Jensen, Jakob Søndergaard; Pedersen, Niels Leergaard
2006-01-01
We present a method to maximize the separation of two adjacent eigenfrequencies in structures with two material components. The method is based on finite element analysis and topology optimization in which an iterative algorithm is used to find the optimal distribution of the materials. Results a...
Overstatement and Rational Market Expectation
Illoong Kwon; Eunjung Yeo
2008-01-01
When an agent overstates his/her true performance, a rational market can simply discount the reported performance, and correctly guess the true performance. This paper shows, however, that such rational market discounting leads to less productive effort by the agent and less performance-pay by the principal. Therefore, a rational market and a profit-maximizing principal can exacerbate the lack of productive effort by the agent.
Algorithmic test design using classical item parameters
van der Linden, Willem J.; Adema, Jos J.
1988-01-01
Two optimization models for the construction of tests with a maximal value of coefficient alpha are given. Both models have a linear form and can be solved by using a branch-and-bound algorithm. The first model assumes an item bank calibrated under the Rasch model and can be used, for instance, when classical test theory has to serve as an interface between the item bank system and a user not familiar with modern test theory. Maximization of alpha was obtained by inserting a special constra...
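The quantity being maximized is coefficient alpha, α = k/(k−1)·(1 − Σσ²_item/σ²_total). A small sketch of computing it from a score matrix (the branch-and-bound item selection itself is omitted):

```python
import numpy as np

def cronbach_alpha(scores):
    """Coefficient alpha for a persons x items score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

# Perfectly parallel items (every item identical) give the maximal alpha of 1.
col = np.array([1.0, 2.0, 3.0, 4.0, 5.0])[:, None]
alpha_max = cronbach_alpha(np.tile(col, (1, 4)))
```

Because alpha is a nonlinear function of the selected items, the models in the paper linearize the objective so a branch-and-bound solver can handle it.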
Efficient Algorithms for the Maximum Sum Problems
Directory of Open Access Journals (Sweden)
Sung Eun Bae
2017-01-01
Full Text Available We present efficient sequential and parallel algorithms for the maximum sum (MS) problem, which is to maximize the sum of some shape in the data array. We deal with two MS problems: the maximum subarray (MSA) problem and the maximum convex sum (MCS) problem. In the MSA problem, we find a rectangular part within the given data array that maximizes the sum in it. The MCS problem is to find a convex shape rather than a rectangular shape that maximizes the sum. Thus, MCS is a generalization of MSA. For the MSA problem, O(n)-time parallel algorithms are already known on an (n, n) 2D array of processors. We improve the communication steps from 2n − 1 to n, which is optimal. For the MCS problem, we achieve the asymptotic time bound of O(n) on an (n, n) 2D array of processors. We provide rigorous proofs for the correctness of our parallel algorithm based on Hoare logic and also provide some experimental results of our algorithm that are gathered from the Blue Gene/P supercomputer. Furthermore, we briefly describe how to compute the actual shape of the maximum convex sum.
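For the MSA problem, the standard sequential approach fixes a pair of rows, collapses the strip between them into column sums, and runs a 1D Kadane scan, giving O(n³) time on an n×n array. A sketch of that sequential baseline (not the parallel mesh algorithms of the paper):

```python
import numpy as np

def kadane(a):
    """Best contiguous-segment sum of a 1D array (Kadane's scan)."""
    best = cur = a[0]
    for v in a[1:]:
        cur = max(v, cur + v)
        best = max(best, cur)
    return best

def max_subarray_2d(m):
    """O(n^3) maximum-sum rectangle: for each top row, grow the strip
    downward, keep running column sums, and run 1D Kadane on them."""
    m = np.asarray(m, dtype=float)
    best = m.min()
    for top in range(m.shape[0]):
        col = np.zeros(m.shape[1])
        for bottom in range(top, m.shape[0]):
            col += m[bottom]
            best = max(best, kadane(col))
    return best
```

The row-strip collapse is also the structure the mesh algorithms parallelize, with the column sums maintained across processors.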
Primordial two-component maximally symmetric inflation
Enqvist, K.; Nanopoulos, D. V.; Quirós, M.; Kounnas, C.
1985-12-01
We propose a two-component inflation model, based on maximally symmetric supergravity, where the scales of reheating and the inflation potential at the origin are decoupled. This is possible because of the second-order phase transition from SU(5) to SU(3)×SU(2)×U(1) that takes place when φ≅φc... inflation at the global minimum, and leads to a reheating temperature TR ≅ 10^15-10^16 GeV. This makes it possible to generate baryon asymmetry in the conventional way without any conflict with experimental data on proton lifetime. The mass of the gravitinos is m_{3/2} ≅ 10^12 GeV, thus avoiding the gravitino problem. Monopoles are diluted by residual inflation in the broken phase below the cosmological bounds if φc...
Holographic equipartition and the maximization of entropy
Krishna, P. B.; Mathew, Titus K.
2017-09-01
The accelerated expansion of the Universe can be interpreted as a tendency to satisfy holographic equipartition. It can be expressed by a simple law, ΔV = Δt (N_surf − ε N_bulk), where V is the Hubble volume in Planck units, t is the cosmic time in Planck units, and N_surf/bulk is the number of degrees of freedom on the horizon/bulk of the Universe. We show that this holographic equipartition law effectively implies the maximization of entropy. In the cosmological context, a system that obeys the holographic equipartition law behaves as an ordinary macroscopic system that proceeds to an equilibrium state of maximum entropy. We consider the standard ΛCDM model of the Universe and show that it is consistent with the holographic equipartition law. Analyzing the entropy evolution, we find that it also proceeds to an equilibrium state of maximum entropy.
MAXIMIZING THE BENEFITS OF ERP SYSTEMS
Directory of Open Access Journals (Sweden)
Paulo André da Conceição Menezes
2010-04-01
Full Text Available The ERP (Enterprise Resource Planning) systems have been consolidated in companies of different sizes and sectors, allowing their real benefits to be definitively evaluated. In this study, several interactions have been studied in different phases, such as the strategic priorities and strategic planning defined as ERP Strategy; business process review and ERP selection in the pre-implementation phase; project management and ERP adaptation in the implementation phase; as well as ERP revision and integration efforts in the post-implementation phase. Through rigorous use of case study methodology, this research led to developing and testing a framework for maximizing the benefits of ERP systems, and seeks to contribute to the generation of ERP initiatives that optimize their performance.
Maximal mydriasis evaluation in cataract surgery
Directory of Open Access Journals (Sweden)
Ho Tony
1992-01-01
Full Text Available We propose the Maximal Mydriasis Test (MMT) as a simple and safe means to provide the cataract surgeon with objective and dependable pre-operative information on the idiosyncratic mydriatic response of the pupil. The MMT results of a consecutive series of 165 eyes from 100 adults referred for cataract evaluation are presented to illustrate its practical applications and value. The results of the MMT allow the surgeon to anticipate problem eyes pre-operatively so that he can plan his surgical strategy more appropriately and effectively. Conversely, the surgeon can also appropriately and confidently plan surgical procedures where wide pupillary dilation is important. The MMT has also helped improve our cost-effectiveness by cutting down unnecessary delays in the operating room and enabling better utilisation of restricted costly resources.
The maximal family of exactly solvable chaos
Umeno, K
1996-01-01
A new two-parameter family of ergodic transformations with non-uniform invariant measures on the unit interval (I=[0,1]) is found here. The family has the special property that its invariant measures can be explicitly written in terms of algebraic functions of the parameters and a dynamical variable. Furthermore, it is also proven here that this family is the most general class of exactly solvable chaos on (I), including the Ulam-von Neumann map (y=4x(1-x)). Unexpectedly, by choosing certain parameters, this maximal class of exactly solvable chaos is found to describe the asymmetric shape of the experimentally obtained first return maps of the Belousov-Zhabotinsky chemical reaction.
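The Ulam-von Neumann map is the family's best-known member; its invariant density 1/(π√(x(1−x))) can be checked by direct iteration. A minimal sketch (standard textbook material, not the paper's two-parameter family):

```python
import random

def ulam_von_neumann(x):
    # Ulam-von Neumann (logistic, r = 4) map on the unit interval
    return 4.0 * x * (1.0 - x)

random.seed(0)
x = random.random()
samples = []
for _ in range(200000):
    x = ulam_von_neumann(x)
    if x == 0.0 or x == 1.0:     # escape finite-precision absorbing states
        x = random.random()
    samples.append(x)

# The invariant density is rho(x) = 1/(pi*sqrt(x*(1-x))): mean 1/2, with
# mass piling up near the endpoints of the interval.
mean = sum(samples) / len(samples)
edge_mass = sum(1 for s in samples if s < 0.1) / len(samples)
print(round(mean, 3), round(edge_mass, 3))
```

The edge mass in [0, 0.1] comes out near the arcsine-law value (2/π)·arcsin(√0.1) ≈ 0.205, roughly double what a uniform invariant measure would give.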
Maximizing policy learning in international committees
DEFF Research Database (Denmark)
Nedergaard, Peter
2007-01-01
In the voluminous literature on the European Union's open method of coordination (OMC), no one has hitherto analysed on the basis of scholarly examination the question of what contributes to the learning processes in the OMC committees. On the basis of a questionnaire sent to all participants......, this article demonstrates that valuable lessons can be learned about policy learning, in practice and theoretically, by analysing the cooperation in the OMC committees. Using the Advocacy Coalition Framework as the starting point of analysis, 15 hypotheses on policy learning are tested. Among other things......, it is concluded that in order to maximize policy learning in international committees, empirical data should be made available to committees and provided by sources close to the participants (i.e. the Commission). In addition, the work in the committees should be made prestigious in order to attract well......
Statistical complexity is maximized in a small-world brain.
Directory of Open Access Journals (Sweden)
Teck Liang Tan
Full Text Available In this paper, we study a network of Izhikevich neurons to explore what it means for a brain to be at the edge of chaos. To do so, we first constructed the phase diagram of a single Izhikevich excitatory neuron, and identified a small region of the parameter space where we find a large number of phase boundaries to serve as our edge of chaos. We then couple the outputs of these neurons directly to the parameters of other neurons, so that the neuron dynamics can drive transitions from one phase to another on an artificial energy landscape. Finally, we measure the statistical complexity of the parameter time series, while the network is tuned from a regular network to a random network using the Watts-Strogatz rewiring algorithm. We find that the statistical complexity of the parameter dynamics is maximized when the neuron network is most small-world-like. Our results suggest that the small-world architecture of neuron connections in brains is not accidental, but may be related to the information processing that they do.
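The Watts-Strogatz rewiring the authors rely on can be sketched in pure Python (a minimal version of the standard construction, for illustration only; the Izhikevich dynamics and the statistical-complexity measure are omitted, and all parameters are arbitrary):

```python
import random

def watts_strogatz(n, k, p, rng):
    """Ring lattice of n nodes, each linked to its k nearest neighbours
    (k even); each edge is rewired with probability p."""
    adj = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(1, k // 2 + 1):
            adj[i].add((i + j) % n)
            adj[(i + j) % n].add(i)
    for i in range(n):
        for j in range(1, k // 2 + 1):
            if rng.random() < p:
                old = (i + j) % n
                choices = [m for m in range(n)
                           if m != i and m not in adj[i]]
                if choices and old in adj[i]:
                    new = rng.choice(choices)
                    adj[i].discard(old); adj[old].discard(i)
                    adj[i].add(new); adj[new].add(i)
    return adj

def clustering(adj):
    """Average local clustering coefficient."""
    total = 0.0
    for i, nbrs in adj.items():
        nbrs = list(nbrs)
        d = len(nbrs)
        if d < 2:
            continue
        links = sum(1 for a in range(d) for b in range(a + 1, d)
                    if nbrs[b] in adj[nbrs[a]])
        total += 2.0 * links / (d * (d - 1))
    return total / len(adj)

rng = random.Random(42)
c_regular = clustering(watts_strogatz(100, 4, 0.0, rng))   # ring lattice
c_random = clustering(watts_strogatz(100, 4, 1.0, rng))    # fully rewired
print(c_regular, c_random)
```

The small-world regime the paper associates with maximal statistical complexity sits between these extremes: intermediate p keeps clustering near the lattice value of 0.5 while path lengths drop toward the random-graph value.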
MANAGING CONTENTION AVOIDANCE AND MAXIMIZING THROUGHPUT IN OBS NETWORK
Directory of Open Access Journals (Sweden)
AMIT KUMAR GARG
2013-04-01
Full Text Available Optical Burst Switching (OBS) is a promising technology for future optical networks. Due to its less complicated implementation using current optical and electrical components, OBS is seen as the first step towards future Optical Packet Switching (OPS). In OBS, a key problem is to schedule bursts on wavelength channels whose bandwidth may become fragmented with so-called void (or idle) intervals, using algorithms that are both fast and bandwidth-efficient, so as to reduce burst loss. In this paper, a new scheme is proposed to improve throughput and to avoid contention in the OBS network. The proposed scheme offers the same node complexity as general OBS networks with optical buffers. It also avoids burst blocking in transit nodes, making it an efficient and simple burst contention avoidance mechanism. Simulation results show that the proposed scheme improves burst loss probability by 15% compared with existing OBS schemes, and also maximizes network throughput without excessively degrading other parameters such as end-to-end delay or ingress queue length.
Maximizing the biochemical resolving power of fluorescence microscopy.
Esposito, Alessandro; Popleteeva, Marina; Venkitaraman, Ashok R
2013-01-01
Most recent advances in fluorescence microscopy have focused on achieving spatial resolutions below the diffraction limit. However, the inherent capability of fluorescence microscopy to non-invasively resolve different biochemical or physical environments in biological samples has not yet been formally described, because an adequate and general theoretical framework is lacking. Here, we develop a mathematical characterization of the biochemical resolution in fluorescence detection with Fisher information analysis. To improve the precision and the resolution of quantitative imaging methods, we demonstrate strategies for the optimization of fluorescence lifetime, fluorescence anisotropy and hyperspectral detection, as well as different multi-dimensional techniques. We describe optimized imaging protocols, provide optimization algorithms, and characterize precision and resolving power in biochemical imaging through analysis of the general properties of Fisher information in fluorescence detection. These strategies enable the optimal use of the information content available within the limited photon budget typically available in fluorescence microscopy. This theoretical foundation leads to a generalized strategy for the optimization of multi-dimensional optical detection, and demonstrates how the parallel detection of all properties of fluorescence can maximize the biochemical resolving power of fluorescence microscopy, an approach we term Hyper Dimensional Imaging Microscopy (HDIM). Our work provides a theoretical framework for the description of the biochemical resolution in fluorescence microscopy, irrespective of spatial resolution, and for the development of a new class of microscopes that exploit multi-parametric detection systems.
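For the simplest instance of this Fisher-information programme, mono-exponential lifetime estimation from N photon arrival times, the Fisher information per photon is 1/τ², so no unbiased estimator beats a standard deviation of τ/√N. A numeric check of this textbook bound (not the paper's multi-dimensional HDIM analysis; all numbers illustrative):

```python
import math, random

random.seed(7)
tau_true, n_photons, n_trials = 2.5, 1000, 400
estimates = []
for _ in range(n_trials):
    # Ideal mono-exponential decay: the lifetime MLE is the sample mean
    # of the photon arrival times.
    times = [random.expovariate(1.0 / tau_true) for _ in range(n_photons)]
    estimates.append(sum(times) / n_photons)

mean_est = sum(estimates) / n_trials
var_est = sum((e - mean_est) ** 2 for e in estimates) / (n_trials - 1)
crlb_std = tau_true / math.sqrt(n_photons)   # Cramer-Rao bound on std
print(round(math.sqrt(var_est), 4), round(crlb_std, 4))
```

With background, IRF convolution, or finite gating, the per-photon information drops below 1/τ², which is exactly the kind of budget accounting the paper's framework formalizes.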
Network architecture underlying maximal separation of neuronal representations
Directory of Open Access Journals (Sweden)
Ron A Jortner
2013-01-01
Full Text Available One of the most basic and general tasks faced by all nervous systems is extracting relevant information from the organism's surrounding world. While the physical signals available to sensory systems are often continuous, variable, overlapping and noisy, the high-level neuronal representations used for decision-making tend to be discrete, specific, invariant, and highly separable. This study addresses the question of how neuronal specificity is generated. Inspired by experimental findings on network architecture in the olfactory system of the locust, I construct a highly simplified theoretical framework which allows for analytic solution of its key properties. For generalized feed-forward systems, I show that an intermediate range of connectivity values between source and target populations leads to a combinatorial explosion of wiring possibilities, resulting in input spaces which are, by their very nature, exquisitely sparsely populated. In particular, connection probability ½, as found in the locust antennal-lobe-mushroom-body circuit, serves to maximize separation of neuronal representations across the target Kenyon cells, and explains their specific and reliable responses. This analysis yields a function expressing response specificity in terms of lower-level network parameters; together with appropriate gain control, this leads to a simple neuronal algorithm for generating arbitrarily sparse and selective codes, linking network architecture and neural coding. I suggest a way to easily construct ecologically meaningful representations from this code.
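The combinatorial core can be illustrated directly: for random binary wiring vectors whose entries are 1 with probability p, the expected Hamming separation between two patterns of length n is 2p(1−p)n, maximized at p = ½. A toy check (illustrative simulation, not the locust circuit model):

```python
import random

def mean_separation(p, n, pairs, rng):
    """Mean Hamming distance between independent random binary wiring
    vectors whose entries are 1 with probability p."""
    total = 0
    for _ in range(pairs):
        a = [rng.random() < p for _ in range(n)]
        b = [rng.random() < p for _ in range(n)]
        total += sum(x != y for x, y in zip(a, b))
    return total / pairs

rng = random.Random(0)
n = 400
# Analytic expectation is 2*p*(1-p)*n: 72, 168, 200, 168, 72 for n = 400.
seps = {p: mean_separation(p, n, 300, rng) for p in (0.1, 0.3, 0.5, 0.7, 0.9)}
best = max(seps, key=seps.get)
print(best)
```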
Managing the innovation supply chain to maximize personalized medicine.
Waldman, S A; Terzic, A
2014-02-01
Personalized medicine epitomizes an evolving model of care tailored to the individual patient. This emerging paradigm harnesses radical technological advances to define each patient's molecular characteristics and decipher his or her unique pathophysiological processes. Translated into individualized algorithms, personalized medicine aims to predict, prevent, and cure disease without producing therapeutic adverse events. Although the transformative power of personalized medicine is generally recognized by physicians, patients, and payers, the complexity of translating discoveries into new modalities that transform health care is less appreciated. We often consider the flow of innovation and technology along a continuum of discovery, development, regulation, and application bridging the bench with the bedside. However, this process also can be viewed through a complementary prism, as a necessary supply chain of services and providers, each making essential contributions to the development of the final product to maximize value to consumers. Considering personalized medicine in this context of supply chain management highlights essential points of vulnerability and/or scalability that can ultimately constrain translation of the biological revolution or potentiate it into individualized diagnostics and therapeutics for optimized value creation and delivery.
Optimization of solar air collector using genetic algorithm and artificial bee colony algorithm
Energy Technology Data Exchange (ETDEWEB)
Sencan Sahin, Arzu [Sueleyman Demirel University, Technology Faculty, Isparta (Turkey)
2012-11-15
The thermal performance of a solar air collector depends on many parameters, such as inlet air temperature, air velocity, collector slope, and properties of the collector itself. In this study, the effects of the different parameters that affect the performance of the solar air collector are investigated. In order to maximize the thermal performance of a solar air collector, a genetic algorithm (GA) and an artificial bee colony algorithm (ABC) have been used. The results obtained indicate that the GA and ABC algorithms can be applied successfully to the optimization of the thermal performance of a solar air collector. (orig.)
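As a sketch of how a GA searches such a parameter space, here is a minimal genetic algorithm maximizing a toy stand-in objective (the quadratic `efficiency` curve, population size, and mutation scale are illustrative assumptions, not the paper's collector model):

```python
import random

def efficiency(x):
    # Toy stand-in for collector thermal efficiency: peaks at a
    # normalized parameter value of x = 0.3 on [0, 1].
    return 1.0 - (x - 0.3) ** 2

def genetic_maximize(f, pop_size=30, gens=60, rng=None):
    rng = rng or random.Random(0)
    pop = [rng.random() for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=f, reverse=True)
        elite = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(elite):
            a, b = rng.sample(elite, 2)
            child = 0.5 * (a + b)               # arithmetic crossover
            child += rng.gauss(0.0, 0.05)       # Gaussian mutation
            children.append(min(1.0, max(0.0, child)))
        pop = elite + children                  # elitism: best survive
    return max(pop, key=f)

best = genetic_maximize(efficiency)
print(round(best, 2))
```

An ABC variant replaces crossover/mutation with employed, onlooker, and scout bee phases, but operates on the same encode-evaluate-vary loop.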
Neural correlates of rhythmic expectancy
Directory of Open Access Journals (Sweden)
Theodore P. Zanto
2006-01-01
Full Text Available Temporal expectancy is thought to play a fundamental role in the perception of rhythm. This review summarizes recent studies that investigated rhythmic expectancy by recording neuroelectric activity with high temporal resolution during the presentation of rhythmic patterns. Prior event-related brain potential (ERP) studies have uncovered auditory evoked responses that reflect detection of onsets, offsets, sustains, and abrupt changes in acoustic properties such as frequency, intensity, and spectrum, in addition to indexing higher-order processes such as auditory sensory memory and the violation of expectancy. In our studies of rhythmic expectancy, we measured emitted responses - a type of ERP that occurs when an expected event is omitted from a regular series of stimulus events - in simple rhythms with temporal structures typical of music. Our observations suggest that middle-latency gamma-band (20-60 Hz) activity (GBA) plays an essential role in auditory rhythm processing. Evoked (phase-locked) GBA occurs in the presence of physically presented auditory events and reflects the degree of accent. Induced (non-phase-locked) GBA reflects temporally precise expectancies for strongly and weakly accented events in sound patterns. Thus far, these findings support theories of rhythm perception that posit temporal expectancies generated by active neural processes.
Zagatto, A; Redkva, P; Loures, J; Kalva Filho, C; Franco, V; Kaminagakura, E; Papoti, M
2011-12-01
The aims of this study were: (i) to measure energy system contributions in the maximal anaerobic running test (MART); and (ii) to verify any correlation between MART and maximal accumulated oxygen deficit (MAOD). Eleven members of the armed forces were recruited for this study. Participants performed the MART and MAOD tests, both on a treadmill. The MART consisted of intermittent exercise, 20 s of effort with 100 s of recovery after each effort. Energy system contributions in the MART were determined from excess post-exercise oxygen consumption, the lactate response, and oxygen uptake measurements. MAOD was determined from five submaximal intensities and one supramaximal intensity exercise corresponding to 120% of maximal oxygen uptake intensity. Energy system contributions were 65.4±1.1% aerobic, 29.5±1.1% anaerobic alactic, and 5.1±0.5% anaerobic lactic over the whole test, while during the effort periods alone the anaerobic contribution corresponded to 73.5±1.0%. Maximal power in the MART corresponded to 111.25±1.33 mL/kg/min but did not significantly correlate with MAOD (4.69±0.30 L and 70.85±4.73 mL/kg). We concluded that the anaerobic alactic system is the main energy system in MART efforts and that this test did not significantly correlate with MAOD. © 2011 John Wiley & Sons A/S.
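The MAOD procedure itself is a linear extrapolation: regress steady-state VO2 on submaximal intensity, predict the O2 demand at the supramaximal intensity, and subtract what was actually consumed. A sketch with made-up numbers (all values illustrative, not the study's data):

```python
# Classic MAOD procedure: linear VO2-vs-intensity regression from
# submaximal bouts, extrapolated to a supramaximal intensity.
# All numbers below are illustrative, not the study's data.

speeds = [8, 10, 12, 14, 16]             # km/h, submaximal bouts
vo2 = [28.0, 34.5, 41.0, 47.5, 54.0]     # mL/kg/min, steady-state uptake

n = len(speeds)
mx = sum(speeds) / n
my = sum(vo2) / n
slope = sum((x - mx) * (y - my) for x, y in zip(speeds, vo2)) / \
        sum((x - mx) ** 2 for x in speeds)
intercept = my - slope * mx

supra_speed = 20.0                       # e.g., 120% of the speed at VO2max
demand = slope * supra_speed + intercept # estimated O2 demand (mL/kg/min)
measured_uptake = 58.0                   # mean uptake actually measured
duration_min = 3.0
# MAOD = (estimated demand - measured uptake) * exercise duration
maod = (demand - measured_uptake) * duration_min
print(round(demand, 1), round(maod, 1))  # -> 67.0 27.0
```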
From entropy-maximization to equality-maximization: Gauss, Laplace, Pareto, and Subbotin
Eliazar, Iddo
2014-12-01
The entropy-maximization paradigm of statistical physics is well known to generate the omnipresent Gauss law. In this paper we establish an analogous socioeconomic model which maximizes social equality, rather than physical disorder, in the context of the distributions of income and wealth in human societies. We show that, on a logarithmic scale, the Laplace law is the socioeconomic equality-maximizing counterpart of the physical entropy-maximizing Gauss law, and that this law manifests an optimized balance between two opposing forces: (i) the rich and powerful, striving to amass ever more wealth, and thus to increase social inequality; and (ii) the masses, struggling to form more egalitarian societies, and thus to increase social equality. Our results lead from log-Gauss statistics to log-Laplace statistics, yield Paretian power-law tails of income and wealth distributions, and show how the emergence of a middle class depends on the underlying levels of socioeconomic inequality and variability. Also, in the context of asset prices with Laplace-distributed returns, our results imply that financial markets generate an optimized balance between risk and predictability.
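The Gauss/Laplace correspondence reflects which constraint enters the maximization: fixing the variance yields the Gaussian, while fixing the mean absolute deviation yields the Laplace. A quick numeric check of the standard differential-entropy formulas (textbook results, not the paper's derivation):

```python
import math

# Differential entropies in nats:
#   Laplace with mean absolute deviation b:  ln(2b) + 1
#   Gaussian with the same mean absolute deviation has sigma = b*sqrt(pi/2)
#   (since E|X| = sigma*sqrt(2/pi)), and entropy 0.5*ln(2*pi*e*sigma^2).
b = 1.0
h_laplace = math.log(2 * b) + 1
sigma = b * math.sqrt(math.pi / 2)   # matches E|x| = b
h_gauss = 0.5 * math.log(2 * math.pi * math.e * sigma ** 2)
print(round(h_laplace, 4), round(h_gauss, 4))  # -> 1.6931 1.6447
```

Under the mean-absolute-deviation constraint the Laplace entropy exceeds the Gaussian's, which is the sense in which the Laplace law is the "equality-maximizing" counterpart on a logarithmic scale.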
Asymmetry During Maximal Sprint Performance in 11- to 16-Year-Old Boys.
Meyers, Robert W; Oliver, Jon L; Hughes, Michael G; Lloyd, Rhodri S; Cronin, John B
2017-02-01
The aim of this study was to examine the influence of age and maturation upon the magnitude of asymmetry in force, stiffness and the spatiotemporal determinants of maximal sprint speed in a large cohort of boys. 344 boys between the ages of 11 and 16 years completed an anthropometric assessment and a 35 m sprint test, during which sprint performance was recorded via a ground-level optical measurement system. Maximal sprint velocity, as well as asymmetry in spatiotemporal variables and modeled force and stiffness data, was established for each participant. For analysis, participants were grouped into chronological age, maturation and percentile groups. The range of mean asymmetry across age groups and variables was 2.3-12.6%. The magnitude of asymmetry in all the sprint variables was not significantly different across age and maturation groups (p > .05), except for relative leg stiffness (p < .05). No strong relationships between asymmetry in sprint variables and maximal sprint velocity were evident (rs < .39). These results provide a novel benchmark for the expected magnitude of asymmetry in a large cohort of uninjured boys during maximal sprint performance. Asymmetry in sprint performance is largely unaffected by age or maturation, and no strong relationships exist between the magnitude of asymmetry and maximal sprint velocity.
Energy Technology Data Exchange (ETDEWEB)
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
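The object/function loop can be mimicked in a few lines: functions collide by composition and the product re-enters the ensemble. A toy stand-in (unary functions on Z_97 with an explicit size cap; not Fontana's lambda-calculus AlChemy system):

```python
import random

# Toy "function gas": objects are unary functions on Z_97; a collision of
# (f, g) produces the composition f(g(.)), which replaces a random member
# of the ensemble. A stand-in for the lambda-calculus objects, with a
# size cap so that evaluation stays cheap and recursion stays bounded.
random.seed(3)

primitives = [lambda x: (x + 1) % 97,
              lambda x: (2 * x) % 97,
              lambda x: (x * x) % 97]
gas = [(random.choice(primitives), 1) for _ in range(20)]  # (function, size)

def collide(f, g):
    return lambda x, f=f, g=g: f(g(x))     # interaction = composition

for _ in range(200):
    (f, sf), (g, sg) = random.sample(gas, 2)
    if sf + sg + 1 <= 500:                 # bound object size
        gas[random.randrange(len(gas))] = (collide(f, g), sf + sg + 1)

# Fingerprint surviving objects by their action on a probe value.
fingerprints = {fn(2) for fn, _ in gas}
print(len(fingerprints), "distinct behaviours among", len(gas), "objects")
```

Self-replicators in the paper's sense would be functions f with f∘g behaviourally equal to f for many partners g; detecting them amounts to comparing fingerprints before and after collisions.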
Exponential Lower Bounds for the PPSZ k-SAT Algorithm
DEFF Research Database (Denmark)
Chen, Shiteng; Scheder, Dominik Alban; Talebanfard, Navid
2013-01-01
In 1998, Paturi, Pudlák, Saks, and Zane presented PPSZ, an elegant randomized algorithm for k-SAT. Fourteen years on, this algorithm is still the fastest known worst-case algorithm. They proved that its expected running time on k-CNF formulas with n variables is at most 2^((1−ε_k)n), where ε_k = Θ(1/k...
Expectation propagation for continuous time stochastic processes
Cseke, Botond; Schnoerr, David; Opper, Manfred; Sanguinetti, Guido
2016-12-01
We consider the inverse problem of reconstructing the posterior measure over the trajectories of a diffusion process from discrete time observations and continuous time constraints. We cast the problem in a Bayesian framework and derive approximations to the posterior distributions of single time marginals using variational approximate inference, giving rise to an expectation propagation type algorithm. For non-linear diffusion processes, this is achieved by leveraging moment closure approximations. We then show how the approximation can be extended to a wide class of discrete-state Markov jump processes by making use of the chemical Langevin equation. Our empirical results show that the proposed method is computationally efficient and provides good approximations for these classes of inverse problems.
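The core of an expectation propagation step is moment matching: an intractable tilted distribution is replaced by the Gaussian with the same zeroth, first and second moments. A single-factor illustration using the classical Gaussian-times-probit case (a standard textbook example, not the diffusion-process setting of the paper):

```python
import math

def phi(x):      # standard normal pdf
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):      # standard normal cdf
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Tilted distribution N(x; mu, s2) * Phi(x), as arises when EP absorbs one
# probit-style factor. Closed-form moment matching (standard results):
mu, s2 = 0.0, 1.0
z = mu / math.sqrt(1.0 + s2)
Z = Phi(z)                                            # normalizer
mean = mu + s2 * phi(z) / (Z * math.sqrt(1.0 + s2))   # matched first moment
var = s2 - (s2 ** 2 / (1.0 + s2)) * (phi(z) / Z) * (z + phi(z) / Z)

# Brute-force check of the matched moments by quadrature.
step = 16.0 / 200000
xs = [-8.0 + step * i for i in range(200001)]
w = [math.exp(-0.5 * (x - mu) ** 2 / s2) * Phi(x) for x in xs]
Zq = sum(w)
mq = sum(x * wi for x, wi in zip(xs, w)) / Zq
vq = sum((x - mq) ** 2 * wi for x, wi in zip(xs, w)) / Zq
print(round(mean, 4), round(mq, 4))
```

A full EP loop repeats this projection factor by factor, dividing the matched Gaussian by the cavity to update each site approximation; the paper extends the same idea to trajectory posteriors via moment closure.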
Rational Expectations and Economic Models.
Sheffrin, Steven M.
1980-01-01
Examines how rational expectation models can help describe and predict trends within an economy and explains research needs within the discipline of economics which will enable economists to make more valid predictions. (DB)
Life expectancy in bipolar disorder
DEFF Research Database (Denmark)
Kessing, Lars Vedel; Vradi, Eleni; Andersen, Per Kragh
2015-01-01
OBJECTIVE: Life expectancy in patients with bipolar disorder has been reported to be decreased by 11 to 20 years. These calculations are based on data for individuals at the age of 15 years. However, this may be misleading for patients with bipolar disorder in general as most patients have a later...... onset of illness. The aim of the present study was to calculate the remaining life expectancy for patients of different ages with a diagnosis of bipolar disorder. METHODS: Using nationwide registers of all inpatient and outpatient contacts to all psychiatric hospitals in Denmark from 1970 to 2012 we...... calculated remaining life expectancies for values of age 15, 25, 35, ..., 75 years among all individuals alive in year 2000. RESULTS: For the typical male or female patient aged 25 to 45 years, the remaining life expectancy was decreased by 12.0-8.7 years and 10.6-8.3 years, respectively. The ratio between......
Physical activity extends life expectancy
Leisure-time physical activity is associated with longer life expectancy, even at relatively low levels of activity and regardless of body weight, according to a study by a team of researchers led by the NCI.
Kube, Tobias; D'Astolfo, Lisa; Glombiewski, Julia A; Doering, Bettina K; Rief, Winfried
2017-09-01
should be exposed to situations where the discrepancy between patients' expectations and actual situational outcomes can be maximized. The Depressive Expectations Scale can be completed repeatedly to monitor a patient's progress within cognitive-behavioural treatment. © 2016 The British Psychological Society.
Burn Patient Expectations from Nurses
Sibel Yilmaz sahin; Umran Dal; Gulsen Vural
2014-01-01
AIM: Burn is a kind of painful trauma that requires a long period of treatment and also changes the patient's body image. For this reason, nursing care of burn patients is very important. In this study, in order to provide qualified care to burned patients, patients' expectations from nurses were aimed to be established. METHODS: Patients' expectations were evaluated on 101 patients with burn in Ministry of Health Ankara Numune Education and Research Hospital Burn Servic...
Rational Expectation Can Preclude Trades
Matsuhisa, Takashi; Ishikawa, Ryuichiro
2003-01-01
We consider a pure exchange economy under uncertainty in which the traders have a non-partition information structure. They are willing to trade amounts of state-contingent commodities and they know their own expectations. Common knowledge of these conditions among all the traders can preclude trade if the initial endowment allocation is ex-ante Pareto optimal. Furthermore, we introduce rational expectations equilibrium under non-partition information, and prove the existence theorem...
Rational Expectations: Retrospect and Prospect
Hoover, Kevin; Young, Warren
2011-01-01
The transcript of a panel discussion marking the fiftieth anniversary of John Muth's "Rational Expectations and the Theory of Price Movements" (Econometrica 1961). The panel consists of Michael Lovell, Robert Lucas, Dale Mortensen, Robert Shiller, and Neil Wallace. The discussion is moderated by Kevin Hoover and Warren Young. The panel touches on a wide variety of issues related to the rational-expectations hypothesis, including: its history, starting with Muth's work at Carnegie Tech; its me...
Conceptual space systems design using meta-heuristic algorithms
Kim, Byoungsoo
A recent tendency in designing Space Systems for a specific mission can be described easily and explicitly by the new design-to-cost philosophy, "faster, better, cheaper" (fast-track, innovative, lower-cost, small-sat). This means that Space Systems engineers must do more with less and in less time. This new philosophy can result in space exploration programs with smaller spacecraft, more frequent flights at a remarkably lower cost per flight (cost first, performance second), shorter development schedules, and more focused missions. Some early attempts at "faster, better, cheaper" possibly moved too fast and eliminated critical tests or did not "space-qualify" the innovations, causing failure. A new discipline of Constrained Optimization must be employed. With this new philosophy, Space Systems Design becomes a difficult problem to model in the new, more challenging environment. The objective of Space Systems Design has moved from maximizing space mission performance under weak time and weak cost constraints (accepting schedule slippage and cost growth) but with technology risk constraints, to maximizing mission goals under firm cost and schedule constraints but with prudent technology risk constraints, or, equivalently, maximizing "expected" space mission performance per unit cost. Within this mindset, a complex Conceptual Space Systems Design Model was formulated as a (simply bounded) Constrained Combinatorial Optimization Problem with Estimated Total Mission Cost (ETMC) as its objective function to be minimized and subsystem trade-offs and design parameters as the decision variables in its design space, using parametric estimating relationships (PERs) and cost estimating relationships (CERs). Here, given a complex Conceptual Space Systems Design Problem, a (simply bounded) Constrained Combinatorial Optimization "solution" is defined as the process of achieving the most favorable alternative for the system on the basis of objective decision-making evaluation
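Meta-heuristics of the kind invoked here can be sketched on a toy version of the problem; below, simulated annealing minimizes an illustrative ETMC over a tiny discrete design space (the cost table and the coupling penalty are invented stand-ins, not real PER/CER data):

```python
import itertools, math, random

# Toy conceptual-design space: 4 subsystems with 3 candidate options each.
# Option costs and the technology-mismatch coupling penalty are invented
# stand-ins for PER/CER-based estimates, purely for illustration.
random.seed(11)
cost = [[3.0, 2.0, 4.0],
        [1.5, 2.5, 1.0],
        [2.0, 3.5, 2.5],
        [4.0, 1.0, 3.0]]

def etmc(design):
    """Illustrative Estimated Total Mission Cost for a design tuple."""
    base = sum(cost[i][c] for i, c in enumerate(design))
    penalty = sum(0.8 * abs(design[i] - design[i + 1]) for i in range(3))
    return base + penalty

def anneal(steps=4000, t0=2.0):
    d = [random.randrange(3) for _ in range(4)]
    best, best_c = d[:], etmc(d)
    for s in range(steps):
        t = t0 * (1.0 - s / steps) + 1e-3        # linear cooling schedule
        cand = d[:]
        cand[random.randrange(4)] = random.randrange(3)  # mutate one subsystem
        dc = etmc(cand) - etmc(d)
        if dc <= 0 or random.random() < math.exp(-dc / t):
            d = cand
            if etmc(d) < best_c:
                best, best_c = d[:], etmc(d)
    return best, best_c

best, best_c = anneal()
# The space is tiny (3**4 = 81 designs), so exhaustive search verifies it.
exact = min(itertools.product(range(3), repeat=4), key=etmc)
print(best_c, etmc(exact))
```

Real conceptual-design spaces are far too large to enumerate, which is exactly why the dissertation turns to meta-heuristics; the exhaustive check here is only possible because the toy space has 81 designs.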
Life expectancy in bipolar disorder.
Kessing, Lars Vedel; Vradi, Eleni; Andersen, Per Kragh
2015-08-01
Life expectancy in patients with bipolar disorder has been reported to be decreased by 11 to 20 years. These calculations are based on data for individuals at the age of 15 years. However, this may be misleading for patients with bipolar disorder in general as most patients have a later onset of illness. The aim of the present study was to calculate the remaining life expectancy for patients of different ages with a diagnosis of bipolar disorder. Using nationwide registers of all inpatient and outpatient contacts to all psychiatric hospitals in Denmark from 1970 to 2012 we calculated remaining life expectancies for values of age 15, 25, 35, ..., 75 years among all individuals alive in year 2000. For the typical male or female patient aged 25 to 45 years, the remaining life expectancy was decreased by 12.0-8.7 years and 10.6-8.3 years, respectively. The ratio between remaining life expectancy in bipolar disorder and that of the general population decreased with age, indicating that patients with bipolar disorder start losing life-years during early and mid-adulthood. Life expectancy in bipolar disorder is decreased substantially, but less so than previously reported. Patients start losing life-years during early and mid-adulthood. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
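The underlying life-table computation is standard: remaining life expectancy at age x accumulates survival probabilities, e_x ≈ 0.5 + Σ S(t). A sketch with a synthetic Gompertz-like mortality schedule (illustrative rates, not the Danish register data):

```python
# Remaining life expectancy from a life table: e_x ≈ 0.5 + Σ_t S(t), where
# S(t) is the probability of surviving t further whole years. The mortality
# schedule below is a synthetic Gompertz-like curve, not the register data.

def remaining_life_expectancy(qx, age):
    """qx[a] = probability of dying between exact ages a and a+1."""
    e, surv = 0.0, 1.0
    for a in range(age, len(qx)):
        surv *= 1.0 - qx[a]
        e += surv
    return e + 0.5          # half-year correction for deaths within the year

qx = [min(0.9, 0.0003 * 1.09 ** a) for a in range(111)]   # synthetic rates
e = {age: remaining_life_expectancy(qx, age) for age in (15, 25, 45, 65)}
for age in sorted(e):
    print(age, round(e[age], 1))
```

The study's comparison amounts to computing this quantity from two mortality schedules (patients vs. general population) at each starting age and differencing them.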
Cluster algorithms and computational complexity
Li, Xuenan
Cluster algorithms for the 2D Ising model with a staggered field have been studied and a new cluster algorithm for path sampling has been worked out. The complexity properties of the Bak-Sneppen model and the Growing Network model have been studied by using Computational Complexity Theory. The dynamic critical behavior of the two-replica cluster algorithm is studied. Several versions of the algorithm are applied to the two-dimensional, square lattice Ising model with a staggered field. The dynamic exponent for the full algorithm is found to be less than 0.5. It is found that odd translations of one replica with respect to the other, together with global flips, are essential for obtaining a small value of the dynamic exponent. The path sampling problem for the 1D Ising model is studied using both a local algorithm and a novel cluster algorithm. The local algorithm is extremely inefficient at low temperature, where the integrated autocorrelation time is found to be proportional to the fourth power of the correlation length. The dynamic exponent of the cluster algorithm is found to be zero and it is therefore proved to be much more efficient than the local algorithm. The parallel computational complexity of the Bak-Sneppen evolution model is studied. It is shown that Bak-Sneppen histories can be generated by a massively parallel computer in a time that is polylog in the length of the history, which means that the logical depth of producing a Bak-Sneppen history is exponentially less than the length of the history. The parallel dynamics for generating Bak-Sneppen histories is contrasted to standard Bak-Sneppen dynamics. The parallel computational complexity of the Growing Network model is studied. The growth of the network with linear kernels is shown to be not complex and an algorithm with polylog parallel running time is found. The growth of the network with gamma ≥ 2 super-linear kernels can be realized by a randomized parallel algorithm with polylog expected running time.
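As a concrete member of the algorithm family discussed, here is a single-cluster (Wolff-type) update for the 1D Ising chain, where the exact nearest-neighbour correlation tanh β provides a built-in check (textbook algorithm, not the two-replica or staggered-field variants studied in the thesis):

```python
import math, random

# Wolff single-cluster update for a 1D Ising chain with free boundaries.
# Exact check: nearest-neighbour correlation <s_i s_{i+1}> = tanh(beta).
random.seed(5)
N, beta = 100, 1.0
p_add = 1.0 - math.exp(-2.0 * beta)     # bond-activation probability
spins = [random.choice((-1, 1)) for _ in range(N)]

def wolff_step():
    seed = random.randrange(N)
    cluster = {seed}
    frontier = [seed]
    while frontier:
        i = frontier.pop()
        for j in (i - 1, i + 1):
            if 0 <= j < N and j not in cluster \
               and spins[j] == spins[i] and random.random() < p_add:
                cluster.add(j)
                frontier.append(j)
    for i in cluster:                   # flip the whole cluster at once
        spins[i] = -spins[i]

for _ in range(500):                    # equilibrate
    wolff_step()
corr, n_samples = 0.0, 1500
for _ in range(n_samples):
    wolff_step()
    corr += sum(spins[i] * spins[i + 1] for i in range(N - 1)) / (N - 1)
corr /= n_samples
print(round(corr, 3), round(math.tanh(beta), 3))
```

Because whole clusters flip in one move, the autocorrelation time stays short even where single-spin local updates slow down, which is the efficiency gap the thesis quantifies via dynamic exponents.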
Impact of startup scheme on Francis runner life expectancy
Energy Technology Data Exchange (ETDEWEB)
Gagnon, M; Tahan, S A; Bocher, P [Department of Mechanical Engineering, Ecole de technologie superieure (ETS) 1100, rue Notre-Dame Ouest, Montreal (Canada); Thibault, D, E-mail: martin.gagnon.8@ens.etsmtl.c [Institut de recherche d' Hydro-Quebec (IREQ), 1800, boul. Lionel-Boulet, Varennes, J3X 1S1 (Canada)
2010-08-15
Francis runners are subject to complex dynamic forces which may lead to eventual blade cracking and the need for corrective measures. Damage due to cracks in runner blades is usually not a safety issue but can generate unexpected downtime and high repair costs. Avoiding the most damaging operating conditions is often the only option left to plant operators to maximize the life expectancy of their runner. The startup scheme is one of the few parameters controlled by the end user and could be used to minimize the damage induced to the runner. In this study, two startup schemes have been used to investigate the life expectancy of a Francis runner using in situ measurements. The results obtained show that the damage induced during the startup event can be significantly reduced by changing the startup scheme. In our opinion, an optimization of the startup scheme with regard to fatigue damage could significantly extend the life expectancy and reliability of Francis runners.
A Lyapunov based approach to energy maximization in renewable energy technologies
Iyasere, Erhun
This dissertation describes the design and implementation of Lyapunov-based control strategies for the maximization of the power captured by renewable energy harnessing technologies such as (i) a variable speed, variable pitch wind turbine, (ii) a variable speed wind turbine coupled to a doubly fed induction generator, and (iii) a solar power generating system charging a constant voltage battery. First, a torque control strategy is presented to maximize wind energy captured in variable speed, variable pitch wind turbines at low to medium wind speeds. The proposed strategy applies control torque to the wind turbine pitch and rotor subsystems to simultaneously control the blade pitch and tip speed ratio, via the rotor angular speed, to an optimum point at which the capture efficiency is maximum. The control method allows for aerodynamic rotor power maximization without exact knowledge of the wind turbine model. A series of numerical results show that the wind turbine can be controlled to achieve maximum energy capture. Next, a control strategy is proposed to maximize the wind energy captured in a variable speed wind turbine, with an internal induction generator, at low to medium wind speeds. The proposed strategy controls the tip speed ratio, via the rotor angular speed, to an optimum point at which the efficiency constant (or power coefficient) is maximal for a particular blade pitch angle and wind speed by using the generator rotor voltage as a control input. This control method allows for aerodynamic rotor power maximization without exact wind turbine model knowledge. Representative numerical results demonstrate that the wind turbine can be controlled to achieve near maximum energy capture. Finally, a power system consisting of a photovoltaic (PV) array panel, dc-to-dc switching converter, charging a battery is considered wherein the environmental conditions are time-varying. A backstepping PWM controller is developed to maximize the power of the solar generating
Burn Patient Expectations from Nurses
Directory of Open Access Journals (Sweden)
Sibel Yilmaz sahin
2014-02-01
Full Text Available AIM: Burn is a painful trauma that requires a long period of treatment and also changes the patient's body image. For this reason, nursing care of burn patients is very important. In this study, in order to provide qualified care to burned patients, we aimed to establish patients' expectations from nurses. METHODS: Expectations were evaluated in 101 burn patients at the Ministry of Health Ankara Numune Education and Research Hospital Burn Service and the Gulhane Military Medical Academy Education and Research Hospital Burn Center. A questionnaire developed by the researchers was used for collecting data. The questions on the questionnaire were classified into four groups to evaluate the patients' expectations about communication, information, care and discharge. Data were evaluated using SPSS 12 package software. RESULTS: In this study, 48.5% of patients were in the 18-28 age group, 79.2% were male and 51.5% of patients were employed. Almost all patients expected nurses to give them confidence (98%) and to give them information about the latest developments in their disease. Patients' prior expectations from nurses about care were that nurses do their treatments regularly (100%) and take the necessary precautions to prevent infection (100%). 97% of patients expected nurses to give them information about the drugs, materials and equipment that they were going to use at discharge. CONCLUSION: As a result, we found that burn patients' expectations from nurses about communication, information, care and discharge were high. [TAF Prev Med Bull 2014; 13(1): 37-46]
Diffusion Tensor Estimation by Maximizing Rician Likelihood.
Landman, Bennett; Bazin, Pierre-Louis; Prince, Jerry
2007-01-01
Diffusion tensor imaging (DTI) is widely used to characterize white matter in health and disease. Previous approaches to the estimation of diffusion tensors have either been statistically suboptimal or have used Gaussian approximations of the underlying noise structure, which is Rician in reality. This can cause quantities derived from these tensors - e.g., fractional anisotropy and apparent diffusion coefficient - to diverge from their true values, potentially leading to artifactual changes that confound clinically significant ones. This paper presents a novel maximum likelihood approach to tensor estimation, denoted Diffusion Tensor Estimation by Maximizing Rician Likelihood (DTEMRL). In contrast to previous approaches, DTEMRL considers the joint distribution of all observed data in the context of an augmented tensor model to account for variable levels of Rician noise. To improve numeric stability and prevent non-physical solutions, DTEMRL incorporates a robust characterization of positive definite tensors and a new estimator of underlying noise variance. In simulated and clinical data, mean squared error metrics show consistent and significant improvements from low clinical SNR to high SNR. DTEMRL may be readily supplemented with spatial regularization or a priori tensor distributions for Bayesian tensor estimation.
Maximal respiratory pressures among adolescent swimmers
Directory of Open Access Journals (Sweden)
M.A. Rocha Crispino Santos
2011-03-01
Full Text Available Maximal inspiratory pressures (MIP) and maximal expiratory pressures (MEP) are useful indices of respiratory muscle strength in athletes. The aims of this study were: to describe the respiratory muscle strength of an Olympic junior swim team, at baseline and after a standard physical training session; and to determine if there is a differential inspiratory and expiratory pressure response to the physical training. A cross-sectional study evaluated 28 international-level swimmers with ages ranging from 15 to 17 years, 19 (61%) being males. At baseline, MIP was found to be lower in females (P = .001). The mean values reached by males and females were: MIP (cmH2O) = M: 100.4 (± 26.5) / F: 67.8 (± 23.2); MEP (cmH2O) = M: 87.4 (± 20.7) / F: 73.9 (± 17.3). After the physical training they reached: MIP (cmH2O) = M: 95.3 (± 30.3) / F: 71.8 (± 35.6); MEP (cmH2O) = M: 82.8 (± 26.2) / F: 70.4 (± 8.3). No differential pressure responses were observed in either males or females. These results suggest that swimmers can sustain the magnitude of the initial maximal pressures. Other studies should be developed to clarify if MIP and MEP could be used as a marker of an athlete's performance.
Maximizing binding capacity for protein A chromatography.
Ghose, Sanchayita; Zhang, Jennifer; Conley, Lynn; Caple, Ryan; Williams, Kevin P; Cecchini, Douglas
2014-01-01
Advances in cell culture expression levels in the last two decades have resulted in monoclonal antibody titers of ≥10 g/L to be purified downstream. A high-capacity capture step is crucial to prevent purification from becoming the bottleneck in the manufacturing process. Despite its high cost and other disadvantages, Protein A chromatography still remains the optimal choice for antibody capture due to the excellent selectivity provided by this step. A dual flow loading strategy was used in conjunction with a new-generation high-capacity Protein A resin to maximize binding capacity without significantly increasing processing time. Optimum conditions were established using a simple empirical Design of Experiments (DOE)-based model and verified with a wide panel of antibodies. Dynamic binding capacities of >65 g/L could be achieved under these new conditions, more than one and a half times the values typically achieved with Protein A in the past. Furthermore, comparable process performance and product quality were demonstrated for the Protein A step at the increased loading. © 2014 American Institute of Chemical Engineers.
Quantum Mechanics and the Principle of Maximal Variety
Smolin, Lee
2016-06-01
Quantum mechanics is derived from the principle that the universe contain as much variety as possible, in the sense of maximizing the distinctiveness of each subsystem. The quantum state of a microscopic system is defined to correspond to an ensemble of subsystems of the universe with identical constituents and similar preparations and environments. A new kind of interaction is posited amongst such similar subsystems which acts to increase their distinctiveness, by extremizing the variety. In the limit of large numbers of similar subsystems this interaction is shown to give rise to Bohm's quantum potential. As a result the probability distribution for the ensemble is governed by the Schroedinger equation. The measurement problem is naturally and simply solved. Microscopic systems appear statistical because they are members of large ensembles of similar systems which interact non-locally. Macroscopic systems are unique, and are not members of any ensembles of similar systems. Consequently their collective coordinates may evolve deterministically. This proposal could be tested by constructing quantum devices from entangled states of a modest number of qubits which, by their combinatorial complexity, can be expected to have no natural copies.
Suppression of maximal linear gluon polarization in angular asymmetries
Boer, Daniël; Mulders, Piet J.; Zhou, Jian; Zhou, Ya-jin
2017-10-01
We perform a phenomenological analysis of the cos 2ϕ azimuthal asymmetry in virtual photon plus jet production induced by the linear polarization of gluons in unpolarized pA collisions. Although the linearly polarized gluon distribution becomes maximal at small x, TMD evolution leads to a Sudakov suppression of the asymmetry with increasing invariant mass of the γ*-jet pair. Employing a small-x model input distribution, the asymmetry is found to be strongly suppressed under TMD evolution, but still remains sufficiently large to be measurable in the typical kinematical region accessible at RHIC or LHC at moderate photon virtuality, whereas it is expected to be negligible in Z/W-jet pair production at LHC. We point out the optimal kinematics for RHIC and LHC studies, in order to expedite the first experimental studies of the linearly polarized gluon distribution through this process. We further argue that this is a particularly clean process to test the kt-resummation formalism in the small-x regime.
The maximal operator in weighted variable spaces Lp(·)
Directory of Open Access Journals (Sweden)
Vakhtang Kokilashvili
2007-01-01
Full Text Available We study the boundedness of the maximal operator in the weighted spaces Lp(·)(ρ) over a bounded open set Ω in the Euclidean space ℝn or a Carleson curve Γ in a complex plane. The weight function may belong to a certain version of a general Muckenhoupt-type condition, which is narrower than the expected Muckenhoupt condition for variable exponent, but coincides with the usual Muckenhoupt class Ap in the case of constant p. In the case of Carleson curves another class of weights of radial type is also considered, of the form ρ(t) = ∏k=1..m wk(|t−tk|), tk ∈ Γ, where wk has the property that r^(1/p(tk)) wk(r) ∈ Φ10, where Φ10 is a certain Zygmund–Bari–Stechkin-type class. It is assumed that the exponent p(t) satisfies the Dini–Lipschitz condition. For such radial-type weights the final statement on the boundedness is given in terms of the index numbers of the functions wk (similar in a sense to the Boyd indices for the Young functions defining Orlicz spaces).
Optimized Design of Microresonators Using Genetic Algorithm
Directory of Open Access Journals (Sweden)
G.Uma
2006-10-01
Full Text Available This paper presents the optimization of microresonator design using a genetic algorithm. Optimized physical layout parameters are generated using the genetic algorithm. The optimization evaluates parameters by minimizing active device area, electrostatic drive voltage, a weighted combination of area and drive voltage, or by maximizing displacement at resonance. The desired resonant frequency and mode frequency separations are governed by the objective function. Layouts are generated for the optimized design parameters using Coventorware. Modal analysis is performed and compared with the designed resonant frequency.
Maximal and sub-maximal functional lifting performance at different platform heights.
Savage, Robert J; Jaffrey, Mark A; Billing, Daniel C; Ham, Daniel J
2015-01-01
Introducing valid physical employment tests requires identifying and developing a small number of practical tests that provide broad coverage of physical performance across the full range of job tasks. This study investigated discrete lifting performance across various platform heights reflective of common military lifting tasks. Sixteen Australian Army personnel performed a discrete lifting assessment to maximal lifting capacity (MLC) and maximal acceptable weight of lift (MAWL) at four platform heights between 1.30 and 1.70 m. There were strong correlations between platform height and normalised lifting performance for MLC (R² = 0.76 ± 0.18, p < 0.05) and MAWL (R² = 0.73 ± 0.21, p < 0.05). The developed relationship allowed prediction of lifting capacity at one platform height based on lifting capacity at any of the three other heights, with a standard error of < 4.5 kg and < 2.0 kg for MLC and MAWL, respectively.
Broken Expectations: Violation of Expectancies, Not Novelty, Captures Auditory Attention
Vachon, Francois; Hughes, Robert W.; Jones, Dylan M.
2012-01-01
The role of memory in behavioral distraction by auditory attentional capture was investigated: We examined whether capture is a product of the novelty of the capturing event (i.e., the absence of a recent memory for the event) or its violation of learned expectancies on the basis of a memory for an event structure. Attentional capture--indicated…
Maximizing wind power integration in distribution system
Energy Technology Data Exchange (ETDEWEB)
Nursebo Salih, S.; Chen, Peiyuan; Carlson, Ola [Chalmers Univ. of Technology (Sweden)
2011-07-01
Due to the location of favorable wind sites and lower connection costs associated with installing wind power in a distribution system, there is a need to know the hosting capacity of a distribution system so that it can be used effectively for injecting wind power into the power system. Therefore this paper presents a methodology to investigate the wind power hosting capacity of a distribution system. The stochastic nature of wind power and customer loads is taken into account using copulas; hence, it is possible to investigate various levels of correlation among customer loads. A simple algorithm is proposed for selecting the connection points of wind power in the network. The effectiveness of active management strategies such as wind power curtailment and reactive power compensation is thoroughly investigated. The analysis shows that allowing a curtailment level of as low as 0.2% with power factor (PF) control of wind turbines could boost the hosting capacity by 118%. (orig.)
Consumer's inflation expectations in Brazil
Directory of Open Access Journals (Sweden)
Fernando Ormonde Teixeira
Full Text Available Abstract This paper investigates the main components of consumers' inflation expectations. We combine the FGV's Consumer Survey with the indices of inflation (IPCA) and government-regulated prices, professional forecasts disclosed in the Focus report, and media data which we crawl from one of the biggest and most important Brazilian newspapers, Folha de São Paulo, to determine what factors are responsible for and improve consumers' forecast accuracy. We found gender, age and city of residence to be major elements when analyzing micro-data. Aggregate data show past inflation as an important trigger in the formation of consumers' expectations, and professional forecasts as negligible. Moreover, the media play a significant role, accounting not only for the formation of expectations but for a better understanding of actual inflation as well.
Expectations for a scientific collaboratory
DEFF Research Database (Denmark)
Sonnenwald, Diane H.
2003-01-01
In the past decade, a number of scientific collaboratories have emerged, yet adoption of scientific collaboratories remains limited. Meeting expectations is one factor that influences adoption of innovations, including scientific collaboratories. This paper investigates expectations scientists have...... with respect to scientific collaboratories. Interviews were conducted with 17 scientists who work in a variety of settings and have a range of experience conducting and managing scientific research. Results indicate that scientists expect a collaboratory to: support their strategic plans; facilitate management...... of the scientific process; have a positive or neutral impact on scientific outcomes; provide advantages and disadvantages for scientific task execution; and provide personal conveniences when collaborating across distances. These results both confirm existing knowledge and raise new issues for the design...
Directory of Open Access Journals (Sweden)
José Pinto Casquilho
2017-02-01
Full Text Available The search for hypothetical optimal solutions of landscape composition is a major issue in landscape planning and it can be outlined in a two-dimensional decision space involving economic value and landscape diversity, the latter being considered as a potential safeguard to the provision of services and externalities not accounted in the economic value. In this paper, we use decision models with different utility valuations combined with weighted entropies respectively incorporating rarity factors associated to Gini-Simpson and Shannon measures. A small example of this framework is provided and discussed for landscape compositional scenarios in the region of Nisa, Portugal. The optimal solutions relative to the different cases considered are assessed in the two-dimensional decision space using a benchmark indicator. The results indicate that the likely best combination is achieved by the solution using Shannon weighted entropy and a square root utility function, corresponding to a risk-averse behavior associated to the precautionary principle linked to safeguarding landscape diversity, anchoring for ecosystem services provision and other externalities. Further developments are suggested, mainly those relative to the hypothesis that the decision models here outlined could be used to revisit the stability-complexity debate in the field of ecological studies.
Worst-Case Energy Efficiency Maximization in a 5G Massive MIMO-NOMA System.
Chinnadurai, Sunil; Selvaprabhu, Poongundran; Jeong, Yongchae; Jiang, Xueqin; Lee, Moon Ho
2017-09-18
In this paper, we examine the robust beamforming design to tackle the energy efficiency (EE) maximization problem in a 5G massive multiple-input multiple-output (MIMO)-non-orthogonal multiple access (NOMA) downlink system with imperfect channel state information (CSI) at the base station. A novel joint user pairing and dynamic power allocation (JUPDPA) algorithm is proposed to minimize the inter user interference and also to enhance the fairness between the users. This work assumes imperfect CSI by adding uncertainties to channel matrices with worst-case model, i.e., ellipsoidal uncertainty model (EUM). A fractional non-convex optimization problem is formulated to maximize the EE subject to the transmit power constraints and the minimum rate requirement for the cell edge user. The designed problem is difficult to solve due to its nonlinear fractional objective function. We firstly employ the properties of fractional programming to transform the non-convex problem into its equivalent parametric form. Then, an efficient iterative algorithm is proposed established on the constrained concave-convex procedure (CCCP) that solves and achieves convergence to a stationary point of the above problem. Finally, Dinkelbach's algorithm is employed to determine the maximum energy efficiency. Comprehensive numerical results illustrate that the proposed scheme attains higher worst-case energy efficiency as compared with the existing NOMA schemes and the conventional orthogonal multiple access (OMA) scheme.
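Dinkelbach's algorithm, invoked in the abstract above to determine the maximum energy efficiency, reduces a fractional objective f(x)/g(x) to a sequence of parametric problems of the form f(x) − λ·g(x). A minimal sketch follows; the toy scalar functions and grid search stand in for the paper's beamforming problem, and all names are illustrative:

```python
# Dinkelbach's algorithm for maximizing a ratio f(x)/g(x) over a candidate set.
# Illustrative sketch: the inner maximization is a plain grid search, not the
# CCCP-based beamforming subproblem of the paper.

def dinkelbach(f, g, candidates, tol=1e-9, max_iter=100):
    lam = 0.0  # current estimate of the maximum ratio
    for _ in range(max_iter):
        # Inner step: maximize the parametric objective f(x) - lam * g(x)
        x_best = max(candidates, key=lambda x: f(x) - lam * g(x))
        gap = f(x_best) - lam * g(x_best)
        if abs(gap) < tol:          # at the fixed point the gap vanishes
            return x_best, lam
        lam = f(x_best) / g(x_best)  # update the ratio parameter
    return x_best, lam

# Toy example: maximize (1 + x) / (1 + x^2) over a fine grid on [0, 3].
xs = [i / 1000 for i in range(3001)]
x_star, ratio = dinkelbach(lambda x: 1 + x, lambda x: 1 + x * x, xs)
```

At convergence the parametric optimum value reaches zero and λ equals the maximum ratio, which is exactly the stopping rule that makes Dinkelbach's method suitable for the EE maximization described above.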
Maximization of Energy Efficiency in Wireless ad hoc and Sensor Networks With SERENA
Directory of Open Access Journals (Sweden)
Saoucene Mahfoudh
2009-01-01
Full Text Available In wireless ad hoc and sensor networks, an analysis of the node energy consumption distribution shows that the largest part is due to the time spent in the idle state. This result is at the origin of SERENA, an algorithm to SchEdule RoutEr Nodes Activity. SERENA allows router nodes to sleep, while ensuring end-to-end communication in the wireless network. It is a localized and decentralized algorithm assigning time slots to nodes. Any node stays awake only during its own slot and the slots assigned to its neighbors; it sleeps the remaining time. Simulation results show that SERENA enables us to maximize network lifetime while increasing the number of user messages delivered. SERENA is based on a two-hop coloring algorithm, whose complexity in terms of colors and rounds is evaluated. We then quantify the slot reuse. Finally, we show how SERENA improves the node energy consumption distribution and maximizes the energy efficiency of wireless ad hoc and sensor networks. We compare SERENA with classical TDMA and optimized variants such as USAP in wireless ad hoc and sensor networks.
Obtaining a pet: realistic expectations.
Marder, Amy; Duxbury, Margaret M
2008-09-01
Millions of dog-human relationships fail each year, some from simple and preventable mismatches. False or unrealistic expectations of a dog's behavior are a common reason for failed human-animal bonds. Veterinarians can reduce the incidence of false expectations and thereby increase the likelihood of successful adoptions by offering preadoption counseling to help clients sort through the many factors involved in the process of successful pet selection, by preparing clients to take on the important tasks of puppy socialization and the management of the home learning environment, and by educating new owners about the needs and behavior of dogs.
Myths, maxims and universal health care.
Conn, J K
1990-10-01
There is considerable indirect evidence that our legislative bodies, because of inability to control costs, are reluctant to further expand government responsibilities into health care. There continues to be general societal, and limited professional, pressure to assure access to health care for the large segment of society which presently encounters barriers to care because of lack of insurance. Congress and state legislatures are actively proposing health care legislation but on the whole it is aimed at reducing the cost of the programs to which government is already committed, not for expansion into new fields. However, providers, physicians and hospitals are begging for relief from the burden of uncompensated care. A suggested solution is to require all employers to provide health insurance for their employees. This may become impossibly burdensome for many small employers and could still leave a sizeable uninsured group of unemployed or underemployed. Tax revenues would still be needed to fund a government administered program to assure their access. We have a problem. We must devise a method to assure needed care for the 13% to 15% of our population which is presently uninsured. It must be accomplished in a manner that will not direct too much money from other socially important programs such as education, law enforcement, transportation, and environmental preservation. A solution will be found; however, if my evaluation is correct, neither patients nor physicians are going to be very happy with it. If the myths and maxims are valid, both groups will be disappointed. Rationing in some form will be inevitable as will be control and regulation of doctors' practice styles and fees.(ABSTRACT TRUNCATED AT 250 WORDS)
A Simulated Annealing method to solve a generalized maximal covering location problem
Directory of Open Access Journals (Sweden)
M. Saeed Jabalameli
2011-04-01
Full Text Available The maximal covering location problem (MCLP seeks to locate a predefined number of facilities in order to maximize the number of covered demand points. In a classical sense, MCLP has three main implicit assumptions: all or nothing coverage, individual coverage, and fixed coverage radius. By relaxing these assumptions, three classes of modelling formulations are extended: the gradual cover models, the cooperative cover models, and the variable radius models. In this paper, we develop a special form of MCLP which combines the characteristics of gradual cover models, cooperative cover models, and variable radius models. The proposed problem has many applications such as locating cell phone towers. The model is formulated as a mixed integer non-linear programming (MINLP. In addition, a simulated annealing algorithm is used to solve the resulted problem and the performance of the proposed method is evaluated with a set of randomly generated problems.
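As a rough illustration of the metaheuristic (not the authors' implementation, and using plain all-or-nothing coverage rather than the gradual/cooperative/variable-radius model of the paper), a simulated-annealing loop for choosing p facility sites to maximize covered demand can be sketched as follows; all data and parameters are made up:

```python
import math
import random

# Simulated annealing for a toy maximal covering location problem:
# pick P of the candidate sites to maximize total covered demand weight.
random.seed(0)
demand = [(random.random(), random.random(), random.randint(1, 10)) for _ in range(50)]
sites = [(random.random(), random.random()) for _ in range(15)]
RADIUS, P = 0.3, 3

def covered(solution):
    """Total demand weight within RADIUS of at least one chosen site."""
    total = 0
    for dx, dy, w in demand:
        if any(math.hypot(dx - sites[s][0], dy - sites[s][1]) <= RADIUS for s in solution):
            total += w
    return total

def anneal(iters=2000, t0=10.0, alpha=0.995):
    cur = random.sample(range(len(sites)), P)
    best, best_val, t = cur[:], covered(cur), t0
    for _ in range(iters):
        # Neighborhood move: swap one chosen site for an unused one
        nxt = cur[:]
        nxt[random.randrange(P)] = random.choice([s for s in range(len(sites)) if s not in cur])
        delta = covered(nxt) - covered(cur)
        # Accept improvements always, worsening moves with Boltzmann probability
        if delta >= 0 or random.random() < math.exp(delta / t):
            cur = nxt
            if covered(cur) > best_val:
                best, best_val = cur[:], covered(cur)
        t *= alpha  # geometric cooling schedule
    return best, best_val

best_sites, best_value = anneal()
```

The cooling schedule lets early iterations escape local optima, while late iterations behave like a local search; the MINLP formulation in the paper replaces the binary coverage test with gradual and cooperative coverage functions.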
CAN AEROBIC AND ANAEROBIC POWER BE MEASURED IN A 60-SECOND MAXIMAL TEST?
Directory of Open Access Journals (Sweden)
Daniel G. Carey
2003-12-01
Full Text Available The primary objective of this study was to assess the efficacy of measuring both aerobic and anaerobic power in a 60-second, maximal-effort test. It was hypothesized that oxygen consumption increases rapidly during maximal effort and that maximal oxygen consumption (VO2max) may be reached in one minute. Fifteen United States Cycling Federation competitive cyclists performed the following tests: 1) a practice 60-second maximal exertion test; 2) a standard incremental-workload VO2max test; 3) the Wingate anaerobic power test (WAT); 4) VO2 measured during a 60-second maximal exertion test (60-SEC); and 5) VO2 measured during a 75-second maximal exertion test (75-SEC). All tests were performed on an electrically braked cycle ergometer. Hydrostatic weighing was performed to determine percent body fat. Peak oxygen consumption values for the 60-SEC (53.4 ml·kg-1·min-1, 92% VO2max) and 75-SEC (52.6 ml·kg-1·min-1, 91% VO2max) tests were significantly lower than VO2max (58.1 ml·kg-1·min-1). During the 75-SEC test, there was no significant difference in percentage VO2max from 30 seconds to 75 seconds, demonstrating a plateau effect. There were no significant differences in peak power or relative peak power between the Wingate, 60-SEC, and 75-SEC tests while, as expected, mean power, relative mean power, and fatigue index were significantly different between these tests. Power measures were highly correlated among all three tests. It was concluded that VO2max was not attained during either the 60-SEC or the 75-SEC test. Furthermore, the high correlations in power output for the WAT, 60-SEC, and 75-SEC tests preclude the necessity for anaerobic tests longer than the 30-second WAT.
Sahraeian, Sayed Mohammad Ebrahim; Yoon, Byung-Jun
2011-07-01
In this article, we introduce PicXAA-Web, a web-based platform for accurate probabilistic alignment of multiple biological sequences. The core of PicXAA-Web consists of PicXAA, a multiple protein/DNA sequence alignment algorithm, and PicXAA-R, an extension of PicXAA for structural alignment of RNA sequences. Both PicXAA and PicXAA-R are probabilistic non-progressive alignment algorithms that aim to find the optimal alignment of multiple biological sequences by maximizing the expected accuracy. PicXAA and PicXAA-R greedily build up the alignment from sequence regions with high local similarity, thereby yielding an accurate global alignment that effectively captures local similarities among sequences. PicXAA-Web integrates these two algorithms in a user-friendly web platform for accurate alignment and analysis of multiple protein, DNA and RNA sequences. PicXAA-Web can be freely accessed at http://gsp.tamu.edu/picxaa/.
DEFF Research Database (Denmark)
This book constitutes the refereed proceedings of the 10th Scandinavian Workshop on Algorithm Theory, SWAT 2006, held in Riga, Latvia, in July 2006. The 36 revised full papers presented together with 3 invited papers were carefully reviewed and selected from 154 submissions. The papers address all issues of theoretical algorithmics and applications in various fields including graph algorithms, computational geometry, scheduling, approximation algorithms, network algorithms, data storage and manipulation, combinatorics, sorting, searching, online algorithms, optimization, etc.
Expected utility with lower probabilities
DEFF Research Database (Denmark)
Hendon, Ebbe; Jacobsen, Hans Jørgen; Sloth, Birgitte
1994-01-01
An uncertain and not just risky situation may be modeled using so-called belief functions assigning lower probabilities to subsets of outcomes. In this article we extend the von Neumann-Morgenstern expected utility theory from probability measures to belief functions. We use this theory...
Education: Expectation and the Unexpected
Fulford, Amanda
2016-01-01
This paper considers concepts of expectation and responsibility, and how these drive dialogic interactions between tutor and student in an age of marketised Higher Education. In thinking about such interactions in terms of different forms of exchange, the paper considers the philosophy of Martin Buber and Emmanuel Levinas on dialogic…
Privacy Expectations in Online Contexts
Pure, Rebekah Abigail
2013-01-01
Advances in digital networked communication technology over the last two decades have brought the issue of personal privacy into sharper focus within contemporary public discourse. In this dissertation, I explain the Fourth Amendment and the role that privacy expectations play in the constitutional protection of personal privacy generally, and…
Expectations and retail profit margins
R.G.J. den Hertog; A.R. Thurik (Roy)
1992-01-01
In this study expectations and prediction errors are introduced in the context of retail price setting. A new model and a new data set are used to examine whether prediction errors influence retail price setting, whether prediction errors cause only limited price changes to maintain
A Faster Algorithm for Computing Motorcycle Graphs
Vigneron, Antoine E.
2014-08-29
We present a new algorithm for computing motorcycle graphs that runs in (Formula presented.) time for any (Formula presented.), improving on all previously known algorithms. The main application of this result is to computing the straight skeleton of a polygon. It allows us to compute the straight skeleton of a non-degenerate polygon with (Formula presented.) holes in (Formula presented.) expected time. If all input coordinates are (Formula presented.)-bit rational numbers, we can compute the straight skeleton of a (possibly degenerate) polygon with (Formula presented.) holes in (Formula presented.) expected time. In particular, it means that we can compute the straight skeleton of a simple polygon in (Formula presented.) expected time if all input coordinates are (Formula presented.)-bit rationals, while all previously known algorithms have worst-case running time (Formula presented.). © 2014 Springer Science+Business Media New York.
A Joint Land Cover Mapping and Image Registration Algorithm Based on a Markov Random Field Model
Directory of Open Access Journals (Sweden)
Apisit Eiumnoh
2013-10-01
Full Text Available Traditionally, image registration of multi-modal and multi-temporal images is performed satisfactorily before land cover mapping. However, since multi-modal and multi-temporal images are likely to be obtained from different satellite platforms and/or acquired at different times, perfect alignment is very difficult to achieve. As a result, a proper land cover mapping algorithm must be able to correct registration errors as well as perform an accurate classification. In this paper, we propose a joint classification and registration technique based on a Markov random field (MRF) model to simultaneously align two or more images and obtain a land cover map (LCM) of the scene. The expectation maximization (EM) algorithm is employed to solve the joint image classification and registration problem by iteratively estimating the map parameters and approximate posterior probabilities. Then, the maximum a posteriori (MAP) criterion is used to produce an optimum land cover map. We conducted experiments on a set of four simulated images and one pair of remotely sensed images to investigate the effectiveness and robustness of the proposed algorithm. Our results show that, with proper selection of a critical MRF parameter, the resulting LCMs derived from an unregistered image pair can achieve an accuracy that is as high as when images are perfectly aligned. Furthermore, the registration error can be greatly reduced.
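The E-step/M-step iteration mentioned in the abstract above can be illustrated in its simplest setting, a two-component one-dimensional Gaussian mixture. This is a generic EM sketch, not the joint MRF classification-and-registration model of the paper:

```python
import math
import random

def em_gmm(data, iters=50):
    """EM for a 1-D, two-component Gaussian mixture (illustrative sketch)."""
    mu = [min(data), max(data)]  # crude initialization at the data extremes
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        resp = []
        for x in data:
            p = [pi[k] / math.sqrt(2 * math.pi * var[k])
                 * math.exp(-(x - mu[k]) ** 2 / (2 * var[k])) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate weights, means, variances from responsibilities
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk + 1e-6
    return mu, var, pi

# Synthetic data: two well-separated clusters around 0 and 5
random.seed(1)
data = [random.gauss(0, 1) for _ in range(200)] + [random.gauss(5, 1) for _ in range(200)]
mu, var, pi = em_gmm(data)
```

In the paper the same alternation applies, but the E-step computes approximate posteriors under the MRF prior and the M-step updates both class parameters and registration parameters.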
Inferring the structure of latent class models using a genetic algorithm.
van der Maas, Han L J; Raijmakers, Maartje E J; Visser, Ingmar
2005-05-01
Present optimization techniques in latent class analysis apply the expectation maximization algorithm or the Newton-Raphson algorithm for optimizing the parameter values of a prespecified model. These techniques can be used to find maximum likelihood estimates of the parameters, given the specified structure of the model, which is defined by the number of classes and, possibly, fixation and equality constraints. The model structure is usually chosen on theoretical grounds. A large variety of structurally different latent class models can be compared using goodness-of-fit indices of the chi-square family, Akaike's information criterion, the Bayesian information criterion, and various other statistics. However, finding the optimal structure for a given goodness-of-fit index often requires a lengthy search in which all kinds of model structures are tested. Moreover, solutions may depend on the choice of initial values for the parameters. This article presents a new method by which one can simultaneously infer the model structure from the data and optimize the parameter values. The method consists of a genetic algorithm in which any goodness-of-fit index can be used as a fitness criterion. In a number of test cases in which data sets from the literature were used, it is shown that this method provides models that fit as well as or better than the models suggested in the original articles.
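The structure-search idea can be illustrated on a much simpler model family. The sketch below is a hypothetical genetic algorithm that searches over polynomial degrees (standing in for latent class structures) with BIC as the fitness criterion; all names, constants, and the test problem are invented for illustration, not taken from the article.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-1, 1, 80)
y = 1.0 + 2.0 * x - 3.0 * x**2 + rng.normal(0, 0.1, x.size)   # true structure: degree 2

def bic(degree):
    """BIC of a least-squares polynomial fit: the model-structure fitness."""
    resid = y - np.polyval(np.polyfit(x, y, degree), x)
    n, k = y.size, degree + 1
    return n * np.log(resid.var() + 1e-12) + k * np.log(n)

def ga(pop_size=12, gens=25, max_degree=9):
    """Tiny GA: selection keeps the best half, mutation perturbs the structure."""
    pop = rng.integers(0, max_degree + 1, pop_size)            # candidate structures
    for _ in range(gens):
        fit = np.array([bic(d) for d in pop])
        parents = pop[np.argsort(fit)[:pop_size // 2]]         # keep best half
        children = parents + rng.integers(-1, 2, parents.size) # mutate by +/- 1 degree
        pop = np.clip(np.concatenate([parents, children]), 0, max_degree)
    fit = np.array([bic(d) for d in pop])
    return int(pop[np.argmin(fit)])

best = ga()   # typically recovers the generating degree
```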
An Energy Aware Adaptive Sampling Algorithm for Energy Harvesting WSN with Energy Hungry Sensors.
Srbinovski, Bruno; Magno, Michele; Edwards-Murphy, Fiona; Pakrashi, Vikram; Popovici, Emanuel
2016-03-28
Wireless sensor nodes have a limited power budget, though they are often expected to be functional in the field once deployed for extended periods of time. Therefore, minimization of energy consumption and energy harvesting technology in Wireless Sensor Networks (WSN) are key tools for maximizing network lifetime, and achieving self-sustainability. This paper proposes an energy aware Adaptive Sampling Algorithm (ASA) for WSN with power hungry sensors and harvesting capabilities, an energy management technique that can be implemented on any WSN platform with enough processing power to execute the proposed algorithm. An existing state-of-the-art ASA developed for wireless sensor networks with power hungry sensors is optimized and enhanced to adapt the sampling frequency according to the available energy of the node. The proposed algorithm is evaluated using two in-field testbeds that are supplied by two different energy harvesting sources (solar and wind). Simulation and comparison between the state-of-the-art ASA and the proposed energy aware ASA (EASA) in terms of energy durability are carried out using in-field measured harvested energy (using both wind and solar sources) and power hungry sensors (ultrasonic wind sensor and gas sensors). The simulation results demonstrate that using ASA in combination with an energy aware function on the nodes can drastically increase the lifetime of a WSN node and enable self-sustainability. In fact, the proposed EASA in conjunction with energy harvesting capability can lead towards perpetual WSN operation and significantly outperform the state-of-the-art ASA.
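The core energy-aware adaptation can be reduced to a few lines. The function below is a hypothetical sketch (the thresholds and the 8x survival-mode factor are invented for illustration) of how a node might stretch its sampling interval as the battery's state of charge falls.

```python
def adaptive_interval(base_interval_s, soc, soc_min=0.2, soc_max=0.9):
    """Longer sampling intervals (fewer samples) as the battery depletes.

    soc: state of charge in [0, 1]; thresholds and scaling are illustrative.
    """
    if soc >= soc_max:
        return base_interval_s                  # energy-rich: sample at full rate
    if soc <= soc_min:
        return base_interval_s * 8              # survival mode: minimal sampling
    # linear interpolation between full-rate and survival-mode intervals
    frac = (soc_max - soc) / (soc_max - soc_min)
    return base_interval_s * (1 + 7 * frac)
```

With a 10 s base interval, a full battery samples every 10 s, a nearly flat one every 80 s, and intermediate charge levels fall linearly in between, so the node trades data resolution for lifetime exactly when harvested energy is scarce.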
POLITENESS MAXIM OF MAIN CHARACTER IN SECRET FORGIVEN
Directory of Open Access Journals (Sweden)
Sang Ayu Isnu Maharani
2017-06-01
Full Text Available The maxim of politeness is an interesting subject to discuss, since politeness is instilled in us from childhood. We are obliged to be polite to everyone, in both speaking and acting. Somehow we manage to show politeness in our spoken expressions even though our intentions might not be so polite; for example, we must appreciate others' opinions even when we object to them. In this article the analysis of politeness is based on the maxims proposed by Leech, who distinguished six types of politeness maxim. The discussion shows that the main characters (Kristen and Kami) use all types of maxim in their conversations. The most commonly used are the approbation maxim and the agreement maxim.
Maximally entangled states in pseudo-telepathy games
Mančinska, Laura
2015-01-01
A pseudo-telepathy game is a nonlocal game which can be won with probability one using some finite-dimensional quantum strategy but not using a classical one. Our central question is whether there exist two-party pseudo-telepathy games which cannot be won with probability one using a maximally entangled state. Towards answering this question, we develop conditions under which maximally entangled states suffice. In particular, we show that maximally entangled states suffice for weak projection...
Maximality-Based Structural Operational Semantics for Petri Nets
Saīdouni, Djamel Eddine; Belala, Nabil; Bouneb, Messaouda
2009-03-01
The goal of this work is to exploit an implementable model, namely the maximality-based labeled transition system, which makes it possible to express true concurrency in a natural way without splitting actions into their start and end events. One can do this by giving a maximality-based structural operational semantics for the model of Place/Transition Petri nets in terms of maximality-based labeled transition system structures.
Shareholder, stakeholder-owner or broad stakeholder maximization
DEFF Research Database (Denmark)
Mygind, Niels
2004-01-01
including the shareholders of a company. Although it may be the ultimate goal of Corporate Social Responsibility to achieve this kind of maximization, broad stakeholder maximization is quite difficult to define precisely. There is no one-dimensional measure to add different stakeholder benefits...... not traded on the market, and therefore there is no possibility for practical application. In practical applications, broad stakeholder maximization instead becomes the satisfaction of certain stakeholder demands, so that the practical application will be stakeholder-owner maximization under constraints defined......
A web portal for classification of expression data using maximal margin linear programming.
Antonov, Alexey V; Tetko, Igor V; Prokopenko, Volodymyr V; Kosykh, Denis; Mewes, Hans W
2004-11-22
The Maximal Margin (MAMA) linear programming classification algorithm has recently been proposed and tested for cancer classification based on expression data. It demonstrated sound performance on publicly available expression datasets. We developed a web interface to allow potential users easy access to the MAMA classification tool. Basic and advanced options provide flexibility in its use. The input data format is the same as that used in most publicly available datasets. This makes the web resource particularly convenient for non-expert machine learning users working in the field of expression data analysis.
Smooth paths of conditional expectations
Andruchow, Esteban; Larotonda, Gabriel
2010-01-01
Let A be a von Neumann algebra with a finite trace $\tau$, represented in $H=L^2(A,\tau)$, and let $B_t\subset A$ be sub-algebras, for $t$ in an interval $I$. Let $E_t:A\to B_t$ be the unique $\tau$-preserving conditional expectation. We say that the path $t\mapsto E_t$ is smooth if for every $a\in A$ and $v \in H$, the map $$ I\
Grade Expectations: Rationality and Overconfidence
Magnus, Jan R.; Peresetsky, Anatoly A.
2018-01-01
Confidence and overconfidence are essential aspects of human nature, but measuring (over)confidence is not easy. Our approach is to consider students' forecasts of their exam grades. Part of a student's grade expectation is based on the student's previous academic achievements; what remains can be interpreted as (over)confidence. Our results are based on a sample of about 500 second-year undergraduate students enrolled in a statistics course in Moscow. The course contains three exams and each student produces a forecast for each of the three exams. Our models allow us to estimate overconfidence quantitatively. Using these models we find that students' expectations are not rational and that most students are overconfident, in agreement with the general literature. Less obvious is that overconfidence helps: given the same academic achievement students with larger confidence obtain higher exam grades. Female students are less overconfident than male students, their forecasts are more rational, and they are also faster learners in the sense that they adjust their expectations more rapidly. PMID:29375449
Grade Expectations: Rationality and Overconfidence
Directory of Open Access Journals (Sweden)
Jan R. Magnus
2018-01-01
Full Text Available Confidence and overconfidence are essential aspects of human nature, but measuring (over)confidence is not easy. Our approach is to consider students' forecasts of their exam grades. Part of a student's grade expectation is based on the student's previous academic achievements; what remains can be interpreted as (over)confidence. Our results are based on a sample of about 500 second-year undergraduate students enrolled in a statistics course in Moscow. The course contains three exams and each student produces a forecast for each of the three exams. Our models allow us to estimate overconfidence quantitatively. Using these models we find that students' expectations are not rational and that most students are overconfident, in agreement with the general literature. Less obvious is that overconfidence helps: given the same academic achievement students with larger confidence obtain higher exam grades. Female students are less overconfident than male students, their forecasts are more rational, and they are also faster learners in the sense that they adjust their expectations more rapidly.
Directory of Open Access Journals (Sweden)
Riediger Michael L. B.
2005-01-01
Full Text Available In this paper, we consider the issue of blind detection of Alamouti-type differential space-time (ST) modulation in static Rayleigh fading channels. We focus our attention on a π/2-shifted BPSK constellation, introducing a novel transformation to the received signal such that this binary ST modulation, which has a second-order transmit diversity, is equivalent to QPSK modulation with second-order receive diversity. This equivalent representation allows us to apply a low-complexity detection technique specifically designed for receive diversity, namely, scalar multiple-symbol differential detection (MSDD). To further increase receiver performance, we apply an iterative expectation-maximization (EM) algorithm which performs joint channel estimation and sequence detection. This algorithm uses minimum mean square estimation to obtain channel estimates and the maximum-likelihood principle to detect the transmitted sequence, followed by differential decoding. With receiver complexity proportional to the observation window length, our receiver can achieve the performance of a coherent maximal ratio combining receiver (with differential decoding) in as few as a single EM receiver iteration, provided that the window size of the initial MSDD is sufficiently long. To further demonstrate that the MSDD is a vital part of this receiver setup, we show that an initial ST conventional differential detector would lead to strange convergence behavior in the EM algorithm.
Maximizing exposure therapy: an inhibitory learning approach.
Craske, Michelle G; Treanor, Michael; Conway, Christopher C; Zbozinek, Tomislav; Vervliet, Bram
2014-07-01
Exposure therapy is an effective approach for treating anxiety disorders, although a substantial number of individuals fail to benefit or experience a return of fear after treatment. Research suggests that anxious individuals show deficits in the mechanisms believed to underlie exposure therapy, such as inhibitory learning. Targeting these processes may help improve the efficacy of exposure-based procedures. Although evidence supports an inhibitory learning model of extinction, there has been little discussion of how to implement this model in clinical practice. The primary aim of this paper is to provide examples to clinicians for how to apply this model to optimize exposure therapy with anxious clients, in ways that distinguish it from a 'fear habituation' approach and 'belief disconfirmation' approach within standard cognitive-behavior therapy. Exposure optimization strategies include (1) expectancy violation, (2) deepened extinction, (3) occasional reinforced extinction, (4) removal of safety signals, (5) variability, (6) retrieval cues, (7) multiple contexts, and (8) affect labeling. Case studies illustrate methods of applying these techniques with a variety of anxiety disorders, including obsessive-compulsive disorder, posttraumatic stress disorder, social phobia, specific phobia, and panic disorder. Copyright © 2014 Elsevier Ltd. All rights reserved.
An Efficient Estimator for the Expected Value of Sample Information.
Menzies, Nicolas A
2016-04-01
Conventional estimators for the expected value of sample information (EVSI) are computationally expensive or limited to specific analytic scenarios. I describe a novel approach that allows efficient EVSI computation for a wide range of study designs and is applicable to models of arbitrary complexity. The posterior parameter distribution produced by a hypothetical study is estimated by reweighting existing draws from the prior distribution. EVSI can then be estimated using a conventional probabilistic sensitivity analysis, with no further model evaluations and with a simple sequence of calculations (Algorithm 1). A refinement to this approach (Algorithm 2) uses smoothing techniques to improve accuracy. Algorithm performance was compared with the conventional EVSI estimator (2-level Monte Carlo integration) and an alternative developed by Brennan and Kharroubi (BK), in a cost-effectiveness case study. Compared with the conventional estimator, Algorithm 2 exhibited a root mean square error (RMSE) 8%-17% lower, with far fewer model evaluations (3-4 orders of magnitude). Algorithm 1 produced results similar to those of the conventional estimator when study evidence was weak but underestimated EVSI when study evidence was strong. Compared with the BK estimator, the proposed algorithms reduced RMSE by 18%-38% in most analytic scenarios, with 40 times fewer model evaluations. Algorithm 1 performed poorly in the context of strong study evidence. All methods were sensitive to the number of samples in the outer loop of the simulation. The proposed algorithms remove two major challenges for estimating EVSI--the difficulty of estimating the posterior parameter distribution given hypothetical study data and the need for many model evaluations to obtain stable and unbiased results. These approaches make EVSI estimation feasible for a wide range of analytic scenarios. © The Author(s) 2015.
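Algorithm 1's key move, reweighting existing prior draws by the likelihood of simulated study data, can be sketched as follows. The decision model, parameter values, and study design below are invented for illustration; only the reweighting scheme follows the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)

# Prior PSA draws for an uncertain effect parameter, and a toy net-benefit model
theta = rng.normal(0.1, 0.2, 20000)                 # prior uncertainty about effect
def net_benefit(theta):                             # two strategies: treat vs. not
    return np.stack([5000 * theta - 300, np.zeros_like(theta)])

def evsi(n_study, sigma=0.5, n_outer=200):
    """Algorithm-1-style EVSI: reweight prior draws by the study likelihood."""
    nb = net_benefit(theta)
    prior_best = nb.mean(axis=1).max()              # value of current information
    se = sigma / np.sqrt(n_study)
    gain = 0.0
    for _ in range(n_outer):                        # outer loop over hypothetical data
        xbar = rng.normal(rng.choice(theta), se)    # simulate one study result
        w = np.exp(-0.5 * ((xbar - theta) / se) ** 2)
        w /= w.sum()                                # importance weights ~ posterior
        gain += (nb * w).sum(axis=1).max()          # best strategy given the study
    return gain / n_outer - prior_best

# EVSI is nonnegative in expectation and grows as the hypothetical study sharpens
# the posterior; no further model evaluations are needed beyond the prior draws.
```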
A Comparison of Heuristics with Modularity Maximization Objective using Biological Data Sets
Directory of Open Access Journals (Sweden)
Pirim Harun
2016-01-01
Full Text Available Finding groups of objects exhibiting similar patterns is an important data analytics task. Many disciplines have their own terminologies, such as cluster, group, clique, or community, for defining the similar objects in a set. Adopting the term community, many exact and heuristic algorithms have been developed to find the communities of interest in available data sets. Here, three heuristic algorithms for finding communities are compared using five gene expression data sets. The heuristics share a common objective function of maximizing the modularity, which is a quality measure of a partition and a reflection of objects' relevance in communities. Partitions generated by the heuristics are compared with the true ones using the adjusted Rand index, one of the most commonly used external validation measures. The paper discusses the results of the partitions on the mentioned biological data sets.
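Modularity itself is easy to state and compute. The sketch below implements Newman's Q for an adjacency matrix and checks, on a toy graph of two triangles joined by a single edge, that the natural community split scores higher than an arbitrary one; the example graph is illustrative, not from the paper.

```python
import numpy as np

def modularity(adj, labels):
    """Newman modularity: Q = (1/2m) * sum_ij [A_ij - k_i*k_j/(2m)] * delta(c_i, c_j)."""
    k = adj.sum(axis=1)                      # node degrees
    two_m = k.sum()                          # twice the edge count
    same = labels[:, None] == labels[None, :]
    return ((adj - np.outer(k, k) / two_m) * same).sum() / two_m

# Two triangles joined by a single edge: a clear two-community structure
adj = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    adj[i, j] = adj[j, i] = 1
good = np.array([0, 0, 0, 1, 1, 1])         # communities = the two triangles
bad = np.array([0, 1, 0, 1, 0, 1])          # arbitrary split
q_good, q_bad = modularity(adj, good), modularity(adj, bad)   # q_good > q_bad
```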
Skiena, Steven S
2008-01-01
Explaining how to design algorithms and how to analyze their efficacy and efficiency, this book covers combinatorial algorithm technology, stressing design over analysis. It presents instruction on methods for designing and analyzing computer algorithms, and it contains a catalog of algorithmic resources, implementations, and a bibliography.
Stationary algorithmic probability
National Research Council Canada - National Science Library
Müller, Markus
2010-01-01
..., since their actual values depend on the choice of the universal reference computer. In this paper, we analyze a natural approach to eliminate this machine-dependence. Our method is to assign algorithmic probabilities to the different...
DEFF Research Database (Denmark)
Bucher, Taina
2017-01-01
This article reflects on the kinds of situations and spaces where people and algorithms meet. In what situations do people become aware of algorithms? How do they experience and make sense of these algorithms, given their often hidden and invisible nature? To what extent does an awareness....... Examining how algorithms make people feel, then, seems crucial if we want to understand their social power....
Hamiltonian Algorithm Sound Synthesis
大矢, 健一
2013-01-01
Hamiltonian Algorithm (HA) is an algorithm for searching for solutions to optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.
Algorithms for radio networks with dynamic topology
Shacham, Nachum; Ogier, Richard; Rutenburg, Vladislav V.; Garcia-Luna-Aceves, Jose
1991-08-01
The objective of this project was the development of advanced algorithms and protocols that efficiently use network resources to provide optimal or nearly optimal performance in future communication networks with highly dynamic topologies and subject to frequent link failures. As reflected by this report, we have achieved our objective and have significantly advanced the state-of-the-art in this area. The research topics of the papers summarized include the following: efficient distributed algorithms for computing shortest pairs of disjoint paths; minimum-expected-delay alternate routing algorithms for highly dynamic unreliable networks; algorithms for loop-free routing; multipoint communication by hierarchically encoded data; efficient algorithms for extracting the maximum information from event-driven topology updates; methods for the neural network solution of link scheduling and other difficult problems arising in communication networks; and methods for robust routing in networks subject to sophisticated attacks.
Energy Technology Data Exchange (ETDEWEB)
Geist, G.A. [Oak Ridge National Lab., TN (United States). Computer Science and Mathematics Div.; Howell, G.W. [Florida Inst. of Tech., Melbourne, FL (United States). Dept. of Applied Mathematics; Watkins, D.S. [Washington State Univ., Pullman, WA (United States). Dept. of Pure and Applied Mathematics
1997-11-01
The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.
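For orientation, the QR baseline against which the BR algorithm is compared is what standard libraries expose. The snippet below applies NumPy's dense eigensolver to a small random upper Hessenberg matrix and checks the eigenvalue sum against the trace; it does not implement the BR algorithm itself, which is not available in standard libraries.

```python
import numpy as np

# Build a small upper Hessenberg matrix: zero below the first subdiagonal
n = 6
H = np.triu(np.random.default_rng(0).normal(size=(n, n)), k=-1)

# QR-based dense eigensolver (the baseline the BR algorithm is measured against)
eig = np.linalg.eigvals(H)

# Consistency check: the eigenvalues must sum to the trace (up to rounding)
trace_check = eig.sum()
```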
Algorithmically specialized parallel computers
Snyder, Lawrence; Gannon, Dennis B
1985-01-01
Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer. This book discusses algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include a model for analyzing generalized inter-processor communication, a pipelined architecture for search tree maintenance, and a specialized computer organization for raster...
Accurate colon residue detection algorithm with partial volume segmentation
Li, Xiang; Liang, Zhengrong; Zhang, PengPeng; Kutcher, Gerald J.
2004-05-01
Colon cancer is the second leading cause of cancer-related death in the United States. Earlier detection and removal of polyps can dramatically reduce the chance of developing a malignant tumor. Due to some limitations of optical colonoscopy used in the clinic, many researchers have developed virtual colonoscopy as an alternative technique, in which accurate colon segmentation is crucial. However, the partial volume effect and the existence of residue make it very challenging. The electronic colon cleaning technique proposed by Chen et al. is a very attractive method, which is also a kind of hard segmentation method. As mentioned in their paper, some artifacts were produced, which might affect the accurate colon reconstruction. In our paper, instead of labeling each voxel with a unique label or tissue type, the percentage of different tissues within each voxel, which we call a mixture, was considered in establishing a maximum a posteriori probability (MAP) image-segmentation framework. A Markov random field (MRF) model was developed to reflect the spatial information for the tissue mixtures. The spatial information based on hard segmentation was used to determine which tissue types are in a specific voxel. The parameters of each tissue class were estimated by the expectation-maximization (EM) algorithm during the MAP tissue-mixture segmentation. Real CT experimental results demonstrated that the partial volume effects between four tissue types were precisely detected. Meanwhile, the residue was electronically removed and a very smooth and clean interface along the colon wall was obtained.
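The partial-volume idea, that a voxel's intensity is a mixture of tissue intensities rather than a single label, can be shown with a two-tissue linear model. The function name and the HU-like values below are illustrative, not from the paper.

```python
import numpy as np

def mixture_fraction(voxel, mu_a, mu_b):
    """Partial-volume model: voxel intensity = f*mu_a + (1 - f)*mu_b.

    Returns the fraction f of tissue A in each voxel, clipped to [0, 1].
    """
    f = (voxel - mu_b) / (mu_a - mu_b)
    return np.clip(f, 0.0, 1.0)

# Air/soft-tissue boundary voxels in a toy CT slice (illustrative values)
mu_air, mu_tissue = -1000.0, 40.0
vox = np.array([-1000.0, -480.0, 40.0])
frac_tissue = mixture_fraction(vox, mu_tissue, mu_air)
# -> [0.0, 0.5, 1.0]: pure air, a half-and-half boundary voxel, pure tissue
```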
Development of Automatic Cluster Algorithm for Microcalcification in Digital Mammography
Energy Technology Data Exchange (ETDEWEB)
Choi, Seok Yoon [Dept. of Medical Engineering, Korea University, Seoul (Korea, Republic of); Kim, Chang Soo [Dept. of Radiological Science, College of Health Sciences, Catholic University of Pusan, Pusan (Korea, Republic of)
2009-03-15
Digital Mammography is an efficient imaging technique for the detection and diagnosis of breast pathological disorders. Six mammographic criteria, namely the number of clusters and the number, size, extent, and morphologic shape of microcalcifications, as well as the presence of a mass, were reviewed, and their correlation with the pathologic diagnosis was evaluated. It is very important to find breast cancer early, when treatment can reduce deaths from breast cancer and the extent of breast incision. In screening for breast cancer, mammography is typically used to view the internal organization. Clustering microcalcifications on mammography represent an important feature of breast mass, especially that of intraductal carcinoma. Because microcalcification has a high correlation with breast cancer, a cluster of microcalcifications can be very helpful for the clinical doctor to predict breast cancer. For this study, three steps of quantitative evaluation are proposed: DoG filtering, adaptive thresholding, and expectation maximization. Through the proposed algorithm, the number of calcifications and the length of each cluster in the distribution of microcalcifications could be measured, which can also be used to automatically diagnose breast cancer as indicators for the primary diagnosis.
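The DoG filtering step can be sketched directly: subtracting a coarse Gaussian blur from a fine one yields a band-pass response that highlights small bright spots such as microcalcifications while suppressing smooth background. The implementation and parameters below are illustrative, not the paper's.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur via 1-D convolutions along each axis."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-0.5 * (x / sigma) ** 2)
    k /= k.sum()
    out = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, img)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, out)

def dog(img, sigma_fine=1.0, sigma_coarse=3.0):
    """Difference-of-Gaussians: band-pass response for small bright spots."""
    return gaussian_blur(img, sigma_fine) - gaussian_blur(img, sigma_coarse)

# A small bright spot on a smooth gradient background
img = np.linspace(0, 1, 64)[None, :] * np.ones((64, 64))
img[32, 32] += 5.0
response = dog(img)
peak = np.unravel_index(np.abs(response).argmax(), response.shape)  # the spot
```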
On Horowitz and Shelah's Borel maximal eventually different family
DEFF Research Database (Denmark)
Schrittesser, David
We give an exposition of Horowitz and Shelah’s proof that there exists an effectively Borel maximal eventually different family (working in ZF or less) and announce two related theorems.
Muscle mitochondrial capacity exceeds maximal oxygen delivery in humans
DEFF Research Database (Denmark)
Boushel, Robert Christopher; Gnaiger, Erich; Calbet, Jose A L
2011-01-01
Across a wide range of species and body mass a close matching exists between maximal conductive oxygen delivery and mitochondrial respiratory rate. In this study we investigated in humans how closely in-vivo maximal oxygen consumption (VO(2) max) is matched to state 3 muscle mitochondrial respira...
Maximal regularity for non-autonomous stochastic evolution ...
Indian Academy of Sciences (India)
Tôn Việt Tạ
2017-11-17
We construct unique strict solutions to the equation and show their maximal regularity. The abstract results are then applied to a stochastic partial differential equation. Keywords. Evolution operators; stochastic linear evolution equations; strict solutions; maximal regularity; UMD Banach spaces of type 2.
The Negative Consequences of Maximizing in Friendship Selection.
Newman, David B; Schug, Joanna; Yuki, Masaki; Yamada, Junko; Nezlek, John B
2017-02-27
Previous studies have shown that the maximizing orientation, reflecting a motivation to select the best option among a given set of choices, is associated with various negative psychological outcomes. In the present studies, we examined whether these relationships extend to friendship selection and how the number of options for friends moderated these effects. Across 5 studies, maximizing in selecting friends was negatively related to life satisfaction, positive affect, and self-esteem, and was positively related to negative affect and regret. In Study 1, a maximizing in selecting friends scale was created, and regret mediated the relationships between maximizing and well-being. In a naturalistic setting in Studies 2a and 2b, the tendency to maximize among those who participated in the fraternity and sorority recruitment process was negatively related to satisfaction with their selection, and positively related to regret and negative affect. In Study 3, daily levels of maximizing were negatively related to daily well-being, and these relationships were mediated by daily regret. In Study 4, we extended the findings to samples from the U.S. and Japan. When participants who tended to maximize were faced with many choices, operationalized as the daily number of friends met (Study 3) and relational mobility (Study 4), the opportunities to regret a decision increased and further diminished well-being. These findings imply that, paradoxically, attempts to maximize when selecting potential friends is detrimental to one's well-being. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Detrimental Relations of Maximization with Academic and Career Attitudes
Dahling, Jason J.; Thompson, Mindi N.
2013-01-01
Maximization refers to a decision-making style that involves seeking the single best option when making a choice, which is generally dysfunctional because people are limited in their ability to rationally evaluate all options and identify the single best outcome. The vocational consequences of maximization are examined in two samples, college…
Directory of Open Access Journals (Sweden)
Seunghyun Moon
Full Text Available We present a customized high content (image-based) and high throughput screening algorithm for the quantification of Trypanosoma cruzi infection in host cells. Based solely on DNA staining and single-channel images, the algorithm precisely segments and identifies the nuclei and cytoplasm of mammalian host cells as well as the intracellular parasites infecting the cells. The algorithm outputs statistical parameters including the total number of cells, number of infected cells and the total number of parasites per image, the average number of parasites per infected cell, and the infection ratio (defined as the number of infected cells divided by the total number of cells). Accurate and precise estimation of these parameters allow for both quantification of compound activity against parasites, as well as the compound cytotoxicity, thus eliminating the need for an additional toxicity-assay, hereby reducing screening costs significantly. We validate the performance of the algorithm using two known drugs against T.cruzi: Benznidazole and Nifurtimox. Also, we have checked the performance of the cell detection with manual inspection of the images. Finally, from the titration of the two compounds, we confirm that the algorithm provides the expected half maximal effective concentration (EC50) of the anti-T. cruzi activity.
Moon, Seunghyun; Siqueira-Neto, Jair L; Moraes, Carolina Borsoi; Yang, Gyongseon; Kang, Myungjoo; Freitas-Junior, Lucio H; Hansen, Michael A E
2014-01-01
We present a customized high content (image-based) and high throughput screening algorithm for the quantification of Trypanosoma cruzi infection in host cells. Based solely on DNA staining and single-channel images, the algorithm precisely segments and identifies the nuclei and cytoplasm of mammalian host cells as well as the intracellular parasites infecting the cells. The algorithm outputs statistical parameters including the total number of cells, number of infected cells and the total number of parasites per image, the average number of parasites per infected cell, and the infection ratio (defined as the number of infected cells divided by the total number of cells). Accurate and precise estimation of these parameters allow for both quantification of compound activity against parasites, as well as the compound cytotoxicity, thus eliminating the need for an additional toxicity-assay, hereby reducing screening costs significantly. We validate the performance of the algorithm using two known drugs against T.cruzi: Benznidazole and Nifurtimox. Also, we have checked the performance of the cell detection with manual inspection of the images. Finally, from the titration of the two compounds, we confirm that the algorithm provides the expected half maximal effective concentration (EC50) of the anti-T. cruzi activity.
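The per-image statistics listed in the abstract are simple functions of the detected counts. A minimal sketch follows; the function name and example numbers are invented for illustration.

```python
def infection_stats(n_cells, infected_parasite_counts):
    """Per-image summary statistics: counts, infection ratio, and the
    mean number of parasites per infected cell."""
    n_infected = len(infected_parasite_counts)
    total_parasites = sum(infected_parasite_counts)
    return {
        "cells": n_cells,
        "infected": n_infected,
        "parasites": total_parasites,
        "infection_ratio": n_infected / n_cells,
        "parasites_per_infected": total_parasites / n_infected if n_infected else 0.0,
    }

stats = infection_stats(200, [3, 5, 2, 4])   # 4 infected cells among 200
# -> infection_ratio 0.02, parasites_per_infected 3.5
```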
Maximal information transfer and behavior diversity in Random Threshold Networks.
Andrecut, M; Foster, D; Carteret, H; Kauffman, S A
2009-07-01
Random Threshold Networks (RTNs) are an idealized model of diluted, non-symmetric spin glasses, neural networks or gene regulatory networks. RTNs also serve as an interesting general example of any coordinated causal system. Here we study the conditions for maximal information transfer and behavior diversity in RTNs. These conditions are likely to play a major role in physical and biological systems, perhaps serving as important selective traits in biological systems. We show that the pairwise mutual information is maximized in dynamically critical networks. Also, we show that the correlated behavior diversity is maximized for slightly chaotic networks, close to the critical region. Importantly, critical networks maximize coordinated, diverse dynamical behavior across the network and across time: the information transmission between source and receiver nodes and the diversity of dynamical behaviors, when measured with a time delay between the source and receiver, are maximized for critical networks.
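A minimal RTN simulation with a plug-in estimate of pairwise mutual information might look as follows; the network size, connectivity, and tie-breaking convention are illustrative choices, not the authors' exact setup.

```python
import numpy as np

def rtn_step(state, W):
    """Random Threshold Network update: sign of the weighted input sum
    (ties at exactly zero map to -1 by convention here)."""
    return (W @ state > 0).astype(int) * 2 - 1     # states in {-1, +1}

def mutual_info(a, b):
    """Plug-in pairwise mutual information (bits) of two {-1,+1} sequences."""
    mi = 0.0
    for va in (-1, 1):
        for vb in (-1, 1):
            pab = np.mean((a == va) & (b == vb))
            pa, pb = np.mean(a == va), np.mean(b == vb)
            if pab > 0:
                mi += pab * np.log2(pab / (pa * pb))
    return mi

rng = np.random.default_rng(0)
N, K, T = 50, 2, 500                    # K ~ 2 is near the critical regime
W = np.zeros((N, N))
for i in range(N):                      # K random +/-1 inputs per node
    idx = rng.choice(N, K, replace=False)
    W[i, idx] = rng.choice([-1.0, 1.0], K)
state = rng.choice([-1, 1], N)
traj = np.empty((T, N), dtype=int)
for t in range(T):
    traj[t] = state
    state = rtn_step(state, W)
mi = mutual_info(traj[:-1, 0], traj[1:, 1])   # source at t, receiver at t+1
```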
Absolutely Maximally Entangled States of Seven Qubits Do Not Exist.
Huber, Felix; Gühne, Otfried; Siewert, Jens
2017-05-19
Pure multiparticle quantum states are called absolutely maximally entangled if all reduced states obtained by tracing out at least half of the particles are maximally mixed. We provide a method to characterize these states for a general multiparticle system. With that, we prove that a seven-qubit state whose three-body marginals are all maximally mixed, or equivalently, a pure ((7,1,4))_{2} quantum error correcting code, does not exist. Furthermore, we obtain an upper limit on the possible number of maximally mixed three-body marginals and identify the state saturating the bound. This solves the seven-particle problem as the last open case concerning maximally entangled states of qubits.
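The defining property is easy to verify in the smallest nontrivial case. The snippet below checks that tracing one qubit out of the two-qubit Bell state leaves the maximally mixed state I/2; for seven qubits the open question concerned three-body marginals, which this toy does not address.

```python
import numpy as np

# Bell state (|00> + |11>)/sqrt(2), amplitudes indexed by (qubit A, qubit B)
psi = np.array([1, 0, 0, 1]) / np.sqrt(2)
rho = np.outer(psi, psi.conj()).reshape(2, 2, 2, 2)   # indices (a, b, a', b')
rho_a = np.trace(rho, axis1=1, axis2=3)               # partial trace over qubit B
# rho_a equals the maximally mixed single-qubit state I/2
```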
Montero, David; Díaz-Cañestro, Candela
2016-05-01
The increase in maximal oxygen consumption (VO2max) with endurance training is associated with that of maximal cardiac output (Qmax), but not oxygen extraction, in young individuals. Whether such a relationship is altered with ageing remains unclear. Therefore, we sought to systematically review and determine the effect of endurance training on, and the associations among, VO2max, Qmax and arteriovenous oxygen difference at maximal exercise (Ca-vO2max) in healthy aged individuals. We conducted a systematic search of MEDLINE, Scopus and Web of Science, from their inceptions until May 2015, for articles assessing the effect of endurance training lasting 3 weeks or longer on VO2max and Qmax and/or Ca-vO2max in healthy middle-aged and/or older individuals (mean age ≥40 years). Meta-analyses were performed to determine the standardised mean difference (SMD) in VO2max, Qmax and Ca-vO2max between post- and pre-training measurements. Subgroup and meta-regression analyses were used to evaluate the associations among SMDs and potential moderating factors. Sixteen studies were included after systematic review, comprising a total of 153 primarily untrained healthy middle-aged and older subjects (mean age 42-71 years). Endurance training programmes ranged from 8 to 52 weeks of duration. After data pooling, VO2max (SMD 0.89) was increased with endurance training; no heterogeneity among studies was detected. Ca-vO2max was only increased with endurance training interventions lasting more than 12 weeks (SMD 0.62; P = 0.001). In meta-regression, the SMD in Qmax was positively associated with the SMD in VO2max (B = 0.79, P = 0.04). The SMD in Ca-vO2max was not associated with the SMD in VO2max (B = 0.09, P = 0.84). The improvement in VO2max following endurance training is a linear function of Qmax, but not Ca-vO2max, through healthy ageing. © The European Society of Cardiology 2015.
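The effect size pooled here, the standardised mean difference, is the raw post-minus-pre difference divided by a pooled standard deviation. A minimal sketch with illustrative values (the means and SD below are made up for demonstration, not taken from the included studies):

```python
def standardized_mean_difference(mean_post, mean_pre, sd_pooled):
    """Cohen's d-style SMD between post- and pre-training measurements.
    In a real meta-analysis the pooled SD comes from each study's data;
    the numbers below are purely illustrative."""
    return (mean_post - mean_pre) / sd_pooled

# e.g. VO2max rising from 30 to 34 ml/kg/min with a pooled SD of 4.5
print(round(standardized_mean_difference(34.0, 30.0, 4.5), 2))  # 0.89
```

Expressing each study's change in SD units is what allows outcomes measured on different scales (VO2max, Qmax, Ca-vO2max) to be pooled and compared in the meta-regression.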
Teaching learning based optimization algorithm and its engineering applications
Rao, R Venkata
2016-01-01
Describing a new optimization algorithm, the “Teaching-Learning-Based Optimization (TLBO),” in a clear and lucid style, this book maximizes reader insights into how the TLBO algorithm can be used to solve continuous and discrete optimization problems involving single or multiple objectives. As the algorithm operates on the principle of teaching and learning, where teachers influence the quality of learners’ results, the elitist version of TLBO algorithm (ETLBO) is described along with applications of the TLBO algorithm in the fields of electrical engineering, mechanical design, thermal engineering, manufacturing engineering, civil engineering, structural engineering, computer engineering, electronics engineering, physics and biotechnology. The book offers a valuable resource for scientists, engineers and practitioners involved in the development and usage of advanced optimization algorithms.
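The core of TLBO can be sketched in a few lines. This is a minimal illustration of the teacher phase only (the learner phase, bounds handling, and the elitist ETLBO variant described in the book are omitted), with a made-up test function:

```python
import random

def tlbo_teacher_phase(population, fitness):
    """One TLBO teacher-phase step. `population` is a list of real-valued
    vectors; lower fitness is better. Each learner moves toward the best
    solution (the teacher) relative to the class mean, and a candidate is
    kept only if it improves (greedy acceptance)."""
    dim = len(population[0])
    teacher = min(population, key=fitness)
    mean = [sum(x[d] for x in population) / len(population) for d in range(dim)]
    new_pop = []
    for x in population:
        tf = random.choice([1, 2])  # teaching factor, randomly 1 or 2
        r = random.random()
        cand = [x[d] + r * (teacher[d] - tf * mean[d]) for d in range(dim)]
        new_pop.append(cand if fitness(cand) < fitness(x) else x)
    return new_pop

# Minimize the sphere function sum(x_i^2) as a toy example
sphere = lambda v: sum(c * c for c in v)
pop = [[random.uniform(-5, 5) for _ in range(3)] for _ in range(20)]
for _ in range(50):
    pop = tlbo_teacher_phase(pop, sphere)
best = min(pop, key=sphere)
```

Because of the greedy acceptance, the best fitness in the population never worsens from one iteration to the next; notably, TLBO has no algorithm-specific tuning parameters beyond population size and iteration count.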
Primary Care Clinician Expectations Regarding Aging
Davis, Melinda M.; Bond, Lynne A.; Howard, Alan; Sarkisian, Catherine A.
2011-01-01
Purpose: Expectations regarding aging (ERA) in community-dwelling older adults are associated with personal health behaviors and health resource usage. Clinicians' age expectations likely influence patients' expectations and care delivery patterns; yet, limited research has explored clinicians' age expectations. The Expectations Regarding Aging…
Self-Averaging Expectation Propagation
DEFF Research Database (Denmark)
Cakmak, Burak; Opper, Manfred; Fleury, Bernard Henri
We investigate the problem of approximate inference using Expectation Propagation (EP) for large systems under some statistical assumptions. Our approach tries to overcome the numerical bottleneck of EP caused by the inversion of large matrices. Assuming that the measurement matrices are realizations of specific types of random matrix ensembles – called invariant ensembles – the EP cavity variances have an asymptotic self-averaging property. They can be pre-computed using specific generating functions which do not require matrix inversions. We demonstrate the performance of our approach...
Genetic algorithms with permutation coding for multiple sequence alignment.
Ben Othman, Mohamed Tahar; Abdel-Azim, Gamil
2013-08-01
Multiple sequence alignment (MSA) is one of the topics of bioinformatics that has been intensively researched. It is known to be an NP-complete problem and is considered one of the most important and daunting tasks in computational biology. Accordingly, a wide range of heuristic algorithms have been proposed to find optimal alignments. Among these heuristic algorithms are genetic algorithms (GA). The GA has two major weaknesses: it is time consuming and can become trapped in local minima. One of the significant aspects of the GA process in MSA is to maximize the similarities between sequences by adding and shuffling the gaps of the Solution Coding (SC). Several ways of SC have been introduced; one of them is the Permutation Coding (PC). We propose a hybrid algorithm based on genetic algorithms (GAs) with a PC and the 2-opt algorithm. The PC helps to code the MSA solution, which maximizes the gain of resources, reliability and diversity of the GA. The use of the PC opens the area to applying all functions over permutations for MSA. Thus, we suggest an algorithm to calculate the scoring function for multiple alignments based on PC, which is used as the fitness function. The time complexity of the GA is reduced by using this algorithm. Our GA is implemented with different selection strategies and different crossovers. The crossover and mutation probabilities are set according to one strategy. Relevant patents have been probed in the topic.
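A key practical advantage of permutation coding is that standard permutation operators keep every chromosome valid. As an illustration (this is the classic order crossover, not necessarily the specific crossover the authors used):

```python
import random

def order_crossover(p1, p2):
    """Order crossover (OX) for permutation-coded chromosomes: keep a slice
    of parent 1 and fill the remaining positions in parent 2's order, so the
    child is always a valid permutation and no repair step is needed."""
    n = len(p1)
    a, b = sorted(random.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    # genes not already copied from p1, in the order they appear in p2
    fill = [g for g in p2 if g not in child[a:b]]
    idx = 0
    for i in range(n):
        if child[i] is None:
            child[i] = fill[idx]
            idx += 1
    return child

child = order_crossover([0, 1, 2, 3, 4], [4, 3, 2, 1, 0])
assert sorted(child) == [0, 1, 2, 3, 4]  # still a permutation
```

Because every offspring is guaranteed to be a permutation, the fitness (scoring) function can be defined directly over permutations, which is what lets the 2-opt local search be layered on top of the GA.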
Skvortsova, Vasilisa; Palminteri, Stefano; Pessiglione, Mathias
2014-11-19
The mechanisms of reward maximization have been extensively studied at both the computational and neural levels. By contrast, little is known about how the brain learns to choose the options that minimize action cost. In principle, the brain could have evolved a general mechanism that applies the same learning rule to the different dimensions of choice options. To test this hypothesis, we scanned healthy human volunteers while they performed a probabilistic instrumental learning task that varied in both the physical effort and the monetary outcome associated with choice options. Behavioral data showed that the same computational rule, using prediction errors to update expectations, could account for both reward maximization and effort minimization. However, these learning-related variables were encoded in partially dissociable brain areas. In line with previous findings, the ventromedial prefrontal cortex was found to positively represent expected and actual rewards, regardless of effort. A separate network, encompassing the anterior insula, the dorsal anterior cingulate, and the posterior parietal cortex, correlated positively with expected and actual efforts. These findings suggest that the same computational rule is applied by distinct brain systems, depending on the choice dimension (cost or benefit) that has to be learned. Copyright © 2014 the authors 0270-6474/14/3415621-10$15.00/0.
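The shared computational rule, updating an expectation by a fraction of the prediction error, can be sketched as a delta rule applied identically to both dimensions. The learning rate and the outcome sequence below are illustrative, not parameters fit in the study:

```python
def delta_rule_update(expectation, outcome, learning_rate=0.3):
    """Move an expectation toward an observed outcome by a fraction
    (the learning rate) of the prediction error."""
    prediction_error = outcome - expectation
    return expectation + learning_rate * prediction_error

# The same rule tracks expected reward and expected effort in parallel:
reward_expectation, effort_expectation = 0.0, 0.0
for reward, effort in [(1.0, 0.2), (1.0, 0.8), (0.0, 0.8)]:
    reward_expectation = delta_rule_update(reward_expectation, reward)
    effort_expectation = delta_rule_update(effort_expectation, effort)
```

The study's point is that although one equation suffices behaviorally, the reward and effort expectations it produces correlate with activity in distinct brain networks.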