Learning Hidden Markov Models using Non-Negative Matrix Factorization
Cybenko, George
2008-01-01
The Baum-Welch algorithm, together with its derivatives and variations, has been the main technique for learning Hidden Markov Models (HMMs) from observational data. We present an HMM learning algorithm based on the non-negative matrix factorization (NMF) of higher-order Markovian statistics that is structurally different from Baum-Welch and its associated approaches. The described algorithm supports estimation of the number of recurrent states of an HMM and iterates the NMF algorithm to improve the learned HMM parameters. Numerical examples are provided as well.
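The record above turns on a rank property of second-order output statistics: for an HMM, the matrix of consecutive-pair probabilities factors through the hidden states, so its numerical rank estimates the number of recurrent states. The sketch below illustrates this on a made-up 2-state, 4-symbol HMM (all parameters are hypothetical, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
# A 2-state HMM over 4 output symbols (all parameters made up).
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])                  # hidden-state transition matrix
O = np.array([[0.70, 0.20, 0.05, 0.05],
              [0.05, 0.05, 0.30, 0.60]])    # emission probabilities

# Simulate a long observation sequence from the HMM.
n = 100_000
obs = np.empty(n, dtype=int)
s = 0
for t in range(n):
    obs[t] = rng.choice(4, p=O[s])
    s = rng.choice(2, p=T[s])

# Empirical second-order statistics P2[i, j] ~ Pr(Y_t = i, Y_{t+1} = j).
# For an HMM, P2 factors through the hidden states, so rank(P2) is at most
# the number of recurrent states: the singular values reveal that number.
P2 = np.zeros((4, 4))
for a, b in zip(obs[:-1], obs[1:]):
    P2[a, b] += 1.0
P2 /= P2.sum()

sv = np.linalg.svd(P2, compute_uv=False)
```

With 2 hidden states, the first two singular values carry the signal while the remaining ones sit at sampling-noise level, suggesting the state count.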
Efficient non-negative constrained model-based inversion in optoacoustic tomography
Ding, Lu; Luís Deán-Ben, X.; Lutzweiler, Christian; Razansky, Daniel; Ntziachristos, Vasilis
2015-09-01
The inversion accuracy in optoacoustic tomography depends on a number of parameters, including the number of detectors employed, discrete sampling issues, and imperfections of the forward model. These parameters introduce ambiguities into the reconstructed image. A common ambiguity is the appearance of negative values, which have no physical meaning since optical absorption can only be greater than or equal to zero. We investigate herein algorithms that impose non-negativity constraints in model-based optoacoustic inversion. Several state-of-the-art non-negative constrained algorithms are analyzed. Furthermore, an algorithm based on the conjugate gradient method is introduced in this work. We are particularly interested in investigating whether non-negativity restrictions lead to accurate solutions or drive the appearance of errors and artifacts. It is shown that the computational performance of non-negative constrained inversion is higher for the introduced algorithm than for the other algorithms, while yielding equivalent results. The experimental performance of this inversion procedure is then tested in phantoms and small animals, showing an improvement in image quality and quantitative accuracy with respect to the unconstrained approach. The study validates the use of non-negativity constraints for improving image accuracy compared to unconstrained methods, while maintaining computational efficiency.
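The paper's own solver is conjugate-gradient based; as a minimal stand-in, the sketch below enforces the non-negativity constraint with plain projected gradient descent on a made-up linear forward model (all sizes and data are hypothetical):

```python
import numpy as np

def nn_projected_gradient(A, b, iters=2000):
    """Minimize ||A x - b||^2 subject to x >= 0 by projected gradient descent.
    A simple stand-in; the paper's own solver is conjugate-gradient based."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L with L = sigma_max(A)^2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x -= step * (A.T @ (A @ x - b))          # gradient step
        np.maximum(x, 0.0, out=x)                # project onto x >= 0
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 15))                # made-up forward model
x_true = np.abs(rng.standard_normal(15))         # non-negative "image"
b = A @ x_true                                   # noiseless observations
x_hat = nn_projected_gradient(A, b)
```

Because the true solution is feasible and the problem is overdetermined, the iterates converge to it while never leaving the non-negative orthant.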
Exploring Mixed Membership Stochastic Block Models via Non-negative Matrix Factorization
Peng, Chengbin
2014-12-01
Many real-world phenomena can be modeled by networks in which entities and connections are represented by nodes and edges respectively. When certain nodes are highly connected with each other, they form a cluster, called a community in our context. It is usually assumed that each node belongs to one community only, but evidence from biology and social networks reveals that communities often overlap with each other. In other words, one node may belong to multiple communities. In light of that, mixed membership stochastic block models (MMB) have been developed to model networks with overlapping communities. Such a model contains three matrices: two incidence matrices indicating in- and out-connections and one probability matrix. When the probability of connections between nodes in different communities is sufficiently small, the parameter inference problem for this model can be solved by a constrained non-negative matrix factorization (NMF) algorithm. In this paper, we explore the connection between the two models and propose an algorithm based on NMF to infer the parameters of MMB. The proposed algorithm can detect overlapping communities whether or not the number of communities is known. Experiments show that our algorithm can achieve better community detection performance than the traditional NMF algorithm. © 2014 IEEE.
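A minimal illustration of NMF-style community inference: approximate a symmetric adjacency matrix A by W Wᵀ with W ≥ 0, so each row of W gives a node's soft membership over communities and overlap is expressed naturally. This is a generic symmetric-NMF sketch (damped multiplicative update), not the constrained algorithm of the paper:

```python
import numpy as np

def symmetric_nmf(A, k, iters=300, seed=0):
    """Approximate a non-negative adjacency matrix A by W @ W.T with W >= 0,
    using a damped multiplicative update for symmetric NMF. Row i of W holds
    node i's soft membership weights over the k communities."""
    rng = np.random.default_rng(seed)
    W = rng.random((A.shape[0], k))
    eps = 1e-12
    for _ in range(iters):
        W *= 0.5 + 0.5 * (A @ W) / (W @ (W.T @ W) + eps)
    return W

# Two triangles sharing node 2: node 2 genuinely belongs to both communities.
A = np.zeros((5, 5))
for i, j in [(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4)]:
    A[i, j] = A[j, i] = 1.0
W = symmetric_nmf(A, 2)
memberships = W / (W.sum(axis=1, keepdims=True) + 1e-12)  # rows sum to ~1
```

Inspecting `memberships` shows each node's weight over the two communities; the shared node carries weight on both, which a hard single-community assignment cannot express.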
Modelling sequentially scored item responses
Akkermans, W.
2000-01-01
The sequential model can be used to describe the variable resulting from a sequential scoring process. In this paper two more item response models are investigated with respect to their suitability for sequential scoring: the partial credit model and the graded response model. The investigation is c
Slawski, Martin
2012-01-01
Least squares fitting is in general not useful for high-dimensional linear models, in which the number of predictors is of the same or even larger order of magnitude than the number of samples. Theory developed in recent years has coined a paradigm according to which sparsity-promoting regularization is regarded as a necessity in such a setting. Deviating from this paradigm, we show that non-negativity constraints on the regression coefficients may be similarly effective as explicit regularization. For a broad range of designs with Gram matrix having non-negative entries, we establish bounds on the $\ell_2$-prediction error of non-negative least squares (NNLS) whose form qualitatively matches corresponding results for $\ell_1$-regularization. Under slightly stronger conditions, it is established that NNLS followed by hard thresholding performs excellently in terms of support recovery of an (approximately) sparse target, in some cases improving over $\ell_1$-regularization. A substantial advantage of NNLS over r...
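The NNLS-plus-hard-thresholding pipeline described above can be sketched with SciPy's `nnls` on a made-up design with non-negative entries (so the Gram matrix is non-negative, matching the setting of the bounds):

```python
import numpy as np
from scipy.optimize import nnls

# Sparse non-negative target and a design with non-negative entries,
# so the Gram matrix A.T @ A is non-negative entrywise.
rng = np.random.default_rng(0)
n, p = 100, 10
A = np.abs(rng.standard_normal((n, p)))
x_true = np.zeros(p)
x_true[[1, 4, 7]] = 1.0                    # sparse non-negative target
b = A @ x_true + 0.01 * rng.standard_normal(n)

x_hat, rnorm = nnls(A, b)                  # non-negative least squares
support = np.flatnonzero(x_hat > 0.5)      # hard thresholding step
```

With low noise, the thresholded NNLS estimate recovers the true support without any explicit sparsity penalty, which is the phenomenon the bounds above formalize.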
DEFF Research Database (Denmark)
Nielsen, Søren Føns Vind; Mørup, Morten
2014-01-01
Our analysis is based on the non-negativity constrained Canonical Polyadic (CP) decomposition, where we handle the missing data using marginalization, considering three prominent alternating least squares procedures: multiplicative updates, column-wise, and row-wise updating of the component matrices. We examine three gene expression prediction scenarios based on data missing at random, whole genes missing, and whole areas missing within a subject. We find that the column-wise updating approach, also known as HALS, performs the most efficiently when fitting the model. We further observe that the non-negativity constrained CP model is able to predict gene expressions better than predicting by the subject average when data is missing at random. When whole genes and whole areas are missing it is in general better to predict by subject averages. However, we find that when whole genes are missing...
Modeling Polio Data Using the First Order Non-Negative Integer-Valued Autoregressive, INAR(1), Model
Vazifedan, Turaj; Shitan, Mahendran
Time series data may consist of counts, such as the number of road accidents, the number of patients in a certain hospital, or the number of customers waiting for service at a certain time. When the values of the observations are large, it is usual to use a Gaussian Autoregressive Moving Average (ARMA) process to model the time series. However, if the observed counts are small, it is not appropriate to use an ARMA process to model the observed phenomenon. In such cases we need to model the time series data by using a Non-Negative Integer-valued Autoregressive (INAR) process. The modeling of count data is based on the binomial thinning operator. In this paper we illustrate the modeling of count data using the monthly number of Poliomyelitis cases in the United States from January 1970 to December 1983. We applied the AR(1), Poisson regression and INAR(1) models, and the suitability of these models was assessed by using the Index of Agreement (I.A.). We found that the INAR(1) model is more appropriate, in the sense that it had a better I.A., and it is natural since the data are counts.
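A minimal simulation of the INAR(1) recursion with the binomial thinning operator mentioned above, X_t = α ∘ X_{t−1} + ε_t with Poisson innovations (parameters made up for illustration):

```python
import numpy as np

def simulate_inar1(alpha, lam, n, seed=0):
    """Simulate X_t = alpha o X_{t-1} + e_t, where 'o' is binomial thinning
    (each of the X_{t-1} counts survives independently with probability alpha)
    and e_t ~ Poisson(lam). Parameters here are made up for illustration."""
    rng = np.random.default_rng(seed)
    x = np.empty(n, dtype=np.int64)
    x[0] = rng.poisson(lam / (1.0 - alpha))        # start near stationary mean
    for t in range(1, n):
        survivors = rng.binomial(x[t - 1], alpha)  # binomial thinning operator
        x[t] = survivors + rng.poisson(lam)        # add new arrivals
    return x

x = simulate_inar1(alpha=0.5, lam=2.0, n=5000)
# Stationary mean is lam / (1 - alpha) = 4; the sample mean should sit nearby.
```

Unlike a Gaussian AR(1), every simulated value is a non-negative integer, which is what makes the model natural for small counts.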
Dai, Yimian; Wu, Yiquan; Song, Yu; Guo, Jun
2017-03-01
To simultaneously enhance small targets and suppress heavy clutter, a robust non-negative infrared patch-image model via partial sum minimization of singular values is proposed. First, the intrinsic reason behind the undesirable performance of the state-of-the-art infrared patch-image (IPI) model when facing extremely complex backgrounds is analyzed. We point out that it lies in the mismatch between the IPI model's implicit assumption of a large number of observations and the reality of deficient observations of strong edges. To fix this problem, instead of the nuclear norm, we adopt the partial sum of singular values to constrain the low-rank background patch-image, which provides a more accurate background estimation and eliminates almost all the salient residuals in the decomposed target image. In addition, considering that the infrared small target is always brighter than its adjacent background, we propose an additional non-negative constraint on the sparse target patch-image, which not only further suppresses undesirable components but also accelerates the convergence rate. Finally, an algorithm based on the inexact augmented Lagrange multiplier method is developed to solve the proposed model. A large number of experiments demonstrate that the proposed model significantly improves over the other nine competitive methods in terms of both clutter suppression performance and convergence rate.
Directory of Open Access Journals (Sweden)
Martin R L Paine
Full Text Available High-grade serous carcinoma (HGSC) is the most common and deadliest form of ovarian cancer. Yet it is largely asymptomatic in its initial stages. Studying the origin and early progression of this disease is thus critical in identifying markers for early detection and screening purposes. Tissue-based mass spectrometry imaging (MSI) can be employed as an unbiased way of examining localized metabolic changes between healthy and cancerous tissue directly, at the onset of disease. In this study, we describe MSI results from Dicer-Pten double-knockout (DKO) mice, a mouse model faithfully reproducing the clinical nature of human HGSC. By using non-negative matrix factorization (NMF) for the unsupervised analysis of desorption electrospray ionization (DESI) datasets, tissue regions are segregated based on spectral components in an unbiased manner, with alterations related to HGSC highlighted. Results obtained by combining NMF with DESI-MSI revealed several metabolic species elevated in the tumor tissue and/or surrounding blood-filled cyst including ceramides, sphingomyelins, bilirubin, cholesterol sulfate, and various lysophospholipids. Multiple metabolites identified within the imaging study were also detected at altered levels within serum in a previous metabolomic study of the same mouse model. As an example workflow, features identified in this study were used to build an oPLS-DA model capable of discriminating between DKO mice with early-stage tumors and controls with up to 88% accuracy.
Kim, Jaeyeon; Bennett, Rachel V.; Parry, R. Mitchell; Gaul, David A.; Wang, May D.; Matzuk, Martin M.; Fernández, Facundo M.
2016-01-01
High-grade serous carcinoma (HGSC) is the most common and deadliest form of ovarian cancer. Yet it is largely asymptomatic in its initial stages. Studying the origin and early progression of this disease is thus critical in identifying markers for early detection and screening purposes. Tissue-based mass spectrometry imaging (MSI) can be employed as an unbiased way of examining localized metabolic changes between healthy and cancerous tissue directly, at the onset of disease. In this study, we describe MSI results from Dicer-Pten double-knockout (DKO) mice, a mouse model faithfully reproducing the clinical nature of human HGSC. By using non-negative matrix factorization (NMF) for the unsupervised analysis of desorption electrospray ionization (DESI) datasets, tissue regions are segregated based on spectral components in an unbiased manner, with alterations related to HGSC highlighted. Results obtained by combining NMF with DESI-MSI revealed several metabolic species elevated in the tumor tissue and/or surrounding blood-filled cyst including ceramides, sphingomyelins, bilirubin, cholesterol sulfate, and various lysophospholipids. Multiple metabolites identified within the imaging study were also detected at altered levels within serum in a previous metabolomic study of the same mouse model. As an example workflow, features identified in this study were used to build an oPLS-DA model capable of discriminating between DKO mice with early-stage tumors and controls with up to 88% accuracy. PMID:27159635
DEFF Research Database (Denmark)
Nielsen, Søren Føns Vind; Mørup, Morten
2014-01-01
Non-negative Tensor Factorization (NTF) has become a prominent tool for analyzing high dimensional multi-way structured data. In this paper we set out to analyze gene expression across brain regions in multiple subjects based on data from the Allen Human Brain Atlas [1] with more than 40% data missing... forms a promising framework for imputing missing values and characterizing gene expression in the human brain. However, care also has to be taken, in particular when predicting the genetic expression levels of a whole missing brain region, as our analysis indicates that this requires a substantial...
On affine non-negative matrix factorization
DEFF Research Database (Denmark)
Laurberg, Hans; Hansen, Lars Kai
2007-01-01
We generalize the non-negative matrix factorization (NMF) generative model to incorporate an explicit offset. Multiplicative estimation algorithms are provided for the resulting sparse affine NMF model. We show that the affine model has improved uniqueness properties and leads to more accurate...
A Parallel Programming Model With Sequential Semantics
1996-01-01
Parallel programming is more difficult than sequential programming in part because of the complexity of reasoning, testing, and debugging in the context of concurrency. In the thesis, we present and investigate a parallel programming model that provides direct control of parallelism in a notation...
The sequential propensity household projection model
Directory of Open Access Journals (Sweden)
Tom Wilson
2013-04-01
Full Text Available BACKGROUND The standard method of projecting living arrangements and households in Australia and New Zealand is the 'propensity model', a type of extended headship rate model. Unfortunately it possesses a number of serious shortcomings, including internal inconsistencies, difficulties in setting living arrangement assumptions, and very limited scenario creation capabilities. Data allowing the application of more sophisticated dynamic household projection models are unavailable in Australia. OBJECTIVE The aim was to create a projection model to overcome these shortcomings whilst minimising input data requirements and costs, and retaining the projection outputs users are familiar with. METHODS The sequential propensity household projection model is proposed. Living arrangement projections take place in a sequence of calculations, with progressively more detailed living arrangement categories calculated in each step. In doing so the model largely overcomes the three serious deficiencies of the standard propensity model noted above. RESULTS The model is illustrated by three scenarios produced for one case study State, Queensland. They are: a baseline scenario in which all propensities are held constant to demonstrate the effects of population growth and ageing, a housing crisis scenario where housing affordability declines, and a prosperity scenario where families and individuals enjoy greater real incomes. A sensitivity analysis in which assumptions are varied one by one is also presented. CONCLUSIONS The sequential propensity model offers a more effective method of producing household and living arrangement projections than the standard propensity model, and is a practical alternative to dynamic projection models for countries and regions where the data and resources to apply such models are unavailable.
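The sequence-of-calculations idea can be illustrated with a toy two-step propensity computation (all rates are hypothetical): step 1 splits each age group by a broad living-arrangement propensity, and step 2 applies a finer propensity only within one of the resulting categories, so totals are preserved at every step:

```python
import numpy as np

# Toy two-step sequential propensity calculation (all rates hypothetical).
pop = np.array([1000.0, 1200.0, 900.0])      # persons in three age groups

# Step 1: broad living-arrangement split.
p_private = np.array([0.95, 0.97, 0.90])     # propensity to live in a private dwelling
private = pop * p_private
non_private = pop - private

# Step 2: a finer split applied only within the 'private' category.
p_lone = np.array([0.10, 0.25, 0.40])        # propensity to live alone
lone = private * p_lone
other_private = private - lone

# Because each step only subdivides an existing category, totals are preserved.
total_check = lone + other_private + non_private
```

This nesting is what gives the sequential model its internal consistency: no combination of assumptions can make the detailed categories sum to something other than the population.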
Vesselinov, V. V.; Alexandrov, B.
2014-12-01
The identification of the physical sources causing spatial and temporal fluctuations of state variables such as river stage levels and aquifer hydraulic heads is challenging. The fluctuations can be caused by variations in natural and anthropogenic sources such as precipitation events, infiltration, groundwater pumping, barometric pressures, etc. The source identification and separation can be crucial for conceptualization of the hydrological conditions and characterization of system properties. If the original signals that cause the observed state-variable transients can be successfully "unmixed", decoupled physics models may then be applied to analyze the propagation of each signal independently. We propose a new model-free inverse analysis of transient data based on the Non-negative Matrix Factorization (NMF) method for Blind Source Separation (BSS) coupled with the k-means clustering algorithm, which we call NMFk. NMFk is capable of identifying a set of unique sources from a set of experimentally measured mixed signals, without any information about the sources, their transients, and the physical mechanisms and properties controlling the signal propagation through the system. A classical BSS conundrum is the so-called "cocktail-party" problem, where several microphones are recording the sounds in a ballroom (music, conversations, noise, etc.). Each of the microphones records a mixture of the sounds. The goal of BSS is to "unmix" and reconstruct the original sounds from the microphone records. Similarly to the "cocktail-party" problem, our model-free analysis only requires information about state-variable transients at a number of observation points, m, where m > r and r is the number of unknown unique sources causing the observed fluctuations. We apply the analysis to a dataset from the Los Alamos National Laboratory (LANL) site, identifying the sources as barometric-pressure and water-supply pumping effects and estimating their impacts. We also estimate the...
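A toy version of the "cocktail-party" unmixing described above: m = 3 observation points record positive mixtures of r = 2 non-negative source transients, and plain NMF (without the k-means step of NMFk) recovers a low-rank non-negative factorization of the records. Sources and mixing matrix are made up:

```python
import numpy as np

# Two non-negative source transients and three observation points, each
# recording a different positive mixture (m = 3 > r = 2, as required above).
t = np.linspace(0.0, 1.0, 400)
S = np.vstack([np.abs(np.sin(8 * np.pi * t)),   # source 1: rectified sine
               (10.0 * t) % 1.0])               # source 2: sawtooth
M = np.array([[1.0, 0.3],
              [0.4, 1.0],
              [0.7, 0.7]])                      # made-up mixing matrix
V = M @ S                                       # the mixed records

def nmf(V, r, iters=1000, seed=0):
    """Plain multiplicative-update NMF (squared error): V ~ W @ H, W, H >= 0."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], r)) + 0.1
    H = rng.random((r, V.shape[1])) + 0.1
    eps = 1e-12
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

W, H = nmf(V, r=2)
fit = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

NMFk additionally reruns such factorizations for different r, clusters the resulting solutions with k-means, and uses cluster stability to select the number of sources; that step is omitted in this sketch.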
Isoscaling in Statistical Sequential Decay Model
Institute of Scientific and Technical Information of China (English)
TIAN Wen-Dong; SU Qian-Min; WANG Hong-Wei; WANG Kun; YAN Ting-Zhi; MA Yu-Gang; CAI Xiang-Zhou; FANG De-Qing; GUO Wei; MA Chun-Wang; LIU Gui-Hua; SHEN Wen-Qing; SHI Yu
2007-01-01
A sequential decay model is used to study isoscaling, i.e., the factorization of isotope ratios from sources of different isospins and sizes, over a broad range of excitation energies, into fugacity terms of proton and neutron number: R21(N, Z) = Y2(N, Z)/Y1(N, Z) = C exp(αN + βZ). It is found that the isoscaling parameters α and β depend strongly on the isospin difference of the equilibrated source and on the excitation energy, while no significant influence of the source size on α and β has been observed. It is found that α and β decrease with the excitation energy and are linear functions of 1/T and of Δ(Z/A)² or Δ(N/A)² of the sources. The symmetry energy coefficient Csym is constrained from the relationship between α and the source Δ(Z/A)², and between β and the source Δ(N/A)².
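Since ln R21 = ln C + αN + βZ, the isoscaling parameters can be recovered by an ordinary log-linear least-squares fit; the sketch below does this on synthetic yields generated with made-up α, β and C:

```python
import numpy as np

# Synthetic isotope yield ratios obeying R21 = C * exp(alpha*N + beta*Z)
# with made-up parameters, then recovered by a log-linear fit.
alpha_true, beta_true, C_true = 0.4, -0.3, 1.2
N, Z = np.meshgrid(np.arange(1, 8), np.arange(1, 6), indexing="ij")
N, Z = N.ravel().astype(float), Z.ravel().astype(float)
R21 = C_true * np.exp(alpha_true * N + beta_true * Z)

# ln R21 = ln C + alpha*N + beta*Z  ->  ordinary least squares.
X = np.column_stack([np.ones_like(N), N, Z])
(lnC_hat, alpha_hat, beta_hat), *_ = np.linalg.lstsq(X, np.log(R21), rcond=None)
```

On real yield data the same fit would be weighted by the statistical uncertainty of each ratio; the noiseless version recovers α, β and C exactly.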
Ligand Binding to Macromolecules: Allosteric and Sequential Models of Cooperativity.
Hess, V. L.; Szabo, Attila
1979-01-01
A simple model is described for the binding of ligands to macromolecules. The model is applied to the cooperative binding by hemoglobin and aspartate transcarbamylase. The sequential and allosteric models of cooperative binding are considered. (BB)
How quantum are non-negative wavefunctions?
Energy Technology Data Exchange (ETDEWEB)
Hastings, M. B. [Station Q, Microsoft Research, Santa Barbara, California 93106-6105, USA and Quantum Architectures and Computation Group, Microsoft Research, Redmond, Washington 98052 (United States)
2016-01-15
We consider wavefunctions which are non-negative in some tensor product basis. We study what possible teleportation can occur in such wavefunctions, giving a complete answer in some cases (when one system is a qubit) and partial answers elsewhere. We use this to show that a one-dimensional wavefunction which is non-negative and has zero correlation length can be written in a “coherent Gibbs state” form, as explained later. We conjecture that such holds in higher dimensions. Additionally, some results are provided on possible teleportation in general wavefunctions, explaining how Schmidt coefficients before measurement limit the possible Schmidt coefficients after measurement, and on the absence of a “generalized area law” [D. Aharonov et al., in Proceedings of Foundations of Computer Science (FOCS) (IEEE, 2014), p. 246; e-print arXiv.org:1410.0951] even for Hamiltonians with no sign problem. One of the motivations for this work is an attempt to prove a conjecture about ground state wavefunctions which have an “intrinsic” sign problem that cannot be removed by any quantum circuit. We show a weaker version of this, showing that the sign problem is intrinsic for commuting Hamiltonians in the same phase as the double semion model under the technical assumption that TQO-2 holds [S. Bravyi et al., J. Math. Phys. 51, 093512 (2010)].
Sparse Non-negative Matrix Factor 2-D Deconvolution
DEFF Research Database (Denmark)
Mørup, Morten; Schmidt, Mikkel N.
2006-01-01
We introduce the non-negative matrix factor 2-D deconvolution (NMF2D) model, which decomposes a matrix into a 2-dimensional convolution of two factor matrices. This model is an extension of the non-negative matrix factor deconvolution (NMFD) recently introduced by Smaragdis (2004). We derive...... and prove the convergence of two algorithms for NMF2D based on minimizing the squared error and the Kullback-Leibler divergence respectively. Next, we introduce a sparse non-negative matrix factor 2-D deconvolution model that gives easily interpretable decompositions and devise two algorithms for computing...... this form of factorization. The developed algorithms have been used for source separation and music transcription....
GEOMETRIC METHOD OF SEQUENTIAL ESTIMATION RELATED TO MULTINOMIAL DISTRIBUTION MODELS
Institute of Scientific and Technical Information of China (English)
WEI Bocheng; LI Shouye
1995-01-01
In the 1980s, differential geometric methods were successfully used to study curved exponential families and normal nonlinear regression models. This paper presents a new geometric structure to study multinomial distribution models which contain a set of nonlinear parameters. Based on this geometric structure, the authors study several asymptotic properties of sequential estimation. The bias, the variance and the information loss of the sequential estimates are given from a geometric viewpoint, and a limit theorem connecting the observed and expected Fisher information is obtained in terms of curvature measures. The results show that the sequential estimation procedure has some better properties which are generally impossible for nonsequential estimation procedures.
Algorithms for Sparse Non-negative Tucker Decompositions
DEFF Research Database (Denmark)
Mørup, Morten; Hansen, Lars Kai
2008-01-01
for Tucker decompositions when indeed the data and interactions can be considered non-negative. We further illustrate how sparse coding can help identify what model (PARAFAC or Tucker) is the most appropriate for the data as well as to select the number of components by turning off excess components...
Swallow, Ben; Buckland, Stephen T; King, Ruth; Toms, Mike P
2016-03-01
The development of methods for dealing with continuous data with a spike at zero has lagged behind those for overdispersed or zero-inflated count data. We consider longitudinal ecological data corresponding to an annual average of 26 weekly maximum counts of birds, and are hence effectively continuous, bounded below by zero but also with a discrete mass at zero. We develop a Bayesian hierarchical Tweedie regression model that can directly accommodate the excess number of zeros common to this type of data, whilst accounting for both spatial and temporal correlation. Implementation of the model is conducted in a Markov chain Monte Carlo (MCMC) framework, using reversible jump MCMC to explore uncertainty across both parameter and model spaces. This regression modelling framework is very flexible and removes the need to make strong assumptions about mean-variance relationships a priori. It can also directly account for the spike at zero, whilst being easily applicable to other types of data and other model formulations. Whilst a correlative study such as this cannot prove causation, our results suggest that an increase in an avian predator may have led to an overall decrease in the number of one of its prey species visiting garden feeding stations in the United Kingdom. This may reflect a change in behaviour of house sparrows to avoid feeding stations frequented by sparrowhawks, or a reduction in house sparrow population size as a result of sparrowhawk increase.
Sequential pole dominance model and decay of new mesons
Chaichian, Masud
1976-01-01
The sequential pole dominance model recently proposed by Freund and Nambu (1975) allows predictions to be made about the decay processes which violate the Zweig-Iizuka rule. Detailed comparison of the model with recent experimental data on the decay modes of psi(3095) and psi'(3684) reveals some quantitative disagreement. A possible decay mechanism which can account for this discrepancy is discussed. (7 refs).
Joint cluster and non-negative least squares analysis for aerosol mass spectrum data
Energy Technology Data Exchange (ETDEWEB)
Zhang, T; Zhu, W [Department of Applied Mathematics and Statistics, Stony Brook University, Stony Brook, NY 11794-3600 (United States); McGraw, R [Environmental Sciences Department, Brookhaven National Laboratory, Upton, NY 11973-5000 (United States)], E-mail: zhu@ams.sunysb.edu
2008-07-15
Aerosol mass spectrum (AMS) data contain hundreds of mass-to-charge ratios and their corresponding intensities from air collected through the mass spectrometer. The observations are usually taken sequentially in time to monitor the air composition, quality and temporal change in an area of interest. An important goal of AMS data analysis is to reduce the dimensionality of the original data, yielding a small set of representative tracers for various atmospheric and climatic models. In this work, we present an approach to jointly apply cluster analysis and the non-negative least squares method towards this goal. Application to a relevant study demonstrates the effectiveness of this new approach. Comparisons are made to other relevant multivariate statistical techniques including principal component analysis and the positive matrix factorization method, and guidelines are provided.
A continuous-time neural model for sequential action.
Kachergis, George; Wyatte, Dean; O'Reilly, Randall C; de Kleijn, Roy; Hommel, Bernhard
2014-11-01
Action selection, planning and execution are continuous processes that evolve over time, responding to perceptual feedback as well as evolving top-down constraints. Existing models of routine sequential action (e.g. coffee- or pancake-making) generally fall into one of two classes: hierarchical models that include hand-built task representations, or heterarchical models that must learn to represent hierarchy via temporal context, but thus far lack goal-orientedness. We present a biologically motivated model of the latter class that, because it is situated in the Leabra neural architecture, affords an opportunity to include both unsupervised and goal-directed learning mechanisms. Moreover, we embed this neurocomputational model in the theoretical framework of the theory of event coding (TEC), which posits that actions and perceptions share a common representation with bidirectional associations between the two. Thus, in this view, not only does perception select actions (along with task context), but actions are also used to generate perceptions (i.e. intended effects). We propose a neural model that implements TEC to carry out sequential action control in hierarchically structured tasks such as coffee-making. Unlike traditional feedforward discrete-time neural network models, which use static percepts to generate static outputs, our biological model accepts continuous-time inputs and likewise generates non-stationary outputs, making short-timescale dynamic predictions.
Algorithms for Sparse Non-negative Tucker Decompositions
DEFF Research Database (Denmark)
Mørup, Morten; Hansen, Lars Kai
2008-01-01
There is an increasing interest in analysis of large scale multi-way data. The concept of multi-way data refers to arrays of data with more than two dimensions, i.e., taking the form of tensors. To analyze such data, decomposition techniques are widely used. The two most common decompositions...... decompositions). To reduce ambiguities of this type of decomposition we develop updates that can impose sparseness in any combination of modalities, hence, proposed algorithms for sparse non-negative Tucker decompositions (SN-TUCKER). We demonstrate how the proposed algorithms are superior to existing algorithms...... for Tucker decompositions when indeed the data and interactions can be considered non-negative. We further illustrate how sparse coding can help identify what model (PARAFAC or Tucker) is the most appropriate for the data as well as to select the number of components by turning off excess components...
Determining state-space models from sequential output data
Lin, Jiguan Gene
1988-01-01
This talk focuses on the determination of state-space models for large space systems using only the output data. The output data could be generated by the unknown or deliberate initial conditions of the space structure in question. We shall review some relevant fundamental work on the state-space modeling of sequential output data that is potentially applicable to large space structures. If formulated in terms of some generalized Markov parameters, this approach is in some sense similar to, but much simpler than, the Juang-Pappa Eigensystem Realization Algorithm (ERA) and the Ho-Kalman construction procedure.
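The Hankel/SVD construction alluded to above (ERA-style) can be sketched for scalar Markov parameters h_j = C A^(j−1) B: build two shifted Hankel matrices, factor the first by SVD, and read off a balanced realization. The example system is made up:

```python
import numpy as np

def era_scalar(h, order, k=10):
    """ERA-style realization of (A, B, C) from scalar Markov parameters
    h[1], h[2], ... with h[j] = C A^(j-1) B: factor a Hankel matrix by SVD
    and balance it. A textbook sketch of the construction, not Lin's method."""
    H0 = np.array([[h[i + j + 1] for j in range(k)] for i in range(k)])
    H1 = np.array([[h[i + j + 2] for j in range(k)] for i in range(k)])
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :order], s[:order], Vt[:order]
    rs = np.sqrt(s)
    A = (U / rs).T @ H1 @ (Vt.T / rs)    # A = S^-1/2 U^T H1 V S^-1/2
    B = (rs[:, None] * Vt)[:, 0]         # first column of S^1/2 V^T
    C = (U * rs)[0]                      # first row of U S^1/2
    return A, B, C

# Markov parameters of a made-up first-order system: h[j] = 0.8**(j-1).
h = [0.0] + [0.8 ** (j - 1) for j in range(1, 30)]   # h[0] unused
A, B, C = era_scalar(h, order=1)
h3 = float(C @ np.linalg.matrix_power(A, 2) @ B)     # reconstructed h[3]
```

The realized model reproduces the Markov parameters it was built from; choosing `order` from the decay of the Hankel singular values is the usual model-order selection step.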
Formal Modeling and Verification of Interlocking Systems Featuring Sequential Release
DEFF Research Database (Denmark)
Vu, Linh Hong; Haxthausen, Anne Elisabeth; Peleska, Jan
2015-01-01
In this paper, we present a method and an associated tool suite for formal verification of the new ETCS level 2 based Danish railway interlocking systems. We have made a generic and reconfigurable model of the system behavior and generic high-level safety properties. This model accommodates...... sequential release – a feature in the new Danish interlocking systems. The generic model and safety properties can be instantiated with interlocking configuration data, resulting in a concrete model in the form of a Kripke structure, and in high-level safety properties expressed as state invariants. Using...... SMT based bounded model checking (BMC) and inductive reasoning, we are able to verify the properties for model instances corresponding to railway networks of industrial size. Experiments also show that BMC is efficient for finding bugs in the railway interlocking designs....
Formal Modeling and Verification of Interlocking Systems Featuring Sequential Release
DEFF Research Database (Denmark)
Vu, Linh Hong; Haxthausen, Anne Elisabeth; Peleska, Jan
2014-01-01
In this paper, we present a method and an associated tool suite for formal verification of the new ETCS level 2 based Danish railway interlocking systems. We have made a generic and reconfigurable model of the system behavior and generic high-level safety properties. This model accommodates...... sequential release - a feature in the new Danish interlocking systems. The generic model and safety properties can be instantiated with interlocking configuration data, resulting in a concrete model in the form of a Kripke structure, and in high-level safety properties expressed as state invariants. Using...... SMT based bounded model checking (BMC) and inductive reasoning, we are able to verify the properties for model instances corresponding to railway networks of industrial size. Experiments also show that BMC is efficient for finding bugs in the railway interlocking designs....
Non-negative Matrix Factorization for Binary Data
DEFF Research Database (Denmark)
Larsen, Jacob Søgaard; Clemmensen, Line Katrine Harder
We propose the Logistic Non-negative Matrix Factorization for decomposition of binary data. Binary data are frequently generated in e.g. text analysis, sensory data, market basket data etc. A common method for analysing non-negative data is the Non-negative Matrix Factorization, though this is in...
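A minimal sketch of a logistic NMF of the kind described above. The Bernoulli-likelihood objective and projected-gradient updates below are illustrative assumptions, not the paper's algorithm:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def logistic_nmf(X, k, iters=500, lr=0.05):
    """Illustrative logistic NMF: model P(X[i][j] = 1) = sigmoid((W H)[i][j])
    with non-negative W (m x k) and H (k x n), fitted by projected gradient
    ascent on the Bernoulli log-likelihood."""
    random.seed(0)
    m, n = len(X), len(X[0])
    W = [[random.random() for _ in range(k)] for _ in range(m)]
    H = [[random.random() for _ in range(n)] for _ in range(k)]
    for _ in range(iters):
        P = [[sigmoid(sum(W[i][t] * H[t][j] for t in range(k)))
              for j in range(n)] for i in range(m)]
        R = [[X[i][j] - P[i][j] for j in range(n)] for i in range(m)]  # residual = gradient signal
        for i in range(m):
            for t in range(k):  # gradient step, then project back to >= 0
                W[i][t] = max(0.0, W[i][t] + lr * sum(R[i][j] * H[t][j] for j in range(n)))
        for t in range(k):
            for j in range(n):
                H[t][j] = max(0.0, H[t][j] + lr * sum(R[i][j] * W[i][t] for i in range(m)))
    return W, H

def log_likelihood(X, W, H):
    """Bernoulli log-likelihood of binary X under the sigmoid(WH) model."""
    ll = 0.0
    for i in range(len(X)):
        for j in range(len(X[0])):
            p = sigmoid(sum(W[i][t] * H[t][j] for t in range(len(H))))
            ll += math.log(p) if X[i][j] else math.log(1.0 - p)
    return ll
```

One caveat the sketch makes visible: with purely non-negative factors the modeled probabilities never drop below 0.5, so practical formulations typically add a bias or offset term.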
Evolutionary Sequential Monte Carlo Samplers for Change-Point Models
Directory of Open Access Journals (Sweden)
Arnaud Dufays
2016-03-01
Sequential Monte Carlo (SMC) methods are widely used for non-linear filtering purposes. However, the scope of SMC encompasses wider applications, such as estimating static model parameters, so much so that it is becoming a serious alternative to Markov chain Monte Carlo (MCMC) methods. Not only do SMC algorithms draw posterior distributions of static or dynamic parameters, but additionally they provide an estimate of the marginal likelihood. The tempered and time (TNT) algorithm, developed in this paper, combines (off-line) tempered SMC inference with on-line SMC inference for drawing realizations from many sequential posterior distributions without experiencing a particle degeneracy problem. Furthermore, it introduces a new MCMC rejuvenation step that is generic, automated and well suited for multi-modal distributions. As this update relies on the wide heuristic optimization literature, numerous extensions are readily available. The algorithm is notably appropriate for estimating change-point models. As an example, we compare several change-point GARCH models through their marginal log-likelihoods over time.
Evolving MultiAlgebras unify all usual sequential computation models
Grigorieff, Serge
2010-01-01
It is well-known that Abstract State Machines (ASMs) can simulate "step-by-step" any type of machines (Turing machines, RAMs, etc.). We aim to overcome two facts: 1) simulation is not identification, 2) the ASMs simulating machines of some type do not constitute a natural class among all ASMs. We modify Gurevich's notion of ASM to that of EMA ("Evolving MultiAlgebra") by replacing the program (which is a syntactic object) by a semantic object: a functional which has to be very simply definable over the static part of the ASM. We prove that very natural classes of EMAs correspond via "literal identifications" to slight extensions of the usual machine models and also to grammar models. Though we modify these models, we keep their computation approach: only some contingencies are modified. Thus, EMAs appear as the mathematical model unifying all kinds of sequential computation paradigms.
Enforced Sparse Non-Negative Matrix Factorization
2016-01-23
(Figure 5: top five words of example topics, e.g. "league", "electrons", "album", "jewish", "party"; the remaining figure content is not recoverable.) ...find the NMF topics sequentially by converging one topic at a time. We can do this by considering the NMF using block matrices: $A \approx UV^T = [U_1\, U_2][V_1\, V_2]^T = U_1V_1^T + U_2V_2^T$ (5.8), where $U_1$ and $V_1$ are matrices whose column vectors consist of previously converged NMF topics, and $U_2$ and $V_2$ are single
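The one-topic-at-a-time scheme described in this record can be sketched as follows, assuming standard Lee-Seung multiplicative updates restricted to the newest factor pair. This is an illustrative reading, not the report's implementation:

```python
import random

def nmf_sequential(A, k, inner_iters=300, eps=1e-9):
    """Fit NMF topics one at a time: previously converged columns of U and V
    stay frozen while multiplicative updates converge the newest pair, i.e.
    A ~ U1 V1^T + U2 V2^T with (U1, V1) fixed and only (U2, V2) updated."""
    random.seed(1)
    m, n = len(A), len(A[0])
    U, V = [], []  # U: list of length-m columns, V: list of length-n columns
    for _ in range(k):
        u = [random.random() for _ in range(m)]
        v = [random.random() for _ in range(n)]
        U.append(u)
        V.append(v)
        ahat = lambda i, j: sum(U[t][i] * V[t][j] for t in range(len(U)))
        for _ in range(inner_iters):
            for i in range(m):  # multiplicative update of the newest U column only
                num = sum(A[i][j] * v[j] for j in range(n))
                den = sum(ahat(i, j) * v[j] for j in range(n)) + eps
                u[i] *= num / den
            for j in range(n):  # multiplicative update of the newest V column only
                num = sum(A[i][j] * u[i] for i in range(m))
                den = sum(ahat(i, j) * u[i] for i in range(m)) + eps
                v[j] *= num / den
    return U, V

def frob_err(A, U, V):
    """Squared Frobenius reconstruction error of the factorization."""
    return sum((A[i][j] - sum(U[t][i] * V[t][j] for t in range(len(U)))) ** 2
               for i in range(len(A)) for j in range(len(A[0])))
```

Because each stage only updates the newest pair, earlier topics are never disturbed, which is the point of the block decomposition in equation (5.8).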
A model study of sequential enzyme reactions and electrostatic channeling.
Eun, Changsun; Kekenes-Huskey, Peter M; Metzger, Vincent T; McCammon, J Andrew
2014-03-14
We study models of two sequential enzyme-catalyzed reactions as a basic functional building block for coupled biochemical networks. We investigate the influence of enzyme distributions and long-range molecular interactions on reaction kinetics, which have been exploited in biological systems to maximize metabolic efficiency and signaling effects. Specifically, we examine how the maximal rate of product generation in a series of sequential reactions is dependent on the enzyme distribution and the electrostatic composition of its participant enzymes and substrates. We find that close proximity between enzymes does not guarantee optimal reaction rates, as the benefit of decreasing enzyme separation is countered by the volume excluded by adjacent enzymes. We further quantify the extent to which the electrostatic potential increases the efficiency of transferring substrate between enzymes, which supports the existence of electrostatic channeling in nature. Here, a major finding is that the role of attractive electrostatic interactions in confining intermediate substrates in the vicinity of the enzymes can contribute more to net reactive throughput than the directional properties of the electrostatic fields. These findings shed light on the interplay of long-range interactions and enzyme distributions in coupled enzyme-catalyzed reactions, and their influence on signaling in biological systems.
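The two-reaction chain studied above can be illustrated with a toy Michaelis-Menten model; all rate constants below are made-up values, not the study's parameters, and the spatial and electrostatic effects are omitted:

```python
def sequential_enzymes(S0=10.0, vmax=(2.0, 1.5), km=(1.0, 2.0), dt=0.001, T=20.0):
    """Euler integration of two sequential Michaelis-Menten reactions
    S -> I -> P (substrate, intermediate, product). Parameter values are
    illustrative only."""
    S, I, P = S0, 0.0, 0.0
    for _ in range(int(T / dt)):
        v1 = vmax[0] * S / (km[0] + S)  # enzyme 1: S -> I
        v2 = vmax[1] * I / (km[1] + I)  # enzyme 2: I -> P
        S += -v1 * dt
        I += (v1 - v2) * dt
        P += v2 * dt
    return S, I, P
```

Note that total mass S + I + P is conserved at every Euler step, since the per-step changes sum to zero.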
Wind Noise Reduction using Non-negative Sparse Coding
DEFF Research Database (Denmark)
Schmidt, Mikkel N.; Larsen, Jan; Hsiao, Fu-Tien
2007-01-01
We introduce a new speaker independent method for reducing wind noise in single-channel recordings of noisy speech. The method is based on non-negative sparse coding and relies on a wind noise dictionary which is estimated from an isolated noise recording. We estimate the parameters of the model and discuss their sensitivity. We then compare the algorithm with the classical spectral subtraction method and the Qualcomm-ICSI-OGI noise reduction method. We optimize the sound quality in terms of signal-to-noise ratio and provide results on a noisy speech recognition task.
Sequential logic model deciphers dynamic transcriptional control of gene expressions.
Directory of Open Access Journals (Sweden)
Zhen Xuan Yeo
BACKGROUND: Cellular signaling involves a sequence of events from ligand binding to membrane receptors through transcription factor activation and the induction of mRNA expression. The transcriptional-regulatory system plays a pivotal role in the control of gene expression. A novel computational approach to the study of gene regulation circuits is presented here. METHODOLOGY: Based on the concept of a finite state machine, which provides a discrete view of gene regulation, a novel sequential logic model (SLM) is developed to decipher control mechanisms of dynamic transcriptional regulation of gene expression. The SLM technique is also used to systematically analyze the dynamic function of transcriptional inputs and the dependency and cooperativity, such as synergy effects, among the binding sites with respect to when, how much and how fast the gene of interest is expressed. PRINCIPAL FINDINGS: SLM is verified on a set of well-studied expression data on endo16 of Strongylocentrotus purpuratus (sea urchin) during embryonic midgut development. A dynamic regulatory mechanism for endo16 expression controlled by three binding sites, UI, R and Otx, is identified and demonstrated to be consistent with experimental findings. Furthermore, we show that during the transition from specification to differentiation in the wild-type endo16 expression profile, SLM reveals that three binary activities are not sufficient to explain the transcriptional regulation of endo16 expression and that additional activities of binding sites are required. Further analyses suggest a detailed mechanism of R switch activity in which an indirect dependency occurs between UI activity and the R switch during the specification-to-differentiation stage. CONCLUSIONS/SIGNIFICANCE: The sequential logic formalism allows for a simplification of regulation network dynamics, going from a continuous to a discrete representation of gene activation in time. In effect, our SLM is non-parametric and model-independent, yet
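A sequential logic model in the finite-state-machine sense can be sketched as follows. The states, inputs and transition table are hypothetical stand-ins, not the endo16 model from the paper:

```python
# States: 'off', 'basal', 'boosted'. Inputs: binary activities of three
# binding sites (UI, R, Otx). This transition table is a hypothetical
# illustration of a sequential logic model, NOT the paper's endo16 table.
def step(state, ui, r, otx):
    if r:                    # repressor site active: silence the gene
        return 'off'
    if ui and otx:           # both activators on: boosted expression
        return 'boosted'
    if ui or otx:            # one activator on: basal expression
        return 'basal'
    # no active inputs: expression decays one level per time step; this
    # state-dependence is what makes the model sequential, not combinational
    return {'boosted': 'basal', 'basal': 'off', 'off': 'off'}[state]

def run(inputs, state='off'):
    """Drive the machine with a sequence of (UI, R, Otx) input triples."""
    trace = []
    for ui, r, otx in inputs:
        state = step(state, ui, r, otx)
        trace.append(state)
    return trace
```

The same output can depend on the history of inputs, not just the current input, which is the essential difference between a sequential logic model and a Boolean truth table.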
Popularity Modeling for Mobile Apps: A Sequential Approach.
Zhu, Hengshu; Liu, Chuanren; Ge, Yong; Xiong, Hui; Chen, Enhong
2015-07-01
The popularity information in App stores, such as chart rankings, user ratings, and user reviews, provides an unprecedented opportunity to understand user experiences with mobile Apps, learn the process of adoption of mobile Apps, and thus enables better mobile App services. While the importance of popularity information is well recognized in the literature, the use of the popularity information for mobile App services is still fragmented and under-explored. To this end, in this paper, we propose a sequential approach based on hidden Markov model (HMM) for modeling the popularity information of mobile Apps toward mobile App services. Specifically, we first propose a popularity based HMM (PHMM) to model the sequences of the heterogeneous popularity observations of mobile Apps. Then, we introduce a bipartite based method to precluster the popularity observations. This can help to learn the parameters and initial values of the PHMM efficiently. Furthermore, we demonstrate that the PHMM is a general model and can be applicable for various mobile App services, such as trend based App recommendation, rating and review spam detection, and ranking fraud detection. Finally, we validate our approach on two real-world data sets collected from the Apple Appstore. Experimental results clearly validate both the effectiveness and efficiency of the proposed popularity modeling approach.
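The core HMM computation underlying such popularity models is the forward algorithm, which scores an observation sequence under the model. The sketch below uses toy parameters, not the fitted PHMM of the paper:

```python
def forward(obs, pi, A, B):
    """Forward algorithm: likelihood P(obs | HMM) for an HMM with initial
    distribution pi, transition matrix A and emission matrix B.
    obs is a list of integer observation symbols."""
    n = len(pi)
    # initialization: alpha_1(s) = pi(s) * B(s, o_1)
    alpha = [pi[s] * B[s][obs[0]] for s in range(n)]
    # recursion: alpha_t(t') = sum_s alpha_{t-1}(s) A(s, t') * B(t', o_t)
    for o in obs[1:]:
        alpha = [sum(alpha[s] * A[s][t] for s in range(n)) * B[t][o]
                 for t in range(n)]
    return sum(alpha)
```

A quick sanity check: summing the likelihood over every possible observation sequence of a fixed length must give exactly 1.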
Havinga, Gosse Tjipke; van den Boogaard, Antonius H.; Klaseboer, G.
2013-01-01
Surrogate models are used within the sequential optimization strategy for forming processes. A sequential improvement (SI) scheme is used to refine the surrogate model in the optimal region. One of the popular surrogate modeling methods for SI is Kriging. However, the global response of Kriging mode
Non-negative matrix factorization and term structure of interest rates
Takada, Hellinton H.; Stern, Julio M.
2015-01-01
Non-Negative Matrix Factorization (NNMF) is a technique for dimensionality reduction with a wide variety of applications, from text mining to the identification of concentrations in chemistry. NNMF deals with non-negative data and results in non-negative factors and factor loadings. Consequently, it is a natural choice when studying the term structure of interest rates. In this paper, NNMF is applied to obtain factors from the term structure of interest rates and the procedure is compared with other very popular techniques: principal component analysis and the Nelson-Siegel model. The NNMF approximation for the term structure of interest rates is better in terms of fitting. From a practitioner's point of view, the NNMF factors and factor loadings obtained possess straightforward financial interpretations due to their non-negativeness.
Shifted Non-negative Matrix Factorization
DEFF Research Database (Denmark)
Mørup, Morten; Madsen, Kristoffer Hougaard; Hansen, Lars Kai
2007-01-01
where a shift in onset of frequency profile can be induced by the Doppler effect. However, the model is also relevant for biomedical data analysis where the sources are given by compound intensities over time and the onset of the profiles have different delays to the sensors. A simple algorithm based...
Introducing a Model for Optimal Design of Sequential Objective Structured Clinical Examinations
Mortaz Hejri, Sara; Yazdani, Kamran; Labaf, Ali; Norcini, John J.; Jalili, Mohammad
2016-01-01
In a sequential OSCE, which has been suggested as a way to reduce testing costs, candidates take a short screening test, and those who fail the test are asked to take the full OSCE. In order to introduce an effective and accurate sequential design, we developed a model for designing and evaluating screening OSCEs. Based on two datasets from a 10-station…
Viral contamination during sequential phacoemulsification surgeries in an experimental model
Directory of Open Access Journals (Sweden)
Roberto Pinto Coelho
2012-06-01
PURPOSE: To determine the incidence of Piry virus contamination among surgical instruments used with disposable accessories for phacoemulsification during sequential surgeries. METHODS: An experimental model was created with 4 pigs' eyes that were contaminated with Piry virus and 4 pigs' eyes that were not contaminated. Phacoemulsification was performed on the eyes, alternating between the contaminated and non-contaminated eyes. From one surgery to another, the operating fields, gloves, scalpel, tweezers, needles, syringes, tips and collector bag from the phacoemulsification machine were exchanged; only the hand piece and the irrigation and aspiration systems were maintained. RESULTS: In the collector bag, three samples from the contaminated eyes (3/4) were positive, and two samples from the non-contaminated eyes (2/4) were also positive; at the tip, one sample from the contaminated eyes (1/4) and two samples from the non-contaminated eyes (2/4) yielded positive results. In the irrigation system, one sample from a non-contaminated eye (1/4) was positive, and in the aspiration system, two samples from contaminated eyes (2/4) and two samples from non-contaminated eyes (2/4) were positive. In the gloves, the samples were positive in two samples from the non-contaminated eyes (2/4) and in two samples from the contaminated eyes (2/4). In the scalpel samples, three contaminated eyes (3/4) and none of the non-contaminated eyes (0/4) were positive; finally, two samples from the anterior chambers of the non-contaminated eyes gathered after surgery were positive. CONCLUSIONS: In two non-contaminated eyes, the presence of genetic material was detected after phacoemulsification surgery, demonstrating that transmission of the genetic material of the Piry virus occurred at some point during the surgery on these non-contaminated eyes when the hand piece and irrigation and aspiration systems were reused between surgeries.
A Sequential Model of Host Cell Killing and Phagocytosis by Entamoeba histolytica
Directory of Open Access Journals (Sweden)
Adam Sateriale
2011-01-01
The protozoan parasite Entamoeba histolytica is responsible for invasive intestinal and extraintestinal amebiasis. The virulence of Entamoeba histolytica is strongly correlated with the parasite's capacity to effectively kill and phagocytose host cells. The process by which host cells are killed and phagocytosed follows a sequential model of adherence, cell killing, initiation of phagocytosis, and engulfment. This paper presents recent advances in the cytolytic and phagocytic processes of Entamoeba histolytica in the context of the sequential model.
Reduction of Non-stationary Noise using a Non-negative Latent Variable Decomposition
DEFF Research Database (Denmark)
Schmidt, Mikkel Nørgaard; Larsen, Jan
2008-01-01
We present a method for suppression of non-stationary noise in single-channel recordings of speech. The method is based on a non-negative latent variable decomposition model for the speech and noise signals, learned directly from a noisy mixture. In non-speech regions an overcomplete basis...
NON-NEGATIVE RADIAL SOLUTION FOR AN ELLIPTIC EQUATION
Institute of Scientific and Technical Information of China (English)
Yang Guoying; Guo Zongming
2005-01-01
We study the structure and behavior of non-negative radial solutions of the elliptic equation $\Delta u = u^{v}$, $x \in \mathbb{R}^n$, with $0 < v < 1$. We also obtain the detailed asymptotic expansion of $u$ near infinity.
The probit choice model under sequential search with an application to online retailing
Bronnenberg, Bart; Kim, Jun B.; Albuquerque, P.
2016-01-01
We develop a probit choice model under optimal sequential search and apply it to the study of aggregate demand of consumer durable goods. In our joint model of search and choice, we derive a semi-closed form expression for the probability of choice that obeys the full set of restrictions imposed by
A spherical wave expansion model of sequentially rotated phased arrays with arbitrary elements
DEFF Research Database (Denmark)
Larsen, Niels Vesterdal; Breinbjerg, Olav
2007-01-01
An analytical model of sequentially rotated phased arrays with arbitrary antenna elements is presented. It is applied to different arrays, and the improvements of axial ratio bandwidth and copolar directivity are investigated. It is compared to a numerical Method of Auxiliary Sources model to asce...
Piecewise multivariate modelling of sequential metabolic profiling data
Directory of Open Access Journals (Sweden)
Nicholson Jeremy K
2008-02-01
Background: Modelling the time-related behaviour of biological systems is essential for understanding their dynamic responses to perturbations. In metabolic profiling studies, the sampling rate and number of sampling points are often restricted due to experimental and biological constraints. Results: A supervised multivariate modelling approach is described, with the objective of modelling the time-related variation in the data for short and sparsely sampled time-series. A set of piecewise Orthogonal Projections to Latent Structures (OPLS) models are estimated, describing changes between successive time points. The individual OPLS models are linear, but the piecewise combination of several models accommodates modelling and prediction of changes which are non-linear with respect to the time course. We demonstrate the method on both simulated and metabolic profiling data, illustrating how time-related changes are successfully modelled and predicted. Conclusion: The proposed method is effective for modelling and prediction of short multivariate time-series data. A key advantage of the method is model transparency, allowing easy interpretation of time-related variation in the data. The method provides a competitive complement to commonly applied multivariate methods such as OPLS and Principal Component Analysis (PCA) for modelling and analysis of short time-series data.
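The piecewise idea, one model per successive pair of time points, can be illustrated with a crude stand-in for OPLS: modelling each transition as a mean shift estimated from replicates. Plain averaging replaces the latent-structure projection, so this is only a structural sketch:

```python
def fit_piecewise(series):
    """Given replicate multivariate profiles per time point
    (series[t] = list of replicate vectors at time t), model each successive
    transition t -> t+1 as a mean-shift vector. A crude linear stand-in for
    the piecewise OPLS models described in the paper."""
    def mean(vectors):
        k = len(vectors[0])
        return [sum(v[i] for v in vectors) / len(vectors) for i in range(k)]
    means = [mean(reps) for reps in series]
    shifts = [[b - a for a, b in zip(means[t], means[t + 1])]
              for t in range(len(means) - 1)]
    return means, shifts

def predict_next(x, shifts, t):
    """Predict a profile at time t+1 from a profile x observed at time t,
    by applying the fitted transition model for interval t."""
    return [xi + si for xi, si in zip(x, shifts[t])]
```

Chaining the per-interval models reproduces the paper's key property: each piece is linear, but the composition across intervals can follow a non-linear time course.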
Formal Modeling and Verification of Interlocking Systems Featuring Sequential Release
DEFF Research Database (Denmark)
Vu, Linh Hong; Haxthausen, Anne Elisabeth; Peleska, Jan
2015-01-01
In this paper, we present a method and an associated tool suite for formal verification of the new ETCS level 2 based Danish railway interlocking systems. We have made a generic and reconfigurable model of the system behavior and generic high-level safety properties. This model accommodates seque...
Formal modelling and verification of interlocking systems featuring sequential release
DEFF Research Database (Denmark)
Vu, Linh Hong; Haxthausen, Anne Elisabeth; Peleska, Jan
2016-01-01
checking (BMC) and inductive reasoning, it is verified that the generated model instance satisfies the generated safety properties. Using this method, we are able to verify the safety properties for model instances corresponding to railway networks of industrial size. Experiments show that BMC is also...
Sequential Effects in Essay Ratings: Evidence of Assimilation Effects Using Cross-Classified Models.
Zhao, Haiyan; Andersson, Björn; Guo, Boliang; Xin, Tao
2017-01-01
Writing assessments are an indispensable part of most language competency tests. In our research, we used cross-classified models to study rater effects in the real essay rating process of a large-scale, high-stakes educational examination administered in China in 2011. Four cross-classified models are suggested for the investigation of rater effects, addressing (1) the existence of sequential effects, (2) the direction of the sequential effects, and (3) differences in raters by their individual characteristics. We applied these models to the data to account for possible cluster effects caused by the application of multiple rating strategies. The results of our research showed that raters demonstrated sequential effects during the rating process. In contrast to many other studies on rater effects, our study found that raters exhibited assimilation effects. The more experienced, lenient, and qualified raters were less susceptible to assimilation effects. In addition, our research demonstrated the feasibility and appropriateness of using cross-classified models in assessing rater effects for such data structures. This paper also discusses the implications for educators and practitioners who are interested in reducing sequential effects in the rating process, and suggests directions for future research.
Cuevas Rivera, Dario; Bitzer, Sebastian; Kiebel, Stefan J.
2015-01-01
The olfactory information that is received by the insect brain is encoded in the form of spatiotemporal patterns in the projection neurons of the antennal lobe. These dense and overlapping patterns are transformed into a sparse code in Kenyon cells in the mushroom body. Although it is clear that this sparse code is the basis for rapid categorization of odors, it is yet unclear how the sparse code in Kenyon cells is computed and what information it represents. Here we show that this computation can be modeled by sequential firing rate patterns using Lotka-Volterra equations and Bayesian online inference. This new model can be understood as an ‘intelligent coincidence detector’, which robustly and dynamically encodes the presence of specific odor features. We found that the model is able to qualitatively reproduce experimentally observed activity in both the projection neurons and the Kenyon cells. In particular, the model explains mechanistically how sparse activity in the Kenyon cells arises from the dense code in the projection neurons. The odor classification performance of the model proved to be robust against noise and time jitter in the observed input sequences. As in recent experimental results, we found that recognition of an odor happened very early during stimulus presentation in the model. Critically, by using the model, we found surprising but simple computational explanations for several experimental phenomena. PMID:26451888
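The sequential firing-rate dynamics described above can be sketched with generalized Lotka-Volterra equations. The coupling matrix below is an arbitrary asymmetric example, and the Bayesian online-inference layer of the paper is omitted:

```python
def glv_step(x, rho, dt=0.01):
    """One Euler step of generalized Lotka-Volterra dynamics
    dx_i/dt = x_i * (1 - sum_j rho[i][j] * x_j), clipped to stay non-negative."""
    n = len(x)
    return [max(0.0, x[i] + dt * x[i] * (1.0 - sum(rho[i][j] * x[j] for j in range(n))))
            for i in range(n)]

def simulate(x0, rho, steps=2000, dt=0.01):
    """Simulate a firing-rate trajectory. An asymmetric coupling matrix rho
    produces winnerless-competition style activity, in which units dominate
    one after another, i.e. sequential firing rate patterns."""
    xs = [x0]
    for _ in range(steps):
        xs.append(glv_step(xs[-1], rho, dt))
    return xs
```

With a cyclically asymmetric rho (each unit suppressed strongly by exactly one other unit), the trajectory visits the units in sequence rather than settling on a single winner.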
Hierarchical subtask discovery with non-negative matrix factorization
CSIR Research Space (South Africa)
Earle, AC
2017-08-01
Representations of non-negative polynomials via critical ideals
Hiep, Dang Tuan
2011-01-01
This paper studies the representations of a non-negative polynomial $f$ on a non-compact semi-algebraic set $K$ modulo its critical ideal. Under the assumptions that the semi-algebraic set $K$ is regular and $f$ satisfies the boundary Hessian conditions (BHC) at each zero of $f$ in $K$, we show that $f$ can be represented as a sum of squares (SOS) of real polynomials modulo its critical ideal if $f\\ge 0$ on $K$. In particular, we focus on the polynomial ring $\\mathbb R[x]$.
A Model for Sequential First Order Phase Transitions Occurring in the Underpotential Deposition of Metals
1991-04-29
A model for the underpotential deposition of... In this application we study the underpotential deposition of Cu on a Au(111) surface in the presence of sulfate ions. The voltammogram of the...
Instability of elliptic equations on compact Riemannian manifolds with non-negative Ricci curvature
Directory of Open Access Journals (Sweden)
Arnaldo S. Nascimento
2010-05-01
We prove the nonexistence of nonconstant local minimizers for a class of functionals, which typically appear in scalar two-phase field models, over smooth N-dimensional Riemannian manifolds without boundary and with non-negative Ricci curvature. Conversely, for a class of surfaces possessing a simple closed geodesic along which the Gauss curvature is negative, we prove the existence of nonconstant local minimizers for the same class of functionals.
Non-negative submodular stochastic probing via stochastic contention resolution schemes
Adamczyk, Marek
2015-01-01
The abstract model of stochastic probing was presented by Gupta and Nagarajan (IPCO'13), and provides a unified view of a number of problems. Adamczyk, Sviridenko, Ward (STACS'14) gave better approximation for matroid environments and linear objectives. At the same time this method was easily extendable to settings, where the objective function was monotone submodular. However, the case of non-negative submodular function could not be handled by previous techniques. In this paper we address t...
A Real Option Model with Uncertain, Sequential Investment and with Time to Build
Directory of Open Access Journals (Sweden)
Marcos Eugênio da Silva
2005-12-01
This article develops a real option model with uncertain and sequential investment and with time to build. The model includes options to enter and to exit the activity and addresses the maximization problem of a company in view of the investment opportunity. The differential equation of the asset is obtained by using dynamic programming and risk-neutral evaluation. In particular, for the construction period, the differential equation is partial and elliptic, which demands the use of numerical methods. The main results of the article are that (i) with uncertain and sequential investment and with time to build, the waiting value, which creates a gap between the investment decision rule based on NPV and that based on a real option model, may not be very significant, and (ii) an increase in uncertainty may anticipate the decision to invest.
DEFF Research Database (Denmark)
Møller, Jesper; Rasmussen, Jakob Gulddahl
2012-01-01
We introduce a flexible spatial point process model for spatial point patterns exhibiting linear structures, without incorporating a latent line process. The model is given by an underlying sequential point process model. Under this model, the points can be of one of three types: a ‘background point’, an ‘independent cluster point’ or a ‘dependent cluster point’. The background and independent cluster points are thought to exhibit ‘complete spatial randomness’, whereas the dependent cluster points are likely to occur close to previous cluster points. We demonstrate the flexibility of the model...
Single-channel source separation using non-negative matrix factorization
DEFF Research Database (Denmark)
Schmidt, Mikkel Nørgaard
Single-channel source separation problems occur when a number of sources emit signals that are mixed and recorded by a single sensor, and we are interested in estimating the original source signals based on the recorded mixture. This problem, which occurs in many sciences, is inherently under-determined and its solution relies on making appropriate assumptions concerning the sources. This dissertation is concerned with model-based probabilistic single-channel source separation based on non-negative matrix factorization, and consists of two parts: i) three introductory chapters and ii) five published papers, in which a number of methods for single-channel source separation based on non-negative matrix factorization are presented. In the papers, the methods are applied to separating audio signals such as speech and musical instruments and separating different types of tissue in chemical shift imaging.
Directory of Open Access Journals (Sweden)
S. J. Noh
2011-10-01
Data assimilation techniques have received growing attention due to their capability to improve prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", are a Bayesian learning process that has the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach to consider different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. The regularization with an additional move step based on Markov chain Monte Carlo (MCMC) methods is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, water and energy transfer processes (WEP), is implemented for the sequential data assimilation through the updating of state variables. The lagged regularized particle filter (LRPF) and the sequential importance resampling (SIR) particle filter are implemented for hindcasting of streamflow at the Katsura catchment, Japan. Control state variables for filtering are soil moisture content and overland flow. Streamflow measurements are used for data assimilation. LRPF shows consistent forecasts regardless of the process noise assumption, while SIR has different values of optimal process noise and shows sensitive variation of confidence intervals, depending on the process noise. Improvement of LRPF forecasts compared to SIR is particularly found for rapidly varied high flows, due to preservation of sample diversity from the kernel, even if particle impoverishment takes place.
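A minimal SIR particle filter, the baseline the paper compares against, might look like the following for a toy one-dimensional random-walk state; the hydrologic model, lagging and regularization steps are omitted:

```python
import math
import random

def sir_filter(observations, n_particles=500, proc_std=0.5, obs_std=1.0, seed=0):
    """Minimal sequential importance resampling (SIR) particle filter for a
    1-D random-walk state observed with Gaussian noise. A toy stand-in for
    the hydrologic state variables discussed in the paper."""
    rng = random.Random(seed)
    particles = [rng.gauss(observations[0], obs_std) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # propagate particles through the process model (random walk)
        particles = [p + rng.gauss(0.0, proc_std) for p in particles]
        # weight by the Gaussian observation likelihood
        weights = [math.exp(-0.5 * ((y - p) / obs_std) ** 2) for p in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        # posterior mean estimate, then multinomial resampling
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates
```

The resampling step is exactly what causes the particle impoverishment mentioned in the abstract: identical particles are duplicated, and diversity must be restored by a regularization or move step.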
Automated Discovery and Modeling of Sequential Patterns Preceding Events of Interest
Rohloff, Kurt
2010-01-01
The integration of emerging data manipulation technologies has enabled a paradigm shift in practitioners' abilities to understand and anticipate events of interest in complex systems. Example events of interest include outbreaks of socio-political violence in nation-states. Rather than relying on human-centric modeling efforts that are limited by the availability of subject matter experts (SMEs), automated data processing technologies have enabled the development of innovative automated complex system modeling and predictive analysis technologies. We introduce one such emerging modeling technology: the sequential pattern methodology. We have applied the sequential pattern methodology to automatically identify patterns of observed behavior that precede outbreaks of socio-political violence such as riots, rebellions and coups in nation-states. The sequential pattern methodology is a groundbreaking approach to automated complex system model discovery because it generates easily interpretable patterns based on direct observations of sampled factor data, for a deeper understanding of societal behaviors, that is tolerant of observation noise and missing data. The discovered patterns are simple to interpret and mimic human identification of observed trends in temporal data. Discovered patterns also provide an automated forecasting ability: we discuss an example of using discovered patterns coupled with a rich data environment to forecast various types of socio-political violence in nation-states.
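A toy version of the core matching step, checking whether an ordered pattern of observations precedes an event within a bounded window, can be sketched as follows; the symbols and rules are illustrative, not the paper's methodology:

```python
def is_subsequence(pattern, window):
    """True if `pattern` occurs in order (not necessarily contiguously) in `window`."""
    it = iter(window)
    return all(sym in it for sym in pattern)  # membership test consumes the iterator

def pattern_precedes_event(stream, pattern, event, horizon):
    """Return positions of `event` in `stream` that are preceded, within the
    last `horizon` observations, by `pattern` as an ordered subsequence.
    A toy illustration of sequential-pattern forecasting."""
    hits = []
    for i, sym in enumerate(stream):
        if sym == event and is_subsequence(pattern, stream[max(0, i - horizon):i]):
            hits.append(i)
    return hits
```

Because subsequence matching ignores the symbols between pattern elements, the check is tolerant of observation noise and missing data, the property the abstract emphasizes.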
A SEQUENTIAL MODEL OF INNOVATION STRATEGY—COMPANY NON-FINANCIAL PERFORMANCE LINKS
Wakhid Slamet Ciptono
2006-01-01
This study extends the prior research (Zahra and Das 1993) by examining the association between a company’s innovation strategy and its non-financial performance in the upstream and downstream strategic business units (SBUs) of oil and gas companies. The sequential model suggests a causal sequence among six dimensions of innovation strategy (leadership orientation, process innovation, product/service innovation, external innovation source, internal innovation source, and investment) that may ...
Indian Academy of Sciences (India)
S Jayanthy; M C Bhuvaneswari
2015-02-01
In this paper, a fuzzy delay model based crosstalk delay fault simulator is proposed. As design trends move towards nanometer technologies, a growing number of new parameters affect the delay of a component. Fuzzy delay models are ideal for modelling the uncertainty found in the design and manufacturing steps. The fault simulator based on fuzzy delay detects unstable states, oscillations and non-confluence of settling states in asynchronous sequential circuits. The fuzzy delay model based fault simulator is used to validate the test patterns produced by an Elitist Non-dominated sorting Genetic Algorithm (ENGA) based test generator for detecting crosstalk delay faults in asynchronous sequential circuits. The multi-objective genetic algorithm ENGA targets two objectives: maximizing fault coverage and minimizing the number of transitions. Experimental results are tabulated for SIS benchmark circuits for three gate delay models, namely the unit delay model, the rise/fall delay model and the fuzzy delay model. Experimental results indicate that test validation using the fuzzy delay model is more accurate than with the unit delay model and the rise/fall delay model.
A sequential approach to calibrate ecosystem models with multiple time series data
Oliveros-Ramos, Ricardo; Verley, Philippe; Echevin, Vincent; Shin, Yunne-Jai
2017-02-01
When models are aimed to support decision-making, their credibility is essential to consider. Model fit to observed data is one major criterion for assessing such credibility. However, because the complexity of ecosystem models makes their calibration challenging, the scientific community has given more attention to the exploration of model behavior than to a rigorous comparison with observations. This work highlights some issues related to the comparison of complex ecosystem models with data and proposes a methodology for a sequential multi-phase calibration (or parameter estimation) of ecosystem models. We first propose two criteria to classify the parameters of a model: the model dependency and the time variability of the parameters. Then, these criteria and the availability of approximate initial estimates are used as decision rules to determine which parameters need to be estimated, and their precedence order in the sequential calibration process. The end-to-end (E2E) ecosystem model ROMS-PISCES-OSMOSE applied to the Northern Humboldt Current Ecosystem is used as an illustrative case study. The model is calibrated using an evolutionary algorithm and a likelihood approach to fit time series data of landings, abundance indices and catch-at-length distributions from 1992 to 2008. Testing different calibration schemes regarding the number of phases, the precedence of the parameters' estimation, and the consideration of time-varying parameters, the results show that the multiple-phase calibration conducted under our criteria improved the model fit.
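The multi-phase, precedence-ordered idea can be caricatured on a toy linear model: the parameter estimated in phase 1 is calibrated while later-phase parameters are held at rough initial estimates, then fixed for phase 2. All names, values, and the grid-search optimizer below are illustrative assumptions, not the evolutionary algorithm of the paper:

```python
def sse(a, b, xs, ys):
    """Sum of squared errors of the toy model y = a*x + b."""
    return sum((y - (a * x + b)) ** 2 for x, y in zip(xs, ys))

def grid_min(f, lo, hi, n=201):
    """Return the grid point in [lo, hi] minimizing f (a crude optimizer)."""
    pts = [lo + (hi - lo) * i / (n - 1) for i in range(n)]
    return min(pts, key=f)

xs = [0, 1, 2, 3, 4]
ys = [1.0, 3.1, 4.9, 7.2, 8.8]   # roughly y = 2x + 1, with noise

# Phase 1: calibrate the slope, intercept held at its rough initial estimate.
a1 = grid_min(lambda a: sse(a, 1.0, xs, ys), 0.0, 4.0)
# Phase 2: calibrate the intercept, slope fixed at the phase-1 estimate.
b1 = grid_min(lambda b: sse(a1, b, xs, ys), -2.0, 2.0)
```

The point of the sequential scheme is that each phase solves a lower-dimensional problem; the paper's criteria decide which parameters belong to which phase.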
Directory of Open Access Journals (Sweden)
Inyong Shin
2016-05-01
In spite of the increasing importance of corporate social responsibility (CSR) and employee job performance, little is still known about the links between the socially responsible actions of organizations and the job performance of their members. In order to explain how employees' perceptions of CSR influence their job performance, this study first examines the relationships between perceived CSR, organizational identification, job satisfaction, and job performance, and then develops a sequential mediation model by fully integrating these links. The results of structural equation modeling analyses conducted for 250 employees at hotels in South Korea offered strong support for the proposed model. We found that perceived CSR was indirectly and positively associated with job performance, sequentially mediated first through organizational identification and then job satisfaction. This study theoretically contributes to the CSR literature by revealing the sequential mechanism through which employees' perceptions of CSR affect their job performance, and offers practical implications by stressing the importance of employees' perceptions of CSR. Limitations of this study and future research directions are discussed.
Devaluation and sequential decisions: linking goal-directed and model-based behaviour
Directory of Open Access Journals (Sweden)
Eva eFriedel
2014-08-01
In experimental psychology different experiments have been developed to assess goal-directed as compared to habitual control over instrumental decisions. Similar to animal studies, selective devaluation procedures have been used. More recently, sequential decision-making tasks have been designed to assess the degree of goal-directed versus habitual choice behavior in terms of an influential computational theory of model-based compared to model-free behavioral control. As recently suggested, different measurements are thought to reflect the same construct. Yet, there has been no attempt to directly assess the construct validity of these different measurements. In the present study, we used a devaluation paradigm and a sequential decision-making task to address this question of construct validity in a sample of 18 healthy male human participants. Correlational analysis revealed a positive association between model-based choices during sequential decisions and goal-directed behavior after devaluation, suggesting a single framework underlying both operationalizations and speaking in favour of the construct validity of both measurement approaches. Up to now, this has been merely assumed but never directly tested in humans.
Devaluation and sequential decisions: linking goal-directed and model-based behavior.
Friedel, Eva; Koch, Stefan P; Wendt, Jean; Heinz, Andreas; Deserno, Lorenz; Schlagenhauf, Florian
2014-01-01
In experimental psychology different experiments have been developed to assess goal-directed as compared to habitual control over instrumental decisions. Similar to animal studies selective devaluation procedures have been used. More recently sequential decision-making tasks have been designed to assess the degree of goal-directed vs. habitual choice behavior in terms of an influential computational theory of model-based compared to model-free behavioral control. As recently suggested, different measurements are thought to reflect the same construct. Yet, there has been no attempt to directly assess the construct validity of these different measurements. In the present study, we used a devaluation paradigm and a sequential decision-making task to address this question of construct validity in a sample of 18 healthy male human participants. Correlational analysis revealed a positive association between model-based choices during sequential decisions and goal-directed behavior after devaluation suggesting a single framework underlying both operationalizations and speaking in favor of construct validity of both measurement approaches. Up to now, this has been merely assumed but never been directly tested in humans.
On the equivalence between standard and sequentially ordered hidden Markov models
Chopin, Nicolas
2012-01-01
Chopin (2007) introduced a sequentially ordered hidden Markov model, for which states are ordered according to their order of appearance, and claimed that such a model is a re-parametrisation of a standard Markov model. This note gives a formal proof that this equivalence holds in Bayesian terms, as both formulations generate equivalent posterior distributions, but does not hold in Frequentist terms, as both formulations generate incompatible likelihood functions. Perhaps surprisingly, this shows that Bayesian re-parametrisation and Frequentist re-parametrisation are not identical concepts.
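Both parameterizations assign probabilities to observation sequences via the standard forward recursion, which is the likelihood object at stake in the note. A minimal sketch for a discrete-emission HMM (all parameter values below are illustrative):

```python
import math
from itertools import product

def forward_loglik(obs, pi, A, B):
    """Log-likelihood of a discrete observation sequence under an HMM with
    initial probs pi, transition matrix A and emission matrix B, computed by
    the forward recursion (no scaling; fine for short sequences)."""
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return math.log(sum(alpha))

# Illustrative two-state, two-symbol HMM.
pi = [0.6, 0.4]
A = [[0.7, 0.3], [0.4, 0.6]]
B = [[0.9, 0.1], [0.2, 0.8]]
ll = forward_loglik([0, 1, 0], pi, A, B)

# Sanity check: probabilities of all length-3 observation sequences sum to 1.
total = sum(math.exp(forward_loglik(list(o), pi, A, B))
            for o in product([0, 1], repeat=3))
```

The note's point is about how relabeling states by order of appearance reparametrizes this function: posteriors agree, but the induced likelihood surfaces over parameters do not.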
Recchia, Gabriel; Sahlgren, Magnus; Kanerva, Pentti; Jones, Michael N
2015-01-01
Circular convolution and random permutation have each been proposed as neurally plausible binding operators capable of encoding sequential information in semantic memory. We perform several controlled comparisons of circular convolution and random permutation as means of encoding paired associates as well as encoding sequential information. Random permutations outperformed convolution with respect to the number of paired associates that can be reliably stored in a single memory trace. Performance was equal on semantic tasks when using a small corpus, but random permutations were ultimately capable of achieving superior performance due to their higher scalability to large corpora. Finally, "noisy" permutations in which units are mapped to other units arbitrarily (no one-to-one mapping) perform nearly as well as true permutations. These findings increase the neurological plausibility of random permutations and highlight their utility in vector space models of semantics.
Directory of Open Access Journals (Sweden)
Gabriel Recchia
2015-01-01
Circular convolution and random permutation have each been proposed as neurally plausible binding operators capable of encoding sequential information in semantic memory. We perform several controlled comparisons of circular convolution and random permutation as means of encoding paired associates as well as encoding sequential information. Random permutations outperformed convolution with respect to the number of paired associates that can be reliably stored in a single memory trace. Performance was equal on semantic tasks when using a small corpus, but random permutations were ultimately capable of achieving superior performance due to their higher scalability to large corpora. Finally, "noisy" permutations in which units are mapped to other units arbitrarily (no one-to-one mapping) perform nearly as well as true permutations. These findings increase the neurological plausibility of random permutations and highlight their utility in vector space models of semantics.
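Permutation-based binding of ordered items can be sketched with dense random ±1 vectors: position k in the sequence is marked by applying a fixed permutation k+1 times, and retrieval inverts it and takes the nearest vocabulary vector. The dimensionality and vocabulary below are illustrative assumptions:

```python
import random

def rand_vec(rng, d):
    """Dense random +/-1 vector representing one item."""
    return [rng.choice((-1, 1)) for _ in range(d)]

def permute(v, perm):
    return [v[i] for i in perm]

def inv(perm):
    out = [0] * len(perm)
    for i, p in enumerate(perm):
        out[p] = i
    return out

d = 512
rng = random.Random(0)
vocab = {w: rand_vec(rng, d) for w in ("dog", "cat", "fish", "bird")}
perm = list(range(d)); rng.shuffle(perm)

# Encode the ordered pair (dog, cat) in a single trace: position is marked
# by how many times the fixed permutation is applied before superposition.
trace = [a + b for a, b in zip(permute(vocab["dog"], perm),
                               permute(permute(vocab["cat"], perm), perm))]

def decode(trace, position):
    """Undo the positional permutation, then pick the closest item by dot product."""
    v = trace
    for _ in range(position + 1):
        v = permute(v, inv(perm))
    return max(vocab, key=lambda w: sum(x * y for x, y in zip(v, vocab[w])))
```

Because the un-permuted components behave as near-orthogonal noise, the correct item dominates the dot product at each position.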
Directory of Open Access Journals (Sweden)
S. J. Noh
2011-04-01
Applications of data assimilation techniques have been widely used to improve hydrologic prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", provide the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach to consider the different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. Regularization with an additional move step based on Markov chain Monte Carlo (MCMC) is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, WEP, is implemented for the sequential data assimilation through the updating of state variables. Particle filtering is parallelized and implemented in a multi-core computing environment via the open message passing interface (MPI). We compare the performance of the particle filters in terms of model efficiency, predictive QQ plots and particle diversity. Improvement of model efficiency and preservation of particle diversity are found for the lagged regularized particle filter.
Multiplicative algorithms for constrained non-negative matrix factorization
Peng, Chengbin
2012-12-01
Non-negative matrix factorization (NMF) provides the advantage of parts-based data representation through additive-only combinations. It has been widely adopted in areas like item recommendation, text mining, data clustering, speech denoising, etc. In this paper, we provide an algorithm that allows the factorization to have linear or approximately linear constraints with respect to each factor. We prove that if the constraint function is linear, algorithms within our multiplicative framework will converge. This theory supports a large variety of equality and inequality constraints, and can facilitate application of NMF to a much larger domain. Taking the recommender system as an example, we demonstrate how a specialized weighted and constrained NMF algorithm can be developed to fit exactly for the problem, and the tests justify that our constraints improve the performance for both weighted and unweighted NMF algorithms under several different metrics. In particular, on the Movielens data with 94% of items, the Constrained NMF improves the recall rate by 3% compared to SVD50 and 45% compared to SVD150, which were reported as the best two in the top-N metric. © 2012 IEEE.
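The unconstrained baseline that such multiplicative frameworks generalize is the classic Lee-Seung update for the Frobenius objective, which preserves non-negativity because every update is a non-negative rescaling. A dependency-free sketch (the matrix, rank, and iteration count are illustrative):

```python
import random

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(r) for r in zip(*A)]

def nmf(V, k, iters=500, seed=0, eps=1e-9):
    """Multiplicative updates for min ||V - WH||_F with W, H >= 0:
       H <- H * (W^T V) / (W^T W H),  W <- W * (V H^T) / (W H H^T)."""
    rng = random.Random(seed)
    n, m = len(V), len(V[0])
    W = [[rng.random() for _ in range(k)] for _ in range(n)]
    H = [[rng.random() for _ in range(m)] for _ in range(k)]
    for _ in range(iters):
        WH, Wt = matmul(W, H), transpose(W)
        num, den = matmul(Wt, V), matmul(Wt, WH)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(m)]
             for i in range(k)]
        WH, Ht = matmul(W, H), transpose(H)
        num, den = matmul(V, Ht), matmul(WH, Ht)
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(k)]
             for i in range(n)]
    return W, H

V = [[1, 2, 0], [2, 4, 0], [0, 0, 3]]   # a small rank-2 non-negative matrix
W, H = nmf(V, 2)
```

The paper's contribution is to fold linear constraints into updates of this multiplicative form while keeping the convergence guarantee; the sketch shows only the unconstrained case.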
Kusev, Petko; Tsaneva-Atanasova, Krasimira; van Schaik, Paul; Chater, Nick
2012-12-01
In a series of experiments, Kusev et al. (Journal of Experimental Psychology: Human Perception and Performance 37:1874-1886, 2011) studied relative-frequency judgments of items drawn from two distinct categories. The experiments showed that the judged frequencies of categories of sequentially encountered stimuli are affected by the properties of the experienced sequences. Specifically, a first-run effect was observed, whereby people overestimated the frequency of a given category when that category was the first repeated category to occur in the sequence. Here, we (1) interpret these findings as reflecting the operation of a judgment heuristic sensitive to sequential patterns, (2) present mathematical definitions of the sequences used in Kusev et al. (Journal of Experimental Psychology: Human Perception and Performance 37:1874-1886, 2011), and (3) present a mathematical formalization of the first-run effect, the judgments-relative-to-patterns model, to account for the judged frequencies of sequentially encountered stimuli. The model parameter w accounts for the effect of the length of the first run on frequency estimates, given the total sequence length. We fitted the model to data from Kusev et al. (Journal of Experimental Psychology: Human Perception and Performance 37:1874-1886, 2011); with increasing values of w, subsequent items in the first run have less influence on judgments. We see the role of the model as essential for advancing knowledge in the psychology of judgments, as well as in other disciplines, such as computer science, cognitive neuroscience, artificial intelligence, and human-computer interaction.
DEFF Research Database (Denmark)
Møller, Jesper; Rasmussen, Jakob Gulddahl
We introduce a flexible spatial point process model for spatial point patterns exhibiting linear structures, without incorporating a latent line process. The model is given by an underlying sequential point process model, i.e. each new point is generated given the previous points. Under this model...... the points can be of one of three types: a ‘background point’, an ‘independent cluster point’, or a ‘dependent cluster point’. The background and independent cluster points are thought to exhibit ‘complete spatial randomness’, while the conditional distribution of a dependent cluster point given the previous...... points is such that the dependent cluster point is likely to occur closely to a previous cluster point. We demonstrate the flexibility of the model for producing point patterns with linear structures, and propose to use the model as the likelihood in a Bayesian setting when analyzing a spatial point...
A sequential threshold cure model for genetic analysis of time-to-event data
DEFF Research Database (Denmark)
Ødegård, J; Madsen, Per; Labouriau, Rodrigo S.
2011-01-01
pathogens, which is a common procedure in aquaculture breeding schemes. A cure model is a survival model accounting for a fraction of nonsusceptible individuals in the population. This study proposes a mixed cure model for time-to-event data, measured as sequential binary records. In a simulation study......In analysis of time-to-event data, classical survival models ignore the presence of potential nonsusceptible (cured) individuals, which, if present, will invalidate the inference procedures. Existence of nonsusceptible individuals is particularly relevant under challenge testing with specific...... survival data were generated through 2 underlying traits: susceptibility and endurance (risk of dying per time-unit), associated with 2 sets of underlying liabilities. Despite considerable phenotypic confounding, the proposed model was largely able to distinguish the 2 traits. Furthermore, if selection...
DEFF Research Database (Denmark)
Ahmad, Amais; Zachariasen, Camilla; Christiansen, Lasse Engbo;
2016-01-01
considered combination treatments. The current study modeled bacterial growth in the intestine of pigs after intramuscular combination treatment (i.e. using two antibiotics simultaneously) and sequential treatments (i.e. alternating between two antibiotics) in order to identify the factors that favor...... the sensitive fraction of the commensal flora.Growth parameters for competing bacterial strains were estimated from the combined in vitro pharmacodynamic effect of two antimicrobials using the relationship between concentration and net bacterial growth rate. Predictions of in vivo bacterial growth were...
A PARALLEL NUMERICAL MODEL OF SOLVING N-S EQUATIONS BY USING SEQUENTIAL REGULARIZATION METHOD
Institute of Scientific and Technical Information of China (English)
Anonymous
2003-01-01
A parallel numerical model was established for solving the Navier-Stokes equations using the Sequential Regularization Method (SRM). The computational domain is decomposed into P sub-domains, in which the difference formulae are obtained from the governing equations. Data are exchanged at the virtual boundaries of the sub-domains during parallel computation. The closed-channel cavity flow was solved by the implicit method, and the driven square cavity flow by the explicit method. The results compared well with those given by Ghia.
A SEQUENTIAL MODEL OF INNOVATION STRATEGY—COMPANY NON-FINANCIAL PERFORMANCE LINKS
Directory of Open Access Journals (Sweden)
Wakhid Slamet Ciptono
2006-05-01
This study extends the prior research (Zahra and Das 1993) by examining the association between a company's innovation strategy and its non-financial performance in the upstream and downstream strategic business units (SBUs) of oil and gas companies. The sequential model suggests a causal sequence among six dimensions of innovation strategy (leadership orientation, process innovation, product/service innovation, external innovation source, internal innovation source, and investment) that may lead to higher company non-financial performance (productivity and operational reliability). The study distributed a questionnaire (by mail, e-mailed web system, and focus group discussion) to three levels of managers (top, middle, and first-line) of 49 oil and gas companies with 140 SBUs in Indonesia. These qualified samples fell into 47 upstream (supply-chain) companies with 132 SBUs and 2 downstream (demand-chain) companies with 8 SBUs. A total of 1,332 usable individual questionnaires were returned and thus qualified for analysis, representing an effective response rate of 50.19 percent. The researcher conducted structural equation modeling (SEM) and hierarchical multiple regression analysis to assess the goodness-of-fit between the research models and the sample data and to test whether innovation strategy mediates the impact of leadership orientation on company non-financial performance. SEM reveals that the models have met goodness-of-fit criteria; thus the interpretation of the sequential models fits with the data. The results of SEM and hierarchical multiple regression: (1) support the importance of innovation strategy as a determinant of company non-financial performance, (2) suggest that the sequential model is appropriate for examining the relationships between six dimensions of innovation strategy and company non-financial performance, and (3) show that the sequential model provides additional insights into the indirect contribution of the individual
Bishara, Anthony J.; Kruschke, John K.; Stout, Julie C.; Bechara, Antoine; McCabe, David P.; Busemeyer, Jerome R.
2010-01-01
The Wisconsin Card Sort Task (WCST) is a commonly used neuropsychological test of executive or frontal lobe functioning. Traditional behavioral measures from the task (e.g., perseverative errors) distinguish healthy controls from clinical populations, but such measures can be difficult to interpret. In an attempt to supplement traditional measures, we developed and tested a family of sequential learning models that allowed for estimation of processes at the individual subject level in the WCST. Testing the model with substance dependent individuals and healthy controls, the model parameters significantly predicted group membership even when controlling for traditional behavioral measures from the task. Substance dependence was associated with a) slower attention shifting following punished trials and b) reduced decision consistency. Results suggest that model parameters may offer both incremental content validity and incremental predictive validity. PMID:20495607
Directory of Open Access Journals (Sweden)
Chen Yidong
2011-10-01
Background Transcriptional regulation by transcription factors (TFs) controls the time and abundance of mRNA transcription. Due to the limitation of current proteomics technologies, large-scale measurements of the protein-level activities of TFs are usually infeasible, making computational reconstruction of transcriptional regulatory networks a difficult task. Results We proposed here a novel Bayesian non-negative factor model for TF-mediated regulatory networks. Particularly, the non-negative TF activities and the sample clustering effect are modeled as factors from a Dirichlet process mixture of rectified Gaussian distributions, and the sparse regulatory coefficients are modeled as loadings from a sparse distribution that constrains its sparsity using knowledge from databases; meanwhile, a Gibbs sampling solution was developed to infer the underlying network structure and the unknown TF activities simultaneously. The developed approach has been applied to a simulated system and breast cancer gene expression data. Results show that the proposed method was able to systematically uncover the TF-mediated transcriptional regulatory network structure, the regulatory coefficients, the TF protein-level activities and the sample clustering effect. The regulation target prediction results are highly consistent with prior knowledge, and the sample clustering results show superior performance over a previous molecular-based clustering method. Conclusions The results demonstrated the validity and effectiveness of the proposed approach in reconstructing transcriptional networks mediated by TFs through simulated systems and real data.
Accardi, Antonio; Barth, Ingo; Kühn, Oliver; Manz, Jörn
2010-10-28
Quantum dynamics simulations of double proton transfer (DPT) in the model porphine, starting from a nonequilibrium initial state, demonstrate that a switch from synchronous (or concerted) to sequential (or stepwise or successive) breaking and making of two bonds is possible. For this proof of principle, we employ the simple model of Smedarchina, Z.; Siebrand, W.; Fernández-Ramos, A. J. Chem. Phys. 2007, 127, 174513, with reasonable definitions of the domains D for the reactant R, the product P, the saddle point SP2 which is crossed during synchronous DPT, and two intermediates I = I(1) + I(2) for two alternative routes of sequential DPT. The wavepacket dynamics is analyzed in terms of various properties, from qualitative conclusions based on the patterns of the densities and flux densities to quantitative results for the time evolutions of the populations or probabilities P(D)(t) of the domains D = R, P, SP2, and I, and the associated net fluxes F(D)(t) as well as the domain-to-domain (DTD) fluxes F(D1,D2) between neighboring domains D1 and D2. Accordingly, the initial synchronous mechanism of the first forward reaction is due to the directions of various momenta, which are imposed on the wavepacket by the L-shaped part of the steep repulsive wall of the potential energy surface (PES), close to the minimum for the reactant. At the same time, these momenta cause initial squeezing followed by rapid dispersion of the representative wavepacket. The switch from the synchronous to the sequential mechanism is called indirect, because it is mediated by two effects: first, the wavepacket dispersion; second, relief reflections of the broadened wavepacket from wide regions of the inverse L-shaped steep repulsive wall of the PES close to the minimum for the product, preferably to the domains I = I(1) + I(2) for the sequential DPT during the first back reaction, and also during the second forward reaction, etc. Our analysis also discovers a variety of minor effects, such as
Modeling and Predicting AD Progression by Regression Analysis of Sequential Clinical Data
Xie, Qing
2016-02-23
Alzheimer's Disease (AD) is currently attracting much attention in elders' care. With the increasing availability of massive clinical diagnosis data, especially medical images of brain scans, it is highly significant to precisely identify and predict potential AD progression based on the knowledge in the diagnosis data. In this paper, we follow a novel sequential learning framework to model the disease progression for AD patients' care. Different from the conventional approaches using only initial or static diagnosis data to model the disease progression for different durations, we design a score-involved approach and make use of the sequential diagnosis information from different disease stages to jointly simulate the disease progression. The actual clinical scores are utilized in the process to make the prediction more pertinent and reliable. We examined our approach by extensive experiments on the clinical data provided by the Alzheimer's Disease Neuroimaging Initiative (ADNI). The results indicate that the proposed approach is more effective in simulating and predicting the disease progression compared with existing methods.
Modeling Search Behaviors during the Acquisition of Expertise in a Sequential Decision-Making Task
Moënne-Loccoz, Cristóbal; Vergara, Rodrigo C.; López, Vladimir; Mery, Domingo; Cosmelli, Diego
2017-01-01
Our daily interaction with the world is full of situations in which we develop expertise through self-motivated repetition of the same task. In many of these interactions, and especially when dealing with computer and machine interfaces, we must deal with sequences of decisions and actions. For instance, when drawing cash from an ATM, choices are presented in a step-by-step fashion and a specific sequence of choices must be performed in order to produce the expected outcome. But, as we become experts in the use of such interfaces, is it possible to identify specific search and learning strategies? And if so, can we use this information to predict future actions? In addition to better understanding the cognitive processes underlying sequential decision making, this could allow building adaptive interfaces that can facilitate interaction at different moments of the learning curve. Here we tackle the question of modeling sequential decision-making behavior in a simple human-computer interface that instantiates a 4-level binary decision tree (BDT) task. We record behavioral data from voluntary participants while they attempt to solve the task. Using a Hidden Markov Model-based approach that capitalizes on the hierarchical structure of behavior, we then model their performance during the interaction. Our results show that partitioning the problem space into a small set of hierarchically related stereotyped strategies can potentially capture a host of individual decision-making policies. This allows us to follow how participants learn and develop expertise in the use of the interface. Moreover, using a Mixture of Experts based on these stereotyped strategies, the model is able to predict the behavior of participants that master the task. PMID:28943847
Non-negative constraint for image-based breathing gating in ultrasound hepatic perfusion data
Wu, Kaizhi; Ding, Mingyue; Chen, Xi; Deng, Wenjie; Zhang, Zhijun
2015-12-01
Images acquired during free breathing using contrast-enhanced ultrasound hepatic perfusion imaging exhibit a periodic motion pattern, which needs to be compensated for if an accurate quantification of hepatic perfusion is to be performed. To reduce the impact of respiratory motion, an image-based breathing gating algorithm was used to compensate for the respiratory motion in contrast-enhanced ultrasound. The algorithm consists of three steps: extracting the respiratory kinetics, determining image subsequences, and registering the image subsequences. The basic requirement of the algorithm is to extract the respiratory kinetics of the ultrasound hepatic perfusion image sequences accurately. In this paper, we treated the kinetics extraction as a non-negative matrix factorization (NMF) problem and extracted the respiratory kinetics of the ultrasound hepatic perfusion image sequences by NMF, using the NMF objective function to extract the kinetics accurately. The method was tested on a simulated phantom and used to analyze 6 liver CEUS hepatic perfusion image sequences. The experimental results show the effectiveness of our proposed method both quantitatively and qualitatively.
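Once a respiratory kinetics signal has been extracted (by NMF in the paper; a plain sine wave stands in below), gating amounts to grouping frames whose kinetics values fall in the same amplitude range, so each subsequence shares an approximately common breathing state. A simplified sketch with illustrative values:

```python
import math

# Synthetic respiratory kinetics, one value per frame (stand-in for the
# NMF-extracted component; 100 frames, 20-frame breathing period).
kinetics = [math.sin(2 * math.pi * t / 20.0) for t in range(100)]

def gate_frames(signal, n_bins=4):
    """Assign each frame index to a respiratory-phase bin by amplitude
    quantile; frames in the same bin are then registered together."""
    order = sorted(range(len(signal)), key=lambda i: signal[i])
    bins = [[] for _ in range(n_bins)]
    for rank, idx in enumerate(order):
        bins[rank * n_bins // len(signal)].append(idx)
    return bins

bins = gate_frames(kinetics)  # bins[-1] holds the frames at peak amplitude
```

Registration within each bin then only has to handle residual motion, since the large periodic component has been factored out by the gating.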
DEFF Research Database (Denmark)
Cordua, Knud Skou; Hansen, Thomas Mejer; Lange, Katrine;
In order to move beyond simplified covariance based a priori models, which are typically used for inverse problems, more complex multiple-point-based a priori models have to be considered. By means of marginal probability distributions ‘learned’ from a training image, sequential simulation has pr...... in order to improve the pattern reproducibility while maintaining the efficiency of the sequential Gibbs sampling strategy. We compare realizations of three types of a priori models. Finally, the results are exemplified through crosshole travel time tomography....
Convergence, Non-negativity and Stability of a New Milstein Scheme with Applications to Finance
Higham, Desmond J; Szpruch, Lukasz
2012-01-01
We propose and analyse a new Milstein type scheme for simulating stochastic differential equations (SDEs) with highly nonlinear coefficients. Our work is motivated by the need to justify multi-level Monte Carlo simulations for mean-reverting financial models with polynomial growth in the diffusion term. We introduce a double implicit Milstein scheme and show that it possesses desirable properties. It converges strongly and preserves non-negativity for a rich family of financial models and can reproduce linear and nonlinear stability behaviour of the underlying SDE without severe restriction on the time step. Although the scheme is implicit, we point out examples of financial models where an explicit formula for the solution to the scheme can be found.
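The paper's double implicit scheme is not reproduced here, but the Milstein family it belongs to can be sketched with the explicit scheme for geometric Brownian motion (all parameters illustrative). For this mild parameter choice the explicit step also happens to keep the path positive, which is the property the implicit scheme guarantees more generally:

```python
import random, math

def milstein_gbm(x0, mu, sigma, T, n, seed=42):
    """Explicit Milstein scheme for geometric Brownian motion
       dX = mu*X dt + sigma*X dW,
    where the extra 0.5*sigma^2*X*(dW^2 - h) term is the Milstein correction."""
    rng = random.Random(seed)
    h = T / n
    x = x0
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(h))
        x = x + mu * x * h + sigma * x * dw + 0.5 * sigma ** 2 * x * (dw * dw - h)
    return x

# A few terminal values under different noise seeds.
paths = [milstein_gbm(1.0, 0.05, 0.2, 1.0, 200, seed=s) for s in range(50)]
```

For GBM each step multiplies x by 0.5 + (mu - 0.5*sigma**2)*h + 0.5*sigma**2*(dw + 1/sigma)**2, which is strictly positive whenever (mu - 0.5*sigma**2)*h > -0.5, as it is here; for strongly nonlinear coefficients this argument fails, which is what motivates the implicit variant.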
PredSTP: a highly accurate SVM based model to predict sequential cystine stabilized peptides.
Islam, S M Ashiqul; Sajed, Tanvir; Kearney, Christopher Michel; Baker, Erich J
2015-07-05
Numerous organisms have evolved a wide range of toxic peptides for self-defense and predation. Their effective interstitial and macro-environmental use requires energetic and structural stability. One successful group of these peptides includes a tri-disulfide domain arrangement that offers toxicity and high stability. Sequential tri-disulfide connectivity variants create highly compact disulfide folds capable of withstanding a variety of environmental stresses. Their combination of toxicity and stability makes these peptides remarkably valuable for their potential as bio-insecticides, antimicrobial peptides and peptide drug candidates. However, the wide sequence variation, sources and modalities of group members impose serious limitations on our ability to rapidly identify potential members. As a result, there is a need for automated high-throughput member classification approaches that leverage their demonstrated tertiary and functional homology. We developed an SVM-based model to predict sequential tri-disulfide peptide (STP) toxins from peptide sequences. One optimized model, called PredSTP, predicted STPs from the training set with a sensitivity, specificity, precision, accuracy and Matthews correlation coefficient of 94.86%, 94.11%, 84.31%, 94.30% and 0.86, respectively, using 200-fold cross-validation. The same model outperforms existing prediction approaches on three independent out-of-sample test sets derived from the PDB. PredSTP can accurately identify a wide range of cystine-stabilized peptide toxins directly from sequences in a species-agnostic fashion. The ability to rapidly filter sequences for potential bioactive peptides can greatly compress the time between peptide identification and testing structural and functional properties for possible antimicrobial and insecticidal candidates. A web interface is freely available to predict STP toxins from http://crick.ecs.baylor.edu/.
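The kind of sequence-level feature such a classifier might consume can be as simple as cysteine counts and spacings. The helper below is a hypothetical illustration of that style of feature extraction, not PredSTP's actual feature set, and the toy sequence is invented, not a real toxin:

```python
def cysteine_spacing(seq):
    """Cysteine positions and inter-cysteine spacings in a peptide sequence:
    a simple, hypothetical feature a classifier could be trained on."""
    pos = [i for i, aa in enumerate(seq) if aa == "C"]
    gaps = [b - a for a, b in zip(pos, pos[1:])]
    return pos, gaps

def could_be_tridisulfide(seq):
    # Necessary (not sufficient) condition: three disulfide bonds need six cysteines.
    return seq.count("C") == 6

toy = "GCAASRCKQSGCLLGKCAGSLCDCRP"  # hypothetical sequence
pos, gaps = cysteine_spacing(toy)
```

A real pipeline would map such features (plus many others) into a kernel space for the SVM; the point here is only that the raw inputs are plain sequences.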
DEFF Research Database (Denmark)
Ahmad, Amais; Zachariasen, Camilla; Christiansen, Lasse Engbo;
2016-01-01
Background: Combination treatment is increasingly used to fight infections caused by bacteria resistant to two or more antimicrobials. While multiple studies have evaluated treatment strategies to minimize the emergence of resistant strains for single antimicrobial treatment, fewer studies have...... generated by a mathematical model of the competitive growth of multiple strains of Escherichia coli.Results: Simulation studies showed that sequential use of tetracycline and ampicillin reduced the level of double resistance, when compared to the combination treatment. The effect of the cycling frequency...... frequency did not play a role in suppressing the growth of resistant strains, but the specific order of the two antimicrobials did. Predictions made from the study could be used to redesign multidrug treatment strategies not only for intramuscular treatment in pigs, but also for other dosing routes....
Sequential box models for indoor air quality: Application to airliner cabin air quality
Ryan, P. Barry; Spengler, John D.; Halfpenny, Paul F.
In this paper we present the development and application of a model for indoor air quality. The model represents a departure from the standard box models typically used for indoor environments, which have applicability in residences and office buildings. The model has been developed for a physical system consisting of sequential compartments which communicate only with adjacent compartments. Each compartment may contain various source and sink terms for a pollutant, as well as leakage and air transfer from adjacent compartments. The mathematical derivation affords rapid calculation of equilibrium concentrations in an essentially unlimited number of compartments. The model has been applied to air quality in the passenger cabin of three commercial aircraft. Simulations have been performed for environmental tobacco smoke (ETS) under two scenarios, CO2 and water vapor. Additionally, concentrations in one aircraft have been simulated under conditions different from the standard configuration. Results of the simulations suggest the potential for elevated concentrations of ETS in smoking sections of non-air-recirculating aircraft and throughout the aircraft when air is recirculated. Concentrations of CO2 and water vapor are consistent with expected results. We conclude that this model may be a useful tool in understanding indoor air quality in general and on aircraft in particular.
The combination of satellite observation techniques for sequential ionosphere VTEC modeling
Erdogan, Eren; Limberger, Marco; Schmidt, Michael; Seitz, Florian; Dettmering, Denise; Börger, Klaus; Brandert, Sylvia; Görres, Barbara; Kersten, Wilhelm F.; Bothmer, Volker; Hinrichs, Johannes; Venzmer, Malte; Mrotzek, Niclas
2016-04-01
The project OPTIMAP is a joint initiative by the Bundeswehr GeoInformation Centre (BGIC), the German Space Situational Awareness Centre (GSSAC), the German Geodetic Research Institute of the Technical University of Munich (DGFI-TUM) and the Institute for Astrophysics at the University of Göttingen (IAG). The main goal is to develop an operational tool for ionospheric mapping and prediction (OPTIMAP). A key feature of the project is the combination of different satellite observation techniques to improve the spatio-temporal data coverage and the sensitivity for selected target parameters. In the current status, information about the vertical total electron content (VTEC) is derived from the dual frequency signal processing of four techniques: (1) Terrestrial observations of GPS and GLONASS ensure the high-resolution coverage of continental regions, (2) the satellite altimetry mission Jason-2 is taken into account to provide VTEC in nadir direction along the satellite tracks over the oceans, (3) GPS radio occultations to Formosat-3/COSMIC are exploited for the retrieval of electron density profiles that are integrated to obtain VTEC and (4) Jason-2 carrier-phase observations tracked by the on-board DORIS receiver are processed to determine the relative VTEC. All measurements are sequentially pre-processed in hourly batches serving as input data of a Kalman filter (KF) for modeling the global VTEC distribution. The KF runs in a predictor-corrector mode allowing for the sequential processing of the measurements where update steps are performed with one-minute sampling in the current configuration. The spatial VTEC distribution is represented by B-spline series expansions, i.e., the corresponding B-spline series coefficients together with additional technique-dependent unknowns such as Differential Code Biases and Intersystem Biases are estimated by the KF. As a preliminary solution, the prediction model to propagate the filter state through time is defined by a random
Song Recommendation with Non-Negative Matrix Factorization and Graph Total Variation
Benzi, Kirell; Bresson, Xavier; Vandergheynst, Pierre
2016-01-01
This work formulates a novel song recommender system as a matrix completion problem that benefits from collaborative filtering through Non-negative Matrix Factorization (NMF) and content-based filtering via total variation (TV) on graphs. The graphs encode both playlist proximity information and song similarity, using a rich combination of audio, meta-data and social features. As we demonstrate, our hybrid recommendation system is very versatile and incorporates several well-known methods while outperforming them. In particular, we show on real-world data that, with respect to two evaluation metrics, our model outperforms recommendations from models based solely on low-rank information, graph-based information, or a combination of both.
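The NMF building block of such a recommender can be sketched with the classic Lee-Seung multiplicative updates; this is a generic illustration of minimizing ||V - WH||_F under non-negativity, not the paper's graph-regularized hybrid model:

```python
import numpy as np

def nmf(V, rank, iters=200, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates minimizing the Frobenius norm
    ||V - WH||_F^2 with W, H kept element-wise non-negative."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H, W fixed
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W, H fixed
    return W, H

# Toy non-negative "ratings" matrix, factorized at rank 3
V = np.random.default_rng(1).random((8, 6))
W, H = nmf(V, rank=3)
```

In a completion setting, the missing entries of V would be masked out of the updates; the multiplicative form guarantees non-negativity as long as the initialization is non-negative.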
Dynamic Modelling of Aquifer Level Using Space-Time Kriging and Sequential Gaussian Simulation
Varouchakis, Emmanouil A.; Hristopulos, Dionisis T.
2016-04-01
Geostatistical models are widely used in water resources management projects to represent and predict the spatial variability of aquifer levels. In addition, they can be applied as surrogates for numerical hydrological models if the hydrogeological data needed to calibrate the latter are not available. For space-time data, spatiotemporal geostatistical approaches can model the aquifer level variability by incorporating complex space-time correlations. A major advantage of such models is that they can improve the reliability of predictions compared to purely spatial or temporal models in areas with limited spatial and temporal data availability. The identification and incorporation of a spatiotemporal trend model can further increase the accuracy of groundwater level predictions. Our goal is to derive a geostatistical model of dynamic aquifer level changes in a sparsely gauged basin on the island of Crete (Greece). The available data consist of bi-annual (dry and wet hydrological period) groundwater level measurements at 11 monitoring locations for the time period 1981 to 2010. We identify a spatiotemporal trend function that follows the overall drop of the aquifer level over the study period. The correlation of the residuals is modeled using a non-separable space-time variogram function based on the Spartan covariance family. The space-time Residual Kriging (STRK) method is then applied to combine the estimated trend and the residuals into dynamic predictions of groundwater level. Sequential Gaussian Simulation is also employed to determine the uncertainty of the spatiotemporal model (trend and covariance) parameters. This stochastic modelling approach produces multiple realizations, ranks the prediction results on the basis of specified criteria, and captures the range of the uncertainty. The model projections suggest that by 2032 part of the basin will be under serious threat, as the aquifer level will approach the sea level boundary.
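The sequential Gaussian simulation step can be illustrated in one dimension with simple kriging: visit the sites in random order, krige each from the already-simulated values, and draw from the resulting conditional normal. This is a toy sketch assuming a zero-mean field and an exponential covariance, not the Spartan covariance model used in the study:

```python
import numpy as np

def sgs_1d(xs, cov, seed=0):
    """Toy 1-D sequential Gaussian simulation with simple kriging (mean 0)."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(xs))     # random simulation path
    vals = np.empty(len(xs))
    done = []                            # indices already simulated
    for i in order:
        if done:
            # Simple kriging weights from the covariance of known points
            C = np.array([[cov(abs(xs[a] - xs[b])) for b in done] for a in done])
            c = np.array([cov(abs(xs[i] - xs[b])) for b in done])
            w = np.linalg.solve(C, c)
            mean = w @ vals[done]
            var = max(cov(0.0) - w @ c, 1e-12)
        else:
            mean, var = 0.0, cov(0.0)    # unconditional draw for first site
        vals[i] = rng.normal(mean, np.sqrt(var))
        done.append(i)
    return vals

cov = lambda h: np.exp(-h / 10.0)        # exponential covariance, range ~10
z = sgs_1d(np.arange(20.0), cov)
```

Repeating this with different seeds yields the multiple realizations from which prediction uncertainty is quantified.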
DEFF Research Database (Denmark)
Cordua, Knud Skou; Hansen, Thomas Mejer; Lange, Katrine;
proven to be an efficient way of obtaining multiple realizations that honor the same multiple-point statistics as the training image. The frequency matching method provides an alternative way of formulating multiple-point-based a priori models. In this strategy the pattern frequency distributions (i.......e. marginals) of the training image and a subsurface model are matched in order to obtain a solution with the same multiple-point statistics as the training image. Sequential Gibbs sampling is a simulation strategy that provides an efficient way of applying sequential simulation based algorithms as a priori...... information in probabilistic inverse problems. Unfortunately, when this strategy is applied with the multiple-point-based simulation algorithm SNESIM the reproducibility of training image patterns is violated. In this study we suggest to combine sequential simulation with the frequency matching method...
Institute of Scientific and Technical Information of China (English)
王晖; 刘大有; et al.
1994-01-01
In this paper we consider the problem of sequential processing and present a sequential model based on the back-propagation algorithm. This model is intended to deal with intrinsically sequential problems, such as word recognition, speech recognition, and natural language understanding. It can be used to train a network to learn a sequence of input patterns, in a fixed or random order. Besides, the model is open- and partial-associative, characterized as "recognizing while accumulating", which, as we argue, is oriented toward the mental cognition process.
Institute of Scientific and Technical Information of China (English)
郭恒昌
1991-01-01
Sequential diagnosis is a very useful strategy for system-level fault identification because of its lower hardware cost. In this paper, a characterization of sequentially t-diagnosable systems is given, and a universal algorithm to seek faulty units in the system is developed.
SPRINT: A Tool to Generate Concurrent Transaction-Level Models from Sequential Code
Directory of Open Access Journals (Sweden)
Cockx Johan
2007-01-01
A high-level concurrent model such as a SystemC transaction-level model can provide early feedback during the exploration of implementation alternatives for state-of-the-art signal processing applications like video codecs on a multiprocessor platform. However, the creation of such a model starting from sequential code is a time-consuming and error-prone task. It is typically done only once, if at all, for a given design. This lack of exploration of the design space often leads to a suboptimal implementation. To support our systematic C-based design flow, we have developed a tool to generate a concurrent SystemC transaction-level model for user-selected task boundaries. Using this tool, different parallelization alternatives have been evaluated during the design of an MPEG-4 simple profile encoder and an embedded zero-tree coder. Generation plus evaluation of an alternative was possible in less than six minutes. This is fast enough to allow extensive exploration of the design space.
A reduced order model based on Kalman filtering for sequential data assimilation of turbulent flows
Meldi, M.; Poux, A.
2017-10-01
A Kalman filter based sequential estimator is presented in this work. The estimator is integrated in the structure of segregated solvers for the analysis of incompressible flows. This technique provides an augmented flow state integrating available observations into the CFD model, naturally preserving a zero-divergence condition for the velocity field. Because of the prohibitive costs associated with a complete Kalman Filter application, two model reduction strategies have been proposed and assessed. These strategies dramatically reduce the additional computational cost of the model, which can be quantified as an increase of 10-15% with respect to the classical numerical simulation. In addition, an extended analysis of the behavior of the numerical model covariance Q has been performed. Optimized values are strongly linked to the truncation error of the discretization procedure. The estimator has been applied to the analysis of a number of test cases exhibiting increasing complexity, including turbulent flow configurations. The results show that the augmented flow successfully improves the prediction of the physical quantities investigated, even when the observation is provided in a limited region of the physical domain. In addition, the present work suggests that these Data Assimilation techniques, which are at an embryonic stage of development in CFD, may have the potential to be pushed even further using the augmented prediction as a powerful tool for the optimization of the free parameters in the numerical simulation.
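The corrector step of any such Kalman-filter-based estimator follows the standard linear-Gaussian update equations; a minimal sketch (generic, not the authors' reduced-order scheme for segregated solvers):

```python
import numpy as np

def kalman_update(x, P, z, H, R):
    """One Kalman corrector step: blend forecast state x (covariance P)
    with observation z (operator H, observation covariance R)."""
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x_new = x + K @ (z - H @ x)           # state update toward observation
    P_new = (np.eye(len(x)) - K @ H) @ P  # reduced posterior covariance
    return x_new, P_new
```

With a scalar state, forecast x = 0 and unit variances, the gain is 0.5 and the analysis lands halfway between forecast and observation; the reduced-order strategies in the paper amount to approximating P and Q so this update stays affordable inside a CFD loop.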
A Rough Sets Partitioning Model for Mining Sequential Patterns with Time Constraint
Bisaria, Jigyasa; Pardasani, K R
2009-01-01
Nowadays, data mining and knowledge discovery methods are applied to a variety of enterprise and engineering disciplines to uncover interesting patterns from databases. The study of sequential patterns is an important data mining problem due to its wide applications to real-world time-dependent databases. Sequential patterns are inter-event patterns ordered over a time period and associated with specific objects under study. Analysis and discovery of frequent sequential patterns over a predetermined time period are interesting data mining results and can aid decision support in many enterprise applications. The problem of sequential pattern mining poses computational challenges, as a long frequent sequence contains an enormous number of frequent subsequences. Useful results also depend on the right choice of event window. In this paper, we have studied the problem of sequential pattern mining through two perspectives: one is the computational aspect of the problem and the other is incorporation and adjustability o...
Modeling of Non-Coherent Sequential Acquisition Process for DS/SS Signals
Institute of Scientific and Technical Information of China (English)
李艳; 马雨出; 张中兆
2003-01-01
A modified non-coherent sequential detection decision logic based on continuous accumulation to achieve fast PN code acquisition is proposed. To simplify the design and analysis, the equivalent relationship between the likelihood ratio of the current sample and that of all the previous samples is deduced. The scheme is proved to be an optimum sequential detection under certain assumptions. Because the average sample number (ASN) cannot be calculated through the methods applied to the conventional sequential detection, an algorithm is also provided which can estimate both the probability density function (pdf) and the upper threshold of the ASN. The desired probabilities of false alarm and detection, as well as faster PN code acquisition compared to the conventional sequential detection, can be achieved by employing this structure. In addition, the Rayleigh-faded reception case is also taken into consideration. Performances of the proposed schemes are obtained, which suggest that the proposed non-coherent sequential detection is more desirable.
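The accumulate-and-threshold logic is in the spirit of Wald's sequential probability ratio test; a textbook SPRT sketch for a Gaussian mean shift (illustrative only, not the paper's modified non-coherent detector):

```python
import math

def sprt(samples, mu0, mu1, sigma, alpha=0.01, beta=0.01):
    """Wald's SPRT for H1: mean = mu1 vs H0: mean = mu0, Gaussian noise.
    Accumulates the log-likelihood ratio until a threshold is crossed."""
    A = math.log((1 - beta) / alpha)   # upper threshold: accept H1
    B = math.log(beta / (1 - alpha))   # lower threshold: accept H0
    llr = 0.0
    for n, x in enumerate(samples, 1):
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma**2
        if llr >= A:
            return "H1", n             # signal present: code aligned
        if llr <= B:
            return "H0", n             # reject cell, slide the code phase
    return "undecided", len(samples)
```

The average sample number is what the paper's ASN-estimation algorithm targets: how many accumulations the test needs, on average, before crossing a threshold.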
Khaki, M.
2017-07-06
The time-variable terrestrial water storage (TWS) products from the Gravity Recovery And Climate Experiment (GRACE) have been increasingly used in recent years to improve the simulation of hydrological models by applying data assimilation techniques. In this study, for the first time, we assess the performance of the most popular data assimilation sequential techniques for integrating GRACE TWS into the World-Wide Water Resources Assessment (W3RA) model. We implement and test stochastic and deterministic ensemble-based Kalman filters (EnKF), as well as Particle filters (PF) using two different resampling approaches of Multinomial Resampling and Systematic Resampling. These choices provide various opportunities for weighting observations and model simulations during the assimilation and also accounting for error distributions. Particularly, the deterministic EnKF is tested to avoid perturbing observations before assimilation (that is the case in an ordinary EnKF). Gaussian-based random updates in the EnKF approaches likely do not fully represent the statistical properties of the model simulations and TWS observations. Therefore, the fully non-Gaussian PF is also applied to estimate more realistic updates. Monthly GRACE TWS are assimilated into W3RA covering the entire Australia. To evaluate the filters performances and analyze their impact on model simulations, their estimates are validated by independent in-situ measurements. Our results indicate that all implemented filters improve the estimation of water storage simulations of W3RA. The best results are obtained using two versions of deterministic EnKF, i.e. the Square Root Analysis (SQRA) scheme and the Ensemble Square Root Filter (EnSRF), respectively improving the model groundwater estimations errors by 34% and 31% compared to a model run without assimilation. Applying the PF along with Systematic Resampling successfully decreases the model estimation error by 23%.
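The stochastic (perturbed-observation) EnKF analysis step that such deterministic variants avoid can be sketched as follows; a generic single-state illustration, not the W3RA/GRACE configuration:

```python
import numpy as np

def enkf_analysis(E, y, H, R, rng):
    """Stochastic EnKF analysis: update each ensemble member (columns of E)
    with an independently perturbed copy of the observation vector y."""
    N = E.shape[1]
    A = E - E.mean(axis=1, keepdims=True)              # ensemble anomalies
    Pf = A @ A.T / (N - 1)                             # forecast covariance
    K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)     # Kalman gain
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
    return E + K @ (Y - H @ E)                         # analysis ensemble

rng = np.random.default_rng(0)
E0 = rng.normal(0.0, 1.0, size=(1, 500))               # forecast ensemble
Ea = enkf_analysis(E0, np.array([5.0]), np.array([[1.0]]), np.array([[1.0]]), rng)
```

Deterministic square-root filters (SQRA, EnSRF) replace the observation perturbations in `Y` with an exact transform of the anomalies, which is precisely the design choice the study tests.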
Three-dimensional modeling of Mount Vesuvius with sequential integrated inversion
Tondi, Rosaria; de Franco, Roberto
2003-05-01
A new image of Mount Vesuvius and the surrounding area is recovered from the tomographic inversion of 693 first P wave arrivals recorded by 314 receivers deployed along five profiles which intersect the crater, and gravity data collected in 17,598 stations on land and offshore. The final three-dimensional (3-D) velocity model presented here is determined by interpolation of five 2-D velocity sections obtained from sequential integrated inversion (SII) of seismic and gravity data. The inversion procedure adopts the "maximum likelihood" scheme in order to jointly optimize seismic velocities and densities. In this way we recover velocity and density models both consistent with seismic and gravity data information. The model parameterization of these 2-D models is chosen in order to keep the diagonal elements of the seismic resolution matrix in the order of 0.2-0.8. The highest values of resolution are detected under the volcano edifice. The imaged 6-km-thick crustal volume underlies a 25 × 45 km2 area. The interpolation is performed by choosing the right grid for a smoothing algorithm which prepares optimum models for asymptotic ray theory methods. Hence this model can be used as a reference model for a 3-D tomographic inversion of seismic data. The 3-D gravity modeling is straightforward. The results of this study clearly image the continuous structure of the Mesozoic carbonate basement top and the connection of the volcano conduit structure to two shallow depressions, which in terms of hazard prevention are the regions through which magma may more easily flow toward the surface and cause possible eruptions.
Directory of Open Access Journals (Sweden)
S. Skachko
2008-12-01
This study focuses on an accurate estimation of ocean circulation via assimilation of satellite measurements of ocean dynamical topography into the global finite-element ocean model (FEOM). The dynamical topography data are derived from a complex analysis of multi-mission altimetry data combined with a referenced earth geoid. The assimilation is split into two parts. First, the mean dynamic topography is adjusted. To this end an adiabatic pressure correction method is used which reduces model divergence from the real evolution. Second, a sequential assimilation technique is applied to improve the representation of thermodynamical processes by assimilating the time varying dynamic topography. A method is used according to which the temperature and salinity are updated following the vertical structure of the first baroclinic mode. It is shown that the method leads to a partially successful assimilation approach reducing the rms difference between the model and data from 16 cm to 2 cm. This improvement of the mean state is accompanied by significant improvement of temporal variability in our analysis. However, it remains suboptimal, showing a tendency in the forecast phase of returning toward a free run without data assimilation. Both the mean difference and standard deviation of the difference between the forecast and observation data are reduced as the result of assimilation.
MPSLIB: A C++ class for sequential simulation of multiple-point statistical models
Hansen, Thomas Mejer; Vu, Le Thanh; Bach, Torben
Geostatistical simulation methods allow simulation of spatial structures and patterns based on a choice of statistical model. In the last few decades, multiple-point statistics (MPS) has been developed, which allows inferring the statistical model from a training image. This allows for a simpler quantification of the statistical model and simulation of more realistic (Earth) structures. A number of different algorithms for MPS-based simulation have been proposed, each associated with a unique set of pros and cons. MPSLIB is a C++ class that provides a framework for implementing most of the currently proposed multiple-point simulation methods based on sequential simulation. A number of the most widely used methods are provided as examples. The single normal equation simulation (SNESIM) method is implemented using both a tree and a list structure. A new generalized ENESIM (GENESIM) algorithm is proposed that can act as (in one extreme) the ENESIM algorithm and (in another extreme) similarly to the direct sampling algorithm. MPSLIB aims to be easy to compile on most platforms (standard C++11 is the only requirement) and is released under the open-source LGPLv3 license to encourage reuse and further development.
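The sequential-simulation idea behind direct-sampling-style MPS can be shown with a deliberately simplified 1-D sketch: each cell is simulated by scanning the training image for a location whose neighborhood matches the values already simulated. This is not MPSLIB code; real implementations work on 2-D/3-D grids with random paths and neighborhood templates:

```python
import random

def direct_sampling_1d(ti, n, w=2, seed=0):
    """Minimal 1-D direct-sampling sketch: simulate n cells left to right,
    conditioning each draw on the last w simulated values via pattern
    matching against the training image ti."""
    rng = random.Random(seed)
    sim = []
    for _ in range(n):
        pat = sim[-w:]                         # conditioning data (may be short)
        starts = list(range(len(pat), len(ti)))
        rng.shuffle(starts)                    # scan the TI in random order
        for s in starts:
            if ti[s - len(pat):s] == pat:      # neighborhood match found
                sim.append(ti[s])
                break
        else:
            sim.append(rng.choice(ti))         # fallback: draw from TI marginal
    return sim

ti = [0, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0]      # toy binary training image
sim = direct_sampling_1d(ti, 30)
```

SNESIM-style algorithms instead precompute the pattern statistics of `ti` in a search tree or list, which is the tree/list trade-off the abstract mentions.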
Bifurcation of non-negative solutions for an elliptic system
Institute of Scientific and Technical Information of China (English)
Anonymous
2008-01-01
In this paper, we consider a nonlinear elliptic system arising from a predator-prey model with diffusion. The predator growth rate is treated as the bifurcation parameter. The range of the parameter for which a nontrivial solution exists is found via the theory of bifurcation from infinity, local bifurcation and global bifurcation.
The Economic Efficiency of Urban Land Use with a Sequential Slack-Based Model in Korea
Directory of Open Access Journals (Sweden)
Yongrok Choi
2017-01-01
Since the inauguration of the government-led five-year economic plans in the 1960s, Korea has achieved remarkable economic development. Korea’s economic strategy, known as ‘The Miracle on the Han River’, focused on heavy and chemical industries such as ship building and petrochemicals and was based on resource-intensive urbanization. This rapid urban development caused a series of problems, such as over-development in urban areas, bottlenecks in utilities, and environmental degradation. Nevertheless, the Korean government has recently moved toward deregulation of the greenbelts of major city areas. Since very few studies have analyzed the urban land use economic efficiency (ULUEE) in Korea, this paper assesses the feasibility of the recent deregulation policy concerning the greenbelts utilizing the sequential slack-based measure (SSBM) model under environmental constraints across 16 South Korean cities from 2006 to 2013. Our research makes three significant contributions to urbanization research. First, this paper uses an SSBM model to analyze the dynamic changes of urban land use economic efficiency in Korea at the regional level; Second, this paper analyzes factors influencing ULUEE in Korea, and the feasibility of the deregulation policies on the greenbelts; Third, this paper suggests more performance-oriented policy alternatives to improve the ULUEE and implement sustainable greenbelt management.
Three-Verb Clusters in Interference Frisian: A Stochastic Model over Sequential Syntactic Input.
Hoekstra, Eric; Versloot, Arjen
2016-03-01
Interference Frisian (IF) is a variety of Frisian, spoken by mostly younger speakers, which is heavily influenced by Dutch. IF exhibits all six logically possible word orders in a cluster of three verbs. This phenomenon has been researched by Koeneman and Postma (2006), who argue for a parameter theory, which leaves frequency differences between various orders unexplained. Rejecting Koeneman and Postma's parameter theory, but accepting their conclusion that Dutch (and Frisian) data are input for the grammar of IF, we will argue that the word order preferences of speakers of IF are determined by frequency and similarity. More specifically, three-verb clusters in IF are sensitive to: their linear left-to-right similarity to two-verb clusters and three-verb clusters in Frisian and in Dutch; the (estimated) frequency of two- and three-verb clusters in Frisian and Dutch. The model will be shown to work best if Dutch and Frisian, and two- and three-verb clusters, have equal impact factors. If different impact factors are taken, the model's predictions do not change substantially, testifying to its robustness. This analysis is in line with recent ideas that the sequential nature of human speech is more important to syntactic processes than commonly assumed, and that less burden need be put on the hierarchical dimension of syntactic structure.
Inferring Sequential Order of Somatic Mutations during Tumorgenesis based on Markov Chain Model.
Kang, Hao; Cho, Kwang-Hyun; Zhang, Xiaohua Douglas; Zeng, Tao; Chen, Luonan
2015-01-01
Tumors develop and worsen through mutations accumulated on DNA sequences during tumorigenesis. Identifying the temporal order of gene mutations in cancer initiation and development is a challenging topic. It not only provides new insight into the study of tumorigenesis at the level of genome sequences but is also an effective tool for early diagnosis of tumors and preventive medicine. In this paper, we develop a novel method, TOMC (Temporal Order based on Markov Chain), to accurately estimate the sequential order of gene mutations during tumorigenesis from genome sequencing data based on a Markov chain model, and we also provide a new criterion to further infer the order of samples or patients, which can characterize the severity or stage of the disease. We applied our method to the analysis of tumors based on several high-throughput datasets. Specifically, first, we revealed that tumor suppressor genes (TSG) tend to be mutated ahead of oncogenes, which are considered important events for key functional loss and gain during tumorigenesis. Second, comparisons of various methods demonstrated that our approach has clear advantages over existing methods due to its consideration of the effect of mutation dependence among genes, such as co-mutation. Third and most important, our method is able to deduce the ordinal sequence of patients or samples to quantitatively characterize their severity of tumors. Therefore, our work provides a new way to quantitatively understand the development and progression of tumorigenesis based on high-throughput sequencing data.
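The first step of any such approach, estimating a first-order Markov transition matrix from observed per-sample mutation orders, can be sketched as follows (gene names and sequences below are made up for illustration; this is not the TOMC implementation):

```python
from collections import defaultdict

def transition_matrix(sequences):
    """Estimate first-order Markov transition probabilities from observed
    per-sample mutation orders (each sequence is a list of gene names)."""
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):     # consecutive mutation pairs
            counts[a][b] += 1
    probs = {}
    for a, row in counts.items():
        total = sum(row.values())
        probs[a] = {b: c / total for b, c in row.items()}
    return probs

# Hypothetical per-patient mutation orders (illustrative only)
seqs = [["TP53", "KRAS", "MYC"], ["TP53", "MYC"], ["KRAS", "MYC"]]
P = transition_matrix(seqs)
```

A global temporal order of genes can then be read off by ranking genes by how strongly the chain flows out of them versus into them, which is the kind of criterion the paper formalizes.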
Approximate L0 constrained Non-negative Matrix and Tensor Factorization
DEFF Research Database (Denmark)
Mørup, Morten; Madsen, Kristoffer Hougaard; Hansen, Lars Kai
2008-01-01
Non-negative matrix factorization (NMF), i.e. V = WH where both V, W and H are non-negative has become a widely used blind source separation technique due to its part based representation. The NMF decomposition is not in general unique and a part based representation not guaranteed. However...... path for the L1 norm regularized least squares NMF for fixed W can be calculated at the cost of an ordinary least squares solution based on a modification of the Least Angle Regression and Selection (LARS) algorithm forming a non-negativity constrained LARS (NLARS). With the full regularization path...
Normative personality trait development in adulthood: A 6-year cohort-sequential growth model.
Milojev, Petar; Sibley, Chris G
2017-03-01
The present study investigated patterns of normative change in personality traits across the adult life span (19 through 74 years of age). We examined change in extraversion, agreeableness, conscientiousness, neuroticism, openness to experience and honesty-humility using data from the first 6 annual waves of the New Zealand Attitudes and Values Study (N = 10,416; 61.1% female, average age = 49.46). We present a cohort-sequential latent growth model assessing patterns of mean-level change due to both aging and cohort effects. Extraversion decreased as people aged, with the most pronounced declines occurring in young adulthood, and then again in old age. Agreeableness, indexed with a measure focusing on empathy, decreased in young adulthood and remained relatively unchanged thereafter. Conscientiousness increased among young adults then leveled off and remained fairly consistent for the rest of the adult life span. Neuroticism and openness to experience decreased as people aged. However, the models suggest that these latter effects may also be partially due to cohort differences, as older people showed lower levels of neuroticism and openness to experience more generally. Honesty-humility showed a pronounced and consistent increase across the adult life span. These analyses of large-scale longitudinal national probability panel data indicate that different dimensions of personality follow distinct developmental processes throughout adulthood. Our findings also highlight the importance of young adulthood (up to about the age of 30) in personality trait development, as well as continuing change throughout the adult life span.
Monte Carlo Algorithm for Least Dependent Non-Negative Mixture Decomposition
Astakhov, Sergey A.; Stögbauer, Harald; Kraskov, Alexander; Grassberger, Peter
2006-01-01
We propose a simulated annealing algorithm (called SNICA for "stochastic non-negative independent component analysis") for blind decomposition of linear mixtures of non-negative sources with non-negative coefficients. The de-mixing is based on a Metropolis-type Monte Carlo search for least dependent components, with the mutual information between recovered components as a cost function and their non-negativity as a hard constraint. Elementary moves are shears in two-dimensional subspaces and rotations in three-dimensional subspaces. The algorithm is geared at decomposing signals whose probability densities peak at zero, the case typical in analytical spectroscopy and multivariate curve resolution. The decomposition performance on large samples of synthetic mixtures and experimental data is much better than that of traditional blind source separation methods based on principal component analysis (MILCA, FastICA, RADICAL) and chemometrics techniques (SIMPLISMA, ALS, BTEM). The source codes of SNICA, MILCA and th...
Suh, Chang-Won; Lee, Joong-Won; Hong, Yoon-Seok Timothy; Shin, Hang-Sik
2009-01-01
We propose an evolutionary process model induction system, based on grammar-based genetic programming, to automatically discover multivariate dynamic inference models that are able to predict fecal coliform bacteria removals using common process variables instead of directly measuring fecal coliform bacteria concentration in a full-scale municipal activated-sludge wastewater treatment plant. A sequential modeling paradigm is also proposed to derive multivariate dynamic models of fecal coliform removals in the evolutionary process model induction system. It is composed of two parts: the process estimator and the process predictor. The process estimator acts as an intelligent software sensor to achieve a good estimation of fecal coliform bacteria concentration in the influent. Then the process predictor yields sequential prediction of the effluent fecal coliform bacteria concentration based on the estimated fecal coliform bacteria concentration in the influent from the process estimator with other process variables. The results show that the evolutionary process model induction system with a sequential modeling paradigm has successfully evolved multivariate dynamic models of fecal coliform removals in the form of explicit mathematical formulas with high levels of accuracy and good generalization. The evolutionary process model induction system with sequential modeling paradigm proposed here provides a good alternative to develop cost-effective dynamic process models for a full-scale wastewater treatment plant and is readily applicable to a variety of other complex treatment processes.
Robust and Non-Negative Collective Matrix Factorization for Text-to-Image Transfer Learning.
Yang, Liu; Jing, Liping; Ng, Michael K
2015-12-01
Heterogeneous transfer learning has recently gained much attention as a new machine learning paradigm in which the knowledge can be transferred from source domains to target domains in different feature spaces. Existing works usually assume that source domains can provide accurate and useful knowledge to be transferred to target domains for learning. In practice, there may be noise appearing in given source (text) and target (image) domains data, and thus, the performance of transfer learning can be seriously degraded. In this paper, we propose a robust and non-negative collective matrix factorization model to handle noise in text-to-image transfer learning, and make a reliable bridge to transfer accurate and useful knowledge from the text domain to the image domain. The proposed matrix factorization model can be solved by an efficient iterative method, and the convergence of the iterative method can be shown. Extensive experiments on real data sets suggest that the proposed model is able to effectively perform transfer learning in noisy text and image domains, and it is superior to the popular existing methods for text-to-image transfer learning.
Bootstrap Sequential Determination of the Co-integration Rank in VAR Models
DEFF Research Database (Denmark)
Cavaliere, Giuseppe; Rahbek, Anders; Taylor, A. M. Robert
with empirical rejection frequencies often very much in excess of the nominal level. As a consequence, bootstrap versions of these tests have been developed. To be useful, however, sequential procedures for determining the co-integrating rank based on these bootstrap tests need to be consistent, in the sense...... in the literature by proposing a bootstrap sequential algorithm which we demonstrate delivers consistent cointegration rank estimation for general I(1) processes. Finite sample Monte Carlo simulations show the proposed procedure performs well in practice....
Directory of Open Access Journals (Sweden)
João Vieira Neto
2007-09-01
Full Text Available This research established models for the construction of plans of binomial sequential sampling for the tan-mite Dichopelmus notus Keifer (Acari, Eriophyidae) in mate-tea orchards. The study was carried out in a ten-year-old orchard, located in Chapecó, Santa Catarina state, Brazil. In three areas of approximately 2,500 m2, 30 plants were selected randomly. Fortnightly, from January to December, 2004, infestation of D. notus on 18 mature leaves of ten plants in each area was evaluated. The evaluations were executed directly in the orchard, using lenses (10x) with 1 cm2 of fixed field. The lines of the sequential plans were constructed using the methodology based on the confidence interval of Iwao (1975), considering the models of Normal Approach with Correction of Continuity, Normal Approach of Blyth (1986), Approach of Hall (1982) modified by Blyth (1986), Normal Approach of Molenaar (1973), Normal Approach of Pratt (1968), and the Leemis & Trivedi (1996) methodology. The models were evaluated considering amplitude analysis of the confidence intervals. The results showed that the Model of Normal Approach with Correction of Continuity should preferentially be used in the elaboration of plans of binomial sequential sampling for the tan-mite in mate-tea orchards.
García-Pintado, Javier; Neal, Jeff C.; Mason, David C.; Dance, Sarah L.; Bates, Paul D.
2013-04-01
Satellite-based imagery has proved useful for obtaining information on water levels in flood events. Microwave frequencies are generally more useful for flood detection than visible-band sensors because of their all-weather day-night capability. Specifically, the future SWOT mission, with Ka-band interferometry, will be able to provide direct Water Level Observations (WLOs), and current and future Synthetic Aperture Radar (SAR) sensors can provide information of flood extent, which, when intersected with a Digital Elevation Model (DEM) of the floodplain, provides indirect WLOs. By either means, satellite-based WLOs can be assimilated into a hydrodynamic model to decrease forecast uncertainty and further to estimate river discharge into the flooded domain. Operational scenarios can even make combined use of imagery from different uncoordinated missions to sequentially estimate river discharge. Thus, with an increasing number of operational satellites with WLO capability, information on the relationship between satellite first visit, revisit times, and forecast performance is required to optimise the operational scheduling of satellite imagery. By using an Ensemble Transform Kalman Filter (ETKF) and a synthetic analysis with the 2D hydrodynamic model LISFLOOD-FP based on a real flooding case affecting an urban area (summer 2007, Tewkesbury, Southwest UK), we evaluate the sensitivity of the forecast performance to visit parameters. As an example, we use different scenarios of revisit times and observational errors expected from the current COSMO-Skymed (CSK) constellation, with X-band SAR. We emulate a generic hydrologic-hydrodynamic modelling cascade by imposing a bias and spatiotemporal correlations to the inflow error ensemble into the hydrodynamic domain. First, in agreement with previous research, estimation and correction for this bias leads to a clear improvement in keeping the forecast on track. Second, imagery obtained early in the flood is shown to have a
Wald, Abraham
2013-01-01
In 1943, while in charge of Columbia University's Statistical Research Group, Abraham Wald devised Sequential Design, an innovative statistical inference system. Because the decision to terminate an experiment is not predetermined, sequential analysis can arrive at a decision much sooner and with substantially fewer observations than equally reliable test procedures based on a predetermined number of observations. The system's immense value was immediately recognized, and its use was restricted to wartime research and procedures. In 1945, it was released to the public and has since revolutio
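Wald's scheme survives today as the sequential probability ratio test (SPRT). A minimal sketch for Bernoulli data, with thresholds from Wald's classic approximations; the parameter values are illustrative:

```python
import math

def sprt(samples, p0=0.5, p1=0.7, alpha=0.05, beta=0.05):
    """Wald's SPRT for Bernoulli data, H0: p = p0 vs H1: p = p1.
    Stops as soon as the cumulative log-likelihood ratio crosses a
    threshold, so the sample size is not fixed in advance."""
    upper = math.log((1 - beta) / alpha)   # cross above -> accept H1
    lower = math.log(beta / (1 - alpha))   # cross below -> accept H0
    llr = 0.0
    for n, x in enumerate(samples, 1):
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "inconclusive", len(samples)

# a run of successes: the test stops long before all 20 are inspected
decision, n_used = sprt([True] * 20)
```

The early stopping seen here (a decision after 9 of 20 observations) is exactly the economy of observations the abstract describes.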
Ou, Hua-Se; Wei, Chao-Hai; Wu, Hai-Zhen; Mo, Ce-Hui; He, Bao-Yan
2015-10-01
This study proposed a sequential modeling approach using an artificial neural network (ANN) to develop four independent models which were able to predict biotreatment effluent variables of a full-scale coking wastewater treatment plant (CWWTP). Suitable structure and transfer function of the ANN were optimized by genetic algorithm. The sequential approach, which included two parts, an influent estimator and an effluent predictor, was used to develop dynamic models. The former parts of the models estimated the variations of influent COD, volatile phenol, cyanide, and NH4(+)-N. The latter parts of the models predicted effluent COD, volatile phenol, cyanide, and NH4(+)-N using the estimated values and other parameters. The performance of these models was evaluated by statistical parameters (such as the coefficient of determination (R^2), etc.). Obtained results indicated that the estimator developed dynamic models for influent COD (R^2 = 0.871), volatile phenol (R^2 = 0.904), cyanide (R^2 = 0.846), and NH4(+)-N (R^2 = 0.777), while the predictor developed feasible models for effluent COD (R^2 = 0.852) and cyanide (R^2 = 0.844), with slightly worse models for effluent volatile phenol (R^2 = 0.752) and NH4(+)-N (R^2 = 0.764). Thus, the proposed modeling processes can be used as a tool for the prediction of CWWTP performance.
Geiges, A.; Nowak, W.; Rubin, Y.
2013-12-01
Stochastic models of sub-surface systems generally suffer from parametric and conceptual uncertainty. To reduce the model uncertainty, model parameters are calibrated using additional collected data. These data often come from costly data acquisition campaigns that need to be optimized to collect the data with the highest data utility (DU) or value of information. In model-based approaches, the DU is evaluated based on the uncertain model itself and is therefore uncertain as well. Additionally, for non-linear models, data utility depends on the yet unobserved measurement values and can only be estimated as an expected value over an assumed distribution of possible measurement values. Both factors introduce uncertainty into the optimization of field campaigns. We propose and investigate a sequential interaction scheme between campaign optimization, data collection and model calibration. The field campaign is split into individual segments. Each segment consists of optimization, segment-wise data collection, and successive model calibration or data assimilation. By doing so, (1) the expected data utility for the newly collected data is replaced by their actual one, (2) the calibration restricts both conceptual and parametric model uncertainty, and thus (3) the distribution of possible future data values for the subsequent campaign segments also changes. Hence, the model to describe the real system improves successively with each collected data segment, and so does the estimate of the yet remaining data requirements to achieve the overall investigation goals. We will show that using the sequentially improved model for the optimal design (OD) of the remaining field campaign leads to superior and more targeted designs. However, this traditional sequential OD optimizes small data segments one-by-one. In such a strategy, possible mutual dependencies with the possible data values and the optimization of data collection in later segments are neglected. This allows a
DEFF Research Database (Denmark)
Walter, Alexander M; da Silva Pinheiro, Paulo César; Verhage, Matthijs;
2013-01-01
identified. We here propose a Sequential Pool Model (SPM), assuming a novel Ca(2+)-dependent action: a Ca(2+)-dependent catalyst that accelerates both forward and reverse priming reactions. While both models account for fast fusion from the Readily-Releasable Pool (RRP) under control of synaptotagmin-1...... that the elusive 'alternative Ca(2+) sensor' for slow release might be the upstream priming catalyst, and that a sequential model effectively explains Ca(2+)-dependent properties of secretion without assuming parallel pools or sensors....
Brief Lags in Interrupted Sequential Performance: Evaluating a Model and Model Evaluation Method
2015-01-05
Keywords: task interruption; sequence errors; cognitive modeling; goodness-of-fit testing. We examined effects of adding brief (1 second) lags...should decay faster than pred2, such that after a lag there is increased probability of an intrusion by pred2 and thus an error at offset 1. The second ...For example, language production requires that words be produced in the correct order, and research in this domain has examined sequence errors at the
Non-negative Matrix Factorization as a Method for Studying Coronal Heating
Barnes, Will; Bradshaw, Stephen
2015-04-01
Many theoretical efforts have been made to model the response of coronal loops to nanoflare heating, but the theory has long suffered from a lack of direct observations. Nanoflares, originally proposed by Parker (1988), heat the corona through short, impulsive bursts of energy. Because of their short duration and comparatively low amplitude, emission signatures from nanoflare heating events are often difficult to detect. Past algorithms (e.g. Ugarte-Urra and Warren, 2014) for measuring the frequency of transient brightenings in active region cores have provided only a lower bound for such measurements. We present the use of non-negative matrix factorization (NMF) to analyze spectral data in active region cores in order to provide more accurate determinations of nanoflare heating properties. NMF, a matrix deconvolution technique, has a variety of applications, ranging from Raman spectroscopy to face recognition, but, to our knowledge, has not been applied in the field of solar physics. The strength of NMF lies in its ability to estimate sources (heating events) from measurements (observed spectral emission) without any knowledge of the mixing process (Cichocki et al., 2009). We apply our NMF algorithm to forward-modeled emission representative of that produced by nanoflare heating events in an active region core. The heating events are modeled using a state-of-the-art hydrodynamics code (Bradshaw and Cargill, 2013) and the emission and active regions are synthesized using advanced forward modeling and visualization software (Bradshaw and Klimchuk, 2011; Reep et al., 2013). From these active region visualizations, our NMF algorithm is then able to predict the heating event frequency and amplitudes. Improved methods of nanoflare detection will help to answer fundamental questions regarding the frequency of energy release in the solar corona and how the corona responds to such impulsive heating. Additionally, development of reliable, automated nanoflare detection
Bootstrap Sequential Determination of the Co-integration Rank in VAR Models
DEFF Research Database (Denmark)
Cavaliere, Giuseppe; Rahbek, Anders; Taylor, A. M. Robert
with empirical rejection frequencies often very much in excess of the nominal level. As a consequence, bootstrap versions of these tests have been developed. To be useful, however, sequential procedures for determining the co-integrating rank based on these bootstrap tests need to be consistent, in the sense...... that the probability of selecting a rank smaller than (equal to) the true co-integrating rank will converge to zero (one minus the marginal significance level), as the sample size diverges, for general I(1) processes. No such likelihood-based procedure is currently known to be available. In this paper we fill this gap...
Kim, W G; Park, J J; Oh, S I
2001-01-01
We report a reliable chronic heart failure model in sheep using sequential ligation of the homonymous artery and its diagonal branch. After a left anterior thoracotomy in Corridale sheep, the homonymous artery was ligated at a point approximately 40% of the distance from the apex to the base of the heart, and after 1 hour, the diagonal vessel was ligated at a point at the same level. Hemodynamic measurements were done preligation, 30 minutes after the homonymous artery ligation, and 1 hour after diagonal branch ligation. The electrocardiograms were obtained as needed, and cardiac function was also evaluated with ultrasonography. After a predetermined interval (2 months for five animals and 3 months for two animals), the animals were reevaluated in the same way as before, and were killed for postmortem examination of their hearts. All seven animals survived the experimental procedures. Statistically significant decreases in systemic arterial blood pressure and cardiac output and increases in pulmonary artery capillary wedge pressure were observed 1 hour after sequential ligation of the homonymous artery and its diagonal branch. Ultrasonographic analyses demonstrated variable degrees of anteroseptal dyskinesia and akinesia in all animals. The data from animals at 2 months after coronary artery ligation showed significant increases in central venous pressure, pulmonary artery pressure, and pulmonary artery capillary wedge pressure. Left ventricular end-diastolic dimension and left ventricular end-systolic dimension on ultrasonographic studies were also increased. Electrocardiography showed severe ST elevation immediately after the ligation and pathologic Q waves were found at 2 months after ligation. The thin walled infarcted areas with chamber enlargement were clearly seen in the hearts removed at 2 and 3 months after ligation. In conclusion, we could achieve a reliable ovine model of chronic heart failure using a simple concept of sequential ligation of the
Gragne, A. S.; Sharma, A.; Mehrotra, R.; Alfredsen, K. T.
2012-12-01
Accuracy of reservoir inflow forecasts is instrumental for maximizing the value of water resources and influences the operation of hydropower reservoirs significantly. Improving hourly reservoir inflow forecasts over a 24-hour lead-time is considered with the day-ahead (Elspot) market of the Nordic exchange market in perspective. The procedure presented comprises an error model added on top of an un-alterable constant parameter conceptual model, and a sequential data assimilation routine. The structure of the error model was investigated using freely available software for detecting mathematical relationships in a given dataset (EUREQA) and adopted to contain minimum complexity for computational reasons. As new streamflow data become available, the extra information manifested in the discrepancies between measurements and conceptual model outputs is extracted and assimilated into the forecasting system recursively using a Sequential Monte Carlo technique. Besides improving forecast skills significantly, the probabilistic inflow forecasts provided by the present approach contain suitable information for reducing uncertainty in decision making processes related to hydropower systems operation. The potential of the current procedure for improving accuracy of inflow forecasts at lead-times up to 24 hours and its reliability in different seasons of the year will be illustrated and discussed thoroughly.
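The Sequential Monte Carlo assimilation step can be illustrated with a minimal bootstrap particle filter; the random-walk state model and noise levels below are hypothetical stand-ins, not the paper's conceptual hydrological model:

```python
import math
import random

def particle_filter(observations, n=500, proc_sd=0.5, obs_sd=1.0):
    """Bootstrap particle filter (sequential Monte Carlo) for a
    random-walk state observed in Gaussian noise."""
    particles = [random.gauss(0.0, 1.0) for _ in range(n)]
    estimates = []
    for y in observations:
        # propagate each particle through the process model
        particles = [x + random.gauss(0.0, proc_sd) for x in particles]
        # weight particles by the Gaussian observation likelihood
        w = [math.exp(-0.5 * ((y - x) / obs_sd) ** 2) for x in particles]
        total = sum(w)
        w = [wi / total for wi in w]
        estimates.append(sum(wi * x for wi, x in zip(w, particles)))
        # multinomial resampling to avoid weight degeneracy
        particles = random.choices(particles, weights=w, k=n)
    return estimates

random.seed(0)
obs = [5.0 + random.gauss(0.0, 1.0) for _ in range(30)]  # truth near 5
est = particle_filter(obs)
```

Each new observation re-weights and resamples the ensemble, which is the "recursive assimilation of discrepancies" the abstract describes in its simplest form.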
Energy Technology Data Exchange (ETDEWEB)
Chen, W.-Y.; Ju, Y.-R.; Liao, C.-M. (Department of Bioenvironmental Systems Engineering, National Taiwan University, Taipei 10617, Taiwan (China)); Tsai, J.-W. (Institute of Ecology and Evolutionary Ecology, China Medical University, Taichung 40402, Taiwan (China))
2010-05-15
The purpose of this paper was to use a quantitative systems-level approach employing a biotic ligand model-based threshold damage model to examine physiological responses of tilapia and freshwater clam to sequential pulsed and fluctuating arsenic concentrations. We tested the present model and triggering mechanisms by carrying out a series of modeling experiments where we used periodic pulses and sine-waves as featured exposures. Our results indicate that changes in the dominant frequencies and pulse timing can shift the safe rate distributions for tilapia, but not for that of freshwater clam. We found that tilapia increase bioenergetic costs to maintain the acclimation during pulsed and sine-wave exposures. Our ability to predict the consequences of physiological variation under time-varying exposure patterns has also implications for optimizing species growing, cultivation strategies, and risk assessment in realistic situations. - Systems-level modeling of pulsed and fluctuating arsenic exposures.
Non-negative matrix analysis in x-ray spectromicroscopy: choosing regularizers
Mak, Rachel; Wild, Stefan M.; Jacobsen, Chris
2016-01-01
In x-ray spectromicroscopy, a set of images can be acquired across an absorption edge to reveal chemical speciation. We previously described the use of non-negative matrix approximation methods for improved classification and analysis of these types of data. We present here an approach to find appropriate values of regularization parameters for this optimization approach. PMID:27041779
Existence of non-negative solutions for nonlinear equations in the semi-positone case
Directory of Open Access Journals (Sweden)
Naji Yebari
2006-09-01
Full Text Available Using the fibring method we prove the existence of non-negative solutions of the p-Laplacian boundary value problem $-\Delta_p u=\lambda f(u)$, for any $\lambda >0$, on any regular bounded domain of $\mathbb{R}^N$, in the special case $f(t)=t^{q-1}$.
Non-negatively curved 5-manifolds with almost maximal symmetry rank
Galaz-Garcia, Fernando
2011-01-01
We show that a closed, simply-connected, non-negatively curved 5-manifold admitting an effective, isometric $T^2$ action is diffeomorphic to one of $S^5$, $S^3\times S^2$, $S^3\tilde{\times} S^2$ (the non-trivial $S^3$-bundle over $S^2$) or the Wu manifold $SU(3)/SO(3)$.
Directory of Open Access Journals (Sweden)
Gorka Merino
2011-07-01
Full Text Available Global marine fisheries production has reached a maximum and may even be declining. Underlying this trend is a well-understood sequence of development, overexploitation, depletion and in some instances collapse of individual fish stocks, a pattern that can sequentially link geographically distant populations. Ineffective governance, economic considerations and climate impacts are often responsible for this sequence, although the relative contribution of each factor is contentious. In this paper we use a global bioeconomic model to explore the synergistic effects of climate variability, economic pressures and management measures in causing or avoiding this sequence. The model shows how a combination of climate-induced variability in the underlying fish population production, particular patterns of demand for fish products and inadequate management is capable of driving the world’s fisheries into development, overexploitation, collapse and recovery phases consistent with observations. Furthermore, it demonstrates how a sequential pattern of overexploitation can emerge as an endogenous property of the interaction between regional environmental fluctuations and a globalized trade system. This situation is avoidable through adaptive management measures that ensure the sustainability of regional production systems in the face of increasing global environmental change and markets. It is concluded that global management measures are needed to ensure that global food supply from marine products is optimized while protecting long-term ecosystem services across the world’s oceans.
Technique for computing the PDFs and CDFs of non-negative infinitely divisible random variables
Veillette, Mark S
2010-01-01
We present a method for computing the PDF and CDF of a non-negative infinitely divisible random variable $X$. Our method uses the Lévy-Khintchine representation of the Laplace transform $\mathbb{E} e^{-\lambda X} = e^{-\phi(\lambda)}$, where $\phi$ is the Laplace exponent. We apply the Post-Widder method for Laplace transform inversion combined with a sequence convergence accelerator to obtain accurate results. We demonstrate this technique on several examples including the stable distribution, mixtures thereof, and integrals with respect to non-negative Lévy processes. Software to implement this method is available from the authors and we illustrate its use at the end of the paper.
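The Post-Widder approach can be checked on a case with a closed-form answer: for the exponential distribution, F(λ) = 1/(1+λ) has n-th derivative (-1)^n n!/(1+λ)^(n+1), so the n-th Post-Widder approximant collapses to (n/(n+t))^(n+1), which converges to the true density e^(-t). A toy sketch of this special case, without the convergence accelerator the authors use:

```python
import math

def post_widder_exponential(t, n):
    """n-th Post-Widder approximant of the exponential PDF.
    In general f_n(t) = ((-1)^n / n!) * (n/t)**(n+1) * F^(n)(n/t); for
    F(lam) = 1/(1 + lam) the n-th derivative is (-1)^n n!/(1+lam)**(n+1),
    and everything cancels to (n / (n + t))**(n + 1)."""
    return (n / (n + t)) ** (n + 1)

approx = post_widder_exponential(1.0, 2000)  # approaches e^{-1} as n grows
exact = math.exp(-1.0)
```

Convergence is slow, of order 1/n, which is why a sequence accelerator is needed in practice.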
Expanding solitons with non-negative curvature operator coming out of cones
Schulze, Felix
2010-01-01
We show that a Ricci flow of any complete Riemannian manifold without boundary with bounded non-negative curvature operator and non-zero asymptotic volume ratio exists for all time and has constant asymptotic volume ratio. We show that there is a limit solution, obtained by scaling down this solution at a fixed point in space, which is an expanding soliton coming out of the asymptotic cone at infinity.
Real-time detection of overlapping sound events with non-negative matrix factorization
Dessein, Arnaud; Cont, Arshia; Lemaitre, Guillaume
2013-01-01
In this paper, we investigate the problem of real-time detection of overlapping sound events by employing non-negative matrix factorization techniques. We consider a setup where audio streams arrive in real-time to the system and are decomposed onto a dictionary of event templates learned off-line prior to the decomposition. An important drawback of existing approaches in this context is the lack of controls on the decomposition. We propose and compare two provably con...
Supervised non-negative matrix factorization based latent semantic image indexing
Institute of Scientific and Technical Information of China (English)
Dong Liang; Jie Yang; Yuchou Chang
2006-01-01
A novel latent semantic indexing (LSI) approach for content-based image retrieval is presented in this paper. Firstly, an extension of non-negative matrix factorization (NMF) to supervised initialization is discussed. Then, supervised NMF is used in LSI to find the relationships between low-level features and high-level semantics. The retrieved results are compared with other approaches and a good performance is obtained.
An elementary proof of the Harnack inequality for non-negative infinity-superharmonic functions
Directory of Open Access Journals (Sweden)
Tilak Bhattacharya
2001-06-01
Full Text Available We present an elementary proof of the Harnack inequality for non-negative viscosity supersolutions of $\Delta_{\infty}u=0$. This was originally proven by Lindqvist and Manfredi using sequences of solutions of the $p$-Laplacian. We work directly with the $\Delta_{\infty}$ operator using the distance function as a test function. We also provide simple proofs of the Liouville property, Hopf boundary point lemma and Lipschitz continuity.
Sequential memory: Binding dynamics
Afraimovich, Valentin; Gong, Xue; Rabinovich, Mikhail
2015-10-01
Temporal order memories are critical for everyday animal and human functioning. Experiments and our own experience show that the binding or association of various features of an event together and the maintaining of multimodality events in sequential order are the key components of any sequential memories—episodic, semantic, working, etc. We study the robustness of binding sequential dynamics based on our previously introduced model in the form of generalized Lotka-Volterra equations. In the phase space of the model, there exists a multi-dimensional binding heteroclinic network consisting of saddle equilibrium points and heteroclinic trajectories joining them. We prove here the robustness of the binding sequential dynamics, i.e., the feasibility phenomenon for coupled heteroclinic networks: for each collection of successive heteroclinic trajectories inside the unified networks, there is an open set of initial points such that the trajectory going through each of them follows the prescribed collection staying in a small neighborhood of it. We show also that the symbolic complexity function of the system restricted to this neighborhood is a polynomial of degree L - 1, where L is the number of modalities.
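The generalized Lotka-Volterra dynamics underlying the binding model can be sketched with a three-mode winnerless-competition network; the cyclic coupling matrix below is a standard illustrative choice, not the paper's multi-modality network:

```python
def glv(a0, sigma, rho, dt=0.01, steps=20000):
    """Euler integration of generalized Lotka-Volterra equations
    da_i/dt = a_i * (sigma_i - sum_j rho[i][j] * a_j)."""
    a, n, traj = list(a0), len(a0), []
    for _ in range(steps):
        rate = [a[i] * (sigma[i] - sum(rho[i][j] * a[j] for j in range(n)))
                for i in range(n)]
        # small floor keeps the trajectory non-negative under Euler stepping
        a = [max(a[i] + dt * rate[i], 1e-12) for i in range(n)]
        traj.append(list(a))
    return traj

sigma = [1.0, 1.0, 1.0]
# cyclic asymmetric inhibition: each mode strongly suppresses one
# neighbor and weakly suppresses the other, yielding sequential switching
rho = [[1.0, 0.5, 2.0],
       [2.0, 1.0, 0.5],
       [0.5, 2.0, 1.0]]
traj = glv([0.6, 0.3, 0.1], sigma, rho)
```

With this asymmetry the trajectory visits saddle equilibria near each coordinate axis in turn, the discrete analogue of the heteroclinic sequence described in the abstract.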
DEFF Research Database (Denmark)
Mørup, Morten; Hansen, Lars Kai; Parnas, Josef;
2006-01-01
generalized to a parallel factor (PARAFAC) model to form a non-negative multi-way factorization (NMWF). While the NMF can examine subject specific activities the NMWF can effectively extract the most similar activities across subjects and or conditions. The methods are tested on a proprioceptive stimulus...... consisting of a weight change in a handheld load. While somatosensory gamma oscillations have previously only been evoked by electrical stimuli we hypothesized that a natural proprioceptive stimulus also would be able to evoke gamma oscillations. ITPC maxima were determined by visual inspection...... contralateral to stimulus side and additionally an unexpected 20Hz activity slightly lateralized in the frontal central region. Consequently, also proprioceptive stimuli are able to elicit evoked gamma activity....
Two pitfalls of BOLD fMRI magnitude-based neuroimage analysis: non-negativity and edge effect.
Chen, Zikuan; Calhoun, Vince D
2011-08-15
BOLD fMRI is accepted as a noninvasive imaging modality for neuroimaging and brain mapping. A BOLD fMRI dataset consists of magnitude and phase components. Currently, only the magnitude is used for neuroimage analysis. In this paper, we show that the fMRI-magnitude-based neuroimage analysis may suffer two pitfalls: one is that the magnitude is non-negative and cannot differentiate positive from negative BOLD activity; the other is an edge effect that may manifest as an edge enhancement or a spatial interior dip artifact at a local uniform BOLD region. We demonstrate these pitfalls via numeric simulations using a BOLD fMRI model and also via a phantom experiment. We also propose a solution by making use of the fMRI phase image, the counterpart of the fMRI magnitude.
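The non-negativity pitfall is elementary to demonstrate: the magnitude of a complex voxel signal discards the sign of the underlying change, while the phase retains it. A schematic illustration with hypothetical numbers, not the authors' fMRI model:

```python
import cmath

# schematic complex voxel signal: BOLD responses of opposite sign modeled
# as equal phase shifts in opposite directions (illustrative values only)
baseline = 1.0 + 0.0j
positive = baseline * cmath.exp(1j * 0.1)   # "positive" BOLD activity
negative = baseline * cmath.exp(-1j * 0.1)  # "negative" activity (deactivation)

mag_pos, mag_neg = abs(positive), abs(negative)  # indistinguishable magnitudes
ph_pos, ph_neg = cmath.phase(positive), cmath.phase(negative)  # sign survives
```

The two responses produce identical magnitudes but opposite phases, which is why the paper proposes using the phase image alongside the magnitude.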
Goad, Clyde C.; Chadwell, C. David
1993-01-01
GEODYNII is a conventional batch least-squares differential corrector computer program with deterministic models of the physical environment. Conventional algorithms were used to process differenced phase and pseudorange data to determine eight-day Global Positioning System (GPS) orbits with several meter accuracy. However, random physical processes drive the errors whose magnitudes prevent improving the GPS orbit accuracy. To improve the orbit accuracy, these random processes should be modeled stochastically. The conventional batch least-squares algorithm cannot accommodate stochastic models; only a stochastic estimation algorithm is suitable, such as a sequential filter/smoother. Also, GEODYNII cannot currently model the correlation among data values. Differenced pseudorange, and especially differenced phase, are precise data types that can be used to improve the GPS orbit precision. To overcome these limitations and improve the accuracy of GPS orbits computed using GEODYNII, we proposed to develop a sequential stochastic filter/smoother processor by using GEODYNII as a type of trajectory preprocessor. Our proposed processor is now completed. It contains a correlated double difference range processing capability, first order Gauss Markov models for the solar radiation pressure scale coefficient and y-bias acceleration, and a random walk model for the tropospheric refraction correction. The development approach was to interface the standard GEODYNII output files (measurement partials and variationals) with software modules containing the stochastic estimator, the stochastic models, and a double differenced phase range processing routine. Thus, no modifications to the original GEODYNII software were required. A schematic of the development is shown. The observational data are edited in the preprocessor and the data are passed to GEODYNII as one of its standard data types. A reference orbit is determined using GEODYNII as a batch least-squares processor and the
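The sequential filter idea can be illustrated with a scalar Kalman filter tracking a first-order Gauss-Markov state, the same family of models used here for the solar radiation pressure coefficient; all parameter values below are toy choices, not GEODYNII's state model:

```python
import random

def kalman_1d(zs, phi=0.95, q=0.1, r=0.5, x0=0.0, p0=1.0):
    """Scalar Kalman filter for the first-order Gauss-Markov state
    x_k = phi * x_{k-1} + w_k (var q), observed as z_k = x_k + v_k (var r)."""
    x, p, out = x0, p0, []
    for z in zs:
        x, p = phi * x, phi * phi * p + q        # predict
        k = p / (p + r)                          # Kalman gain
        x, p = x + k * (z - x), (1.0 - k) * p    # update
        out.append(x)
    return out

random.seed(42)
# simulate a Gauss-Markov truth and noisy measurements of it
truth, x = [], 2.0
for _ in range(200):
    x = 0.95 * x + random.gauss(0.0, 0.1 ** 0.5)
    truth.append(x)
zs = [t + random.gauss(0.0, 0.5 ** 0.5) for t in truth]
est = kalman_1d(zs)
```

Unlike a batch least-squares fit with a deterministic model, the filter's state estimate absorbs the random process noise sequentially, which is exactly the capability the abstract says GEODYNII lacked.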
Directory of Open Access Journals (Sweden)
D. Herckenrath
2013-10-01
Full Text Available Increasingly, ground-based and airborne geophysical data sets are used to inform groundwater models. Recent research focuses on establishing coupling relationships between geophysical and groundwater parameters. To fully exploit such information, this paper presents and compares different hydrogeophysical inversion approaches to inform a field-scale groundwater model with time domain electromagnetic (TDEM) and electrical resistivity tomography (ERT) data. In a sequential hydrogeophysical inversion (SHI), a groundwater model is calibrated with geophysical data by coupling groundwater model parameters with the inverted geophysical models. We subsequently compare the SHI with a joint hydrogeophysical inversion (JHI). In the JHI, a geophysical model is simultaneously inverted with a groundwater model by coupling the groundwater and geophysical parameters to explicitly account for an established petrophysical relationship and its accuracy. Simulations for a synthetic groundwater model and TDEM data showed improved estimates for groundwater model parameters that were coupled to relatively well-resolved geophysical parameters when employing a high-quality petrophysical relationship. Compared to a SHI these improvements were insignificant and geophysical parameter estimates became slightly worse. When employing a low-quality petrophysical relationship, groundwater model parameters improved less for both the SHI and JHI, where the SHI performed relatively better. When comparing a SHI and JHI for a real-world groundwater model and ERT data, differences in parameter estimates were small. For both cases investigated in this paper, the SHI seems favorable, taking into account parameter error, data fit and the complexity of implementing a JHI in combination with its larger computational burden.
Directory of Open Access Journals (Sweden)
Ping-Huei Tsai
Full Text Available BACKGROUND: There is an emerging interest in using magnetic resonance imaging (MRI) T2* measurement for the evaluation of degenerative cartilage in osteoarthritis (OA). However, relatively few studies have addressed OA-related changes in adjacent knee structures. This study used MRI T2* measurement to investigate sequential changes in knee cartilage, meniscus, and subchondral bone marrow in a rat OA model induced by anterior cruciate ligament transection (ACLX). MATERIALS AND METHODS: Eighteen male Sprague Dawley rats were randomly separated into three groups (n = 6 per group). Group 1 was the normal control group. Groups 2 and 3 received ACLX and sham-ACLX, respectively, of the right knee. T2* values were measured in the knee cartilage, the meniscus, and femoral subchondral bone marrow of all rats at 0, 4, 13, and 18 weeks after surgery. RESULTS: Cartilage T2* values were significantly higher at 4, 13, and 18 weeks postoperatively in rats of the ACLX group than in rats of the control and sham groups (p<0.001). In the ACLX group (compared to the sham and control groups), T2* values increased significantly first in the posterior horn of the medial meniscus at 4 weeks (p = 0.001), then in the anterior horn of the medial meniscus at 13 weeks (p<0.001), and began to increase significantly in the femoral subchondral bone marrow at 13 weeks (p = 0.043). CONCLUSION: Quantitative MR T2* measurements of OA-related tissues are feasible. Sequential changes in T2* over time in cartilage, meniscus, and subchondral bone marrow were documented. This information could be potentially useful for in vivo monitoring of disease progression.
Matgen, P.; Montanari, M.; Hostache, R.; Pfister, L.; Hoffmann, L.; Plaza, D.; Pauwels, V. R. N.; de Lannoy, G. J. M.; de Keyser, R.; Savenije, H. H. G.
2010-09-01
With the onset of new satellite radar constellations (e.g. Sentinel-1) and advances in computational science (e.g. grid computing) enabling the supply and processing of multi-mission satellite data at a temporal frequency that is compatible with real-time flood forecasting requirements, this study presents a new concept for the sequential assimilation of Synthetic Aperture Radar (SAR)-derived water stages into coupled hydrologic-hydraulic models. The proposed methodology consists of adjusting storages and fluxes simulated by a coupled hydrologic-hydraulic model using a Particle Filter-based data assimilation scheme. Synthetic observations of water levels, representing satellite measurements, are assimilated into the coupled model in order to investigate the performance of the proposed assimilation scheme as a function of both accuracy and frequency of water level observations. The use of the Particle Filter provides flexibility regarding the form of the probability densities of both model simulations and remote sensing observations. We illustrate the potential of the proposed methodology using a twin experiment over a widely studied river reach located in the Grand-Duchy of Luxembourg. The study demonstrates that the Particle Filter algorithm leads to significant uncertainty reduction of water level and discharge at the time step of assimilation. However, updating the storages of the model only improves the model forecast over a very short time horizon. A more effective way of updating thus consists in adjusting both states and inputs. The proposed methodology, which consists in updating the biased forcing of the hydraulic model using information on model errors that is inferred from satellite observations, enables persistent model improvement. The present schedule of satellite radar missions is such that it is likely that there will be continuity for SAR-based operational water management services. This research contributes to evolve reactive flood management into
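The core of the assimilation scheme described above can be illustrated with a toy bootstrap particle filter. The sketch below is a minimal numpy illustration, not the authors' coupled hydrologic-hydraulic implementation: the "water stage" is a synthetic random walk, and all names, noise levels, and particle counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

def particle_filter(obs, n=1000, proc_std=0.5, obs_std=0.3):
    """Bootstrap particle filter for a random-walk state observed with
    Gaussian noise: propagate, weight by the likelihood, resample."""
    particles = rng.normal(0.0, 1.0, n)
    estimates = []
    for z in obs:
        particles = particles + rng.normal(0.0, proc_std, n)  # propagate
        w = np.exp(-0.5 * ((z - particles) / obs_std) ** 2)   # likelihood weights
        w = w + 1e-12                                         # guard against all-zero weights
        w /= w.sum()
        particles = particles[rng.choice(n, n, p=w)]          # resample
        estimates.append(particles.mean())
    return np.array(estimates)

truth = np.cumsum(rng.normal(0.0, 0.5, 50))   # synthetic "water stage"
obs = truth + rng.normal(0.0, 0.3, 50)        # noisy satellite-like observations
est = particle_filter(obs)
rmse = float(np.sqrt(np.mean((est - truth) ** 2)))
```

Because the weights are only evaluated pointwise, no Gaussian assumption on the state or observation densities is needed, which is the flexibility the abstract refers to.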
Ascent sequences and upper triangular matrices containing non-negative integers
Dukes, Mark
2009-01-01
This paper presents a bijection between ascent sequences and upper triangular matrices whose non-negative entries are such that all rows and columns contain at least one non-zero entry. We show the equivalence of several natural statistics on these structures under this bijection and prove that some of these statistics are equidistributed. Several special classes of matrices are shown to have simple formulations in terms of ascent sequences. Binary matrices are shown to correspond to ascent sequences with no two adjacent entries the same. Bidiagonal matrices are shown to be related to order-consecutive set partitions, and a simple condition on the ascent sequences generates this class.
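The objects on one side of this bijection are easy to enumerate directly. The following short Python sketch (an illustration, not taken from the paper) generates all ascent sequences of length n, i.e. sequences with x1 = 0 and each subsequent entry between 0 and 1 + asc(x1, ..., xi); their counts reproduce the Fishburn numbers 1, 2, 5, 15, 53, ...

```python
def ascents(seq):
    # number of positions i with seq[i] < seq[i+1]
    return sum(1 for a, b in zip(seq, seq[1:]) if a < b)

def ascent_sequences(n):
    """All ascent sequences of length n: start with (0,), and extend each
    sequence by any value in the range 0 .. 1 + (number of ascents so far)."""
    seqs = [(0,)]
    for _ in range(n - 1):
        seqs = [s + (v,) for s in seqs for v in range(ascents(s) + 2)]
    return seqs

counts = [len(ascent_sequences(n)) for n in range(1, 6)]
print(counts)  # the Fishburn numbers: [1, 2, 5, 15, 53]
```

These are exactly the counts of the matrices described in the abstract, which is what the bijection guarantees.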
Institute of Scientific and Technical Information of China (English)
Zheng Zhonglong; Yang Jie
2005-01-01
Many problems in image representation and classification involve some form of dimensionality reduction. Non-negative matrix factorization (NMF) is a recently proposed unsupervised procedure for learning spatially localized, parts-based subspace representations of objects. An improvement of classical NMF that combines it with Log-Gabor wavelets to enhance its parts-based learning ability is presented. The new method is compared with principal component analysis (PCA) and locally linear embedding (LLE), both proposed recently in Science. Finally, the new method is applied to several real-world datasets and achieves good performance in representation and classification.
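For reference, the classical NMF that this line of work builds on factorizes a non-negative data matrix V as V ≈ WH with both factors non-negative. A minimal numpy sketch of the standard Lee-Seung multiplicative updates (without the Log-Gabor extension proposed in the paper; the data, dimensions, and iteration count are illustrative):

```python
import numpy as np

def nmf(V, r, n_iter=200, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates for V ≈ W @ H (Frobenius loss).
    Both factors stay non-negative because each update only multiplies
    non-negative numbers."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], r)) + eps
    H = rng.random((r, V.shape[1])) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

V = np.random.default_rng(1).random((20, 12))   # non-negative toy data
W, H = nmf(V, r=4)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The non-negativity of W and H is what produces the additive, parts-based representations mentioned in the abstract.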
Fast Bayesian Non-Negative Matrix Factorisation and Tri-Factorisation
DEFF Research Database (Denmark)
Brouwer, Thomas; Frellsen, Jes; Liò, Pietro
We present a fast variational Bayesian algorithm for performing non-negative matrix factorisation and tri-factorisation. We show that our approach achieves faster convergence per iteration and timestep (wall-clock) than Gibbs sampling and non-probabilistic approaches, and does not require additional...... samples to estimate the posterior. We show that in particular for matrix tri-factorisation convergence is difficult, but our variational Bayesian approach offers a fast solution, allowing the tri-factorisation approach to be used more effectively....
Non-negative Ricci curvature on closed manifolds under Ricci flow
Maximo, Davi
2009-01-01
In this short note we show that non-negative Ricci curvature is not preserved under Ricci flow for closed manifolds of dimensions four and above, strengthening a previous result of Knopf in \cite{K} for complete non-compact manifolds of bounded curvature. This brings down to four dimensions a similar result Böhm and Wilking have for dimensions twelve and above, \cite{BW}. Moreover, the manifolds constructed here are Kähler manifolds and relate to a question raised by Xiuxiong Chen in \cite{XC}, \cite{XCL}.
Single-Channel Speech Separation using Sparse Non-Negative Matrix Factorization
DEFF Research Database (Denmark)
Schmidt, Mikkel N.; Olsson, Rasmus Kongsgaard
2007-01-01
We apply machine learning techniques to the problem of separating multiple speech sources from a single microphone recording. The method of choice is a sparse non-negative matrix factorization algorithm, which in an unsupervised manner can learn sparse representations of the data. This is applied...... to the learning of personalized dictionaries from a speech corpus, which in turn are used to separate the audio stream into its components. We show that computational savings can be achieved by segmenting the training data on a phoneme level. To split the data, a conventional speech recognizer is used...
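A sparse NMF of the kind used in this line of work can be sketched by adding an L1 penalty on the activations to the multiplicative updates. The snippet below is a hedged numpy illustration, not the authors' algorithm or speech pipeline; the penalty weight `lam` and the per-atom normalization follow one common convention, and all data and dimensions are made up.

```python
import numpy as np

def sparse_nmf(V, r, lam=0.0, n_iter=200, eps=1e-9, seed=0):
    """NMF with an L1 penalty on the activations H: minimize
    ||V - W @ H||_F^2 + lam * sum(H), via multiplicative updates."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], r)) + eps
    H = rng.random((r, V.shape[1])) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + lam + eps)   # lam shrinks H toward 0
        W *= (V @ H.T) / (W @ H @ H.T + eps)
        W /= W.sum(axis=0, keepdims=True)            # normalize dictionary atoms
    return W, H

V = np.random.default_rng(1).random((30, 40))        # non-negative toy "spectrogram"
_, H_dense = sparse_nmf(V, r=5, lam=0.0)
_, H_sparse = sparse_nmf(V, r=5, lam=0.5)
```

Larger `lam` drives more activation entries toward zero, which is what lets each learned dictionary explain a source with only a few active atoms at a time.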
DEFF Research Database (Denmark)
Malaguerra, Flavio; Chambon, Julie Claire Claudia; Bjerg, Poul Løgstrup
2011-01-01
been modeled using modified Michaelis–Menten kinetics and has been implemented in the geochemical code PHREEQC. The model has been calibrated using a Shuffled Complex Evolution Metropolis algorithm to observations of chlorinated solvents, organic acids, and H2 concentrations in laboratory batch
DEFF Research Database (Denmark)
Møller, Jesper; Rasmussen, Jakob Gulddahl
points is such that the dependent cluster point is likely to occur closely to a previous cluster point. We demonstrate the flexibility of the model for producing point patterns with linear structures, and propose to use the model as the likelihood in a Bayesian setting when analyzing a spatial point...
DEFF Research Database (Denmark)
Malaguerra, Flavio; Chambon, Julie Claire Claudia; Albrechtsen, Hans-Jørgen;
2010-01-01
organic matter / electron donors, presence of specific biomass, etc. Here we develop a new fully-kinetic biogeochemical reactive model able to simulate chlorinated solvents degradation as well as production and consumption of molecular hydrogen. The model is validated using batch experiment data...... and global sensitivity analysis is performed....
Energy Technology Data Exchange (ETDEWEB)
Avelino, Andre F.T.; Hewings, Geoffrey J.D.; Guilhoto, Joaquim J.M. [Universidade de Sao Paulo (FEA/USP), SE (Brazil). Fac. de Administracao e Contabilidade
2010-07-01
The electrical sector is responsible for a considerable amount of greenhouse gas emissions worldwide, but it is also the sector on which modern society depends most for the maintenance of quality of life as well as the functioning of economic and social activities. Invariably, even CO2 emission-free power plants have some indirect environmental impacts due to the economic effects they produce during their life cycle (construction, O&M and decommissioning). Thus, sustainability issues should always be considered in energy planning, by evaluating the balance of positive/negative externalities on different areas of the country. This study aims to introduce a social-environmental economic model, based on a Regional Sequential Interindustry Model (SIM) integrated with geoprocessing data, in order to identify economic, pollution and public health impacts at the state level for energy planning analysis. The model is based on the Impact Pathway Approach methodology, using geoprocessing to locate social-environmental variables for dispersion and health evaluations. The final goal is to provide an auxiliary tool for policy makers to assess energy planning scenarios in Brazil. (author)
Model Predictive Engine Air-Ratio Control Using Online Sequential Relevance Vector Machine
Directory of Open Access Journals (Sweden)
Hang-cheong Wong
2012-01-01
Full Text Available Engine power, brake-specific fuel consumption, and emissions relate closely to air ratio (i.e., lambda) among all the engine variables. An accurate and adaptive model for lambda prediction is essential to effective long-term lambda control. This paper utilizes an emerging technique, the relevance vector machine (RVM), to build a reliable time-dependent lambda model which can be continually updated whenever a sample is added to, or removed from, the estimated lambda model. The paper also presents a new model predictive control (MPC) algorithm for air-ratio regulation based on RVM. This study shows that the accuracy, training, and updating time of the RVM model are superior to the latest modelling methods, such as the diagonal recurrent neural network (DRNN) and the decremental least-squares support vector machine (DLSSVM). Moreover, the control algorithm has been implemented and tested on a real car. Experimental results reveal that the control performance of the proposed relevance vector machine model predictive controller (RVMMPC) is also superior to DRNNMPC, support vector machine-based MPC, and the conventional proportional-integral (PI) controller in production cars. Therefore, the proposed RVMMPC is a promising scheme to replace conventional PI controllers for engine air-ratio control.
Online multi-modal robust non-negative dictionary learning for visual tracking.
Zhang, Xiang; Guan, Naiyang; Tao, Dacheng; Qiu, Xiaogang; Luo, Zhigang
2015-01-01
Dictionary learning is a method of acquiring a collection of atoms for subsequent signal representation. Due to its excellent representation ability, dictionary learning has been widely applied in multimedia and computer vision. However, conventional dictionary learning algorithms fail to deal with multi-modal datasets. In this paper, we propose an online multi-modal robust non-negative dictionary learning (OMRNDL) algorithm to overcome this deficiency. Notably, OMRNDL casts visual tracking as a dictionary learning problem under the particle filter framework and captures the intrinsic knowledge about the target from multiple visual modalities, e.g., pixel intensity and texture information. To this end, OMRNDL adaptively learns an individual dictionary, i.e., template, for each modality from available frames, and then represents new particles over all the learned dictionaries by minimizing the fitting loss of data based on M-estimation. The resultant representation coefficient can be viewed as the common semantic representation of particles across multiple modalities, and can be utilized to track the target. OMRNDL incrementally learns the dictionary and the coefficient of each particle by using multiplicative update rules to respectively guarantee their non-negativity constraints. Experimental results on a popular challenging video benchmark validate the effectiveness of OMRNDL for visual tracking in both quantity and quality.
Hyperspectral Image Super-Resolution via Non-Negative Structured Sparse Representation.
Dong, Weisheng; Fu, Fazuo; Shi, Guangming; Cao, Xun; Wu, Jinjian; Li, Guangyu
2016-05-01
Hyperspectral imaging has many applications from agriculture and astronomy to surveillance and mineralogy. However, it is often challenging to obtain high-resolution (HR) hyperspectral images using existing hyperspectral imaging techniques due to various hardware limitations. In this paper, we propose a new hyperspectral image super-resolution method from a low-resolution (LR) image and a HR reference image of the same scene. The estimation of the HR hyperspectral image is formulated as a joint estimation of the hyperspectral dictionary and the sparse codes based on the prior knowledge of the spatial-spectral sparsity of the hyperspectral image. The hyperspectral dictionary representing prototype reflectance spectra vectors of the scene is first learned from the input LR image. Specifically, an efficient non-negative dictionary learning algorithm using the block-coordinate descent optimization technique is proposed. Then, the sparse codes of the desired HR hyperspectral image with respect to the learned hyperspectral basis are estimated from the pair of LR and HR reference images. To improve the accuracy of non-negative sparse coding, a clustering-based structured sparse coding method is proposed to exploit the spatial correlation among the learned sparse codes. The experimental results on both public datasets and real LR hyperspectral images suggest that the proposed method substantially outperforms several existing HR hyperspectral image recovery techniques in the literature in terms of both objective quality metrics and computational efficiency.
Pavement crack detection combining non-negative feature with fast LoG in complex scene
Wang, Wanli; Zhang, Xiuhua; Hong, Hanyu
2015-12-01
Pavement crack detection is affected by much interference in realistic situations, such as shadows, road signs, oil stains, salt-and-pepper noise, etc. Due to these unfavorable factors, existing crack detection methods have difficulty distinguishing cracks from the background correctly. How to extract crack information effectively is the key problem for a road crack detection system. To solve this problem, a novel method for pavement crack detection that combines a non-negative feature with a fast LoG filter is proposed. The two key novelties and benefits of this new approach are that 1) image pixel gray-value compensation is used to acquire a uniform image, and 2) the non-negative feature is combined with the fast LoG filter to extract crack information. The image preprocessing results demonstrate that the method is indeed able to homogenize the crack image more accurately than existing methods. A large number of experimental results demonstrate that the proposed approach can detect crack regions more accurately than traditional methods.
Sharp maximal inequalities for the moments of martingales and non-negative submartingales
Osȩkowski, Adam
2012-01-01
In the paper we study sharp maximal inequalities for martingales and non-negative submartingales: if $f$, $g$ are martingales satisfying \\[|\\mathrm{d}g_n|\\leq|\\mathrm{d}f_n|,\\qquad n=0,1,2,...,\\] almost surely, then \\[\\Bigl\\|\\sup_{n\\geq0}|g_n|\\Bigr\\|_p\\leq p\\|f\\|_p,\\qquad p\\geq2,\\] and the inequality is sharp. Furthermore, if $\\alpha\\in[0,1]$, $f$ is a non-negative submartingale and $g$ satisfies \\[|\\mathrm{d}g_n|\\leq|\\mathrm{d}f_n|\\quad and\\quad |\\mathbb{E}(\\mathrm{d}g_{n+1}|\\mathcal {F}_n)|\\leq\\alpha\\mathbb{E}(\\mathrm{d}f_{n+1}|\\mathcal{F}_n),\\qquad n=0,1,2,...,\\] almost surely, then \\[\\Bigl\\|\\sup_{n\\geq0}|g_n|\\Bigr\\|_p\\leq(\\alpha+1)p\\|f\\|_p,\\qquad p\\geq2,\\] and the inequality is sharp. As an application, we establish related estimates for stochastic integrals and It\\^{o} processes. The inequalities strengthen the earlier classical results of Burkholder and Choi.
Aspect-Aided Dynamic Non-Negative Sparse Representation-Based Microwave Image Classification
Directory of Open Access Journals (Sweden)
Xinzheng Zhang
2016-09-01
Full Text Available Classification of target microwave images is an important application in many areas, such as security and surveillance. With respect to the task of microwave image classification, a recognition algorithm based on aspect-aided dynamic non-negative least squares (ADNNLS) sparse representation is proposed. Firstly, an aspect sector is determined, the center of which is the estimated aspect angle of the testing sample. The training samples in the aspect sector are divided into active atoms and inactive atoms by smooth self-representative learning. Secondly, for each testing sample, the corresponding active atoms are selected dynamically, thereby establishing a dynamic dictionary. Thirdly, the testing sample is represented with ℓ1-regularized non-negative sparse representation under the corresponding dynamic dictionary. Finally, the class label of the testing sample is identified by use of the minimum reconstruction error. Verification of the proposed algorithm was conducted using the Moving and Stationary Target Acquisition and Recognition (MSTAR) database, which was acquired by synthetic aperture radar. Experiment results validated that the proposed approach was able to capture the local aspect characteristics of microwave images effectively, thereby improving the classification performance.
Non-negative matrix factorization by maximizing correntropy for cancer clustering
Wang, Jim Jing-Yan
2013-03-24
Background: Non-negative matrix factorization (NMF) has been shown to be a powerful tool for clustering gene expression data, which are widely used to classify cancers. NMF aims to find two non-negative matrices whose product closely approximates the original matrix. Traditional NMF methods minimize either the l2 norm or the Kullback-Leibler distance between the product of the two matrices and the original matrix. Correntropy was recently shown to be an effective similarity measurement due to its stability to outliers or noise. Results: We propose a maximum correntropy criterion (MCC)-based NMF method (NMF-MCC) for gene expression data-based cancer clustering. Instead of minimizing the l2 norm or the Kullback-Leibler distance, NMF-MCC maximizes the correntropy between the product of the two matrices and the original matrix. The optimization problem can be solved by an expectation conditional maximization algorithm. Conclusions: Extensive experiments on six cancer benchmark sets demonstrate that the proposed method is significantly more accurate than the state-of-the-art methods in cancer clustering. 2013 Wang et al.; licensee BioMed Central Ltd.
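The appeal of correntropy as a similarity measure is easy to demonstrate: a single gross outlier dominates the l2 (mean-squared-error) criterion but barely moves the correntropy. A small numpy illustration (the kernel width and the data are arbitrary choices, not from the paper):

```python
import numpy as np

def correntropy(x, y, sigma=1.0):
    """Empirical correntropy: average Gaussian-kernel similarity of paired samples."""
    return float(np.mean(np.exp(-((x - y) ** 2) / (2 * sigma ** 2))))

target = np.zeros(100)
clean = np.zeros(100)
corrupted = clean.copy()
corrupted[0] = 50.0                                    # one gross outlier

mse_clean = float(np.mean((target - clean) ** 2))      # 0.0
mse_bad = float(np.mean((target - corrupted) ** 2))    # 25.0 -- dominated by the outlier
c_clean = correntropy(target, clean)                   # 1.0
c_bad = correntropy(target, corrupted)                 # ~0.99 -- barely affected
```

Each sample's contribution to correntropy is bounded by the Gaussian kernel, so one bad entry cannot dominate the objective the way it does under the l2 norm.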
Semi-Supervised Projective Non-Negative Matrix Factorization for Cancer Classification.
Directory of Open Access Journals (Sweden)
Xiang Zhang
Full Text Available Advances in DNA microarray technologies have made gene expression profiles a significant candidate in identifying different types of cancers. Traditional learning-based cancer identification methods utilize labeled samples to train a classifier, but they are inconvenient for practical application because labels are quite expensive in the clinical cancer research community. This paper proposes a semi-supervised projective non-negative matrix factorization method (Semi-PNMF) to learn an effective classifier from both labeled and unlabeled samples, thus boosting subsequent cancer classification performance. In particular, Semi-PNMF jointly learns a non-negative subspace from concatenated labeled and unlabeled samples and indicates classes by the positions of the maximum entries of their coefficients. Because Semi-PNMF incorporates statistical information from the large volume of unlabeled samples in the learned subspace, it can learn more representative subspaces and boost classification performance. We developed a multiplicative update rule (MUR) to optimize Semi-PNMF and proved its convergence. The experimental results of cancer classification for two multiclass cancer gene expression profile datasets show that Semi-PNMF outperforms the representative methods.
An Efficient Constraint Boundary Sampling Method for Sequential RBDO Using Kriging Surrogate Model
Energy Technology Data Exchange (ETDEWEB)
Kim, Jihoon; Jang, Junyong; Kim, Shinyu; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of); Cho, Sugil; Kim, Hyung Woo; Hong, Sup [Korea Research Institute of Ships and Ocean Engineering, Busan (Korea, Republic of)
2016-06-15
Reliability-based design optimization (RBDO) requires a high computational cost owing to its reliability analysis. A surrogate model is introduced to reduce the computational cost in RBDO. The accuracy of the reliability depends on the accuracy of the surrogate model of the constraint boundaries in surrogate-model-based RBDO. In earlier research, constraint boundary sampling (CBS) was proposed to accurately approximate the boundaries of constraints by locating sample points on them. However, because CBS uses sample points on all constraint boundaries, it creates superfluous sample points. In this paper, efficient constraint boundary sampling (ECBS) is proposed to enhance the efficiency of CBS. ECBS uses the statistical information of a kriging surrogate model to locate sample points on or near the RBDO solution. The efficiency of ECBS is verified by mathematical examples.
Ghahari, Alireza
2009-01-01
Multiview 3D face modeling has attracted increasing attention recently and has become one of the potential avenues in future video systems. We aim to make automatic feature extraction and natural 3D feature construction from 2D features detected on a pair of frontal- and profile-view face images more reliable and robust. We propose several heuristic algorithms to minimize possible errors introduced by the prevalent non-perfect orthogonal condition and non-coherent luminance. In our approach, we first extract the 2D features that are visible to both cameras in both views. Then, we estimate the coordinates of the features in the hidden profile view based on the visible features extracted in the two orthogonal views. Finally, based on the coordinates of the extracted features, we deform a 3D generic model to perform the desired 3D clone modeling. The present study demonstrates the suitability of the resulting facial models for practical applications such as face recognition and facial animation.
Haan, Stanley; Shomsky, Katherine; Danks, Nathan
2011-05-01
In 3D classical modeling of non-sequential double ionization, we find that plots of double ionization yield vs. laser intensity show strong dependence on an adjustable nuclear softening parameter. We explore why, and uncover chaotic behavior and strong sensitivity to the interaction with the nucleus in recollision excitation with subsequent photoionization. This work was supported by Calvin College ISRI and by NSF grant No. 0969984.
Theys, Céline; Dobigeon, Nicolas; Richard, Cédric; Tourneret, Jean-Yves; Ferrari, André
2013-01-01
This paper addresses the problem of minimizing a convex cost function under non-negativity and equality constraints, with the aim of solving the linear unmixing problem encountered in hyperspectral imagery. This problem can be formulated as a linear regression problem whose regression coefficients (abundances) satisfy sum-to-one and positivity constraints. A normalized scaled gradient iterative method (NSGM) is proposed for estimating the abundances of the linear mixing model. The positivity constraint is ensured by the Karush-Kuhn-Tucker conditions, whereas the sum-to-one constraint is fulfilled by introducing normalized variables in the algorithm. Convergence is ensured by a one-dimensional search of the step size. Note that the NSGM can be applied to any convex cost function with non-negativity and flux constraints. In order to compare the NSGM with the well-known fully constrained least squares (FCLS) algorithm, the latter is reformulated in terms of a penalized function, which reveals its suboptimality. Si...
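The constrained unmixing problem can be sketched with a simple scheme that, like the NSGM, keeps the abundances non-negative and renormalizes them to sum to one. The snippet below uses plain multiplicative gradient steps with renormalization; it is an illustrative stand-in, not the NSGM or FCLS algorithm itself, and the endmember matrix and dimensions are made-up toy data.

```python
import numpy as np

def unmix(y, E, n_iter=2000, eps=1e-12):
    """Estimate abundances a with a >= 0 and sum(a) = 1 so that y ≈ E @ a.
    A multiplicative gradient step keeps a non-negative; renormalization
    re-imposes the sum-to-one constraint after every step."""
    a = np.full(E.shape[1], 1.0 / E.shape[1])   # start at the simplex center
    for _ in range(n_iter):
        a *= (E.T @ y) / (E.T @ (E @ a) + eps)  # multiplicative least-squares step
        a /= a.sum()                            # project back onto sum-to-one
    return a

rng = np.random.default_rng(0)
E = rng.random((50, 3))                 # columns: toy endmember spectra
a_true = np.array([0.6, 0.3, 0.1])      # true abundances (sum to one)
y = E @ a_true                          # noise-free mixed pixel
a_hat = unmix(y, E)
```

Because the true abundances already satisfy both constraints, the iteration's fixed point coincides with the exact least-squares solution here.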
Sequential biases in accumulating evidence
Huggins, Richard; Dogo, Samson Henry
2015-01-01
Whilst it is common in clinical trials to use the results of tests at one phase to decide whether to continue to the next phase and to subsequently design the next phase, we show that this can lead to biased results in evidence synthesis. Two new kinds of bias associated with accumulating evidence, termed ‘sequential decision bias’ and ‘sequential design bias’, are identified. Both kinds of bias are the result of making decisions on the usefulness of a new study, or its design, based on the previous studies. Sequential decision bias is determined by the correlation between the value of the current estimated effect and the probability of conducting an additional study. Sequential design bias arises from using the estimated value instead of the clinically relevant value of an effect in sample size calculations. We considered both the fixed‐effect and the random‐effects models of meta‐analysis and demonstrated analytically and by simulations that in both settings the problems due to sequential biases are apparent. According to our simulations, the sequential biases increase with increased heterogeneity. Minimisation of sequential biases arises as a new and important research area necessary for successful evidence‐based approaches to the development of science. © 2015 The Authors. Research Synthesis Methods Published by John Wiley & Sons Ltd. PMID:26626562
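Sequential decision bias is easy to reproduce in a toy simulation: each study is individually unbiased, but deciding whether to run a second study based on the first study's estimate biases the final pooled estimate. In the hedged sketch below (the sample sizes and the decision rule are illustrative, not from the paper), discouraging first results stop accrual and stay unpooled, while encouraging ones get averaged down by a second study, so the mean estimate ends up below the true effect of zero.

```python
import numpy as np

rng = np.random.default_rng(7)

def pooled_estimate(true_effect=0.0, n=25, sd=1.0):
    """One 'meta-analysis': a second study is run only if the first
    study's estimate looks promising (positive)."""
    se = sd / np.sqrt(n)
    study1 = rng.normal(true_effect, se)
    if study1 <= 0:                      # discouraging -> stop, report study 1 alone
        return study1
    study2 = rng.normal(true_effect, se)
    return 0.5 * (study1 + study2)       # fixed-effect pool of both studies

estimates = np.array([pooled_estimate() for _ in range(20000)])
bias = float(estimates.mean())           # true effect is 0, so this mean is pure bias
```

The sign and size of the bias depend on the decision rule, matching the abstract's point that the bias is governed by the correlation between the current estimate and the probability of conducting an additional study.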
Li, Xiaowei; Mei, Qingqing; Dai, Xiaohu; Ding, Guoji
2017-03-01
Thermogravimetric analysis, a Gaussian-fit-peak model (GFPM), and a distributed activation energy model (DAEM) were used for the first time to explore the effect of anaerobic digestion on the sequential pyrolysis kinetics of four organic solid wastes (OSW). Results showed that the OSW weight loss mainly occurred in the second pyrolysis stage, which relates to organic matter decomposition. Compared with the raw substrate, the weight loss of the corresponding digestate was lower in the range of 180-550°C but higher in 550-900°C. GFPM analysis revealed that organic components volatilized at peak temperatures of 188-263, 373-401 and 420-462°C degraded faster during anaerobic digestion than those at 274-327°C. DAEM analysis showed that anaerobic digestion had differing effects on the activation energy of pyrolysis for the four OSW, possibly because of their different organic composition. Further investigation of the specific organic matter, i.e. protein-like and carbohydrate-like groups, is required to confirm this assumption. Copyright © 2016 Elsevier Ltd. All rights reserved.
Narain, D.; Beers, R.J. van; Smeets, J.B.J.
2016-01-01
Recent studies demonstrate that biases found in human behavior can be explained by rational agents that make incorrect generative-model assumptions. While predicting a sequence of uncorrelated events, humans are biased towards overestimating its serial correlation. We demonstrate how such biases may
Interpretation of heavy metal speciation in sequential extraction using geochemical modelling
Cui, Yanshan; Weng, Liping
2015-01-01
Environmental context Heavy metal pollution is a worldwide environmental concern, and the risk depends not only on their total concentration, but also on their chemical speciation. Based on state-of-the-art geochemical modelling, we pinpoint the heavy metal pools approached by the widely used seq
Directory of Open Access Journals (Sweden)
Paul C. Roberts
2005-10-01
Studies performed to identify early events of ovarian cancer and to establish molecular markers to support early detection and the development of chemopreventive regimens have been hindered by a lack of adequate cell models. Taking advantage of the spontaneous transformation of mouse ovarian surface epithelial (MOSE) cells in culture, we isolated and characterized distinct transitional stages of ovarian cancer as the cells progressed from a premalignant nontumorigenic phenotype to a highly aggressive malignant phenotype. Transitional stages were concurrent with progressive increases in proliferation, anchorage-independent growth capacity, in vivo tumor formation, and aneuploidy. During neoplastic progression, our ovarian cancer model underwent distinct remodeling of the actin cytoskeleton and focal adhesion complexes, concomitant with downregulation and/or aberrant subcellular localization of two tumor-suppressor proteins, E-cadherin and connexin-43. In addition, we demonstrate that epigenetic silencing of E-cadherin through promoter methylation is associated with neoplastic progression of our ovarian cancer model. These results establish critical interactions between cellular cytoskeletal remodeling and epigenetic silencing events in the progression of ovarian cancer. Thus, our MOSE model provides an excellent tool to identify both cellular and molecular changes in the early and late stages of ovarian cancer, to evaluate their regulation, and to determine their significance in an immunocompetent in vivo environment.
Corpus Callosum Analysis using MDL-based Sequential Models of Shape and Appearance
DEFF Research Database (Denmark)
Stegmann, Mikkel Bille; Davies, Rhodri H.; Ryberg, Charlotte
2004-01-01
This paper describes a method for automatically analysing and segmenting the corpus callosum from magnetic resonance images of the brain based on the widely used Active Appearance Models (AAMs) by Cootes et al. Extensions of the original method, which are designed to improve this specific case ar...
Directory of Open Access Journals (Sweden)
Brendan K. Podell
2017-02-01
Type 2 diabetes is a leading cause of morbidity and mortality among noncommunicable diseases, and additional animal models that more closely replicate the pathogenesis of human type 2 diabetes are needed. The goal of this study was to develop a model of type 2 diabetes in guinea pigs, in which diet-induced glucose intolerance precedes β-cell cytotoxicity, two processes that are crucial to the development of human type 2 diabetes. Guinea pigs developed impaired glucose tolerance after 8 weeks of feeding on a high-fat, high-carbohydrate diet, as determined by oral glucose challenge. Diet-induced glucose intolerance was accompanied by β-cell hyperplasia, compensatory hyperinsulinemia, and dyslipidemia with hepatocellular steatosis. Streptozotocin (STZ) treatment alone was ineffective at inducing diabetic hyperglycemia in guinea pigs, which failed to develop sustained glucose intolerance or fasting hyperglycemia and returned to euglycemia within 21 days after treatment. However, when high-fat, high-carbohydrate diet-fed guinea pigs were treated with STZ, glucose intolerance and fasting hyperglycemia persisted beyond 21 days post-STZ treatment. Guinea pigs with diet-induced glucose intolerance subsequently treated with STZ demonstrated an insulin-secretory capacity consistent with insulin-independent diabetes. This insulin-independent state was confirmed by the response to the oral antihyperglycemic drugs metformin and glipizide, which resolved glucose intolerance and extended survival compared with guinea pigs with uncontrolled diabetes. In this study, we have developed a model of sequential glucose intolerance and β-cell loss, through a high-fat, high-carbohydrate diet and extensive optimization of STZ treatment in the guinea pig, which closely resembles human type 2 diabetes. This model will prove useful in the study of insulin-independent diabetes pathogenesis with or without comorbidities, where the guinea pig serves as a relevant model species.
Cooperative Distributed Sequential Spectrum Sensing
S, Jithin K; Gopalarathnam, Raghav
2010-01-01
We consider cooperative spectrum sensing for cognitive radios. We develop an energy-efficient detector with low detection delay using sequential hypothesis testing. The Sequential Probability Ratio Test (SPRT) is used at both the local nodes and the fusion center. We also analyse the performance of this algorithm and compare it with simulations. Modelling uncertainties in the distribution parameters are considered. Slow fading, with and without perfect channel state information at the cognitive radios, is taken into account.
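Wald's SPRT, used here at the local nodes and the fusion center, can be sketched for a simple Bernoulli observation model. The hypothesized occupancy probabilities, error rates, and sample stream below are illustrative assumptions, not values from the paper.

```python
# Minimal sequential probability ratio test (SPRT) between two simple
# hypotheses about a Bernoulli "channel occupied" probability.
import math

def sprt(samples, p0=0.2, p1=0.8, alpha=0.05, beta=0.05):
    upper = math.log((1 - beta) / alpha)    # accept H1 when LLR exceeds this
    lower = math.log(beta / (1 - alpha))    # accept H0 when LLR falls below this
    llr = 0.0
    for n, x in enumerate(samples, 1):
        # log-likelihood ratio increment for one Bernoulli observation
        llr += math.log((p1 if x else 1 - p1) / (p0 if x else 1 - p0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(samples)

decision, n_used = sprt([1, 1, 0, 1, 1, 1, 1])  # decides H1 after 5 samples
```

The test stops as soon as the accumulated evidence crosses either threshold, which is what gives the SPRT its low average detection delay.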
Sequential Deposition of Copper on Solid Gold (111): A Statistical Model
1993-01-15
have to be accounted for. In the present communication we amplify a previously proposed model [21] to include the dynamics of the HSO4 adsorption...bisulfate interaction is both long ranged and repulsive. If we assume, as we have done in our previous work [21, 20], that the HSO4 sits in a tripod...position, that is, with its three oxygen atoms directly atop the Au atoms of the surface, then the adsorption of one HSO4 necessarily excludes nearest
Jozwik, Kamila M; Kriegeskorte, Nikolaus; Mur, Marieke
2016-03-01
Object similarity, in brain representations and conscious perception, must reflect a combination of the visual appearance of the objects on the one hand and the categories the objects belong to on the other. Indeed, visual object features and category membership have each been shown to contribute to the object representation in human inferior temporal (IT) cortex, as well as to object-similarity judgments. However, the explanatory power of features and categories has not been directly compared. Here, we investigate whether the IT object representation and similarity judgments are best explained by a categorical or a feature-based model. We use rich models (>100 dimensions) generated by human observers for a set of 96 real-world object images. The categorical model consists of a hierarchically nested set of category labels (such as "human", "mammal", and "animal"). The feature-based model includes both object parts (such as "eye", "tail", and "handle") and other descriptive features (such as "circular", "green", and "stubbly"). We used non-negative least squares to fit the models to the brain representations (estimated from functional magnetic resonance imaging data) and to similarity judgments. Model performance was estimated on held-out images not used in fitting. Both models explained significant variance in IT and the amounts explained were not significantly different. The combined model did not explain significant additional IT variance, suggesting that it is the shared model variance (features correlated with categories, categories correlated with features) that best explains IT. The similarity judgments were almost fully explained by the categorical model, which explained significantly more variance than the feature-based model. The combined model did not explain significant additional variance in the similarity judgments. Our findings suggest that IT uses features that help to distinguish categories as stepping stones toward a semantic representation
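A minimal sketch of the non-negative least-squares fitting step used above, here via projected gradient descent on a made-up design matrix and target (a real analysis would use a dedicated NNLS solver such as Lawson-Hanson):

```python
# Toy non-negative least squares: minimize ||X w - y||^2 subject to w >= 0,
# by gradient descent with projection onto the non-negative orthant.
def nnls_pg(X, y, lr=0.01, iters=5000):
    m, n = len(X), len(X[0])
    w = [0.0] * n
    for _ in range(iters):
        r = [sum(X[i][j] * w[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(X[i][j] * r[i] for i in range(m)) for j in range(n)]  # gradient X^T r
        w = [max(0.0, w[j] - lr * g[j]) for j in range(n)]  # project onto w >= 0
    return w

X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
y = [1.0, -0.5, 1.0]           # the second target would push w[1] negative
w = nnls_pg(X, y)              # constrained fit clamps w[1] at zero
```

Clamping negative coefficients at zero is exactly why NNLS weights can be read as additive, non-cancelling contributions of model dimensions.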
GA-BASED MAXIMUM POWER DISSIPATION ESTIMATION OF VLSI SEQUENTIAL CIRCUITS OF ARBITRARY DELAY MODELS
Institute of Scientific and Technical Information of China (English)
Lu Junming; Lin Zhenghui
2002-01-01
In this paper, the glitching activity and process variations in the maximum power dissipation estimation of CMOS circuits are introduced. Given a circuit and the gate library,a new Genetic Algorithm (GA)-based technique is developed to determine the maximum power dissipation from a statistical point of view. The simulation on ISCAS-89 benchmarks shows that the ratio of the maximum power dissipation with glitching activity over the maximum power under zero-delay model ranges from 1.18 to 4.02. Compared with the traditional Monte Carlo-based technique, the new approach presented in this paper is more effective.
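A generic GA loop of the kind described can be sketched as follows. The fitness function is a toy surrogate that rewards glitch-like transitions in an input bit vector; it stands in for the paper's delay-model power simulation, and all parameters are invented.

```python
# Sketch of a GA searching binary input vectors that maximize a surrogate
# "switching activity" score (a stand-in for simulated power dissipation).
import random

def fitness(bits):
    # toy score: count transitions between adjacent bits
    return sum(b1 != b2 for b1, b2 in zip(bits, bits[1:]))

def ga_max(n_bits=16, pop_size=20, gens=30, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best0 = max(fitness(ind) for ind in pop)     # best before evolution
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]             # truncation selection (elitist)
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_bits)       # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n_bits)] ^= 1    # single-bit mutation
            children.append(child)
        pop = elite + children
    return best0, max(fitness(ind) for ind in pop)

best_initial, best_final = ga_max()  # elitism guarantees no regression
```

Because the elite half is carried over unchanged each generation, the best score is non-decreasing, mirroring how GA-based maximum-power estimation only tightens its lower bound over time.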
Detecting cells using non-negative matrix factorization on calcium imaging data.
Maruyama, Ryuichi; Maeda, Kazuma; Moroda, Hajime; Kato, Ichiro; Inoue, Masashi; Miyakawa, Hiroyoshi; Aonishi, Toru
2014-07-01
We propose a cell detection algorithm using non-negative matrix factorization (NMF) on Ca2+ imaging data. To apply NMF to Ca2+ imaging data, we use the bleaching line of the background fluorescence intensity as an a priori background constraint to make the NMF uniquely dissociate the background component from the image data. This constraint helps us to incorporate the effect of dye bleaching and reduce the non-uniqueness of the solution. We demonstrate that in the case of noisy data, the NMF algorithm can detect cells more accurately than Mukamel's independent component analysis algorithm, a state-of-the-art method. We then apply the NMF algorithm to Ca2+ imaging data recording the local activities of subcellular structures of multiple cells over a wide area. We show that our method can decompose rapid transient components corresponding to the somas and dendrites of many neurons, and furthermore, that it can decompose slow transient components probably corresponding to glial cells.
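A generic sketch of the multiplicative-update NMF at the core of such methods, run on a synthetic rank-2 matrix. The paper's a priori background (bleaching) constraint is not implemented here; this is plain Lee-Seung NMF, and the data matrix is made up.

```python
# Lee-Seung multiplicative-update NMF: factor V ~ W H with W, H >= 0.
# Returns the squared reconstruction error before and after the updates.
import random

def nmf_error(V, rank=2, iters=300, seed=0):
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(rank)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(rank)]

    def mat(A, B):  # plain matrix product
        return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
                 for j in range(len(B[0]))] for i in range(len(A))]

    def T(A):       # transpose
        return [list(r) for r in zip(*A)]

    def err():
        WH = mat(W, H)
        return sum((V[i][j] - WH[i][j]) ** 2 for i in range(m) for j in range(n))

    e0 = err()
    for _ in range(iters):
        num, den = mat(T(W), V), mat(T(W), mat(W, H))
        H = [[H[i][j] * num[i][j] / (den[i][j] + 1e-12) for j in range(n)]
             for i in range(rank)]
        num, den = mat(V, T(H)), mat(mat(W, H), T(H))
        W = [[W[i][j] * num[i][j] / (den[i][j] + 1e-12) for j in range(rank)]
             for i in range(m)]
    return e0, err()

V = [[1, 0, 2, 0], [0, 1, 0, 2], [2, 0, 4, 0], [0, 2, 0, 4]]  # exactly rank 2
e0, e1 = nmf_error(V)  # error is non-increasing under these updates
```

The multiplicative form keeps every entry of W and H non-negative by construction, which is what makes the factors interpretable as cell shapes and time courses.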
Facial Expression Recognition via Non-Negative Least-Squares Sparse Coding
Directory of Open Access Journals (Sweden)
Ying Chen
2014-05-01
Sparse coding is an active research subject in signal processing, computer vision, and pattern recognition. A novel method of facial expression recognition via non-negative least squares (NNLS) sparse coding is presented in this paper. The NNLS sparse coding is used to form a facial expression classifier. To evaluate the performance of the presented method, local binary patterns (LBP) and raw pixels are extracted for facial feature representation. Facial expression recognition experiments are conducted on the Japanese Female Facial Expression (JAFFE) database. Compared with other widely used methods such as linear support vector machines (SVM), the sparse representation-based classifier (SRC), the nearest subspace classifier (NSC), K-nearest neighbors (KNN) and radial basis function neural networks (RBFNN), the experimental results indicate that the presented NNLS method performs better on facial expression recognition tasks.
Infinity Behavior of Bounded Subharmonic Functions on Ricci Non-negative Manifolds
Institute of Scientific and Technical Information of China (English)
Bao Qiang WU
2004-01-01
In this paper, we study the infinity behavior of bounded subharmonic functions on a Ricci non-negative Riemannian manifold M. We first show that lim_{r→∞} (r²/V(r)) ∫_{B(r)} Δh dv = 0 if h is a bounded subharmonic function. If we further assume that the Laplacian decays pointwise faster than quadratically, we show that h approaches its supremum pointwise at infinity, under certain auxiliary conditions on the volume growth of M. In particular, our result applies to the case when the Riemannian manifold has maximum volume growth. We also derive a representation formula, from which one can easily derive Yau's Liouville theorem on bounded harmonic functions.
Categorical dimensions of human odor descriptor space revealed by non-negative matrix factorization
Energy Technology Data Exchange (ETDEWEB)
Chennubhotla, Chakra [University of Pittsburgh School of Medicine, Pittsburgh PA]; Castro, Jason [Bates College]
2013-01-01
In contrast to most other sensory modalities, the basic perceptual dimensions of olfaction remain unclear. Here, we use non-negative matrix factorization (NMF) - a dimensionality reduction technique - to uncover structure in a panel of odor profiles, with each odor defined as a point in multi-dimensional descriptor space. The properties of NMF are favorable for the analysis of such lexical and perceptual data, and lead to a high-dimensional account of odor space. We further provide evidence that odor dimensions apply categorically. That is, odor space is not occupied homogenously, but rather in a discrete and intrinsically clustered manner. We discuss the potential implications of these results for the neural coding of odors, as well as for developing classifiers on larger datasets that may be useful for predicting perceptual qualities from chemical structures.
Park, Sang Ha; Lee, Seokjin; Sung, Koeng-Mo
Non-negative matrix factorization (NMF) is widely used for monaural musical sound source separation because of its efficiency and good performance. However, an additional clustering process is required because the musical sound mixture is separated into more signals than the number of musical tracks during NMF separation. In the conventional method, manual clustering or training-based clustering is performed with an additional learning process. Recently, a clustering algorithm based on the mel-frequency cepstrum coefficient (MFCC) was proposed for unsupervised clustering. However, MFCC clustering supplies limited information for clustering. In this paper, we propose various timbre features for unsupervised clustering and a clustering algorithm with these features. Simulation experiments are carried out using various musical sound mixtures. The results indicate that the proposed method improves clustering performance, as compared to conventional MFCC-based clustering.
Minami, Shintaro; Sawada, Kengo; Chikenji, George
2013-01-18
Protein pairs that have the same secondary structure packing arrangement but different topologies have attracted much attention in terms of both the evolution and the physical chemistry of protein structures. Further investigation of such protein relationships would give us a hint as to how proteins can change their fold in the course of evolution, as well as an insight into the physico-chemical properties of secondary structure packing. For this purpose, highly accurate sequence-order-independent structure comparison methods are needed. We have developed a novel protein structure alignment algorithm, MICAN (a structure alignment algorithm that can handle Multiple-chain complexes, Inverse direction of secondary structures, Cα only models, Alternative alignments, and Non-sequential alignments). The algorithm was designed to identify the best structural alignment between protein pairs by disregarding the connectivity between secondary structure elements (SSEs). One of the key features of the algorithm is the use of a multiple vector representation for each SSE, which enables us to correctly treat the bent or twisted nature of long SSEs. We compared MICAN with 9 other publicly available structure alignment programs, using both reference-dependent and reference-independent evaluation methods on a variety of benchmark test sets which include both sequential and non-sequential alignments. We show that MICAN outperforms the other existing methods at reproducing reference alignments of non-sequential test sets. Further, although MICAN does not specialize in sequential structure alignment, it showed top-level performance on the sequential test sets. We also show that MICAN is the fastest non-sequential structure alignment program among all the programs we examined here. MICAN is the fastest and the most accurate of the non-sequential alignment programs we examined. These results suggest that MICAN is a highly effective tool for automatically detecting non
2013-01-01
Background Protein pairs that have the same secondary structure packing arrangement but different topologies have attracted much attention in terms of both the evolution and the physical chemistry of protein structures. Further investigation of such protein relationships would give us a hint as to how proteins can change their fold in the course of evolution, as well as an insight into the physico-chemical properties of secondary structure packing. For this purpose, highly accurate sequence-order-independent structure comparison methods are needed. Results We have developed a novel protein structure alignment algorithm, MICAN (a structure alignment algorithm that can handle Multiple-chain complexes, Inverse direction of secondary structures, Cα only models, Alternative alignments, and Non-sequential alignments). The algorithm was designed to identify the best structural alignment between protein pairs by disregarding the connectivity between secondary structure elements (SSEs). One of the key features of the algorithm is the use of a multiple vector representation for each SSE, which enables us to correctly treat the bent or twisted nature of long SSEs. We compared MICAN with 9 other publicly available structure alignment programs, using both reference-dependent and reference-independent evaluation methods on a variety of benchmark test sets which include both sequential and non-sequential alignments. We show that MICAN outperforms the other existing methods at reproducing reference alignments of non-sequential test sets. Further, although MICAN does not specialize in sequential structure alignment, it showed top-level performance on the sequential test sets. We also show that MICAN is the fastest non-sequential structure alignment program among all the programs we examined here. Conclusions MICAN is the fastest and the most accurate of the non-sequential alignment programs we examined. These results suggest that MICAN is a highly effective tool
Cavalli, Marco; Goldin, Beatrice; Comiti, Francesco; Brardinoni, Francesco; Marchi, Lorenzo
2017-08-01
Digital elevation models (DEMs) built from repeated topographic surveys permit producing DEMs of Difference (DoD), which enable assessment of elevation variations and estimation of volumetric changes through time. In the framework of sediment transport studies, DEM differencing enables a quantitative and spatially distributed representation of erosion and deposition within the analyzed time window, at both the channel-reach and the catchment scale. In this study, two high-resolution Digital Terrain Models (DTMs) derived from airborne LiDAR data (2 m resolution) acquired in 2005 and 2011 were used to characterize the topographic variations caused by sediment erosion, transport and deposition in two adjacent mountain basins (Gadria and Strimm, Vinschgau - Venosta valley, Eastern Alps, Italy). These catchments were chosen for their contrasting morphology and because they feature different types and intensities of sediment transfer processes. A method based on fuzzy logic, which takes into account spatially variable DTM uncertainty, was used to derive the DoD of the study area. Volumes of erosion and deposition calculated from the DoD were then compared with post-event field surveys to test the consistency of the two independent estimates. Results show an overall agreement between the estimates, with differences due to the intrinsic approximations of the two approaches. The consistency of the DoD with post-event estimates encourages the integration of these two methods, whose combined application may permit overcoming the intrinsic limitations of each estimation. The comparison between the 2005 and 2011 DTMs allowed us to investigate the relationships between topographic changes and geomorphometric parameters expressing the role of topography in sediment erosion and deposition (i.e., slope and contributing area) and describing the morphology influenced by debris flows and fluvial processes (i.e., curvature). Erosion and deposition relations in the slope-area space display substantial
Shahar, Ben; Doron, Guy; Szepsenwol, Ohad
2015-01-01
Previous research has shown a robust link between emotional abuse and neglect and social anxiety symptoms. However, the mechanisms through which these links operate are less clear. We hypothesized a model in which early experiences of abuse and neglect create aversive shame states, internalized into a stable shame-based cognitive-affective schema. Self-criticism is conceptualized as a safety strategy designed to conceal flaws and prevent further experiences of shame. However, self-criticism maintains negative self-perceptions and insecurity in social situations. To provide preliminary, cross-sectional support for this model, a nonclinical community sample of 219 adults from Israel (110 females, mean age = 38.7) completed measures of childhood trauma, shame-proneness, self-criticism and social anxiety symptoms. A sequential mediational model showed that emotional abuse, but not emotional neglect, predicted shame-proneness, which in turn predicted self-criticism, which in turn predicted social anxiety symptoms. These results provide initial evidence supporting the role of shame and self-criticism in the development and maintenance of social anxiety disorder. The clinical implications of these findings are discussed. Previous research has shown that histories of emotional abuse and emotional neglect predict social anxiety symptoms, but the mechanisms that underlie these associations are not clear. Using psycho-evolutionary and emotion-focused perspectives, the findings of the current study suggest that shame and self-criticism play an important role in social anxiety and may mediate the link between emotional abuse and symptoms. These findings also suggest that therapeutic interventions specifically targeting shame and self-criticism should be incorporated into treatments for social anxiety, especially with socially anxious patients with abuse histories. Copyright © 2014 John Wiley & Sons, Ltd.
Directory of Open Access Journals (Sweden)
Shikhar Gupta
2014-01-01
In this study, we employed an in silico methodology combining double pharmacophore-based screening, molecular docking, and ADME/T filtering to identify dual-binding-site acetylcholinesterase inhibitors that preferentially inhibit acetylcholinesterase while also inhibiting butyrylcholinesterase, to a lesser extent. 3D pharmacophore models of AChE and BuChE enzyme inhibitors were developed from xanthostigmine derivatives through HypoGen and validated using a test set and Fischer's randomization technique. The best acetylcholinesterase and butyrylcholinesterase inhibitor pharmacophore hypotheses, Hypo1_A and Hypo1_B, with high correlation coefficients of 0.96 and 0.94, respectively, were used as 3D queries for screening the ZINC database. The screened hits were then subjected to ADME/T and molecular docking studies to prioritise the compounds. Finally, 18 compounds were identified as potential leads against the AChE enzyme, showing good predicted activities and promising ADME/T properties.
King, Darlene R; Li, Weizhi; Squiers, John J; Mohan, Rachit; Sellke, Eric; Mo, Weirong; Zhang, Xu; Fan, Wensheng; DiMaio, J Michael; Thatcher, Jeffrey E
2015-11-01
Multispectral imaging (MSI) is an optical technique that measures specific wavelengths of light reflected from wound-site tissue to determine the severity of burn wounds. A rapid MSI device to measure burn depth and guide debridement will improve clinical decision making and diagnoses. We used a porcine burn model to study partial thickness burns of varying severity. We made eight 4 × 4 cm burns on the dorsum of one minipig. Four burns were studied intact, and four burns underwent serial tangential excision. We imaged the burn sites with 400-1000 nm wavelengths. Histology confirmed that we achieved various partial thickness burns. Analysis of the spectral images shows that MSI detects significant variations in the spectral profiles of healthy tissue, superficial partial thickness burns, and deep partial thickness burns. The absorbance spectra at 515, 542, 629, and 669 nm were the most accurate in distinguishing superficial from deep partial thickness burns, while the absorbance spectrum at 972 nm was the most accurate in guiding the debridement process. The ability to distinguish between partial thickness burns of varying severity, to assess whether a patient requires surgery, could be improved with an MSI device in a clinical setting. Copyright © 2015 Elsevier Ltd and ISBI. All rights reserved.
CHIRP-Like Signals: Estimation, Detection and Processing. A Sequential Model-Based Approach
Energy Technology Data Exchange (ETDEWEB)
Candy, J. V. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)]
2016-08-04
Chirp signals have evolved primarily from radar/sonar signal processing applications, specifically attempting to estimate the location of a target in a surveillance/tracking volume. The chirp, which is essentially a sinusoidal signal whose instantaneous frequency changes at each time sample, has an interesting property in that its autocorrelation approximates an impulse function. It is well known that a matched-filter detector in radar/sonar estimates the target range by cross-correlating a replica of the transmitted chirp with the measurement data reflected from the target back to the radar/sonar receiver, yielding a maximum peak corresponding to the echo time and therefore enabling the desired range estimate. In this application, we perform the same operation as a radar or sonar system; that is, we transmit a “chirp-like pulse” into the target medium and attempt to first detect its presence and second estimate its location or range. Our problem is complicated by the presence of disturbance signals from surrounding broadcast stations as well as extraneous sources of interference in our frequency bands and, of course, the ever-present random noise from instrumentation. First, we discuss the chirp signal itself and illustrate its inherent properties, and then develop a model-based processing scheme enabling both the detection and estimation of the signal from noisy measurement data.
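The matched-filter range estimate described above can be sketched directly: correlate a replica of the transmitted chirp against a delayed echo and take the lag of the correlation peak as the echo time. All waveform parameters below are illustrative, not the report's values.

```python
# Matched-filter delay (range) estimation with a linear-FM chirp.
import math

def chirp(n, f0=0.02, f1=0.20):
    # instantaneous frequency sweeps f0 -> f1 (cycles/sample)
    return [math.cos(2 * math.pi * (f0 * t + (f1 - f0) * t * t / (2 * n)))
            for t in range(n)]

def matched_filter_delay(rx, replica):
    best_lag, best_val = 0, float("-inf")
    for lag in range(len(rx) - len(replica) + 1):
        # cross-correlation at this lag
        val = sum(rx[lag + i] * replica[i] for i in range(len(replica)))
        if val > best_val:
            best_lag, best_val = lag, val
    return best_lag

tx = chirp(64)
delay = 100
rx = [0.0] * delay + tx + [0.0] * 50   # noise-free echo at a known offset
est_delay = matched_filter_delay(rx, tx)  # recovers the 100-sample delay
```

The impulse-like autocorrelation of the chirp is what makes the correlation peak sharp and the delay estimate unambiguous.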
Behavioral Modeling of WSN MAC Layer Security Attacks: A Sequential UML Approach
DEFF Research Database (Denmark)
Pawar, Pranav M.; Nielsen, Rasmus Hjorth; Prasad, Neeli R.
2012-01-01
Wireless sensor networks (WSNs) are growing enormously and becoming increasingly attractive for a variety of application areas such as tele-health monitoring, industry monitoring, home automation and many more. The primary weakness shared by all wireless applications and technologies is the vulnerability to security attacks/threats. The performance and behavior of a WSN are vastly affected by such attacks. In order to better address the vulnerabilities of WSNs in terms of security, it is important to understand the behavior of the attacks. This paper addresses the behavioral modeling of medium access control (MAC) security attacks in WSNs. The MAC layer is responsible for the energy consumption, delay and channel utilization of the network, and attacks on this layer can introduce significant degradation of individual sensor nodes due to energy drain, and of performance due to delays...
Ju, Bin; Qian, Yuntao; Ye, Minchao; Ni, Rong; Zhu, Chenxi
2015-01-01
Predicting what items will be selected by a target user in the future is an important function for recommendation systems. Matrix factorization techniques have been shown to achieve good performance on temporal rating-type data, but little is known about temporal item selection data. In this paper, we developed a unified model that combines Multi-task Non-negative Matrix Factorization and Linear Dynamical Systems to capture the evolution of user preferences. Specifically, user and item features are projected into latent factor space by factoring co-occurrence matrices into a common basis item-factor matrix and multiple factor-user matrices. Moreover, we represented both within and between relationships of multiple factor-user matrices using a state transition matrix to capture the changes in user preferences over time. The experiments show that our proposed algorithm outperforms the other algorithms on two real datasets, which were extracted from Netflix movies and Last.fm music. Furthermore, our model provides a novel dynamic topic model for tracking the evolution of the behavior of a user over time.
Directory of Open Access Journals (Sweden)
Craig Nicolson
2013-06-01
Livelihood systems that depend on mobile resources must constantly adapt to change. For people living in permanent settlements, environmental changes that affect the distribution of a migratory species may reduce the availability of a primary food source, with the potential to destabilize the regional social-ecological system. Food security for Arctic indigenous peoples harvesting barren ground caribou (Rangifer tarandus granti) depends on movement patterns of migratory herds. Quantitative assessments of physical, ecological, and social effects on caribou distribution have proven difficult because of the significant interannual variability in seasonal caribou movement patterns. We developed and evaluated a modeling approach for simulating the distribution of a migratory herd throughout its annual cycle over a multiyear period. Beginning with spatial and temporal scales developed in previous studies of the Porcupine Caribou Herd of Canada and Alaska, we used satellite collar locations to compute and analyze season-by-season probabilities of movement of animals between habitat zones under two alternative weather conditions for each season. We then built a set of transition matrices from these movement probabilities, and simulated the sequence of movements across the landscape as a Markov process driven by externally imposed seasonal weather states. Statistical tests showed that the predicted distributions of caribou were consistent with observed distributions, and significantly correlated with subsistence harvest levels for three user communities. Our approach could be applied to other caribou herds and could be adapted for simulating the distribution of other ungulates and species with similarly large interannual variability in the use of their range.
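The weather-driven Markov process described above can be sketched in a few lines. Everything here is illustrative: the three habitat zones, the two weather states, and the transition probabilities are hypothetical stand-ins for the herd-specific seasonal matrices estimated from satellite collar data.

```python
import numpy as np

# Hypothetical illustration: three habitat zones and two weather states,
# each weather state with its own row-stochastic transition matrix.
P = {
    "mild":   np.array([[0.7, 0.2, 0.1],
                        [0.3, 0.5, 0.2],
                        [0.1, 0.3, 0.6]]),
    "severe": np.array([[0.4, 0.4, 0.2],
                        [0.2, 0.3, 0.5],
                        [0.1, 0.2, 0.7]]),
}

def simulate_distribution(p0, weather_sequence):
    """Propagate a zone-occupancy distribution through a Markov chain whose
    transition matrix at each step is chosen by the externally imposed
    seasonal weather state."""
    p = np.asarray(p0, dtype=float)
    for w in weather_sequence:
        p = p @ P[w]          # one season of movement between zones
    return p

# Herd starts entirely in zone 0 and experiences four seasons of weather.
p_final = simulate_distribution([1.0, 0.0, 0.0], ["mild", "severe", "mild", "mild"])
```

Because each matrix is row-stochastic, the occupancy vector remains a valid probability distribution after any weather sequence.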
Directory of Open Access Journals (Sweden)
Matteo Pappalardo
The human histamine H4 receptor (hH4R), a member of the G-protein coupled receptor (GPCR) family, is an increasingly attractive drug target. It plays a key role in many cell pathways, and many hH4R ligands are studied for the treatment of several inflammatory, allergic and autoimmune disorders, as well as for analgesic activity. Due to the challenging difficulties in the experimental elucidation of the hH4R structure, virtual screening campaigns are normally run on homology-based models. However, a wealth of information about the chemical properties of GPCR ligands has also accumulated over the last few years, and an appropriate combination of this ligand-based knowledge with structure-based molecular modeling studies emerges as a promising strategy for computer-assisted drug design. Here, two chemoinformatics techniques, the Intelligent Learning Engine (ILE) and the Iterative Stochastic Elimination (ISE) approach, were used to index chemicals for their hH4R bioactivity. An application of the prediction model on an external test set composed of more than 160 hH4R antagonists picked from the chEMBL database gave an enrichment factor of 16.4. A virtual high-throughput screening on the ZINC database was carried out, picking ∼4000 chemicals highly indexed as H4R antagonist candidates. Next, a series of 3D models of hH4R were generated by molecular modeling and molecular dynamics simulations performed in fully atomistic lipid membranes. The efficacy of the hH4R 3D models in discriminating between actives and non-actives was checked, and the 3D model with the best performance was chosen for further docking studies performed on the focused library. The output of these docking studies was a consensus library of 11 highly scored active drug candidates. Our findings suggest that a sequential combination of ligand-based chemoinformatics approaches with structure-based ones has the potential to improve the success rate in discovering new biologically active GPCR drugs and
Guidi, Jenny; Tomba, Elena; Fava, Giovanni A
2016-02-01
A number of randomized controlled trials in major depressive disorder have employed a sequential model, which consists of the use of pharmacotherapy in the acute phase and of psychotherapy in its residual phase. The aim of this review was to provide an updated meta-analysis of the efficacy of this approach in reducing the risk of relapse in major depressive disorder and to place these findings in the larger context of treatment selection. Keyword searches were conducted in MEDLINE, EMBASE, PsycINFO, and Cochrane Library from inception of each database through October 2014. Randomized controlled trials examining the efficacy of the administration of psychotherapy after successful response to acute-phase pharmacotherapy in the treatment of adults with major depressive disorder were considered for inclusion in the meta-analysis. Thirteen high-quality studies with 728 patients in a sequential treatment arm and 682 in a control treatment arm were included. All studies involved cognitive-behavioral therapy (CBT). The pooled risk ratio for relapse/recurrence was 0.781 (95% confidence interval [CI]=0.671-0.909; number needed to treat=8), according to the random-effects model, suggesting a relative advantage in preventing relapse/recurrence compared with control conditions. A significant effect of CBT during continuation of antidepressant drugs compared with antidepressants alone or treatment as usual (risk ratio: 0.811; 95% CI=0.685-0.961; number needed to treat=10) was found. Patients randomly assigned to CBT who had antidepressants tapered and discontinued were significantly less likely to experience relapse/recurrence compared with those assigned to either clinical management or continuation of antidepressant medication (risk ratio: 0.674; 95% CI=0.482-0.943; number needed to treat=5). The sequential integration of CBT and pharmacotherapy is a viable strategy for preventing relapse in major depressive disorder. The current indications for the application of
View Dependent Sequential Point Trees
Institute of Scientific and Technical Information of China (English)
Wen-Cheng Wang; Feng Wei; En-Hua Wu
2006-01-01
Sequential point trees provide the state-of-the-art technique for rendering point models: hierarchical points are re-arranged into a sequential list ordered by geometric error, so that rendering can run on the GPU at high speed. This paper presents a view-dependent method that augments sequential point trees by embedding the hierarchical tree structure in the sequential list of hierarchical points. The method constructs two kinds of indices to facilitate rendering the points in an order that is mostly near-to-far and coarse-to-fine. As a result, invisible points can be culled view-dependently with high efficiency for hardware acceleration, while the advantages of sequential point trees are still fully retained. The new method can therefore run much faster than conventional sequential point trees, and the speedup is especially large when the objects possess complex occlusion relationships and are viewed closely, since invisible points then make up a high percentage of the points at finer levels.
Devarajan, Karthik; Cheung, Vincent C.K.
2017-01-01
Non-negative matrix factorization (NMF) by the multiplicative updates algorithm is a powerful machine learning method for decomposing a high-dimensional nonnegative matrix V into two nonnegative matrices, W and H, where V ≈ WH. It has been successfully applied in the analysis and interpretation of large-scale data arising in neuroscience, computational biology and natural language processing, among other areas. A distinctive feature of NMF is its nonnegativity constraints that allow only additive linear combinations of the data, thus enabling it to learn parts that have distinct physical representations in reality. In this paper, we describe an information-theoretic approach to NMF for signal-dependent noise based on the generalized inverse Gaussian model. Specifically, we propose three novel algorithms in this setting, each based on multiplicative updates, and prove monotonicity of the updates using the EM algorithm. In addition, we develop algorithm-specific measures to evaluate their goodness-of-fit on data. Our methods are demonstrated using experimental data from electromyography studies as well as simulated data in the extraction of muscle synergies, and compared with existing algorithms for signal-dependent noise. PMID:24684448
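For orientation, the classic Lee-Seung multiplicative updates for the Frobenius objective can be sketched as below; the paper's signal-dependent-noise algorithms replace these update rules with ones derived from the generalized inverse Gaussian model, which is not reproduced here.

```python
import numpy as np

def nmf_multiplicative(V, rank, n_iter=200, eps=1e-9, seed=0):
    """Baseline Lee-Seung multiplicative updates minimizing ||V - WH||_F^2.

    Because every update multiplies by a non-negative ratio, W and H stay
    non-negative throughout, which is the defining property of NMF.
    """
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + eps
    H = rng.random((rank, m)) + eps
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W
    return W, H

# The rank-3 reconstruction of a random non-negative matrix should fit
# far better than the trivial all-zeros approximation.
V = np.abs(np.random.default_rng(1).random((20, 15)))
W, H = nmf_multiplicative(V, rank=3)
err = np.linalg.norm(V - W @ H)
```

Each update is guaranteed not to increase the objective, which is the monotonicity property the paper proves for its generalized updates via the EM algorithm.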
2005-07-01
a constant factor of K + 2. (To see this, note sequential stacking requires training K+2 classifiers: the classifiers f1, ..., fK used in cross...on the non-sequential learners (ME and VP) but improves performance of the sequential learners (CRFs and VP-HMMs) less consistently. This pattern
Institute of Scientific and Technical Information of China (English)
Xiu-rui GENG; Lu-yan JI; Kang SUN
2016-01-01
Non-negative matrix factorization (NMF) has been widely used in mixture analysis for hyperspectral remote sensing. When used for spectral unmixing analysis, however, it has two main shortcomings: (1) since the dimensionality of hyperspectral data is usually very large, NMF tends to suffer from large computational complexity for the popular multiplicative iteration rule; (2) NMF is sensitive to noise (outliers), and thus corrupted data will make the results of NMF meaningless. Although principal component analysis (PCA) can be used to mitigate these two problems, the transformed data will contain negative numbers, hindering the direct use of the multiplicative iteration rule of NMF. In this paper, we analyze the impact of PCA on NMF, and find that multiplicative NMF can also be applicable to data after principal component transformation. Based on this conclusion, we present a method to perform NMF in the principal component space, named 'principal component NMF' (PCNMF). Experimental results show that PCNMF is both accurate and time-saving.
Xi, Jianing; Li, Ao
2016-01-01
Recurrent copy number aberrations (RCNAs) in multiple cancer samples are strongly associated with tumorigenesis, and RCNA discovery is helpful to cancer research and treatment. Despite the emergence of numerous RCNA discovery methods, most of them are unable to detect RCNAs in complex patterns that are influenced by complicating factors, including aberration in partial samples, co-existence of gains and losses, and normal-like tumor samples. Here, we propose a novel computational method, called non-negative sparse singular value decomposition (NN-SSVD), to address the RCNA discovery problem in complex patterns. In NN-SSVD, the measurement of RCNA is based on the aberration frequency in a part of the samples rather than all samples, which can circumvent the complexity of different RCNA patterns. We evaluate NN-SSVD on a synthetic dataset by comparing detection scores and Receiver Operating Characteristic curves, and the results show that NN-SSVD outperforms existing methods in RCNA discovery and demonstrates more robustness to RCNA complicating factors. Applying our approach to a breast cancer dataset, we successfully identify a number of genomic regions that are strongly correlated with previous studies and that harbor a number of known breast cancer associated genes.
Moschidis, Georgios
2016-01-01
The wave equation $\square_{g_{M,a}}\psi=0$ on subextremal Kerr spacetimes $(\mathcal{M}_{M,a},g_{M,a})$, $0<|a|
Detecting heterogeneity in single-cell RNA-Seq data by non-negative matrix factorization
Zhu, Xun; Ching, Travers; Pan, Xinghua; Weissman, Sherman M.
2017-01-01
Single-cell RNA-Sequencing (scRNA-Seq) is a fast-evolving technology that enables the understanding of biological processes at an unprecedentedly high resolution. However, well-suited bioinformatics tools to analyze the data generated from this new technology are still lacking. Here we investigate the performance of the non-negative matrix factorization (NMF) method in analyzing a wide variety of scRNA-Seq datasets, ranging from mouse hematopoietic stem cells to human glioblastoma data. In comparison to other unsupervised clustering methods, including K-means and hierarchical clustering, NMF has higher accuracy in separating similar groups in various datasets. We ranked genes by their importance scores (D-scores) in separating these groups, and discovered that NMF uniquely identifies genes expressed at intermediate levels as top-ranked genes. Finally, we show that in conjunction with the modularity detection method FEM, NMF reveals meaningful protein-protein interaction modules. In summary, we propose that NMF is a desirable method to analyze heterogeneous single-cell RNA-Seq data. The NMF-based subpopulation detection package is available at: https://github.com/lanagarmire/NMFEM. PMID:28133571
A perturbation-based framework for link prediction via non-negative matrix factorization
Wang, Wenjun; Cai, Fei; Jiao, Pengfei; Pan, Lin
2016-12-01
Many link prediction methods have been developed to infer unobserved links or predict latent links based on the observed network structure. However, due to network noise and irregular links in real networks, the performance of existing methods is usually limited. Considering random noise and irregular links, we propose a perturbation-based framework based on non-negative matrix factorization to predict missing links. We first automatically determine the suitable number of latent features, which is the inner rank in NMF, by the Colibri method. Then, we perturb the training set of a network with perturbation sets many times and obtain a series of perturbed networks. Finally, the common basis matrix and coefficient matrices of these perturbed networks are obtained via NMF and form the similarity matrix of the network for link prediction. Experimental results on fifteen real networks show that the proposed framework has competitive performance compared with state-of-the-art link prediction methods. Correlations between the performance of different methods and the statistics of networks show that methods with good precision have similar consistency.
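The perturbation idea can be sketched as follows: repeatedly drop a random fraction of observed edges, factorize each perturbed adjacency matrix, and average the reconstructions into a similarity matrix. This is a simplified stand-in only: the Colibri-based rank selection and the common basis matrix of the paper are not reproduced, and the toy graph is fabricated.

```python
import numpy as np
from sklearn.decomposition import NMF

def perturbed_similarity(A, rank=2, n_perturb=10, frac=0.1, seed=0):
    """Average NMF reconstructions over several edge-perturbed copies of A."""
    rng = np.random.default_rng(seed)
    S = np.zeros_like(A, dtype=float)
    edges = np.argwhere(np.triu(A, 1) > 0)
    for _ in range(n_perturb):
        Ap = A.astype(float).copy()
        drop = edges[rng.random(len(edges)) < frac]   # randomly hide some edges
        Ap[drop[:, 0], drop[:, 1]] = 0.0
        Ap[drop[:, 1], drop[:, 0]] = 0.0
        model = NMF(n_components=rank, init="nndsvda", random_state=0, max_iter=400)
        W = model.fit_transform(Ap)
        S += W @ model.components_                    # reconstruction as scores
    return S / n_perturb

# Two 4-node cliques joined by one bridge edge; the hidden intra-clique
# link (0, 2) should score higher than a cross-clique non-edge like (0, 5).
A = np.zeros((8, 8))
for c in ([0, 1, 2, 3], [4, 5, 6, 7]):
    for i in c:
        for j in c:
            if i != j:
                A[i, j] = 1
A[0, 2] = A[2, 0] = 0      # hide one intra-clique link
A[3, 4] = A[4, 3] = 1      # bridge
S = perturbed_similarity(A)
```

Averaging over perturbations is what gives the framework its robustness to noisy and irregular links: edges supported by community structure survive the averaging, while spurious ones do not.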
Rak, A. J.; McQuarrie, N.
2014-12-01
Applying isostasy and erosion to sequentially deformed balanced cross sections links the growth of hinterland structures to the developing foreland basins (FB) adjacent to fold-thrust belts (FTB), adding geologic constraints to modeled exhumation pathways. We sequentially deform the Rio Beni cross section in northern Bolivia (McQuarrie et al., 2008) with the kinematic modeling software Move. In our model, topography evolves and basins develop at each model step as deformation, erosion, and isostasy are applied; both are a direct function of the geometry and kinematics of the cross section. The model is constrained by the depth of the foreland and hinterland basins, the geology present at the surface, the depth and angle of the decollement, and the shape of the modern observed topography. Topography develops as thrusting occurs and loads the crust, producing a flexural wave and creating accommodation space in adjacent basins. Erosion of material above a newly generated topographic profile unloads the section while basin space is filled. Once the model sufficiently duplicates the geologic constraints, a 0.5 km x 0.5 km grid of unique points is deformed with the model and used to determine displacement vectors for each 10 km shortening step. These displacement vectors, in conjunction with a prescribed time interval for each step, determine a velocity field that can be used in a modified version of the advection-diffusion modeling software Pecube. Cooling ages predicted using this method are based on deformation rates, geometry, topography, and thermal parameters, and offer insight into possible rates of deformation, erosion, and deposition throughout FTB and FB development. Incorporating erosion, deposition, and isostasy in sequentially deformed balanced cross sections highlights the spatiotemporal aspects of sedimentary wedge propagation, identifies necessary external negative buoyancy effects, and provides additional geologic constraints to modeled exhumation pathways.
Energy Technology Data Exchange (ETDEWEB)
Sigeti, David E. [Los Alamos National Laboratory; Pelak, Robert A. [Los Alamos National Laboratory
2012-09-11
We present a Bayesian statistical methodology for identifying improvement in predictive simulations, including an analysis of the number of (presumably expensive) simulations that will need to be made in order to establish with a given level of confidence that an improvement has been observed. Our analysis assumes the ability to predict (or postdict) the same experiments with legacy and new simulation codes and uses a simple binomial model for the probability, θ, that, in an experiment chosen at random, the new code will provide a better prediction than the old. This model makes it possible to do statistical analysis with an absolute minimum of assumptions about the statistics of the quantities involved, at the price of discarding some potentially important information in the data. In particular, the analysis depends only on whether or not the new code predicts better than the old in any given experiment, and not on the magnitude of the improvement. We show how the posterior distribution for θ may be used, in a kind of Bayesian hypothesis testing, both to decide if an improvement has been observed and to quantify our confidence in that decision. We quantify the predictive probability that should be assigned, prior to taking any data, to the possibility of achieving a given level of confidence, as a function of sample size. We show how this predictive probability depends on the true value of θ and, in particular, how there will always be a region around θ = 1/2 where it is highly improbable that we will be able to identify an improvement in predictive capability, although the width of this region will shrink to zero as the sample size goes to infinity. We show how the posterior standard deviation may be used, as a kind of 'plan B metric' in the case that the analysis shows that θ is close to 1/2, and argue that such a plan B should generally be part of hypothesis testing. All the analysis presented in the paper is done with a
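Under the binomial model above, a conjugate Beta prior makes the posterior for θ available in closed form. The sketch below assumes a uniform Beta(1, 1) prior and made-up win counts; it shows how the posterior probability that θ > 1/2 quantifies confidence that an improvement has been observed.

```python
from scipy.stats import beta

def confidence_of_improvement(successes, trials, a0=1.0, b0=1.0):
    """Posterior probability that theta > 1/2 under a Beta(a0, b0) prior,
    where theta is the chance that the new code beats the old in a
    randomly chosen experiment (the binomial model of the paper)."""
    posterior = beta(a0 + successes, b0 + trials - successes)
    return posterior.sf(0.5)   # P(theta > 1/2 | data)

# 14 wins out of 20 comparisons already favors "improved"...
p_improved = confidence_of_improvement(14, 20)
# ...while 10/20 sits exactly at the uninformative fifty-fifty point.
p_even = confidence_of_improvement(10, 20)
```

Near θ = 1/2 this posterior probability hovers around 0.5 regardless of sample size for a while, which is exactly the hard-to-decide region the abstract describes.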
Spegazzini, Nicolas; Siesler, Heinz W; Ozaki, Yukihiro
2012-10-02
A sequential identification approach by two-dimensional (2D) correlation analysis for the identification of a chemical reaction model, activation, and thermodynamic parameters is presented in this paper. The identification task is decomposed into a sequence of subproblems. The first step is the construction of a reaction model with the suggested information by model-free 2D correlation analysis using a novel technique called derivative double 2D correlation spectroscopy (DD2DCOS), which enables one to analyze intensities with nonlinear behavior and overlapped bands. The second step is a model-based 2D correlation analysis where the activation and thermodynamic parameters are estimated by an indirect implicit calibration or a calibration-free approach. In this way, a minimization process for the spectral information by sample-sample 2D correlation spectroscopy and kinetic hard modeling (using ordinary differential equations) of the chemical reaction model is carried out. The sequential identification by 2D correlation analysis is illustrated with reference to the isomeric structure of diphenylurethane synthesized from phenylisocyanate and phenol. The reaction was investigated by FT-IR spectroscopy. The activation and thermodynamic parameters of the isomeric structures of diphenylurethane linked through a hydrogen bonding equilibrium were studied by means of an integration of model-free and model-based 2D correlation analysis called a sequential identification approach. The study determined the enthalpy (ΔH = 15.25 kJ/mol) and entropy (TΔS = 13.20 kJ/mol) of C═O···H hydrogen bonding of diphenylurethane through direct calculation from the differences in the kinetic parameters (δΔ(‡)H, -TδΔ(‡)S) at equilibrium in the chemical reaction system.
Functional biogeography of ocean microbes revealed through non-negative matrix factorization.
Directory of Open Access Journals (Sweden)
Xingpeng Jiang
The direct "metagenomic" sequencing of genomic material from complex assemblages of bacteria, archaea, viruses and microeukaryotes has yielded new insights into the structure of microbial communities. For example, analysis of metagenomic data has revealed the existence of previously unknown microbial taxa whose spatial distributions are limited by environmental conditions, ecological competition, and dispersal mechanisms. However, differences in genotypes that might lead biologists to designate two microbes as taxonomically distinct need not necessarily imply differences in ecological function. Hence, there is a growing need for large-scale analysis of the distribution of microbial function across habitats. Here, we present a framework for investigating the biogeography of microbial function by analyzing the distribution of protein families inferred from environmental sequence data across a global collection of sites. We map over 6,000,000 protein sequences from unassembled reads from the Global Ocean Survey dataset to [Formula: see text] protein families, generating a protein family relative abundance matrix that describes the distribution of each protein family across sites. We then use non-negative matrix factorization (NMF) to approximate these protein family profiles as linear combinations of a small number of ecological components. Each component has a characteristic functional profile and site profile. Our approach identifies common functional signatures within several of the components. We use our method as a filter to estimate functional distance between sites, and find that an NMF-filtered measure of functional distance is more strongly correlated with environmental distance than a comparable PCA-filtered measure. We also find that functional distance is more strongly correlated with environmental distance than with geographic distance, in agreement with prior studies. We identify similar protein functions in several components and
Gene Ranking of RNA-Seq Data via Discriminant Non-Negative Matrix Factorization.
Jia, Zhilong; Zhang, Xiang; Guan, Naiyang; Bo, Xiaochen; Barnes, Michael R; Luo, Zhigang
2015-01-01
RNA-sequencing is rapidly becoming the method of choice for studying the full complexity of transcriptomes; however, with increasing dimensionality, accurate gene ranking is becoming increasingly challenging. This paper proposes an accurate and sensitive gene ranking method that implements discriminant non-negative matrix factorization (DNMF) for RNA-seq data. To the best of our knowledge, this is the first work to explore the utility of DNMF for gene ranking. When incorporating Fisher's discriminant criterion and setting the reduced dimension as two, DNMF learns two factors to approximate the original gene expression data, abstracting the up-regulated or down-regulated metagene by using the sample label information. The first factor denotes all the genes' weights of the two metagenes as the additive combination of all genes, while the second learned factor represents the expression values of the two metagenes. In the gene ranking stage, all the genes are ranked in descending order according to the differential values of the metagene weights. Leveraging the nature of NMF and Fisher's criterion, DNMF can robustly boost gene ranking performance. Area Under the Curve analysis of differential expression on two benchmarking tests of four RNA-seq data sets with similar phenotypes showed that our proposed DNMF-based gene ranking method outperforms other widely used methods. Moreover, Gene Set Enrichment Analysis also showed that DNMF outperforms the others. DNMF is also computationally efficient, substantially outperforming all other benchmarked methods. Consequently, we suggest DNMF is an effective method for the analysis of differential gene expression and gene ranking for RNA-seq data.
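The ranking step can be illustrated with plain rank-2 NMF in place of DNMF (the discriminant variant additionally folds Fisher's criterion and the sample labels into the updates, which is not reproduced here); the toy expression matrix and its single differential gene are fabricated.

```python
import numpy as np
from sklearn.decomposition import NMF

# Toy genes x samples count matrix: gene 0 is strongly up-regulated in the
# second half of the samples; all other genes are flat background.
rng = np.random.default_rng(2)
X = rng.poisson(5.0, size=(50, 12)).astype(float)
X[0, 6:] += 40.0                    # the truly differential gene

# Plain rank-2 NMF as a stand-in for DNMF: the first factor holds each
# gene's weights on the two metagenes.
model = NMF(n_components=2, init="nndsvda", random_state=0, max_iter=500)
W = model.fit_transform(X)          # genes x 2 metagene weights

# Rank genes in descending order of the differential metagene weights.
ranking = np.argsort(-np.abs(W[:, 0] - W[:, 1]))
```

In this sketch the label information is only implicit in the data; DNMF's contribution is to make the two metagenes explicitly discriminative between the phenotype groups.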
Finding Imaging Patterns of Structural Covariance via Non-Negative Matrix Factorization
Sotiras, Aristeidis; Resnick, Susan M.; Davatzikos, Christos
2015-01-01
In this paper, we investigate the use of Non-Negative Matrix Factorization (NNMF) for the analysis of structural neuroimaging data. The goal is to identify the brain regions that co-vary across individuals in a consistent way, hence potentially being part of underlying brain networks or otherwise influenced by underlying common mechanisms such as genetics and pathologies. NNMF offers a directly data-driven way of extracting relatively localized co-varying structural regions, thereby transcending limitations of Principal Component Analysis (PCA), Independent Component Analysis (ICA) and other related methods that tend to produce dispersed components of positive and negative loadings. In particular, leveraging upon the well known ability of NNMF to produce parts-based representations of image data, we derive decompositions that partition the brain into regions that vary in consistent ways across individuals. Importantly, these decompositions achieve dimensionality reduction via highly interpretable ways and generalize well to new data as shown via split-sample experiments. We empirically validate NNMF in two data sets: i) a Diffusion Tensor (DT) mouse brain development study, and ii) a structural Magnetic Resonance (sMR) study of human brain aging. We demonstrate the ability of NNMF to produce sparse parts-based representations of the data at various resolutions. These representations seem to follow what we know about the underlying functional organization of the brain and also capture some pathological processes. Moreover, we show that these low dimensional representations favorably compare to descriptions obtained with more commonly used matrix factorization methods like PCA and ICA. PMID:25497684
Sequential Divestiture and Firm Asymmetry
Directory of Open Access Journals (Sweden)
Wen Zhou
2013-01-01
Simple Cournot models of divestiture tend to generate incentives to divest which are too strong, predicting that firms will break up into an infinite number of divisions resulting in perfect competition. This paper shows that if the order of divestitures is endogenized, firms will always choose sequential, and hence very limited, divestitures. Divestitures favor the larger firm and the follower in a sequential game. Divestitures in which the larger firm is the follower generate greater industry profit and social welfare, but a smaller consumer surplus.
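The excessive divestiture incentive in the simultaneous-move Cournot model can be checked directly from the standard linear-demand profit formula; the demand and cost parameters below are arbitrary illustrations.

```python
def division_profit(d_i, n_total, a=10.0, c=2.0, b=1.0):
    """Cournot profit of a firm owning d_i of n_total symmetric divisions
    under linear demand p = a - b*Q, where each division chooses its
    quantity independently: each earns (a-c)^2 / (b*(n_total+1)^2)."""
    per_division = (a - c) ** 2 / (b * (n_total + 1) ** 2)
    return d_i * per_division

def gains_from_one_more(d_i, n_total):
    """Does a firm profit from splitting off one more division,
    holding its rivals' division counts fixed?"""
    return division_profit(d_i + 1, n_total + 1) > division_profit(d_i, n_total)
```

A monopolist does not gain from splitting, but once rivals exist each firm gains from adding divisions at every symmetric configuration, which is the unraveling toward infinitely many divisions that endogenizing the order of divestitures shuts down.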
Goodman, Geoff; Chung, Hyewon; Fischel, Leah; Athey-Lloyd, Laura
2017-07-01
This study examined the sequential relations among three pertinent variables in child psychotherapy: therapeutic alliance (TA) (including ruptures and repairs), autism symptoms, and adherence to child-centered play therapy (CCPT) process. A 2-year CCPT of a 6-year-old Caucasian boy diagnosed with autism spectrum disorder was conducted weekly with two doctoral-student therapists, working consecutively for 1 year each, in a university-based community mental-health clinic. Sessions were video-recorded and coded using the Child Psychotherapy Process Q-Set (CPQ), a measure of the TA, and an autism symptom measure. Sequential relations among these variables were examined using simulation modeling analysis (SMA). In Therapist 1's treatment, unexpectedly, autism symptoms decreased three sessions after a rupture occurred in the therapeutic dyad. In Therapist 2's treatment, adherence to CCPT process increased 2 weeks after a repair occurred in the therapeutic dyad. The TA decreased 1 week after autism symptoms increased. Finally, adherence to CCPT process decreased 1 week after autism symptoms increased. The authors concluded that (1) sequential relations differ by therapist even though the child remains constant, (2) therapeutic ruptures can have an unexpected effect on autism symptoms, and (3) changes in autism symptoms can precede as well as follow changes in process variables.
Non-negative matrix factorization techniques advances in theory and applications
2016-01-01
This book collects new results, concepts and further developments of NMF. The open problems discussed include, e.g. in bioinformatics: NMF and its extensions applied to gene expression, sequence analysis, the functional characterization of genes, clustering and text mining etc. The research results previously scattered in different scientific journals and conference proceedings are methodically collected and presented in a unified form. While readers can read the book chapters sequentially, each chapter is also self-contained. This book can be a good reference work for researchers and engineers interested in NMF, and can also be used as a handbook for students and professionals seeking to gain a better understanding of the latest applications of NMF.
Sequential triangulation of orbital photography
Rajan, M.; Junkins, J. L.; Turner, J. D.
1979-01-01
The feasibility of structuring satellite photogrammetric triangulation as an iterative Extended Kalman estimation algorithm is demonstrated. Comparative numerical results for the sequential versus the batch estimation algorithm are presented. The difficulty of accurately modeling the attitude motion is overcome by utilizing the on-board angular rate measurements. Solutions of the differential equations and the evaluation of the state transition matrix are carried out numerically.
Attack Trees with Sequential Conjunction
Jhawar, Ravi; Kordy, Barbara; Mauw, Sjouke; Radomirović, Sasa; Trujillo-Rasua, Rolando
2015-01-01
We provide the first formal foundation of SAND attack trees which are a popular extension of the well-known attack trees. The SAND attack tree formalism increases the expressivity of attack trees by introducing the sequential conjunctive operator SAND. This operator enables the modeling of
A hybrid-optimization method for large-scale non-negative full regularization in image restoration
Guerrero, J.; Raydan, M.; Rojas, M.
2011-01-01
We describe a new hybrid-optimization method for solving the full-regularization problem of computing both the regularization parameter and the corresponding regularized solution in 1-norm and 2-norm Tikhonov regularization with additional non-negativity constraints. The approach combines the simu
Kabinejadian, Foad; Ghista, Dhanjoo N
2012-09-01
We have recently developed a novel design for coronary arterial bypass surgical grafting, consisting of coupled sequential side-to-side and end-to-side anastomoses. This design has been shown to have beneficial blood flow patterns and wall shear stress distributions which may improve the patency of the CABG, as compared to the conventional end-to-side anastomosis. In our preliminary computational simulation of blood flow in this coupled sequential anastomoses design, the graft and the artery were assumed to be rigid vessels and the blood was assumed to be a Newtonian fluid. Therefore, the present study has been carried out in order to (i) investigate the effects of wall compliance and non-Newtonian rheology on the local flow field and hemodynamic parameter distributions, and (ii) verify the advantages of the CABG coupled sequential anastomoses design over the conventional end-to-side configuration under more realistic biomechanical conditions. For this purpose, a two-way fluid-structure interaction analysis has been carried out. A finite volume method is applied to solve the three-dimensional, time-dependent, laminar flow of the incompressible, non-Newtonian fluid; the vessel wall is modeled as a linearly elastic, geometrically non-linear shell structure. In an iteratively coupled approach, the transient shell equations and the governing fluid equations are solved numerically. The simulation results indicate a diameter variation ratio of up to 4% and 5% in the graft and the coronary artery, respectively. The velocity patterns and qualitative distribution of wall shear stress parameters in the distensible model do not change significantly compared to the rigid-wall model, despite quite large side-wall deformations in the anastomotic regions. However, less flow separation and reversed flow is observed in the distensible models. The wall compliance reduces the time-averaged wall shear stress by up to 32% (on the heel of the conventional end-to-side model) and somewhat
NMF-mGPU: non-negative matrix factorization on multi-GPU systems.
Mejía-Roa, Edgardo; Tabas-Madrid, Daniel; Setoain, Javier; García, Carlos; Tirado, Francisco; Pascual-Montano, Alberto
2015-02-13
In the last few years, the Non-negative Matrix Factorization (NMF) technique has gained great interest among the Bioinformatics community, since it is able to extract interpretable parts from high-dimensional datasets. However, the computing time required to process large data matrices may become impractical, even for a parallel application running on a multiprocessor cluster. In this paper, we present NMF-mGPU, an efficient and easy-to-use implementation of the NMF algorithm that takes advantage of the high computing performance delivered by Graphics Processing Units (GPUs). Driven by the ever-growing demands of the video-game industry, graphics cards usually provided in PCs and laptops have evolved from simple graphics-drawing platforms into high-performance programmable systems that can be used as coprocessors for linear-algebra operations. However, these devices may have a limited amount of on-board memory, which is not considered by other NMF implementations on GPUs. NMF-mGPU is based on CUDA (Compute Unified Device Architecture), NVIDIA's framework for GPU computing. On devices with little available memory, large input matrices are transferred blockwise from the system's main memory to the GPU's memory and processed accordingly. In addition, NMF-mGPU has been explicitly optimized for the different CUDA architectures. Finally, platforms with multiple GPUs can be synchronized through MPI (Message Passing Interface). In a four-GPU system, this implementation is about 120 times faster than a single conventional processor, and more than four times faster than a single GPU device (i.e., a super-linear speedup). Applications of GPUs in Bioinformatics are getting more and more attention due to their outstanding performance when compared to traditional processors. In addition, their relatively low price represents a highly cost-effective alternative to conventional clusters. In life sciences, this results in an excellent opportunity to facilitate the
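As a reference point for what NMF-mGPU accelerates, a minimal CPU-side sketch of the classic Lee-Seung multiplicative-update NMF may be helpful (this is not the NMF-mGPU code itself; the matrix sizes and rank are illustrative):

```python
import numpy as np

def nmf(V, rank, iters=500, eps=1e-9, seed=0):
    """Lee-Seung multiplicative updates minimizing ||V - W @ H||_F^2."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank))
    H = rng.random((rank, m))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H; preserves non-negativity
        W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W; preserves non-negativity
    return W, H

# synthetic, exactly rank-5 non-negative data
rng = np.random.default_rng(1)
V = rng.random((50, 5)) @ rng.random((5, 40))
W, H = nmf(V, rank=5)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

The two matrix updates are exactly the dense linear-algebra kernels that map well onto GPUs; blockwise processing of `V` corresponds to NMF-mGPU's strategy for matrices larger than device memory.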
DEFF Research Database (Denmark)
Sørensen, Karen E.; Nielsen, Ole L.; Birck, Malene M.;
2012-01-01
The human sequential organ failure assessment (SOFA) scoring system is used worldwide in intensive care units for assessing the extent of organ dysfunction/failure in patients with severe sepsis. An increasing number of septic cases are caused by Gram-positive bacteria such as Staphylococcus aureus… …and analysed for SOFA parameters. Dysfunction/failure was observed in the respiratory, haemostatic and hepatic systems of all infected animals, together with initial cardiovascular dysfunction. The pulmonary system was the first to fail clinically, which corresponds with similar human findings, whereas…
DEFF Research Database (Denmark)
Cavaliere, Giuseppe; Angelis, Luca De; Rahbek, Anders
2015-01-01
…work done for the latter in Cavaliere, Rahbek and Taylor [Econometric Reviews (2014) forthcoming], we establish the asymptotic properties of the procedures based on information criteria in the presence of heteroskedasticity (conditional or unconditional) of a quite general and unknown form… The relative finite-sample properties of the different methods are investigated by means of a Monte Carlo simulation study. For the simulation DGPs considered in the analysis, we find that the BIC-based procedure and the bootstrap sequential test procedure deliver the best overall performance in terms…
Finding Sequential Patterns from Large Sequence Data
Esmaeili, Mahdi
2010-01-01
Data mining is the task of discovering interesting patterns from large amounts of data. There are many data mining tasks, such as classification, clustering, association rule mining, and sequential pattern mining. Sequential pattern mining finds sets of data items that occur together frequently in some sequences. Sequential pattern mining, which extracts frequent subsequences from a sequence database, has attracted a great deal of interest in recent data mining research because it is the basis of many applications, such as web user analysis, stock trend prediction, DNA sequence analysis, finding language or linguistic patterns from natural language texts, and using the history of symptoms to predict certain kinds of disease. Given the diversity of these applications, it may not be possible to apply a single sequential pattern model to all of them. Each application may require a unique model and solution. A number of research projects have been established in recent years to develop meaningful sequential pattern...
Sequentializing Parameterized Programs
Directory of Open Access Journals (Sweden)
Salvatore La Torre
2012-07-01
Full Text Available We exhibit assertion-preserving (reachability-preserving) transformations from parameterized concurrent shared-memory programs, under a k-round scheduling of processes, to sequential programs. The salient feature of the sequential program is that it tracks the local variables of only one thread at any point, and uses only O(k) copies of shared variables (it does not use extra counters, not even one counter to keep track of the number of threads). Sequentialization is achieved using the concept of a linear interface, which captures the effect an unbounded block of processes has on the shared state in a k-round schedule. Our transformation utilizes linear interfaces to sequentialize the program, and to ensure the sequential program explores only reachable states and preserves local invariants.
Institute of Scientific and Technical Information of China (English)
Xin-quan Jiang; Shao-yi Wang; Jun Zhao; Xiu-li Zhang; Zhi-yuan Zhang
2009-01-01
Aim To evaluate the effects of maxillary sinus floor elevation using a tissue-engineered bone complex of β-tricalcium phosphate (β-TCP) and autologous osteoblasts in dogs. Methodology Autologous osteoblasts from adult Beagle dogs were cultured in vitro. They were then combined with β-TCP to construct the tissue-engineered bone complex. Twelve maxillary sinus floor elevation surgeries were performed bilaterally in 6 animals and randomly repaired with the following 3 groups of materials: Group A (osteoblasts/β-TCP); Group B (β-TCP); Group C (autogenous bone) (n = 4 per group). Polychrome sequential fluorescent labeling was performed post-operatively and the animals were sacrificed 24 weeks after the operation for histological observation. Results Our results showed that autologous osteoblasts were successfully expanded and the osteoblastic phenotypes were confirmed by ALP and Alizarin red staining. The cells could attach and proliferate well on the surface of the β-TCP scaffold. The fluorescent and histological observations showed that the tissue-engineered bone complex had earlier mineralization and more bone formation inside the scaffold than β-TCP alone or even autogenous bone. It also maximally maintained the elevated sinus height compared with both control groups. Conclusion Porous β-TCP served as a good scaffold for autologous osteoblast seeding. The tissue-engineered bone complex of β-TCP and autologous osteoblasts might be a better alternative to autologous bone for clinical edentulous maxillary sinus augmentation.
Wang, Leana; Zhou, Yan; Liu, Cheng-hui; Zhou, Lixin; He, Yong; Pu, Yang; Nguyen, Thien An; Alfano, Robert R.
2015-03-01
The objective of this study was to identify emission spectral fingerprints for discriminating human colorectal and gastric cancer from normal tissue in vitro using native fluorescence. The native fluorescence (NFL) and Stokes shift spectra of seventy-two human cancerous and normal colorectal (colon, rectum) and gastric tissues were analyzed using three selected excitation wavelengths (300 nm, 320 nm, and 340 nm). Three distinct biomarkers, tryptophan, collagen and reduced nicotinamide adenine dinucleotide hydrate (NADH), were found in the samples of cancerous and normal tissues from eighteen subjects. The spectral profiles of tryptophan exhibited a sharp peak in cancerous colon tissues under 300 nm excitation when compared with normal tissues. Changes in the compositions of tryptophan, collagen, and NADH between colon cancer and normal tissues were found under 300 nm excitation using the non-negative basic biochemical component analysis (BBCA) model.
Sequential Design of Experiments
Energy Technology Data Exchange (ETDEWEB)
Anderson-Cook, Christine Michaela [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-06-30
A sequential design of experiments strategy is being developed and implemented that allows for adaptive learning based on incoming results as the experiment is being run. The plan is to incorporate these strategies for the NCCC and TCM experimental campaigns to be run in the coming months. This strategy for experimentation has the advantage of allowing new data collected during the experiment to inform future experimental runs based on their projected utility for a particular goal. For example, the current effort for the MEA capture system at NCCC plans to focus on maximally improving the quality of prediction of CO₂ capture efficiency as measured by the width of the confidence interval for the underlying response surface that is modeled as a function of 1) Flue gas flowrate [1000-3000] kg/hr; 2) CO₂ weight fraction [0.125-0.175]; 3) Lean solvent loading [0.1-0.3], and; 4) Lean solvent flowrate [3000-12000] kg/hr.
Saha, Puspendu; Bose, Santanu; Mandal, Nibir
2016-10-01
Many fold-and-thrust belts display multi-storied thrust sequences, characterizing a composite architecture of the thrust wedges. Despite dramatic progress in sandbox modelling over the last three decades, our understanding of such composite thrust-wedge mechanics is limited and demands a re-visit to the problem of sequential thrusting in mechanically layered systems. This study offers a new approach to sandbox modelling, designed with a two-layered sandpack simulating a mechanically weak Coulomb layer resting coherently upon a stronger Coulomb layer. Our experimental models reproduce strikingly similar styles to the multi-storied frontal thrust sequences observed in natural fold-and-thrust belts. The upper weak horizon undergoes sequential thrusting at a high spatial frequency, forming numerous, closely spaced frontal thrusts, whereas the lower strong horizon produces widely spaced thrusts with progressive horizontal shortening. This contrasting thrust progression behaviour gives rise to a composite thrust architecture in the layered sandpack. We show the evolution of such composite thrust sequences as a function of frictional strength (μb) at the basal detachment and the thickness ratio (Tr) between the weak and strong layers. For any given values of Tr and μb, the two thrust sequences progress at different rates; the closely spaced upper thrust sequence advances forelandward at a faster rate than the widely spaced lower thrust sequence. Basal friction (μb) has little effect on the vergence of thrusts in the upper weak layer; they always verge toward the foreland, irrespective of Tr values. However, the lower strong layer develops back-vergent thrusts when μb is low (∼0.36). In our experiments, closely spaced thrusts in the upper sequence experience intense reactivation due to their interaction with widely spaced thrusts in the lower sequence. The interaction eventually affects the wedge topography, leading to two distinct parts: inner and outer wedges
2013-01-01
Background Three pathogenicity islands, viz. SPI-1 (Salmonella pathogenicity island 1), SPI-2 (Salmonella pathogenicity island 2) and T6SS (Type VI Secretion System), present in the genome of Salmonella typhimurium have been implicated in the virulence of the pathogen. While the regulation of SPI-1 and SPI-2 (both encoding components of the Type III Secretion System - T3SS) is well understood, T6SS regulation is comparatively less studied. Interestingly, inter-connections among the regulatory elements of these three virulence determinants have also been suggested to be essential for successful infection. However, to date, an integrated view of gene regulation involving the regulators of these three secretion systems and their cross-talk is not available. Results In the current study, relevant regulatory information available from literature has been integrated into a single Boolean network, which portrays the dynamics of T3SS (SPI-1 and SPI-2) and T6SS mediated virulence. Some additional regulatory interactions involving a two-component system response regulator YfhA have also been predicted and included in the Boolean network. These predictions are aimed at deciphering the effects of osmolarity on T6SS regulation, an aspect that has been suggested in earlier studies, but the mechanism of which was hitherto unknown. Simulation of the regulatory network was able to recreate in silico the experimentally observed sequential activation of SPI-1, SPI-2 and T6SS. Conclusions The present study integrates relevant gene regulatory data (from literature and our prediction) into a single network, representing the cross-communication between T3SS (SPI-1 and SPI-2) and T6SS. This holistic view of regulatory interactions is expected to improve the current understanding of pathogenesis of S. typhimurium. PMID:24079299
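A Boolean network of the kind described can be simulated synchronously in a few lines. The rules below are a deliberately simplified, hypothetical stand-in for the published network (the real regulatory logic involves many more nodes, and the regulator names other than SPI1/SPI2/T6SS/YfhA are illustrative); they merely show how sequential activation can emerge from such a model:

```python
# Hypothetical, heavily simplified Boolean rules; NOT the published network.
def step(state):
    s = dict(state)
    s["SPI1"] = state["HilD"] and not state["HilE"]
    s["SPI2"] = state["SsrB"] or state["SPI1"]     # SPI-1 output feeds SPI-2
    s["T6SS"] = state["SPI2"] and state["YfhA"]    # osmolarity signal via YfhA
    return s

state = {"HilD": True, "HilE": False, "SsrB": False, "YfhA": True,
         "SPI1": False, "SPI2": False, "T6SS": False}
trajectory = [state]
for _ in range(4):                 # synchronous updates
    state = step(state)
    trajectory.append(state)
# SPI1 switches on at t=1, SPI2 at t=2, T6SS at t=3: sequential activation
```

Even this toy version reproduces the qualitative behaviour the abstract highlights: each secretion system comes on only after its upstream module has stabilized.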
Finding Sequential Patterns from Large Sequence Data
Directory of Open Access Journals (Sweden)
Fazekas Gabor
2010-01-01
Full Text Available Data mining is the task of discovering interesting patterns from large amounts of data. There are many data mining tasks, such as classification, clustering, association rule mining, and sequential pattern mining. Sequential pattern mining finds sets of data items that occur together frequently in some sequences. Sequential pattern mining, which extracts frequent subsequences from a sequence database, has attracted a great deal of interest in recent data mining research because it is the basis of many applications, such as web user analysis, stock trend prediction, DNA sequence analysis, finding language or linguistic patterns from natural language texts, and using the history of symptoms to predict certain kinds of disease. Given the diversity of these applications, it may not be possible to apply a single sequential pattern model to all of them. Each application may require a unique model and solution. A number of research projects have been established in recent years to develop meaningful sequential pattern models and efficient algorithms for mining these patterns. In this paper, we provide a brief theoretical overview of three types of sequential pattern models.
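The core notion of a frequent subsequence can be illustrated with a toy miner that counts ordered item pairs (item a occurring before item b) and keeps those meeting a minimum support. This is a pedagogical sketch, not one of the algorithms surveyed:

```python
from itertools import combinations

def frequent_pairs(sequences, min_support):
    """Support counts for ordered pairs (a appears before b) across sequences."""
    counts = {}
    for seq in sequences:
        seen = set()
        for i, j in combinations(range(len(seq)), 2):
            seen.add((seq[i], seq[j]))
        for pair in seen:                  # count each pair once per sequence
            counts[pair] = counts.get(pair, 0) + 1
    return {p: c for p, c in counts.items() if c >= min_support}

db = [["a", "b", "c"], ["a", "c", "b"], ["a", "b", "d"]]
patterns = frequent_pairs(db, min_support=3)   # → {('a', 'b'): 3}
```

Real sequential pattern miners (GSP, PrefixSpan, SPADE and their descendants) generalize this idea to subsequences of arbitrary length with far better pruning.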
Matrix Representation in Quantum Mechanics with Non-Negative QDF in the Case of a Hydrogen-Like Atom
Zhidkov, E P; Lovetsky, K P; Tretiakov, N P
2002-01-01
The correspondence rules A(q,p)\mapsto\hat{A} of orthodox quantum mechanics do not allow one to introduce into the theory a non-negative quantum distribution function F(q,p). The correspondence rules A(q,p)\mapsto\hat{O}(A) of Kuryshkin's quantum mechanics (QMK) do allow one to do so. Moreover, the operators \hat{O}(A) turn out to be \hat{A}-bounded and \hat{A}-small at infinity for all systems of auxiliary functions {\varphi_k}. This makes it possible to realize the canonical matrix representation of QMK and to investigate its dependence on the systems of functions {\varphi_k}.
DEFF Research Database (Denmark)
2014-01-01
Due to applications in areas such as diagnostics and environmental safety, detection of molecules at very low concentrations has attracted recent attention. A powerful tool for this is Surface Enhanced Raman Spectroscopy (SERS), where substrates form localized areas of electromagnetic "hot spots"… …a Bayesian Non-negative Matrix Factorization (NMF) approach to identify locations of target molecules. The proposed method is able to successfully analyze the spectra and extract the target spectrum. A visualization of the loadings of the basis vector is created and the results show a clear SNR enhancement…
Shafiee, Alireza
2016-09-24
A theoretical model for a multi-tubular palladium-based membrane is proposed in this paper and validated against experimental data for two different-sized membrane modules that operate at high temperatures. The model is used in a sequential simulation format to describe and analyse pure hydrogen and hydrogen binary mixture separations, and then extended to simulate an industrial-scale membrane unit. This model is used as a sub-routine within an ASPEN Plus model to simulate a membrane reactor in a steam reforming hydrogen production plant. A techno-economic analysis is then conducted using the validated model for a plant producing 300 TPD of hydrogen. The plant utilises a thin (2.5 μm), defect-free and selective layer (Pd75Ag25 alloy) membrane reactor. The economic sensitivity analysis results show usefulness in finding the optimum operating condition that achieves minimum hydrogen production cost at the break-even point. A hydrogen production cost of 1.98 $/kg is estimated, while the cost of the thin-layer selective membrane is found to constitute 29% of total process capital cost. These results indicate the competitiveness of this thin-layer membrane process against conventional methods of hydrogen production. © 2016 Hydrogen Energy Publications LLC
Wright, L.; Coddington, O.; Pilewskie, P.
2015-12-01
Current challenges in Earth remote sensing require improved instrument spectral resolution, spectral coverage, and radiometric accuracy. Hyperspectral instruments, deployed on both aircraft and spacecraft, are a growing class of Earth observing sensors designed to meet these challenges. They collect large amounts of spectral data, allowing thorough characterization of both atmospheric and surface properties. The higher accuracy and increased spectral and spatial resolutions of new imagers require new numerical approaches for processing imagery and separating surface and atmospheric signals. One potential approach is source separation, which allows us to determine the underlying physical causes of observed changes. Improved signal separation will allow hyperspectral instruments to better address key science questions relevant to climate change, including land-use changes, trends in clouds and atmospheric water vapor, and aerosol characteristics. In this work, we investigate a Non-negative Matrix Factorization (NMF) method for the separation of atmospheric and land surface signal sources. NMF offers marked benefits over other commonly employed techniques, including non-negativity, which avoids physically impossible results, and adaptability, which allows the method to be tailored to hyperspectral source separation. We adapt our NMF algorithm to distinguish between contributions from different physically distinct sources by introducing constraints on spectral and spatial variability and by using library spectra to inform separation. We evaluate our NMF algorithm with simulated hyperspectral images as well as hyperspectral imagery from several instruments, including the NASA Airborne Visible/Infrared Imaging Spectrometer (AVIRIS), NASA Hyperspectral Imager for the Coastal Ocean (HICO) and National Ecological Observatory Network (NEON) Imaging Spectrometer.
Sequential stochastic optimization
Cairoli, Renzo
1996-01-01
Sequential Stochastic Optimization provides mathematicians and applied researchers with a well-developed framework in which stochastic optimization problems can be formulated and solved. Offering much material that is either new or has never before appeared in book form, it lucidly presents a unified theory of optimal stopping and optimal sequential control of stochastic processes. This book has been carefully organized so that little prior knowledge of the subject is assumed; its only prerequisites are a standard graduate course in probability theory and some familiarity with discrete-paramet
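The flavor of optimal stopping by backward induction can be conveyed with a standard textbook example (not taken from the book itself): deciding when to stop when allowed at most N rolls of a fair six-sided die, keeping the last roll as the payoff:

```python
def die_values(n_rolls):
    """Backward induction: expected payoff with up to n_rolls rolls left,
    stopping whenever the current face beats the value of continuing."""
    v = 0.0                  # no rolls left: nothing to gain
    values = [v]
    for _ in range(n_rolls):
        v = sum(max(face, v) for face in range(1, 7)) / 6.0
        values.append(v)
    return values

vals = die_values(3)
# vals[1] = 3.5, vals[2] = 4.25, vals[3] = 4.666...
# e.g. with 2 rolls still available, the optimal rule stops only on a 5 or 6,
# since those are the faces exceeding the continuation value 4.25.
```

The recursion v_n = E[max(roll, v_{n-1})] is the simplest instance of the optimal stopping machinery the book develops for general stochastic processes.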
Sequential monitoring with conditional randomization tests
Plamadeala, Victoria (doi:10.1214/11-AOS941)
2012-01-01
Sequential monitoring in clinical trials is often employed to allow for early stopping and other interim decisions, while maintaining the type I error rate. However, sequential monitoring is typically described only in the context of a population model. We describe a computational method to implement sequential monitoring in a randomization-based context. In particular, we discuss a new technique for the computation of approximate conditional tests following restricted randomization procedures and then apply this technique to approximate the joint distribution of sequentially computed conditional randomization tests. We also describe the computation of a randomization-based analog of the information fraction. We apply these techniques to a restricted randomization procedure, Efron's [Biometrika 58 (1971) 403--417] biased coin design. These techniques require derivation of certain conditional probabilities and conditional covariances of the randomization procedure. We employ combinatoric techniques to derive t...
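Efron's biased coin design referenced above is straightforward to simulate; the sketch below (arm labels are illustrative, bias p = 2/3 as in Efron's original proposal) generates one assignment sequence and measures the final imbalance:

```python
import random

def efron_biased_coin(n, p=2/3, seed=42):
    """Efron's biased coin: favor the under-represented arm with probability p."""
    rng = random.Random(seed)
    assignments = []
    for _ in range(n):
        diff = sum(1 if a == "A" else -1 for a in assignments)
        if diff == 0:
            prob_a = 0.5        # balanced so far: fair coin
        elif diff > 0:
            prob_a = 1 - p      # A ahead: bias toward B
        else:
            prob_a = p          # B ahead: bias toward A
        assignments.append("A" if rng.random() < prob_a else "B")
    return assignments

seq = efron_biased_coin(100)
imbalance = abs(seq.count("A") - seq.count("B"))
```

Randomization-based tests of the kind the paper studies condition on such sequences, which requires exactly the conditional probabilities of this procedure.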
A sequential solution for anisotropic total variation image denoising with interval constraints
Xu, Jingyan; Noo, Frédéric
2017-09-01
We show that two problems involving the anisotropic total variation (TV) and interval constraints on the unknown variables admit, under some conditions, a simple sequential solution. Problem 1 is a constrained TV penalized image denoising problem; problem 2 is a constrained fused lasso signal approximator. The sequential solution entails finding first the solution to the unconstrained problem, and then applying a thresholding to satisfy the constraints. If the interval constraints are uniform, this sequential solution solves problem 1. If the interval constraints furthermore contain zero, the sequential solution solves problem 2. Here uniform interval constraints refer to all unknowns being constrained to the same interval. A typical example of application is image denoising in x-ray CT, where the image intensities are non-negative as they physically represent the linear attenuation coefficient of the patient body. Our results are simple yet appear to be unknown; we establish them using the Karush–Kuhn–Tucker conditions for constrained convex optimization.
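The sequential recipe (solve the unconstrained TV problem, then threshold onto the interval) can be checked numerically on a tiny instance. The sketch below uses brute-force grid search as the "solver", purely for illustration, with uniform interval constraints [0, 1] on a three-sample signal; a real application would use a proper TV solver:

```python
import numpy as np

y = np.array([-0.3, 1.4, 1.2])   # noisy 1-D signal
lam = 0.5                         # anisotropic TV weight
lo, hi = 0.0, 1.0                 # uniform interval constraint

def objective(x):
    # 0.5 * ||x - y||^2 + lam * TV(x)
    return 0.5 * np.sum((x - y) ** 2) + lam * np.sum(np.abs(np.diff(x)))

grid = np.linspace(-1.0, 2.0, 31)           # coarse brute-force search grid
best_u = (np.inf, None)                     # unconstrained minimizer
best_c = (np.inf, None)                     # directly constrained minimizer
for a in grid:
    for b in grid:
        for c in grid:
            x = np.array([a, b, c])
            f = objective(x)
            if f < best_u[0]:
                best_u = (f, x)
            if lo <= a <= hi and lo <= b <= hi and lo <= c <= hi and f < best_c[0]:
                best_c = (f, x)

x_sequential = np.clip(best_u[1], lo, hi)   # unconstrained solve, then threshold
x_constrained = best_c[1]
# the two agree up to the grid resolution, as the paper's result predicts
```

On this example the unconstrained minimizer lies near (0.2, 1.05, 1.05); clipping it to [0, 1] reproduces the directly constrained minimizer.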
Ma, Yehao; Li, Xian; Huang, Pingjie; Hou, Dibo; Wang, Qiang; Zhang, Guangxin
2017-04-01
In many situations the THz spectroscopic data observed from complex samples represent the integrated result of several interrelated variables or feature components acting together. The actual information contained in the original data might be overlapping and there is a necessity to investigate various approaches for model reduction and data unmixing. The development and use of low-rank approximate nonnegative matrix factorization (NMF) and smooth constraint NMF (CNMF) algorithms for feature components extraction and identification in the fields of terahertz time domain spectroscopy (THz-TDS) data analysis are presented. The evolution and convergence properties of NMF and CNMF methods based on sparseness, independence and smoothness constraints for the resulting nonnegative matrix factors are discussed. For general NMF, its cost function is nonconvex and the result is usually susceptible to initialization and noise corruption, and may fall into local minima and lead to unstable decomposition. To reduce these drawbacks, smoothness constraint is introduced to enhance the performance of NMF. The proposed algorithms are evaluated by several THz-TDS data decomposition experiments including a binary system and a ternary system simulating some applications such as medicine tablet inspection. Results show that CNMF is more capable of finding optimal solutions and more robust for random initialization in contrast to NMF. The investigated method is promising for THz data resolution contributing to unknown mixture identification.
Institute of Scientific and Technical Information of China (English)
Xiuxiong CHEN; Haozhao LI
2008-01-01
The authors show that the 2-non-negative traceless bisectional curvature is preserved along the Kähler-Ricci flow. The positivity of Ricci curvature is also preserved along the Kähler-Ricci flow with 2-non-negative traceless bisectional curvature. As a corollary, the Kähler-Ricci flow with 2-non-negative traceless bisectional curvature will converge to a Kähler-Ricci soliton in the sense of Cheeger-Gromov-Hausdorff topology if the complex dimension n ≥ 3.
Forced Sequence Sequential Decoding
DEFF Research Database (Denmark)
Jensen, Ole Riis; Paaske, Erik
1998-01-01
…the iteration process provides the sequential decoders with side information that allows a smaller average load and minimizes the probability of computational overflow. Analytical results for the probability that the first RS word is decoded after C computations are presented. These results are supported…
Du, Dongping; Yang, Hui; Ednie, Andrew R; Bennett, Eric S
2016-09-01
Glycan structures account for up to 35% of the mass of cardiac sodium (Nav) channels. To question whether and how reduced sialylation affects Nav activity and cardiac electrical signaling, we conducted a series of in vitro experiments on ventricular apex myocytes under two different glycosylation conditions, reduced protein sialylation (ST3Gal4(-/-)) and full glycosylation (control). Although aberrant electrical signaling is observed under reduced sialylation, realizing a better understanding of the mechanistic details of pathological variations in INa and AP is difficult without performing in silico studies. However, computer models of Nav channels and cardiac myocytes involve greater levels of complexity, e.g., a high-dimensional parameter space and nonlinear, nonconvex equations. Traditional linear and nonlinear optimization methods have encountered many difficulties in model calibration. This paper presents a new statistical metamodeling approach for efficient computer experiments and optimization of Nav models. First, we utilize a fractional factorial design to identify control variables from the large set of model parameters, thereby reducing the dimensionality of the parametric space. Further, we develop a Gaussian process model as a surrogate of expensive and time-consuming computer models and then identify the next best design point, which yields the maximal probability of improvement. This process iterates until convergence, and the performance is evaluated and validated with real-world experimental data. Experimental results show the proposed algorithm achieves superior performance in modeling the kinetics of Nav channels under a variety of glycosylation conditions. As a result, in silico models provide a better understanding of glyco-altered mechanistic details in state transitions and distributions of Nav channels. Notably, ST3Gal4(-/-) myocytes are shown to have higher probabilities accumulated in intermediate inactivation during the repolarization and yield a
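The surrogate-plus-acquisition loop described above can be sketched with a hand-rolled Gaussian process and a probability-of-improvement criterion. The objective below is a cheap stand-in for the expensive Nav model, and the kernel length scale and design points are illustrative assumptions:

```python
import numpy as np
from math import erf, sqrt

def rbf(a, b, ell=0.3):
    """Squared-exponential kernel between two 1-D point sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_posterior(x_tr, y_tr, x_te, noise=1e-6):
    """Posterior mean and standard deviation of a zero-mean GP."""
    K = rbf(x_tr, x_tr) + noise * np.eye(len(x_tr))
    Ks = rbf(x_tr, x_te)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_tr))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v ** 2, axis=0)        # k(x, x) = 1 for this kernel
    return mu, np.sqrt(np.maximum(var, 1e-12))

def prob_improvement(mu, sd, best):
    """P(candidate > current best) under the GP posterior (maximization)."""
    z = (mu - best) / sd
    return np.array([0.5 * (1.0 + erf(zi / sqrt(2.0))) for zi in z])

f = lambda x: np.sin(3.0 * x)     # cheap stand-in for the expensive Nav model
x_tr = np.array([0.1, 0.5, 0.9])  # designs evaluated so far
y_tr = f(x_tr)
x_te = np.linspace(0.0, 1.0, 101) # candidate design points
mu, sd = gp_posterior(x_tr, y_tr, x_te)
pi = prob_improvement(mu, sd, y_tr.max())
x_next = x_te[np.argmax(pi)]      # next run: maximal probability of improvement
```

In the full method, evaluating `f` at `x_next`, appending the result to the training set, and refitting the GP is repeated until convergence.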
Soelter, Jan; Schumacher, Jan; Spors, Hartwig; Schmuker, Michael
2014-09-01
Segmentation of functional parts in image series of functional activity is a common problem in neuroscience. Here we apply regularized non-negative matrix factorization (rNMF) to extract glomeruli in intrinsic optical signal (IOS) images of the olfactory bulb. Regularization allows us to incorporate prior knowledge about the spatio-temporal characteristics of glomerular signals. We demonstrate how to identify suitable regularization parameters on a surrogate dataset. With appropriate regularization segmentation by rNMF is more resilient to noise and requires fewer observations than conventional spatial independent component analysis (sICA). We validate our approach in experimental data using anatomical outlines of glomeruli obtained by 2-photon imaging of resting synapto-pHluorin fluorescence. Taken together, we show that rNMF provides a straightforward method for problem tailored source separation that enables reliable automatic segmentation of functional neural images, with particular benefit in situations with low signal-to-noise ratio as in IOS imaging.
Gauvin, Laetitia; Cattuto, Ciro
2014-01-01
The increasing availability of temporal network data is calling for more research on extracting and characterizing mesoscopic structures in temporal networks and on relating such structure to specific functions or properties of the system. An outstanding challenge is the extension of the results achieved for static networks to time-varying networks, where the topological structure of the system and the temporal activity patterns of its components are intertwined. Here we investigate the use of a latent factor decomposition technique, non-negative tensor factorization, to extract the community-activity structure of temporal networks. The method is intrinsically temporal and allows to simultaneously identify communities and to track their activity over time. We represent the time-varying adjacency matrix of a temporal network as a three-way tensor and approximate this tensor as a sum of terms that can be interpreted as communities of nodes with an associated activity time series. We summarize known computationa...
Paloma, Cynthia S.
The plasma electron temperature (Te) plays a critical role in a tokamak nuclear fusion reactor, since temperatures on the order of 10⁸ K are required to achieve fusion conditions. Many plasma properties in a tokamak nuclear fusion reactor are modeled by partial differential equations (PDEs) because they depend not only on time but also on space. In particular, the dynamics of the electron temperature is governed by a PDE referred to as the Electron Heat Transport Equation (EHTE). In this work, a numerical method is developed to solve the EHTE based on a custom finite-difference technique. The solution of the EHTE is compared to temperature profiles obtained by using TRANSP, a sophisticated plasma transport code, for specific discharges from the DIII-D tokamak, located at the DIII-D National Fusion Facility in San Diego, CA. The thermal conductivity (also called thermal diffusivity) of the electrons (Xe) is a plasma parameter that plays a critical role in the EHTE, since it indicates how the electron temperature diffusion varies across the minor effective radius of the tokamak. TRANSP approximates Xe through a curve-fitting technique to match experimentally measured electron temperature profiles. While complex physics-based models have been proposed for Xe, there is a lack of a simple mathematical model for the thermal diffusivity that could be used for control design. In this work, a model for Xe is proposed based on a scaling law involving key plasma variables such as the electron temperature (Te), the electron density (ne), and the safety factor (q). An optimization algorithm is developed based on the Sequential Quadratic Programming (SQP) technique to optimize the scaling factors appearing in the proposed model, so that the predicted electron temperature and magnetic flux profiles match predefined target profiles in the best possible way. A simulation study summarizing the outcomes of the optimization procedure is presented to illustrate the potential of the
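A scaling-law fit of the kind described can be sketched with SciPy's SLSQP implementation of sequential quadratic programming. The data, the exponents and the functional form below are synthetic placeholders, not the actual DIII-D profiles or the thesis model; fitting in log space makes the power law linear in the exponents:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
Te = rng.uniform(1.0, 5.0, 50)    # electron temperature samples (arbitrary units)
ne = rng.uniform(1.0, 8.0, 50)    # electron density samples
q = rng.uniform(1.0, 4.0, 50)     # safety factor samples

# hypothetical "true" scaling chi_e = c0 * Te^a * ne^b * q^c, used only to
# generate synthetic observations for the fit below
c_true = np.array([0.8, 1.5, -0.5, 0.7])
chi_obs = c_true[0] * Te**c_true[1] * ne**c_true[2] * q**c_true[3]

def loss(p):
    # least squares in log space, where the power law becomes linear
    pred = np.log(p[0]) + p[1] * np.log(Te) + p[2] * np.log(ne) + p[3] * np.log(q)
    return np.sum((pred - np.log(chi_obs)) ** 2)

res = minimize(loss, x0=[1.0, 1.0, 0.0, 0.0], method="SLSQP",
               bounds=[(1e-3, 10.0), (-3, 3), (-3, 3), (-3, 3)])
# res.x recovers the prefactor and the three exponents
```

In the actual procedure, the loss would instead measure the mismatch between predicted and target Te and magnetic flux profiles obtained by solving the EHTE.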
Directory of Open Access Journals (Sweden)
Sujoy Roy
2017-08-01
Full Text Available In this study, we developed and evaluated a novel text-mining approach, using non-negative tensor factorization (NTF, to simultaneously extract and functionally annotate transcriptional modules consisting of sets of genes, transcription factors (TFs, and terms from MEDLINE abstracts. A sparse 3-mode term × gene × TF tensor was constructed that contained weighted frequencies of 106,895 terms in 26,781 abstracts shared among 7,695 genes and 994 TFs. The tensor was decomposed into sub-tensors using non-negative tensor factorization (NTF across 16 different approximation ranks. Dominant entries of each of 2,861 sub-tensors were extracted to form term–gene–TF annotated transcriptional modules (ATMs. More than 94% of the ATMs were found to be enriched in at least one KEGG pathway or GO category, suggesting that the ATMs are functionally relevant. One advantage of this method is that it can discover potentially new gene–TF associations from the literature. Using a set of microarray and ChIP-Seq datasets as gold standard, we show that the precision of our method for predicting gene–TF associations is significantly higher than chance. In addition, we demonstrate that the terms in each ATM can be used to suggest new GO classifications to genes and TFs. Taken together, our results indicate that NTF is useful for simultaneous extraction and functional annotation of transcriptional regulatory networks from unstructured text, as well as for literature based discovery. A web tool called Transcriptional Regulatory Modules Extracted from Literature (TREMEL, available at http://binf1.memphis.edu/tremel, was built to enable browsing and searching of ATMs.
Directory of Open Access Journals (Sweden)
Laetitia Gauvin
The increasing availability of temporal network data is calling for more research on extracting and characterizing mesoscopic structures in temporal networks and on relating such structures to specific functions or properties of the system. An outstanding challenge is the extension of the results achieved for static networks to time-varying networks, where the topological structure of the system and the temporal activity patterns of its components are intertwined. Here we investigate the use of a latent factor decomposition technique, non-negative tensor factorization, to extract the community-activity structure of temporal networks. The method is intrinsically temporal and makes it possible to identify communities and simultaneously track their activity over time. We represent the time-varying adjacency matrix of a temporal network as a three-way tensor and approximate this tensor as a sum of terms that can be interpreted as communities of nodes with an associated activity time series. We summarize known computational techniques for tensor decomposition and discuss some quality metrics that can be used to tune the complexity of the factorized representation. We subsequently apply tensor factorization to a temporal network for which a ground truth is available for both the community structure and the temporal activity patterns. The data we use describe the social interactions of students in a school, the associations between students and school classes, and the spatio-temporal trajectories of students over time. We show that non-negative tensor factorization is capable of recovering the class structure with high accuracy. In particular, the extracted tensor components can be validated either as known school classes, or in terms of correlated activity patterns, i.e., of spatial and temporal coincidences that are determined by the known school activity schedule.
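The tensor decomposition described above can be illustrated with a deliberately tiny sketch: a rank-1 non-negative CP factorization of a 3-way tensor via alternating closed-form updates. The toy "temporal network" data, names, and the restriction to rank 1 are illustrative assumptions; real analyses use higher ranks and dedicated tensor libraries:

```python
def rank1_ncp(T, iters=50):
    """Rank-1 non-negative CP decomposition of a 3-way tensor T (nested
    lists): T[i][j][k] ~= a[i] * b[j] * c[k], by alternating least-squares
    updates. With T >= 0, each closed-form update stays non-negative."""
    I, J, K = len(T), len(T[0]), len(T[0][0])
    a, b, c = [1.0] * I, [1.0] * J, [1.0] * K
    for _ in range(iters):
        nb = sum(x * x for x in b); nc = sum(x * x for x in c)
        a = [sum(T[i][j][k] * b[j] * c[k] for j in range(J) for k in range(K))
             / (nb * nc) for i in range(I)]
        na = sum(x * x for x in a); nc = sum(x * x for x in c)
        b = [sum(T[i][j][k] * a[i] * c[k] for i in range(I) for k in range(K))
             / (na * nc) for j in range(J)]
        na = sum(x * x for x in a); nb = sum(x * x for x in b)
        c = [sum(T[i][j][k] * a[i] * b[j] for i in range(I) for j in range(J))
             / (na * nb) for k in range(K)]
    return a, b, c

# Toy temporal network: one community of nodes {0, 1}, active at times {0, 1}
a0 = [1.0, 1.0, 0.0]   # node memberships
c0 = [2.0, 1.0, 0.0]   # activity time series
T = [[[a0[i] * a0[j] * c0[k] for k in range(3)] for j in range(3)]
     for i in range(3)]
a, b, c = rank1_ncp(T)
```

Each recovered component pairs a node-membership vector (`a`, `b`) with an activity time series (`c`), which is exactly the community-plus-activity interpretation used in the abstract.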
Yun, Younghee; Jung, Wonmo; Kim, Hyunho; Jang, Bo-Hyoung; Kim, Min-Hee; Noh, Jiseong; Ko, Seong-Gyu; Choi, Inhwa
2017-08-01
Syndrome differentiation (SD) results in a diagnostic conclusion based on a cluster of concurrent symptoms and signs, including pulse form and tongue color. In Korea, there is a strong interest in the standardization of Traditional Medicine (TM). In order to standardize TM treatment, standardization of SD should be given priority. The aim of this study was to explore the SD, or symptom clusters, of patients with atopic dermatitis (AD) using non-negative matrix factorization and k-means clustering analysis. We screened 80 patients and enrolled 73 eligible patients. One TM dermatologist evaluated the symptoms/signs using an existing clinical dataset from patients with AD. This dataset was designed to collect 15 dermatologic and 18 systemic symptoms/signs associated with AD. Non-negative matrix factorization was used to decompose the original data into a matrix with three features and a weight matrix. Each patient's three feature values were then used as coordinates to place the patient in three-dimensional space. With five clusters, the silhouette score reached 0.484, the best silhouette score obtained for two to nine clusters. Patients were clustered according to the varying severity of concurrent symptoms/signs. Using null distributions generated by 10,000 permutation tests, we identified significant cluster-specific symptoms/signs as those falling beyond the upper and lower 2.5% bounds of the distribution. Patients in each cluster showed differences in symptoms/signs and severity. In a clinical situation, SD and treatment are based on the practitioners' observations and clinical experience. SD identified through informatics can contribute to the development of standardized, objective, and consistent SD for each disease. Copyright © 2017. Published by Elsevier Ltd.
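The factorization step of such a pipeline can be sketched with classic Lee-Seung multiplicative updates (the abstract does not specify which NMF algorithm was used, so this is an assumption); the toy patient × symptom matrix and all names below are invented for illustration:

```python
import random

def matmul(A, B):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*B)] for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def nmf(V, r, iters=300, eps=1e-9):
    """Lee-Seung multiplicative updates: V (m x n) ~= W (m x r) @ H (r x n).
    Because updates multiply by non-negative ratios, W and H stay non-negative."""
    rng = random.Random(0)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(r)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(r)]
    for _ in range(iters):
        Wt = transpose(W)
        num, den = matmul(Wt, V), matmul(matmul(Wt, W), H)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)]
             for i in range(r)]
        Ht = transpose(H)
        num, den = matmul(V, Ht), matmul(W, matmul(H, Ht))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(r)]
             for i in range(m)]
    return W, H

# Toy severity matrix: 4 patients x 6 symptoms, built from two "syndrome" profiles
V = [[3.0, 3.0, 0.0, 0.0, 1.5, 0.0],
     [2.0, 2.0, 0.0, 0.0, 1.0, 0.0],
     [0.0, 0.0, 4.0, 4.0, 0.0, 2.0],
     [0.0, 0.0, 1.0, 1.0, 0.0, 0.5]]
W, H = nmf(V, r=2)
# Each row of W is the patient's low-dimensional coordinate; in the study's
# pipeline these coordinates (with r=3) would then be fed to k-means.
```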
Gauvin, Laetitia; Panisson, André; Cattuto, Ciro
2014-01-01
DEFF Research Database (Denmark)
Herckenrath, Daan; Fiandaca, G.; Auken, Esben
2013-01-01
these improvements were insignificant and geophysical parameter estimates became slightly worse. When employing a low-quality petrophysical relationship, groundwater model parameters improved less for both the SHI and JHI, where the SHI performed relatively better. When comparing a SHI and JHI for a real-world groundwater model and ERT data, differences in parameter estimates were small. For both cases investigated in this paper, the SHI seems favorable, taking into account parameter error, data fit and the complexity of implementing a JHI in combination with its larger computational burden. © 2013 Author(s).
Praet, Jelle; Santermans, Eva; Daans, Jasmijn; Le Blon, Debbie; Hoornaert, Chloé; Goossens, Herman; Hens, Niel; Van der Linden, Annemie; Berneman, Zwi; Ponsaerts, Peter
2015-01-01
While multiple rodent preclinical studies, and to a lesser extent human clinical trials, claim the feasibility, safety, and potential clinical benefit of cell grafting in the central nervous system (CNS), little convincing knowledge currently exists regarding the actual fate of the grafted cells and their effect on the surrounding environment (or vice versa). Our preceding studies already indicated that only a minor fraction of the initially grafted cell population survives the grafting process, while the surviving cell population becomes invaded by highly activated microglia/macrophages and surrounded by reactive astrogliosis. In the current study, we further elaborate on early cellular and inflammatory events following syngeneic grafting of eGFP(+) mouse embryonic fibroblasts (mEFs) in the CNS of immunocompetent mice. Based on the quantitative histological data obtained, we here propose a detailed mathematically derived working model that sequentially comprises hypoxia-induced apoptosis of grafted mEFs, neutrophil invasion, neoangiogenesis, microglia/macrophage recruitment, astrogliosis, and eventually survival of a limited number of grafted mEFs. Simultaneously, we observed that the cellular events following mEF grafting activate the subventricular zone neural stem and progenitor cell compartment. This proposed model therefore further contributes to our understanding of cell graft-induced cellular responses and will eventually allow for successful manipulation of this intervention.
Szatmári, Gábor; Barta, Károly; Pásztor, László
2015-04-01
Modelling of large-scale spatial variability of soil properties is a promising subject in soil science, as well as in general environmental research, since the resulting model(s) can be applied to solve various problems. In addition to "purely" mapping an environmental element, the spatial uncertainty of the map product can be deduced, and specific areas can be identified and/or delineated (contaminated or endangered regions, plots for fertilization, etc.). Geostatistics, which can be regarded as a subset of statistics specialized in the analysis and interpretation of geographically referenced data, offers a broad set of tools to solve these tasks. Numerous spatial modeling methods have been developed in the past decades based on the regionalized variable theory. One of these techniques is sequential stochastic simulation, which can be conditioned with universal kriging (also referred to as regression kriging). As opposed to universal kriging (UK), sequential simulation conditioned with universal kriging (SSUK) provides not just one but several alternative and equally probable "maps", i.e. realizations. The realizations reproduce the global statistics (e.g. sample histogram, variogram), i.e. they reflect/model the reality in a certain global (and not local!) sense. In this paper we present and test an SSUK implementation developed in R and its use in a study area affected by water erosion. Furthermore, we compare the results from UK and SSUK. For this purpose, two soil variables were selected: soil organic matter (SOM) content and rooting depth (RD). The SSUK approach is illustrated with a legacy soil dataset from a study area endangered by water erosion in Central Hungary. Legacy soil data were collected at the end of the 1980s in the framework of the National Land Evaluation Programme. Spatially exhaustive covariates were derived from a digital elevation model and from the land-use map of the study area. SSUK was built upon a UK prediction system for both variables and 200 realizations
Directory of Open Access Journals (Sweden)
Naoto Hayasaka
BACKGROUND: There is an increasing need for animal disease models for pathophysiological research and efficient drug screening. However, one of the technical barriers to the effective use of such models is the difficulty of non-invasive and sequential monitoring of the same animals. Micro-CT is a powerful tool for serial diagnostic imaging of animal models. However, soft tissue contrast resolution, particularly in the brain, is insufficient for detailed analysis, unlike in current clinical applications of CT. We address the soft tissue contrast resolution issue in this report. METHODOLOGY: We performed contrast-enhanced CT (CECT) on mouse models of experimental cerebral infarction and hepatic ischemia. Pathological changes in each lesion were quantified for two weeks by measuring the lesion volume or the ratio of high attenuation area (%HAA), indicative of increased vascular permeability. We also compared brain images of stroke rats and ischemic mice acquired with micro-CT to those acquired with 11.7-T micro-MRI. Histopathological analysis was performed to confirm the diagnosis by CECT. PRINCIPAL FINDINGS: In the models of cerebral infarction, vascular permeability was increased from three days through one week after surgical initiation, which was also confirmed by Evans blue dye leakage. Measurement of volume and %HAA of the liver lesions demonstrated differences in the recovery process between mice with distinct genetic backgrounds. Comparison of CT and MR images acquired from the same stroke rats or ischemic mice indicated that the accuracy of volumetric measurement, as well as the spatial and contrast resolutions of CT images, was comparable to that obtained with MRI. The imaging results were also consistent with the histological data. CONCLUSIONS: This study demonstrates that the CECT scanning method is useful in rodents for both quantitative and qualitative evaluations of pathologic lesions in tissues/organs including the brain, and is
Sequential measurements of conjugate observables
Energy Technology Data Exchange (ETDEWEB)
Carmeli, Claudio [Dipartimento di Fisica, Universita di Genova, Via Dodecaneso 33, 16146 Genova (Italy); Heinosaari, Teiko [Department of Physics and Astronomy, Turku Centre for Quantum Physics, University of Turku, 20014 Turku (Finland); Toigo, Alessandro, E-mail: claudio.carmeli@gmail.com, E-mail: teiko.heinosaari@utu.fi, E-mail: alessandro.toigo@polimi.it [Dipartimento di Matematica 'Francesco Brioschi', Politecnico di Milano, Piazza Leonardo da Vinci 32, 20133 Milano (Italy)]
2011-07-15
We present a unified treatment of sequential measurements of two conjugate observables. Our approach is to derive a mathematical structure theorem for all the relevant covariant instruments. As a consequence of this result, we show that every Weyl-Heisenberg covariant observable can be implemented as a sequential measurement of two conjugate observables. This method is applicable both in finite- and infinite-dimensional Hilbert spaces, therefore covering sequential spin component measurements as well as position-momentum sequential measurements.
Spirov, Alexander V; Myasnikova, Ekaterina M; Holloway, David M
2016-04-01
Gene network simulations are increasingly used to quantify mutual gene regulation in biological tissues. These are generally based on linear interactions between single-entity regulatory and target genes. Biological genes, by contrast, commonly have multiple, partially independent, cis-regulatory modules (CRMs) for regulator binding, and can produce variant transcription and translation products. We present a modeling framework to address some of the gene regulatory dynamics implied by this biological complexity. Spatial patterning of the hunchback (hb) gene in Drosophila development involves control by three CRMs producing two distinct mRNA transcripts. We use this example to develop a differential equations model for transcription which takes into account the cis-regulatory architecture of the gene. Potential regulatory interactions are screened by a genetic algorithm (GA) approach and compared to biological expression data.
Sequential cloning of chromosomes
Energy Technology Data Exchange (ETDEWEB)
Lacks, S.A.
1991-12-31
A method for sequential cloning of chromosomal DNA and chromosomal DNA cloned by this method are disclosed. The method includes the selection of a target organism having a segment of chromosomal DNA to be sequentially cloned. A first DNA segment, having a first restriction enzyme site on either side, homologous to the chromosomal DNA to be sequentially cloned is isolated. A first vector product is formed by ligating the homologous segment into a suitably designed vector. The first vector product is circularly integrated into the target organism's chromosomal DNA. The resulting integrated chromosomal DNA segment includes the homologous DNA segment at either end of the integrated vector segment. The integrated chromosomal DNA is cleaved with a second restriction enzyme and ligated to form a vector-containing plasmid, which is replicated in a host organism. The replicated plasmid is then cleaved with the first restriction enzyme. Next, a DNA segment containing the vector and a segment of DNA homologous to a distal portion of the previously isolated DNA segment is isolated. This segment is then ligated to form a plasmid which is replicated within a suitable host. This plasmid is then circularly integrated into the target chromosomal DNA. The chromosomal DNA containing the circularly integrated vector is treated with a third, retrorestriction enzyme. The cleaved DNA is ligated to give a plasmid that is used to transform a host permissive for replication of its vector. The sequential cloning process continues by repeated cycles of circular integration and excision. The excision is carried out alternately with the second and third enzymes.
Sequential cloning of chromosomes
Lacks, S.A.
1995-07-18
A method for sequential cloning of chromosomal DNA of a target organism is disclosed. A first DNA segment homologous to the chromosomal DNA to be sequentially cloned is isolated. The first segment has a first restriction enzyme site on either side. A first vector product is formed by ligating the homologous segment into a suitably designed vector. The first vector product is circularly integrated into the target organism's chromosomal DNA. The resulting integrated chromosomal DNA segment includes the homologous DNA segment at either end of the integrated vector segment. The integrated chromosomal DNA is cleaved with a second restriction enzyme and ligated to form a vector-containing plasmid, which is replicated in a host organism. The replicated plasmid is then cleaved with the first restriction enzyme. Next, a DNA segment containing the vector and a segment of DNA homologous to a distal portion of the previously isolated DNA segment is isolated. This segment is then ligated to form a plasmid which is replicated within a suitable host. This plasmid is then circularly integrated into the target chromosomal DNA. The chromosomal DNA containing the circularly integrated vector is treated with a third, retrorestriction (class IIS) enzyme. The cleaved DNA is ligated to give a plasmid that is used to transform a host permissive for replication of its vector. The sequential cloning process continues by repeated cycles of circular integration and excision. The excision is carried out alternately with the second and third enzymes. 9 figs.
Directory of Open Access Journals (Sweden)
Šime Ukić
2013-01-01
Gradient ion chromatography was used for the separation of eight sugars: arabitol, cellobiose, fructose, fucose, lactulose, melibiose, N-acetyl-D-glucosamine, and raffinose. The separation method was optimized using a combination of the simplex or genetic algorithm with isocratic-to-gradient retention modeling. Both the simplex and genetic algorithms provided well-separated chromatograms in a similar analysis time. However, the simplex methodology showed severe drawbacks when dealing with local minima. Thus the genetic algorithm methodology proved to be the method of choice for gradient optimization in this case. All the calculated/predicted chromatograms were compared with the real sample data, showing more than satisfactory agreement.
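A toy real-coded genetic algorithm illustrates why a population-based search copes better with the local minima that trap simplex-style methods. The test function, operators, and parameter values below are illustrative, not those of the chromatographic study:

```python
import math
import random

def ga_minimize(f, lo, hi, pop_size=40, gens=80, seed=1):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, and elitism. The population explores many basins
    at once instead of sliding into the nearest local minimum."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        def pick():                      # tournament of 3
            return min(rng.sample(pop, 3), key=f)
        nxt = [min(pop, key=f)]          # elitism: keep the current best
        while len(nxt) < pop_size:
            a, b = pick(), pick()
            w = rng.random()
            child = w * a + (1 - w) * b  # blend crossover
            if rng.random() < 0.2:       # occasional Gaussian mutation
                child += rng.gauss(0.0, 0.5)
            nxt.append(min(max(child, lo), hi))
        pop = nxt
    return min(pop, key=f)

# Multimodal test function with many local minima; global minimum at x = 0
f = lambda x: x * x - 10.0 * math.cos(2.0 * math.pi * x) + 10.0
best = ga_minimize(f, -5.0, 5.0)
```

A simplex search started in the wrong basin of `f` would typically settle into a nearby local minimum, which is exactly the drawback the abstract reports.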
Guptaroy, P; Sau, Goutam; Biswas, S K; Bhattacharya, S
2009-01-01
We have attempted to develop here, tentatively, a model for $J/\Psi$ production in p+p, d+Au, Cu+Cu and Au+Au collisions at RHIC energies on the basic ansatz that the results of nucleus-nucleus collisions could be arrived at from the nucleon-nucleon (p+p) interactions with the induction of some additional specific features of high-energy nuclear collisions. Based on the proposed new and somewhat unfamiliar model, we have tried (i) to capture the properties of invariant $p_T$-spectra for $J/\Psi$ meson production; (ii) to study the nature of the centrality dependence of the $p_T$-spectra; (iii) to understand the rapidity distributions; (iv) to obtain the characteristics of the average transverse momentum $\langle p_T \rangle$ and the values of $\langle p_T^2 \rangle$ as well; and (v) to trace the nature of the nuclear modification factor. The alternative approach adopted here describes the data-sets on the above-mentioned observables in a fairly satisfactory manner. And, finally, the nature of $J/\Psi$ production at Large Hadron Collider (LHC) energ...
Text Classification: A Sequential Reading Approach
Dulac-Arnold, Gabriel; Gallinari, Patrick
2011-01-01
We propose to model the text classification process as a sequential decision process. In this process, an agent learns to classify documents into topics while reading the document's sentences sequentially, and learns to stop as soon as enough information has been read to make a decision. The proposed algorithm is based on a modeling of Text Classification as a Markov Decision Process and learns by using Reinforcement Learning. Experiments on four different classical mono-label corpora show that the proposed approach performs comparably to classical SVM approaches for large training sets, and better for small training sets. In addition, the model automatically adapts its reading process to the quantity of training information provided.
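The stop-when-confident reading policy can be sketched without the reinforcement-learning machinery: accumulate per-word evidence sentence by sentence and stop once a decision threshold is crossed. The word scores, classes, and threshold below are invented for illustration, not learned as in the paper:

```python
# Hypothetical per-word log-odds for class "A" (sports) vs class "B" (politics)
LOG_ODDS = {"goal": 1.2, "match": 0.8, "election": -1.1, "senate": -1.4}

def classify_sequentially(sentences, threshold=2.0):
    """Read sentences one at a time, accumulating evidence; return the
    decision and how many sentences were actually read before stopping."""
    score = 0.0
    for n, sentence in enumerate(sentences, 1):
        for word in sentence.lower().split():
            score += LOG_ODDS.get(word, 0.0)  # unknown words are neutral
        if abs(score) >= threshold:           # confident enough: stop early
            return ("A" if score > 0 else "B"), n
    return ("A" if score > 0 else "B"), len(sentences)

doc = ["the goal in the match", "another goal", "irrelevant closing sentence"]
label, sentences_read = classify_sequentially(doc)
```

The early stop is what the agent learns in the paper; here the threshold is fixed by hand to make the control flow explicit.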
Raja, Muhammad Asif Zahoor; Zameer, Aneela; Khan, Aziz Ullah; Wazwaz, Abdul Majid
2016-01-01
In this study, a novel bio-inspired computing approach is developed to analyze the dynamics of the nonlinear singular Thomas-Fermi equation (TFE) arising in potential and charge density models of an atom, by exploiting the strength of a finite difference scheme (FDS) for discretization and optimization through genetic algorithms (GAs) hybridized with sequential quadratic programming (SQP). The FDS procedure is used to transform the TFE differential equation into a system of nonlinear equations. A fitness function is constructed based on the residual error of the constituent equations in the mean-square sense and is formulated as a minimization problem. Optimization of the system parameters is carried out with GAs, used as a tool for viable global search, integrated with the SQP algorithm for rapid refinement of the results. The design scheme is applied to solve the TFE for five different scenarios by taking various step sizes and different input intervals. Comparison of the proposed results with state-of-the-art numerical and analytical solutions reveals the worth of our scheme in terms of accuracy and convergence. The reliability and effectiveness of the proposed scheme are validated by consistently obtaining optimal values of statistical performance indices, calculated for a sufficiently large number of independent runs, to establish its significance.
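The residual-based fitness construction can be sketched on a much simpler boundary-value problem than the singular Thomas-Fermi equation (here u'' = 2 with u(0)=0, u(1)=1, whose exact solution is u = x^2); grid size and names are illustrative:

```python
def fitness(u, h, rhs):
    """Mean-square residual of the discretized ODE u'' = rhs(x) at interior
    grid points, using central differences. This is the kind of objective a
    GA/SQP hybrid would minimize over the unknown interior values u[1:-1]."""
    n = len(u)
    res = [(u[i - 1] - 2.0 * u[i] + u[i + 1]) / h**2 - rhs(i * h)
           for i in range(1, n - 1)]
    return sum(r * r for r in res) / len(res)

# Toy problem u'' = 2, u(0) = 0, u(1) = 1, on an 11-point grid
n, h = 11, 0.1
exact = [(i * h) ** 2 for i in range(n)]   # true solution: zero residual
wrong = [i * h for i in range(n)]          # linear guess violates u'' = 2
```

For quadratics the central difference is exact, so the true solution drives the fitness to (numerically) zero, while the linear guess leaves a residual of -2 at every interior point.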
Moghadam, M Nassajian; Aminian, K; Asghari, M; Parnianpour, M
2013-01-01
The way the central nervous system manages the excess degrees of freedom to solve the kinetic redundancy of the musculoskeletal system remains an open question. In this study, we utilise the concept of synergy formation as a simplifying control strategy to find the muscle recruitment, based on a summation of identified muscle synergies, that balances the biomechanical demands (biaxial external torque) during an isometric shoulder task. A numerical optimisation-based shoulder model was used to obtain muscle activation levels when a biaxial external isometric torque is imposed at the shoulder glenohumeral joint. In the numerical simulations, 12 different shoulder torque vectors in the transverse plane are considered. For each selected direction of the torque vector, the resulting muscle activation data are calculated. The predicted muscle activation data are used for grouping muscles into a number of fixed-element synergies by the non-negative matrix factorisation method. Next, the torques produced by these synergies are computed and projected into the 2D torque space to investigate the magnitude and direction of the torques that each muscle synergy generated. The results confirmed our expectation that a few dominant synergies are sufficient to reconstruct the torque vectors and that each muscle contributed to more than one synergy. Decomposition of the concatenated data, combining the activation and external torque, provided functional muscle synergies that produced torques in the four principal directions. Four muscle synergies were able to account for more than 95% of the variation of the original data.
Messadi, Abdel Majid; Mardassi, Besma; Ouali, Jamel Abdennaceur; Touir, Jamel
2016-06-01
Integrated sedimentological, diagenetic, sequence-analytical and clay-mineralogical studies of the Upper Paleocene rocks in the Tamerza area provide important information on the reconstruction of the depositional basin, cyclicity, and paleoclimatic contexts. Facies analysis and petrographic studies have led to the recognition of nine facies that were deposited in three facies belts: sebkha, inner ramp and outer ramp, summarized in a carbonate ramp model: a homoclinal ramp under an arid climate. The upward and lateral changes in thickness and composition show a general regressive trend that records a transition from an outer ramp to a sebkha, creating different types of confinement. The facies stacking patterns constitute several kinds of meter-scale, shallowing-upward cycles. Nine different types of depositional cycles and several models of sebkha sequences were defined. These different types of facies, characterized within the Thelja Formation, compose seven depositional sequences, mainly made of carbonates, marls and evaporites. Detailed multi-approach analysis provides important information on evaporitic sequence stratigraphy. In carbonate beds, the diagenetic analysis provides an overview and chronology of diagenetic processes. Particular attention was paid to early-stage cementation, which enables us to better characterize the depositional environments. In addition to cementation, other features define the diagenetic history. X-ray diffraction reveals the presence of smectite, suggesting an arid climate. Moreover, the clinoptilolite and the frequency of primary dolomite indicate different degrees of confinement. The seven depositional sequences, showing a hierarchical organization of many cycles as described above, suggest that eustatic sea-level oscillations caused by cyclic perturbations of the Earth's orbit played a fundamental role in determining the formation of the hierarchical cyclic rhythmicity.
Sequential Analysis in High Dimensional Multiple Testing and Sparse Recovery
Malloy, Matt
2011-01-01
This paper studies the problem of high-dimensional multiple testing and sparse recovery from the perspective of sequential analysis. In this setting, the probability of error is a function of the dimension of the problem. A simple sequential testing procedure for this problem is proposed. We derive necessary conditions for reliable recovery in the non-sequential setting and contrast them with sufficient conditions for reliable recovery using the proposed sequential testing procedure. Applications of the main results to several commonly encountered models show that sequential testing can be exponentially more sensitive to the difference between the null and alternative distributions (in terms of the dependence on dimension), implying that subtle cases can be much more reliably determined using sequential methods.
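A concrete instance of such a sequential test is Wald's sequential probability ratio test (SPRT), sketched here for distinguishing two Gaussian means with known variance. The thresholds use the standard Wald approximations, and the data values are invented for illustration:

```python
import math

def sprt(samples, mu0, mu1, sigma, alpha=0.05, beta=0.05):
    """Wald's SPRT for H0: mean = mu0 vs H1: mean = mu1 (known sigma).
    Returns (decision, samples_used); decision is None if the data run
    out before either boundary is crossed."""
    upper = math.log((1 - beta) / alpha)   # cross above: accept H1
    lower = math.log(beta / (1 - alpha))   # cross below: accept H0
    llr = 0.0
    for n, x in enumerate(samples, 1):
        # log-likelihood ratio increment for one Gaussian observation
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2.0 * sigma ** 2)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return None, len(samples)

# A strong signal triggers an early H1 decision after very few samples
data = [1.1, 0.9, 1.2, 1.0, 0.8, 1.1, 1.0, 0.9]
decision, n_used = sprt(data, mu0=0.0, mu1=1.0, sigma=0.5)
```

The early stopping on clear-cut data is precisely the sensitivity gain over fixed-sample tests that the abstract describes.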
Forced Sequence Sequential Decoding
DEFF Research Database (Denmark)
Jensen, Ole Riis
In this thesis we describe a new concatenated decoding scheme based on iterations between an inner sequentially decoded convolutional code of rate R=1/4 and memory M=23, and block interleaved outer Reed-Solomon codes with non-uniform profile. With this scheme decoding with good performance is pos...... of computational overflow. Analytical results for the probability that the first Reed-Solomon word is decoded after C computations are presented. This is supported by simulation results that are also extended to other parameters....
Ghoraani, Behnaz
2016-12-01
Time-frequency (TF) representation has found wide use in many challenging signal processing tasks, including classification, interference rejection, and retrieval. Advances in TF analysis methods have led to the development of powerful techniques that use non-negative matrix factorization (NMF) to adaptively decompose TF data into TF basis components and coefficients. In this paper, standard NMF is modified for TF data such that the improved TF bases can be used for signal classification applications with overlapping classes and for data retrieval. The new method, called the jointly learnt NMF (JLNMF) method, identifies both distinct and shared TF bases and is able to use the decomposed bases to successfully retrieve and separate class-specific information from data. The paper provides the framework of the proposed JLNMF cost function and proposes a projected gradient framework to solve for limit-point stationary solutions. The developed algorithm has been applied to a synthetic data retrieval experiment, to epileptic spike detection in EEG signals of infantile spasms, and to discrimination of pathological voice disorders. The experimental results verified that JLNMF successfully identified the class-specific information, thus enhancing data separation performance.
Ray, Sumanta; Maulik, Ujjwal
2016-12-20
Detecting perturbation in modular structure during HIV-1 disease progression is an important step toward understanding the stage-specific infection pattern of the HIV-1 virus in human cells. In this article, we propose a novel methodology that integrates multiple sources of biological information to identify such disruption in human gene modules during different stages of HIV-1 infection. We integrate three different types of biological information: gene expression information, protein-protein interaction information and gene ontology information, in a single gene meta-module, through non-negative matrix factorization (NMF). As the identified meta-modules inherit this information, detecting their perturbation reflects the changes in expression pattern, in PPI structure and in functional similarity of genes during the infection progression. To integrate modules of different data sources into strong meta-modules, NMF-based clustering is utilized here. Perturbation in meta-modular structure is identified by investigating topological and intramodular properties and ranking the meta-modules using a rank aggregation algorithm. We have also analyzed the preservation structure of significant GO terms in which the human proteins of the meta-modules participate. Moreover, we have performed an analysis to show the change of coregulation pattern of identified transcription factors (TFs) over the HIV progression stages.
Asynchronous sequential machine design and analysis
Tinder, Richard
2009-01-01
Asynchronous Sequential Machine Design and Analysis provides a lucid, in-depth treatment of asynchronous state machine design and analysis presented in two parts: Part I on the background fundamentals related to asynchronous sequential logic circuits generally, and Part II on self-timed systems, high-performance asynchronous programmable sequencers, and arbiters.Part I provides a detailed review of the background fundamentals for the design and analysis of asynchronous finite state machines (FSMs). Included are the basic models, use of fully documented state diagrams, and the design and charac
Sequentiality versus simultaneity: Interrelated factor demand
Asphjell, M.K.; Letterie, W.A.; Nilsen, O.A.; Pfann, G.A.
2014-01-01
Firms may adjust capital and labor sequentially or simultaneously. In this paper, we develop a structural model of interrelated factor demand subject to nonconvex adjustment costs and estimated by simulated method of moments. Based on Norwegian manufacturing industry plant-level data, parameter
Botnet detection model based on sequential analysis
Institute of Scientific and Technical Information of China (English)
范轶彦; 邬国锐; 陈监利; 汤博
2011-01-01
Contemporary IRC botnet detection methods are not suitable for detecting botnets with infrequent command-and-control interactions. To detect small, stealthy botnets, a botnet detection model based on sequential analysis is proposed as a complement to contemporary passive detection technologies. Several probe methods and detection algorithms are discussed, with the detection algorithm chosen according to the response types of clients, and the average number of detection rounds is analyzed; only a small portion of command-and-control interactions needs to be observed to declare a single IRC bot or multiple IRC bots. The results show that botnet detection is completed within the expected number of rounds while keeping the false positive and false negative rates under control.
Accounting for Heterogeneous Returns in Sequential Schooling Decisions
Zamarro, G.
2006-01-01
This paper presents a method for estimating returns to schooling that takes into account that returns may be heterogeneous among agents and that educational decisions are made sequentially. A sequential decision model is interesting because it explicitly considers that the level of education of each
Karacan, C.O.; Olea, R.A.; Goodman, G.
2012-01-01
Determination of the size of the gas emission zone, the locations of gas sources within it, and especially the amount of gas retained in those zones is one of the most important steps for designing a successful methane control strategy and an efficient ventilation system in longwall coal mining. The formation of the gas emission zone and the potential amount of gas-in-place (GIP) that might be available for migration into a mine are factors of local geology and rock properties that usually show spatial variability in continuity and may also show geometric anisotropy. Geostatistical methods are used here for modeling and prediction of gas amounts and for assessing their associated uncertainty in gas emission zones of longwall mines for methane control. This study used core data obtained from 276 vertical exploration boreholes drilled from the surface to the bottom of the Pittsburgh coal seam in a mining district in the Northern Appalachian basin. After identifying important coal and non-coal layers for the gas emission zone, univariate statistical and semivariogram analyses were conducted for data from different formations to define the distribution and continuity of various attributes. Sequential simulations provided stochastic assessments of these attributes, such as gas content, strata thickness, and strata displacement. These analyses were followed by calculations of gas-in-place and their uncertainties in the Pittsburgh seam caved zone and fractured zone of longwall mines in this mining district. Grid blanking was used to isolate the volume over the actual panels from the entire modeled district and to calculate gas amounts that were directly related to the emissions in longwall mines. Results indicated that gas-in-place in the Pittsburgh seam, in the caved zone and in the fractured zone, as well as displacements in major rock units, showed spatial correlations that could be modeled and estimated using geostatistical methods. This study showed that GIP volumes may
Trimmer, Karen
2016-01-01
This paper investigates reasoned risk-taking in decision-making by school principals using a methodology that combines sequential use of psychometric and traditional measurement techniques. Risk-taking is defined as when decisions are made that are not compliant with the regulatory framework, the primary governance mechanism for public schools in…
Active Sequential Hypothesis Testing
Naghshvar, Mohammad
2012-01-01
Consider a decision maker who is responsible for dynamically collecting observations so as to enhance his information in a speedy manner about an underlying phenomenon of interest while accounting for the penalty of wrong declaration. Special cases of the problem are shown to be those of variable-length coding with feedback and noisy dynamic search. Due to the sequential nature of the problem, the decision maker relies on his current information state to adaptively select the most "informative" sensing action among the available ones. In this paper, using results in dynamic programming, a lower bound for the optimal total cost is established. Moreover, upper bounds are obtained via an analysis of heuristic policies for dynamic selection of actions. It is shown that the proposed heuristics achieve asymptotic optimality in many practically relevant problems including the problems of variable-length coding with feedback and noisy dynamic search; where asymptotic optimality implies that the relative difference betw...
Multi-Attribute Sequential Search
Bearden, J. Neil; Connolly, Terry
2007-01-01
This article describes empirical and theoretical results from two multi-attribute sequential search tasks. In both tasks, the DM sequentially encounters options described by two attributes and must pay to learn the values of the attributes. In the "continuous" version of the task the DM learns the precise numerical value of an attribute when she…
A sequential tree approach for incremental sequential pattern mining
Indian Academy of Sciences (India)
RAJESH KUMAR BOGHEY; SHAILENDRA SINGH
2016-12-01
‘‘Sequential pattern mining’’ is a prominent and significant method to explore knowledge from a large database. Common sequential pattern mining algorithms handle static databases. In practice, however, the database grows continually, so once the database is updated, the previous mining result becomes incorrect and the entire mining process must be restarted on the newly updated sequential database. To avoid rescanning the entire database, incremental mining of sequential patterns is used. Previous approaches and techniques are a priori-based frameworks. We propose an algorithm called STISPM for incremental mining of sequential patterns using the sequence tree space structure. STISPM uses a depth-first approach along with backward tracking and a dynamic lookahead pruning strategy that removes infrequent patterns. The path from the root node to any leaf node depicts a sequential pattern in the database. This structural characteristic of the sequence tree makes it convenient and appropriate for incremental sequential pattern mining. The sequence tree also stores all the sequential patterns with their counts, so whenever the support threshold is changed, our algorithm, using the frequent sequence tree as the storage structure, can find all the sequential patterns without mining the database again.
Directory of Open Access Journals (Sweden)
Sarah M Leonard
Full Text Available The current interest in epigenetic priming is underpinned by the belief that remodelling of the epigenetic landscape will sensitise tumours to subsequent therapy. In this pre-clinical study, paediatric AML cells expanded in culture and primary AML xenografts were treated with decitabine, a DNA demethylating agent, and cytarabine, a frontline cytotoxic agent used in the treatment of AML, either alone or in combination. Sequential treatment with decitabine and cytarabine was found to be more effective in reducing tumour burden than treatment with cytarabine alone, suggesting that the sequential delivery of these agents may have a real clinical advantage in the treatment of paediatric AML. However, we found no evidence to suggest that this outcome was dependent on priming with a hypomethylating agent, as the benefits observed were independent of the order in which these drugs were administered.
Sequential Testing: Basics and Benefits
1978-03-01
103-109. 44. A. Wald, Sequential Analysis, John Wiley and Sons, 1947. 45. A. Wald and J. Wolfowitz, "Optimum Character of the Sequential Probability Ratio... work done by A. Wald [44]. Wald's work on sequential analysis can be used virtually without modification in a situation where decisions are made... Wald can be used. The decision to accept, reject, or continue the test depends on: B < (θ0/θ1)^r exp[-(1/θ1 - 1/θ0)V(t)] < A (1) where B and A are
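The decision rule in (1) is Wald's sequential probability ratio test for an exponential mean life. A minimal Python sketch of the rule (my own illustration, not code from the report; `theta0`, `theta1`, `alpha`, and `beta` are assumed inputs, with `theta1 < theta0`):

```python
import math

def sprt_exponential(theta0, theta1, alpha, beta, failures, total_time):
    """Wald SPRT decision for exponential mean life theta.
    H0: theta = theta0 (acceptable), H1: theta = theta1 (rejectable).
    `failures` is r, `total_time` is the accumulated test time V(t)."""
    A = (1 - beta) / alpha      # upper boundary
    B = beta / (1 - alpha)      # lower boundary
    # likelihood ratio after r failures and total test time V(t)
    lr = (theta0 / theta1) ** failures * math.exp(-(1 / theta1 - 1 / theta0) * total_time)
    if lr >= A:
        return 'reject'     # evidence favors H1 (short mean life)
    if lr <= B:
        return 'accept'     # evidence favors H0
    return 'continue'       # keep testing
```

The boundaries A = (1 - β)/α and B = β/(1 - α) are Wald's standard approximations for the chosen error rates.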
A STUDY OF ASSOCIATION ENERGIES OF SEQUENTIAL ...
African Journals Online (AJOL)
The analysis is performed on a, da, ..., da, d sequential n-mer clusters. ... quantitative theoretical model needed to describe the structures and energetics of small water clusters that ... delocalized molecular orbital method both at the.
Sequential operators in computability logic
Japaridze, Giorgi
2007-01-01
Computability logic (CL) (see http://www.cis.upenn.edu/~giorgi/cl.html) is a semantical platform and research program for redeveloping logic as a formal theory of computability, as opposed to the formal theory of truth which it has more traditionally been. Formulas in CL stand for (interactive) computational problems, understood as games between a machine and its environment; logical operators represent operations on such entities; and "truth" is understood as existence of an effective solution, i.e., of an algorithmic winning strategy. The formalism of CL is open-ended, and may undergo a series of extensions as the study of the subject advances. The main groups of operators on which CL has been focused so far are the parallel, choice, branching, and blind operators. The present paper introduces a new important group of operators, called sequential. The latter come in the form of sequential conjunction and disjunction, sequential quantifiers, and sequential recurrences. As the name may suggest, the algorithmic ...
Complementary sequential measurements generate entanglement
Coles, Patrick J.; Piani, Marco
2013-01-01
We present a new paradigm for capturing the complementarity of two observables. It is based on the entanglement created by the interaction between the system observed and the two measurement devices used to measure the observables sequentially. Our main result is a lower bound on this entanglement and resembles well-known entropic uncertainty relations. Besides its fundamental interest, this result directly bounds the effectiveness of sequential bipartite operations---corresponding to the mea...
Sequentially pulsed traveling wave accelerator
Caporaso, George J.; Nelson, Scott D.; Poole, Brian R.
2009-08-18
A sequentially pulsed traveling wave compact accelerator having two or more pulse forming lines each with a switch for producing a short acceleration pulse along a short length of a beam tube, and a trigger mechanism for sequentially triggering the switches so that a traveling axial electric field is produced along the beam tube in synchronism with an axially traversing pulsed beam of charged particles to serially impart energy to the particle beam.
The average value inequality in sequential effect algebras
Jun, Shen
2009-01-01
A sequential effect algebra $(E,0,1,\oplus,\circ)$ is an effect algebra on which a sequential product $\circ$ with certain physical properties is defined; in particular, sequential effect algebras are an important model for studying quantum measurement theory. In 2005, Gudder asked the following problem: if $a, b\in (E,0,1,\oplus,\circ)$ with $a\bot b$ and $a\circ b\bot a\circ b$, is it the case that $2(a\circ b)\leq a^2\oplus b^2$? In this paper, we construct an example that answers the problem negatively.
Lammoglia, Sabine-Karen; Moeys, Julien; Barriuso, Enrique; Larsbo, Mats; Marín-Benito, Jesús-María; Justes, Eric; Alletto, Lionel; Ubertosi, Marjorie; Nicolardot, Bernard; Munier-Jolain, Nicolas; Mamy, Laure
2017-03-01
The current challenge in sustainable agriculture is to introduce new cropping systems that reduce pesticide use in order to limit ground- and surface-water contamination. However, it is difficult to carry out in situ experiments to assess the environmental impacts of pesticide use for all possible combinations of climate, crop, and soil; therefore, in silico tools are necessary. The objective of this work was to assess pesticide leaching in cropping systems by coupling the performance of a crop model (STICS) and of a pesticide fate model (MACRO). STICS-MACRO has the advantage of being able to simulate pesticide fate in complex cropping systems and to consider some agricultural practices, such as fertilization, mulch, or crop residue management, which cannot be accounted for with MACRO alone. The performance of STICS-MACRO was tested, without calibration, against measurements from two French experimental sites with contrasting soil and climate properties. The prediction of water percolation and pesticide concentrations with STICS-MACRO was satisfactory, but it varied with the pedoclimatic context. The performance of STICS-MACRO was shown to be similar to or better than that of MACRO. The improved simulation of crop growth allowed a better estimate of crop transpiration and therefore of the water balance. It also allowed a better estimate of pesticide interception by the crop, which was found to be crucial for the prediction of pesticide concentrations in water. STICS-MACRO is a promising new tool to improve the assessment of the environmental risks of pesticides used in cropping systems.
WE-G-207-02: Full Sequential Projection Onto Convex Sets (FS-POCS) for X-Ray CT Reconstruction
Energy Technology Data Exchange (ETDEWEB)
Liu, L; Han, Y [Tianjin University, Tianjin (China); Jin, M [University of Texas at Arlington, Arlington, TX (United States)
2015-06-15
Purpose: To develop an iterative reconstruction method for X-ray CT, in which the reconstruction can quickly converge to the desired solution with much reduced projection views. Methods: The reconstruction is formulated as a convex feasibility problem, i.e. the solution is an intersection of three convex sets: 1) data fidelity (DF) set – the L2 norm of the difference of observed projections and those from the reconstructed image is no greater than an error bound; 2) non-negativity of image voxels (NN) set; and 3) piecewise constant (PC) set - the total variation (TV) of the reconstructed image is no greater than an upper bound. The solution can be found by applying projection onto convex sets (POCS) sequentially for these three convex sets. Specifically, the algebraic reconstruction technique and setting negative voxels as zero are used for projection onto the DF and NN sets, respectively, while the projection onto the PC set is achieved by solving a standard Rudin, Osher, and Fatemi (ROF) model. The proposed method is named full sequential POCS (FS-POCS), which is tested using the Shepp-Logan phantom and the Catphan600 phantom and compared with two similar algorithms, TV-POCS and CP-TV. Results: Using the Shepp-Logan phantom, the root mean square error (RMSE) of reconstructed images changing along with the number of iterations is used as the convergence measurement. In general, FS-POCS converges faster than TV-POCS and CP-TV, especially with fewer projection views. FS-POCS can also achieve accurate reconstruction of cone-beam CT of the Catphan600 phantom using only 54 views, comparable to that of FDK using 364 views. Conclusion: We developed an efficient iterative reconstruction for sparse-view CT using full sequential POCS. The simulation and physical phantom data demonstrated the computational efficiency and effectiveness of FS-POCS.
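The sequential projections described in the abstract can be sketched on a toy system. This illustration (my own, not the authors' implementation) uses a Kaczmarz/ART sweep for the data-fidelity step and clamping for non-negativity, and omits the TV/ROF projection for brevity:

```python
import numpy as np

def art_sweep(x, A, b, relax=1.0):
    """One ART pass: project x toward each hyperplane a_i . x = b_i in turn."""
    for a_i, b_i in zip(A, b):
        x = x + relax * (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

def sequential_pocs(A, b, n_iter=200):
    """Alternate projections: data fidelity (ART sweep), then non-negativity."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = art_sweep(x, A, b)      # move toward the DF set
        x = np.maximum(x, 0.0)      # exact projection onto the NN set
    return x

# tiny consistent system whose solution x = (2, 1) is non-negative
A = np.array([[1.0, 1.0], [1.0, -1.0]])
b = np.array([3.0, 1.0])
x = sequential_pocs(A, b)
```

For a real CT problem, A would be the (sparse) system matrix and a third projection step solving the ROF model would follow the clamp.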
DEFF Research Database (Denmark)
Eriksson, Sophie E; Elgström, Erika; Bäck, Tom
2014-01-01
... for small, established tumors. A combination of such radionuclides may be successful in regimens of radioimmunotherapy. In this study, rats were treated by sequential administration of first a 177Lu-labeled antibody, followed by a 211At-labeled antibody 25 days later. METHODS: Rats bearing solid colon carcinoma tumors were treated with 400 MBq/kg body weight 177Lu-BR96. After 25 days, three groups of animals were given either 5 or 10 MBq/kg body weight of 211At-BR96 simultaneously with or without a blocking agent reducing halogen uptake in normal tissues. Control animals were not given any 211At-BR96. Myelotoxicity, body weight, tumor size, and development of metastases were monitored for 120 days. RESULTS: Tumors were undetectable in 90% of the animals on day 25, independent of treatment. Additional treatment with 211At-labeled antibodies did not reduce the proportion of animals developing metastases
The Progression of Sequential Reactions
Directory of Open Access Journals (Sweden)
Jack McGeachy
2010-01-01
Full Text Available Sequential reactions consist of linked reactions in which the product of the first reaction becomes the substrate of a second reaction. Sequential reactions occur in industrially important processes, such as the chlorination of methane. A generalized series of three sequential reactions was analyzed in order to determine the times at which each chemical species reaches its maximum. To determine the concentration of each species as a function of time, the differential rate laws for each species were solved. The solution of each gave the concentration curve of the chemical species. The concentration curves of species A1 and A2 possessed discrete maxima, which were determined through slope analysis. The concentration curve of the final product, A3, did not possess a discrete maximum, but rather approached a finite limit.
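For the first link of such a chain, A1 → A2 → A3 with first-order rate constants k1 and k2, the intermediate's concentration and the time of its maximum have a standard closed form. A short sketch (a generic illustration of consecutive first-order kinetics, not code from the article; the rate constants are assumed values):

```python
import math

def intermediate_conc(t, a0, k1, k2):
    """[A2](t) for A1 -> A2 -> A3 with [A1](0) = a0, assuming k1 != k2."""
    return a0 * k1 / (k2 - k1) * (math.exp(-k1 * t) - math.exp(-k2 * t))

def t_peak(k1, k2):
    """Time of the intermediate's maximum: set d[A2]/dt = 0."""
    return math.log(k2 / k1) / (k2 - k1)
```

Setting the derivative to zero gives k1 e^(-k1 t) = k2 e^(-k2 t), hence t_peak = ln(k2/k1)/(k2 - k1), consistent with the slope analysis described in the abstract.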
Transistor switching and sequential circuits
Sparkes, John J
1969-01-01
Transistor Switching and Sequential Circuits presents the basic ideas involved in the construction of computers, instrumentation, pulse communication systems, and automation. This book discusses the design procedure for sequential circuits. Organized into two parts encompassing eight chapters, this book begins with an overview of the ways on how to generate the types of waveforms needed in digital circuits, principally ramps, square waves, and delays. This text then considers the behavior of some simple circuits, including the inverter, the emitter follower, and the long-tailed pair. Other cha
Complementary sequential measurements generate entanglement
Coles, Patrick J.; Piani, Marco
2014-01-01
We present a paradigm for capturing the complementarity of two observables. It is based on the entanglement created by the interaction between the system observed and the two measurement devices used to measure the observables sequentially. Our main result is a lower bound on this entanglement and resembles well-known entropic uncertainty relations. Besides its fundamental interest, this result directly bounds the effectiveness of sequential bipartite operations—corresponding to the measurement interactions—for entanglement generation. We further discuss the intimate connection of our result with two primitives of information processing, namely, decoupling and coherent teleportation.
Robust sequential working memory recall in heterogeneous cognitive networks
Rabinovich, Mikhail I.; Sokolov, Yury; Kozma, Robert
2014-01-01
Psychiatric disorders are often caused by partial heterogeneous disinhibition in cognitive networks, controlling sequential and spatial working memory (SWM). Such dynamic connectivity changes suggest that the normal relationship between the neuronal components within the network deteriorates. As a result, competitive network dynamics is qualitatively altered. This dynamics defines the robust recall of the sequential information from memory and, thus, the SWM capacity. To understand pathological and non-pathological bifurcations of the sequential memory dynamics, here we investigate the model of recurrent inhibitory-excitatory networks with heterogeneous inhibition. We consider the ensemble of units with all-to-all inhibitory connections, in which the connection strengths are monotonically distributed at some interval. Based on computer experiments and studying the Lyapunov exponents, we observed and analyzed the new phenomenon—clustered sequential dynamics. The results are interpreted in the context of the winnerless competition principle. Accordingly, clustered sequential dynamics is represented in the phase space of the model by two weakly interacting quasi-attractors. One of them is similar to the sequential heteroclinic chain—the regular image of SWM, while the other is a quasi-chaotic attractor. Coexistence of these quasi-attractors means that the recall of the normal information sequence is intermittently interrupted by episodes with chaotic dynamics. We indicate potential dynamic ways for augmenting damaged working memory and other cognitive functions. PMID:25452717
Face Super-resolution with Non-negative Feature Basis Constraint
Institute of Scientific and Technical Information of China (English)
兰诚栋; 胡瑞敏; 韩镇; 卢涛
2011-01-01
Principal Component Analysis (PCA) is commonly used to represent human face images in face super-resolution, but the features extracted by PCA are holistic and difficult to interpret semantically. In order to synthesize a better super-resolution face image from the face image representation, we propose a face super-resolution algorithm with a non-negative feature basis constraint. The algorithm uses NMF to obtain a non-negative feature basis of the face sample images, and the target image is regularized by a Markov random field with a maximum a posteriori probability approach. Finally, the steepest descent method is used to optimize the non-negative feature basis coefficients of the high-resolution image. Experimental results show that, in both subjective and objective quality, the face super-resolution algorithm with the non-negative feature basis constraint performs better than PCA-based algorithms.
Sequential evidence accumulation in decision making
Directory of Open Access Journals (Sweden)
Daniel Hausmann
2008-03-01
Full Text Available Judgments and decisions under uncertainty are frequently linked to a prior sequential search for relevant information. In such cases, the subject has to decide when to stop the search for information. Evidence accumulation models from social and cognitive psychology assume an active and sequential information search until enough evidence has been accumulated to pass a decision threshold. In line with such theories, we conceptualize the evidence threshold as the "desired level of confidence" (DLC) of a person. This model is tested against a fixed stopping rule (one-reason decision making) and against the class of multi-attribute information-integrating models. A series of experiments using an information board for horse race betting demonstrates an advantage of the proposed model by measuring the individual DLC of each subject and confirming its correctness in two separate stages. In addition to a better understanding of the stopping rule (within the narrow framework of simple heuristics), the results indicate that individual aspiration levels might be a relevant factor when modelling decision making by task analysis of statistical environments.
Energy Technology Data Exchange (ETDEWEB)
Lee, Dong-Chang [CancerCare Manitoba, Winnipeg, MB (Canada); Jans, Hans; McEwan, Sandy; Riauka, Terence [Department of Oncology, University of Alberta, Edmonton, AB (Canada); Cross Cancer Institute, Alberta Health Services, Edmonton, AB (Canada); Martin, Wayne; Wieler, Marguerite [Division of Neurology, University of Alberta, Edmonton, AB (Canada)
2014-08-15
In this work, a class of non-negative matrix factorization (NMF) techniques known as alternating non-negative least squares, combined with the projected gradient method, is used to analyze twenty-five [11C]-DTBZ dynamic PET/CT brain datasets. For each subject, a two-factor model is assumed and two factors representing the striatum (factor 1) and the non-striatum (factor 2) tissues are extracted using the proposed NMF technique and commercially available factor analysis software “Pixies”. The extracted factor 1 and 2 curves represent the binding site of the radiotracer and describe the uptake and clearance of the radiotracer by soft tissues in the brain, respectively. The proposed NMF technique uses prior information about the dynamic data to obtain sample time-activity curves representing the striatum and the non-striatum tissues. These curves are then used for “warm” starting the optimization. Factor solutions from the two methods are compared graphically and quantitatively. In healthy subjects, radiotracer uptake by factors 1 and 2 is approximately 35–40% and 60–65%, respectively. The solutions are also used to develop a factor-based metric for the detection of early, untreated Parkinson's disease. The metric stratifies healthy subjects from suspected Parkinson's patients (based on the graphical method). The analysis shows that both techniques produce comparable results with similar computational time. The “semi-automatic” approach used by the NMF technique allows clinicians to manually set a starting condition for “warm” starting the optimization in order to facilitate control and efficient interaction with the data.
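Alternating non-negative least squares with projected gradient steps can be sketched in a few lines. This is a generic illustration of the factorization scheme, not the authors' code or the Pixies software; the fixed step size and iteration counts are assumptions:

```python
import numpy as np

def nmf_anls_pg(V, rank, n_outer=300, n_inner=10, step=0.02):
    """Factorize a non-negative matrix V (m x n) as W @ H with W, H >= 0,
    alternating projected-gradient descent on each factor while the other
    is held fixed."""
    rng = np.random.default_rng(0)
    m, n = V.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(n_outer):
        for _ in range(n_inner):                  # update H with W fixed
            grad_H = W.T @ (W @ H - V)
            H = np.maximum(H - step * grad_H, 0.0)   # gradient step + projection
        for _ in range(n_inner):                  # update W with H fixed
            grad_W = (W @ H - V) @ H.T
            W = np.maximum(W - step * grad_W, 0.0)
    return W, H
```

In the paper's setting, V would hold the dynamic PET voxel time-activity data and the rows of H the two factor curves; a "warm" start would replace the random initialization above.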
Jayanthi, Aditya; Coker, Christopher
2016-11-01
In the last decade, CFD simulations have transitioned from being used only to validate final designs to driving mainstream product development. However, there are still niche areas of application, such as oiling simulations, where traditional CFD simulation times are prohibitive for product development, forcing reliance on expensive experimental methods. In this paper, a unique example of a sprocket-chain simulation will be presented using nanoFluidX, a commercial SPH code developed by FluiDyna GmbH and Altair Engineering. The gridless nature of the SPH method has inherent advantages in areas of application with complex geometry, which pose severe challenges to classical finite-volume CFD methods due to complex moving geometries, moving meshes, and high resolution requirements leading to long simulation times. Simulation times using nanoFluidX can be reduced from weeks to days, allowing the flexibility to run more simulations for mainstream product development. The example problem under consideration is a classical multiphysics problem, and a sequentially coupled solution of MotionSolve and nanoFluidX will be presented. This abstract is replacing DFD16-2016-000045.
Sensitivity validation technique for sequential kriging metamodel
Energy Technology Data Exchange (ETDEWEB)
Huh, Seung Kyun; Lee, Jin Min; Lee, Tae Hee [Hanyang Univ., Seoul (Korea, Republic of)
2012-08-15
Metamodels have been developed with a variety of design optimization techniques in the field of structural engineering over the last decade because they are efficient, show excellent prediction performance, and provide easy interconnections into design frameworks. To construct a metamodel, a sequential procedure involving steps such as the design of experiments, metamodeling techniques, and validation techniques is performed. Because validation techniques can measure the accuracy of the metamodel, the number of presampled points for an accurate kriging metamodel is decided by the validation technique in the sequential kriging metamodel. Because an interpolation model such as the kriging metamodel based on computer experiments passes through the responses at presampled points, additional analyses or reconstructions of the metamodel are required to measure its accuracy if existing validation techniques are applied. In this study, we suggest a sensitivity validation that does not require additional analyses or reconstructions of the metamodel. Fourteen two-dimensional mathematical problems and an engineering problem are illustrated to show the feasibility of the suggested method.
A Bayesian sequential processor approach to spectroscopic portal system decisions
Energy Technology Data Exchange (ETDEWEB)
Sale, K; Candy, J; Breitfeller, E; Guidry, B; Manatt, D; Gosnell, T; Chambers, D
2007-07-31
The development of faster more reliable techniques to detect radioactive contraband in a portal type scenario is an extremely important problem especially in this era of constant terrorist threats. Towards this goal the development of a model-based, Bayesian sequential data processor for the detection problem is discussed. In the sequential processor each datum (detector energy deposit and pulse arrival time) is used to update the posterior probability distribution over the space of model parameters. The nature of the sequential processor approach is that a detection is produced as soon as it is statistically justified by the data rather than waiting for a fixed counting interval before any analysis is performed. In this paper the Bayesian model-based approach, physics and signal processing models and decision functions are discussed along with the first results of our research.
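The core idea of updating a posterior with each datum and declaring as soon as the evidence justifies it can be sketched with a two-hypothesis Poisson-count example (a generic illustration, not the processor described in the report; the count rates, prior, and threshold are assumed values):

```python
import math

def sequential_bayes(counts, rate_bg, rate_src, prior=0.5, threshold=0.95):
    """Update P(source | data) after each Poisson count datum; declare as soon
    as the posterior crosses a threshold rather than after a fixed interval.
    Returns (decision, number of data used)."""
    p = prior
    for k, c in enumerate(counts, 1):
        # Poisson likelihoods under background-only and source-present models
        l_bg  = math.exp(-rate_bg)  * rate_bg  ** c / math.factorial(c)
        l_src = math.exp(-rate_src) * rate_src ** c / math.factorial(c)
        num = p * l_src
        p = num / (num + (1 - p) * l_bg)      # Bayes update on this datum
        if p >= threshold:
            return 'source', k                # detection declared at datum k
        if p <= 1 - threshold:
            return 'background', k
    return 'undecided', len(counts)
```

A full portal processor would use the joint energy-deposit/arrival-time likelihood over a space of source models rather than a single count rate, but the stop-as-soon-as-justified structure is the same.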
Nonlinear interferometry approach to photonic sequential logic
Mabuchi, Hideo
2011-01-01
Motivated by rapidly advancing capabilities for extensive nanoscale patterning of optical materials, I propose an approach to implementing photonic sequential logic that exploits circuit-scale phase coherence for efficient realizations of fundamental components such as a NAND-gate-with-fanout and a bistable latch. Kerr-nonlinear optical resonators are utilized in combination with interference effects to drive the binary logic. Quantum-optical input-output models are characterized numerically using design parameters that yield attojoule-scale energy separation between the latch states.
Generalised sequential crossover of words and languages
Jeganathan, L; Sengupta, Ritabrata
2009-01-01
In this paper, we propose a new operation, Generalised Sequential Crossover (GSCO) of words, which is in some sense an abstract model of the crossing over of chromosomes in living organisms. We extend GSCO iteratively over a language $L$ ($GSCO^*(L)$), as well as iterated GSCO over two languages ($GSCO^*(L_1,L_2)$). Our study reveals that $GSCO^*(L)$ is a subclass of the regular languages for any $L$. We compare the different classes of GSCO languages with the prominent sub-regular classes.
Institute of Scientific and Technical Information of China (English)
黎丹
2016-01-01
Based on the DK sequential reciprocity game model, and under generalized assumptions, this paper constructs a two-stage sequential reciprocity model of cooperative behavior between an enterprise and a new employee under the condition that the enterprise provides organizational support. The research shows: 1) When new employees are fully rational, the enterprise will not provide organizational support and new employees will not work hard, so the two sides cannot establish a mutually beneficial cooperation. 2) Considering the reciprocal motivation of new employees, if the enterprise does not provide organizational support, new employees will choose not to work hard, and the two sides again cannot establish a mutually beneficial cooperation. 3) If the new employee has reciprocal motivation and the enterprise chooses to provide organizational support, then there are three kinds of sequential reciprocity equilibria.
Battle Simulation Model of Causal Traceability Based on Sequential Event Graphs
Institute of Scientific and Technical Information of China (English)
王伟; 赵晓哲; 王勃
2016-01-01
Causal analysis of combat simulation results can deepen the understanding of the operational plan and provide support for command decision-making, making it an important part of simulation. To enable causal analysis of simulation results, this paper first expounds the concepts of abduction and causality and analyzes the requirements on simulation output data needed to support abduction; it then builds a simulation output data model with events as the core descriptive unit. Based on the basic concept of the event graph, an abduction model based on sequential event graphs is proposed. Finally, an algorithm is given for mapping simulation output data to a sequential event graph.
Can post-error dynamics explain sequential reaction time patterns?
Directory of Open Access Journals (Sweden)
Stephanie Goldfarb
2012-07-01
We investigate human error dynamics in sequential two-alternative choice tasks. When subjects repeatedly discriminate between two stimuli, their error rates and mean reaction times (RTs) systematically depend on prior sequences of stimuli. We analyze these sequential effects on RTs, separating error and correct responses, and identify a sequential RT tradeoff: a sequence of stimuli which yields a relatively fast RT on error trials will produce a relatively slow RT on correct trials and vice versa. We reanalyze previous data and acquire and analyze new data in a choice task with stimulus sequences generated by a first-order Markov process having unequal probabilities of repetitions and alternations. We then show that relationships among these stimulus sequences and the corresponding RTs for correct trials, error trials, and averaged over all trials are significantly influenced by the probability of alternations; these relationships have not been captured by previous models. Finally, we show that simple, sequential updates to the initial condition and thresholds of a pure drift diffusion model can account for the trends in RT for correct and error trials. Our results suggest that error-based parameter adjustments are critical to modeling sequential effects.
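The closing claim, that sequential updates to a drift-diffusion model's starting point and threshold can reproduce post-error trends, can be sketched in a few lines. The specific update rules below (error-triggered threshold inflation, a starting point biased toward the last stimulus) are illustrative choices, not the authors' fitted model:

```python
import random

def ddm_trial(drift, threshold, start, rng, noise=1.0, dt=0.01):
    """One drift-diffusion trial: accumulate noisy evidence from `start`
    until it crosses +threshold (choice 1) or -threshold (choice 0).
    Returns (choice, reaction_time)."""
    x, t = start, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += dt
    return (1 if x > 0 else 0), t

def run_sequence(stimuli, rng, base_threshold=1.0):
    """Sequential parameter updates between trials: inflate the threshold
    after an error (post-error slowing) and bias the next starting point
    toward the stimulus just seen."""
    threshold, start, results = base_threshold, 0.0, []
    for s in stimuli:
        drift = 1.5 if s == 1 else -1.5
        choice, rt = ddm_trial(drift, threshold, start, rng)
        error = choice != s
        threshold = base_threshold * (1.3 if error else 1.0)
        start = 0.2 * threshold * (1 if s == 1 else -1)
        results.append((choice, rt, error))
    return results

rng = random.Random(42)
trials = run_sequence([1, 1, 0, 1, 0, 0, 1, 1], rng)
```

Averaging such simulated RTs separately over error and correct trials, per stimulus history, is how the sequential RT tradeoff above would be probed in this toy setting.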
Sequential Detection of Digital Watermarking
Institute of Scientific and Technical Information of China (English)
LI Li; YU Yu-lian; WANG Pei
2005-01-01
This paper analyzes a new watermark detection paradigm with double detection thresholds based on sequential hypothesis testing. A joint design of watermark encoding and detection is proposed. The paradigm has good immunity to noisy-signal attacks and a high detection probability. Experiments show that the algorithm can detect watermarks about 66% faster than popular detectors, which could have significant impact on applications such as video watermark detection and watermark searching in large databases of digital content.
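The double-threshold sequential test described above is, at its core, Wald's sequential probability ratio test (SPRT). A generic sketch under a Gaussian correlation-statistic assumption (the paper's joint encoder/detector design is not reproduced here; all parameters are illustrative):

```python
import math

def sprt_detect(samples, mu1=1.0, sigma=1.0, alpha=1e-3, beta=1e-3):
    """SPRT with double thresholds: H0 = no watermark (mean 0) vs
    H1 = watermark present (mean mu1), for Gaussian correlator outputs
    with known sigma. Returns ('present'|'absent'|'undecided', n_used)."""
    upper = math.log((1.0 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1.0 - alpha))   # accept H0 at or below this
    llr = 0.0
    for n, x in enumerate(samples, 1):
        # log-likelihood ratio increment of one Gaussian observation
        llr += (mu1 * x - mu1 ** 2 / 2.0) / sigma ** 2
        if llr >= upper:
            return "present", n
        if llr <= lower:
            return "absent", n
    return "undecided", len(samples)

# Idealized correlator outputs: exactly at the H1 mean / at the H0 mean.
marked = sprt_detect([1.0] * 50)
unmarked = sprt_detect([0.0] * 50)
```

The early-stopping behavior (a decision after only a handful of samples rather than a fixed-length correlation) is what makes sequential detection faster than fixed-sample detectors.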
Mining Emerging Sequential Patterns for Activity Recognition in Body Sensor Networks
DEFF Research Database (Denmark)
Gu, Tao; Wang, Liang; Chen, Hanhua
2010-01-01
Body Sensor Networks offer many applications in healthcare, well-being and entertainment. One of the emerging applications is recognizing activities of daily living. In this paper, we introduce a novel knowledge pattern named Emerging Sequential Pattern (ESP), a sequential pattern that discovers significant class differences, to recognize both simple (i.e., sequential) and complex (i.e., interleaved and concurrent) activities. Based on ESPs, we build our complex activity models directly upon the sequential model to recognize both activity types. We conduct comprehensive empirical studies to evaluate…
Nonspherical supernova remnants. IV - Sequential explosions in OB associations
Tenorio-Tagle, G.; Bodenheimer, P.; Rozyczka, M.
1987-01-01
Multisupernova remnants, driven by sequential supernova explosions in OB associations, are modelled by means of two-dimensional hydrodynamical calculations. It is shown that due to the Rayleigh-Taylor instability the remnants quickly evolve into highly irregular structures. A critical evaluation of the multisupernova model as an explanation for supershells is given.
Vibration Isolation and Transmissibility Characteristics of Passive Sequential Damper
Directory of Open Access Journals (Sweden)
M.S. Patil
2004-01-01
This paper presents a half-car model (4 degrees of freedom) employing a nonlinear passive sequential damper. The vibration isolation and transmissibility effect on the vehicle's centre of gravity (C.G.) has been studied. The results have been compared for transmissibility, displacement, and velocity transient response for a half-car model having a nonlinear passive sequential hydropneumatic damper under different terrain excitations.
Directory of Open Access Journals (Sweden)
Miao Zhu
Combination of percutaneous microwave ablation (PMWA) and intravenous injection of 131I-hypericin (IIIH) may bear potential as a minimally invasive treatment for tumors. The objective of this study was to assess the effect of PMWA and IIIH on breast tumor growth. Ten New Zealand White rabbits bearing VX2 breast carcinomas were randomly divided into two groups (5 per group) and treated with PMWA followed by IIIH, or IIIH alone. The IIIH activity was evaluated using planar scintigraphy, autoradiography and biodistribution analysis. The maximum effective safe dose of IIIH was determined in 48 rabbits with VX2 breast tumors, randomized into six groups (n=8 per group). Subsequently, a further 75 rabbits bearing VX2 breast solid tumors were randomly divided into five groups (15 per group) and treated as follows: A, no treatment; B, PMWA alone; C, IIIH alone; D, PMWA+IIIH×1 (at 8 h post-PMWA); and E, PMWA+IIIH×2 (at 8 h and at 8 days post-PMWA). The therapeutic effect was assessed by measurement of tumor size, positron emission tomography/computed tomography (PET/CT) scans, liver and renal function tests, and Kaplan-Meier survival analysis. The planar scintigraphy findings suggested a significant uptake of 131I in necrotic tumor tissue. The autoradiography gray scales indicated higher selective uptake of IIIH by necrotic tissue, with significant differences between the groups with and without necrotic tumor tissue (P<0.05). The maximum effective safe dose of IIIH was 1 mCi/kg. The PET/CT scans and tumor size measurements suggested improvements in the treatment groups at all time points (P<0.01). Significant differences were detected among Groups A, B, D and E (P<0.05). Lower levels of lung metastasis were detected in Groups D and E (P<0.05). There were no abnormalities in liver and renal function tests or other reported side effects. IIIH exhibited selective uptake by necrotic tumor tissue. Sequential therapy involving PMWA
Information Geometry and Sequential Monte Carlo
Sim, Aaron; Stumpf, Michael P H
2012-01-01
This paper explores the application of methods from information geometry to the sequential Monte Carlo (SMC) sampler. In particular the Riemannian manifold Metropolis-adjusted Langevin algorithm (mMALA) is adapted for the transition kernels in SMC. Similar to its function in Markov chain Monte Carlo methods, the mMALA is a fully adaptable kernel which allows for efficient sampling of high-dimensional and highly correlated parameter spaces. We set up the theoretical framework for its use in SMC with a focus on the application to the problem of sequential Bayesian inference for dynamical systems as modelled by sets of ordinary differential equations. In addition, we argue that defining the sequence of distributions on geodesics optimises the effective sample sizes in the SMC run. We illustrate the application of the methodology by inferring the parameters of simulated Lotka-Volterra and Fitzhugh-Nagumo models. In particular we demonstrate that compared to employing a standard adaptive random walk kernel, the SM...
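To make the SMC sampler above concrete, here is a minimal tempered SMC loop. For brevity it uses a plain random-walk Metropolis move kernel where the paper adapts the gradient- and metric-aware mMALA kernel; the target and all settings below are illustrative:

```python
import math
import random

def smc_sampler(logp, n=500, steps=20, seed=0):
    """Tempered sequential Monte Carlo: particles start from a N(0,1)
    prior and are moved to the target exp(logp) through intermediate
    densities pi_k ∝ prior^(1-b_k) * target^(b_k), with b_k = k/steps."""
    rng = random.Random(seed)
    log_prior = lambda x: -x * x / 2.0
    particles = [rng.gauss(0.0, 1.0) for _ in range(n)]
    for k in range(1, steps + 1):
        b, b_prev = k / steps, (k - 1) / steps
        # reweight by the tempering increment, then multinomial resample
        w = [math.exp((b - b_prev) * (logp(x) - log_prior(x))) for x in particles]
        particles = rng.choices(particles, weights=w, k=n)
        # one Metropolis random-walk move per particle, targeting pi_k
        log_pi = lambda x: (1.0 - b) * log_prior(x) + b * logp(x)
        moved = []
        for x in particles:
            y = x + 0.5 * rng.gauss(0.0, 1.0)
            accept = rng.random() < math.exp(min(0.0, log_pi(y) - log_pi(x)))
            moved.append(y if accept else x)
        particles = moved
    return particles

# Target: a N(3, 1) density; the particle cloud should end up around 3.
pts = smc_sampler(lambda x: -((x - 3.0) ** 2) / 2.0)
mean = sum(pts) / len(pts)
```

Replacing the random-walk proposal with a gradient-informed (mMALA-style) proposal is exactly the substitution the paper studies; the reweight/resample/move skeleton stays the same.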
DEFF Research Database (Denmark)
Gu, Tao; Wu, Zhanqing; Tao, Xianping
2009-01-01
upon the training dataset for complex activities, we build our activity models by mining a set of Emerging Patterns from the sequential activity trace only and apply our models in recognizing sequential, interleaved and concurrent activities. We conduct our empirical studies in a real smart home...
Chrétien, Stéphane; Guyeux, Christophe; Conesa, Bastien; Delage-Mouroux, Régis; Jouvenot, Michèle; Huetz, Philippe; Descôtes, Françoise
2016-08-31
Non-negative matrix factorization has become an essential tool for feature extraction in a wide spectrum of applications. In the present work, our objective is to extend the applicability of the method to the case of missing and/or corrupted data due to outliers. An essential property for missing data imputation and detection of outliers is that the uncorrupted data matrix is low rank, i.e. has only a small number of degrees of freedom. We devise a new version of the Bregman proximal idea which preserves nonnegativity and combine it with the augmented Lagrangian approach for simultaneous reconstruction of the features of interest and detection of the outliers using a sparsity-promoting ℓ1 penalty. An application to the analysis of gene expression data of patients with bladder cancer is finally proposed.
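The low-rank-plus-sparse-outlier idea can be illustrated with a much simpler alternating scheme than the paper's Bregman-proximal / augmented-Lagrangian method: multiplicative NMF updates on an outlier-corrected matrix, followed by an ℓ1 soft-threshold on the residual. All settings below are illustrative:

```python
import numpy as np

def robust_nmf(X, rank, lam=0.5, iters=300, seed=0):
    """Decompose X ≈ W @ H + S with W, H >= 0 and S sparse (the outliers).
    Simplified alternating scheme, not the paper's algorithm."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, rank)) + 0.1
    H = rng.random((rank, n)) + 0.1
    S = np.zeros_like(X)
    eps = 1e-9
    for _ in range(iters):
        R = np.maximum(X - S, 0.0)        # outlier-corrected data, kept nonneg
        W *= (R @ H.T) / (W @ H @ H.T + eps)   # multiplicative updates
        H *= (W.T @ R) / (W.T @ W @ H + eps)
        E = X - W @ H
        S = np.sign(E) * np.maximum(np.abs(E) - lam, 0.0)  # l1 soft-threshold
    return W, H, S

# Low-rank nonnegative data with two large additive outliers.
rng = np.random.default_rng(1)
X_clean = rng.random((20, 2)) @ rng.random((2, 15))
X = X_clean.copy()
X[3, 4] += 5.0
X[10, 7] += 5.0
W, H, S = robust_nmf(X, rank=2)
rel_err = np.linalg.norm(X_clean - W @ H) / np.linalg.norm(X_clean)
```

The soft-threshold step makes S absorb exactly the residual entries too large for the low-rank nonnegative model, which is the same separation of "features" and "outliers" that the paper's ℓ1 penalty enforces.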
On sequential countably compact topological semigroups
Gutik, Oleg V.; Repovš, Dušan
2008-01-01
We study topological and algebraic properties of sequential countably compact topological semigroups similar to compact topological semigroups. We prove that a sequential countably compact topological semigroup does not contain the bicyclic semigroup. Also we show that the closure of a subgroup in a sequential countably compact topological semigroup is a topological group, that the inversion in a Clifford sequential countably compact topological semigroup is continuous and we prove the analogue of the Rees-Suschkewitsch Theorem for simple regular sequential countably compact topological semigroups.
Energy Technology Data Exchange (ETDEWEB)
Edwards, Lloyd [USDA Forest Service, Southern Research Station; Parresol, Bernie [USDA Forest Service, Southern Research Station
2012-09-17
The primary research objective of the project is to determine an optimum model to spatially interpolate point-derived tree site index (SI). This optimum model will use relevant data from 635 measured sample points to create a continuous 40-meter SI raster layer of the entire study extent.
Sequential pattern formation governed by signaling gradients
Jörg, David J; Jülicher, Frank
2016-01-01
Rhythmic and sequential segmentation of the embryonic body plan is a vital developmental patterning process in all vertebrate species. However, a theoretical framework capturing the emergence of dynamic patterns of gene expression from the interplay of cell oscillations with tissue elongation and shortening and with signaling gradients, is still missing. Here we show that a set of coupled genetic oscillators in an elongating tissue that is regulated by diffusing and advected signaling molecules can account for segmentation as a self-organized patterning process. This system can form a finite number of segments and the dynamics of segmentation and the total number of segments formed depend strongly on kinetic parameters describing tissue elongation and signaling molecules. The model accounts for existing experimental perturbations to signaling gradients, and makes testable predictions about novel perturbations. The variety of different patterns formed in our model can account for the variability of segmentatio...
Sequential pattern formation governed by signaling gradients
Jörg, David J.; Oates, Andrew C.; Jülicher, Frank
2016-10-01
Rhythmic and sequential segmentation of the embryonic body plan is a vital developmental patterning process in all vertebrate species. However, a theoretical framework capturing the emergence of dynamic patterns of gene expression from the interplay of cell oscillations with tissue elongation and shortening and with signaling gradients, is still missing. Here we show that a set of coupled genetic oscillators in an elongating tissue that is regulated by diffusing and advected signaling molecules can account for segmentation as a self-organized patterning process. This system can form a finite number of segments and the dynamics of segmentation and the total number of segments formed depend strongly on kinetic parameters describing tissue elongation and signaling molecules. The model accounts for existing experimental perturbations to signaling gradients, and makes testable predictions about novel perturbations. The variety of different patterns formed in our model can account for the variability of segmentation between different animal species.
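A toy version of the picture above, one-dimensional phase oscillators slowing toward an arrest front that freezes their phase, already produces a finite number of segments. This is an illustrative caricature, not the paper's advected-gradient model, and all parameters are invented:

```python
import math

def segment_pattern(n_cells=200, dt=0.01, t_end=60.0):
    """Phase oscillators along a 1-D axis; intrinsic frequency decays
    with distance ahead of an arrest front that sweeps the axis once.
    A cell's phase is frozen when the front passes it, and segment
    boundaries are read out where the frozen phase wraps through 2*pi."""
    phase = [0.0] * n_cells
    frozen = [False] * n_cells
    front_speed = n_cells / t_end
    t = 0.0
    while t < t_end:
        front = front_speed * t
        for i in range(n_cells):
            if not frozen[i]:
                if i < front:
                    frozen[i] = True   # cell arrests with its current phase
                else:
                    # frequency falls off with distance ahead of the front
                    omega = 2.0 * math.exp(-(i - front) / 40.0)
                    phase[i] += omega * dt
        t += dt
    # count phase wraps in the frozen pattern = number of boundaries
    boundaries = sum(
        1 for i in range(1, n_cells)
        if (phase[i] % (2 * math.pi)) < (phase[i - 1] % (2 * math.pi)) - math.pi
    )
    return phase, boundaries

phase, boundaries = segment_pattern()
```

Varying the front speed or the frequency decay length changes how many stripes are laid down, mirroring the paper's finding that segment number depends strongly on elongation and gradient kinetics.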
Sequential Beamforming Synthetic Aperture Imaging
DEFF Research Database (Denmark)
Kortbek, Jacob; Jensen, Jørgen Arendt; Gammelmark, Kim Løkke
2013-01-01
Synthetic aperture sequential beamforming (SASB) is a novel technique which allows synthetic aperture beamforming to be implemented on a system with restricted complexity, and without storing RF data. The objective is to improve lateral resolution and obtain a more depth-independent resolution… and a range-independent lateral resolution is obtained. The SASB method has been investigated using simulations in Field II and by off-line processing of data acquired with a commercial scanner. The lateral resolution increases with a decreasing F#. Grating lobes appear if F# ≤ 2 for a linear array with λ-pitch…
An Intuitionistic Epistemic Logic for Sequential Consistency on Shared Memory
Hirai, Yoichi
In the celebrated Gödel Prize winning papers, Herlihy, Shavit, Saks and Zaharoglou gave topological characterization of waitfree computation. In this paper, we characterize waitfree communication logically. First, we give an intuitionistic epistemic logic k∨ for asynchronous communication. The semantics for the logic k∨ is an abstraction of Herlihy and Shavit's topological model. In the same way Kripke model for intuitionistic logic informally describes an agent increasing its knowledge over time, the semantics of k∨ describes multiple agents passing proofs around and developing their knowledge together. On top of the logic k∨, we give an axiom type that characterizes sequential consistency on shared memory. The advantage of intuitionistic logic over classical logic then becomes apparent as the axioms for sequential consistency are meaningless for classical logic because they are classical tautologies. The axioms are similar to the axiom type for prelinearity (ϕ ⊃ ψ) ∨ (ψ ⊃ ϕ). This similarity reflects the analogy between sequential consistency for shared memory scheduling and linearity for Kripke frames: both require total order on schedules or models. Finally, under sequential consistency, we give soundness and completeness between a set of logical formulas called waitfree assertions and a set of models called waitfree schedule models.
Retailers and consumers in sequential auctions of collectibles
DEFF Research Database (Denmark)
Vincent Lyk-Jensen, Stéphanie; Chanel, Olivier
2007-01-01
We analyse an independent private-value model, where heterogeneous bidders compete for objects sold in sequential second-price auctions. In this heterogeneous game, bidders may have differently distributed valuations, and some have multi-unit demand with decreasing marginal values (retailers); ot...
Naked exclusion in the lab : The case of sequential contracting
Boone, J.; Müller, W.; Suetens, S.
2014-01-01
In the context of the naked exclusion model of Rasmusen, Ramseyer and Wiley [1991] and Segal and Whinston [2000b], we examine whether sequential contracting is more conducive to exclusion in the lab, and whether it is cheaper for the incumbent than simultaneous contracting. We find that an incumbent
Yoon, Sang-Young; Ko, Jeonghan; Jung, Myung-Chul
2016-07-01
The aim of this study is to suggest a job rotation schedule by developing a mathematical model that reduces the cumulative workload caused by successive use of the same body region. Workload assessment using rapid entire body assessment (REBA) was performed for the model in three automotive assembly lines (chassis, trim, and finishing) to identify which body parts were exposed to relatively high workloads at each workstation. The workloads were incorporated into the model to develop a job rotation schedule. The proposed schedules prevent successive exposure of the same body region to high workloads and minimize between-worker variance in cumulative daily workload, whereas under no job rotation and serial job rotation some workers were successively assigned to high-workload workstations. This model would help reduce the potential for work-related musculoskeletal disorders (WMSDs) without additional engineering cost, although it may require more computational time and relatively complex job rotation sequences. Copyright © 2016 Elsevier Ltd and The Ergonomics Society. All rights reserved.
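The scheduling idea, rotating workers so that nobody repeats the highest-workload workstation in consecutive periods while equalizing cumulative workload, can be sketched as a brute-force search on a toy instance. The station loads below are invented, not the study's REBA scores:

```python
from itertools import permutations

def best_rotation(loads, periods=3):
    """Pick one worker-to-station permutation per rotation period so that
    (a) no worker works the highest-load station twice in a row and
    (b) the variance of cumulative workload across workers is minimized.
    Brute-force search, fine for a tiny illustrative instance."""
    n = len(loads)
    high = max(range(n), key=lambda s: loads[s])
    best = None

    def search(schedule, cum):
        nonlocal best
        if len(schedule) == periods:
            mean = sum(cum) / n
            var = sum((c - mean) ** 2 for c in cum) / n
            if best is None or var < best[0]:
                best = (var, list(schedule))
            return
        for perm in permutations(range(n)):
            # perm[w] = station assigned to worker w in this period
            if schedule and any(
                perm[w] == high and schedule[-1][w] == high for w in range(n)
            ):
                continue  # would repeat the high-load station back to back
            search(schedule + [perm],
                   [c + loads[perm[w]] for w, c in enumerate(cum)])

    search([], [0.0] * n)
    return best

var, schedule = best_rotation([8.0, 4.0, 2.0])
```

For three workers and three stations a Latin-square-like rotation gives every worker the same cumulative load, so the optimal between-worker variance is zero; real instances would need the paper's mathematical-programming formulation rather than enumeration.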
King, Daniel W.; King, Lynda A.; McArdle, John J.; Shalev, Arieh Y.; Doron-LaMarca, Susan
2009-01-01
Depression and posttraumatic stress disorder (PTSD) are highly comorbid conditions that may arise following exposure to psychological trauma. This study examined their temporal sequencing and mutual influence using bivariate latent difference score structural equation modeling. Longitudinal data from 182 emergency room patients revealed level of…
Thiem, A.; Schlink, U.; Pan, X.-C.; Hu, M.; Peters, A.; Wiedensohler, A.; Breitner, S.; Cyrys, J.; Wehner, B.; Rösch, C.; Franck, U.
2012-05-01
Increasing traffic density and a changing car fleet on the one hand, as well as various reduction measures on the other hand, may influence the composition of the particle population and, hence, the health risks for residents of megacities like Beijing. A suitable tool for identification and quantification of source group-related particle exposure compositions is desirable in order to derive optimal adaptation and reduction strategies, and is therefore presented in this paper. Particle number concentrations have been measured with high time and space resolution at an urban background monitoring site in Beijing, China, during 2004-2008. In this study a new pattern recognition procedure based on non-negative matrix factorization (NMF) was introduced to extract characteristic diurnal air pollution patterns of particle number and volume size distributions for the study period. Initialization and weighting strategies for NMF applications were carefully considered and a scaling procedure for ranking of obtained patterns was implemented. In order to account for varying particle sizes in the full diameter range [3 nm; 10 μm], two separate NMF applications were performed: (a) for diurnal particle number concentration data (NMF-N) and (b) for volume concentration data (NMF-V). Five particle number concentration-related NMF-N factors were assigned to patterns mainly describing the development of ultrafine (particle diameter Dp < 100 nm) as well as fine particles (Dp < 2.5 μm), since absolute number concentrations are highest in these diameter ranges. The factors are classified into primary and secondary sources. Primary sources mostly involved anthropogenic emission sources such as traffic emissions or emissions of nearby industrial plants, whereas secondary sources involved new particle formation and accumulation (particle growth) processes. For the NMF-V application the five extracted factors mainly described coarse particle (2.5 μm < Dp < 10 μm) variations
Visual tracker using sequential bayesian learning: discriminative, generative, and hybrid.
Lei, Yun; Ding, Xiaoqing; Wang, Shengjin
2008-12-01
This paper presents a novel solution to track a visual object under changes in illumination, viewpoint, pose, scale, and occlusion. Under the framework of sequential Bayesian learning, we first develop a discriminative model-based tracker with a fast relevance vector machine algorithm, and then, a generative model-based tracker with a novel sequential Gaussian mixture model algorithm. Finally, we present a three-level hierarchy to investigate different schemes to combine the discriminative and generative models for tracking. The presented hierarchical model combination contains the learner combination (at level one), classifier combination (at level two), and decision combination (at level three). The experimental results with quantitative comparisons performed on many realistic video sequences show that the proposed adaptive combination of discriminative and generative models achieves the best overall performance. Qualitative comparison with some state-of-the-art methods demonstrates the effectiveness and efficiency of our method in handling various challenges during tracking.
Lieshout, M.N.M. van
2008-01-01
We advocate the use of Markov sequential object processes for tracking a variable number of moving objects through video frames with a view towards depth calculation. A regression model based on a sequential object process quantifies goodness of fit; regularization terms are incorporated to control
DEFF Research Database (Denmark)
Hovgaard, Tobias Gybel; Larsen, Lars F. S.; Jørgensen, John Bagterp
2012-01-01
We consider the optimization of power set-points to a large number of wind turbines arranged within close vicinity of each other in a wind farm. The goal is to maximize the total electric power extracted from the wind, taking the wake effects that couple the individual turbines in the farm into account. For any mean wind speed, turbulence intensity, and direction we find the optimal static operating points for the wind farm. We propose an iterative optimization scheme to achieve this goal. When the complicated, nonlinear dynamics of the aerodynamics in the turbines and of the fluid dynamics describing the turbulent wind fields' propagation through the farm are included in a highly detailed black-box model, numerical results for any given values of the parameter sets can easily be evaluated. However, analytic expressions for model representation in the optimization algorithms might be hard…
DEFF Research Database (Denmark)
Hovgaard, Tobias Gybel; Larsen, Lars F. S.; Jørgensen, John Bagterp
2012-01-01
from the wind farm model, enabling us to use a very simple linear relationship for describing the turbine interactions. In addition, we allow individual turbines to be turned on or off, introducing integer variables into the optimization problem. We solve this within the same framework of iterative… is far superior to a more naive distribution scheme. We employ a fast convex quadratic programming solver to carry out the iterations in the range of microseconds for even large wind farms.
Soliman, Moomen; Eldyasti, Ahmed
2017-03-01
Recently, partial nitrification has been adopted widely, either for the nitrite shunt process or as an intermediate nitrite generation step for the Anammox process. However, partial nitrification has been hindered by the complexity of maintaining stable nitrite accumulation at high nitrogen loading rates (NLR), which affects the feasibility of the process for high nitrogen content wastewater. Thus, the operational data of a lab-scale SBR performing complete partial nitrification as the first step of a nitrite shunt process at NLRs of 0.3-1.2 kg/(m³·d) have been used to calibrate and validate a process model developed using BioWin® in order to describe the long-term dynamic behavior of the SBR. Moreover, an identifiability analysis step has been introduced into the calibration protocol to eliminate the need for respirometric analysis in SBR models. The calibrated model was able to accurately predict the daily effluent ammonia, nitrate, nitrite and alkalinity concentrations and pH under all operational conditions.
Composite SAR imaging using sequential joint sparsity
Sanders, Toby; Gelb, Anne; Platte, Rodrigo B.
2017-06-01
This paper investigates accurate and efficient ℓ1 regularization methods for generating synthetic aperture radar (SAR) images. Although ℓ1 regularization algorithms are already employed in SAR imaging, practical and efficient implementation in terms of real time imaging remains a challenge. Here we demonstrate that fast numerical operators can be used to robustly implement ℓ1 regularization methods that are as or more efficient than traditional approaches such as back projection, while providing superior image quality. In particular, we develop a sequential joint sparsity model for composite SAR imaging which naturally combines the joint sparsity methodology with composite SAR. Our technique, which can be implemented using standard, fractional, or higher order total variation regularization, is able to reduce the effects of speckle and other noisy artifacts with little additional computational cost. Finally we show that generalizing total variation regularization to non-integer and higher orders provides improved flexibility and robustness for SAR imaging.
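The ℓ1-regularized inversion such methods build on can be sketched with plain ISTA on a generic sparse-recovery problem; the paper's (higher-order) total-variation and joint-sparsity couplings are omitted, and the measurement model below is synthetic:

```python
import numpy as np

def ista(A, b, lam=0.1, iters=500):
    """Iterative shrinkage-thresholding for
    min_x 0.5 * ||A x - b||^2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)              # gradient of the data-fidelity term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 100))         # underdetermined forward operator
x_true = np.zeros(100)
x_true[[5, 40, 77]] = [3.0, -2.0, 1.5]     # sparse "scene"
b = A @ x_true + 0.01 * rng.standard_normal(60)
x_hat = ista(A, b, lam=0.2)
```

Swapping the ℓ1 norm of x for the ℓ1 norm of a (fractional-order) difference operator applied to x turns this into the total-variation family the paper generalizes.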
Structural features of sequential weak measurements
Diósi, Lajos
2016-07-01
We discuss the abstract structure of sequential weak measurement (WM) of general observables. In all orders, the sequential WM correlations without postselection yield the corresponding correlations of the Wigner function, offering direct quantum tomography through the moments of the canonical variables. Correlations in spin-1/2 sequential weak measurements coincide with those in strong measurements; they are constrained kinematically, and they are equivalent to single measurements. In sequential WMs with postselection, an anomaly occurs, different from the weak value anomaly of single WMs. In particular, the spread of polarization σ̂ as measured in double WMs of σ̂ will diverge for certain orthogonal pre- and postselected states.
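For reference, the standard weak value and its sequential (two-observable) generalization, written for a preselected state |ψ⟩ and a postselected state ⟨φ| (standard textbook definitions, not this paper's derivation):

```latex
A_w = \frac{\langle \phi | \hat{A} | \psi \rangle}{\langle \phi | \psi \rangle},
\qquad
(B, A)_w = \frac{\langle \phi | \hat{B}\hat{A} | \psi \rangle}{\langle \phi | \psi \rangle}.
```

The sequential weak value is in general not the product of the single weak values, which is one way to see how postselected sequential WMs can exhibit anomalies beyond the weak-value anomaly of single WMs; both expressions blow up as ⟨φ|ψ⟩ → 0, consistent with the divergence noted for orthogonal pre- and postselected states.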
Xie, Jianwen; Douglas, Pamela K; Wu, Ying Nian; Brody, Arthur L; Anderson, Ariana E
2017-04-15
Brain networks in fMRI are typically identified using spatial independent component analysis (ICA), yet other mathematical constraints provide alternate biologically-plausible frameworks for generating brain networks. Non-negative matrix factorization (NMF) would suppress negative BOLD signal by enforcing positivity. Spatial sparse coding algorithms (L1 Regularized Learning and K-SVD) would impose local specialization and a discouragement of multitasking, where the total observed activity in a single voxel originates from a restricted number of possible brain networks. The assumptions of independence, positivity, and sparsity to encode task-related brain networks are compared; the resulting brain networks within scan for different constraints are used as basis functions to encode observed functional activity. These encodings are then decoded using machine learning, by using the time series weights to predict within scan whether a subject is viewing a video, listening to an audio cue, or at rest, in 304 fMRI scans from 51 subjects. The sparse coding algorithm of L1 Regularized Learning outperformed 4 variations of ICA and the other coding algorithms. Holding constant the effect of the extraction algorithm, encodings using sparser spatial networks (containing more zero-valued voxels) had higher classification accuracy. The superior performance of sparse coding algorithms suggests that algorithms which enforce sparsity, discourage multitasking, and promote local specialization may better capture the underlying source processes than those which allow inexhaustible local processes such as ICA. Negative BOLD signal may capture task-related activations. Copyright © 2017 Elsevier B.V. All rights reserved.
Long, C J; Bunker, D; Li, X; Karen, V L; Takeuchi, I
2009-10-01
In this work we apply a technique called non-negative matrix factorization (NMF) to the problem of analyzing hundreds of x-ray microdiffraction (microXRD) patterns from a combinatorial materials library. An in-house scanning x-ray microdiffractometer is used to obtain microXRD patterns from 273 different compositions on a single composition spread library. NMF is then used to identify the unique microXRD patterns present in the system and quantify the contribution of each of these basis patterns to each experimental diffraction pattern. As a baseline, the results of NMF are compared to the results obtained using principal component analysis. The basis patterns found using NMF are then compared to reference patterns from a database of known structural patterns in order to identify known structures. As an example system, we explore a region of the Fe-Ga-Pd ternary system. The use of NMF in this case reduces the arduous task of analyzing hundreds of microXRD patterns to the much smaller task of identifying only nine microXRD patterns.
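The decomposition step, expressing each measured pattern as a nonnegative mixture of a few basis patterns, can be sketched with the standard Lee-Seung multiplicative updates on synthetic peak-shaped "patterns" (invented data, not the Fe-Ga-Pd measurements):

```python
import numpy as np

def nmf(V, rank, iters=300, seed=0):
    """Lee-Seung multiplicative updates for V ≈ W @ H (Frobenius loss),
    all factors nonnegative. Columns of W play the role of basis
    diffraction patterns; H gives each pattern's contribution per sample."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], rank)) + 0.1
    H = rng.random((rank, V.shape[1])) + 0.1
    eps = 1e-9
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

# Synthetic "diffraction patterns": two Gaussian peaks on a 1-D axis,
# mixed with random nonnegative weights into 30 observed patterns.
x = np.linspace(0.0, 1.0, 120)
basis = np.stack([np.exp(-(x - 0.3) ** 2 / 0.002),
                  np.exp(-(x - 0.7) ** 2 / 0.002)], axis=1)   # 120 x 2
rng = np.random.default_rng(1)
mix = rng.random((2, 30))
V = basis @ mix
W, H = nmf(V, 2)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

Because every quantity stays nonnegative, the recovered columns of W remain interpretable as physical diffraction patterns, which is the key advantage over PCA noted in the abstract.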
Immediate Sequential Bilateral Cataract Surgery
DEFF Research Database (Denmark)
Kessel, Line; Andresen, Jens; Erngaard, Ditte
2015-01-01
The aim of the present systematic review was to examine the benefits and harms associated with immediate sequential bilateral cataract surgery (ISBCS) with specific emphasis on the rate of complications, postoperative anisometropia, and subjective visual function in order to formulate evidence-based national Danish guidelines for cataract surgery. A systematic literature review in PubMed, Embase, and Cochrane central databases identified three randomized controlled trials that compared outcome in patients randomized to ISBCS or bilateral cataract surgery on two different dates. Meta-analyses were performed using the Cochrane Review Manager software. The quality of the evidence was assessed using the GRADE method (Grading of Recommendation, Assessment, Development, and Evaluation). We did not find any difference in the risk of complications or visual outcome in patients randomized to ISBCS or surgery…
Optimization of reversible sequential circuits
Sayem, Abu Sadat Md
2010-01-01
In recent years reversible logic has been considered an important approach to designing low-power digital circuits. It has numerous applications in emerging nanotechnologies such as DNA computing, quantum computing, low-power VLSI and quantum-dot cellular automata. In this paper we propose optimized designs of reversible sequential circuits in terms of number of gates, delay and hardware complexity. We have designed the latches with a new reversible gate and reduced the required number of gates, garbage outputs, delay and hardware complexity. As the number of gates and garbage outputs increases the complexity of reversible circuits, this design will significantly enhance performance. We propose a reversible D latch and JK latch that improve on the existing designs available in the literature.
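As a small illustration of reversible sequential elements: the characteristic equation of a D latch, Q_next = (E AND D) OR (NOT E AND Q), is realized by a single Fredkin (controlled-swap) gate, a construction commonly used in the reversible-logic literature (not necessarily the gate proposed in this paper); the extra output is a garbage line kept to preserve reversibility:

```python
def fredkin(c, a, b):
    """Fredkin (controlled-swap) gate: swaps a and b when c == 1.
    Reversible, and its own inverse."""
    return (c, b, a) if c else (c, a, b)

def reversible_d_latch(enable, d, q):
    """D latch from one Fredkin gate: the last output is the next state
    (D when enabled, held Q otherwise); the middle output is garbage."""
    _, garbage, q_next = fredkin(enable, d, q)
    return q_next, garbage

# Reversibility check: fredkin is a bijection on the 8 possible inputs.
triples = [(c, a, b) for c in (0, 1) for a in (0, 1) for b in (0, 1)]
is_bijection = len({fredkin(*t) for t in triples}) == len(triples)
```

Counting gates and garbage outputs in constructions like this is exactly the cost metric the paper optimizes.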
Institute of Scientific and Technical Information of China (English)
刘文霞; 蒋程; 张建华; 王昕伟; 于雷; 刘德先
2013-01-01
Considering that the traditional wind turbine reliability model is unsuited to sequential Monte Carlo simulation, a multistate reliability model of a wind turbine is built using an analytical method based on Markov chains. From statistics of the wind regime of the whole wind farm and the historical operating data of the turbines, the transition rates between the output power states of a wind turbine are obtained, and the probability of occurrence and mean duration of each state are derived with the Markov-chain method. On this basis, a double sampling method for sequential Monte Carlo simulation is proposed. A simulation program for the multistate reliability model is implemented in Matlab and compared with the commonly used two-state model based on single sampling. Simulation results verify the effectiveness of the proposed multistate model and double sampling method: the model can accurately reflect the turbine output for faults of arbitrary duration, improving the accuracy and widening the application range of the simulation model.
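The chronological state sampling that sequential Monte Carlo reliability simulation relies on can be sketched for a small continuous-time Markov model of turbine output; the states and rates below are invented for illustration, not taken from the paper:

```python
import random

def simulate_turbine(rates, powers, t_end=10000.0, seed=0):
    """Chronological Monte Carlo draw of a multistate turbine modelled as
    a continuous-time Markov chain: rates[i][j] is the transition rate
    i -> j (per hour), powers[i] the output in state i. Every state is
    assumed to have at least one outgoing transition.
    Returns (energy produced, time spent in each state)."""
    n = len(powers)
    rng = random.Random(seed)
    state, t = 0, 0.0
    time_in = [0.0] * n
    energy = 0.0
    while t < t_end:
        exits = [(rates[state][j], j) for j in range(n)
                 if j != state and rates[state][j] > 0.0]
        total = sum(r for r, _ in exits)
        dwell = min(rng.expovariate(total), t_end - t)  # exponential residence
        time_in[state] += dwell
        energy += powers[state] * dwell
        t += dwell
        # next state chosen with probability proportional to its rate
        u = rng.random() * total
        for r, j in exits:
            u -= r
            if u <= 0.0:
                state = j
                break
    return energy, time_in

# Three illustrative states: full output (2 MW), derated (1 MW), failed (0 MW).
rates = [[0.00, 0.02, 0.01],
         [0.10, 0.00, 0.02],
         [0.05, 0.00, 0.00]]
powers = [2.0, 1.0, 0.0]
energy, time_in = simulate_turbine(rates, powers)
```

A two-state model collapses the derated state into the others; keeping it, as in the multistate model above, lets fault events of arbitrary duration and partial-output conditions appear in the simulated chronology.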
Directory of Open Access Journals (Sweden)
Jung Im Kim
2014-01-01
Objectives. To perform dual analysis of tumor perfusion and glucose metabolism using perfusion CT and FDG-PET/CT for the purpose of monitoring the early response to bevacizumab therapy in rabbit VX2 tumor models, and to assess the added value of FDG-PET over perfusion CT. Methods. Twenty-four VX2 carcinoma tumors implanted in the bilateral back muscles of 12 rabbits were evaluated. Serial concurrent perfusion CT and FDG-PET/CT were performed before and 3, 7, and 14 days after bevacizumab therapy (treatment group) or saline infusion (control group). Perfusion CT was analyzed to calculate blood flow (BF), blood volume (BV), and permeability surface area product (PS); FDG-PET was analyzed to calculate SUVmax, SUVmean, total lesion glycolysis (TLG), entropy, and homogeneity. The flow-metabolic ratio (FMR) was also calculated and immunohistochemical analysis of microvessel density (MVD) was performed. Results. On day 14, BF and BV in the treatment group were significantly lower than in the control group. There were no significant differences in any FDG-PET-derived parameters between the groups. In the treatment group, FMR prominently decreased after therapy and was positively correlated with MVD. Conclusions. In VX2 tumors, FMR could provide further insight into the early antiangiogenic effect, reflecting a mismatch between intratumoral blood flow and metabolism.
Energy Technology Data Exchange (ETDEWEB)
Chung, Soo Young [Dept. of Pathology, Dongnam Institute of Radiological and Medical Sciences, Busan (Korea, Republic of); Jeon, Gyeong Sik [Dept. of Radiology, CHA Bundang Medical Center, College of Medicine, CHA University, Seongnam (Korea, Republic of); Lee, Byung Mo [Dept. of Surgery, Seoul Paik Hospital, Inje University College of Medicine, Seoul (Korea, Republic of)
2013-04-15
To compare the volume change and the regenerative capacity between portal vein ligation (embolization) (PVL) and heterochronous PVL with hepatic artery ligation (HAL) in a rodent model. The animals were separated into three groups: group I, ligation of the left lateral and median portal vein branches; group II, completion of PVL, followed by ligation of the same branches of the hepatic artery after 48 h; control group, laparotomy without ligation. Five rats from each group were sacrificed on days 1, 3, 5, and 7 after the operation. Volume change measurement, liver function tests and immunohistochemical analysis were performed. The volume of the nonligated lobe did not differ significantly between groups I and II on days 5 and 7. Mean alanine aminotransferase and total bilirubin levels were significantly higher in group II, while the albumin level was higher in group I. Both c-kit- and MIB-5-positive cells, used to detect regenerative activity, were significantly more prevalent in group I on days 1, 3, and 5. There was no operation-related mortality. PVL alone is safe and effective in compensatory liver regeneration. Performing both PVL and HAL does not confer any additional benefit.
Directory of Open Access Journals (Sweden)
Muhammad G. Saleh
2012-01-01
Full Text Available Purpose. To evaluate whether 3T clinical MRI with a small-animal coil and gradient-echo (GE) sequence could be used to characterize long-term left ventricular remodelling (LVR) following nonreperfused myocardial infarction (MI) using semi-automatic segmentation software (SASS) in a rat model. Materials and Methods. 5 healthy rats were used to validate left ventricular mass (LVM) measured by MRI with postmortem values. 5 sham and 7 infarcted rats were scanned at 2 and 4 weeks after surgery to allow for functional and structural analysis of the heart. Measurements included ejection fraction (EF), end-diastolic volume (EDV), end-systolic volume (ESV), and LVM. Changes in different regions of the heart were quantified using wall thickness analyses. Results. LVM validation in healthy rats demonstrated high correlation between MR and postmortem values. Functional assessment at 4 weeks after MI revealed considerable reduction in EF, increases in ESV, EDV, and LVM, and contractile dysfunction in infarcted and noninfarcted regions. Conclusion. Clinical 3T MRI with a small-animal coil and GE sequence generated images in a rat heart with adequate signal-to-noise ratio (SNR) for successful semi-automatic segmentation to accurately and rapidly evaluate long-term LVR after MI.
Single and sequential inversions of radiomagnetotelluric and transient electromagnetic data
Pratama, Ridho Nanda; Widodo
2017-07-01
The Volvi basin is an alluvial valley located 45 km northeast of the city of Thessaloniki in Northern Greece. It is a neotectonic graben (6 km wide) with increasing seismic activity, where the large 1978 Thessaloniki earthquake occurred. Near-surface electromagnetic (EM) measurements, namely radiomagnetotelluric (RMT) and transient electromagnetic (TEM), were therefore carried out to locate the local active fault and the top of the basement structure of the research area. Sequential inversion of both data sets was performed to obtain detailed information on the subsurface structure: RMT data resolve the shallow structure, while TEM data constrain the deeper structure. We derived the sequential inversion scheme from a second-order Marquardt algorithm using singular value decomposition (SVD). The sequential model improved the resolution of the single model, with importance values above 0.9. Single and sequential inversions of RMT and TEM data give consistent results, with both identifying indications of the fault structure.
Silva, Juliete A F; Ferrucci, Danilo L; Peroni, Luis A; Abrahão, Patrícia G S; Salamene, Aline F; Rossa-Junior, Carlos; Carvalho, Hernandes F; Stach-Machado, Dagmar R
2012-06-01
Molecular mechanisms responsible for periodontal disease (PD) and its worsening in type 1 Diabetes Mellitus (DM1) remain unknown. Cytokine profile and expression levels of collagenases, Mmp14, and tissue inhibitors were determined, as were the numbers of neutrophils and macrophages, in combined streptozotocin-induced DM1 and ligature-induced PD models. Increased IL-23 (80-fold) and Mmp8 expression (25-fold) was found in DM1. Ligature resulted in an IL-1β/IL-6 profile, increased expression of Mmp8, Mmp13, and Mmp14 (but not Mmp1), and transient expression of Timp1 and Reck in non-diabetics. PD in DM1 involved IL-1β (but not IL-6) and IL-23/IL-17, reduced IL-6 and IL-10, sustained Mmp8 and Mmp14, increased Mmp13 and reduced Reck expression, in association with 20-fold higher counts of neutrophils and macrophages. IL-23 and Mmp8 expression are hallmarks of DM1. In association with the IL-1/IL-6 (Th1) response in PD, a secondary IL-17 (Th17) pathway was found in non-diabetic rats. Low IL-6/TNF-α suggests that the Th1 response was compromised in DM1, while IL-17 indicates a prevalence of the Th17 pathway, resulting in high neutrophil recruitment. Mmp8, Mmp13, and Mmp14 expression seems important in the tissue destruction during PD in DM1. PD-associated IL-1/IL-6 (Th1), IL-10, and Reck expression are associated with the acute-to-chronic inflammation transition, which is lost in DM1. In conclusion, IL-23/IL-17 are associated with PD progression in DM1. Copyright © 2011 Wiley Periodicals, Inc.
Directory of Open Access Journals (Sweden)
Huber-Wagner S
2010-05-01
Full Text Available Abstract Background There are several well-established scores for the assessment of the prognosis of major trauma patients, all of which have in common that they can be calculated at the earliest during the intensive care unit stay. We intended to develop a sequential trauma score (STS) that allows prognosis at several early stages based on the information that is available at a particular time. Study design In a retrospective, multicenter study using data derived from the Trauma Registry of the German Trauma Society (2002-2006), we identified the most relevant prognostic factors from the patients' basic data (P), prehospital phase (A), early (B1), and late (B2) trauma room phase. Univariate and logistic regression models as well as score quality criteria and the explanatory power have been calculated. Results A total of 2,354 patients with complete data were identified. From the patients' basic data (P), logistic regression showed that age was a significant predictor of survival (AUC, area under the curve; AUC model P = 0.63). Logistic regression of the prehospital data (A) showed that blood pressure, pulse rate, Glasgow coma scale (GCS), and anisocoria were significant predictors (AUC model A = 0.76; AUC model P + A = 0.82). Logistic regression of the early trauma room phase (B1) showed peripheral oxygen saturation, GCS, anisocoria, base excess, and thromboplastin time to be significant predictors of survival (AUC model B1 = 0.78; AUC model P + A + B1 = 0.85). Multivariate analysis of the late trauma room phase (B2) detected cardiac massage, abbreviated injury score (AIS) of the head ≥ 3, the maximum AIS, and the need for transfusion or massive blood transfusion to be the most important predictors (AUC model B2 = 0.84; AUC final model P + A + B1 + B2 = 0.90). The explanatory power - a tool for the assessment of the relative impact of each segment on mortality - is 25% for P, 7% for A, 17% for B1 and 51% for B2. A spreadsheet for the easy calculation of the sequential trauma score is provided.
Sequential association rules in atonal music
A. Honingh; T. Weyde; D. Conklin
2009-01-01
This paper describes a preliminary study on the structure of atonal music. In the same way as sequential association rules of chords can be found in tonal music, sequential association rules of pitch class set categories can be found in atonal music. It has been noted before that certain pitch class
Sequential auctions, price trends, and risk preferences
Hu, A.; Zou, L.
2015-01-01
We analyze sequential auctions in a general environment where bidders are heterogeneous in risk exposures and exhibit non-quasilinear utilities. We derive a pure strategy symmetric equilibrium for the sequential Dutch and Vickrey auctions respectively, with an arbitrary number of identical objects.
Analyzing Sequential Patterns in Retail Databases
Institute of Scientific and Technical Information of China (English)
Unil Yun
2007-01-01
Finding correlated sequential patterns in large sequence databases is one of the essential tasks in data mining, since a huge number of sequential patterns are usually mined, but it is hard to find sequential patterns with the correlation. According to the requirement of real applications, the needed data analysis should be different. In previous mining approaches, after mining the sequential patterns, sequential patterns with the weak affinity are found even with a high minimum support. In this paper, a new framework is suggested for mining weighted support affinity patterns in which an objective measure, sequential ws-confidence, is developed to detect correlated sequential patterns with weighted support affinity patterns. To efficiently prune the weak affinity patterns, it is proved that the ws-confidence measure satisfies the anti-monotone and cross weighted support properties which can be applied to eliminate sequential patterns with dissimilar weighted support levels. Based on the framework, a weighted support affinity pattern mining algorithm (WSMiner) is suggested. The performance study shows that WSMiner is efficient and scalable for mining weighted support affinity patterns.
Trial Sequential Methods for Meta-Analysis
Kulinskaya, Elena; Wood, John
2014-01-01
Statistical methods for sequential meta-analysis have applications also for the design of new trials. Existing methods are based on group sequential methods developed for single trials and start with the calculation of a required information size. This works satisfactorily within the framework of fixed effects meta-analysis, but conceptual…
Discriminative predation: Simultaneous and sequential encounter experiments
Institute of Scientific and Technical Information of China (English)
C.D.BEATTY; D.W.FRANKS
2012-01-01
There are many situations in which the ability of animals to distinguish between two similar looking objects can have significant selective consequences. For example, the objects that require discrimination may be edible versus defended prey, predators versus non-predators, or mates of varying quality. Working from the premise that there are situations in which discrimination may be more or less successful, we hypothesized that individuals find it more difficult to distinguish between stimuli when they encounter them sequentially rather than simultaneously. Our study has wide biological and psychological implications from the perspective of signal perception, signal evolution, and discrimination, and could apply to any system where individuals are making relative judgments or choices between two or more stimuli or signals. While this is a general principle that might seem intuitive, it has not been experimentally tested in this context, and is often not considered in the design of models or experiments, or in the interpretation of a wide range of studies. Our study is different from previous studies in psychology in that a) the level of similarity of stimuli is gradually varied to obtain selection gradients, and b) we discuss the implications of our study for specific areas in ecology, such as the level of perfection of mimicry in predator-prey systems. Our experiments provide evidence that it is indeed more difficult to distinguish between stimuli - and to learn to distinguish between stimuli - when they are encountered sequentially rather than simultaneously, even if the intervening time interval is short.
Alternative quadratic programming for non-negative matrix low-order factorization
Institute of Scientific and Technical Information of China (English)
阳明盛; 刘力军
2014-01-01
Many algorithms exist for non-negative matrix factorization (NMF), each with its own shortcomings. Building on existing work, the NMF model is transformed into a pair of convex quadratic programming models. Using the necessary and sufficient conditions for the solvability of quadratic programs, an iteration formula is derived, and alternating iteration solves the problem. The obtained solution is optimal in a certain sense and sparse, while avoiding the complexity and heavy computation of solving constrained nonlinear programs. Convergence of the iteration is proved, and its speed is faster than that of known methods; the advantage is especially pronounced for large-scale data models.
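The alternating convex-subproblem view of NMF described above can be sketched with a projected alternating least-squares loop: each half-step fixes one factor and solves a convex quadratic subproblem for the other. This is an illustrative simplification (a projection step rather than the authors' KKT-derived iteration formula), and the function name and parameters are hypothetical.

```python
import numpy as np

def nmf_als(V, r, n_iter=200, eps=1e-9):
    """Factor V >= 0 into W @ H with W, H >= 0 by alternating
    least squares, projecting each solve onto the non-negative
    orthant. Each half-step is a convex quadratic subproblem."""
    m, n = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, r))
    H = rng.random((r, n))
    for _ in range(n_iter):
        # Fix W, solve the least-squares problem for H, then project.
        H = np.linalg.lstsq(W, V, rcond=None)[0].clip(min=eps)
        # Fix H, solve for W via the transposed system, then project.
        W = np.linalg.lstsq(H.T, V.T, rcond=None)[0].T.clip(min=eps)
    return W, H

V = np.abs(np.random.default_rng(1).random((20, 15)))
W, H = nmf_als(V, r=5)
err = np.linalg.norm(V - W @ H) / np.linalg.norm(V)
```

With a rank-5 factorization of a 20x15 non-negative matrix, the relative reconstruction error drops well below that of a trivial approximation while both factors remain element-wise non-negative.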
Multi-agent sequential hypothesis testing
Kim, Kwang-Ki K.
2014-12-15
This paper considers multi-agent sequential hypothesis testing and presents a framework for strategic learning in sequential games with explicit consideration of both temporal and spatial coordination. The associated Bayes risk functions explicitly incorporate costs of taking private/public measurements, costs of time-difference and disagreement in actions of agents, and costs of false declaration/choices in the sequential hypothesis testing. The corresponding sequential decision processes have well-defined value functions with respect to (a) the belief states for the case of conditional independent private noisy measurements that are also assumed to be independent identically distributed over time, and (b) the information states for the case of correlated private noisy measurements. A sequential investment game of strategic coordination and delay is also discussed as an application of the proposed strategic learning rules.
Image fusion algorithm based on NSCT and non-negative matrix factorization
Institute of Scientific and Technical Information of China (English)
李美丽; 李言俊; 王红梅; 张科
2010-01-01
The nonsubsampled Contourlet transform (NSCT) is a new multiscale transform that is simultaneously directional, anisotropic, and shift-invariant, and can effectively represent the edges and contours of an image. Non-negative matrix factorization (NMF) is a matrix factorization method under the condition that all matrix elements are non-negative. In the NMF process, a suitable choice of the feature-space dimension captures the local features of the original data. An image fusion method based on NSCT and NMF is proposed. First, the registered source images are decomposed by NSCT into low-pass subband coefficients and band-pass subband coefficients. Next, the low-pass subband coefficients are taken as the original data and, with the feature-space dimension set to 1, NMF yields low-pass subband coefficients containing the feature basis; for the band-pass subband coefficients, the coefficient with the maximum absolute value is selected at each position to obtain the band-pass subband coefficients of the fused image. Finally, the fused image is obtained by the inverse NSCT. Experimental results show that the fusion results are superior to those of the Laplacian, wavelet, and NMF methods.
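The band-pass selection rule in this abstract (keep, at each position, the coefficient with the larger absolute value) is simple to sketch. NSCT itself requires a dedicated implementation, so this hypothetical helper only illustrates the fusion step on two coefficient arrays.

```python
import numpy as np

def fuse_bandpass_absmax(c1, c2):
    """Fuse two band-pass subband coefficient arrays by keeping,
    at each position, the coefficient with the larger magnitude."""
    return np.where(np.abs(c1) >= np.abs(c2), c1, c2)

a = np.array([[0.2, -0.9], [0.5, 0.1]])
b = np.array([[-0.4, 0.3], [-0.1, 0.8]])
fused = fuse_bandpass_absmax(a, b)
# fused -> [[-0.4, -0.9], [0.5, 0.8]]
```

The same rule applies per subband and per direction after the NSCT decomposition; the low-pass band is fused separately (here, via NMF).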
Directory of Open Access Journals (Sweden)
Mark Lutman
2013-10-01
Full Text Available Cochlear implants (CIs) require efficient speech processing to maximize information transmission to the brain, especially in noise. A novel CI processing strategy was proposed in our previous studies, in which sparsity-constrained non-negative matrix factorization (NMF) was applied to the envelope matrix in order to improve the CI performance in noisy environments. It showed that the algorithm needs to be adaptive, rather than fixed, in order to adjust to acoustical conditions and individual characteristics. Here, we explore the benefit of a system that allows the user to adjust the signal processing in real time according to their individual listening needs and their individual hearing capabilities. In this system, which is based on MATLAB, Simulink and the xPC Target environment, the input/output (I/O) boards are interfaced between the Simulink blocks and the CI stimulation system, such that the output can be controlled successfully in the manner of a hardware-in-the-loop (HIL) simulation, hence offering a convenient way to implement a real-time signal processing module that does not require any low-level language. The sparsity-constrained parameter of the algorithm was adapted online subjectively during an experiment with normal-hearing subjects and noise-vocoded speech simulation. Results show that subjects chose different parameter values according to their own intelligibility preferences, indicating that adaptive real-time algorithms are beneficial to fully explore subjective preferences. We conclude that adaptive real-time systems are beneficial for the experimental design, and such systems allow one to conduct psychophysical experiments with high ecological validity.
Multilevel sequential Monte Carlo samplers
Beskos, Alexandros
2016-08-29
In this article we consider the approximation of expectations w.r.t. probability distributions associated to the solution of partial differential equations (PDEs); this scenario appears routinely in Bayesian inverse problems. In practice, one often has to solve the associated PDE numerically, using, for instance, finite element methods which depend on the step-size level h_L. In addition, the expectation cannot be computed analytically and one often resorts to Monte Carlo methods. In the context of this problem, it is known that the introduction of the multilevel Monte Carlo (MLMC) method can reduce the amount of computational effort to estimate expectations, for a given level of error. This is achieved via a telescoping identity associated to a Monte Carlo approximation of a sequence of probability distributions with discretization levels ∞ > h_0 > h_1 > ⋯ > h_L. In many practical problems of interest, one cannot achieve an i.i.d. sampling of the associated sequence and a sequential Monte Carlo (SMC) version of the MLMC method is introduced to deal with this problem. It is shown that under appropriate assumptions, the attractive property of a reduction of the amount of computational effort to estimate expectations, for a given level of error, can be maintained within the SMC context, that is, relative to exact sampling and Monte Carlo for the distribution at the finest level h_L. The approach is numerically illustrated on a Bayesian inverse problem. © 2016 Elsevier B.V.
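The telescoping identity behind MLMC can be illustrated on a toy SDE example: estimate E[S_T] for geometric Brownian motion with Euler-Maruyama, where each correction level couples a fine and a coarse path through shared Brownian increments. This is a generic MLMC sketch under stated assumptions (plain Monte Carlo, not the paper's SMC variant); all names and parameter values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_gbm(n_paths, n_steps, T=1.0, mu=0.05, sigma=0.2, s0=1.0):
    """Euler-Maruyama paths of GBM; also return the Brownian
    increments so a coarser level can reuse the same noise."""
    h = T / n_steps
    dW = rng.normal(0.0, np.sqrt(h), size=(n_paths, n_steps))
    S = np.full(n_paths, s0)
    for k in range(n_steps):
        S = S * (1 + mu * h + sigma * dW[:, k])
    return S, dW

def mlmc_mean(L=4, n_paths=20000):
    """Telescoping MLMC estimate of E[S_T]: coarsest level plus
    coupled fine-minus-coarse corrections sharing the same noise."""
    S0, _ = euler_gbm(n_paths, 1)
    total = S0.mean()
    for l in range(1, L + 1):
        nf = 2 ** l
        fine, dW = euler_gbm(n_paths, nf)
        # Coarse path driven by the *same* Brownian motion:
        # pairs of fine increments are summed into coarse increments.
        dWc = dW[:, 0::2] + dW[:, 1::2]
        h = 1.0 / (nf // 2)
        S = np.full(n_paths, 1.0)
        for k in range(nf // 2):
            S = S * (1 + 0.05 * h + 0.2 * dWc[:, k])
        total += (fine - S).mean()
    return total

est = mlmc_mean()
# true E[S_T] = exp(mu*T), roughly 1.0513
```

Because the fine and coarse paths within a level share noise, the correction terms have small variance, which is the source of the MLMC cost reduction.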
Sequential hypothesis testing with spatially correlated presence-absence data.
DePalma, Elijah; Jeske, Daniel R; Lara, Jesus R; Hoddle, Mark
2012-06-01
A pest management decision to initiate a control treatment depends upon an accurate estimate of mean pest density. Presence-absence sampling plans significantly reduce sampling efforts to make treatment decisions by using the proportion of infested leaves to estimate mean pest density in lieu of counting individual pests. The use of sequential hypothesis testing procedures can significantly reduce the number of samples required to make a treatment decision. Here we construct a mean-proportion relationship for Oligonychus perseae Tuttle, Baker, and Abatiello, a mite pest of avocados, from empirical data, and develop a sequential presence-absence sampling plan using Bartlett's sequential test procedure. Bartlett's test can accommodate pest population models that contain nuisance parameters that are not of primary interest. However, it requires that population measurements be independent, which may not be realistic because of spatial correlation of pest densities across trees within an orchard. We propose to mitigate the effect of spatial correlation in a sequential sampling procedure by using a tree-selection rule (i.e., maximin) that sequentially selects each newly sampled tree to be maximally spaced from all other previously sampled trees. Our proposed presence-absence sampling methodology applies Bartlett's test to a hypothesis test developed using an empirical mean-proportion relationship coupled with a spatial, statistical model of pest populations, with spatial correlation mitigated via the aforementioned tree-selection rule. We demonstrate the effectiveness of our proposed methodology over a range of parameter estimates appropriate for densities of O. perseae that would be observed in avocado orchards in California.
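Wald's classical SPRT conveys the flavor of such sequential presence-absence testing: after each batch of sampled leaves, compare the binomial log-likelihood ratio against two stopping boundaries. This is a hedged illustration only; the paper itself uses Bartlett's sequential test precisely because a plain SPRT cannot accommodate nuisance parameters. Function and variable names are hypothetical.

```python
import math

def sprt_step(successes, n, p0, p1, alpha=0.05, beta=0.1):
    """One evaluation of Wald's SPRT for H0: p = p0 vs H1: p = p1
    on binomial presence-absence counts (successes infested leaves
    out of n sampled). Returns the current decision."""
    llr = (successes * math.log(p1 / p0)
           + (n - successes) * math.log((1 - p1) / (1 - p0)))
    A = math.log((1 - beta) / alpha)  # upper boundary -> accept H1
    B = math.log(beta / (1 - alpha))  # lower boundary -> accept H0
    if llr >= A:
        return "accept H1"
    if llr <= B:
        return "accept H0"
    return "continue"

# e.g. 18 infested leaves out of 20 sampled, testing p0=0.2 vs p1=0.5
decision = sprt_step(18, 20, 0.2, 0.5)
```

Sampling continues, one tree at a time, until the log-likelihood ratio crosses a boundary; a spatial tree-selection rule such as the paper's maximin rule decides *where* each new sample is taken.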
Automated weighing by sequential inference in dynamic environments
Martin, A D
2015-01-01
We demonstrate sequential mass inference of a suspended bag of milk powder from simulated measurements of the vertical force component at the pivot while the bag is being filled. We compare the predictions of various sequential inference methods both with and without a physics model to capture the system dynamics. We find that non-augmented and augmented-state unscented Kalman filters (UKFs) in conjunction with a physics model of a pendulum of varying mass and length provide rapid and accurate predictions of the milk powder mass as a function of time. The UKFs outperform the other method tested - a particle filter. Moreover, inference methods which incorporate a physics model outperform equivalent algorithms which do not.
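A minimal linear Kalman filter conveys the idea of sequential mass inference from pivot-force measurements, using a statics-only toy model (measurement = g times mass) rather than the paper's pendulum UKF. All names, rates, and noise levels here are hypothetical.

```python
import numpy as np

def kalman_mass(z, fill_rate, dt=0.1, q=1e-4, r=0.05, g=9.81):
    """Scalar Kalman filter: state = mass [kg], process model
    m_k = m_{k-1} + fill_rate*dt, measurement = g*m + noise
    (vertical force at the pivot, statics only)."""
    m, P = 0.0, 1.0
    out = []
    for zk in z:
        # predict: mass grows as the bag fills
        m = m + fill_rate * dt
        P = P + q
        # update with the force measurement z = g*m + v
        K = P * g / (g * P * g + r)
        m = m + K * (zk - g * m)
        P = (1 - K * g) * P
        out.append(m)
    return np.array(out)

rng = np.random.default_rng(0)
t = np.arange(100) * 0.1
true_mass = 0.05 * t  # bag filling at 0.05 kg/s
z = 9.81 * true_mass + rng.normal(0, 0.05, size=t.size)
est = kalman_mass(z, fill_rate=0.05)
```

The filter tracks the growing mass despite measurement noise; the paper's augmented-state UKF additionally handles the nonlinear swing dynamics of the suspended bag.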
Structure learning in human sequential decision-making.
Acuña, Daniel E; Schrater, Paul
2010-12-02
Studies of sequential decision-making in humans frequently find suboptimal performance relative to an ideal actor that has perfect knowledge of the model of how rewards and events are generated in the environment. Rather than being suboptimal, we argue that the learning problem humans face is more complex, in that it also involves learning the structure of reward generation in the environment. We formulate the problem of structure learning in sequential decision tasks using Bayesian reinforcement learning, and show that learning the generative model for rewards qualitatively changes the behavior of an optimal learning agent. To test whether people exhibit structure learning, we performed experiments involving a mixture of one-armed and two-armed bandit reward models, where structure learning produces many of the qualitative behaviors deemed suboptimal in previous studies. Our results demonstrate humans can perform structure learning in a near-optimal manner.
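A standard ingredient of such Bayesian treatments is a conjugate posterior over arm reward probabilities. The sketch below shows Thompson sampling on a plain Bernoulli bandit; the paper's agent additionally maintains a posterior over the reward-generation *structure*, which is not modeled in this hypothetical example.

```python
import numpy as np

rng = np.random.default_rng(0)

def thompson_bandit(p_true, n_rounds=2000):
    """Thompson sampling on a Bernoulli bandit: keep a Beta
    posterior per arm, sample a success probability from each,
    and pull the arm with the largest sampled value."""
    k = len(p_true)
    wins = np.ones(k)    # Beta(1, 1) priors
    losses = np.ones(k)
    pulls = np.zeros(k, dtype=int)
    for _ in range(n_rounds):
        theta = rng.beta(wins, losses)   # posterior samples
        a = int(np.argmax(theta))
        reward = rng.random() < p_true[a]
        pulls[a] += 1
        wins[a] += reward
        losses[a] += 1 - reward
    return pulls

pulls = thompson_bandit([0.3, 0.6])
# the better arm (index 1) ends up pulled far more often
```

Structure learning enters when the arms are coupled (e.g., the one-armed/two-armed mixture in the experiments), so that observations on one arm are informative about the generative model as a whole.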
Sequential experimental design based generalised ANOVA
Energy Technology Data Exchange (ETDEWEB)
Chakraborty, Souvik, E-mail: csouvik41@gmail.com; Chowdhury, Rajib, E-mail: rajibfce@iitr.ac.in
2016-07-15
Over the last decade, surrogate modelling technique has gained wide popularity in the field of uncertainty quantification, optimization, model exploration and sensitivity analysis. This approach relies on experimental design to generate training points and regression/interpolation for generating the surrogate. In this work, it is argued that conventional experimental design may render a surrogate model inefficient. In order to address this issue, this paper presents a novel distribution adaptive sequential experimental design (DA-SED). The proposed DA-SED has been coupled with a variant of generalised analysis of variance (G-ANOVA), developed by representing the component function using the generalised polynomial chaos expansion. Moreover, generalised analytical expressions for calculating the first two statistical moments of the response, which are utilized in predicting the probability of failure, have also been developed. The proposed approach has been utilized in predicting probability of failure of three structural mechanics problems. It is observed that the proposed approach yields accurate and computationally efficient estimate of the failure probability.
Symbolic Model Checking for Sequential Circuit Verification
1993-07-15
umI4A8807, and in part by the Semiconductor Research Corporation under Contract 92cW~-294. The fourth author was supported by an AT&T Bell Laboratories Ph.D... found late in the design phase of a digital circuit are a major cause of unexpected delays in realising the circuit in hardware. As a result, interest in... block diagram of the stack. It consists of an array of d cells, each cell consisting of a control part, a data part and a completion tree. The data
Energy Technology Data Exchange (ETDEWEB)
Man, Jun [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Zhang, Jiangjiang [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Li, Weixuan [Pacific Northwest National Laboratory, Richland Washington USA; Zeng, Lingzao [Zhejiang Provincial Key Laboratory of Agricultural Resources and Environment, Institute of Soil and Water Resources and Environmental Science, College of Environmental and Resource Sciences, Zhejiang University, Hangzhou China; Wu, Laosheng [Department of Environmental Sciences, University of California, Riverside California USA
2016-10-01
The ensemble Kalman filter (EnKF) has been widely used in parameter estimation for hydrological models. The focus of most previous studies was to develop more efficient analysis (estimation) algorithms. On the other hand, it is intuitively understandable that a well-designed sampling (data-collection) strategy should provide more informative measurements and subsequently improve the parameter estimation. In this work, a Sequential Ensemble-based Optimal Design (SEOD) method, coupled with EnKF, information theory and sequential optimal design, is proposed to improve the performance of parameter estimation. Based on the first-order and second-order statistics, different information metrics including the Shannon entropy difference (SD), degrees of freedom for signal (DFS) and relative entropy (RE) are used to design the optimal sampling strategy, respectively. The effectiveness of the proposed method is illustrated by synthetic one-dimensional and two-dimensional unsaturated flow case studies. It is shown that the designed sampling strategies can provide more accurate parameter estimation and state prediction compared with conventional sampling strategies. Optimal sampling designs based on various information metrics perform similarly in our cases. The effect of ensemble size on the optimal design is also investigated. Overall, larger ensemble size improves the parameter estimation and convergence of optimal sampling strategy. Although the proposed method is applied to unsaturated flow problems in this study, it can be equally applied in any other hydrological problems.
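The EnKF analysis step that such optimal-design schemes wrap around can be sketched in its stochastic (perturbed-observation) form with a linear observation operator. This is a generic linear-Gaussian toy under stated assumptions, not the paper's unsaturated-flow model; all names are hypothetical.

```python
import numpy as np

def enkf_update(X, y, H, R, rng):
    """One EnKF analysis step with perturbed observations.
    X: (n_state, n_ens) forecast ensemble; y: observation vector;
    H: linear observation operator; R: observation-error covariance."""
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)
    # Sample cross- and innovation covariances
    Pxy = A @ HA.T / (n_ens - 1)
    Pyy = HA @ HA.T / (n_ens - 1) + R
    K = Pxy @ np.linalg.inv(Pyy)
    # Perturb the observation independently for each member
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=n_ens).T
    return X + K @ (Y - HX)

rng = np.random.default_rng(0)
truth = np.array([1.0, -2.0])
H = np.eye(2)
R = 0.01 * np.eye(2)
X = rng.normal(0.0, 1.0, size=(2, 500))  # prior ensemble
y = truth + rng.multivariate_normal(np.zeros(2), R)
Xa = enkf_update(X, y, H, R, rng)
```

With a tight observation error relative to the prior spread, the analysis ensemble mean moves close to the truth; a sequential optimal design then chooses *which* observation to collect next so that each such update is maximally informative.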
Kvasha, Anton; Khalifa, Muhammad; Biswas, Seema; Hamoud, Mohamad; Nordkin, Dmitri; Bramnik, Zakhar; Willenz, Udi; Farraj, Moaad; Waksman, Igor
2016-10-01
Transanal, hybrid natural orifice translumenal endoscopic surgery (NOTES) and NOTES-assisted natural orifice specimen extraction techniques hold promise as leaders in the field of natural orifice surgery. We report the feasibility of a novel NOTES assisted technique for unlimited length, clean, endolumenal proctocolectomy in a porcine model. This technique is a modification of a transanal intussusception and pull-through procedure recently published by our group. Rectal mobilization was achieved laparoscopically; this was followed by a transanal recto-rectal intussusception and pull-through (IPT). IPT was established in a stepwise fashion. First, the proximal margin of resection was attached laparoscopically to the shaft of the anvil of an end-to-end circular stapler with a ligature around the rectum. Second, this complex was pulled transanally to produce IPT. To achieve an unlimited-length proctocolectomy, the IPT step was repeated several times prior to bowel resection. This was facilitated by removing the ligature applied in the first step of this procedure. Once sequential IPT established the desired length of bowel to be resected, a second ligature was placed around the rectum approximating the proximal and distal resection margins. The specimen was resected and extracted by making a full-thickness incision through the 2 bowel walls. The anastomosis was achieved by deploying the stapler. The technique was found to be feasible. Peritoneal samples, collected after transanal specimen extraction, did not demonstrate bacterial growth. The minimally invasive nature of this evolving technique as well as its aseptic bowel manipulation has the potential to limit the complications associated with abdominal wall incision and surgical site infection.
Directory of Open Access Journals (Sweden)
Hualin Xie
2015-07-01
Full Text Available Using a sequential slack-based measure (SSBM) model, this paper analyzes the spatiotemporal disparities of urban land use economic efficiency (ULUEE) under environmental constraints, and its influencing factors, in 270 cities across China from 2003-2012. The main results are as follows: (1) The average ULUEE for Chinese cities is only 0.411, and out of the 270 cities, only six are always efficient in urban land use in the study period. Most cities have much room to improve the economic output of secondary and tertiary industries, as well as environmental protection work; (2) The eastern region of China enjoys the highest ULUEE, followed by the western and central regions. Super-scale cities show the best performance of all four city scales, followed by large-scale, small-scale and medium-scale cities. Cities with relatively developed economies and less pollutant discharge always have better ULUEE; (3) The slack variables analysis shows that most cities have problems such as labor surplus, over-development, excessive pollutant discharge, economic output shortage, and unreasonable use of funds, the last being the most serious; (4) The regression results on the influencing factors show that improvements in per capita GDP and land use intensity help to raise ULUEE. The urbanization rate and the proportion of foreign enterprises' output in the total output of the secondary and tertiary industries have the same effect only in some regions and city scales. The land management policy and land leasing policy have a negative impact on ULUEE in all three regions and four city scales; (5) Some targeted policy goals are proposed, including the reduction of surplus labor and paying more attention to environmental protection. Most importantly, effective implementation of land management policies from the central government, and stopping blind leasing of land to make up the local government's financial deficit would be very
Institute of Scientific and Technical Information of China (English)
吴荣玉; 樊丰; 舒建
2012-01-01
Matrix factorization is an effective tool for processing and analyzing mass data. Non-negative matrix factorization (NMF) is a kind of orthogonal transformation that achieves a non-negative decomposition of a matrix whose elements are all non-negative. Robust hashing uses a secret key to extract certain robust features from multimedia content; these features are then compressed to produce a hash value. The authenticity of media content can be verified by comparing the hash transmitted along with the media content against the hash computed by the receiver.
National Aeronautics and Space Administration — This paper discusses the effect of sequential conflict resolution maneuvers of an infinite aircraft flow through a finite control volume. Aircraft flow models are...
Optimal power flow using sequential quadratic programming
Nejdawi, Imad M.
1999-11-01
Optimal power flow (OPF) is an operational as well as a planning tool used by electric utilities to help them operate their network in the most economic and secure mode of operation. Various algorithms to solve the OPF problem evolved over the past three decades; linear programming (LP) techniques were among the major mathematical programming methods utilized. The linear models of the objective function and the linearization of the constraints are the main features of these techniques. The main advantages of the LP approach are simplicity and speed. Nonlinear programming techniques have also been applied to the OPF solution. Their major drawback is the expensive solution of large sparse systems of equations. This research is concerned with the development of a new OPF solution algorithm using sequential quadratic programming (SQP). In this formulation, a small dense system whose size is equal to the number of control variables is solved in an inner loop. The Jacobian and Hessian terms are calculated in an outer loop. The total number of outer loop iterations is comparable to those in an ordinary load flow, in contrast to 20-30 iterations in other nonlinear methods. In addition, the total number of floating point operations is less than that encountered in direct methods by two orders of magnitude. We also model dispatch over a twenty-four-hour time horizon in a transmission constrained power network that includes price-responsive loads where large energy customers can operate their loads in time intervals with lowest spot prices.
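The core SQP idea described above, repeatedly solving a small dense system built from local derivative information, can be illustrated on a toy equality-constrained problem. This is a generic sketch, not the OPF formulation of the thesis: the objective, constraint, starting point, and iteration count are all invented for illustration.

```python
import numpy as np

def sqp_equality(x0, n_iter=20):
    """Toy SQP: minimize exp(x1) + x2^2 subject to x1 + x2 = 1.

    Each iteration solves the KKT system of a local quadratic model of
    the problem, which is the core idea of sequential quadratic programming.
    """
    x = np.asarray(x0, dtype=float)
    A = np.array([[1.0, 1.0]])             # Jacobian of the linear constraint
    for _ in range(n_iter):
        grad = np.array([np.exp(x[0]), 2.0 * x[1]])
        H = np.diag([np.exp(x[0]), 2.0])   # exact Hessian of the objective
        g = np.array([x[0] + x[1] - 1.0])  # constraint residual
        # assemble and solve the dense KKT system for the step and multiplier
        kkt = np.block([[H, A.T], [A, np.zeros((1, 1))]])
        rhs = np.concatenate([-grad, -g])
        sol = np.linalg.solve(kkt, rhs)
        x = x + sol[:2]
    return x

x = sqp_equality([0.0, 0.0])
```

As in the abstract's inner loop, the dense system solved per iteration has its size set by the number of decision variables, not by the (here trivial) network size.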
Cooperation induced by random sequential exclusion
Li, Kun; Cong, Rui; Wang, Long
2016-06-01
Social exclusion is a common and powerful tool to penalize deviators in human societies, and thus to effectively elevate collaborative efforts. Current models on the evolution of exclusion behaviors mostly assume that each peer excluder independently makes the decision to expel the defectors, but has no idea what others in the group would do or what the actual punishment effect will be. Thus, a more realistic model, random sequential exclusion, is proposed. In this mechanism, each excluder has to pay an extra scheduling cost and then all the excluders are arranged in a random order to implement the exclusion actions. If one free rider has already been excluded by an excluder, the remaining excluders will not participate in expelling this defector. We find that this mechanism can help stabilize cooperation under more unfavorable conditions than normal peer exclusion can, either in well-mixed populations or on social networks. However, too large a scheduling cost may undermine the advantage of this mechanism. Our work validates the fact that collaborative practice among punishers plays an important role in further boosting cooperation.
Sequential Monte Carlo on large binary sampling spaces
Schäfer, Christian
2011-01-01
A Monte Carlo algorithm is said to be adaptive if it automatically calibrates its current proposal distribution using past simulations. The choice of the parametric family that defines the set of proposal distributions is critical for a good performance. In this paper, we present such a parametric family for adaptive sampling on high-dimensional binary spaces. A practical motivation for this problem is variable selection in a linear regression context. We want to sample from a Bayesian posterior distribution on the model space using an appropriate version of Sequential Monte Carlo. Raw versions of Sequential Monte Carlo are easily implemented using binary vectors with independent components. For high-dimensional problems, however, these simple proposals do not yield satisfactory results. The key to an efficient adaptive algorithm are binary parametric families which take correlations into account, analogously to the multivariate normal distribution on continuous spaces. We provide a review of models for binar...
Markov sequential pattern recognition : dependency and the unknown class.
Energy Technology Data Exchange (ETDEWEB)
Malone, Kevin Thomas; Haschke, Greg Benjamin; Koch, Mark William
2004-10-01
The sequential probability ratio test (SPRT) minimizes the expected number of observations to a decision and can solve problems in sequential pattern recognition. Some problems have dependencies between the observations, and Markov chains can model dependencies where the state occupancy probability is geometric. For a non-geometric process we show how to use the effective amount of independent information to modify the decision process, so that we can account for the remaining dependencies. Along with dependencies between observations, a successful system needs to handle the unknown class in unconstrained environments. For example, in an acoustic pattern recognition problem any sound source not belonging to the target set is in the unknown class. We show how to incorporate goodness of fit (GOF) classifiers into the Markov SPRT, and determine the worst-case nontarget model. We also develop a multiclass Markov SPRT using the GOF concept.
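For readers unfamiliar with the SPRT itself, a minimal sketch of Wald's classic test for independent Bernoulli observations is below. The Markov dependency handling and GOF extensions of the abstract are not reproduced; `p0`, `p1`, and the error rates are arbitrary illustrative values.

```python
import math

def sprt(observations, p0, p1, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for Bernoulli data.

    Walks the observations one at a time, accumulating the log-likelihood
    ratio of H1 (success prob p1) against H0 (success prob p0), and stops
    as soon as a decision boundary is crossed.
    """
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    llr = 0.0
    for n, x in enumerate(observations, start=1):
        # log-likelihood ratio increment for one Bernoulli observation
        if x:
            llr += math.log(p1 / p0)
        else:
            llr += math.log((1 - p1) / (1 - p0))
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", len(observations)

decision, n = sprt([1, 1, 1, 1, 1, 1, 1], p0=0.5, p1=0.9)
```

With a run of successes, the test stops before exhausting the data, which is exactly the expected-sample-size advantage the abstract builds on.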
Spike Inference from Calcium Imaging using Sequential Monte Carlo Methods
NeuroData; Paninski, L
2015-01-01
Vogelstein JT, Paninski L. Spike Inference from Calcium Imaging using Sequential Monte Carlo Methods. Statistical and Applied Mathematical Sciences Institute (SAMSI) Program on Sequential Monte Carlo Methods, 2008
Efficacy of premixed versus sequential administration of ...
African Journals Online (AJOL)
an adjuvant to intrathecal hyperbaric bupivacaine in lower limb surgery ... sequential administration in separate syringes on block characteristics, haemodynamic parameters, ... significant side effects and reduces the postoperative analgesic requirement. ... acceptance spinal anaesthesia is fast becoming the procedure of.
Delayed Sequential Coding of Correlated Sources
Ma, Nan; Ishwar, Prakash
2007-01-01
Motivated by video coding applications, we study the problem of sequential coding of correlated sources with (noncausal) encoding and/or decoding frame-delays. The fundamental tradeoffs between individual frame rates, individual frame distortions, and encoding/decoding frame-delays are derived in terms of a single-letter information-theoretic characterization of the rate-distortion region for general inter-frame source correlations and certain types of (potentially frame-specific and coupled) single-letter fidelity criteria. For video sources which are spatially stationary memoryless and temporally Gauss-Markov, MSE frame distortions, and a sum-rate constraint, our results expose the optimality of differential predictive coding among all causal sequential coders. Somewhat surprisingly, causal sequential encoding with one-step delayed noncausal sequential decoding can exactly match the sum-rate-MSE performance of joint coding for all nontrivial MSE-tuples satisfying certain positive semi-definiteness conditio...
A universal property for sequential measurement
Westerbaan, Abraham; Westerbaan, Bas
2016-09-01
We study the sequential product, the operation p ∗ q = √p q √p, on the set of effects [0, 1]𝒜 of a von Neumann algebra 𝒜, which represents sequential measurement of first p and then q. In their work [J. Math. Phys. 49(5), 052106 (2008)], Gudder and Latrémolière give a list of axioms based on physical grounds that completely determines the sequential product on a von Neumann algebra of type I, that is, a von Neumann algebra ℬ(ℋ) of all bounded operators on some Hilbert space ℋ. In this paper we give a list of axioms that completely determines the sequential product on all von Neumann algebras simultaneously (Theorem 4).
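The sequential product is easy to experiment with numerically. The sketch below (plain NumPy, with a hypothetical `psd_sqrt` helper) computes p ∗ q = √p q √p for two 2×2 effects, checks that the result is again an effect, and illustrates that the operation is generally non-commutative; the matrices are arbitrary examples.

```python
import numpy as np

def psd_sqrt(p):
    """Square root of a positive semidefinite matrix via eigendecomposition."""
    w, v = np.linalg.eigh(p)
    return v @ np.diag(np.sqrt(np.clip(w, 0.0, None))) @ v.T

def seq_product(p, q):
    """Sequential product p * q = sqrt(p) q sqrt(p) of two effects."""
    r = psd_sqrt(p)
    return r @ q @ r

# two effects (Hermitian matrices with spectrum in [0, 1]) on C^2
p = np.array([[0.5, 0.2], [0.2, 0.5]])
q = np.array([[0.9, 0.0], [0.0, 0.1]])
pq = seq_product(p, q)
qp = seq_product(q, p)
eigs = np.linalg.eigvalsh(pq)   # spectrum of p * q stays inside [0, 1]
```

The asymmetry between `pq` and `qp` reflects the order-dependence of measuring first p and then q.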
Limited backward induction: foresight and behavior in sequential games
Marco Mantovani
2015-01-01
The paper tests experimentally for limited foresight in sequential games. We develop a general out-of-equilibrium framework of strategic thinking based on limited foresight. It assumes the players take decisions focusing on close-by nodes, following backward induction – what we call limited backward induction (LBI). The main prediction of the model is tested in the context of a modified Game of 21. In line with the theoretical hypotheses, our results show most players think strategically only...
From global fits of neutrino data to constrained sequential dominance
Björkeroth, Fredrik
2014-01-01
Constrained sequential dominance (CSD) is a natural framework for implementing the see-saw mechanism of neutrino masses which allows the mixing angles and phases to be accurately predicted in terms of relatively few input parameters. We perform a global analysis on a class of CSD($n$) models where, in the flavour basis, two right-handed neutrinos are dominantly responsible for the "atmospheric" and "solar" neutrino masses with Yukawa couplings to $(\
Mining Sequential Patterns In Multidimensional Data
Plantevit, Marc
2008-01-01
Sequential pattern mining is a key technique of data mining with broad applications (user behavior analysis, bioinformatics, security, music, etc.). Sequential pattern mining aims at discovering correlations among events through time. There exist many algorithms to discover such patterns. However, these approaches only take one dimension into account (e.g. the product dimension in customer market basket analysis) whereas data are multidimensional in nature. In this thesis, we define multid...
Forecasting daily streamflow using online sequential extreme learning machines
Lima, Aranildo R.; Cannon, Alex J.; Hsieh, William W.
2016-06-01
While nonlinear machine methods have been widely used in environmental forecasting, in situations where new data arrive continually, the need to make frequent model updates can become cumbersome and computationally costly. To alleviate this problem, an online sequential learning algorithm for single-hidden-layer feedforward neural networks, the online sequential extreme learning machine (OSELM), updates the model inexpensively as new data arrive (and the new data can then be discarded). OSELM was applied to forecast daily streamflow at two small watersheds in British Columbia, Canada, at lead times of 1-3 days. Predictors used were weather forecast data generated by the NOAA Global Ensemble Forecasting System (GEFS), and local hydro-meteorological observations. OSELM forecasts were tested with daily, monthly or yearly model updates. More frequent updating gave smaller forecast errors, including errors for data above the 90th percentile. Larger datasets used in the initial training of OSELM helped to find better parameters (number of hidden nodes) for the model, yielding better predictions. With the online sequential multiple linear regression (OSMLR) as benchmark, we concluded that OSELM is an attractive approach, as it easily outperformed OSMLR in forecast accuracy.
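A minimal OSELM-style regressor can be sketched with recursive least squares on a fixed random hidden layer. This is a generic illustration on synthetic data, not the streamflow setup or GEFS predictors of the study; the class name, layer size, and chunk size are arbitrary choices.

```python
import numpy as np

class OSELM:
    """Minimal online sequential extreme learning machine (regression).

    Hidden-layer weights are random and fixed; only the output weights
    beta are updated recursively as new chunks of data arrive, so old
    data never need to be revisited.
    """
    def __init__(self, n_inputs, n_hidden=30, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(size=(n_inputs, n_hidden))
        self.b = rng.normal(size=n_hidden)

    def _h(self, X):
        return np.tanh(X @ self.W + self.b)   # hidden-layer activations

    def fit_initial(self, X, y):
        H = self._h(X)
        # regularized least squares on the initial batch
        self.P = np.linalg.inv(H.T @ H + 1e-6 * np.eye(H.shape[1]))
        self.beta = self.P @ H.T @ y

    def partial_fit(self, X, y):
        # recursive least-squares update (no retraining on old data)
        H = self._h(X)
        K = self.P @ H.T @ np.linalg.inv(np.eye(len(X)) + H @ self.P @ H.T)
        self.beta = self.beta + K @ (y - H @ self.beta)
        self.P = self.P - K @ H @ self.P

    def predict(self, X):
        return self._h(X) @ self.beta

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X[:, 0])
model = OSELM(n_inputs=1)
model.fit_initial(X[:100], y[:100])
for i in range(100, 200, 10):          # new data arrive in chunks of 10
    model.partial_fit(X[i:i + 10], y[i:i + 10])
err = float(np.mean((model.predict(X) - y) ** 2))
```

The chunked `partial_fit` loop mirrors the daily/monthly/yearly update schedules compared in the study: each update costs a small matrix solve, independent of how much data has already been seen.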
Institute of Scientific and Technical Information of China (English)
摆玉龙; 高海沙; 柴乾隆; 黄春林
2013-01-01
Sequential data assimilation methods have been widely applied in many data assimilation systems, and each method has its own characteristics. In this paper, we introduce three typical assimilation methods: the Ensemble Kalman Filter, the Ensemble Transform Kalman Filter, and the Deterministic Ensemble Kalman Filter. Based on a classical nonlinear model (the Lorenz-96 model), numerical experiments were developed to test the sensitivity of these methods. Key parameters were investigated with respect to four aspects: the number of ensemble members, the number of observations, the inflation factor, and the localization radius. The results show that the numbers of ensemble members and observations directly influence the assimilation results, and that optimal selection of the inflation factor and the localization radius improves the assimilation accuracy. According to the final comparative studies, the deterministic EnKF is the method with the most robust performance: it can achieve a good assimilation effect even with small ensemble sizes.
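For a directly observed scalar state, the stochastic EnKF analysis step reduces to a few lines. This is a deliberately minimal sketch with perturbed observations; the Lorenz-96 dynamics, inflation, and localization studied in the paper are omitted, and the truth value and noise levels are invented for illustration.

```python
import numpy as np

def enkf_step(ensemble, y_obs, obs_sd, rng):
    """One stochastic EnKF analysis step for a directly observed scalar state."""
    n = len(ensemble)
    # sample variance of the forecast ensemble (scalar state, so a scalar)
    pf = ensemble.var(ddof=1)
    gain = pf / (pf + obs_sd ** 2)                 # Kalman gain
    # perturbed observations: one noisy copy of the observation per member
    y_pert = y_obs + rng.normal(0.0, obs_sd, n)
    return ensemble + gain * (y_pert - ensemble)

rng = np.random.default_rng(0)
ensemble = rng.normal(0.0, 2.0, 100)   # prior ensemble, far from the truth
truth = 5.0
for _ in range(10):                    # assimilate repeated observations
    y = truth + rng.normal(0.0, 0.5)
    ensemble = enkf_step(ensemble, y, obs_sd=0.5, rng=rng)
```

Each assimilation pulls the ensemble mean toward the observations while shrinking the spread, the behavior whose sensitivity to ensemble size the paper quantifies.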
A sequential growth dynamics for a directed acyclic dyadic graph
Krugly, Alexey L
2011-01-01
A model of discrete spacetime on a microscopic level is considered. It is a directed acyclic dyadic graph. This is the particular case of a causal set. The goal of this model is to describe particles as some repetitive symmetrical self-organized structures of the graph without any reference to continuous spacetime. The dynamics of the model is considered. This dynamics is stochastic sequential additions of new vertexes. Growth of the graph is a Markovian process. This dynamics is a consequence of a causality principle.
Evolution of Decisions in Population Games with Sequentially Searching Individuals
Directory of Open Access Journals (Sweden)
Tadeas Priklopil
2015-09-01
In many social situations, individuals endeavor to find the single best possible partner, but are constrained to evaluate the candidates in sequence. Examples include the search for mates, economic partnerships, or any other long-term ties where the choice to interact involves two parties. Surprisingly, however, previous theoretical work on mutual choice problems focuses on finding equilibrium solutions, while ignoring the evolutionary dynamics of decisions. Empirically, this may be of high importance, as some equilibrium solutions can never be reached unless the population undergoes radical changes and a sufficient number of individuals change their decisions simultaneously. To address this question, we apply a mutual choice sequential search problem in an evolutionary game-theoretical model that allows one to find solutions that are favored by evolution. As an example, we study the influence of sequential search on the evolutionary dynamics of cooperation. For this, we focus on the classic snowdrift game and the prisoner’s dilemma game.
Sequential biological process for molybdenum extraction from hydrodesulphurization spent catalyst.
Vyas, Shruti; Ting, Yen-Peng
2016-10-01
Spent catalyst bioleaching with Acidithiobacillus ferrooxidans has been widely studied and low Mo leaching has often been reported. This work describes an enhanced extraction of Mo via a two-stage sequential process for the bioleaching of hydrodesulphurization spent catalyst containing molybdenum, nickel, and aluminium. In the first stage, two-step bioleaching was performed using Acidithiobacillus ferrooxidans, and achieved 89.4% Ni, 20.9% Mo and 12.7% Al extraction in 15 days. To increase Mo extraction, the bioleached catalyst was subjected to a second-stage bioleaching using Escherichia coli, during which 99% of the remaining Mo was extracted in 25 days. This sequential bioleaching strategy selectively extracted Ni in the first stage and Mo in the second stage, and is a more environmentally friendly alternative to sequential chemical leaching with alkaline reagents for improved Mo extraction. Kinetic modelling to establish the rate-determining step in both stages of bioleaching showed that in the first stage Mo extraction was chemical-reaction controlled, whereas in the subsequent stage the product layer diffusion model provided the best fit.
Fully vs. Sequentially Coupled Loads Analysis of Offshore Wind Turbines
Energy Technology Data Exchange (ETDEWEB)
Damiani, Rick; Wendt, Fabian; Musial, Walter; Finucane, Z.; Hulliger, L.; Chilka, S.; Dolan, D.; Cushing, J.; O'Connell, D.; Falk, S.
2017-06-19
The design and analysis methods for offshore wind turbines must consider the aerodynamic and hydrodynamic loads and response of the entire system (turbine, tower, substructure, and foundation) coupled to the turbine control system dynamics. Whereas a fully coupled (turbine and support structure) modeling approach is more rigorous, intellectual property concerns can preclude this approach. In fact, turbine control system algorithms and turbine properties are strictly guarded and often not shared. In many cases, a partially coupled analysis using separate tools and an exchange of reduced sets of data via sequential coupling may be necessary. In the sequentially coupled approach, the turbine and substructure designers will independently determine and exchange an abridged model of their respective subsystems to be used in their partners' dynamic simulations. Although the ability to achieve design optimization is sacrificed to some degree with a sequentially coupled analysis method, the central question here is whether this approach can deliver the required safety and how the differences in the results from the fully coupled method could affect the design. This work summarizes the scope and preliminary results of a study conducted for the Bureau of Safety and Environmental Enforcement aimed at quantifying differences between these approaches through aero-hydro-servo-elastic simulations of two offshore wind turbines on a monopile and jacket substructure.
Investigation of the Sequential Rotation Technique and its Application in Phased Arrays
DEFF Research Database (Denmark)
Larsen, Niels Vesterdal
2007-01-01
This report documents the investigations of the sequential rotation technique as applied to phased array antennas. A spherical wave expansion for the far field of sequentially phased arrays is derived for general antenna elements. This model is approximate in that it assumes that the element patterns are identical, and it does not include the effects of mutual coupling between the elements. For this reason it is compared with more accurate numerical models which include the coupling effects. The results show that the sequential rotation technique generally improves the performance of the phased array, also when it is scanned off bore sight. For array topologies where the elements are not positioned rotationally symmetrically, the performance of the sequential rotation is to some extent impaired by the mutual coupling and non-identical element patterns. These effects are not evident from...
Modern Sequential Analysis and Its Applications to Computerized Adaptive Testing
Bartroff, Jay; Finkelman, Matthew; Lai, Tze Leung
2008-01-01
After a brief review of recent advances in sequential analysis involving sequential generalized likelihood ratio tests, we discuss their use in psychometric testing and extend the asymptotic optimality theory of these sequential tests to the case of sequentially generated experiments, of particular interest in computerized adaptive testing. We…
Continuous versus group sequential analysis for post-market drug and vaccine safety surveillance.
Silva, I R; Kulldorff, M
2015-09-01
The use of sequential statistical analysis for post-market drug safety surveillance is quickly emerging. Both continuous and group sequential analysis have been used, but consensus is lacking as to when to use which approach. We compare the statistical performance of continuous and group sequential analysis in terms of type I error probability; statistical power; expected time to signal when the null hypothesis is rejected; and the sample size required to end surveillance without rejecting the null. We present a mathematical proposition to show that for any group sequential design there always exists a continuous sequential design that is uniformly better. As a consequence, it is shown that more frequent testing is always better. Additionally, for a Poisson based probability model and a flat rejection boundary in terms of the log likelihood ratio, we compare the performance of various continuous and group sequential designs. Using exact calculations, we found that, for the parameter settings used, there is always a continuous design with shorter expected time to signal than the best group design. The two key conclusions from this article are (i) that any post-market safety surveillance system should attempt to obtain data as frequently as possible, and (ii) that sequential testing should always be performed when new data arrives without deliberately waiting for additional data. © 2015, The International Biometric Society.
Learning Orthographic Structure With Sequential Generative Neural Networks.
Testolin, Alberto; Stoianov, Ivilin; Sperduti, Alessandro; Zorzi, Marco
2016-04-01
Learning the structure of event sequences is a ubiquitous problem in cognition and particularly in language. One possible solution is to learn a probabilistic generative model of sequences that allows making predictions about upcoming events. Though appealing from a neurobiological standpoint, this approach is typically not pursued in connectionist modeling. Here, we investigated a sequential version of the restricted Boltzmann machine (RBM), a stochastic recurrent neural network that extracts high-order structure from sensory data through unsupervised generative learning and can encode contextual information in the form of internal, distributed representations. We assessed whether this type of network can extract the orthographic structure of English monosyllables by learning a generative model of the letter sequences forming a word training corpus. We show that the network learned an accurate probabilistic model of English graphotactics, which can be used to make predictions about the letter following a given context as well as to autonomously generate high-quality pseudowords. The model was compared to an extended version of simple recurrent networks, augmented with a stochastic process that allows autonomous generation of sequences, and to non-connectionist probabilistic models (n-grams and hidden Markov models). We conclude that sequential RBMs and stochastic simple recurrent networks are promising candidates for modeling cognition in the temporal domain.
Sequential bargaining in a market with one seller and two different buyers
DEFF Research Database (Denmark)
Tranæs, Torben; Hendon, Ebbe
1991-01-01
A matching and bargaining model in a market with one seller and two buyers, differing only in their reservation price, is analyzed. No subgame perfect equilibrium exists for stationary strategies. We demonstrate the existence of inefficient equilibria in which the low buyer receives the good with large probability, even as friction becomes negligible. We investigate the relationship between the use of Nash and sequential bargaining. Nash bargaining seems applicable only when the sequential approach yields a unique stationary strategy subgame perfect equilibrium.
A survey of sequential Monte Carlo methods for economics and finance
Creal, D.D.
2009-01-01
This paper serves as an introduction and survey for economists to the field of sequential Monte Carlo methods which are also known as particle filters. Sequential Monte Carlo methods are simulation based algorithms used to compute the high-dimensional and/or complex integrals that arise regularly in applied work. These methods are becoming increasingly popular in economics and finance; from dynamic stochastic general equilibrium models in macro-economics to option pricing. The objective of th...
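The bootstrap particle filter, the simplest sequential Monte Carlo method covered by such surveys, can be sketched for a scalar random-walk state-space model. The model, noise levels, particle count, and seeds below are illustrative assumptions, not taken from the survey.

```python
import numpy as np

def bootstrap_particle_filter(ys, n_particles=500, sigma_x=1.0, sigma_y=1.0, seed=0):
    """Bootstrap particle filter for the model
       x_t = x_{t-1} + N(0, sigma_x^2),  y_t = x_t + N(0, sigma_y^2).

    Returns the sequence of posterior mean estimates of x_t.
    """
    rng = np.random.default_rng(seed)
    particles = rng.normal(0.0, 1.0, n_particles)
    means = []
    for y in ys:
        # propagate each particle through the transition density
        particles = particles + rng.normal(0.0, sigma_x, n_particles)
        # weight particles by the observation likelihood
        w = np.exp(-0.5 * ((y - particles) / sigma_y) ** 2)
        w /= w.sum()
        means.append(float(np.dot(w, particles)))
        # multinomial resampling to avoid weight degeneracy
        particles = rng.choice(particles, size=n_particles, p=w)
    return means

# noisy observations of a state drifting from 0 toward 10
true_states = np.linspace(0.0, 10.0, 30)
obs = true_states + np.random.default_rng(1).normal(0.0, 1.0, 30)
est = bootstrap_particle_filter(obs)
```

The propagate-weight-resample loop is the template that the survey's more elaborate particle filters (for DSGE models, option pricing, etc.) refine.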
Modern Sequential Analysis and its Applications to Computerized Adaptive Testing
Bartroff, Jay; Lai, Tze Leung
2011-01-01
After a brief review of recent advances in sequential analysis involving sequential generalized likelihood ratio tests, we discuss their use in psychometric testing and extend the asymptotic optimality theory of these sequential tests to the case of sequentially generated experiments, of particular interest in computerized adaptive testing. We then show how these methods can be used to design adaptive mastery tests, which are asymptotically optimal and are also shown to provide substantial improvements over currently used sequential and fixed length tests.
An Overview of Markov Chain Methods for the Study of Stage-Sequential Developmental Processes
Kaplan, David
2008-01-01
This article presents an overview of quantitative methodologies for the study of stage-sequential development based on extensions of Markov chain modeling. Four methods are presented that exemplify the flexibility of this approach: the manifest Markov model, the latent Markov model, latent transition analysis, and the mixture latent Markov model.
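The simplest of the four methods, the manifest Markov model, amounts to estimating a transition matrix from observed stage sequences by row-normalizing transition counts. The three-stage sequences below are a hypothetical example (stages coded 0-2), not data from the article.

```python
import numpy as np

def estimate_transition_matrix(sequences, n_states):
    """Manifest Markov model: maximum-likelihood estimate of the
    transition matrix from observed stage sequences (row-normalized
    transition counts)."""
    counts = np.zeros((n_states, n_states))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):   # consecutive stage pairs
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

# three hypothetical developmental trajectories through stages 0 -> 1 -> 2
seqs = [[0, 0, 1, 2], [0, 1, 1, 2], [0, 1, 2, 2]]
P = estimate_transition_matrix(seqs, 3)
```

The latent variants described in the article replace these directly observed stages with latent classes, but keep the same transition-matrix core.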
Directory of Open Access Journals (Sweden)
Aaron T Wild
Sorafenib (SOR) is the only systemic agent known to improve survival for hepatocellular carcinoma (HCC). However, SOR prolongs survival by less than 3 months and does not alter symptomatic progression. To improve outcomes, several phase I-II trials are currently examining SOR with radiation (RT) for HCC utilizing heterogeneous concurrent and sequential treatment regimens. Our study provides preclinical data characterizing the effects of concurrent versus sequential RT-SOR on HCC cells both in vitro and in vivo. Concurrent and sequential RT-SOR regimens were tested for efficacy among 4 HCC cell lines in vitro by assessment of clonogenic survival, apoptosis, cell cycle distribution, and γ-H2AX foci formation. Results were confirmed in vivo by evaluating tumor growth delay and performing immunofluorescence staining in a hind-flank xenograft model. In vitro, concurrent RT-SOR produced radioprotection in 3 of 4 cell lines, whereas sequential RT-SOR produced decreased colony formation among all 4. Sequential RT-SOR increased apoptosis compared to RT alone, while concurrent RT-SOR did not. Sorafenib induced reassortment into less radiosensitive phases of the cell cycle through G1-S delay and cell cycle slowing. More double-strand breaks (DSBs) persisted 24 h post-irradiation for RT alone versus concurrent RT-SOR. In vivo, sequential RT-SOR produced the greatest tumor growth delay, while concurrent RT-SOR was similar to RT alone. More persistent DSBs were observed in xenografts treated with sequential RT-SOR or RT alone versus concurrent RT-SOR. Sequential RT-SOR additionally produced a greater reduction in xenograft tumor vascularity and mitotic index than either concurrent RT-SOR or RT alone. In conclusion, sequential RT-SOR demonstrates greater efficacy against HCC than concurrent RT-SOR both in vitro and in vivo. These results may have implications for clinical decision-making and prospective trial design.
Sequential dependencies in magnitude scaling of loudness
DEFF Research Database (Denmark)
Joshi, Suyash Narendra; Jesteadt, Walt
2013-01-01
B were used to program the sone-potentiometer. The knob settings systematically influenced the form of the loudness function. Time series analysis was used to assess the sequential dependencies in the data, which increased with increasing exponent and were greatest for the log-law. It would be possible, therefore, to choose knob properties that minimize these dependencies. When the sequential dependencies were removed from the data, the slope of the loudness functions did not change, but the variability decreased. Sequential dependencies were only present when the level of the tone on the previous trial was higher than on the current trial. According to the attention band hypothesis [Green and Luce, 1974, Perception & Psychophysics], these dependencies arise from a process similar to selective attention, but observations of rapid adaptation of neurons in the inferior colliculus based on stimulus level...
Correlation and Sequential Filtering with Doppler Measurements
Institute of Scientific and Technical Information of China (English)
WANG Jianguo; HE Peikun; HAN Yueqiu; WU Siliang
2004-01-01
Two sequential filters are developed for Doppler radar measurements in the presence of correlation between range and range-rate measurement errors. Two ideal linear measurement equations with pseudo measurements are constructed via block-partitioned Cholesky factorization, and the practical measurement equations with pseudo measurements are obtained through direction cosine estimation and error compensation. The resulting sequential filters allow the position measurement to be processed before the pseudo measurement, so that a more accurate direction cosine estimate can be obtained from the filtered position estimate rather than from the predicted state estimate. Numerical simulations with different range-range rate correlation coefficients show that the proposed two sequential filters are almost equivalent in performance, but both are superior to the conventional extended Kalman filter across the different correlation coefficients.
Refinement-based verification of sequential implementations of Stateflow charts
Miyazawa, Alvaro; 10.4204/EPTCS.55.5
2011-01-01
Simulink/Stateflow charts are widely used in industry for the specification of control systems, which are often safety-critical. This suggests a need for a formal treatment of such models. In previous work, we have proposed a technique for automatic generation of formal models of Stateflow blocks to support refinement-based reasoning. In this article, we present a refinement strategy that supports the verification of automatically generated sequential C implementations of Stateflow charts. In particular, we discuss how this strategy can be specialised to take advantage of architectural features in order to allow a higher level of automation.
Modified sequential fully implicit scheme for compositional flow simulation
Moncorgé, A.; Tchelepi, H. A.; Jenny, P.
2017-05-01
The fully implicit (FI) method is widely used for numerical modeling of multiphase flow and transport in porous media. The FI method is unconditionally stable, but that comes at the cost of a low-order approximation and high computational cost. The FI method entails iterative linearization and solution of fully-coupled linear systems with mixed elliptic/hyperbolic character. However, in methods that treat the near-elliptic (flow) and hyperbolic (transport) separately, such as multiscale formulations, sequential solution strategies are used to couple the flow (pressures and velocities) and the transport (saturations/compositions). The most common sequential schemes are: the implicit pressure explicit saturation (IMPES), and the sequential fully implicit (SFI) schemes. Problems of practical interest often involve tightly coupled nonlinear interactions between the multiphase flow and the multi-component transport. For such problems, the IMPES approach usually suffers from prohibitively small timesteps in order to obtain stable numerical solutions. The SFI method, on the other hand, does not suffer from a temporal stability limit, but the convergence rate can be extremely slow. This slow convergence rate of SFI can offset the gains obtained from separate and specialized treatments of the flow and transport problems. In this paper, we analyze the nonlinear coupling between flow and transport for compressible, compositional systems with complex interphase mass transfer. We isolate the nonlinear effects related to transmissibility and compressibility from those due to interphase mass transfer, and we propose a modified SFI (m-SFI) method. The new scheme involves enriching the 'standard' pressure equation with coupling between the pressure and the saturations/compositions. The modification resolves the convergence problems associated with SFI and provides a strong basis for using sequential formulations for general-purpose simulation. For a wide parameter range, we show
Sequential Pattern Mining Using Formal Language Tools
Directory of Open Access Journals (Sweden)
R. S. Jadon
2012-09-01
In the present scenario almost every system is computerized, and all information and data are stored in computers, so huge collections of data are emerging. Retrieving untouched, hidden and important information from these huge data sets is tedious work. Data mining is a technological solution that extracts such hidden and important information from vast databases to investigate noteworthy knowledge in the data warehouse. An important problem in data mining is to discover patterns in various fields such as medical science, the world wide web, and telecommunication. Sequential pattern mining is a data mining method in which we retrieve hidden patterns linked with instants or other sequences. In sequential pattern mining we extract those sequential patterns whose support count is greater than or equal to a given minimum support threshold. In practice, users are interested only in specific and interesting patterns instead of the entire set of probable sequential patterns. To control the exploration space, users can apply heuristics, which can be represented as constraints, and many constraint-mining algorithms have been developed that generate patterns matching user expectations. In the present work we explore and enhance regular expression constraints: regular expressions are one class of constraint, and a number of sequential pattern mining algorithms use them. However, some constraints are neither regular nor context-free, such as the cross-serial pattern a^n b^m c^n d^m found in Swiss German data; no equivalent deterministic finite automaton (DFA) or pushdown automaton (PDA) can be constructed for such patterns. We propose a new algorithm, PMFLT (Pattern Mining using Formal Language Tools), for sequential pattern mining using formal language tools as constraints. The proposed algorithm finds only user-specific frequent sequences in efficient
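The support-plus-constraint filtering described above can be sketched in a few lines. The database, the pattern search, and the regular expression `a.*d` below are illustrative toy inputs, not PMFLT itself (which targets constraints beyond regular languages):

```python
import re
from itertools import product

def is_subsequence(pattern, sequence):
    """True if pattern occurs in sequence in order (gaps allowed)."""
    it = iter(sequence)
    return all(item in it for item in pattern)

def mine_constrained(db, min_sup, regex, max_len=3):
    """Frequent subsequences whose concatenation matches the regex constraint."""
    items = sorted({x for seq in db for x in seq})
    frequent = []
    for n in range(1, max_len + 1):
        for pattern in product(items, repeat=n):
            support = sum(is_subsequence(pattern, s) for s in db)
            if support >= min_sup and re.fullmatch(regex, "".join(pattern)):
                frequent.append(("".join(pattern), support))
    return frequent

db = ["abcd", "abd", "acbd", "bd"]
print(mine_constrained(db, min_sup=3, regex="a.*d"))  # → [('ad', 3), ('abd', 3)]
```

The constraint is checked after counting support here only for brevity; real constraint-based miners push the automaton into candidate generation to prune the search space early.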
Self arbitrated VLSI asynchronous sequential circuits
Whitaker, S.; Maki, G.
1990-01-01
A new class of asynchronous sequential circuits is introduced in this paper. The new design procedures are oriented towards producing asynchronous sequential circuits that are implemented with CMOS VLSI and take advantage of pass transistor technology. The first design algorithm utilizes a standard Single Transition Time (STT) state assignment. The second method introduces a new class of self synchronizing asynchronous circuits which eliminates the need for critical race free state assignments. These circuits arbitrate the transition path action by forcing the circuit to sequence through proper unstable states. These methods result in near minimum hardware since only the transition paths associated with state variable changes need to be implemented with pass transistor networks.
A Theory of Sequential Reciprocity
Dufwenberg, M.; Kirchsteiger, G.
1998-01-01
Many experimental studies indicate that people are motivated by reciprocity. Rabin (1993) develops techniques for incorporating such concerns into game theory and economics. His model, however, does not fare well when applied to situations with an interesting dynamic structure (like many experimenta
Adaptive designs for sequential experiments
Institute of Scientific and Technical Information of China (English)
林正炎; 张立新
2003-01-01
Various adaptive designs have been proposed and applied to clinical trials, bioassay, psychophysics, etc. Adaptive designs are also useful in high-cost engineering trials, and more and more people have been paying attention to these design methods. This paper introduces several broad families of designs, such as the play-the-winner rule, the randomized play-the-winner rule and its generalization to the multi-arm case, the doubly biased coin adaptive design, and the Markov chain model.
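The randomized play-the-winner rule mentioned above is easy to illustrate with a small urn simulation; the success probabilities and patient count below are made-up inputs, not data from any trial:

```python
import random

def randomized_play_the_winner(p_success, n_patients, seed=0):
    """Urn simulation of the randomized play-the-winner rule for two arms.

    A success on arm k adds a ball of type k to the urn; a failure adds a
    ball of the other type, so better arms attract more patients over time."""
    rng = random.Random(seed)
    urn = [0, 1]                       # start with one ball per arm
    allocations = [0, 0]
    for _ in range(n_patients):
        arm = rng.choice(urn)
        allocations[arm] += 1
        if rng.random() < p_success[arm]:
            urn.append(arm)            # success: reinforce this arm
        else:
            urn.append(1 - arm)        # failure: reinforce the other arm
    return allocations

# Arm 1 has the higher success rate, so it should receive more patients.
print(randomized_play_the_winner([0.3, 0.7], 200))
```

In the long run the allocation proportion to the better arm converges to q0/(q0+q1), where q denotes the failure probabilities, here 0.7/(0.7+0.3) = 70%.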
Dynamic simulation of a high-performance sequentially turbocharged marine diesel engine
Energy Technology Data Exchange (ETDEWEB)
Benvenuto, G. [Genova Univ., Dip. di Ingegneria Navale e Tecnologie Marine (DINAV), Genova (Italy); Campora, U. [Genova Univ., Dip. di Macchine, Sistemi Energetici e Trasporti (DIMSET), Genova (Italy)
2002-09-01
The sequential turbocharging technique is used to improve the performance of highly rated diesel engines in particular at part loads. However, the transient behaviour of the sequential turbocharging connection/disconnection phases may be difficult to calibrate and requires an accurate study and development. This may be accomplished, in addition to the necessary experimentation, by means of dynamic simulation techniques. In this paper a model for the dynamic simulation of a sequentially turbocharged diesel engine is presented. A two-zone, non-adiabatic, actual cycle approach is used for the chemical and thermodynamic phenomena simulation in the cylinder. Fluid mass and energy accumulation in the engine volumes are evaluated by means of a filling and emptying method. The simulation of the turbocharger dynamics combines the use of the compressor and turbine maps with a model of the sequential turbocharging connection/disconnection valves and of their governor system. The procedure is applied to the simulation of the Wartsila 18V 26X engine, a highly rated, recently developed, sequentially turbocharged marine diesel engine, whose experimental results are used for the steady state and transient validation of the simulation code with particular reference to the sequential turbocharging connection/disconnection phases. The presented results show the time histories of some important variables during typical engine load variations. (Author)
Sequential decisions: a computational comparison of observational and reinforcement accounts.
Mohammadi Sepahvand, Nazanin; Stöttinger, Elisabeth; Danckert, James; Anderson, Britt
2014-01-01
Right brain damaged patients show impairments in sequential decision making tasks for which healthy people do not show any difficulty. We hypothesized that this difficulty could be due to the failure of right brain damaged patients to develop well-matched models of the world. Our motivation is the idea that to navigate uncertainty, humans use models of the world to direct the decisions they make when interacting with their environment. The better the model is, the better their decisions are. To explore the model building and updating process in humans and the basis for impairment after brain injury, we used a computational model of non-stationary sequence learning. RELPH (Reinforcement and Entropy Learned Pruned Hypothesis space) was able to qualitatively and quantitatively reproduce the results of left and right brain damaged patient groups and healthy controls playing a sequential version of Rock, Paper, Scissors. Our results suggest that, in general, humans employ a sub-optimal reinforcement based learning method rather than an objectively better statistical learning approach, and that differences between right brain damaged and healthy control groups can be explained by different exploration policies, rather than qualitatively different learning mechanisms.
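RELPH itself is not reproduced here; as a minimal sketch of the "sub-optimal reinforcement based learning" idea, the following epsilon-greedy learner plays Rock, Paper, Scissors against a hypothetically rock-biased opponent (all parameters are illustrative assumptions):

```python
import random

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}  # value beats key

def rps_win_rate(n_rounds, opponent_bias=0.8, eps=0.1, step=0.1, seed=1):
    """Epsilon-greedy reinforcement learner vs an opponent biased toward rock."""
    rng = random.Random(seed)
    actions = list(BEATS)
    value = {a: 0.0 for a in actions}      # running action-value estimates
    wins = 0
    for _ in range(n_rounds):
        opp = "rock" if rng.random() < opponent_bias else rng.choice(actions)
        act = rng.choice(actions) if rng.random() < eps else max(value, key=value.get)
        reward = 1.0 if act == BEATS[opp] else (0.0 if act == opp else -1.0)
        wins += reward > 0
        value[act] += step * (reward - value[act])   # reinforcement update
    return wins / n_rounds

print(rps_win_rate(2000))
```

A statistical learner would instead estimate the opponent's move distribution directly and best-respond to it; the reinforcement learner above only tracks its own action values, which is the kind of sub-optimality the abstract contrasts.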
Sequential protein NMR assignments in the liquid state via sequential data acquisition
Wiedemann, Christoph; Bellstedt, Peter; Kirschstein, Anika; Häfner, Sabine; Herbst, Christian; Görlach, Matthias; Ramachandran, Ramadurai
2014-02-01
Two different NMR pulse schemes involving sequential 1H data acquisition are presented for achieving protein backbone sequential resonance assignments: (i) acquisition of 3D {HCCNH and HNCACONH} and (ii) collection of 3D {HNCOCANH and HNCACONH} chemical shift correlation spectra using uniformly 13C,15N labelled proteins. The sequential acquisition of these spectra reduces the overall experimental time by a factor of ≈2 as compared to individual acquisitions. The suitability of this approach is experimentally demonstrated for the C-terminal winged helix (WH) domain of the minichromosome maintenance (MCM) complex of Sulfolobus solfataricus.
Mathematical Problem Solving through Sequential Process Analysis
Codina, A.; Cañadas, M. C.; Castro, E.
2015-01-01
Introduction: The macroscopic perspective is one of the frameworks for research on problem solving in mathematics education. Coming from this perspective, our study addresses the stages of thought in mathematical problem solving, offering an innovative approach because we apply sequential relations and global interrelations between the different…
Comprehensive sequential interventional therapy for hepatocellular carcinoma
Institute of Scientific and Technical Information of China (English)
ZHANG Liang; FAN Wei-jun; HUANG Jin-hua; LI Chuan-xing; ZHAO Ming; WANG Li-gang; TANG Tian
2009-01-01
Background Since the 1980s, various approaches to interventional therapy have been developed alongside advances in medical imaging technology. This study aimed to evaluate the effectiveness of comprehensive sequential interventional therapy, especially personalized therapeutic plans, in 53 patients with hepatocellular carcinoma (HCC) who achieved radical cure. Methods From January 2003 to January 2005, a total of 203 patients with HCC received sequential interventional treatment in our hospital; 53 patients achieved radical cure outcomes. These patients were treated with transcatheter arterial chemoembolization (TACE), radiofrequency ablation (RFA), percutaneous ethanol injection (PEI), or high intensity focused ultrasound (HIFU), sequentially and in combination depending on their clinical and pathological features. PET-CT was used to evaluate, assess, and guide treatment. Results Based on the imaging and serological data, all the patients had a personalized therapeutic plan. The longest follow-up time was 24 months, the shortest was 6 months, and mean survival time was 16.5 months. Conclusion Comprehensive sequential interventional therapy, especially a personalized therapeutic plan, plays a role in the interventional treatment of HCC in the middle or advanced stage.
Sequential spatial processes for image analysis
M.N.M. van Lieshout (Marie-Colette)
2009-01-01
We give a brief introduction to sequential spatial processes. We discuss their definition, formulate a Markov property, and indicate why such processes are natural tools in tackling high level vision problems. We focus on the problem of tracking a variable number of moving objects through
A Bayesian sequential design with binary outcome.
Zhu, Han; Yu, Qingzhao; Mercante, Donald E
2017-03-02
Several researchers have proposed solutions to control type I error rate in sequential designs. The use of Bayesian sequential design becomes more common; however, these designs are subject to inflation of the type I error rate. We propose a Bayesian sequential design for binary outcome using an alpha-spending function to control the overall type I error rate. Algorithms are presented for calculating critical values and power for the proposed designs. We also propose a new stopping rule for futility. Sensitivity analysis is implemented for assessing the effects of varying the parameters of the prior distribution and maximum total sample size on critical values. Alpha-spending functions are compared using power and actual sample size through simulations. Further simulations show that, when total sample size is fixed, the proposed design has greater power than the traditional Bayesian sequential design, which sets equal stopping bounds at all interim analyses. We also find that the proposed design with the new stopping for futility rule results in greater power and can stop earlier with a smaller actual sample size, compared with the traditional stopping rule for futility when all other conditions are held constant. Finally, we apply the proposed method to a real data set and compare the results with traditional designs.
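The alpha-spending idea can be made concrete with two standard spending functions. The exact functions used in the study above are not specified here; these are the common O'Brien-Fleming-type and Pocock-type forms, with an overall two-sided alpha of 0.05 assumed:

```python
import math

def normal_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def obf_spending(t):
    """O'Brien-Fleming-type spending at information fraction t (alpha = 0.05)."""
    return 2.0 - 2.0 * normal_cdf(1.959963984540054 / math.sqrt(t))

def pocock_spending(t, alpha=0.05):
    """Pocock-type spending: alpha * ln(1 + (e - 1) * t)."""
    return alpha * math.log(1.0 + (math.e - 1.0) * t)

# Cumulative type I error allowed to be 'spent' by each interim analysis:
for t in (0.25, 0.5, 0.75, 1.0):
    print(f"t={t:.2f}  OBF={obf_spending(t):.5f}  Pocock={pocock_spending(t):.5f}")
```

Both functions spend the full alpha by the final analysis (t = 1), but the O'Brien-Fleming form spends far less early on, making early stopping harder at the first interim looks.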
Sequential motor skill: cognition, perception and action
Ruitenberg, M.F.L.
2013-01-01
Discrete movement sequences are assumed to be the building blocks of more complex sequential actions that are present in our everyday behavior. The studies presented in this dissertation address the (neuro)cognitive underpinnings of such movement sequences, in particular in relationship to the role
On Sequentially Co-Cohen-Macaulay Modules
Institute of Scientific and Technical Information of China (English)
Nguyen Thi Dung
2007-01-01
In this paper, we define the notion of the dimension filtration of an Artinian module and study a class of Artinian modules, called sequentially co-Cohen-Macaulay modules, which strictly contains all co-Cohen-Macaulay modules. Some characterizations of co-Cohen-Macaulayness in terms of the Matlis duality and of local homology are also given.
Terminating Sequential Delphi Survey Data Collection
Kalaian, Sema A.; Kasim, Rafa M.
2012-01-01
The Delphi survey technique is an iterative mail or electronic (e-mail or web-based) survey method used to obtain agreement or consensus among a group of experts in a specific field on a particular issue through multiple well-designed and systematic sequential rounds of survey administration. Each of the multiple rounds of the Delphi survey…
Sequential auctions for full truckload allocation
Mes, Martijn R.K.
2008-01-01
In this thesis we examine the use of sequential auctions for the dynamic allocation of transportation jobs. For all players, buyers and sellers, we develop strategies and examine their performance both in terms of individual benefits and with respect to the global logistical performance (resource
Sequential Tests for Large Scale Learning
Korattikara, A.; Chen, Y.; Welling, M.
2016-01-01
We argue that when faced with big data sets, learning and inference algorithms should compute updates using only subsets of data items. We introduce algorithms that use sequential hypothesis tests to adaptively select such a subset of data points. The statistical properties of this subsampling proce
Sequential and simultaneous multiple explanation
Directory of Open Access Journals (Sweden)
Robert Litchfield
2007-02-01
This paper reports two experiments comparing variants of multiple explanation applied in the early stages of a judgment task (a case involving employee theft) where participants are not given a menu of response options. Because prior research has focused on situations where response options are provided to judges, we identify relevant dependent variables that an intervention might affect when such options are not given. We use these variables to build a causal model of intervention that illustrates both the intended effects of multiple explanation and some potentially competing processes that it may trigger. Although multiple explanation clearly conveys some benefits in the present experiments (e.g., willingness to delay action to engage in information search; increased detail, quality, and confidence in alternative explanations), we also found evidence that it may initiate or enhance processes that attenuate its advantages (e.g., feelings that one does not need more data if one has multiple good explanations).
Retailers and consumers in sequential auctions of collectibles
DEFF Research Database (Denmark)
Vincent Lyk-Jensen, Stéphanie; Chanel, Olivier
2007-01-01
We analyse an independent private-value model, where heterogeneous bidders compete for objects sold in sequential second-price auctions. In this heterogeneous game, bidders may have differently distributed valuations, and some have multi-unit demand with decreasing marginal values (retailers); others have a specific single-unit demand (consumers). By examining equilibrium bidding strategies and price sequences, we show that the presence of consumers leads to more aggressive bidding from the retailers on average, and that bidder heterogeneity is a plausible explanation of the price decline effect...
A Conjugate Class of Utility Functions for Sequential Decision Problems.
Houlding, Brett; Coolen, Frank P A; Bolger, Donnacha
2015-09-01
The use of the conjugacy property for members of the exponential family of distributions is commonplace within Bayesian statistical analysis, allowing for tractable and simple solutions to problems of inference. However, despite a shared motivation, there has been little previous development of a similar property for using utility functions within a Bayesian decision analysis. As such, this article explores a class of utility functions that appear to be reasonable for modeling the preferences of a decisionmaker in many real-life situations, but that also permit a tractable and simple analysis within sequential decision problems. © 2015 Society for Risk Analysis.
Sequential causal inference: application to randomized trials of adaptive treatment strategies.
Dawson, Ree; Lavori, Philip W
2008-05-10
Clinical trials that randomize subjects to decision algorithms, which adapt treatments over time according to individual response, have gained considerable interest as investigators seek designs that directly inform clinical decision making. We consider designs in which subjects are randomized sequentially at decision points, among adaptive treatment options under evaluation. We present a sequential method to estimate the comparative effects of the randomized adaptive treatments, which are formalized as adaptive treatment strategies. Our causal estimators are derived using Bayesian predictive inference. We use analytical and empirical calculations to compare the predictive estimators to (i) the 'standard' approach that allocates the sequentially obtained data to separate strategy-specific groups as would arise from randomizing subjects at baseline; (ii) the semi-parametric approach of marginal mean models that, under appropriate experimental conditions, provides the same sequential estimator of causal differences as the proposed approach. Simulation studies demonstrate that sequential causal inference offers substantial efficiency gains over the standard approach to comparing treatments, because the predictive estimators can take advantage of the monotone structure of shared data among adaptive strategies. We further demonstrate that the semi-parametric asymptotic variances, which are marginal 'one-step' estimators, may exhibit significant bias, in contrast to the predictive variances. We show that the conditions under which the sequential method is attractive relative to the other two approaches are those most likely to occur in real studies.
Fluoroquinolone Sequential Therapy for Helicobacter pylori: A Meta-analysis.
Kale-Pradhan, Pramodini B; Mihaescu, Anela; Wilhelm, Sheila M
2015-08-01
As resistance of Helicobacter pylori to standard first-line therapy is increasing globally, alternative treatment regimens, such as a fluoroquinolone-based sequential regimen, have been explored. The objective of this meta-analysis was to compare the efficacy of fluoroquinolone-based sequential therapy with standard first-line treatment for H. pylori infection. Meta-analysis of six randomized controlled trials. A total of 738 H. pylori-infected, treatment-naive adults who received fluoroquinolone-based sequential therapy (5-7 days of a proton pump inhibitor [PPI] and amoxicillin therapy followed by 5-7 days of a PPI, a fluoroquinolone, and metronidazole or tinidazole or furazolidone therapy) and 733 H. pylori-infected, treatment-naive adults who received guideline-recommended, first-line therapy with standard triple therapy (7-14 days of a PPI plus amoxicillin and clarithromycin) or standard sequential therapy (5 days of a PPI plus amoxicillin, followed by an additional 5 days of triple therapy consisting of a PPI, clarithromycin, and metronidazole or tinidazole). A systematic literature search of the MEDLINE, PubMed, and Cochrane Central Register of Controlled Trials databases (from inception through January 2015) was conducted to identify randomized controlled trials that compared fluoroquinolone-based sequential therapy with guideline-recommended, first-line treatment regimens in H. pylori-infected, treatment-naive adults. All selected trials confirmed H. pylori infection prior to treatment as well as post-treatment eradication. A meta-analysis was performed by using Review Manager 5.2. Treatment effect was determined with a random-effects model by using the Mantel-Haenszel method and was reported as a risk ratio (RR) with 95% confidence interval (CI). In the six randomized controlled trials that met the inclusion criteria, 648 (87.8%) of 738 patients receiving fluoroquinolone-based sequential therapy and 521 (71.1%) of 733 patients receiving standard
Group sequential designs for stepped-wedge cluster randomised trials.
Grayling, Michael J; Wason, James Ms; Mander, Adrian P
2017-06-01
The stepped-wedge cluster randomised trial design has received substantial attention in recent years. Although various extensions to the original design have been proposed, no guidance is available on the design of stepped-wedge cluster randomised trials with interim analyses. In an individually randomised trial setting, group sequential methods can provide notable efficiency gains and ethical benefits. We address this by discussing how established group sequential methodology can be adapted for stepped-wedge designs. Utilising the error spending approach to group sequential trial design, we detail the assumptions required for the determination of stepped-wedge cluster randomised trials with interim analyses. We consider early stopping for efficacy, futility, or efficacy and futility. We describe first how this can be done for any specified linear mixed model for data analysis. We then focus on one particular commonly utilised model and, using a recently completed stepped-wedge cluster randomised trial, compare the performance of several designs with interim analyses to the classical stepped-wedge design. Finally, the performance of a quantile substitution procedure for dealing with the case of unknown variance is explored. We demonstrate that the incorporation of early stopping in stepped-wedge cluster randomised trial designs could reduce the expected sample size under the null and alternative hypotheses by up to 31% and 22%, respectively, with no cost to the trial's type-I and type-II error rates. The use of restricted error maximum likelihood estimation was found to be more important than quantile substitution for controlling the type-I error rate. The addition of interim analyses into stepped-wedge cluster randomised trials could help guard against time-consuming trials conducted on poor performing treatments and also help expedite the implementation of efficacious treatments. In future, trialists should consider incorporating early stopping of some kind into
Modified Sequential Kriging Optimization for Multidisciplinary Complex Product Simulation
Institute of Scientific and Technical Information of China (English)
Wang Hao; Wang Shaoping; Mileta M.Tomovic
2010-01-01
Owing to the high computational cost of simulation-based optimization problems, Kriging surrogate models are widely used to decrease computation time. Since sequential Kriging optimization is time consuming, this article extends the expected improvement criterion and puts forward a modified sequential Kriging optimization (MSKO). This method reduces two optimization problems to one by adding more than one point at the same time. Before re-fitting the Kriging model, the new sample points are verified to ensure that they do not overlap previous ones and that the distance between two sample points is not too small. This article presents a double stopping criterion to keep the root mean square error (RMSE) of the final surrogate model at an acceptable level. The example shows that MSKO approaches the global optimum quickly and accurately, and ensures global optimization no matter where the initial point is. Application to an active suspension indicates that the proposed method is effective.
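The expected-improvement criterion that MSKO extends, together with a distance check before accepting several points at once, can be sketched as follows. The batch-selection rule and the candidate values are illustrative assumptions, not the exact MSKO update:

```python
import math

def expected_improvement(mu, sigma, f_best):
    """EI at a candidate with Kriging prediction N(mu, sigma^2), for minimisation."""
    if sigma <= 0.0:
        return 0.0
    z = (f_best - mu) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (f_best - mu) * cdf + sigma * pdf

def select_batch(candidates, f_best, q=2, min_dist=0.2):
    """Greedy batch selection: highest-EI points that stay min_dist apart."""
    ranked = sorted(candidates, key=lambda c: -expected_improvement(c[1], c[2], f_best))
    chosen = []
    for x, mu, sigma in ranked:
        if all(abs(x - y[0]) >= min_dist for y in chosen):
            chosen.append((x, mu, sigma))
        if len(chosen) == q:
            break
    return [c[0] for c in chosen]

# Hypothetical 1-D candidates: (location, predicted mean, predicted std)
cands = [(0.1, 1.0, 0.3), (0.15, 0.9, 0.4), (0.8, 1.2, 0.9)]
print(select_batch(cands, f_best=1.1))  # → [0.8, 0.15]
```

Note how the point at 0.8 wins despite its worse predicted mean: its large predictive uncertainty inflates EI, which is exactly the exploration behaviour that sequential Kriging methods rely on.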
Institute of Scientific and Technical Information of China (English)
夏春明; 郑建荣; J.Howell
2007-01-01
Constrained spectral non-negative matrix factorization (NMF) analysis of perturbed oscillatory process control loop variable data is performed for the isolation of multiple plant-wide oscillatory sources. The technique is described and demonstrated by analyzing both simulated data and real data from a chemical process plant. Results show that the proposed approach can map multiple oscillatory sources onto the most appropriate control loops, and has superior performance in terms of reconstruction accuracy and intuitive understanding compared with spectral independent component analysis (ICA).
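A plain (unconstrained) NMF of non-negative loop spectra already conveys the source-isolation idea. The sketch below uses the standard Lee-Seung multiplicative updates on synthetic spectra of two oscillatory sources; the mixing matrix and spectral peaks are made-up illustrative data, not the paper's constrained formulation:

```python
import numpy as np

def nmf(V, r, n_iter=500, seed=0):
    """Lee-Seung multiplicative updates minimising ||V - W H||_F^2 with W, H >= 0."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 1e-3
    H = rng.random((r, m)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# Two oscillatory sources mixed into four 'loop spectra' (non-negative by construction)
f = np.linspace(0.0, 1.0, 64)
s1 = np.exp(-((f - 0.2) ** 2) / 1e-3)   # spectral peak of source 1
s2 = np.exp(-((f - 0.6) ** 2) / 1e-3)   # spectral peak of source 2
mix = np.array([[1.0, 0.0], [0.0, 1.0], [0.7, 0.3], [0.2, 0.8]])
V = mix @ np.vstack([s1, s2])

W, H = nmf(V, r=2)
print(np.round(W, 2))   # rows indicate how strongly each loop expresses each source
```

Because power spectra are inherently non-negative, the factorization needs no sign disambiguation; the rows of W play the role of a loop-to-source map like the one the abstract describes.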
Effect of Non-negative Chinese Affective Pictures on Patients with Chronic Pain
Institute of Scientific and Technical Information of China (English)
王婷婷; 史婷奇
2016-01-01
Objective To study the effect of non-negative Chinese affective pictures on the pain of patients with chronic pain. Methods A total of 77 hospitalized patients with chronic pain were divided into an intervention group and a control group according to admission number. The intervention group received routine pain nursing plus non-negative pictures from the Chinese Affective Picture System (CAPS), while the control group received routine pain nursing only. The numerical rating scale (NRS) was used for pain assessment in both groups. Results After six intervention sessions, the NRS score of the intervention group was lower than that of the control group, and the difference was statistically significant (P<0.05). Conclusion Non-negative Chinese affective pictures can reduce the pain of patients with chronic pain.
Sequential Analysis in High Dimensional Multiple Testing and Sparse Recovery
Malloy, Matthew; Nowak, Robert
2011-01-01
This paper studies the problem of high-dimensional multiple testing and sparse recovery from the perspective of sequential analysis. In this setting, the probability of error is a function of the dimension of the problem. A simple sequential testing procedure is proposed. We derive necessary conditions for reliable recovery in the non-sequential setting and contrast them with sufficient conditions for reliable recovery using the proposed sequential testing procedure. Applications of the main ...
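A classical building block for such sequential tests is Wald's sequential probability ratio test (SPRT); the sketch below is the textbook Gaussian SPRT, not the specific multiple-testing procedure of the paper:

```python
import math

def sprt(samples, mu0, mu1, sigma, alpha=0.05, beta=0.05):
    """Wald's SPRT for H0: N(mu0, sigma^2) vs H1: N(mu1, sigma^2)."""
    lower = math.log(beta / (1.0 - alpha))    # accept H0 at or below this
    upper = math.log((1.0 - beta) / alpha)    # accept H1 at or above this
    llr, n = 0.0, 0
    for x in samples:
        n += 1
        # log-likelihood-ratio increment for a Gaussian observation
        llr += (mu1 - mu0) * (x - 0.5 * (mu0 + mu1)) / sigma ** 2
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "undecided", n

# Deterministic demo: observations exactly at the H1 mean stop quickly.
print(sprt([1.0] * 20, mu0=0.0, mu1=1.0, sigma=1.0))  # → ('H1', 6)
```

The appeal in high dimensions is precisely this early stopping: components with strong signal terminate after a handful of samples, so the measurement budget concentrates on ambiguous coordinates.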
Sequential motif profile of natural visibility graphs
Iacovacci, Jacopo
2016-01-01
The concept of sequential visibility graph motifs (subgraphs appearing with characteristic frequencies in the visibility graphs associated with time series) has been advanced recently, along with a theoretical framework to compute analytically the motif profiles associated with Horizontal Visibility Graphs (HVGs). Here we develop a theory to compute the profile of sequential visibility graph motifs in the context of Natural Visibility Graphs (VGs). This theory gives exact results for deterministic aperiodic processes with a smooth invariant density or stochastic processes that fulfil the Markov property and have a continuous marginal distribution. The framework also allows for linear-time numerical estimation in the case of empirical time series. A comparison between the HVG and the VG case (including evaluation of their robustness for short series polluted with measurement noise) is also presented.
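The underlying natural visibility graph construction can be written directly from its definition; this is the standard VG criterion, independent of the motif theory developed in the paper:

```python
def visibility_edges(series):
    """Edges of the natural visibility graph of a time series.

    Points (i, y_i) and (j, y_j) are connected iff every intermediate point
    lies strictly below the straight line joining them."""
    n = len(series)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            line = lambda k: series[i] + (series[j] - series[i]) * (k - i) / (j - i)
            if all(series[k] < line(k) for k in range(i + 1, j)):
                edges.append((i, j))
    return edges

# The peak at index 1 blocks the first point's view of everything beyond it.
print(visibility_edges([1, 4, 2, 3]))  # → [(0, 1), (1, 2), (1, 3), (2, 3)]
```

Sequential motifs of size n are then the subgraphs induced by each window of n consecutive nodes, which is what makes their profile computable in linear time for empirical series.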
SMCTC: Sequential Monte Carlo in C++
Directory of Open Access Journals (Sweden)
Adam M. Johansen
2009-04-01
Sequential Monte Carlo methods are a very general class of Monte Carlo methods for sampling from sequences of distributions. Simple examples of these algorithms are used very widely in the tracking and signal processing literature. Recent developments illustrate that these techniques have much more general applicability, and can be applied very effectively to statistical inference problems. Unfortunately, these methods are often perceived as being computationally expensive and difficult to implement. This article seeks to address both of these problems. A C++ template class library for the efficient and convenient implementation of very general Sequential Monte Carlo algorithms is presented. Two example applications are provided: a simple particle filter for illustrative purposes and a state-of-the-art algorithm for rare event estimation.
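SMCTC itself is a C++ template library; as a compact illustration of the bootstrap particle filter it targets, here is a minimal Python analogue for an assumed random-walk state-space model (the model and noise scales are illustrative choices):

```python
import math
import random

def particle_filter(observations, n_particles=500, seed=0):
    """Bootstrap particle filter for x_t = x_{t-1} + N(0,1), y_t = x_t + N(0,1)."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    means = []
    for y in observations:
        # propagate each particle through the transition model
        particles = [x + rng.gauss(0.0, 1.0) for x in particles]
        # weight by the observation likelihood
        weights = [math.exp(-0.5 * (y - x) ** 2) for x in particles]
        # multinomial resampling proportional to the weights
        particles = rng.choices(particles, weights=weights, k=n_particles)
        means.append(sum(particles) / n_particles)
    return means

obs = [0.5, 1.0, 1.8, 2.2]
print([round(m, 2) for m in particle_filter(obs)])
```

The propagate/weight/resample loop is exactly the structure such a template library factors out: users supply the transition and likelihood, and the framework handles weighting and resampling generically.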
Quantitative perceived depth from sequential monocular decamouflage.
Brooks, K R; Gillam, B J
2006-03-01
We present a novel binocular stimulus without conventional disparity cues whose presence and depth are revealed by sequential monocular stimulation (delay ≥ 80 ms). Vertical white lines were occluded as they passed behind an otherwise camouflaged black rectangular target. The location (and instant) of the occlusion event, decamouflaging the target's edges, differed in the two eyes. Probe settings to match the depth of the black rectangular target showed a monotonic increase with simulated depth. Control tests discounted the possibility of subjects integrating retinal disparities over an extended temporal window or using temporal disparity. Sequential monocular decamouflage was found to be as precise and accurate as conventional simultaneous stereopsis with equivalent depths and exposure durations.
Sequential shrink photolithography for plastic microlens arrays.
Dyer, David; Shreim, Samir; Jayadev, Shreshta; Lew, Valerie; Botvinick, Elliot; Khine, Michelle
2011-07-18
Endeavoring to push the boundaries of microfabrication with shrinkable polymers, we have developed a sequential shrink photolithography process. We demonstrate the utility of this approach by rapidly fabricating plastic microlens arrays. First, we create a mask out of the children's toy Shrinky Dinks by simply printing dots using a standard desktop printer. Upon retraction of this pre-stressed thermoplastic sheet, the dots shrink to a fraction of their original size, which we then lithographically transfer onto photoresist-coated commodity shrink wrap film. This shrink film reduces in area by 95% when briefly heated, creating smooth convex photoresist bumps down to 30 µm. Taken together, this sequential shrink process provides a complete process to create microlenses, with an almost 99% reduction in area from the original pattern size. Finally, with a lithography molding step, we emboss these bumps into optical grade plastics such as cyclic olefin copolymer for functional microlens arrays.
Sequential pivotal mechanisms for public project problems
Apt, Krzysztof R
2008-01-01
It is well-known that for several natural decision problems no budget-balanced Groves mechanisms exist. This motivated recent research on designing variants of feasible Groves mechanisms (termed `redistribution of VCG (Vickrey-Clarke-Groves) payments') that generate a reduced deficit. With this in mind, we study sequential Groves mechanisms and consider optimal strategies that can lower the taxes that the players would have to pay under the simultaneous mechanism. We show that in the sequential pivotal mechanism such strategies do exist for several variants of public project problems. These strategies differ from truth-telling. In particular, we exhibit an optimal strategy with the property that when each player follows it, a maximal social welfare is generated.
A minimax procedure in the context of sequential mastery testing
Vos, Hendrik J.
1999-01-01
The purpose of this paper is to derive optimal rules for sequential mastery tests. In a sequential mastery test, the decision is to classify a subject as a master or a nonmaster, or to continue sampling and administering another random test item. The rules are derived within the framework of minimax sequential decision theory.
Lung Volume Measured during Sequential Swallowing in Healthy Young Adults
Hegland, Karen Wheeler; Huber, Jessica E.; Pitts, Teresa; Davenport, Paul W.; Sapienza, Christine M.
2011-01-01
Purpose: Outcomes from studying the coordinative relationship between respiratory and swallow subsystems are inconsistent for sequential swallows, and the lung volume at the initiation of sequential swallowing remains undefined. The first goal of this study was to quantify the lung volume at initiation of sequential swallowing ingestion cycles and…
A NEW INEXACT SEQUENTIAL QUADRATIC PROGRAMMING ALGORITHM
Institute of Scientific and Technical Information of China (English)
倪勤
2002-01-01
This paper presents an inexact sequential quadratic programming (SQP) algorithm for solving nonlinear programming (NLP) problems. An inexact solution of the quadratic programming subproblem is determined by a projection and contraction method such that only matrix-vector products are required. Truncation criteria are chosen such that the algorithm is suitable for large-scale NLP problems. The global convergence of the algorithm is proved.
Sequential decision analysis for nonstationary stochastic processes
Schaefer, B.
1974-01-01
A formulation of the problem of making decisions concerning the state of nonstationary stochastic processes is given. An optimal decision rule, for the case in which the stochastic process is independent of the decisions made, is derived. It is shown that this rule is a generalization of the Bayesian likelihood ratio test; and an analog to Wald's sequential likelihood ratio test is given, in which the optimal thresholds may vary with time.
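Wald's classical sequential likelihood ratio test, which the abstract says is generalized here, can be sketched for two Gaussian mean hypotheses. This is a hedged illustration of Wald's standard construction (function name and thresholds are standard, not the paper's generalized rule):

```python
import math

def sprt(samples, mu0, mu1, sigma, alpha=0.05, beta=0.05):
    """Wald's sequential probability ratio test for H0: mean = mu0
    versus H1: mean = mu1, with known sigma. Returns the decision
    ('H0', 'H1', or 'continue') and the number of samples used."""
    upper = math.log((1 - beta) / alpha)   # accept H1 above this
    lower = math.log(beta / (1 - alpha))   # accept H0 below this
    llr = 0.0
    for n, x in enumerate(samples, 1):
        # Log-likelihood ratio increment for one Gaussian observation.
        llr += ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        if llr >= upper:
            return "H1", n
        if llr <= lower:
            return "H0", n
    return "continue", len(samples)

decision, n = sprt([1.2] * 20, mu0=0.0, mu1=1.0, sigma=1.0)  # → ("H1", 5)
```

With each observation contributing 0.7 to the log-likelihood ratio, the upper threshold log(19) ≈ 2.94 is crossed at the fifth sample; the generalization in the paper lets such thresholds vary with time.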
Compressive Sequential Learning for Action Similarity Labeling.
Qin, Jie; Liu, Li; Zhang, Zhaoxiang; Wang, Yunhong; Shao, Ling
2016-02-01
Human action recognition in videos has been extensively studied in recent years due to its wide range of applications. Instead of classifying video sequences into a number of action categories, in this paper, we focus on a particular problem of action similarity labeling (ASLAN), which aims at verifying whether a pair of videos contain the same type of action or not. To address this challenge, a novel approach called compressive sequential learning (CSL) is proposed by leveraging the compressive sensing theory and sequential learning. We first project data points to a low-dimensional space by effectively exploring an important property in compressive sensing: the restricted isometry property. In particular, a very sparse measurement matrix is adopted to reduce the dimensionality efficiently. We then learn an ensemble classifier for measuring similarities between pairwise videos by iteratively minimizing its empirical risk with the AdaBoost strategy on the training set. Unlike conventional AdaBoost, the weak learner for each iteration is not explicitly defined and its parameters are learned through greedy optimization. Furthermore, an alternative of CSL named compressive sequential encoding is developed as an encoding technique and followed by a linear classifier to address the similarity-labeling problem. Our method has been systematically evaluated on four action data sets: ASLAN, KTH, HMDB51, and Hollywood2, and the results show the effectiveness and superiority of our method for ASLAN.
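The dimensionality-reduction step can be illustrated with a very sparse random projection in the spirit the abstract describes. This is an Achlioptas-style stand-in written for clarity; the paper's exact measurement matrix and density are assumptions here:

```python
import random

def sparse_projection(data, out_dim, density=1.0 / 3, seed=0):
    """Project rows of `data` to `out_dim` dimensions with a very sparse
    random matrix: each entry is 0 with probability 1 - density, else
    +/- 1/sqrt(density * out_dim). Achlioptas-style stand-in; not the
    paper's exact measurement matrix."""
    rng = random.Random(seed)
    in_dim = len(data[0])
    scale = 1.0 / (density * out_dim) ** 0.5
    proj = [[scale * rng.choice([-1.0, 1.0]) if rng.random() < density else 0.0
             for _ in range(out_dim)] for _ in range(in_dim)]
    return [[sum(row[i] * proj[i][j] for i in range(in_dim))
             for j in range(out_dim)] for row in data]

data = [[1.0] * 8 for _ in range(3)]
low = sparse_projection(data, out_dim=4)
```

Because most entries of the projection matrix are zero, the projection costs far fewer multiplications than a dense random matrix while still approximately preserving pairwise distances.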
Shteingart, Hanan; Loewenstein, Yonatan
2016-01-01
There is a long history of experiments in which participants are instructed to generate a long sequence of binary random numbers. The scope of this line of research has shifted over the years from identifying the basic psychological principles and/or the heuristics that lead to deviations from randomness, to one of predicting future choices. In this paper, we used generalized linear regression and the framework of Reinforcement Learning in order to address both points. In particular, we used logistic regression analysis in order to characterize the temporal sequence of participants’ choices. Surprisingly, a population analysis indicated that the contribution of the most recent trial has only a weak effect on behavior, compared to earlier trials, a result that seems irreconcilable with standard sequential effects that decay monotonically with the delay. However, when considering each participant separately, we found that the magnitudes of the sequential effect are a monotonically decreasing function of the delay, yet these individual sequential effects are largely averaged out in a population analysis because of heterogeneity. The substantial behavioral heterogeneity in this task is further demonstrated quantitatively by considering the predictive power of the model. We show that a heterogeneous model of sequential dependencies captures the structure available in random sequence generation. Finally, we show that the results of the logistic regression analysis can be interpreted in the framework of reinforcement learning, allowing us to compare the sequential effects in the random sequence generation task to those in an operant learning task. We show that in contrast to the random sequence generation task, sequential effects in operant learning are far more homogeneous across the population. These results suggest that in the random sequence generation task, different participants adopt different cognitive strategies to suppress sequential dependencies when
Strategic Path Planning by Sequential Parametric Bayesian Decisions
Directory of Open Access Journals (Sweden)
Baro Hyun
2013-11-01
Full Text Available The objective of this research is to generate a path for a mobile agent that carries sensors used for classification, where the path is to optimize strategic objectives that account for misclassification and the consequences of misclassification, and where the weights assigned to these consequences are chosen by a strategist. We propose a model that accounts for the interaction between the agent kinematics (i.e., the ability to move), informatics (i.e., the ability to process data into information), classification (i.e., the ability to classify objects based on the information), and strategy (i.e., the mission objective). Within this model, we pose and solve a sequential decision problem that accounts for strategist preferences; the solution to the problem yields a sequence of kinematic decisions of a moving agent. The solution of the sequential decision problem yields the following flying tactics: “approach only objects whose suspected identity matters to the strategy”. These tactics are numerically illustrated in several scenarios.
HASM-AD Algorithm Based on the Sequential Least Squares
Institute of Scientific and Technical Information of China (English)
WANG Shihai; YUE Tianxiang
2010-01-01
The HASM (high accuracy surface modeling) technique is based on the fundamental theory of surfaces, which has been proved to improve the interpolation accuracy in surface fitting. However, the integral iterative solution in previous studies resulted in high temporal complexity in computation and huge memory usage, so that it became difficult to put the technique into application, especially for large-scale datasets. In this study, an innovative model (HASM-AD) is developed according to sequential least squares on the basis of data adjustment theory. Sequential division is adopted in the technique, so that the linear equations can be divided into groups to be processed in sequence, greatly reducing the temporal complexity of the computation. The experiment indicates that the HASM-AD technique surpasses traditional spatial interpolation methods in accuracy. The cross-validation result proves the same conclusion for the spatial interpolation of soil pH with data sampled in Jiangxi province. Moreover, the study demonstrates that the HASM-AD technique significantly reduces the computational complexity and lessens memory usage.
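The core idea of sequential division, processing groups of linear equations in sequence rather than solving one huge system, can be sketched by accumulating the normal equations block by block. This is a hedged plain-Python illustration (dense, no pivoting); HASM-AD itself targets large sparse systems:

```python
def sequential_least_squares(blocks):
    """Solve min ||A x - b||^2 by accumulating the normal equations
    block by block, so no single large system is ever held in memory.
    Plain-Python sketch (no pivoting); HASM-AD itself targets large
    sparse systems."""
    AtA, Atb, n = None, None, 0
    for A, b in blocks:
        if AtA is None:
            n = len(A[0])
            AtA = [[0.0] * n for _ in range(n)]
            Atb = [0.0] * n
        for row, y in zip(A, b):
            for i in range(n):
                Atb[i] += row[i] * y
                for j in range(n):
                    AtA[i][j] += row[i] * row[j]
    # Gauss-Jordan elimination on the accumulated n x n system.
    for i in range(n):
        p = AtA[i][i]
        AtA[i] = [v / p for v in AtA[i]]
        Atb[i] /= p
        for k in range(n):
            if k != i:
                f = AtA[k][i]
                AtA[k] = [vk - f * vi for vk, vi in zip(AtA[k], AtA[i])]
                Atb[k] -= f * Atb[i]
    return Atb

blocks = [([[1.0, 0.0], [0.0, 1.0]], [1.0, 2.0]),
          ([[1.0, 1.0]], [3.0])]
x = sequential_least_squares(blocks)  # exact solution [1.0, 2.0]
```

Only the small n-by-n accumulated system ever resides in memory, which is the source of the memory savings the abstract reports.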
Mean-Variance-Validation Technique for Sequential Kriging Metamodels
Energy Technology Data Exchange (ETDEWEB)
Lee, Tae Hee; Kim, Ho Sung [Hanyang University, Seoul (Korea, Republic of)
2010-05-15
The rigorous validation of the accuracy of metamodels is an important topic in research on metamodel techniques. A leave-k-out cross-validation technique involves a considerably high computational cost, yet cannot be used to measure the fidelity of metamodels. Recently, the mean₀ validation technique has been proposed to quantitatively determine the accuracy of metamodels. However, the use of the mean₀ validation criterion may lead to premature termination of a sampling process even if the kriging model is inaccurate. In this study, we propose a new validation technique based on the mean and variance of the response evaluated when a sequential sampling method, such as maximum entropy sampling, is used. The proposed validation technique is more efficient and accurate than the leave-k-out cross-validation technique because, instead of performing numerical integration, the kriging model is explicitly integrated to accurately evaluate the mean and variance of the response. The error in the proposed validation technique resembles a root mean squared error, thus it can be used to determine a stopping criterion for the sequential sampling of metamodels.
Lemaire, Vincent; Lee, Chiu Fan; Lei, Jinzhi; Métivier, Raphaël; Glass, Leon
2006-05-01
In human cells, estrogenic signals induce cyclical association and dissociation of specific proteins with the DNA in order to activate transcription of estrogen-responsive genes. These oscillations can be modeled by assuming a large number of sequential reactions represented by linear kinetics with random kinetic rates. Application of the model to experimental data predicts robust binding sequences in which proteins associate with the DNA at several different phases of the oscillation. Our methods circumvent the need to derive detailed kinetic graphs, and are applicable to other oscillatory biological processes involving a large number of sequential steps.
Trans-dimensional Bayesian inference for large sequential data sets
Mandolesi, E.; Dettmer, J.; Dosso, S. E.; Holland, C. W.
2015-12-01
This work develops a sequential Monte Carlo method to infer seismic parameters of layered seabeds from large sequential reflection-coefficient data sets. The approach provides parameter estimates and uncertainties along survey tracks with the goal to aid in the detection of unexploded ordnance in shallow water. The sequential data are acquired by a moving platform with source and receiver array towed close to the seabed. This geometry requires consideration of spherical reflection coefficients, computed efficiently by massively parallel implementation of the Sommerfeld integral via Levin integration on a graphics processing unit. The seabed is parametrized with a trans-dimensional model to account for changes in the environment (i.e. changes in layering) along the track. The method combines advanced Markov chain Monte Carlo methods (annealing) with particle filtering (resampling). Since data from closely-spaced source transmissions (pings) often sample similar environments, the solution from one ping can be utilized to efficiently estimate the posterior for data from subsequent pings. Since reflection-coefficient data are highly informative, the likelihood function can be extremely peaked, resulting in little overlap between posteriors of adjacent pings. This is addressed by adding bridging distributions (via annealed importance sampling) between pings for more efficient transitions. The approach assumes the environment to be changing slowly enough to justify the local 1D parametrization. However, bridging allows rapid changes between pings to be addressed and we demonstrate the method to be stable in such situations. Results are in terms of trans-D parameter estimates and uncertainties along the track. The algorithm is examined for realistic simulated data along a track and applied to a dataset collected by an autonomous underwater vehicle on the Malta Plateau, Mediterranean Sea. [Work supported by the SERDP, DoD.]
Sequential Probability Ratio Test for Spacecraft Collision Avoidance Maneuver Decisions
Carpenter, J. Russell; Markley, F. Landis
2013-01-01
A document discusses sequential probability ratio tests that explicitly allow decision-makers to incorporate false alarm and missed detection risks, and are potentially less sensitive to modeling errors than a procedure that relies solely on a probability of collision threshold. Recent work on constrained Kalman filtering has suggested an approach to formulating such a test for collision avoidance maneuver decisions: a filter bank with two norm-inequality-constrained epoch-state extended Kalman filters. One filter models the null hypotheses that the miss distance is inside the combined hard body radius at the predicted time of closest approach, and one filter models the alternative hypothesis. The epoch-state filter developed for this method explicitly accounts for any process noise present in the system. The method appears to work well using a realistic example based on an upcoming, highly elliptical orbit formation flying mission.
Liu, Xu; Liu, Tiao-Tiao; Bai, Wen-Wen; Yi, Hu; Li, Shuang-Yan; Tian, Xin
2013-06-01
Working memory plays an important role in human cognition. This study investigated how working memory was encoded by the power of multi-channel local field potentials (LFPs) based on sparse nonnegative matrix factorization (SNMF). SNMF was used to extract features from LFPs recorded from the prefrontal cortex of four Sprague-Dawley rats during a memory task in a Y maze, with 10 trials for each rat. Then the power-increased LFP components were selected as working memory-related features and the other components were removed. After that, the inverse operation of SNMF was used to study the encoding of working memory in the time-frequency domain. We demonstrated that theta and gamma power increased significantly during the working memory task. The results suggested that postsynaptic activity was simulated well by the sparse activity model. The theta and gamma bands were meaningful for encoding working memory.
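The factorization underlying SNMF can be illustrated with the basic Lee-Seung multiplicative updates for non-negative matrix factorization. This is a minimal sketch; the paper's SNMF additionally imposes a sparseness constraint, which is omitted here:

```python
import random

def mat(A, B):
    """Plain matrix product of nested lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def nmf(V, rank, iters=200, seed=0):
    """Lee-Seung multiplicative updates for V ~= W H with W, H >= 0.
    Minimal sketch; the paper's SNMF adds a sparseness penalty."""
    rng = random.Random(seed)
    m, n = len(V), len(V[0])
    W = [[rng.random() + 0.1 for _ in range(rank)] for _ in range(m)]
    H = [[rng.random() + 0.1 for _ in range(n)] for _ in range(rank)]
    T = lambda A: [list(r) for r in zip(*A)]
    eps = 1e-9
    for _ in range(iters):
        WH = mat(W, H)
        num, den = mat(T(W), V), mat(T(W), WH)
        H = [[H[i][j] * num[i][j] / (den[i][j] + eps) for j in range(n)]
             for i in range(rank)]
        WH = mat(W, H)
        num, den = mat(V, T(H)), mat(WH, T(H))
        W = [[W[i][j] * num[i][j] / (den[i][j] + eps) for j in range(rank)]
             for i in range(m)]
    return W, H

V = [[1.0, 1.0, 2.0], [2.0, 2.0, 4.0]]   # exactly rank-1, non-negative
W, H = nmf(V, rank=1)
```

Because the updates only multiply by non-negative ratios, W and H stay non-negative throughout, which is what makes the learned components interpretable as additive parts (here, power contributions of LFP bands).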
Optimal Energy Management of Multi-Microgrids with Sequentially Coordinated Operations
Directory of Open Access Journals (Sweden)
Nah-Oak Song
2015-08-01
Full Text Available We propose an optimal electric energy management of a cooperative multi-microgrid community with sequentially coordinated operations. The sequentially coordinated operations are suggested to distribute the computational burden and yet make the optimal 24-hour energy management of multi-microgrids possible. The sequential operations are mathematically modeled to find the optimal operation conditions and illustrated with physical interpretation of how to achieve optimal energy management in the cooperative multi-microgrid community. This global electric energy optimization of the cooperative community is realized by the ancillary internal trading between the microgrids in the cooperative community, which reduces the extra cost from unnecessary external trading by adjusting the electric energy production amounts of combined heat and power (CHP) generators and the amounts of both internal and external electric energy trading of the cooperative community. A simulation study is also conducted to validate the proposed mathematical energy management models.
Sequential monitoring of response-adaptive randomized clinical trials
Zhu, Hongjian; 10.1214/10-AOS796
2010-01-01
Clinical trials are complex and usually involve multiple objectives such as controlling the type I error rate, increasing power to detect treatment differences, assigning more patients to the better treatment, and more. In the literature, both response-adaptive randomization (RAR) procedures (changing the randomization procedure sequentially) and sequential monitoring (changing the analysis procedure sequentially) have been proposed to achieve these objectives to some degree. In this paper, we propose to sequentially monitor response-adaptive randomized clinical trials and study their properties. We prove that the sequential test statistics of the new procedure converge to a Brownian motion in distribution. Further, we show that the sequential test statistics asymptotically satisfy the canonical joint distribution defined in Jennison and Turnbull (2000). Therefore, type I error and other objectives can be achieved theoretically by selecting appropriate boundaries. These results open a door to sequentially monitor res...
Sequential nonlinear tracking filter without requirement of measurement decorrelation
Institute of Scientific and Technical Information of China (English)
Taifan Quan
2015-01-01
Sequential measurement processing benefits both estimation accuracy and computational efficiency. When the noises are correlated across the measurement components, decorrelation based on covariance matrix factorization is required in previous methods in order to perform sequential updates properly. A new sequential processing method, which carries out the sequential updates directly using the correlated measurement components, is proposed. A typical sequential processing example is investigated, in which the converted position measurements are used to estimate target states by standard Kalman filtering equations, and the converted Doppler measurements are then incorporated into a minimum mean squared error (MMSE) estimator with the updated cross-covariance involved to account for the correlated errors. Numerical simulations demonstrate the superiority of the proposed sequential processing in terms of better accuracy and consistency than the conventional sequential filter based on measurement decorrelation.
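The baseline this method improves on, sequential scalar processing of measurement components, which is valid when the measurement noise covariance is diagonal, can be sketched as follows (a hedged illustration; the paper's contribution is handling correlated components without prior decorrelation):

```python
def sequential_update(x, P, z, H, R_diag):
    """Kalman measurement update processing one scalar component at a
    time, valid when the measurement noise covariance R is diagonal.
    Baseline sketch; the cited method handles correlated components
    without the usual decorrelation step."""
    x, P = list(x), [row[:] for row in P]
    n = len(x)
    for zi, h, r in zip(z, H, R_diag):
        # Innovation y and its scalar variance s = h P h' + r.
        y = zi - sum(h[j] * x[j] for j in range(n))
        Ph = [sum(P[i][j] * h[j] for j in range(n)) for i in range(n)]
        s = sum(h[i] * Ph[i] for i in range(n)) + r
        K = [Ph[i] / s for i in range(n)]                  # Kalman gain
        x = [x[i] + K[i] * y for i in range(n)]
        P = [[P[i][j] - K[i] * Ph[j] for j in range(n)] for i in range(n)]
    return x, P

x, P = sequential_update([0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]],
                         z=[1.0, 2.0], H=[[1.0, 0.0], [0.0, 1.0]],
                         R_diag=[1.0, 1.0])
```

Each scalar update avoids a matrix inversion (s is a scalar), which is the computational advantage of sequential processing; when R is not diagonal, previous methods first factorize R to decorrelate the components.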
Transition from non-sequential to sequential double ionisation in many-electron systems
Pullen, Michael G; Wang, Xu; Tong, Xiao-Min; Sclafani, Michele; Baudisch, Matthias; Pires, Hugo; Schröter, Claus Dieter; Ullrich, Joachim; Pfeifer, Thomas; Moshammer, Robert; Eberly, J H; Biegert, Jens
2016-01-01
Obtaining a detailed understanding of strong-field double ionisation of many-electron systems (heavy atoms and molecules) remains a challenging task. By comparing experimental and theoretical results in the mid-IR regime, we have unambiguously identified the transition from non-sequential (e,2e) to sequential double ionisation in Xe and shown that it occurs at an intensity below $10^{14}$ Wcm$^{-2}$. In addition, our data demonstrate that ionisation from the Xe 5s orbital is decisive at low intensities. Moreover, using the acetylene molecule, we propose how sequential double ionisation in the mid-IR can be used to study molecular dynamics and fragmentation on unprecedented few-femtosecond timescales.
The statistical decay of very hot nuclei: from sequential decay to multifragmentation
Energy Technology Data Exchange (ETDEWEB)
Carlson, B.V. [Centro Tecnico Aeroespacial (ITA/CTA), Sao Jose dos Campos, SP (Brazil). Inst. Tecnologico de Aeronautica; Donangelo, R. [Universidade Federal do Rio de Janeiro (IF/UFRJ), RJ (Brazil). Inst. de Fisica; Universidad de la Republica, Montevideo (Uruguay). Facultad de Ingenieria. Inst. de Fisica; Souza, S.R. [Universidade Federal do Rio de Janeiro (IF/UFRJ), RJ (Brazil). Inst. de Fisica; Universidade Federal do Rio Grande do Sul (IF/UFRGS), Porto Alegre, RS (Brazil). Inst. de Fisica; Lynch, W.G.; Steiner, A.W.; Tsang, M.B. [Michigan State University (NSCL/MSU), East Lansing, MI (United States). National Superconducting Cyclotron Lab.
2010-07-01
Full text. At low excitation energies, the compound nucleus typically decays through the sequential emission of light particles. As the energy increases, the emission probability of heavier fragments increases until, at sufficiently high energies, several heavy complex fragments are emitted during the decay. The extent to which this fragment emission is simultaneous or sequential has been a subject of theoretical and experimental study for almost 30 years. The Statistical Multifragmentation Model, an equilibrium model of simultaneous fragment emission, uses the configurations of a statistical ensemble to determine the distribution of primary fragments of a compound nucleus. The primary fragments are then assumed to decay by sequential compound emission or Fermi breakup. As the first step toward a more unified model of these processes, we demonstrate the equivalence of a generalized Fermi breakup model, in which densities of excited states are taken into account, to the microcanonical version of the statistical multifragmentation model. We then establish a link between this unified Fermi breakup / statistical multifragmentation model and the well-known process of compound nucleus emission, which permits consideration of simultaneous and sequential emission on the same footing. Within this unified framework, we analyze the increasing importance of simultaneous, multifragment decay with increasing excitation energy and decreasing lifetime of the compound nucleus. (author)
Poage, J. L.
1975-01-01
A sequential nonparametric pattern classification procedure is presented. The method presented is an estimated version of the Wald sequential probability ratio test (SPRT). This method utilizes density function estimates, and the density estimate used is discussed, including a proof of convergence in probability of the estimate to the true density function. The classification procedure proposed makes use of the theory of order statistics, and estimates of the probabilities of misclassification are given. The procedure was tested on discriminating between two classes of Gaussian samples and on discriminating between two kinds of electroencephalogram (EEG) responses.
Isolation of Polyvalent Bacteriophages by Sequential Multiple-Host Approaches.
Yu, Pingfeng; Mathieu, Jacques; Li, Mengyan; Dai, Zhaoyi; Alvarez, Pedro J J
2015-11-20
Many studies on phage biology are based on isolation methods that may inadvertently select for narrow-host-range phages. Consequently, broad-host-range phages, whose ecological significance is largely unexplored, are consistently overlooked. To enhance research on such polyvalent phages, we developed two sequential multihost isolation methods and tested both culture-dependent and culture-independent phage libraries for broad infectivity. Lytic phages isolated from activated sludge were capable of interspecies or even interorder infectivity without a significant reduction in the efficiency of plating (0.45 to 1.15). Two polyvalent phages (PX1 of the Podoviridae family and PEf1 of the Siphoviridae family) were characterized in terms of adsorption rate (3.54 × 10(-10) to 8.53 × 10(-10) ml/min), latent time (40 to 55 min), and burst size (45 to 99 PFU/cell), using different hosts. These phages were enriched with a nonpathogenic host (Pseudomonas putida F1 or Escherichia coli K-12) and subsequently used to infect model problematic bacteria. By using a multiplicity of infection of 10 in bacterial challenge tests, >60% lethality was observed for Pseudomonas aeruginosa relative to uninfected controls. The corresponding lethality for Pseudomonas syringae was ∼ 50%. Overall, this work suggests that polyvalent phages may be readily isolated from the environment by using different sequential hosts, and this approach should facilitate the study of their ecological significance as well as enable novel applications.
Sequential release of nanoparticle payloads from ultrasonically burstable capsules.
Kennedy, Stephen; Hu, Jennifer; Kearney, Cathal; Skaat, Hadas; Gu, Luo; Gentili, Marco; Vandenburgh, Herman; Mooney, David
2016-01-01
In many biomedical contexts ranging from chemotherapy to tissue engineering, it is beneficial to sequentially present bioactive payloads. Explicit control over the timing and dose of these presentations is highly desirable. Here, we present a capsule-based delivery system capable of rapidly releasing multiple payloads in response to ultrasonic signals. In vitro, these alginate capsules exhibited excellent payload retention for up to 1 week when unstimulated and delivered their entire payloads when ultrasonically stimulated for 10-100 s. Shorter exposures (10 s) were required to trigger delivery from capsules embedded in hydrogels placed in a tissue model and did not result in tissue heating or death of encapsulated cells. Different types of capsules were tuned to rupture in response to different ultrasonic stimuli, thus permitting the sequential, on-demand delivery of nanoparticle payloads. As a proof of concept, gold nanoparticles were decorated with bone morphogenetic protein-2 to demonstrate the potential bioactivity of nanoparticle payloads. These nanoparticles were not cytotoxic and induced an osteogenic response in mouse mesenchymal stem cells. This system may enable researchers and physicians to remotely regulate the timing, dose, and sequence of drug delivery on-demand, with a wide range of clinical applications ranging from tissue engineering to cancer treatment. Copyright © 2015 Elsevier Ltd. All rights reserved.
Online Sequential Projection Vector Machine with Adaptive Data Mean Update
Directory of Open Access Journals (Sweden)
Lin Chen
2016-01-01
Full Text Available We propose a simple online learning algorithm especially for high-dimensional data. The algorithm is referred to as online sequential projection vector machine (OSPVM), which derives from the projection vector machine and can learn from data in one-by-one or chunk-by-chunk mode. In OSPVM, data centering, dimension reduction, and neural network training are integrated seamlessly. In particular, the model parameters including (1) the projection vectors for dimension reduction, (2) the input weights, biases, and output weights, and (3) the number of hidden nodes can be updated simultaneously. Moreover, only one parameter, the number of hidden nodes, needs to be determined manually, and this makes it easy to use in real applications. Performance comparison was made on various high-dimensional classification problems for OSPVM against other fast online algorithms including the budgeted stochastic gradient descent (BSGD) approach, adaptive multihyperplane machine (AMM), primal estimated subgradient solver (Pegasos), online sequential extreme learning machine (OSELM), and SVD + OSELM (feature selection based on SVD is performed before OSELM). The results obtained demonstrated the superior generalization performance and efficiency of the OSPVM.
Sequential Combination Methods for Data Clustering Analysis
Institute of Scientific and Technical Information of China (English)
钱 涛; Ching Y.Suen; 唐远炎
2002-01-01
This paper proposes the use of more than one clustering method to improve clustering performance. Clustering is an optimization procedure based on a specific clustering criterion. Clustering combination can be regarded as a technique that constructs and processes multiple clustering criteria. Since the global and local clustering criteria are complementary rather than competitive, combining these two types of clustering criteria may enhance the clustering performance. In our past work, a multi-objective programming based simultaneous clustering combination algorithm has been proposed, which incorporates multiple criteria into an objective function by a weighting method, and solves this problem with constrained nonlinear optimization programming. But this algorithm has high computational complexity. Here a sequential combination approach is investigated, which first uses the global criterion based clustering to produce an initial result, then uses the local criterion based information to improve the initial result with a probabilistic relaxation algorithm or linear additive model. Compared with the simultaneous combination method, sequential combination has low computational complexity. Results on some simulated data and standard test data are reported. It appears that clustering performance improvement can be achieved at low cost through sequential combination.
Sequential release of nanoparticle payloads from ultrasonically burstable capsules
Kennedy, Stephen; Hu, Jennifer; Kearney, Cathal; Skaat, Hadas; Gu, Luo; Gentili, Marco; Vandenburgh, Herman; Mooney, David
2015-01-01
In many biomedical contexts ranging from chemotherapy to tissue engineering, it is beneficial to sequentially present bioactive payloads. Explicit control over the timing and dose of these presentations is highly desirable. Here, we present a capsule-based delivery system capable of rapidly releasing multiple payloads in response to ultrasonic signals. In vitro, these alginate capsules exhibited excellent payload retention for up to 1 week when unstimulated and delivered their entire payloads when ultrasonically stimulated for 10 to 100 s. Shorter exposures (10 s) were required to trigger delivery from capsules embedded in hydrogels placed in a tissue model and did not result in tissue heating or death of encapsulated cells. Different types of capsules were tuned to rupture in response to different ultrasonic stimuli, thus permitting the sequential, on-demand delivery of nanoparticle payloads. As a proof of concept, gold nanoparticles were decorated with bone morphogenetic protein-2 to demonstrate the potential bioactivity of nanoparticle payloads. These nanoparticles were not cytotoxic and induced an osteogenic response in mouse mesenchymal stem cells. This system may enable researchers and physicians to remotely regulate the timing, dose, and sequence of drug delivery on-demand, with a wide range of clinical applications ranging from tissue engineering to cancer treatment. PMID:26496382
Breaking from binaries - using a sequential mixed methods design.
Larkin, Patricia Mary; Begley, Cecily Marion; Devane, Declan
2014-03-01
To outline the traditional worldviews of healthcare research and discuss the benefits and challenges of using mixed methods approaches in contributing to the development of nursing and midwifery knowledge. There has been much debate about the contribution of mixed methods research to nursing and midwifery knowledge in recent years. A sequential exploratory design is used as an exemplar of a mixed methods approach. The study discussed used a combination of focus-group interviews and a quantitative instrument to obtain a fuller understanding of women's experiences of childbirth. In the mixed methods study example, qualitative data were analysed using thematic analysis and quantitative data using regression analysis. Polarised debates about the veracity, philosophical integrity and motivation for conducting mixed methods research have largely abated. A mixed methods approach can contribute to a deeper, more contextual understanding of a variety of subjects and experiences; as a result, it furthers knowledge that can be used in clinical practice. The purpose of the research study should be the main instigator when choosing from an array of mixed methods research designs. Mixed methods research offers a variety of models that can augment investigative capabilities and provide richer data than can a discrete method alone. This paper offers an example of an exploratory, sequential approach to investigating women's childbirth experiences. A clear framework for the conduct and integration of the different phases of the mixed methods research process is provided. This approach can be used by practitioners and policy makers to improve practice.
The Methodology of Testability Prediction for Sequential Circuits
Institute of Scientific and Technical Information of China (English)
徐拾义; 陈斯
1996-01-01
Increasingly, test generation algorithms are being developed alongside the continuous creation of ever more sophisticated computing systems. Of all the developments in testable and reliable designs for computing systems, test generation for sequential circuits is usually viewed as one of the hardest problems because of its complexity and time consumption. Although dozens of algorithms have been proposed to cope with this issue, much remains to be desired in determining: 1) which of the existing test generation algorithms is the most efficient for a particular circuit (by efficiency, we mean the fault coverage the algorithm offers, CPU time when executing, the number of test patterns to be applied, etc.), since different algorithms are preferable for different circuits; 2) which parameters (such as the number of gates, flip-flops and loops, etc., in the circuit) have the most or least influence on test generation, so that circuit designers can have a global understanding during the design-for-testability stage. A testability prediction methodology for sequential circuits using regression models is presented, which a user needs for analyzing his own circuits and selecting the most suitable test generation algorithm from all those available. Some examples and experimental results are also provided to show how helpful and practical the method is.
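The regression idea above can be sketched directly: fit circuit parameters against an outcome such as fault coverage, then predict the outcome for an unseen circuit before running any ATPG. All numbers and feature choices below are illustrative assumptions, not data from the paper or any benchmark suite.

```python
import numpy as np

# Hypothetical training data: per-circuit features and the fault
# coverage (%) a given test generation algorithm achieved on each.
features = np.array([  # [gates, flip-flops, feedback loops]
    [100, 10, 2], [500, 40, 8], [1200, 90, 20], [300, 25, 5], [800, 60, 12],
], dtype=float)
coverage = np.array([98.0, 92.5, 81.0, 95.5, 87.0])

# Fit a linear regression model: coverage ~ b0 + b . features.
A = np.hstack([np.ones((len(features), 1)), features])
beta, *_ = np.linalg.lstsq(A, coverage, rcond=None)

# Predict the testability of an unseen circuit before test generation.
new_circuit = np.array([1.0, 600, 50, 10])
predicted = new_circuit @ beta
```

In practice one such model would be fitted per candidate algorithm, and the algorithm with the best predicted efficiency for the circuit at hand would be selected.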
Sequential Multiple Response Optimization for Manufacturing Flexible Printed Circuit
Directory of Open Access Journals (Sweden)
Pichpimon Kanchanasuttisang
2012-01-01
Full Text Available Problem statement: Flexible Printed Circuit (FPC), an automotive electronic part, has been developed for lighting automotive vehicles by assembly with LEDs. The quality performances or responses of lighting vehicles rely on the circuit width of an FPC and the etched rate of the acid solution. Under the current operating condition of an FPC company, the capability of the manufacturing process falls short of the company requirement: the standard deviation of FPC circuit widths is high and the mean is also worse than specifications. Approach: This process improvement comprised four sequential steps based on designed experiments, steepest descent and interchangeable linear constrained response surface optimization (IC-LCRSOM). The investigation aims to determine the preferable levels of significant process variables affecting multiple responses. Results: The new settings from the IC-LCRSOM improved all performance measures in terms of both the mean and the standard deviation on all process patterns. Conclusion: In this sequential optimization the developed mathematical model was tested for adequacy using analysis of variance and other adequacy measures. In the actual investigation, the new operating conditions lead to higher levels of etched rate and process capability, and lower levels of the standard deviation of circuit widths and etched rate, compared with the previous settings.
Robust inference for group sequential trials.
Ganju, Jitendra; Lin, Yunzhi; Zhou, Kefei
2017-03-01
For ethical reasons, group sequential trials were introduced to allow trials to stop early in the event of extreme results. Endpoints in such trials are usually mortality or irreversible morbidity. For a given endpoint, the norm is to use a single test statistic and to use that same statistic for each analysis. This approach is risky because the test statistic has to be specified before the study is unblinded, and there is loss in power if the assumptions that ensure optimality for each analysis are not met. To minimize the risk of moderate to substantial loss in power due to a suboptimal choice of a statistic, a robust method was developed for nonsequential trials. The concept is analogous to diversification of financial investments to minimize risk. The method is based on combining P values from multiple test statistics for formal inference while controlling the type I error rate at its designated value. This article evaluates the performance of two P value combining methods for group sequential trials. The emphasis is on time-to-event trials, although results from less complex trials are also included. The gain or loss in power with the combination method relative to a single statistic is asymmetric in its favor. Depending on the power of each individual test, the combination method can give more power than any single test or give power that is closer to the test with the most power. The versatility of the method is that it can combine P values from different test statistics for analysis at different times. The robustness of results suggests that inference from group sequential trials can be strengthened with the use of combined tests. Copyright © 2017 John Wiley & Sons, Ltd.
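The abstract does not specify which combiners are evaluated, but the flavor of P value combination can be illustrated with Fisher's classical method. This is a hypothetical sketch: it assumes independent tests, which the correlated statistics of a group sequential trial generally are not.

```python
import math

def fisher_combine(p_values):
    """Fisher's method: T = -2 * sum(log p_i) is chi-square with
    2k degrees of freedom under H0; returns the combined P value."""
    k = len(p_values)
    t = -2.0 * sum(math.log(p) for p in p_values)
    # The chi-square survival function has a closed form for even
    # df = 2k:  P(X > t) = exp(-t/2) * sum_{i<k} (t/2)^i / i!
    half = t / 2.0
    return math.exp(-half) * sum(half ** i / math.factorial(i)
                                 for i in range(k))

# Combining two moderately small P values, e.g. from two different
# test statistics applied to the same endpoint (illustrative numbers):
combined = fisher_combine([0.04, 0.03])
```

Two individually borderline results reinforce each other here (the combined P value is well below either input), which is the diversification effect the abstract alludes to.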
Reference priors of nuisance parameters in Bayesian sequential population analysis
Bousquet, Nicolas
2010-01-01
Prior distributions elicited for modelling the natural fluctuations or the uncertainty in parameters of Bayesian fishery population models can be chosen from a vast range of statistical laws. Since the statistical framework is defined by observational processes, observational parameters enter into the estimation and must be considered random, similarly to parameters or states of interest such as population levels or real catches. The former are thus perceived as nuisance parameters whose values are intrinsically linked to the considered experiment, and they also require noninformative priors. In fishery research the Jeffreys methodology has been presented by Millar (2002) as a practical way to elicit such priors. However, Jeffreys priors can have undesirable properties in multiparameter contexts. We therefore suggest using the elicitation method proposed by Berger and Bernardo to avoid the paradoxical results raised by Jeffreys priors. These benchmark priors are derived here in the framework of sequential population analysis.
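For context, the Jeffreys rule referred to above takes the prior proportional to the square root of the determinant of the Fisher information; this is the standard general formula, not a result specific to the fishery models of this paper:

```latex
\pi_J(\theta) \propto \sqrt{\det I(\theta)}, \qquad
I(\theta)_{ij} = -\,\mathbb{E}_{x\mid\theta}\!\left[
  \frac{\partial^2 \log p(x\mid\theta)}{\partial\theta_i\,\partial\theta_j}\right].
```

The Berger-Bernardo construction instead orders the parameters, say \(\theta = (\psi, \lambda)\) with \(\psi\) of interest and \(\lambda\) a nuisance, and derives the prior group by group, which avoids known inconsistencies of applying the joint Jeffreys rule in multiparameter problems.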
Transaction costs and sequential bargaining in transferable discharge permit markets.
Netusil, N R; Braden, J B
2001-03-01
Market-type mechanisms have been introduced and are being explored for various environmental programs. Several existing programs, however, have not attained the cost savings that were initially projected. Modeling that acknowledges the role of transaction costs and the discrete, bilateral, and sequential manner in which trades are executed should provide a more realistic basis for calculating potential cost savings. This paper presents empirical evidence on potential cost savings by examining a market for the abatement of sediment from farmland. Empirical results based on a market simulation model find no statistically significant change in mean abatement costs under several transaction cost levels when contracts are randomly executed. An alternative method of contract execution, gain-ranked, yields similar results. At the highest transaction cost level studied, trading reduces the total cost of compliance relative to a uniform standard that reflects current regulations.
Sequential data access with Oracle and Hadoop: a performance comparison
Baranowski, Zbigniew; Canali, Luca; Grancher, Eric
2014-06-01
The Hadoop framework has proven to be an effective and popular approach for dealing with "Big Data" and, thanks to its scaling ability and optimised storage access, Hadoop Distributed File System-based projects such as MapReduce or HBase are seen as candidates to replace traditional relational database management systems whenever scalable speed of data processing is a priority. But do these projects deliver in practice? Does migrating to Hadoop's "shared nothing" architecture really improve data access throughput? And, if so, at what cost? The authors answer these questions, addressing cost/performance as well as raw performance, based on a performance comparison between an Oracle-based relational database and Hadoop's distributed solutions such as MapReduce or HBase for sequential data access. A key feature of our approach is the use of an unbiased data model, as certain data models can significantly favour one of the technologies tested.
Nonlinear sequential laminates reproducing hollow sphere assemblages
Idiart, Martín I.
2007-07-01
A special class of nonlinear porous materials with isotropic 'sequentially laminated' microstructures is found to reproduce exactly the hydrostatic behavior of 'hollow sphere assemblages'. It is then argued that this result supports the conjecture that Gurson's approximate criterion for plastic porous materials, and its viscoplastic extension of Leblond et al. (1994), may actually yield rigorous upper bounds for the hydrostatic flow stress of porous materials containing an isotropic, but otherwise arbitrary, distribution of porosity. To cite this article: M.I. Idiart, C. R. Mecanique 335 (2007).
Automatic differentiation for reduced sequential quadratic programming
Institute of Scientific and Technical Information of China (English)
Liao Liangcai; Li Jin; Tan Yuejin
2007-01-01
In order to solve large-scale nonlinear programming (NLP) problems efficiently, an optimization algorithm based on reduced sequential quadratic programming (rSQP) and automatic differentiation (AD) is presented in this paper. Exploiting sparseness, relatively low degrees of freedom and equality constraints, the nonlinear programming problem is solved by an improved rSQP solver. In the solving process, AD technology is used to obtain accurate gradient information. The numerical results show that the combined algorithm, which is suitable for large-scale process optimization problems, calculates more efficiently than rSQP alone.
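The role AD plays here, exact gradients without hand-coded derivatives or finite differences, can be illustrated with a minimal forward-mode sketch. The toy dual-number class and objective below are hypothetical, not the AD tool or NLP model used in the paper.

```python
class Dual:
    """Minimal forward-mode AD: numbers carry (value, derivative)."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # Product rule, applied automatically at every operation.
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def grad(f, x):
    # One forward pass per coordinate: seed direction i with dot = 1.
    g = []
    for i in range(len(x)):
        args = [Dual(v, 1.0 if j == i else 0.0) for j, v in enumerate(x)]
        g.append(f(args).dot)
    return g

# An objective resembling one term of an NLP subproblem:
f = lambda x: x[0] * x[1] + x[1] * x[1]
g = grad(f, [3.0, 2.0])   # exact gradient [x1, x0 + 2*x1] = [2.0, 7.0]
```

The gradient is exact to machine precision, which is what lets the rSQP solver's QP subproblems converge reliably on large sparse problems.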
A Sequential Algorithm for Training Text Classifiers
Lewis, D D; Lewis, David D.; Gale, William A.
1994-01-01
The ability to cheaply train text classifiers is critical to their use in information retrieval, content analysis, natural language processing, and other tasks involving data which is partly or fully textual. An algorithm for sequential sampling during machine learning of statistical classifiers was developed and tested on a newswire text categorization task. This method, which we call uncertainty sampling, reduced by as much as 500-fold the amount of training data that would have to be manually classified to achieve a given level of effectiveness.
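A pool-based sketch of the uncertainty sampling loop follows, under assumed details: a simple logistic model and a synthetic two-class pool stand in for the paper's text classifier and newswire features.

```python
import numpy as np

def train_logistic(X, y, lr=0.5, steps=200):
    # Plain batch gradient descent on the logistic loss.
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w -= lr * X.T @ (p - y) / len(y)
    return w

def uncertainty_sampling(X_pool, y_oracle, n_init=4, n_queries=10, seed=0):
    rng = np.random.default_rng(seed)
    labeled = list(rng.choice(len(X_pool), n_init, replace=False))
    for _ in range(n_queries):
        w = train_logistic(X_pool[labeled], y_oracle[labeled])
        p = 1.0 / (1.0 + np.exp(-X_pool @ w))
        # Query the unlabeled item whose prediction is closest to 0.5,
        # i.e. the one the current classifier is least certain about.
        candidates = [i for i in range(len(X_pool)) if i not in labeled]
        labeled.append(min(candidates, key=lambda i: abs(p[i] - 0.5)))
    return labeled

# Synthetic "relevant vs not" pool; only 14 of 200 items get labeled.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
X = np.hstack([X, np.ones((200, 1))])       # bias column
y = np.array([0] * 100 + [1] * 100)
queried = uncertainty_sampling(X, y)
```

The savings reported in the abstract come from exactly this effect: labels are requested only where the classifier is uncertain, so most of the pool never needs manual classification.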
THE DEVELOPMENT OF SPECIAL SEQUENTIALLY-TIMED
Directory of Open Access Journals (Sweden)
Stanislav LICHOROBIEC
2016-06-01
This article documents the development of the noninvasive use of explosives during the destruction of ice mass in river flows. The system of special sequentially-timed charges utilizes the increase in efficiency of cutting charges by covering them with bags filled with water, while simultaneously increasing the effect of the entire system of timed charges. Timing, spatial combinations during placement, and the linking of these charges results in the loosening of ice barriers on a frozen waterway, while at the same time regulating the size of the ice fragments. The developed charges will increase the operability and safety of IRS units.
Sequential cooling insert for turbine stator vane
Jones, Russell B; Krueger, Judson J; Plank, William L
2014-04-01
A sequential impingement cooling insert for a turbine stator vane that forms a double impingement for the pressure and suction sides of the vane or a triple impingement. The insert is formed from a sheet metal formed in a zigzag shape that forms a series of alternating impingement cooling channels with return air channels, where pressure side and suction side impingement cooling plates are secured over the zigzag shaped main piece. Another embodiment includes the insert formed from one or two blocks of material in which the impingement channels and return air channels are machined into each block.
Institute of Scientific and Technical Information of China (English)
李永华; 唐先超
2014-01-01
With the rapid development of domestic civil aviation in recent years, the construction and expansion of airports and the continued growth of air traffic have made airport noise pollution an increasingly serious problem, so strengthening noise monitoring in the airport vicinity is very important for airport construction and environmental evaluation. Aiming at the monitoring of airport noise pollution, an optimization method based on non-negative matrix factorization (NMF) is put forward to optimize the layout of noise monitoring sites. In this method, a large number of grid nodes are employed as candidate monitoring points, and the non-negative matrix composed of the noise values from single flight events at the candidate monitoring points is formed. The non-negative matrix is then factorized to obtain an effective subdivision of the noise-affected area into sub-regions. Furthermore, the location and number of the noise monitoring sites are determined by taking the central point of each sub-region as its representative. It is shown that this method needs fewer monitoring sites and gets better results than the greedy algorithm.
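The grid-points-by-flight-events factorization can be sketched with standard multiplicative-update NMF. Everything below is an assumed toy setup (two synthetic sub-regions, a first-index "representative" instead of a true geometric center), not the paper's data or exact procedure.

```python
import numpy as np

def nmf(V, r, iters=300, seed=0):
    """Multiplicative-update NMF: V (m x n) ~ W (m x r) @ H (r x n)."""
    rng = np.random.default_rng(seed)
    W = rng.random((V.shape[0], r)) + 0.1
    H = rng.random((r, V.shape[1])) + 0.1
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Rows = candidate grid points, columns = single-flight noise events.
# Two synthetic sub-regions with distinct exposure patterns:
rng = np.random.default_rng(3)
V = np.vstack([
    np.outer(rng.uniform(0.5, 1.0, 10), [5.0, 1.0, 0.5]),   # sub-region A
    np.outer(rng.uniform(0.5, 1.0, 10), [0.5, 1.0, 5.0]),   # sub-region B
])
W, H = nmf(V, r=2)
# Assign each grid point to the basis that contributes most to it,
# then keep one representative monitoring site per sub-region.
loadings = W * H.sum(axis=1)
regions = loadings.argmax(axis=1)
sites = [int(np.flatnonzero(regions == j)[0]) for j in range(2)]
```

The number of monitoring sites falls out of the chosen factorization rank r: one representative per recovered sub-region, rather than one per candidate grid point.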
Energy Technology Data Exchange (ETDEWEB)
Filip, Valeriu, E-mail: vfilip@gmail.com [Faculty of Physics, University of Bucharest, 405 Atomistilor Str., Magurele 077125, P.O. Box MG-11 (Romania); Institute of Microelectronics and Photonics, Zhejiang University, 38 Zheda Road, Hangzhou 310027 (China); Wong, Hei, E-mail: xiwang@zju.edu.cn [Institute of Microelectronics and Photonics, Zhejiang University, 38 Zheda Road, Hangzhou 310027 (China)
2016-06-01
A simple model of a layered hetero-structure was developed and used to simultaneously compute and compare resonant and sequential electron field emission currents. It was found that, while various slope changes appear in both current-field characteristics, for the sequential tunneling type of emission such features are merely interference effects. They occur in parts of the structure traversed before the electrons linger in the quasi-bound states from which field emission proceeds. These purely quantum effects further combine with the flow effects resulting from the steady current requirement and give corresponding field variations of the electron population of the quasi-bound states, which in turn react on the resonant part of the current. A spectral approach to the two types of field emission is also considered by computing the total energy distribution of electrons in each case. The differences between these possible spectra are pointed out and discussed. - Highlights: • The relationship between resonant and sequential field emission is studied. • Sequential current-voltage characteristics show barrier-controlled undulations. • Resonant characteristics depend mainly on the width/shape of the topmost well. • The resonant and sequential total energy distributions differ widely.
A sequential algorithm of inverse heat conduction problems using singular value decomposition
Energy Technology Data Exchange (ETDEWEB)
Gutierrez Cabeza, J.M. [Dep. of Applied Physics of Univ. of Cadiz, Escuela Politecnica Superior de Algeciras, Cadiz (Spain); Garcia, Juan Andres Martin [Department of Electrical Engineering of University of Cadiz, Escuela Politecnica Superior de Algeciras, Avda. Ramon Puyol, s/n, 11202 Algeciras (Cadiz) (Spain); Rodriguez, Alfonso Corz [Department of Industrial and Civil Engineering of University of Cadiz, Escuela Politecnica Superior de Algeciras, Avda. Ramon Puyol, s/n, 11202 Algeciras (Cadiz) (Spain)
2005-03-01
This paper examines numerically and theoretically the application of truncated Singular Value Decomposition (SVD) in a sequential form. The sequential SVD algorithm presents two tunable hyper-parameters: the number of future temperatures (r) and the rank of the truncated sensitivity matrix (p). The regularization effect of both hyper-parameters is consistent with the data-filtering interpretation of truncated SVD (reported by Shenefelt [Internat. J. Heat Mass Transfer 45 (2002) 67]). This study reveals that the most suitable reduced rank is one. Under this assumption (p=1), the proposed sequential procedure presents several advantages with respect to the standard whole-domain procedure: the search for the optimum rank value is not required, the simplification of the model is the maximum that can be achieved, the only tunable hyper-parameter is the number of future temperatures, and a very simple algorithm is obtained. This algorithm has been compared to the Function Specification Method (FSM) proposed by Beck and to the standard whole-domain SVD. In this comparative study, the parameters considered were the shape of the input, the noise level of the measurements and the size of the time step. In all cases, the FSM and the sequential SVD algorithm give very similar results. In one case, the results obtained by the sequential SVD algorithm are clearly superior to those obtained by the whole-domain algorithm. (authors)
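The sequential estimation loop can be sketched under strong simplifications: a made-up impulse-response model, the constant-future-input assumption familiar from Beck's FSM, and the paper's recommended rank p = 1. This is an illustration of the idea, not the authors' implementation.

```python
import numpy as np

def tsvd_solve(A, b, p=1):
    # Least-squares solve keeping only the p largest singular values.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:p].T @ ((U[:, :p].T @ b) / s[:p])

def sequential_tsvd(S, Y, r=3, p=1):
    """Estimate inputs q[k] one at a time from r future measurements.
    S is the lower-triangular sensitivity matrix; inside each window
    the future inputs are held constant (FSM-style assumption)."""
    n = len(Y)
    q = np.zeros(n)
    for k in range(n - r + 1):
        # Remove the effect of already-estimated past inputs.
        b = Y[k:k + r] - S[k:k + r, :k] @ q[:k]
        # Sensitivity of the window to a constant current input q[k].
        A = (S[k:k + r, k:k + r] @ np.ones(r))[:, None]
        q[k] = tsvd_solve(A, b, p)[0]
    return q

# Toy impulse-response model and a triangular input signal.
n = 20
phi = np.array([0.5, 0.3, 0.15, 0.05] + [0.0] * (n - 4))
S = np.array([[phi[i - j] if i >= j else 0.0 for j in range(n)]
              for i in range(n)])
q_true = np.concatenate([np.linspace(0.0, 1.0, 10), np.linspace(1.0, 0.0, 10)])
Y = S @ q_true + 0.001 * np.random.default_rng(4).normal(size=n)
q_est = sequential_tsvd(S, Y)
```

With p = 1 the window problem collapses to a one-dimensional solve, which is why the sequential variant is so much simpler than searching over ranks in the whole-domain formulation.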
Out-of-Order Execution in Sequentially Consistent Shared-Memory Systems: Theory and Experiments
Institute of Scientific and Technical Information of China (English)
胡伟武; 夏培肃
1998-01-01
Traditional implementation of sequential consistency in shared-memory systems requires memory accesses to be globally performed in program order. Based on an event ordering model for correct executions in shared-memory systems, this paper proposes and proves that out-of-order execution does not influence the correctness of an execution provided a certain condition is met. Simulation results show that the out-of-order execution proposed in this paper is an effective way to improve the performance of a sequentially consistent shared-memory system.
Directory of Open Access Journals (Sweden)
Wodziński Marek
2017-06-01
This paper presents an alternative approach to sequential data classification, based on traditional machine learning algorithms (neural networks, principal component analysis, a multivariate Gaussian anomaly detector) and finding the shortest path in a directed acyclic graph, using the A* algorithm with a regression-based heuristic. Palm gestures were used as an example of sequential data and a quadrocopter was the controlled object. The study includes creation of a conceptual model and practical construction of a system using the GPU to ensure real-time operation. The results present the classification accuracy of chosen gestures and a comparison of the computation time between the CPU- and GPU-based solutions.
Physics-based, Bayesian sequential detection method and system for radioactive contraband
Candy, James V; Axelrod, Michael C; Breitfeller, Eric F; Chambers, David H; Guidry, Brian L; Manatt, Douglas R; Meyer, Alan W; Sale, Kenneth E
2014-03-18
A distributed sequential method and system for detecting and identifying radioactive contraband from highly uncertain (noisy) low-count, radionuclide measurements, i.e. an event mode sequence (EMS), using a statistical approach based on Bayesian inference and physics-model-based signal processing based on the representation of a radionuclide as a monoenergetic decomposition of monoenergetic sources. For a given photon event of the EMS, the appropriate monoenergy processing channel is determined using a confidence interval condition-based discriminator for the energy amplitude and interarrival time and parameter estimates are used to update a measured probability density function estimate for a target radionuclide. A sequential likelihood ratio test is then used to determine one of two threshold conditions signifying that the EMS is either identified as the target radionuclide or not, and if not, then repeating the process for the next sequential photon event of the EMS until one of the two threshold conditions is satisfied.
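At its core, the sequential likelihood ratio stage resembles Wald's SPRT; a generic sketch follows. The thresholds and the constant per-event evidence stream are illustrative, not the patent's radionuclide measurement models.

```python
import math

def sprt(llr_stream, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test: accumulate per-event
    log-likelihood ratios until one of two thresholds is crossed."""
    upper = math.log((1 - beta) / alpha)   # declare "target"
    lower = math.log(beta / (1 - alpha))   # declare "not target"
    total, n = 0.0, 0
    for llr in llr_stream:
        n += 1
        total += llr
        if total >= upper:
            return "target", n
        if total <= lower:
            return "not target", n
    return "undecided", n                  # stream ended first

# Each photon event contributing +0.5 nats of evidence for the target
# crosses the upper threshold ln(99) ~ 4.595 after 10 events:
decision, n = sprt([0.5] * 100)
```

The appeal for low-count measurements is that the decision arrives as soon as the accumulated evidence permits, rather than after a fixed number of photon events.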
Sequential algorithm for fast clique percolation.
Kumpula, Jussi M; Kivelä, Mikko; Kaski, Kimmo; Saramäki, Jari
2008-08-01
In complex network research clique percolation, introduced by Palla, Derényi, and Vicsek [Nature (London) 435, 814 (2005)], is a deterministic community detection method which allows for overlapping communities and is purely based on local topological properties of a network. Here we present a sequential clique percolation algorithm (SCP) to do fast community detection in weighted and unweighted networks, for cliques of a chosen size. This method is based on sequentially inserting the constituent links to the network and simultaneously keeping track of the emerging community structure. Unlike existing algorithms, the SCP method allows for detecting k-clique communities at multiple weight thresholds in a single run, and can simultaneously produce a dendrogram representation of hierarchical community structure. In sparse weighted networks, the SCP algorithm can also be used for implementing the weighted clique percolation method recently introduced by Farkas [New J. Phys. 9, 180 (2007)]. The computational time of the SCP algorithm scales linearly with the number of k-cliques in the network. As an example, the method is applied to a product association network, revealing its nested community structure.
Modifications of sequential designs in bioequivalence trials.
Zheng, Cheng; Zhao, Lihui; Wang, Jixian
2015-01-01
Bioequivalence (BE) studies are designed to show that two formulations of one drug are equivalent, and they play an important role in drug development. At the design stage, there may be a high degree of uncertainty about the variability of the formulations and the actual performance of the test versus the reference formulation. Therefore, an interim look may be desirable to stop the study if there is no chance of claiming BE at the end (futility), claim BE if evidence is sufficient (efficacy), or adjust the sample size. Sequential design approaches specifically for BE studies have been proposed in previous publications. We modified the existing methods, focusing on simplified multiplicity adjustment and futility stopping. We name our method modified sequential design for BE studies (MSDBE). Simulation results demonstrate comparable performance between MSDBE and the original published methods, while MSDBE offers more transparency and better applicability. The R package MSDBE is available at https://sites.google.com/site/modsdbe/.
Study on Non-Sequential Double Ionization of Aligned Diatomic Molecules in Strong Laser Fields
Institute of Scientific and Technical Information of China (English)
LI Yan; CHEN Jing; YANG Shi-Ping; LIU Jie
2007-01-01
We develop a semiclassical model to describe the non-sequential double ionization of aligned diatomic molecules in an intense linearly polarized field. It is found that in the tunnelling regime, the oriented molecule shows geometric effects on the double ionization process when aligned parallel and perpendicular to the external field. Our results are qualitatively consistent with the recent experimental observations.
Ad-hoc network DOA tracking via sequential Monte Carlo filtering
Institute of Scientific and Technical Information of China (English)
GUO Li; GUO Yan; LIN Jia-ru; LI Ning
2007-01-01
A novel sequential Monte Carlo (SMC) algorithm is provided for tracking the direction of arrival (DOA) of multiple maneuvering ad-hoc network terminals. A nonlinear mobility and observation model is adopted, which describes the motion features of the ad-hoc network terminals more practically. The algorithm does not need any additional measurement equipment. Simulation results show its significant tracking accuracy.
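The SMC machinery involved can be illustrated with a minimal bootstrap particle filter tracking a single scalar DOA. The random-walk dynamics and Gaussian bearing noise below are simplifying assumptions; the paper's nonlinear mobility and observation model is richer.

```python
import numpy as np

def particle_filter(measurements, n_particles=2000, q=0.02, r=0.1, seed=6):
    """Bootstrap SMC filter: random-walk DOA dynamics, Gaussian
    bearing measurements. Returns the posterior-mean track."""
    rng = np.random.default_rng(seed)
    particles = rng.uniform(-np.pi, np.pi, n_particles)   # diffuse prior
    track = []
    for z in measurements:
        particles += rng.normal(0.0, q, n_particles)      # propagate
        w = np.exp(-0.5 * ((z - particles) / r) ** 2)     # weight
        w /= w.sum()
        idx = rng.choice(n_particles, n_particles, p=w)   # resample
        particles = particles[idx]
        track.append(particles.mean())
    return np.array(track)

# A terminal whose DOA drifts slowly from 0.2 to 1.0 rad,
# observed through noisy bearing measurements.
rng = np.random.default_rng(7)
true_doa = np.linspace(0.2, 1.0, 50)
z = true_doa + rng.normal(0, 0.1, 50)
track = particle_filter(z)
```

Because the propagate/weight/resample cycle handles nonlinear models without linearization, the same loop carries over to realistic mobility models where a Kalman-style filter would struggle.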
Miller, Ronald Mellado; Capaldi, E. John
2006-01-01
Sequential theory's memory model of learning has been successfully applied in response contingent instrumental conditioning experiments (Capaldi, 1966, 1967, 1994; Capaldi & Miller, 2003). However, it has not been systematically tested in nonresponse contingent Pavlovian conditioning experiments. The present experiments attempted to determine if…
Efficient Inversion in Underwater Acoustics with Analytic, Iterative and Sequential Bayesian Methods
2015-09-30
Zoi-Heleni Michalopoulou, Department of Mathematical Sciences, New Jersey Institute of Technology. …exploiting (fully or partially) the physics of the propagation medium. Algorithms are designed for inversion via the extraction of features of the …statistical modeling. • Develop methods for passive localization and inversion of environmental parameters that select features of propagation that are…
Extending the Simultaneous-Sequential Paradigm to Measure Perceptual Capacity for Features and Words
Scharff, Alec; Palmer, John; Moore, Cathleen M.
2011-01-01
In perception, divided attention refers to conditions in which multiple stimuli are relevant to an observer. To measure the effect of divided attention in terms of perceptual capacity, we introduce an extension of the simultaneous-sequential paradigm. The extension makes predictions for fixed-capacity models as well as for unlimited-capacity…
Decentralized enforcement, sequential bargaining, and the clean development mechanism
Energy Technology Data Exchange (ETDEWEB)
Hovi, Jon
2001-07-01
While there is a vast literature both on international bargaining and on how international agreements can be enforced, very little work has been done on how bargaining and enforcement interact. An important exception is Fearon (1998), who models international cooperation as a two-stage process in which the bargaining process is constrained by a need for decentralized enforcement (meaning that the agreement must be enforced by the parties themselves rather than a third party, such as a court). Using the Clean Development Mechanism as an example, the present paper proposes a different model of this kind of interaction. The model follows Fearon's in so far as we both use the infinitely repeated Prisoners' Dilemma to capture the enforcement phase of the game. However, while Fearon depicts the bargaining stage as a War of Attrition, the present model sees that stage as a sequential bargaining game of the Staahl-Rubinstein type. The implications of the present model are compared both to those of the Staahl-Rubinstein model and to those of the Fearon model. A surprising conclusion is that a need for decentralized enforcement tends to make the bargaining outcome more symmetrical than otherwise. Thus, the impact of bargaining power is actually smaller when the resulting agreement must be enforced by the parties themselves than it is if enforcement is taken care of by a third party. (author)
G-sequentially connectedness for topological groups with operations
Mucuk, Osman; Cakalli, Huseyin
2016-08-01
It is a well-known fact that for a Hausdorff topological group X, the limits of convergent sequences in X define a function, denoted by lim, from the set of all convergent sequences in X to X. This notion has been modified by Connor and Grosse-Erdmann for real functions by replacing lim with an arbitrary linear functional G defined on a linear subspace of the vector space of all real sequences. Recently some authors have extended the concept to the topological group setting and introduced the concepts of G-sequential continuity, G-sequential compactness and G-sequential connectedness. In this work, we present some results about G-sequential closures, G-sequential connectedness and fundamental systems of G-sequentially open neighbourhoods for topological groups with operations, which include topological groups, topological rings without identity, R-modules, Lie algebras, Jordan algebras, and many others.
Method of sequential mesh on Koopman-Darmois distributions
Institute of Scientific and Technical Information of China (English)
(no author listed)
2010-01-01
For costly and/or destructive tests, the sequential method with a proper maximum sample size is needed. Based on Koopman-Darmois distributions, this paper proposes the method of sequential mesh, which has an acceptable maximum sample size. In comparison with the popular truncated sequential probability ratio test, our method has the advantage of a smaller maximum sample size and is especially applicable for costly and/or destructive tests.
SEQUENTIAL CLUSTERING-BASED EVENT DETECTION FOR NONINTRUSIVE LOAD MONITORING
Directory of Open Access Journals (Sweden)
Karim Said Barsim
2016-01-01
The problem of change-point detection has been well studied and adopted in many signal processing applications. In such applications, the informative segments of the signal are the stationary ones before and after the change-point. However, for some novel signal processing and machine learning applications such as Non-Intrusive Load Monitoring (NILM, the information contained in the non-stationary transient intervals is of equal or even more importance to the recognition process. In this paper, we introduce a novel clustering-based sequential detection of abrupt changes in an aggregate electricity consumption profile with accurate decomposition of the input signal into stationary and non-stationary segments. We also introduce various event models in the context of clustering analysis. The proposed algorithm is applied to building-level energy profiles with promising results for the residential BLUED power dataset.
Optimal sequential change-detection for fractional stochastic differential equations
Chronopoulou, Alexandra
2011-01-01
The sequential detection of an abrupt and persistent change in the dynamics of an arbitrary continuous-path stochastic process is considered; the optimality of the cumulative sums (CUSUM) test is established with respect to a modified Lorden's criterion. As a corollary, sufficient conditions are obtained for the optimality of the CUSUM test when the observed process is described by a fractional stochastic differential equation. Moreover, a novel family of model-free, Lorden-like criteria is introduced and it is shown that these criteria are optimized by the CUSUM test when a fractional Brownian motion adopts a polynomial drift. Finally, a modification of the continuous-time CUSUM test is proposed for the case that only discrete-time observations are available.
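For orientation, the CUSUM statistic in its classical discrete-time form (Page's recursion) for a Gaussian mean shift is sketched below. This is an illustrative special case only, not the fractional stochastic differential equation setting analyzed in the paper.

```python
import numpy as np

def cusum(x, mu0=0.0, mu1=1.0, sigma=1.0, threshold=10.0):
    """Page's recursive CUSUM for a mean shift mu0 -> mu1: accumulate
    the per-sample log-likelihood ratio, clipped below at zero."""
    s = 0.0
    for n, xn in enumerate(x, 1):
        llr = (mu1 - mu0) * (xn - (mu0 + mu1) / 2.0) / sigma ** 2
        s = max(0.0, s + llr)
        if s >= threshold:
            return n          # alarm (detection) time
    return None               # no alarm raised

# Mean shifts from 0 to 1 at sample 100; the clipped statistic hovers
# near zero before the change and climbs steadily after it.
rng = np.random.default_rng(8)
x = np.concatenate([rng.normal(0.0, 1.0, 100), rng.normal(1.0, 1.0, 100)])
alarm = cusum(x)
```

The clipping at zero is what makes CUSUM sequential-friendly: the statistic forgets old data that favored "no change," so the expected detection delay after the change stays short.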
Schedule-based sequential localization in asynchronous wireless networks
Zachariah, Dave; De Angelis, Alessio; Dwivedi, Satyam; Händel, Peter
2014-12-01
In this paper, we consider the schedule-based network localization concept, which does not require synchronization among nodes and does not involve communication overhead. The concept makes use of a common transmission sequence, which enables each node to perform self-localization and to localize the entire network, based on noisy propagation-time measurements. We formulate the schedule-based localization problem as an estimation problem in a Bayesian framework. This provides robustness with respect to uncertainty in such system parameters as anchor locations and timing devices. Moreover, we derive a sequential approximate maximum a posteriori (AMAP) estimator. The estimator is fully decentralized and copes with varying noise levels. By studying the fundamental constraints given by the considered measurement model, we provide a system design methodology which enables a scalable solution. Finally, we evaluate the performance of the proposed AMAP estimator by numerical simulations emulating an impulse-radio ultra-wideband (IR-UWB) wireless network.
Increased efficacy of photodynamic therapy via sequential targeting
Kessel, David; Aggarwal, Neha; Sloane, Bonnie F.
2014-03-01
Photokilling depends on the generation of death signals after photosensitized cells are irradiated. A variety of intracellular organelles can be targeted for photodamage, often with a high degree of specificity. We have discovered that a low level of photodamage directed against lysosomes can sensitize both a murine hepatoma cell line (in 2D culture) and an inflammatory breast cancer line of human origin (in a 3D model) to subsequent photodamage directed at mitochondria. Additional studies were carried out with hepatoma cells to explore possible mechanisms. The phototoxic effect of the `sequential targeting' approach was associated with an increased apoptotic response. The low level of lysosomal photodamage did not lead to any detectable migration of Fe++ from lysosomes to mitochondria or increased reactive oxygen species (ROS) formation after subsequent mitochondrial photodamage. Instead, there appears to be a signal generated that can amplify the pro-apoptotic effect of subsequent mitochondrial photodamage.
Power measures derived from the sequential query process
Pritchard, Geoffrey; Wilson, Mark C
2012-01-01
We study a basic sequential model for the discovery of winning coalitions in a simple game, well known from its use in defining the Shapley-Shubik power index. We derive in a uniform way a family of measures of collective and individual power in simple games, and show that, as for the Shapley-Shubik index, they extend naturally to measures for TU-games. In particular, the individual measures include all weighted semivalues. We single out the simplest measure in our family for more investigation, as it is new to the literature as far as we know. Although it is very different from the Shapley value, it is closely related in several ways, and is the natural analogue of the Shapley value under a nonstandard, but natural, definition of simple game. We illustrate this new measure by calculating its values on some standard examples.
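For context, the permutation (sequential-discovery) definition of the Shapley-Shubik index that the paper's family generalizes can be computed by brute force for small weighted voting games; this standard computation is not the paper's new measure:

```python
from itertools import permutations
from fractions import Fraction

def shapley_shubik(weights, quota):
    """Shapley-Shubik power index of a weighted voting game:
    a player's index is the fraction of orderings in which that
    player is pivotal (turns a losing coalition into a winning one)."""
    n = len(weights)
    pivots = [0] * n
    for order in permutations(range(n)):
        total = 0
        for player in order:
            total += weights[player]
            if total >= quota:        # this player tips the coalition
                pivots[player] += 1
                break
    m = sum(pivots)                   # equals n!: one pivot per ordering
    return [Fraction(p, m) for p in pivots]

# Classic example [4; 3, 2, 1]: weights 3, 2, 1 with quota 4
print(shapley_shubik([3, 2, 1], 4))  # → [Fraction(2, 3), Fraction(1, 6), Fraction(1, 6)]
```

In the game [4; 3, 2, 1], the heaviest player is pivotal in four of the six orderings, giving the familiar index (2/3, 1/6, 1/6).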
Phase Space Structures of k-threshold Sequential Dynamical Systems
Rani, Raffaele
2011-01-01
Sequential dynamical systems (SDS) are used to model a wide range of processes occurring on graphs or networks. The dynamics of such discrete dynamical systems is completely encoded by their phase space, a directed graph whose vertices and edges represent all possible system configurations and the transitions between configurations, respectively. Direct calculation of the phase space is in most cases a computationally demanding task. However, for some classes of SDS one can extract information on the connected-component structure of the phase space from the constituent elements of the SDS, such as its base graph and vertex functions. We present a number of novel results about the connected-component structure of the phase space for k-threshold dynamical systems with binary state spaces. We establish relations between the structure of the components, the threshold value, and the update sequence. Also, fixed-point reachability from garden-of-eden configurations is investigated and upper bounds for the length of paths in t...
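A minimal sketch of what the phase space of a k-threshold SDS means: enumerate all binary configurations and apply the threshold vertex functions sequentially in the update order. The graph, threshold, and order below are toy choices, not taken from the paper:

```python
from itertools import product

def sds_phase_space(edges, n, k, order):
    """Phase space of a k-threshold sequential dynamical system:
    vertex v updates to 1 iff at least k vertices in its closed
    neighborhood are in state 1, applied sequentially in `order`."""
    nbrs = {v: {v} for v in range(n)}           # closed neighborhoods
    for a, b in edges:
        nbrs[a].add(b)
        nbrs[b].add(a)

    def step(state):
        s = list(state)
        for v in order:                          # sequential update
            s[v] = 1 if sum(s[u] for u in nbrs[v]) >= k else 0
        return tuple(s)

    # the phase space: every configuration mapped to its successor
    return {c: step(c) for c in product((0, 1), repeat=n)}

# 2-threshold SDS on the 3-vertex path 0-1-2, update order (0, 1, 2)
ps = sds_phase_space([(0, 1), (1, 2)], n=3, k=2, order=(0, 1, 2))
fixed = [c for c, nxt in ps.items() if c == nxt]
print(fixed)   # → [(0, 0, 0), (0, 1, 1), (1, 1, 0), (1, 1, 1)]
```

Even this tiny example shows the component structure: four fixed points, with the remaining four configurations all flowing into the all-zero state.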
STATE OF THE ART - MODERN SEQUENTIAL RULE MINING TECHNIQUES
Directory of Open Access Journals (Sweden)
Anjali Paliwal
2015-10-01
Full Text Available This paper surveys the state of the art in sequential rule mining algorithms. Extracting sequential rules is a popular and computationally expensive task. We also explain the fundamentals of sequential rule mining and describe today's approaches to it. From the broad variety of efficient algorithms that have been developed, we compare the most important ones. We systematize the algorithms and analyze their performance based on both their run-time performance and theoretical considerations. Their strengths and weaknesses are also investigated.
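To make the object of study concrete (this is a definition sketch, not one of the surveyed mining algorithms), the support and confidence of a sequential rule X -> Y can be computed as below; the semantics chosen, all items of Y occurring after the last item of X, is one common convention:

```python
def occurs(seq, itemset, start=0):
    """Index of the last of the earliest occurrences of `itemset` in
    seq[start:], or None if some item is missing. Using earliest
    occurrences is a simplification for multi-item itemsets."""
    last = start - 1
    for item in itemset:
        try:
            idx = seq.index(item, start)
        except ValueError:
            return None
        last = max(last, idx)
    return last

def rule_support_confidence(sequences, x, y):
    """Sequential rule x -> y: all items of x appear, and all items
    of y appear after the last item of x."""
    count_x = count_xy = 0
    for seq in sequences:
        end_x = occurs(seq, x)
        if end_x is None:
            continue
        count_x += 1
        if occurs(seq, y, start=end_x + 1) is not None:
            count_xy += 1
    support = count_xy / len(sequences)
    confidence = count_xy / count_x if count_x else 0.0
    return support, confidence

db = [["a", "b", "c"], ["a", "c"], ["b", "a", "c"], ["c", "b"]]
print(rule_support_confidence(db, ["a"], ["c"]))   # → (0.75, 1.0)
```

In the toy database, "a" is followed by "c" in every sequence containing "a", so the rule a -> c has confidence 1.0 and support 0.75.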
Advancing the objective structured clinical examination: sequential testing in theory and practice.
Pell, Godfrey; Fuller, Richard; Homer, Matthew; Roberts, Trudie
2013-06-01
Models of short-term remediation for failing students are typically associated with improvements in candidate performance at retest. However, the process is costly to deliver, particularly for performance retests with objective structured clinical examinations (OSCEs), and there is increasing evidence that these traditional models are associated with the longitudinal underperformance of candidates. Rather than a traditional OSCE model, sequential testing involves a shorter 'screening' format, with an additional 'sequential' test for candidates who fail to meet the screening standard. For those tested twice, overall pass/fail decisions are then based on results on the full sequence of tests. In this study, the impacts of sequential assessment on student performance, cost of assessment delivery and overall reliability were modelled using data sourced from a final graduating OSCE in an undergraduate medical degree programme. Initial modelling using pre-existing OSCE data predicted significant improvements in reliability in the critical area, reflected in pilot results: 13.5% of students (n = 228) were required to sit the sequential OSCE. One student (0.4%) was identified as representing a false positive result (i.e. under the previous system this student would have passed the OSCE but failed on extended testing). Nine students (3.9%) who would have required OSCE retests under the prior system passed the full sequence and were therefore able to graduate at the normal time without loss of earnings. Overall reliability was estimated as 0.79 for the full test sequence. Significant cost savings were realised. Sequential testing in OSCEs increases reliability for borderline students because the increased number of observations implies that 'observed' student marks are closer to 'true' marks. However, the station-level quality of the assessment needs to be sufficiently high for the full benefits in terms of reliability to be achieved. The introduction of such a system has
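The decision logic of the sequence described above can be sketched as follows; the function name, the cut-scores, and the additive combination of screening and sequential scores are illustrative assumptions, not the study's actual standard-setting procedure:

```python
def osce_outcome(screen_score, screen_cut, sequential_score=None, full_cut=None):
    """Sequential OSCE decision: candidates at or above the screening
    standard pass outright; the rest sit the additional 'sequential'
    test and are judged on the combined (full-sequence) score."""
    if screen_score >= screen_cut:
        return "pass (screen)"
    if sequential_score is None:
        return "needs sequential test"
    total = screen_score + sequential_score          # full test sequence
    return "pass (full sequence)" if total >= full_cut else "fail"

print(osce_outcome(72, 60))                                      # → pass (screen)
print(osce_outcome(55, 60))                                      # → needs sequential test
print(osce_outcome(55, 60, sequential_score=70, full_cut=120))   # → pass (full sequence)
```

A candidate who narrowly fails the screen can still pass on the full sequence, which is how the pilot recovered the nine students who would previously have faced a retest.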
Wiegmann, Daniel D; Seubert, Steven M; Wade, Gordon A
2010-02-21
The behavior of a female in search of a mate determines the likelihood that she encounters a high-quality male in the search process. The fixed sample (best-of-n) search strategy and the sequential search (fixed threshold) strategy are two prominent models of search behavior. The sequential search strategy dominates the former strategy--yields an equal or higher expected net fitness return to searchers--when search costs are nontrivial and the distribution of quality among prospective mates is uniform or truncated normal. In this paper our objective is to determine whether there are any search costs or distributions of male quality for which the sequential search strategy is inferior to the fixed sample search strategy. The two search strategies are derived under general conditions in which females evaluate encountered males by inspection of an indicator character that has some functional relationship to male quality. The solutions are identical to the original models when the inspected male attribute is itself male quality. The sequential search strategy is shown to dominate the fixed sample search strategy for all search costs and distributions of male quality. Low search costs have been implicated to explain empirical observations that are consistent with the use of a fixed sample search strategy, but under conditions in which the original models were derived there is no search cost or distribution of male quality that favors the fixed sample search strategy. Plausible alternative explanations for the apparent use of this search strategy are discussed.
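The dominance result can be checked numerically in the textbook special case where quality is Uniform(0, 1), quality is observed directly, and each inspection costs c; the closed forms below are standard results for that case, not the paper's generalized indicator-character model:

```python
import math

def best_of_n_payoff(n, c):
    """Expected net payoff of the fixed-sample (best-of-n) strategy:
    E[max of n Uniform(0,1) draws] minus the total search cost."""
    return n / (n + 1) - n * c

def sequential_payoff(c):
    """Expected net payoff of the sequential (fixed-threshold) strategy
    at its optimal threshold t* = 1 - sqrt(2c): accept the first male
    with quality >= t*, paying c per inspection (expected 1/(1-t*))."""
    t = 1.0 - math.sqrt(2.0 * c)
    return (1.0 + t) / 2.0 - c / (1.0 - t)

c = 0.01
best_fixed = max(best_of_n_payoff(n, c) for n in range(1, 200))
print(round(sequential_payoff(c), 4), round(best_fixed, 4))
```

At c = 0.01 the optimal fixed-sample strategy (n = 9) nets about 0.81, while the sequential strategy at its optimal threshold nets about 0.86, illustrating the dominance.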
Pérez-López, Rafael; Sáez, Reinaldo; Alvarez-Valero, Antonio M; Miguel Nieto, José; Pace, Gaetano
2009-10-15
The Sotiel-Coronada abandoned mining district (Iberian Pyrite Belt) produced complex massive sulphide ores which were processed by flotation to obtain Cu, Zn and Pb concentrates. The crude pyrite refuses were roasted for sulphuric acid production in a plant located close to the flotation site, and the waste was stored in a tailings dam. The present study focused on measurements of flow properties, chemical characterization and mineralogical determination of the roasted pyrite refuses, with the aim of assessing the potential environmental impact in case of dam collapse. The chemical studies include determination of the total contaminant content and information about contaminant bio-availability or mobility using sequential extraction techniques. In the hypothetical case of the tailings dam breaking up and the waste spilling (ca. 4.54 Mt), a high-density mud flow would flood the Odiel river valley and reach both the Estuary of Huelva (a UNESCO Biosphere Reserve since 1983) and the Atlantic Ocean in a matter of a couple of days, as predicted by numerical simulations of dam-break wave propagation through the river valley based on quasi-2D Saint-Venant equations. The total amount of mobile pollutants that would be released into the surrounding environment is approximately 7.1 × 10^4 t of S, 1.6 × 10^4 t of Fe, 1.4 × 10^4 t of As, 1.2 × 10^4 t of Zn, 1.0 × 10^4 t of Pb, 7.4 × 10^3 t of Mn, 2.2 × 10^3 t of Cu, 1.5 × 10^2 t of Co, 36 t of Cd and 17 t of Ni. Around 90-100% of the S, Zn, Co and Ni, 60-70% of the Mn and Cd, 30-40% of the Fe and Cu, and 5% of the As and Pb in the mobile fraction would be in the most labile (water-soluble) fraction, and therefore the most dangerous and bio-available for the environment. This gives an idea of the extreme potential risk that roasted pyrite ashes pose to the environment, until now little described in the scientific literature.
Sequential cooling insert for turbine stator vane
Energy Technology Data Exchange (ETDEWEB)
Jones, Russel B
2017-04-04
A sequential flow cooling insert for a turbine stator vane of a small gas turbine engine, where the impingement cooling insert is formed as a single piece from a metal additive manufacturing process such as 3D metal printing, and where the insert includes a plurality of rows of radial extending impingement cooling air holes alternating with rows of radial extending return air holes on a pressure side wall, and where the insert includes a plurality of rows of chordwise extending second impingement cooling air holes on a suction side wall. The insert includes alternating rows of radial extending cooling air supply channels and return air channels that form a series of impingement cooling on the pressure side followed by the suction side of the insert.
Sequential Stereotype Priming: A Meta-Analysis.
Kidder, Ciara K; White, Katherine R; Hinojos, Michelle R; Sandoval, Mayra; Crites, Stephen L
2017-08-01
Psychological interest in stereotype measurement has spanned nearly a century, with researchers adopting implicit measures in the 1980s to complement explicit measures. One of the most frequently used implicit measures of stereotypes is the sequential priming paradigm. The current meta-analysis examines stereotype priming, focusing specifically on this paradigm. To contribute to ongoing discussions regarding methodological rigor in social psychology, one primary goal was to identify methodological moderators of the stereotype priming effect: whether priming is due to a relation between the prime and target stimuli, the prime and target response, participant task, stereotype dimension, stimulus onset asynchrony (SOA), or stimulus type. Data from 39 studies yielded 87 individual effect sizes from 5,497 participants. Analyses revealed that stereotype priming is significantly moderated by the presence of prime-response relations, participant task, stereotype dimension, target stimulus type, SOA, and prime repetition. These results carry both practical and theoretical implications for future research on stereotype priming.
Multilevel sequential Monte-Carlo samplers
Jasra, Ajay
2016-01-05
Multilevel Monte-Carlo methods provide a powerful technique for reducing the cost of estimating expectations to a given accuracy. They are particularly relevant for computational problems in which approximate distributions are determined via a resolution parameter h, with h=0 giving the theoretical exact distribution (e.g. SDEs or inverse problems with PDEs). The method gains its benefit by coupling samples from successive resolutions and estimating differences of successive expectations. We develop a methodology that brings Sequential Monte-Carlo (SMC) algorithms within the framework of the multilevel idea, as SMC provides a natural set-up for coupling samples over different resolutions. We prove that the new algorithm indeed preserves the benefits of the multilevel principle, even though samples at all resolutions are now correlated.
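The multilevel coupling idea can be sketched for an Euler discretization of geometric Brownian motion (a toy stand-in for the paper's SMC setting): on each level the fine and coarse paths share the same Brownian increments, and the telescoping sum of level differences estimates the fine-level expectation. Level counts and sample sizes here are arbitrary:

```python
import math
import random

def euler_gbm(T, steps, dW, s0=1.0, mu=0.05, sigma=0.2):
    """Euler path of dS = mu*S dt + sigma*S dW driven by increments dW."""
    dt = T / steps
    s = s0
    for w in dW:
        s += mu * s * dt + sigma * s * w
    return s

def mlmc_estimate(T, levels, samples_per_level):
    """Telescoping multilevel estimator of E[S_T]:
    E[P_L] = E[P_0] + sum_l E[P_l - P_{l-1}], with fine and coarse
    paths on each level coupled via shared Brownian increments."""
    total = 0.0
    for level in range(levels + 1):
        fine = 2 ** level                       # fine steps at this level
        acc = 0.0
        for _ in range(samples_per_level):
            dW = [random.gauss(0.0, math.sqrt(T / fine)) for _ in range(fine)]
            pf = euler_gbm(T, fine, dW)
            if level == 0:
                acc += pf                       # base level: plain estimate
            else:
                # coarse path reuses the same noise: pairs of fine increments
                dWc = [dW[2 * i] + dW[2 * i + 1] for i in range(fine // 2)]
                acc += pf - euler_gbm(T, fine // 2, dWc)
        total += acc / samples_per_level
    return total

random.seed(0)
est = mlmc_estimate(T=1.0, levels=4, samples_per_level=2000)
print(est)   # close to exp(0.05) ≈ 1.0513
```

The exact value is E[S_1] = exp(0.05) ≈ 1.0513; because the coupled level differences have small variance, few samples are needed on the expensive fine levels, which is where the multilevel saving comes from.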
Steganography Based on Baseline Sequential JPEG Compression
Institute of Scientific and Technical Information of China (English)
[No author listed]
2006-01-01
Information hiding in Joint Photographic Experts Group (JPEG) compressed images is investigated in this paper. Quantization is the source of information loss in the JPEG compression process; information hidden in images is therefore likely to be destroyed by JPEG compression. This paper presents an algorithm to reliably embed information into JPEG bit streams during JPEG encoding; information extraction is performed during JPEG decoding. The basic idea of our algorithm is to modify the quantized direct current (DC) coefficients and non-zero alternating current (AC) coefficients to represent one bit of information (0 or 1). Experimental results on gray images using baseline sequential JPEG encoding show that the cover images (images without secret information) and the stego-images (images with secret information) are perceptually indiscernible.
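A simplified sketch of the embedding idea (LSB modification of quantized DCT coefficients): here embedding is restricted to coefficients of magnitude at least 2 so the decoder can relocate the carrier coefficients, which is a simplification of the paper's DC and non-zero AC scheme:

```python
def embed(coeffs, bits):
    """Embed `bits` into the LSBs of quantized DCT coefficients with
    magnitude >= 2. Such magnitudes stay >= 2 after the LSB is
    overwritten, so the decoder can find the same carriers again."""
    out, it = [], iter(bits)
    for c in coeffs:
        if abs(c) >= 2:
            b = next(it, None)
            if b is not None:
                sign = -1 if c < 0 else 1
                c = sign * ((abs(c) & ~1) | b)   # overwrite magnitude LSB
        out.append(c)
    return out

def extract(coeffs, nbits):
    """Read back the LSBs of the carrier coefficients, in order."""
    return [abs(c) & 1 for c in coeffs if abs(c) >= 2][:nbits]

# toy block of quantized coefficients (first entry plays the DC role)
block = [13, -7, 4, 0, 0, 2, -3, 0, 1]
msg = [1, 0, 1, 1, 0]
stego = embed(block, msg)
print(extract(stego, len(msg)))   # → [1, 0, 1, 1, 0]
```

Restricting embedding to |c| >= 2 guarantees that the embedder and extractor agree on which coefficients carry payload; zero and ±1 coefficients pass through unchanged.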
Mechanistic studies on a sequential PDT protocol
Kessel, David
2016-03-01
A low (~LD15) PDT dose resulting in selective lysosomal photodamage can markedly promote photokilling by subsequent photodamage targeted to mitochondria. Experimental data are consistent with the proposal that cleavage of the autophagy-associated protein ATG5 to a pro-apoptotic fragment is responsible for this effect. This process is known to be dependent on the proteolytic activity of calpain. We have proposed that Ca2+ released from photodamaged lysosomes is the trigger for ATG5 cleavage. We can now document the conversion of ATG5 to the truncated form after lysosomal photodamage. Photofrin, a photosensitizer that targets both mitochondria and lysosomes, can be used for either phase of the sequential PDT process. The ability of Photofrin to target both loci may explain the well-documented efficacy of this agent.
Sequential infiltration synthesis for advanced lithography
Darling, Seth B.; Elam, Jeffrey W.; Tseng, Yu-Chih; Peng, Qing
2015-03-17
A plasma etch resist material modified by an inorganic protective component via sequential infiltration synthesis (SIS) and methods of preparing the modified resist material. The modified resist material is characterized by an improved resistance to a plasma etching or related process relative to the unmodified resist material, thereby allowing formation of patterned features into a substrate material, which may be high-aspect ratio features. The SIS process forms the protective component within the bulk resist material through a plurality of alternating exposures to gas phase precursors which infiltrate the resist material. The plasma etch resist material may be initially patterned using photolithography, electron-beam lithography or a block copolymer self-assembly process.