WorldWideScience

Sample records for adaptive regularization methods

  1. On Comparison of Adaptive Regularization Methods

    DEFF Research Database (Denmark)

    Sigurdsson, Sigurdur; Larsen, Jan; Hansen, Lars Kai

    2000-01-01

    Modeling with flexible models, such as neural networks, requires careful control of the model complexity and generalization ability of the resulting model, which finds expression in the ubiquitous bias-variance dilemma. Regularization is a tool for optimizing the model structure, reducing variance ...

  2. Adaptive regularization

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Rasmussen, Carl Edward; Svarer, C.

    1994-01-01

    Regularization, e.g., in the form of weight decay, is important for training and optimization of neural network architectures. In this work the authors provide a tool, based on asymptotic sampling theory, for the iterative estimation of weight decay parameters. The basic idea is to do a gradient desce...
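
    As a concrete stand-in for this idea, the sketch below adapts a single weight-decay parameter by gradient descent on a validation error, using a closed-form ridge model and a finite-difference gradient in log(alpha). This is illustrative only: the paper derives the update from asymptotic sampling theory rather than from a held-out validation set, and all names and constants here are assumptions.

    ```python
    import numpy as np

    def ridge_fit(X, y, alpha):
        # Closed-form ridge solution: w = (X^T X + alpha*I)^{-1} X^T y
        d = X.shape[1]
        return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

    def val_error(alpha, Xtr, ytr, Xval, yval):
        w = ridge_fit(Xtr, ytr, alpha)
        return 0.5 * np.mean((Xval @ w - yval) ** 2)

    rng = np.random.default_rng(0)
    X = rng.normal(size=(80, 20))
    y = X @ rng.normal(size=20) + rng.normal(scale=2.0, size=80)
    Xtr, ytr, Xval, yval = X[:50], y[:50], X[50:], y[50:]

    log_alpha, lr, eps = 0.0, 0.5, 1e-4
    for _ in range(100):
        # Finite-difference gradient of the validation error w.r.t. log(alpha)
        g = (val_error(np.exp(log_alpha + eps), Xtr, ytr, Xval, yval)
             - val_error(np.exp(log_alpha - eps), Xtr, ytr, Xval, yval)) / (2 * eps)
        log_alpha -= lr * g
    print("adapted weight decay:", np.exp(log_alpha))
    ```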

  3. Adaptive L1/2 Shooting Regularization Method for Survival Analysis Using Gene Expression Data

    Directory of Open Access Journals (Sweden)

    Xiao-Ying Liu

    2013-01-01

    A new adaptive L1/2 shooting regularization method for variable selection based on Cox's proportional hazards model is proposed. The adaptive L1/2 shooting algorithm is obtained by optimizing a reweighted iterative series of L1 penalties together with a shooting strategy for the L1/2 penalty. Simulation results based on high-dimensional artificial data show that the adaptive L1/2 shooting regularization method can be more accurate for variable selection than the Lasso and adaptive Lasso methods. The results from a real gene expression dataset (DLBCL) also indicate that the L1/2 regularization method performs competitively.
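
    The reweighting idea can be sketched compactly. The snippet below approximates an L1/2 penalty by iteratively reweighted L1, solved with scikit-learn's Lasso via a column-rescaling trick; it uses plain least squares instead of the Cox partial likelihood, and the paper's exact shooting updates are not reproduced, so treat it as an assumption-laden sketch.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def l_half_reweighted(X, y, lam, n_iter=10, eps=1e-6):
        """Approximate the L1/2 penalty by iteratively reweighted L1.

        Majorizing |b|^{1/2} gives per-coefficient weights
        1/(2*sqrt(|b|)); each round then solves a weighted lasso,
        implemented by rescaling columns so plain Lasso applies.
        """
        n, d = X.shape
        beta = np.linalg.lstsq(X, y, rcond=None)[0]  # warm start
        for _ in range(n_iter):
            w = 1.0 / (2.0 * np.sqrt(np.abs(beta)) + eps)
            Xs = X / w                     # column j scaled by 1/w_j
            model = Lasso(alpha=lam / (2 * n), fit_intercept=False, max_iter=10000)
            model.fit(Xs, y)               # plain L1 on rescaled columns
            beta = model.coef_ / w         # undo the rescaling
        return beta
    ```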

  4. A self-adapting and altitude-dependent regularization method for atmospheric profile retrievals

    Directory of Open Access Journals (Sweden)

    M. Ridolfi

    2009-03-01

    MIPAS is a Fourier transform spectrometer, operating on board the ENVISAT satellite since July 2002. The online retrieval algorithm produces geolocated profiles of temperature and of the volume mixing ratios of six key atmospheric constituents: H2O, O3, HNO3, CH4, N2O and NO2. In the validation phase, oscillations beyond the error bars were observed in several profiles, particularly in CH4 and N2O.

    To tackle this problem, a Tikhonov regularization scheme has been implemented in the retrieval algorithm. The applied regularization is however rather weak in order to preserve the vertical resolution of the profiles.

    In this paper we present a self-adapting and altitude-dependent regularization approach that detects whether the analyzed observations contain information about small-scale profile features, and determines the strength of the regularization accordingly. The objective of the method is to smooth out artificial oscillations as much as possible, while preserving the fine detail features of the profile when related information is detected in the observations.

    The proposed method is checked for self-consistency, and its performance is tested on MIPAS observations and compared with that of some other regularization schemes available in the literature. In all the considered cases the proposed scheme achieves good performance, thanks to its altitude dependence and to the constraints employed, which are specific to the inversion problem under consideration. The proposed method is generally applicable to iterative Gauss-Newton algorithms for the retrieval of vertical distribution profiles from atmospheric remote sounding measurements.
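
    A minimal sketch of the core computation, an altitude-dependent Tikhonov step, is given below. The per-layer strength profile lam is taken as given; the paper's contribution is precisely the self-adapting rule that chooses it, which is omitted here. Operator shapes and names are assumptions.

    ```python
    import numpy as np

    def tikhonov_altitude(K, y, lam):
        """One Tikhonov retrieval step with altitude-dependent strength.

        K: (m, n) forward operator; lam: length n-1 vector, one smoothing
        weight per vertical layer, applied to first differences.
        """
        m, n = K.shape
        L = np.diff(np.eye(n), axis=0)   # first-difference operator, (n-1, n)
        A = K.T @ K + L.T @ np.diag(lam) @ L
        return np.linalg.solve(A, K.T @ y)
    ```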

  5. Adaptive Regularization of Neural Classifiers

    DEFF Research Database (Denmark)

    Andersen, Lars Nonboe; Larsen, Jan; Hansen, Lars Kai

    1997-01-01

    We present a regularization scheme which iteratively adapts the regularization parameters by minimizing the validation error. It is suggested to use the adaptive regularization scheme in conjunction with optimal brain damage pruning to optimize the architecture and to avoid overfitting. Furthermore...

  6. Total variation regularization for seismic waveform inversion using an adaptive primal dual hybrid gradient method

    Science.gov (United States)

    Yong, Peng; Liao, Wenyuan; Huang, Jianping; Li, Zhenchuan

    2018-04-01

    Full waveform inversion is an effective tool for recovering the properties of the Earth from seismograms. However, it suffers from local minima caused mainly by the limited accuracy of the starting model and the lack of a low-frequency component in the seismic data. Because of the high velocity contrast between salt and sediment, the relation between the waveform and the velocity perturbation is strongly nonlinear, so salt inversion can easily get trapped in local minima. Since the velocity of salt is nearly constant, we can exploit this characteristic with total variation regularization to mitigate the local minima. In this paper, we develop an adaptive primal dual hybrid gradient method to implement total variation regularization by projecting the solution onto a total-variation-norm-constrained convex set, through which the total variation norm constraint is satisfied at every model iteration. The smooth background velocities are first inverted, and the perturbations are gradually recovered by successively relaxing the total variation norm constraints. A numerical experiment on projecting the BP model onto the intersection of the total variation norm and box constraints demonstrates the accuracy and efficiency of our adaptive primal dual hybrid gradient method. A workflow is designed to recover complex salt structures in the BP 2004 model and the 2D SEG/EAGE salt model, starting from a linear gradient model without using low-frequency data below 3 Hz. The salt inversion processes demonstrate that wavefield reconstruction inversion with a total variation norm and box constraints is able to overcome local minima and invert the complex salt velocity layer by layer.
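
    The key building block, projection onto a TV-norm ball, can be sketched as follows. Since the projection equals a TV proximal step with a suitably chosen weight, one can bisect over the prox weight until the constraint is active; the snippet below does this with scikit-image's TV denoiser. This is a rough illustration, not the paper's primal dual hybrid gradient scheme, and the bracketing interval is an assumption.

    ```python
    import numpy as np
    from skimage.restoration import denoise_tv_chambolle

    def tv_norm(x):
        gx = np.diff(x, axis=0, append=x[-1:, :])
        gy = np.diff(x, axis=1, append=x[:, -1:])
        return np.sqrt(gx ** 2 + gy ** 2).sum()   # isotropic TV

    def project_tv_ball(x, tau, lo=1e-4, hi=1e2, iters=30):
        """Approximate Euclidean projection onto {m : TV(m) <= tau}.

        The projection equals a TV-prox step with the right weight, and
        TV(prox) decreases monotonically in the weight, so bisect on it.
        """
        if tv_norm(x) <= tau:
            return x
        for _ in range(iters):
            w = np.sqrt(lo * hi)
            if tv_norm(denoise_tv_chambolle(x, weight=w)) > tau:
                lo = w    # not smoothed enough, increase the weight
            else:
                hi = w
        return denoise_tv_chambolle(x, weight=np.sqrt(lo * hi))
    ```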

  7. Lower Tropospheric Ozone Retrievals from Infrared Satellite Observations Using a Self-Adapting Regularization Method

    Science.gov (United States)

    Eremenko, M.; Sgheri, L.; Ridolfi, M.; Dufour, G.; Cuesta, J.

    2017-12-01

    Lower tropospheric ozone (O3) retrieval from nadir sounders is challenging due to the limited vertical sensitivity of the measurements towards the lowest layers. Although improvements have been made during the last decade, it is still important to explore ways to improve the retrieval algorithms themselves. O3 retrieval from nadir satellite observations is an ill-conditioned problem, which requires regularization using constraint matrices. Up to now, most retrieval algorithms have relied on a fixed constraint, determined beforehand on the basis of sensitivity tests. This does not allow one to take advantage of the full capabilities of the satellite measurements, which vary with the thermal conditions of the observed scenes. To overcome this limitation, we developed a self-adapting and altitude-dependent regularization scheme. A crucial step is the choice of the strength of the constraint. This choice is made during an iterative process and depends on the measurement errors and on the sensitivity of the measurements to the target parameters at the different altitudes. The challenge is to limit the use of a priori constraints to the minimal amount needed to perform the inversion. The algorithm has been tested on synthetic observations matching the future IASI-NG satellite instrument. IASI-NG measurements are simulated on the basis of O3 concentrations taken from an atmospheric model and retrieved using two retrieval schemes (the standard and self-adapting ones). Comparison of the results shows that the sensitivity of the observations to the O3 amount in the lowest layers (given by the degrees of freedom for the solution) is increased, which allows a better description of the ozone distribution, especially in the case of large ozone plumes. Biases are reduced and the spatial correlation is improved. A tentative application to real observations from IASI, currently on board the Metop satellite, will also be presented.

  8. Low-Rank Matrix Factorization With Adaptive Graph Regularizer.

    Science.gov (United States)

    Lu, Gui-Fu; Wang, Yong; Zou, Jian

    2016-05-01

    In this paper, we present a novel low-rank matrix factorization algorithm with an adaptive graph regularizer (LMFAGR). We extend the recently proposed low-rank matrix factorization with manifold regularization (MMF) method with an adaptive regularizer. Unlike MMF, which constructs an affinity graph in advance, LMFAGR simultaneously seeks the graph weight matrix and the low-dimensional representations of the data. That is, graph construction and low-rank matrix factorization are incorporated into a unified framework, which results in an automatically updated graph rather than a predefined one. Experimental results on several data sets demonstrate that the proposed algorithm outperforms state-of-the-art low-rank matrix factorization methods.

  9. Image Super-Resolution via Adaptive Regularization and Sparse Representation.

    Science.gov (United States)

    Cao, Feilong; Cai, Miaomiao; Tan, Yuanpeng; Zhao, Jianwei

    2016-07-01

    Previous studies have shown that image patches can be well represented as a sparse linear combination of elements from an appropriately selected over-complete dictionary. Recently, single-image super-resolution (SISR) via sparse representation using blurred and downsampled low-resolution images has attracted increasing interest, where the aim is to obtain the coefficients for sparse representation by solving an l0 or l1 norm optimization problem. The l0 optimization is a nonconvex and NP-hard problem, while the l1 optimization usually requires many more measurements and presents new challenges even when the image is of the usual size; we therefore propose a new approach for SISR recovery based on nonconvex regularized optimization. The proposed approach is potentially a powerful method for recovering SISR via sparse representations, and it can yield a sparser solution than the l1 regularization method. We also consider the best choice for lp regularization with p in (0, 1), proposing a scheme that adaptively selects the norm value for each image patch. In addition, we provide a method for adaptively estimating the best value of the regularization parameter λ, and we discuss an alternating iteration method for selecting p and λ. Experiments demonstrate that the proposed nonconvex regularized optimization method can outperform the convex optimization method and generate higher quality images.
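
    For a fixed patch and a fixed p, the lp-regularized subproblem is commonly attacked with iteratively reweighted least squares (IRLS); a hedged sketch under that assumption is shown below. The paper's per-patch selection of p and λ is not reproduced.

    ```python
    import numpy as np

    def lp_irls(A, y, lam, p=0.5, n_iter=20, eps=1e-6):
        """IRLS sketch for min ||y - Ax||^2 + lam * sum_i |x_i|^p, 0 < p < 1.

        Each |x_i|^p term is majorized by a quadratic, so every round
        reduces to a weighted ridge solve.
        """
        x = np.linalg.lstsq(A, y, rcond=None)[0]
        for _ in range(n_iter):
            w = (p / 2.0) * (np.abs(x) + eps) ** (p - 2.0)
            x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ y)
        return x
    ```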

  10. Regularization methods in Banach spaces

    CERN Document Server

    Schuster, Thomas; Hofmann, Bernd; Kazimierski, Kamil S

    2012-01-01

    Regularization methods aimed at finding stable approximate solutions are a necessary tool to tackle inverse and ill-posed problems. Usually the mathematical model of an inverse problem consists of an operator equation of the first kind, and often the associated forward operator acts between Hilbert spaces. However, for numerous problems the reasons for using a Hilbert space setting seem to be based on convention rather than on an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, sparsity constraints using general Lp-norms or the B...

  11. An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography

    Energy Technology Data Exchange (ETDEWEB)

    Feng Jinchao; Qin Chenghu; Jia Kebin; Han Dong; Liu Kai; Zhu Shouping; Yang Xin; Tian Jie [Medical Image Processing Group, Institute of Automation, Chinese Academy of Sciences, P. O. Box 2728, Beijing 100190 (China); College of Electronic Information and Control Engineering, Beijing University of Technology, Beijing 100124 (China); School of Life Sciences and Technology, Xidian University, Xi'an 710071 (China)]

    2011-11-15

    Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise, so regularization methods are commonly used to find a regularized solution. The quality of the reconstructed bioluminescent source then depends crucially on the choice of the regularization parameters, and their selection remains challenging. With regard to these problems, the authors propose a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation to model bioluminescent photon transport; the diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and the multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated as an l2 data-fidelity term plus a general regularization term. For choosing the regularization parameters, an efficient model function approach is proposed which does not require knowledge of the noise level; it requires only the computation of the residual and of the norm of the regularized solution. With this knowledge, the model function is constructed to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. Simulation experiments were used to illustrate why multispectral data were used ...

  12. Adaptive regularization of noisy linear inverse problems

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Madsen, Kristoffer Hougaard; Lehn-Schiøler, Tue

    2006-01-01

    In the Bayesian modeling framework there is a close relation between regularization and the prior distribution over parameters. For prior distributions in the exponential family, we show that the optimal hyper-parameter, i.e., the optimal strength of regularization, satisfies a simple relation: the expectation of the regularization function takes the same value in the posterior and in the prior distribution. We present three examples: two simulations, and an application in fMRI neuroimaging.
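
    In symbols (notation assumed here, not quoted from the paper): with regularization function R(w), strength α, prior p(w|α) ∝ exp(−α R(w)) and data D, the stated relation for the optimal α reads

    ```latex
    \mathbb{E}_{p(w \mid D, \alpha)}\bigl[R(w)\bigr] \;=\; \mathbb{E}_{p(w \mid \alpha)}\bigl[R(w)\bigr].
    ```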

  13. Iterative Regularization with Minimum-Residual Methods

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg; Hansen, Per Christian

    2007-01-01

    We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES; their success as regularization methods is highly problem dependent.
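
    The regularizing effect of truncated MINRES iterations is easy to demonstrate numerically. The toy below (an ill-conditioned symmetric kernel with noisy data; all values are illustrative) shows the iteration count acting as the regularization parameter: the error typically decreases and then grows again as iterations continue.

    ```python
    import numpy as np
    from scipy.sparse.linalg import minres

    n = 200
    t = np.linspace(0, 1, n)
    A = np.exp(-(t[:, None] - t[None, :]) ** 2 / 0.01)   # symmetric, ill-conditioned
    x_true = np.sin(2 * np.pi * t)
    b = A @ x_true + 1e-3 * np.random.default_rng(1).normal(size=n)

    # Truncated MINRES: the iteration count is the regularization parameter.
    # (`rtol` is the keyword in recent SciPy; older releases call it `tol`.)
    for k in (5, 20, 200):
        x_k, _ = minres(A, b, maxiter=k, rtol=1e-12)
        print(k, np.linalg.norm(x_k - x_true))
    ```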

  14. Iterative regularization with minimum-residual methods

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg; Hansen, Per Christian

    2006-01-01

    We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES; their success as regularization methods is highly problem dependent.

  15. Adaptive Regularization of Neural Networks Using Conjugate Gradient

    DEFF Research Database (Denmark)

    Goutte, Cyril; Larsen, Jan

    1998-01-01

    Andersen et al. (1997) and Larsen et al. (1996, 1997) suggested a regularization scheme which iteratively adapts regularization parameters by minimizing validation error using simple gradient descent. In this contribution we present an improved algorithm based on the conjugate gradient technique. Numerical experiments with feedforward neural networks successfully demonstrate improved generalization ability and lower computational cost.
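
    A rough sketch of the idea, with SciPy's conjugate-gradient optimizer standing in for the paper's algorithm and a closed-form ridge model standing in for the neural network; the group structure, data, and constants are all assumptions.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)
    X = rng.normal(size=(120, 30))
    y = X @ (rng.normal(size=30) * (rng.random(30) < 0.3)) + rng.normal(scale=0.5, size=120)
    Xtr, ytr, Xval, yval = X[:80], y[:80], X[80:], y[80:]
    groups = np.repeat([0, 1], 15)      # two weight-decay groups, e.g. two layers

    def val_err(log_lam):
        lam = np.exp(log_lam)[groups]   # expand group decays to per-parameter
        w = np.linalg.solve(Xtr.T @ Xtr + np.diag(lam), Xtr.T @ ytr)
        return 0.5 * np.mean((Xval @ w - yval) ** 2)

    res = minimize(val_err, x0=np.zeros(2), method="CG")  # CG on validation error
    print("adapted weight decays:", np.exp(res.x))
    ```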

  16. Diagrammatic methods in phase-space regularization

    International Nuclear Information System (INIS)

    Bern, Z.; Halpern, M.B.; California Univ., Berkeley

    1987-11-01

    Using the scalar prototype and gauge theory as the simplest possible examples, diagrammatic methods are developed for the recently proposed phase-space form of continuum regularization. A number of one-loop and all-order applications are given, including general diagrammatic discussions of the nogrowth theorem and the uniqueness of the phase-space stochastic calculus. The approach also generates an alternate derivation of the equivalence of the large-β phase-space regularization to the more conventional coordinate-space regularization. (orig.)

  17. Wavelet domain image restoration with adaptive edge-preserving regularization.

    Science.gov (United States)

    Belge, M; Kilmer, M E; Miller, E L

    2000-01-01

    In this paper, we consider a wavelet based edge-preserving regularization scheme for use in linear image restoration problems. Our efforts build on a collection of mathematical results indicating that wavelets are especially useful for representing functions that contain discontinuities (i.e., edges in two dimensions or jumps in one dimension). We interpret the resulting theory in a statistical signal processing framework and obtain a highly flexible framework for adapting the degree of regularization to the local structure of the underlying image. In particular, we are able to adapt quite easily to scale-varying and orientation-varying features in the image while simultaneously retaining the edge preservation properties of the regularizer. We demonstrate a half-quadratic algorithm for obtaining the restorations from observed data.

  18. Regularized logistic regression with adjusted adaptive elastic net for gene selection in high dimensional cancer classification.

    Science.gov (United States)

    Algamal, Zakariya Yahya; Lee, Muhammad Hisyam

    2015-12-01

    Cancer classification and gene selection in high-dimensional data have been popular research topics in genetics and molecular biology. Recently, adaptive regularized logistic regression using the elastic net penalty, called the adaptive elastic net, has been successfully applied in high-dimensional cancer classification to estimate the gene coefficients and perform gene selection simultaneously. The adaptive elastic net originally used elastic net estimates as the initial weights; however, these weights may not be preferable for two reasons: first, the elastic net estimator is biased in selecting genes; second, it does not perform well when the pairwise correlations between variables are not high. Adjusted adaptive regularized logistic regression (AAElastic) is proposed to address these issues and to encourage grouping effects simultaneously. The real-data results indicate that AAElastic is significantly more consistent in selecting genes than the three competitor regularization methods. Additionally, the classification performance of AAElastic is comparable to the adaptive elastic net and better than the other regularization methods. Thus, we can conclude that AAElastic is a reliable adaptive regularized logistic regression method for high-dimensional cancer classification. Copyright © 2015 Elsevier Ltd. All rights reserved.
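
    Adaptive (weighted) elastic-net penalties can be emulated in scikit-learn by rescaling columns by the inverse weights. The sketch below does this with ridge-based initial weights; the paper's adjusted initialization differs, and the rescaling also reweights the L2 part of the penalty, so this is an approximation, not AAElastic itself.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression, RidgeClassifier

    def adaptive_elastic_net(X, y, C=1.0, l1_ratio=0.5, gamma=1.0, eps=1e-4):
        # Initial weights from a ridge fit (the paper's adjusted
        # initialization differs; this is a stand-in).
        init = RidgeClassifier(alpha=1.0).fit(X, y)
        w = 1.0 / (np.abs(init.coef_.ravel()) + eps) ** gamma
        Xs = X / w                         # column rescaling = weighted penalty
        model = LogisticRegression(penalty="elasticnet", solver="saga",
                                   C=C, l1_ratio=l1_ratio, max_iter=5000)
        model.fit(Xs, y)
        return model.coef_.ravel() / w     # map back to the original scale
    ```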

  19. Optimal Design of the Adaptive Normalized Matched Filter Detector using Regularized Tyler Estimators

    KAUST Repository

    Kammoun, Abla

    2017-10-25

    This article addresses improvements in the design of the adaptive normalized matched filter (ANMF) for radar detection. It is well acknowledged that estimating the noise-clutter covariance matrix is a fundamental step in adaptive radar detection. In this paper, we consider regularized estimation methods which, by construction, force the eigenvalues of the covariance estimates to be greater than a positive regularization parameter ρ. This makes them more suitable than traditional sample covariance estimates for high-dimensional problems with a limited number of secondary data samples. The motivation behind this work is to understand the effect of ρ and to set its value properly, improving the conditioning of the estimate while maintaining a low estimation bias. More specifically, we consider the design of the ANMF detector for two kinds of regularized estimators, namely the regularized sample covariance matrix (RSCM) and the regularized Tyler estimator (RTE). The rationale behind this choice is that the RTE is efficient in mitigating the degradation caused by impulsive noise while inducing little loss when the noise is Gaussian. Based on asymptotic results from recent tools in random matrix theory, we propose a design for the regularization parameter that maximizes the asymptotic detection probability under a constant asymptotic false alarm rate. Simulations support the efficiency of the proposed method, illustrating its gain over conventional settings of the regularization parameter.
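
    The RTE itself is computed by a simple fixed-point iteration. Below is a sketch in the usual Pascal/Chen-Wiesel style; normalization conventions vary across papers, and the trace normalization here is one common choice, not necessarily the paper's.

    ```python
    import numpy as np

    def regularized_tyler(X, rho, n_iter=50):
        """Fixed-point iteration for the regularized Tyler estimator (RTE).

        X: (n, p) secondary data; rho in (0, 1] shrinks toward identity.
        """
        n, p = X.shape
        sigma = np.eye(p)
        for _ in range(n_iter):
            inv = np.linalg.inv(sigma)
            q = np.einsum("ij,jk,ik->i", X, inv, X)   # x_i^T Sigma^{-1} x_i
            s = (X / q[:, None]).T @ X                # sum_i x_i x_i^T / q_i
            sigma = (1 - rho) * (p / n) * s + rho * np.eye(p)
            sigma = p * sigma / np.trace(sigma)       # fix the scale
        return sigma
    ```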

  20. Feature selection and multi-kernel learning for adaptive graph regularized nonnegative matrix factorization

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-20

    Nonnegative matrix factorization (NMF), a popular part-based representation technique, does not capture the intrinsic local geometric structure of the data space. Graph-regularized NMF (GNMF) was recently proposed to avoid this limitation by regularizing NMF with a nearest-neighbor graph constructed from the input data set. However, GNMF has two main bottlenecks. First, using the original feature space directly to construct the graph is not necessarily optimal because of noisy and irrelevant features and the nonlinear distribution of data samples. Second, one possible way to handle the nonlinear distribution of data samples is kernel embedding; however, it is often difficult to choose the most suitable kernel. To address these bottlenecks, we propose two novel graph-regularized NMF methods, AGNMF-FS and AGNMF-MK, by introducing feature selection and multiple-kernel learning into graph-regularized NMF, respectively. Instead of using a fixed graph as in GNMF, the two proposed methods learn a nearest-neighbor graph that is adaptive to the selected features and the learned multiple kernels, respectively. For each method, we propose a unified objective function that conducts feature selection/multi-kernel learning, NMF, and adaptive graph regularization simultaneously, and we develop two iterative algorithms to solve the resulting optimization problems. Experimental results on two challenging pattern classification tasks demonstrate that the proposed methods significantly outperform state-of-the-art data representation methods.
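
    For orientation, the fixed-graph core that both methods build on (graph-regularized NMF with multiplicative updates, in the style of Cai et al.) can be sketched as below; the adaptive-graph methods additionally re-learn W inside the loop, which is omitted here.

    ```python
    import numpy as np

    def gnmf(X, W, k, lam=0.1, n_iter=200, eps=1e-9):
        """Graph-regularized NMF, X ~ U V^T, with a fixed affinity graph W.

        Objective: ||X - U V^T||_F^2 + lam * tr(V^T (D - W) V).
        """
        n, m = X.shape                    # columns of X are samples; W is (m, m)
        rng = np.random.default_rng(0)
        U, V = rng.random((n, k)), rng.random((m, k))
        D = np.diag(W.sum(axis=1))
        for _ in range(n_iter):
            U *= (X @ V) / (U @ (V.T @ V) + eps)
            V *= (X.T @ U + lam * W @ V) / (V @ (U.T @ U) + lam * D @ V + eps)
        return U, V
    ```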

  1. Dynamic experiment design regularization approach to adaptive imaging with array radar/SAR sensor systems.

    Science.gov (United States)

    Shkvarko, Yuriy; Tuxpan, José; Santos, Stewart

    2011-01-01

    We consider a problem of high-resolution array radar/SAR imaging formalized in terms of a nonlinear ill-posed inverse problem of nonparametric estimation of the power spatial spectrum pattern (SSP) of the random wavefield scattered from a remotely sensed scene observed through a kernel signal formation operator and contaminated with random Gaussian noise. First, the Sobolev-type solution space is constructed to specify the class of consistent kernel SSP estimators with reproducing kernel structures adapted to the metrics of this solution space. Next, the "model-free" variational analysis (VA)-based image enhancement approach and the "model-based" descriptive experiment design (DEED) regularization paradigm are unified into a new dynamic experiment design (DYED) regularization framework. Application of the proposed DYED framework to the adaptive array radar/SAR imaging problem leads to a class of two-level (DEED-VA) regularized SSP reconstruction techniques that combine kernel adaptive anisotropic windowing with projections onto convex sets to enforce the consistency and robustness of the overall iterative SSP estimators. We also show how the proposed DYED regularization method may be considered a generalization of the MVDR, APES and other high-resolution nonparametric adaptive radar sensing techniques. A family of DYED-related algorithms is constructed and their effectiveness is illustrated via numerical simulations.

  2. PREDICTORS OF SOCIAL AND PSYCHOLOGICAL ADAPTATION OF THE UNEMPLOYED AND PEOPLE WITH REGULAR EMPLOYMENT

    Directory of Open Access Journals (Sweden)

    Rail M Shamionov

    2017-12-01

    The article discusses the results of a study on the socio-psychological adaptation predictors of the unemployed in relation to people with regular employment. It is assumed that adaptation of the employed and the unemployed is determined by different socio-psychological phenomena; identifying these phenomena will allow the development of adaptation programmes for the unemployed that preserve their motivation for self-realization. In total, 362 people (33% of whom were male) took part in the study, including 196 unemployed. Standardized methods, as well as scales developed by the authors for assessing subject-position characteristics and the adaptive readiness of a person, were used. It was found that the unemployed are characterized by lower indicators of socio-psychological adaptation and of the characteristics that are of paramount importance for adaptation: self-acceptance, acceptance of others, and emotional comfort. Socio-demographic characteristics, scales of subjective position, adaptive readiness, subjective well-being and values were consecutively introduced into the regression equation. It is shown that adaptive readiness and values are the strongest predictors for the employed, while indicators of subjective well-being and values are more significant for the unemployed. The general predictors of adaptation are the level of education, happiness (positively) and negative affect (negatively). In other cases, the predictors are strictly differentiated.

  3. Regularized Speaker Adaptation of KL-HMM for Dysarthric Speech Recognition

    Science.gov (United States)

    Kim, Myungjong; Kim, Younggwan; Yoo, Joohong; Wang, Jun; Kim, Hoirin

    2017-01-01

    This paper addresses the problem of recognizing the speech uttered by patients with dysarthria, which is a motor speech disorder impeding the physical production of speech. Patients with dysarthria have articulatory limitation, and therefore, they often have trouble in pronouncing certain sounds, resulting in undesirable phonetic variation. Modern automatic speech recognition systems designed for regular speakers are ineffective for dysarthric sufferers due to the phonetic variation. To capture the phonetic variation, Kullback-Leibler divergence based hidden Markov model (KL-HMM) is adopted, where the emission probability of state is parametrized by a categorical distribution using phoneme posterior probabilities obtained from a deep neural network-based acoustic model. To further reflect speaker-specific phonetic variation patterns, a speaker adaptation method based on a combination of L2 regularization and confusion-reducing regularization which can enhance discriminability between categorical distributions of KL-HMM states while preserving speaker-specific information is proposed. Evaluation of the proposed speaker adaptation method on a database of several hundred words for 30 speakers consisting of 12 mildly dysarthric, 8 moderately dysarthric, and 10 non-dysarthric control speakers showed that the proposed approach significantly outperformed the conventional deep neural network based speaker adapted system on dysarthric as well as non-dysarthric speech. PMID:28320669

  4. Multiple Kernel Learning for adaptive graph regularized nonnegative matrix factorization

    KAUST Repository

    Wang, Jim Jing-Yan

    2012-01-01

    Nonnegative Matrix Factorization (NMF) has been continuously evolving in several areas, such as pattern recognition and information retrieval. It factorizes a matrix into a product of two low-rank nonnegative matrices that define a parts-based, linear representation of nonnegative data. Recently, graph-regularized NMF (GrNMF) was proposed to find a compact representation that uncovers the hidden semantics and simultaneously respects the intrinsic geometric structure. In GrNMF, an affinity graph is constructed from the original data space to encode the geometrical information. In this paper, we propose a novel idea that engages a multiple kernel learning approach to refine the graph structure, reflecting the factorization of the matrix and the new data space. GrNMF is improved by utilizing the graph refined by kernel learning, and a novel kernel learning method is then introduced under the GrNMF framework. Our approach shows encouraging results in comparison to state-of-the-art clustering algorithms such as NMF, GrNMF, and SVD.

  5. L1-norm locally linear representation regularization multi-source adaptation learning.

    Science.gov (United States)

    Tao, Jianwen; Wen, Shiting; Hu, Wenjun

    2015-09-01

    In most supervised domain adaptation learning (DAL) tasks, one has access only to a small number of labeled examples from the target domain. Therefore the success of supervised DAL in this "small sample" regime requires effective utilization of the large amounts of unlabeled data to extract information that is useful for generalization. Toward this end, we use the geometric intuition of the manifold assumption to extend the frameworks established in existing model-based DAL methods for function learning by incorporating additional information about the target geometric structure of the marginal distribution. We would like to ensure that the solution is smooth with respect to both the ambient space and the target marginal distribution. In doing this, we propose a novel L1-norm locally linear representation regularization multi-source adaptation learning framework which exploits the geometry of the probability distribution and involves two techniques. First, an L1-norm locally linear representation method is presented for robust graph construction by replacing the L2-norm reconstruction measure in LLE with an L1-norm one, termed L1-LLR for short. Second, for robust graph regularization, we replace the traditional graph Laplacian regularization with our new L1-LLR graph Laplacian regularization and thereby construct a new graph-based semi-supervised learning framework with a multi-source adaptation constraint, coined the L1-MSAL method. Moreover, to deal with the nonlinear learning problem, we generalize the L1-MSAL method by mapping the input data points from the input space to a high-dimensional reproducing kernel Hilbert space (RKHS) via a nonlinear mapping. Promising experimental results have been obtained on several real-world datasets such as face, visual video and object. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. CT Image Reconstruction from Sparse Projections Using Adaptive TpV Regularization

    Directory of Open Access Journals (Sweden)

    Hongliang Qi

    2015-01-01

    Radiation dose reduction without losing CT image quality has been an increasing concern. Reducing the number of X-ray projections used to reconstruct CT images, also called sparse-projection reconstruction, can potentially avoid excessive dose delivered to patients in CT examinations. To overcome the disadvantages of the total variation (TV) minimization method, in this work we introduce a novel adaptive TpV regularization into sparse-projection image reconstruction and use the FISTA technique to accelerate iterative convergence. The numerical experiments demonstrate that the proposed method suppresses noise and artifacts more efficiently, and preserves structure information better, than other existing reconstruction methods.

  7. An Adaptive Ridge Procedure for L0 Regularization.

    Directory of Open Access Journals (Sweden)

    Florian Frommlet

    Penalized selection criteria like AIC or BIC are among the most popular methods for variable selection. Their theoretical properties have been studied intensively and are well understood, but making use of them in the case of high-dimensional data is difficult due to the non-convex optimization problem induced by L0 penalties. In this paper we introduce an adaptive ridge procedure (AR), in which iteratively weighted ridge problems are solved, with the weights updated in such a way that the procedure converges towards selection with L0 penalties. After introducing AR, its specific shrinkage properties are studied in the particular case of orthogonal linear regression. Based on extensive simulations for the non-orthogonal case as well as for Poisson regression, the performance of AR is studied and compared with SCAD and adaptive LASSO. Furthermore, an efficient implementation of AR in the context of least-squares segmentation is presented. The paper ends with an illustrative example of applying AR to analyze GWAS data.

  8. An Adaptive Ridge Procedure for L0 Regularization.

    Science.gov (United States)

    Frommlet, Florian; Nuel, Grégory

    2016-01-01

    Penalized selection criteria like AIC or BIC are among the most popular methods for variable selection. Their theoretical properties have been studied intensively and are well understood, but making use of them in the case of high-dimensional data is difficult due to the non-convex optimization problem induced by L0 penalties. In this paper we introduce an adaptive ridge procedure (AR), in which iteratively weighted ridge problems are solved, with the weights updated in such a way that the procedure converges towards selection with L0 penalties. After introducing AR, its specific shrinkage properties are studied in the particular case of orthogonal linear regression. Based on extensive simulations for the non-orthogonal case as well as for Poisson regression, the performance of AR is studied and compared with SCAD and adaptive LASSO. Furthermore, an efficient implementation of AR in the context of least-squares segmentation is presented. The paper ends with an illustrative example of applying AR to analyze GWAS data.
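
    The AR iteration is short enough to sketch directly. Below is a least-squares version with the L0-style weights w_j = 1/(beta_j^2 + delta^2); the final hard-thresholding cutoff is an illustrative choice, not the paper's exact criterion.

    ```python
    import numpy as np

    def adaptive_ridge_l0(X, y, lam, delta=1e-5, n_iter=50):
        """Adaptive ridge (AR) iteration approximating L0-penalized regression.

        Weights w_j = 1/(beta_j^2 + delta^2) drive small coefficients to
        (numerical) zero while barely penalizing large ones.
        """
        beta = np.linalg.lstsq(X, y, rcond=None)[0]  # unpenalized warm start
        for _ in range(n_iter):
            w = 1.0 / (beta ** 2 + delta ** 2)
            beta = np.linalg.solve(X.T @ X + lam * np.diag(w), X.T @ y)
        return np.where(beta ** 2 > delta, beta, 0.0)   # illustrative cutoff
    ```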

  9. Hierarchical image segmentation via recursive superpixel with adaptive regularity

    Science.gov (United States)

    Nakamura, Kensuke; Hong, Byung-Woo

    2017-11-01

    A fast and accurate hierarchical segmentation algorithm based on a recursive superpixel technique is presented. We propose a superpixel energy formulation in which the trade-off between data fidelity and regularization is dynamically determined from the local residual during the energy optimization procedure. We also present an energy optimization algorithm that allows a pixel to be shared by multiple regions, improving accuracy and yielding an appropriate number of segments. Qualitative and quantitative evaluations demonstrate that our algorithm, combining the proposed energy and optimization, outperforms the conventional k-means algorithm by up to 29.10% in F-measure. We also perform a comparative analysis with state-of-the-art hierarchical segmentation algorithms. Our algorithm yields smooth regions throughout the hierarchy, as opposed to the others, which include insignificant details, and it offers the best balance between accuracy and computational time. Specifically, our method runs 36.48% faster than the region-merging approach, the fastest of the competing algorithms, while achieving comparable accuracy.

  10. Adaptive multiresolution methods

    Directory of Open Access Journals (Sweden)

    Schneider Kai

    2011-12-01

    These lecture notes present adaptive multiresolution schemes for evolutionary PDEs in Cartesian geometries. The discretization schemes are based on either finite-volume or finite-difference schemes. The concept of multiresolution analyses, including Harten's approach for point and cell averages, is described in some detail. Then the sparse point representation method is discussed. Different strategies for adaptive time-stepping, such as local scale-dependent time stepping and time step control, are presented. Numerous numerical examples in one, two and three space dimensions validate the adaptive schemes and illustrate the accuracy and the gain in computational efficiency in terms of CPU time and memory requirements. Another aspect, the modeling of turbulent flows using multiresolution decompositions (the so-called Coherent Vortex Simulation approach), is also described, and examples are given for computations of three-dimensional weakly compressible mixing layers. Most of the material concerning applications to PDEs is assembled and adapted from previous publications [27, 31, 32, 34, 67, 69].

  11. A New Method for Optimal Regularization Parameter Determination in the Inverse Problem of Load Identification

    Directory of Open Access Journals (Sweden)

    Wei Gao

    2016-01-01

    Based on the regularization approach to the inverse problem of load identification, a new method for determining the optimal regularization parameter is proposed. First, a quotient function (QF) is defined with the regularization parameter as its variable, based on the least-squares solution of the minimization problem. Second, the quotient function method (QFM) is proposed to select the optimal regularization parameter, based on quadratic programming theory; in employing the QFM, the behavior of the QF values with respect to different regularization parameters is taken into consideration. Finally, numerical and experimental examples are used to validate the performance of the QFM, with the Generalized Cross-Validation (GCV) method and the L-curve method as comparison methods. The results indicate that the proposed QFM is adaptive to different measuring points, noise levels, and types of dynamic load.
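
    The QF itself needs details from the paper, but one of the named comparison methods, GCV, is easy to sketch for Tikhonov regularization via the SVD (the residual component outside the range of A is omitted here for brevity):

    ```python
    import numpy as np

    def gcv_tikhonov(A, b, lams):
        """Pick lambda minimizing the GCV functional for Tikhonov via SVD."""
        m = A.shape[0]
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        beta = U.T @ b
        scores = []
        for lam in lams:
            f = s ** 2 / (s ** 2 + lam ** 2)     # Tikhonov filter factors
            resid = np.linalg.norm((1 - f) * beta) ** 2
            scores.append(resid / (m - f.sum()) ** 2)
        return lams[int(np.argmin(scores))]
    ```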

  12. Adaptive method of lines

    CERN Document Server

    Saucez, Ph

    2001-01-01

    The general Method of Lines (MOL) procedure provides a flexible format for the solution of all the major classes of partial differential equations (PDEs) and is particularly well suited to evolutionary, nonlinear wave PDEs. Despite its utility, however, there are relatively few texts that explore it at a more advanced level and reflect the method's current state of development. Written by distinguished researchers in the field, Adaptive Method of Lines reflects the diversity of techniques and applications related to the MOL. Most of its chapters focus on a particular application but also provide a discussion of underlying philosophy and technique. Particular attention is paid to the concept of both temporal and spatial adaptivity in solving time-dependent PDEs. Many important ideas and methods are introduced, including moving grids and grid refinement, static and dynamic gridding, the equidistribution principle and the concept of a monitor function, the minimization of a functional, and the moving finite elem...

  13. An Edge-Preserved Image Denoising Algorithm Based on Local Adaptive Regularization

    Directory of Open Access Journals (Sweden)

    Li Guo

    2016-01-01

    Image denoising methods are often based on the minimization of an appropriately defined energy function. Many gradient-dependent energy functions, such as the Potts model and total variation denoising, regard the image as a piecewise-constant function. In these methods, some important information, such as edge sharpness and location, is well preserved, but detailed image features such as texture are often compromised in the process of denoising. For this reason, an image denoising method based on local adaptive regularization is proposed in this paper, which can adaptively adjust the denoising degree of a noisy image by adding a spatially variable fidelity term, so as to better preserve fine-scale features of the image. Experimental results show that the proposed denoising method achieves state-of-the-art subjective visual quality, and the signal-to-noise ratio (SNR) is also objectively improved by 0.3–0.6 dB.

  14. Joint Adaptive Mean-Variance Regularization and Variance Stabilization of High Dimensional Data.

    Science.gov (United States)

    Dazard, Jean-Eudes; Rao, J Sunil

    2012-07-01

    The paper addresses a common problem in the analysis of high-dimensional high-throughput "omics" data: parameter estimation across multiple variables in a set of data where the number of variables is much larger than the sample size. Among the problems posed by this type of data are that variable-specific estimators of variances are not reliable and variable-wise test statistics have low power, both due to a lack of degrees of freedom. In addition, it has been observed in this type of data that the variance increases as a function of the mean. We introduce a non-parametric adaptive regularization procedure that is innovative in that: (i) it employs a novel "similarity statistic"-based clustering technique to generate local-pooled or regularized shrinkage estimators of population parameters, (ii) the regularization is done jointly on population moments, benefiting from C. Stein's result on inadmissibility, which implies that the usual sample variance estimator is improved by a shrinkage estimator using information contained in the sample mean. From these joint regularized shrinkage estimators, we derive regularized t-like statistics and show in simulation studies that they offer more statistical power in hypothesis testing than their standard sample counterparts, or regular common-value shrinkage estimators, or when the information contained in the sample mean is simply ignored. Finally, we show that these estimators feature interesting properties of variance stabilization and normalization that can be used for preprocessing high-dimensional multivariate data. The method is available as an R package, called 'MVR' ('Mean-Variance Regularization'), downloadable from the CRAN website.

  15. Laplacian manifold regularization method for fluorescence molecular tomography

    Science.gov (United States)

    He, Xuelei; Wang, Xiaodong; Yi, Huangjian; Chen, Yanrong; Zhang, Xu; Yu, Jingjing; He, Xiaowei

    2017-04-01

    Sparse regularization methods have been widely used in fluorescence molecular tomography (FMT) for stable three-dimensional reconstruction. Generally, ℓ1-regularization-based methods exploit the sparsity of the target distribution. However, in addition to sparsity, spatial structure information should be exploited as well. A joint ℓ1 and Laplacian manifold regularization model is proposed to improve the reconstruction performance, and two algorithms (with and without the Barzilai-Borwein strategy) are presented to solve the regularization model. Numerical studies and an in vivo experiment demonstrate that the proposed gradient-projection-resolved Laplacian manifold regularization method for the joint model performs better than a comparative ℓ1-minimization algorithm in both spatial aggregation and location accuracy.

  16. Iterative regularization methods for nonlinear ill-posed problems

    CERN Document Server

    Scherzer, Otmar; Kaltenbacher, Barbara

    2008-01-01

    Nonlinear inverse problems appear in many applications, and typically they lead to mathematical models that are ill-posed, i.e., they are unstable under data perturbations. Those problems require a regularization, i.e., a special numerical treatment. This book presents regularization schemes which are based on iteration methods, e.g., nonlinear Landweber iteration, level set methods, multilevel methods and Newton type methods.

  17. Iterative approach to self-adapting and altitude-dependent regularization for atmospheric profile retrievals.

    Science.gov (United States)

    Ridolfi, Marco; Sgheri, Luca

    2011-12-19

    In this paper we present the IVS (Iterative Variable Strength) method, an altitude-dependent, self-adapting Tikhonov regularization scheme for atmospheric profile retrievals. The method is based on a similar scheme we proposed in 2009. The new method does not need any specifically tuned minimization routine, hence it is more robust and faster. We test the self-consistency of the method using simulated observations of the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS). We then compare the new method with both our previous scheme and the scalar method currently implemented in the MIPAS on-line processor, using both synthetic and real atmospheric limb measurements. The IVS method shows very good performance.

  18. Robust Single-Image Super-Resolution Based on Adaptive Edge-Preserving Smoothing Regularization.

    Science.gov (United States)

    Huang, Shuying; Sun, Jun; Yang, Yong; Fang, Yuming; Lin, Pan; Que, Yue

    2018-06-01

    Single-image super-resolution (SR) reconstruction via sparse representation has recently attracted broad interest. It is known that a low-resolution (LR) image is susceptible to noise or blur due to the degradation of the observed image, which would lead to a poor SR performance. In this paper, we propose a novel robust edge-preserving smoothing SR (REPS-SR) method in the framework of sparse representation. An EPS regularization term is designed based on gradient-domain-guided filtering to preserve image edges and reduce noise in the reconstructed image. Furthermore, a smoothing-aware factor adaptively determined by the estimation of the noise level of LR images without manual interference is presented to obtain an optimal balance between the data fidelity term and the proposed EPS regularization term. An iterative shrinkage algorithm is used to obtain the SR image results for LR images. The proposed adaptive smoothing-aware scheme makes our method robust to different levels of noise. Experimental results indicate that the proposed method can preserve image edges and reduce noise and outperforms the current state-of-the-art methods for noisy images.

  19. The regularized monotonicity method: detecting irregular indefinite inclusions

    DEFF Research Database (Denmark)

    Garde, Henrik; Staboulis, Stratos

    2018-01-01

    Several reconstruction methods exist for detecting indefinite inclusions, where the conductivity distribution has both more and less conductive parts relative to the background conductivity; one such method is the monotonicity method of Harrach, Seo, and Ullrich. We formulate the method for irregular indefinite inclusions, meaning that we make no regularity assumptions ...

  20. R package MVR for Joint Adaptive Mean-Variance Regularization and Variance Stabilization.

    Science.gov (United States)

    Dazard, Jean-Eudes; Xu, Hua; Rao, J Sunil

    2011-01-01

    We present an implementation in the R language for statistical computing of our recent non-parametric joint adaptive mean-variance regularization and variance stabilization procedure. The method is specifically suited to the difficult problems posed by high-dimensional multivariate datasets (the p ≫ n paradigm), such as 'omics'-type data, among which are that the variance is often a function of the mean, variable-specific estimators of variances are not reliable, and test statistics have low power due to a lack of degrees of freedom. The implementation offers a complete set of features including: (i) a normalization and/or variance stabilization function, (ii) computation of mean-variance-regularized t and F statistics, (iii) generation of diverse diagnostic plots, (iv) synthetic and real 'omics' test datasets, (v) a computationally efficient implementation, using C interfacing, and an option for parallel computing, and (vi) a manual and documentation on how to set up a cluster. To make each feature as user-friendly as possible, only one subroutine per functionality is to be handled by the end-user. It is available as an R package, called MVR ('Mean-Variance Regularization'), downloadable from the CRAN.

  1. Method of transferring regular shaped vessel into cell

    International Nuclear Information System (INIS)

    Murai, Tsunehiko.

    1997-01-01

    The present invention concerns a method of transferring regular-shaped vessels from a non-contaminated area into a contaminated cell. A passage hole allowing the regular-shaped vessels to pass in the longitudinal direction is formed in a partitioning wall at the bottom of the contaminated cell. A plurality of regular-shaped vessels are stacked in multiple stages in the vertical direction from the non-contaminated area below the passage hole, allowed to pass while being urged upward, and transferred successively into the contaminated cell. As a result, since the passage hole remains substantially closed by the regular-shaped vessels during transfer, radiation and contaminated materials are prevented from escaping from the contaminated cell to the non-contaminated area. Since there is no need to open and close an isolation door frequently, the workability of the transfer is improved remarkably. In addition, since a sealing member is disposed in the passage hole to seal the gap between a passing regular-shaped vessel and the partitioning wall at the bottom, contaminated materials in the contaminated cell are prevented from escaping through the gap to the non-contaminated area. (N.H.)

  2. Better prediction by use of co-data: adaptive group-regularized ridge regression.

    Science.gov (United States)

    van de Wiel, Mark A; Lien, Tonje G; Verlaat, Wina; van Wieringen, Wessel N; Wilting, Saskia M

    2016-02-10

    For many high-dimensional studies, additional information on the variables, like (genomic) annotation or external p-values, is available. In the context of binary and continuous prediction, we develop a method for adaptive group-regularized (logistic) ridge regression, which makes structural use of such 'co-data'. Here, 'groups' refer to a partition of the variables according to the co-data. We derive empirical Bayes estimates of group-specific penalties, which possess several nice properties: (i) They are analytical. (ii) They adapt to the informativeness of the co-data for the data at hand. (iii) Only one global penalty parameter requires tuning by cross-validation. In addition, the method allows use of multiple types of co-data at little extra computational effort. We show that the group-specific penalties may lead to a larger distinction between 'near-zero' and relatively large regression parameters, which facilitates post hoc variable selection. The method, termed GRridge, is implemented in an easy-to-use R-package. It is demonstrated on two cancer genomics studies, which both concern the discrimination of precancerous cervical lesions from normal cervix tissues using methylation microarray data. For both examples, GRridge clearly improves the predictive performances of ordinary logistic ridge regression and the group lasso. In addition, we show that for the second study, the relatively good predictive performance is maintained when selecting only 42 variables. Copyright © 2015 John Wiley & Sons, Ltd.
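
    Stripped of the empirical Bayes machinery, the underlying estimator is just ridge regression with one penalty per co-data group. A minimal sketch, with the group penalties simply given rather than estimated as GRridge does:

    ```python
    import numpy as np

    def group_ridge(X, y, groups, lam_per_group):
        """Ridge with one penalty per co-data group of variables."""
        lam = np.asarray(lam_per_group)[groups]   # expand to one value per column
        return np.linalg.solve(X.T @ X + np.diag(lam), X.T @ y)

    # Usage sketch: 10 variables in two groups with unequal penalties,
    # e.g. group 0 weakly penalized because its co-data is informative.
    # beta = group_ridge(X, y, groups=np.repeat([0, 1], 5),
    #                    lam_per_group=[0.1, 10.0])
    ```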

  3. A simple regularization method for stable analytic continuation

    Science.gov (United States)

    Fu, Chu-Li; Dou, Fang-Fang; Feng, Xiao-Li; Qian, Zhi

    2008-12-01

    The problems of analytic continuation are frequently encountered in many practical applications. These problems are well known to be severely ill-posed, and therefore several regularization methods have been suggested for solving them. In this paper we consider the problem of analytic continuation of the analytic function f(z) = f(x + iy) on the strip domain Ω = {z = x + iy ∈ ℂ : x ∈ ℝ, |y| ≤ y₀}, where the data are given only on the line y = 0. We use a very simple and convenient method, the Fourier regularization method, to solve this problem. Some sharp error estimates between the exact solution and its approximation are given, and numerical examples show the method works effectively. The project is supported by the National Natural Science Foundation of China (Nos. 10671085, 10571079 and 10726017).
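
    The mechanics of Fourier regularization are easy to sketch: continue f upward in the strip by multiplying its spectrum by exp(−ξy), and cut the frequencies where noise would be amplified. The cutoff value and test function below are illustrative assumptions.

    ```python
    import numpy as np

    def continue_strip(f_samples, dx, y, xi_max):
        """Continue f from the line y=0 to y>0 inside the strip.

        Multiply the spectrum by exp(-xi*y); zero out |xi| > xi_max,
        the Fourier-regularization cutoff, where errors explode.
        """
        n = f_samples.size
        xi = 2 * np.pi * np.fft.fftfreq(n, d=dx)
        spec = np.fft.fft(f_samples)
        spec = np.where(np.abs(xi) <= xi_max, spec * np.exp(-xi * y), 0.0)
        return np.fft.ifft(spec)

    # Check on an entire function: f(z) = exp(-z^2), continued to y = 0.3.
    x = np.linspace(-10, 10, 512, endpoint=False)
    approx = continue_strip(np.exp(-x ** 2), x[1] - x[0], 0.3, xi_max=20.0)
    print(np.max(np.abs(approx - np.exp(-(x + 0.3j) ** 2))))
    ```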

  4. A two-way regularization method for MEG source reconstruction

    KAUST Repository

    Tian, Tian Siva

    2012-09-01

    The MEG inverse problem refers to the reconstruction of the neural activity of the brain from magnetoencephalography (MEG) measurements. We propose a two-way regularization (TWR) method to solve the MEG inverse problem under the assumptions that only a small number of locations in space are responsible for the measured signals (focality), and each source time course is smooth in time (smoothness). The focality and smoothness of the reconstructed signals are ensured respectively by imposing a sparsity-inducing penalty and a roughness penalty in the data fitting criterion. A two-stage algorithm is developed for fast computation, where a raw estimate of the source time course is obtained in the first stage and then refined in the second stage by the two-way regularization. The proposed method is shown to be effective on both synthetic and real-world examples. © Institute of Mathematical Statistics, 2012.

  5. Sound Attenuation in Elliptic Mufflers Using a Regular Perturbation Method

    OpenAIRE

    Banerjee, Subhabrata; Jacobi, Anthony M.

    2012-01-01

    The study of sound attenuation in an elliptical chamber involves the solution of the Helmholtz equation in elliptic coordinate systems. The eigensolutions for such problems involve the Mathieu and the modified Mathieu functions, whose computation poses a considerable challenge. An alternative method to solve such problems is proposed in this paper. The elliptical cross-section of the muffler is treated as a perturbed circle, enabling the use of a regular perturbation ...

  6. Global regularization method for planar restricted three-body problem

    Directory of Open Access Journals (Sweden)

    Sharaf M.A.

    2015-01-01

    In this paper, a global regularization method for the planar restricted three-body problem is proposed by using the transformation z = x + iy = ν cosⁿ(u + iv), where i = √(−1), 0 < ν ≤ 1 and n is a positive integer. The method is developed analytically and computationally. For the analytical developments, analytical solutions in power series of the pseudo-time τ are obtained for the positions and velocities (u, v, u′, v′) and (x, y, ẋ, ẏ) in the regularized and physical planes, respectively; the physical time t is also obtained as a power series in τ. Moreover, relations between the coefficients of the power series are obtained for two consecutive values of n. Also, we developed analytical solutions in power series form for the inverse problem of finding τ in terms of t. As typical examples, three symbolic expressions for the coefficients of the power series were developed in terms of the initial values. As to the computational developments, the global regularized equations of motion are developed together with their initial values in forms suitable for digital computations using any differential equations solver. On the other hand, for the numerical evolution of power series, an efficient method depending on the continued fraction theory is provided.

  7. Global Regularization Method for Planar Restricted Three-body Problem

    Science.gov (United States)

    Sharaf, M. A.; Dwidar, H. R.

    2015-12-01

    In this paper, a global regularization method for the planar restricted three-body problem is proposed using the transformation z = x + iy = ν cos^n(u + iv), where i = √(−1), 0 < ν ≤ 1 and n is a positive integer. The method is developed analytically and computationally. For the analytical developments, analytical solutions in power series of the pseudo-time τ are obtained for the positions and velocities (u, v, u′, v′) and (x, y, ẋ, ẏ) in the regularized and physical planes, respectively; the physical time t is also obtained as a power series in τ. Moreover, relations between the coefficients of the power series are obtained for two consecutive values of n. We also develop analytical solutions in power series form for the inverse problem of finding τ in terms of t. As typical examples, three symbolic expressions for the coefficients of the power series are developed in terms of the initial values. As to the computational developments, the global regularized equations of motion are developed together with their initial values in forms suitable for digital computation using any differential equation solver. Finally, for the numerical evaluation of the power series, an efficient method based on continued fraction theory is provided.

  8. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images

    Science.gov (United States)

    Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki

    2015-01-01

    In various unmanned aerial vehicle (UAV) imaging applications, multisensor super-resolution (SR) has long been a challenging problem and has attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to produce a high-resolution (HR) image and thereby improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm combining directionally-adaptive constraints and a multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures. PMID:26007744

  9. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images

    Directory of Open Access Journals (Sweden)

    Wonseok Kang

    2015-05-01

    In various unmanned aerial vehicle (UAV) imaging applications, multisensor super-resolution (SR) has long been a challenging problem and has attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to produce a high-resolution (HR) image and thereby improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm combining directionally-adaptive constraints and a multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures.

  10. Multisensor Super Resolution Using Directionally-Adaptive Regularization for UAV Images.

    Science.gov (United States)

    Kang, Wonseok; Yu, Soohwan; Ko, Seungyong; Paik, Joonki

    2015-05-22

    In various unmanned aerial vehicle (UAV) imaging applications, multisensor super-resolution (SR) has long been a challenging problem and has attracted increasing attention. Multisensor SR algorithms utilize multispectral low-resolution (LR) images to produce a high-resolution (HR) image and thereby improve the performance of the UAV imaging system. The primary objective of the paper is to develop a multisensor SR method based on the existing multispectral imaging framework instead of using additional sensors. In order to restore image details without noise amplification or unnatural post-processing artifacts, this paper presents an improved regularized SR algorithm combining directionally-adaptive constraints and a multiscale non-local means (NLM) filter. As a result, the proposed method can overcome the physical limitation of multispectral sensors by estimating the color HR image from a set of multispectral LR images using intensity-hue-saturation (IHS) image fusion. Experimental results show that the proposed method provides better SR results than existing state-of-the-art SR methods in the sense of objective measures.
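
    As a rough illustration of the IHS fusion step mentioned in these records, the sketch below substitutes a high-resolution intensity channel into an upsampled color image using the simple additive intensity model I = (R+G+B)/3; the names are illustrative, and the directionally-adaptive regularization and multiscale NLM stages of the paper are not shown:

      import numpy as np

      def ihs_fuse(rgb_lr_up, intensity_hr):
          """Minimal IHS-style fusion sketch: replace the intensity of the
          upsampled multispectral image with the high-resolution intensity,
          keeping the hue/saturation information of the color image."""
          i_lr = rgb_lr_up.mean(axis=2, keepdims=True)      # current intensity
          return np.clip(rgb_lr_up + (intensity_hr[..., None] - i_lr), 0.0, 1.0)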

  11. REGULARIZED D-BAR METHOD FOR THE INVERSE CONDUCTIVITY PROBLEM

    DEFF Research Database (Denmark)

    Knudsen, Kim; Lassas, Matti; Mueller, Jennifer

    2009-01-01

    A strategy for regularizing the inversion procedure for the two-dimensional D-bar reconstruction algorithm based on the global uniqueness proof of Nachman [Ann. Math. 143 (1996)] for the ill-posed inverse conductivity problem is presented. The strategy utilizes truncation of the boundary integral equation and the scattering transform. It is shown that this leads to a bound on the error in the scattering transform and a stable reconstruction of the conductivity; an explicit rate of convergence in appropriate Banach spaces is derived as well. Numerical results are also included, demonstrating the convergence of the reconstructed conductivity to the true conductivity as the noise level tends to zero. The results provide a link between two traditions of inverse problems research: theory of regularization and inversion methods based on complex geometrical optics. Also, the procedure is a novel…

  12. Teacher Effectiveness in Adapting Instruction to the Needs of Pupils With Learning Difficulties in Regular Primary Schools in Ghana

    Directory of Open Access Journals (Sweden)

    Abdul-Razak Kuyini Alhassan

    2014-01-01

    The Ghanaian education system has failed to effectively address the needs of pupils with learning difficulties (LDs) in regular classrooms. Underachievement, school dropout, streetism, and antisocial behaviors are the consequences. Teachers' lack of adequate competence in adaptive instruction is one of the fundamental reasons for this anomaly. This study aims to examine teachers' competence in adapting instruction to teach pupils with LDs in the regular classroom in Ghana. The data were gathered from 387 sampled teachers in a cross-sectional survey using questionnaires and structured observation methods. We analyzed the data using descriptive statistics, chi-square tests, correlation, t tests, and ANOVA. The results show that (a) teachers have limited to moderate competence in adaptive instruction, (b) adaptive teaching is strongly associated with teachers' competence in teaching pupils with LDs in the regular classroom, and (c) apart from gender and class size, teachers' background variables such as school location and teaching experience differ significantly. The study has serious implications for Ghana's inclusive education policy and teaching practice.

  13. Regularized binormal ROC method in disease classification using microarray data

    Directory of Open Access Journals (Sweden)

    Huang Jian

    2006-05-01

    Background: An important application of microarrays is to discover genomic biomarkers, among tens of thousands of genes assayed, for disease diagnosis and prognosis. It is thus of interest to develop efficient statistical methods that can simultaneously identify important biomarkers from such high-throughput genomic data and construct appropriate classification rules, as well as methods for evaluating classification performance and ranking identified biomarkers. Results: The ROC (receiver operating characteristic) technique has been widely used in disease classification with low-dimensional biomarkers. Compared with the empirical ROC approach, the binormal ROC is computationally more affordable and robust in small-sample-size cases. We propose using the binormal AUC (area under the ROC curve) as the objective function for two-sample classification, and the scaled threshold gradient directed regularization method for regularized estimation and biomarker selection. Tuning parameter selection is based on V-fold cross-validation. We develop Monte Carlo based methods for evaluating the stability of individual biomarkers and overall prediction performance. Extensive simulation studies show that the proposed approach can generate parsimonious models with excellent classification and prediction performance, under most simulated scenarios including model mis-specification. Application of the method to two cancer studies shows that the identified genes are reasonably stable with satisfactory prediction performance and biologically sound implications. The overall classification performance is satisfactory, with small classification errors and large AUCs. Conclusion: In comparison to existing methods, the proposed approach is computationally more affordable without losing the optimality possessed by the standard ROC method.
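
    The binormal AUC used as the objective function has a convenient closed form under the two-normal model; a minimal sketch, assuming normally distributed scores in each class:

      import numpy as np
      from scipy.stats import norm

      def binormal_auc(scores_pos, scores_neg):
          """Binormal AUC sketch: fit a normal to each class and evaluate
          AUC = Phi((mu1 - mu0) / sqrt(s0^2 + s1^2)) in closed form."""
          m1, s1 = np.mean(scores_pos), np.std(scores_pos, ddof=1)
          m0, s0 = np.mean(scores_neg), np.std(scores_neg, ddof=1)
          return norm.cdf((m1 - m0) / np.hypot(s0, s1))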

  14. Structural damage detection by a new iterative regularization method and an improved sensitivity function

    Science.gov (United States)

    Entezami, Alireza; Shariatmadar, Hashem; Sarmadi, Hassan

    2017-07-01

    A new sensitivity-based damage detection method is proposed to identify and estimate the location and severity of structural damage using incomplete noisy modal data. For these purposes, an improved sensitivity function of modal strain energy (MSE) based on a Lagrange optimization problem is derived to adapt the initial sensitivity formulation of MSE to the damage detection problem with the aid of new mathematical approaches. In the presence of incomplete noisy modal data, the sensitivity matrix is sparse, rectangular, and ill-conditioned, which leads to an ill-posed damage equation. To overcome this issue, a new regularization method named Regularized Least-Squares Minimal Residual (RLSMR) is proposed to solve the ill-posed damage equation. This method relies on Krylov subspaces and exploits bidiagonalization and iterative algorithms to solve linear systems. For the majority of Krylov subspace methods, conventional direct methods for the determination of an optimal regularization parameter may not be appropriate. To cope with this limitation, a hybrid technique is introduced that depends on the residual of the RLSMR method, the number of iterations, and the bidiagonalization algorithm. The accuracy and performance of the improved and proposed methods are numerically examined on a planar truss by incorporating incomplete noisy modal parameters and finite element modeling errors. A comparative study on the initial and improved sensitivity functions is conducted to investigate the damage detectability of these sensitivity formulations. Furthermore, the accuracy and robustness of the RLSMR method in detecting damage are compared with the well-known Tikhonov regularization method. Results show that the improved sensitivity of MSE is an efficient tool for use in the damage detection problem due to its high sensitivity to damage and reliable damage detectability in comparison with the initial sensitivity function. Additionally, it is observed that the RLSMR method with the aid…
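
    While RLSMR itself is specific to the paper, its Krylov-subspace ingredients are available off the shelf; the sketch below uses SciPy's LSMR with damping and a capped iteration count on a synthetic ill-conditioned sensitivity matrix (all names and values are illustrative):

      import numpy as np
      from scipy.sparse.linalg import lsmr

      rng = np.random.default_rng(0)
      # Synthetic ill-conditioned "sensitivity matrix" S and residual vector r.
      S = rng.standard_normal((200, 50)) @ np.diag(10.0 ** -np.linspace(0, 6, 50))
      d_true = rng.standard_normal(50)
      r = S @ d_true + 1e-4 * rng.standard_normal(200)

      # lsmr's damp term solves min ||S d - r||^2 + damp^2 ||d||^2; limiting
      # the iteration count adds further (Krylov) regularization.
      d_est = lsmr(S, r, damp=1e-3, maxiter=30)[0]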

  15. Comparison of Regularization Methods in Fluorescence Molecular Tomography

    Directory of Open Access Journals (Sweden)

    Dianwen Zhu

    2014-04-01

    In vivo fluorescence molecular tomography (FMT) has been a popular functional imaging modality in research labs over the past two decades. One of the major difficulties of FMT lies in the ill-posed and ill-conditioned nature of the inverse problem of reconstructing the distribution of fluorophores inside objects. The popular regularization methods based on the L2, L1 and total variation (TV) norms have been applied in FMT reconstructions. The non-convex Lq (0 < q < 1) semi-norm and the Log function have also been studied recently. In this paper, we adopt a uniform optimization transfer framework for these regularization methods in FMT and compare their individual, as well as combined, effects on both small, localized targets, such as tumors in the early stage, and large targets, such as the liver. Numerical simulation studies and phantom experiments have been carried out, and we found that Lq with q near 1/2 performs best in reconstructing small targets, while the joint L2 and Log regularization performs best for large targets.

  16. [The physiological analysis of cross adaptation to regular cold exposure and physical activities].

    Science.gov (United States)

    Son'kin, V D; Iakushkin, A V; Akimov, E B; Andreev, R S; Kalenov, Iu N; Kozlov, A V

    2014-01-01

    The research is devoted to a comparative analysis of the results of cold adaptation and physical training. The adaptive shifts occurring in an organism under the influence of hardening (a cold shower twice a day, 2 minutes long, for 6 weeks) and of running training on a treadmill (30 minutes at 70-80% of individual VO2max, 3 times a week, for 6 weeks) were compared in the same 6 subjects. The interval between the two training cycles was no less than 3 months. The indicators registered during a ramp test and a standard cold exposure test before and after each training cycle were compared. It is shown that the patterns of adaptive shifts during adaptation to factors of different modalities differ strongly: shifts during adaptation to physical activity were on the whole more pronounced than during adaptation to regular cold exposure. The individual variety of adaptive reactions suggests the feasibility of developing new approaches to the theory of adaptation connected with the study of physiological individuality.

  17. Robust dynamic myocardial perfusion CT deconvolution using adaptive-weighted tensor total variation regularization

    Science.gov (United States)

    Gong, Changfei; Zeng, Dong; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Feng, Qianjin; Liang, Zhengrong; Ma, Jianhua

    2016-03-01

    Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for the diagnosis and risk stratification of coronary artery disease by assessing the myocardial perfusion hemodynamic maps (MPHM). At the same time, the repeated scanning of the same region potentially results in a relatively large radiation dose to patients. In this work, we present a robust MPCT deconvolution algorithm with adaptive-weighted tensor total variation regularization, termed 'MPD-AwTTV', to estimate the residue function accurately in the low-dose context. More specifically, the AwTTV regularization takes into account the anisotropic edge property of the MPCT images, which mitigates the drawbacks of the conventional total variation (TV) regularization. An effective iterative algorithm is then adopted to minimize the associated objective function. Experimental results on a modified XCAT phantom demonstrated that the MPD-AwTTV algorithm outperforms existing deconvolution algorithms in terms of noise-induced artifact suppression, preservation of edge details and accurate MPHM estimation.

  18. From Matched Spatial Filtering towards the Fused Statistical Descriptive Regularization Method for Enhanced Radar Imaging

    Directory of Open Access Journals (Sweden)

    Shkvarko Yuriy

    2006-01-01

    We address a new approach to solving the ill-posed nonlinear inverse problem of high-resolution numerical reconstruction of the spatial spectrum pattern (SSP) of the backscattered wavefield sources distributed over the remotely sensed scene. An array or synthesized array radar (SAR) that employs digital data signal processing is considered. By exploiting the idea of combining the statistical minimum risk estimation paradigm with numerical descriptive regularization techniques, we address a new fused statistical descriptive regularization (SDR) strategy for enhanced radar imaging. Pursuing such an approach, we establish a family of SDR-related SSP estimators that encompass a range of existing beamforming techniques, from the traditional matched filter to robust and adaptive spatial filtering and minimum variance methods.

  19. An Improved Traffic Matrix Decomposition Method with Frequency-Domain Regularization

    OpenAIRE

    Wang, Zhe; Hu, Kai; Yin, Baolin

    2012-01-01

    We propose a novel network traffic matrix decomposition method named Stable Principal Component Pursuit with Frequency-Domain Regularization (SPCP-FDR), which improves the Stable Principal Component Pursuit (SPCP) method by using a frequency-domain noise regularization function. An experiment demonstrates the feasibility of this new decomposition method.

  20. Online Adaptive Replanning Method for Prostate Radiotherapy

    International Nuclear Information System (INIS)

    Ahunbay, Ergun E.; Peng Cheng; Holmes, Shannon; Godley, Andrew; Lawton, Colleen; Li, X. Allen

    2010-01-01

    Purpose: To report the application of an adaptive replanning technique for prostate cancer radiotherapy (RT), consisting of two steps: (1) segment aperture morphing (SAM) and (2) segment weight optimization (SWO), to account for interfraction variations. Methods and Materials: The new 'SAM+SWO' scheme was retrospectively applied to the daily CT images acquired for 10 prostate cancer patients on a linear accelerator and CT-on-Rails combination during the course of RT. Doses generated by the SAM+SWO scheme based on the daily CT images were compared with doses generated after patient repositioning using the current planning target volume (PTV) margin (5 mm, 3 mm toward rectum) and a reduced margin (2 mm), and with fully reoptimized plans based on the daily CT images, to evaluate the dosimetric benefits. Results: For all cases studied, the online replanning method provided significantly better target coverage than repositioning with the reduced PTV margin (13% increase in minimum prostate dose) and improved organ sparing compared with repositioning with the regular PTV margin (13% decrease in the generalized equivalent uniform dose of the rectum). The time required to complete the online replanning process was 6 ± 2 minutes. Conclusion: The proposed online replanning method can be used to account for interfraction variations in prostate RT within a practically acceptable time frame (5-10 min) and with significant dosimetric benefits. On the basis of this study, the developed online replanning scheme is being implemented in the clinic for prostate RT.

  1. On multiple level-set regularization methods for inverse problems

    International Nuclear Information System (INIS)

    DeCezaro, A; Leitão, A; Tai, X-C

    2009-01-01

    We analyze a multiple level-set method for solving inverse problems with piecewise constant solutions. This method corresponds to an iterated Tikhonov method for a particular Tikhonov functional Gα based on TV–H¹ penalization. We define generalized minimizers for our Tikhonov functional and establish an existence result. Moreover, we prove convergence and stability results for the proposed Tikhonov method. A multiple level-set algorithm is derived from the first-order optimality conditions for the Tikhonov functional Gα, in analogy with the iterated Tikhonov method. The proposed multiple level-set method is tested on an inverse potential problem. Numerical experiments show that the method is able to recover multiple objects as well as multiple contrast levels.

  2. Relaxation Methods for Strictly Convex Regularizations of Piecewise Linear Programs

    International Nuclear Information System (INIS)

    Kiwiel, K. C.

    1998-01-01

    We give an algorithm for minimizing the sum of a strictly convex function and a convex piecewise linear function. It extends several dual coordinate ascent methods for large-scale linearly constrained problems that occur in entropy maximization, quadratic programming, and network flows. In particular, it may solve exact penalty versions of such (possibly inconsistent) problems, and subproblems of bundle methods for nondifferentiable optimization. It is simple, can exploit sparsity, and in certain cases is highly parallelizable. Its global convergence is established in the recent framework of B-functions (generalized Bregman functions).

  3. Smoothing-Norm Preconditioning for Regularizing Minimum-Residual Methods

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Jensen, Toke Koldborg

    2006-01-01

    …take into account a smoothing norm for the solution. This technique is well established for CGLS, but it does not immediately carry over to minimum-residual methods when the smoothing norm is a seminorm or a Sobolev norm. We develop a new technique which works for any smoothing norm of the form ‖L…

  4. Regularization methods for inferential sensing in nuclear power plants

    International Nuclear Information System (INIS)

    Hines, J.W.; Gribok, A.V.; Attieh, I.; Uhrig, R.E.

    2000-01-01

    Inferential sensing is the use of information related to a plant parameter to infer its actual value. The most common method of inferential sensing uses a mathematical model to infer a parameter value from correlated sensor values. Collinearity in the predictor variables leads to an ill-posed problem that causes inconsistent results when data-based models such as linear regression and neural networks are used. This chapter presents several linear and non-linear inferential sensing methods, including linear regression and neural networks. Both of these methods can be modified from their original form to solve ill-posed problems and produce more consistent results. We compare these techniques using data from Florida Power Corporation's Crystal River Nuclear Power Plant to predict the drift in a feedwater flow sensor. According to a report entitled 'Feedwater Flow Measurement in U.S. Nuclear Power Generation Stations', commissioned by the Electric Power Research Institute, venturi meter fouling is 'the single most frequent cause' of derating in pressurized water reactors. This chapter presents several viable solutions to this problem.
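
    A minimal sketch of the ridge-regularized linear model underlying this kind of inferential sensing (names illustrative; not the chapter's exact models):

      import numpy as np

      def ridge_infer(X, y, alpha):
          """Fit a ridge (Tikhonov-regularized) model predicting a target
          sensor y from correlated sensors X; alpha > 0 tames the variance
          inflation caused by nearly collinear predictors."""
          n_features = X.shape[1]
          return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ y)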

  5. Cardiac Adaptations (Structural and Functional) to Regular Mountain Activities in Middle-aged Men

    Directory of Open Access Journals (Sweden)

    Abbas Saremi

    2017-09-01

    Background: Physical exercise is an important and effective part of comprehensive care for seniors, slowing the progression of aging. Because of the importance of physical activity in preventing cardiovascular disease, this study compares structural and functional characteristics of the heart between middle-aged mountaineer men and non-athlete peers. Materials and Methods: In this cross-sectional, descriptive-analytical study, 13 middle-aged mountaineer men (age: 54.5±2.0 y, body mass index: 25.59±2.4 kg/m2) who had performed continuous mountain activities during the previous 24 months (at least 2 sessions per week, each session lasting 120 minutes) and 14 sedentary, healthy peers (age: 54.1±2.2 y, body mass index: 26.8±2.3 kg/m2) who were not currently engaged in any regular physical activity (for at least 6 months) were selected. All subjects underwent standard two-dimensional and Doppler echocardiography at rest. Cardiorespiratory fitness was assessed using the Bruce test. The t test was used to compare groups, with α=0.05. Results: The results showed that mountain activities significantly increased left ventricular mass (p=0.03) and left ventricular end-diastolic diameter (p=0.04). We also observed that systolic blood pressure (p=0.04), ejection fraction (p=0.05), stroke volume (p=0.03) and cardiorespiratory fitness (p=0.03) were significantly improved by mountain climbing. In some parameters, such as fractional shortening, interventricular septum and left ventricular posterior wall, there were no significant differences between groups (p>0.05). Conclusion: These results suggest that regular mountain sports activities can have beneficial effects on the structural and functional characteristics of the heart in middle-aged men.

  6. A discrepancy-based parameter adaptation and stopping rule for minimization algorithms aiming at Tikhonov-type regularization

    International Nuclear Information System (INIS)

    Bredies, Kristian; Zhariy, Mariya

    2013-01-01

    We present a discrepancy-based parameter choice and stopping rule for iterative algorithms performing approximate Tikhonov-functional minimization which adapts the regularization parameter value during the optimization procedure. The suggested parameter choice and stopping rule can be applied to a wide class of penalty terms and iterative algorithms which aim at Tikhonov regularization with a fixed parameter value. In particular, it leads to computable guaranteed estimates for the regularized exact discrepancy in terms of numerical approximations. Based on these estimates, convergence to a solution is shown. As an example, the developed theory and the algorithm are applied to the case of sparse regularization. We prove order-optimal convergence rates for sparse regularization, i.e. weighted ℓp norms, which turn out to be the same as for the a priori parameter choice rule already obtained in the literature, as well as for Morozov's principle applied to exact regularized solutions. Finally, numerical results for two different minimization techniques, the iterative soft-thresholding algorithm and the monotone fast iterative soft-thresholding algorithm, are presented, confirming, in particular, the results from the theory.
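
    The discrepancy criterion at the heart of such rules is easy to illustrate in the static Tikhonov setting; the sketch below bisects on α until the residual matches τδ, whereas the paper's rule adapts α inside the minimization loop itself:

      import numpy as np

      def discrepancy_alpha(A, b, delta, tau=1.1, lo=1e-12, hi=1e2, iters=60):
          """Morozov-style choice (sketch): find alpha whose Tikhonov residual
          ||A x_alpha - b|| is close to tau * delta, the assumed noise level."""
          def residual(alpha):
              x = np.linalg.solve(A.T @ A + alpha * np.eye(A.shape[1]), A.T @ b)
              return np.linalg.norm(A @ x - b)
          for _ in range(iters):
              mid = np.sqrt(lo * hi)            # bisection on a log scale
              if residual(mid) < tau * delta:
                  lo = mid                      # residual too small: raise alpha
              else:
                  hi = mid
          return np.sqrt(lo * hi)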

  7. Mixed Total Variation and L1 Regularization Method for Optical Tomography Based on Radiative Transfer Equation

    Directory of Open Access Journals (Sweden)

    Jinping Tang

    2017-01-01

    Optical tomography is an emerging and important molecular imaging modality, whose aim is to reconstruct the optical properties of human tissues. In this paper, we focus on reconstructing the absorption coefficient based on the radiative transfer equation (RTE). It is an ill-posed parameter identification problem. Regularization methods have been broadly applied to reconstruct the optical coefficients, such as total variation (TV) regularization and L1 regularization. In order to better reconstruct piecewise constant and sparse coefficient distributions, the TV and L1 norms are combined as the regularizer. The forward problem is discretized with the discontinuous Galerkin method in the spatial variable and the finite element method in the angular variable. The minimization problem is solved by a Jacobian-based Levenberg-Marquardt type method equipped with a split Bregman algorithm for the L1 regularization. We use the adjoint method to compute the Jacobian matrix, which dramatically improves the computational efficiency. By comparison with other imaging reconstruction methods based on TV and L1 regularizations, the simulation results show the validity and efficiency of the proposed method.
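
    To make the combined penalty concrete, the sketch below evaluates a discrete anisotropic TV term plus an L1 term for a 2-D coefficient image; the names are illustrative, and the RTE forward model, Galerkin discretization and split Bregman solver of the paper are not reproduced:

      import numpy as np

      def tv_l1_penalty(mu, beta_tv, beta_l1):
          """Mixed regularizer sketch for a 2-D absorption image mu:
          anisotropic TV promotes piecewise-constant structure, while the
          L1 term promotes sparsity of the coefficient itself."""
          tv = np.abs(np.diff(mu, axis=0)).sum() + np.abs(np.diff(mu, axis=1)).sum()
          return beta_tv * tv + beta_l1 * np.abs(mu).sum()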

  8. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation.

    Science.gov (United States)

    Li, Yunyi; Zhang, Jie; Fan, Shangang; Yang, Jie; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki; Gui, Guan

    2017-12-15

    Both L1/2 and L2/3 are typical non-convex regularizations of Lp (0 < p < 1), which can be employed to obtain a sparser solution than L1 regularization. Recently, the multiple-state sparse transformation strategy has been developed to exploit the sparsity in L1 regularization for sparse signal recovery, combining iterative reweighted algorithms. To further exploit the sparse structure of signal and image, this paper adopts multiple sub-dictionary sparse transform strategies for the two typical cases p ∈ {1/2, 2/3} based on an iterative Lp thresholding algorithm, and then proposes a sparse adaptive iteratively-weighted Lp thresholding algorithm (SAITA). Moreover, a simple yet effective regularization parameter is proposed to weight each sub-dictionary-based Lp regularizer. Simulation results show that the proposed SAITA not only performs better than the corresponding L1 algorithms but also obtains better recovery performance and faster convergence than the conventional single-dictionary sparse transform-based Lp case. We also present applications to sparse image recovery and obtain good results in comparison with related work.

  9. Adaptive Method Using Controlled Grid Deformation

    Directory of Open Access Journals (Sweden)

    Florin FRUNZULICA

    2011-09-01

    The paper presents an adaptive method using controlled grid deformation over an elastic, isotropic and continuous domain. The adaptive process is controlled by the principal strains and principal strain directions and uses the finite element method. Numerical results are presented for several test cases.

  10. Improving thoracic four-dimensional cone-beam CT reconstruction with anatomical-adaptive image regularization (AAIR)

    International Nuclear Information System (INIS)

    Shieh, Chun-Chien; Kipritidis, John; O'Brien, Ricky T; Cooper, Benjamin J; Keall, Paul J; Kuncic, Zdenka

    2015-01-01

    Total-variation (TV) minimization reconstructions can significantly reduce noise and streaks in thoracic four-dimensional cone-beam computed tomography (4D CBCT) images compared to the Feldkamp–Davis–Kress (FDK) algorithm currently used in practice. TV minimization reconstructions are, however, prone to over-smoothing anatomical details and are also computationally inefficient. The aim of this study is to demonstrate a proof of concept that these disadvantages can be overcome by incorporating general knowledge of the thoracic anatomy via anatomy segmentation into the reconstruction. The proposed method, referred to as the anatomical-adaptive image regularization (AAIR) method, utilizes the adaptive-steepest-descent projection-onto-convex-sets (ASD-POCS) framework, but introduces an additional anatomy segmentation step in every iteration. The anatomy segmentation information is implemented in the reconstruction using a heuristic approach to adaptively suppress over-smoothing at anatomical structures of interest. The performance of AAIR depends on parameters describing the weighting of the anatomy segmentation prior and segmentation threshold values. A sensitivity study revealed that the reconstruction outcome is not sensitive to these parameters as long as they are chosen within a suitable range. AAIR was validated using a digital phantom and a patient scan and was compared to FDK, ASD-POCS and the prior image constrained compressed sensing (PICCS) method. For the phantom case, AAIR reconstruction was quantitatively shown to be the most accurate, as indicated by the mean absolute difference and the structural similarity index. For the patient case, AAIR resulted in the highest signal-to-noise ratio (i.e. the lowest level of noise and streaking) and the highest contrast-to-noise ratios for the tumor and the bony anatomy (i.e. the best visibility of anatomical details). Overall, AAIR was much less prone to over-smoothing anatomical details compared to ASD-POCS and…

  11. A regularization method for solving the Poisson equation for mixed unbounded-periodic domains

    Science.gov (United States)

    Juul Spietz, Henrik; Mølholm Hejlesen, Mads; Walther, Jens Honoré

    2018-03-01

    Regularized Green's functions for mixed unbounded-periodic domains are derived. The regularization of the Green's function removes its singularity by introducing a regularization radius which is related to the discretization length and hence imposes a minimum resolved scale. In this way the regularized unbounded-periodic Green's functions can be implemented in an FFT-based Poisson solver to obtain a convergence rate corresponding to the regularization order of the Green's function. The high order is achieved without any additional computational cost from the conventional FFT-based Poisson solver and enables the calculation of the derivative of the solution to the same high order by direct spectral differentiation. We illustrate an application of the FFT-based Poisson solver by using it with a vortex particle mesh method for the approximation of incompressible flow for a problem with a single periodic and two unbounded directions.
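
    For orientation, the sketch below shows the plain fully periodic FFT Poisson solve that such solvers build on; the paper's contribution, the regularized Green's function for mixed unbounded-periodic domains, would replace the bare 1/k² symbol used here:

      import numpy as np

      n, L = 128, 2 * np.pi
      k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
      kx, ky = np.meshgrid(k, k, indexing="ij")
      k2 = kx**2 + ky**2
      k2[0, 0] = 1.0                              # avoid dividing the mean mode by zero

      x = np.arange(n) * L / n
      X, Y = np.meshgrid(x, x, indexing="ij")
      rhs = np.sin(X) * np.cos(2 * Y)             # solve  -laplace(u) = rhs

      u_hat = np.fft.fft2(rhs) / k2               # spectral Green's function 1/k^2
      u_hat[0, 0] = 0.0                           # fix the free mean value
      u = np.real(np.fft.ifft2(u_hat))
      print(np.max(np.abs(u - rhs / 5.0)))        # exact solution is rhs/5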

  12. A New Method for Determining Optimal Regularization Parameter in Near-Field Acoustic Holography

    Directory of Open Access Journals (Sweden)

    Yue Xiao

    2018-01-01

    The Tikhonov regularization method is effective in stabilizing the reconstruction process of near-field acoustic holography (NAH) based on the equivalent source method (ESM), and the selection of the optimal regularization parameter is a key problem that determines the regularization effect. In this work, a new method for determining the optimal regularization parameter is proposed. The transfer matrix relating the source strengths of the equivalent sources to the measured pressures on the hologram surface is augmented by adding a fictitious point source with zero strength. The minimization of the norm of this fictitious point source strength is used as the criterion for choosing the optimal regularization parameter, since the reconstructed value should tend to zero. The original inverse problem of calculating the source strengths is converted into a univariate optimization problem, which is solved by a one-dimensional search technique. Two numerical simulations, with a point-driven simply supported plate and a pulsating sphere, are investigated to validate the performance of the proposed method by comparison with the L-curve method. The results demonstrate that the proposed method can determine the regularization parameter correctly and effectively for reconstruction in NAH.

  13. Solution adaptive mesh using moving mesh method

    International Nuclear Information System (INIS)

    Tilak, A.S.; Tong, A.Y.; Liao, G.

    2004-01-01

    This work deals with a mesh adaptation strategy to enhance the accuracy of the numerical solution of partial differential equations. This was achieved economically by employing the moving grid finite difference method, reformulated as a first-order div-curl system. This system was then solved using the least-squares finite element method (LSFEM). The reformulation has two desirable effects: first, it eliminates the expensive gradient computation of the original method, and second, it allows the method to be employed for mesh adaptation with dynamic boundaries. A 2-D general finite element code implementing the mesh adaptation method based on LSFEM, capable of analyzing self-adjoint problems in elasticity and heat transfer with a variety of boundary conditions, sources or sinks, was developed and thoroughly validated. The code was used to analyze and adapt meshes for problems in heat transfer and elasticity. The method was found to perform satisfactorily in all test cases.

  14. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation

    Directory of Open Access Journals (Sweden)

    Yunyi Li

    2017-12-01

    Both L1/2 and L2/3 are typical non-convex regularizations of Lp (0 < p < 1), which can be employed to obtain a sparser solution than L1 regularization. Recently, the multiple-state sparse transformation strategy has been developed to exploit the sparsity in L1 regularization for sparse signal recovery, combining iterative reweighted algorithms. To further exploit the sparse structure of signal and image, this paper adopts multiple sub-dictionary sparse transform strategies for the two typical cases p ∈ {1/2, 2/3} based on an iterative Lp thresholding algorithm, and then proposes a sparse adaptive iteratively-weighted Lp thresholding algorithm (SAITA). Moreover, a simple yet effective regularization parameter is proposed to weight each sub-dictionary-based Lp regularizer. Simulation results show that the proposed SAITA not only performs better than the corresponding L1 algorithms but also obtains better recovery performance and faster convergence than the conventional single-dictionary sparse transform-based Lp case. We also present applications to sparse image recovery and obtain good results in comparison with related work.
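
    As a rough single-dictionary illustration of iteratively reweighted Lp thresholding (a simplified stand-in, not the authors' multi-sub-dictionary SAITA), each sweep below takes a gradient step and applies a soft-threshold whose per-coefficient level is reweighted by |x_i|^(p-1):

      import numpy as np

      def irlp_threshold(A, b, lam, p=0.5, iters=100, eps=1e-8):
          """Iteratively reweighted thresholding sketch for an Lp (0<p<1)
          penalty: ISTA steps with weights w_i ~ |x_i|^(p-1) that mimic the
          non-convex penalty."""
          x = np.linalg.lstsq(A, b, rcond=None)[0]   # warm start avoids degenerate weights
          step = 1.0 / np.linalg.norm(A, 2) ** 2     # gradient step size
          for _ in range(iters):
              z = x - step * (A.T @ (A @ x - b))     # gradient step on the data fit
              w = (np.abs(x) + eps) ** (p - 1.0)     # reweighting from current iterate
              x = np.sign(z) * np.maximum(np.abs(z) - step * lam * w, 0.0)
          return x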

  15. Solving ill-posed control problems by stabilized finite element methods: an alternative to Tikhonov regularization

    Science.gov (United States)

    Burman, Erik; Hansbo, Peter; Larson, Mats G.

    2018-03-01

    Tikhonov regularization is one of the most commonly used methods for the regularization of ill-posed problems. In the setting of finite element solutions of elliptic partial differential control problems, Tikhonov regularization amounts to adding suitably weighted least squares terms of the control variable, or derivatives thereof, to the Lagrangian determining the optimality system. In this note we show that the stabilization methods for discretely ill-posed problems developed in the setting of convection-dominated convection-diffusion problems, can be highly suitable for stabilizing optimal control problems, and that Tikhonov regularization will lead to less accurate discrete solutions. We consider some inverse problems for Poisson’s equation as an illustration and derive new error estimates both for the reconstruction of the solution from the measured data and reconstruction of the source term from the measured data. These estimates include both the effect of the discretization error and error in the measurements.

  16. Solution of damped generalized regularized long-wave equation using a modified homotopy analysis method

    Science.gov (United States)

    Akram, Ghazala; Sadaf, Maasoomah

    2018-02-01

    A modified algorithm for the homotopy analysis method (MHAM) is presented for the solution of the nonlinear damped generalized regularized long-wave equation. The modified algorithm has a lower computational cost than the standard HAM and also overcomes the difficulty of calculating complicated integrals. The MHAM is applied to different cases of the damped generalized regularized long-wave equation subject to suitable initial conditions. The numerical results show that the approximate solutions are in good agreement with the exact solutions.

  17. The Method of Adaptive Comparative Judgement

    Science.gov (United States)

    Pollitt, Alastair

    2012-01-01

    Adaptive Comparative Judgement (ACJ) is a modification of Thurstone's method of comparative judgement that exploits the power of adaptivity, but in scoring rather than testing. Professional judgement by teachers replaces the marking of tests; a judge is asked to compare the work of two students and simply to decide which of them is the better…

  18. Analysis of the iteratively regularized Gauss–Newton method under a heuristic rule

    Science.gov (United States)

    Jin, Qinian; Wang, Wei

    2018-03-01

    The iteratively regularized Gauss–Newton method is one of the most prominent regularization methods for solving nonlinear ill-posed inverse problems when the data is corrupted by noise. In order to produce a useful approximate solution, this iterative method should be terminated properly. The existing a priori and a posteriori stopping rules require accurate information on the noise level, which may not be available or reliable in practical applications. In this paper we propose a heuristic selection rule for this regularization method, which requires no information on the noise level. By imposing certain conditions on the noise, we derive a posteriori error estimates on the approximate solutions under various source conditions. Furthermore, we establish a convergence result without using any source condition. Numerical results are presented to illustrate the performance of our heuristic selection rule.
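
    The basic IRGNM update itself is standard; a minimal sketch, assuming user-supplied forward map F and Jacobian J (the regularization parameters α_k are typically decreased geometrically, and the heuristic stopping rule analyzed in the paper is not reproduced):

      import numpy as np

      def irgnm_step(F, J, x, x0, y_delta, alpha):
          """One iteratively regularized Gauss-Newton update:
          x+ = x + (J'J + alpha I)^(-1) (J'(y - F(x)) + alpha (x0 - x))."""
          Jx = J(x)
          lhs = Jx.T @ Jx + alpha * np.eye(x.size)
          rhs = Jx.T @ (y_delta - F(x)) + alpha * (x0 - x)
          return x + np.linalg.solve(lhs, rhs)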

  19. Construction Method of Regularization by Singular Value Decomposition of Design Matrix

    Directory of Open Access Journals (Sweden)

    LIN Dongfang

    2016-08-01

    Tikhonov regularization introduces a regularization parameter and a stabilizing functional to improve ill-conditioning. When the stabilizing functional is expressed as a two-norm constraint, the regularization method coincides with ridge estimation. Analysis of the variance and bias of the ridge estimate shows that ridge estimation improves the ill-conditioning but introduces additional bias, lowering the reliability of the estimate. We find that correcting the larger singular values cannot decrease the variance effectively and only introduces more bias, whereas correcting the smaller singular values does decrease the variance effectively. We therefore choose the eigenvectors associated with the smaller singular values to construct the regularization matrix. This adjusts the correction of the singular values, decreases the variance and bias, and finally yields a more reliable estimate.
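
    For reference, the link between ridge/Tikhonov estimation and the singular value decomposition is captured by filter factors that damp exactly the small singular values responsible for the variance; a minimal numpy sketch:

      import numpy as np

      def tikhonov_svd(A, b, alpha):
          """Tikhonov solution via SVD filter factors: each component
          (u_i' b)/s_i is damped by f_i = s_i^2 / (s_i^2 + alpha), so the
          small singular values are suppressed most strongly."""
          U, s, Vt = np.linalg.svd(A, full_matrices=False)
          f = s**2 / (s**2 + alpha)                  # filter factors
          return Vt.T @ (f * (U.T @ b) / s)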

  20. An Iterative Regularization Method for Identifying the Source Term in a Second Order Differential Equation

    Directory of Open Access Journals (Sweden)

    Fairouz Zouyed

    2015-01-01

    This paper discusses the inverse problem of determining an unknown source in a second-order differential equation from measured final data. This problem is ill-posed; that is, the solution (if it exists) does not depend continuously on the data. In order to solve the considered problem, an iterative method is proposed. Using this method, a regularized solution is constructed and an a priori error estimate between the exact solution and its regularized approximation is obtained. Moreover, numerical results are presented to illustrate the accuracy and efficiency of this method.

  1. Adaptive mixed methods for axisymmetric shells

    International Nuclear Information System (INIS)

    Malta, S.M.C.; Loula, A.F.D.; Garcia, E.L.M.

    1989-09-01

    The mixed Petrov-Galerkin method is applied to axisymmetric shells with uniform and non-uniform meshes. Numerical experiments with a cylindrical shell showed a significant improvement in convergence and accuracy with adaptive meshes.

  2. Adaptive Control Methods for Soft Robots

    Data.gov (United States)

    National Aeronautics and Space Administration — I propose to develop methods for soft and inflatable robots that will allow the control system to adapt and change control parameters based on changing conditions...

  3. Technical Note: Regularization performances with the error consistency method in the case of retrieved atmospheric profiles

    Directory of Open Access Journals (Sweden)

    S. Ceccherini

    2007-01-01

    The retrieval of concentration vertical profiles of atmospheric constituents from spectroscopic measurements is often an ill-conditioned problem, and regularization methods are frequently used to improve its stability. Recently a new method, which provides a good compromise between precision and vertical resolution, was proposed to determine analytically the value of the regularization parameter. This method is applied for the first time to real measurements through its implementation in the operational retrieval code for the satellite limb-emission measurements of the MIPAS instrument, and its performance is quantitatively analyzed. The adopted regularization improves the stability of the retrieval, providing smooth profiles without major degradation of the vertical resolution. In the analyzed measurements the retrieval procedure provides a vertical resolution that, in the troposphere and lower stratosphere, is smaller than the vertical field of view of the instrument.

  4. Adaptive finite element methods for differential equations

    CERN Document Server

    Bangerth, Wolfgang

    2003-01-01

    These Lecture Notes discuss concepts of `self-adaptivity' in the numerical solution of differential equations, with emphasis on Galerkin finite element methods. The key issues are a posteriori error estimation and automatic mesh adaptation. Besides the traditional approach of energy-norm error control, a new duality-based technique, the Dual Weighted Residual method for goal-oriented error estimation, is discussed in detail. This method aims at economical computation of arbitrary quantities of physical interest by properly adapting the computational mesh. This is typically required in the design cycles of technical applications. For example, the drag coefficient of a body immersed in a viscous flow is computed, then it is minimized by varying certain control parameters, and finally the stability of the resulting flow is investigated by solving an eigenvalue problem. `Goal-oriented' adaptivity is designed to achieve these tasks with minimal cost. At the end of each chapter some exercises are posed in order…

  5. Use of regularization method in the determination of ring parameters and orbit correction

    International Nuclear Information System (INIS)

    Tang, Y.N.; Krinsky, S.

    1993-01-01

    We discuss applying the regularization method of Tikhonov to the solution of inverse problems arising in accelerator operations. This approach has been successfully used for orbit correction on the NSLS storage rings, and is presently being applied to the determination of betatron functions and phases from the measured response matrix. Inverse problems for differential equations often lead to sets of integral equations of the first kind, which are ill-conditioned. The regularization method is used to combat this ill-posedness.
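
    A minimal sketch of the orbit-correction use case, assuming a measured response matrix R from corrector kicks to orbit readings (names illustrative):

      import numpy as np

      def corrector_kicks(R, orbit, alpha):
          """Tikhonov-regularized orbit correction: choose kicks theta to
          minimize ||R theta + orbit||^2 + alpha ||theta||^2, so that noisy,
          nearly degenerate directions of R cannot demand huge kicks."""
          return -np.linalg.solve(R.T @ R + alpha * np.eye(R.shape[1]), R.T @ orbit)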

  6. Anomaly detection in homogenous populations: A sparse multiple kernel-based regularization method

    DEFF Research Database (Denmark)

    Chen, Tianshi; Andersen, Martin S.; Chiuso, Alessandro

    2014-01-01

    A problem of anomaly detection in homogenous populations consisting of linear stable systems is studied. The recently introduced sparse multiple kernel based regularization method is applied to solve the problem. A common problem with the existing regularization methods is that there lacks… both the parameter and hyper-parameter estimation problems can be cast as convex and sequential convex optimization problems. It is possible to derive scalable solutions to both the parameter and hyper-parameter estimation problems and thus provide a scalable solution to the anomaly detection…

  7. Learning Unknown Structure in CRFs via Adaptive Gradient Projection Method

    Directory of Open Access Journals (Sweden)

    Wei Xue

    2016-08-01

    We study the problem of fitting probabilistic graphical models to given data when the structure is not known. More specifically, we focus on learning unknown structure in conditional random fields, and in particular on learning the structure and parameters of a conditional random field model simultaneously. To do this, we first formulate the learning problem as a convex minimization problem by adding an ℓ2 regularization to the node parameters and a group ℓ1 regularization to the edge parameters, and then propose a gradient-based projection method to solve it which combines an adaptive stepsize selection strategy with a nonmonotone line search. Extensive simulation experiments are presented to show the performance of our approach in solving unknown structure learning problems.

  8. Kernel Fisher Discriminant Analysis Based on a Regularized Method for Multiclassification and Application in Lithological Identification

    Directory of Open Access Journals (Sweden)

    Dejiang Luo

    2015-01-01

    This study aimed to construct a kernel Fisher discriminant analysis (KFDA) method from well logs for lithology identification purposes. KFDA, via the use of a kernel trick, greatly improves multiclass classification accuracy compared with Fisher discriminant analysis (FDA). The optimal kernel Fisher projection of KFDA can be expressed as a generalized characteristic equation, which is difficult to solve directly; therefore, a regularized method is used. In the absence of a method for determining the value of the regularization parameter, it is often chosen based on expert experience or specified by tests. In this paper, an improved KFDA (IKFDA) is proposed that obtains the optimal regularization parameter by means of a numerical method. The approach exploits the optimal regularization parameter selection ability of KFDA to obtain improved classification results. The method is simple and not computationally complex. The IKFDA was applied to the Iris data sets for training and testing purposes and subsequently to lithology data sets. The experimental results illustrate that it is possible to successfully separate data that are not linearly separable, thereby confirming that the method is effective.

  9. Information operator approach and iterative regularization methods for atmospheric remote sensing

    Energy Technology Data Exchange (ETDEWEB)

    Doicu, A. [German Aerospace Center, Remote Sensing Technology Institute, Oberpfaffenhofen (Germany)]. E-mail: adrian.doicu@dlr.de; Hilgers, S. [German Aerospace Center, Remote Sensing Technology Institute, Oberpfaffenhofen (Germany); Bargen, A. von [German Aerospace Center, Remote Sensing Technology Institute, Oberpfaffenhofen (Germany); Rozanov, A. [Institute of Environmental Physics, University of Bremen (Germany); Eichmann, K.-U. [Institute of Environmental Physics, University of Bremen (Germany); Savigny, C. von [Institute of Environmental Physics, University of Bremen (Germany); Burrows, J.P. [Institute of Environmental Physics, University of Bremen (Germany)

    2007-01-15

    In this study, we present the main features of the information operator approach for solving linear inverse problems arising in atmospheric remote sensing. This method is superior to the stochastic version of Tikhonov regularization (the optimal estimation method) due to its capability to filter out the noise-dominated components of the solution generated by an inappropriate choice of the regularization parameter. We extend this approach to iterative methods for nonlinear ill-posed problems and derive the truncated versions of the Gauss-Newton and Levenberg-Marquardt methods. Although the paper mostly focuses on the mathematical details of the inverse method, retrieval results are provided which exemplify the performance of the methods. These results correspond to the NO2 retrieval from SCIAMACHY limb scatter measurements and have been obtained using the retrieval processors developed at the German Aerospace Center Oberpfaffenhofen and the Institute of Environmental Physics of the University of Bremen.

  10. Segmentation of Brain Tissues from Magnetic Resonance Images Using Adaptively Regularized Kernel-Based Fuzzy C-Means Clustering.

    Science.gov (United States)

    Elazab, Ahmed; Wang, Changmiao; Jia, Fucang; Wu, Jianhuang; Li, Guanglin; Hu, Qingmao

    2015-01-01

    An adaptively regularized kernel-based fuzzy C-means clustering framework is proposed for the segmentation of brain magnetic resonance images. The framework yields three algorithms, in which the local average grayscale is replaced by the grayscale of the average-filtered, median-filtered, and devised weighted images, respectively. The algorithms employ the heterogeneity of grayscales in the neighborhood, exploit this measure for local contextual information, and replace the standard Euclidean distance with Gaussian radial basis kernel functions. The main advantages are adaptiveness to the local context, enhanced robustness in preserving image details, independence of clustering parameters, and decreased computational cost. The algorithms have been validated against both synthetic and clinical magnetic resonance images with different types and levels of noise, and compared with six recent soft clustering algorithms. Experimental results show that the proposed algorithms are superior in preserving image details and segmentation accuracy while maintaining a low computational complexity.

  11. A Highly Accurate Regular Domain Collocation Method for Solving Potential Problems in the Irregular Doubly Connected Domains

    Directory of Open Access Journals (Sweden)

    Zhao-Qing Wang

    2014-01-01

    Embedding the irregular doubly connected domain into an annular regular region, the unknown functions can be approximated by barycentric Lagrange interpolation in the regular region. A highly accurate regular domain collocation method is proposed for solving potential problems on irregular doubly connected domains in a polar coordinate system. The formulations of the regular domain collocation method are constructed by using the barycentric Lagrange interpolation collocation method on the regular domain in the polar coordinate system. The boundary conditions are discretized by barycentric Lagrange interpolation within the regular domain, and an additional method is used to impose them. The least-squares method can be used to solve the resulting overconstrained equations. The function values at points in the irregular doubly connected domain can then be calculated by barycentric Lagrange interpolation within the regular domain. Some numerical examples demonstrate the effectiveness and accuracy of the presented method.
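
    For concreteness, one-dimensional barycentric Lagrange interpolation, the generic building block of such collocation schemes (this sketch is not the paper's full 2-D formulation):

      import numpy as np

      def bary_weights(xn):
          """Barycentric weights w_j = 1 / prod_{k != j} (x_j - x_k)."""
          diff = xn[:, None] - xn[None, :]
          np.fill_diagonal(diff, 1.0)
          return 1.0 / diff.prod(axis=1)

      def bary_interp(xn, fn, x):
          """Evaluate the interpolant of the data (xn, fn) at the points x
          using the barycentric formula; exact at the nodes themselves."""
          w = bary_weights(xn)
          d = x[:, None] - xn[None, :]
          hit = d == 0.0                      # evaluation points that are nodes
          d[hit] = 1.0
          out = ((w / d) @ fn) / (w / d).sum(axis=1)
          out[hit.any(axis=1)] = fn[hit.nonzero()[1]]
          return out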

  12. Simulations of a single vortex ring using an unbounded, regularized particle-mesh based vortex method

    DEFF Research Database (Denmark)

    Hejlesen, Mads Mølholm; Spietz, Henrik J.; Walther, Jens Honore

    2014-01-01

    In recent work we have developed a new FFT-based Poisson solver, which uses regularized Green's functions to obtain arbitrarily high-order convergence to the unbounded Poisson equation. The high-order Poisson solver has been implemented in an unbounded particle-mesh based vortex method which uses a re…

  13. Convergence rates for the iteratively regularized Gauss–Newton method in Banach spaces

    International Nuclear Information System (INIS)

    Kaltenbacher, Barbara; Hofmann, Bernd

    2010-01-01

    In this paper we consider the iteratively regularized Gauss–Newton method (IRGNM) in a Banach space setting and prove optimal convergence rates under approximate source conditions. These are related to the classical concept of source conditions that is available only in Hilbert space. We provide results in the framework of general index functions, which include, e.g. Hölder and logarithmic rates. Concerning the regularization parameters in each Newton step as well as the stopping index, we provide both a priori and a posteriori strategies, the latter being based on the discrepancy principle

  14. A regularization method for solving the Poisson equation for mixed unbounded-periodic domains

    DEFF Research Database (Denmark)

    Spietz, Henrik Juul; Mølholm Hejlesen, Mads; Walther, Jens Honoré

    2018-01-01

    Regularized Green's functions for mixed unbounded-periodic domains are derived. The regularization of the Green's function removes its singularity by introducing a regularization radius which is related to the discretization length and hence imposes a minimum resolved scale. In this way the regularized unbounded-periodic Green's functions can be implemented in an FFT-based Poisson solver to obtain a convergence rate corresponding to the regularization order of the Green's function. The high order is achieved without any additional computational cost from the conventional FFT-based Poisson solver and enables the calculation of the derivative of the solution to the same high order by direct spectral differentiation. We illustrate an application of the FFT-based Poisson solver by using it with a vortex particle mesh method for the approximation of incompressible flow for a problem with a single periodic and two unbounded directions.

  15. Global convergence of damped semismooth Newton methods for ℓ1 Tikhonov regularization

    International Nuclear Information System (INIS)

    Hans, Esther; Raasch, Thorsten

    2015-01-01

    We are concerned with Tikhonov regularization of linear ill-posed problems with ℓ1 coefficient penalties. Griesse and Lorenz (2008 Inverse Problems 24 035007) proposed a semismooth Newton method for the efficient minimization of the corresponding Tikhonov functionals. In the class of high-precision solvers for such problems, semismooth Newton methods are particularly competitive due to their superlinear convergence properties and their ability to solve piecewise affine equations exactly within finitely many iterations. However, the convergence of semismooth Newton schemes is only local in general. In this work, we discuss the efficient globalization of B(ouligand)-semismooth Newton methods for ℓ1 Tikhonov regularization by means of damping strategies and suitable descent with respect to an associated merit functional. Numerical examples are provided which show that our method compares well with existing iterative, globally convergent approaches. (paper)
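
    A damped semismooth Newton solver is too involved to sketch in a few lines, but the ℓ1 Tikhonov functional it minimizes is easy to state. The snippet below instead uses plain ISTA (proximal gradient), representative of the simpler globally convergent iterative approaches such methods are compared against; the problem data and parameter choices are invented:

      import numpy as np

      def soft_threshold(v, t):
          return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

      def ista(A, y, lam, n_iter=1000):
          """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by proximal gradient steps."""
          L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the smooth part
          x = np.zeros(A.shape[1])
          for _ in range(n_iter):
              x = soft_threshold(x - A.T @ (A @ x - y) / L, lam / L)
          return x

      rng = np.random.default_rng(0)
      A = rng.standard_normal((40, 100))
      x_true = np.zeros(100)
      x_true[[5, 37, 80]] = [2.0, -1.5, 1.0]
      y = A @ x_true + 0.01 * rng.standard_normal(40)
      lam = 0.05 * np.linalg.norm(A.T @ y, np.inf)
      print(np.flatnonzero(ista(A, y, lam)))   # nonzeros should mark the planted support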

  16. Ventricular action potential adaptation to regular exercise: role of β-adrenergic and KATP channel function.

    Science.gov (United States)

    Wang, Xinrui; Fitts, Robert H

    2017-08-01

    Regular exercise training is known to affect the action potential duration (APD) and improve heart function, but involvement of β-adrenergic receptor (β-AR) subtypes and/or the ATP-sensitive K+ (KATP) channel is unknown. To address this, female and male Sprague-Dawley rats were randomly assigned to voluntary wheel-running or control groups; they were anesthetized after 6-8 wk of training, and myocytes were isolated. Exercise training significantly increased APD of apex and base myocytes at 1 Hz and decreased APD at 10 Hz. Ca2+ transient durations reflected the changes in APD, while Ca2+ transient amplitudes were unaffected by wheel running. The nonselective β-AR agonist isoproterenol shortened the myocyte APD, an effect reduced by wheel running. The isoproterenol-induced shortening of APD was largely reversed by the selective β1-AR blocker atenolol, but not the β2-AR blocker ICI 118,551, providing evidence that wheel running reduced the sensitivity of the β1-AR. At 10 Hz, the KATP channel inhibitor glibenclamide prolonged the myocyte APD more in exercise-trained than control rats, implicating a role for this channel in the exercise-induced APD shortening at 10 Hz. A novel finding of this work was the dual importance of altered β1-AR responsiveness and KATP channel function in the training-induced regulation of APD. Of physiological importance to the beating heart, the reduced response to adrenergic agonists would enhance cardiac contractility at resting rates, where sympathetic drive is low, by prolonging APD and Ca2+ influx; during exercise, an increase in KATP channel activity would shorten APD and, thus, protect the heart against Ca2+ overload or inadequate filling. NEW & NOTEWORTHY Our data demonstrated that regular exercise prolonged the action potential and Ca2+ transient durations in myocytes isolated from apex and base regions at 1-Hz and shortened both at 10-Hz stimulation. Novel findings were that wheel running shifted the

  17. New method for minimizing regular functions with constraints on parameter region

    International Nuclear Information System (INIS)

    Kurbatov, V.S.; Silin, I.N.

    1993-01-01

    A new method of function minimization has been developed and its main features are considered. It allows the minimization of regular functions of arbitrary structure. For χ2-like functions, simplified second derivatives can be used, with control of the correctness of this simplification. Constraints of arbitrary structure can be imposed, and special means provide fast movement along multidimensional valleys. The method is tested on real data of the Kπ2 decay from an experiment on rare K−-decays. 6 refs

  18. The Regularized Iteratively Reweighted MAD Method for Change Detection in Multi- and Hyperspectral Data

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2007-01-01

    This paper describes new extensions to the previously published multivariate alteration detection (MAD) method for change detection in bi-temporal, multi- and hypervariate data such as remote sensing imagery. Much like boosting methods often applied in data mining work, the iteratively reweighted...... an agricultural region in Kenya, and hyperspectral airborne HyMap data from a small rural area in southeastern Germany are given. The latter case demonstrates the need for regularization....

  19. On Landweber–Kaczmarz methods for regularizing systems of ill-posed equations in Banach spaces

    International Nuclear Information System (INIS)

    Leitão, A; Alves, M Marques

    2012-01-01

    In this paper, iterative regularization methods of Landweber–Kaczmarz type are considered for solving systems of ill-posed equations modeled by finitely many operators acting between Banach spaces. Using assumptions of uniform convexity and smoothness on the parameter space, we are able to prove a monotonicity result for the proposed method, as well as to establish convergence (for exact data) and stability results (in the noisy data case). (paper)
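
    In the Hilbert-space special case the Landweber–Kaczmarz iteration has a one-line update per equation. The sketch below shows that simplification only (the paper's Banach-space duality mappings are omitted; the matrices, damping and sweep count are assumptions), cycling through the subproblems A_i x = y_i. For noisy data one would stop early, which is where the regularizing effect comes from:

      import numpy as np

      def landweber_kaczmarz(As, ys, x0, omega=0.9, sweeps=500):
          """Cyclically apply one damped Landweber step per equation A_i x = y_i."""
          x = x0.copy()
          steps = [omega / np.linalg.norm(A, 2) ** 2 for A in As]
          for _ in range(sweeps):
              for A, y, s in zip(As, ys, steps):
                  x = x - s * (A.T @ (A @ x - y))
          return x

      rng = np.random.default_rng(1)
      x_true = rng.standard_normal(30)
      As = [rng.standard_normal((10, 30)) for _ in range(3)]  # three operators
      ys = [A @ x_true for A in As]                           # exact data
      x = landweber_kaczmarz(As, ys, np.zeros(30))
      print(np.linalg.norm(x - x_true))    # error shrinks with more sweeps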

  20. Does Vitamin C and E Supplementation Impair the Favorable Adaptations of Regular Exercise?

    Directory of Open Access Journals (Sweden)

    Michalis G. Nikolaidis

    2012-01-01

    Full Text Available The detrimental outcomes associated with unregulated and excessive production of free radicals remain a physiological concern that has implications for health, medicine and performance. Available evidence suggests that physiological adaptations to exercise training can enhance the body’s ability to quench free radicals, and circumstantial evidence exists to suggest that key vitamins and nutrients may provide additional support to mitigate the untoward effects associated with increased free radical production. However, controversy has arisen regarding the potential outcomes associated with vitamins C and E, two popular antioxidant nutrients. Recent evidence has been put forth suggesting that exogenous administration of these antioxidants may be harmful to performance, making interpretations regarding the efficacy of antioxidants challenging. The available studies that employed both animal and human models provided conflicting outcomes regarding the efficacy of vitamin C and E supplementation, at least partly due to methodological differences in assessing oxidative stress and training adaptations. Based on the contradictory evidence regarding the effects of higher intakes of vitamin C and/or E on exercise performance and redox homeostasis, a permanent intake of non-physiological dosages of vitamin C and/or E cannot be recommended to healthy, exercising individuals.

  1. Comparing parameter choice methods for the regularization in the SONAH algorithm

    DEFF Research Database (Denmark)

    Gomes, Jesper Skovhus

    2006-01-01

    . The coefficients that perform this plane-to-plane transformation are found by solving a least squares problem, i.e. the SONAH algorithm minimizes a residual involving an infinite set of elementary waves. Since SONAH solves an inverse problem and since measurement errors are unavoidable in practice, regularization...... is needed. A parameter choice method based on a priori information about the signal-to-noise-ratio (SNR) in the measurement setup is often chosen. However, this parameter choice method may be undesirable since SNR is difficult to determine in practice. In this paper, data based parameter choice methods...... are used in order to determine a regularization parameter. Two such approaches are compared: Generalized Cross-Validation (GCV) and a trade-off curve analysis inspired by the L-curve. Results from computer simulations and from practical measurements with a two-layer microphone array are given...
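
    As a concrete stand-in for the SONAH least-squares system, the following sketch applies GCV to a generic discrete Tikhonov problem via the SVD (the matrix, noise level and parameter grid are invented; the trade-off-curve alternative is not shown):

      import numpy as np

      def gcv_tikhonov(A, b, lams):
          """Pick the Tikhonov parameter minimizing the GCV function
          G(lam) = ||A x_lam - b||^2 / trace(I - A (A'A + lam I)^{-1} A')^2."""
          U, s, Vt = np.linalg.svd(A, full_matrices=False)
          beta = U.T @ b
          out = np.linalg.norm(b) ** 2 - np.linalg.norm(beta) ** 2  # b outside range(A)
          scores = []
          for lam in lams:
              f = s**2 / (s**2 + lam)                 # Tikhonov filter factors
              resid = np.sum(((1 - f) * beta) ** 2) + out
              scores.append(resid / (A.shape[0] - f.sum()) ** 2)
          return lams[int(np.argmin(scores))]

      rng = np.random.default_rng(2)
      A = rng.standard_normal((50, 20)) @ np.diag(0.9 ** np.arange(20))  # decaying spectrum
      b = A @ rng.standard_normal(20) + 0.01 * rng.standard_normal(50)
      print(gcv_tikhonov(A, b, np.logspace(-8, 2, 60)))
      # data-driven choice: no knowledge of the 1% noise level (the SNR) is used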

  2. Output regularization of SVM seizure predictors: Kalman Filter versus the "Firing Power" method.

    Science.gov (United States)

    Teixeira, Cesar; Direito, Bruno; Bandarabadi, Mojtaba; Dourado, António

    2012-01-01

    Two methods for output regularization of support vector machine (SVM) classifiers were applied for seizure prediction in 10 patients with long-term annotated data. The outputs of the classifiers were regularized by two methods: one based on the Kalman filter (KF) and the other based on a measure called the "Firing Power" (FP). The FP is a quantification of the rate of classification into the preictal class over a past time window. In order to enable the application of the KF, the classification problem was subdivided into two two-class problems, and the real-valued output of the SVMs was considered. The results indicate that the FP method raises fewer false alarms than the KF approach. The KF approach presents a higher sensitivity, but its high number of false alarms renders its applicability negligible in some situations.
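
    The Firing Power measure itself is simple to reproduce: a moving average of the binary classifier output followed by a threshold detector. A minimal sketch follows (the window length, threshold and toy label sequence are assumptions; the paper's SVM front end is not included):

      import numpy as np

      def firing_power(labels, window):
          """Moving fraction of 'preictal' decisions (label 1) over the last
          `window` classifier outputs; early samples are zero-padded."""
          kernel = np.ones(window) / window
          return np.convolve(labels, kernel, mode="full")[: len(labels)]

      def alarms(fp, threshold):
          """Indices where the regularized output crosses the threshold upward."""
          above = fp >= threshold
          return np.flatnonzero(above[1:] & ~above[:-1]) + 1

      raw = np.array([0, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 0])  # toy SVM outputs
      fp = firing_power(raw, window=5)
      print(np.round(fp, 2))
      print(alarms(fp, threshold=0.6))   # a single alarm despite the flickering input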

  3. Robust dynamic myocardial perfusion CT deconvolution for accurate residue function estimation via adaptive-weighted tensor total variation regularization: a preclinical study

    Science.gov (United States)

    Zeng, Dong; Gong, Changfei; Bian, Zhaoying; Huang, Jing; Zhang, Xinyu; Zhang, Hua; Lu, Lijun; Niu, Shanzhou; Zhang, Zhang; Liang, Zhengrong; Feng, Qianjin; Chen, Wufan; Ma, Jianhua

    2016-11-01

    Dynamic myocardial perfusion computed tomography (MPCT) is a promising technique for quick diagnosis and risk stratification of coronary artery disease. However, one major drawback of dynamic MPCT imaging is the heavy radiation dose to patients due to its dynamic image acquisition protocol. In this work, to address this issue, we present a robust dynamic MPCT deconvolution algorithm via adaptive-weighted tensor total variation (AwTTV) regularization for accurate residue function estimation with low-mAs data acquisitions. For simplicity, the presented method is termed ‘MPD-AwTTV’. More specifically, the gains of the AwTTV regularization over the original tensor total variation regularization come from the anisotropic edge property of the sequential MPCT images. To minimize the associated objective function we propose an efficient iterative optimization strategy with fast convergence rate in the framework of an iterative shrinkage/thresholding algorithm. We validate and evaluate the presented algorithm using both digital XCAT phantom and preclinical porcine data. The preliminary experimental results have demonstrated that the presented MPD-AwTTV deconvolution algorithm can achieve remarkable gains in noise-induced artifact suppression, edge detail preservation, and accurate flow-scaled residue function and MPHM estimation as compared with the other existing deconvolution algorithms in digital phantom studies, and similar gains can be obtained in the porcine data experiment.

  4. Adapting a regularized canopy reflectance model (REGFLEC) for the retrieval challenges of dryland agricultural systems

    KAUST Repository

    Houborg, Rasmus

    2016-08-20

    A regularized canopy reflectance model (REGFLEC) is applied over a dryland irrigated agricultural system in Saudi Arabia for the purpose of retrieving leaf area index (LAI) and leaf chlorophyll content (Chll). To improve the robustness of the retrieved properties, REGFLEC was modified to 1) correct for aerosol and adjacency effects, 2) consider foliar dust effects on modeled canopy reflectances, 3) include spectral information in the red-edge wavelength region, and 4) exploit empirical LAI estimates in the model inversion. Using multi-spectral RapidEye imagery allowed Chll to be retrieved with a Mean Absolute Deviation (MAD) of 7.9 μg cm−2 (16%), based upon in-situ measurements conducted in fields of alfalfa, Rhodes grass and maize over the course of a growing season. LAI and Chll compensation effects on canopy reflectance were largely avoided by informing the inversion process with ancillary LAI inputs established empirically on the basis of a statistical machine learning technique. As a result, LAI was reproduced with good accuracy, with an overall MAD of 0.42 m2 m−2 (12.5%). Results highlighted the considerable challenges associated with the translation of at-sensor radiance observations to surface bidirectional reflectances in dryland environments, where issues such as high aerosol loadings and large spatial gradients in surface reflectance from bright desert soils to dark vegetated fields are often present. Indeed, surface reflectances in the visible bands were reduced by up to 60% after correction for such adjacency effects. In addition, dust deposition on leaves required explicit modification of the reflectance sub-model to account for its influence. By implementing these model refinements, REGFLEC demonstrated its utility for within-field characterization of vegetation conditions over the challenging landscapes typical of dryland agricultural regions, offering a means through which improvements can be made in the management of these globally

  5. L1/2 regularization based numerical method for effective reconstruction of bioluminescence tomography

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Xueli, E-mail: xlchen@xidian.edu.cn, E-mail: jimleung@mail.xidian.edu.cn; Yang, Defu; Zhang, Qitan; Liang, Jimin, E-mail: xlchen@xidian.edu.cn, E-mail: jimleung@mail.xidian.edu.cn [School of Life Science and Technology, Xidian University, Xi'an 710071 (China); Engineering Research Center of Molecular and Neuro Imaging, Ministry of Education (China)

    2014-05-14

    Even though bioluminescence tomography (BLT) exhibits significant potential and wide applications in macroscopic imaging of small animals in vivo, the inverse reconstruction is still a tough problem that has plagued researchers in the area. The ill-posedness of the inverse reconstruction arises from insufficient measurements and modeling errors, so that the inverse reconstruction cannot be solved directly. In this study, an l1/2 regularization based numerical method was developed for effective reconstruction of BLT. In the method, the inverse reconstruction of BLT was constrained into an l1/2 regularization problem, and then the weighted interior-point algorithm (WIPA) was applied to solve the problem through transforming it into obtaining the solution of a series of l1 regularizers. The feasibility and effectiveness of the proposed method were demonstrated with numerical simulations on a digital mouse. Stability verification experiments further illustrated the robustness of the proposed method for different levels of Gaussian noise.

  6. Cardiac C-arm computed tomography using a 3D + time ROI reconstruction method with spatial and temporal regularization

    Energy Technology Data Exchange (ETDEWEB)

    Mory, Cyril, E-mail: cyril.mory@philips.com [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, F-69621 Villeurbanne Cedex (France); Philips Research Medisys, 33 rue de Verdun, 92156 Suresnes (France); Auvray, Vincent; Zhang, Bo [Philips Research Medisys, 33 rue de Verdun, 92156 Suresnes (France); Grass, Michael; Schäfer, Dirk [Philips Research, Röntgenstrasse 24–26, D-22335 Hamburg (Germany); Chen, S. James; Carroll, John D. [Department of Medicine, Division of Cardiology, University of Colorado Denver, 12605 East 16th Avenue, Aurora, Colorado 80045 (United States); Rit, Simon [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1 (France); Centre Léon Bérard, 28 rue Laënnec, F-69373 Lyon (France); Peyrin, Françoise [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, F-69621 Villeurbanne Cedex (France); X-ray Imaging Group, European Synchrotron, Radiation Facility, BP 220, F-38043 Grenoble Cedex (France); Douek, Philippe; Boussel, Loïc [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1 (France); Hospices Civils de Lyon, 28 Avenue du Doyen Jean Lépine, 69500 Bron (France)

    2014-02-15

    Purpose: Reconstruction of the beating heart in 3D + time in the catheter laboratory using only the available C-arm system would improve diagnosis, guidance, device sizing, and outcome control for intracardiac interventions, e.g., electrophysiology, valvular disease treatment, structural or congenital heart disease. To obtain such a reconstruction, the patient's electrocardiogram (ECG) must be recorded during the acquisition and used in the reconstruction. In this paper, the authors present a 4D reconstruction method aiming to reconstruct the heart from a single sweep 10 s acquisition. Methods: The authors introduce the 4D RecOnstructiOn using Spatial and TEmporal Regularization (short 4D ROOSTER) method, which reconstructs all cardiac phases at once, as a 3D + time volume. The algorithm alternates between a reconstruction step based on conjugate gradient and four regularization steps: enforcing positivity, averaging along time outside a motion mask that contains the heart and vessels, 3D spatial total variation minimization, and 1D temporal total variation minimization. Results: 4D ROOSTER recovers the different temporal representations of a moving Shepp and Logan phantom, and outperforms both ECG-gated simultaneous algebraic reconstruction technique and prior image constrained compressed sensing on a clinical case. It generates 3D + time reconstructions with sharp edges which can be used, for example, to estimate the patient's left ventricular ejection fraction. Conclusions: 4D ROOSTER can be applied for human cardiac C-arm CT, and potentially in other dynamic tomography areas. It can easily be adapted to other problems as regularization is decoupled from projection and back projection.

  7. The existence results and Tikhonov regularization method for generalized mixed variational inequalities in Banach spaces

    Science.gov (United States)

    Wang, Min

    2017-06-01

    This paper aims to establish the Tikhonov regularization method for generalized mixed variational inequalities in Banach spaces. For this purpose, we first prove a very general existence result for generalized mixed variational inequalities, provided that the mapping involved has the so-called mixed variational inequality property and satisfies a rather weak coercivity condition. Finally, we establish the Tikhonov regularization method for generalized mixed variational inequalities. Our findings extend the results for the generalized variational inequality problem (for short, GVIP(F, K)) in R^n spaces (He in Abstr Appl Anal, 2012) to the generalized mixed variational inequality problem (for short, GMVIP(F, φ, K)) in reflexive Banach spaces. On the other hand, we generalize the corresponding results for the generalized mixed variational inequality problem GMVIP(F, φ, K) in R^n spaces (Fu and He in J Sichuan Norm Univ (Nat Sci) 37:12-17, 2014) to reflexive Banach spaces.

  8. Bayesian adaptive methods for clinical trials

    CERN Document Server

    Berry, Scott M; Muller, Peter

    2010-01-01

    Already popular in the analysis of medical device trials, adaptive Bayesian designs are increasingly being used in drug development for a wide variety of diseases and conditions, from Alzheimer's disease and multiple sclerosis to obesity, diabetes, hepatitis C, and HIV. Written by leading pioneers of Bayesian clinical trial designs, Bayesian Adaptive Methods for Clinical Trials explores the growing role of Bayesian thinking in the rapidly changing world of clinical trial analysis. The book first summarizes the current state of clinical trial design and analysis and introduces the main ideas and potential benefits of a Bayesian alternative. It then gives an overview of basic Bayesian methodological and computational tools needed for Bayesian clinical trials. With a focus on Bayesian designs that achieve good power and Type I error, the next chapters present Bayesian tools useful in early (Phase I) and middle (Phase II) clinical trials as well as two recent Bayesian adaptive Phase II studies: the BATTLE and ISP...

  9. A regularized vortex-particle mesh method for large eddy simulation

    DEFF Research Database (Denmark)

    Spietz, Henrik Juul; Walther, Jens Honore; Hejlesen, Mads Mølholm

    We present recent developments of the remeshed vortex particle-mesh method for simulating incompressible fluid flow. The presented method relies on a parallel higher-order FFT based solver for the Poisson equation. Arbitrary high order is achieved through regularization of singular Green’s function...... solutions to the Poisson equation and recently we have derived novel high order solutions for a mixture of open and periodic domains. With this approach the simulated variables may formally be viewed as the approximate solution to the filtered Navier Stokes equations, hence we use the method for Large Eddy...

  10. Fibonacci-regularization method for solving Cauchy integral equations of the first kind

    Directory of Open Access Journals (Sweden)

    Mohammad Ali Fariborzi Araghi

    2017-09-01

    Full Text Available In this paper, a novel scheme is proposed to solve a Cauchy integral equation of the first kind over a finite interval. For this purpose, the regularization method is considered. Then, the collocation method with Fibonacci base functions is applied to solve the resulting second kind singular integral equation. Also, the error estimate of the proposed scheme is discussed. Finally, some sample Cauchy integral equations stemming from the theory of airfoils in fluid mechanics are presented and solved to illustrate the importance and applicability of the given algorithm. The tables in the examples show the efficiency of the method.

  11. Application of L1/2 regularization logistic method in heart disease diagnosis.

    Science.gov (United States)

    Zhang, Bowen; Chai, Hua; Yang, Ziyi; Liang, Yong; Chu, Gejin; Liu, Xiaoying

    2014-01-01

    Heart disease has become the leading killer threatening human health, and its diagnosis depends on many features, such as age, blood pressure, heart rate and dozens of other physiological indicators. Although there are so many risk factors, doctors usually diagnose the disease depending on their intuition and experience, which requires a lot of knowledge and experience for correct determination. Finding the hidden medical information in existing clinical data is a noticeable and powerful approach in the study of heart disease diagnosis. In this paper, a sparse logistic regression method is introduced to detect the key risk factors using L(1/2) regularization on real heart disease data. Experimental results show that the sparse logistic L(1/2) regularization method achieves fewer but more informative key features than the Lasso, SCAD, MCP and Elastic net regularization approaches. Simultaneously, the proposed method can reduce computational complexity and save the cost and time of medical tests and checkups by reducing the number of attributes that need to be taken from patients.

  12. A Distributed Learning Method for ℓ1-Regularized Kernel Machine over Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Xinrong Ji

    2016-07-01

    Full Text Available In wireless sensor networks, centralized learning methods have very high communication costs and energy consumption. These are caused by the need to transmit scattered training examples from various sensor nodes to the central fusion center where a classifier or a regression machine is trained. To reduce the communication cost, a distributed learning method for a kernel machine that incorporates ℓ1-norm regularization (ℓ1-regularized) is investigated, and a novel distributed learning algorithm for the ℓ1-regularized kernel minimum mean squared error (KMSE) machine is proposed. The proposed algorithm relies on in-network processing and a collaboration that transmits the sparse model only between single-hop neighboring nodes. This paper evaluates the proposed algorithm with respect to the prediction accuracy, the sparsity rate of the model, the communication cost and the number of iterations on synthetic and real datasets. The simulation results show that the proposed algorithm can obtain approximately the same prediction accuracy as that obtained by the batch learning method. Moreover, it is significantly superior in terms of the sparsity rate of the model and communication cost, and it can converge with fewer iterations. Finally, an experiment conducted on a wireless sensor network (WSN) test platform further shows the advantages of the proposed algorithm with respect to communication cost.

  13. Adaptive Integral Method for Higher Order Method of Moments

    DEFF Research Database (Denmark)

    Kim, Oleksiy S.; Meincke, Peter

    2008-01-01

    The adaptive integral method (AIM) is combined with the higher order method of moments (MoM) to solve integral equations. The technique takes advantage of the low computational complexity and memory requirements of the AIM and the reduced number of unknowns and higher order convergence of higher...

  14. An interior-point method for total variation regularized positron emission tomography image reconstruction

    Science.gov (United States)

    Bai, Bing

    2012-03-01

    There has been a lot of work on total variation (TV) regularized tomographic image reconstruction recently. Many of these methods use gradient-based optimization algorithms with a differentiable approximation of the TV functional. In this paper we apply TV regularization in positron emission tomography (PET) image reconstruction. We reconstruct the PET image in a Bayesian framework, using a Poisson noise model and a TV prior functional. The original optimization problem is transformed into an equivalent problem with inequality constraints by adding auxiliary variables. Then we use an interior point method with logarithmic barrier functions to solve the constrained optimization problem. In this method, a series of points approaching the solution from inside the feasible region are found by solving a sequence of subproblems characterized by an increasing positive parameter. We use the preconditioned conjugate gradient (PCG) algorithm to solve the subproblems directly. The nonnegativity constraint is enforced by bend line search. The exact expression of the TV functional is used in our calculations. Simulation results show that the algorithm converges fast and that the convergence is insensitive to the values of the regularization and reconstruction parameters.

  15. Adaptive finite element method for shape optimization

    KAUST Repository

    Morin, Pedro

    2012-01-16

    We examine shape optimization problems in the context of inexact sequential quadratic programming. Inexactness is a consequence of using adaptive finite element methods (AFEM) to approximate the state and adjoint equations (via the dual weighted residual method), update the boundary, and compute the geometric functional. We present a novel algorithm that equidistributes the errors due to shape optimization and discretization, thereby leading to coarse resolution in the early stages and fine resolution upon convergence, and thus optimizing the computational effort. We discuss the ability of the algorithm to detect whether or not geometric singularities such as corners are genuine to the problem or simply due to lack of resolution - a new paradigm in adaptivity. © EDP Sciences, SMAI, 2012.

  16. Adaptive floating search methods in feature selection

    Czech Academy of Sciences Publication Activity Database

    Somol, Petr; Pudil, Pavel; Novovičová, Jana; Paclík, Pavel

    1999-01-01

    Vol. 20, No. 11/13 (1999), pp. 1157-1163 ISSN 0167-8655 R&D Projects: GA ČR GA402/97/1242 Grant - others: MŠMT(CZ) VS96063 Institutional research plan: AV0Z1075907 Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.315, year: 1999 http://library.utia.cas.cz/separaty/historie/somol-adaptive floating search methods in feature selection.pdf
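
    The record above carries only bibliographic metadata, but the underlying floating search idea is compact: after each greedy inclusion, conditionally exclude features while that improves the criterion. The sketch below is a simplified rendering (the subset criterion is a toy stand-in for, e.g., cross-validated accuracy; accepting an exclusion only when it beats the best score recorded for that subset size is one common way to guarantee termination):

      import numpy as np

      def sffs(score, n_features, k):
          """Sequential floating forward selection (simplified sketch)."""
          selected, best = [], {}
          while len(selected) < k:
              remaining = [f for f in range(n_features) if f not in selected]
              add = max(remaining, key=lambda f: score(selected + [f]))
              selected = selected + [add]
              best[len(selected)] = max(best.get(len(selected), -np.inf), score(selected))
              while len(selected) > 2:                   # "floating" backtrack
                  drop = max(selected, key=lambda f: score([g for g in selected if g != f]))
                  cand = [g for g in selected if g != drop]
                  if score(cand) > best.get(len(cand), -np.inf):
                      selected, best[len(cand)] = cand, score(cand)
                  else:
                      break
          return selected

      # Toy criterion: additive feature quality with a subset-size penalty.
      rng = np.random.default_rng(3)
      quality = rng.random(10)
      score = lambda S: quality[S].sum() - 0.05 * len(S) ** 2
      print(sffs(score, n_features=10, k=4))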

  17. Integrating adaptive governance and participatory multicriteria methods: a framework for climate adaptation governance

    Directory of Open Access Journals (Sweden)

    Stefania Munaretto

    2014-06-01

    Full Text Available Climate adaptation is a dynamic social and institutional process where the governance dimension is receiving growing attention. Adaptive governance is an approach that promises to reduce uncertainty by improving the knowledge base for decision making. As uncertainty is an inherent feature of climate adaptation, adaptive governance seems to be a promising approach for improving climate adaptation governance. However, the adaptive governance literature has so far paid little attention to decision-making tools and methods, and the literature on the governance of adaptation is in its infancy in this regard. We argue that climate adaptation governance would benefit from systematic and yet flexible decision-making tools and methods such as participatory multicriteria methods for the evaluation of adaptation options, and that these methods can be linked to key adaptive governance principles. Moving from these premises, we propose a framework that integrates key adaptive governance features into participatory multicriteria methods for the governance of climate adaptation.

  18. Advanced numerical methods in mesh generation and mesh adaptation

    Energy Technology Data Exchange (ETDEWEB)

    Lipnikov, Konstantine [Los Alamos National Laboratory; Danilov, A [MOSCOW, RUSSIA; Vassilevski, Y [MOSCOW, RUSSIA; Agonzal, A [UNIV OF LYON

    2010-01-01

    Numerical solution of partial differential equations requires appropriate meshes, efficient solvers and robust and reliable error estimates. Generation of high-quality meshes for complex engineering models is a non-trivial task. This task is made more difficult when the mesh has to be adapted to a problem solution. This article is focused on a synergistic approach to mesh generation and mesh adaptation, where the best properties of various mesh generation methods are combined to efficiently build simplicial meshes. First, the advancing front technique (AFT) is combined with incremental Delaunay triangulation (DT) to build an initial mesh. Second, the metric-based mesh adaptation (MBA) method is employed to improve the quality of the generated mesh and/or to adapt it to a problem solution. We demonstrate with numerical experiments that the combination of all three methods is required for robust meshing of complex engineering models. The key to successful mesh generation is the high quality of the triangles in the initial front. We use a black-box technique to improve surface meshes exported from an inaccessible CAD system. The initial surface mesh is refined into a shape-regular triangulation which approximates the boundary with the same accuracy as the CAD mesh. The DT method adds robustness to the AFT. The resulting mesh is topologically correct but may contain a few slivers. The MBA uses seven local operations to modify the mesh topology. It significantly improves the mesh quality. The MBA method is also used to adapt the mesh to a problem solution to minimize the computational resources required for solving the problem. The MBA has a solid theoretical background. In the first two experiments, we consider the convection-diffusion and elasticity problems. We demonstrate the optimal reduction rate of the discretization error on a sequence of adaptive strongly anisotropic meshes. The key element of the MBA method is construction of a tensor metric from hierarchical edge

  19. Manifold Regularized Reinforcement Learning.

    Science.gov (United States)

    Li, Hongliang; Liu, Derong; Wang, Ding

    2018-04-01

    This paper introduces a novel manifold regularized reinforcement learning scheme for continuous Markov decision processes. Smooth feature representations for value function approximation can be automatically learned using the unsupervised manifold regularization method. The learned features are data-driven, and can be adapted to the geometry of the state space. Furthermore, the scheme provides a direct basis representation extension for novel samples during policy learning and control. The performance of the proposed scheme is evaluated on two benchmark control tasks, i.e., the inverted pendulum and the energy storage problem. Simulation results illustrate the concepts of the proposed scheme and show that it can obtain excellent performance.

  20. A regularized vortex-particle mesh method for large eddy simulation

    Science.gov (United States)

    Spietz, H. J.; Walther, J. H.; Hejlesen, M. M.

    2017-11-01

    We present recent developments of the remeshed vortex particle-mesh method for simulating incompressible fluid flow. The presented method relies on a parallel higher-order FFT based solver for the Poisson equation. Arbitrary high order is achieved through regularization of singular Green's function solutions to the Poisson equation and recently we have derived novel high order solutions for a mixture of open and periodic domains. With this approach the simulated variables may formally be viewed as the approximate solution to the filtered Navier Stokes equations, hence we use the method for Large Eddy Simulation by including a dynamic subfilter-scale model based on test-filters compatible with the aforementioned regularization functions. Further the subfilter-scale model uses Lagrangian averaging, which is a natural candidate in light of the Lagrangian nature of vortex particle methods. A multiresolution variation of the method is applied to simulate the benchmark problem of the flow past a square cylinder at Re = 22000 and the obtained results are compared to results from the literature.
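
    The core ingredient, convolution with a regularized Green's function on an unbounded domain, can be sketched with the simplest (Gaussian, second-order) regularization; the arbitrarily high-order kernels and mixed open-periodic solutions reported here are not reproduced, and the grid, smoothing radius and test charge below are assumptions:

      import numpy as np
      from scipy.special import erf

      def poisson_free_space(rho, h, sigma):
          """Solve lap(phi) = -rho with free-space boundary conditions by FFT
          convolution with G(r) = erf(r / (sqrt(2)*sigma)) / (4*pi*r),
          on a zero-padded (domain-doubled) grid to avoid periodic images."""
          n = rho.shape[0]                        # cubic n^3 grid assumed
          m = 2 * n
          d = np.minimum(np.arange(m), m - np.arange(m)) * h
          X, Y, Z = np.meshgrid(d, d, d, indexing="ij")
          r = np.sqrt(X**2 + Y**2 + Z**2)
          with np.errstate(divide="ignore", invalid="ignore"):
              G = erf(r / (np.sqrt(2) * sigma)) / (4 * np.pi * r)
          G[0, 0, 0] = np.sqrt(2 / np.pi) / (4 * np.pi * sigma)   # r -> 0 limit
          pad = np.zeros((m, m, m))
          pad[:n, :n, :n] = rho
          phi = np.fft.ifftn(np.fft.fftn(G) * np.fft.fftn(pad)).real * h**3
          return phi[:n, :n, :n]

      n, h = 64, 1.0 / 64
      g = (np.arange(n) + 0.5) * h
      X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
      rho = np.exp(-((X - 0.5)**2 + (Y - 0.5)**2 + (Z - 0.5)**2) / (2 * 0.03**2))
      phi = poisson_free_space(rho, h, sigma=2 * h)
      q = rho.sum() * h**3                        # total source strength
      r0 = np.linalg.norm((np.array([60, 32, 32]) + 0.5) * h - 0.5)
      print(phi[60, 32, 32], q / (4 * np.pi * r0))  # should agree far from the blob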

  1. Optimized star sensors laboratory calibration method using a regularization neural network.

    Science.gov (United States)

    Zhang, Chengfen; Niu, Yanxiong; Zhang, Hao; Lu, Jiazhen

    2018-02-10

    High-precision ground calibration is essential to ensure the performance of star sensors. However, the complex distortion and multi-error coupling have brought great difficulties to traditional calibration methods, especially for large field of view (FOV) star sensors. Although increasing the complexity of models is an effective way to improve the calibration accuracy, it significantly increases the demand for calibration data. In order to achieve high-precision calibration of star sensors with large FOV, a novel laboratory calibration method based on a regularization neural network is proposed. A multi-layer structure neural network is designed to represent the mapping of the star vector and the corresponding star point coordinate directly. To ensure the generalization performance of the network, regularization strategies are incorporated into the net structure and the training algorithm. Simulation and experiment results demonstrate that the proposed method can achieve high precision with less calibration data and without any other priori information. Compared with traditional methods, the calibration error of the star sensor decreased by about 30%. The proposed method can satisfy the precision requirement for large FOV star sensors.

  2. Extension of the adiabatic regularization method to spin-1/2 fields

    Science.gov (United States)

    Landete, Aitor

    2015-04-01

    The adiabatic regularization method was designed by L. Parker [1] for scalar fields in order to to subtract the potentially UV divergences that appear in the particle number operator. After that the method was generalized [2] to remove, in a consistent way, the UV divergences that appear in the expectation value of the stress-energy tensor in homogeneous cosmological backgrounds. We are going to provide here the extension of the adiabatic regularization method to spin-1/2 fields first given in [3]. In order to achieve this extension we will show the generalization of the adiabatic expansion for fermionic fields which differs significantly from the WKB-type expansion that works for the scalar modes. We will also show the consistency of the extended method computing well-known results, computed by other renormalization methods for a Dirac field in a FLRW spacetime, like the conformal and axial anomalies. Finally we will compute the expectation value of the stress-energy tensor for a Dirac field in a de Sitter spacetime.

  3. The Translation and Adaptation of Agile Methods

    DEFF Research Database (Denmark)

    Pries-Heje, Jan; Baskerville, Richard

    2017-01-01

    Purpose The purpose of this paper is to use translation theory to develop a framework (called FTRA) that explains how companies adopt agile methods in a discourse of fragmentation and articulation. Design/methodology/approach A qualitative multiple case study of six firms using the Scrum agile...... Framing agile adaption with translation theory surfaces how the discourse between translocal (global) and local practice yields the social construction of agile methods. This result contrasts the more functionalist engineering perspective and privileges changeability over performance. Originality...

  4. Implementation of an optimal first-order method for strongly convex total variation regularization

    DEFF Research Database (Denmark)

    Jensen, Tobias Lindstrøm; Jørgensen, Jakob Heide; Hansen, Per Christian

    2012-01-01

    We present a practical implementation of an optimal first-order method, due to Nesterov, for large-scale total variation regularization in tomographic reconstruction, image deblurring, etc. The algorithm applies to μ-strongly convex objective functions with L-Lipschitz continuous gradient....... In the framework of Nesterov both μ and L are assumed known—an assumption that is seldom satisfied in practice. We propose to incorporate mechanisms to estimate locally sufficient μ and L during the iterations. The mechanisms also allow for the application to non-strongly convex functions. We discuss...
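
    For reference, Nesterov's constant-momentum scheme for known μ and L, the point of departure that the estimation mechanisms above relax, looks as follows (a quadratic toy objective stands in for the TV-regularized problems; all parameters are assumptions):

      import numpy as np

      def nesterov_sc(grad, x0, mu, L, n_iter=500):
          """Optimal first-order method for mu-strongly convex, L-smooth f:
          a gradient step at a lookahead point plus constant momentum
          beta = (sqrt(L) - sqrt(mu)) / (sqrt(L) + sqrt(mu))."""
          beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
          x = y = x0.copy()
          for _ in range(n_iter):
              x_new = y - grad(y) / L
              y = x_new + beta * (x_new - x)
              x = x_new
          return x

      rng = np.random.default_rng(4)
      M = rng.standard_normal((30, 30))
      Q = M.T @ M + 0.1 * np.eye(30)              # f(x) = 0.5 x'Qx - b'x
      b = rng.standard_normal(30)
      eig = np.linalg.eigvalsh(Q)
      x = nesterov_sc(lambda z: Q @ z - b, np.zeros(30), mu=eig[0], L=eig[-1])
      print(np.linalg.norm(Q @ x - b))            # residual should be tiny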

  5. Joint image registration and fusion method with a gradient strength regularization

    Science.gov (United States)

    Lidong, Huang; Wei, Zhao; Jun, Wang

    2015-05-01

    Image registration is an essential process for image fusion, and fusion performance can be used to evaluate registration accuracy. We propose a maximum likelihood (ML) approach to joint image registration and fusion, instead of treating them as two independent processes in the conventional way. To improve the visual quality of a fused image, a gradient strength (GS) regularization is introduced in the cost function of the ML approach. The GS of the fused image is controllable by setting the target GS value in the regularization term. This is useful because a larger target GS yields a clearer fused image and a smaller target GS makes the fused image smoother and thus restrains noise. Hence, the subjective quality of the fused image can be improved whether or not the source images are polluted by noise. We can obtain the fused image and registration parameters successively by minimizing the cost function using an iterative optimization method. Experimental results show that our method is effective with translation, rotation, and scale parameters in the range of [-2.0, 2.0] pixel, [-1.1 deg, 1.1 deg], and [0.95, 1.05], respectively, and variances of noise smaller than 300. It is also demonstrated that our method yields a more visually pleasing fused image and higher registration accuracy compared with a state-of-the-art algorithm.

  6. Applying sequential Monte Carlo methods into a distributed hydrologic model: lagged particle filtering approach with regularization

    Directory of Open Access Journals (Sweden)

    S. J. Noh

    2011-10-01

    Full Text Available Data assimilation techniques have received growing attention due to their capability to improve prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", are a Bayesian learning process that has the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach to consider different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. The regularization with an additional move step based on the Markov chain Monte Carlo (MCMC) methods is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, water and energy transfer processes (WEP), is implemented for the sequential data assimilation through the updating of state variables. The lagged regularized particle filter (LRPF) and the sequential importance resampling (SIR) particle filter are implemented for hindcasting of streamflow at the Katsura catchment, Japan. Control state variables for filtering are soil moisture content and overland flow. Streamflow measurements are used for data assimilation. LRPF shows consistent forecasts regardless of the process noise assumption, while SIR has different values of optimal process noise and shows sensitive variation of confidence intervals, depending on the process noise. Improvement of LRPF forecasts compared to SIR is particularly found for rapidly varied high flows due to preservation of sample diversity from the kernel, even if particle impoverishment takes place.
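
    The regularization ingredient can be isolated in a few lines: after SIR resampling, particles are jittered with a small kernel to restore sample diversity. The sketch below shows only that generic regularized particle filter on a linear-Gaussian toy model (the lagged filtering, MCMC move step and hydrologic model described above are not reproduced; all parameters are invented):

      import numpy as np

      def regularized_pf(y_obs, n=500, q=0.5, r=1.0, bw=0.2, seed=5):
          """SIR particle filter with a post-resampling kernel jitter step,
          for the model x_t = 0.9 x_{t-1} + N(0, q), y_t = x_t + N(0, r)."""
          rng = np.random.default_rng(seed)
          x = rng.normal(0.0, 1.0, n)
          means = []
          for y in y_obs:
              x = 0.9 * x + rng.normal(0.0, np.sqrt(q), n)   # propagate
              w = np.exp(-0.5 * (y - x) ** 2 / r)            # observation weights
              w /= w.sum()
              means.append(np.sum(w * x))
              x = x[rng.choice(n, n, p=w)]                   # SIR resampling
              x += bw * x.std() * rng.standard_normal(n)     # regularization move
          return np.array(means)

      rng = np.random.default_rng(6)
      truth, xs = 0.0, []
      for _ in range(50):
          truth = 0.9 * truth + rng.normal(0.0, np.sqrt(0.5))
          xs.append(truth)
      y = np.array(xs) + rng.normal(0.0, 1.0, 50)
      print(np.round(regularized_pf(y)[-5:], 2))
      print(np.round(xs[-5:], 2))                            # filtered means track truth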

  7. Comparisons of clustered regularly interspaced short palindromic repeats and viromes in human saliva reveal bacterial adaptations to salivary viruses.

    Science.gov (United States)

    Pride, David T; Salzman, Julia; Relman, David A

    2012-09-01

    Explorations of human microbiota have provided substantial insight into microbial community composition; however, little is known about interactions between various microbial components in human ecosystems. In response to the powerful impact of viral predation, bacteria have acquired potent defences, including an adaptive immune response based on the clustered regularly interspaced short palindromic repeats (CRISPRs)/Cas system. To improve our understanding of the interactions between bacteria and their viruses in humans, we analysed 13 977 streptococcal CRISPR sequences and compared them with 2 588 172 virome reads in the saliva of four human subjects over 17 months. We found a diverse array of viruses and CRISPR spacers, many of which were specific to each subject and time point. There were numerous viral sequences matching CRISPR spacers; these matches were highly specific for salivary viruses. We determined that spacers and viruses coexist at the same time, which suggests that streptococcal CRISPR/Cas systems are under constant pressure from salivary viruses. CRISPRs in some subjects were just as likely to match viral sequences from other subjects as they were to match viruses from the same subject. Because interactions between bacteria and viruses help to determine the structure of bacterial communities, CRISPR-virus analyses are likely to provide insight into the forces shaping the human microbiome. © 2012 Society for Applied Microbiology and Blackwell Publishing Ltd.

  8. GNSS/Low-Cost MEMS-INS Integration Using Variational Bayesian Adaptive Cubature Kalman Smoother and Ensemble Regularized ELM

    Directory of Open Access Journals (Sweden)

    Hassana Maigary Georges

    2015-01-01

    Full Text Available Among the inertial navigation system (INS) devices used in land vehicle navigation (LVN), low-cost microelectromechanical systems (MEMS) inertial sensors have received more interest for bridging global navigation satellite systems (GNSS) signal failures because of their price and portability. Kalman filter (KF) based GNSS/INS integration has been widely used to provide a robust solution to the navigation. However, its prediction model cannot give satisfactory results in the presence of colored and variational noise. In order to achieve a reliable and accurate positional solution for LVN in urban areas surrounded by skyscrapers or under dense foliage and tunnels, a novel model combining a variational Bayesian adaptive Kalman smoother (VB-ACKS) as an alternative to the KF and an ensemble regularized extreme learning machine (ERELM) for bridging global positioning system (GPS) outages is proposed. The ERELM is applied to reduce the fluctuating performance of GNSS during an outage. We show that a well-organized collection of predictors using ensemble learning yields a more accurate positional result when compared with conventional artificial neural network (ANN) predictors. Experimental results show that the performance of VB-ACKS is more robust compared with the KF solution, and the prediction of ERELM contains the smallest error compared with other ANN solutions.

  9. The Method of Lines Solution of the Regularized Long-Wave Equation Using Runge-Kutta Time Discretization Method

    Directory of Open Access Journals (Sweden)

    H. O. Bakodah

    2013-01-01

    Full Text Available A method of lines approach to the numerical solution of nonlinear wave equations, typified by the regularized long wave (RLW) equation, is presented. The method uses a finite difference discretization in space. The solution of the resulting system is obtained by applying the fourth-order Runge-Kutta time discretization method. Using von Neumann stability analysis, it is shown that the proposed method is marginally stable. To test the accuracy of the method, some numerical experiments on test problems are presented. Test problems including solitary wave motion, two-solitary-wave interaction, and the temporal evolution of a Maxwellian initial pulse are studied. The accuracy of the present method is tested with L2 and L∞ error norms and the conservation properties of mass, energy, and momentum under the RLW equation.
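
    A compact version of this method of lines construction: discretize in space, invert the (I − μ∂xx) operator acting on u_t, and advance with classical fourth-order Runge-Kutta. Everything below (periodic domain, FFT-based solve of the implicit operator, solitary-wave parameters) is an illustrative assumption rather than the paper's exact scheme:

      import numpy as np

      # RLW equation u_t + u_x + u*u_x - mu*u_xxt = 0 on a periodic grid:
      # semi-discretize so that (I - mu*D2) u_t = -(u + u^2/2)_x, then use RK4.
      n, L, mu, dt = 256, 100.0, 1.0, 0.05
      h = L / n
      x = np.arange(n) * h
      k = 2 * np.pi * np.fft.fftfreq(n, d=h)
      sym = 1 + 4 * mu * np.sin(k * h / 2) ** 2 / h**2   # Fourier symbol of I - mu*D2

      def rhs(u):
          w = u + 0.5 * u**2
          wx = (np.roll(w, -1) - np.roll(w, 1)) / (2 * h)  # periodic centered d/dx
          return np.fft.ifft(np.fft.fft(-wx) / sym).real   # apply (I - mu*D2)^(-1)

      def rk4(u):
          k1 = rhs(u)
          k2 = rhs(u + 0.5 * dt * k1)
          k3 = rhs(u + 0.5 * dt * k2)
          k4 = rhs(u + dt * k3)
          return u + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6

      c = 0.5                                              # solitary wave, speed 1 + c
      u = 3 * c / np.cosh(0.5 * np.sqrt(c / (mu * (1 + c))) * (x - 40.0)) ** 2
      mass0 = u.sum() * h
      for _ in range(200):                                 # integrate to t = 10
          u = rk4(u)
      print(abs(u.sum() * h - mass0))                      # mass drift ~ round-off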

  10. Regular pipeline maintenance of gas pipeline using technical operational diagnostics methods

    Energy Technology Data Exchange (ETDEWEB)

    Volentic, J. [Gas Transportation Department, Slovensky plynarensky priemysel, Slovak Gas Industry, Bratislava (Slovakia)

    1997-12-31

    Slovensky plynarensky priemysel (SPP) operated 17 487 km of gas pipelines in 1995. The length of the long-line pipelines reached 5 191 km, and the distribution network comprised 12 296 km. The international transit system of long-line gas pipelines comprised 1 939 km of pipelines of various dimensions. The described scale of the transport and distribution system represents a multibillion investment stored in the ground, which is exposed to environmental influences and to pipeline operational stresses. In spite of all the technical and maintenance arrangements that have to be performed on operating gas pipelines, gradual ageing takes place anyway, expressed in degradation processes both in the steel tube and in the anti-corrosion coating. Within a certain time horizon, a consistent and regular application of the methods and means of in-service technical diagnostics and rehabilitation of existing pipeline systems makes it possible to save substantial investment funds, postponing the need for funds for a complete or partial reconstruction or a new construction of a specific gas section. The purpose of this presentation is to report on the implementation of the programme of in-service technical diagnostics of gas pipelines within the framework of the regular maintenance of the high pressure gas pipelines of SPP s.p. Bratislava. (orig.) 6 refs

  11. AN AUTOMATED METHOD FOR 3D ROOF OUTLINE GENERATION AND REGULARIZATION IN AIRBORNE LASER SCANNER DATA

    Directory of Open Access Journals (Sweden)

    S. N. Perera

    2012-07-01

    Full Text Available In this paper, an automatic approach for the generation and regularization of 3D roof boundaries in airborne laser scanner data is presented. The workflow commences with segmentation of the point clouds. A classification step and a rule-based roof extraction step follow the planar segmentation. Refinement of the roof extraction is performed in order to minimize the effect of urban vegetation. Boundary points of the connected roof planes are extracted and fitted with series of straight line segments. Each line is then regularized with respect to the dominant building orientation. We introduce the usage of cycle graphs to make the best use of topological information. Ridge-lines and step-edges are extracted to recognise correct topological relationships among the roof faces. Inner roof corners are geometrically fitted based on the closed cycle graphs. The outer boundary is reconstructed using the same concept, but with the outermost cycle graph. Here, the union of the sub-cycles is taken. Intermediate line segments (outer bounds) are intersected to reconstruct the roof eave lines. Two test areas with two different point densities are tested with the developed approach. A performance analysis of the test results is provided to demonstrate the applicability of the method.

  12. A graph regularized non-negative matrix factorization method for identifying microRNA-disease associations.

    Science.gov (United States)

    Xiao, Qiu; Luo, Jiawei; Liang, Cheng; Cai, Jie; Ding, Pingjian

    2017-09-01

    MicroRNAs (miRNAs) play crucial roles in post-transcriptional regulations and various cellular processes. The identification of disease-related miRNAs provides great insights into the underlying pathogenesis of diseases at a system level. However, most existing computational approaches are biased towards known miRNA-disease associations, which is inappropriate for those new diseases or miRNAs without any known association information. In this study, we propose a new method with graph regularized non-negative matrix factorization in heterogeneous omics data, called GRNMF, to discover potential associations between miRNAs and diseases, especially for new diseases and miRNAs or those diseases and miRNAs with sparse known associations. First, we integrate the disease semantic information and miRNA functional information to estimate disease similarity and miRNA similarity, respectively. Considering that there is no available interaction observed for new diseases or miRNAs, a preprocessing step is developed to construct the interaction score profiles that will assist in prediction. Next, a graph regularized non-negative matrix factorization framework is utilized to simultaneously identify potential associations for all diseases. The results indicated that our proposed method can effectively prioritize disease-associated miRNAs with higher accuracy compared with other recent approaches. Moreover, case studies also demonstrated the effectiveness of GRNMF to infer unknown miRNA-disease associations for those novel diseases and miRNAs. The code of GRNMF is freely available at https://github.com/XIAO-HN/GRNMF/. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com
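
    The core factorization step of such a method can be sketched with the classic multiplicative updates for graph-regularized NMF, minimizing ||X − UV'||_F^2 + λ·tr(V'LV) with graph Laplacian L = D − W. The similarity construction and the preprocessing for new nodes described above are not reproduced, and the toy association matrix and graph below are invented:

      import numpy as np

      def grnmf(X, W, rank, lam=0.1, n_iter=500, eps=1e-9):
          """Graph-regularized NMF via the standard multiplicative updates."""
          rng = np.random.default_rng(0)
          U = rng.random((X.shape[0], rank))
          V = rng.random((X.shape[1], rank))
          D = np.diag(W.sum(axis=1))
          for _ in range(n_iter):
              U *= (X @ V) / (U @ (V.T @ V) + eps)
              V *= (X.T @ U + lam * (W @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)
          return U, V

      # Toy association matrix with two blocks; W encodes similarity among columns
      # (playing the role of, e.g., disease similarity in the setting above).
      X = np.zeros((8, 6)); X[:4, :3] = 1.0; X[4:, 3:] = 1.0
      W = np.zeros((6, 6)); W[:3, :3] = 1.0; W[3:, 3:] = 1.0
      np.fill_diagonal(W, 0.0)
      U, V = grnmf(X, W, rank=2)
      print(np.round(U @ V.T, 2))   # smoothed association scores respect the blocks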

  13. Comparison of two regularization methods for soft x-ray tomography at Tore Supra

    International Nuclear Information System (INIS)

    Jardin, A; Mazon, D; Bielecki, J

    2016-01-01

    Soft x-ray (SXR) emission in the range 0.1–20 keV is widely used to obtain valuable information on tokamak plasma physics, such as particle transport, magnetic configuration or magnetohydrodynamic activity. In particular, 2D tomography is the usual plasma diagnostic to access the local SXR emissivity. The tomographic inversion is traditionally performed from line-integrated measurements of two or more cameras viewing the plasma in a poloidal cross-section, as at Tore Supra (TS). Unfortunately, due to the limited number of measured projections and the presence of noise, the tomographic reconstruction of SXR emissivity is a mathematically ill-posed problem. Thus, obtaining reliable results from the tomographic inversion is a very challenging task. In order to perform the reconstruction, inversion algorithms implemented in present tokamaks use a priori information as additional constraints imposed on the plasma SXR emissivity. Among several potential inversion methods, some have been identified as well suited to tokamak plasmas. The purpose of this work is to compare two promising inversion methods, i.e. the minimum Fisher information method already used at TS and planned for the WEST configuration, and the alternative 2nd-order Phillips–Tikhonov regularization with smoothness constraints imposed on the second derivative norm. The respective accuracy of both reconstruction methods, as well as overall robustness and computational time, are studied using several synthetic SXR emissivity profiles. Finally, a real case is studied through tomographic reconstruction from the TS SXR database. (paper)

  14. Efficient Integration of Highly Eccentric Orbits by Scaling Methods Applied to Kustaanheimo-Stiefel Regularization

    Science.gov (United States)

    Fukushima, Toshio

    2004-12-01

    We apply our single scaling method to the numerical integration of perturbed two-body problems regularized by the Kustaanheimo-Stiefel (K-S) transformation. The scaling is done by multiplying a single scaling factor with the four-dimensional position and velocity vectors of an associated harmonic oscillator in order to maintain the Kepler energy relation in terms of the K-S variables. As with the so-called energy rectification of Aarseth, the extra cost for the scaling is negligible, since the integration of the Kepler energy itself is already incorporated in the original K-S formulation. On the other hand, the single scaling method can be applied at every integration step without facing numerical instabilities. For unperturbed cases, the single scaling applied at every step gives a better result than either the original K-S formulation, the energy rectification applied at every apocenter, or the single scaling method applied at every apocenter. For the perturbed cases, however, the single scaling method applied at every apocenter provides the best performance for all perturbation types, whether the main source of error is truncation or round-off.

  15. A novel relational regularization feature selection method for joint regression and classification in AD diagnosis.

    Science.gov (United States)

    Zhu, Xiaofeng; Suk, Heung-Il; Wang, Li; Lee, Seong-Whan; Shen, Dinggang

    2017-05-01

    In this paper, we focus on joint regression and classification for Alzheimer's disease diagnosis and propose a new feature selection method by embedding the relational information inherent in the observations into a sparse multi-task learning framework. Specifically, the relational information includes three kinds of relationships (feature-feature relation, response-response relation, and sample-sample relation), preserving three kinds of similarity: among the features, among the response variables, and among the samples, respectively. To conduct feature selection, we first formulate the objective function by imposing these three relational characteristics along with an ℓ2,1-norm regularization term, and further propose a computationally efficient algorithm to optimize the proposed objective function. With the dimension-reduced data, we train two support vector regression models to predict the clinical scores of ADAS-Cog and MMSE, respectively, and also a support vector classification model to determine the clinical label. We conducted extensive experiments on the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset to validate the effectiveness of the proposed method. Our experimental results showed the efficacy of the proposed method in enhancing the performances of both clinical score prediction and disease status identification, compared to the state-of-the-art methods. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. [Compatibility regularity of compound traditional Chinese medicine patents based on association principle and entropy method].

    Science.gov (United States)

    Yin, Xiang-jun; He, Qing-yong

    2015-02-01

    To analyze the compatibility regularity of compound traditional Chinese medicine (TCM) patents for treating dyslipidemia, and provide a basis for the clinical development and research of new TCMs for treating dyslipidemia. A total of 243 compound traditional Chinese medicine patents for treating dyslipidemia were collected from the national patent database from September 1985 to March 2014 and analyzed by using drug frequency, association rules, complex networks and the entropy method of the Traditional Chinese Medicine Inheritance System (V1.1). The commonest single medicine in the treatment of dyslipidemia is Crataegi Fructus 109 (44.86%). The commonest pair medicine is Crataegi Fructus-Salviae Miltiorrhizae Radix et Rhizoma 53 (21.81%). The commonest corner drug is Crataegi Fructus-Cassiae Semen-Polygoni Multiflori Radix 25 (10.29%). The common prescriptions on the basis of association rules are Prunellae Spica-->Salviae Miltiorrhizae Radix et Rhizoma (0.833), Rhei Radix et Rhizoma, Alismatis Rhizoma-->Polygoni Multiflori Radix (1.00), Salviae Miltiorrhizae Radix et Rhizoma, Cassiae Semen, Alismatis Rhizoma-->Polygoni Multiflori Radix (0.929). The core drugs based on complex networks are Salviae Miltiorrhizae Radix et Rhizoma and Crataegi Fructus. The new prescriptions extracted by the entropy method are Atractylodis Macrocephalae Rhizoma-Glycyrrhizae Radix et Rhizoma-Platycladi Semen-Stephaniae Tetrandrae Radix; Citri Reticulatae Pericarpium-Poria-Coicis Semen-Pinelliae Rhizoma. This study shows the regularity in the compatibility of compound TCM patents treating dyslipidemia, suggesting that future studies on new traditional Chinese medicines treating dyslipidemia should focus on the following six aspects: (1) Single medicine should be preferred: e.g. Crataegi Fructus; (2) Pair medicines should be preferred: e.g. Crataegi Fructus-Salviae Miltiorrhizae Radix et Rhizoma; (3) Corner drugs should be preferred: e.g. Crataegi Fructus, Cassiae Semen, Polygoni Multiflori Radix; (4) The

  17. Implementing Adaptive Educational Methods with IMS Learning Design

    NARCIS (Netherlands)

    Specht, Marcus; Burgos, Daniel

    2006-01-01

    Please, cite this publication as: Specht, M. & Burgos, D. (2006). Implementing Adaptive Educational Methods with IMS Learning Design. Proceedings of Adaptive Hypermedia. June, Dublin, Ireland. Retrieved June 30th, 2006, from http://dspace.learningnetworks.org

  18. Single photon emission computed tomography using a regularizing iterative method for attenuation correction

    International Nuclear Information System (INIS)

    Soussaline, Francoise; Cao, A.; Lecoq, G.

    1981-06-01

    An analytically exact solution to the attenuated tomographic operator is proposed. This technique, called the Regularizing Iterative Method (RIM), belongs to the iterative class of procedures in which a priori knowledge can be introduced on the evaluation of the size and shape of the activity domain to be reconstructed, and on the exact attenuation distribution. The relaxation factor used leads to fast convergence and provides noise filtering within a small number of iterations. The effectiveness of the method was tested on the Single Photon Emission Computed Tomography (SPECT) reconstruction problem, with the goal of precise correction for attenuation before quantitative study. Its implementation involves the use of a rotating scintillation camera based SPECT detector connected to a minicomputer system. Mathematical simulations of cylindrical uniformly attenuated phantoms indicate that, in the range of the a priori calculated relaxation factor, a fast converging solution can always be found with a (contrast) accuracy of the order of 0.2 to 4%, depending on whether numerical errors and noise are taken into account. The sensitivity of the RIM algorithm to errors in the size of the reconstructed object and in the value of the attenuation coefficient μ was studied using the same simulation data. Extreme variations of ±15% in these parameters lead to errors of the order of ±20% in the quantitative results. Physical phantoms representing a variety of geometrical situations were also studied
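
    The record does not spell out the RIM iteration itself; the following sketch shows a generic relaxed Landweber-type iteration for a linear system y = Ax, in which a relaxation factor plays the role described above (fast convergence with noise filtering). The operator and data are toy stand-ins, not the attenuated tomographic operator.

```python
import numpy as np

def relaxed_iteration(A, y, relaxation, n_iter):
    """Generic relaxed iterative solution of y = A x.
    The relaxation factor controls convergence speed and noise damping,
    playing the role the RIM record ascribes to its relaxation parameter."""
    x = np.zeros(A.shape[1])
    step = relaxation / np.linalg.norm(A, 2) ** 2   # keep the iteration stable
    for _ in range(n_iter):
        x += step * A.T @ (y - A @ x)
    return x

# Toy projection system standing in for the attenuated tomographic operator.
rng = np.random.default_rng(1)
A = rng.normal(size=(40, 20))
x_true = rng.normal(size=20)
y = A @ x_true + 0.01 * rng.normal(size=40)
x_rec = relaxed_iteration(A, y, relaxation=1.0, n_iter=200)
print("relative error:", np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true))
```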

  19. Regularization and computational methods for precise solution of perturbed orbit transfer problems

    Science.gov (United States)

    Woollands, Robyn Michele

    The author has developed a suite of algorithms for solving the perturbed Lambert's problem in celestial mechanics. These algorithms have been implemented as a parallel computation tool that has broad applicability. This tool is composed of four component algorithms and each provides unique benefits for solving a particular type of orbit transfer problem. The first one utilizes a Keplerian solver (a-iteration) for solving the unperturbed Lambert's problem. This algorithm not only provides a "warm start" for solving the perturbed problem but is also used to identify which of several perturbed solvers is best suited for the job. The second algorithm solves the perturbed Lambert's problem using a variant of the modified Chebyshev-Picard iteration initial value solver that solves two-point boundary value problems. This method converges over about one third of an orbit and does not require a Newton-type shooting method and thus no state transition matrix needs to be computed. The third algorithm makes use of regularization of the differential equations through the Kustaanheimo-Stiefel transformation and extends the domain of convergence over which the modified Chebyshev-Picard iteration two-point boundary value solver will converge, from about one third of an orbit to almost a full orbit. This algorithm also does not require a Newton-type shooting method. The fourth algorithm uses the method of particular solutions and the modified Chebyshev-Picard iteration initial value solver to solve the perturbed two-impulse Lambert problem over multiple revolutions. The method of particular solutions is a shooting method but differs from the Newton-type shooting methods in that it does not require integration of the state transition matrix. The mathematical developments that underlie these four algorithms are derived in the chapters of this dissertation. For each of the algorithms, some orbit transfer test cases are included to provide insight on accuracy and efficiency of these

  20. Void Structures in Regularly Patterned ZnO Nanorods Grown with the Hydrothermal Method

    Directory of Open Access Journals (Sweden)

    Yu-Feng Yao

    2014-01-01

    Full Text Available The void structures and related optical properties after thermal annealing with ambient oxygen in regularly patterned ZnO nanorod (NR) arrays grown with the hydrothermal method are studied. As the thermal annealing temperature increases, void distribution starts from the bottom and extends to the top of an NR in the vertical (c-axis) growth region. When the annealing temperature is higher than 400°C, void distribution spreads into the lateral (m-axis) growth region. Photoluminescence measurement shows that the ZnO band-edge emission, in contrast to defect emission in the yellow-red range, is the strongest under the n-ZnO NR process conditions of 0.003 M Ga-doping concentration and 300°C thermal annealing temperature with ambient oxygen. Energy dispersive X-ray spectroscopy data indicate that the concentration of hydroxyl groups in the vertical growth region is significantly higher than that in the lateral growth region. During thermal annealing, hydroxyl groups are desorbed from the NR, leaving anion vacancies that react with cation vacancies to form voids.

  1. 3D DC Resistivity Inversion with Topography Based on Regularized Conjugate Gradient Method

    Directory of Open Access Journals (Sweden)

    Jian-ke Qiang

    2013-01-01

    Full Text Available During the past decades, we have observed a strong interest in 3D DC resistivity inversion and imaging with complex topography. In this paper, we implemented 3D DC resistivity inversion based on the regularized conjugate gradient method with FEM. Based on the reciprocity theorem, the Fréchet derivative is assembled from the electric potential in order to speed up the inversion process. We also analyzed the sensitivity of the electric potential on the earth's surface to the conductivity of each cell underground and introduced an optimized weighting function to produce a new sensitivity matrix. The synthetic model study shows that this optimized weighting function helps to improve the resolution of deep anomalies. By incorporating topography into the inversion, artificial anomalies that are actually caused by topography can be eliminated. As a result, this algorithm can potentially be applied to process DC resistivity data collected in mountainous areas. Our synthetic model study also shows that the convergence is stable and the computation fast.
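
    As a rough illustration of the core linear-algebra step, the sketch below runs conjugate gradients on Tikhonov-regularized normal equations built from a stand-in sensitivity matrix; the paper's FEM assembly, reciprocity-based Fréchet derivative, and weighting function are not reproduced.

```python
import numpy as np

def regularized_cg(J, d, lam, n_iter=50):
    """Conjugate gradients on the regularized normal equations
    (J^T J + lam I) m = J^T d, the core linear solve of a regularized
    conjugate-gradient inversion (J: sensitivity matrix, d: data)."""
    A = J.T @ J + lam * np.eye(J.shape[1])
    b = J.T @ d
    m = np.zeros_like(b)
    r = b - A @ m
    p = r.copy()
    for _ in range(n_iter):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        m += alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)
        p = r_new + beta * p
        r = r_new
        if np.linalg.norm(r) < 1e-10:
            break
    return m

rng = np.random.default_rng(2)
J = rng.normal(size=(60, 30))          # stand-in for the FEM sensitivity matrix
d = J @ rng.normal(size=30)
m = regularized_cg(J, d, lam=0.1)
print("residual:", np.linalg.norm(J @ m - d))
```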

  2. A multiresolution method for solving the Poisson equation using high order regularization

    DEFF Research Database (Denmark)

    Hejlesen, Mads Mølholm; Walther, Jens Honore

    2016-01-01

    We present a novel high order multiresolution Poisson solver based on regularized Green's function solutions to obtain exact free-space boundary conditions while using fast Fourier transforms for computational efficiency. Multiresolution is achieved through local refinement patches and regularized Green's functions corresponding to the difference in the spatial resolution between the patches. The full solution is obtained utilizing the linearity of the Poisson equation, enabling super-position of solutions. We show that the multiresolution Poisson solver produces convergence rates...

  3. Reading proficiency and adaptability in orthographic processing: an examination of the effect of type of orthography read on brain activity in regular and dyslexic readers.

    Science.gov (United States)

    Bar-Kochva, Irit; Breznitz, Zvia

    2014-01-01

    Regular readers were found to adjust the routine of reading to the demands of processing imposed by different orthographies. Dyslexic readers may lack such adaptability in reading. This hypothesis was tested among readers of Hebrew, as Hebrew has two forms of script differing in phonological transparency. Event-related potentials were recorded from 24 regular and 24 dyslexic readers while they carried out a lexical decision task in these two forms of script. The two forms of script elicited distinct amplitudes and latencies at ∼165 ms after target onset, and these effects were larger in regular than in dyslexic readers. These early effects appeared not to be merely a result of the visual difference between the two forms of script (the presence of diacritics). The next effect of form of script was obtained on amplitudes elicited at latencies associated with orthographic-lexical processing and the categorization of stimuli, and these appeared earlier in regular readers (∼340 ms) than in dyslexic readers (∼400 ms). The behavioral measures showed inferior reading skills of dyslexic readers compared to regular readers in reading of both forms of script. Taken together, the results suggest that although dyslexic readers are not indifferent to the type of orthography read, they fail to adjust the routine of reading to the demands of processing imposed by both a transparent and an opaque orthography.

  4. Adaptive design methods in clinical trials – a review

    Directory of Open Access Journals (Sweden)

    Chang Mark

    2008-05-01

    Full Text Available Abstract In recent years, the use of adaptive design methods in clinical research and development based on accrued data has become very popular due to their flexibility and efficiency. Based on the adaptations applied, adaptive designs can be classified into three categories: prospective, concurrent (ad hoc), and retrospective adaptive designs. An adaptive design allows modifications to be made to trial and/or statistical procedures of ongoing clinical trials. However, it is a concern that the actual patient population after the adaptations could deviate from the originally targeted patient population, and consequently the overall type I error rate (the probability of erroneously claiming efficacy for an ineffective drug) may not be controlled. In addition, major adaptations of trial and/or statistical procedures of ongoing trials may result in a totally different trial that is unable to address the scientific/medical questions the trial intends to answer. In this article, several commonly considered adaptive designs in clinical trials are reviewed. Impacts of ad hoc adaptations (protocol amendments), challenges of by-design (prospective) adaptations, and obstacles of retrospective adaptations are described. Strategies for the use of adaptive design in the clinical development of rare diseases are discussed. Some examples concerning the development of Velcade intended for multiple myeloma and non-Hodgkin's lymphoma are given. Practical issues that are commonly encountered when implementing adaptive design methods in clinical trials are also discussed.

  5. On the Adaptation of an Agile Information Systems Development Method

    NARCIS (Netherlands)

    Aydin, M.N.; Harmsen, F.; van Slooten, C.; Stegwee, R.A.

    2005-01-01

    Little specific research has been conducted to date on the adaptation of agile information systems development (ISD) methods. This article presents the work practice in dealing with the adaptation of such a method in the ISD department of one of the leading financial institutes in Europe. Two forms

  6. Adaptation of an Agile Information System Development Method

    NARCIS (Netherlands)

    Aydin, M.N.; Harmsen, A.F.; van Hillegersberg, Jos; Stegwee, R.A.; Siau, K.

    2007-01-01

    Little specific research has been conducted to date on the adaptation of agile information systems development (ISD) methods. This chapter presents the work practice in dealing with the adaptation of such a method in the ISD department of one of the leading financial institutes in Europe. The

  7. Adaptive integral equation methods in transport theory

    International Nuclear Information System (INIS)

    Kelley, C.T.

    1992-01-01

    In this paper, an adaptive multilevel algorithm for integral equations is described that has been developed with the Chandrasekhar H equation and its generalizations in mind. The algorithm maintains good performance when the Frechet derivative of the nonlinear map is singular at the solution, as happens in radiative transfer with conservative scattering and in critical neutron transport. Numerical examples that demonstrate the algorithm's effectiveness are presented

  8. Adaptive Nodal Transport Methods for Reactor Transient Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Thomas Downar; E. Lewis

    2005-08-31

    Develop methods for adaptively treating the angular, spatial, and time dependence of the neutron flux in reactor transient analysis. These methods were demonstrated in the DOE transport nodal code VARIANT and the US NRC spatial kinetics code, PARCS.

  9. A Mixed L2 Norm Regularized HRF Estimation Method for Rapid Event-Related fMRI Experiments

    Directory of Open Access Journals (Sweden)

    Yu Lei

    2013-01-01

    Full Text Available Brain state decoding or “mind reading” via multivoxel pattern analysis (MVPA) has become a popular focus of functional magnetic resonance imaging (fMRI) studies. In brain decoding, the stimulus presentation rate is made as fast as possible to collect many training samples and obtain an effective and reliable classifier or computational model. However, for extremely rapid event-related experiments, the blood-oxygen-level-dependent (BOLD) signals evoked by adjacent trials heavily overlap in the time domain, making it difficult to identify trial-specific BOLD responses. In addition, the voxel-specific hemodynamic response function (HRF), which is useful in MVPA, should be used in estimation to decrease the loss of weak information across voxels and obtain fine-grained spatial information. Regularization methods have been widely used to increase the efficiency of HRF estimates. In this study, we propose a regularization framework called mixed L2 norm regularization, which combines Tikhonov regularization with an additional L2 norm regularization term to calculate reliable HRF estimates. This technique improves the accuracy of HRF estimates and significantly increases the classification accuracy of the brain decoding task when applied to a rapid event-related four-category object classification experiment. Finally, some essential issues, such as the impact of low-frequency fluctuation (LFF) and the influence of smoothing, are discussed for rapid event-related experiments.
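
    A minimal version of such a mixed L2-norm estimate can be written in closed form. The sketch below assumes a design matrix X built from the stimulus onsets and combines a second-difference Tikhonov smoothness term with a plain L2 term; the exact operators and weights of the paper may differ.

```python
import numpy as np

def mixed_l2_hrf(X, y, lam1, lam2):
    """Closed-form HRF estimate for a mixed L2-norm penalty:
    h = argmin ||y - X h||^2 + lam1 ||D h||^2 + lam2 ||h||^2,
    where D is a second-difference (smoothness) operator. The split into a
    Tikhonov smoothness term plus a plain L2 term mirrors the framework
    described above; X, lam1 and lam2 are illustrative assumptions."""
    n = X.shape[1]
    D = np.diff(np.eye(n), 2, axis=0)          # second-difference matrix
    A = X.T @ X + lam1 * D.T @ D + lam2 * np.eye(n)
    return np.linalg.solve(A, X.T @ y)

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 25))                  # stand-in stimulus design matrix
h_true = np.exp(-0.5 * (np.arange(25) - 6.0) ** 2 / 4)   # toy HRF shape
y = X @ h_true + 0.5 * rng.normal(size=200)
h_hat = mixed_l2_hrf(X, y, lam1=10.0, lam2=1.0)
print("correlation with true HRF:", np.corrcoef(h_hat, h_true)[0, 1])
```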

  10. Application of Tikhonov regularization method to wind retrieval from scatterometer data II: cyclone wind retrieval with consideration of rain

    International Nuclear Information System (INIS)

    Zhong Jian; Huang Si-Xun; Fei Jian-Fang; Du Hua-Dong; Zhang Liang

    2011-01-01

    According to the conclusions of the simulation experiments in paper I, the Tikhonov regularization method is applied to cyclone wind retrieval with a rain-effect-considering geophysical model function (called GMF+Rain). The GMF+Rain model, which is based on the NASA scatterometer-2 (NSCAT2) GMF, is presented to compensate for the effects of rain on cyclone wind retrieval. With the multiple solution scheme (MSS), the noise of the wind retrieval is effectively suppressed, but the influence of the background increases; this causes a large wind direction error in ambiguity removal when the background error is large. However, this can be mitigated by the new Tikhonov-regularization ambiguity removal method, as proved in the simulation experiments. A case study on an extratropical cyclone observed with SeaWinds at 25-km resolution shows that the retrieved wind speed for areas with rain is in better agreement with that derived from the best track analysis for the GMF+Rain model, but the wind direction obtained with the two-dimensional variational (2DVAR) ambiguity removal is incorrect. The new Tikhonov regularization method effectively improves the performance of wind direction ambiguity removal through the choice of appropriate regularization parameters, and the retrieved wind speed is almost the same as that obtained from the 2DVAR.

  11. A new method for the regularization of a class of divergent Feynman integrals in covariant and axial gauges

    International Nuclear Information System (INIS)

    Lee, H.C.; Milgram, M.S.

    1984-07-01

    A hybrid of dimensional and analytic regularization is used to regulate and uncover a Meijer's G-function representation for a class of massless, divergent Feynman integrals in an axial gauge. Integrals in the covariant gauge belong to a subclass, and those in the light-cone gauge are reached by analytic continuation. The method decouples the physical ultraviolet and infrared singularities from the spurious axial gauge singularity but regulates all three simultaneously. For the axial gauge singularity, the new analytic method is more powerful and elegant than the old principal value prescription, but the two methods yield identical infinite as well as regular parts. It is shown that dimensional and analytic regularization can be made equivalent, implying that the former method is free from spurious γ5-anomalies and the latter preserves gauge invariance. The hybrid method permits the evaluation of integrals containing arbitrary integer powers of logarithms in the integrand by differentiation with respect to exponents. Such 'exponent derivatives' generate the same set of 'polylogs' as that generated in multi-loop integrals in perturbation theories and may be useful for solving equations in nonperturbation theories. The close relation between the method of exponent derivatives and the prescription of 't Hooft and Veltman for treating overlapping divergences is pointed out. It is demonstrated that both methods generate functions that are free from unrecognizable logarithmic infinite parts. Nonperturbation theories expressed in terms of exponent derivatives are thus renormalizable. Some intriguing connections between nonperturbation theories and nonintegral exponents are pointed out.
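
    The 'exponent derivative' trick rests on an elementary identity: integer powers of logarithms arise from differentiating a power with respect to its exponent. In schematic form (our illustration, not the paper's notation):

```latex
% Differentiation with respect to the exponent generates logarithms:
%   d^n/ds^n  x^s = x^s (ln x)^n,
% so an integral with (ln x)^n in the integrand follows from the log-free one:
\frac{\partial^n}{\partial s^n} \int \mathrm{d}x\, x^{s} f(x)
  = \int \mathrm{d}x\, x^{s} (\ln x)^{n} f(x).
```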

  12. Linearized Alternating Direction Method of Multipliers for Constrained Nonconvex Regularized Optimization

    Science.gov (United States)

    2016-11-22

    When A or B is non-diagonal, neither the SCP algorithm nor the GIST algorithm is efficient for solving the problem, due to the expensive per-iteration computational cost. We observe that the proposed algorithm solves both problem (11) and problem (13) efficiently. Compared with ℓ1 regularization, we observe that...
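
    Fragments aside, the underlying technique can be illustrated. The sketch below applies a linearized ADMM / augmented-Lagrangian scheme to a standard ℓ1 problem (basis pursuit); it is a generic textbook instance under our own assumptions, not the report's problems (11) or (13).

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding (prox of the l1 norm)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def linearized_admm(A, b, rho=1.0, n_iter=500):
    """Minimize ||x||_1 subject to A x = b. The quadratic penalty of the
    augmented Lagrangian is linearized at the current iterate, so the
    x-update is a single soft-thresholding step and no inner solve with
    A^T A is required (the selling point of linearized schemes)."""
    m, n = A.shape
    tau = 0.9 / (rho * np.linalg.norm(A, 2) ** 2)   # step size for stability
    x = np.zeros(n)
    y = np.zeros(m)                                  # dual variable
    for _ in range(n_iter):
        grad = A.T @ (y + rho * (A @ x - b))
        x = soft(x - tau * grad, tau)
        y = y + rho * (A @ x - b)
    return x

rng = np.random.default_rng(4)
A = rng.normal(size=(30, 80))
x_true = np.zeros(80); x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]   # sparse signal
b = A @ x_true
x_hat = linearized_admm(A, b)
print("support recovered:", np.flatnonzero(np.abs(x_hat) > 0.1))
```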

  13. Variable Costs Method. Application Variants Adapted to Romanian Accounting Plan

    Directory of Open Access Journals (Sweden)

    Gheorghe V. Lepadatu

    2009-09-01

    Full Text Available This article describes the variable costs method and its adaptation possibilities to the Romanian general accounting plan. The three variants of the variable costs method are described, and the methodological stages that managerial accounting passes through using the 9th class, “Management accounts”, are presented. The article ends with the advantages and disadvantages of adapting the variable costs method to the Romanian general accounting plan.

  14. Designing adaptive intensive interventions using methods from engineering.

    Science.gov (United States)

    Lagoa, Constantino M; Bekiroglu, Korkut; Lanza, Stephanie T; Murphy, Susan A

    2014-10-01

    Adaptive intensive interventions are introduced, and new methods from the field of control engineering for use in their design are illustrated. A detailed step-by-step explanation of how control engineering methods can be used with intensive longitudinal data to design an adaptive intensive intervention is provided. The methods are evaluated via simulation. Simulation results illustrate how the designed adaptive intensive intervention can result in improved outcomes with less treatment by providing treatment only when it is needed. Furthermore, the methods are robust to model misspecification as well as the influence of unobserved causes. These new methods can be used to design adaptive interventions that are effective yet reduce participant burden. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  15. Danish pedagogical methodics: adaption on Belarusian ground

    DEFF Research Database (Denmark)

    Andryieuski, Andrei; Skryhan, K.; Andryieuskaya, M.

    2009-01-01

    On the basis of our experience of studies and work at Danish universities and the Belarusian State University, we present a range of methods that can easily be applied in the Belarusian higher education system to increase its efficiency.

  16. Comparative analysis of gradient-field-based orientation estimation methods and regularized singular-value decomposition for fringe pattern processing.

    Science.gov (United States)

    Sun, Qi; Fu, Shujun

    2017-09-20

    Fringe orientation is an important feature of fringe patterns and has a wide range of applications, such as guiding fringe pattern filtering, phase unwrapping, and abstraction. Estimating fringe orientation is a basic task for the subsequent processing of fringe patterns. However, various kinds of noise, singular and obscure points, and orientation data degeneration lead to inaccurate calculations of fringe orientation. Thus, to deepen the understanding of orientation estimation and to better guide orientation estimation in fringe pattern processing, some advanced gradient-field-based orientation estimation methods are compared and analyzed. At the same time, following the ideas of smoothing regularization and of computing larger gradient fields, a regularized singular-value decomposition (RSVD) technique is proposed for fringe orientation estimation. To compare the performance of these gradient-field-based methods, quantitative results and visual effect maps of orientation estimation are given for simulated and real fringe patterns, which demonstrate that the RSVD produces the best estimation results at a cost of relatively little extra time.
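
    The core of gradient-field orientation estimation via SVD can be sketched compactly: stack the gradient samples of a window as rows of a matrix, take the leading right singular vector as the dominant gradient direction, and read the fringe orientation as its perpendicular. The RSVD regularization itself is not reproduced here; the window size and synthetic pattern below are illustrative.

```python
import numpy as np

def orientation_from_gradients(gx, gy):
    """Estimate local fringe orientation from a window of gradient samples.
    Rows of G are [gx, gy]; the leading right singular vector is the
    dominant gradient direction, and the fringe orientation (lines of
    constant phase) is the perpendicular direction."""
    G = np.column_stack([gx.ravel(), gy.ravel()])
    _, _, Vt = np.linalg.svd(G, full_matrices=False)
    normal = Vt[0]                        # direction of maximal change
    theta = np.arctan2(normal[1], normal[0]) + np.pi / 2.0
    return theta % np.pi                  # orientation is defined modulo pi

# Synthetic fringes I = cos(2*pi*(x*cos(a) + y*sin(a))/T) at a known angle a.
a, T = np.deg2rad(30.0), 8.0
y, x = np.mgrid[0:16, 0:16].astype(float)
I = np.cos(2 * np.pi * (x * np.cos(a) + y * np.sin(a)) / T)
gy, gx = np.gradient(I)
print("estimated fringe orientation (deg):",
      np.rad2deg(orientation_from_gradients(gx, gy)))   # ~ a + 90 degrees
```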

  17. A sparsity-regularized Born iterative method for reconstruction of two-dimensional piecewise continuous inhomogeneous domains

    KAUST Repository

    Sandhu, Ali Imran

    2016-04-10

    A sparsity-regularized Born iterative method (BIM) is proposed for efficiently reconstructing two-dimensional piecewise-continuous inhomogeneous dielectric profiles. Such profiles are typically not spatially sparse, which reduces the efficiency of the sparsity-promoting regularization. To overcome this problem, scattered fields are represented in terms of the spatial derivative of the dielectric profile and reconstruction is carried out over samples of the dielectric profile's derivative. Then, like the conventional BIM, the nonlinear problem is iteratively converted into a sequence of linear problems (in derivative samples) and a sparsity constraint is enforced on each linear problem using thresholded Landweber iterations. Numerical results, which demonstrate the efficiency and accuracy of the proposed method in reconstructing piecewise-continuous dielectric profiles, are presented.

  18. Adaptive and non-adaptive data hiding methods for grayscale images based on modulus function

    Directory of Open Access Journals (Sweden)

    Najme Maleki

    2014-07-01

    Full Text Available This paper presents two adaptive and non-adaptive data hiding methods for grayscale images based on the modulus function. Our adaptive scheme is based on the concept of human vision sensitivity: pixels in edge areas can tolerate many more changes than those in smooth areas without producing visible distortion. In our adaptive scheme, the average differencing value of four neighborhood pixels in a block, compared against a threshold secret key, determines whether the current block is located in an edge or a smooth area. Pixels in edge areas are embedded with Q bits of secret data, with a larger value of Q than for pixels placed in smooth areas. We also present a non-adaptive data hiding algorithm, which produces a high visual quality for the stego-image via an error reduction procedure. The proposed schemes present several advantages: (1) the embedding capacity and visual quality of the stego-image are scalable, i.e., the embedding rate as well as the image quality can be scaled for practical applications; (2) high embedding capacity with minimal visual distortion can be achieved; (3) our methods require little memory space for the secret data embedding and extraction phases; (4) secret keys are used to protect the embedded secret data, so the level of security is high; (5) the problem of overflow or underflow does not occur. Experimental results indicate that the proposed adaptive scheme is significantly superior to the currently existing scheme in terms of stego-image visual quality, embedding capacity and level of security, and that our non-adaptive method is better than other non-adaptive methods in terms of stego-image quality. Results also show that our adaptive algorithm can resist the RS steganalysis attack.
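
    The modulus-function idea at the heart of such schemes is simple to state: force each pixel's remainder modulo m to encode a secret digit, moving the pixel by the smallest possible amount. The toy version below is our own illustration under assumed parameters, not the paper's exact scheme:

```python
def embed_modulus(pixel, digit, m=8):
    """Hide one base-m digit in a pixel by forcing pixel % m == digit,
    moving the pixel the shortest distance (in the spirit of the
    error-reduction step described above; m=8 is illustrative)."""
    r = (digit - pixel) % m
    candidate = pixel + (r if r <= m // 2 else r - m)   # nearest valid value
    if candidate < 0:                                   # stay in [0, 255]
        candidate += m                                  # without breaking the
    if candidate > 255:                                 # modulus, so overflow/
        candidate -= m                                  # underflow never occurs
    return candidate

def extract_modulus(pixel, m=8):
    """Recover the hidden digit."""
    return pixel % m

p = 123
for d in range(8):
    s = embed_modulus(p, d)
    assert extract_modulus(s) == d
print("round-trip OK; max pixel change <=", 8 // 2)
```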

  19. Adaptive upscaling with the dual mesh method

    Energy Technology Data Exchange (ETDEWEB)

    Guerillot, D.; Verdiere, S.

    1997-08-01

    The objective of this paper is to demonstrate that upscaling should be calculated during the flow simulation instead of trying to enhance a priori upscaling methods. Hence, counter-examples are given to motivate our approach, the so-called Dual Mesh Method. The main steps of this numerical algorithm are recalled. Applications illustrate the necessity of considering different average relative permeability values depending on the direction in space. Moreover, these values can differ for the same average saturation. This proves that an a priori upscaling cannot be the answer, even in homogeneous cases, because of the “dynamical heterogeneity” created by the saturation profile. Other examples show the efficiency of the Dual Mesh Method applied to heterogeneous media and to an actual field case in South America.

  20. Application of clustering methods: Regularized Markov clustering (R-MCL) for analyzing dengue virus similarity

    Science.gov (United States)

    Lestari, D.; Raharjo, D.; Bustamam, A.; Abdillah, B.; Widhianto, W.

    2017-07-01

    Dengue virus consists of 10 different constituent proteins and is classified into 4 major serotypes (DEN 1 - DEN 4). This study was designed to perform clustering on 30 protein sequences of dengue virus taken from the Virus Pathogen Database and Analysis Resource (VIPR) using the Regularized Markov Clustering (R-MCL) algorithm, and to analyze the result. Using Python 3.4, the R-MCL algorithm produces 8 clusters with more than one centroid in several clusters. The number of centroids shows the density level of interaction. Protein interactions that are connected in a tissue form a complex protein that serves as a specific biological process unit. The analysis shows that R-MCL clustering produces clusters of the dengue virus family based on the similar roles of their constituent proteins, regardless of serotype.
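
    For reference, the R-MCL flow can be sketched in a few lines: unlike plain MCL, the expansion step multiplies the current flow matrix by the fixed canonical transition matrix before inflation. The following toy implementation is a sketch under the usual simplifying assumptions (dense matrices, self-loops added), not the study's code:

```python
import numpy as np

def rmcl(adjacency, inflation=2.0, n_iter=30):
    """Regularized Markov Clustering (R-MCL) sketch. The 'regularize' step
    multiplies the flow matrix by the canonical transition matrix M_G,
    then inflation (elementwise power + column normalization) sharpens
    the emerging clusters."""
    A = adjacency + np.eye(adjacency.shape[0])      # add self-loops
    M_G = A / A.sum(axis=0, keepdims=True)          # canonical transition matrix
    M = M_G.copy()
    for _ in range(n_iter):
        M = M_G @ M                                  # regularize (expansion)
        M = M ** inflation                           # inflation
        M = M / M.sum(axis=0, keepdims=True)
    # Nodes attracted to the same row are taken as one cluster.
    return M.argmax(axis=0)

# Two obvious 3-node cliques joined by a single edge.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
print("cluster labels:", rmcl(A))
```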

  1. Regular figures

    CERN Document Server

    Tóth, L Fejes; Ulam, S; Stark, M

    1964-01-01

    Regular Figures concerns the systematology and genetics of regular figures. The first part of the book deals with the classical theory of the regular figures. This topic includes descriptions of plane ornaments, spherical arrangements, hyperbolic tessellations, polyhedra, and regular polytopes. The problems of the geometry of the sphere and of the two-dimensional hyperbolic space are considered. Classical theory is explained as describing all possible symmetrical groupings in different spaces of constant curvature. The second part deals with the genetics of the regular figures and the inequalities fo

  2. A Consistent Adaptive-resolution Smoothed Particle Hydrodynamics Method

    Science.gov (United States)

    Pan, Wenxiao; Hu, Wei; Hu, Xiaozhe; Negrut, Dan; Univ of Wisconsin, Madison Collaboration; Tufts University Collaboration

    2017-11-01

    We seek to accelerate and increase the size of simulations for fluid-structure interactions (FSI) by using adaptive resolutions in the spatial discretization of the equations governing the time evolution of systems displaying two-way fluid-solid coupling. To this end, we propose an adaptive-resolution smoothed particle hydrodynamics (SPH) approach, in which spatial resolutions adaptively vary according to a recovery-based error estimator of velocity gradient as flow evolves. The second-order consistent discretization of spatial differential operators is employed to ensure the accuracy of the proposed method. The convergence, accuracy, and efficiency attributes of the new method are assessed by simulating different flows. In this process, the numerical results are compared to the analytical, finite element, and consistent SPH single-resolution solutions. We anticipate that the proposed adaptive-resolution method will enlarge the class of SPH-tractable FSI applications.

  3. Study on Self-adapting Processing Method in Radiant Image

    International Nuclear Information System (INIS)

    Shen Kuan; Cai Yufang; Duan Liming

    2009-01-01

    This paper describes the principles and characteristics of digital radiography. After analyzing the drawbacks of current processing methods and the specifics of the collected signals, a new self-adapting method based on the wavelet transform is applied to process radiation images. The method maps subsections of the signal to the 0-255 range to form several gray images, fuses these images into a new enhanced image, and then uses a nonlinear color assignment scheme to increase the image resolution. The experimental results show that the self-adapting processing method is better than traditional ones. (authors)

  4. An Adaptive Reordered Method for Computing PageRank

    Directory of Open Access Journals (Sweden)

    Yi-Ming Bu

    2013-01-01

    Full Text Available We propose an adaptive reordered method to deal with the PageRank problem. It has been shown that one can reorder the hyperlink matrix of the PageRank problem to calculate a reduced system and obtain the full PageRank vector through forward substitutions, which provides a speedup for calculating the PageRank vector. We observe that in the existing reordered method, the cost of the recursive reordering procedure can offset the computational reduction brought by minimizing the dimension of the linear system. With this observation, we introduce an adaptive reordered method to accelerate the total calculation, in which we terminate the reordering procedure appropriately instead of reordering to the end. Numerical experiments show the effectiveness of this adaptive reordered method.
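
    For orientation, the baseline that these reordered methods accelerate is the plain power iteration shown below; the recursive identification of the reduced system and the forward substitutions are not reproduced here. The toy matrix is assumed row-stochastic with no dangling pages.

```python
import numpy as np

def pagerank_power(P, alpha=0.85, tol=1e-10):
    """Basic PageRank power iteration: x <- alpha * P^T x + (1 - alpha)/n.
    P is a row-stochastic hyperlink matrix with no dangling (all-zero) rows;
    reordered methods instead permute P so part of the system is solved by
    forward substitution."""
    n = P.shape[0]
    x = np.full(n, 1.0 / n)
    while True:
        x_new = alpha * P.T @ x + (1.0 - alpha) / n
        if np.linalg.norm(x_new - x, 1) < tol:
            return x_new / x_new.sum()
        x = x_new

# 4-page toy web with no dangling pages.
P = np.array([[0, 1/2, 1/2, 0],
              [0, 0, 1, 0],
              [1/3, 1/3, 0, 1/3],
              [0, 0, 1, 0]])
print("PageRank:", pagerank_power(P))
```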

  5. 3D Inversion of Magnetic Data through Wavelet based Regularization Method

    Directory of Open Access Journals (Sweden)

    Maysam Abedi

    2015-06-01

    Full Text Available This study deals with the 3D recovery of a magnetic susceptibility model by incorporating sparsity-based constraints in the inversion algorithm. For this purpose, the area under prospect was divided into a large number of rectangular prisms in a mesh with unknown susceptibilities. Tikhonov cost functions with two sparsity functions were used to recover the smooth parts as well as the sharp boundaries of the model parameters. A pre-selected basis, namely wavelets, can recover the regions of smooth behaviour of the susceptibility distribution, while the Haar or finite-difference (FD) domains yield a solution with rough boundaries. Therefore, a regularizer function which benefits from the advantages of both wavelets and Haar/FD operators in representing the 3D magnetic susceptibility distribution was chosen for modeling magnetic anomalies. The optimum wavelet and the parameter β, which controls the weight of the two sparsifying operators, were also considered. The algorithm assumes that there is no remanent magnetization and that the observed magnetometry data represent only the induced magnetization effect. The proposed approach was applied to noise-corrupted synthetic data in order to demonstrate its suitability for 3D inversion of magnetic data. After obtaining satisfactory results, a case study pertaining to ground-based measurements of the magnetic anomaly over the Now Chun porphyry-Cu deposit, located in the Kerman province of Iran, was presented for 3D inversion. The low susceptibility in the constructed model coincides with the known location of copper ore mineralization.

  6. Introduction to Adaptive Methods for Differential Equations

    Science.gov (United States)

    Eriksson, Kenneth; Estep, Don; Hansbo, Peter; Johnson, Claes

    Knowing thus the Algorithm of this calculus, which I call Differential Calculus, all differential equations can be solved by a common method (Gottfried Wilhelm von Leibniz, 1646-1719).When, several years ago, I saw for the first time an instrument which, when carried, automatically records the number of steps taken by a pedestrian, it occurred to me at once that the entire arithmetic could be subjected to a similar kind of machinery so that not only addition and subtraction, but also multiplication and division, could be accomplished by a suitably arranged machine easily, promptly and with sure results. For it is unworthy of excellent men to lose hours like slaves in the labour of calculations, which could safely be left to anyone else if the machine was used. And now that we may give final praise to the machine, we may say that it will be desirable to all who are engaged in computations which, as is well known, are the managers of financial affairs, the administrators of others estates, merchants, surveyors, navigators, astronomers, and those connected with any of the crafts that use mathematics (Leibniz).

  7. Track and vertex reconstruction: From classical to adaptive methods

    International Nuclear Information System (INIS)

    Strandlie, Are; Fruehwirth, Rudolf

    2010-01-01

    This paper reviews classical and adaptive methods of track and vertex reconstruction in particle physics experiments. Adaptive methods have been developed to meet the experimental challenges at high-energy colliders, in particular, the CERN Large Hadron Collider. They can be characterized by the obliteration of the traditional boundaries between pattern recognition and statistical estimation, by the competition between different hypotheses about what constitutes a track or a vertex, and by a high level of flexibility and robustness achieved with a minimum of assumptions about the data. The theoretical background of some of the adaptive methods is described, and it is shown that there is a close connection between the two main branches of adaptive methods: neural networks and deformable templates, on the one hand, and robust stochastic filters with annealing, on the other hand. As both classical and adaptive methods of track and vertex reconstruction presuppose precise knowledge of the positions of the sensitive detector elements, the paper includes an overview of detector alignment methods and a survey of the alignment strategies employed by past and current experiments.

  8. Regular icosahedron

    OpenAIRE

    Mihelak, Veronika

    2016-01-01

    Here we collect properties of the regular icosahedron which are useful for students of mathematics or mathematics teachers who can prepare exercises for talented students in elementary or middle school. The initial section describes the basic properties of the regular polyhedra: tetrahedron, cube, dodecahedron, octahedron and, of course, icosahedron. We have proven that there are only five regular or Platonic solids and have verified Euler's polyhedron formula for them. Then we focused on selected p...
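
    The Euler-formula verification mentioned here amounts to a one-line check per solid, as in the following snippet:

```python
# Check Euler's polyhedron formula V - E + F = 2 for the five Platonic solids.
solids = {
    "tetrahedron":  (4, 6, 4),
    "cube":         (8, 12, 6),
    "octahedron":   (6, 12, 8),
    "dodecahedron": (20, 30, 12),
    "icosahedron":  (12, 30, 20),
}
for name, (V, E, F) in solids.items():
    print(f"{name:13s} V - E + F = {V - E + F}")   # 2 for every solid
```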

  9. Exact Solutions of the Space Time Fractional Symmetric Regularized Long Wave Equation Using Different Methods

    Directory of Open Access Journals (Sweden)

    Özkan Güner

    2014-01-01

    Full Text Available We apply the functional variable method, the exp-function method, and the (G′/G)-expansion method to establish exact solutions of the nonlinear fractional partial differential equation (NLFPDE) in the sense of the modified Riemann-Liouville derivative. As a result, some new exact solutions are obtained. The results show that these methods are very effective and powerful mathematical tools for solving nonlinear fractional equations arising in mathematical physics; consequently, they can also be applied to other nonlinear fractional differential equations.

  10. An optimal adaptive wavelet method without coarsening of the iterands

    NARCIS (Netherlands)

    Gantumur, T.; Harbrecht, H.; Stevenson, R.

    2007-01-01

    In this paper, an adaptive wavelet method for solving linear operator equations is constructed that is a modification of the method from [Math. Comp, 70 (2001), pp. 27-75] by Cohen, Dahmen and DeVore, in the sense that there is no recurrent coarsening of the iterands. Despite this, it will be shown

  11. Maximal γ-regularity

    NARCIS (Netherlands)

    Van Neerven, J.M.A.M.; Veraar, M.C.; Weis, L.

    2015-01-01

    In this paper, we prove maximal regularity estimates in “square function spaces” which are commonly used in harmonic analysis, spectral theory, and stochastic analysis. In particular, they lead to a new class of maximal regularity results for both deterministic and stochastic equations in L^p

  12. The adaptive collision source method for discrete ordinates radiation transport

    International Nuclear Information System (INIS)

    Walters, William J.; Haghighat, Alireza

    2017-01-01

    Highlights: • A new adaptive quadrature method to solve the discrete ordinates transport equation. • The adaptive collision source (ACS) method splits the flux into n’th collided components. • The uncollided flux requires a high quadrature order; this is lowered with the number of collisions. • ACS automatically applies an appropriate quadrature order to each collided component. • The adaptive quadrature is 1.5–4 times more efficient than uniform quadrature. - Abstract: A novel collision source method has been developed to solve the Linear Boltzmann Equation (LBE) more efficiently by adaptation of the angular quadrature order. The angular adaptation method is unique in that the flux from each scattering source iteration is obtained with potentially a different quadrature order used for each. Traditionally, the flux from every iteration is combined, with the same quadrature applied to the combined flux. Since the scattering process tends to distribute the radiation more evenly over angles (i.e., make it more isotropic), the quadrature requirements generally decrease with each iteration. This method allows for an optimal use of processing power, by using a high order quadrature for the first iterations that need it, before shifting to lower order quadratures for the remaining iterations. This is essentially an extension of the first collision source method, and is referred to as the adaptive collision source (ACS) method. The ACS methodology has been implemented in the 3-D, parallel, multigroup discrete ordinates code TITAN. The code was tested on several simple and complex fixed-source problems. The ACS implementation in TITAN has shown a reduction in computation time by a factor of 1.5–4 on the fixed-source test problems, for the same desired level of accuracy, as compared to the standard TITAN code.
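
    A schematic of the ACS loop described in the abstract is given below. The transport sweep and scattering operator are toy stand-ins (a real implementation would perform an S_N solve whose cost grows with the quadrature order); only the control flow, with decreasing quadrature orders per collided component, reflects the method.

```python
import numpy as np

def toy_sweep(source, quadrature_order):
    """Hypothetical stand-in for a discrete ordinates sweep: here just a
    damped copy of the source. A real solver would perform an S_N transport
    solve whose cost grows with quadrature_order."""
    return source / (1.0 + 1.0 / quadrature_order)

def toy_scatter(flux, c=0.5):
    """Stand-in isotropic scattering source (scattering ratio c)."""
    return c * flux

def acs_solve(q0, orders=(16, 8, 4, 2, 2, 2), tol=1e-8):
    """Accumulate flux = sum of n'th-collided components, each computed
    with its own (decreasing) quadrature order."""
    source, total = q0, np.zeros_like(q0)
    for order in orders:                      # decreasing quadrature orders
        component = toy_sweep(source, order)  # n'th collided flux component
        total += component
        if np.linalg.norm(component) < tol * np.linalg.norm(total):
            break                             # further collisions negligible
        source = toy_scatter(component)       # drives the next collision
    return total

print(acs_solve(np.ones(5)))
```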

  13. Conserved charges of Schwarzschild-NUT-AdS space-time using the method of regularization through relocalization

    Science.gov (United States)

    Nashed, G. G. L.

    2015-10-01

    A tetrad field which gives the Schwarzschild-NUT-AdS metric is provided. We calculate the total conserved charges of this tetrad. This is done by using “regularization through relocalization” for the first time. This method gives the correct value of the total charge of the Schwarzschild-NUT-AdS space-time, which depends on the gravitational mass of the system. We show that the NUT parameter has no physical effect on the conserved quantities of the Schwarzschild-NUT-AdS space-time.

  14. Extended generalized Lagrangian multipliers for magnetohydrodynamics using adaptive multiresolution methods

    Directory of Open Access Journals (Sweden)

    Domingues M. O.

    2013-12-01

    Full Text Available We present a new adaptive multiresolution method for the numerical simulation of ideal magnetohydrodynamics. The governing equations, i.e., the compressible Euler equations coupled with the Maxwell equations, are discretized using a finite volume scheme on a two-dimensional Cartesian mesh. Adaptivity in space is obtained via Harten's cell average multiresolution analysis, which allows the reliable introduction of a locally refined mesh while controlling the error. The explicit time discretization uses a compact Runge-Kutta method for local time stepping and an embedded Runge-Kutta scheme for automatic time step control. An extended generalized Lagrangian multiplier approach with the mixed hyperbolic-parabolic correction type is used to control the incompressibility of the magnetic field. Applications to a two-dimensional problem illustrate the properties of the method. Memory savings and the numerical divergence of the magnetic field are reported, and the accuracy of the adaptive computations is assessed by comparison with the available exact solution.

  15. Adaptive Subband Filtering Method for MEMS Accelerometer Noise Reduction

    Directory of Open Access Journals (Sweden)

    Piotr PIETRZAK

    2008-12-01

    Full Text Available Silicon microaccelerometers can be considered as an alternative to high-priced piezoelectric sensors. Unfortunately, the relatively high noise floor of commercially available MEMS (Micro-Electro-Mechanical Systems) sensors limits the possibility of their use in condition monitoring systems for rotating machines. The solution to this problem is the signal filtering method described in the paper. It is based on adaptive subband filtering employing an Adaptive Line Enhancer. For the adaptation of the filter weights, two novel algorithms based on the NLMS algorithm have been developed. Both of them significantly simplify its software and hardware implementation and accelerate the adaptation process. The paper also presents the software (Matlab) and hardware (FPGA) implementation of the proposed noise filter. In addition, the results of the performed tests are reported; they confirm the high efficiency of the solution.
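
    The ALE/NLMS combination described here is standard enough to sketch: the filter predicts the current sample from a delayed copy of the signal, so the periodic (machine-related) component is reproduced at the output while broadband MEMS noise falls into the prediction error. The parameters below are illustrative, not the paper's.

```python
import numpy as np

def nlms_line_enhancer(x, order=16, mu=0.5, delay=1, eps=1e-8):
    """Adaptive Line Enhancer with NLMS weight adaptation: predicts the
    correlated (signal) part of x from its delayed past; broadband sensor
    noise is unpredictable and ends up in the error, so the filter output
    is a denoised version of the input."""
    w = np.zeros(order)
    out = np.zeros_like(x)
    for n in range(order + delay, len(x)):
        u = x[n - delay - order:n - delay][::-1]   # delayed reference vector
        y = w @ u
        e = x[n] - y
        w += mu * e * u / (eps + u @ u)            # normalized LMS update
        out[n] = y
    return out

# Sinusoid (machine vibration line) buried in white MEMS-like noise.
rng = np.random.default_rng(5)
t = np.arange(4000)
clean = np.sin(2 * np.pi * 0.05 * t)
x = clean + 0.7 * rng.normal(size=t.size)
y = nlms_line_enhancer(x)
print("noise power in vs out:",
      np.var(x[500:] - clean[500:]), np.var(y[500:] - clean[500:]))
```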

  16. Islanding detection scheme based on adaptive identifier signal estimation method.

    Science.gov (United States)

    Bakhshi, M; Noroozian, R; Gharehpetian, G B

    2017-11-01

    This paper proposes a novel, passive anti-islanding method for both inverter and synchronous machine-based distributed generation (DG) units. Unfortunately, when the active/reactive power mismatches are near zero, the majority of passive anti-islanding methods cannot detect the islanding situation correctly. This study introduces a new islanding detection method based on an exponentially damped signal estimation method. The proposed method uses an adaptive identifier to estimate the frequency deviation of the point of common coupling (PCC) link as a target signal, so that it can detect the islanding condition even with near-zero active power imbalance. The main advantage of the adaptive identifier method over other signal estimation methods is its small sampling window. The adaptive identifier based islanding detection method introduces a new detection index, entitled the decision signal, obtained by estimating the oscillation frequency of the PCC frequency, and can detect islanding conditions properly. In islanding conditions, the oscillation frequency of the PCC frequency reaches zero; thus, threshold setting for the decision signal is not a tedious job. Non-islanding transient events which can cause a significant deviation in the PCC frequency are considered in the simulations. These events include different types of faults, load changes, capacitor bank switching, and motor starting. Further, for islanding events, the capability of the proposed islanding detection method is verified with near-zero active power mismatches. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.

  17. Final Report: Symposium on Adaptive Methods for Partial Differential Equations

    Energy Technology Data Exchange (ETDEWEB)

    Pernice, M.; Johnson, C.R.; Smith, P.J.; Fogelson, A.

    1998-12-10

    OAK-B135 Final Report: Symposium on Adaptive Methods for Partial Differential Equations. Complex physical phenomena often include features that span a wide range of spatial and temporal scales. Accurate simulation of such phenomena can be difficult to obtain, and computations that are under-resolved can even exhibit spurious features. While it is possible to resolve small scale features by increasing the number of grid points, global grid refinement can quickly lead to problems that are intractable, even on the largest available computing facilities. These constraints are particularly severe for three dimensional problems that involve complex physics. One way to achieve the needed resolution is to refine the computational mesh locally, in only those regions where enhanced resolution is required. Adaptive solution methods concentrate computational effort in regions where it is most needed. These methods have been successfully applied to a wide variety of problems in computational science and engineering. Adaptive methods can be difficult to implement, prompting the development of tools and environments to facilitate their use. To ensure that the results of their efforts are useful, algorithm and tool developers must maintain close communication with application specialists. Conversely it remains difficult for application specialists who are unfamiliar with the methods to evaluate the trade-offs between the benefits of enhanced local resolution and the effort needed to implement an adaptive solution method.

  18. Mapping of Primary Instructional Methods and Teaching Techniques for Regularly Scheduled, Formal Teaching Sessions in an Anesthesia Residency Program

    DEFF Research Database (Denmark)

    Vested Madsen, Matias; Macario, Alex; Yamamoto, Satoshi

    2016-01-01

    In this study, we examined the regularly scheduled, formal teaching sessions in a single anesthesiology residency program to (1) map the most common primary instructional methods, (2) map the use of 10 known teaching techniques, and (3) assess if residents scored sessions that incorporated active… …question written survey rating the session. The most common primary instructional methods were computer slides-based classroom lectures (66%), workshops (15%), simulations (5%), and journal club (5%). The number of teaching techniques used per formal teaching session averaged 5.31 (SD, 1.92; median, 5)… …had a mean score of 8.44 (range, 5-10; median, 9; SD, 1.2) compared with a mean score of 8.63 (range, 5-10; median, 9; SD, 1.1) for active sessions (P = 0.63). Slides-based classroom lectures were the most common instructional method, and faculty used an average of 5 known teaching techniques per…

  19. Teaching Rapid and Slow Learners in High Schools: The Status of Adaptations in Junior, Senior, and Regular High Schools Enrolling More than 300 Pupils. Bulletin, 1954, No. 5

    Science.gov (United States)

    Jewett, Arno; Hull, J. Dan; Brown, Kenneth E.; Cummings, Howard H.; Johnson, Philip G.; Laxson, Mary; Ludington, John R.; Mallory, Berenice; Segel, David

    1954-01-01

    This bulletin represents a cooperative effort of nine secondary school specialists in the Office of Education to picture the provisions used in large high schools to adapt teaching methods in different subjects for pupils who are not average. It is recognized that a pupil may be a rapid learner in one subject and a slow learner in another. Each of…

  20. The Standard Days Method(®): efficacy, satisfaction and demand at regular family planning service delivery settings in Turkey.

    Science.gov (United States)

    Kursun, Zerrin; Cali, Sanda; Sakarya, Sibel

    2014-06-01

    To evaluate the demand, efficacy, and satisfaction concerning the Standard Days Method(®) (SDM; a fertility awareness method) as an option presented among other contraceptive methods at regular service delivery settings. The survey group consisted of 993 women who presented at the primary care units in Umraniye District of Istanbul, Turkey, between 1 October 2006 and 31 March 2008, and started to use a new method. Women were enrolled until reaching a limit of 250 new users for each method, or expiration of the six-month registration period. Participants were followed for up to one year of method use. The characteristics of women who chose the SDM were similar to those of participants who opted for other methods. The most common reasons for selecting it were that it is natural and causes no side effects. Fifty-one percent used the SDM for the full year, compared to 71% who chose an intrauterine device (IUD). Continuation rates were significantly lower for all other methods. During the one-year follow-up period, 12% of SDM-, 7% of pill-, 7% of condom-, 3% of monthly injection-, 1% of quarterly injection-, and 0.5% of IUD users became pregnant. The SDM had relatively high continuation rates and relatively good levels of satisfaction among participants and their husbands. It should be mentioned among the routinely offered contraceptive methods.

  1. A comparative analysis of the EEDF obtained by Regularization and by Least square fit methods

    International Nuclear Information System (INIS)

    Gutierrez T, C.; Flores Ll, H.

    2004-01-01

    The second derivative of the characteristic current-voltage (I-V) curve of a Langmuir probe is numerically calculated using the Tikhonov method in order to determine the electron energy distribution function (EEDF). A comparison between the EEDF obtained this way and one obtained by a least-squares (LS) fit is discussed. The experimental I-V curve is obtained with a cylindrical probe in an electron cyclotron resonance (ECR) plasma source. The plasma parameters are determined from the EEDF by means of the Laframboise theory. For the LS fit, the results obtained are similar to those obtained by the Tikhonov method, but in the first case the procedure is slow to achieve the best fit. (Author)
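
    A common way to stabilize this numerical second derivative is Tikhonov smoothing of the measured curve before differentiating. The sketch below is a simplified stand-in for the record's procedure; the penalty structure and the parameter value are our assumptions.

```python
import numpy as np

def tikhonov_second_derivative(V, I, lam):
    """Estimate d^2 I / dV^2 from noisy probe data by Tikhonov smoothing:
    f = argmin ||f - I||^2 + lam ||D2 f||^2, then differentiate the smooth
    fit. D2 is the second-difference operator; lam trades noise rejection
    against bias. Assumes a uniform voltage grid."""
    n = len(I)
    D2 = np.diff(np.eye(n), 2, axis=0)
    f = np.linalg.solve(np.eye(n) + lam * D2.T @ D2, I)
    h = V[1] - V[0]
    return np.gradient(np.gradient(f, h), h)

V = np.linspace(-5, 5, 201)
I_clean = np.exp(-0.5 * V**2)              # toy stand-in for an I-V curve
rng = np.random.default_rng(6)
I_noisy = I_clean + 0.01 * rng.normal(size=V.size)
d2 = tikhonov_second_derivative(V, I_noisy, lam=1e3)
print("max |error| vs analytic:", np.max(np.abs(d2 - (V**2 - 1) * I_clean)))
```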

  2. Nonlinear Projective-Iteration Methods for Solving Transport Problems on Regular and Unstructured Grids

    International Nuclear Information System (INIS)

    Dmitriy Y. Anistratov; Adrian Constantinescu; Loren Roberts; William Wieselquist

    2007-01-01

    This is a project in the field of fundamental research on numerical methods for solving the particle transport equation. Numerous practical problems require the use of unstructured meshes, for example, detailed nuclear reactor assembly-level calculations, large-scale reactor core calculations, radiative hydrodynamics problems, where the mesh is determined by hydrodynamic processes, and well-logging problems in which the media structure has very complicated geometry. Currently this is an area of very active research in numerical transport theory. The main issues in developing numerical methods for solving the transport equation are the accuracy of the numerical solution and the effectiveness of the iteration procedure. The problem in the case of unstructured grids is that it is very difficult to derive an iteration algorithm that is unconditionally stable.

  3. The research of contamination regularities of historical buildings and architectural monuments by methods of computer modeling

    Directory of Open Access Journals (Sweden)

    Kuzmichev Andrey A.

    2017-01-01

    Full Text Available Due to rapid urbanization and the rapid development of industry, the external appearance of buildings and architectural monuments in the urban environment requires special attention from the standpoint of visual ecology. Dust deposition from polluted atmospheric air is one of the key aspects of the degradation of building facades. With the help of modern computer modeling methods, it is possible to evaluate the impact of polluted atmospheric air on the external facades of buildings in order to preserve them.

  4. Quantification of organ motion based on an adaptive image-based scale invariant feature method

    Energy Technology Data Exchange (ETDEWEB)

    Paganelli, Chiara [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, piazza L. Da Vinci 32, Milano 20133 (Italy); Peroni, Marta [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, piazza L. Da Vinci 32, Milano 20133, Italy and Paul Scherrer Institut, Zentrum für Protonentherapie, WMSA/C15, CH-5232 Villigen PSI (Italy); Baroni, Guido; Riboldi, Marco [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, piazza L. Da Vinci 32, Milano 20133, Italy and Bioengineering Unit, Centro Nazionale di Adroterapia Oncologica, strada Campeggi 53, Pavia 27100 (Italy)

    2013-11-15

    Purpose: The availability of corresponding landmarks in IGRT image series allows quantifying the inter and intrafractional motion of internal organs. In this study, an approach for the automatic localization of anatomical landmarks is presented, with the aim of describing the nonrigid motion of anatomo-pathological structures in radiotherapy treatments according to local image contrast.Methods: An adaptive scale invariant feature transform (SIFT) was developed from the integration of a standard 3D SIFT approach with a local image-based contrast definition. The robustness and invariance of the proposed method to shape-preserving and deformable transforms were analyzed in a CT phantom study. The application of contrast transforms to the phantom images was also tested, in order to verify the variation of the local adaptive measure in relation to the modification of image contrast. The method was also applied to a lung 4D CT dataset, relying on manual feature identification by an expert user as ground truth. The 3D residual distance between matches obtained in adaptive-SIFT was then computed to verify the internal motion quantification with respect to the expert user. Extracted corresponding features in the lungs were used as regularization landmarks in a multistage deformable image registration (DIR) mapping the inhale vs exhale phase. The residual distances between the warped manual landmarks and their reference position in the inhale phase were evaluated, in order to provide a quantitative indication of the registration performed with the three different point sets.Results: The phantom study confirmed the method invariance and robustness properties to shape-preserving and deformable transforms, showing residual matching errors below the voxel dimension. The adapted SIFT algorithm on the 4D CT dataset provided automated and accurate motion detection of peak to peak breathing motion. The proposed method resulted in reduced residual errors with respect to standard SIFT

  5. Wavelet methods in multi-conjugate adaptive optics

    International Nuclear Information System (INIS)

    Helin, T; Yudytskiy, M

    2013-01-01

    The next generation ground-based telescopes rely heavily on adaptive optics for overcoming the limitation of atmospheric turbulence. In the future adaptive optics modalities, like multi-conjugate adaptive optics (MCAO), atmospheric tomography is the major mathematical and computational challenge. In this severely ill-posed problem, a fast and stable reconstruction algorithm is needed that can take into account many real-life phenomena of telescope imaging. We introduce a novel reconstruction method for the atmospheric tomography problem and demonstrate its performance and flexibility in the context of MCAO. Our method is based on using locality properties of compactly supported wavelets, both in the spatial and frequency domains. The reconstruction in the atmospheric tomography problem is obtained by solving the Bayesian MAP estimator with a conjugate-gradient-based algorithm. An accelerated algorithm with preconditioning is also introduced. Numerical performance is demonstrated on the official end-to-end simulation tool OCTOPUS of European Southern Observatory. (paper)

  6. On Self-Adaptive Method for General Mixed Variational Inequalities

    Directory of Open Access Journals (Sweden)

    Abdellah Bnouhachem

    2008-01-01

    We suggest and analyze a new self-adaptive method for solving general mixed variational inequalities, which can be viewed as an improvement of the method of Noor (2003). Global convergence of the new method is proved under the same assumptions as Noor's method. Some preliminary computational results are given to illustrate the efficiency of the proposed method. Since the general mixed variational inequalities include general variational inequalities, quasivariational inequalities, and nonlinear (implicit) complementarity problems as special cases, the results proved in this paper continue to hold for these problems.

  7. SIMULATION OF PULSED BREAKDOWN IN HELIUM BY ADAPTIVE METHODS

    Directory of Open Access Journals (Sweden)

    S. I. Eliseev

    2014-09-01

    The paper deals with the processes occurring during electrical breakdown in gases, as well as the numerical simulation of these processes using adaptive mesh refinement methods. A discharge between needle electrodes in helium at atmospheric pressure is selected for the test simulation. The physical model of the accompanying breakdown processes is based on a self-consistent system of continuity equations for the fluxes of charged particles (electrons and positive ions) and the Poisson equation for the electric potential. Sharp plasma heterogeneity in the streamer region requires adaptive algorithms for constructing the computational grids used in modeling. The method for adaptive grid construction, together with a justification of its effectiveness for simulating significantly unsteady gas breakdown at atmospheric pressure, is described. An upgraded version of the Gerris package is used for the numerical simulation of electrical gas breakdown. The software package, originally focused on solving nonlinear problems in fluid dynamics, proves suitable for modeling processes in non-stationary plasma described by continuity equations. The use of adaptive grids makes it possible to obtain an adequate numerical model of breakdown development in the needle-electrode system. Breakdown dynamics is illustrated by contour plots of electron densities and electric field intensity obtained in the course of the solution. The formation mechanism of positive and negative (anode-directed) streamers is demonstrated and analyzed. Correspondence between the adaptive construction of the computational grid and the generated plasma gradients is shown. The obtained results can be used as a basis for full-scale numerical experiments on electric breakdown in gases.

  8. Investigation of physical regularities in gamma gamma logging of oil wells by Monte Carlo method

    International Nuclear Information System (INIS)

    Gulin, Yu.A.

    1973-01-01

    Some results of Monte Carlo calculations for specific problems of gamma-gamma density logging are given. The paper considers the influence of probe length and of the volume density of the rocks; the angular distribution of the scattered radiation incident on the instrument; the spectra of the recorded radiation and of the source radiation; the depth of investigation; the effect of the mud cake; the possibility of collimating the source radiation; the choice of source, initial collimation angles, the optimum angle for recording scattered gamma-radiation and the radiation discrimination threshold; and the possibility of determining the mineralogical composition of rocks in sections of oil wells and of identifying once-scattered radiation. (author)

  9. Adaptation-II of the surrogate methods for linear programming ...

    African Journals Online (AJOL)

    Adaptation-II of the surrogate methods for linear programming problems. SO Oko. Abstract. No Abstract. Global Journal of Mathematical Sciences Vol. 5(1) 2006: 63-71. http://dx.doi.org/10.4314/gjmas.v5i1.21381.

  10. An adaptive image denoising method based on local parameters ...

    Indian Academy of Sciences (India)

    An adaptive image denoising method based on local parameters optimization. The directional decomposition is done using directional filter banks (DFB). Then, the Donoho and Johnstone threshold is used to modify the coefficients, which in turn provide the noise-free image on applying the ...
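
    The Donoho-Johnstone rule referenced above is standard: soft-threshold the transform coefficients at t = sigma*sqrt(2 ln n). A minimal sketch (Python/NumPy; in practice sigma is estimated from the finest-scale subband only, whereas all coefficients are used here for brevity):

        import numpy as np

        def universal_soft_threshold(coeffs):
            # Donoho-Johnstone universal threshold t = sigma * sqrt(2 ln n),
            # with a robust median-absolute-deviation noise estimate.
            c = np.asarray(coeffs, dtype=float)
            sigma = np.median(np.abs(c)) / 0.6745
            t = sigma * np.sqrt(2.0 * np.log(c.size))
            return np.sign(c) * np.maximum(np.abs(c) - t, 0.0)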

  11. Space-time adaptive wavelet methods for parabolic evolution problems

    NARCIS (Netherlands)

    Schwab, C.; Stevenson, R.

    2009-01-01

    With respect to space-time tensor-product wavelet bases, parabolic initial boundary value problems are equivalently formulated as bi-infinite matrix problems. Adaptive wavelet methods are shown to yield sequences of approximate solutions which converge at the optimal rate. In case the spatial domain

  12. EXPERIMENTAL AND ANALYTICAL METHOD FOR ACCELERATED FATIGUE BENCH TEST OF STRUCTURES AT REGULAR MULTI-CYCLE LOADING

    Directory of Open Access Journals (Sweden)

    E. K. Pochtenny

    2006-01-01

    The paper presents the main statements of the developed general scientific principles and an experimental-analytical method for accelerated bench testing of bearing structures and machine parts under regular loading. Based on test results obtained with the proposed methodology, it is possible to predict the service life of a number of automotive bearing structures under conditions of irregular loading. The developed method has been used for bench tests and for the calculation-and-experimental estimation of the service life of a truck tractor frame, prospective types of axles, elements of trailer train suspension, and other bearing structures of automotive machinery of the Minsk Motor-Works.

  13. Three-dimension Cole-Cole model inversion of induced polarization data based on regularized conjugate gradient method

    Science.gov (United States)

    Xu, Zhengwei

    Modeling of induced polarization (IP) phenomena is important for developing effective methods for remote sensing of subsurface geology and is widely used in mineral exploration. However, the quantitative interpretation of IP data in a complex 3D environment is still a challenging problem of applied geophysics. In this dissertation I use the regularized conjugate gradient method to determine the 3D distribution of the four parameters of the Cole-Cole model based on surface induced polarization (IP) data. This method takes into account the nonlinear nature of both electromagnetic induction (EMI) and IP phenomena. The solution of the 3D IP inverse problem is based on regularized smooth inversion only. The method was tested on synthetic models with DC conductivity, intrinsic chargeability, time constant, and relaxation parameters, and it was also applied to practical 3D IP survey data. I demonstrate that the four parameters of the Cole-Cole model (the DC electrical resistivity rho0, the chargeability eta, the time constant tau, and the relaxation parameter C) can be recovered from the observed IP data simultaneously. In other words, within each cell there are four inversion parameters, DC conductivity (sigma0), chargeability (eta), time constant (tau), and relaxation parameter (C), compared with conductivity only in an EM-only inversion. In addition to the larger number of inversion parameters, the IP survey uses a dipole-dipole configuration, which requires more sources and receivers. On the other hand, calculating the Green's tensor and the Fréchet matrix is time-consuming, and storing them requires a lot of memory. I therefore developed a parallel computation using the MATLAB parallel toolbox to speed up the calculation.
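
    The forward Cole-Cole response itself has a standard closed form (the Pelton parameterization, with the four parameters recovered above); a minimal NumPy sketch of its evaluation, with an illustrative function name:

        import numpy as np

        def cole_cole_resistivity(omega, rho0, eta, tau, c):
            # Complex resistivity of the (Pelton) Cole-Cole model:
            #   rho(w) = rho0 * (1 - eta * (1 - 1 / (1 + (i*w*tau)**c)))
            iwt_c = (1j * np.asarray(omega) * tau) ** c
            return rho0 * (1.0 - eta * (1.0 - 1.0 / (1.0 + iwt_c)))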

  14. Use of dynamic grid adaption in the ASWR-method

    International Nuclear Information System (INIS)

    Graf, U.; Romstedt, P.; Werner, W.

    1985-01-01

    A dynamic grid adaption method has been developed for use with the ASWR-method. The method automatically adapts the number and position of the spatial meshpoints as the solution of hyperbolic or parabolic vector partial differential equations progresses in time. The mesh selection algorithm is based on the minimization of the L2-norm of the spatial discretization error. The method permits accurate calculation of the evolution of inhomogeneities like wave fronts, shock layers and other sharp transitions, while generally using a coarse computational grid. The number of required mesh points is significantly reduced, relative to a fixed Eulerian grid. Since the mesh selection algorithm is computationally inexpensive, a corresponding reduction of computing time results

  15. An Adaptive Laboratory Evolution Method to Accelerate Autotrophic Metabolism

    DEFF Research Database (Denmark)

    Zhang, Tian; Tremblay, Pier-Luc

    2018-01-01

    Adaptive laboratory evolution (ALE) is an approach enabling the development of novel characteristics in microbial strains via the application of a constant selection pressure. This method is also an efficient tool to acquire insights on molecular mechanisms responsible for specific phenotypes. AL...... autotrophically and reducing CO2 into acetate more efficiently. Strains developed via this ALE method were also used to gain knowledge on the autotrophic metabolism of S. ovata as well as other acetogenic bacteria....

  16. Panchromatic cooperative hyperspectral adaptive wide band deletion repair method

    Science.gov (United States)

    Jiang, Bitao; Shi, Chunyu

    2018-02-01

    In hyperspectral data, the phenomenon of stripe loss (missing bands) often occurs, which seriously affects the efficiency and accuracy of data analysis and application. Narrow-band loss can be repaired directly by interpolation, but this method is not ideal for repairing wide-band loss. In this paper, an adaptive spectral wide-band missing-data restoration method based on panchromatic information is proposed, and the effectiveness of the algorithm is verified by experiments.

  17. Method and system for environmentally adaptive fault tolerant computing

    Science.gov (United States)

    Copenhaver, Jason L. (Inventor); Jeremy, Ramos (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)

    2010-01-01

    A method and system for adapting fault tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment. An on-board processing system's sensitivity to the measured environmental condition is measured. It is determined whether to reconfigure a fault tolerance of the on-board processing system based in part on the measured environmental condition. The fault tolerance of the on-board processing system may be reconfigured based in part on the measured environmental condition.

  18. Discrete linear canonical transform computation by adaptive method.

    Science.gov (United States)

    Zhang, Feng; Tao, Ran; Wang, Yue

    2013-07-29

    The linear canonical transform (LCT) describes the effect of quadratic phase systems on a wavefield and generalizes many optical transforms. In this paper, the computation method for the discrete LCT using the adaptive least-mean-square (LMS) algorithm is presented. The computation approaches of the block-based discrete LCT and the stream-based discrete LCT using the LMS algorithm are derived, and the implementation structures of these approaches by the adaptive filter system are considered. The proposed computation approaches have the inherent parallel structures which make them suitable for efficient VLSI implementations, and are robust to the propagation of possible errors in the computation process.
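
    For context, the core LMS update underlying both the block-based and stream-based approaches above is only a few lines. The sketch below (Python/NumPy) shows a generic LMS filter tracking a desired sequence, not the paper's discrete-LCT structures; the tap count and step size are illustrative:

        import numpy as np

        def lms_filter(x, d, n_taps=8, mu=0.01):
            # Classic LMS: adapt weights w so that w . x[window] tracks d[k].
            w = np.zeros(n_taps)
            y = np.zeros(len(x))
            for k in range(n_taps, len(x)):
                u = x[k - n_taps:k][::-1]   # most recent sample first
                y[k] = w @ u
                e = d[k] - y[k]             # a-priori error
                w += mu * e * u             # stochastic-gradient update
            return y, w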

  19. Adaptive sampling method in deep-penetration particle transport problem

    International Nuclear Information System (INIS)

    Wang Ruihong; Ji Zhicheng; Pei Lucheng

    2012-01-01

    The deep-penetration problem has been one of the difficult problems in shielding calculations with the Monte Carlo method for several decades. In this paper, a particle-transport random-walk system that treats the emission point as a sampling station is built. Then, an adaptive sampling scheme is derived to obtain a better solution from the accumulated information. The main advantage of the adaptive scheme is that it chooses the most suitable number of samples from the emission-point station so as to minimize the total cost of the random walk. Further, the related importance sampling method is introduced. Its main principle is to define an importance function of the particle state and to ensure that the number of samples emitted is proportional to the importance function. The numerical results show that the adaptive scheme with the emission point as a station can overcome, to some degree, the difficulty of underestimating the result, and that the adaptive importance sampling method gives satisfactory results as well. (authors)
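
    The importance-sampling principle invoked above is the standard reweighting identity E_p[f] = E_q[f p/q]. A generic sketch (Python/NumPy), not the paper's deep-penetration scheme; the density and sampler arguments are assumptions of this illustration:

        import numpy as np

        def importance_sampled_mean(f, p_pdf, q_sampler, q_pdf, n=100_000, seed=0):
            # Estimate E_p[f(X)] by drawing from a biased density q and
            # reweighting each sample with w = p(x) / q(x).
            rng = np.random.default_rng(seed)
            x = q_sampler(rng, n)
            w = p_pdf(x) / q_pdf(x)
            return float(np.mean(w * f(x)))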

  20. Comparison of methods for optimal choice of the regularization parameter for linear electrical impedance tomography of brain function.

    Science.gov (United States)

    Abascal, Juan-Felipe P J; Arridge, Simon R; Bayford, Richard H; Holder, David S

    2008-11-01

    Electrical impedance tomography has the potential to provide a portable non-invasive method for imaging brain function. Clinical data collection has largely been undertaken with time difference data and linear image reconstruction methods. The purpose of this work was to determine the best method for selecting the regularization parameter of the inverse procedure, using the specific application of evoked brain activity in neonatal babies as an exemplar. The solution error norm and image SNR for the L-curve (LC), discrepancy principle (DP), generalized cross validation (GCV) and unbiased predictive risk estimator (UPRE) selection methods were evaluated in simulated data using an anatomically accurate finite element method (FEM) of the neonatal head and impedance changes due to blood flow in the visual cortex recorded in vivo. For simulated data, LC, GCV and UPRE were equally best. In human data in four neonatal infants, no significant differences were found among selection methods. We recommend that GCV or LC be employed for reconstruction of human neonatal images, as UPRE requires an empirical estimate of the noise variance.
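
    Of the selection rules compared above, GCV has a particularly compact form for Tikhonov regularization via the SVD filter factors f_i = s_i^2/(s_i^2 + lambda^2). A minimal sketch (Python/NumPy, generic dense-matrix form, not the authors' EIT implementation):

        import numpy as np

        def gcv_lambda(A, b, lambdas):
            # Choose the Tikhonov parameter minimizing
            #   G(lam) = ||A x_lam - b||^2 / trace(I - A A_lam^+)^2,
            # evaluated cheaply through SVD filter factors.
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            beta = U.T @ b
            res0 = b @ b - beta @ beta          # residual outside the range of A
            best_g, best_lam = np.inf, None
            for lam in lambdas:
                f = s**2 / (s**2 + lam**2)      # filters for ||Ax-b||^2 + lam^2 ||x||^2
                res = np.sum(((1.0 - f) * beta) ** 2) + res0
                tr = len(b) - np.sum(f)         # effective residual degrees of freedom
                g = res / tr**2
                if g < best_g:
                    best_g, best_lam = g, lam
            return best_lam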

  1. An adaptive sampling and windowing interrogation method in PIV

    Science.gov (United States)

    Theunissen, R.; Scarano, F.; Riethmuller, M. L.

    2007-01-01

    This study proposes a cross-correlation based PIV image interrogation algorithm that adapts the number of interrogation windows and their size to the image properties and to the flow conditions. The proposed methodology releases the constraint of uniform sampling rate (Cartesian mesh) and spatial resolution (uniform window size) commonly adopted in PIV interrogation. Especially in non-optimal experimental conditions where the flow seeding is inhomogeneous, this leads either to loss of robustness (too few particles per window) or measurement precision (too large or coarsely spaced interrogation windows). Two criteria are investigated, namely adaptation to the local signal content in the image and adaptation to local flow conditions. The implementation of the adaptive criteria within a recursive interrogation method is described. The location and size of the interrogation windows are locally adapted to the image signal (i.e., seeding density). Also the local window spacing (commonly set by the overlap factor) is put in relation with the spatial variation of the velocity field. The viability of the method is illustrated over two experimental cases where the limitation of a uniform interrogation approach appears clearly: a shock-wave-boundary layer interaction and an aircraft vortex wake. The examples show that the spatial sampling rate can be adapted to the actual flow features and that the interrogation window size can be arranged so as to follow the spatial distribution of seeding particle images and flow velocity fluctuations. In comparison with the uniform interrogation technique, the spatial resolution is locally enhanced while in poorly seeded regions the level of robustness of the analysis (signal-to-noise ratio) is kept almost constant.

  2. Final Report: Symposium on Adaptive Methods for Partial Differential Equations

    Energy Technology Data Exchange (ETDEWEB)

    Pernice, Michael; Johnson, Christopher R.; Smith, Philip J.; Fogelson, Aaron

    1998-12-08

    Complex physical phenomena often include features that span a wide range of spatial and temporal scales. Accurate simulation of such phenomena can be difficult to obtain, and computations that are under-resolved can even exhibit spurious features. While it is possible to resolve small scale features by increasing the number of grid points, global grid refinement can quickly lead to problems that are intractable, even on the largest available computing facilities. These constraints are particularly severe for three dimensional problems that involve complex physics. One way to achieve the needed resolution is to refine the computational mesh locally, in only those regions where enhanced resolution is required. Adaptive solution methods concentrate computational effort in regions where it is most needed. These methods have been successfully applied to a wide variety of problems in computational science and engineering. Adaptive methods can be difficult to implement, prompting the development of tools and environments to facilitate their use. To ensure that the results of their efforts are useful, algorithm and tool developers must maintain close communication with application specialists. Conversely it remains difficult for application specialists who are unfamiliar with the methods to evaluate the trade-offs between the benefits of enhanced local resolution and the effort needed to implement an adaptive solution method.

  3. Reliable prediction of heat transfer coefficient in three-phase bubble column reactor via adaptive neuro-fuzzy inference system and regularization network

    Science.gov (United States)

    Garmroodi Asil, A.; Nakhaei Pour, A.; Mirzaei, Sh.

    2018-04-01

    In the present article, the generalization performance of a regularization network (RN) and an optimized adaptive neuro-fuzzy inference system (ANFIS) is compared with that of conventional software for the prediction of the heat transfer coefficient (HTC) as a function of superficial gas velocity (5-25 cm/s) and solid fraction (0-40 wt%) at different axial and radial locations. The networks were trained using several sets of experimental data collected from a specific system of air/hydrocarbon liquid phase/silica particles in a slurry bubble column reactor (SBCR). A special convective HTC measurement probe was manufactured and positioned at axial distances of 40 and 130 cm above the sparger, at the center and near the wall of the SBCR. The simulation results show that both the in-house RN and the optimized ANFIS, owing to their powerful noise-filtering capabilities, provide superior performance compared to the conventional MATLAB ANFIS and ANN toolboxes. At both 40 and 130 cm axial distance above the sparger, at a constant superficial gas velocity of 25 cm/s, adding 40 wt% silica particles to the liquid phase leads to about 66% and 69% increases in the HTC, respectively. The HTC in the column center is, for all the cases studied, about 9-14% larger than near the wall region.

  4. A viscosity adaption method for Lattice Boltzmann simulations

    Science.gov (United States)

    Conrad, Daniel; Schneider, Andreas; Böhle, Martin

    2014-11-01

    In this work, we consider the limited fitness for practical use of the Lattice Boltzmann Method (LBM) for non-Newtonian fluid flows. Several authors have shown that the LBM is capable of correctly simulating those fluids. However, for stability reasons the modeled viscosity range has to be truncated. The resulting viscosity boundaries are chosen arbitrarily, because the correct simulation Mach number for the physical problem is unknown a priori. This easily leads to corrupted simulation results. A viscosity adaption method (VAM) is derived which drastically improves the applicability of the LBM for non-Newtonian fluid flows by adapting the modeled viscosity range to the actual physical problem. This is done by tuning the global Mach number to the solution-dependent shear rate. We demonstrate that the VAM can be used to accelerate LBM simulations and improve their accuracy, for both steady state and transient cases.
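
    The dimensional relations behind such an adaptation are the BGK identity nu_lattice = c_s^2 (tau - 1/2) and the grid scaling nu_lattice = nu_phys * dt / dx^2; lowering dt lowers the lattice Mach number. The sketch below (Python, c_s^2 = 1/3 in lattice units) only illustrates these static relations with assumed stability bounds on tau; the actual VAM ties the scaling to the solution-dependent shear rate:

        CS2 = 1.0 / 3.0   # squared lattice speed of sound (D2Q9/D3Q19, dt = 1 in lattice units)

        def tau_from_viscosity(nu_phys, dx, dt):
            # BGK relation nu_lattice = CS2 * (tau - 0.5), nu_lattice = nu_phys * dt / dx**2.
            return nu_phys * dt / dx**2 / CS2 + 0.5

        def adapt_dt(nu_min, nu_max, dx, tau_bounds=(0.55, 1.5)):
            # Pick the largest time step mapping the whole modeled viscosity range
            # into a stable relaxation-time window; smaller dt means smaller Mach number.
            dt_hi = (tau_bounds[1] - 0.5) * CS2 * dx**2 / nu_max
            dt_lo = (tau_bounds[0] - 0.5) * CS2 * dx**2 / nu_min
            if dt_lo > dt_hi:
                raise ValueError("viscosity range too wide for a single dt")
            return dt_hi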

  5. A NDVI assisted remote sensing image adaptive scale segmentation method

    Science.gov (United States)

    Zhang, Hong; Shen, Jinxiang; Ma, Yanmei

    2018-03-01

    Multiscale segmentation of images can effectively form boundaries of objects at different scales. However, for remote sensing images with wide coverage and complicated ground objects, the number of suitable segmentation scales and the size of each scale are still difficult to determine accurately, which severely restricts rapid information extraction from the imagery. A great number of experiments have shown that the normalized difference vegetation index (NDVI) can effectively express the spectral characteristics of a variety of ground objects in remote sensing images. This paper presents a method for NDVI-assisted adaptive segmentation of remote sensing images, which segments local areas by using an NDVI similarity threshold to iteratively select segmentation scales. For different regions consisting of different targets, different segmentation scale boundaries can be created. The experimental results showed that the adaptive segmentation method based on NDVI can effectively create object boundaries for different ground objects in remote sensing images.
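
    NDVI itself is the standard band ratio (NIR - Red) / (NIR + Red). A minimal sketch (Python/NumPy) of computing it, plus a hypothetical similarity test of the kind the iterative scale selection above might use (the threshold value is illustrative):

        import numpy as np

        def ndvi(nir, red, eps=1e-9):
            # Normalized difference vegetation index: (NIR - Red) / (NIR + Red).
            nir = np.asarray(nir, dtype=float)
            red = np.asarray(red, dtype=float)
            return (nir - red) / (nir + red + eps)

        def same_scale(ndvi_a, ndvi_b, threshold=0.05):
            # Hypothetical test: regions whose mean NDVI differs by less than
            # the threshold are kept at the same segmentation scale.
            return abs(float(np.mean(ndvi_a)) - float(np.mean(ndvi_b))) < threshold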

  6. QUEST+: A general multidimensional Bayesian adaptive psychometric method.

    Science.gov (United States)

    Watson, Andrew B

    2017-03-01

    QUEST+ is a Bayesian adaptive psychometric testing method that allows an arbitrary number of stimulus dimensions, psychometric function parameters, and trial outcomes. It is a generalization and extension of the original QUEST procedure and incorporates many subsequent developments in the area of parametric adaptive testing. With a single procedure, it is possible to implement a wide variety of experimental designs, including conventional threshold measurement; measurement of psychometric function parameters, such as slope and lapse; estimation of the contrast sensitivity function; measurement of increment threshold functions; measurement of noise-masking functions; Thurstone scale estimation using pair comparisons; and categorical ratings on linear and circular stimulus dimensions. QUEST+ provides a general method to accelerate data collection in many areas of cognitive and perceptual science.

  7. Adaptive and robust methods of reconstruction (ARMOR) for thermoacoustic tomography.

    Science.gov (United States)

    Xie, Yao; Guo, Bin; Li, Jian; Ku, Geng; Wang, Lihong V

    2008-12-01

    In this paper, we present new adaptive and robust methods of reconstruction (ARMOR) for thermoacoustic tomography (TAT), and study their performances for breast cancer detection. TAT is an emerging medical imaging technique that combines the merits of high contrast due to electromagnetic or laser stimulation and high resolution offered by thermal acoustic imaging. The current image reconstruction methods used for TAT, such as the delay-and-sum (DAS) approach, are data-independent and suffer from low-resolution, high sidelobe levels, and poor interference rejection capabilities. The data-adaptive ARMOR can have much better resolution and much better interference rejection capabilities than their data-independent counterparts. By allowing certain uncertainties, ARMOR can be used to mitigate the amplitude and phase distortion problems encountered in TAT. The excellent performance of ARMOR is demonstrated using both simulated and experimentally measured data.
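
    For reference, the data-independent delay-and-sum baseline that ARMOR is compared against can be sketched in a few lines (Python/NumPy; the shift-and-zero-pad handling is an implementation choice of this illustration, and delays are assumed non-negative):

        import numpy as np

        def delay_and_sum(signals, delays_s, fs):
            # signals: (n_channels, n_samples); delays_s: geometric delay per
            # channel in seconds; channels are aligned and averaged.
            n_ch, n_s = signals.shape
            out = np.zeros(n_s)
            for ch in range(n_ch):
                shift = int(round(delays_s[ch] * fs))
                if shift < n_s:
                    out[:n_s - shift] += signals[ch, shift:]   # align, zero-pad tail
            return out / n_ch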

  8. Optimal and adaptive methods of processing hydroacoustic signals (review)

    Science.gov (United States)

    Malyshkin, G. S.; Sidel'nikov, G. B.

    2014-09-01

    Different methods of optimal and adaptive processing of hydroacoustic signals under multipath propagation and scattering are considered. Advantages and drawbacks of the classical adaptive (Capon, MUSIC, and Johnson) algorithms and of "fast" projection algorithms are analyzed for the case of multipath propagation and scattering of strong signals. The classical optimal approaches to detecting multipath signals are presented. A mechanism of controlled normalization of strong signals is proposed to automatically detect weak signals. The results of simulating the operation of different detection algorithms for a linear equidistant array under multipath propagation and scattering are presented. An automatic detector is analyzed which is based on classical or fast projection algorithms and estimates the background using median filtering or the method of bilateral spatial contrast.
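
    Among the classical algorithms named above, the Capon (MVDR) spatial spectrum P(theta) = 1 / (a^H R^-1 a) is the most compact to state. A sketch for a uniform linear array (Python/NumPy; the diagonal loading level and element spacing are assumptions of this illustration):

        import numpy as np

        def capon_spectrum(X, angles_deg, d_over_lambda=0.5, loading=1e-3):
            # Capon/MVDR spatial spectrum for a uniform linear array;
            # X is (n_sensors, n_snapshots) complex baseband data.
            n, m = X.shape
            R = X @ X.conj().T / m
            R += loading * (np.trace(R).real / n) * np.eye(n)   # diagonal loading
            Rinv = np.linalg.inv(R)
            k = np.arange(n)
            p = np.empty(len(angles_deg))
            for i, th in enumerate(np.deg2rad(angles_deg)):
                a = np.exp(2j * np.pi * d_over_lambda * k * np.sin(th))  # steering vector
                p[i] = 1.0 / np.real(a.conj() @ Rinv @ a)
            return p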

  9. Parallel adaptive sparse approximation methods for analysis of geoacoustic pulses

    Directory of Open Access Journals (Sweden)

    Kim Alina

    2017-01-01

    The article is devoted to a new approach in the analysis of geoacoustic pulses. The authors propose a mathematical model based on a sparse representation of the signal. An adaptive matching pursuit method has been developed to identify the model parameters. A parallel implementation of this algorithm on the CUDA platform is proposed, which allows real-time processing and modeling of signals.
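
    The greedy core of matching pursuit can be stated independently of the CUDA parallelization: repeatedly project the residual onto a dictionary of unit-norm atoms and subtract the best match. A serial sketch (Python/NumPy; the atom count is illustrative):

        import numpy as np

        def matching_pursuit(signal, dictionary, n_atoms=10):
            # Greedy MP over unit-norm atoms (columns of `dictionary`): repeatedly
            # pick the atom most correlated with the residual and subtract it.
            r = np.asarray(signal, dtype=float).copy()
            coeffs = np.zeros(dictionary.shape[1])
            for _ in range(n_atoms):
                corr = dictionary.T @ r
                k = int(np.argmax(np.abs(corr)))
                coeffs[k] += corr[k]
                r -= corr[k] * dictionary[:, k]
            return coeffs, r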

  10. Adaptive-mesh zoning by the equipotential method

    Energy Technology Data Exchange (ETDEWEB)

    Winslow, A.M.

    1981-04-01

    An adaptive mesh method is proposed for the numerical solution of differential equations which causes the mesh lines to move closer together in regions where higher resolution in some physical quantity T is desired. A coefficient D > 0 is introduced into the equipotential zoning equations, where D depends on the gradient of T. The equations are inverted, leading to nonlinear elliptic equations for the mesh coordinates with source terms which depend on the gradient of D. A functional form of D is proposed.

  11. Adaptive measurement method for miniature spectrometers used in cold environments.

    Science.gov (United States)

    Wang, Hangzhou; Nan, Liwen; Huang, Hui; Yang, Ping; Song, Hong; Han, Jiwan; Wu, Yuanqian; Yan, Tingting; Yuan, Zhuoli; Chen, Ying

    2017-10-01

    Adaptive measurement is a major concern when using miniature spectrometers in extreme environments, especially when the ambient temperatures and incident light intensities vary greatly. In this study, parameters including the signal output and the relevant noise and signal-to-noise ratio (SNR) of a fiber optic spectrometry system, composed of a photodiode array miniature spectrometer and external driver electronics, were examined at multiple integration times from -50°C to 30°C, well below the specified operating temperature of this spectrometer. The relationships between those parameters and the incident light level were also examined at a single temperature of 0°C. Based on these examinations, temperature-induced biases in the linear operating range of the spectrometer were identified. Signal output and the relevant noise and SNR in response to different integration times, temperatures, and incident light levels were assessed separately. These assessments were then used to develop an adaptive measurement method for estimating the incident light level and setting an optimal integration time for this spectrometer, while autonomously adapting to simultaneous variations in ambient temperature and incident light level. This approach provides a general framework for developing adaptive measurement algorithms for miniature spectrometers that face tremendous variations in ambient temperature and incident light level.

  12. A novel adaptive force control method for IPMC manipulation

    International Nuclear Information System (INIS)

    Hao, Lina; Sun, Zhiyong; Su, Yunquan; Gao, Jianchao; Li, Zhi

    2012-01-01

    IPMC is a type of electro-active polymer material, also called artificial muscle, which can generate a relatively large deformation under a relatively low input voltage (generally speaking, less than 5 V) and can be operated in a water environment. Due to these advantages, IPMC can be used in many fields such as biomimetics, service robots, bio-manipulation, etc. Until now, most existing methods for IPMC manipulation have been displacement control rather than direct force control; however, under most conditions the success rate of manipulating tiny fragile objects is limited by the contact force, for example when using an IPMC gripper to hold cells. Like most EAPs, IPMC exhibits creep: the generated force changes with time, and the creep model is influenced by changes in water content and other environmental factors, so a proper force control method is urgently needed. This paper presents a novel adaptive force control method (AIPOF control, adaptive integral periodic output feedback control) based on a creep model whose parameters are obtained using the FRLS on-line identification method. The AIPOF control method can achieve an arbitrary pole configuration as long as the plant is controllable and observable. This paper also designs POF and IPOF controllers to compare their test results. Simulations and experiments of micro-force-tracking tests were carried out, with results confirming that the proposed control method is viable. (paper)

  13. Successful adaptation of a research methods course in South America.

    Science.gov (United States)

    Tamariz, Leonardo; Vasquez, Diego; Loor, Cecilia; Palacio, Ana

    2017-01-01

    South America has low research productivity. The lack of a structured research curriculum is one of the barriers to conducting research. To report our experience adapting an active-learning-based research methods curriculum to improve research productivity at a university in Ecuador. We used a mixed-method approach to test the adaptation of the research curriculum at Universidad Catolica Santiago de Guayaquil. The curriculum uses a flipped classroom and active learning approach to teach research methods. When adapted, it was longitudinal and had a 16-hour programme of in-person teaching and a six-month follow-up online component. Learners were organized in theme groups according to interest, and each group had a faculty leader. Our primary outcome was research productivity, which was measured by the successful presentation of the research project at a national meeting, or publication in a peer-reviewed journal. Our secondary outcomes were knowledge and perceived competence before and after course completion. We conducted qualitative interviews of faculty members and students to evaluate themes related to participation in research. Fifty university students and 10 faculty members attended the course. We had a total of 15 groups. Both knowledge and perceived competence increased by 17 and 18 percentage points, respectively. The presentation or publication rate for the entire group was 50%. The qualitative analysis showed that a lack of research culture and curriculum were common barriers to research. A US-based curriculum can be successfully adapted in low-middle income countries. A research curriculum aids in achieving pre-determined milestones. UCSG: Universidad Catolica Santiago de Guayaquil; UM: University of Miami.

  14. Adaptive Current Control Method for Hybrid Active Power Filter

    Science.gov (United States)

    Chau, Minh Thuyen

    2016-09-01

    This paper proposes an adaptive current control method for a Hybrid Active Power Filter (HAPF). It consists of a fuzzy-neural controller, an identification and prediction model, and a cost function. The fuzzy-neural controller parameters are adjusted according to the cost-function minimum criterion. For this reason, the proposed control method is capable of on-line control that tracks variations of the load harmonic currents. Compared to the single fuzzy logic control method, the proposed control method shows the advantages of better dynamic response, smaller compensation error in steady state, better on-line control capability, and more effective harmonics cancellation. Simulation and experimental results have demonstrated the effectiveness of the proposed control method.

  15. Parallel, adaptive finite element methods for conservation laws

    Science.gov (United States)

    Biswas, Rupak; Devine, Karen D.; Flaherty, Joseph E.

    1994-01-01

    We construct parallel finite element methods for the solution of hyperbolic conservation laws in one and two dimensions. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. A posteriori estimates of spatial errors are obtained by a p-refinement technique using superconvergence at Radau points. The resulting method is of high order and may be parallelized efficiently on MIMD computers. We compare results using different limiting schemes and demonstrate parallel efficiency through computations on an NCUBE/2 hypercube. We also present results using adaptive h- and p-refinement to reduce the computational cost of the method.

  16. Adaptive BDDC Deluxe Methods for H(curl)

    KAUST Repository

    Zampini, Stefano

    2017-03-17

    The work presents numerical results using adaptive BDDC deluxe methods for preconditioning the linear systems arising from finite element discretizations of the time-domain, quasi-static approximation of the Maxwell’s equations. The provided results, obtained using the BDDC implementation of the PETSc library, show that these methods are poly-logarithmic in the polynomial degree of the Nédélec elements of first and second kind, and robust with respect to arbitrary distributions of the magnetic permeability and the conductivity of the medium.

  17. A multilevel adaptive reaction-splitting method for SRNs

    KAUST Repository

    Moraes, Alvaro

    2016-01-06

    In [5], we present a novel multilevel Monte Carlo method for kinetic simulation of stochastic reaction networks (SRNs) specifically designed for systems in which the set of reaction channels can be adaptively partitioned into two subsets characterized by either high or low activity. To estimate expected values of observables of the system, our method bounds the global computational error to be below a prescribed tolerance, TOL, within a given confidence level. This is achieved with a computational complexity of order O(TOL^-2). We also present a novel control variate technique which may dramatically reduce the variance of the coarsest level at a negligible computational cost.

  18. 3D CSEM inversion based on goal-oriented adaptive finite element method

    Science.gov (United States)

    Zhang, Y.; Key, K.

    2016-12-01

    We present a parallel 3D frequency-domain controlled-source electromagnetic inversion code named MARE3DEM. Non-linear inversion of observed data is performed with the Occam variant of regularized Gauss-Newton optimization. The forward operator is based on the goal-oriented finite element method that efficiently calculates the responses and sensitivity kernels in parallel using a data decomposition scheme where independent modeling tasks contain different frequencies and subsets of the transmitters and receivers. To accommodate complex 3D conductivity variation with high flexibility and precision, we adopt the dual-grid approach where the forward mesh conforms to the inversion parameter grid and is adaptively refined until the forward solution converges to the desired accuracy. This dual-grid approach is memory efficient, since the inverse parameter grid remains independent from the fine meshing generated around the transmitters and receivers by the adaptive finite element method. Besides, the unstructured inverse mesh efficiently handles multiple-scale structures and allows for fine-scale model parameters within the region of interest. Our mesh generation engine keeps track of the refinement hierarchy so that the map of conductivity and sensitivity kernels between the forward and inverse mesh is retained. We employ the adjoint-reciprocity method to calculate the sensitivity kernels, which establish a linear relationship between changes in the conductivity model and changes in the modeled responses. Our code uses a direct solver for the linear systems, so the adjoint problem is efficiently computed by re-using the factorization from the primary problem. Further computational efficiency and scalability are obtained in the regularized Gauss-Newton portion of the inversion using parallel dense matrix-matrix multiplication and matrix factorization routines implemented with the ScaLAPACK library. We show the scalability, reliability and the potential of the algorithm to deal with

  19. Nonlinear diffusion filtering methods locally adapted to data features

    Science.gov (United States)

    Kollár, Michal; Čunderlík, Róbert; Mikula, Karol

    2017-04-01

    The contribution deals with nonlinear diffusion filtering methods on a planar surface. These methods extend simple linear diffusion filtering by a nonlinear diffusivity coefficient. This coefficient is a function which depends on data features such as the gradient and local or global extrema of the data. In the case of the regularized surface Perona-Malik model, a method widely used in image processing, the diffusivity coefficient is an edge-detector function. If we use nonlinear diffusion filtering influenced by the Laplace operator, a local extrema detector function affects the diffusion process. We use a finite-volume method to numerically approximate the nonlinear parabolic partial differential equation on a uniform rectangular grid and a finite difference method to approximate gradients and Laplacians. Numerical experiments present nonlinear diffusion filtering of artificial data and real measurements in upcoming filtering software with a real-time filtered-data visualization widget. The real measurements comprise GOCE satellite observations, satellite-only MDT data, and high-resolution altimetry-derived gravity data. They aim to point out the main advantage of the nonlinear diffusion models which, in contrast to linear models, preserve important structures of the processed data.
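
    A minimal explicit step of the Perona-Malik model mentioned above, with the common edge-detector diffusivity g(s) = 1 / (1 + (s/k)^2), might look as follows (Python/NumPy; the periodic boundaries via np.roll and the contrast parameter k are simplifications of this sketch, unlike the finite-volume scheme of the contribution):

        import numpy as np

        def perona_malik_step(u, k=0.1, dt=0.2):
            # One explicit Perona-Malik update on a 2D array u; one-sided
            # differences to the four neighbours, periodic wrap for brevity.
            dn = np.roll(u, 1, axis=0) - u
            ds = np.roll(u, -1, axis=0) - u
            de = np.roll(u, 1, axis=1) - u
            dw = np.roll(u, -1, axis=1) - u
            g = lambda d: 1.0 / (1.0 + (d / k) ** 2)   # edge-detector diffusivity
            return u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)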

  20. Convergence of a Scholtes-type regularization method for cardinality-constrained optimization problems with an application in sparse robust portfolio optimization

    Czech Academy of Sciences Publication Activity Database

    Branda, Martin; Bucher, M.; Červinka, Michal; Schwartz, A.

    2018-01-01

    Roč. 70, č. 2 (2018), s. 503-530 ISSN 0926-6003 R&D Projects: GA ČR GA15-00735S Institutional support: RVO:67985556 Keywords : Cardinality constraints * Regularization method * Scholtes regularization * Strong stationarity * Sparse portfolio optimization * Robust portfolio optimization Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Statistics and probability Impact factor: 1.520, year: 2016 http://library.utia.cas.cz/separaty/2018/MTR/branda-0489264.pdf

  1. A systematic review of gait analysis methods based on inertial sensors and adaptive algorithms.

    Science.gov (United States)

    Caldas, Rafael; Mundt, Marion; Potthast, Wolfgang; Buarque de Lima Neto, Fernando; Markert, Bernd

    2017-09-01

    The conventional methods to assess human gait are either expensive or too complex to be applied regularly in clinical practice. To reduce the cost and simplify the evaluation, inertial sensors and adaptive algorithms have been utilized, respectively. This paper aims to summarize studies that applied adaptive, also called artificial intelligence (AI), algorithms to gait analysis based on inertial sensor data, verifying whether they can support the clinical evaluation. Articles were identified through searches of the main databases, covering the period from 1968 to October 2016. We identified 22 studies that met the inclusion criteria. The included papers were analyzed with respect to their data acquisition and processing methods using specific questionnaires. Concerning data acquisition, the mean score is 6.1±1.62, which implies that 13 of the 22 papers failed to report relevant outcomes. The quality assessment of the AI algorithms presents an above-average rating (8.2±1.84). Therefore, AI algorithms seem to be able to support gait analysis based on inertial sensor data. Further research, however, is necessary to enhance and standardize the application in patients, since most of the studies used distinct methods to evaluate healthy subjects. Copyright © 2017 Elsevier B.V. All rights reserved.

  2. Highly accurate adaptive TOF determination method for ultrasonic thickness measurement

    Science.gov (United States)

    Zhou, Lianjie; Liu, Haibo; Lian, Meng; Ying, Yangwei; Li, Te; Wang, Yongqing

    2018-04-01

    Determining the time of flight (TOF) is critical for precise ultrasonic thickness measurement. However, the relatively low signal-to-noise ratio (SNR) of the received signals can induce significant TOF determination errors. In this paper, an adaptive time delay estimation method has been developed to improve the accuracy of TOF determination. An improved variable-step-size adaptive algorithm with a comprehensive step-size control function is proposed. Meanwhile, a cubic spline fitting approach is also employed to alleviate the restriction of the finite sampling interval. Simulation experiments under different SNR conditions were conducted for performance analysis. The simulation results demonstrated the performance advantage of the proposed TOF determination method over existing methods. Compared with the conventional fixed-step-size algorithm and the Kwong and Aboulnasr algorithms, the steady-state mean square deviation of the proposed algorithm was generally lower, which makes it more suitable for TOF determination. Further, ultrasonic thickness measurement experiments were performed on aluminum alloy plates of various thicknesses. They indicated that the proposed TOF determination method was more robust even under low SNR conditions, and that the ultrasonic thickness measurement accuracy could be significantly improved.
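
    As a baseline for sub-sample TOF estimation (the paper's method replaces this with an adaptive filter plus cubic-spline fitting), the cross-correlation peak can be refined below one sample by a three-point parabolic fit. A sketch in Python/NumPy:

        import numpy as np

        def tof_crosscorr(ref, echo, fs):
            # Coarse TOF from the cross-correlation peak, refined below one
            # sample by a parabola through the peak and its two neighbours.
            c = np.correlate(np.asarray(echo, float), np.asarray(ref, float), mode="full")
            k = int(np.argmax(c))
            lag = float(k)
            if 0 < k < len(c) - 1:
                y0, y1, y2 = c[k - 1], c[k], c[k + 1]
                lag += 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)   # sub-sample vertex
            lag -= len(ref) - 1        # zero-lag offset of numpy's 'full' mode
            return lag / fs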

  3. Planetary gearbox fault diagnosis using an adaptive stochastic resonance method

    Science.gov (United States)

    Lei, Yaguo; Han, Dong; Lin, Jing; He, Zhengjia

    2013-07-01

    Planetary gearboxes are widely used in aerospace, automotive and heavy industry applications due to their large transmission ratio, strong load-bearing capacity and high transmission efficiency. The tough operating conditions of heavy duty and intensive impact load may cause gear tooth damage such as fatigue cracks and missing teeth. The challenging issues in fault diagnosis of planetary gearboxes include the selection of sensitive measurement locations, the investigation of vibration transmission paths, and weak feature extraction. One of them is how to effectively discover the weak characteristics from noisy signals of faulty components in planetary gearboxes. To address this issue, an adaptive stochastic resonance (ASR) method is proposed in this paper. The ASR method utilizes the optimization ability of ant colony algorithms and adaptively realizes the optimal stochastic resonance system matching the input signals. Using the ASR method, the noise may be weakened and weak characteristics highlighted, so that the faults can be diagnosed accurately. A planetary gearbox test rig was established, and experiments with sun gear faults, including a chipped tooth and a missing tooth, were conducted. The vibration signals were collected under loaded conditions at various motor speeds. The proposed method was used to process the collected signals, and the results of feature extraction and fault diagnosis demonstrate its effectiveness.

  4. Mapping of Primary Instructional Methods and Teaching Techniques for Regularly Scheduled, Formal Teaching Sessions in an Anesthesia Residency Program.

    Science.gov (United States)

    Vested Madsen, Matias; Macario, Alex; Yamamoto, Satoshi; Tanaka, Pedro

    2016-06-01

    In this study, we examined the regularly scheduled, formal teaching sessions in a single anesthesiology residency program to (1) map the most common primary instructional methods, (2) map the use of 10 known teaching techniques, and (3) assess if residents scored sessions that incorporated active learning as higher quality than sessions with little or no verbal interaction between teacher and learner. A modified Delphi process was used to identify useful teaching techniques. A representative sample of each of the formal teaching session types was mapped, and residents anonymously completed a 5-question written survey rating the session. The most common primary instructional methods were computer slides-based classroom lectures (66%), workshops (15%), simulations (5%), and journal club (5%). The number of teaching techniques used per formal teaching session averaged 5.31 (SD, 1.92; median, 5; range, 0-9). Clinical applicability (85%) and attention grabbers (85%) were the 2 most common teaching techniques. Thirty-eight percent of the sessions defined learning objectives, and one-third of sessions engaged in active learning. The overall survey response rate equaled 42%, and passive sessions had a mean score of 8.44 (range, 5-10; median, 9; SD, 1.2) compared with a mean score of 8.63 (range, 5-10; median, 9; SD, 1.1) for active sessions (P = 0.63). Slides-based classroom lectures were the most common instructional method, and faculty used an average of 5 known teaching techniques per formal teaching session. The overall education scores of the sessions as rated by the residents were high.

  5. Computational methods for the verification of adaptive control systems

    Science.gov (United States)

    Prasanth, Ravi K.; Boskovic, Jovan; Mehra, Raman K.

    2004-08-01

    Intelligent and adaptive control systems will significantly challenge current verification and validation (V&V) processes, tools, and methods for flight certification. Although traditional certification practices have produced safe and reliable flight systems, they will not be cost effective for next-generation autonomous unmanned air vehicles (UAVs) due to inherent size and complexity increases from added functionality. Affordable V&V of intelligent control systems is by far the most important challenge in the development of UAVs faced by both commercial and military aerospace industry in the United States. This paper presents a formal modeling framework for a class of adaptive control systems and an associated computational scheme. The class of systems considered include neural network-based flight control systems and vehicle health management systems. This class of systems and indeed all adaptive systems are hybrid systems whose continuum dynamics is nonlinear. Our computational procedure is iterative and each iteration has two sequential steps. The first step is to derive an approximating finite-state automaton whose behaviors contain the behaviors of the hybrid system. The second step is to check if the language accepted by the approximating automaton is empty (emptiness checking). The iterations are terminated if the language accepted is empty; otherwise, the approximation is refined and the iteration is continued. This procedure will never produce an "error-free" certificate when the actual system contains errors which is an important requirement in V&V of safety critical systems.

  6. Dental dam clamp adaptation method on carved gypsum cast.

    Science.gov (United States)

    Cazacu, N C E

    2014-01-01

    Dental dam is the safest and most efficient isolation technique in endodontics and restorative dentistry, but it is also used in esthetics, orthodontics, prosthetics, pedodontics and periodontology (for teeth immobilization). While in most cases the standard clamps are efficient, in some clinical situations clamp adaptation is mandatory in order to assure a tight contact on the tooth. The purpose of this study is to list the elements of the clamp which should be modified in order to assure a secure constriction of the clamp on the anchor tooth, using the carved gypsum cast method. 100 patients were examined, diagnosed and treated for various diagnoses such as simple decay, gangrene, chronic apical periodontitis, and endodontic retreatments. The clamps used in this study were produced by Hu-Friedy, Hygenic, KKD, SDI, and Hager & Werken. In 10 cases, the anchor tooth did not provide enough stability to the standard clamp as provided by the producer. Therefore, we made adjustments to some of the elements of the clamp: the arch, the wings, the plateau, the active area, and the contact points. In 6 cases, major clamp adaptations on a carved gypsum cast were imperative. Classic clamps cannot provide a sufficient grip in all clinical cases due to the huge variety in the position and implantation of anchor teeth. Therefore, in such situations, the clamps should be adapted in order to provide stability and assure safe isolation during the treatment. The modified clamps will be useful in similar cases, so they must be kept.

  7. The SMART CLUSTER METHOD - adaptive earthquake cluster analysis and declustering

    Science.gov (United States)

    Schaefer, Andreas; Daniell, James; Wenzel, Friedemann

    2016-04-01

    Earthquake declustering is an essential part of almost any statistical analysis of spatial and temporal properties of seismic activity, with usual applications comprising probabilistic seismic hazard assessments (PSHAs) and earthquake prediction methods. The nature of earthquake clusters and the subsequent declustering of earthquake catalogues play a crucial role in determining the magnitude-dependent earthquake return period and its spatial variation. Various methods have been developed by other researchers to address this issue, with complexity ranging from rather simple statistical window methods to complex epidemic models. This study introduces the smart cluster method (SCM), a new methodology to identify earthquake clusters, which uses an adaptive point process for spatio-temporal identification. Hereby, an adaptive search algorithm for data point clusters is adopted. It uses the earthquake density in the spatio-temporal neighbourhood of each event to adjust the search properties. The identified clusters are subsequently analysed to determine directional anisotropy, focussing on a strong correlation along the rupture plane, and the search space is adjusted with respect to directional properties. In the case of rapid subsequent ruptures, like the 1992 Landers sequence or the 2010/2011 Darfield-Christchurch events, an adaptive classification procedure is applied to disassemble subsequent ruptures which may have been grouped into an individual cluster, using near-field searches, support vector machines and temporal splitting. The steering parameters of the search behaviour are linked to local earthquake properties like magnitude of completeness, earthquake density and Gutenberg-Richter parameters. The method is capable of identifying and classifying earthquake clusters in space and time. It is tested and validated using earthquake data from California and New Zealand. As a result of the cluster identification process, each event in

  8. An HP Adaptive Discontinuous Galerkin Method for Hyperbolic Conservation Laws. Ph.D. Thesis

    Science.gov (United States)

    Bey, Kim S.

    1994-01-01

    This dissertation addresses various issues for model classes of hyperbolic conservation laws. The basic approach developed in this work employs a new family of adaptive, hp-version finite element methods based on a special discontinuous Galerkin formulation for hyperbolic problems. The discontinuous Galerkin formulation admits high-order local approximations on domains of quite general geometry, while providing a natural framework for finite element approximations and for theoretical developments. The use of hp-versions of the finite element method makes possible exponentially convergent schemes with very high accuracy in certain cases; the use of adaptive hp-schemes allows h-refinement in regions of low regularity and p-enrichment to deliver high accuracy, while keeping problem sizes manageable and dramatically smaller than in many conventional approaches. The use of discontinuous Galerkin methods is uncommon in applications, but the methods rest on a reasonable mathematical basis for low-order cases and have local approximation features that can be exploited to produce very efficient schemes, especially in a parallel, multiprocessor environment. This work first and primarily focuses on a model class of linear hyperbolic conservation laws for which concrete mathematical results, methodologies, error estimates, convergence criteria, and parallel adaptive strategies can be developed, and then briefly explores some extensions to more general cases. Next, we provide preliminaries to the study and a review of some aspects of the theory of hyperbolic conservation laws. We also provide a review of relevant literature on this subject and on the numerical analysis of these types of problems.

  9. A multilevel adaptive reaction-splitting method for SRNs

    KAUST Repository

    Moraes, Alvaro

    2015-01-07

    In this work, we present a novel multilevel Monte Carlo method for kinetic simulation of stochastic reaction networks specifically designed for systems in which the set of reaction channels can be adaptively partitioned into two subsets characterized by either "high" or "low" activity. To estimate expected values of observables of the system, our method bounds the global computational error to be below a prescribed tolerance, within a given confidence level. This is achieved with a computational complexity of order O(TOL^-2). We also present a novel control variate technique which may dramatically reduce the variance of the coarsest level at a negligible computational cost. Our numerical examples show substantial gains with respect to the standard Stochastic Simulation Algorithm (SSA) by Gillespie and also our previous hybrid Chernoff tau-leap method.

  10. Multiple centroid method to evaluate the adaptability of alfalfa genotypes

    Directory of Open Access Journals (Sweden)

    Moysés Nascimento

    2015-02-01

    This study aimed to evaluate the efficiency of multiple centroids for studying the adaptability of alfalfa genotypes (Medicago sativa L.). In this method, the genotypes are compared with ideotypes defined by the bisegmented regression model, according to the researcher's interest. Thus, genotype classification is carried out as determined by the objective of the researcher and the proposed recommendation strategy. Despite the great potential of the method, it needs to be evaluated in a biological context (with real data). In this context, we used data from the evaluation of dry matter production of 92 alfalfa cultivars, with 20 cuttings, from an experiment in randomized blocks with two replicates carried out from November 2004 to June 2006. The multiple centroid method proved efficient for classifying alfalfa genotypes. Moreover, it gave no ambiguous indications and, provided that ideotypes are defined according to the researcher's interest, it facilitates data interpretation.

  11. ECG-derived respiration methods: adapted ICA and PCA.

    Science.gov (United States)

    Tiinanen, Suvi; Noponen, Kai; Tulppo, Mikko; Kiviniemi, Antti; Seppänen, Tapio

    2015-05-01

    Respiration is an important signal in early diagnostics, prediction, and treatment of several diseases. Moreover, a growing trend toward ambulatory measurements outside laboratory environments encourages developing indirect measurement methods such as ECG-derived respiration (EDR). Recently, decomposition techniques like principal component analysis (PCA) and its nonlinear version, kernel PCA (KPCA), have been used to derive a surrogate respiration signal from single-channel ECG. In this paper, we propose an adapted independent component analysis (AICA) algorithm to obtain the EDR signal, and extend the normal linear PCA technique based on best principal component (PC) selection (APCA, adapted PCA) to further improve its performance. We also demonstrate that the use of smoothing spline resampling and bandpass filtering improves the performance of all EDR methods. Compared with other recent EDR methods using the correlation coefficient and magnitude squared coherence, the proposed AICA and APCA yield a statistically significant improvement, with correlations 0.84, 0.82, 0.76 and coherences 0.90, 0.91, 0.85 between reference respiration and AICA, APCA and KPCA, respectively. Copyright © 2015 IPEM. Published by Elsevier Ltd. All rights reserved.

  12. A Fast Adaptive Receive Antenna Selection Method in MIMO System

    Directory of Open Access Journals (Sweden)

    Chaowei Wang

    2013-01-01

    Antenna selection has been regarded as an effective method to acquire the diversity benefits of multiple antennas while potentially reducing hardware costs. This paper focuses on receive antenna selection. Based on the ratio of the number of selected antennas to the total number of receive antennas, and on the influence of each antenna on system capacity, we propose a fast adaptive antenna selection algorithm for wireless multiple-input multiple-output (MIMO) systems. Mathematical analysis and numerical results show that our algorithm significantly reduces the computational complexity and memory requirement while achieving considerable system capacity compared with the optimal selection technique.
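
    The capacity criterion the abstract refers to can be made concrete with the classical greedy baseline below: receive antennas (rows of the channel matrix) are added one at a time by their incremental capacity contribution. This is a hedged stand-in, not the paper's reduced-complexity algorithm.

    ```python
    # Greedy capacity-based receive antenna selection (classical baseline;
    # the paper's adaptive algorithm further cuts complexity and memory).
    import numpy as np

    def capacity(Hs, snr):
        """log2 det(I + (snr/Nt) Hs Hs^H) for the selected rows Hs."""
        nr, nt = Hs.shape
        return np.log2(np.linalg.det(np.eye(nr) + (snr / nt)
                                     * Hs @ Hs.conj().T)).real

    def greedy_select(H, n_sel, snr=10.0):
        chosen, remaining = [], list(range(H.shape[0]))
        for _ in range(n_sel):   # add the row with best capacity gain
            best = max(remaining,
                       key=lambda r: capacity(H[chosen + [r]], snr))
            chosen.append(best)
            remaining.remove(best)
        return chosen

    rng = np.random.default_rng(2)
    H = (rng.normal(size=(8, 4)) + 1j * rng.normal(size=(8, 4))) / np.sqrt(2)
    print(greedy_select(H, n_sel=4))   # indices of selected antennas
    ```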

  13. MR Image Reconstruction Using Block Matching and Adaptive Kernel Methods.

    Science.gov (United States)

    Schmidt, Johannes F M; Santelli, Claudio; Kozerke, Sebastian

    2016-01-01

    An approach to Magnetic Resonance (MR) image reconstruction from undersampled data is proposed. Undersampling artifacts are removed using an iterative thresholding algorithm applied to nonlinearly transformed image block arrays. Each block array is transformed using kernel principal component analysis where the contribution of each image block to the transform depends in a nonlinear fashion on the distance to other image blocks. Elimination of undersampling artifacts is achieved by conventional principal component analysis in the nonlinear transform domain, projection onto the main components and back-mapping into the image domain. Iterative image reconstruction is performed by interleaving the proposed undersampling artifact removal step and gradient updates enforcing consistency with acquired k-space data. The algorithm is evaluated using retrospectively undersampled MR cardiac cine data and compared to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT reconstruction. Evaluation of image quality and root-mean-squared error (RMSE) reveals improved image reconstruction for up to 8-fold undersampled data with the proposed approach relative to k-t SPARSE-SENSE, block matching with spatial Fourier filtering and k-t ℓ1-SPIRiT. In conclusion, block matching and kernel methods can be used for effective removal of undersampling artifacts in MR image reconstruction and outperform standard compressed sensing and ℓ1-regularized parallel imaging methods.
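
    The interleaving pattern described above (artifact removal alternated with k-space data consistency) can be sketched as follows; a simple soft-threshold stands in for the block-matching/kernel-PCA step, so this illustrates the iteration structure only.

    ```python
    # Sketch of the interleaved iteration: an image-domain artifact-removal
    # step (here a plain soft-threshold, standing in for the block-matching
    # kernel-PCA step) alternated with k-space data consistency.
    import numpy as np

    def soft_threshold(x, lam):
        return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

    def reconstruct(kspace, mask, n_iter=50, lam=0.01):
        """kspace: acquired samples (zero where not sampled);
        mask: boolean k-space sampling pattern."""
        img = np.fft.ifft2(kspace)
        for _ in range(n_iter):
            img = (soft_threshold(img.real, lam)
                   + 1j * soft_threshold(img.imag, lam))  # surrogate denoiser
            k = np.fft.fft2(img)
            k[mask] = kspace[mask]                        # data consistency
            img = np.fft.ifft2(k)
        return img

    rng = np.random.default_rng(3)
    truth = np.zeros((64, 64)); truth[24:40, 24:40] = 1.0
    mask = rng.random((64, 64)) < 0.3                     # 30% sampling
    kspace = np.fft.fft2(truth) * mask
    print(np.abs(reconstruct(kspace, mask)).max())
    ```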

  14. MFAM: Multiple Frequency Adaptive Model-Based Indoor Localization Method.

    Science.gov (United States)

    Tuta, Jure; Juric, Matjaz B

    2018-03-24

    This paper presents MFAM (Multiple Frequency Adaptive Model-based localization method), a novel model-based indoor localization method that is capable of using multiple wireless signal frequencies simultaneously. It utilizes an indoor architectural model and the physical properties of wireless signal propagation through objects and space. The motivation for developing a multiple frequency localization method lies in the future Wi-Fi standards (e.g., 802.11ah) and the growing number of various wireless signals present in buildings (e.g., Wi-Fi, Bluetooth, ZigBee, etc.). Current indoor localization methods mostly rely on a single wireless signal type and often require many devices to achieve the necessary accuracy. MFAM utilizes multiple wireless signal types and improves the localization accuracy over the usage of a single frequency. It continuously monitors signal propagation through space and adapts the model according to the changes indoors. Using multiple signal sources lowers the required number of access points for a specific signal type while utilizing signals already present indoors. Due to the unavailability of 802.11ah hardware, we have evaluated the proposed method with similar signals: 2.4 GHz Wi-Fi and 868 MHz HomeMatic home automation signals. We performed the evaluation in a modern two-bedroom apartment and measured a mean localization error of 2.0 to 2.3 m and a median error of 2.0 to 2.2 m. Based on our evaluation results, using two different signals improves the localization accuracy by 18% in comparison to the 2.4 GHz Wi-Fi-only approach. Additional signals would improve the accuracy even further. We have shown that MFAM provides better accuracy than competing methods, while having several advantages for real-world usage.

  15. A psychophysical comparison of two methods for adaptive histogram equalization.

    Science.gov (United States)

    Zimmerman, J B; Cousins, S B; Hartzell, K M; Frisse, M E; Kahn, M G

    1989-05-01

    Adaptive histogram equalization (AHE) is a method for adaptive contrast enhancement of digital images. It is an automatic, reproducible method for the simultaneous viewing of contrast within a digital image with a large dynamic range. Recent experiments have shown that in specific cases, there is no significant difference in the ability of AHE and linear intensity windowing to display gray-scale contrast. More recently, a variant of AHE which limits the allowed contrast enhancement of the image has been proposed. This contrast-limited adaptive histogram equalization (CLAHE) produces images in which the noise content of an image is not excessively enhanced, but in which sufficient contrast is provided for the visualization of structures within the image. Images processed with CLAHE have a more natural appearance and facilitate the comparison of different areas of an image. However, the reduced contrast enhancement of CLAHE may hinder the ability of an observer to detect the presence of some significant gray-scale contrast. In this report, a psychophysical observer experiment was performed to determine if there is a significant difference in the ability of AHE and CLAHE to depict gray-scale contrast. Observers were presented with computed tomography (CT) images of the chest processed with AHE and CLAHE. Subtle artificial lesions were introduced into some images. The observers were asked to rate their confidence regarding the presence of the lesions; this rating-scale data was analyzed using receiver operating characteristic (ROC) curve techniques. These ROC curves were compared for significant differences in the observers' performances. In this report, no difference was found in the abilities of AHE and CLAHE to depict contrast information.
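
    Both variants are easy to experiment with; for instance, scikit-image's equalize_adapthist implements CLAHE, and its clip_limit parameter controls the contrast limiting (a high limit approximates unlimited AHE). The snippet below is a usage illustration on a stock test image.

    ```python
    # Usage sketch with scikit-image, whose equalize_adapthist implements
    # CLAHE; clip_limit controls the contrast limiting (a high value
    # approximates unlimited AHE, a low value gives the limited variant).
    from skimage import data, exposure

    image = data.camera() / 255.0                       # float in [0, 1]
    ahe_like = exposure.equalize_adapthist(image, clip_limit=1.0)
    clahe = exposure.equalize_adapthist(image, clip_limit=0.01)
    print(ahe_like.std(), clahe.std())   # CLAHE: milder enhancement
    ```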

  16. Neural Classifier Construction using Regularization, Pruning and Test Error Estimation

    DEFF Research Database (Denmark)

    Hintz-Madsen, Mads; Hansen, Lars Kai; Larsen, Jan

    1998-01-01

    In this paper we propose a method for construction of feed-forward neural classifiers based on regularization and adaptive architectures. Using a penalized maximum likelihood scheme, we derive a modified form of the entropic error measure and an algebraic estimate of the test error. In conjunction with optimal brain damage pruning, the test error estimate is used to select the network architecture. The scheme is evaluated on four classification problems.
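
    The optimal brain damage step mentioned above ranks weights by a saliency computed from a diagonal Hessian approximation, s_i = 0.5 * h_i * w_i^2; the sketch below applies that ranking with a placeholder Hessian diagonal, since the network and its error measure are not reproduced here.

    ```python
    # Sketch of the optimal brain damage saliency used for pruning: with a
    # diagonal Hessian approximation h_i, deleting weight w_i raises the
    # training error by roughly s_i = 0.5 * h_i * w_i**2. The Hessian
    # diagonal here is a placeholder; a real run computes it from the net.
    import numpy as np

    def obd_prune(weights, hess_diag, frac=0.1):
        """Zero out the fraction `frac` of weights with smallest saliency."""
        saliency = 0.5 * hess_diag * weights ** 2
        idx = np.argsort(saliency)[:int(frac * weights.size)]
        pruned = weights.copy()
        pruned[idx] = 0.0
        return pruned, idx

    rng = np.random.default_rng(4)
    w = rng.normal(size=20)
    h = np.abs(rng.normal(size=20))          # placeholder Hessian diagonal
    print(obd_prune(w, h, frac=0.25)[1])     # indices of pruned weights
    ```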

  17. An adaptive stepsize method for the chemical Langevin equation

    Science.gov (United States)

    Ilie, Silvana; Teslya, Alexandra

    2012-05-01

    Mathematical and computational modeling are key tools in analyzing important biological processes in cells and living organisms. In particular, stochastic models are essential to accurately describe the cellular dynamics, when the assumption of the thermodynamic limit can no longer be applied. However, stochastic models are computationally much more challenging than the traditional deterministic models. Moreover, many biochemical systems arising in applications have multiple time-scales, which lead to mathematical stiffness. In this paper we investigate the numerical solution of a stochastic continuous model of well-stirred biochemical systems, the chemical Langevin equation. The chemical Langevin equation is a stochastic differential equation with multiplicative, non-commutative noise. We propose an adaptive stepsize algorithm for approximating the solution of models of biochemical systems in the Langevin regime, with small noise, based on estimates of the local error. The underlying numerical method is the Milstein scheme. The proposed adaptive method is tested on several examples arising in applications and it is shown to have improved efficiency and accuracy compared to the existing fixed stepsize schemes.
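
    A minimal version of such a controller, assuming a scalar SDE and a step-doubling local error estimate in place of the paper's estimator, might look as follows; the drift, diffusion and tolerances are illustrative.

    ```python
    # Sketch of an adaptive-stepsize Milstein integrator for a scalar SDE
    # dX = a(X) dt + b(X) dW. Step doubling (one step of size h vs. two of
    # size h/2 driven by the same Brownian increments) supplies the local
    # error estimate; the paper's estimator is more refined.
    import numpy as np

    def milstein_step(x, h, dw, a, b, db):
        # Milstein update: drift + diffusion + 0.5*b*b'*(dW^2 - h)
        return x + a(x) * h + b(x) * dw + 0.5 * b(x) * db(x) * (dw * dw - h)

    def adaptive_milstein(x0, t_end, a, b, db, tol=1e-3, h=1e-2, rng=None):
        rng = rng if rng is not None else np.random.default_rng()
        t, x = 0.0, x0
        while t < t_end:
            h = min(h, t_end - t)
            dw1 = rng.normal(scale=np.sqrt(h / 2))   # first half increment
            dw2 = rng.normal(scale=np.sqrt(h / 2))   # second half increment
            coarse = milstein_step(x, h, dw1 + dw2, a, b, db)
            fine = milstein_step(milstein_step(x, h / 2, dw1, a, b, db),
                                 h / 2, dw2, a, b, db)
            if abs(fine - coarse) <= tol:            # accept, grow step
                t, x, h = t + h, fine, 1.5 * h
            else:                                    # reject, shrink step
                h *= 0.5
        return x

    # toy: geometric Brownian motion, a(x) = 0.5x, b(x) = 0.2x, b'(x) = 0.2
    print(adaptive_milstein(1.0, 1.0, lambda x: 0.5 * x, lambda x: 0.2 * x,
                            lambda x: 0.2, rng=np.random.default_rng(5)))
    ```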

  18. Adaptive designs based on the truncated product method

    Directory of Open Access Journals (Sweden)

    Neuhäuser Markus

    2005-09-01

    Background: Adaptive designs are becoming increasingly important in clinical research. One approach subdivides the study into several (two or more) stages and combines the p-values of the different stages using Fisher's combination test. Methods: As an alternative to Fisher's test, the recently proposed truncated product method (TPM) can be applied to combine the p-values. The TPM uses the product of only those p-values that do not exceed some fixed cut-off value. Here, these two competing analyses are compared. Results: When an early termination due to insufficient effects is not appropriate, such as in dose-response analyses, the probability of stopping the trial early with rejection of the null hypothesis is increased when the TPM is applied. Therefore, the expected total sample size is decreased. This decrease in sample size is not connected with a loss in power. The TPM turns out to be less advantageous when an early termination of the study due to insufficient effects is possible, owing to a decrease in the probability of stopping the trial early. Conclusion: It is recommended to apply the TPM rather than Fisher's combination test whenever an early termination due to insufficient effects is not suitable within the adaptive design.
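
    The TPM statistic itself is simply the product of the p-values at or below a cut-off tau; the sketch below evaluates it and obtains its null distribution by Monte Carlo under uniformity (the original method also offers a closed-form evaluation, which is not reproduced here).

    ```python
    # Sketch of the truncated product method (TPM) statistic and a Monte
    # Carlo p-value under the uniform null.
    import numpy as np

    def tpm_statistic(pvals, tau=0.05):
        p = np.asarray(pvals)
        kept = p[p <= tau]
        return kept.prod() if kept.size else 1.0   # product of p <= tau

    def tpm_pvalue(pvals, tau=0.05, n_sim=100_000, rng=None):
        rng = rng if rng is not None else np.random.default_rng()
        w_obs = tpm_statistic(pvals, tau)
        null = rng.random((n_sim, len(pvals)))     # uniform null p-values
        w_null = np.where(null <= tau, null, 1.0).prod(axis=1)
        return np.mean(w_null <= w_obs)            # smaller W = more extreme

    print(tpm_pvalue([0.01, 0.20, 0.03, 0.80], tau=0.05,
                     rng=np.random.default_rng(6)))
    ```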

  19. Numerical and adaptive grid methods for ideal magnetohydrodynamics

    Science.gov (United States)

    Loring, Burlen

    2008-02-01

    In this thesis, numerical finite difference methods for ideal magnetohydrodynamics (MHD) are investigated. A review of the relevant physics, essential for interpreting the results of numerical solutions and constructing validation cases, is presented. This review includes a discussion of the propagation of small amplitude waves in the MHD system, as well as a thorough discussion of MHD shocks, contacts and rarefactions and how they can be pieced together to obtain solutions to the MHD Riemann problem. Numerical issues relevant to the MHD system, such as the loss of nonlinear numerical stability in the presence of discontinuous solutions, the introduction of spurious forces due to the growth of the divergence of the magnetic flux density, the loss of pressure positivity, and the effects of non-conservative numerical methods, are discussed, along with practical approaches which can be used to remedy or minimize the negative consequences of each. The use of block-structured adaptive mesh refinement is investigated in the context of a divergence-free MHD code. A new method for conserving magnetic flux across AMR grid interfaces is developed and a detailed discussion of our implementation of this method using the CHOMBO AMR framework is given. A preliminary validation of the new method for conserving magnetic flux density across AMR grid interfaces illustrates that the method works. Finally, a number of code validation cases are examined, spurring a discussion of the strengths and weaknesses of the numerics employed.

  20. A parallel direct solver for the self-adaptive hp Finite Element Method

    KAUST Repository

    Paszyński, Maciej R.

    2010-03-01

    In this paper we present a new parallel multi-frontal direct solver, dedicated for the hp Finite Element Method (hp-FEM). The self-adaptive hp-FEM generates in a fully automatic mode, a sequence of hp-meshes delivering exponential convergence of the error with respect to the number of degrees of freedom (d.o.f.) as well as the CPU time, by performing a sequence of hp refinements starting from an arbitrary initial mesh. The solver constructs an initial elimination tree for an arbitrary initial mesh, and expands the elimination tree each time the mesh is refined. This allows us to keep track of the order of elimination for the solver. The solver also minimizes the memory usage, by de-allocating partial LU factorizations computed during the elimination stage of the solver, and recomputes them for the backward substitution stage, by utilizing only about 10% of the computational time necessary for the original computations. The solver has been tested on 3D Direct Current (DC) borehole resistivity measurement simulation problems. We measure the execution time and memory usage of the solver over a large regular mesh with 1.5 million degrees of freedom as well as on a highly non-regular mesh, generated by the self-adaptive hp-FEM, with finite elements of various sizes and polynomial orders of approximation varying from p = 1 to p = 9. From the presented experiments it follows that the parallel solver scales well up to the maximum number of utilized processors. The limit for the solver scalability is the maximum sequential part of the algorithm: the computations of the partial LU factorizations over the longest path, coming from the root of the elimination tree down to the deepest leaf. © 2009 Elsevier Inc. All rights reserved.

  1. Adaptive discontinuous Galerkin methods for non-linear reactive flows

    CERN Document Server

    Uzunca, Murat

    2016-01-01

    The focus of this monograph is the development of space-time adaptive methods to solve the convection/reaction dominated non-stationary semi-linear advection diffusion reaction (ADR) equations with internal/boundary layers in an accurate and efficient way. After introducing the ADR equations and discontinuous Galerkin discretization, robust residual-based a posteriori error estimators in space and time are derived. The elliptic reconstruction technique is then utilized to derive the a posteriori error bounds for the fully discrete system and to obtain optimal orders of convergence. As coupled surface and subsurface flow over large space and time scales is described by the ADR equation, the methods described in this book are of high importance in many areas of geosciences, including oil and gas recovery, groundwater contamination and sustainable use of groundwater resources, and storing greenhouse gases or radioactive waste in the subsurface.

  2. Adaptive decoupled power control method for inverter connected DG

    DEFF Research Database (Denmark)

    Sun, Xiaofeng; Tian, Yanjun; Chen, Zhe

    2014-01-01

    The integration of renewable energy technology is making the power distribution system more flexible, but it also introduces challenges for traditional technology. Being intermittent and having little inertia, renewable energy-based generations need effective control methods to cooperate with other devices, such as storage, loads and the utility grid. The widely used power frequency (P–f) droop control is based on the precondition of inductive line impedance, but the low-voltage system is mainly resistive, and the different load characteristics also need to be considered. This study presents an adaptive droop control method based on online evaluation of the power decoupling matrix for inverter-connected distributed generations in the distribution system. Traditional decoupled power control is based simply on the line impedance parameter, but the load characteristics also cause power coupling and alter...

  3. Development of a novel molecular detection method for clustered regularly interspaced short palindromic repeats (CRISPRs) in Taylorella organisms.

    Science.gov (United States)

    Hara, Yasushi; Nakajima, Takuya; Akamatsu, Marie; Yahiro, Motoki; Kagawa, Shizuko; Petry, Sandrine; Matsuda, Motoo; Moore, John E

    2015-07-01

    Contagious equine metritis is a bacterial infectious disease of horses caused by Taylorella equigenitalis, a Gram-negative eubacterium. The disease has been described in several continents, including Europe, North America and Asia. A novel molecular method was developed to detect clustered regularly interspaced short palindromic repeats (CRISPRs), which were separated by non-repetitive unique spacer regions (NRUSRs) of similar length, in the Taylorella equigenitalis EQ59 strain using a primer pair, f-/r-TeCRISPR-ladder, by PCR amplification. In total, 31 Taylorella isolates (17 T. equigenitalis and 14 Taylorella asinigenitalis) were examined. The T. equigenitalis isolates came from thoroughbred and cold-blooded horses from nine countries during 1980-1996, whilst the T. asinigenitalis isolates all originated from donkey jacks in France and the USA during 1997-2006. PAGE fractionated all of the 13 CRISPRs separated by 12 NRUSRs in T. equigenitalis EQ59. Examples of permutations of CRISPRs separated by NRUSRs, forming small ladders consisting of two doublet bands, were shown. Putative CRISPRs separated by NRUSRs were amplified with 14/17 (82.4 %) geographically disparate T. equigenitalis isolates using the newly designed primer pair. Approximately 82.4 % of the T. equigenitalis isolates had CRISPRs separated by NRUSRs. The CRISPR locus was also found in the French T. asinigenitalis strain MCE3. Putative CRISPRs separated by NRUSRs were detected similarly in 4/14 (28.6 %) T. asinigenitalis isolates. Overall, a more detailed understanding of the molecular biology of CRISPRs within Taylorella organisms may help elucidate the pathogenic virulence and transmission mechanisms associated with this important equine pathogen.

  4. Harmonic Adaptability Remote Testing Method for Offshore Wind Turbines

    Directory of Open Access Journals (Sweden)

    Zimin Jiang

    2017-11-01

    Harmonic adaptability (HA) capability is required for large-scale onshore and offshore wind turbines (WTs) connected to the grid. To ensure that the distortion of the harmonic voltage at the grid access point generated by the grid simulator is in accordance with the required value, this paper proposes an on-site HA remote testing method for offshore WTs that eliminates submarine cable effects. The deviation compensation method detects the integer harmonic voltage distortion based on instantaneous reactive power theory, and the deviation from the required value is compensated by a series active power filter. In order to further reduce the capacity of the designed device, integer harmonics close to the resonant frequency are first suppressed by the selective harmonic damping (SHD) method. Owing to the attenuation of the extreme amplification, the deviation determining the equipment capacity is decreased correspondingly. As a small synthesized impedance working at the selected frequency can suppress the amplification significantly, a low-power-ratings design for the SHD method can be achieved, and undesired resonance can be avoided. Simulation results indicate that the proposed method can keep the harmonic distortion within the error tolerance.

  5. [The Confusion Assessment Method: Transcultural adaptation of a French version].

    Science.gov (United States)

    Antoine, V; Belmin, J; Blain, H; Bonin-Guillaume, S; Goldsmith, L; Guerin, O; Kergoat, M-J; Landais, P; Mahmoudi, R; Morais, J A; Rataboul, P; Saber, A; Sirvain, S; Wolfklein, G; de Wazieres, B

    2018-04-03

    The Confusion Assessment Method (CAM) is a validated key tool in clinical practice and research programs to diagnose delirium and assess its severity. There is no validated French version of the CAM training manual and coding guide (Inouye SK). The aim of this study was to establish a consensual French version of the CAM and its manual. Cross-cultural adaptation was used to achieve equivalence between the original version and a French adapted version of the CAM manual. A rigorous process was conducted, including control of the cultural adequacy of the tool's components, double forward and back translations, reconciliation, expert committee review (including bilingual translators of different nationalities, a linguist, highly qualified clinicians and methodologists) and pretesting. A consensual French version of the CAM was achieved. Implementation of the French version of the CAM in daily clinical practice will enable optimal diagnosis of delirium and enhance communication between health professionals in French-speaking countries. Validity and psychometric properties are being tested in a French multicenter cohort, opening up new perspectives for improved quality of care and research programs in French-speaking countries.

  6. An adaptive finite element method for steady and transient problems

    International Nuclear Information System (INIS)

    Benner, R.E. Jr.; Davis, H.T.; Scriven, L.E.

    1987-01-01

    Distributing integral error uniformly over variable subdomains, or finite elements, is an attractive criterion by which to subdivide a domain for the Galerkin/finite element method when localized steep gradients and high curvatures are to be resolved. Examples are fluid interfaces, shock fronts and other internal layers, as well as fluid mechanical and other boundary layers, e.g. thin-film states at solid walls. The uniform distribution criterion is developed into an adaptive technique for one-dimensional problems. Nodal positions can be updated simultaneously with nodal values during Newton iteration, but it is usually better to adopt nearly optimal nodal positions during Newton iteration upon nodal values. Three illustrative problems are solved: steady convection with diffusion, gradient theory of fluid wetting on a solid surface and Buckley-Leverett theory of two phase Darcy flow in porous media
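
    In 1D the uniform-distribution criterion reduces to classical equidistribution of a monitor function; the sketch below places nodes so each element carries an equal share of the monitor integral, clustering them in the internal layer of a toy tanh profile.

    ```python
    # De Boor-style equidistribution in 1D: place nodes so each element
    # carries an equal share of the integral of a monitor function (here
    # an arc-length monitor for a tanh internal layer).
    import numpy as np

    def equidistribute(x, monitor, n_nodes):
        cumulative = np.concatenate([[0.0], np.cumsum(
            0.5 * (monitor[1:] + monitor[:-1]) * np.diff(x))])  # trapezoid
        targets = np.linspace(0.0, cumulative[-1], n_nodes)
        return np.interp(targets, cumulative, x)

    x = np.linspace(0.0, 1.0, 1001)
    u = np.tanh(50 * (x - 0.5))                       # layer at x = 0.5
    monitor = np.sqrt(1.0 + np.gradient(u, x) ** 2)   # arc-length monitor
    print(np.round(equidistribute(x, monitor, 11), 3))  # nodes cluster
    ```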

  7. Regularized Structural Equation Modeling

    Science.gov (United States)

    Jacobucci, Ross; Grimm, Kevin J.; McArdle, John J.

    2016-01-01

    A new method is proposed that extends the use of regularization in both lasso and ridge regression to structural equation models. The method is termed regularized structural equation modeling (RegSEM). RegSEM penalizes specific parameters in structural equation models, with the goal of creating easier to understand and simpler models. Although regularization has gained wide adoption in regression, very little has transferred to models with latent variables. By adding penalties to specific parameters in a structural equation model, researchers have a high level of flexibility in reducing model complexity, overcoming poor fitting models, and the creation of models that are more likely to generalize to new samples. The proposed method was evaluated through a simulation study, two illustrative examples involving a measurement model, and one empirical example involving the structural part of the model to demonstrate RegSEM’s utility. PMID:27398019
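
    In the lasso variant, the penalized fit function has a compact form; the LaTeX sketch below, with S the sample covariance, Sigma(theta) the model-implied covariance, p the number of observed variables and theta_pen the penalized parameter subset, is a plausible rendering of the objective described above.

    ```latex
    % Penalized ML fit function for RegSEM (lasso variant), as sketched
    % here: the usual ML discrepancy plus an l1 penalty on selected
    % parameters.
    F_{\mathrm{RegSEM}}(\theta) =
      \log\lvert\Sigma(\theta)\rvert
      + \operatorname{tr}\!\left( S\,\Sigma(\theta)^{-1} \right)
      - \log\lvert S\rvert - p
      + \lambda \left\lVert \theta_{\mathrm{pen}} \right\rVert_{1}
    ```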

  8. Adaptive Elastic Net for Generalized Methods of Moments.

    Science.gov (United States)

    Caner, Mehmet; Zhang, Hao Helen

    2014-01-30

    Model selection and estimation are crucial parts of econometrics. This paper introduces a new technique that can simultaneously estimate and select the model in the generalized method of moments (GMM) context. The GMM is particularly powerful for analyzing complex data sets such as longitudinal and panel data, and it has wide applications in econometrics. This paper extends the least-squares-based adaptive elastic net estimator of Zou and Zhang (2009) to nonlinear equation systems with endogenous variables. The extension is not trivial and involves a new proof technique due to the estimators' lack of closed-form solutions. Compared to the Bridge-GMM of Caner (2009), we allow the number of parameters to diverge to infinity, as well as collinearity among a large number of variables, and the redundant parameters are set to zero via a data-dependent technique. The method has the oracle property, meaning that we can estimate the nonzero parameters with their standard limit and the redundant parameters are dropped from the equations simultaneously. Numerical examples are used to illustrate the performance of the new method.

  9. Automatic estimation of the regularization parameter in 2D focusing gravity inversion: application of the method to the Safo manganese mine in the northwest of Iran

    Science.gov (United States)

    Vatankhah, Saeed; Ardestani, Vahid E.; Renaut, Rosemary A.

    2014-08-01

    We investigate the use of Tikhonov regularization with the minimum support stabilizer for underdetermined 2D inversion of gravity data. This stabilizer produces models with non-smooth properties which is useful for identifying geologic structures with sharp boundaries. A very important aspect of using Tikhonov regularization is the choice of the regularization parameter that controls the trade-off between the data fidelity and the stabilizing functional. The L-curve and generalized cross-validation techniques, which only require the relative sizes of the uncertainties in the observations, are considered. Both criteria are applied in an iterative process; at each iteration a value for the regularization parameter is estimated. Suitable values for the regularization parameter are successfully determined in both cases for synthetic, but practically relevant, examples. Whenever the geologic situation permits, it is easier and more efficient to model the subsurface with a 2D algorithm, rather than to apply a full 3D approach. Then, because the problem is smaller it is appropriate to use the generalized singular value decomposition to solve the problem efficiently. The method is applied to a profile of gravity data acquired over the Safo mining camp in Maku, Iran, which is well known for manganese ores. The presented results demonstrate success in reconstructing the geometry and density distribution of the subsurface source.
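
    The GCV criterion can be evaluated cheaply from an SVD (the small-problem analogue of the GSVD route mentioned above); the sketch below scans a grid of regularization parameters for a plain Tikhonov problem, omitting the paper's minimum-support reweighting iteration. Names and the toy problem are illustrative.

    ```python
    # Sketch: Tikhonov regularization with the parameter chosen by
    # scanning the GCV function, evaluated via the SVD of the forward
    # operator G (penalty lambda^2 * ||m||^2).
    import numpy as np

    def gcv_tikhonov(G, d, lambdas):
        U, s, Vt = np.linalg.svd(G, full_matrices=False)
        beta = U.T @ d
        out_of_range = np.linalg.norm(d - U @ beta) ** 2  # unfittable part
        best = (np.inf, None, None)
        for lam in lambdas:
            f = s**2 / (s**2 + lam**2)                    # filter factors
            resid = np.linalg.norm((1 - f) * beta) ** 2 + out_of_range
            gcv = resid / (len(d) - f.sum()) ** 2         # GCV(lambda)
            if gcv < best[0]:
                best = (gcv, lam, Vt.T @ (f * beta / s))  # m_lambda
        return best[1], best[2]

    rng = np.random.default_rng(7)
    G = rng.normal(size=(50, 30))
    m_true = np.zeros(30); m_true[10:15] = 1.0            # blocky model
    d = G @ m_true + 0.05 * rng.normal(size=50)
    lam, m = gcv_tikhonov(G, d, np.logspace(-4, 2, 60))
    print(lam, np.round(m[8:17], 2))
    ```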

  10. Automatic estimation of the regularization parameter in 2D focusing gravity inversion: application of the method to the Safo manganese mine in the northwest of Iran

    International Nuclear Information System (INIS)

    Vatankhah, Saeed; Ardestani, Vahid E; Renaut, Rosemary A

    2014-01-01

    We investigate the use of Tikhonov regularization with the minimum support stabilizer for underdetermined 2D inversion of gravity data. This stabilizer produces models with non-smooth properties which is useful for identifying geologic structures with sharp boundaries. A very important aspect of using Tikhonov regularization is the choice of the regularization parameter that controls the trade-off between the data fidelity and the stabilizing functional. The L-curve and generalized cross-validation techniques, which only require the relative sizes of the uncertainties in the observations, are considered. Both criteria are applied in an iterative process; at each iteration a value for the regularization parameter is estimated. Suitable values for the regularization parameter are successfully determined in both cases for synthetic, but practically relevant, examples. Whenever the geologic situation permits, it is easier and more efficient to model the subsurface with a 2D algorithm, rather than to apply a full 3D approach. Then, because the problem is smaller it is appropriate to use the generalized singular value decomposition to solve the problem efficiently. The method is applied to a profile of gravity data acquired over the Safo mining camp in Maku, Iran, which is well known for manganese ores. The presented results demonstrate success in reconstructing the geometry and density distribution of the subsurface source. (paper)

  11. Point-splitting as a regularization method for λφ⁴-type vertices: Abelian case

    Energy Technology Data Exchange (ETDEWEB)

    Moura-Melo, Winder A.; Helayel Neto, J.A. [Centro Brasileiro de Pesquisas Fisicas (CBPF), Rio de Janeiro, RJ (Brazil)

    1998-11-01

    We obtained regularized Abelian Lagrangians containing λφ⁴-type vertices by means of a suitable point-splitting procedure. The calculation is developed in detail for a general Lagrangian whose fields (gauge and matter ones) satisfy certain conditions. We illustrate our results by considering some special cases, such as the Abelian Higgs, the (ψ̄ψ)² and the Avdeev-Chizov (real rank-2 antisymmetric tensor as matter fields) models. We also discuss some features of the obtained Lagrangian, such as the regularity and non-locality of its new interaction terms. Moreover, the resolution of the Abelian case may teach us some useful technical aspects when dealing with the non-Abelian one. (author)

  12. Adaptive Finite Element Methods for Elliptic Problems with Discontinuous Coefficients

    KAUST Repository

    Bonito, Andrea

    2013-01-01

    Elliptic PDEs with discontinuous diffusion coefficients occur in application domains such as diffusions through porous media, electromagnetic field propagation on heterogeneous media, and diffusion processes on rough surfaces. The standard approach to numerically treating such problems using finite element methods is to assume that the discontinuities lie on the boundaries of the cells in the initial triangulation. However, this does not match applications where discontinuities occur on curves, surfaces, or manifolds, and could even be unknown beforehand. One of the obstacles to treating such discontinuity problems is that the usual perturbation theory for elliptic PDEs assumes bounds for the distortion of the coefficients in the L∞ norm and this in turn requires that the discontinuities are matched exactly when the coefficients are approximated. We present a new approach based on distortion of the coefficients in an Lq norm with q < ∞ which therefore does not require the exact matching of the discontinuities. We then use this new distortion theory to formulate new adaptive finite element methods (AFEMs) for such discontinuity problems. We show that such AFEMs are optimal in the sense of distortion versus number of computations, and report insightful numerical results supporting our analysis. © 2013 Society for Industrial and Applied Mathematics.

  13. Transforming Social Regularities in a Multicomponent Community-Based Intervention: A Case Study of Professionals' Adaptability to Better Support Parents to Meet Their Children's Needs.

    Science.gov (United States)

    Quiroz Saavedra, Rodrigo; Brunson, Liesette; Bigras, Nathalie

    2017-06-01

    This paper presents an in-depth case study of the dynamic processes of mutual adjustment that occurred between two professional teams participating in a multicomponent community-based intervention (CBI). Drawing on the concept of social regularities, we focus on patterns of social interaction within and across the two microsystems involved in delivering the intervention. Two research strategies, narrative analysis and structural network analysis, were used to reveal the social regularities linking the two microsystems. Results document strategies and actions undertaken by the professionals responsible for the intervention to modify intersetting social regularities to deal with a problem situation that arose during the course of one intervention cycle. The results illustrate how key social regularities were modified in order to resolve the problem situation and allow the intervention to continue to function smoothly. We propose that these changes represent a transition to a new state of the ecological intervention system. This transformation appeared to be the result of certain key intervening mechanisms: changing key role relationships, boundary spanning, and synergy. The transformation also appeared to be linked to positive setting-level and individual-level outcomes: confidence of key team members, joint planning, decision-making and intervention activities, and the achievement of desired intervention objectives. © Society for Community Research and Action 2017.

  14. Object-Oriented Support for Adaptive Methods on Parallel Machines

    Directory of Open Access Journals (Sweden)

    Sandeep Bhatt

    1993-01-01

    This article reports on experiments from our ongoing project whose goal is to develop a C++ library which supports adaptive and irregular data structures on distributed-memory supercomputers. We demonstrate the use of our abstractions in implementing "tree codes" for large-scale N-body simulations. These algorithms require dynamically evolving treelike data structures, as well as load-balancing, both of which are widely believed to make the application difficult and cumbersome to program for distributed-memory machines. The ease of writing the application code on top of our C++ library abstractions (which themselves are application-independent), and the low overhead of the resulting C++ code (over hand-crafted C code), support our belief that object-oriented approaches are eminently suited to programming distributed-memory machines in a manner that (to the applications programmer) is architecture-independent. Our contribution in parallel programming methodology is to identify and encapsulate general classes of communication and load-balancing strategies useful across applications and MIMD architectures. This article reports experimental results from simulations of half a million particles using multiple methods.

  15. A dynamically adaptive lattice Boltzmann method for thermal convection problems

    Directory of Open Access Journals (Sweden)

    Feldhusen Kai

    2016-12-01

    Utilizing the Boussinesq approximation, a double-population incompressible thermal lattice Boltzmann method (LBM) for forced and natural convection in two and three space dimensions is developed and validated. A block-structured dynamic adaptive mesh refinement (AMR) procedure tailored for the LBM is applied to enable computationally efficient simulations of moderate to high Rayleigh number flows, which are characterized by a large scale disparity between boundary layers and free stream flow. As test cases, the analytically accessible problem of a two-dimensional (2D) forced convection flow through two porous plates and the non-Cartesian configuration of a heated rotating cylinder are considered. The objective of the latter is to advance the boundary conditions for an accurate treatment of curved boundaries and to demonstrate the effect on the solution. The effectiveness of the overall approach is demonstrated for the natural convection benchmark of a 2D cavity with differentially heated walls at Rayleigh numbers from 10³ up to 10⁸. To demonstrate the benefit of the employed AMR procedure for three-dimensional (3D) problems, results from the natural convection in a cubic cavity at Rayleigh numbers from 10³ up to 10⁵ are compared with benchmark results.

  16. An Adaptive UKF Based SLAM Method for Unmanned Underwater Vehicle

    Directory of Open Access Journals (Sweden)

    Hongjian Wang

    2013-01-01

    This work proposes an improved unscented Kalman filter (UKF)-based simultaneous localization and mapping (SLAM) algorithm built on an adaptive unscented Kalman filter (AUKF) with a noise statistic estimator. The algorithm addresses the issue that conventional UKF-SLAM algorithms have declining accuracy, with divergence occurring, when the prior noise statistics are unknown and time-varying. The new SLAM algorithm performs online estimation of the statistical parameters of unknown system noise by introducing a modified Sage-Husa noise statistic estimator. The algorithm also judges whether the filter is divergent and restrains potential filtering divergence using a covariance matching method. This approach reduces state estimation error, effectively improving the navigation accuracy of the SLAM system. Line feature extraction is implemented through a Hough transform based on the ranging sonar model. Test results based on unmanned underwater vehicle (UUV) sea trial data indicate that the proposed AUKF-SLAM algorithm is valid and feasible and provides better accuracy than the standard UKF-SLAM system.
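
    The Sage-Husa idea is easiest to see in a plain linear Kalman filter: the measurement-noise covariance is re-estimated from the innovation sequence with a fading factor. The sketch below shows that update pattern only; the paper embeds it in an unscented filter with divergence checks, and the guard against losing positive definiteness is omitted here.

    ```python
    # Sketch of a Sage-Husa style adaptive measurement-noise update inside
    # a plain linear Kalman filter (the paper uses it inside a UKF).
    import numpy as np

    def adaptive_kf(zs, F, H, Q, R0, x0, P0, b=0.98):
        x, P, R = x0, P0, R0
        for k, z in enumerate(zs):
            x, P = F @ x, F @ P @ F.T + Q                 # predict
            innov = z - H @ x                             # innovation
            d = (1 - b) / (1 - b ** (k + 1))              # fading weight
            R = (1 - d) * R + d * (np.outer(innov, innov) - H @ P @ H.T)
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
            x = x + K @ innov                             # update state
            P = (np.eye(len(x)) - K @ H) @ P
        return x, R

    # toy 1D constant-velocity track; true measurement noise std = 2.0
    rng = np.random.default_rng(8)
    F = np.array([[1.0, 1.0], [0.0, 1.0]]); H = np.array([[1.0, 0.0]])
    zs = [np.array([t + rng.normal(scale=2.0)])
          for t in np.arange(1.0, 101.0)]
    x, R = adaptive_kf(zs, F, H, 0.01 * np.eye(2), np.eye(1),
                       np.zeros(2), np.eye(2))
    print(x, R)   # R should drift toward the true variance (about 4)
    ```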

  17. Data mining methods application in reflexive adaptation realization in e-learning systems

    Directory of Open Access Journals (Sweden)

    A. S. Bozhday

    2017-01-01

    In recent years, e-learning technologies have been rapidly gaining momentum. In this regard, issues related to improving the quality of software for virtual educational systems are becoming topical: increasing the period of exploitation of programs, and increasing their reliability and flexibility. These characteristics directly depend on the ability of the software system to adapt to changes in the domain, the environment and user characteristics. In some cases, this ability reduces to the timely optimization of the program's own interfaces and data structure. At present, several approaches to creating mechanisms for self-optimization of software systems are known, but all of them have an insufficient degree of formalization and, as a consequence, weak universality. The purpose of this work is to develop the basics of a technology for self-optimization of software systems in e-learning. The proposed technology is based on a formulated and formalized principle of reflexive adaptation of software, applicable to a wide class of software systems and based on the discovery of new knowledge in the behavioral products of the system. To solve this problem, methods of data mining were applied. Data mining allows finding regularities in the functioning of software systems which may not be obvious at the stage of their development. Finding such regularities and analyzing them makes it possible to reorganize the structure of the system in a more optimal way and without human intervention, which prolongs the life cycle of the software and reduces the costs of its maintenance. Achieving this effect is important for e-learning systems, since they are quite expensive. The main results of the work include the proposed classification of software adaptation mechanisms, taking into account the latest trends in the IT field in general and in the field of e-learning in particular, and the formulation and formalization of…

  18. LDRD Final Report: Adaptive Methods for Laser Plasma Simulation

    International Nuclear Information System (INIS)

    Dorr, M R; Garaizar, F X; Hittinger, J A

    2003-01-01

    The goal of this project was to investigate the utility of parallel adaptive mesh refinement (AMR) in the simulation of laser plasma interaction (LPI). The scope of work included the development of new numerical methods and parallel implementation strategies. The primary deliverables were (1) parallel adaptive algorithms to solve a system of equations combining plasma fluid and light propagation models, (2) a research code implementing these algorithms, and (3) an analysis of the performance of parallel AMR on LPI problems. The project accomplished these objectives. New algorithms were developed for the solution of a system of equations describing LPI. These algorithms were implemented in a new research code named ALPS (Adaptive Laser Plasma Simulator) that was used to test the effectiveness of the AMR algorithms on the Laboratory's large-scale computer platforms. The details of the algorithm and the results of the numerical tests were documented in an article published in the Journal of Computational Physics [2]. A principal conclusion of this investigation is that AMR is most effective for LPI systems that are "hydrodynamically large", i.e., problems requiring the simulation of a large plasma volume relative to the volume occupied by the laser light. Since the plasma-only regions require less resolution than the laser light, AMR enables the use of efficient meshes for such problems. In contrast, AMR is less effective for, say, a single highly filamented beam propagating through a phase plate, since the resulting speckle pattern may be too dense to adequately separate scales with a locally refined mesh. Ultimately, the gain to be expected from the use of AMR is highly problem-dependent. One class of problems investigated in this project involved a pair of laser beams crossing in a plasma flow. Under certain conditions, energy can be transferred from one beam to the other via a resonant interaction with an ion acoustic wave in the crossing region. AMR provides an…

  19. The Student with Albinism in the Regular Classroom.

    Science.gov (United States)

    Ashley, Julia Robertson

    This booklet, intended for regular education teachers who have children with albinism in their classes, begins with an explanation of albinism, then discusses the special needs of the student with albinism in the classroom, and presents information about adaptations and other methods for responding to these needs. Special social and emotional…

  20. Incompressible Navier-Stokes inverse design method based on adaptive unstructured meshes

    International Nuclear Information System (INIS)

    Rahmati, M.T.; Charlesworth, D.; Zangeneh, M.

    2005-01-01

    An inverse method for blade design based on the Navier-Stokes equations on adaptive unstructured meshes has been developed. Unlike methods based on the inviscid equations, the effect of viscosity is directly taken into account. The pressure (or pressure loading) is prescribed, and the design method computes the blade shape that would accomplish the target pressure distribution. The method is implemented using a cell-centered finite volume scheme, which solves the incompressible Navier-Stokes equations on unstructured meshes. An adaptive unstructured mesh method based on grid subdivision and local mesh adaptation is utilized to increase the accuracy. (author)

  1. The Economics of Adaptation: Concepts, Methods and Examples

    DEFF Research Database (Denmark)

    Callaway, John MacIntosh; Naswa, Prakriti; Trærup, Sara Lærke Meltofte

    …and sectoral-level strategies, plans and policies. Furthermore, we see it at the local level, where people are already adapting to the early impacts of climate change that affect livelihoods through, for example, changing rainfall patterns, drought, and the frequency and intensity of extreme events. Analyses of the costs and benefits of climate change impacts and adaptation measures are important to inform future action. Despite the growth in the volume of research and studies on the economics of climate change adaptation over the past 10 years, there are still important gaps and weaknesses in the existing knowledge that limit effective and efficient decision-making and implementation of adaptation measures. Much of the literature to date has focussed on aggregate (national, regional and global) estimates of the economic costs of climate change impacts. There has been much less attention to the economics…

  2. Regularized Generalized Canonical Correlation Analysis

    Science.gov (United States)

    Tenenhaus, Arthur; Tenenhaus, Michel

    2011-01-01

    Regularized generalized canonical correlation analysis (RGCCA) is a generalization of regularized canonical correlation analysis to three or more sets of variables. It constitutes a general framework for many multi-block data analysis methods. It combines the power of multi-block data analysis methods (maximization of well identified criteria) and…

  3. Simulated annealing method for electronic circuits design: adaptation and comparison with other optimization methods

    International Nuclear Information System (INIS)

    Berthiau, G.

    1995-10-01

    The circuit design problem consists in determining acceptable parameter values (resistors, capacitors, transistor geometries, ...) which allow the circuit to meet various user-given operational criteria (DC consumption, AC bandwidth, transient times, ...). This task is equivalent to a multidimensional and/or multi-objective optimization problem: functions of n variables have to be minimized in a hyper-rectangular domain; equality constraints can optionally be specified. A similar problem consists in fitting component models: the optimization variables are then the model parameters, and one aims at minimizing a cost function built on the error between the model response and the data measured on the component. The optimization method chosen for this kind of problem is the simulated annealing method. This method, originating from the combinatorial optimization domain, has been adapted and compared with other global optimization methods for continuous-variable problems. An efficient strategy of variable discretization and a set of complementary stopping criteria have been proposed. The different parameters of the method have been tuned on analytical functions with known minima that are classically used in the literature. Our simulated annealing algorithm has been coupled with the open electrical simulator SPICE-PAC, whose modular structure allows the chaining of simulations required by the circuit optimization process. For high-dimensional problems, we propose a partitioning technique which ensures proportionality between CPU time and the number of variables. To compare our method with others, we have adapted three other methods from the combinatorial optimization domain: the threshold method, a genetic algorithm and the tabu search method. The tests have been performed on the same set of test functions, and the results allow a first comparison between these methods applied to continuous optimization variables. Finally, our simulated annealing program…
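
    For reference, the core of a simulated annealing loop for a box-constrained continuous objective is short; in the thesis setting the objective would wrap a SPICE-PAC simulation, whereas the sketch below minimizes a standard test function. Step sizes and the cooling schedule are illustrative choices, not the thesis's tuned values.

    ```python
    # Generic simulated annealing loop for a box-constrained continuous
    # objective; here f is the Rosenbrock test function.
    import numpy as np

    def anneal(f, lo, hi, n_iter=20_000, t0=1.0, cooling=0.9995, rng=None):
        rng = rng if rng is not None else np.random.default_rng()
        x = rng.uniform(lo, hi)
        fx, t = f(x), t0
        best_x, best_f = x.copy(), fx
        for _ in range(n_iter):
            cand = np.clip(x + rng.normal(scale=0.1 * (hi - lo)), lo, hi)
            fc = f(cand)
            # Metropolis rule: downhill always, uphill with prob exp(-df/t)
            if fc < fx or rng.random() < np.exp(-(fc - fx) / t):
                x, fx = cand, fc
                if fx < best_f:
                    best_x, best_f = x.copy(), fx
            t *= cooling                      # geometric cooling schedule
        return best_x, best_f

    rosenbrock = lambda v: (1 - v[0]) ** 2 + 100 * (v[1] - v[0] ** 2) ** 2
    lo, hi = np.full(2, -2.0), np.full(2, 2.0)
    print(anneal(rosenbrock, lo, hi, rng=np.random.default_rng(9)))
    ```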

  4. Adaptation.

    Science.gov (United States)

    Broom, Donald M

    2006-01-01

    The term adaptation is used in biology in three different ways. It may refer to changes which occur at the cell and organ level, at the individual level, or at the level of gene action and evolutionary processes. Adaptation by cells, especially nerve cells, helps in communication within the body, the distinguishing of stimuli, the avoidance of overload and the conservation of energy. The time course and complexity of these mechanisms vary. Adaptive characters of organisms, including adaptive behaviours, increase fitness, so this adaptation is evolutionary. The major part of this paper concerns adaptation by individuals and its relationship to welfare. In complex animals, feed-forward control is widely used: individuals predict problems and adapt by acting before the environmental effect is substantial. Much of adaptation involves brain control, and animals have a set of needs, located in the brain and acting largely via motivational mechanisms, to regulate life. Needs may be for resources, but are also for actions and stimuli which are part of the mechanism that has evolved to obtain the resources. Hence pigs do not just need food but need to be able to carry out actions like rooting in earth or manipulating materials which are part of foraging behaviour. The welfare of an individual is its state as regards its attempts to cope with its environment. This state includes various adaptive mechanisms, including feelings and those which cope with disease. The part of welfare which is concerned with coping with pathology is health. Disease, which implies some significant effect of pathology, always results in poor welfare. Welfare varies over a range from very good, when adaptation is effective and there are feelings of pleasure or contentment, to very poor. A key point concerning the concept of individual adaptation in relation to welfare is that welfare may be good or poor while adaptation is occurring. Some adaptation is very easy and energetically cheap and…

  5. Systems and Methods for Derivative-Free Adaptive Control

    Science.gov (United States)

    Yucelen, Tansel (Inventor); Kim, Kilsoo (Inventor); Calise, Anthony J. (Inventor)

    2015-01-01

    An adaptive control system is disclosed. The control system can control uncertain dynamic systems. The control system can employ one or more derivative-free adaptive control architectures. The control system can further employ one or more derivative-free weight update laws. The derivative-free weight update laws can comprise a time-varying estimate of an ideal vector of weights. The control system of the present invention can therefore quickly stabilize systems that undergo sudden changes in dynamics, caused by, for example, sudden changes in weight. Embodiments of the present invention can also provide a less complex control system than existing adaptive control systems. The control system can control aircraft and other dynamic systems, such as, for example, those with non-minimum phase dynamics.

  6. The use of the spectral method within the fast adaptive composite grid method

    Energy Technology Data Exchange (ETDEWEB)

    McKay, S.M.

    1994-12-31

    The search for efficient algorithms for the solution of partial differential equations has gone on for many years. The fast adaptive composite grid (FAC) method combines an efficient algorithm with high accuracy to obtain low-cost solutions to partial differential equations. The FAC method achieves fast solution by combining solutions on different grids with varying discretizations and using multigrid-like techniques. Recently, the continuous FAC (CFAC) method has been developed, which utilizes an analytic solution within a subdomain to iterate to a solution of the problem. This has been shown to achieve excellent results when the analytic solution can be found. The CFAC method will be extended to allow solvers which construct a function for the solution, e.g., spectral and finite element methods. In this discussion, spectral methods will be used to provide a fast, accurate solution to the partial differential equation. As spectral methods are more accurate than finite difference methods, the ensuing accuracy of this hybrid method outside of the subdomain will be investigated.

  7. Utilizing and Adapting the Delphi Method for Use in Qualitative Research

    OpenAIRE

    Shane R. Brady

    2015-01-01

    The Delphi method is a pragmatic research method created in the 1950s by researchers at the RAND Corporation for use in policy making, organizational decision making, and to inform direct practices. While the Delphi method has been regularly utilized in mixed methods studies, far fewer studies have been completed using the Delphi method for qualitative research. Despite the utility of the Delphi method in social science research, little guidance is provided for using the Delphi in the context...

  8. METHOD OF GROUP ADAPTATION WITH FIXING OF BIASES OF NEURONS (AFBN) FOR FORECASTING OF INDICATORS OF QUALITY OF VOLUME ANNOUNCERS

    Directory of Open Access Journals (Sweden)

    S. V. Bukharin

    2015-01-01

    Neural modeling often does not guarantee fulfillment of the principle of generality: a neural model trained on one data set can be inadequate when fed data from another set. Therefore, when neural modeling is used, the results must be tested, for example by means of ridge regression, which is based on the theory of regularization of ill-posed problems. The essence of the proposed method of adaptation of a neural network with fixing of biases (AFBN) is as follows: 1. Instead of a two-layer neural network, a single-layer network is recommended for adaptation, which corresponds more fully to the method of characteristic points, where the weighted sums of separate groups of features are chosen as inputs. 2. To eliminate the ambiguity caused by the traditional choice of random initial conditions, the initial values of the weights and biases of the neurons are set to zero. 3. For methodological unity of the solution of the direct and inverse problems of examination, the following restrictions are imposed programmatically on the weights and biases of the network: the weights lie in [0, 1], and the biases are forced to remain zero through the choice of the adaptation rate parameter. 4. Since the results of neural modeling can often be doubtful owing to violation of the principle of generality, obligatory testing of the obtained results is required, for example by means of ridge regression. As the presented results show, it is advisable in all cases to use the proposed methods of sequential and group adaptation with fixing of the neurons' biases, as this makes it possible to restore the initial regression model. When the biases of the neurons are fixed at zero, the found weights take values from the range [0, 1], which provides methodological unity of the solution of the direct and inverse problems of examination.

  9. An adaptive household sampling method for rural African communities

    African Journals Online (AJOL)

    The adaptive sampling strategy was cost and time effective: freely available versions of Google Earth and QGIS software were employed along with inexpensive handheld Global Positioning System (GPS) devices; a total of 57 households were surveyed by teams of two enumerators over three consecutive Sundays.

  10. Heuristic Constraint Management Methods in Multidimensional Adaptive Testing

    Science.gov (United States)

    Born, Sebastian; Frey, Andreas

    2017-01-01

    Although multidimensional adaptive testing (MAT) has been proven to be highly advantageous with regard to measurement efficiency when several highly correlated dimensions are measured, there are few operational assessments that use MAT. This may be due to issues of constraint management, which is more complex in MAT than it is in unidimensional…

  11. Key concepts and methods in social vulnerability and adaptive capacity

    Science.gov (United States)

    Daniel J. Murphy; Carina Wyborn; Laurie Yung; Daniel R. Williams

    2015-01-01

    National forests have been asked to assess how climate change will impact nearby human communities. To assist their thinking on this topic, we examine the concepts of social vulnerability and adaptive capacity with an emphasis on a range of theoretical and methodological approaches. This analysis is designed to help researchers and decision-makers select appropriate...

  12. Regularization in Matrix Relevance Learning

    NARCIS (Netherlands)

    Schneider, Petra; Bunte, Kerstin; Stiekema, Han; Hammer, Barbara; Villmann, Thomas; Biehl, Michael

    In this paper, we present a regularization technique to extend recently proposed matrix learning schemes in learning vector quantization (LVQ). These learning algorithms extend the concept of adaptive distance measures in LVQ to the use of relevance matrices. In general, metric learning can…

  13. High Resolution DNS of Turbulent Flows using an Adaptive, Finite Volume Method

    Science.gov (United States)

    Trebotich, David

    2014-11-01

    We present a new computational capability for high resolution simulation of incompressible viscous flows. Our approach is based on cut cell methods where an irregular geometry such as a bluff body is intersected with a rectangular Cartesian grid resulting in cut cells near the boundary. In the cut cells we use a conservative discretization based on a discrete form of the divergence theorem to approximate fluxes for elliptic and hyperbolic terms in the Navier-Stokes equations. Away from the boundary the method reduces to a finite difference method. The algorithm is implemented in the Chombo software framework which supports adaptive mesh refinement and massively parallel computations. The code is scalable to 200,000+ processor cores on DOE supercomputers, resulting in DNS studies at unprecedented scale and resolution. For flow past a cylinder in transition (Re = 300) we observe a number of secondary structures in the far wake in 2D where the wake is over 120 cylinder diameters in length. These are compared with the more regularized wake structures in 3D at the same scale. For flow past a sphere (Re = 600) we resolve an arrowhead structure in the velocity in the near wake. The effectiveness of AMR is further highlighted in a simulation of turbulent flow (Re = 6000) in the contraction of an oil well blowout preventer. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Advanced Scientific Computing Research, Applied Mathematics program under Contract Number DE-AC02-05-CH11231.

  14. Online co-regularized algorithms

    NARCIS (Netherlands)

    Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.

    2012-01-01

    We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks

  15. Non-regularized inversion method from light scattering applied to ferrofluid magnetization curves for magnetic size distribution analysis

    International Nuclear Information System (INIS)

    Rijssel, Jos van; Kuipers, Bonny W.M.; Erné, Ben H.

    2014-01-01

    A numerical inversion method known from the analysis of light scattering by colloidal dispersions is now applied to magnetization curves of ferrofluids. The distribution of magnetic particle sizes or dipole moments is determined without assuming that the distribution is unimodal or of a particular shape. The inversion method enforces positive number densities via a non-negative least squares procedure. It is tested successfully on experimental and simulated data for ferrofluid samples with known multimodal size distributions. The created computer program MINORIM is made available on the web. - Highlights: • A method from light scattering is applied to analyze ferrofluid magnetization curves. • A magnetic size distribution is obtained without prior assumption of its shape. • The method is tested successfully on ferrofluids with a known size distribution. • The practical limits of the method are explored with simulated data including noise. • This method is implemented in the program MINORIM, freely available online
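
    The abstract names the key ingredient explicitly: a non-negative least squares (NNLS) inversion. The MINORIM program itself is not reproduced here; the sketch below only illustrates the idea with SciPy's NNLS solver, and the Langevin kernel, grids, and synthetic bimodal distribution are illustrative assumptions, not the authors' code.

        import numpy as np
        from scipy.optimize import nnls

        kT = 1.380649e-23 * 298.0          # thermal energy at room temperature (J)
        mu0 = 4e-7 * np.pi                 # vacuum permeability (T*m/A)

        def langevin(x):
            # L(x) = coth(x) - 1/x, with a series fallback for small arguments
            x = np.asarray(x, dtype=float)
            small = np.abs(x) < 1e-4
            out = np.empty_like(x)
            out[~small] = 1.0 / np.tanh(x[~small]) - 1.0 / x[~small]
            out[small] = x[small] / 3.0
            return out

        H = np.linspace(1e2, 1e6, 80)      # applied field grid (A/m)
        m = np.logspace(-20, -18, 40)      # candidate dipole moments (A*m^2)

        # Kernel: column j holds the magnetization of one particle of moment m_j
        A = m[None, :] * langevin(mu0 * np.outer(H, m) / kT)

        # Synthetic bimodal "true" distribution and a noisy magnetization curve
        n_true = np.exp(-0.5 * ((np.log(m) - np.log(3e-20)) / 0.3) ** 2) \
            + 0.5 * np.exp(-0.5 * ((np.log(m) - np.log(3e-19)) / 0.2) ** 2)
        M = A @ n_true + 1e-3 * A.max() * np.random.default_rng(0).normal(size=H.size)

        # Non-negativity is the only constraint: no unimodal or log-normal shape assumed
        n_est, rnorm = nnls(A, M)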

  16. Adaptation

    International Development Research Centre (IDRC) Digital Library (Canada)

    Dar es Salaam, Durban, Bloemfontein, Antananarivo, Cape Town, Ifrane ... program strategy. A number of CCAA-supported projects have relevance to other important adaptation-related themes such as disaster preparedness and climate ...

  17. Selecting protein families for environmental features based on manifold regularization.

    Science.gov (United States)

    Jiang, Xingpeng; Xu, Weiwei; Park, E K; Li, Guangrong

    2014-06-01

    Recently, statistical and machine learning methods have been developed to identify functional or taxonomic features associated with environmental conditions or physiological status. Proteins (or other functional and taxonomic entities) that are important for particular environmental features can potentially be used as biosensors. A major challenge is how the distribution of protein and gene functions embodies the adaptation of microbial communities across environments and host habitats. In this paper, we propose a novel regularization method for linear regression to address this challenge. The approach is inspired by local linear embedding (LLE), and we call it manifold-constrained regularization for linear regression (McRe). The novel regularization procedure also has potential to be used in solving other linear systems. We demonstrate the efficiency and the performance of the approach in both simulation and real data.
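
    The exact McRe formulation is not given in the record above; a minimal sketch of the generic idea — least-squares regression with a graph-Laplacian (manifold) penalty that encourages similar predictions on neighboring samples — follows, with synthetic data standing in for protein abundances.

        import numpy as np

        # min_w ||X w - y||^2 + lam * (X w)^T L (X w) + eps * ||w||^2, solved in closed form
        rng = np.random.default_rng(1)
        X = rng.normal(size=(100, 20))       # samples x features (e.g. protein abundances)
        y = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.normal(size=100)

        # k-nearest-neighbor adjacency in sample space
        k = 5
        D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
        W = np.zeros_like(D)
        for i in range(X.shape[0]):
            for j in np.argsort(D[i])[1:k + 1]:   # skip self (distance 0)
                W[i, j] = W[j, i] = 1.0
        L = np.diag(W.sum(axis=1)) - W            # unnormalized graph Laplacian

        lam, eps = 0.1, 1e-6
        A = X.T @ X + lam * X.T @ L @ X + eps * np.eye(X.shape[1])
        w = np.linalg.solve(A, X.T @ y)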

  18. Adaptive Integral Method for Higher-Order Hierarchical Method of Moments

    DEFF Research Database (Denmark)

    Kim, Oleksiy S.; Meincke, Peter

    2006-01-01

    The Adaptive Integral Method (AIM) is applied to solve the volume integral equation in conjunction with the higher-order Method of Moments (MoM). The classical AIM is modified for larger discretization cells to take advantage of higher-order MoM. The technique combines the low computational complexity and memory requirements of AIM with the reduced number of unknowns and higher-order convergence of higher-order hierarchical Legendre basis functions. Numerical examples show the advantages of the proposed technique over AIM based on low-order basis functions in terms of memory and computational time. Several preconditioning techniques applied to AIM for volume integral equations are considered.

  19. Effort variation regularization in sound field reproduction

    DEFF Research Database (Denmark)

    Stefanakis, Nick; Jacobsen, Finn; Sarris, Ioannis

    2010-01-01

    In this paper, active control is used in order to reproduce a given sound field in an extended spatial region. A method is proposed which minimizes the reproduction error at a number of control positions with the reproduction sources holding a certain relation within their complex strengths ... and adaptive wave field synthesis (AWFS), both under free-field conditions and in reverberant rooms. It is shown that effort variation regularization overcomes the problems associated with small spaces and with a low ratio of direct to reverberant energy, thus improving the reproduction accuracy in the listening room.

  20. Adaptive Higher-Order Methods for Problems in Elastodynamics

    National Research Council Canada - National Science Library

    Oden, J

    2000-01-01

    Two significant classes of new methods were developed, analyzed, and implemented: 1) the so-called hp-Cloud Method, a variant of the meshfree methods built on partitions of unity generated by traditional finite elements ...

  1. 3-D design method for welding groove and seal weld of reactor CRDM adapter

    International Nuclear Information System (INIS)

    Ma Baiyong; Wang Xiaobin; Zhu Xiaoyong

    2008-01-01

    Based on the analysis of the 2-D and 3-D shapes of the welding groove and seal weld of the reactor CRDM adapter, four intersecting curves are defined, and a method and rationale for the 3-D design of the adapter welding groove and seal weld are proposed. Parameterized design of the adapter welding groove and seal weld has been realized using UG software, and the main factors which affect the welding section areas have been analyzed. Compared with the measurement, the error in the weld section area of each adapter created by the spline-fitting method is less than 0.8%. (authors)

  2. Adapt

    Science.gov (United States)

    Bargatze, L. F.

    2015-12-01

    Active Data Archive Product Tracking (ADAPT) is a collection of software routines that permits one to generate XML metadata files to describe and register data products in support of the NASA Heliophysics Virtual Observatory VxO effort. ADAPT is also a philosophy. The ADAPT concept is to use any and all available metadata associated with scientific data to produce XML metadata descriptions in a consistent, uniform, and organized fashion to provide blanket access to the full complement of data stored on a targeted data server. In this poster, we present an application of ADAPT to describe all of the data products that are stored by using the Common Data Format (CDF) served out by the CDAWEB and SPDF data servers hosted at the NASA Goddard Space Flight Center. These data servers are the primary repositories for NASA Heliophysics data. For this purpose, the ADAPT routines have been used to generate data resource descriptions by using an XML schema named Space Physics Archive, Search, and Extract (SPASE). SPASE is the designated standard for documenting Heliophysics data products, as adopted by the Heliophysics Data and Model Consortium. The set of SPASE XML resource descriptions produced by ADAPT includes high-level descriptions of numerical data products, display data products, or catalogs and also includes low-level "Granule" descriptions. A SPASE Granule is effectively a universal access metadata resource; a Granule associates an individual data file (e.g. a CDF file) with a "parent" high-level data resource description, assigns a resource identifier to the file, and lists the corresponding access URL(s). The CDAWEB and SPDF file systems were queried to provide the input required by the ADAPT software to create an initial set of SPASE metadata resource descriptions. Then, the CDAWEB and SPDF data repositories were queried subsequently on a nightly basis and the CDF file lists were checked for any changes such as the occurrence of new, modified, or deleted

  3. A Novel Method of Adaptive Traffic Image Enhancement for Complex Environments

    Directory of Open Access Journals (Sweden)

    Cao Liu

    2015-01-01

    There exist two main drawbacks of classic image enhancement methods for traffic images. The first is performance degradation under frontlight, backlight, and extremely dark conditions. The second is complicated manual settings, such as transform functions and multiple-parameter selection mechanisms. Thus, this paper proposes an effective and adaptive parameter-optimization enhancement algorithm based on adaptive brightness baseline drift (ABBD) for color traffic images under different luminance conditions. This method consists of two parts: brightness baseline model acquisition and adaptive color image compensation. The brightness baseline model can be attained by analyzing changes in light along a timeline. The adaptive color image compensation involves color space remapping and adaptive compensation of specific color components. Experiments were conducted on various traffic images under frontlight, backlight, and nighttime conditions. The results show that the proposed method achieved better effects compared with other available methods under different luminance conditions and also effectively reduced the influence of the weather.

  4. Mitigation and adaptation cost assessment: Concepts, methods and appropriate use

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-12-31

    The present report on mitigation and adaptation costs addresses the complex issue of identifying synergies and tradeoffs between national priorities and mitigation policies, an issue that requires the integration of various disciplines so as to provide a comprehensive overview of future development trends, available technologies and economic policies. Further, the report suggests a new conceptual framework for treating the social aspects in assessing mitigation and adaptation costs in climate change studies. The impacts of certain sustainability indicators such as employment and poverty reduction on mitigation costing are also discussed in the report. Among the topics to be considered by over 120 distinguished international experts are the elements of costing methodologies at both the micro and macro levels. Special effort will be made to include the impacts of such parameters as income, equity, poverty, employment and trade. Hence, the contents of this report are highly relevant to the authors of the Third Working Group in the development of the TAR. The report contains a chapter on Special Issues and Problems Related to Cost Assessment for Developing Countries. This chapter will provide valuable background in the further development of these concepts in the TAR because it is an area that has not received due attention in previous work. (au)

  5. Mitigation and adaptation cost assessment: Concepts, methods and appropriate use

    International Nuclear Information System (INIS)

    1998-01-01

    The present report on mitigation and adaptation costs addresses the complex issue of identifying synergies and tradeoffs between national priorities and mitigation policies, an issue that requires the integration of various disciplines so as to provide a comprehensive overview of future development trends, available technologies and economic policies. Further, the report suggests a new conceptual framework for treating the social aspects in assessing mitigation and adaptation costs in climate change studies. The impacts of certain sustainability indicators such as employment and poverty reduction on mitigation costing are also discussed in the report. Among the topics to be considered by over 120 distinguished international experts are the elements of costing methodologies at both the micro and macro levels. Special effort will be made to include the impacts of such parameters as income, equity, poverty, employment and trade. Hence, the contents of this report are highly relevant to the authors of the Third Working Group in the development of the TAR. The report contains a chapter on Special Issues and Problems Related to Cost Assessment for Developing Countries. This chapter will provide valuable background in the further development of these concepts in the TAR because it is an area that has not received due attention in previous work. (au)

  6. Assessment of necessary regularity of internal irradiation monitoring on the basis of direct and indirect methods of dosimetry

    International Nuclear Information System (INIS)

    Malykhin, V.M.; Ivanova, N.I.

    1981-01-01

    It is shown that, when assessing the necessary periodicity of internal irradiation monitoring, it is required to take account of the nature (rhythm) of radionuclide intake into the organism during the monitoring period, the effective biological half-life of the radionuclide, its activity in the organism, the sensitivity of the technique applied, and the labour-consuming character of the monitoring method [ru]

  7. Travel time calculation in regular 3D grid in local and regional scale using fast marching method

    Science.gov (United States)

    Polkowski, M.

    2015-12-01

    Local and regional 3D seismic velocity models of the crust and sediments are very important for numerous techniques, such as mantle and core tomography and the localization of local and regional events. Most of these techniques require calculation of wave travel time through the 3D model. This can be achieved using multiple approaches, from simple ray tracing to advanced full-waveform calculation. In this study a simple and efficient implementation of the fast marching method is presented. This method provides more information than ray tracing and is much less complicated than full-waveform methods, making it a good compromise. The presented code is written in C++, well commented, and easy to modify for different types of studies. Additionally, performance is discussed in detail, including the possibilities of multithreading and massive parallelism such as GPU computing. The source code will be published in 2016 as part of the PhD thesis. The National Science Centre Poland provided financial support for this work via NCN grant DEC-2011/02/A/ST10/00284.
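
    The author's C++ code is not yet published, but the same computation can be illustrated with the third-party scikit-fmm package, which provides a fast marching travel-time solver on regular grids. The layered velocity model and source location below are made-up examples, not taken from the record above.

        import numpy as np
        import skfmm  # scikit-fmm, a third-party fast marching implementation

        nx, ny, nz = 60, 60, 30
        dx = 1.0                              # grid spacing (km)

        # Simple layered model: speed increases with depth (km/s)
        speed = np.ones((nx, ny, nz))
        for iz in range(nz):
            speed[:, :, iz] = 3.0 + 0.05 * iz

        # phi is negative at the source, positive elsewhere; its zero contour
        # is the initial wavefront for the fast marching solver
        phi = np.ones((nx, ny, nz))
        phi[30, 30, 0] = -1.0

        travel_time = skfmm.travel_time(phi, speed, dx=dx)
        print(travel_time[0, 0, 0])           # arrival time at a corner node, in seconds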

  8. Adaptation

    International Development Research Centre (IDRC) Digital Library (Canada)

    Nairobi, Kenya. 28: Adapting Fishing Policy to Climate Change with the Aid of Scientific and Endogenous Knowledge (Cape Verde, Gambia, Guinea, Guinea-Bissau, Mauritania and Senegal; Environment and Development in the Third World (ENDA-TM), Dakar, Senegal). 29: Integrating Indigenous Knowledge in Climate Risk ...

  9. A method for the deliberate and deliberative selection of policy instrument mixes for climate change adaptation

    Directory of Open Access Journals (Sweden)

    Heleen L. P. Mees

    2014-06-01

    Policy instruments can help put climate adaptation plans into action. Here, we propose a method for the systematic assessment and selection of policy instruments for stimulating adaptation action. The multi-disciplinary set of six assessment criteria is derived from economics, policy, and legal studies. These criteria are specified for the purpose of climate adaptation by taking into account four challenges to the governance of climate adaptation: uncertainty, spatial diversity, controversy, and social complexity. The six criteria and four challenges are integrated into a step-wise method that enables the selection of instruments, starting from a generic assessment and ending with a specific assessment of policy instrument mixes for the stimulation of a specific adaptation measure. We then apply the method to three examples of adaptation measures. The method's merits lie in enabling deliberate choices through a holistic and comprehensive set of adaptation-specific criteria, as well as deliberative choices by offering a stepwise method that structures an informed dialog on instrument selection. Although the method was created and applied by scientific experts, policy-makers can also use the method.

  10. Improved Model Calibration From Genetically Adaptive Multi-Method Search

    Science.gov (United States)

    Vrugt, J. A.; Robinson, B. A.

    2006-12-01

    Evolutionary optimization is a subject of intense interest in many fields of study, including computational chemistry, biology, bio-informatics, economics, computational science, geophysics and environmental science. The goal is to determine values for model parameters or state variables that provide the best possible solution to a predefined cost or objective function, or a set of optimal trade-off values in the case of two or more conflicting objectives. However, locating optimal solutions often turns out to be painstakingly tedious, or even completely beyond current or projected computational capacity. Here we present an innovative concept of genetically adaptive multi-algorithm optimization. Benchmark results show that this new optimization technique is significantly more efficient than current state-of-the-art evolutionary algorithms, approaching a factor of ten improvement for the more complex, higher dimensional optimization problems. Our new algorithm provides new opportunities for solving previously intractable environmental model calibration problems.

  11. On Round-off Error for Adaptive Finite Element Methods

    KAUST Repository

    Alvarez-Aramberri, J.

    2012-06-02

    Round-off error analysis has historically been studied by analyzing the condition number of the associated matrix. By controlling the size of the condition number, it is possible to guarantee a prescribed round-off error tolerance. However, the opposite is not true, since it is possible to have a system of linear equations with an arbitrarily large condition number that still delivers a small round-off error. In this paper, we perform a round-off error analysis in the context of 1D and 2D hp-adaptive Finite Element simulations for the case of the Poisson equation. We conclude that boundary conditions play a fundamental role in the round-off error analysis, especially for the so-called ‘radical meshes’. Moreover, we illustrate the importance of the right-hand side when analyzing the round-off error, which is independent of the condition number of the matrix.

  12. Regularizing portfolio optimization

    International Nuclear Information System (INIS)

    Still, Susanne; Kondor, Imre

    2010-01-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
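
    A small numerical illustration of the diversification effect of an L2 regularizer follows. For brevity the paper's expected-shortfall formulation is replaced here by the classical minimum-variance problem, whose regularized solution has a closed form; the data are synthetic.

        import numpy as np

        # min_w  w^T Sigma w + lam * ||w||^2  subject to  sum(w) = 1
        # solution: w proportional to (Sigma + lam I)^{-1} 1
        rng = np.random.default_rng(42)
        n_assets, n_obs = 50, 60              # few observations: noisy covariance estimate
        returns = rng.normal(size=(n_obs, n_assets)) * 0.01
        Sigma = np.cov(returns, rowvar=False)

        ones = np.ones(n_assets)
        for lam in [0.0, 1e-4, 1e-2]:
            w = np.linalg.solve(Sigma + lam * np.eye(n_assets), ones)
            w /= w.sum()
            # Larger lam spreads weight more evenly -- the "diversification pressure"
            print(f"lam={lam:.0e}  max|w|={np.abs(w).max():.3f}")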

  13. Regularizing portfolio optimization

    Science.gov (United States)

    Still, Susanne; Kondor, Imre

    2010-07-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.

  14. Adapting Language Modeling Methods for Expert Search to Rank Wikipedia Entities

    Science.gov (United States)

    Jiang, Jiepu; Lu, Wei; Rong, Xianqian; Gao, Yangyan

    In this paper, we propose two methods to adapt language modeling methods for expert search to the INEX entity ranking task. In our experiments, we notice that language modeling methods for expert search, if directly applied to the INEX entity ranking task, cannot effectively distinguish entity types. Thus, our proposed methods aim at resolving this problem. First, we propose a method to take into account the INEX category query field. Second, we use an interpolation of two language models to rank entities, which can solely work on the text query. Our experiments indicate that both methods can effectively adapt language modeling methods for expert search to the INEX entity ranking task.
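
    A toy sketch of the second idea — scoring an entity by an interpolation of two unigram language models, one built from the entity's description and one from its type context — is given below. The smoothing scheme, the lambda value, and the example entities are illustrative assumptions, not the paper's setup.

        from collections import Counter

        def lm(text):
            # Unigram maximum-likelihood language model over whitespace tokens
            counts = Counter(text.lower().split())
            total = sum(counts.values())
            return lambda w: counts[w] / total if total else 0.0

        def score(query, desc_text, type_text, lam=0.7, eps=1e-6):
            p_desc, p_type = lm(desc_text), lm(type_text)
            s = 1.0
            for w in query.lower().split():
                s *= lam * p_desc(w) + (1 - lam) * p_type(w) + eps   # eps avoids zeros
            return s

        entities = {
            "Alan Turing": ("mathematician codebreaker computing pioneer", "person scientist"),
            "Turing machine": ("abstract model of computation tape states", "concept model"),
        }
        q = "computing scientist"
        print(sorted(entities, key=lambda e: -score(q, *entities[e])))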

  15. A non-statistical regularization approach and a tensor product decomposition method applied to complex flow data

    Science.gov (United States)

    von Larcher, Thomas; Blome, Therese; Klein, Rupert; Schneider, Reinhold; Wolf, Sebastian; Huber, Benjamin

    2016-04-01

    Handling high-dimensional data sets, such as those that occur in turbulent flows or in certain types of multiscale behaviour in the Geosciences, is one of the big challenges in numerical analysis and scientific computing. A suitable solution is to represent those large data sets in an appropriate compact form. In this context, tensor product decomposition methods currently emerge as an important tool. One reason is that these methods often enable one to attack high-dimensional problems successfully; another is that they allow for very compact representations of large data sets. We follow the novel Tensor-Train (TT) decomposition method to support the development of improved understanding of the multiscale behavior and the development of compact storage schemes for solutions of such problems. One long-term goal of the project is the construction of a self-consistent closure for Large Eddy Simulations (LES) of turbulent flows that explicitly exploits the tensor product approach's capability of capturing self-similar structures. Secondly, we focus on a mixed deterministic-stochastic subgrid-scale modelling strategy currently under development for application in Finite Volume Large Eddy Simulation (LES) codes. Advanced methods of time series analysis for the data-based construction of stochastic models with inherently non-stationary statistical properties, and concepts of information theory based on a modified Akaike information criterion and on the Bayesian information criterion for model discrimination, are used to construct surrogate models for the non-resolved flux fluctuations. Vector-valued auto-regressive models with external influences form the basis for the modelling approach [1], [2], [4]. Here, we present the reconstruction capabilities of the two modeling approaches tested against 3D turbulent channel flow data computed by direct numerical simulation (DNS) for an incompressible, isothermal fluid at Reynolds number Reτ = 590 (computed by [3]). References [1] I
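
    For readers unfamiliar with the Tensor-Train format, here is a compact TT-SVD sketch (sequential reshape plus truncated SVD). Production TT codes add adaptive rank control driven by a global error budget, which is omitted here; this is only a minimal illustration of the decomposition itself.

        import numpy as np

        def tt_svd(tensor, max_rank):
            # Sweep left to right: reshape, SVD-truncate, keep U as the next core
            shape = tensor.shape
            d = len(shape)
            cores, r = [], 1
            C = tensor.reshape(r * shape[0], -1)
            for k in range(d - 1):
                U, s, Vt = np.linalg.svd(C, full_matrices=False)
                rk = min(max_rank, len(s))
                cores.append(U[:, :rk].reshape(r, shape[k], rk))
                C = (s[:rk, None] * Vt[:rk, :]).reshape(rk * shape[k + 1], -1)
                r = rk
            cores.append(C.reshape(r, shape[-1], 1))
            return cores

        # Sanity check on a synthetic tensor with TT ranks at most 3
        rng = np.random.default_rng(0)
        A = sum(np.einsum('i,j,k,l->ijkl', *[rng.normal(size=8) for _ in range(4)])
                for _ in range(3))
        cores = tt_svd(A, max_rank=3)
        B = cores[0]
        for G in cores[1:]:
            B = np.tensordot(B, G, axes=(B.ndim - 1, 0))
        print(np.linalg.norm(A - B.reshape(A.shape)) / np.linalg.norm(A))   # ~1e-15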

  16. Adaptation of chemical methods of analysis to the matrix of pyrite-acidified mining lakes

    International Nuclear Information System (INIS)

    Herzsprung, P.; Friese, K.

    2000-01-01

    Owing to the unusual matrix of pyrite-acidified mining lakes, the analysis of chemical parameters may be difficult. A number of methodological improvements have been developed so far, and a comprehensive validation of methods is envisaged. The adaptation of the available methods to small-volume samples of sediment pore waters and the adaptation of sensitivity to the expected concentration ranges are important elements of the methods applied in analyses of biogeochemical processes in mining lakes [de]

  17. Adapting participatory and agile software methods to participatory rural development

    OpenAIRE

    Dearden, Andy; Rizvi, H.

    2008-01-01

    This paper presents observations from a project that combines participatory rural development methods with participatory design techniques to support a farmers’ co-operative in Madhya Pradesh, India.

  18. PLS-based and regularization-based methods for the selection of relevant variables in non-targeted metabolomics data

    Directory of Open Access Journals (Sweden)

    Renata Bujak

    2016-07-01

    Non-targeted metabolomics constitutes a part of systems biology and aims to determine many metabolites in complex biological samples. Datasets obtained in non-targeted metabolomics studies are multivariate and high-dimensional due to the sensitivity of mass spectrometry-based detection methods as well as the complexity of biological matrices. Proper selection of the variables that contribute to group classification is a crucial step, especially in metabolomics studies focused on searching for disease biomarker candidates. In the present study, three different statistical approaches were tested using two metabolomics datasets (the RH and PH studies). Orthogonal projections to latent structures-discriminant analysis (OPLS-DA), without and with multiple testing correction, and the least absolute shrinkage and selection operator (LASSO) were tested and compared. For the RH study, the OPLS-DA model built without multiple testing correction selected 46 and 218 variables based on VIP criteria using Pareto and UV scaling, respectively. In the case of the PH study, 217 and 320 variables were selected based on VIP criteria using Pareto and UV scaling, respectively. In the RH study, the OPLS-DA model built with multiple testing correction selected 4 and 19 variables as statistically significant under Pareto and UV scaling, respectively. For the PH study, 14 and 18 variables were selected based on VIP criteria under Pareto and UV scaling, respectively. Additionally, the concept and fundamentals of the least absolute shrinkage and selection operator (LASSO), with a bootstrap procedure evaluating the reproducibility of results, were demonstrated. In the RH and PH studies, the LASSO selected 14 and 4 variables, respectively, with reproducibility between 99.3% and 100%. However, despite the popularity of the PLS-DA and OPLS-DA methods in metabolomics, it should be highlighted that they do not control type I or type II error, but only arbitrarily establish a cut-off value for PLS-DA loadings.
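
    A sketch of LASSO selection with a bootstrap reproducibility check, in the spirit of the procedure described above: the data are synthetic (p much larger than n, four truly informative variables), and the 0.8 stability threshold is an arbitrary illustrative choice.

        import numpy as np
        from sklearn.linear_model import LassoCV

        rng = np.random.default_rng(7)
        n, p = 60, 200                        # typical metabolomics shape: p >> n
        X = rng.normal(size=(n, p))
        beta = np.zeros(p); beta[:4] = [2.0, -1.5, 1.0, 0.8]
        y = X @ beta + 0.5 * rng.normal(size=n)

        n_boot = 100
        selected = np.zeros(p)
        for _ in range(n_boot):
            idx = rng.integers(0, n, n)       # bootstrap resample
            model = LassoCV(cv=5, n_alphas=30, max_iter=5000).fit(X[idx], y[idx])
            selected += model.coef_ != 0

        stability = selected / n_boot         # fraction of resamples selecting each variable
        print(np.where(stability > 0.8)[0])   # reproducibly selected variables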

  19. The older person has a stroke: Learning to adapt using the Feldenkrais® Method.

    Science.gov (United States)

    Jackson-Wyatt, O

    1995-01-01

    The older person with a stroke requires adapted therapeutic interventions to take into account normal age-related changes. The Feldenkrais® Method presents a model for learning to promote adaptability that addresses key functional changes seen with normal aging. Clinical examples related to specific functional tasks are discussed to highlight major treatment modifications and neuromuscular, psychological, emotional, and sensory considerations.

  20. A Dynamic and Adaptive Selection Radar Tracking Method Based on Information Entropy

    Directory of Open Access Journals (Sweden)

    Ge Jianjun

    2017-12-01

    Nowadays, the battlefield environment has become much more complex and variable. Based on the principle of information entropy, this paper presents a quantitative measure, and a lower bound, for the amount of target information acquired from multiple radar observations, in order to organize the detection resources of the battlefield adaptively and dynamically. Furthermore, a method is proposed to dynamically and adaptively select the radars carrying a high amount of information for target tracking, by minimizing the given lower bound on the information entropy of the target measurement at every moment. The simulation results indicate that the proposed method has higher tracking accuracy than tracking without entropy-based adaptive radar selection.
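
    A toy sketch of the entropy principle: with Gaussian posteriors, selecting the radar that minimizes posterior entropy amounts to minimizing the log-determinant of the posterior covariance (a standard identity, not the paper's exact formulation). The radar noise models below are invented for illustration.

        import numpy as np

        P_prior = np.diag([100.0, 100.0])            # prior covariance of target position
        H = np.eye(2)                                # each radar measures position directly
        radar_noise = {                              # hypothetical measurement covariances
            "radar_A": np.diag([25.0, 400.0]),
            "radar_B": np.diag([100.0, 100.0]),
            "radar_C": np.diag([400.0, 25.0]),
        }

        def posterior_entropy(P, R):
            K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
            P_post = (np.eye(2) - K @ H) @ P
            return 0.5 * np.log(np.linalg.det(2 * np.pi * np.e * P_post))

        best = min(radar_noise, key=lambda r: posterior_entropy(P_prior, radar_noise[r]))
        print(best)                                  # radar giving the most informative update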

  1. Simple method for adaptive filtering of motion artifacts in E-textile wearable ECG sensors.

    Science.gov (United States)

    Alkhidir, Tamador; Sluzek, Andrzej; Yapici, Murat Kaya

    2015-08-01

    In this paper, we have developed a simple method for adaptive filtering of motion artifacts from the electrocardiogram (ECG) obtained using conductive textile electrodes. The textile electrodes were placed on the left and the right wrist to measure the ECG in a lead-I configuration. The motion artifact was induced by simple hand movements. The reference signal for adaptive filtering was obtained by placing additional electrodes on one hand to capture its motion. The adaptive filtering was compared to the independent component analysis (ICA) algorithm. The signal-to-noise ratio (SNR) of the adaptive filtering approach was higher than that of independent component analysis in most cases.
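
    The abstract does not state which adaptive algorithm was used; a least-mean-squares (LMS) noise canceller is the simplest common choice and is sketched below on synthetic ECG-like data, with the hand-motion signal as the reference input. All signals and parameters are illustrative.

        import numpy as np

        def lms_cancel(primary, reference, n_taps=8, mu=0.01):
            # Subtract the component of `primary` correlated with `reference`
            w = np.zeros(n_taps)
            out = np.zeros_like(primary)
            for n in range(n_taps, len(primary)):
                x = reference[n - n_taps:n][::-1]   # most recent samples first
                e = primary[n] - w @ x              # error = cleaned ECG estimate
                w += 2 * mu * e * x                 # LMS weight update
                out[n] = e
            return out

        fs = 250                                     # sampling rate (Hz)
        t = np.arange(0, 10, 1 / fs)
        ecg = np.sin(2 * np.pi * 1.2 * t) ** 63      # crude periodic "ECG-like" peaks
        motion = np.interp(t, t[::50],
                           np.random.default_rng(3).normal(size=t[::50].size))
        measured = ecg + 0.8 * motion                # wrist electrodes: ECG + artifact
        cleaned = lms_cancel(measured, motion)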

  2. A Transport Layer Protocol Using an Adaptive Loss Recovery Method for ...

    Indian Academy of Sciences (India)

    Ayhan Kiraz

    intermediate nodes are responsible for recovering missing segments. When the loss recovery methods (LRMs) were compared, the hop-by-hop method achieved better results over unreliable connections, while the end-to-end method provided faster segment transmission and lower end-to-end latency at low error rates. Several simulations were performed ...

  3. A Monte Carlo adapted finite element method for dislocation ...

    Indian Academy of Sciences (India)

    P Zakian

    2017-10-10

    ... simulations are proposed. Various comparisons are examined to illustrate the capability of both methods for the random simulation of faults. Keywords: Monte Carlo simulation; stochastic modeling; split node technique; finite element method; earthquake fault dislocation.

  4. Estimation of Wide Band Radar Cross Section (RCS) of Regular Shaped Objects Using Method of Moments (MoM)

    Directory of Open Access Journals (Sweden)

    M. Madheswaran

    2012-06-01

    Modern fighter aircraft, ships, missiles, etc. need very low Radar Cross Section (RCS) designs to avoid detection by hostile radars. Hence, accurate prediction of the RCS of complex objects like aircraft is essential to meet this requirement. A simple and efficient numerical procedure for treating problems of wide-band RCS prediction for Perfect Electric Conductor (PEC) objects is developed using the Method of Moments (MoM). Implementation of MoM for prediction of RCS involves solving the Electric Field Integral Equation (EFIE) for the electric current using the vector and scalar potential solutions, which satisfy the boundary condition that the tangential electric field at the boundary of the PEC body is zero. For numerical purposes, the objects are modeled using planar triangular surface patches. A set of special sub-domain type basis functions is defined on pairs of adjacent triangular patches. These basis functions yield a current representation free of line or point charges at sub-domain boundaries. Once the current distribution is obtained, a dipole model is used to find the scattered field in free space. The RCS can be calculated from the scattered and incident fields. Numerical results for a square plate, a cube, and a sphere are presented over a bandwidth.

  5. An adaptation of Krylov subspace methods to path following

    Energy Technology Data Exchange (ETDEWEB)

    Walker, H.F. [Utah State Univ., Logan, UT (United States)

    1996-12-31

    Krylov subspace methods at present constitute a very well known and highly developed class of iterative linear algebra methods. These have been effectively applied to nonlinear system solving through Newton-Krylov methods, in which Krylov subspace methods are used to solve the linear systems that characterize steps of Newton's method (the Newton equations). Here, we will discuss the application of Krylov subspace methods to path following problems, in which the object is to track a solution curve as a parameter varies. Path following methods are typically of predictor-corrector form, in which a point near the solution curve is "predicted" by some easy but relatively inaccurate means, and then a series of Newton-like corrector iterations is used to return approximately to the curve. The analogue of the Newton equation is underdetermined, and an additional linear condition must be specified to determine corrector steps uniquely. This is typically done by requiring that the steps be orthogonal to an approximate tangent direction. Augmenting the under-determined system with this orthogonality condition in a straightforward way typically works well if direct linear algebra methods are used, but Krylov subspace methods are often ineffective with this approach. We will discuss recent work in which this orthogonality condition is imposed directly as a constraint on the corrector steps in a certain way. The means of doing this preserves problem conditioning, allows the use of preconditioners constructed for the fixed-parameter case, and has certain other advantages. Experiments on standard PDE continuation test problems indicate that this approach is effective.
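
    A toy predictor-corrector loop in this spirit, using SciPy's Krylov-based Newton solver for the corrector and the unit circle as the solution curve. The augmentation with the tangent-orthogonality condition follows the structure discussed above, not the paper's implementation.

        import numpy as np
        from scipy.optimize import newton_krylov

        def f(u):
            # Solution curve: f(x, y) = x^2 + y^2 - 1 = 0
            return u[0] ** 2 + u[1] ** 2 - 1.0

        u, h = np.array([1.0, 0.0]), 0.1
        for _ in range(20):
            grad = np.array([2 * u[0], 2 * u[1]])
            t = np.array([-grad[1], grad[0]])
            t /= np.linalg.norm(t)                   # unit tangent to the curve
            u_pred = u + h * t                       # cheap predictor step
            # Corrector: f = 0 augmented with orthogonality to the tangent
            F = lambda v: np.array([f(v), t @ (v - u_pred)])
            u = newton_krylov(F, u_pred, f_tol=1e-10)
        print(u)                                     # stays on the unit circle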

  6. LEACH-A: An Adaptive Method for Improving LEACH Protocol

    Directory of Open Access Journals (Sweden)

    Jianli ZHAO

    2014-01-01

    Energy has become one of the most important constraints on wireless sensor networks. Hence, many researchers in this field focus on how to design a routing protocol that prolongs the lifetime of the network. Classical hierarchical protocols such as LEACH and LEACH-C perform well in reducing energy consumption. However, a selection strategy based only on the largest residual energy or the shortest distance still wastes energy. In this paper an adaptive routing protocol named “LEACH-A”, which uses an energy threshold E0, is proposed. If there are cluster nodes whose residual energy is greater than E0, the node with the largest residual energy is selected to communicate with the base station; when the energy of all cluster nodes is less than E0, the node nearest to the base station is selected to communicate with the base station. Simulations show that the improved protocol LEACH-A performs better than LEACH and LEACH-C.

  7. Adapting Western research methods to indigenous ways of knowing.

    Science.gov (United States)

    Simonds, Vanessa W; Christopher, Suzanne

    2013-12-01

    Indigenous communities have long experienced exploitation by researchers and increasingly require participatory and decolonizing research processes. We present a case study of an intervention research project to exemplify a clash between Western research methodologies and Indigenous methodologies and how we attempted reconciliation. We then provide implications for future research based on lessons learned from Native American community partners who voiced concern over methods of Western deductive qualitative analysis. Decolonizing research requires constant reflective attention and action, and there is an absence of published guidance for this process. Continued exploration is needed for implementing Indigenous methods alone or in conjunction with appropriate Western methods when conducting research in Indigenous communities. Currently, examples of Indigenous methods and theories are not widely available in academic texts or published articles, and are often not perceived as valid.

  8. Sparse regularization for force identification using dictionaries

    Science.gov (United States)

    Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng

    2016-04-01

    The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of the force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of the basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparsity convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, sparse reconstruction by separable approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, including Db6 wavelets, Sym4 wavelets and cubic B-spline functions, can also accurately identify both single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct harmonic forces, including sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
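
    SpaRSA itself is not reproduced here; the simpler ISTA (iterative shrinkage-thresholding) algorithm solves the same l1-regularized problem and is enough to show the idea on a synthetic impact-identification example. The impulse response, impact locations and lam value are assumptions for illustration.

        import numpy as np
        from scipy.linalg import toeplitz

        rng = np.random.default_rng(5)
        n = 400
        t = np.arange(n)
        h = np.exp(-t / 30.0) * np.sin(2 * np.pi * t / 25.0)   # assumed impulse response
        H = toeplitz(h, np.zeros(n))          # lower-triangular convolution (transfer) matrix

        f_true = np.zeros(n)
        f_true[[60, 200]] = [3.0, -2.0]       # two impact forces
        y = H @ f_true + 0.05 * rng.normal(size=n)             # noisy measured response

        def ista(H, y, lam, n_iter=500):
            # min_f 0.5*||H f - y||^2 + lam*||f||_1 via gradient step + soft threshold
            L = np.linalg.norm(H, 2) ** 2     # Lipschitz constant of the gradient
            f = np.zeros(H.shape[1])
            for _ in range(n_iter):
                g = f - (H.T @ (H @ f - y)) / L
                f = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)
            return f

        f_hat = ista(H, y, lam=0.1)
        print(np.nonzero(np.abs(f_hat) > 0.5)[0])   # detected impacts, near samples 60 and 200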

  9. Robust and Adaptive Block Tracking Method Based on Particle Filter

    Directory of Open Access Journals (Sweden)

    Bin Sun

    2015-10-01

    In the field of video analysis and processing, object tracking is attracting more and more attention, especially in traffic management, digital surveillance and so on. However, problems such as an object's abrupt motion, occlusion and complex target structures bring difficulties to academic study and engineering application. In this paper, a fragments-based tracking method using a block relationship coefficient is proposed. In this method, we use a particle filter algorithm, and the object region is initially divided into blocks. The contribution of this method is that object features are not extracted just from a single block; the relationships between the current block and its neighbor blocks are extracted to describe the variation of the block. Each block is weighted according to the block relationship coefficient when the block votes for the best-matching region in the next frame. This method can make full use of the relationships between blocks. The experimental results demonstrate that our method provides good performance under occlusion and abrupt posture variation.

  10. Adaptation to Climate Change: A Comparative Analysis of Modeling Methods for Heat-Related Mortality.

    Science.gov (United States)

    Gosling, Simon N; Hondula, David M; Bunker, Aditi; Ibarreta, Dolores; Liu, Junguo; Zhang, Xinxin; Sauerborn, Rainer

    2017-08-16

    Multiple methods are employed for modeling adaptation when projecting the impact of climate change on heat-related mortality. The sensitivity of impacts to each is unknown because they have never been systematically compared. In addition, little is known about the relative sensitivity of impacts to "adaptation uncertainty" (i.e., the inclusion/exclusion of adaptation modeling) relative to using multiple climate models and emissions scenarios. This study had three aims: (a) compare the range in projected impacts that arises from using different adaptation modeling methods; (b) compare the range in impacts that arises from adaptation uncertainty with ranges from using multiple climate models and emissions scenarios; (c) recommend modeling method(s) to use in future impact assessments. We estimated impacts for 2070-2099 for 14 European cities, applying six different methods for modeling adaptation; we also estimated impacts with five climate models run under two emissions scenarios to explore the relative effects of climate modeling and emissions uncertainty. The range of the difference (percent) in impacts between including and excluding adaptation, irrespective of climate modeling and emissions uncertainty, can be as low as 28% with one method and up to 103% with another (mean across 14 cities). In 13 of 14 cities, the ranges in projected impacts due to adaptation uncertainty are larger than those associated with climate modeling and emissions uncertainty. Researchers should carefully consider how to model adaptation because it is a source of uncertainty that can be greater than the uncertainty in emissions and climate modeling. We recommend absolute threshold shifts and reductions in slope. https://doi.org/10.1289/EHP634.

  11. Ecological Scarcity Method: Adaptation and Implementation for Different Countries

    Science.gov (United States)

    Grinberg, Marina; Ackermann, Robert; Finkbeiner, Matthias

    2012-12-01

    The Ecological Scarcity Method is one of the methods for impact assessment in LCA. It makes it possible to express different environmental impacts in single-score units, eco-points. Such results are handy for decision-makers in policy or enterprises seeking to improve environmental management. So far this method is mostly used in the country of its origin, Switzerland. Eco-factors are derived from national conditions, and for other countries it is sometimes impossible to calculate all eco-factors. The solution to this problem is to create a set of transformation rules. The rules should take into account regional differences, the level of societal development, the grade of scarcity and other factors. The research is focused on the creation of transformation rules between Switzerland, Germany and the Russian Federation for the case of GHG emissions.

  12. An adaptive discontinuous finite element method for the transport equation

    International Nuclear Information System (INIS)

    Lang, J.; Walter, A.

    1995-01-01

    In this paper we introduce a discontinuous finite element method. In our approach, it is possible to combine the advantages of finite element and finite difference methods. The main ingredients are numerical flux approximation and local orthogonal basis functions. The scheme is defined on arbitrary triangulations, and two different error indicators are derived. The second one, especially, is closely connected to our approach and able to handle arbitrarily varying flow directions. Numerical results are given for boundary value problems in two dimensions. They demonstrate the performance of the scheme combined with the two error indicators.

  13. Methods of veld reinforcement, their action and adaptability to ...

    African Journals Online (AJOL)

    Methods of veld reinforcement are categorised on the basis of the disturbance caused to the vegetation and soil during their application. The extent of mixing of seed and fertilizer with the soil is also considered. Limitations imposed by landscape position and soil erodibility on the use of the various techniques are proposed ...

  14. An adaptive image denoising method based on local parameters ...

    Indian Academy of Sciences (India)

    Keywords: ML; peak signal-to-noise ratio (PSNR). Image denoising is one of the major research topics in image processing. An efficient image denoising method is one in which a compromise is found between noise reduction ...

  15. An adaptive proper orthogonal decomposition method for model order reduction of multi-disc rotor system

    Science.gov (United States)

    Jin, Yulin; Lu, Kuan; Hou, Lei; Chen, Yushu

    2017-12-01

    The proper orthogonal decomposition (POD) method is a main and efficient tool for order reduction of high-dimensional complex systems in many research fields. However, the robustness problem of this method remains unsolved, although some modified POD methods have been proposed to address it. In this paper, a new adaptive POD method called the interpolation Grassmann manifold (IGM) method is proposed to address the weakness of the local property of the interpolation tangent-space of Grassmann manifold (ITGM) method in a wider parametric region. The method is demonstrated here on a nonlinear rotor system of 33 degrees of freedom (DOFs) with a pair of liquid-film bearings and a pedestal looseness fault. The motion region of the rotor system is divided into two parts: a simple motion region and a complex motion region. The adaptive POD method is compared with the ITGM method for large and small parameter spans in the two parametric regions to present the advantage of this method and the disadvantage of the ITGM method. The comparisons of the responses are applied to verify the accuracy and robustness of the adaptive POD method, and the computational efficiency is also analyzed. As a result, the new adaptive POD method has strong robustness and high computational efficiency and accuracy over a wide range of parameters.

  16. Solving delay differential equations in S-ADAPT by method of steps.

    Science.gov (United States)

    Bauer, Robert J; Mo, Gary; Krzyzanski, Wojciech

    2013-09-01

    S-ADAPT is a version of the ADAPT program that contains additional simulation and optimization abilities such as parametric population analysis. S-ADAPT utilizes LSODA to solve ordinary differential equations (ODEs), an algorithm designed for large-dimension non-stiff and stiff problems. However, S-ADAPT does not have a solver for delay differential equations (DDEs). Our objective was to implement in S-ADAPT a DDE solver using the method of steps. The method of steps allows one to solve virtually any DDE system by transforming it to an ODE system. The solver was validated for scalar linear DDEs with one delay and bolus and infusion inputs, for which explicit analytic solutions were derived. Solutions of nonlinear DDE problems coded in S-ADAPT were validated by comparing them with ones obtained by the MATLAB DDE solver dde23. The estimation of parameters was tested on MATLAB-simulated population pharmacodynamic data. The S-ADAPT-generated solutions for DDE problems agreed with the explicit solutions, as well as with the MATLAB-produced solutions, to at least 7 significant digits. The population parameter estimates obtained using importance sampling expectation-maximization in S-ADAPT agreed with the ones used to generate the data. Published by Elsevier Ireland Ltd.
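
    The method of steps is easy to demonstrate outside S-ADAPT: below, the scalar DDE y'(t) = -y(t - 1) with constant history is integrated one delay interval at a time with SciPy, interpolating each finished interval as the "history" for the next. This mirrors the DDE-to-ODE transformation described above, not the S-ADAPT code.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.interpolate import interp1d

        tau = 1.0
        history = interp1d(np.array([-tau, 0.0]), np.array([1.0, 1.0]),
                           fill_value="extrapolate")   # y(t) = 1 for t <= 0

        ts, ys = [0.0], [1.0]
        for step in range(5):                          # integrate on [k*tau, (k+1)*tau]
            t0 = step * tau
            rhs = lambda t, y, h=history: [-float(h(t - tau))]
            sol = solve_ivp(rhs, (t0, t0 + tau), [ys[-1]],
                            dense_output=True, rtol=1e-8)
            tt = np.linspace(t0, t0 + tau, 50)
            ts.extend(tt[1:]); ys.extend(sol.sol(tt)[0][1:])
            # The solution so far becomes the history for the next interval
            history = interp1d(np.array(ts), np.array(ys), fill_value="extrapolate")

        print(ys[49])   # ~0.0; the exact solution on [0, 1] is y = 1 - t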

  17. A high-throughput multiplex method adapted for GMO detection.

    Science.gov (United States)

    Chaouachi, Maher; Chupeau, Gaëlle; Berard, Aurélie; McKhann, Heather; Romaniuk, Marcel; Giancola, Sandra; Laval, Valérie; Bertheau, Yves; Brunel, Dominique

    2008-12-24

    A high-throughput multiplex assay for the detection of genetically modified organisms (GMO) was developed on the basis of the existing SNPlex method designed for SNP genotyping. This SNPlex assay allows the simultaneous detection of up to 48 short DNA sequences (approximately 70 bp; "signature sequences"): from taxa endogenous reference genes; from GMO constructs, screening targets, construct-specific and event-specific targets; and finally from donor organisms. This assay avoids certain shortcomings of the multiplex PCR-based methods already in widespread use for GMO detection. The assay demonstrated high specificity and sensitivity. The results suggest that this assay is reliable, flexible, and cost- and time-effective for high-throughput GMO detection.

  18. Manifold Regularized Correlation Object Tracking

    OpenAIRE

    Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling

    2017-01-01

    In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped fr...

  19. Adaptive Finite Temperature String Method in Collective Variables.

    Science.gov (United States)

    Zinovjev, Kirill; Tuñón, Iñaki

    2017-12-28

    Here we present a modified version of the on-the-fly string method for the localization of the minimum free energy path in a space of arbitrary collective variables. In the proposed approach the shape of the biasing potential is controlled by only two force constants, defining the width of the potential along the string and orthogonal to it. The force constants and the distribution of the string nodes are optimized during the simulation, improving the convergence. The optimized parameters can be used for umbrella sampling with a path CV along the converged string as the reaction coordinate. We test the new method on three fundamentally different processes: chloride attack on chloromethane in bulk water, alanine dipeptide isomerization, and the enzymatic conversion of isochorismate to pyruvate. In each case the same set of parameters resulted in a rapidly converging simulation and a precise estimation of the potential of mean force. Therefore, the default settings can be used for a wide range of processes, making the method essentially parameter-free and more user-friendly.

  20. The Pilates method and cardiorespiratory adaptation to training.

    Science.gov (United States)

    Tinoco-Fernández, Maria; Jiménez-Martín, Miguel; Sánchez-Caravaca, M Angeles; Fernández-Pérez, Antonio M; Ramírez-Rodrigo, Jesús; Villaverde-Gutiérrez, Carmen

    2016-01-01

    Although all authors report beneficial health changes following training based on the Pilates method, no explicit analysis has been performed of its cardiorespiratory effects. The objective of this study was to evaluate possible changes in cardiorespiratory parameters with the Pilates method. A total of 45 university students aged 18-35 years (77.8% female and 22.2% male), who did not routinely practice physical exercise or sports, volunteered for the study and signed informed consent. The Pilates training was conducted over 10 weeks, with three 1-hour sessions per week. Physiological cardiorespiratory responses were assessed using a MasterScreen CPX apparatus. After the 10-week training, statistically significant improvements were observed in mean heart rate (135.4-124.2 beats/min), respiratory exchange ratio (1.1-0.9) and oxygen equivalent (30.7-27.6) values, among other spirometric parameters, in submaximal aerobic testing. These findings indicate that practice of the Pilates method has a positive influence on cardiorespiratory parameters in healthy adults who do not routinely practice physical exercise activities.

  1. Multicriteria classification method for dimensionality reduction adapted to hyperspectral images

    Science.gov (United States)

    Khoder, Mahdi; Kashana, Serge; Khoder, Jihan; Younes, Rafic

    2017-04-01

    Due to the incredible growth of high-dimensional datasets, we address the problem that unsupervised methods are sensitive to various degradations, such as noise, and may fail to preserve rare information. Therefore, researchers nowadays are forced to develop techniques that meet these requirements. In this work, we introduce a dimensionality reduction method that considers multiple objectives over the images taken from multiple frequency bands, which together form a hyperspectral image. The multicriteria classification algorithm compares and classifies these images based on multiple similarity criteria, which allows the selection of particular images from the whole set. The selected images are the ones chosen to represent the original set of data while respecting certain quality thresholds. Since the number of images in a hyperspectral image defines its dimension, choosing a smaller number of images to represent the data leads to dimensionality reduction. Results of tests of the developed algorithm on multiple hyperspectral image samples are also shown. A comparative study shows the advantages of this technique compared to other common methods used in the field of dimensionality reduction.
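
    The full multicriteria algorithm is not reproduced in the record above; the sketch below keeps only one similarity criterion (band-to-band correlation) to show the flavor of selecting representative band images under a quality threshold. The data cube, sizes and the 0.9 threshold are invented.

        import numpy as np

        rng = np.random.default_rng(2)
        n_pix, n_bands = 5000, 120
        base = rng.normal(size=(n_pix, 6))           # a few underlying spectral factors
        mix = rng.normal(size=(6, n_bands))
        cube = base @ mix + 0.1 * rng.normal(size=(n_pix, n_bands))

        # Greedy selection: keep a band only if it is sufficiently dissimilar
        # (|correlation| < 0.9) from every band already kept
        corr = np.abs(np.corrcoef(cube, rowvar=False))
        selected = []
        for b in range(n_bands):
            if all(corr[b, s] < 0.9 for s in selected):
                selected.append(b)
        print(f"{len(selected)} of {n_bands} bands retained")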

  2. The adaptive problems of female teenage refugees and their behavioral adjustment methods for coping

    Directory of Open Access Journals (Sweden)

    Mhaidat F

    2016-04-01

    This study aimed at identifying the levels of adaptive problems among teenage female refugees in government schools and explored the behavioral methods that were used to cope with these problems. The sample was composed of 220 Syrian female students (seventh grade to first secondary grade) enrolled at government schools within the Zarqa Directorate who came to Jordan due to the war conditions in their home country. The study used a scale of adaptive problems that consists of four dimensions (depression, anger and hostility, low self-esteem, and feeling insecure) and a questionnaire on the behavioral adjustment methods for dealing with the problems of asylum. The results indicated that the Syrian teenage female refugees suffer a moderate degree of adaptation problems, and that the positive adjustment methods they used outnumber the negative ones. Keywords: adaptive problems, female teenage refugees, behavioral adjustment

  3. Supersymmetric dimensional regularization

    International Nuclear Information System (INIS)

    Egoryan, E.Sh.

    1982-01-01

    A generalized scheme of dimensional regularization which preserves supersymmetry is proposed. The scheme is applicable to all supersymmetric theories. Two models with extended supersymmetry are considered. The naive Slavnov supersymmetric identities are shown to hold at the dimensionally regularized level

  4. An Adaptive Filtering Method Based on Crowdsourced Big Trace Data

    Directory of Open Access Journals (Sweden)

    TANG Luliang

    2016-12-01

    Vehicles' GPS traces collected by crowds have become a new kind of big data and are widely used to mine urban geographic information at low cost, with quick updates and rich content. However, the growing volume of vehicles' GPS traces has caused difficulties in data processing, and their low quality adds uncertainty to information mining. Thus, extracting high-quality GPS data from crowdsourced traces at an expected accuracy is a hot topic. In this paper, we propose an efficient partition-and-filter model to filter trajectories to an expected accuracy according to the spatial features of high-precision GPS data and the error behavior of GPS data. First, the model partitions a trajectory into sub-trajectories based on constrained distance and angle; these sub-trajectories are the basic units for the next processing step. Second, the proposed method collects high-quality GPS data from each sub-trajectory according to the similarity between GPS tracking points and reference baselines constructed using the random sample consensus (RANSAC) algorithm. Experimental results demonstrate that the proposed method can effectively pick out high-quality GPS data from crowdsourced trace data sets at the expected accuracy.
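
    A sketch of the filtering stage only, under the simplifying assumption that a sub-trajectory's baseline can be modeled as a straight segment: scikit-learn's RANSACRegressor fits the baseline, and its inlier mask keeps the points within an accuracy threshold. The partition stage and the paper's exact accuracy rules are omitted; data and the 4 m threshold are synthetic.

        import numpy as np
        from sklearn.linear_model import RANSACRegressor

        rng = np.random.default_rng(11)
        n = 200
        x = np.cumsum(rng.uniform(1, 3, n))          # along-track coordinate (m)
        y = 0.4 * x + 5 + rng.normal(0, 1.5, n)      # road as a roughly straight segment
        outliers = rng.choice(n, 15, replace=False)
        y[outliers] += rng.normal(0, 25, 15)         # low-quality GPS fixes

        ransac = RANSACRegressor(residual_threshold=4.0, random_state=0)
        ransac.fit(x[:, None], y)
        keep = ransac.inlier_mask_                   # high-quality points per threshold
        print(f"kept {keep.sum()} of {n} points")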

  5. Financing an efficient adaptation programme to climate change: A contingent valuation method tested in Malaysia

    Directory of Open Access Journals (Sweden)

    Banna Hasanul

    2016-03-01

    Full Text Available This paper assesses farmers’ willingness to pay for an efficient adaptation programme to climate change for Malaysian agriculture. We used the contingent valuation method to determine the monetary assessment of farmers’ preferences for an adaptation programme. We distributed a structured questionnaire to farmers in Selangor, Malaysia. Based on the survey, 74% of respondents are willing to pay for the adaptation programme, with several factors, such as socio-economic and motivational factors, exerting greater influence over their willingness to pay. However, a significant number of respondents are not willing to pay for the adaptation programme. The Malaysian government, along with social institutions, banks, NGOs, and the media, could develop awareness programmes to motivate financing of the programme. Financial institutions such as banks, insurers, and leasing firms, along with the government and farmers, could also contribute a substantial portion to the adaptation programme as part of their corporate social responsibility (CSR).

  6. Denoising imaging polarimetry by adapted BM3D method.

    Science.gov (United States)

    Tibbs, Alexander B; Daly, Ilse M; Roberts, Nicholas W; Bull, David R

    2018-04-01

    In addition to the visual information contained in intensity and color, imaging polarimetry allows visual information to be extracted from the polarization of light. However, a major challenge of imaging polarimetry is image degradation due to noise. This paper investigates the mitigation of noise through denoising algorithms and compares existing denoising algorithms with a new method, based on BM3D (Block Matching 3D). This algorithm, Polarization-BM3D (PBM3D), gives visual quality superior to the state of the art across all images and noise standard deviations tested. We show that denoising polarization images using PBM3D allows the degree of polarization to be more accurately calculated by comparing it with spectral polarimetry measurements.
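
    PBM3D itself is not public in this record, so the sketch below is only a channel-wise stand-in: each analyzer channel is denoised with an off-the-shelf BM3D implementation and the Stokes images and degree of linear polarization are formed afterwards. The bm3d PyPI package and its bm3d(image, sigma_psd) call signature are assumptions about the environment, and this is not the joint PBM3D algorithm of the paper.

        import numpy as np
        import bm3d  # pip install bm3d (assumed available)

        def denoise_polarimetry(i0, i45, i90, i135, sigma=0.1):
            """Denoise the four analyzer channels, then compute Stokes images and DoLP."""
            d0, d45, d90, d135 = (bm3d.bm3d(ch, sigma_psd=sigma)
                                  for ch in (i0, i45, i90, i135))
            s0 = 0.5 * (d0 + d45 + d90 + d135)   # total intensity
            s1 = d0 - d90                        # horizontal/vertical preference
            s2 = d45 - d135                      # diagonal preference
            dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-6)
            return s0, s1, s2, np.clip(dolp, 0.0, 1.0)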

  7. Adaptive Mesh Iteration Method for Trajectory Optimization Based on Hermite-Pseudospectral Direct Transcription

    Directory of Open Access Journals (Sweden)

    Humin Lei

    2017-01-01

    Full Text Available An adaptive mesh iteration method based on Hermite pseudospectral transcription is described for trajectory optimization. The method uses the Legendre-Gauss-Lobatto points as interpolation points; the state equations are then approximated by Hermite interpolating polynomials. The method allows changes in both the number of mesh points and the number of mesh intervals, and produces significantly smaller mesh sizes for a given accuracy tolerance. The derived relative error estimate is then used to trade the number of mesh points against the number of mesh intervals. The adaptive mesh iteration method is applied successfully to trajectory optimization of a Maneuverable Reentry Research Vehicle, and the simulation results show that the method has many advantages.

  8. Numerical simulations of multicomponent ecological models with adaptive methods.

    Science.gov (United States)

    Owolabi, Kolade M; Patidar, Kailash C

    2016-01-08

    The study of dynamic relationships in multi-species models has gained a huge amount of scientific interest over the years and will continue to dominate both ecology and mathematical ecology due to its practical relevance and universal existence. Emergent phenomena include spatiotemporal patterns, oscillating solutions, multiple steady states and spatial pattern formation. Many time-dependent partial differential equations combine low-order nonlinear terms with higher-order linear terms. To obtain reliable results for such problems, it is desirable to use higher-order methods in both space and time. Most computations heretofore have been restricted to second order in time due to difficulties introduced by the combination of stiffness and nonlinearity. The dynamics of the reaction-diffusion models considered in this paper permit the use of two classic mathematical ideas: we introduce higher-order finite difference approximations for the spatial discretization, and advance the resulting system of ODEs with a family of exponential time differencing schemes. We present the stability properties of these methods along with extensive numerical simulations for a number of multi-species models. When the diffusivity is small, many of the models considered are found to exhibit a form of localized spatiotemporal patterns; such patterns are correctly captured in the local analysis of the model equations. Extended 2D results in agreement with typical Turing patterns, such as stripes and spots as well as irregular snakelike structures, are presented. We finally show that the designed schemes are dynamically consistent. The dynamic complexities of some ecological models are studied by considering their linear stability analysis. Based on the choice of parameters in transforming the system into a dimensionless form, we were able to obtain a well-balanced system that
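
    As a minimal illustration of the exponential time differencing idea mentioned above, here is a first-order ETD step for a one-dimensional reaction-diffusion equation, with the stiff diffusion term handled exactly in Fourier space. The periodic domain, grid size, and logistic (Fisher-KPP) reaction term are illustrative assumptions, not the paper's multi-species models.

        import numpy as np

        def etd1_reaction_diffusion(u0, f, D=0.05, L=50.0, dt=0.1, steps=2000):
            """First-order exponential time differencing (ETD1) for u_t = D u_xx + f(u)
            on a periodic domain of length L; diffusion is integrated exactly."""
            n = u0.size
            k = 2.0 * np.pi * np.fft.fftfreq(n, d=L / n)
            c = -D * k**2                        # linear (diffusion) symbol
            e = np.exp(c * dt)
            # phi1(c*dt) = (e^{c dt} - 1) / (c dt), with the c -> 0 limit equal to 1
            phi1 = np.where(np.abs(c) > 1e-12,
                            (e - 1.0) / np.where(c == 0, 1.0, c * dt), 1.0)
            u_hat = np.fft.fft(u0)
            for _ in range(steps):
                u = np.real(np.fft.ifft(u_hat))
                u_hat = e * u_hat + dt * phi1 * np.fft.fft(f(u))
            return np.real(np.fft.ifft(u_hat))

        # Single-species logistic reaction as a simple example:
        x = np.linspace(0, 50, 256, endpoint=False)
        u = etd1_reaction_diffusion(np.exp(-(x - 25)**2), lambda u: u * (1 - u))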

  9. Impedance adaptation methods of the piezoelectric energy harvesting

    Science.gov (United States)

    Kim, Hyeoungwoo

    In this study, the important issues of energy recovery were addressed and a comprehensive investigation was performed on harvesting electrical power from an ambient mechanical vibration source. Also discussed are the impedance matching methods used to increase the efficiency of energy transfer from the environment to the application. Initially, the mechanical impedance matching method was investigated to increase the mechanical energy transferred to the transducer from the environment. This was done by reducing the mechanical impedance, such as the damping factor and the energy reflection ratio. The vibration source and the transducer were modeled by a two-degree-of-freedom dynamic system with mass, spring constant, and damper. The transmissibility, used to quantify how much mechanical energy is transferred in this system, was affected by the damping ratio and the stiffness of the elastic materials. The mechanical impedance of the system was described by an analogous electrical system in order to simplify the total mechanical impedance. Secondly, the transduction rate from mechanical to electrical energy was improved by using a PZT material which has a high figure of merit and a high electromechanical coupling factor for electrical power generation, and a piezoelectric transducer with a high transduction rate was designed and fabricated. A high-g material (g33 = 40 × 10⁻³ V·m/N) was developed to improve the figure of merit of the PZT ceramics. The cymbal composite transducer has been found to be a promising structure for piezoelectric energy harvesting under high force at cyclic conditions (10-200 Hz), because it has an effective strain coefficient almost 40 times higher than PZT ceramics. The endcap of the cymbal also enhances the endurance of the ceramic under ac load, along with stress amplification. In addition, a macro fiber composite (MFC) was employed as a strain component because of its flexibility and the high electromechanical coupling
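
    For reference, the transmissibility dependence described above can be made concrete with the standard single-degree-of-freedom, base-excitation formula, used here as a simpler stand-in for the two-degree-of-freedom model of the thesis; the expression is the textbook one, not taken from this work.

        import numpy as np

        def transmissibility(r: np.ndarray, zeta: float) -> np.ndarray:
            """Base-excitation transmissibility of a single-DOF mass-spring-damper.

            r    : frequency ratio omega / omega_n
            zeta : damping ratio
            """
            num = 1.0 + (2.0 * zeta * r) ** 2
            den = (1.0 - r**2) ** 2 + (2.0 * zeta * r) ** 2
            return np.sqrt(num / den)

        r = np.linspace(0.0, 3.0, 301)
        for zeta in (0.05, 0.2, 0.7):
            # Resonance peak near r = 1 shrinks as the damping ratio grows.
            print(zeta, transmissibility(r, zeta).max())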

  10. Modular Regularization Algorithms

    DEFF Research Database (Denmark)

    Jacobsen, Michael

    2004-01-01

    The class of linear ill-posed problems is introduced along with a range of standard numerical tools and basic concepts from linear algebra, statistics and optimization. Known algorithms for solving linear inverse ill-posed problems are analyzed to determine how they can be decomposed into independent modules. These modules are then combined to form new regularization algorithms with other properties than those we started out with. Several variations are tested using the Matlab toolbox MOORe Tools created in connection with this thesis. Object oriented programming techniques are explained and used to set up the ill-posed problems in the toolbox. Hereby, we are able to write regularization algorithms that automatically exploit structure in the ill-posed problem without being rewritten explicitly. We explain how to implement a stopping criterion for a parameter choice method based upon...

  11. Regular Expression Pocket Reference

    CERN Document Server

    Stubblebine, Tony

    2007-01-01

    This handy little book offers programmers a complete overview of the syntax and semantics of regular expressions that are at the heart of every text-processing application. Ideal as a quick reference, Regular Expression Pocket Reference covers the regular expression APIs for Perl 5.8, Ruby (including some upcoming 1.9 features), Java, PHP, .NET and C#, Python, vi, JavaScript, and the PCRE regular expression libraries. This concise and easy-to-use reference puts a very powerful tool for manipulating text and data right at your fingertips. Composed of a mixture of symbols and text, regular exp

  12. A Novel Unsupervised Adaptive Learning Method for Long-Term Electromyography (EMG Pattern Recognition

    Directory of Open Access Journals (Sweden)

    Qi Huang

    2017-06-01

    Full Text Available Performance degradation will be caused by a variety of interfering factors for pattern recognition-based myoelectric control methods in the long term. This paper proposes an adaptive learning method with low computational cost to mitigate the effect in unsupervised adaptive learning scenarios. We present a particle adaptive classifier (PAC), built by constructing a particle adaptive learning strategy and a universal incremental least square support vector classifier (LS-SVC). We compared PAC performance with an incremental support vector classifier (ISVC) and a non-adapting SVC (NSVC) in a long-term pattern recognition task in both unsupervised and supervised adaptive learning scenarios. Retraining time cost and recognition accuracy were compared by validating the classification performance on both simulated and realistic long-term EMG data. The classification results of realistic long-term EMG data showed that the PAC significantly decreased the performance degradation in unsupervised adaptive learning scenarios compared with NSVC (9.03% ± 2.23%, p < 0.05) and ISVC (13.38% ± 2.62%, p = 0.001), and reduced the retraining time cost compared with ISVC (2 ms per updating cycle vs. 50 ms per updating cycle).

  13. A Novel Unsupervised Adaptive Learning Method for Long-Term Electromyography (EMG) Pattern Recognition.

    Science.gov (United States)

    Huang, Qi; Yang, Dapeng; Jiang, Li; Zhang, Huajie; Liu, Hong; Kotani, Kiyoshi

    2017-06-13

    Performance degradation will be caused by a variety of interfering factors for pattern recognition-based myoelectric control methods in the long term. This paper proposes an adaptive learning method with low computational cost to mitigate the effect in unsupervised adaptive learning scenarios. We present a particle adaptive classifier (PAC) by constructing a particle adaptive learning strategy and a universal incremental least square support vector classifier (LS-SVC). We compared PAC performance with an incremental support vector classifier (ISVC) and a non-adapting SVC (NSVC) in a long-term pattern recognition task in both unsupervised and supervised adaptive learning scenarios. Retraining time cost and recognition accuracy were compared by validating the classification performance on both simulated and realistic long-term EMG data. The classification results of realistic long-term EMG data showed that the PAC significantly decreased the performance degradation in unsupervised adaptive learning scenarios compared with NSVC (9.03% ± 2.23%, p < 0.05) and ISVC (13.38% ± 2.62%, p = 0.001), and reduced the retraining time cost compared with ISVC (2 ms per updating cycle vs. 50 ms per updating cycle).

  14. Adaptation for Regularization Operators in Learning Theory

    Science.gov (United States)

    2006-09-10

    Only fragments of the report are preserved in this record: the analysis concerns estimation of the regression function fρ over prior classes defined in terms of finiteness of the constants Cr and Ds, under the main assumption mv ≥ m/log m, together with conditions under which the choice of the regularization parameter sequence fulfills the required hypotheses.

  15. Marginal and Internal Adaptation of Zirconia Crowns: A Comparative Study of Assessment Methods

    OpenAIRE

    Cunali, Rafael Schlögel; Saab, Rafaella Caramori; Correr, Gisele Maria; Cunha, Leonardo Fernandes da; Ornaghi, Bárbara Pick; Ritter, André V.; Gonzaga, Carla Castiglia

    2017-01-01

    Abstract Marginal and internal adaptation is critical for the success of indirect restorations. New imaging systems make it possible to evaluate these parameters with precision and non-destructively. This study evaluated the marginal and internal adaptation of zirconia copings fabricated with two different systems using both silicone replica and microcomputed tomography (micro-CT) assessment methods. A metal master model, representing a preparation for an all-ceramic full crown, was digitally...

  16. The method of adaptation under the parameters of the subject of the information interaction

    Directory of Open Access Journals (Sweden)

    Инесса Анатольевна Воробьёва

    2014-12-01

    Full Text Available To ensure that software and hardware can be effectively tuned (adapted) to a particular subject, a method was developed for adaptation to the parameters of the subject of information interaction. The method takes the form of a set of operations for building a network of dialog procedures, based on the subject's entry-level qualification, an assessment of the subject's current skill level, and operational restructuring of the network in accordance with that assessment.

  17. Research on Adaptive Optics Image Restoration Algorithm by Improved Expectation Maximization Method

    Directory of Open Access Journals (Sweden)

    Lijuan Zhang

    2014-01-01

    Full Text Available To improve the restoration of adaptive optics images, we put forward a deconvolution algorithm that improves on the EM algorithm and jointly deconvolves multiframe adaptive optics images based on expectation-maximization theory. Firstly, a mathematical model is built for the degraded multiframe adaptive optics images, and the function model of the point spread over time is deduced from the phase error. The AO images are denoised using the image power spectral density and a support constraint. Secondly, the EM algorithm is improved by combining the AO imaging system parameters with a regularization technique; a cost function for the joint deconvolution of multiframe AO images is given, and the optimization model for its parameter estimation is built. Lastly, image-restoration experiments on both simulated and real AO images are performed to verify the recovery effect of our algorithm. The experimental results show that, compared with the Wiener-IBD and RL-IBD algorithms, our method reduces the number of iterations by 14.3% and improves the estimation accuracy. The model distinguishes the PSFs of the AO images and recovers the observed target images clearly.
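
    The improved EM updates are not given in this record; as a reference point, here is a minimal multiframe Richardson-Lucy sketch (itself an EM algorithm for Poisson noise) that averages the per-frame multiplicative corrections. Known PSFs and a shared object across frames are assumptions of the sketch, and the paper's regularization and denoising steps are omitted.

        import numpy as np
        from scipy.signal import fftconvolve

        def multiframe_rl(frames, psfs, n_iter=50, eps=1e-8):
            """Multiframe Richardson-Lucy deconvolution (EM for Poisson noise).

            frames : list of observed images, one per AO exposure
            psfs   : list of PSF estimates, each nonnegative and summing to 1
            """
            est = np.full_like(frames[0], frames[0].mean())
            for _ in range(n_iter):
                corr = np.zeros_like(est)
                for img, psf in zip(frames, psfs):
                    blurred = fftconvolve(est, psf, mode="same")
                    ratio = img / np.maximum(blurred, eps)
                    # Correlate with the flipped PSF (the adjoint of the blur operator).
                    corr += fftconvolve(ratio, psf[::-1, ::-1], mode="same")
                est *= corr / len(frames)     # averaged multiplicative EM update
                est = np.maximum(est, 0.0)
            return est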

  18. Construction of a Mean Square Error Adaptive Euler–Maruyama Method With Applications in Multilevel Monte Carlo

    KAUST Repository

    Hoel, Hakon

    2016-06-13

    A formal mean square error expansion (MSE) is derived for Euler-Maruyama numerical solutions of stochastic differential equations (SDE). The error expansion is used to construct a pathwise, a posteriori, adaptive time-stepping Euler-Maruyama algorithm for numerical solutions of SDE, and the resulting algorithm is incorporated into a multilevel Monte Carlo (MLMC) algorithm for weak approximations of SDE. This gives an efficient MSE adaptive MLMC algorithm for handling a number of low-regularity approximation problems. In low-regularity numerical example problems, the developed adaptive MLMC algorithm is shown to outperform the uniform time-stepping MLMC algorithm by orders of magnitude, producing output whose error with high probability is bounded by TOL > 0 at the near-optimal MLMC cost rate O(TOL^{-2} log(TOL)^4) that is achieved when the cost of sample generation is O(1).
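
    For orientation, the sketch below shows a plain uniform-time-step MLMC Euler-Maruyama estimator of E[g(X_T)] for geometric Brownian motion, i.e., the non-adaptive baseline that the paper's MSE-adaptive algorithm improves on. The payoff, model parameters, and per-level sample counts are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(1)

        def euler_gbm_pair(n_paths, n_steps, T=1.0, mu=0.05, sigma=0.2, x0=1.0):
            """Coupled fine/coarse Euler-Maruyama GBM paths; the coarse path
            (half the steps) reuses the fine path's Brownian increments."""
            dt = T / n_steps
            dw = rng.normal(scale=np.sqrt(dt), size=(n_paths, n_steps))
            xf = np.full(n_paths, x0)
            xc = np.full(n_paths, x0)
            for k in range(n_steps):
                xf += mu * xf * dt + sigma * xf * dw[:, k]
                if k % 2 == 1:  # coarse step uses two summed fine increments
                    xc += mu * xc * (2 * dt) + sigma * xc * (dw[:, k - 1] + dw[:, k])
            return xf, xc

        def mlmc_estimate(g, levels=5, n0=50_000):
            """Telescoping MLMC sum: E[g_L] = E[g_0] + sum_l E[g_l - g_{l-1}]."""
            est = g(euler_gbm_pair(n0, 2)[0]).mean()       # level 0, 2 time steps
            for l in range(1, levels + 1):
                n_l = max(n0 // 2**l, 100)                  # fewer samples per level
                xf, xc = euler_gbm_pair(n_l, 2**(l + 1))
                est += (g(xf) - g(xc)).mean()
            return est

        print(mlmc_estimate(lambda x: np.maximum(x - 1.0, 0.0)))  # call payoff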

  19. Distortion analysis of subband adaptive filtering methods for FMRI active noise control systems.

    Science.gov (United States)

    Milani, Ali A; Panahi, Issa M; Briggs, Richard

    2007-01-01

    A delayless subband filtering structure, a high-performance frequency-domain filtering technique, is used for canceling broadband fMRI noise (8 kHz bandwidth). In this method, adaptive filtering is done in subbands, and the coefficients of the main canceling filter are computed by stacking the subband weights together. There are two types of stacking methods, called FFT and FFT-2. In this paper, we analyze the distortion introduced by these two stacking methods. The effect of the stacking distortion on the performance of different adaptive filters in the FXLMS algorithm with a non-minimum-phase secondary path is explored. The investigation is done for different adaptive algorithms (nLMS, APA and RLS), different weight stacking methods, and different numbers of subbands.
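
    For readers unfamiliar with the adaptive algorithms being compared, a minimal fullband normalized LMS (nLMS) noise canceller is sketched below; the subband decomposition, weight stacking, and secondary-path filtering of FXLMS are deliberately omitted, and the filter length and step size are illustrative assumptions.

        import numpy as np

        def nlms_cancel(reference, primary, n_taps=32, mu=0.5, eps=1e-6):
            """Normalized LMS: adapt w so that w * reference tracks the noise
            component of `primary`; returns the cleaned (error) signal."""
            w = np.zeros(n_taps)
            out = np.zeros_like(primary)
            for n in range(n_taps, len(primary)):
                x = reference[n - n_taps:n][::-1]     # most recent sample first
                e = primary[n] - w @ x                # cancellation error
                w += mu * e * x / (x @ x + eps)       # normalized gradient step
                out[n] = e
            return out

        # Tone buried in filtered broadband noise, with the raw noise as reference:
        rng = np.random.default_rng(0)
        ref = rng.normal(size=20_000)
        noise = np.convolve(ref, [0.6, -0.3, 0.1], mode="same")
        primary = np.sin(0.05 * np.arange(ref.size)) + noise
        clean = nlms_cancel(ref, primary)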

  20. Adaptive spline autoregression threshold method in forecasting Mitsubishi car sales volume at PT Srikandi Diamond Motors

    Science.gov (United States)

    Susanti, D.; Hartini, E.; Permana, A.

    2017-01-01

    Sale and purchase of the growing competition between companies in Indonesian, make every company should have a proper planning in order to win the competition with other companies. One of the things that can be done to design the plan is to make car sales forecast for the next few periods, it’s required that the amount of inventory of cars that will be sold in proportion to the number of cars needed. While to get the correct forecasting, on of the methods that can be used is the method of Adaptive Spline Threshold Autoregression (ASTAR). Therefore, this time the discussion will focus on the use of Adaptive Spline Threshold Autoregression (ASTAR) method in forecasting the volume of car sales in PT.Srikandi Diamond Motors using time series data.In the discussion of this research, forecasting using the method of forecasting value Adaptive Spline Threshold Autoregression (ASTAR) produce approximately correct.

  1. An adaptive wavelet stochastic collocation method for irregular solutions of stochastic partial differential equations

    Energy Technology Data Exchange (ETDEWEB)

    Webster, Clayton G [ORNL]; Zhang, Guannan [ORNL]; Gunzburger, Max D [ORNL]

    2012-10-01

    Accurate predictive simulations of complex real-world applications require numerical approximations that, first, counter the curse of dimensionality and, second, converge quickly in the presence of steep gradients, sharp transitions, bifurcations or finite discontinuities in high-dimensional parameter spaces. In this paper we present a novel multi-dimensional multi-resolution adaptive (MdMrA) sparse grid stochastic collocation method that utilizes hierarchical multiscale piecewise Riesz basis functions constructed from interpolating wavelets. The basis for our non-intrusive method forms a stable multiscale splitting; thus, optimal adaptation is achieved. Error estimates and numerical examples are used to compare the efficiency of the method with several other techniques.

  2. Adaptive Finite Volume Method for the Shallow Water Equations on Triangular Grids

    Directory of Open Access Journals (Sweden)

    Sudi Mungkasi

    2016-01-01

    Full Text Available This paper presents a numerical entropy production (NEP) scheme for two-dimensional shallow water equations on unstructured triangular grids. We implement NEP as the error indicator for adaptive mesh refinement or coarsening in solving the shallow water equations using a finite volume method. Numerical simulations show that NEP serves successfully as a refinement/coarsening indicator in the adaptive mesh finite volume method, as the method refines the mesh around nonsmooth regions and coarsens it around smooth regions.

  3. Proceedings of the workshop on adaptive grid methods for fusion plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Koniges, A.E.; Craddock, G.G.; Schnack, D.D.; Strauss, H.R.

    1995-07-01

    The purpose of the workshop was to assemble workers, both within and outside of the fusion-related computation areas, for discussion of the issues of dynamically adaptive gridding. There were three invited talks on adaptive gridding application experiences in various related fields of computational fluid dynamics (CFD), and nine short talks reporting on the progress of adaptive techniques in the specific areas of scrape-off-layer (SOL) modeling and magnetohydrodynamic (MHD) stability. Adaptive mesh methods have been successful in a number of diverse fields of CFD for over a decade. The method involves dynamic refinement of computed field profiles in a way that uniformly disperses the numerical errors associated with discrete approximations. Because the process optimizes computational effort, adaptive mesh methods can be used to study otherwise intractable physical problems that involve complex boundary shapes or multiple spatial/temporal scales. Recent results indicate that these adaptive techniques will be required for tokamak fluid-based simulations of the diverted tokamak SOL and for MHD stability problems related to the highest-priority ITER-relevant issues. Individual papers are indexed separately on the energy data bases.

  4. Utilizing and Adapting the Delphi Method for Use in Qualitative Research

    Directory of Open Access Journals (Sweden)

    Shane R. Brady

    2015-12-01

    Full Text Available The Delphi method is a pragmatic research method created in the 1950s by researchers at the RAND Corporation for use in policy making, organizational decision making, and to inform direct practices. While the Delphi method has been regularly utilized in mixed methods studies, far fewer studies have been completed using the Delphi method for qualitative research. Despite the utility of the Delphi method in social science research, little guidance is provided for using the Delphi in the context of theory building, in primarily qualitative studies, and in the context of community-engaged research (CER. This article will emphasize new and modest innovations in the Delphi method for improving the overall rigor of the method in theory building and CER.

  5. An adaptive singular spectrum analysis method for extracting brain rhythms of electroencephalography

    Directory of Open Access Journals (Sweden)

    Hai Hu

    2017-06-01

    Full Text Available Artifact removal and rhythm extraction from electroencephalography (EEG) signals are important for portable and wearable EEG recording devices. Incorporating a novel grouping rule, we propose an adaptive singular spectrum analysis (SSA) method for artifact removal and rhythm extraction. Based on the EEG signal amplitude, the grouping rule adaptively determines the first one or two SSA reconstructed components as artifacts and removes them. The remaining reconstructed components are then grouped based on their peak frequencies in the Fourier transform to extract the desired rhythms. The grouping rule thus enables SSA to adapt to EEG signals containing different levels of artifacts and rhythms. Simulated EEG data based on the Markov Process Amplitude (MPA) EEG model and experimental EEG data in the eyes-open and eyes-closed states were used to verify the adaptive SSA method. Results showed better performance in artifact removal and rhythm extraction compared with wavelet decomposition (WDec) and two other recently reported SSA methods. Features of the alpha rhythms extracted using adaptive SSA were calculated to distinguish between the eyes-open and eyes-closed states. Results showed a higher accuracy (95.8%) than that of the WDec method (79.2%) and the infinite impulse response (IIR) filtering method (83.3%).
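
    The adaptive, amplitude-based grouping rule is the paper's contribution and is not reproduced here; the generic SSA machinery it plugs into can be sketched as follows, with the window length and the eigentriple grouping left to the caller.

        import numpy as np

        def ssa_reconstruct(signal, window=50, groups=None):
            """Basic singular spectrum analysis: embed, decompose, reconstruct.

            Returns one reconstructed series per selected eigentriple group.
            """
            n = signal.size
            k = n - window + 1
            # Trajectory (Hankel) matrix: sliding windows as columns.
            traj = np.column_stack([signal[i:i + window] for i in range(k)])
            u, s, vt = np.linalg.svd(traj, full_matrices=False)
            groups = groups or [[i] for i in range(len(s))]
            comps = []
            for g in groups:
                mat = sum(s[i] * np.outer(u[:, i], vt[i]) for i in g)
                # Diagonal averaging (Hankelization) back to a 1-D series.
                comp = np.array([np.mean(mat[::-1].diagonal(d))
                                 for d in range(-window + 1, k)])
                comps.append(comp)
            return comps

    For instance, ssa_reconstruct(eeg, window=100, groups=[[0], [1, 2]]) would separate a dominant artifact component from a candidate rhythm pair; choosing such groups automatically is exactly what the paper's grouping rule does.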

  6. The adaptation method in the Monte Carlo simulation for computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hyoung Gun; Yoon, Chang Yeon; Lee, Won Ho [Dept. of Bio-convergence Engineering, Korea University, Seoul (Korea, Republic of); Cho, Seung Ryong [Dept. of Nuclear and Quantum Engineering, Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Park, Sung Ho [Dept. of Neurosurgery, Ulsan University Hospital, Ulsan (Korea, Republic of)

    2015-06-15

    The patient dose incurred from diagnostic procedures during advanced radiotherapy has become an important issue. Many researchers in medical physics are using computational simulations to calculate complex parameters in experiments. However, extended computation times make it difficult for personal computers to run the conventional Monte Carlo method to simulate radiological images with high-flux photons such as images produced by computed tomography (CT). To minimize the computation time without degrading imaging quality, we applied a deterministic adaptation to the Monte Carlo calculation and verified its effectiveness by simulating CT image reconstruction for an image evaluation phantom (Catphan; Phantom Laboratory, New York, NY, USA) and a human-like voxel phantom (KTMAN-2) (Los Alamos National Laboratory, Los Alamos, NM, USA). For the deterministic adaptation, the relationship between iteration numbers and the simulations was estimated and the option to simulate scattered radiation was evaluated. The processing times of simulations using the adaptive method were at least 500 times faster than those using a conventional statistical process. In addition, compared with the conventional statistical method, the adaptive method provided images that were more similar to the experimental images, which proved that the adaptive method was highly effective for a simulation that requires a large number of iterations; assuming no radiation scattering in the vicinity of the detectors minimized artifacts in the reconstructed image.

  7. Performance of the adaptive collision source (ACS) method for discrete ordinates in parallel environments

    International Nuclear Information System (INIS)

    Walters, W.J.; Haghighat, A.

    2013-01-01

    A new collision source method has been developed to solve the Linear Boltzmann Equation (LBE) more efficiently by adaptation of the angular quadrature order. The angular adaptation method is unique in that the flux from each scattering source iteration is obtained separately, with potentially a different quadrature order. This allows for an optimal use of processing power, by using a high-order quadrature for the first few iterations that need it, before shifting to lower-order quadratures for the remaining iterations. This is essentially an extension of the first collision source method, and we call it the adaptive collision source method (ACS). The ACS methodology has been implemented in the TITAN discrete ordinates code, and has shown a speedup of 2-3× on a test problem, with very little loss of accuracy (within a provided adaptive tolerance). Further, the code has been extended to work in parallel environments by angular decomposition. Although the method requires increased parallel communication, tests have shown excellent scalability, with parallel fractions of up to 99%. (authors)

  8. The adaptation method in the Monte Carlo simulation for computed tomography

    Directory of Open Access Journals (Sweden)

    Hyounggun Lee

    2015-06-01

    Full Text Available The patient dose incurred from diagnostic procedures during advanced radiotherapy has become an important issue. Many researchers in medical physics are using computational simulations to calculate complex parameters in experiments. However, extended computation times make it difficult for personal computers to run the conventional Monte Carlo method to simulate radiological images with high-flux photons such as images produced by computed tomography (CT). To minimize the computation time without degrading imaging quality, we applied a deterministic adaptation to the Monte Carlo calculation and verified its effectiveness by simulating CT image reconstruction for an image evaluation phantom (Catphan; Phantom Laboratory, New York, NY, USA) and a human-like voxel phantom (KTMAN-2) (Los Alamos National Laboratory, Los Alamos, NM, USA). For the deterministic adaptation, the relationship between iteration numbers and the simulations was estimated and the option to simulate scattered radiation was evaluated. The processing times of simulations using the adaptive method were at least 500 times faster than those using a conventional statistical process. In addition, compared with the conventional statistical method, the adaptive method provided images that were more similar to the experimental images, which proved that the adaptive method was highly effective for a simulation that requires a large number of iterations; assuming no radiation scattering in the vicinity of the detectors minimized artifacts in the reconstructed image.

  9. Regularity effect in prospective memory during aging

    Directory of Open Access Journals (Sweden)

    Geoffrey Blondelle

    2016-10-01

    Full Text Available Background: The regularity effect can affect performance in prospective memory (PM), but little is known about the cognitive processes linked to this effect. Moreover, its impact with regard to aging remains unknown. To our knowledge, this study is the first to examine the regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults. Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 16 intermediate adults (40–55), and 25 older adults (65–80). The task, adapted from the Virtual Week, was designed to manipulate the regularity of the various activities of daily life that were to be recalled (regular repeated activities vs. irregular non-repeated activities). We examine the role of several cognitive functions, including certain dimensions of executive functions (planning, inhibition, shifting), binding, short-term memory, and retrospective episodic memory, to identify those involved in PM, according to regularity and age. Results: A mixed-design ANOVA showed a main effect of task regularity and an interaction between age and regularity: an age-related difference in PM performance was found for irregular activities (older < young), but not for regular activities. All participants recalled more regular activities than irregular ones, with no age effect. It appeared that recalling regular activities only involved planning for both intermediate and older adults, while recalling irregular ones was linked to planning, inhibition, short-term memory, binding, and retrospective episodic memory. Conclusion: Taken together, our data suggest that planning capacities seem to play a major role in remembering to perform intended actions with advancing age. Furthermore, the age-PM-paradox may be attenuated when the experimental design is adapted by implementing a familiar context through the use of activities of daily living. The clinical

  10. Cancer survival analysis using semi-supervised learning method based on Cox and AFT models with L1/2 regularization.

    Science.gov (United States)

    Liang, Yong; Chai, Hua; Liu, Xiao-Ying; Xu, Zong-Ben; Zhang, Hai; Leung, Kwong-Sak

    2016-03-01

    One of the most important objectives of clinical cancer research is to diagnose cancer more accurately based on patients' gene expression profiles. Both the Cox proportional hazards model (Cox) and the accelerated failure time model (AFT) have been widely adopted for high-risk/low-risk classification and survival time prediction in patients' clinical treatment. Nevertheless, two main dilemmas limit the accuracy of these prediction methods. One is that the small sample size and censored data remain a bottleneck for training a robust and accurate Cox classification model. In addition, tumours with similar phenotypes and prognoses may actually be completely different diseases at the genotype and molecular level, and the utility of the AFT model for survival time prediction is limited when such biological differences have not been previously identified. To overcome these two dilemmas, we proposed a novel semi-supervised learning method based on the Cox and AFT models to accurately predict the treatment risk and survival time of patients. Moreover, we adopted the efficient L1/2 regularization approach in the semi-supervised learning method to select the relevant genes that are significantly associated with the disease. The results of the simulation experiments show that the semi-supervised learning model can significantly improve the predictive performance of the Cox and AFT models in survival analysis. The proposed procedures have been successfully applied to four real microarray gene expression and artificial evaluation datasets. The advantages of our proposed semi-supervised learning method include: 1) a significant increase in the available training samples from censored data; 2) high capability for identifying the survival risk classes of patients in the Cox model; 3) high predictive accuracy for patients' survival time in the AFT model; 4) strong capability for relevant biomarker selection. Consequently, our proposed semi
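
    The record does not spell out how the L1/2 penalty is optimized. A common building block in the L1/2 literature is the half-thresholding operator of Xu et al. (2012), sketched below inside one iterative thresholding step for a plain least-squares data term; treating this as the paper's exact solver for the Cox and AFT models would be an assumption.

        import numpy as np

        def half_threshold(x: np.ndarray, lam: float) -> np.ndarray:
            """Coordinate-wise half-thresholding operator for the L1/2 penalty
            (Xu et al., 2012): the analogue of soft thresholding for L1."""
            out = np.zeros_like(x)
            thresh = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)
            big = np.abs(x) > thresh
            phi = np.arccos((lam / 8.0) * (np.abs(x[big]) / 3.0) ** -1.5)
            out[big] = (2.0 / 3.0) * x[big] * (1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))
            return out

        def iht_step(x, A, y, lam, mu=None):
            """One iterative half-thresholding step for
            min ||y - A x||^2 + lam * ||x||_{1/2}^{1/2} (gradient step, then threshold)."""
            mu = mu or 1.0 / np.linalg.norm(A, 2) ** 2   # step size <= 1 / ||A||^2
            return half_threshold(x + mu * A.T @ (y - A @ x), mu * lam)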

  11. A multilevel correction adaptive finite element method for Kohn-Sham equation

    Science.gov (United States)

    Hu, Guanghui; Xie, Hehu; Xu, Fei

    2018-02-01

    In this paper, an adaptive finite element method with a multilevel correction technique is proposed for solving the Kohn-Sham equation. In the method, the Kohn-Sham equation is solved on a fixed, appropriately coarse mesh with the finite element method, and the finite element space is successively improved by solving derived boundary value problems on a series of adaptively refined meshes. A main feature of the method is that solving the large-scale Kohn-Sham system is avoided effectively, and the derived boundary value problems can be handled efficiently by classical methods such as the multigrid method. Hence, significant acceleration is obtained in solving the Kohn-Sham equation with the proposed multilevel correction technique. The performance of the method is examined by a variety of numerical experiments.

  12. Regular simplex refinement by regular simplices

    NARCIS (Netherlands)

    Casado, L.G.; Tóth, B.G.; Hendrix, E.M.T.; García, I.

    2014-01-01

    A natural way to define branching in Branch-and-Bound for blending problems is to do bisection. The disadvantage of bisection is that partition sets are in general irregular. A regular simplex with fixed orientation can be determined by its center and size, allowing storage savings in a Branch-and-

  13. θ-regular spaces

    Directory of Open Access Journals (Sweden)

    Dragan S. Janković

    1985-01-01

    Full Text Available In this paper we define a topological space X to be θ-regular if every filterbase in X with a nonempty θ-adherence has a nonempty adherence. It is shown that the class of θ-regular topological spaces includes rim-compact topological spaces and that θ-regular H(i) (Hausdorff) topological spaces are compact (regular). The concept of θ-regularity is used to extend a closed graph theorem of Rose [1]. It is established that an r-subcontinuous closed graph function into a θ-regular topological space is continuous. Another sufficient condition for continuity of functions due to Rose [1] is also extended by introducing the concept of almost weak continuity, which is weaker than both weak continuity of Levine and almost continuity of Husain. It is shown that an almost weakly continuous closed graph function into a strongly locally compact topological space is continuous.

  14. Model Reference Adaptive Control of the Air Flow Rate of Centrifugal Compressor Using State Space Method

    International Nuclear Information System (INIS)

    Han, Jaeyoung; Jung, Mooncheong; Yu, Sangseok; Yi, Sun

    2016-01-01

    In this study, a model reference adaptive controller is developed to regulate the outlet air flow rate of a centrifugal compressor for an automotive supercharger. The centrifugal compressor model is developed using an analytical method to predict the transient operating behavior, and the model is validated against experimental data to confirm its accuracy. The model reference adaptive control structure consists of a compressor model and an MRAC (model reference adaptive control) mechanism. Feedback control is not robust to variation of the system parameters, but the applied adaptive control remains robust even when the system parameters change. As a result, the MRAC regulated the air flow rate to the reference value and was found to be more robust than feedback control under system parameter changes.
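
    A minimal sketch of the MRAC idea follows, assuming the classic MIT-rule setting of a scalar plant whose dynamics match the reference model except for an unknown input gain; the compressor model, gains, and command signal of the paper are replaced by illustrative values.

        import numpy as np

        def simulate_mrac(T=20.0, dt=1e-3, gamma=2.0, a=2.0, b=1.0, am=2.0, bm=2.0):
            """MIT-rule MRAC for the scalar plant  dy/dt = -a*y + b*theta*u_c,
            tracking the reference model       dym/dt = -am*ym + bm*u_c.

            With a == am, the ideal feedforward gain is theta* = bm / b.
            gamma is the adaptation gain.
            """
            n = int(T / dt)
            y = ym = theta = 0.0
            hist = np.zeros((n, 2))
            for k in range(n):
                uc = 1.0 if (k * dt) % 10 < 5 else -1.0   # square-wave command
                e = y - ym                                 # model-following error
                theta -= gamma * e * ym * dt               # MIT rule: dtheta = -g*e*ym
                y += (-a * y + b * theta * uc) * dt
                ym += (-am * ym + bm * uc) * dt
                hist[k] = (y, ym)
            return hist

        out = simulate_mrac()
        print(np.abs(out[-1000:, 0] - out[-1000:, 1]).max())  # late-time model-following error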

  15. Shock Capturing with PDE-Based Artificial Viscosity for an Adaptive, Higher-Order Discontinuous Galerkin Finite Element Method

    National Research Council Canada - National Science Library

    Barter, Garrett E

    2008-01-01

    ...), adaptive computational fluid dynamics (CFD). Since these cases involve flow velocities greater than the speed of sound, an appropriate shock capturing for higher-order, adaptive methods is necessary...

  16. Regularization by External Variables

    DEFF Research Database (Denmark)

    Bossolini, Elena; Edwards, R.; Glendinning, P. A.

    2016-01-01

    Regularization was a big topic at the 2016 CRM Intensive Research Program on Advances in Nonsmooth Dynamics. There are many open questions concerning well known kinds of regularization (e.g., by smoothing or hysteresis). Here, we propose a framework for an alternative and important kind of regularization...

  17. Development of parallel implementation of adaptive numerical methods with industrial applications in fluid mechanics

    International Nuclear Information System (INIS)

    Laucoin, E.

    2008-10-01

    Numerical resolution of partial differential equations can be made reliable and efficient through the use of adaptive numerical methods. We present here the work we have done for the design, implementation and validation of such a method within an industrial software platform with applications in thermohydraulics. From the geometric point of view, this method handles both mesh refinement and mesh coarsening, while ensuring the quality of the mesh cells. Numerically, we use the mortar element formalism in order to extend the Finite Volumes-Elements method implemented in the Trio-U platform and to deal with the non-conforming meshes arising from the adaptation procedure. Finally, we present an implementation of this method using concepts from domain decomposition methods to ensure its efficiency in a parallel execution context. (author)

  18. Control of beam halo-chaos using neural network self-adaptation method

    International Nuclear Information System (INIS)

    Fang Jinqing; Huang Guoxian; Luo Xiaoshu

    2004-11-01

    Taking advantage of neural network control methods for nonlinear complex systems, control of beam halo-chaos in the periodic focusing channels (networks) of high-intensity accelerators is studied by a feed-forward back-propagating neural network self-adaptation method. The envelope radius of the high-intensity proton beam is driven to the matched beam radius by suitably selecting the control structure of the neural network and the linear feedback coefficient, and by adjusting the weight coefficients of the neural network. The beam halo-chaos is obviously suppressed and the oscillation amplitude is greatly reduced after the neural network self-adaptation control is applied. (authors)

  19. A comparison of locally adaptive multigrid methods: LDC, FAC and FIC

    Science.gov (United States)

    Khadra, Khodor; Angot, Philippe; Caltagirone, Jean-Paul

    1993-01-01

    This study is devoted to a comparative analysis of three 'Adaptive ZOOM' (ZOom Overlapping Multi-level) methods based on similar concepts of hierarchical multigrid local refinement: LDC (Local Defect Correction), FAC (Fast Adaptive Composite), and FIC (Flux Interface Correction), which we proposed recently. These methods are tested on two examples of a two-dimensional elliptic problem. We compare, for V-cycle procedures, the asymptotic evolution of the global error evaluated by discrete norms, the corresponding local errors, and the convergence rates of these algorithms.

  20. The Adapted Ordering Method for Lie algebras and superalgebras and their generalizations

    Energy Technology Data Exchange (ETDEWEB)

    Gato-Rivera, Beatriz [Instituto de Matematicas y Fisica Fundamental, CSIC, Serrano 123, Madrid 28006 (Spain); NIKHEF-H, Kruislaan 409, NL-1098 SJ Amsterdam (Netherlands)]

    2008-02-01

    In 1998 the Adapted Ordering Method was developed for the representation theory of the superconformal algebras in two dimensions. It allows us to determine maximal dimensions for a given type of space of singular vectors, to identify all singular vectors by only a few coefficients, to spot subsingular vectors and to set the basis for constructing embedding diagrams. In this paper we present the Adapted Ordering Method for general Lie algebras and superalgebras and their generalizations, provided they can be triangulated. We also review briefly the results obtained for the Virasoro algebra and for the N = 2 and Ramond N = 1 superconformal algebras.

  1. Manifold Regularized Correlation Object Tracking.

    Science.gov (United States)

    Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling

    2018-05-01

    In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped from both target and nontarget regions. Thus, the final classifier in our method is trained with positive, negative, and unlabeled base samples, which is a semisupervised learning framework. A block optimization strategy is further introduced to learn a manifold regularization-based correlation filter for efficient online tracking. Experiments on two public tracking data sets demonstrate the superior performance of our tracker compared with the state-of-the-art tracking approaches.

  2. pySeismicFMM: Python based Travel Time Calculation in Regular 2D and 3D Grids in Cartesian and Geographic Coordinates using Fast Marching Method

    Science.gov (United States)

    Wilde-Piorko, M.; Polkowski, M.

    2016-12-01

    Seismic wave travel time calculation is the most common numerical operation in seismology. The most efficient is travel time calculation in a 1D velocity model: for a given source depth, receiver depth and angular distance, the time is calculated within a fraction of a second. Unfortunately, in most cases 1D is not enough to capture differentiating local and regional structures. Whenever possible, travel time should be calculated through a 3D velocity model. This can be achieved using ray tracing or time propagation in space. While a single ray path calculation is quick, it is complicated to find the ray path that connects the source with the receiver. Time propagation in space using the Fast Marching Method is more efficient in most cases, especially when there are multiple receivers. In this presentation the final release of the Python module pySeismicFMM is presented: a simple and very efficient tool for calculating travel times from sources to receivers. The calculation requires a regular 2D or 3D velocity grid, either in Cartesian or geographic coordinates. On a desktop-class computer, the calculation speed is 200k grid cells per second. The calculation has to be performed once for every source location and provides travel times to all receivers. pySeismicFMM is free and open source. Development of this tool is a part of the author's PhD thesis. The source code of pySeismicFMM will be published before the Fall Meeting. The National Science Centre Poland provided financial support for this work via NCN grant DEC-2011/02/A/ST10/00284.
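
    pySeismicFMM's own API is not shown in this record; the same kind of computation can be sketched with the independent scikit-fmm package, whose travel_time function solves the eikonal equation on a regular grid. The velocity model and source placement below are illustrative assumptions.

        import numpy as np
        import skfmm  # scikit-fmm: pip install scikit-fmm (assumed available)

        # 2D velocity model: a slow near-surface layer over a faster half-space.
        ny, nx, h = 201, 401, 10.0                 # grid size and spacing in metres
        speed = np.full((ny, nx), 3500.0)          # m/s
        speed[:40, :] = 1800.0                     # slower top layer

        # The zero level set of phi marks the source; travel_time propagates from it.
        phi = np.ones((ny, nx))
        phi[100, 200] = -1.0                       # point source in grid coordinates
        tt = skfmm.travel_time(phi, speed, dx=h)   # seconds, same shape as the grid

        print(f"travel time to a receiver at the surface: {tt[0, 50]:.3f} s")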

  3. A simple method to adapt time sampling of the analog signal

    International Nuclear Information System (INIS)

    Kalinin, Yu.G.; Martyanov, I.S.; Sadykov, Kh.; Zastrozhnova, N.N.

    2004-01-01

    In this paper we briefly describe a time sampling method which is adapted to the speed of the signal change. In principle, this method is based on a simple idea: the combination of discrete integration with differentiation of the analog signal. This method can be used in nuclear electronics research into the characteristics of detectors and the shape of the pulse signal, as well as the pulse and transient characteristics of inertial signal-processing systems, etc.

  4. An adaptive robust regression method: Application to galaxy spectrum baseline estimation

    OpenAIRE

    Bacher, Raphael; Chatelain, Florent; Michel, Olivier

    2016-01-01

    In this paper, a new robust regression method based on the Least Trimmed Squares (LTS) is proposed. The novelty of this approach consists in a simple adaptive estimation of the number of outliers. This method can be applied to baseline estimation, for example to improve the detection of gas spectral signatures in astronomical hyperspectral data such as those produced by the new Multi Unit Spectroscopic Explorer (MUSE) instrument. To do so a method following the genera...
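
    The adaptive estimation of the number of outliers is the paper's novelty and is not reproduced here; the LTS core it builds on can be sketched with random elemental starts followed by concentration steps (the FAST-LTS idea), with the trimming count h fixed by the caller.

        import numpy as np

        def lts_fit(X, y, h=None, n_starts=20, n_csteps=10, seed=0):
            """Least Trimmed Squares via random starts plus concentration steps:
            repeatedly refit ordinary least squares on the h smallest residuals."""
            rng = np.random.default_rng(seed)
            n, p = X.shape
            h = h or (n + p + 1) // 2            # classical default coverage
            best_beta, best_obj = None, np.inf
            for _ in range(n_starts):
                subset = rng.choice(n, size=p + 1, replace=False)  # elemental start
                beta = np.linalg.lstsq(X[subset], y[subset], rcond=None)[0]
                for _ in range(n_csteps):
                    resid2 = (y - X @ beta) ** 2
                    keep = np.argsort(resid2)[:h]                  # concentration step
                    beta = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
                obj = np.sort((y - X @ beta) ** 2)[:h].sum()
                if obj < best_obj:
                    best_beta, best_obj = beta, obj
            return best_beta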

  5. Convergence acceleration of Navier-Stokes equation using adaptive wavelet method

    International Nuclear Information System (INIS)

    Kang, Hyung Min; Ghafoor, Imran; Lee, Do Hyung

    2010-01-01

    An efficient adaptive wavelet method is proposed to enhance the computational efficiency of Navier-Stokes solvers. The method is based on the sparse point representation (SPR), which uses wavelet decomposition and thresholding to obtain a sparsely distributed dataset. The thresholding mechanism is modified in order to maintain the spatial accuracy of a conventional Navier-Stokes solver by adapting the threshold value to the order of the spatial truncation error. The computational grid can be dynamically adapted to a transient solution to reflect local changes in the solution. The flux evaluation is then carried out only at the points of the adapted dataset, which reduces the computational effort and memory requirements. A stabilization technique is also implemented to avoid the additional numerical errors introduced by the thresholding procedure. The numerical results of the adaptive wavelet method are compared with those of a conventional solver to validate the enhancement in computational efficiency without degrading the numerical accuracy of the conventional solver.

  6. Regularization algorithms based on total least squares

    DEFF Research Database (Denmark)

    Hansen, Per Christian; O'Leary, Dianne P.

    1996-01-01

    Discretizations of inverse problems lead to systems of linear equations with a highly ill-conditioned coefficient matrix, and in order to compute stable solutions to these systems it is necessary to apply regularization methods. Classical regularization methods, such as Tikhonov's method or trunc...

  7. A constrained Delaunay discretization method for adaptively meshing highly discontinuous geological media

    Science.gov (United States)

    Wang, Yang; Ma, Guowei; Ren, Feng; Li, Tuo

    2017-12-01

    A constrained Delaunay discretization method is developed to generate high-quality doubly adaptive meshes of highly discontinuous geological media. Complex features such as three-dimensional discrete fracture networks (DFNs), tunnels, shafts, slopes, boreholes, water curtains, and drainage systems are taken into account in the mesh generation. The constrained Delaunay triangulation method is used to create adaptive triangular elements on planar fractures. Persson's algorithm (Persson, 2005), based on an analogy between triangular elements and spring networks, is enriched to automatically discretize a planar fracture into mesh points with varying density and smooth-quality gradient. The triangulated planar fractures are treated as planar straight-line graphs (PSLGs) to construct piecewise-linear complex (PLC) for constrained Delaunay tetrahedralization. This guarantees the doubly adaptive characteristic of the resulted mesh: the mesh is adaptive not only along fractures but also in space. The quality of elements is compared with the results from an existing method. It is verified that the present method can generate smoother elements and a better distribution of element aspect ratios. Two numerical simulations are implemented to demonstrate that the present method can be applied to various simulations of complex geological media that contain a large number of discontinuities.

  8. Method and system for training dynamic nonlinear adaptive filters which have embedded memory

    Science.gov (United States)

    Rabinowitz, Matthew (Inventor)

    2002-01-01

    Described herein is a method and system for training nonlinear adaptive filters (or neural networks) which have embedded memory. Such memory can arise in a multi-layer finite impulse response (FIR) architecture, or an infinite impulse response (IIR) architecture. We focus on filter architectures with separate linear dynamic components and static nonlinear components. Such filters can be structured so as to restrict their degrees of computational freedom based on a priori knowledge about the dynamic operation to be emulated. The method is detailed for an FIR architecture which consists of linear FIR filters together with nonlinear generalized single layer subnets. For the IIR case, we extend the methodology to a general nonlinear architecture which uses feedback. For these dynamic architectures, we describe how one can apply optimization techniques which make updates closer to the Newton direction than those of a steepest descent method, such as backpropagation. We detail a novel adaptive modified Gauss-Newton optimization technique, which uses an adaptive learning rate to determine both the magnitude and direction of update steps. For a wide range of adaptive filtering applications, the new training algorithm converges faster and to a smaller value of cost than both steepest-descent methods such as backpropagation-through-time, and standard quasi-Newton methods. We apply the algorithm to modeling the inverse of a nonlinear dynamic tracking system, as well as a nonlinear amplifier.

  9. A Sequential Kriging reliability analysis method with characteristics of adaptive sampling regions and parallelizability

    International Nuclear Information System (INIS)

    Wen, Zhixun; Pei, Haiqing; Liu, Hai; Yue, Zhufeng

    2016-01-01

    The sequential Kriging reliability analysis (SKRA) method has been developed in recent years for nonlinear implicit response functions which are expensive to evaluate. This type of method includes EGRA, the efficient global reliability analysis method, and AK-MCS, the active learning reliability method combining the Kriging model and Monte Carlo simulation. The purpose of this paper is to improve SKRA through adaptive sampling regions and parallelizability. The adaptive sampling regions strategy is proposed to avoid selecting samples in regions where the probability density is so low that their accuracy has negligible effect on the results. The size of the sampling regions is adapted according to the failure probability calculated in the last iteration. Two parallel strategies are introduced and compared, aimed at selecting multiple sample points at a time. The improvement is verified through several troublesome examples. - Highlights: • The ISKRA method improves the efficiency of SKRA. • The adaptive sampling regions strategy reduces the number of needed samples. • The two parallel strategies reduce the number of needed iterations. • The accuracy of the optimal value impacts the number of samples significantly.

  10. An adaptive multi-element probabilistic collocation method for statistical EMC/EMI characterization

    KAUST Repository

    Yücel, Abdulkadir C.

    2013-12-01

    An adaptive multi-element probabilistic collocation (ME-PC) method for quantifying uncertainties in electromagnetic compatibility and interference phenomena involving electrically large, multi-scale, and complex platforms is presented. The method permits the efficient and accurate statistical characterization of observables (i.e., quantities of interest such as coupled voltages) that potentially vary rapidly and/or are discontinuous in the random variables (i.e., parameters that characterize uncertainty in a system's geometry, configuration, or excitation). The method achieves its efficiency and accuracy by recursively and adaptively dividing the domain of the random variables into subdomains using as a guide the decay rate of relative error in a polynomial chaos expansion of the observables. While constructing local polynomial expansions on each subdomain, a fast integral-equation-based deterministic field-cable-circuit simulator is used to compute the observable values at the collocation/integration points determined by the adaptive ME-PC scheme. The adaptive ME-PC scheme requires far fewer (computationally costly) deterministic simulations than traditional polynomial chaos collocation and Monte Carlo methods for computing averages, standard deviations, and probability density functions of rapidly varying observables. The efficiency and accuracy of the method are demonstrated via its applications to the statistical characterization of voltages in shielded/unshielded microwave amplifiers and magnetic fields induced on car tire pressure sensors. © 2013 IEEE.

  11. [Comparative adaptation of crowns of selective laser melting and wax-lost-casting method].

    Science.gov (United States)

    Li, Guo-qiang; Shen, Qing-yi; Gao, Jian-hua; Wu, Xue-ying; Chen, Li; Dai, Wen-an

    2012-07-01

    To investigate the marginal adaptation of crowns fabricated by selective laser melting (SLM) and by the wax-lost-casting method, so as to provide an experimental basis for the clinic. Co-Cr alloy full crowns were fabricated by SLM and by wax-lost-casting, with 24 samples in each group. All crowns were cemented with zinc phosphate cement and cut along the longitudinal axis by a line cutting machine. The gap between the crown tissue surface and the die was measured by a 6-point measuring method with scanning electron microscopy (SEM). The marginal adaptation of the crowns fabricated by SLM and by wax-lost-casting was compared statistically. The gaps of the SLM crowns were (36.51 ± 2.94), (49.36 ± 3.31), (56.48 ± 3.35), and (42.20 ± 3.60) µm, and those of the wax-lost-casting crowns were (68.86 ± 5.41), (58.86 ± 6.10), (70.62 ± 5.79), and (69.90 ± 6.00) µm. There were significant differences between the two groups (P < 0.05). Co-Cr alloy full crowns fabricated by the wax-lost-casting method and the SLM method both provide clinically acceptable marginal adaptation, and the marginal adaptation of SLM is better than that of wax-lost-casting.

  12. Automatic off-body overset adaptive Cartesian mesh method based on an octree approach

    International Nuclear Information System (INIS)

    Péron, Stéphanie; Benoit, Christophe

    2013-01-01

    This paper describes a method for generating adaptive structured Cartesian grids within a near-body/off-body mesh partitioning framework for flow simulation around complex geometries. The off-body Cartesian mesh generation derives from an octree structure, with each octree leaf node defining a structured Cartesian block. This makes it possible to account for the large-scale discrepancies in resolution between the different bodies involved in the simulation, with minimal memory requirements. Two different conversions from the octree to Cartesian grids are proposed: the first generates Adaptive Mesh Refinement (AMR) type grid systems, and the second generates abutting or minimally overlapping Cartesian grid sets. We also introduce an algorithm to control the number of points at each adaptation step, which automatically determines relevant values of the refinement indicator driving grid refinement and coarsening. An application to a wing tip vortex computation demonstrates the capability of the method to accurately capture the flow features.

  13. Adaptive wavelet method for pricing two-asset Asian options with floating strike

    Science.gov (United States)

    Černá, Dana

    2017-12-01

    Asian options are path-dependent option contracts whose payoff depends on the average value of the asset price over some period of time. We focus on the pricing of Asian options on two assets. The pricing model is represented by a parabolic equation with one time variable and three state variables, but a substitution reduces it to an equation with only two state variables. For time discretization we use the θ-scheme. We propose a wavelet basis adapted to the boundary conditions and use an adaptive scheme with this basis for the discretization on each time level. The main advantage of this scheme is the small number of degrees of freedom required. We present numerical experiments for the Asian put option with floating strike and compare the results of the proposed adaptive method with those of the Galerkin method.

  14. Adaptive oriented PDEs filtering methods based on new controlling speed function for discontinuous optical fringe patterns

    Science.gov (United States)

    Zhou, Qiuling; Tang, Chen; Li, Biyuan; Wang, Linlin; Lei, Zhenkun; Tang, Shuwei

    2018-01-01

    The filtering of discontinuous optical fringe patterns is a challenging problem in this area. This paper is concerned with oriented partial differential equation (OPDE)-based image filtering methods for discontinuous optical fringe patterns. We define a new controlling speed function that depends on the orientation coherence. The orientation coherence can be used to distinguish continuous regions from discontinuous regions and can be calculated from the fringe orientation. We introduce the new controlling speed function into the previous OPDEs and propose adaptive OPDE filtering models. With the proposed adaptive OPDE filtering models, filtering in the continuous and discontinuous regions can be carried out selectively. We demonstrate the performance of the proposed adaptive OPDEs by applying them to simulated and experimental fringe patterns, and compare our methods with the previous OPDEs.

  15. A Least Square-Based Self-Adaptive Localization Method for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Baoguo Yu

    2016-01-01

    Full Text Available In wireless sensor network (WSN) localization methods based on the Received Signal Strength Indicator (RSSI), it is usually necessary to determine the parameters of the radio signal propagation model before estimating the distance between an anchor node and an unknown node from their communication RSSI value; a localization algorithm is then used to estimate the location of the unknown node. However, this localization method, though high in localization accuracy, has weaknesses such as a complex working procedure and poor system versatility. To address these defects, a self-adaptive WSN localization method based on least squares is proposed, which uses the least-squares criterion to estimate the parameters of the radio signal propagation model, substantially reducing the amount of computation in the estimation process. The experimental results show that the proposed self-adaptive localization method delivers high processing efficiency while satisfying the high localization accuracy requirement, making it of practical value.
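
    The model-fitting step is a two-line linear regression. The sketch below, using hypothetical calibration data, fits the log-distance path-loss model RSSI(d) = A - 10·n·log10(d) by least squares and then inverts it to range a new reading; it illustrates the idea rather than the paper's exact procedure.

```python
# Minimal sketch: least-squares fit of the log-distance path-loss model
# RSSI(d) = A - 10*n*log10(d), then distance estimation from a measured RSSI.
# The calibration data below are hypothetical.
import numpy as np

d = np.array([1.0, 2.0, 4.0, 8.0, 16.0])              # known distances (m)
rssi = np.array([-40.2, -46.1, -52.3, -58.0, -63.9])  # measured RSSI (dBm)

# Linear in log10(d): rssi = A - 10*n*log10(d)
M = np.column_stack([np.ones_like(d), -10.0 * np.log10(d)])
(A, n), *_ = np.linalg.lstsq(M, rssi, rcond=None)
print(f"A = {A:.2f} dBm at 1 m, path-loss exponent n = {n:.2f}")

# Invert the fitted model to estimate range from a new RSSI reading
rssi_new = -55.0
d_est = 10 ** ((A - rssi_new) / (10.0 * n))
print(f"estimated distance: {d_est:.2f} m")
```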

  16. An Improved NMS-Based Adaptive Edge Detection Method and Its FPGA Implementation

    Directory of Open Access Journals (Sweden)

    Enzeng Dong

    2016-01-01

    Full Text Available To improve the processing speed and accuracy of edge detection, an adaptive edge detection method based on improved NMS (non-maximum suppression) is proposed in this paper. In the method, the gradient image is computed by four directional Sobel operators and then processed using the NMS method. By defining a power map function, the element values of the gradient image histogram are mapped into a wider value range. By calculating the maximal between-class variance according to the mapped histogram, the corresponding threshold is obtained as the adaptive threshold for edge detection. Finally, for convenience in engineering applications, the proposed method was implemented on an FPGA (Field Programmable Gate Array). The experimental results demonstrate that the proposed method is effective for edge detection and suitable for real-time applications.
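
    A software sketch of the described pipeline (before any FPGA mapping) follows: four-direction Sobel gradients, non-maximum suppression, then an Otsu-style threshold computed on a power-mapped gradient histogram. The exponent gamma is an assumed stand-in for the paper's power map function.

```python
# Illustrative pipeline: Sobel gradients, NMS, Otsu threshold on a
# power-mapped histogram. `gamma` is an assumption, not the paper's function.
import numpy as np
from scipy import ndimage

def adaptive_edges(img, gamma=0.5):
    gx = ndimage.sobel(img.astype(float), axis=1)
    gy = ndimage.sobel(img.astype(float), axis=0)
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0

    # Non-maximum suppression along four quantized gradient directions
    nms = np.zeros_like(mag)
    sector = (np.round(ang / 45.0).astype(int) % 4) * 45
    for d, (dy, dx) in {0: (0, 1), 45: (-1, 1),
                        90: (-1, 0), 135: (-1, -1)}.items():
        fwd = np.roll(mag, (-dy, -dx), axis=(0, 1))
        bwd = np.roll(mag, (dy, dx), axis=(0, 1))
        keep = (sector == d) & (mag >= fwd) & (mag >= bwd)
        nms[keep] = mag[keep]

    # Otsu threshold (maximal between-class variance) on the mapped histogram
    hist, _ = np.histogram((nms / (nms.max() + 1e-12)) ** gamma,
                           bins=256, range=(0.0, 1.0))
    p = hist / hist.sum()
    w, mu = np.cumsum(p), np.cumsum(p * np.arange(256))
    var = (mu[-1] * w - mu) ** 2 / np.maximum(w * (1.0 - w), 1e-12)
    t = (np.argmax(var) / 255.0) ** (1.0 / gamma) * nms.max()
    return nms > t
```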

  17. Regularities of multifractal measures

    Indian Academy of Sciences (India)

    First, we prove the decomposition theorem for the regularities of multifractal Hausdorff measure and packing measure in R^d. This decomposition theorem enables us to split a set into regular and irregular parts, so that we can analyze each separately and recombine them without affecting density properties. Next ...

  18. Regularities of multifractal measures

    Indian Academy of Sciences (India)

    First, we prove the decomposition theorem for the regularities of multifractal Hausdorff measure and packing measure in R^d. This decomposition theorem enables us to split a set into regular and irregular parts, so that we can analyze each separately and recombine them without affecting density properties. Next, we ...

  20. Stochastic analytic regularization

    International Nuclear Information System (INIS)

    Alfaro, J.

    1984-07-01

    Stochastic regularization is reexamined, pointing out a restriction on its use due to a new type of divergence which is not present in the unregulated theory. Furthermore, we introduce a new form of stochastic regularization which permits the use of a minimal subtraction scheme to define the renormalized Green functions. (author)

  1. Development and evaluation of a method of calibrating medical displays based on fixed adaptation

    International Nuclear Information System (INIS)

    Sund, Patrik; Månsson, Lars Gunnar; Båth, Magnus

    2015-01-01

    Purpose: The purpose of this work was to develop and evaluate a new method for calibration of medical displays that includes the effect of fixed adaptation, using equipment and luminance levels typical of a modern radiology department. Methods: Low-contrast sinusoidal test patterns were derived at nine luminance levels from 2 to 600 cd/m² and used in a two-alternative forced-choice observer study, where the adaptation level was fixed at the logarithmic average of 35 cd/m². The contrast sensitivity at each luminance level was derived by establishing a linear relationship between the ten pattern contrast levels used at every luminance level and a detectability index (d′) calculated from the fraction of correct responses. A Gaussian function was fitted to the data and normalized to the adaptation level. The corresponding equation was used in a display calibration method that included the grayscale standard display function (GSDF) but compensated for fixed adaptation. In the evaluation study, the contrast of circular objects with a fixed pixel contrast was displayed using both calibration methods and was rated on a five-grade scale. Results were calculated using a visual grading characteristics method. Error estimations in both observer studies were derived using a bootstrap method. Results: The contrast sensitivities for the darkest and brightest patterns, compared to the contrast sensitivity at the adaptation luminance, were 37% and 56%, respectively. The obtained Gaussian fit corresponded well with similar studies. The evaluation study showed a higher degree of equally distributed contrast throughout the luminance range with the calibration method compensated for fixed adaptation than with the GSDF. The two lowest scores for the GSDF were obtained for the darkest and brightest patterns. These scores were significantly lower than the lowest score obtained for the compensated GSDF. For the GSDF, the scores for all luminance levels were statistically separated

  2. Investigation of the Adaptability of Transient Stability Assessment Methods to Real-Time Operation

    DEFF Research Database (Denmark)

    Weckesser, Johannes Tilman Gabriel; Jóhannsson, Hjörtur; Sommer, Stefan

    2012-01-01

    In this paper, an investigation of the adaptability of available transient stability assessment methods to real-time operation and their real-time performance is carried out. Two approaches, based on Lyapunov's method and the equal area criterion, are analyzed. The results allow the runtime of each method to be determined with respect to the number of inputs. Furthermore, they allow identification of which method is preferable in case of changes in the power system, such as the integration of distributed energy resources (DER). A comparison of the performance of the analyzed methods leads to the suggestion that matrix reduction and time domain simulation are the most critical operations.

  3. Adaptive control method for core power control in TRIGA Mark II reactor

    Science.gov (United States)

    Sabri Minhat, Mohd; Selamat, Hazlina; Subha, Nurul Adilla Mohd

    2018-01-01

    The 1 MWth TRIGA PUSPATI Reactor (RTP), a Mark II type, has undergone more than 35 years of operation. The existing core power control uses a feedback control algorithm (FCA). It is challenging to keep the core power stable at the desired value within acceptable error bands to meet the safety demands of RTP, owing to the sensitivity of nuclear research reactor operation. The current power tracking performance is unsatisfactory and can be improved. Therefore, a new core power control design is important for improving tracking performance and regulating reactor power through the movement of the control rods. In this paper, adaptive controllers, specifically Model Reference Adaptive Control (MRAC) and Self-Tuning Control (STC), were applied to the control of the core power. The model for core power control was based on mathematical models of the reactor core, the adaptive controller model, and control rod selection programming. The mathematical models of the reactor core were based on the point kinetics model, thermal hydraulic models, and reactivity models. The adaptive control model was derived using the Lyapunov method to ensure a stable closed-loop system, and the STC Generalised Minimum Variance (GMV) controller does not require exact knowledge of the plant transfer function when designing the core power control. The performance of the proposed adaptive control and the FCA was compared via computer simulation; the simulation results demonstrate the effectiveness and good performance of the proposed control method for core power control.

  4. Applications of automatic mesh generation and adaptive methods in computational medicine

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, J.A.; Macleod, R.S. [Univ. of Utah, Salt Lake City, UT (United States); Johnson, C.R.; Eason, J.C. [Duke Univ., Durham, NC (United States)

    1995-12-31

    Important problems in Computational Medicine exist that can benefit from the implementation of adaptive mesh refinement techniques. Biological systems are so inherently complex that only efficient models running on state-of-the-art hardware can begin to simulate reality. To tackle the complex geometries associated with medical applications, we present a general-purpose mesh generation scheme based upon the Delaunay tessellation algorithm and an iterative point generator. In addition, automatic, two- and three-dimensional adaptive mesh refinement methods are presented that are derived from local and global estimates of the finite element error. Mesh generation and adaptive refinement techniques are utilized to obtain accurate approximations of bioelectric fields within anatomically correct models of the heart and human thorax. Specifically, we explore the simulation of cardiac defibrillation and the general forward and inverse problems in electrocardiography (ECG). Comparisons between uniform and adaptive refinement techniques are made to highlight the computational efficiency and accuracy of adaptive methods in the solution of field problems in computational medicine.

  5. Comparative study of adaptive controller using MIT rules and Lyapunov method for MPPT standalone PV systems

    Science.gov (United States)

    Tariba, N.; Bouknadel, A.; Haddou, A.; Ikken, N.; Omari, Hafsa El; Omari, Hamid El

    2017-01-01

    The photovoltaic generator (PVG) has a nonlinear characteristic relating current to voltage, I = f(U), which depends on the variation of solar irradiation and temperature; in addition, its operating point depends directly on the load that it supplies. To overcome this drawback and extract the maximum power available at the terminals of the generator, an adaptation stage is introduced between the generator and the load to couple the two elements as well as possible. The adaptation stage is associated with a command called MPPT (Maximum Power Point Tracking), which forces the PVG to operate at the MPP (Maximum Power Point) under varying climatic conditions and load. This paper presents a comparative study between adaptive controllers for PV systems using the MIT rule and the Lyapunov method to regulate the PV voltage. The Incremental Conductance (IC) algorithm is used to extract the maximum power from the PVG by calculating the reference voltage Vref, and the adaptive controller is used to regulate and quickly track the PV voltage. The two adaptive controller methods are compared to demonstrate their performance using PSIM tools and experimental tests, and the mathematical model of the step-up converter with the PVG model is presented.
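
    The IC rule at the heart of the scheme fits in a few lines. The sketch below shows one update of the reference voltage Vref; the step size and the measurement plumbing are assumptions, and the adaptive (MIT-rule or Lyapunov) voltage regulator that tracks Vref is outside the sketch.

```python
# Minimal sketch of one Incremental Conductance (IC) update generating Vref.
# `step` and the tolerances are illustrative assumptions.
def ic_mppt_step(v, i, v_prev, i_prev, v_ref, step=0.1):
    """One IC update: at the MPP, dP/dV = 0, i.e. dI/dV = -I/V."""
    dv, di = v - v_prev, i - i_prev
    if abs(dv) < 1e-9:                  # voltage unchanged
        if abs(di) > 1e-9:
            v_ref += step if di > 0 else -step
    else:
        g_inc, g = di / dv, i / v       # incremental vs. instantaneous conductance
        if g_inc > -g:                  # left of the MPP: raise voltage
            v_ref += step
        elif g_inc < -g:                # right of the MPP: lower voltage
            v_ref -= step
        # g_inc == -g: at the MPP, hold v_ref
    return v_ref
```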

  6. An improved adaptive sampling and experiment design method for aerodynamic optimization

    Directory of Open Access Journals (Sweden)

    Huang Jiangtao

    2015-10-01

    Full Text Available The experiment design method is key to constructing a highly reliable surrogate model for numerical optimization in large-scale projects. Within the method, the experimental design criterion directly affects the accuracy of the surrogate model and the optimization efficiency. To address the shortcomings of traditional experimental design, an improved adaptive sampling method is proposed in this paper. The surrogate model is first constructed from basic sparse samples. Then the supplementary sampling position is detected according to specified criteria, which introduce energy function and curvature sampling criteria based on a radial basis function (RBF) network. The sampling detection criteria consider both the uniformity of the sample distribution and the description of the hypersurface curvature, so as to significantly improve the prediction accuracy of the surrogate model with far fewer samples. For a surrogate model constructed with sparse samples, sample uniformity is an important factor for interpolation accuracy in the initial stage of adaptive sampling and surrogate model training. As uniformity improves, the curvature description of the objective function surface gradually becomes more important. In consideration of these issues, a crowdness enhance function and a root mean square error (RMSE) feedback function are introduced into the criterion expression. Thus, a new sampling method called RMSE and crowdness enhance (RCE) adaptive sampling is established. The validity of the RCE adaptive sampling method is first studied on typical test functions and then on an airfoil/wing aerodynamic optimization design problem with a high-dimensional design space. The results show that the RCE adaptive sampling method not only reduces the required number of samples, but also effectively improves the prediction accuracy of the surrogate model, giving it broad prospects for application.
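
    The flavor of such a criterion can be sketched with an RBF surrogate: each iteration scores candidates by a mix of spacing (a crowdness term) and a local error proxy, and adds the best one. The toy objective, the error proxy, and the equal weighting below are illustrative assumptions, not the paper's exact RCE criterion.

```python
# Illustrative adaptive sampling with an RBF surrogate (a sketch of the idea,
# not the paper's RCE method). The objective f and all weights are assumptions.
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)
f = lambda x: np.sin(3 * x[:, 0]) * np.cos(2 * x[:, 1])  # toy objective

X = rng.uniform(-1, 1, (8, 2))                # basic sparse initial design
y = f(X)
cand = rng.uniform(-1, 1, (4000, 2))          # candidate pool

for _ in range(40):
    sur = RBFInterpolator(X, y)
    red = RBFInterpolator(X[:-1], y[:-1])     # surrogate without newest point
    disagree = np.abs(sur(cand) - red(cand))  # local error / curvature proxy
    dist = cdist(cand, X).min(axis=1)         # crowdness: spacing to design
    score = dist / dist.max() + disagree / (disagree.max() + 1e-12)
    xs = cand[np.argmax(score)]
    X, y = np.vstack([X, xs]), np.append(y, f(xs[None, :]))

sur = RBFInterpolator(X, y)
rmse = np.sqrt(np.mean((sur(cand) - f(cand)) ** 2))
print(f"{len(X)} samples, surrogate RMSE on the pool: {rmse:.4f}")
```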

  7. Adaptive grouping for the higher-order multilevel fast multipole method

    DEFF Research Database (Denmark)

    Borries, Oscar Peter; Jørgensen, Erik; Meincke, Peter

    2014-01-01

    An alternative parameter-free adaptive approach for the grouping of the basis function patterns in the multilevel fast multipole method is presented, yielding significant memory savings compared to the traditional Octree grouping for most discretizations, particularly when using higher-order basis functions.

  8. Adaptive e-learning methods and IMS Learning Design. An integrated approach

    NARCIS (Netherlands)

    Burgos, Daniel; Specht, Marcus

    2006-01-01

    Please, cite this publication as: Burgos, D., & Specht, M. (2006). Adaptive e-learning methods and IMS Learning Design. In Kinshuk, R. Koper, P. Kommers, P. Kirschner, D. G. Sampson & W. Didderen (Eds.), Proceedings of the 6th IEEE International Conference on Advanced Learning Technologies (pp.

  9. Methods of Adapting Digital Content for the Learning Process via Mobile Devices

    Science.gov (United States)

    Lopez, J. L. Gimenez; Royo, T. Magal; Laborda, Jesus Garcia; Calvo, F. Garde

    2009-01-01

    This article analyses different methods of adapting digital content for its delivery via mobile devices taking into account two aspects which are a fundamental part of the learning process; on the one hand, functionality of the contents, and on the other, the actual controlled navigation requirements that the learner needs in order to acquire high…

  10. Adaptive backward difference formula - Discontinuous Galerkin finite element method for the solution of conservation laws

    Czech Academy of Sciences Publication Activity Database

    Dolejší, V.; Kůs, Pavel

    2008-01-01

    Roč. 73, č. 12 (2008), s. 1739-1766 ISSN 0029-5981 Keywords : backward difference formula * discontinuous Galerkin method * adaptive choice of the time step Subject RIV: BA - General Mathematics Impact factor: 2.229, year: 2008 http://onlinelibrary.wiley.com/doi/10.1002/nme.2143/abstract

  11. Adaptive methods for stochastic differential equations via natural embeddings and rejection sampling with memory.

    Science.gov (United States)

    Rackauckas, Christopher; Nie, Qing

    2017-01-01

    Adaptive time-stepping with high-order embedded Runge-Kutta pairs and rejection sampling provides efficient approaches for solving differential equations. While many such methods exist for solving deterministic systems, little progress has been made for stochastic variants. One challenge in developing adaptive methods for stochastic differential equations (SDEs) is the construction of embedded schemes with direct error estimates. We present a new class of embedded stochastic Runge-Kutta (SRK) methods with strong order 1.5 which have a natural embedding of strong order 1.0 methods. This allows for the derivation of an error estimate which requires no additional function evaluations. Next we derive a general method to reject the time steps without losing information about the future Brownian path termed Rejection Sampling with Memory (RSwM). This method utilizes a stack data structure to do rejection sampling, costing only a few floating point calculations. We show numerically that the methods generate statistically-correct and tolerance-controlled solutions. Lastly, we show that this form of adaptivity can be applied to systems of equations, and demonstrate that it solves a stiff biological model 12.28x faster than common fixed timestep algorithms. Our approach only requires the solution to a bridging problem and thus lends itself to natural generalizations beyond SDEs.
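
    The rejection-with-memory idea can be sketched compactly: rejected Brownian increments are split by a Brownian bridge and pushed on a stack, so the path already sampled is never discarded. The sketch below uses an Euler-Maruyama step with step-doubling as the error estimate in place of the paper's embedded SRK pair; the SDE, tolerances, and step-size rule are illustrative assumptions.

```python
# Sketch of adaptive SDE stepping with a rejection stack in the spirit of
# RSwM (not the paper's implementation). Euler-Maruyama plus step-doubling
# stands in for the embedded strong order 1.5/1.0 SRK pair.
import numpy as np

rng = np.random.default_rng(2)
mu  = lambda u: -u            # drift (illustrative Ornstein-Uhlenbeck)
sig = lambda u: 0.5           # diffusion

def em(u, h, dw):             # one Euler-Maruyama step
    return u + mu(u) * h + sig(u) * dw

def bridge_split(h, dw):      # Brownian bridge: split (h, dw) into two halves
    dw1 = 0.5 * dw + 0.5 * np.sqrt(h) * rng.standard_normal()
    return (h / 2, dw1), (h / 2, dw - dw1)

u, t, h, tol = 1.0, 0.0, 0.1, 1e-3
stack = []                    # rejected (h, dW) pieces, most recent on top
while t < 1.0:
    h_try, dw = stack.pop() if stack else (h, np.sqrt(h) * rng.standard_normal())
    (h1, dw1), (h2, dw2) = bridge_split(h_try, dw)
    full = em(u, h_try, dw)
    half = em(em(u, h1, dw1), h2, dw2)        # two half steps, same path
    err = abs(full - half)
    if err <= tol or h_try < 1e-8:            # accept the step
        u, t = half, t + h_try
        if not stack:                         # heuristic step-size update
            h = min(0.9 * h_try * (tol / max(err, 1e-14)) ** 0.5,
                    1.0 - t + 1e-12)
    else:                                     # reject: keep both halves
        stack.append((h2, dw2))
        stack.append((h1, dw1))

print(f"u(1) ≈ {u:.4f}")
```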

  12. From inactive to regular jogger

    DEFF Research Database (Denmark)

    Lund-Cramer, Pernille; Brinkmann Løite, Vibeke; Bredahl, Thomas Viskum Gjelstrup

    limited in terms of maintaining a behavior change. The purpose of this study was to investigate individual, cognitive, social, and contextual factors influencing the adoption and maintenance of regular self-organized jogging, and how they were manifested among former inactive adults. Methods A qualitative...... to translate intention into regular behavior. TTM: Informants expressed rapid progression from the pre-contemplation to the action stage caused by an early shift in the decisional balance towards advantages overweighing disadvantages. This was followed by a continuous improvement in self-efficacy, which...... jogging-related self-efficacy, and deployment of realistic goal setting was significant in the achievement of regular jogging behavior. Cognitive factors included a positive change in both affective and instrumental beliefs about jogging. Expectations from society and social relations had limited effect...

  13. Fuzzy physical programming for Space Manoeuvre Vehicles trajectory optimization based on hp-adaptive pseudospectral method

    Science.gov (United States)

    Chai, Runqi; Savvaris, Al; Tsourdos, Antonios

    2016-06-01

    In this paper, a fuzzy physical programming (FPP) method is introduced for solving the multi-objective Space Manoeuvre Vehicle (SMV) skip trajectory optimization problem based on hp-adaptive pseudospectral methods. The dynamic model of the SMV is elaborated and then, by employing hp-adaptive pseudospectral methods, the problem is transformed into a nonlinear programming (NLP) problem. According to the mission requirements, solutions were calculated for each single-objective scenario. To obtain a compromise solution for each target, the fuzzy physical programming (FPP) model is proposed. The preference function is established considering the fuzzy factors of the system, so that a proper compromise trajectory can be acquired. In addition, NSGA-II is used to obtain the Pareto-optimal solution set and verify the Pareto optimality of the FPP solution. Simulation results indicate that the proposed method is effective and feasible for dealing with multi-objective skip trajectory optimization for the SMV.

  14. Use of a dynamic grid adaptation in the asymmetric weighted residual method

    International Nuclear Information System (INIS)

    Graf, V.; Romstedt, P.; Werner, W.

    1986-01-01

    A dynamic grid adaptation method has been developed for use with the asymmetric weighted residual method. The method automatically adapts the number and position of the spatial mesh points as the solution of hyperbolic or parabolic vector partial differential equations progresses in time. The mesh selection algorithm is based on the minimization of the L2 norm of the spatial discretization error. The method permits the accurate calculation of the evolution of inhomogeneities, such as wave fronts, shock layers, and other sharp transitions, while generally using a coarse computational grid. The number of required mesh points is significantly reduced relative to a fixed Eulerian grid. Since the mesh selection algorithm is computationally inexpensive, a corresponding reduction of computing time results.
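
    The core mechanism, redistributing a fixed number of mesh points so that a monitor function is equidistributed and points concentrate where the solution changes rapidly, can be sketched in a few lines. The arc-length monitor below is a common textbook choice standing in for the paper's L2-norm error criterion.

```python
# Sketch of mesh-point redistribution by equidistributing an arc-length
# monitor function (an illustration, not the paper's exact algorithm).
import numpy as np

def equidistribute(x, u, n_new=None):
    n_new = n_new or len(x)
    m = np.sqrt(1.0 + np.gradient(u, x) ** 2)       # arc-length monitor
    cells = 0.5 * (m[1:] + m[:-1]) * np.diff(x)     # monitor integral per cell
    M = np.concatenate([[0.0], np.cumsum(cells)])   # cumulative monitor
    targets = np.linspace(0.0, M[-1], n_new)        # equal monitor per new cell
    return np.interp(targets, M, x)                 # invert the cumulative map

x = np.linspace(0.0, 1.0, 41)
u = np.tanh(50.0 * (x - 0.5))        # sharp front at x = 0.5
x_new = equidistribute(x, u)         # points cluster near the front
```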

  15. An adaptive mesh refinement approach for average current nodal expansion method in 2-D rectangular geometry

    International Nuclear Information System (INIS)

    Poursalehi, N.; Zolfaghari, A.; Minuchehr, A.

    2013-01-01

    Highlights: ► A new adaptive h-refinement approach has been developed for a class of nodal methods. ► The resulting system of nodal equations is more amenable to efficient numerical solution. ► The benefit of the approach is reduced computational effort relative to uniform fine mesh modeling. ► The spatially adaptive approach greatly enhances the accuracy of the solution. - Abstract: The aim of this work is to develop a spatially adaptive coarse mesh strategy that progressively refines the nodes in appropriate regions of the domain to solve the neutron balance equation by the zeroth-order nodal expansion method. A flux-gradient-based a posteriori estimation scheme has been utilized for checking the approximate solutions for various nodes. The relative surface net leakage of nodes has been considered as an assessment criterion. In this approach, the core module is called by the adaptive mesh generator to determine the gradients of node surface fluxes and explore the possibility of node refinement in appropriate regions and directions of the problem. The benefit of the approach is reduced computational effort relative to uniform fine mesh modeling. For this purpose, a computer program, ANRNE-2D (Adaptive Node Refinement Nodal Expansion), has been developed to solve the neutron diffusion equation using the average current nodal expansion method for 2D rectangular geometries. Implementing the adaptive algorithm confirms its superiority in enhancing the accuracy of the solution without using fine nodes throughout the domain or increasing the number of unknowns. Some well-known benchmarks have been investigated and improvements are reported

  16. Low-Complexity Regularization Algorithms for Image Deblurring

    KAUST Repository

    Alanazi, Abdulrahman

    2016-11-01

    Image restoration problems deal with images in which information has been degraded by blur or noise. In practice, the blur is usually caused by atmospheric turbulence, motion, camera shake, and several other mechanical or physical processes. In this study, we present two regularization algorithms for the image deblurring problem. We first present a new method based on solving a regularized least-squares (RLS) problem. This method is proposed to find a near-optimal value of the regularization parameter in the RLS problems. Experimental results on the non-blind image deblurring problem are presented. In all experiments, comparisons are made with three benchmark methods. The results demonstrate that the proposed method clearly outperforms the other methods in terms of both the output PSNR and structural similarity, as well as the visual quality of the deblurred images. To reduce the complexity of the proposed algorithm, we propose a technique based on the bootstrap method to estimate the regularization parameter in low and high-resolution images. Numerical results show that the proposed technique can effectively reduce the computational complexity of the proposed algorithms. In addition, for some cases where the point spread function (PSF) is separable, we propose using a Kronecker product so as to reduce the computations. Furthermore, in the case where the image is smooth, it is always desirable to replace the regularization term in the RLS problems by a total variation term. Therefore, we propose a novel method for adaptively selecting the regularization parameter in a so-called square root regularized total variation (SRTV). Experimental results demonstrate that our proposed method outperforms the other benchmark methods when applied to smooth images in terms of PSNR, SSIM and the restored image quality. In this thesis, we focus on the non-blind image deblurring problem, where the blur kernel is assumed to be known. However, we developed algorithms that also work
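
    As an illustration of the RLS building block (not the thesis' parameter-selection algorithm), the sketch below deblurs with a Tikhonov-regularized inverse filter in the Fourier domain and picks the regularization parameter by a sweep against the known ground truth, which is only possible in a synthetic test; blind selection is exactly what the proposed bootstrap-based technique addresses. The toy image, blur kernel, noise level, and parameter grid are assumptions.

```python
# Tikhonov / regularized least-squares deblurring in the Fourier domain.
# Synthetic sketch: image, box blur, noise, and lambda grid are assumptions.
import numpy as np

def tikhonov_deblur(blurred, psf, lam):
    H = np.fft.fft2(psf, s=blurred.shape)        # PSF transfer function
    F = np.conj(H) * np.fft.fft2(blurred) / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(F))

rng = np.random.default_rng(3)
img = np.zeros((64, 64)); img[24:40, 24:40] = 1.0          # toy scene
psf = np.ones((5, 5)) / 25.0                               # known box blur
H = np.fft.fft2(psf, s=img.shape)
blurred = np.real(np.fft.ifft2(H * np.fft.fft2(img)))
blurred += 0.01 * rng.standard_normal(img.shape)           # measurement noise

# Parameter sweep against ground truth (possible only in simulation)
lams = np.logspace(-4, 0, 25)
errs = [np.linalg.norm(tikhonov_deblur(blurred, psf, l) - img) for l in lams]
print(f"best lambda on this toy problem: {lams[int(np.argmin(errs))]:.4g}")
```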

  17. Refinement trajectory and determination of eigenstates by a wavelet based adaptive method

    International Nuclear Information System (INIS)

    Pipek, Janos; Nagy, Szilvia

    2006-01-01

    The detail structure of the wave function is analyzed at various refinement levels using the methods of wavelet analysis. The eigenvalue problem of a model system is solved in granular Hilbert spaces, and the trajectory of the eigenstates is traced in terms of the resolution. An adaptive method is developed for identifying the fine structure localization regions, where further refinement of the wave function is necessary

  18. A regularized stationary mean-field game

    KAUST Repository

    Yang, Xianjin

    2016-04-19

    In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.

  19. A class of discontinuous Petrov–Galerkin methods. Part III: Adaptivity

    KAUST Repository

    Demkowicz, Leszek

    2012-04-01

    We continue our theoretical and numerical study of the Discontinuous Petrov-Galerkin method with optimal test functions in the context of 1D and 2D convection-dominated diffusion problems and hp-adaptivity. With a proper choice of the norm for the test space, we prove robustness (uniform stability with respect to the diffusion parameter) and mesh-independence of the energy norm of the FE error for the 1D problem. With hp-adaptivity and a proper scaling of the norms for the test functions, we establish new limits for solving convection-dominated diffusion problems numerically: ε = 10⁻¹¹ for 1D and ε = 10⁻⁷ for 2D problems. The adaptive process is fully automatic and starts with a mesh consisting of only a few elements. © 2011 IMACS. Published by Elsevier B.V. All rights reserved.

  20. Adaptive extraction method for trend term of machinery signal based on extreme-point symmetric mode decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Yong; Jiang, Wan-lu; Kong, Xiang-dong [Yanshan University, Hebei (China)

    2017-02-15

    In mechanical fault diagnosis and condition monitoring, extracting and eliminating the trend term of machinery signal are necessary. In this paper, an adaptive extraction method for trend term of machinery signal based on Extreme-point symmetric mode decomposition (ESMD) was proposed. This method fully utilized ESMD, including the self-adaptive decomposition feature and optimal fitting strategy. The effectiveness and practicability of this method are tested through simulation analysis and measured data validation. Results indicate that this method can adaptively extract various trend terms hidden in machinery signal, and has commendable self-adaptability. Moreover, the extraction results are better than those of empirical mode decomposition.

  1. Investigation of the effects of color on judgments of sweetness using a taste adaptation method.

    Science.gov (United States)

    Hidaka, Souta; Shimoda, Kazumasa

    2014-01-01

    It has been reported that color can affect the judgment of taste. For example, a dark red color enhances the subjective intensity of sweetness. However, the underlying mechanisms of the effect of color on taste have not been fully investigated; in particular, it remains unclear whether the effect is based on cognitive/decisional or perceptual processes. Here, we investigated the effect of color on sweetness judgments using a taste adaptation method. A sweet solution whose color was subjectively congruent with sweetness was judged as sweeter than an uncolored sweet solution both before and after adaptation to an uncolored sweet solution. In contrast, subjective judgment of sweetness for uncolored sweet solutions did not differ between the conditions following adaptation to a colored sweet solution and following adaptation to an uncolored one. Color affected sweetness judgment when the target solution was colored, but the colored sweet solution did not modulate the magnitude of taste adaptation. Therefore, it is concluded that the effect of color on the judgment of taste would occur mainly in cognitive/decisional domains.

  2. Simulated annealing method for electronic circuit design: adaptation and comparison with other optimization methods

    Energy Technology Data Exchange (ETDEWEB)

    Berthiau, G.

    1995-10-01

    The circuit design problem consists in determining acceptable parameter values (resistors, capacitors, transistor geometries, ...) that allow the circuit to meet various user-given operational criteria (DC consumption, AC bandwidth, transient times, ...). This task is equivalent to a multidimensional and/or multi-objective optimization problem: functions of n variables have to be minimized in a hyper-rectangular domain; equality constraints may also be specified. A similar problem consists in fitting component models. In this case, the optimization variables are the model parameters, and one aims at minimizing a cost function built on the error between the model response and the data measured on the component. The optimization method chosen for this kind of problem is the simulated annealing method. This method, which comes from the combinatorial optimization domain, has been adapted and compared with other global optimization methods for continuous-variable problems. An efficient strategy for variable discretization and a set of complementary stopping criteria have been proposed. The different parameters of the method have been adjusted using analytical functions whose minima are known, classically used in the literature. Our simulated annealing algorithm has been coupled with the open electrical simulator SPICE-PAC, whose modular structure allows the chaining of simulations required by the circuit optimization process. For high-dimensional problems, we propose a partitioning technique that ensures proportionality between CPU time and the number of variables. To compare our method with others, we have adapted three other methods from the combinatorial optimization domain: the threshold method, a genetic algorithm, and the tabu search method. The tests have been performed on the same set of test functions, and the results allow a first comparison between these methods applied to continuous optimization variables. (Abstract Truncated)
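
    A generic continuous-variable simulated annealing loop of the kind described, with bounded moves and geometric cooling, can be sketched as follows. The toy cost function (fitting a hypothetical RC time constant) and all schedule constants are assumptions, not the thesis' tuned algorithm.

```python
# Sketch of simulated annealing for continuous parameter fitting (illustrative
# assumptions throughout: move size, cooling schedule, toy cost function).
import math, random

def anneal(cost, x0, lo, hi, t0=1.0, alpha=0.95, iters=5000):
    x, c = list(x0), cost(x0)
    best, cbest, t = list(x), c, t0
    for k in range(iters):
        i = random.randrange(len(x))             # perturb one variable
        xn = list(x)
        xn[i] = min(hi[i], max(lo[i],
                    x[i] + random.gauss(0.0, 0.1 * (hi[i] - lo[i]))))
        cn = cost(xn)
        # Metropolis rule: always accept improvements, sometimes accept worse
        if cn < c or random.random() < math.exp(-(cn - c) / t):
            x, c = xn, cn
            if c < cbest:
                best, cbest = list(x), c
        if k % 100 == 99:                        # geometric cooling
            t *= alpha
    return best, cbest

# Toy usage: fit (R, C) of a hypothetical RC model to a target time constant
target = 4.7e-3
cost = lambda p: (p[0] * p[1] - target) ** 2
sol, err = anneal(cost, [1e3, 1e-6], lo=[1e2, 1e-7], hi=[1e5, 1e-5])
print(sol, err)
```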

  3. Estimation of functional failure probability of passive systems based on adaptive importance sampling method

    International Nuclear Information System (INIS)

    Wang Baosheng; Wang Dongqing; Zhang Jianmin; Jiang Jing

    2012-01-01

    In order to estimate the functional failure probability of passive systems, an innovative adaptive importance sampling methodology is presented. In the proposed methodology, information about the variables is extracted by pre-sampling points in the failure region. An importance sampling density is then constructed from the sample distribution in the failure region. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of a passive system and the numerical values of its input parameters are considered in this paper. The probability of functional failure is then estimated by combining the response surface method with the adaptive importance sampling method. The numerical results demonstrate the high computational efficiency and excellent accuracy of the methodology compared with traditional probability analysis methods. (authors)
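
    The two-stage idea is easy to sketch: pre-sample to locate failure points, center an importance density on them, then estimate the failure probability with likelihood-ratio weights. The toy limit state g and all sample sizes below are assumptions standing in for the passive-system response model.

```python
# Two-stage sketch of adaptive importance sampling for a failure probability
# (an illustration of the idea, not the paper's methodology).
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
g = lambda x: 4.5 - x[:, 0] - x[:, 1]        # failure when g(x) < 0

# Stage 1: crude pre-sampling to locate the failure region
x0 = rng.standard_normal((50000, 2))
fail = x0[g(x0) < 0]

# Stage 2: importance density centered on the observed failure points
mu_is = fail.mean(axis=0)
cov_is = np.cov(fail.T) + 0.1 * np.eye(2)    # widened for robustness
xs = rng.multivariate_normal(mu_is, cov_is, 50000)
w = stats.multivariate_normal([0.0, 0.0], np.eye(2)).pdf(xs) \
  / stats.multivariate_normal(mu_is, cov_is).pdf(xs)
pf = np.mean((g(xs) < 0) * w)                # weighted failure indicator

print(f"IS estimate: {pf:.2e}, exact: {stats.norm.sf(4.5 / np.sqrt(2)):.2e}")
```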

  4. A Remote Sensing Image Fusion Method based on adaptive dictionary learning

    Science.gov (United States)

    He, Tongdi; Che, Zongxi

    2018-01-01

    This paper discusses a remote sensing fusion method based on adaptive sparse representation (ASP) that provides improved spectral information, reduces data redundancy and decreases system complexity. First, the training sample set is formed by taking random blocks from the images to be fused, the dictionary is then constructed using the training samples, and the remaining terms are clustered to obtain the complete dictionary by iterated processing at each step. Second, the self-adaptive weighted coefficient rule of regional energy is used to select the feature fusion coefficients and complete the reconstruction of the image blocks. Finally, the reconstructed image blocks are rearranged and averaged to obtain the final fused images. Experimental results show that the proposed method is superior to other traditional remote sensing image fusion methods in both spectral information preservation and spatial resolution.

  5. Adaptive mixed finite element methods for Darcy flow in fractured porous media

    KAUST Repository

    Chen, Huangxin

    2016-09-21

    In this paper, we propose adaptive mixed finite element methods for simulating the single-phase Darcy flow in two-dimensional fractured porous media. The reduced model that we use for the simulation is a discrete fracture model coupling Darcy flows in the matrix and the fractures, and the fractures are modeled by one-dimensional entities. The Raviart-Thomas mixed finite element methods are utilized for the solution of the coupled Darcy flows in the matrix and the fractures. In order to improve the efficiency of the simulation, we use adaptive mixed finite element methods based on novel residual-based a posteriori error estimators. In addition, we develop an efficient upscaling algorithm to compute the effective permeability of the fractured porous media. Several interesting examples of Darcy flow in the fractured porous media are presented to demonstrate the robustness of the algorithm.

  6. Regular expression containment

    DEFF Research Database (Denmark)

    Henglein, Fritz; Nielsen, Lasse

    2011-01-01

    We present a new sound and complete axiomatization of regular expression containment. It consists of the conventional axiomatization of concatenation, alternation, empty set and (the singleton set containing) the empty string as an idempotent semiring, the fixed-point rule E* = 1 + E × E* for Kleene star, and a general coinduction rule as the only additional rule. Our axiomatization gives rise to a natural computational interpretation of regular expressions as simple types that represent parse trees, and of containment proofs as coercions. This gives the axiomatization a Curry-Howard-style constructive interpretation: containment proofs do not only certify a language-theoretic containment, but, under our computational interpretation, constructively transform a membership proof of a string in one regular expression into a membership proof of the same string in another regular expression. We...

  7. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan

    2015-02-12

    In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels of training samples, because traditional loss functions are applied equally to all samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated as an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. Experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.

  8. Inferring Functional Brain States Using Temporal Evolution of Regularized Classifiers

    Directory of Open Access Journals (Sweden)

    Andrey Zhdanov

    2007-08-01

    Full Text Available We present a framework for inferring functional brain state from electrophysiological (MEG or EEG) brain signals. Our approach is adapted to the needs of functional brain imaging rather than those of EEG-based brain-computer interfaces (BCI). This choice leads to a different set of requirements, in particular to the demand for more robust inference methods and more sophisticated model validation techniques. We approach the problem from a machine learning perspective, by constructing a classifier from a set of labeled signal examples. We propose a framework that focuses on the temporal evolution of regularized classifiers, with cross-validation to select the optimal regularization parameter at each time frame. We demonstrate the inference obtained by this method on MEG data recorded from 10 subjects in a simple visual classification experiment, and provide a comparison to the classical non-regularized approach.

  9. Adaptive Mesh Refinement for the Immersed Boundary Lattice Green's Function method

    Science.gov (United States)

    Mengaldo, Gianmarco; Colonius, Tim

    2017-11-01

    The immersed boundary lattice Green's function (IBLGF) method, recently developed by Liska and Colonius, is a scalable numerical framework for solving incompressible flows on unbounded domains. It uses an immersed boundary method based on a 2nd-order mimetic finite volume scheme, used in conjunction with an adaptive block refinement approach achieved via lattice Green's functions, whose purpose is to limit the computational domain to the vortical regions that dominate the flow evolution, e.g., regions in proximity to the immersed body surface and in its wake. The method, as it stands, is competitive for low Reynolds number flows, as the staggered Cartesian mesh employed cannot be stretched or refined locally. In this talk we address this issue by presenting the development of adaptive mesh refinement (AMR) capabilities in the IBLGF method. As we shall see, this new feature and the adaptive block refinement already present in the code help overcome the limitation on simulating high Reynolds number flows, an issue that is endemic to the vast majority of immersed-boundary-based methods. Supported by ONR-N00014-16-1-2734.

  10. Gait-Event-Based Synchronization Method for Gait Rehabilitation Robots via a Bioinspired Adaptive Oscillator.

    Science.gov (United States)

    Chen, Gong; Qi, Peng; Guo, Zhao; Yu, Haoyong

    2017-06-01

    In the field of gait rehabilitation robotics, achieving human-robot synchronization is very important. In this paper, a novel human-robot synchronization method using gait event information is proposed. This method includes two steps. First, seven gait events in one gait cycle are detected in real time with a hidden Markov model; second, an adaptive oscillator is utilized to estimate the stride percentage of human gait using any one of the gait events. Synchronous reference trajectories for the robot are then generated with the estimated stride percentage. This method is based on a bioinspired adaptive oscillator, which is a mathematical tool, first proposed to explain the phenomenon of synchronous flashing among fireflies. The proposed synchronization method is implemented in a portable knee-ankle-foot robot and tested in 15 healthy subjects. This method has the advantages of simple structure, flexible selection of gait events, and fast adaptation. Gait event is the only information needed, and hence the performance of synchronization holds when an abnormal gait pattern is involved. The results of the experiments reveal that our approach is efficient in achieving human-robot synchronization and feasible for rehabilitation robotics application.
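
    The adaptive oscillator component can be sketched compactly: an error-driven phase oscillator locks its phase and frequency onto a periodic gait signal and exposes the stride percentage used to index the robot's reference trajectories. The adaptation law and gains below are illustrative assumptions in the spirit of bioinspired adaptive oscillators, not the paper's exact design.

```python
# Sketch of an adaptive phase/frequency oscillator synchronizing to a periodic
# gait signal. Gains, signal, and simulation settings are assumptions.
import numpy as np

dt, Kp, Kw = 0.001, 20.0, 10.0
t = np.arange(0.0, 20.0, dt)
gait = np.sin(2 * np.pi * 0.9 * t)       # periodic gait-event signal, 0.9 Hz

phi, omega = 0.0, 2 * np.pi * 1.5        # deliberately wrong initial frequency
for F in gait:
    e = F - np.sin(phi)                  # output error vs. the teaching signal
    phi += (omega + Kp * e * np.cos(phi)) * dt   # phase dynamics
    omega += Kw * e * np.cos(phi) * dt           # frequency adaptation

stride_pct = (phi % (2 * np.pi)) / (2 * np.pi) * 100.0
print(f"adapted frequency: {omega / (2 * np.pi):.3f} Hz, "
      f"current stride: {stride_pct:.1f}% (input frequency 0.9 Hz)")
```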

  11. Investigation of the regularities of the process and development of a method of management of technological line operation within the process of mass raw materials supply in terms of the dynamics of inbound traffic of unit trains

    Directory of Open Access Journals (Sweden)

    Катерина Ігорівна Сізова

    2015-03-01

    Full Text Available Large-scale sinter plants at metallurgical enterprises incorporate highly productive transport-and-handling complexes (THC) that receive and process mass iron-bearing raw materials. Such THCs as a rule include unloading facilities and a freight railway station. The central part of the THC is a technological line that carries out the reception and unloading of unit trains with raw materials. The technological line consists of transport and freight modules. The latter plays the leading role and, in its turn, consists of rotary car dumpers and conveyor belts. This module represents a deterministic system that carries out preparation and unloading operations. Its processing capacity is set in accordance with the manufacturing capacity of the sinter plant. The research has shown that under existing operating conditions, which are characterized by an "arrhythmia" in the interaction between external transport operations and production, the technological line of the THC functions inefficiently: it completes the processing of inbound unit trains within the set standard time in just 18-20% of cases. It was determined that the duration of the processing cycle of an inbound unit train can play the role of a regulator, given the stochastic character of the intervals between inbound unit trains with raw materials on the one hand and the deterministic unloading system on the other. Evaluating the interdependence between these factors therefore allows the duration of the processing cycle of inbound unit trains to be determined. Based on the results of the study, a method of logistical management of the processing of inbound unit trains is proposed, in which the actual duration of processing of an inbound unit train is taken as the regulated value. The regulation process implies regular evaluation and comparison of these values and, taking different disturbances into account, decision-making concerning the adaptation of the functioning of the technological line. According to the offered principles

  12. Subcortical processing of speech regularities underlies reading and music aptitude in children

    Directory of Open Access Journals (Sweden)

    Strait Dana L

    2011-10-01

    Full Text Available Abstract Background Neural sensitivity to acoustic regularities supports fundamental human behaviors such as hearing in noise and reading. Although the failure to encode acoustic regularities in ongoing speech has been associated with language and literacy deficits, how auditory expertise, such as the expertise that is associated with musical skill, relates to the brainstem processing of speech regularities is unknown. An association between musical skill and neural sensitivity to acoustic regularities would not be surprising given the importance of repetition and regularity in music. Here, we aimed to define relationships between the subcortical processing of speech regularities, music aptitude, and reading abilities in children with and without reading impairment. We hypothesized that, in combination with auditory cognitive abilities, neural sensitivity to regularities in ongoing speech provides a common biological mechanism underlying the development of music and reading abilities. Methods We assessed auditory working memory and attention, music aptitude, reading ability, and neural sensitivity to acoustic regularities in 42 school-aged children with a wide range of reading ability. Neural sensitivity to acoustic regularities was assessed by recording brainstem responses to the same speech sound presented in predictable and variable speech streams. Results Through correlation analyses and structural equation modeling, we reveal that music aptitude and literacy both relate to the extent of subcortical adaptation to regularities in ongoing speech as well as with auditory working memory and attention. Relationships between music and speech processing are specifically driven by performance on a musical rhythm task, underscoring the importance of rhythmic regularity for both language and music. Conclusions These data indicate common brain mechanisms underlying reading and music abilities that relate to how the nervous system responds to

  13. Adaptive Gain Control Method of a Phase-Locked Loop for GNSS Carrier Signal Tracking

    Directory of Open Access Journals (Sweden)

    Zhibin Luo

    2018-01-01

    Full Text Available The global navigation satellite system (GNSS) has been widely used in both military and civil fields. This study focuses on enhancing the carrier tracking ability of the phase-locked loop (PLL) in GNSS receivers for high-dynamic applications. The PLL is a very popular and practical approach for tracking the GNSS carrier signal, which propagates as an electromagnetic wave. However, a PLL with constant coefficients is suboptimal. Adaptive loop noise bandwidth techniques proposed in previous research can improve PLL tracking behavior to some extent. This paper presents a novel PLL with an adaptive loop gain control filter (AGCF-PLL) that provides an alternative. The mathematical model based on second- and third-order PLLs is derived. The error characteristics of the AGCF-PLL are also derived and analyzed under different signal conditions, referring mainly to different combinations of carrier phase dynamics and signal strength. Based on the error characteristic curves, an optimal loop gain control method is obtained that minimizes tracking error. Finally, the fully adaptive loop gain control algorithm is designed. Comparative test results and analysis using the new method, a conventional PLL, an FLL-assisted PLL, and the FAB-LL demonstrate that the AGCF-PLL has stronger adaptability to highly dynamic target movement.

  14. A goal-based angular adaptivity method for thermal radiation modelling in non grey media

    Science.gov (United States)

    Soucasse, Laurent; Dargaville, Steven; Buchan, Andrew G.; Pain, Christopher C.

    2017-10-01

    This paper investigates for the first time a goal-based angular adaptivity method for thermal radiation transport, suitable for non grey media when the radiation field is coupled with an unsteady flow field through an energy balance. Anisotropic angular adaptivity is achieved by using a Haar wavelet finite element expansion that forms a hierarchical angular basis with compact support and does not require any angular interpolation in space. The novelty of this work lies in (1) the definition of a target functional to compute the goal-based error measure equal to the radiative source term of the energy balance, which is the quantity of interest in the context of coupled flow-radiation calculations; (2) the use of different optimal angular resolutions for each absorption coefficient class, built from a global model of the radiative properties of the medium. The accuracy and efficiency of the goal-based angular adaptivity method is assessed in a coupled flow-radiation problem relevant for air pollution modelling in street canyons. Compared to a uniform Haar wavelet expansion, the adapted resolution uses 5 times fewer angular basis functions and is 6.5 times quicker, given the same accuracy in the radiative source term.

  15. 3D spatially-adaptive canonical correlation analysis: Local and global methods.

    Science.gov (United States)

    Yang, Zhengshi; Zhuang, Xiaowei; Sreenivasan, Karthik; Mishra, Virendra; Curran, Tim; Byrd, Richard; Nandy, Rajesh; Cordes, Dietmar

    2018-04-01

    Local spatially-adaptive canonical correlation analysis (local CCA) with spatial constraints has been introduced to fMRI multivariate analysis for improved modeling of activation patterns. However, current algorithms require complicated spatial constraints that have only been applied to 2D local neighborhoods because the computational time would be exponentially increased if the same method is applied to 3D spatial neighborhoods. In this study, an efficient and accurate line search sequential quadratic programming (SQP) algorithm has been developed to efficiently solve the 3D local CCA problem with spatial constraints. In addition, a spatially-adaptive kernel CCA (KCCA) method is proposed to increase accuracy of fMRI activation maps. With oriented 3D spatial filters anisotropic shapes can be estimated during the KCCA analysis of fMRI time courses. These filters are orientation-adaptive leading to rotational invariance to better match arbitrary oriented fMRI activation patterns, resulting in improved sensitivity of activation detection while significantly reducing spatial blurring artifacts. The kernel method in its basic form does not require any spatial constraints and analyzes the whole-brain fMRI time series to construct an activation map. Finally, we have developed a penalized kernel CCA model that involves spatial low-pass filter constraints to increase the specificity of the method. The kernel CCA methods are compared with the standard univariate method and with two different local CCA methods that were solved by the SQP algorithm. Results show that SQP is the most efficient algorithm to solve the local constrained CCA problem, and the proposed kernel CCA methods outperformed univariate and local CCA methods in detecting activations for both simulated and real fMRI episodic memory data. Copyright © 2017 Elsevier Inc. All rights reserved.

  16. Adaptive target binarization method based on a dual-camera system

    Science.gov (United States)

    Lei, Jing; Zhang, Ping; Xu, Jiangtao; Gao, Zhiyuan; Gao, Jing

    2018-01-01

    An adaptive target binarization method based on a dual-camera system that contains two dynamic vision sensors was proposed. First, a preprocessing procedure of denoising is introduced to remove the noise events generated by the sensors. Then, the complete edge of the target is retrieved and represented by events based on an event mosaicking method. Third, the region of the target is confirmed by an event-to-event method. Finally, a postprocessing procedure of image open and close operations of morphology methods is adopted to remove the artifacts caused by event-to-event mismatching. The proposed binarization method has been extensively tested on numerous degraded images with nonuniform illumination, low contrast, noise, or light spots and successfully compared with other well-known binarization methods. The experimental results, which are based on visual and misclassification error criteria, show that the proposed method performs well and has better robustness on the binarization of degraded images.

  17. Vibration-Based Adaptive Novelty Detection Method for Monitoring Faults in a Kinematic Chain

    Directory of Open Access Journals (Sweden)

    Jesus Adolfo Cariño-Corrales

    2016-01-01

    Full Text Available This paper presents an adaptive novelty detection methodology applied to a kinematic chain for the monitoring of faults. The proposed approach has the premise that only information of the healthy operation of the machine is initially available and fault scenarios will eventually develop. This approach aims to cover some of the challenges presented when condition monitoring is applied under a continuous learning framework. The structure of the method is divided into two recursive stages: first, an offline stage for initialization and retraining of the feature reduction and novelty detection modules and, second, an online monitoring stage to continuously assess the condition of the machine. Contrary to classical static feature reduction approaches, the proposed method reformulates the features by employing first a Laplacian Score ranking and then the Fisher Score ranking for retraining. The proposed methodology is validated experimentally by monitoring the vibration measurements of a kinematic chain driven by an induction motor. Two faults are induced in the motor to validate the method performance to detect anomalies and adapt the feature reduction and novelty detection modules to the new information. The obtained results show the advantages of employing an adaptive approach for novelty detection and feature reduction making the proposed method suitable for industrial machinery diagnosis applications.

  18. Stabilized Conservative Level Set Method with Adaptive Wavelet-based Mesh Refinement

    Science.gov (United States)

    Shervani-Tabar, Navid; Vasilyev, Oleg V.

    2016-11-01

    This paper addresses one of the main challenges of the conservative level set method, namely the ill-conditioned behavior of the normal vector away from the interface. An alternative formulation for reconstruction of the interface is proposed. Unlike the commonly used methods, which rely on the unit normal vector, the Stabilized Conservative Level Set (SCLS) uses a modified renormalization vector with diminishing magnitude away from the interface. With the new formulation, in the vicinity of the interface the reinitialization procedure utilizes compressive flux and diffusive terms only in the direction normal to the interface, thus preserving the conservative level set properties, while away from the interface the directional diffusion mechanism automatically switches to homogeneous diffusion. The proposed formulation is robust and general. It is especially well suited for use with adaptive mesh refinement (AMR) approaches, due to the need for finer resolution in the vicinity of the interface compared with the rest of the domain. All of the results were obtained using the Adaptive Wavelet Collocation Method, a general AMR-type method, which utilizes wavelet decomposition to adapt to steep gradients in the solution while retaining a predetermined order of accuracy.
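
    For reference, the reinitialization step of the standard conservative level set method, on which SCLS builds, can be written as follows; this is a reconstruction from the abstract, not the paper's exact notation. Per the abstract, SCLS replaces the unit normal n̂ with a renormalization vector whose magnitude diminishes away from the interface, so that the right-hand side degenerates to homogeneous diffusion far from it.

    ```latex
    % Standard conservative level set reinitialization (Olsson--Kreiss form):
    % compressive flux and diffusion act only along the interface normal.
    \frac{\partial \phi}{\partial \tau}
      + \nabla \cdot \bigl( \phi (1 - \phi)\, \hat{n} \bigr)
      = \varepsilon \, \nabla \cdot \bigl( (\nabla \phi \cdot \hat{n})\, \hat{n} \bigr),
    \qquad
    \hat{n} = \frac{\nabla \phi}{\lvert \nabla \phi \rvert}.
    ```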

  19. Adaptive EWMA Method Based on Abnormal Network Traffic for LDoS Attacks

    Directory of Open Access Journals (Sweden)

    Dan Tang

    2014-01-01

    Full Text Available Low-rate denial of service (LDoS) attacks degrade network service capability by periodically sending high-intensity pulse data flows. Because of their concealed behavior, LDoS attacks are more difficult for traditional DoS detection methods to detect; at the same time, the accuracy of current detection methods for LDoS attacks is relatively low. Since LDoS attacks lead to an abnormal distribution of ACK traffic, they can be detected by analyzing the distribution characteristics of the ACK traffic. However, the traditional EWMA algorithm, which smooths out accidental errors, smooths out the exceptional mutations in the same way and may therefore cause misjudgments; a new LDoS detection method based on an adaptive EWMA (AEWMA) algorithm is thus proposed. The AEWMA algorithm, which uses an adaptive weighting function instead of the constant weighting of the EWMA algorithm, can smooth out accidental errors while retaining exceptional mutations, so the AEWMA method is better suited than the EWMA method for analyzing and measuring the abnormal distribution of ACK traffic. NS2 simulations show that the AEWMA method can detect LDoS attacks effectively and has low false negative and false positive rates. Based on the DARPA99 datasets, experimental results show that the AEWMA method is more efficient than the EWMA method.
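
    The essential difference between EWMA and AEWMA is that the smoothing weight becomes a function of the prediction error, so small (accidental) deviations are smoothed while large (exceptional) mutations pass through. Below is a minimal sketch; the particular weighting function and its constants are illustrative assumptions, not the ones defined in the paper.

    ```python
    import numpy as np

    def aewma(x, lam_min=0.05, lam_max=0.9, k=3.0):
        """Adaptive EWMA: the weight grows with the normalized prediction
        error, smoothing accidental noise but retaining exceptional mutations.
        The Gaussian-style weighting function is an illustrative assumption."""
        sigma = np.std(x) + 1e-12          # crude scale for error normalization
        s = float(x[0])
        out = [s]
        for v in x[1:]:
            e = abs(v - s) / sigma         # normalized prediction error
            lam = lam_min + (lam_max - lam_min) * (1.0 - np.exp(-(e / k) ** 2))
            s = lam * v + (1.0 - lam) * s  # EWMA update with adaptive weight
            out.append(s)
        return np.array(out)
    ```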

  20. An adaptive finite element method for simulating surface tension with the gradient theory of fluid interfaces

    KAUST Repository

    Kou, Jisheng

    2014-01-01

    The gradient theory for the surface tension of simple fluids and mixtures is rigorously analyzed based on mathematical theory. The finite element approximation of surface tension is developed and analyzed, and moreover, an adaptive finite element method based on a physics-based estimator is proposed, which can be coupled efficiently with Newton's method as well. The numerical tests are carried out both to verify the proposed theory and to demonstrate the efficiency of the proposed method. © 2013 Elsevier B.V. All rights reserved.

  1. Marginal and Internal Adaptation of Zirconia Crowns: A Comparative Study of Assessment Methods.

    Science.gov (United States)

    Cunali, Rafael Schlögel; Saab, Rafaella Caramori; Correr, Gisele Maria; Cunha, Leonardo Fernandes da; Ornaghi, Bárbara Pick; Ritter, André V; Gonzaga, Carla Castiglia

    2017-01-01

    Marginal and internal adaptation is critical for the success of indirect restorations. New imaging systems make it possible to evaluate these parameters precisely and non-destructively. This study evaluated the marginal and internal adaptation of zirconia copings fabricated with two different systems, using both the silicone replica and microcomputed tomography (micro-CT) assessment methods. A metal master model, representing a preparation for an all-ceramic full crown, was digitally scanned and polycrystalline zirconia copings were fabricated with either Ceramill Zi (Amann-Girrbach) or inCoris Zi (Dentsply-Sirona), n=10. For each coping, marginal and internal gaps were evaluated by the silicone replica and micro-CT assessment methods. Four assessment points of each replica cross-section and micro-CT image were evaluated using imaging software: marginal gap (MG), axial wall (AW), axio-occlusal angle (AO) and mid-occlusal wall (MO). Data were statistically analyzed by factorial ANOVA and Tukey test (α=0.05). There was no statistically significant difference between the methods for MG and AW. For AO, there were significant differences between the methods for the Amann-Girrbach copings, while for the Dentsply-Sirona copings similar values were observed. For MO, both methods presented statistically significant differences. A positive correlation was observed between the MG values determined by the two assessment methods. In conclusion, the assessment method influenced the evaluation of marginal and internal adaptation of zirconia copings. Micro-CT showed lower marginal and internal gap values than the silicone replica technique, although the difference was not always statistically significant. The marginal gap and axial wall assessment points showed the lowest gap values, regardless of the ceramic system and assessment method used.

  2. Parameter Selection Method for Support Vector Regression Based on Adaptive Fusion of the Mixed Kernel Function

    Directory of Open Access Journals (Sweden)

    Hailun Wang

    2017-01-01

    Full Text Available The support vector regression algorithm is widely used in the fault diagnosis of rolling bearings. A new model parameter selection method for support vector regression, based on adaptive fusion of the mixed kernel function, is proposed in this paper. We choose the mixed kernel function as the kernel function of support vector regression. The fusion coefficients of the mixed kernel function, the kernel function parameters, and the regression parameters are combined together as the state vector. Thus, the model selection problem is transformed into a nonlinear system state estimation problem. We use a 5th-degree cubature Kalman filter to estimate the parameters. In this way, we realize the adaptive selection of the mixed kernel function weighting coefficients, the kernel parameters, and the regression parameters. Compared with a single kernel function, unscented Kalman filter (UKF) support vector regression algorithms, and genetic algorithms, the decision regression function obtained by the proposed method has better generalization ability and higher prediction accuracy.

  3. An integration time adaptive control method for atmospheric composition detection of occultation

    Science.gov (United States)

    Ding, Lin; Hou, Shuai; Yu, Fei; Liu, Cheng; Li, Chao; Zhe, Lin

    2018-01-01

    When the sun is used as the light source for atmospheric composition detection, it is necessary to image the sun for accurate identification and stable tracking. Over the course of the 180 seconds of the occultation, the magnitude of the sunlight intensity through the atmosphere changes greatly: there is nearly a 1100-fold illumination change between the maximum and minimum atmospheric conditions, and the change is so severe that it can reach a factor of 2.9 per second. Therefore, it is difficult to control the integration time of the sun-imaging camera. In this paper, a novel adaptive integration time control method for occultation is presented. In this method, with the distribution of gray values in the image as the reference variable, and using the concepts of speed-integral PID control, the integration time adaptive control problem of high-frequency imaging is solved. Large-dynamic-range automatic control of the integration time during the occultation can thus be achieved.
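
    As a sketch of the control idea, a discrete PID loop can drive the mean gray value of the sun image toward a target by adjusting the integration time; the gains, target value, and limits below are illustrative assumptions, and the paper's speed-integral variant is not reproduced.

    ```python
    class IntegrationTimePID:
        """Discrete PID that steers the mean image gray value to a target by
        adjusting the camera integration time (gains/limits are assumptions)."""

        def __init__(self, target=128.0, kp=2e-4, ki=5e-5, kd=1e-4):
            self.target, self.kp, self.ki, self.kd = target, kp, ki, kd
            self.integral = 0.0
            self.prev_err = 0.0

        def update(self, t_int, gray_mean, t_min=1e-5, t_max=1e-1):
            err = self.target - gray_mean    # positive when the image is too dark
            self.integral += err
            deriv = err - self.prev_err
            self.prev_err = err
            t_new = t_int + self.kp * err + self.ki * self.integral + self.kd * deriv
            return min(max(t_new, t_min), t_max)  # clamp to the camera's range
    ```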

  4. Research on a Pulmonary Nodule Segmentation Method Combining Fast Self-Adaptive FCM and Classification

    Directory of Open Access Journals (Sweden)

    Hui Liu

    2015-01-01

    Full Text Available The key problem in computer-aided diagnosis (CAD) of lung cancer is to segment pathologically changed tissues quickly and accurately. As pulmonary nodules are a potential manifestation of lung cancer, we propose a fast and self-adaptive pulmonary nodule segmentation method based on a combination of FCM clustering and classification learning. The enhanced spatial function considers contributions to fuzzy membership from both the grayscale similarity between central pixels and single neighboring pixels and the spatial similarity between central pixels and their neighborhood, and effectively improves the convergence rate and self-adaptivity of the algorithm. Experimental results show that the proposed method can achieve more accurate segmentation of vascular adhesion, pleural adhesion, and ground glass opacity (GGO) pulmonary nodules than other typical algorithms.
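
    A generic spatially constrained FCM iteration, in the spirit of (though not identical to) the enhanced spatial function described above, looks as follows; the neighborhood weighting via a 3x3 mean filter and the exponents p and q follow the common Chuang-style formulation and are assumptions here.

    ```python
    import numpy as np
    from scipy import ndimage

    def spatial_fcm(img, c=2, m=2.0, p=1, q=1, n_iter=30):
        """Spatial fuzzy c-means on a 2D image (generic sketch; the paper's
        enhanced spatial function also uses grayscale similarity to neighbors,
        which is not reproduced here)."""
        x = img.ravel().astype(float)
        v = np.linspace(x.min(), x.max(), c)              # initial centers
        for _ in range(n_iter):
            d = np.abs(x[None, :] - v[:, None]) + 1e-12    # (c, N) distances
            u = d ** (-2.0 / (m - 1.0))
            u /= u.sum(axis=0, keepdims=True)              # standard memberships
            h = np.empty_like(u)                           # spatial function
            for i in range(c):
                h[i] = ndimage.uniform_filter(u[i].reshape(img.shape), size=3).ravel()
            u = (u ** p) * (h ** q)
            u /= u.sum(axis=0, keepdims=True)              # re-weighted memberships
            v = (u ** m @ x) / (u ** m).sum(axis=1)        # update centers
        return u.argmax(axis=0).reshape(img.shape), v
    ```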

  5. Improved methods in neural network-based adaptive output feedback control, with applications to flight control

    Science.gov (United States)

    Kim, Nakwan

    Utilizing the universal approximation property of neural networks, we develop several novel approaches to neural network-based adaptive output feedback control of nonlinear systems, and illustrate these approaches for several flight control applications. In particular, we address the problem of non-affine systems and eliminate the fixed point assumption present in earlier work. All of the stability proofs are carried out in a form that eliminates an algebraic loop in the neural network implementation. An approximate input/output feedback linearizing controller is augmented with a neural network using input/output sequences of the uncertain system. These approaches permit adaptation to both parametric uncertainty and unmodeled dynamics. All physical systems also have control position and rate limits, which may either deteriorate performance or cause instability for a sufficiently high control bandwidth. Here we apply a method for protecting an adaptive process from the effects of input saturation and time delays, known as "pseudo control hedging". This method was originally developed for the state feedback case, and we provide a stability analysis that extends its domain of applicability to the case of output feedback. The approach is illustrated by the design of a pitch-attitude flight control system for a linearized model of an R-50 experimental helicopter, and by the design of a pitch-rate control system for a 58-state model of a flexible aircraft consisting of rigid body dynamics coupled with actuator and flexible modes. A new approach to augmentation of an existing linear controller is introduced. It is especially useful when there is limited information concerning the plant model, and the existing controller. The approach is applied to the design of an adaptive autopilot for a guided munition. Design of a neural network adaptive control that ensures asymptotically stable tracking performance is also addressed.

  6. Adapting Human Videofluoroscopic Swallow Study Methods to Detect and Characterize Dysphagia in Murine Disease Models

    OpenAIRE

    Lever, Teresa E.; Braun, Sabrina M.; Brooks, Ryan T.; Harris, Rebecca A.; Littrell, Loren L.; Neff, Ryan M.; Hinkel, Cameron J.; Allen, Mitchell J.; Ulsas, Mollie A.

    2015-01-01

    This study adapted human videofluoroscopic swallowing study (VFSS) methods for use with murine disease models for the purpose of facilitating translational dysphagia research. Successful outcomes are dependent upon three critical components: test chambers that permit self-feeding while standing unrestrained in a confined space, recipes that mask the aversive taste/odor of commercially-available oral contrast agents, and a step-by-step test protocol that permits quantification of swallow physi...

  7. A Virtual Mind Palace: Adapting the Method of Loci to Virtual Reality.

    OpenAIRE

    Vindenes, Joakim

    2017-01-01

    This master's thesis investigates the design and development of an application in the medium of Virtual Reality (VR). The application, called the Mind Palace Application (MPA), is an adaptation of a popular mnemonic called the Method of Loci (MOL). The application is designed to answer research questions regarding how different features of VR impact our memory of Virtual Environments (VEs) and who benefits from this technology. The research design involves a controlled experiment on three...

  8. Adaptive Wavelet Galerkin Methods on Distorted Domains: Setup of the Algebraic System

    Science.gov (United States)

    2000-01-01

    Lipschitz domain Ω ⊂ Rⁿ (n > 1), its numerical approximation by a variational method (Galerkin, Petrov-Galerkin, weighted residuals, ...) requires the ...

  9. An Adaptive Physics-Based Method for the Solution of One-Dimensional Wave Motion Problems

    Directory of Open Access Journals (Sweden)

    Masoud Shafiei

    2015-12-01

    Full Text Available In this paper, an adaptive physics-based method is developed for solving wave motion problems in one dimension (i.e., wave propagation in strings, rods and beams). The solution of the problem includes two main parts. In the first part, after discretization of the domain, a physics-based method is developed considering the conservation of mass and the balance of momentum. In the second part, adaptive points are determined using wavelet theory. This part is done employing the Deslauriers-Dubuc (D-D) wavelets. By solving the problem in the first step, the domain of the problem is discretized into identical cells, taking into consideration the load and the characteristics of the structure. After the first trial solution, the D-D interpolation shows the lack and redundancy of points in the domain. These points will be added or eliminated for the next solution. This process may be repeated to obtain an adaptive mesh for each step. Also, a smoothing spline fit is used to eliminate the noisy portion of the solution. Finally, the results of the proposed method are compared with the results available in the literature. The comparison shows excellent agreement between the obtained results and those already reported.

  10. An Adaptive Privacy Protection Method for Smart Home Environments Using Supervised Learning

    Directory of Open Access Journals (Sweden)

    Jingsha He

    2017-03-01

    Full Text Available In recent years, smart home technologies have started to be widely used, bringing a great deal of convenience to people’s daily lives. At the same time, privacy issues have become particularly prominent. Traditional encryption methods can no longer meet the needs of privacy protection in smart home applications, since attacks can be launched even without the need for access to the cipher. Rather, attacks can be successfully realized through analyzing the frequency of radio signals, as well as the timestamp series, so that the daily activities of the residents in the smart home can be learnt. Such types of attacks can achieve a very high success rate, making them a great threat to users’ privacy. In this paper, we propose an adaptive method based on sample data analysis and supervised learning (SDASL, to hide the patterns of daily routines of residents that would adapt to dynamically changing network loads. Compared to some existing solutions, our proposed method exhibits advantages such as low energy consumption, low latency, strong adaptability, and effective privacy protection.

  11. Multiobjective Trajectory Optimization and Adaptive Backstepping Control for Rubber Unstacking Robot Based on RFWNN Method

    Directory of Open Access Journals (Sweden)

    Le Liang

    2018-01-01

    Full Text Available Multiobjective trajectory optimization and an adaptive backstepping control method based on a recursive fuzzy wavelet neural network (RFWNN) are proposed to solve the problems of dynamic modeling uncertainties and strong external disturbances of the rubber unstacking robot during the recycling process. First, according to the rubber's viscoelastic properties, the Hunt-Crossley nonlinear model is used to construct the robot dynamics model. Then, combined with the dynamic model and the characteristics of the recycling process, multiobjective trajectory optimization of the rubber unstacking robot is carried out with respect to operational efficiency, trajectory smoothness, and energy consumption. Based on the trajectory optimization results, the adaptive backstepping control method based on the RFWNN is adopted. The RFWNN method is applied in the main controller to cope with time-varying uncertainties of the robot dynamic system. Simultaneously, an adaptive robust control law is developed to eliminate inevitable approximation errors and unknown disturbances and to relax the requirement for prior knowledge of the controlled system. Finally, the validity of the proposed control strategy is verified by experiment.

  12. Optimal Tikhonov regularization for DEER spectroscopy

    Science.gov (United States)

    Edwards, Thomas H.; Stoll, Stefan

    2018-03-01

    Tikhonov regularization is the most commonly used method for extracting distance distributions from experimental double electron-electron resonance (DEER) spectroscopy data. This method requires the selection of a regularization parameter, α, and a regularization operator, L. We analyze the performance of a large set of α selection methods and several regularization operators, using a test set of over half a million synthetic noisy DEER traces. These are generated from distance distributions obtained from in silico double labeling of a protein crystal structure of T4 lysozyme with the spin label MTSSL. We compare the methods and operators based on their ability to recover the model distance distributions from the noisy time traces. The results indicate that several α selection methods perform quite well, among them the Akaike information criterion and the generalized cross validation method with either the first- or second-derivative operator. They perform significantly better than currently utilized L-curve methods.
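
    A minimal dense-matrix sketch of the Tikhonov step with a second-derivative operator and GCV-based α selection is given below; it follows the generic textbook formulation, not the authors' code, and the α grid is an assumption.

    ```python
    import numpy as np

    def tikhonov_gcv(K, S, alphas):
        """Solve P(a) = argmin ||K P - S||^2 + a^2 ||L P||^2 with a
        second-derivative operator L, picking a by generalized cross
        validation (GCV)."""
        m, n = K.shape
        L = np.zeros((n - 2, n))              # second-difference operator
        for i in range(n - 2):
            L[i, i:i + 3] = [1.0, -2.0, 1.0]
        best = (np.inf, None, None)
        for a in alphas:
            A = K.T @ K + a**2 * (L.T @ L)
            P = np.linalg.solve(A, K.T @ S)
            H = K @ np.linalg.solve(A, K.T)   # influence ("hat") matrix
            gcv = np.linalg.norm(K @ P - S) ** 2 / (m - np.trace(H)) ** 2
            if gcv < best[0]:
                best = (gcv, a, P)
        return best[1], best[2]               # selected alpha and distribution
    ```

    A typical call would scan a logarithmic grid, e.g. alpha, P = tikhonov_gcv(K, S, np.logspace(-3, 3, 61)).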

  13. Perturbative Noncommutative Regularization

    CERN Document Server

    Hawkins, E J

    1999-01-01

    I propose a nonperturbative regularization of quantum field theories with contact interactions (primarily, scalar field theories). This is given by the geometric quantization of compact Kähler manifolds and generalizes what has already been proposed by Madore, Grosse, Klimčík, and Prešnajder for the two-sphere. I discuss the perturbation theory derived from this regularized model and propose an approximation technique for evaluating the Feynman diagrams. This amounts to a momentum cutoff combined with phase factors at vertices. To illustrate the exact and approximate calculations, I present, as examples, the simplest diagrams for the λφ⁴ model on the spaces S², S²×S², and CP². This regularization fails for noncompact spaces. I give a brief dimensional analysis argument as to why this is so. I also discuss the relevance of the topology of Feynman diagrams to their ultra-violet and infra-red divergence behavior in this model.

  14. Regular phantom black holes.

    Science.gov (United States)

    Bronnikov, K A; Fabris, J C

    2006-06-30

    We study self-gravitating, static, spherically symmetric phantom scalar fields with arbitrary potentials (favored by cosmological observations) and single out 16 classes of possible regular configurations with flat, de Sitter, and anti-de Sitter asymptotics. Among them are traversable wormholes, bouncing Kantowski-Sachs (KS) cosmologies, and asymptotically flat black holes (BHs). A regular BH has a Schwarzschild-like causal structure, but the singularity is replaced by a de Sitter infinity, giving a hypothetic BH explorer a chance to survive. It also looks possible that our Universe has originated in a phantom-dominated collapse in another universe, with KS expansion and isotropization after crossing the horizon. Explicit examples of regular solutions are built and discussed. Possible generalizations include k-essence type scalar fields (with a potential) and scalar-tensor gravity.

  15. How does playing adapted sports affect quality of life of people with mobility limitations? Results from a mixed-method sequential explanatory study.

    Science.gov (United States)

    Côté-Leclerc, Félix; Boileau Duchesne, Gabrielle; Bolduc, Patrick; Gélinas-Lafrenière, Amélie; Santerre, Corinne; Desrosiers, Johanne; Levasseur, Mélanie

    2017-01-25

    Occupations, including physical activity, are a strong determinant of health. However, mobility limitations can restrict opportunities to perform these occupations, which may affect quality of life. Some people will turn to adapted sports to meet their need to be involved in occupations. Little is known, however, about how participation in adapted sports affects the quality of life of people with mobility limitations. This study thus aimed to explore the influence of adapted sports on quality of life in adult wheelchair users. A mixed-method sequential explanatory design was used, including a quantitative and a qualitative component with a clinical research design. A total of 34 wheelchair users aged 18 to 62, who regularly played adapted sports, completed the Quality of Life Index (/30). Their scores were compared to those obtained by people of similar age without limitations (general population). Ten of the wheelchair users also participated in individual semi-structured interviews exploring their perceptions regarding how sports-related experiences affected their quality of life. The participants were 9 women and 25 men with paraplegia, the majority of whom worked and played an individual adapted sport (athletics, tennis or rugby) at the international or national level. People with mobility limitations who participated in adapted sports had a quality of life comparable to the group without limitations (21.9 ± 3.3 vs 22.3 ± 2.9 respectively), except for poorer family-related quality of life (21.0 ± 5.3 vs 24.1 ± 4.9 respectively). Based on the interviews, participants reported that the positive effect of adapted sports on the quality of life of people with mobility limitations operates mainly through the following: personal factors (behavior-related abilities and health), social participation (in general and through interpersonal relationships), and environmental factors (society's perceptions and support from the environment). Some contextual

  16. An adaptive reentry guidance method considering the influence of blackout zone

    Science.gov (United States)

    Wu, Yu; Yao, Jianyao; Qu, Xiangju

    2018-01-01

    Reentry guidance has been researched as a popular topic because it is critical for a successful flight. Given that existing guidance methods do not take into account the accumulated navigation error of the Inertial Navigation System (INS) in the blackout zone, in this paper an adaptive reentry guidance method is proposed to obtain the optimal reentry trajectory quickly, with the objective of minimizing the aerodynamic heating rate. The terminal error in position and attitude can also be reduced with the proposed method. In this method, the whole reentry guidance task is divided into two phases, i.e., the trajectory updating phase and the trajectory planning phase. In the first phase, the idea of model predictive control (MPC) is used, and the receding optimization procedure ensures the optimal trajectory over the next few seconds. In the trajectory planning phase, after the vehicle has flown out of the blackout zone, the optimal reentry trajectory is obtained by online planning to adapt to the navigation information. An effective swarm intelligence algorithm, the pigeon-inspired optimization (PIO) algorithm, is applied to obtain the optimal reentry trajectory in both phases. Compared to the trajectory updating method, the proposed method can reduce the terminal error by about 30% considering both position and attitude; in particular, the terminal error in height has almost been eliminated. Besides, the PIO algorithm performs better than the particle swarm optimization (PSO) algorithm in both the trajectory updating and the trajectory planning phases.
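
    For illustration, a minimal form of the two PIO operators (map-and-compass, then landmark) applied to a generic minimization problem is sketched below; the population size, bounds, and the fitness weighting in the landmark phase are assumptions, and the paper's trajectory parameterization is not reproduced.

    ```python
    import numpy as np

    def pio_minimize(f, dim, n=30, t1=100, t2=50, R=0.2, lo=-10.0, hi=10.0):
        """Pigeon-inspired optimization sketch for minimizing f over a box."""
        rng = np.random.default_rng(0)
        X = rng.uniform(lo, hi, (n, dim))
        V = np.zeros((n, dim))
        for t in range(1, t1 + 1):                      # map-and-compass operator
            g = X[np.argmin([f(x) for x in X])]          # current best pigeon
            V = V * np.exp(-R * t) + rng.random((n, dim)) * (g - X)
            X = np.clip(X + V, lo, hi)
        for _ in range(t2):                              # landmark operator
            X = X[np.argsort([f(x) for x in X])][:max(2, len(X) // 2)]
            w = 1.0 / (1.0 + np.array([f(x) for x in X]))  # assumed fitness weights
            center = (w[:, None] * X).sum(axis=0) / w.sum()
            X = np.clip(X + rng.random(X.shape) * (center - X), lo, hi)
        return X[np.argmin([f(x) for x in X])]
    ```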

  17. A new hybrid optimization method inspired from swarm intelligence: Fuzzy adaptive swallow swarm optimization algorithm (FASSO)

    Directory of Open Access Journals (Sweden)

    Mehdi Neshat

    2015-11-01

    Full Text Available In this article, the objective was to present effective and optimal strategies aimed at improving the Swallow Swarm Optimization (SSO) method. The SSO is one of the best optimization methods based on swarm intelligence and is inspired by the intelligent behaviors of swallows. It offers a relatively strong method for solving optimization problems. However, despite its many advantages, the SSO suffers from two shortcomings. Firstly, the particles' movement speed is not controlled satisfactorily during the search due to the lack of an inertia weight. Secondly, the acceleration coefficient variables are not able to strike a balance between the local and the global searches because they are not sufficiently flexible in complex environments. Therefore, the SSO algorithm does not provide adequate results when it searches functions such as the Step or Quadric function. Hence, the fuzzy adaptive Swallow Swarm Optimization (FASSO) method was introduced to deal with these problems. Highly accurate results are obtained by using an adaptive inertia weight and by combining two fuzzy logic systems to accurately calculate the acceleration coefficients. High speed of convergence, avoidance of local extrema, and a high level of error tolerance are the advantages of the proposed method. The FASSO was compared with eleven of the best PSO methods and with the SSO on 18 benchmark functions. Finally, significant results were obtained.

  18. Millimetre Level Accuracy GNSS Positioning with the Blind Adaptive Beamforming Method in Interference Environments.

    Science.gov (United States)

    Daneshmand, Saeed; Marathe, Thyagaraja; Lachapelle, Gérard

    2016-10-31

    The use of antenna arrays in Global Navigation Satellite System (GNSS) applications is gaining significant attention due to its superior capability to suppress both narrowband and wideband interference. However, the phase distortions resulting from array processing may limit the applicability of these methods for high precision applications using carrier phase based positioning techniques. This paper studies the phase distortions occurring with the adaptive blind beamforming method in which satellite angle of arrival (AoA) information is not employed in the optimization problem. To cater to non-stationary interference scenarios, the array weights of the adaptive beamformer are continuously updated. The effects of these continuous updates on the tracking parameters of a GNSS receiver are analyzed. The second part of this paper focuses on reducing the phase distortions during the blind beamforming process in order to allow the receiver to perform carrier phase based positioning by applying a constraint on the structure of the array configuration and by compensating the array uncertainties. Limitations of the previous methods are studied and a new method is proposed that keeps the simplicity of the blind beamformer structure and, at the same time, reduces tracking degradations while achieving millimetre level positioning accuracy in interference environments. To verify the applicability of the proposed method and analyze the degradations, array signals corresponding to the GPS L1 band are generated using a combination of hardware and software simulators. Furthermore, the amount of degradation and performance of the proposed method under different conditions are evaluated based on Monte Carlo simulations.

  19. Moving finite elements: A continuously adaptive method for computational fluid dynamics

    International Nuclear Information System (INIS)

    Glasser, A.H.; Miller, K.; Carlson, N.

    1991-01-01

    Moving Finite Elements (MFE), a recently developed method for computational fluid dynamics, promises major advances in the ability of computers to model the complex behavior of liquids, gases, and plasmas. Applications of computational fluid dynamics occur in a wide range of scientifically and technologically important fields. Examples include meteorology, oceanography, global climate modeling, magnetic and inertial fusion energy research, semiconductor fabrication, biophysics, automobile and aircraft design, industrial fluid processing, chemical engineering, and combustion research. The improvements made possible by the new method could thus have substantial economic impact. Moving Finite Elements is a moving node adaptive grid method which has a tendency to pack the grid finely in regions where it is most needed at each time and to leave it coarse elsewhere. It does so in a manner which is simple and automatic, and does not require a large amount of human ingenuity to apply it to each particular problem. At the same time, it often allows the time step to be large enough to advance a moving shock by many shock thicknesses in a single time step, moving the grid smoothly with the solution and minimizing the number of time steps required for the whole problem. For 2D problems (two spatial variables) the grid is composed of irregularly shaped and irregularly connected triangles which are very flexible in their ability to adapt to the evolving solution. While other adaptive grid methods have been developed which share some of these desirable properties, this is the only method which combines them all. In many cases, the method can save orders of magnitude of computing time, equivalent to several generations of advancing computer hardware

  20. Errors in the estimation method for the rejection of vibrations in adaptive optics systems

    Science.gov (United States)

    Kania, Dariusz

    2017-06-01

    In recent years, the problem of the impact of mechanical vibrations on adaptive optics (AO) systems has received renewed attention. These undesirable signals are damped sinusoids and have a deleterious effect on the system. One software solution for rejecting the vibrations is an adaptive method called AVC (Adaptive Vibration Cancellation), whose procedure has three steps: estimation of the perturbation parameters, estimation of the frequency response of the plant, and update of the reference signal to reject/minimize the vibration. In the first step, the choice of estimation method is a very important problem. A very accurate and fast (below 10 ms) method for estimating these three parameters has been presented in several publications in recent years. The method is based on spectrum interpolation and MSD time windows, and it can be used to estimate multifrequency signals. In this paper, the estimation method is used in the AVC method to increase the system performance. There are several parameters that affect the accuracy of the obtained results, e.g., CiR - the number of signal periods in a measurement window, N - the number of samples in the FFT procedure, H - the time window order, SNR, b - the number of ADC bits, and γ - the damping ratio of the tested signal. Systematic errors increase when N, CiR and H decrease and when γ increases. The systematic error is approximately 10^-10 Hz/Hz for N = 2048 and CiR = 0.1. This paper presents equations that can be used to estimate the maximum systematic errors for given values of H, CiR and N before the start of the estimation process.

  1. Removing damped sinusoidal vibrations in adaptive optics systems using a DFT-based estimation method

    Science.gov (United States)

    Kania, Dariusz

    2017-06-01

    The problem of vibration rejection in adaptive optics systems is still present in the literature. These undesirable signals emerge because of shaking of the system structure, the tracking process, etc., and they are usually damped sinusoidal signals. There are some mechanical solutions to reduce these signals, but they are not very effective. Among the software solutions, adaptive methods are very popular. An AVC (Adaptive Vibration Cancellation) method has been presented and developed in recent years. The method is based on the estimation of three vibration parameters; the values of frequency, amplitude and phase are essential to produce and adjust a proper signal to reduce or eliminate the vibration signals. This paper presents a fast (below 10 ms) and accurate method for estimating the frequency, amplitude and phase of a multifrequency signal that can be used in the AVC method to increase the AO system performance. The method's accuracy depends on several parameters: CiR - the number of signal periods in a measurement window, N - the number of samples in the FFT procedure, H - the time window order, SNR, THD, b - the number of A/D converter bits in a real-time system, γ - the damping ratio of the tested signal, and φ - the phase of the tested signal. Systematic errors increase when N, CiR and H decrease and when γ increases. The systematic error for γ = 0.1%, CiR = 1.1 and N = 32 is approximately 10^-4 Hz/Hz. This paper focuses on systematic errors and on the effect of the signal phase and the values of γ on the results.
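
    The flavor of such DFT-based estimation can be conveyed with a generic windowed-FFT peak interpolation; the Hann window and parabolic interpolation below stand in for the MSD-window spectrum-interpolation method of these papers and are only an approximation.

    ```python
    import numpy as np

    def estimate_tone(x, fs):
        """Estimate frequency, amplitude and phase of the dominant sinusoid
        via a windowed DFT with parabolic peak interpolation (generic sketch,
        not the MSD-window method)."""
        n = len(x)
        w = np.hanning(n)
        X = np.fft.rfft(x * w)
        mag = np.abs(X)
        k = int(np.argmax(mag[1:-1])) + 1               # dominant bin, skip DC
        a, b, c = mag[k - 1], mag[k], mag[k + 1]
        delta = 0.5 * (a - c) / (a - 2 * b + c)         # fractional-bin offset
        freq = (k + delta) * fs / n
        amp = 2.0 * b / w.sum()                         # window-gain corrected
        phase = np.angle(X[k]) - delta * np.pi          # approximate correction
        return freq, amp, phase
    ```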

  2. Sparse Pseudo Spectral Projection Methods with Directional Adaptation for Uncertainty Quantification

    KAUST Repository

    Winokur, J.

    2015-12-19

    We investigate two methods to build a polynomial approximation of a model output depending on some parameters. The two approaches are based on pseudo-spectral projection (PSP) methods on adaptively constructed sparse grids, and aim at providing a finer control of the resolution along two distinct subsets of model parameters. The control of the error along different subsets of parameters may be needed, for instance, in the case of a model depending on uncertain parameters and deterministic design variables. We first consider a nested approach where an independent adaptive sparse grid PSP is performed along the first set of directions only, and at each point a sparse grid is constructed adaptively in the second set of directions. We then consider the application of a PSP in the space of all parameters, and introduce directional refinement criteria to provide a tighter control of the projection error along individual dimensions. Specifically, we use a Sobol decomposition of the projection surpluses to tune the sparse grid adaptation. The behavior and performance of the two approaches are compared for a simple two-dimensional test problem and for a shock-tube ignition model involving 22 uncertain parameters and 3 design parameters. The numerical experiments indicate that whereas both methods provide effective means for tuning the quality of the representation along distinct subsets of parameters, PSP in the global parameter space generally requires fewer model evaluations than the nested approach to achieve a similar projection error. In addition, the global approach is better suited for generalization to more than two subsets of directions.

  3. Regularity of Bound States

    DEFF Research Database (Denmark)

    Faupin, Jeremy; Møller, Jacob Schach; Skibsted, Erik

    2011-01-01

    We study regularity of bound states pertaining to embedded eigenvalues of a self-adjoint operator H, with respect to an auxiliary operator A that is conjugate to H in the sense of Mourre. We work within the framework of singular Mourre theory, which enables us to deal with confined massless Pauli-Fierz models, our primary example, and many-body AC-Stark Hamiltonians. In the simpler context of regular Mourre theory, our results boil down to an improvement of results obtained recently in [8, 9].

  4. Interaction of high-speed compressible viscous flow and structure by adaptive finite element method

    International Nuclear Information System (INIS)

    Limtrakarn, Wiroj; Dechaumphai, Pramote

    2004-01-01

    Interaction behaviors of high-speed compressible viscous flow and the thermal-structural response of the structure are presented. The compressible viscous laminar flow behavior based on the Navier-Stokes equations is predicted by using an adaptive cell-centered finite-element method. The energy equation and the quasi-static structural equations for aerodynamically heated structures are solved by applying the Galerkin finite-element method. The finite-element formulation and computational procedure are described. The performance of the combined method is evaluated by solving Mach 4 flow past a flat plate and comparing with the solution from the finite difference method. To demonstrate their interaction, the high-speed flow, structural heat transfer, and deformation phenomena are studied by applying the present method to Mach 10 flow past a flat plate

  5. A local adaptive method for the numerical approximation in seismic wave modelling

    Directory of Open Access Journals (Sweden)

    Galuzzi Bruno G.

    2017-12-01

    Full Text Available We propose a new numerical approach for the solution of the 2D acoustic wave equation to model the predicted data in the field of active-source seismic inverse problems. This method consists in using an explicit finite difference technique with an adaptive order of approximation of the spatial derivatives that takes into account the local velocity at the grid nodes. Testing our method to simulate the recorded seismograms in a marine seismic acquisition, we found that the low computational time and the low approximation error of the proposed approach make it suitable in the context of seismic inversion problems.
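
    A hedged sketch of the core idea, an explicit second-derivative stencil whose order is chosen per node from the local velocity, is shown below; the order-selection thresholds are illustrative assumptions, and the full time-stepping scheme of the paper is omitted.

    ```python
    import numpy as np

    # Standard central coefficients for the second derivative, orders 2/4/6.
    COEFFS = {
        2: np.array([1.0, -2.0, 1.0]),
        4: np.array([-1/12, 4/3, -5/2, 4/3, -1/12]),
        6: np.array([1/90, -3/20, 3/2, -49/18, 3/2, -3/20, 1/90]),
    }

    def d2x_adaptive(u, dx, vel, thresholds=(1500.0, 3000.0)):
        """Second spatial derivative with a per-node stencil order chosen from
        the local velocity (threshold values are assumptions)."""
        d2 = np.zeros_like(u)
        for i in range(3, len(u) - 3):                  # interior nodes only
            order = 2 if vel[i] < thresholds[0] else 4 if vel[i] < thresholds[1] else 6
            r = order // 2
            d2[i] = np.dot(COEFFS[order], u[i - r:i + r + 1]) / dx**2
        return d2
    ```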

  6. Method for Adapting to Rough Terrain Based on Environmental Modes for Biped Robots

    Science.gov (United States)

    Ohashi, Eijiro; Sato, Tomoya; Ohnishi, Kouhei

    This paper describes a method for adapting to rough terrain for biped robots. The robots obtain information of reaction force from the ground by sensors located at each corner of rectangular soles. From the sensor information, environmental modes are extracted. The environmental modes consist of four modes: heaving, rolling, pitching, and twisting, which represent contact states between the ground and the soles. On the basis of the twisting mode, the robot detects the unevenness of the ground, makes contact with the uneven ground stably with three corners of the sole, and modifies the trajectory to continue stable walking. The validity of the proposed method is confirmed by experimental results.

  7. Adaptive Multilevel Methods with Local Smoothing for $H^1$- and $H^{\mathrm{curl}}$-Conforming High Order Finite Element Methods

    KAUST Repository

    Janssen, Bärbel

    2011-01-01

    A multilevel method on adaptive meshes with hanging nodes is presented, and the additional matrices appearing in the implementation are derived. Smoothers of overlapping Schwarz type are discussed; smoothing is restricted to the interior of the subdomains refined to the current level; thus it has optimal computational complexity. When applied to conforming finite element discretizations of elliptic problems and Maxwell equations, the method's convergence rates are very close to those for the nonadaptive version. Furthermore, the smoothers remain efficient for high order finite elements. We discuss the implementation in a general finite element code using the example of the deal.II library. © 2011 Society for Industrial and Applied Mathematics.

  8. Methods and evaluations of MRI content-adaptive finite element mesh generation for bioelectromagnetic problems

    International Nuclear Information System (INIS)

    Lee, W H; Kim, T-S; Cho, M H; Ahn, Y B; Lee, S Y

    2006-01-01

    In studying bioelectromagnetic problems, finite element analysis (FEA) offers several advantages over conventional methods such as the boundary element method. It allows truly volumetric analysis and incorporation of material properties such as anisotropic conductivity. For FEA, mesh generation is the first critical requirement and there exist many different approaches. However, conventional approaches offered by commercial packages and various algorithms do not generate content-adaptive meshes (cMeshes), resulting in numerous nodes and elements in modelling the conducting domain, and thereby increasing computational load and demand. In this work, we present efficient content-adaptive mesh generation schemes for complex biological volumes of MR images. The presented methodology is fully automatic and generates FE meshes that are adaptive to the geometrical contents of MR images, allowing optimal representation of conducting domain for FEA. We have also evaluated the effect of cMeshes on FEA in three dimensions by comparing the forward solutions from various cMesh head models to the solutions from the reference FE head model in which fine and equidistant FEs constitute the model. The results show that there is a significant gain in computation time with minor loss in numerical accuracy. We believe that cMeshes should be useful in the FEA of bioelectromagnetic problems

  9. An Adaptive INS-Aided PLL Tracking Method for GNSS Receivers in Harsh Environments.

    Science.gov (United States)

    Cong, Li; Li, Xin; Jin, Tian; Yue, Song; Xue, Rui

    2016-01-23

    As the weak link in global navigation satellite system (GNSS) signal processing, the phase-locked loop (PLL) is easily affected by frequent cycle slips and loss of lock as a result of high vehicle dynamics and low signal-to-noise ratios. With inertial navigation system (INS) aiding, the tracking performance of PLLs can be improved. However, in harsh environments with high dynamics and signal attenuation, the traditional INS-aided PLL with fixed loop parameters has limited ability to improve tracking adaptability. In this paper, an adaptive INS-aided PLL capable of adjusting its noise bandwidth and coherent integration time is proposed. Through theoretical analysis, the relation between the INS-aided PLL phase tracking error and the carrier-to-noise density ratio (C/N₀), vehicle dynamics, aiding information update time, noise bandwidth, and coherent integration time has been established. The relation formulae are used to choose the optimal integration time and bandwidth for a given application under the minimum tracking error criterion. Software and hardware simulation results verify the correctness of the theoretical analysis, and demonstrate that the adaptive tracking method can effectively improve the PLL tracking ability and the integrated GNSS/INS navigation performance. In harsh environments, the tracking sensitivity is increased by 3 to 5 dB, velocity errors are decreased by 36% to 50%, and position errors are decreased by 6% to 24% when compared with other INS-aided PLL methods.

  10. A hyper-spherical adaptive sparse-grid method for high-dimensional discontinuity detection

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Guannan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Webster, Clayton G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gunzburger, Max D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Burkardt, John V. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2014-03-01

    This work proposes and analyzes a hyper-spherical adaptive hierarchical sparse-grid method for detecting jump discontinuities of functions in high-dimensional spaces. The method is motivated by the theoretical and computational inefficiencies of well-known adaptive sparse-grid methods for discontinuity detection. Our novel approach constructs a function representation of the discontinuity hyper-surface of an N-dimensional discontinuous quantity of interest, by virtue of a hyper-spherical transformation. Then, a sparse-grid approximation of the transformed function is built in the hyper-spherical coordinate system, whose value at each point is estimated by solving a one-dimensional discontinuity detection problem. Due to the smoothness of the hyper-surface, the new technique can identify jump discontinuities with significantly reduced computational cost compared to existing methods. Moreover, hierarchical acceleration techniques are incorporated to further reduce the overall complexity. Rigorous error estimates and complexity analyses of the new method are provided, as are several numerical examples that illustrate the effectiveness of the approach.

  11. Solution verification, goal-oriented adaptive methods for stochastic advection–diffusion problems

    KAUST Repository

    Almeida, Regina C.

    2010-08-01

    A goal-oriented analysis of linear, stochastic advection-diffusion models is presented which provides both a method for solution verification and a basis for improving results through adaptation of both the mesh and the way random variables are approximated. A class of model problems with random coefficients and source terms is cast in a variational setting. Specific quantities of interest are specified which are also random variables. A stochastic adjoint problem associated with the quantities of interest is formulated and a posteriori error estimates are derived. These are used to guide an adaptive algorithm which adjusts the sparse probabilistic grid so as to control the approximation error. Numerical examples are given to demonstrate the methodology for a specific model problem. © 2010 Elsevier B.V.

  12. Parallel simulation of multiphase flows using octree adaptivity and the volume-of-fluid method

    Science.gov (United States)

    Agbaglah, Gilou; Delaux, Sébastien; Fuster, Daniel; Hoepffner, Jérôme; Josserand, Christophe; Popinet, Stéphane; Ray, Pascal; Scardovelli, Ruben; Zaleski, Stéphane

    2011-02-01

    We describe computations performed using the Gerris code, an open-source software implementing finite volume solvers on an octree adaptive grid together with a piecewise-linear volume-of-fluid interface tracking method. The parallelisation of Gerris is achieved by domain decomposition. We show examples of the capabilities of Gerris on several types of problems. The impact of a droplet on a layer of the same liquid results in the formation of a thin air layer trapped between the droplet and the liquid layer, which the adaptive refinement allows us to capture. It is followed by the jetting of a thin corolla emerging from below the impacting droplet. The jet atomisation problem is another extremely challenging computational problem, in which a large number of small scales are generated. Finally, we show an example of a turbulent jet computation with an equivalent resolution of 6×1024 cells. The jet simulation is based on the configuration of the Deepwater Horizon oil leak.

  13. Modeling of heterogeneous elastic materials by the multiscale hp-adaptive finite element method

    Science.gov (United States)

    Klimczak, Marek; Cecot, Witold

    2018-01-01

    We present an enhancement of the multiscale finite element method (MsFEM) by combining it with the hp-adaptive FEM. Such a discretization-based homogenization technique is a versatile tool for modeling heterogeneous materials with rapidly oscillating elasticity coefficients. No assumption on periodicity of the domain is required. In order to avoid direct, so-called overkill mesh computations, a coarse mesh with effective stiffness matrices is used and special shape functions are constructed to account for the local heterogeneities at the micro resolution. The automatic adaptivity (hp-type at the macro resolution and h-type at the micro resolution) increases the efficiency of the computation. In this paper, details of the modified MsFEM are presented, and a numerical test performed on a Fichera corner domain is used to validate the proposed approach.

  14. New Adaptive Method for IQ Imbalance Compensation of Quadrature Modulators in Predistortion Systems

    Directory of Open Access Journals (Sweden)

    Hassan Zareian

    2009-01-01

    Full Text Available Imperfections in quadrature modulators (QMs), such as inphase and quadrature (IQ) imbalance, can severely impact the performance of power amplifier (PA) linearization systems, in particular adaptive digital predistorters (PDs). In this paper, we first analyze the effect of IQ imbalance on the performance of a memory orthogonal polynomials predistorter (MOP PD), and then we propose a new adaptive algorithm to estimate and compensate for the unknown IQ imbalance in the QM. Unlike previous compensation techniques, the proposed method is capable of online IQ imbalance compensation with faster convergence, and no special calibration or training signals are needed. The effectiveness of the proposed IQ imbalance compensator was validated by simulations. The results clearly show that the performance of the MOP PD is enhanced significantly by adding the proposed IQ imbalance compensator.

  15. Annotation of Regular Polysemy

    DEFF Research Database (Denmark)

    Martinez Alonso, Hector

    Regular polysemy has received a lot of attention from the theory of lexical semantics and from computational linguistics. However, there is no consensus on how to represent the sense of underspecified examples at the token level, namely when annotating or disambiguating senses of metonymic words...

  16. Feasibility of an online adaptive replanning method for cranial frameless intensity-modulated radiosurgery

    Energy Technology Data Exchange (ETDEWEB)

    Calvo, Juan Francisco, E-mail: jfcdrr@gmail.com [Departamento de Oncología Radioterápica, Hospital Quirón, Barcelona (Spain); San José, Sol [Departamento de Oncología Radioterápica, Hospital Quirón, Barcelona (Spain); Garrido, LLuís [Institut de Ciències del Cosmos i Departament ECM, Universitat de Barcelona, Barcelona (Spain); Puertas, Enrique; Moragues, Sandra; Pozo, Miquel [Departamento de Oncología Radioterápica, Hospital Quirón, Barcelona (Spain); Casals, Joan, E-mail: jfcdrr@yahoo.es [Departamento de Oncología Radioterápica, Hospital Quirón, Barcelona (Spain)

    2013-10-01

    To introduce an approach for online adaptive replanning (i.e., dose-guided radiosurgery) in frameless stereotactic radiosurgery, when a 6-dimensional (6D) robotic couch is not available in the linear accelerator (linac). Cranial radiosurgical treatments are planned in our department using an intensity-modulated technique. Patients are immobilized using a thermoplastic mask. A cone-beam computed tomography (CBCT) scan is acquired after the initial laser-based patient setup (CBCT_setup). The online adaptive replanning procedure we propose consists of a 6D registration-based mapping of the reference plan onto the actual CBCT_setup, followed by a reoptimization of the beam fluences (“6D plan”) to achieve a dosage similar to the one originally intended, while the patient is lying on the linac couch and the original beam arrangement is kept. The performance of the proposed online adaptive method was retrospectively analyzed for 16 patients with 35 targets treated with the CBCT-based frameless intensity-modulated technique. A simulation of the reference plan onto the actual CBCT_setup, according to the 4 degrees of freedom supported by the linac couch, was also generated for each case (4D plan). Target coverage (D99%) and conformity index values of the 6D and 4D plans were compared with the corresponding values of the reference plans. Although the 4D-based approach does not always assure target coverage (D99% between 72% and 103%), the proposed online adaptive method gave perfect coverage in all cases analyzed, as well as conformity index values similar to those planned. The dose-guided radiosurgery approach is effective in assuring the dose coverage and conformity of an intracranial target volume, avoiding resetting the patient inside the mask in a “trial and error” way to remove the pitch and roll errors when a robotic table is not available.

  17. Empirical mode decomposition-adaptive least squares method for dynamic calibration of pressure sensors

    Science.gov (United States)

    Yao, Zhenjian; Wang, Zhongyu; Yi-Lin Forrest, Jeffrey; Wang, Qiyue; Lv, Jing

    2017-04-01

    In this paper, an approach combining empirical mode decomposition (EMD) with adaptive least squares (ALS) is proposed to improve the dynamic calibration accuracy of pressure sensors. With EMD, the original output of the sensor can be represented as a sum of zero-mean amplitude-modulation frequency-modulation components. By identifying and excluding the components dominated by noise, a noise-free output can be reconstructed from the useful components. Then the least squares method is iteratively performed to estimate the optimal order and parameters of the mathematical model. The dynamic characteristic parameters of the sensor can be derived from the model in both the time and frequency domains. A series of shock tube calibration tests is carried out to validate the performance of this method. Experimental results show that the proposed method works well in reducing the influence of noise and yields an appropriate mathematical model. Furthermore, comparative experiments also demonstrate the superiority of the proposed method over existing ones.
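
    A hedged sketch of the EMD-based denoising step is shown below, using the third-party PyEMD package; the correlation-based criterion for discarding noisy components is an illustrative assumption, since the paper identifies them differently.

    ```python
    import numpy as np
    from PyEMD import EMD   # third-party package; assumed available

    def denoise_by_emd(signal, corr_threshold=0.2):
        """Reconstruct a noise-reduced output from the IMFs that correlate
        with the raw signal above a threshold (criterion is an assumption)."""
        imfs = EMD().emd(signal)
        keep = [imf for imf in imfs
                if abs(np.corrcoef(imf, signal)[0, 1]) > corr_threshold]
        return np.sum(keep, axis=0) if keep else signal
    ```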

  18. A Cartesian Adaptive Level Set Method for Two-Phase Flows

    Science.gov (United States)

    Ham, F.; Young, Y.-N.

    2003-01-01

    In the present contribution we develop a level set method based on local anisotropic Cartesian adaptation as described in Ham et al. (2002). Such an approach should allow for the smallest possible Cartesian grid capable of resolving a given flow. The remainder of the paper is organized as follows. In section 2 the level set formulation for free surface calculations is presented and its strengths and weaknesses relative to the other free surface methods are reviewed. In section 3 the collocated numerical method is described. In section 4 the method is validated by solving the 2D and 3D drop oscillation problem. In section 5 we present some results from more complex cases, including the 3D drop breakup in an impulsively accelerated free stream and the 3D immiscible Rayleigh-Taylor instability. Conclusions are given in section 6.

  19. Adaptive moving grid methods for two-phase flow in porous media

    KAUST Repository

    Dong, Hao

    2014-08-01

    In this paper, we present an application of the moving mesh method for approximating numerical solutions of the two-phase flow model in porous media. The numerical schemes combine a mixed finite element method and a finite volume method, which can handle the nonlinearities of the governing equations in an efficient way. The adaptive moving grid method is then used to distribute more grid points near the sharp interfaces, which enables us to obtain accurate numerical solutions with fewer computational resources. The numerical experiments indicate that the proposed moving mesh strategy could be an effective way to approximate two-phase flows in porous media. © 2013 Elsevier B.V. All rights reserved.

  20. Tracking Maneuvering Group Target with Extension Predicted and Best Model Augmentation Method Adapted

    Directory of Open Access Journals (Sweden)

    Linhai Gan

    2017-01-01

    Full Text Available The random matrix (RM) method is widely applied for group target tracking. The assumption in the conventional RM method that the group extension remains invariant is not valid, as the orientation of the group varies rapidly while it is maneuvering; thus, a new approach with group extension prediction is derived here. To match the group maneuvering, a best model augmentation (BMA) method is introduced. The existing BMA method uses a fixed basic model set, which may lead to poor performance when it cannot ensure coverage of the true motion modes. Here, a maneuvering group target tracking algorithm is proposed in which the group extension prediction and the BMA adaptation are exploited. The performance of the proposed algorithm is illustrated by simulation.

  1. A novel ECG data compression method based on adaptive Fourier decomposition

    Science.gov (United States)

    Tan, Chunyu; Zhang, Liming

    2017-12-01

    This paper presents a novel electrocardiogram (ECG) compression method based on adaptive Fourier decomposition (AFD). AFD is a newly developed signal decomposition approach, which can decompose a signal with fast convergence, and hence reconstruct ECG signals with high fidelity. Unlike most of the high performance algorithms, our method does not make use of any preprocessing operation before compression. Huffman coding is employed for further compression. Validated with 48 ECG recordings of the MIT-BIH arrhythmia database, the proposed method achieves a compression ratio (CR) of 35.53 and a percentage root mean square difference (PRD) of 1.47% on average with N = 8 decomposition times and a robust PRD-CR relationship. The results demonstrate that the proposed method performs well compared with state-of-the-art ECG compressors.
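
    The two figures of merit quoted above can be computed as follows; the definitions (CR as a size ratio, PRD as a normalized RMS error) are the standard ones and are assumed rather than taken from the paper's implementation.

    ```python
    import numpy as np

    def compression_ratio(original_bits, compressed_bits):
        """CR: how many times smaller the compressed record is."""
        return original_bits / compressed_bits

    def prd(x, x_rec):
        """Percentage root-mean-square difference between signal and reconstruction."""
        return 100.0 * np.sqrt(np.sum((x - x_rec) ** 2) / np.sum(x ** 2))
    ```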

  2. A fully general and adaptive inverse analysis method for cementitious materials

    DEFF Research Database (Denmark)

    Jepsen, Michael S.; Damkilde, Lars; Lövgren, Ingemar

    2016-01-01

    Simple σ-w functions are typically applied when modeling the fracture mechanisms in cementitious materials, but the rapid development of pseudo-strain-hardening, fiber-reinforced cementitious materials requires inverse methods capable of treating multi-linear σ-w functions. The proposed method is fully general in the sense that it relies on least-squares fitting between test data, obtained from various kinds of test setup such as the three-point bending or wedge-splitting test, and simulated data obtained by either FEA or analytical models. In the current paper, adaptive inverse analysis is conducted on test data from three-point bending of notched specimens and simulated data from a nonlinear hinge model. The paper shows that the results obtained by means of the proposed method are independent of the initial shape of the σ-w function and the initial guess of the tensile strength. The method provides very accurate fits, and the increased...

  3. A cellular automaton - finite volume method for the simulation of dendritic and eutectic growth in binary alloys using an adaptive mesh refinement

    Science.gov (United States)

    Dobravec, Tadej; Mavrič, Boštjan; Šarler, Božidar

    2017-11-01

    A two-dimensional model to simulate the dendritic and eutectic growth in binary alloys is developed. A cellular automaton method is adopted to track the movement of the solid-liquid interface. The diffusion equation is solved in the solid and liquid phases by using an explicit finite volume method. The computational domain is divided into square cells that can be hierarchically refined or coarsened using an adaptive mesh based on the quadtree algorithm. Such a mesh refines the regions of the domain near the solid-liquid interface, where the highest concentration gradients are observed. In the regions where the lowest concentration gradients are observed the cells are coarsened. The originality of the work is in the novel, adaptive approach to the efficient and accurate solution of the posed multiscale problem. The model is verified and assessed by comparison with the analytical results of the Lipton-Glicksman-Kurz model for the steady growth of a dendrite tip and the Jackson-Hunt model for regular eutectic growth. Several examples of typical microstructures are simulated and the features of the method as well as further developments are discussed.

  4. A vertical parallax reduction method for stereoscopic video based on adaptive interpolation

    Science.gov (United States)

    Li, Qingyu; Zhao, Yan

    2016-10-01

    The existence of vertical parallax is the main factor affecting the viewing comfort of stereo video, and visual fatigue is gaining widespread attention with the booming development of 3D stereoscopic video technology. In order to reduce the vertical parallax without affecting the horizontal parallax, a self-adaptive image scaling algorithm is proposed, which uses the edge characteristics efficiently. In addition, the nonlinear Levenberg-Marquardt (L-M) algorithm is introduced in this paper to improve the accuracy of the transformation matrix. Firstly, the self-adaptive scaling algorithm is used to interpolate the original image. When a pixel of the original image lies in an edge area, the interpolation is performed adaptively along the edge direction obtained by the Sobel operator. Secondly, the SIFT algorithm, which is invariant to scaling, rotation and affine transformation, is used to detect matching feature points in the binocular images. Then, according to the coordinates of the matching points, the transformation matrix that reduces the vertical parallax is calculated using the Levenberg-Marquardt algorithm. Finally, the transformation matrix is applied to the target image to calculate the new coordinate of each pixel of the view image. The experimental results show that, compared with a method that reduces vertical parallax by using a linear algorithm to compute the two-dimensional projective transformation, the proposed method reduces the vertical parallax noticeably better. At the same time, the horizontal parallax after correction remains closer to that of the original image. Therefore, the proposed method can optimize the vertical parallax reduction.
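
    A hedged sketch of the Levenberg-Marquardt step: fit a two-dimensional projective transform that minimizes only the vertical offset between matched SIFT points. The parameterization and residual below are plausible assumptions, not the paper's exact formulation.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def vertical_residual(h, src, dst):
        """Vertical disparity after warping src points by homography h (8 dof)."""
        H = np.append(h, 1.0).reshape(3, 3)
        p = np.c_[src, np.ones(len(src))] @ H.T
        y_warped = p[:, 1] / p[:, 2]
        return y_warped - dst[:, 1]               # only the vertical mismatch

    def fit_transform(src, dst):
        h0 = np.eye(3).ravel()[:8]                # start from the identity
        return least_squares(vertical_residual, h0, args=(src, dst), method="lm").x
    ```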

  5. A Self-Adaptive Model-Based Wi-Fi Indoor Localization Method.

    Science.gov (United States)

    Tuta, Jure; Juric, Matjaz B

    2016-12-06

    This paper presents a novel method for indoor localization, developed with the main aim of making it useful for real-world deployments. Many indoor localization methods exist, yet they have several disadvantages in real-world deployments: some are static, which is not suitable for long-term usage; some require costly human recalibration procedures; and others require special hardware such as Wi-Fi anchors and transponders. Our method is self-calibrating and self-adaptive, thus maintenance-free, and based on Wi-Fi only. We have employed two well-known propagation models, the free-space path loss and ITU models, which we have extended with additional parameters for better propagation simulation. Our self-calibrating procedure utilizes one propagation model to infer parameters of the space and the other to simulate the propagation of the signal, without requiring any additional hardware beside Wi-Fi access points, which is suitable for real-world usage. Our method is also one of the few model-based Wi-Fi-only self-adaptive approaches that do not require the mobile terminal to be in access-point mode. The only input requirements of the method are the Wi-Fi access point positions and the positions and properties of the walls. Our method has been evaluated in single- and multi-room environments, with a measured mean error of 2-3 and 3-4 m, respectively, which is similar to existing methods. The evaluation has proven that usable localization accuracy can be achieved in real-world environments solely by the proposed Wi-Fi method, which relies on simple hardware and software requirements.
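
    As a minimal illustration of the model-based idea, the sketch below predicts received power with a log-distance path-loss model and inverts it to range an access point; the exponent and reference power are illustrative assumptions, whereas the paper's extended free-space and ITU models carry additional parameters.

    ```python
    import math

    def rssi_at(distance_m, tx_power_dbm=-40.0, path_loss_exp=2.5):
        """Predicted RSSI, with tx_power_dbm taken as the level at 1 m."""
        return tx_power_dbm - 10.0 * path_loss_exp * math.log10(distance_m)

    def distance_from(rssi_dbm, tx_power_dbm=-40.0, path_loss_exp=2.5):
        """Invert the model: estimated distance to the access point in meters."""
        return 10.0 ** ((tx_power_dbm - rssi_dbm) / (10.0 * path_loss_exp))
    ```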

  6. Adaptive finite element method for fractional differential equations using hierarchical matrices

    Science.gov (United States)

    Zhao, Xuan; Hu, Xiaozhe; Cai, Wei; Karniadakis, George Em

    2017-10-01

    A robust and fast solver for fractional differential equations (FDEs) involving the Riesz fractional derivative is developed using an adaptive finite element method on non-uniform meshes. It is based on the utilization of hierarchical matrices ($\mathcal{H}$-matrices) for the representation of the stiffness matrix resulting from the finite element discretization of the FDEs. We employ a geometric multigrid method for the solution of the algebraic system of equations. We combine it with an adaptive algorithm based on a posteriori error estimation to deal with general-type singularities arising in the solution of the FDEs. Through various test examples we demonstrate the efficiency of the method and the high accuracy of the numerical solution even in the presence of singularities. The proposed technique has been verified effectively through fundamental examples including the Riesz and left/right Riemann-Liouville fractional derivatives and, furthermore, it can be readily extended to more general fractional differential equations with different boundary conditions and low-order terms. To the best of our knowledge, there are currently no other methods for FDEs that resolve singularities accurately at linear complexity as the one we propose here.

  7. AK-SYS: An adaptation of the AK-MCS method for system reliability

    International Nuclear Information System (INIS)

    Fauriat, W.; Gayton, N.

    2014-01-01

    A lot of research work has been proposed over the last two decades to evaluate the probability of failure of a structure involving a very time-consuming mechanical model. Surrogate model approaches based on Kriging, such as the Efficient Global Reliability Analysis (EGRA) or the Active learning and Kriging-based Monte-Carlo Simulation (AK-MCS) methods, are very efficient and each has advantages of its own. EGRA is well suited to evaluating small probabilities, as the surrogate can be used to classify any population. AK-MCS is built in relation to a given population and requires no optimization program for the active learning procedure to be performed. It is therefore easier to implement and more likely to spend computational effort on areas with a significant probability content. When assessing system reliability, analytical approaches and first-order approximations are widely used in the literature. However, in the present paper we rather focus on sampling techniques and, considering the recent adaptation of the EGRA method for systems, a strategy is presented to adapt the AK-MCS method for system reliability. The AK-SYS method, “Active learning and Kriging-based SYStem reliability method”, is presented. Its high efficiency and accuracy are illustrated via various examples.
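
    A hedged sketch of the Kriging active-learning step that AK-MCS-style methods share: score a candidate population with the learning function U = |mu|/sigma and enrich the design at the most ambiguous sample. The Gaussian-process model and the usual stopping rule (min U >= 2) are standard choices assumed here, not details taken from the paper.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor

    def next_training_point(gp: GaussianProcessRegressor, candidates):
        mu, sigma = gp.predict(candidates, return_std=True)
        U = np.abs(mu) / np.maximum(sigma, 1e-12)  # low U = likely misclassified
        i = int(np.argmin(U))
        return i, U[i]                             # stop enriching once min U >= 2
    ```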

  8. Adaptation of the Bligh & Dyer Method for Lipid Extraction from Colombian Microalgae for Third-Generation Biodiesel Production

    Directory of Open Access Journals (Sweden)

    González Delgado Ángel

    2012-06-01

    In the biodiesel production process from microalgae, the cell disruption and lipid extraction stages are important for obtaining triglycerides that can be transesterified to biodiesel and glycerol. In this work, the Bligh & Dyer method was adapted for lipid extraction from native microalgae, using organosolv pretreatment or acid hydrolysis as the cell disruption mechanism to improve the extraction process. Chloroform, methanol and water are the solvents employed in the Bligh & Dyer extraction method. The microalgae species Botryococcus braunii, Nannochloropsis, Closterium, Guinardia and Amphiprora were used in the experimental work. Adaptation of the method identified the best extraction conditions: a 1:20 biomass/solvent ratio, an initial CHCl3:CH3OH:H2O solvent ratio of 1:2:0, stirring at 5000 rpm for 14 minutes, and centrifugation at 3400 rpm for 15 minutes. The cell disruption mechanisms made it possible to obtain extracts with high lipid content after performing the extraction with the Bligh & Dyer method, but decreased the total extraction yield significantly. Finally, the fatty acid profiles showed that the Botryococcus braunii species contains the highest acylglycerol percentage area, suitable for the production of biodiesel.

  9. An Adaptive Method for Mining Hierarchical Spatial Co-location Patterns

    Directory of Open Access Journals (Sweden)

    CAI Jiannan

    2016-04-01

    Mining spatial co-location patterns plays a key role in spatial data mining. Spatial co-location patterns refer to subsets of features whose objects are frequently located in close geographic proximity. Due to spatial heterogeneity, spatial co-location patterns are usually not the same across geographic space. However, existing methods are mainly designed to discover global spatial co-location patterns and are not suitable for detecting regional spatial co-location patterns. On that account, an adaptive method for mining hierarchical spatial co-location patterns is proposed in this paper. Firstly, global spatial co-location patterns are detected, and the other non-prevalent co-location patterns are identified as candidate regional co-location patterns. Then, for each candidate pattern, an adaptive spatial clustering method is used to delineate localities of that pattern in the study area, and the participation ratio is utilized to measure the prevalence of the candidate co-location pattern. Finally, an overlap operation is developed to deduce localities of (k+1)-size co-location patterns from localities of k-size co-location patterns. Experiments on both simulated and real-life datasets show that the proposed method is effective for detecting hierarchical spatial co-location patterns.

  10. Research on adaptive optics image restoration algorithm based on improved joint maximum a posteriori method

    Science.gov (United States)

    Zhang, Lijuan; Li, Yang; Wang, Junnan; Liu, Ying

    2018-03-01

    In this paper, we propose a point spread function (PSF) reconstruction method and a joint maximum a posteriori (JMAP) estimation method for adaptive optics image restoration. Using the JMAP method as the basic principle, we establish the joint log-likelihood function of multi-frame adaptive optics (AO) images based on Gaussian image noise models. First, combining the observing conditions and AO system characteristics, a predicted PSF model for the wavefront phase effect is developed; then, we build up iterative solution formulas for the AO image based on our proposed algorithm and describe the implementation of the multi-frame joint deconvolution method. We conduct a series of experiments on simulated and real degraded AO images to evaluate our proposed algorithm. Compared with the Wiener iterative blind deconvolution (Wiener-IBD) algorithm and the Richardson-Lucy IBD algorithm, our algorithm achieves better restoration, including higher peak signal-to-noise ratio (PSNR) and Laplacian sum (LS) values. The results have practical value for actual AO image restoration.
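
    The abstract benchmarks against Richardson-Lucy deconvolution; a minimal baseline of that comparison method is sketched below with scikit-image (the parameter is named `num_iter` in recent releases, `iterations` in older ones). The Gaussian PSF is an illustrative stand-in for a reconstructed AO PSF.

    ```python
    import numpy as np
    from skimage import restoration

    def gaussian_psf(size=15, sigma=2.0):
        """Normalized 2-D Gaussian kernel used as a surrogate PSF."""
        ax = np.arange(size) - size // 2
        xx, yy = np.meshgrid(ax, ax)
        psf = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
        return psf / psf.sum()

    def rl_restore(degraded, num_iter=30):
        return restoration.richardson_lucy(degraded, gaussian_psf(), num_iter=num_iter)
    ```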

  11. Adaptive coarse graining method for energy transfer and dissociation kinetics of polyatomic species

    Science.gov (United States)

    Sahai, A.; Lopez, B.; Johnston, C. O.; Panesi, M.

    2017-08-01

    A novel reduced-order method is presented for modeling reacting flows characterized by strong non-equilibrium of the internal energy level distribution of chemical species in the gas. The approach seeks a reduced-order representation of the distribution function by grouping individual energy states into macroscopic bins, and then reconstructing state populations using the maximum entropy principle. This work introduces an adaptive grouping methodology to identify and lump together groups of states that are likely to equilibrate faster with respect to each other. To this aim, two algorithms have been considered: the modified island algorithm and the spectral clustering method. Both methods require a measure of dissimilarity between internal energy states. This is achieved by defining "metrics" based on the strength of the elementary rate coefficients included in the state-specific kinetic mechanism. Penalty terms are used to avoid grouping together states characterized by distinctively different energies. The two methods are used to investigate excitation and dissociation of N2 (X1Σg+) molecules due to interaction with N (4Su) atoms in an ideal chemical reactor. The results are compared with a direct numerical simulation of the state-specific kinetics obtained by solving the master equations for the complete set of energy levels. It is found that adaptive grouping techniques outperform the more conventional uniform energy grouping algorithm by providing a more accurate description of the distribution function, mole fraction and energy profiles during non-equilibrium relaxation.
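
    A hedged sketch of the grouping idea: cluster internal energy states from an affinity built on state-to-state rate coefficients, with an energy penalty discouraging bins that span very different energies. The matrix names and penalty form are assumptions for illustration, not the paper's exact metrics.

    ```python
    import numpy as np
    from sklearn.cluster import SpectralClustering

    def group_states(rates, energies, n_bins=10, beta=5.0):
        """rates[i, j]: elementary rate coefficient between states i and j."""
        strength = 0.5 * (rates + rates.T)                     # symmetrized coupling
        gap = np.abs(energies[:, None] - energies[None, :])
        affinity = strength * np.exp(-beta * gap / gap.max())  # penalize energy gaps
        model = SpectralClustering(n_clusters=n_bins, affinity="precomputed")
        return model.fit_predict(affinity)                     # bin index per state
    ```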

  12. Maximum mutual information regularized classification

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-09-07

    In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced by knowing its classification response as much as possible. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.

  13. Multiple graph regularized protein domain ranking

    KAUST Repository

    Wang, Jim Jing-Yan

    2012-11-19

    Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications. © 2012 Wang et al.; licensee BioMed Central Ltd.

  14. Multiple graph regularized protein domain ranking.

    Science.gov (United States)

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2012-11-19

    Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.

  15. Multiple graph regularized protein domain ranking

    Directory of Open Access Journals (Sweden)

    Wang Jim

    2012-11-01

    Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.

  16. Global optimization method using SLE and adaptive RBF based on fuzzy clustering

    Science.gov (United States)

    Zhu, Huaguang; Liu, Li; Long, Teng; Zhao, Junfeng

    2012-07-01

    High fidelity analysis models, which are beneficial to improving the design quality, have been more and more widely utilized in modern engineering design optimization problems. However, high fidelity analysis models are so computationally expensive that the time required for design optimization is usually unacceptable. The optimization efficiency can be upgraded by applying surrogates to approximate the computationally expensive models, which can greatly reduce the computation time. An efficient heuristic global optimization method using adaptive radial basis function (RBF) based on fuzzy clustering (ARFC) is proposed. In this method, a novel algorithm of maximin Latin hypercube design using successive local enumeration (SLE) is employed to obtain sample points with good performance in both space-filling and projective uniformity properties, which considerably benefits metamodel accuracy. The RBF method is adopted for constructing the metamodels, and as the number of sample points increases the approximation accuracy of the RBF is gradually enhanced. The fuzzy c-means clustering method is applied to identify the reduced attractive regions in the original design space. Numerical benchmark examples are used for validating the performance of ARFC. The results demonstrate that for most application examples the global optima are effectively obtained, and comparison with the adaptive response surface method (ARSM) proves that the proposed method can intuitively capture promising design regions and can efficiently identify the global or near-global design optimum. This method improves the efficiency and global convergence of the optimization problems, and gives a new optimization strategy for engineering design optimization problems involving computationally expensive models.

  17. A Multilevel Adaptive Reaction-splitting Simulation Method for Stochastic Reaction Networks

    KAUST Repository

    Moraes, Alvaro

    2016-07-07

    In this work, we present a novel multilevel Monte Carlo method for kinetic simulation of stochastic reaction networks characterized by having simultaneously fast and slow reaction channels. To produce efficient simulations, our method adaptively classifies the reactions channels into fast and slow channels. To this end, we first introduce a state-dependent quantity named level of activity of a reaction channel. Then, we propose a low-cost heuristic that allows us to adaptively split the set of reaction channels into two subsets characterized by either a high or a low level of activity. Based on a time-splitting technique, the increments associated with high-activity channels are simulated using the tau-leap method, while those associated with low-activity channels are simulated using an exact method. This path simulation technique is amenable for coupled path generation and a corresponding multilevel Monte Carlo algorithm. To estimate expected values of observables of the system at a prescribed final time, our method bounds the global computational error to be below a prescribed tolerance, TOL, within a given confidence level. This goal is achieved with a computational complexity of order O(TOL-2), the same as with a pathwise-exact method, but with a smaller constant. We also present a novel low-cost control variate technique based on the stochastic time change representation by Kurtz, showing its performance on a numerical example. We present two numerical examples extracted from the literature that show how the reaction-splitting method obtains substantial gains with respect to the standard stochastic simulation algorithm and the multilevel Monte Carlo approach by Anderson and Higham. © 2016 Society for Industrial and Applied Mathematics.

  18. Adaptive optics in spinning disk microscopy: improved contrast and brightness by a simple and fast method.

    Science.gov (United States)

    Fraisier, V; Clouvel, G; Jasaitis, A; Dimitrov, A; Piolot, T; Salamero, J

    2015-09-01

    Multiconfocal microscopy gives a good compromise between fast imaging and reasonable resolution. However, the low intensity of live fluorescent emitters is a major limitation to this technique. Aberrations induced by the optical setup, especially the mismatch of the refractive index and the biological sample itself, distort the point spread function and further reduce the amount of detected photons. Altogether, this leads to impaired image quality, preventing accurate analysis of molecular processes in biological samples and imaging deep in the sample. The amount of detected fluorescence can be improved with adaptive optics. Here, we used a compact adaptive optics module (adaptive optics box for sectioning optical microscopy), which was specifically designed for spinning disk confocal microscopy. The module overcomes undesired anomalies by correcting for most of the aberrations in confocal imaging. Existing aberration detection methods require prior illumination, which bleaches the sample. To avoid multiple exposures of the sample, we established an experimental model describing the depth dependence of major aberrations. This model allows us to correct for those aberrations when performing a z-stack, gradually increasing the amplitude of the correction with depth. It does not require illumination of the sample for aberration detection, thus minimizing photobleaching and phototoxicity. With this model, we improved both signal-to-background ratio and image contrast. Here, we present comparative studies on a variety of biological samples. © 2015 The Authors Journal of Microscopy © 2015 Royal Microscopical Society.

  19. Unsupervised Remote Sensing Domain Adaptation Method with Adversarial Network and Auxiliary Task

    Directory of Open Access Journals (Sweden)

    XU Suhui

    2017-12-01

    An important prerequisite for annotating remote sensing images by machine learning is that there are enough training samples, but labeling the samples is very time-consuming. In this paper, we address the problem of unsupervised learning with small sample sizes in remote sensing image scene classification by a domain adaptation method. A new domain adaptation framework is proposed which combines an adversarial network with an auxiliary task. Firstly, a novel remote sensing scene classification framework is established based on deep convolutional neural networks. Secondly, a domain classifier is added to the network in order to learn domain-invariant features. The gradient direction of the domain loss is opposite to that of the label loss during back propagation, which prevents the domain predictor from distinguishing the domain of a sample. Lastly, we introduce an auxiliary task for the network, which augments the training samples and improves the generalization ability of the network. The experiments demonstrate better results in unsupervised classification of remote sensing images with small sample sizes compared to the baseline unsupervised domain adaptation approaches.

  20. Application of Symmetry Adapted Function Method for Three-Dimensional Reconstruction of Octahedral Biological Macromolecules

    Directory of Open Access Journals (Sweden)

    Songjun Zeng

    2010-01-01

    A method for three-dimensional (3D) reconstruction of macromolecular assemblies, the octahedral symmetry-adapted function (OSAF) method, is introduced in this paper, and a series of formulations for reconstruction by the OSAF method is derived. To verify the feasibility and advantages of the method, two octahedrally symmetric macromolecules, the heat shock protein DegP24 and the red-cell L ferritin, were used as examples for reconstruction by the OSAF method. The simulation schedule was designed as follows: 2000 randomly oriented projections of single particles with predefined Euler angles and centers of origin were generated, and then noise at different levels, i.e. signal-to-noise ratios (S/N) of 0.1, 0.5, and 0.8, was added. The structures reconstructed by the OSAF method were in good agreement with the standard models, and the relative errors of the reconstructed structures with respect to the standard structures were very small even for high noise levels. These facts show that the OSAF method is a feasible and efficient approach for reconstructing the structures of macromolecules and is able to suppress the influence of noise.

  1. Sparse structure regularized ranking

    KAUST Repository

    Wang, Jim Jing-Yan

    2014-04-17

    Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse structure, we assume that each multimedia object could be represented as a sparse linear combination of all other objects, and combination coefficients are regarded as a similarity measure between objects and used to regularize their ranking scores. Moreover, we propose to learn the sparse combination coefficients and the ranking scores simultaneously. A unified objective function is constructed with regard to both the combination coefficients and the ranking scores, and is optimized by an iterative algorithm. Experiments on two multimedia database retrieval data sets demonstrate the significant improvements of the proposed algorithm over state-of-the-art ranking score learning algorithms.
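
    A hedged sketch of the two ingredients named above: sparse coding of each object against all others (here via Lasso) to obtain combination coefficients, and a graph-Laplacian smoothing of ranking scores using the coefficient magnitudes as similarities. The parameters and the closed-form smoothing step are illustrative assumptions.

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    def sparse_similarity(X, alpha=0.05):
        """X: (n, d) objects; returns a symmetric nonnegative similarity matrix."""
        n = X.shape[0]
        W = np.zeros((n, n))
        for i in range(n):
            others = np.delete(X, i, axis=0)
            coef = Lasso(alpha=alpha).fit(others.T, X[i]).coef_  # X[i] ~ sum_j c_j X[j]
            W[i, np.arange(n) != i] = np.abs(coef)
        return 0.5 * (W + W.T)

    def smooth_scores(scores, W, lam=1.0):
        """Ranking-score smoothing with a graph-Laplacian penalty (closed form)."""
        L = np.diag(W.sum(axis=1)) - W
        return np.linalg.solve(np.eye(len(scores)) + lam * L, scores)
    ```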

  2. Adaptive Sliding Mode Control Method Based on Nonlinear Integral Sliding Surface for Agricultural Vehicle Steering Control

    Directory of Open Access Journals (Sweden)

    Taochang Li

    2014-01-01

    Automatic steering control is the key factor and essential condition in the realization of automatic navigation control of agricultural vehicles. In order to get satisfactory steering control performance, an adaptive sliding mode control method based on a nonlinear integral sliding surface is proposed in this paper for agricultural vehicle steering control. First, the vehicle steering system is modeled as a second-order mathematical model; the system uncertainties and unmodeled dynamics as well as the external disturbances are regarded as equivalent disturbances satisfying a certain bound. Second, a transient process of the desired system response is constructed in each navigation control period. Based on the transient process, a nonlinear integral sliding surface is designed. Then the corresponding sliding mode control law is proposed to guarantee fast response characteristics with no overshoot in the closed-loop steering control system. Meanwhile, the switching gain of the sliding mode control is adaptively adjusted to alleviate control input chattering by using the fuzzy control method. Finally, the effectiveness and the superiority of the proposed method are verified by a series of simulations and actual steering control experiments.
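
    The sketch below simulates the control idea on a toy second-order steering model: a sliding-mode law on an integral sliding surface, with the switching gain adapted to the distance from the surface as a simple stand-in for the paper's fuzzy adaptation. All gains and plant coefficients are illustrative.

    ```python
    import numpy as np

    def simulate(ref=0.3, dt=0.001, T=2.0, lam=8.0, ki=4.0):
        x, v, integ = 0.0, 0.0, 0.0                  # angle, rate, error integral
        for _ in range(int(T / dt)):
            e, de = x - ref, v
            integ += e * dt
            s = de + lam * e + ki * integ            # integral sliding surface
            k = 2.0 + 10.0 * min(abs(s), 1.0)        # adaptive switching gain
            u = -lam * de - ki * e - k * np.tanh(s / 0.05)  # smoothed sign(s)
            v += (-2.0 * v - 10.0 * x + 10.0 * u) * dt      # 2nd-order plant
            x += v * dt
        return x                                     # should settle near ref
    ```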

  3. Planetary gearbox fault feature enhancement based on combined adaptive filter method

    Directory of Open Access Journals (Sweden)

    Shuangshu Tian

    2015-12-01

    The reliability of vibration signals acquired from a planetary gear system (an indispensable part of the wind turbine gearbox) is directly related to the accuracy of fault diagnosis. The complex operating environment introduces many interference signals into the vibration signals. Furthermore, both the multiple gears meshing with each other and the differences in transmission routes produce strong nonlinearity in the vibration signals, which makes it difficult to eliminate the noise. This article presents a combined adaptive filter method. Taking a delayed copy of the signal as the reference signal, the self-adaptive noise cancellation method is adopted to eliminate the white noise. Meanwhile, by applying a Gaussian function to transform the input signal into a high-dimensional feature-space signal, the kernel least mean square algorithm is used to cancel the nonlinear interference. The effectiveness of the method has been verified with simulation signals and test rig signals. For the simulation signal, the signal-to-noise ratio is improved by around 30 dB (white noise) and the amplitude of the nonlinear interference signal is suppressed by up to 50%. Experimental results show remarkable improvements and enhanced gear fault features.
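
    A hedged sketch of the linear half of the scheme: LMS adaptive noise cancellation with a delayed copy of the measurement as the reference, so that broadband noise decorrelates while periodic gear-mesh components survive. The delay, filter length and step size are illustrative choices.

    ```python
    import numpy as np

    def delayed_lms(x, delay=64, n_taps=32, mu=0.01):
        """Return the filter output: an estimate of the periodic signal part."""
        w = np.zeros(n_taps)
        y = np.zeros(len(x))
        for n in range(delay + n_taps, len(x)):
            ref = x[n - delay - n_taps:n - delay][::-1]  # delayed reference taps
            y[n] = w @ ref                               # predictable (periodic) part
            e = x[n] - y[n]                              # white noise is unpredictable
            w += mu * e * ref / (ref @ ref + 1e-12)      # normalized LMS update
        return y
    ```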

  4. Adaptive explicit and implicit finite element methods for transient thermal analysis

    Science.gov (United States)

    Probert, E. J.; Hassan, O.; Morgan, K.; Peraire, J.

    1992-01-01

    The application of adaptive finite element methods to the solution of transient heat conduction problems in two dimensions is investigated. The computational domain is represented by an unstructured assembly of linear triangular elements and the mesh adaptation is achieved by local regeneration of the grid, using an error estimation procedure coupled to an automatic triangular mesh generator. Two alternative solution procedures are considered. In the first procedure, the solution is advanced by explicit timestepping, with domain decomposition being used to improve the computational efficiency of the method. In the second procedure, an algorithm for constructing continuous lines which pass only once through each node of the mesh is employed. The lines are used as the basis of a fully implicit method, in which the equation system is solved by line relaxation using a block tridiagonal equation solver. The numerical performance of the two procedures is compared for the analysis of a problem involving a moving heat source applied to a convectively cooled cylindrical leading edge.

  5. A feature extraction method for adaptive DBS using an improved EMD.

    Science.gov (United States)

    Sun, Qifeng; Zhao, Dechun; Cheng, Shanshan; Hou, Xiaorong; Zhao, Xing; Tian, Yin

    2018-03-22

    The local field potential (LFP) of a patient with Parkinson's disease often shows an abnormal oscillation phenomenon. Extracting and studying this phenomenon and designing an adaptive deep brain stimulation (DBS) control library have great significance in the treatment of the disease. This paper designs a feature extraction method based on modified empirical mode decomposition (EMD) which extracts the abnormal oscillation signal in the time domain to increase the overall performance. The intrinsic mode function (IMF) component which contains the abnormal oscillation is extracted using EMD, after which an intrinsic characteristic of the oscillation signal is obtained. The abnormal oscillation signal is acquired using signal normalization, peak counting, and a thresholded envelope method, which preserves integrity and accuracy as well as efficiency. A comparative study of eight patients (six patients with DBS switched off and medication stopped; two patients with stimulation) has verified the feasibility of using modified EMD to extract the abnormal oscillation signal. The results showed that patients who receive DBS suffer less abnormal oscillation than those who receive no treatment. These results match the energy rise in the 3-30 Hz band of the local field potential spectrum of patients with Parkinson's disease. Unlike previous oscillation extraction algorithms, the improved EMD feature extraction method directly isolates the abnormal oscillation signal from the LFP. Significant improvements have been made in the feature extraction algorithm in adaptability, real-time performance, and accuracy.
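
    A small sketch of the envelope step mentioned above: take the IMF carrying the oscillation, compute its Hilbert envelope, and flag epochs whose envelope exceeds an adaptive threshold. The window and threshold rule are illustrative assumptions, not the paper's calibrated values.

    ```python
    import numpy as np
    from scipy.signal import hilbert

    def oscillation_mask(imf, fs, win_s=0.5, k=2.0):
        env = np.abs(hilbert(imf))                 # instantaneous amplitude
        thr = k * np.median(env)                   # robust adaptive threshold
        win = int(win_s * fs)
        n_win = len(env) // win
        mask = env[:n_win * win].reshape(-1, win).mean(axis=1) > thr
        return mask                                # True = abnormal oscillation epoch
    ```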

  6. Comparison of an adaptive local thresholding method on CBCT and µCT endodontic images

    Science.gov (United States)

    Michetti, Jérôme; Basarab, Adrian; Diemer, Franck; Kouame, Denis

    2018-01-01

    Root canal segmentation on cone beam computed tomography (CBCT) images is difficult because of the noise level, resolution limitations, beam hardening and dental morphological variations. An image processing framework, based on an adaptive local threshold method, was evaluated on CBCT images acquired from extracted teeth. A comparison with high-quality segmented endodontic images on micro computed tomography (µCT) images acquired from the same teeth was carried out using a dedicated registration process. Each segmented tooth was evaluated according to volume, and root canal cross-sections were assessed through their area and Feret's diameter. The proposed method is shown to overcome the limitations of CBCT and to provide an automated and adaptive complete endodontic segmentation. Despite a slight underestimation (-4.08%), the local threshold segmentation method based on edge detection was shown to be fast and accurate. Strong correlations between CBCT and µCT segmentations were found for both the root canal area and the diameter (0.98 and 0.88, respectively). Our findings suggest that combining CBCT imaging with this image processing framework may benefit experimental endodontology and teaching, and could represent a first development step towards the clinical use of endodontic CBCT segmentation during pulp cavity treatment.
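
    A hedged sketch of an adaptive local-threshold segmentation step of the kind evaluated above, using scikit-image; the block size and offset are illustrative and would need tuning to CBCT noise and resolution.

    ```python
    from skimage.filters import threshold_local

    def segment_canal(slice_2d, block_size=51, offset=0.0):
        """Binary mask of dark canal voxels in one CBCT slice."""
        local_thr = threshold_local(slice_2d, block_size=block_size, offset=offset)
        return slice_2d < local_thr               # canal is darker than dentine
    ```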

  7. A review of some a posteriori error estimates for adaptive finite element methods

    Czech Academy of Sciences Publication Activity Database

    Segeth, Karel

    2010-01-01

    Roč. 80, č. 8 (2010), s. 1589-1600 ISSN 0378-4754. [European Seminar on Coupled Problems. Jetřichovice, 08.06.2008-13.06.2008] R&D Projects: GA AV ČR(CZ) IAA100190803 Institutional research plan: CEZ:AV0Z10190503 Keywords : hp-adaptive finite element method * a posteriori error estimators * computational error estimates Subject RIV: BA - General Mathematics Impact factor: 0.812, year: 2010 http://www.sciencedirect.com/science/article/pii/S0378475408004230

  8. Adaptive control system having hedge unit and related apparatus and methods

    Science.gov (United States)

    Johnson, Eric Norman (Inventor); Calise, Anthony J. (Inventor)

    2007-01-01

    The invention includes an adaptive control system used to control a plant. The adaptive control system includes a hedge unit that receives at least one control signal and a plant state signal. The hedge unit generates a hedge signal based on the control signal, the plant state signal, and a hedge model including a first model having one or more characteristics to which the adaptive control system is not to adapt, and a second model not having the characteristic(s) to which the adaptive control system is not to adapt. The hedge signal is used in the adaptive control system to remove the effect of the characteristic from a signal supplied to an adaptation law unit of the adaptive control system so that the adaptive control system does not adapt to the characteristic in controlling the plant.

  9. 'Regular' and 'emergency' repair

    International Nuclear Information System (INIS)

    Luchnik, N.V.

    1975-01-01

    Experiments on the combined action of radiation and a DNA inhibitor using Crepis roots and on split-dose irradiation of human lymphocytes lead to the conclusion that there are two types of repair. The 'regular' repair takes place twice in each mitotic cycle and ensures the maintenance of genetic stability. The 'emergency' repair is induced at all stages of the mitotic cycle by high levels of injury. (author)

  10. Regularization of divergent integrals

    OpenAIRE

    Felder, Giovanni; Kazhdan, David

    2016-01-01

    We study the Hadamard finite part of divergent integrals of differential forms with singularities on submanifolds. We give formulae for the dependence of the finite part on the choice of regularization and express them in terms of a suitable local residue map. The cases where the submanifold is a complex hypersurface in a complex manifold and where it is a boundary component of a manifold with boundary, arising in string perturbation theory, are treated in more detail.

  11. Analytic stochastic regularization in fermionic gauge theories

    International Nuclear Information System (INIS)

    Abdalla, E.; Viana, R.L.

    1987-11-01

    We analyse the influence of the analytic stochastic regularization method on gauge symmetry, evaluating the 1-loop photon propagator correction for spinor QED. Consequences in the non-abelian case are discussed. (author)

  12. Millimetre Level Accuracy GNSS Positioning with the Blind Adaptive Beamforming Method in Interference Environments

    Directory of Open Access Journals (Sweden)

    Saeed Daneshmand

    2016-10-01

    The use of antenna arrays in Global Navigation Satellite System (GNSS) applications is gaining significant attention due to its superior capability to suppress both narrowband and wideband interference. However, the phase distortions resulting from array processing may limit the applicability of these methods for high precision applications using carrier phase based positioning techniques. This paper studies the phase distortions occurring with the adaptive blind beamforming method, in which satellite angle of arrival (AoA) information is not employed in the optimization problem. To cater to non-stationary interference scenarios, the array weights of the adaptive beamformer are continuously updated. The effects of these continuous updates on the tracking parameters of a GNSS receiver are analyzed. The second part of this paper focuses on reducing the phase distortions during the blind beamforming process in order to allow the receiver to perform carrier phase based positioning, by applying a constraint on the structure of the array configuration and by compensating the array uncertainties. Limitations of the previous methods are studied and a new method is proposed that keeps the simplicity of the blind beamformer structure and, at the same time, reduces tracking degradations while achieving millimetre level positioning accuracy in interference environments. To verify the applicability of the proposed method and analyze the degradations, array signals corresponding to the GPS L1 band are generated using a combination of hardware and software simulators. Furthermore, the amount of degradation and the performance of the proposed method under different conditions are evaluated based on Monte Carlo simulations.
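
    A hedged sketch of a blind (AoA-free) beamformer of the power-minimization family discussed above: minimize array output power subject to a unit response on a reference element, w = R⁻¹e₁ / (e₁ᴴR⁻¹e₁). This illustrates the class of method, not the paper's exact algorithm.

    ```python
    import numpy as np

    def blind_weights(snapshots):
        """snapshots: (n_antennas, n_samples) complex baseband array data."""
        n = snapshots.shape[0]
        R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # covariance
        R += 1e-3 * np.trace(R).real / n * np.eye(n)             # diagonal loading
        e1 = np.zeros(n, dtype=complex); e1[0] = 1.0             # reference element
        w = np.linalg.solve(R, e1)
        return w / (e1.conj() @ w)            # interference-suppressing weights
    ```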

  13. An in vitro digestion method adapted for carotenoids and carotenoid esters: moving forward towards standardization.

    Science.gov (United States)

    Rodrigues, Daniele Bobrowski; Mariutti, Lilian Regina Barros; Mercadante, Adriana Zerlotti

    2016-12-07

    In vitro digestion methods are a useful approach to predict the bioaccessibility of food components and overcome some limitations or disadvantages associated with in vivo methodologies. Recently, the INFOGEST network published a static method of in vitro digestion with a proposal for assay standardization. The INFOGEST method is not specific for any food component; therefore, we aimed to adapt this method to assess the in vitro bioaccessibility of carotenoids and carotenoid esters in a model fruit (Byrsonima crassifolia). Two additional steps were coupled to the in vitro digestion procedure, centrifugation at 20 000g for the separation of the aqueous phase containing mixed micelles and exhaustive carotenoid extraction with an organic solvent. The effect of electrolytes, enzymes and bile acids on carotenoid micellarization and stability was also tested. The results were compared with those found with a simpler method that has already been used for carotenoid bioaccessibility analysis. These values were in the expected range for free carotenoids (5-29%), monoesters (9-26%) and diesters (4-28%). In general, the in vitro bioaccessibility of carotenoids assessed by the adapted INFOGEST method was significantly higher (p < 0.05) than those assessed by the simplest protocol, with or without the addition of simulated fluids. Although no trend was observed, differences in bioaccessibility values depended on the carotenoid form (free, monoester or diester), isomerization (Z/E) and the in vitro digestion protocol. To the best of our knowledge, it was the first time that a systematic identification of carotenoid esters by HPLC-DAD-MS/MS after in vitro digestion using the INFOGEST protocol was carried out.

  14. The geometry of continuum regularization

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1987-03-01

    This lecture is primarily an introduction to coordinate-invariant regularization, a recent advance in the continuum regularization program. In this context, the program is seen as fundamentally geometric, with all regularization contained in regularized DeWitt superstructures on field deformations

  15. Grouping pursuit through a regularization solution surface.

    Science.gov (United States)

    Shen, Xiaotong; Huang, Hsin-Cheng

    2010-06-01

    Extracting grouping structure or identifying homogenous subgroups of predictors in regression is crucial for high-dimensional data analysis. One low-dimensional structure in particular, grouping, when captured in a regression model, enhances predictive performance and facilitates a model's interpretability. Grouping pursuit extracts homogenous subgroups of predictors most responsible for outcomes of a response. This is the case in gene network analysis, where grouping reveals gene functionalities with regard to the progression of a disease. To address challenges in grouping pursuit, we introduce a novel homotopy method for computing an entire solution surface through regularization involving a piecewise linear penalty. This nonconvex and overcomplete penalty permits adaptive grouping and nearly unbiased estimation, and is treated with a novel concept of grouped subdifferentials and difference convex programming for efficient computation. Finally, the proposed method not only achieves high performance as suggested by numerical analysis, but also has the desired optimality with regard to grouping pursuit and prediction as shown by our theoretical results.

  16. Adaptive and robust statistical methods for processing near-field scanning microwave microscopy images.

    Science.gov (United States)

    Coakley, K J; Imtiaz, A; Wallis, T M; Weber, J C; Berweger, S; Kabos, P

    2015-03-01

    Near-field scanning microwave microscopy offers great potential to facilitate characterization, development and modeling of materials. By acquiring microwave images at multiple frequencies and amplitudes (along with the other modalities) one can study material and device physics at different lateral and depth scales. Images are typically noisy and contaminated both by artifacts that can vary from scan line to scan line and by planar-like trends due to sample tilt errors. Here, we level images based on an estimate of a smooth 2-d trend determined with a robust implementation of a local regression method. In this robust approach, features and outliers which are not due to the trend are automatically downweighted. We denoise images with the Adaptive Weights Smoothing method. This method smooths out additive noise while preserving edge-like features in images. We demonstrate the feasibility of our methods on topography images and microwave |S11| images. For one challenging test case, we demonstrate that our method outperforms alternative methods from the scanning probe microscopy data analysis software package Gwyddion. Our methods should be useful for massive image data sets where manual selection of landmarks or image subsets by a user is impractical. Published by Elsevier B.V.

  17. An Adaptive Orientation Estimation Method for Magnetic and Inertial Sensors in the Presence of Magnetic Disturbances

    Directory of Open Access Journals (Sweden)

    Bingfei Fan

    2017-05-01

    Magnetic and inertial sensors have been widely used to estimate the orientation of human segments due to their low cost, compact size and light weight. However, the accuracy of the estimated orientation is easily affected by external factors, especially when the sensor is used in an environment with magnetic disturbances. In this paper, we propose an adaptive method to improve the accuracy of orientation estimations in the presence of magnetic disturbances. The method is based on existing gradient descent algorithms, and it is performed prior to sensor fusion algorithms. The proposed method includes stationary state detection and magnetic disturbance severity determination. The stationary state detection makes this method immune to magnetic disturbances in the stationary state, while the magnetic disturbance severity determination helps to determine the credibility of magnetometer data under dynamic conditions, so as to mitigate the negative effect of the magnetic disturbances. The proposed method was validated through experiments performed on a customized three-axis instrumented gimbal with known orientations. The errors of the proposed method and the original gradient descent algorithms were calculated and compared. Experimental results demonstrate that in the stationary state, the proposed method is completely immune to magnetic disturbances, and in dynamic conditions, the error caused by magnetic disturbance is reduced by 51.2% compared with the original MIMU gradient descent algorithm.
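
    A minimal sketch of the two gating tests described above: stationarity detection from accelerometer and gyroscope norms, and a magnetometer credibility weight from the deviation of the field magnitude from its reference. All thresholds are illustrative assumptions.

    ```python
    import numpy as np

    G, MAG_REF = 9.81, 50.0                        # gravity (m/s^2), field (uT)

    def is_stationary(acc, gyr, tol_a=0.3, tol_w=0.05):
        return abs(np.linalg.norm(acc) - G) < tol_a and np.linalg.norm(gyr) < tol_w

    def mag_weight(mag, tol=5.0):
        """Credibility in [0, 1] for the magnetometer in the fusion update."""
        severity = abs(np.linalg.norm(mag) - MAG_REF) / tol
        return float(np.clip(1.0 - severity, 0.0, 1.0))
    ```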

  18. An adaptive bin framework search method for a beta-sheet protein homopolymer model

    Directory of Open Access Journals (Sweden)

    Hoos Holger H

    2007-04-01

    Background: The problem of protein structure prediction consists of predicting the functional or native structure of a protein given its linear sequence of amino acids. This problem has played a prominent role in the fields of biomolecular physics and algorithm design for over 50 years. Additionally, its importance increases continually as a result of an exponential growth over time in the number of known protein sequences, in contrast to a linear increase in the number of determined structures. Our work focuses on the problem of searching an exponentially large space of possible conformations as efficiently as possible, with the goal of finding a global optimum with respect to a given energy function. This problem plays an important role in the analysis of systems with complex search landscapes, and particularly in the context of ab initio protein structure prediction. Results: In this work, we introduce a novel approach for solving this conformation search problem based on the use of a bin framework for adaptively storing and retrieving promising locally optimal solutions. Our approach provides a rich and general framework within which a broad range of adaptive or reactive search strategies can be realized. Here, we introduce adaptive mechanisms for choosing which conformations should be stored, based on the set of conformations already stored in memory, and for biasing choices when retrieving conformations from memory in order to overcome search stagnation. Conclusion: We show that our bin framework combined with a widely used optimization method, Monte Carlo search, achieves significantly better performance than state-of-the-art generalized ensemble methods for a well-known protein-like homopolymer model on the face-centered cubic lattice.

  19. Locomotor adaptation to a powered ankle-foot orthosis depends on control method

    Directory of Open Access Journals (Sweden)

    Gordon Keith E

    2007-12-01

    Background: We studied human locomotor adaptation to powered ankle-foot orthoses with the intent of identifying differences between two different orthosis control methods. The first orthosis control method used a footswitch to provide bang-bang control (a kinematic control) and the second used a proportional myoelectric signal from the soleus (a physiological control). Both controllers activated an artificial pneumatic muscle providing plantar flexion torque. Methods: Subjects walked on a treadmill for two thirty-minute sessions spaced three days apart under either footswitch control (n = 6) or myoelectric control (n = 6). We recorded lower limb electromyography (EMG), joint kinematics, and orthosis kinetics. We compared stance phase EMG amplitudes, correlation of joint angle patterns, and mechanical work performed by the powered orthosis between the two controllers over time. Results: During steady state at the end of the second session, subjects using proportional myoelectric control had much lower soleus and gastrocnemius activation than the subjects using footswitch control. The substantial decrease in triceps surae recruitment allowed the proportional myoelectric control subjects to walk with ankle kinematics close to normal and reduce negative work performed by the orthosis. The footswitch control subjects walked with substantially perturbed ankle kinematics and performed more negative work with the orthosis. Conclusion: These results provide evidence that the choice of orthosis control method can greatly alter how humans adapt to powered orthosis assistance during walking. Specifically, proportional myoelectric control results in larger reductions in muscle activation and gait kinematics more similar to normal compared with footswitch control.

  20. Regularized Label Relaxation Linear Regression.

    Science.gov (United States)

    Fang, Xiaozhao; Xu, Yong; Li, Xuelong; Lai, Zhihui; Wong, Wai Keung; Fang, Bingwu

    2018-04-01

    Linear regression (LR) and some of its variants have been widely used for classification problems. Most of these methods assume that during the learning phase, the training samples can be exactly transformed into a strict binary label matrix, which has too little freedom to fit the labels adequately. To address this problem, in this paper, we propose a novel regularized label relaxation LR method, which has the following notable characteristics. First, the proposed method relaxes the strict binary label matrix into a slack variable matrix by introducing a nonnegative label relaxation matrix into LR, which provides more freedom to fit the labels and simultaneously enlarges the margins between different classes as much as possible. Second, the proposed method constructs the class compactness graph based on manifold learning and uses it as the regularization item to avoid the problem of overfitting. The class compactness graph is used to ensure that the samples sharing the same labels can be kept close after they are transformed. Two different algorithms, based respectively on the l2-norm and l2,1-norm loss functions, are devised. These two algorithms have compact closed-form solutions in each iteration so that they are easily implemented. Extensive experiments show that these two algorithms outperform the state-of-the-art algorithms in terms of classification accuracy and running time.