WorldWideScience

Sample records for no-prior Levenberg-Marquardt regularization

  1. A Modified Levenberg-Marquardt Method for Nonsmooth Equations with Finitely Many Maximum Functions

    Directory of Open Access Journals (Sweden)

    Shou-qiang Du

    2008-01-01

    For solving nonsmooth systems of equations, the Levenberg-Marquardt method and its variants are of particular importance because of their locally fast convergence rates. Systems of finitely many maximum functions are very useful in the study of nonlinear complementarity problems, variational inequality problems, Karush-Kuhn-Tucker systems of nonlinear programming problems, and many problems in mechanics and engineering. In this paper, we present a modified Levenberg-Marquardt method for nonsmooth equations with finitely many maximum functions. Under mild assumptions, the present method is shown to converge Q-linearly. Some numerical results comparing the proposed method with classical reformulations indicate that the modified Levenberg-Marquardt algorithm works quite well in practice.
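    A minimal sketch of the damped (Levenberg-Marquardt) iteration that underlies methods of this kind, applied to a toy two-equation system containing a maximum function. The simple 3x damping update, the toy system, and the choice of a single element of the generalized Jacobian at the kink are illustrative assumptions; the paper's specific modification is not reproduced here.

    ```python
    import numpy as np

    def levenberg_marquardt(F, J, x0, mu0=1e-2, tol=1e-10, max_iter=100):
        """Minimal Levenberg-Marquardt iteration for a square system F(x) = 0.

        Solves (J^T J + mu*I) d = -J^T F at each step and adapts the damping
        parameter mu according to whether the step reduces ||F||."""
        x, mu = np.asarray(x0, dtype=float), mu0
        for _ in range(max_iter):
            Fx, Jx = F(x), J(x)
            if np.linalg.norm(Fx) < tol:
                break
            d = np.linalg.solve(Jx.T @ Jx + mu * np.eye(x.size), -Jx.T @ Fx)
            if np.linalg.norm(F(x + d)) < np.linalg.norm(Fx):
                x, mu = x + d, mu / 3.0   # good step: accept, move toward Gauss-Newton
            else:
                mu *= 3.0                 # bad step: increase damping
        return x

    # Toy nonsmooth system: max(x0, 0.5*x1) = 1 and x0 + x1 = 3.
    F = lambda x: np.array([max(x[0], 0.5 * x[1]) - 1.0, x[0] + x[1] - 3.0])

    def J(x):  # one element of the generalized Jacobian at the kink
        a = 1.0 if x[0] >= 0.5 * x[1] else 0.0
        return np.array([[a, 0.5 * (1.0 - a)], [1.0, 1.0]])

    print(levenberg_marquardt(F, J, [5.0, 5.0]))  # converges to approx. [1.0, 2.0]
    ```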

  2. Data classification using metaheuristic Cuckoo Search technique for Levenberg Marquardt back propagation (CSLM) algorithm

    Science.gov (United States)

    Nawi, Nazri Mohd.; Khan, Abdullah; Rehman, M. Z.

    2015-05-01

    Nature-inspired metaheuristic techniques provide derivative-free solutions to complex problems. One of the latest additions to this group of nature-inspired optimization procedures is the Cuckoo Search (CS) algorithm. Artificial Neural Network (ANN) training is an optimization task, since the goal of the training process is to find an optimal set of weights for the network. Traditional training algorithms have limitations such as getting trapped in local minima and slow convergence rates. This study proposes a new technique, CSLM, which combines the best features of two known algorithms, back-propagation (BP) and Levenberg-Marquardt (LM), to improve the convergence speed of ANN training and to avoid the local minima problem. Selected benchmark classification datasets are used for simulation. The experimental results show that the proposed Cuckoo Search with Levenberg-Marquardt algorithm performs better than the other algorithms used in this study.

  3. Application of the Levenberg-Marquardt Scheme to the MUSIC Algorithm for AOA Estimation

    Directory of Open Access Journals (Sweden)

    Joon-Ho Lee

    2013-01-01

    …can be expressed in a least squares form. Based on this observation, we present a rigorous Levenberg-Marquardt (LM) formulation of the MUSIC algorithm for simultaneous estimation of an azimuth and an elevation. We show a convergence property and compare the performance of the LM-based MUSIC algorithm with that of the standard MUSIC algorithm via Monte-Carlo simulation. We also compare the performance of the MUSIC algorithm with that of the Capon algorithm, both for the standard implementation and for the LM-based implementation.

  4. Comparison result of inversion of gravity data of a fault by particle swarm optimization and Levenberg-Marquardt methods.

    Science.gov (United States)

    Toushmalani, Reza

    2013-01-01

    The purpose of this study was to compare the performance of two methods for gravity inversion of a fault. The first method, particle swarm optimization (PSO), is a heuristic global optimization method based on swarm intelligence; it originates from research on the flocking behavior of birds and fish. The second method, the Levenberg-Marquardt (LM) algorithm, is an approximation to Newton's method that is also used for training ANNs. In this paper we first discuss the gravity field of a fault, then describe the PSO and LM algorithms and present their application to solving the inverse problem of a fault. Most importantly, the parameters for the algorithms are given for the individual tests. The inverse solution reveals that the fault model parameters agree quite well with the known results. Closer agreement between the predicted model anomaly and the observed gravity anomaly was found with the PSO method than with the LM method.

  5. Assessment of Polarization Effect on Efficiency of Levenberg-Marquardt Algorithm in Case of Thin Atmosphere over Black Surface

    Science.gov (United States)

    Korkin, S.; Lyapustin, A.

    2012-12-01

    The Levenberg-Marquardt algorithm [1, 2] provides a numerical iterative solution to the problem of minimizing a function over a space of its parameters. In our work, the Levenberg-Marquardt algorithm retrieves optical parameters of a thin (single scattering) plane-parallel atmosphere irradiated by a collimated, infinitely wide monochromatic beam of light. A black ground surface is assumed. Computational accuracy, sensitivity to the initial guess and to the presence of noise in the signal, and other properties of the algorithm are investigated in scalar (using intensity only) and vector (including polarization) modes. We consider an atmosphere that contains a mixture of coarse and fine fractions. Following [3], the fractions are simulated using the Henyey-Greenstein model. Though not realistic, this assumption is very convenient for tests [4, p.354]; in our case it yields analytical evaluation of the Jacobian matrix. Assuming the MISR geometry of observation [5] as an example, the two average scattering cosines, the ratio of coarse to fine fractions, the atmospheric optical depth, and the single scattering albedo are the five parameters to be determined numerically. In our implementation of the algorithm, the system of five linear equations is solved using the fast Cramer's rule [6]. A simple subroutine developed by the authors makes the algorithm independent of external libraries. All Fortran 90/95 codes discussed in the presentation will be available immediately after the meeting from sergey.v.korkin@nasa.gov by request. [1]. Levenberg K, A method for the solution of certain non-linear problems in least squares, Quarterly of Applied Mathematics, 1944, V.2, P.164-168. [2]. Marquardt D, An algorithm for least-squares estimation of nonlinear parameters, Journal of the Society for Industrial and Applied Mathematics, 1963, V.11, N.2, P.431-441. [3]. Hovenier JW, Multiple scattering of polarized light in planetary atmospheres, Astronomy and Astrophysics, 1971, V.13, P.7-29. [4]. Mishchenko MI, Travis LD
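    The authors' Fortran 90/95 implementation is not reproduced here; the following NumPy sketch (with a random stand-in Jacobian and residual vector) illustrates how one damped normal-equation step for the five retrieval parameters can be solved with Cramer's rule, as the abstract describes.

    ```python
    import numpy as np

    def cramer_solve(A, b):
        """Solve A x = b by Cramer's rule: x_i = det(A_i) / det(A), where A_i
        is A with column i replaced by b. Practical only for very small
        systems, such as the 5x5 system arising here."""
        det_A = np.linalg.det(A)
        x = np.empty_like(b, dtype=float)
        for i in range(len(b)):
            Ai = A.copy()
            Ai[:, i] = b
            x[i] = np.linalg.det(Ai) / det_A
        return x

    # One Levenberg-Marquardt step for the five retrieval parameters:
    # (J^T J + mu*I) dp = J^T r, with J the Jacobian and r the residuals.
    rng = np.random.default_rng(0)
    J = rng.standard_normal((40, 5))   # stand-in Jacobian (40 observations)
    r = rng.standard_normal(40)        # stand-in residual vector
    A = J.T @ J + 1e-2 * np.eye(5)
    dp = cramer_solve(A, J.T @ r)
    assert np.allclose(dp, np.linalg.solve(A, J.T @ r))
    ```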

  6. A Numerical Approach to Solving an Inverse Heat Conduction Problem Using the Levenberg-Marquardt Algorithm

    Directory of Open Access Journals (Sweden)

    Tao Min

    2014-01-01

    This paper provides a numerical algorithm combining the Levenberg-Marquardt algorithm and the Galerkin finite element method for estimating the diffusion coefficient in an inverse heat conduction problem (IHCP). In the present study, the functional form of the diffusion coefficient is unknown a priori. The unknown diffusion coefficient is approximated in polynomial form, and the present numerical algorithm is employed to find the solution. Numerical experiments are presented to show the efficiency of the proposed method.

  7. Modified Levenberg-Marquardt Method for RÖSSLER Chaotic System Fuzzy Modeling Training

    Science.gov (United States)

    Wang, Yu-Hui; Wu, Qing-Xian; Jiang, Chang-Sheng; Xue, Ya-Li; Fang, Wei

    Generally, fuzzy approximation models require some human knowledge and experience. The operator's experience enters the mathematics of fuzzy theory as a collection of heuristic rules. The main goal of this paper is to present a new method for identifying unknown nonlinear dynamics, such as the Rössler system, without any human knowledge. Instead of heuristic rules, the presented method uses input-output data pairs to identify the Rössler chaotic system. The training algorithm is a modified Levenberg-Marquardt (L-M) method, which can adjust the parameters of each linear polynomial and of the fuzzy membership functions online, and does not rely excessively on experts' experience. Finally, it is applied to fuzzy identification of the Rössler chaotic system. Compared with the standard L-M method, the convergence speed is accelerated. The simulation results demonstrate the effectiveness of the proposed method.

  8. Efficient Levenberg-Marquardt minimization of the maximum likelihood estimator for Poisson deviates

    International Nuclear Information System (INIS)

    Laurence, T.; Chromy, B.

    2010-01-01

    The maximum likelihood estimator (MLE) for the Poisson distribution is also well known, but has not become generally used. This is primarily because, in contrast to non-linear least squares fitting, there has been no quick, robust, and general fitting method. In the field of fluorescence lifetime spectroscopy and imaging, there have been some efforts to use this estimator through minimization routines such as Nelder-Mead optimization, exhaustive line searches, and Gauss-Newton minimization. Minimization based on specific one- or multi-exponential models has been used to obtain quick results, but this procedure does not allow the incorporation of the instrument response and is not generally applicable to models found in other fields. Methods for using the MLE for Poisson-distributed data have been published by the wider spectroscopic community, including iterative minimization schemes based on Gauss-Newton minimization. The slow acceptance of these procedures for fitting event-counting histograms may also be explained by the ubiquity of the fast Levenberg-Marquardt (L-M) procedure for fitting non-linear models by least squares (simple searches return ∼10000 references; this does not include those who use it but do not know they are using it). The benefits of L-M include a seamless transition between Gauss-Newton minimization and downward gradient minimization through the use of a regularization parameter. This transition is desirable because Gauss-Newton methods converge quickly, but only within a limited domain of convergence, whereas downward gradient methods have a much wider domain of convergence but converge extremely slowly near the minimum. L-M has the advantages of both procedures: relative insensitivity to initial parameters and rapid convergence. Scientists, when wanting an answer quickly, will fit data using L-M, get an answer, and move on. Only those that are aware of the bias issues will bother to fit using the more appropriate MLE for Poisson
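    The contrast the abstract draws can be sketched with standard SciPy routines (this is not the paper's L-M-for-MLE scheme): a least-squares fit, for which curve_fit defaults to Levenberg-Marquardt, versus direct minimization of the Poisson negative log-likelihood with a generic minimizer. The single-exponential model and the data are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit, minimize

    # Synthetic event-counting histogram: single-exponential decay, low counts.
    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 10.0, 50)
    model = lambda t, A, tau: A * np.exp(-t / tau)
    counts = rng.poisson(model(t, 20.0, 2.0))

    # Least-squares fit (curve_fit uses Levenberg-Marquardt when unconstrained);
    # biased at low counts because Poisson noise is not Gaussian.
    p_lsq, _ = curve_fit(model, t, counts, p0=(10.0, 1.0))

    # Poisson maximum-likelihood fit: minimize the negative log-likelihood
    # (up to terms that do not depend on the parameters).
    def nll(p):
        mu = np.clip(model(t, *p), 1e-12, None)  # keep predicted rates positive
        return np.sum(mu - counts * np.log(mu))

    p_mle = minimize(nll, x0=(10.0, 1.0), method="Nelder-Mead").x
    print("LSQ:", p_lsq, " MLE:", p_mle)
    ```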

  9. Optimization of the solution of the inverse problem in geophysics using the Levenberg-Marquardt supervised training algorithm

    Directory of Open Access Journals (Sweden)

    Figueredo Baez Yaqueline

    2002-08-01

    This work introduces a methodology for the supervised training of neural networks using the Levenberg-Marquardt algorithm. The method is applied in gravimetry to optimize convergence in the inverse problem.

  10. Technical Note: Variance-covariance matrix and averaging kernels for the Levenberg-Marquardt solution of the retrieval of atmospheric vertical profiles

    Directory of Open Access Journals (Sweden)

    S. Ceccherini

    2010-03-01

    The variance-covariance matrix (VCM) and the averaging kernel matrix (AKM) are widely used tools to characterize atmospheric vertical profiles retrieved from remote sensing measurements. Accurate estimation of these quantities is essential both for the evaluation of the quality of the retrieved profiles and for the correct use of the profiles themselves in subsequent applications such as data comparison, data assimilation and data fusion. We propose a new method to estimate the VCM and AKM of vertical profiles retrieved using the Levenberg-Marquardt iterative technique. We apply the new method to the inversion of simulated limb emission measurements. Then we compare the obtained VCM and AKM with those resulting from other methods already published in the literature and with accurate estimates derived using statistical and numerical estimators. The proposed method accounts for all the iterations done in the inversion and provides the most accurate VCM and AKM. Furthermore, it correctly estimates the VCM and the AKM even if the retrieval iterations are stopped when a physically meaningful convergence criterion is fulfilled, i.e. before numerical convergence at machine precision is reached. The method can be easily implemented in any Levenberg-Marquardt iterative retrieval scheme, either constrained or unconstrained, without significant computational overhead.
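    For reference, the standard single-step expressions that such methods generalize, in an assumed notation (Jacobian K, measurement-noise covariance S_y, regularization matrix R); the paper's contribution, the correct propagation of these quantities through all Levenberg-Marquardt iterations, is not reproduced here:

    ```latex
    \mathbf{G} = \left(\mathbf{K}^{\mathrm{T}}\mathbf{S}_y^{-1}\mathbf{K}
                 + \mathbf{R}\right)^{-1}\mathbf{K}^{\mathrm{T}}\mathbf{S}_y^{-1},
    \qquad
    \mathbf{A} = \mathbf{G}\mathbf{K},
    \qquad
    \mathbf{S}_x = \mathbf{G}\,\mathbf{S}_y\,\mathbf{G}^{\mathrm{T}}
    ```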

  11. Absorption and scattering coefficients estimation in two-dimensional participating media using the generalized maximum entropy and Levenberg-Marquardt methods

    Energy Technology Data Exchange (ETDEWEB)

    Berrocal T, Mariella J. [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear; Universidad Nacional de Ingenieria, Lima (Peru)]; Roberty, Nilson C. [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear]; Silva Neto, Antonio J. [Universidade do Estado, Nova Friburgo, RJ (Brazil). Instituto Politecnico. Dept. de Engenharia Mecanica e Energia; Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear]

    2002-07-01

    The solution of inverse problems in participating media, where there is emission, absorption and scattering of radiation, has several applications in engineering and medicine. The objective of this work is to estimate the absorption and scattering coefficients in two-dimensional heterogeneous participating media, using independently the Generalized Maximum Entropy and Levenberg-Marquardt methods. Both methods are based on the solution of the direct problem, which is modeled by the Boltzmann equation in Cartesian geometry. Some test cases are presented. (author)

  12. Absorption and scattering coefficients estimation in two-dimensional participating media using the generalized maximum entropy and Levenberg-Marquardt methods

    International Nuclear Information System (INIS)

    Berrocal T, Mariella J.; Roberty, Nilson C.; Silva Neto, Antonio J.

    2002-01-01

    The solution of inverse problems in participating media, where there is emission, absorption and scattering of radiation, has several applications in engineering and medicine. The objective of this work is to estimate the absorption and scattering coefficients in two-dimensional heterogeneous participating media, using independently the Generalized Maximum Entropy and Levenberg-Marquardt methods. Both methods are based on the solution of the direct problem, which is modeled by the Boltzmann equation in Cartesian geometry. Some test cases are presented. (author)

  13. Levenberg-Marquardt application to two-phase nonlinear parameter estimation for finned-tube coil evaporators

    Directory of Open Access Journals (Sweden)

    2006-01-01

    A procedure for the calculation of the refrigerant mass flow rate is implemented in a distributed numerical model that simulates the flow in finned-tube coil dry-expansion evaporators, usually found in refrigeration and air-conditioning systems. Two-phase refrigerant flow inside the tubes is assumed to be one-dimensional, unsteady, and homogeneous. The model takes into account the effects of refrigerant pressure drop and of moisture condensation from the air flowing over the external surface of the tubes. The results obtained are the distributions of refrigerant velocity, temperature and void fraction, tube-wall temperature, air temperature, and absolute humidity. The finite volume method is used to discretize the governing equations. Additionally, given the operating conditions and the geometric parameters, the model allows the calculation of the refrigerant mass flow rate. The value of the mass flow rate is computed by parameter estimation using the Levenberg-Marquardt minimization method. In order to validate the developed model, results obtained using HFC-134a as refrigerant are compared with available data from the literature.

  14. Hourly predictive Levenberg-Marquardt ANN and multi linear regression models for predicting of dew point temperature

    Science.gov (United States)

    Zounemat-Kermani, Mohammad

    2012-08-01

    In this study, the ability of two models, multiple linear regression (MLR) and a Levenberg-Marquardt (LM) feed-forward neural network, was examined for estimating the hourly dew point temperature. The dew point temperature is the temperature at which water vapor in the air condenses into liquid. It can be useful in estimating meteorological variables such as fog, rain, snow, dew, and evapotranspiration, and in investigating agronomical issues such as stomatal closure in plants. The availability of hourly records of climatic data (air temperature, relative humidity and pressure) which could be used to predict the dew point temperature initiated the modeling exercise. Additionally, the wind vector (wind speed magnitude and direction) and a conceptual input of weather condition were employed as further input variables. Three quantitative standard statistical performance evaluation measures, i.e. the root mean squared error, the mean absolute error, and the absolute logarithmic Nash-Sutcliffe efficiency coefficient (|Log(NS)|), were employed to evaluate the performance of the developed models. The results showed that applying the wind vector and weather condition as inputs along with the meteorological variables could slightly increase the predictive accuracy of the ANN and MLR models. The results also revealed that the LM-NN model was superior to the MLR model, and the best performance was obtained by considering all potential input variables, in terms of the different evaluation criteria.

  15. Long-term prediction of chaotic time series with multi-step prediction horizons by a neural network with Levenberg-Marquardt learning algorithm

    International Nuclear Information System (INIS)

    Mirzaee, Hossein

    2009-01-01

    The Levenberg-Marquardt learning algorithm is applied to train a multilayer perceptron with three hidden layers, each with ten neurons, in order to carefully map the structure of chaotic time series such as the Mackey-Glass time series. First the MLP network is trained with 1000 data points and then tested with the next 500 data points. The trained and tested network is then applied to long-term prediction of the next 120 data points following the test data. The prediction proceeds as follows: the first network inputs are the last four values of the test data; each predicted value is then shifted into the regression vector that forms the network input, so that after the first four prediction steps the input regression vector consists entirely of predicted values, and each new prediction is in turn shifted into the input vector for the subsequent step.
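    The recursive prediction loop described above, in a short sketch. Since scikit-learn does not offer Levenberg-Marquardt training, MLPRegressor is used as a stand-in for the paper's LM-trained network, and a noisy sine wave replaces the Mackey-Glass series; the order-4 regression vector follows the abstract.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPRegressor  # stand-in for an LM-trained MLP

    rng = np.random.default_rng(2)
    series = np.sin(0.3 * np.arange(1700)) + 0.1 * rng.standard_normal(1700)

    def embed(x, order=4):
        """Map the series to (input, target) pairs: last `order` values -> next value."""
        X = np.column_stack([x[i:len(x) - order + i] for i in range(order)])
        return X, x[order:]

    X_train, y_train = embed(series[:1000])           # train on the first 1000 points
    model = MLPRegressor(hidden_layer_sizes=(10, 10, 10), max_iter=2000)
    model.fit(X_train, y_train)                       # (test set: series[1000:1500])

    # Long-term prediction: seed with the last four test values, then keep
    # shifting each prediction into the regression vector; after four steps
    # the input consists entirely of predicted values.
    window = list(series[1496:1500])
    preds = []
    for _ in range(120):
        y_hat = model.predict([window])[0]
        preds.append(y_hat)
        window = window[1:] + [y_hat]
    ```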

  16. Bearing Fault Diagnosis under Variable Speed Using Convolutional Neural Networks and the Stochastic Diagonal Levenberg-Marquardt Algorithm

    Directory of Open Access Journals (Sweden)

    Viet Tra

    2017-12-01

    This paper presents a novel method for diagnosing incipient bearing defects under variable operating speeds using convolutional neural networks (CNNs) trained via the stochastic diagonal Levenberg-Marquardt (S-DLM) algorithm. The CNNs utilize the spectral energy maps (SEMs) of the acoustic emission (AE) signals as inputs and automatically learn the optimal features, which yield the best discriminative models for diagnosing incipient bearing defects under variable operating speeds. The SEMs are two-dimensional maps that show the distribution of energy across different bands of the AE spectrum. It is hypothesized that a variation of the bearing's speed would not alter the overall shape of the AE spectrum; rather, it may only scale and translate it. Thus, at different speeds, the same defect would yield SEMs that are scaled and shifted versions of each other. This hypothesis is confirmed by the experimental results, where CNNs trained using the S-DLM algorithm yield significantly better diagnostic performance under variable operating speeds compared to existing methods. In this work, the performance of different training algorithms is also evaluated to select the best training algorithm for the CNNs. The proposed method is used to diagnose both single and compound defects at six different operating speeds.

  17. Information operator approach and iterative regularization methods for atmospheric remote sensing

    International Nuclear Information System (INIS)

    Doicu, A.; Hilgers, S.; Bargen, A. von; Rozanov, A.; Eichmann, K.-U.; Savigny, C. von; Burrows, J.P.

    2007-01-01

    In this study, we present the main features of the information operator approach for solving linear inverse problems arising in atmospheric remote sensing. This method is superior to the stochastic version of the Tikhonov regularization (or the optimal estimation method) due to its capability to filter out the noise-dominated components of the solution generated by an inappropriate choice of the regularization parameter. We extend this approach to iterative methods for nonlinear ill-posed problems and derive the truncated versions of the Gauss-Newton and Levenberg-Marquardt methods. Although the paper mostly focuses on discussing the mathematical details of the inverse method, retrieval results have been provided, which exemplify the performance of the methods. These results correspond to the NO2 retrieval from SCIAMACHY limb scatter measurements and have been obtained using the retrieval processors developed at the German Aerospace Center Oberpfaffenhofen and the Institute of Environmental Physics of the University of Bremen.

  18. Pap-smear Classification Using Efficient Second Order Neural Network Training Algorithms

    DEFF Research Database (Denmark)

    Ampazis, Nikolaos; Dounias, George; Jantzen, Jan

    2004-01-01

    In this paper we make use of two highly efficient second order neural network training algorithms, namely the LMAM (Levenberg-Marquardt with Adaptive Momentum) and OLMAM (Optimized Levenberg-Marquardt with Adaptive Momentum), for the construction of an efficient pap-smear test classifier.

  19. Pap-smear Classification Using Efficient Second Order Neural Network Training Algorithms

    DEFF Research Database (Denmark)

    Ampazis, Nikolaos; Dounias, George; Jantzen, Jan

    2004-01-01

    In this paper we make use of two highly efficient second order neural network training algorithms, namely the LMAM (Levenberg-Marquardt with Adaptive Momentum) and OLMAM (Optimized Levenberg-Marquardt with Adaptive Momentum), for the construction of an efficient pap-smear test classifier. The algorithms are methodologically similar, and are based on iterations of the form employed in the Levenberg-Marquardt (LM) method for non-linear least squares problems, with the inclusion of an additional adaptive momentum term arising from the formulation of the training task as a constrained optimization…

  20. Improving Conductivity Image Quality Using Block Matrix-based Multiple Regularization (BMMR) Technique in EIT: A Simulation Study

    Directory of Open Access Journals (Sweden)

    Tushar Kanti Bera

    2011-06-01

    A Block Matrix based Multiple Regularization (BMMR) technique is proposed for improving conductivity image quality in EIT. The response matrix (JᵀJ) is partitioned into several sub-block matrices, and the highest eigenvalue of each sub-block matrix is chosen as the regularization parameter for the nodes contained in that sub-block. Simulated boundary data are generated for a circular domain with a circular inhomogeneity, and the conductivity images are reconstructed with a Model Based Iterative Image Reconstruction (MoBIIR) algorithm. Conductivity images are reconstructed with the BMMR technique and the results are compared with the Single-step Tikhonov Regularization (STR) and modified Levenberg-Marquardt Regularization (LMR) methods. It is observed that the BMMR technique reduces the projection error and the solution error and improves the conductivity reconstruction in EIT. Results show that the BMMR method also improves the image contrast and the inhomogeneity conductivity profile, and hence the reconstructed image quality is enhanced. doi:10.5617/jeb.170, J Electr Bioimp, vol. 2, pp. 33-47, 2011
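    A NumPy sketch of the node-wise regularization rule described above, assuming that consecutive index ranges of the response matrix correspond to groups of mesh nodes; the Jacobian and residual are random stand-ins, and the MoBIIR reconstruction itself is not reproduced.

    ```python
    import numpy as np

    def bmmr_lambda(JtJ, block_size):
        """Partition the response matrix (J^T J) into diagonal sub-blocks and
        use each sub-block's largest eigenvalue as the regularization
        parameter for the nodes that the sub-block contains."""
        n = JtJ.shape[0]
        lam = np.empty(n)
        for start in range(0, n, block_size):
            stop = min(start + block_size, n)
            lam[start:stop] = np.linalg.eigvalsh(JtJ[start:stop, start:stop]).max()
        return lam

    # One regularized update step with the node-wise parameters:
    rng = np.random.default_rng(3)
    J = rng.standard_normal((64, 16))   # stand-in Jacobian (16 conductivity nodes)
    r = rng.standard_normal(64)         # stand-in boundary-voltage residual
    JtJ = J.T @ J
    lam = bmmr_lambda(JtJ, block_size=4)
    dx = np.linalg.solve(JtJ + np.diag(lam), J.T @ r)
    ```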

  1. Mixed Total Variation and L1 Regularization Method for Optical Tomography Based on Radiative Transfer Equation

    Directory of Open Access Journals (Sweden)

    Jinping Tang

    2017-01-01

    Optical tomography is an emerging and important molecular imaging modality. Its aim is to reconstruct the optical properties of human tissues. In this paper, we focus on reconstructing the absorption coefficient based on the radiative transfer equation (RTE). It is an ill-posed parameter identification problem. Regularization methods have been broadly applied to reconstruct the optical coefficients, such as the total variation (TV) regularization and the L1 regularization. In order to better reconstruct piecewise constant and sparse coefficient distributions, the TV and L1 norms are combined as the regularization. The forward problem is discretized with the discontinuous Galerkin method in the spatial variable and the finite element method in the angular variable. The minimization problem is solved by a Jacobian-based Levenberg-Marquardt type method equipped with a split Bregman algorithm for the L1 regularization. We use the adjoint method to compute the Jacobian matrix, which dramatically improves the computational efficiency. Compared with other imaging reconstruction methods based on TV and L1 regularizations, the simulation results show the validity and efficiency of the proposed method.

  2. Recursive Bayesian recurrent neural networks for time-series modeling.

    Science.gov (United States)

    Mirikitani, Derrick T; Nikolaev, Nikolay

    2010-02-01

    This paper develops a probabilistic approach to recursive second-order training of recurrent neural networks (RNNs) for improved time-series modeling. A general recursive Bayesian Levenberg-Marquardt algorithm is derived to sequentially update the weights and the covariance (Hessian) matrix. The main strengths of the approach are a principled handling of the regularization hyperparameters that leads to better generalization, and stable numerical performance. The framework involves the adaptation of a noise hyperparameter and local weight prior hyperparameters, which represent the noise in the data and the uncertainties in the model parameters. Experimental investigations using artificial and real-world data sets show that RNNs equipped with the proposed approach outperform standard real-time recurrent learning and extended Kalman training algorithms for recurrent networks, as well as other contemporary nonlinear neural models, on time-series modeling.

  3. Tip-tilt disturbance model identification based on non-linear least squares fitting for Linear Quadratic Gaussian control

    Science.gov (United States)

    Yang, Kangjian; Yang, Ping; Wang, Shuai; Dong, Lizhi; Xu, Bing

    2018-05-01

    We propose a method to identify the tip-tilt disturbance model for Linear Quadratic Gaussian control. This identification method, based on the Levenberg-Marquardt method, requires only a little prior information and no auxiliary system, and it is convenient for identifying the tip-tilt disturbance model online for real-time control. It thus allows Linear Quadratic Gaussian control to run efficiently in different adaptive optics systems for vibration mitigation. The validity of Linear Quadratic Gaussian control combined with this tip-tilt disturbance model identification method is verified with experimental data replayed in simulation.

  4. PRIFIRA: General regularization using prior-conditioning for fast radio interferometric imaging

    Science.gov (United States)

    Naghibzadeh, Shahrzad; van der Veen, Alle-Jan

    2018-06-01

    Image formation in radio astronomy is a large-scale inverse problem that is inherently ill-posed. We present a general algorithmic framework based on a Bayesian-inspired regularized maximum likelihood formulation of the radio astronomical imaging problem with a focus on diffuse emission recovery from limited noisy correlation data. The algorithm is dubbed PRIor-conditioned Fast Iterative Radio Astronomy (PRIFIRA) and is based on a direct embodiment of the regularization operator into the system by right preconditioning. The resulting system is then solved using an iterative method based on projections onto Krylov subspaces. We motivate the use of a beamformed image (which includes the classical "dirty image") as an efficient prior-conditioner. Iterative reweighting schemes generalize the algorithmic framework and can account for different regularization operators that encourage sparsity of the solution. The performance of the proposed method is evaluated based on simulated one- and two-dimensional array arrangements as well as actual data from the core stations of the Low Frequency Array radio telescope antenna configuration, and compared to state-of-the-art imaging techniques. We show the generality of the proposed method in terms of regularization schemes while maintaining a competitive reconstruction quality with the current reconstruction techniques. Furthermore, we show that exploiting Krylov subspace methods together with the proper noise-based stopping criteria results in a great improvement in imaging efficiency.

  5. Frequency guided methods for demodulation of a single fringe pattern.

    Science.gov (United States)

    Wang, Haixia; Kemao, Qian

    2009-08-17

    Phase demodulation from a single fringe pattern is a challenging but interesting task. A frequency-guided regularized phase tracker and a frequency-guided sequential demodulation method with Levenberg-Marquardt optimization are proposed to demodulate a single fringe pattern. In both methods the demodulation path is guided by the local frequency, from the highest to the lowest. Since critical points have low local frequency values, they are processed last, so that the spurious sign problem caused by these points is avoided. These two methods can be considered as alternatives to the effective fringe follower regularized phase tracker. Demodulation results from one computer-simulated and two experimental fringe patterns using the proposed methods are demonstrated.

  6. Fitting sediment rating curves using regression analysis: a case study of Russian Arctic rivers

    Directory of Open Access Journals (Sweden)

    N. I. Tananaev

    2015-03-01

    Published suspended sediment data for Arctic rivers are scarce. Suspended sediment rating curves for three medium to large rivers of the Russian Arctic were obtained using various curve-fitting techniques. Due to the biased sampling strategy, the raw datasets do not exhibit a log-normal distribution, which restricts the applicability of a log-transformed linear fit. Non-linear (power) model coefficients were estimated using the Levenberg-Marquardt, Nelder-Mead and Hooke-Jeeves algorithms, all of which generally showed close agreement. A non-linear power model employing the Levenberg-Marquardt parameter evaluation algorithm was identified as the optimal statistical solution of the problem. Long-term annual suspended sediment loads estimated using the non-linear power model are, in general, consistent with previously published results.
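    A sketch of the non-linear power-model fit described above, using SciPy's curve_fit, which defaults to the Levenberg-Marquardt algorithm for unconstrained problems; the discharge and concentration values are made up for illustration.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    # Power-model rating curve SSC = a * Q**b, fitted directly in linear space
    # rather than via a log-transformed linear fit.
    power = lambda Q, a, b: a * Q**b

    Q = np.array([120.0, 340.0, 560.0, 910.0, 1500.0, 2300.0])  # discharge, m^3/s (made up)
    SSC = np.array([15.0, 42.0, 80.0, 150.0, 290.0, 520.0])     # sediment conc., mg/L (made up)

    (a, b), cov = curve_fit(power, Q, SSC, p0=(0.01, 1.0))      # Levenberg-Marquardt fit
    print(f"SSC = {a:.4g} * Q^{b:.3g}")
    ```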

  7. Fitting sediment rating curves using regression analysis: a case study of Russian Arctic rivers

    Science.gov (United States)

    Tananaev, N. I.

    2015-03-01

    Published suspended sediment data for Arctic rivers are scarce. Suspended sediment rating curves for three medium to large rivers of the Russian Arctic were obtained using various curve-fitting techniques. Due to the biased sampling strategy, the raw datasets do not exhibit a log-normal distribution, which restricts the applicability of a log-transformed linear fit. Non-linear (power) model coefficients were estimated using the Levenberg-Marquardt, Nelder-Mead and Hooke-Jeeves algorithms, all of which generally showed close agreement. A non-linear power model employing the Levenberg-Marquardt parameter evaluation algorithm was identified as the optimal statistical solution of the problem. Long-term annual suspended sediment loads estimated using the non-linear power model are, in general, consistent with previously published results.

  8. Phantom experiments using soft-prior regularization EIT for breast cancer imaging.

    Science.gov (United States)

    Murphy, Ethan K; Mahara, Aditya; Wu, Xiaotian; Halter, Ryan J

    2017-06-01

    A soft-prior regularization (SR) electrical impedance tomography (EIT) technique for breast cancer imaging is described, which shows an ability to accurately reconstruct tumor/inclusion conductivity values within a dense breast model investigated using a cylindrical and a breast-shaped tank. The SR-EIT method relies on knowing the spatial location of a suspicious lesion initially detected from a second imaging modality. Standard approaches (using Laplace smoothing and total variation regularization) without prior structural information are unable to accurately reconstruct or detect the tumors. The soft-prior approach represents a very significant improvement to these standard approaches, and has the potential to improve conventional imaging techniques, such as automated whole breast ultrasound (AWB-US), by providing electrical property information of suspicious lesions to improve AWB-US's ability to discriminate benign from cancerous lesions. Specifically, the best soft-regularization technique found average absolute tumor/inclusion errors of 0.015 S m⁻¹ for the cylindrical test and 0.055 S m⁻¹ and 0.080 S m⁻¹ for the breast-shaped tank for 1.8 cm and 2.5 cm inclusions, respectively. The standard approaches were statistically unable to distinguish the tumor from the mammary gland tissue. An analysis of false tumors (benign suspicious lesions) provides extra insight into the potential and challenges EIT has for providing clinically relevant information. The ability to obtain accurate conductivity values of a suspicious lesion (>1.8 cm) detected from another modality (e.g. AWB-US) could significantly reduce false positives and result in a clinically important technology.

  9. Marquardt's Phi mask: pitfalls of relying on fashion models and the golden ratio to describe a beautiful face.

    Science.gov (United States)

    Holland, E

    2008-03-01

    Stephen Marquardt has derived a mask from the golden ratio that he claims represents the "ideal" facial archetype. Many have found his mask convincing, including cosmetic surgeons. However, Marquardt's mask is associated with numerous problems. The method used to examine goodness of fit with the proportions in the mask is faulty. The mask is ill-suited for non-European populations, especially sub-Saharan Africans and East Asians. The mask also appears to approximate the face shape of masculinized European women. Given that the general public strongly and overwhelmingly prefers above average facial femininity in women, white women seeking aesthetic facial surgery would be ill-advised to aim toward a better fit with Marquardt's mask. This article aims to show the proper way of assessing goodness of fit with Marquardt's mask, to address the shape of the mask as it pertains to masculinity-femininity, and to discuss the broader issue of an objective assessment of facial attractiveness. Generalized Procrustes analysis is used to show how goodness of fit with Marquardt's mask can be assessed. Thin-plate spline analysis is used to illustrate visually how sample faces, including northwestern European averages, differ from Marquardt's mask. Marquardt's mask best describes the facial proportions of masculinized white women as seen in fashion models. Marquardt's mask does not appear to describe "ideal" face shape even for white women because its proportions are inconsistent with the optimal preferences of most people, especially with regard to femininity.

  10. Implementing learning organization components in Ardabil Regional Water Company based on Marquardt systematic model

    Directory of Open Access Journals (Sweden)

    Shahram Mirzaie Daryani

    2015-09-01

    The main purpose of this study was to survey the implementation of learning organization characteristics, based on the Marquardt systematic model, in the Ardabil Regional Water Company. Two hundred and four staff (164 employees and 40 authorities) participated in the study. For data collection the Marquardt questionnaire, whose validity and reliability had been confirmed, was used. The results of the data analysis showed that learning organization characteristics were applied above the average level in some subsystems of the Marquardt model, and that there was a significant difference between the current position and the ideal position with respect to the application of learning organization characteristics. The results of this study can be used to improve the work processes of organizations and institutions.

  11. Image segmentation with a novel regularized composite shape prior based on surrogate study

    Energy Technology Data Exchange (ETDEWEB)

    Zhao, Tingting, E-mail: tingtingzhao@mednet.ucla.edu; Ruan, Dan, E-mail: druan@mednet.ucla.edu [The Department of Radiation Oncology, University of California, Los Angeles, California 90095 (United States)

    2016-05-15

    Purpose: Incorporating training into image segmentation is a good approach to achieving additional robustness. This work aims to develop an effective strategy to utilize shape prior knowledge, so that the segmentation label evolution can be driven toward the desired global optimum. Methods: In the variational image segmentation framework, a regularization for the composite shape prior is designed to incorporate the geometric relevance of individual training data to the target, which is inferred by an image-based surrogate relevance metric. Specifically, this regularization is imposed on the linear weights of composite shapes and serves as a hyperprior. The overall problem is formulated in a unified optimization setting and a variational block-descent algorithm is derived. Results: The performance of the proposed scheme is assessed in both corpus callosum segmentation from an MR image set and clavicle segmentation based on CT images. The resulting shape composition provides a proper preference for the geometrically relevant training data. A paired Wilcoxon signed rank test demonstrates statistically significant improvement of image segmentation accuracy when compared to the multi-atlas label fusion method and three other benchmark active contour schemes. Conclusions: This work has developed a novel composite shape prior regularization, which achieves superior segmentation performance compared to typical benchmark schemes.

  12. Image segmentation with a novel regularized composite shape prior based on surrogate study

    International Nuclear Information System (INIS)

    Zhao, Tingting; Ruan, Dan

    2016-01-01

    Purpose: Incorporating training into image segmentation is a good approach to achieving additional robustness. This work aims to develop an effective strategy to utilize shape prior knowledge, so that the segmentation label evolution can be driven toward the desired global optimum. Methods: In the variational image segmentation framework, a regularization for the composite shape prior is designed to incorporate the geometric relevance of individual training data to the target, which is inferred by an image-based surrogate relevance metric. Specifically, this regularization is imposed on the linear weights of composite shapes and serves as a hyperprior. The overall problem is formulated in a unified optimization setting and a variational block-descent algorithm is derived. Results: The performance of the proposed scheme is assessed in both corpus callosum segmentation from an MR image set and clavicle segmentation based on CT images. The resulting shape composition provides a proper preference for the geometrically relevant training data. A paired Wilcoxon signed rank test demonstrates statistically significant improvement of image segmentation accuracy when compared to the multi-atlas label fusion method and three other benchmark active contour schemes. Conclusions: This work has developed a novel composite shape prior regularization, which achieves superior segmentation performance compared to typical benchmark schemes.

  13. Prospective regularization design in prior-image-based reconstruction

    International Nuclear Information System (INIS)

    Dang, Hao; Siewerdsen, Jeffrey H; Stayman, J Webster

    2015-01-01

    Prior-image-based reconstruction (PIBR) methods, leveraging patient-specific anatomical information from previous imaging studies and/or sequences, have demonstrated dramatic improvements in dose utilization and image quality for low-fidelity data. However, a proper balance of information from the prior images and information from the measurements is required (e.g. through careful tuning of regularization parameters). Inappropriate selection of reconstruction parameters can lead to detrimental effects including false structures and failure to improve image quality. Traditional methods based on heuristics are subject to error and sub-optimal solutions, while exhaustive searches require a large number of computationally intensive image reconstructions. In this work, we propose a novel method that prospectively estimates the optimal amount of prior image information for accurate admission of specific anatomical changes in PIBR without performing full image reconstructions. This method leverages an analytical approximation to the implicitly defined PIBR estimator, and introduces a predictive performance metric leveraging this analytical form and knowledge of a particular presumed anatomical change whose accurate reconstruction is sought. Additionally, since model-based PIBR approaches tend to be space-variant, a spatially varying prior image strength map is proposed to optimally admit changes everywhere in the image (eliminating the need to know change locations a priori). Studies were conducted in both an ellipse phantom and a realistic thorax phantom emulating a lung nodule surveillance scenario. The proposed method demonstrated accurate estimation of the optimal prior image strength while achieving a substantial computational speedup (about a factor of 20) compared to traditional exhaustive search. Moreover, the use of the proposed prior strength map in PIBR demonstrated accurate reconstruction of anatomical changes without foreknowledge of change locations in the image.

  14. Parallelized Local Volatility Estimation Using GP-GPU Hardware Acceleration

    KAUST Repository

    Douglas, Craig C.; Lee, Hyoseop; Sheen, Dongwoo

    2010-01-01

    We introduce an inverse problem for the local volatility model in option pricing. We solve the problem using the Levenberg-Marquardt algorithm and use the notion of the Fréchet derivative when calculating the Jacobian matrix. We analyze

  15. Implementing learning organization components in Ardabil Regional Water Company based on Marquardt systematic model

    OpenAIRE

    Shahram Mirzaie Daryani; Azadeh Zirak

    2015-01-01

    The main purpose of this study was to survey the implementation of learning organization characteristics based on the Marquardt systematic model in the Ardabil Regional Water Company. Two hundred and four staff (164 employees and 40 authorities) participated in the study. For data collection the Marquardt questionnaire, whose validity and reliability had been confirmed, was used. The results of the data analysis showed that learning organization characteristics were used more than average level in som...

  16. Fitting a defect non-linear model with or without prior, distinguishing nuclear reaction products as an example

    Science.gov (United States)

    Helgesson, P.; Sjöstrand, H.

    2017-11-01

    Fitting a parametrized function to data is important for many researchers and scientists. If the model is non-linear and/or defect, it is not trivial to do correctly and to include an adequate uncertainty analysis. This work presents how the Levenberg-Marquardt algorithm for non-linear generalized least squares fitting can be used with a prior distribution for the parameters and how it can be combined with Gaussian processes to treat model defects. An example, where three peaks in a histogram are to be distinguished, is carefully studied. In particular, the probability r1 for a nuclear reaction to end up in one out of two overlapping peaks is studied. Synthetic data are used to investigate effects of linearizations and other assumptions. For perfect Gaussian peaks, it is seen that the estimated parameters are distributed close to the truth with good covariance estimates. This assumes that the method is applied correctly; for example, prior knowledge should be implemented using a prior distribution and not by assuming that some parameters are perfectly known (if they are not). It is also important to update the data covariance matrix using the fit if the uncertainties depend on the expected value of the data (e.g., for Poisson counting statistics or relative uncertainties). If a model defect is added to the peaks, such that their shape is unknown, a fit which assumes perfect Gaussian peaks becomes unable to reproduce the data, and the results for r1 become biased. It is, however, seen that it is possible to treat the model defect with a Gaussian process with a covariance function tailored for the situation, with hyper-parameters determined by leave-one-out cross validation. The resulting estimates for r1 are virtually unbiased, and the uncertainty estimates agree very well with the underlying uncertainty.
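    One standard way to realize the combination the abstract describes, sketched with SciPy: a Gaussian prior on the parameters is appended to the weighted data residuals as pseudo-observations, so that an ordinary Levenberg-Marquardt least-squares routine performs the generalized fit. The model (a single Gaussian peak rather than three overlapping ones), the data, and the prior values are illustrative assumptions; the paper's Gaussian-process treatment of model defects is not reproduced.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(4)
    x = np.linspace(-3.0, 3.0, 60)
    model = lambda p, x: p[0] * np.exp(-0.5 * ((x - p[1]) / p[2]) ** 2)  # one Gaussian peak
    y = model([5.0, 0.3, 1.0], x) + 0.2 * rng.standard_normal(x.size)
    sigma_y = 0.2                               # data standard deviation

    p_prior = np.array([4.0, 0.0, 1.2])         # prior means (assumed)
    sigma_prior = np.array([2.0, 0.5, 0.4])     # prior standard deviations (assumed)

    def residuals(p):
        data_part = (model(p, x) - y) / sigma_y
        prior_part = (p - p_prior) / sigma_prior   # the prior as pseudo-data
        return np.concatenate([data_part, prior_part])

    fit = least_squares(residuals, x0=p_prior, method="lm")  # Levenberg-Marquardt
    print(fit.x)
    ```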

  17. Fitting a defect non-linear model with or without prior, distinguishing nuclear reaction products as an example.

    Science.gov (United States)

    Helgesson, P; Sjöstrand, H

    2017-11-01

    Fitting a parametrized function to data is important for many researchers and scientists. If the model is non-linear and/or defect, it is not trivial to do correctly and to include an adequate uncertainty analysis. This work presents how the Levenberg-Marquardt algorithm for non-linear generalized least squares fitting can be used with a prior distribution for the parameters and how it can be combined with Gaussian processes to treat model defects. An example, where three peaks in a histogram are to be distinguished, is carefully studied. In particular, the probability r1 for a nuclear reaction to end up in one out of two overlapping peaks is studied. Synthetic data are used to investigate effects of linearizations and other assumptions. For perfect Gaussian peaks, it is seen that the estimated parameters are distributed close to the truth with good covariance estimates. This assumes that the method is applied correctly; for example, prior knowledge should be implemented using a prior distribution and not by assuming that some parameters are perfectly known (if they are not). It is also important to update the data covariance matrix using the fit if the uncertainties depend on the expected value of the data (e.g., for Poisson counting statistics or relative uncertainties). If a model defect is added to the peaks, such that their shape is unknown, a fit which assumes perfect Gaussian peaks becomes unable to reproduce the data, and the results for r1 become biased. It is, however, seen that it is possible to treat the model defect with a Gaussian process with a covariance function tailored for the situation, with hyper-parameters determined by leave-one-out cross validation. The resulting estimates for r1 are virtually unbiased, and the uncertainty estimates agree very well with the underlying uncertainty.

  18. Nonlinear microwave imaging using Levenberg-Marquardt method with iterative shrinkage thresholding

    KAUST Repository

    Desmal, Abdulla; Bagci, Hakan

    2014-01-01

    Development of microwave imaging methods applicable in sparse investigation domains is becoming a research focus in computational electromagnetics (D.W. Winters and S.C. Hagness, IEEE Trans. Antennas Propag., 58(1), 145-154, 2010). This is simply due to the fact that sparse/sparsified domains naturally exist in many applications including remote sensing, medical imaging, crack detection, hydrocarbon reservoir exploration, and see-through-the-wall imaging.

  19. Nonlinear microwave imaging using Levenberg-Marquardt method with iterative shrinkage thresholding

    KAUST Repository

    Desmal, Abdulla

    2014-07-01

    Development of microwave imaging methods applicable in sparse investigation domains is becoming a research focus in computational electromagnetics (D.W. Winters and S.C. Hagness, IEEE Trans. Antennas Propag., 58(1), 145-154, 2010). This is simply due to the fact that sparse/sparsified domains naturally exist in many applications including remote sensing, medical imaging, crack detection, hydrocarbon reservoir exploration, and see-through-the-wall imaging.

  20. Evolutionary neural network modeling for software cumulative failure time prediction

    International Nuclear Information System (INIS)

    Tian Liang; Noore, Afzel

    2005-01-01

    An evolutionary neural network modeling approach for software cumulative failure time prediction, based on a multiple-delayed-input single-output architecture, is proposed. A genetic algorithm is used to globally optimize the number of delayed input neurons and the number of neurons in the hidden layer of the neural network architecture. A modification of the Levenberg-Marquardt algorithm with Bayesian regularization is used to improve the ability to predict software cumulative failure time. The performance of the proposed approach has been compared using real-time control and flight dynamics application data sets. Numerical results show that both the goodness-of-fit and the next-step predictability of the proposed approach are more accurate in predicting software cumulative failure time than existing approaches.

  1. Power plant fault detection using artificial neural network

    Science.gov (United States)

    Thanakodi, Suresh; Nazar, Nazatul Shiema Moh; Joini, Nur Fazriana; Hidzir, Hidzrin Dayana Mohd; Awira, Mohammad Zulfikar Khairul

    2018-02-01

    Faults in power plants commonly arise from various factors that affect system outages. There are many types of faults in power systems, such as single line-to-ground faults, double line-to-ground faults, and line-to-line faults. The primary aim of this paper is to diagnose faults in a 14-bus power system using an Artificial Neural Network (ANN). A Multilayer Perceptron (MLP) network was trained for fault detection using offline training methods, namely Gradient Descent Backpropagation (GDBP), Levenberg-Marquardt (LM), and Bayesian Regularization (BR). The best method was used to build a Graphical User Interface (GUI). The modelling of the 14-bus power system, the network training, and the GUI were implemented in MATLAB.

  2. Iterative reconstruction for x-ray computed tomography using prior-image induced nonlocal regularization.

    Science.gov (United States)

    Zhang, Hua; Huang, Jing; Ma, Jianhua; Bian, Zhaoying; Feng, Qianjin; Lu, Hongbing; Liang, Zhengrong; Chen, Wufan

    2014-09-01

    Repeated X-ray computed tomography (CT) scans are often required in several specific applications such as perfusion imaging, image-guided biopsy, image-guided intervention, and radiotherapy, with noticeable benefits. However, the associated cumulative radiation dose increases significantly compared with that used in a conventional CT scan, which has raised major concerns for patients. In this study, to realize radiation dose reduction by reducing the X-ray tube current and exposure time (mAs) in repeated CT scans, we propose a prior-image induced nonlocal (PINL) regularization for statistical iterative reconstruction via the penalized weighted least-squares (PWLS) criterion, which we refer to as "PWLS-PINL". Specifically, the PINL regularization utilizes the redundant information in the prior image, and the weighted least-squares term considers a data-dependent variance estimation, aiming to improve current low-dose image quality. Subsequently, a modified iterative successive over-relaxation algorithm is adopted to optimize the associated objective function. Experimental results on both phantom and patient data show that the present PWLS-PINL method can achieve promising gains over other existing methods in terms of noise reduction, low-contrast object detection, and edge detail preservation.

  3. Assessment of prior image induced nonlocal means regularization for low-dose CT reconstruction: Change in anatomy.

    Science.gov (United States)

    Zhang, Hao; Ma, Jianhua; Wang, Jing; Moore, William; Liang, Zhengrong

    2017-09-01

    Repeated computed tomography (CT) scans are prescribed for some clinical applications such as lung nodule surveillance. Several studies have demonstrated that incorporating a high-quality prior image into the reconstruction of subsequent low-dose CT (LDCT) acquisitions can either improve image quality or reduce data fidelity requirements. Our proposed previous normal-dose image induced nonlocal means (ndiNLM) regularization method for LDCT is an example of such a method. However, one major concern with prior image based methods is that they might produce false information when the prior image and the current LDCT image show different structures (for example, if a lung nodule emerges, grows, shrinks, or disappears over time). This study aims to assess the performance of the ndiNLM regularization method in situations with change in anatomy. We incorporated the ndiNLM regularization into the statistical image reconstruction (SIR) framework for reconstruction of subsequent LDCT images. Because of its patch-based search mechanism, a rough registration between the prior image and the current LDCT image is adequate for the SIR-ndiNLM method. We assessed the performance of the SIR-ndiNLM method in lung nodule surveillance for two different scenarios: (a) the nodule was not found in a baseline exam but appears in a follow-up LDCT scan; (b) the nodule was present in a baseline exam but disappears in a follow-up LDCT scan. We further investigated the effect of nodule size on the performance of the SIR-ndiNLM method. We found that a relatively large search-window (e.g., 33 × 33) should be used for the SIR-ndiNLM method to account for misalignment between the prior image and the current LDCT image, and to ensure that enough similar patches can be found in the prior image. With proper selection of other parameters, experimental results with two patient datasets demonstrated that the SIR-ndiNLM method did not miss true nodules nor introduce false nodules in the lung nodule

  4. Load forecasting using different architectures of neural networks with the assistance of the MATLAB toolboxes; Previsao de cargas eletricas utilizando diferentes arquiteturas de redes neurais artificiais com o auxilio das toolboxes do MATLAB

    Energy Technology Data Exchange (ETDEWEB)

    Nose Filho, Kenji; Araujo, Klayton A.M.; Maeda, Jorge L.Y.; Lotufo, Anna Diva P. [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Ilha Solteira, SP (Brazil)], Emails: kenjinose@yahoo.com.br, klayton_ama@hotmail.com, jorge-maeda@hotmail.com, annadiva@dee.feis.unesp.br

    2009-07-01

    This paper presents the development and implementation of a program for electrical load forecasting with data from a Brazilian electrical company, using four different neural network architectures from the MATLAB toolboxes: multilayer backpropagation with gradient descent and momentum, multilayer backpropagation with Levenberg-Marquardt, adaptive network-based fuzzy inference system, and general regression neural network. The program presented satisfactory performance, yielding very good results. (author)

  5. Numerical CP Decomposition of Some Difficult Tensors

    Czech Academy of Sciences Publication Activity Database

    Tichavský, Petr; Phan, A. H.; Cichocki, A.

    2017-01-01

    Roč. 317, č. 1 (2017), s. 362-370 ISSN 0377-0427 R&D Projects: GA ČR(CZ) GA14-13713S Institutional support: RVO:67985556 Keywords: Small matrix multiplication * Canonical polyadic tensor decomposition * Levenberg-Marquardt method Subject RIV: BB - Applied Statistics, Operational Research OBOR OECD: Applied mathematics Impact factor: 1.357, year: 2016 http://library.utia.cas.cz/separaty/2017/SI/tichavsky-0468385.pdf

  6. A blind deconvolution method based on L1/L2 regularization prior in the gradient space

    Science.gov (United States)

    Cai, Ying; Shi, Yu; Hua, Xia

    2018-02-01

    In image restoration, the restored image can differ greatly from the true image because of noise. To address this ill-posed problem, a blind deconvolution method based on an L1/L2 regularization prior in the gradient domain is proposed. The method first adds to the prior knowledge a function defined as the ratio of the L1 norm to the L2 norm, and takes this function as the penalty term in the high-frequency domain of the image. The function is then iteratively updated, and an iterative shrinkage-thresholding algorithm is applied to solve for the high-frequency image. Because information in the gradient domain is better suited to blur-kernel estimation, the blur kernel is estimated in the gradient domain; this subproblem can be solved quickly in the frequency domain via the fast Fourier transform. In addition, a multi-scale iterative optimization scheme is added to improve the effectiveness of the algorithm. The proposed blind deconvolution method based on L1/L2 regularization priors in the gradient space obtains a unique and stable solution in the restoration process, preserving the edges and details of the image while ensuring the accuracy of the results.
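
    The regularizer itself is easy to state in code; a minimal sketch of the L1/L2 sparsity measure evaluated on image gradients (illustrative, not the authors' implementation):

    ```python
    import numpy as np

    def l1_over_l2_gradient_prior(img, eps=1e-8):
        """Sparsity measure used as a regularizer: ||grad||_1 / ||grad||_2.

        Smaller values indicate sparser gradients (sharper images), so a
        deconvolution solver would penalize this ratio on its current estimate.
        """
        gx = np.diff(img, axis=1)   # horizontal gradients
        gy = np.diff(img, axis=0)   # vertical gradients
        g = np.concatenate([gx.ravel(), gy.ravel()])
        return np.abs(g).sum() / (np.sqrt((g ** 2).sum()) + eps)
    ```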

  7. Limited-angle multi-energy CT using joint clustering prior and sparsity regularization

    Science.gov (United States)

    Zhang, Huayu; Xing, Yuxiang

    2016-03-01

    In this article, we present an easy-to-implement multi-energy CT scanning strategy and a corresponding reconstruction method, which facilitate spectral CT imaging by improving data efficiency by a factor of the number of energy channels, without introducing the visible limited-angle artifacts caused by reducing projection views. Leveraging the structure coherence at different energies, we first pre-reconstruct a prior structure information image using projection data from all energy channels. Then, we perform k-means clustering on the prior image to generate a sparse dictionary representation for the image, which serves as a structure information constraint. We combine this constraint with a conventional compressed sensing method and propose a new model which we refer to as Joint Clustering Prior and Sparsity Regularization (CPSR). CPSR is a convex problem and we solve it by the Alternating Direction Method of Multipliers (ADMM). We verify our CPSR reconstruction method with a numerical simulation experiment. A dental phantom with complicated structures of teeth and soft tissues is used. X-ray beams from three spectra of different peak energies (120 kVp, 90 kVp, 60 kVp) irradiate the phantom to form tri-energy projections. Projection data covering only 75° from each energy spectrum are collected for reconstruction. Independent reconstruction for each energy causes severe limited-angle artifacts even with the help of compressed sensing approaches. Our CPSR provides images free of limited-angle artifacts. All edge details are well preserved in our experimental study.
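
    The clustering step can be sketched with scikit-learn's k-means (an illustration under our own naming, not the authors' code):

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    def clustering_prior(prior_image, n_clusters=5, seed=0):
        """Cluster a pre-reconstructed prior image into piecewise-constant classes.

        The cluster labels give a sparse, dictionary-like description of the
        structure that a reconstruction model can use as a constraint.
        """
        pixels = prior_image.reshape(-1, 1)
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(pixels)
        labels = km.labels_.reshape(prior_image.shape)   # structure map
        centers = km.cluster_centers_.ravel()            # representative intensities
        return labels, centers
    ```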

  8. Local atomic structure of Fe/Cr multilayers: Depth-resolved method

    Science.gov (United States)

    Babanov, Yu. A.; Ponomarev, D. A.; Devyaterikov, D. I.; Salamatov, Yu. A.; Romashev, L. N.; Ustinov, V. V.; Vasin, V. V.; Ageev, A. L.

    2017-10-01

    A depth-resolved method for investigating the local atomic structure by combining data from X-ray reflectivity and angle-resolved EXAFS is proposed. The solution of the problem can be divided into three stages: 1) determination of the element concentration profile with depth z from the X-ray reflectivity data; 2) determination of the absorption coefficient μ_i^a(z, E) of element i as a function of depth and photon energy E from the angle-resolved EXAFS data I_i^f(E, ϑ_l) of the X-ray fluorescence emission spectrum; 3) determination of the partial correlation functions g_ij(z, r) as a function of depth from μ_i(z, E). All stages of the proposed method are demonstrated on a model example of a multilayer nanoheterostructure Cr/Fe/Cr/Al2O3. Three partial pair correlation functions are obtained. A modified Levenberg-Marquardt algorithm and a regularization method are applied.

  9. Optimal determination of the elastic constants of woven 2D SiC/SiC composite materials

    International Nuclear Information System (INIS)

    Mouchtachi, A; Guerjouma, R El; Baboux, J C; Rouby, D; Bouami, D

    2004-01-01

    For homogeneous materials, the ultrasonic immersion method, associated with a numerical optimization process mostly based on Newton's algorithm, allows the determination of elastic constants for various synthetic and natural composite materials. Nevertheless, a principal limitation of the existing optimization procedure occurs when the considered material is at the limit of the homogeneity hypothesis. Such is the case of the woven bidirectional SiC-matrix and SiC-fibre composite material. In this study, we have developed two numerical methods for the determination of the elastic constants of the 2D SiC/SiC composite material. The first one is based on Newton's algorithm: the elastic constants are obtained by minimizing the square deviation between experimental and calculated velocities. The second method is based on the Levenberg-Marquardt algorithm. We show that these algorithms give the same results in the case of homogeneous anisotropic composite materials. For the 2D SiC/SiC composite material, the two methods, using the same measured velocities, give different sets of elastic constants. We then note that the Levenberg-Marquardt algorithm enables a better convergence towards a global set of elastic constants in good agreement with the elastic properties that can be measured using classical quasi-static methods.
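
    The optimization step is an ordinary nonlinear least-squares problem; a minimal sketch with SciPy's LM driver, using a placeholder forward model in place of the paper's velocity computation:

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    # Placeholder forward model: predicted velocities as a function of the
    # elastic constants c (the real model solves the Christoffel equation).
    def predicted_velocities(c, angles):
        return c[0] + c[1] * np.cos(angles) ** 2 + c[2] * np.sin(angles) ** 2

    def residuals(c, angles, v_meas):
        return predicted_velocities(c, angles) - v_meas

    angles = np.linspace(0.0, np.pi / 2, 20)
    true_c = np.array([5.0, 1.2, -0.4])
    v_meas = predicted_velocities(true_c, angles) \
        + 0.01 * np.random.default_rng(1).standard_normal(20)

    fit = least_squares(residuals, x0=np.ones(3), args=(angles, v_meas), method="lm")
    print(fit.x)   # recovered constants, close to true_c
    ```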

  10. Control strategy of an autonomous desalination unit fed by PV-Wind hybrid system without battery storage

    Directory of Open Access Journals (Sweden)

    M. Turki

    2008-06-01

    This paper presents a novel approach to economic dispatch problems with valve-point effects and multiple fuel options using a hybrid evolutionary programming method. Determining the global optimum solution of the practical economic dispatch problem with non-smooth cost functions is difficult using conventional mathematical approaches, and several evolutionary algorithms have therefore been proposed to solve this problem. In this paper, the EP-LMO (Evolutionary Programming with Levenberg-Marquardt Optimization) technique is proposed to solve economic dispatch problems with valve-point effects and multiple fuel options. The EP-LMO is developed such that a simple evolutionary programming (EP) is applied as a base-level search to find the direction of the optimal global region, and Levenberg-Marquardt Optimization (LMO) is used as a fine-tuning step to determine the optimal solution. To illustrate the efficiency and effectiveness of the proposed approach, two benchmark problems are considered: the first test problem considers multiple fuel options, and the second addresses both valve-point effects and multi-fuel options. To validate the obtained results, the proposed method is compared with the results of conventional numerical methods, a modified Hopfield neural network, evolutionary programming approaches, modified PSO, improved PSO and the Improved Genetic Algorithm with multiplier updating (IGA_MU) method.

  11. Landslide Occurrence Prediction Using Trainable Cascade Forward Network and Multilayer Perceptron

    Directory of Open Access Journals (Sweden)

    Mohammad Subhi Al-batah

    2015-01-01

    Landslides are one of the dangerous natural phenomena that hinder development in Penang Island, Malaysia. Therefore, finding a reliable method to predict the occurrence of landslides is still a research topic of interest. In this paper, two artificial neural network models, namely the Multilayer Perceptron (MLP) and the Cascade Forward Neural Network (CFNN), are introduced to predict the landslide hazard map of Penang Island. These two models were tested and compared using eleven machine learning algorithms: Levenberg-Marquardt, Broyden-Fletcher-Goldfarb-Shanno, Resilient Backpropagation, Scaled Conjugate Gradient, Conjugate Gradient with Beale restarts, Conjugate Gradient with Fletcher-Reeves updates, Conjugate Gradient with Polak-Ribiere updates, One Step Secant, Gradient Descent, Gradient Descent with Momentum and Adaptive Learning Rate, and Gradient Descent with Momentum. Often, the performance of landslide prediction depends on the input factors besides the prediction method. In this research work, 14 input factors were used. The prediction accuracies of the networks were verified using the area under the receiver operating characteristic curve. The results indicated that the best prediction accuracy of 82.89% was achieved using the CFNN network with the Levenberg-Marquardt learning algorithm for the training data set, and 81.62% for the testing data set.

  12. Breakout Prediction Based on BP Neural Network in Continuous Casting Process

    Directory of Open Access Journals (Sweden)

    Zhang Ben-guo

    2016-01-01

    An improved BP neural network model was presented by modifying the learning algorithm of the traditional BP neural network based on the Levenberg-Marquardt algorithm, and was applied to the breakout prediction system in the continuous casting process. The results showed that the accuracy rate of the model for the temperature pattern of sticking breakout was 96.43%, and the quote rate was 100%, which verifies the feasibility of the model.

  13. Control of Three-Phase Grid-Connected Microgrids Using Artificial Neural Networks

    OpenAIRE

    Shuhui, L.; Fu, X.; Jaithwa, I.; Alonso, E.; Fairbank, M.; Wunsch, D. C.

    2015-01-01

    A microgrid consists of a variety of inverter-interfaced distributed energy resources (DERs). A key issue is how to control DERs within the microgrid and how to connect them to or disconnect them from the microgrid quickly. This paper presents a strategy for controlling inverter-interfaced DERs within a microgrid using an artificial neural network, which implements a dynamic programming algorithm and is trained with a new Levenberg-Marquardt backpropagation algorithm. Compared to conventional...

  14. Determination of pore diameter from rejection measurements with a mixture of oligosaccharides

    Energy Technology Data Exchange (ETDEWEB)

    Espinoza-Gomez, Heriberto; Rogel-Hernandez, Eduardo [Universidad Autonoma de Baja California-Tijuana, Facultad de Ciencias Quimicas e Ingenieria, Tijuana, BC (Mexico); Lin, Shui Wai [Centro de Graduados e Investigacion del Instituto Tecnologico de Tijuana, Apdo. Postal 1166, Tijuana, BC (Mexico)

    2005-04-01

    This paper presents a method to determine pore diameters and effective transport properties of membranes using a mixture of oligosaccharides. The results are compared with the Maxwell-Stefan equations. The partition coefficients of the solutes are a function of the pore diameter according to the Ferry equation. Thus, with the pore diameter as the only unknown parameter, rejection is described and the pore diameter is obtained by a Marquardt-Levenberg optimization procedure. (orig.)
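
    For reference, the steric partition relation usually attributed to Ferry, which the abstract relies on, has the standard form (λ is the ratio of solute radius r_s to pore radius r_p; K = 0 for λ > 1):

    $$ K(\lambda) = (1 - \lambda)^{2}, \qquad \lambda = \frac{r_{s}}{r_{p}}, \quad 0 \le \lambda \le 1 $$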

  15. Artificial neural networks for spatial distribution of fuel assemblies in reload of PWR reactors

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Edyene; Castro, Victor F.; Velásquez, Carlos E.; Pereira, Claubia, E-mail: claubia@nuclear.ufmg.br [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Programa de Pós-Graduação em Ciências e Técnicas Nucleares

    2017-07-01

    An artificial neural network methodology is being developed to find an optimum spatial distribution of the fuel assemblies in a nuclear reactor core during reload. The main bounding parameter of the modelling was the neutron multiplication factor, k_eff. The characteristics of the network are defined by the nuclear parameters: cycle, burnup, enrichment, fuel type, and average power peak of each element. These parameters were obtained with the ORNL nuclear code package SCALE6.0. As for the artificial neural network, feedforward multilayer perceptrons with various layers and neurons were constructed. Three algorithms were used and tested: LM (Levenberg-Marquardt), SCG (Scaled Conjugate Gradient) and BayR (Bayesian Regularization). The artificial neural networks were implemented using MATLAB version 2015a. As preliminary results, the spatial distribution of the fuel assemblies in the core obtained using a neural network was slightly better than the standard core. (author)

  16. Artificial neural networks for spatial distribution of fuel assemblies in reload of PWR reactors

    International Nuclear Information System (INIS)

    Oliveira, Edyene; Castro, Victor F.; Velásquez, Carlos E.; Pereira, Claubia

    2017-01-01

    An artificial neural network methodology is being developed to find an optimum spatial distribution of the fuel assemblies in a nuclear reactor core during reload. The main bounding parameter of the modelling was the neutron multiplication factor, k_eff. The characteristics of the network are defined by the nuclear parameters: cycle, burnup, enrichment, fuel type, and average power peak of each element. These parameters were obtained with the ORNL nuclear code package SCALE6.0. As for the artificial neural network, feedforward multilayer perceptrons with various layers and neurons were constructed. Three algorithms were used and tested: LM (Levenberg-Marquardt), SCG (Scaled Conjugate Gradient) and BayR (Bayesian Regularization). The artificial neural networks were implemented using MATLAB version 2015a. As preliminary results, the spatial distribution of the fuel assemblies in the core obtained using a neural network was slightly better than the standard core. (author)

  17. Adaptive regularization of noisy linear inverse problems

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Madsen, Kristoffer Hougaard; Lehn-Schiøler, Tue

    2006-01-01

    In the Bayesian modeling framework there is a close relation between regularization and the prior distribution over parameters. For prior distributions in the exponential family, we show that the optimal hyper-parameter, i.e., the optimal strength of regularization, satisfies a simple relation: the expectation of the regularization function takes the same value in the posterior and prior distribution. We present three examples: two simulations, and an application in fMRI neuroimaging.

  18. Construction cost estimation of spherical storage tanks: artificial neural networks and hybrid regression—GA algorithms

    Science.gov (United States)

    Arabzadeh, Vida; Niaki, S. T. A.; Arabzadeh, Vahid

    2017-10-01

    One of the most important processes in the early stages of construction projects is to estimate the cost involved. This process involves a wide range of uncertainties, which make it a challenging task. Because of unknown issues, using the experience of experts or looking for similar cases are the conventional methods of dealing with cost estimation. The current study presents data-driven methods for cost estimation based on the application of artificial neural network (ANN) and regression models. The learning algorithms of the ANN are the Levenberg-Marquardt and Bayesian regularization algorithms. Moreover, the regression models are hybridized with a genetic algorithm to obtain better estimates of the coefficients. The methods are applied in a real case, where the input parameters of the models are assigned based on the key issues involved in a spherical tank construction. The results reveal that a high correlation between the estimated cost and the real cost exists, and that both ANNs perform better than the hybridized regression models. In addition, the ANN with the Levenberg-Marquardt learning algorithm (LMNN) obtains a better estimation than the ANN with the Bayesian regularization learning algorithm (BRNN). The correlation between real data and estimated values is over 90%, while the mean square error is around 0.4. The proposed LMNN model can be effective in reducing uncertainty and complexity in the early stages of a construction project.

  19. Parallelized Local Volatility Estimation Using GP-GPU Hardware Acceleration

    KAUST Repository

    Douglas, Craig C.

    2010-01-01

    We introduce an inverse problem for the local volatility model in option pricing. We solve the problem using the Levenberg-Marquardt algorithm and use the notion of the Fréchet derivative when calculating the Jacobian matrix. We analyze the existence of the Fréchet derivative and its numerical computation. To reduce the computational time of the inverse problem, a GP-GPU environment is considered for parallel computation. Numerical results confirm the validity and efficiency of the proposed method. ©2010 IEEE.

  20. l0 regularization based on a prior image incorporated non-local means for limited-angle X-ray CT reconstruction.

    Science.gov (United States)

    Zhang, Lingli; Zeng, Li; Guo, Yumeng

    2018-03-15

    Restricted by the scanning environment in some CT imaging modalities, the acquired projection data are usually incomplete, which may lead to a limited-angle reconstruction problem in which image quality suffers from slope artifacts. The objective of this study is to first investigate the distorted domains of the reconstructed images which encounter the slope artifacts, and then present a new iterative reconstruction method to address the limited-angle X-ray CT reconstruction problem. The framework of the new method exploits the structural similarity between the prior image and the reconstructed image, aiming to compensate for the distorted edges. Specifically, the new method utilizes l0 regularization and wavelet tight framelets to suppress the slope artifacts and pursue sparsity. The new method comprises four steps: (1) address data fidelity using SART; (2) compensate for the slope artifacts due to the missing projection data using the prior image and modified nonlocal means (PNLM); (3) utilize l0 regularization to suppress the slope artifacts and pursue the sparsity of the wavelet coefficients of the transformed image by iterative hard thresholding (l0W); and (4) apply an inverse wavelet transform to reconstruct the image. In summary, this method is referred to as "l0W-PNLM". Numerical implementations showed that the presented l0W-PNLM was superior in suppressing the slope artifacts while preserving the edges of features, compared with commercial and other popular investigative algorithms. When the image to be reconstructed is inconsistent with the prior image, the new method can avoid or minimize distorted edges in the reconstructed images. Quantitative assessments also showed that the new method obtained the highest image quality compared with the existing algorithms. This study demonstrated that the presented l0W-PNLM yields higher image quality owing to a number of unique characteristics of its design.
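
    Step (3) above is an l0-style wavelet thresholding; a minimal sketch of one such hard-thresholding pass (using PyWavelets; parameter values are illustrative, not the paper's):

    ```python
    import numpy as np
    import pywt

    def l0_wavelet_hard_threshold(image, wavelet="db4", level=3, thresh=0.1):
        """One l0-style step: keep only wavelet coefficients above a threshold.

        Hard thresholding is the proximal step of an l0 penalty on the wavelet
        coefficients; repeated inside an iterative scheme it pursues sparsity
        while large (edge-carrying) coefficients are preserved.
        """
        coeffs = pywt.wavedec2(image, wavelet, level=level)
        new_coeffs = [coeffs[0]]                      # keep approximation band
        for bands in coeffs[1:]:
            new_coeffs.append(tuple(np.where(np.abs(b) > thresh, b, 0.0) for b in bands))
        return pywt.waverec2(new_coeffs, wavelet)
    ```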

  1. Gravitational lensing and ghost images in the regular Bardeen no-horizon spacetimes

    International Nuclear Information System (INIS)

    Schee, Jan; Stuchlík, Zdeněk

    2015-01-01

    We study deflection of light rays and gravitational lensing in the regular Bardeen no-horizon spacetimes. Flatness of these spacetimes in the central region implies the existence of interesting optical effects related to photons crossing the gravitational field of the no-horizon spacetimes with low impact parameters. These effects occur due to the existence of a critical impact parameter giving maximal deflection of light rays in the Bardeen no-horizon spacetimes. We give the critical impact parameter as a function of the specific charge of the spacetimes, and discuss 'ghost' direct and indirect images of Keplerian discs, generated by photons with low impact parameters. The ghost direct images can occur only for large inclination angles of distant observers, while ghost indirect images can occur also for small inclination angles. We determine the range of the frequency shift of photons generating the ghost images and determine the distribution of the frequency shift across these images. We compare them to those of the standard direct images of the Keplerian discs. The difference of the ranges of the frequency shift on the ghost and direct images could serve as a quantitative measure of the Bardeen no-horizon spacetimes. The regions of the Keplerian discs giving the ghost images are determined in dependence on the specific charge of the no-horizon spacetimes. For comparison we construct direct and indirect (ordinary and ghost) images of Keplerian discs around Reissner-Nordström naked singularities, demonstrating a clear qualitative difference to the ghost direct images in the regular Bardeen no-horizon spacetimes. The optical effects related to the low impact parameter photons thus give a clear signature of the regular Bardeen no-horizon spacetimes, as no similar phenomena can occur in black hole or naked singularity spacetimes. Similar direct ghost images have to occur in any regular no-horizon spacetime having a nearly flat central region.

  2. Gompertzian stochastic model with delay effect to cervical cancer growth

    International Nuclear Information System (INIS)

    Mazlan, Mazma Syahidatul Ayuni binti; Rosli, Norhayati binti; Bahar, Arifah

    2015-01-01

    In this paper, a Gompertzian stochastic model with time delay is introduced to describe cervical cancer growth. The parameter values of the mathematical model are estimated via Levenberg-Marquardt non-linear least-squares optimization. We apply the Milstein scheme to solve the stochastic model numerically. The adequacy of the mathematical model is measured by comparing the simulated results with the clinical data of cervical cancer growth. Low values of the Mean-Square Error (MSE) of the Gompertzian stochastic model with delay effect indicate good fits.
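
    A minimal sketch of the Milstein discretization for a Gompertzian SDE of the form dX = X(a - b ln X) dt + σX dW (parameter values illustrative; the paper's model additionally includes a time delay):

    ```python
    import numpy as np

    def milstein_gompertz(x0=0.1, a=1.0, b=0.5, sigma=0.2, T=10.0, n=1000, seed=0):
        """Milstein scheme for dX = X*(a - b*ln X) dt + sigma*X dW.

        For g(x) = sigma*x, g'(x) = sigma, so the Milstein correction term is
        0.5*sigma**2*x*(dW**2 - dt).
        """
        rng = np.random.default_rng(seed)
        dt = T / n
        x = np.empty(n + 1)
        x[0] = x0
        for k in range(n):
            dw = rng.normal(0.0, np.sqrt(dt))
            drift = x[k] * (a - b * np.log(x[k]))
            x[k + 1] = (x[k] + drift * dt + sigma * x[k] * dw
                        + 0.5 * sigma**2 * x[k] * (dw**2 - dt))
        return x
    ```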

  3. Gompertzian stochastic model with delay effect to cervical cancer growth

    Energy Technology Data Exchange (ETDEWEB)

    Mazlan, Mazma Syahidatul Ayuni binti; Rosli, Norhayati binti [Faculty of Industrial Sciences and Technology, Universiti Malaysia Pahang, Lebuhraya Tun Razak, 26300 Gambang, Pahang (Malaysia); Bahar, Arifah [Department of Mathematical Sciences, Faculty of Science, Universiti Teknologi Malaysia, 81310 Johor Bahru, Johor and UTM Centre for Industrial and Applied Mathematics (UTM-CIAM), Universiti Teknologi Malaysia, 81310 Johor Bahru, Johor (Malaysia)

    2015-02-03

    In this paper, a Gompertzian stochastic model with time delay is introduced to describe cervical cancer growth. The parameter values of the mathematical model are estimated via Levenberg-Marquardt non-linear least-squares optimization. We apply the Milstein scheme to solve the stochastic model numerically. The adequacy of the mathematical model is measured by comparing the simulated results with the clinical data of cervical cancer growth. Low values of the Mean-Square Error (MSE) of the Gompertzian stochastic model with delay effect indicate good fits.

  4. Comparison of methods for accurate end-point detection of potentiometric titrations

    Science.gov (United States)

    Villela, R. L. A.; Borges, P. P.; Vyskočil, L.

    2015-01-01

    Detection of the end point in potentiometric titrations has wide application in experiments that demand very low measurement uncertainties, mainly for certifying reference materials. Simulations of experimental coulometric titration data and subsequent error analysis of the end-point values were conducted using a programming code. These simulations revealed that the Levenberg-Marquardt method is in general more accurate than the traditional second-derivative technique currently used for end-point detection in potentiometric titrations. The performance of the methods is compared and presented in this paper.
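
    One way to realize LM-based end-point detection is to fit a parametric sigmoid to the titration curve and take its inflection point as the end point; an illustrative sketch with synthetic data (not the authors' code), using SciPy's curve_fit, which defaults to Levenberg-Marquardt for unconstrained fits:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def sigmoid(v, e0, de, v_ep, s):
        # Idealized titration curve: potential vs. titrant volume.
        return e0 + de / (1.0 + np.exp(-(v - v_ep) / s))

    rng = np.random.default_rng(2)
    v = np.linspace(0.0, 10.0, 60)
    e = sigmoid(v, 200.0, 150.0, 5.2, 0.3) + rng.normal(0.0, 1.0, v.size)

    popt, _ = curve_fit(sigmoid, v, e, p0=[180.0, 120.0, 5.0, 0.5])
    print("end point (mL):", popt[2])   # inflection of the fitted sigmoid
    ```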

  5. Application of neuro-fuzzy model for neutron activation analysis (NAA)

    International Nuclear Information System (INIS)

    Khalafi, H.; Terman, M.S.; Rahmani, F.

    2011-01-01

    Neutron activation analysis (NAA) is a precise multielemental chemical method of analysis that is satisfactorily used for qualitative and quantitative analyses. Repeated irradiation is often needed because some elements are misdetermined due to peak overlap in qualitative analysis. In this study, the NAA procedure has been modified using a neuro-fuzzy model, based on a multilayer perceptron network trained with the Levenberg-Marquardt algorithm, to avoid repeated irradiation. This method increases the precision of spectrum analysis in cases of strong background and peak overlap. (authors)

  6. Comparison of methods for accurate end-point detection of potentiometric titrations

    International Nuclear Information System (INIS)

    Villela, R L A; Borges, P P; Vyskočil, L

    2015-01-01

    Detection of the end point in potentiometric titrations has wide application in experiments that demand very low measurement uncertainties, mainly for certifying reference materials. Simulations of experimental coulometric titration data and subsequent error analysis of the end-point values were conducted using a programming code. These simulations revealed that the Levenberg-Marquardt method is in general more accurate than the traditional second-derivative technique currently used for end-point detection in potentiometric titrations. The performance of the methods is compared and presented in this paper.

  7. Spatial Disaggregation of Areal Rainfall Using Two Different Artificial Neural Networks Models

    Directory of Open Access Journals (Sweden)

    Sungwon Kim

    2015-06-01

    The objective of this study is to develop artificial neural network (ANN) models, including the multilayer perceptron (MLP) and the Kohonen self-organizing feature map (KSOFM), for spatial disaggregation of areal rainfall in the Wi-stream catchment, an International Hydrological Program (IHP) representative catchment in South Korea. A three-layer MLP model, using three training algorithms, was used to estimate areal rainfall. The Levenberg-Marquardt training algorithm was found to be more sensitive to the number of hidden nodes than the conjugate gradient and quickprop training algorithms using the MLP model. Results showed that network structures of 11-5-1 (conjugate gradient and quickprop) and 11-3-1 (Levenberg-Marquardt) were the best for estimating areal rainfall using the MLP model. The network structures of 1-5-11 (conjugate gradient and quickprop) and 1-3-11 (Levenberg-Marquardt), which are the inverse networks of the best MLP models for estimating areal rainfall, were identified for spatial disaggregation of areal rainfall. The KSOFM model was compared with the MLP model for spatial disaggregation of areal rainfall. Both the MLP and KSOFM models could disaggregate areal rainfall into individual point rainfall with spatial concepts.

  8. Approaches in highly parameterized inversion—PEST++ Version 3, a Parameter ESTimation and uncertainty analysis software suite optimized for large environmental models

    Science.gov (United States)

    Welter, David E.; White, Jeremy T.; Hunt, Randall J.; Doherty, John E.

    2015-09-18

    The PEST++ Version 1 object-oriented parameter estimation code is here extended to Version 3 to incorporate additional algorithms and tools to further improve support for large and complex environmental modeling problems. PEST++ Version 3 includes the Gauss-Marquardt-Levenberg (GML) algorithm for nonlinear parameter estimation, Tikhonov regularization, integrated linear-based uncertainty quantification, options for integrated TCP/IP-based parallel run management or external independent run management through a Version 2 update of the GENIE Version 1 software code, and utilities for global sensitivity analyses. The Version 3 code design is consistent with PEST++ Version 1 and continues to be designed to lower the barriers of entry for users as well as developers, while providing efficient and optimized algorithms capable of accommodating large, highly parameterized inverse problems. As such, this effort continues the original focus of (1) implementing the most popular and powerful features of the PEST software suite in a fashion that is easy for novice or experienced modelers to use and (2) developing a software framework that is easy to extend.
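
    For orientation, a Gauss-Marquardt-Levenberg step solves the damped normal equations (textbook form, in our notation; J is the Jacobian, Q the observation weight matrix, r the residual vector, λ the Marquardt damping; the Tikhonov option adds regularization terms to this system):

    $$ \left(J^{T} Q J + \lambda I\right)\,\Delta p \;=\; J^{T} Q\, r $$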

  9. Modeling of Groundwater Resources Heavy Metals Concentration Using Soft Computing Methods: Application of Different Types of Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    Meysam Alizamir

    2017-09-01

    Nowadays, groundwater resources play a vital role as a source of drinking water in arid and semiarid regions, and forecasting of pollutant content in these resources is very important. Therefore, this study aimed to compare two soft computing methods for modeling Cd, Pb and Zn concentration in the groundwater resources of Asadabad Plain, Western Iran. The relative accuracy of several soft computing models, namely the multi-layer perceptron (MLP) and radial basis function (RBF), for forecasting heavy metal concentrations has been investigated. In addition, the Levenberg-Marquardt, gradient descent and conjugate gradient training algorithms were utilized for the MLP models. The ANN models for this study were developed using the MATLAB R2014 software program. The MLP performs better than the other models for heavy metal concentration estimation, and the simulation results revealed that the MLP model was able to model heavy metal concentrations in groundwater resources favorably. It can generally be utilized effectively in environmental applications and in water quality estimation. In addition, of the three algorithms, Levenberg-Marquardt was better than the others. This study proposed soft computing modeling techniques for the prediction and estimation of heavy metal concentrations in the groundwater resources of Asadabad Plain. Based on data collected from the plain, MLP and RBF models were developed for each heavy metal. MLP can be utilized effectively in predicting heavy metal concentrations in the groundwater resources of Asadabad Plain.

  10. MINPACK-1, Subroutine Library for Nonlinear Equation System

    International Nuclear Information System (INIS)

    Garbow, Burton S.

    1984-01-01

    1 - Description of problem or function: MINPACK1 is a package of FORTRAN subprograms for the numerical solution of systems of nonlinear equations and nonlinear least-squares problems. The individual programs are:
    - CHKDER: Check gradients for consistency with functions
    - DOGLEG: Determine combination of Gauss-Newton and gradient directions
    - DPMPAR: Provide double precision machine parameters
    - ENORM: Calculate Euclidean norm of vector
    - FDJAC1: Calculate difference approximation to Jacobian (nonlinear equations)
    - FDJAC2: Calculate difference approximation to Jacobian (least squares)
    - HYBRD: Solve system of nonlinear equations (approximate Jacobian)
    - HYBRD1: Easy-to-use driver for HYBRD
    - HYBRJ: Solve system of nonlinear equations (analytic Jacobian)
    - HYBRJ1: Easy-to-use driver for HYBRJ
    - LMDER: Solve nonlinear least squares problem (analytic Jacobian)
    - LMDER1: Easy-to-use driver for LMDER
    - LMDIF: Solve nonlinear least squares problem (approximate Jacobian)
    - LMDIF1: Easy-to-use driver for LMDIF
    - LMPAR: Determine Levenberg-Marquardt parameter
    - LMSTR: Solve nonlinear least squares problem (analytic Jacobian, storage conserving)
    - LMSTR1: Easy-to-use driver for LMSTR
    - QFORM: Accumulate orthogonal matrix from QR factorization
    - QRFAC: Compute QR factorization of rectangular matrix
    - QRSOLV: Complete solution of least squares problem
    - RWUPDT: Update QR factorization after row addition
    - R1MPYQ: Apply orthogonal transformations from QR factorization
    - R1UPDT: Update QR factorization after rank-1 addition
    - SPMPAR: Provide single precision machine parameters
    4 - Method of solution: MINPACK1 uses the modified Powell hybrid method and the Levenberg-Marquardt algorithm.
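
    These routines are still widely used behind the scenes; for instance, SciPy's leastsq wraps MINPACK's LMDIF/LMDER, so the package can be exercised from Python (a small illustration, not part of the original record):

    ```python
    import numpy as np
    from scipy.optimize import leastsq   # wraps MINPACK's lmdif/lmder

    # Fit y = a*exp(b*t) by Levenberg-Marquardt (LMDIF: forward-difference Jacobian).
    t = np.linspace(0.0, 1.0, 30)
    y = 2.0 * np.exp(1.3 * t) + 0.01 * np.random.default_rng(3).standard_normal(30)

    residual = lambda p: p[0] * np.exp(p[1] * t) - y
    p_opt, ier = leastsq(residual, x0=[1.0, 1.0])
    print(p_opt)   # close to [2.0, 1.3]
    ```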

  11. Prediction of residential building energy consumption: A neural network approach

    International Nuclear Information System (INIS)

    Biswas, M.A. Rafe; Robinson, Melvin D.; Fumo, Nelson

    2016-01-01

    Some of the challenges in predicting energy utilization have gained recognition in the residential sector due to the significant energy consumption of recent decades. However, the modeling of residential building energy consumption is still underdeveloped for optimal and robust solutions, while this research area has become more relevant with significant advances in computation and simulation. Such advances include the advent of artificial intelligence research in statistical model development. The artificial neural network has emerged as a key method to address the nonlinearity of building energy data and the robust processing of large and dynamic data. The development and validation of such models on one of the TxAIRE Research houses is demonstrated in this paper. The TxAIRE houses have been designed to serve as realistic test facilities for demonstrating new technologies. The input variables from the house data include number of days, outdoor temperature and solar radiation, while the output variables are house and heat pump energy consumption. The models based on the Levenberg-Marquardt and OWO-Newton algorithms gave promising results, with coefficients of determination within 0.87-0.91, which is comparable to prior literature. Further work will explore developing a robust model for residential building applications. - Highlights: • TxAIRE research house energy consumption data were collected for model development. • Neural network models were developed using the Levenberg-Marquardt or OWO-Newton algorithms. • Model results match the data well and are statistically consistent with the literature.

  12. Dipole location using SQUID based measurements: Application to magnetocardiography

    Science.gov (United States)

    Mariyappa, N.; Parasakthi, C.; Sengottuvel, S.; Gireesan, K.; Patel, Rajesh; Janawadkar, M. P.; Sundar, C. S.; Radhakrishnan, T. S.

    2012-07-01

    We report a method of inferring the dipole location using iterative nonlinear least-squares optimization based on the Levenberg-Marquardt algorithm, wherein we use different sets of pseudo-random numbers as initial parameter values. The method has been applied to (i) simulated data representing the calculated magnetic field distribution produced by a point dipole placed at a known position, (ii) experimental data from SQUID-based measurements of the magnetic field distribution produced by a current-carrying source coil, and (iii) actual experimentally measured magnetocardiograms of human subjects using a SQUID-based system.
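
    The multi-start strategy described above can be sketched generically as follows (a minimal illustration with SciPy; residual_fn stands in for the dipole forward model, which is not reproduced here):

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def multistart_lm(residual_fn, n_params, n_starts=50, seed=0, scale=10.0):
        """Run LM from many pseudo-random initial guesses and keep the best fit.

        Guards against convergence to a local minimum, as in the dipole
        localization scheme described above.
        """
        rng = np.random.default_rng(seed)
        best = None
        for _ in range(n_starts):
            x0 = scale * (rng.random(n_params) - 0.5)
            try:
                sol = least_squares(residual_fn, x0, method="lm")
            except ValueError:
                continue   # e.g. an invalid start for the forward model
            if best is None or sol.cost < best.cost:
                best = sol
        return best
    ```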

  13. Open-source Software for Exoplanet Atmospheric Modeling

    Science.gov (United States)

    Cubillos, Patricio; Blecic, Jasmina; Harrington, Joseph

    2018-01-01

    I will present a suite of standalone open-source tools to model and retrieve exoplanet spectra, implemented in Python. These include: (1) a Bayesian-statistics package to run Levenberg-Marquardt optimization and Markov-chain Monte Carlo posterior sampling, (2) a package to compress line-transition data from HITRAN or ExoMol without loss of information, (3) a package to compute partition functions for HITRAN molecules, (4) a package to compute collision-induced absorption, and (5) a package to produce radiative-transfer spectra of transit and eclipse exoplanet observations and atmospheric retrievals.

  14. Inverse radiative transfer problems in two-dimensional heterogeneous media

    International Nuclear Information System (INIS)

    Tito, Mariella Janette Berrocal

    2001-01-01

    The analysis of inverse problems in participating media where emission, absorption and scattering take place has several relevant applications in engineering and medicine. Some of the techniques developed for the solution of inverse problems have as a first step the solution of the direct problem. In this work, the discrete ordinates method has been used for the solution of the linearized Boltzmann equation in two-dimensional Cartesian geometry. The Levenberg-Marquardt method has been used for the solution of the inverse problem of estimating the internal source and the absorption and scattering coefficients. (author)

  15. A Neural Network Approach for GMA Butt Joint Welding

    DEFF Research Database (Denmark)

    Christensen, Kim Hardam; Sørensen, Torben

    2003-01-01

    This paper describes the application of neural network technology for gas metal arc welding (GMAW) control. A system has been developed for modeling and online adjustment of welding parameters, appropriate to guarantee a certain degree of quality in the field of butt joint welding with full penetration, when the gap width is varying during the welding process. The process modeling to facilitate the mapping from joint geometry and reference weld quality to significant welding parameters has been based on a multi-layer feed-forward network, trained with the Levenberg-Marquardt algorithm for non-linear least squares.

  16. Gamma ray spectrum analysis code: sigmas 1.0

    International Nuclear Information System (INIS)

    Siangsanan, P.; Dharmavanij, W.; Chongkum, S.

    1996-01-01

    We have developed Sigmas 1.0, a software package for data reduction and gamma-ray spectrum evaluation. It is capable of analysing gamma-ray spectra in the range of 0-3 MeV acquired with semiconductor detectors, i.e. Ge(Li) or HPGe, including peak searching, net-area determination, plotting and spectrum display. There are two methods for calculating the net area under peaks: the Covell method, and non-linear fitting by the method of Levenberg and Marquardt, which can fit any multiplet peak in the spectrum. The graphics display is fast and user-friendly.
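
    As an illustration of the non-linear fitting option (not the Sigmas 1.0 source), a doublet can be modeled as two Gaussians on a linear background and fitted by Levenberg-Marquardt via SciPy:

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def doublet(x, a1, mu1, a2, mu2, sigma, b0, b1):
        """Two Gaussian peaks sharing a width, on a linear background."""
        g1 = a1 * np.exp(-0.5 * ((x - mu1) / sigma) ** 2)
        g2 = a2 * np.exp(-0.5 * ((x - mu2) / sigma) ** 2)
        return g1 + g2 + b0 + b1 * x

    x = np.arange(100.0)
    rng = np.random.default_rng(5)
    y = doublet(x, 500, 42, 300, 55, 3.0, 20, 0.1) + rng.normal(0, 5, x.size)

    popt, _ = curve_fit(doublet, x, y, p0=[400, 40, 250, 57, 2.5, 10, 0.0])
    net_area = np.sqrt(2 * np.pi) * popt[4] * (popt[0] + popt[2])  # summed peak areas
    print(popt[1], popt[3], net_area)
    ```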

  17. Properties of the Variation of the Infrared Emission of OH/IR Stars I. The K Band Light Curves

    Directory of Open Access Journals (Sweden)

    Kyung-Won Suh

    2009-09-01

    To study the properties of the variation of the infrared emission of OH/IR stars, we collect and analyze infrared observational data in the K band for nine OH/IR stars. We use observational data obtained over about three decades, including recent data from the Two Micron All Sky Survey (2MASS) and the Deep Near Infrared Survey of the Southern Sky (DENIS). We use the Marquardt-Levenberg algorithm to determine the pulsation period and amplitude of each star and compare them with previous results of infrared and radio investigations.

  18. Inverse radiative transfer problems in two-dimensional heterogeneous media; Problemas inversos em transferencia radiativa em meios heterogeneos bidimensionais

    Energy Technology Data Exchange (ETDEWEB)

    Tito, Mariella Janette Berrocal

    2001-01-01

    The analysis of inverse problems in participating media where emission, absorption and scattering take place has several relevant applications in engineering and medicine. Some of the techniques developed for the solution of inverse problems have as a first step the solution of the direct problem. In this work, the discrete ordinates method has been used for the solution of the linearized Boltzmann equation in two-dimensional Cartesian geometry. The Levenberg-Marquardt method has been used for the solution of the inverse problem of estimating the internal source and the absorption and scattering coefficients. (author)

  19. Gas metal arc welding of butt joint with varying gap width based on neural networks

    DEFF Research Database (Denmark)

    Christensen, Kim Hardam; Sørensen, Torben

    2005-01-01

    This paper describes the application of neural network technology for gas metal arc welding (GMAW) control. A system has been developed for modeling and online adjustment of welding parameters, appropriate to guarantee a certain degree of quality in the field of butt joint welding with full penetration, when the gap width is varying during the welding process. The process modeling to facilitate the mapping from joint geometry and reference weld quality to significant welding parameters has been based on a multi-layer feed-forward network, trained with the Levenberg-Marquardt algorithm for non-linear least squares.

  20. A Neural Network Approach for GMA Butt Joint Welding

    DEFF Research Database (Denmark)

    Christensen, Kim Hardam; Sørensen, Torben

    2003-01-01

    This paper describes the application of neural network technology for gas metal arc welding (GMAW) control. A system has been developed for modeling and online adjustment of welding parameters, appropriate to guarantee a certain degree of quality in the field of butt joint welding with full penetration, when the gap width is varying during the welding process. The process modeling to facilitate the mapping from joint geometry and reference weld quality to significant welding parameters has been based on a multi-layer feed-forward network, trained with the Levenberg-Marquardt algorithm for non-linear least squares.

  1. Determination of the thermal conductivity and specific heat capacity of neem seeds by inverse problem method

    Directory of Open Access Journals (Sweden)

    S.N. Nnamchi

    2010-01-01

    Determination of the thermal conductivity and the specific heat capacity of neem seeds (Azadirachta indica A. Juss) using the inverse method is the main subject of this work. A one-dimensional formulation of the heat conduction problem in a sphere was used. The finite difference method was adopted for the solution of the heat conduction problem. The thermal conductivity and the specific heat capacity were determined by the least-squares method in conjunction with the Levenberg-Marquardt algorithm. The results obtained compare favourably with those obtained experimentally. These results are useful in the analysis of neem seed drying and leaching processes.

  2. Low dose CBCT reconstruction via prior contour based total variation (PCTV) regularization: a feasibility study

    Science.gov (United States)

    Chen, Yingxuan; Yin, Fang-Fang; Zhang, Yawei; Zhang, You; Ren, Lei

    2018-04-01

    Purpose: compressed sensing reconstruction using total variation (TV) tends to over-smooth edge information by uniformly penalizing the image gradient. The goal of this study is to develop a novel prior contour based TV (PCTV) method to enhance edge information in compressed sensing reconstruction for CBCT. Methods: the edge information is extracted from the prior planning-CT via edge detection. The prior CT is first registered with the on-board CBCT reconstructed with the TV method, through rigid or deformable registration. The edge contours in the prior CT are then mapped to the CBCT and used as the weight map for TV regularization to enhance edge information in the CBCT reconstruction. The PCTV method was evaluated using the extended-cardiac-torso (XCAT) phantom, a physical CatPhan phantom and brain patient data. Results were compared with both the TV and edge-preserving TV (EPTV) methods, which are commonly used for limited-projection CBCT reconstruction. Relative error was used to quantify pixel value differences, and edge cross-correlation was defined as the similarity of edge information between the reconstructed images and the ground truth in the quantitative evaluation. Results: compared with TV and EPTV, PCTV enhanced the edge information of bone, lung vessels and tumor in the XCAT reconstruction, and of complex bony structures in the brain patient CBCT. In the XCAT study using 45 half-fan CBCT projections, relative errors against the ground truth were 1.5%, 0.7% and 0.3%, and edge cross-correlations were 0.66, 0.72 and 0.78 for TV, EPTV and PCTV, respectively. PCTV is more robust to reductions in the number of projections. Edge enhancement was reduced slightly with noisy projections, but PCTV was still superior to the other methods. PCTV can maintain resolution while reducing noise in the low-mAs CatPhan reconstruction. Low-contrast edges were preserved better with PCTV than with TV and EPTV. Conclusion: PCTV preserved edge information as well as reduced streak artifacts and noise in low dose CBCT reconstruction.
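
    The weight-map idea can be sketched as an edge-weighted TV penalty (a minimal illustration in our own notation, not the authors' implementation):

    ```python
    import numpy as np

    def weighted_tv(img, edge_map, eps=1e-8):
        """Edge-weighted total variation: down-weight gradients on prior edges.

        edge_map has values in [0, 1], close to 1 on prior contours; the weight
        (1 - edge_map) reduces the smoothing penalty across known edges.
        """
        gx = np.diff(img, axis=1, append=img[:, -1:])
        gy = np.diff(img, axis=0, append=img[-1:, :])
        mag = np.sqrt(gx**2 + gy**2 + eps)
        return ((1.0 - edge_map) * mag).sum()
    ```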

  3. Real-time deformation of human soft tissues: A radial basis meshless 3D model based on Marquardt's algorithm.

    Science.gov (United States)

    Zhou, Jianyong; Luo, Zu; Li, Chunquan; Deng, Mi

    2018-01-01

    When the meshless method is used to establish a mathematical-mechanical model of human soft tissues, it is necessary to define the space occupied by the tissues as the problem domain and the surface of those tissues as the boundary of the domain. Nodes are distributed both in the problem domain and on its boundaries. Under external force, the displacement of each node is computed by the meshless method to represent the deformation of biological soft tissues. However, computation by the meshless method consumes too much time, which affects the simulation of real-time deformation of human tissues in virtual surgery. In this article, Marquardt's algorithm is proposed to fit the nodal displacements at the problem domain's boundary and to obtain the relationship between surface deformation and force. When different external forces are applied, the deformation of soft tissues can then be quickly obtained from this relationship. The analysis and discussion show that the improved model equations with Marquardt's algorithm not only simulate the deformation in real time but also preserve the authenticity of the deformation model's physical properties.

  4. Ajuste do modelo de Schumacher e Hall e aplicação de redes neurais artificiais para estimar volume de árvores de eucalipto; Adjustment of the Schumacher and Hall model and application of artificial neural networks to estimate volume of eucalypt trees

    Directory of Open Access Journals (Sweden)

    Mayra Luiza Marques da Silva

    2009-12-01

    The objective of this work was to evaluate the fitting of the Schumacher and Hall volumetric model by different algorithms, as well as the application of artificial neural networks to estimate the volume of eucalypt wood as a function of the diameter at breast height (DBH, measured at 1.30 m above ground), the total height (Ht) and the clone. Twenty-one scalings of stands of eucalypt clones were used, with DBH ranging from 4.5 to 28.3 cm and total height from 6.6 to 33.8 m, in a total of 862 trees. The Schumacher and Hall volumetric model was fitted in linear and non-linear forms with the following algorithms: Gauss-Newton, Quasi-Newton, Levenberg-Marquardt, Simplex, Hooke-Jeeves Pattern, Rosenbrock Pattern, and Simplex, Hooke-Jeeves and Rosenbrock used simultaneously with the Quasi-Newton method and the maximum likelihood principle. Different architectures and models (Multilayer Perceptron, MLP, and Radial Basis Function, RBF) of artificial neural networks were tested, and the networks that best represented the data were selected. The volume estimates were evaluated by plots of estimated versus observed volume and by the L&O statistical test. It is concluded that the Schumacher and Hall model can be used in its linear form, with good representativeness and without bias; that the Gauss-Newton, Quasi-Newton and Levenberg-Marquardt algorithms are efficient for fitting the Schumacher and Hall volumetric model; and that artificial neural networks are well suited to the problem, being highly recommended for forecasting the production of planted forests.
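
    For reference, the Schumacher and Hall model is V = b0 * DBH^b1 * Ht^b2, whose linear form is fitted on logarithms; a minimal sketch with synthetic data (all coefficient values illustrative only):

    ```python
    import numpy as np

    # Schumacher-Hall: V = b0 * DBH**b1 * Ht**b2, linearized as
    # ln V = ln b0 + b1*ln(DBH) + b2*ln(Ht) and fitted by ordinary least squares.
    rng = np.random.default_rng(4)
    dbh = rng.uniform(4.5, 28.3, 200)          # cm
    ht = rng.uniform(6.6, 33.8, 200)           # m
    v = 6e-5 * dbh**1.8 * ht**1.1 * np.exp(rng.normal(0, 0.05, 200))  # m^3, synthetic

    X = np.column_stack([np.ones_like(dbh), np.log(dbh), np.log(ht)])
    beta, *_ = np.linalg.lstsq(X, np.log(v), rcond=None)
    b0, b1, b2 = np.exp(beta[0]), beta[1], beta[2]
    print(b0, b1, b2)   # recovered coefficients
    ```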

  5. Using virtual reality to test the regularity priors used by the human visual system

    Science.gov (United States)

    Palmer, Eric; Kwon, TaeKyu; Pizlo, Zygmunt

    2017-09-01

    Virtual reality applications provide an opportunity to test human vision in well-controlled scenarios that would be difficult to generate in real physical spaces. This paper presents a study intended to evaluate the importance of the regularity priors used by the human visual system. Using a CAVE simulation, subjects viewed virtual objects in a variety of experimental manipulations. In the first experiment, the subject was asked to count the objects in a scene that was viewed either right-side-up or upside-down for 4 seconds. The subject counted more accurately in the right-side-up condition regardless of the presence of binocular disparity or color. In the second experiment, the subject was asked to reconstruct the scene from a different viewpoint. Reconstructions were accurate, but the position and orientation error was twice as high when the scene was rotated by 45°, compared to 22.5°. Similarly to the first experiment, there was little difference between monocular and binocular viewing. In the third experiment, the subject was asked to adjust the position of one object to match the depth extent to the frontal extent among three objects. Performance was best with symmetrical objects and became poorer with asymmetrical objects and poorest with only small circular markers on the floor. Finally, in the fourth experiment, we demonstrated reliable performance in monocular and binocular recovery of 3D shapes of objects standing naturally on the simulated horizontal floor. Based on these results, we conclude that gravity, horizontal ground, and symmetry priors play an important role in veridical perception of scenes.

  6. Stochastic growth logistic model with aftereffect for batch fermentation process

    Energy Technology Data Exchange (ETDEWEB)

    Rosli, Norhayati; Ayoubi, Tawfiqullah [Faculty of Industrial Sciences and Technology, Universiti Malaysia Pahang, Lebuhraya Tun Razak, 26300 Gambang, Pahang (Malaysia); Bahar, Arifah; Rahman, Haliza Abdul [Department of Mathematical Sciences, Faculty of Science, Universiti Teknologi Malaysia, 81310 Johor Bahru, Johor (Malaysia); Salleh, Madihah Md [Department of Biotechnology Industry, Faculty of Biosciences and Bioengineering, Universiti Teknologi Malaysia, 81310 Johor Bahru, Johor (Malaysia)

    2014-06-19

    In this paper, the stochastic growth logistic model with aftereffect for the cell growth of C. acetobutylicum P262, together with the Luedeking-Piret equations for solvent production in a batch fermentation system, is introduced. The parameter values of the mathematical models are estimated via Levenberg-Marquardt non-linear least-squares optimization. We apply the Milstein scheme to solve the stochastic models numerically. The adequacy of the mathematical models is measured by comparing the simulated results with the experimental data on microbial growth and solvent production in the batch system. Low values of the Root Mean-Square Error (RMSE) of the stochastic models with aftereffect indicate good fits.

  7. Stochastic growth logistic model with aftereffect for batch fermentation process

    Science.gov (United States)

    Rosli, Norhayati; Ayoubi, Tawfiqullah; Bahar, Arifah; Rahman, Haliza Abdul; Salleh, Madihah Md

    2014-06-01

    In this paper, the stochastic growth logistic model with aftereffect for the cell growth of C. acetobutylicum P262, together with the Luedeking-Piret equations for solvent production in a batch fermentation system, is introduced. The parameter values of the mathematical models are estimated via Levenberg-Marquardt non-linear least-squares optimization. We apply the Milstein scheme to solve the stochastic models numerically. The adequacy of the mathematical models is measured by comparing the simulated results with the experimental data on microbial growth and solvent production in the batch system. Low values of the Root Mean-Square Error (RMSE) of the stochastic models with aftereffect indicate good fits.

  8. Stochastic growth logistic model with aftereffect for batch fermentation process

    International Nuclear Information System (INIS)

    Rosli, Norhayati; Ayoubi, Tawfiqullah; Bahar, Arifah; Rahman, Haliza Abdul; Salleh, Madihah Md

    2014-01-01

    In this paper, the stochastic growth logistic model with aftereffect for the cell growth of C. acetobutylicum P262, together with the Luedeking-Piret equations for solvent production in a batch fermentation system, is introduced. The parameter values of the mathematical models are estimated via Levenberg-Marquardt non-linear least-squares optimization. We apply the Milstein scheme to solve the stochastic models numerically. The adequacy of the mathematical models is measured by comparing the simulated results with the experimental data on microbial growth and solvent production in the batch system. Low values of the Root Mean-Square Error (RMSE) of the stochastic models with aftereffect indicate good fits.

  9. Methane combustion kinetic rate constants determination: an ill-posed inverse problem analysis

    Directory of Open Access Journals (Sweden)

    Bárbara D. L. Ferreira

    2013-01-01

    Methane combustion was studied using the Westbrook and Dryer model. This well-established simplified mechanism is very useful in combustion science, since computational effort can be notably reduced. In the inversion procedure studied here, rate constants are obtained from [CO] concentration data. However, when inherent experimental errors in the chemical concentrations are considered, an ill-conditioned inverse problem must be solved, for which appropriate mathematical algorithms are needed. A recurrent neural network was chosen due to its numerical stability and robustness. The proposed methodology was compared against the Simplex and Levenberg-Marquardt methods, the most widely used methods for optimization problems.

  10. Description of bioremediation of soils using the model of a multistep system of microorganisms

    Science.gov (United States)

    Lubysheva, A. I.; Potashev, K. A.; Sofinskaya, O. A.

    2018-01-01

    The paper deals with the development of a mathematical model describing the interaction of a multi-step system of microorganisms in soil polluted with oil products. Each step in this system feeds on the products of the vital activity of the previous step. Six different models of the multi-step system are considered. The models were equipped with coefficients by minimizing the discrepancy between the calculated and experimental data, using an original algorithm based on the Levenberg-Marquardt method in combination with the Monte Carlo method for finding the initial approximation.

  11. Modeling of the height control system using artificial neural networks

    Directory of Open Access Journals (Sweden)

    A. R Tahavvor

    2016-09-01

    action. The mechanical parts were computer-generated by engineering software in the assembled, exploded and standard two-dimensional drawings required for the manufacturing process. The carrier and framework of the control unit and actuator were mainly designed to have the capability to support and hold the hardware and sensor assembly in an easily mountable fashion. This arrangement made feasible the movement and placement of the control unit along the travel length of the belt above the conveyor unit. In this work, a multilayer perceptron network with different training algorithms was used, and it was found that the backpropagation algorithm with the Levenberg-Marquardt learning rule was the best choice for this analysis because of its accurate and faster training procedure. The Levenberg-Marquardt algorithm is an iterative technique that locates the minimum of a multivariate function expressed as the sum of squares of nonlinear real-valued functions. It has become a standard technique for non-linear least-squares problems, widely adopted in a broad spectrum of disciplines. LM can be thought of as a combination of steepest descent and the Gauss-Newton method. When the current solution is far from the correct one, the algorithm behaves like a steepest descent method: slow, but guaranteed to converge. When the current solution is close to the correct solution, it becomes a Gauss-Newton method. The Levenberg algorithm is: (1) do an update as directed by the rule above; (2) evaluate the error at the new parameter vector; (3) if the error has increased as a result of the update, retract the step (i.e., reset the weights to their previous values), increase λ by a factor of 10 or some other significant factor, and go to (1) to try an update again; (4) if the error has decreased as a result of the update, accept the step (i.e., keep the weights at their new values) and decrease λ by a factor of 10 or so. Results and Discussion: The study of multi artificial neural network learning …
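
    The four-step damping strategy enumerated above is generic enough to sketch directly; the loop below is a minimal, hedged Python rendering of it (a plain Levenberg-Marquardt iteration, not the authors' network-training code), where residual_fn and jacobian_fn are hypothetical user-supplied callables.

```python
import numpy as np

def levenberg_marquardt(residual_fn, jacobian_fn, p, lam=1e-3,
                        n_iter=100, tol=1e-10):
    r = residual_fn(p)
    err = r @ r
    for _ in range(n_iter):
        J = jacobian_fn(p)
        # Step 1: update as directed by the damped normal equations.
        step = np.linalg.solve(J.T @ J + lam * np.eye(p.size), -J.T @ r)
        p_new = p + step
        # Step 2: evaluate the error at the new parameter vector.
        r_new = residual_fn(p_new)
        err_new = r_new @ r_new
        if err_new > err:
            lam *= 10.0   # Step 3: error grew -- retract the step, raise lambda.
        else:
            p, r, err = p_new, r_new, err_new
            lam /= 10.0   # Step 4: error shrank -- accept the step, lower lambda.
        if err < tol:
            break
    return p
```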

  12. Effects of a balanced energy and high protein formula diet (Vegestart complet®) vs. low-calorie regular diet in morbid obese patients prior to bariatric surgery (laparoscopic single anastomosis gastric bypass): a prospective, double-blind randomized study.

    Science.gov (United States)

    Carbajo, M A; Castro, Maria J; Kleinfinger, S; Gómez-Arenas, S; Ortiz-Solórzano, J; Wellman, R; García-Ianza, C; Luque, E

    2010-01-01

    Bariatric surgery is considered the only therapeutic alternative for morbid obesity and its comorbidities. High risk factors are usually linked with this kind of surgery. In order to reduce them, we consider that losing at least 10% of excess weight in Morbid Obese (MO) patients and a minimum of 20% in Super-Obese (SO) patients before surgery may reduce the morbidity of the procedure. The aim of our study is to demonstrate the effectiveness and tolerance of a balanced energy formula diet at the preoperative stage, comparing it against a low-calorie regular diet. We studied 120 patients divided into two groups of 60 each; group A was treated for 20 days prior to bariatric surgery with a balanced energy formula diet, based on 200 Kcal every 6 hours for 12 days, and group B was treated with a low-calorie regular diet with no carbs or fat. For the last eight days prior to surgery both groups took only clear liquids. We studied the evolution of weight loss and BMI, as well as the behavior of co-morbidities such as systolic blood pressure, diastolic blood pressure and glucose controls, and tolerance of the protocol. The study shows that patients undergoing a balanced energy formula diet improved their comorbidities in a statistically significant way in terms of weight and BMI loss, blood pressure and glucose, compared to the group that was treated before surgery with a low-calorie regular diet. Nevertheless, both groups improved in weight loss and co-morbidities, with better surgical results. A correct preparation of Morbid Obese patients prior to surgery can reduce the operative risks and improve the results. Our study shows that preoperative treatment with a balanced energy formula diet, as included in our protocol, in patients undergoing bariatric surgery statistically improves their overall condition and lowers cardiovascular risk and metabolic disease more than a regular diet alone.

  13. Manifold absolute pressure estimation using neural network with hybrid training algorithm.

    Directory of Open Access Journals (Sweden)

    Mohd Taufiq Muslim

    In a modern small gasoline engine fuel injection system, the load of the engine is estimated based on the measurement of the manifold absolute pressure (MAP) sensor, which is located in the intake manifold. This paper presents a more economical approach to estimating the MAP using only measurements of the throttle position and engine speed, resulting in lower implementation cost. The estimation was done via a two-stage multilayer feed-forward neural network combining the Levenberg-Marquardt (LM), Bayesian Regularization (BR) and Particle Swarm Optimization (PSO) algorithms. Based on the results found in 20 runs, the second variant of the hybrid algorithm yields better network performance than the first variant of the hybrid algorithm, LM, LM with BR, and PSO, estimating the MAP closely to the simulated MAP values. Using valid experimental training data, the estimator network trained with the second variant of the hybrid algorithm showed the best performance among the algorithms when used in an actual retrofit fuel injection system (RFIS). The performance of the estimator was also validated in steady-state and transient conditions, showing a closer MAP estimation to the actual value.

  14. Manifold absolute pressure estimation using neural network with hybrid training algorithm.

    Science.gov (United States)

    Muslim, Mohd Taufiq; Selamat, Hazlina; Alimin, Ahmad Jais; Haniff, Mohamad Fadzli

    2017-01-01

    In a modern small gasoline engine fuel injection system, the load of the engine is estimated based on the measurement of the manifold absolute pressure (MAP) sensor, which is located in the intake manifold. This paper presents a more economical approach to estimating the MAP using only measurements of the throttle position and engine speed, resulting in lower implementation cost. The estimation was done via a two-stage multilayer feed-forward neural network combining the Levenberg-Marquardt (LM), Bayesian Regularization (BR) and Particle Swarm Optimization (PSO) algorithms. Based on the results found in 20 runs, the second variant of the hybrid algorithm yields better network performance than the first variant of the hybrid algorithm, LM, LM with BR, and PSO, estimating the MAP closely to the simulated MAP values. Using valid experimental training data, the estimator network trained with the second variant of the hybrid algorithm showed the best performance among the algorithms when used in an actual retrofit fuel injection system (RFIS). The performance of the estimator was also validated in steady-state and transient conditions, showing a closer MAP estimation to the actual value.

  15. Example-driven manifold priors for image deconvolution.

    Science.gov (United States)

    Ni, Jie; Turaga, Pavan; Patel, Vishal M; Chellappa, Rama

    2011-11-01

    Image restoration methods that exploit prior information about images to be estimated have been extensively studied, typically using the Bayesian framework. In this paper, we consider the role of prior knowledge of the object class in the form of a patch manifold to address the deconvolution problem. Specifically, we incorporate unlabeled image data of the object class, say natural images, in the form of a patch-manifold prior for the object class. The manifold prior is implicitly estimated from the given unlabeled data. We show how the patch-manifold prior effectively exploits the available sample class data for regularizing the deblurring problem. Furthermore, we derive a generalized cross-validation (GCV) function to automatically determine the regularization parameter at each iteration without explicitly knowing the noise variance. Extensive experiments show that this method performs better than many competitive image deconvolution methods.

  16. Circular geodesic of Bardeen and Ayon-Beato-Garcia regular black-hole and no-horizon spacetimes

    Science.gov (United States)

    Stuchlík, Zdeněk; Schee, Jan

    2015-12-01

    In this paper, we study circular geodesic motion of test particles and photons in the Bardeen and Ayon-Beato-Garcia (ABG) geometry describing spherically symmetric regular black-hole or no-horizon spacetimes. While the Bardeen geometry is not an exact solution of Einstein's equations, the ABG spacetime is related to self-gravitating charged sources governed by Einstein's gravity and nonlinear electrodynamics. Both are characterized by the mass parameter m and the charge parameter g. We demonstrate that, in similarity to the Reissner-Nordstrom (RN) naked singularity spacetimes, an antigravity static sphere should exist in all the no-horizon Bardeen and ABG solutions, which can be surrounded by a Keplerian accretion disc. However, contrary to the RN naked singularity spacetimes, the ABG no-horizon spacetimes with parameter g/m > 2 can also contain an additional inner Keplerian disc hidden under the static antigravity sphere. Properties of the geodesic structure are reflected by simple observationally relevant optical phenomena. We give the silhouettes of the regular black-hole and no-horizon spacetimes, and profiled spectral lines generated by Keplerian rings radiating at a fixed frequency and located in the strong gravity region at or near the marginally stable circular geodesics. We demonstrate that the profiled spectral lines related to the regular black holes are qualitatively similar to those of the Schwarzschild black holes, giving only small quantitative differences. On the other hand, the regular no-horizon spacetimes give clear qualitative signatures of their presence when compared to the Schwarzschild spacetimes. Moreover, it is possible to distinguish the Bardeen and ABG no-horizon spacetimes if the inclination angle to the observer is known.

  17. Fermentation Process Modeling with Levenberg-Marquardt Algorithm and Runge-Kutta Method on Ethanol Production by Saccharomyces cerevisiae

    Directory of Open Access Journals (Sweden)

    Dengfeng Liu

    2014-01-01

    The core of Chinese rice wine making is a typical simultaneous saccharification and fermentation (SSF) process. In order to control and optimize the SSF process of Chinese rice wine brewing, it is necessary to construct a kinetic model and study the influence of temperature on the brewing process. An unstructured kinetic model containing 12 kinetic parameters was developed and used to describe the changes of the kinetic parameters in Chinese rice wine fermentation at 22, 26, and 30°C. The effects of substrate and product inhibition were included in the model, and four variables, namely biomass, ethanol, sugar, and substrate, were considered. The R-square values for the model are all above 0.95, revealing that the model predictions match the experimental data very well. Our model conceivably contributes significantly to the improvement of the industrial process for the production of Chinese rice wine.

  18. Neuro-fuzzy and neural network techniques for forecasting sea level in Darwin Harbor, Australia

    Science.gov (United States)

    Karimi, Sepideh; Kisi, Ozgur; Shiri, Jalal; Makarynskyy, Oleg

    2013-03-01

    Accurate predictions of sea level with different forecast horizons are important for coastal and ocean engineering applications, as well as in land drainage and reclamation studies. The methodology of tidal harmonic analysis, which is generally used for obtaining a mathematical description of the tides, is data demanding, requiring processing of tidal observations collected over several years. In the present study, hourly sea levels for Darwin Harbor, Australia were predicted using two different data-driven techniques, the adaptive neuro-fuzzy inference system (ANFIS) and the artificial neural network (ANN). The multiple linear regression (MLR) technique was used for selecting the optimal input combinations (lag times) of hourly sea level. The input combination comprising the current sea level as well as five previous level values was found to be optimal. For the ANFIS models, five different membership functions, namely triangular, trapezoidal, generalized bell, Gaussian and two-sided Gaussian, were tested and employed for predicting sea level for the next 1 h, 24 h, 48 h and 72 h. The ANN models were trained using three different algorithms, namely Levenberg-Marquardt, conjugate gradient and gradient descent. Predictions of the optimal ANFIS and ANN models were compared with those of the optimal auto-regressive moving average (ARMA) models. The coefficient of determination, root mean square error and variance account statistics were used as comparison criteria. The obtained results indicated that the triangular membership function was optimal for predictions with the ANFIS models, while the adaptive learning rate and Levenberg-Marquardt were most suitable for training the ANN models. Consequently, the ANFIS and ANN models gave similar forecasts and performed better than the ARMA models developed for the same purpose over all the prediction intervals.

  19. Nonlinear System Identification Using Neural Networks Trained with Natural Gradient Descent

    Directory of Open Access Journals (Sweden)

    Ibnkahla Mohamed

    2003-01-01

    We use natural gradient (NG) learning neural networks (NNs) for modeling and identifying nonlinear systems with memory. The nonlinear system is comprised of a discrete-time linear filter followed by a zero-memory nonlinearity. The NN model is composed of a linear adaptive filter followed by a two-layer memoryless nonlinear NN. A Kalman filter-based technique and a search-and-converge method have been employed for the NG algorithm. It is shown that NG descent learning significantly outperforms ordinary gradient descent and the Levenberg-Marquardt (LM) procedure in terms of convergence speed and mean squared error (MSE) performance.

  20. Analysis of radioactive waste contamination in soils. Part III: estimation of apparent diffusion coefficient; Analise de contaminacao de residuo radioativo em solos. Parte 3: calculo do coeficiente de difusao aparente

    Energy Technology Data Exchange (ETDEWEB)

    Souza, R.; Pereira, L.M.; Orlande, H.R.B.; Cotta, R.M. [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Dept. de Engenharia Mecanica

    1997-12-31

    This paper deals with the estimation of the apparent diffusion coefficient of KBr in sand saturated with water. The present inverse parameter estimation problem is solved with the Levenberg-Marquardt method. Simulated experimental data is used in order to assess the accuracy of this method as applied to the estimation of the apparent diffusion coefficient. The experimental apparatus is described and the value estimated for the parameter obtained with actual experimental data is presented in the paper. A statistical analysis is performed in order to obtain an estimate for the standard deviation for the parameter. (author) 12 refs., 7 figs., 3 tabs.

  1. Comparison of parameter estimation algorithms in hydrological modelling

    DEFF Research Database (Denmark)

    Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan

    2006-01-01

    Local search methods have been applied successfully in calibration of simple groundwater models, but might fail in locating the optimum for models of increased complexity, due to the more complex shape of the response surface. Global search algorithms have been demonstrated to perform well … the Gauss-Marquardt-Levenberg algorithm (implemented in the PEST software), when applied to a steady-state and a transient groundwater model. The results show that PEST can have severe problems in locating the global optimum and in being trapped in local regions of attractions. The global SCE procedure is, in general, more effective … and provides a better coverage of the Pareto optimal solutions at a lower computational cost.

  2. Neural Models for the Broadside-Coupled V-Shaped Microshield Coplanar Waveguides

    Science.gov (United States)

    Guney, K.; Yildiz, C.; Kaya, S.; Turkmen, M.

    2006-09-01

    This article presents a new approach based on multilayered perceptron neural networks (MLPNNs) to calculate the odd- and even-mode characteristic impedances and effective permittivities of the broadside-coupled V-shaped microshield coplanar waveguides (BC-VSMCPWs). Six learning algorithms, Bayesian regularization (BR), Levenberg-Marquardt (LM), quasi-Newton (QN), scaled conjugate gradient (SCG), resilient propagation (RP), and conjugate gradient of Fletcher-Powell (CGF), are used to train the MLPNNs. The neural results are in very good agreement with the results reported elsewhere. When the performances of the neural models are compared with each other, the best and worst results are obtained from the MLPNNs trained by the BR and CGF algorithms, respectively.

  3. Axially symmetric reconstruction of plasma emission and absorption coefficients

    International Nuclear Information System (INIS)

    Yang Lixin; Jia Hui; Yang Jiankun; Li Xiujian; Chen Shaorong; Liu Xishun

    2013-01-01

    A layered-structure imaging model is developed in order to reconstruct emission coefficients and absorption coefficients simultaneously in laser fusion core plasma diagnostics. A novel axially symmetric reconstruction method that utilizes the Levenberg-Marquardt (LM) nonlinear least squares minimization algorithm is proposed based on the layered structure. Numerical simulation results demonstrate that the proposed method is sufficiently accurate to reconstruct emission and absorption coefficients; when the standard deviation of the noise is 0.01, the errors of the emission and absorption coefficients are 0.17 and 0.22, respectively. Furthermore, this method performs much better in reconstruction quality than traditional inverse Abel transform algorithms. (authors)

  4. The inclusion of deaf students in regular education

    OpenAIRE

    Maria Rita Paula da Silva; Faculdades EST, São Leopoldo, RS; Terezinha de Jesus Martins de Sena

    2015-01-01

    This article aims to understand how the inclusion of deaf students occurs in regular education classrooms. It reflects on the development of the teaching-learning process, on the existence and applicability of the curriculum and the physical structure of the school, as well as on the adaptations made to include students with special needs and on valuing the diversity of subjects in the school context. The research was conducted ...

  5. Ultrasound guided electrical impedance tomography for 2D free-interface reconstruction

    Science.gov (United States)

    Liang, Guanghui; Ren, Shangjie; Dong, Feng

    2017-07-01

    The free-interface detection problem is commonly encountered in industrial and biological processes. Electrical impedance tomography (EIT) is a non-invasive technique with the advantages of high speed and low cost, and is a promising solution for free-interface detection problems. However, due to its ill-posed and nonlinear characteristics, the spatial resolution of EIT is low. To deal with this issue, an ultrasound-guided EIT is proposed to directly reconstruct the geometric configuration of the target free interface. In this method, the position of the central point of the target interface is measured by a pair of ultrasound transducers mounted on opposite sides of the objective domain, and the position measurement is then used as prior information for guiding the EIT-based free-interface reconstruction. During the process, a constrained least squares framework is used to fuse the information from the different measurement modalities, and the Lagrange multiplier-based Levenberg-Marquardt method is adopted to provide the iterative solution of the constrained optimization problem. The numerical results show that the proposed ultrasound-guided EIT method for free-interface reconstruction is more accurate than the single-modality method, especially when the number of valid electrodes is limited.

  6. Impact of imatinib interruption and duration of prior hydroxyurea on the treatment outcome in patients with chronic myeloid leukemia: Single institution experience

    International Nuclear Information System (INIS)

    Edesa, W.A.; Abdel-malek, R.R.

    2015-01-01

    Background: Optimal response requires that patients should be maintained on the drug continuously. Objectives: To evaluate the influence of imatinib interruption and prior hydroxyurea use on the outcome of patients with chronic myeloid leukemia. Materials and methods: Between January 2010 and November 2013, patients with chronic phase who received imatinib at the Kasr Al-ainy Center of Clinical Oncology were included. Results: Sixty patients were included in this study; thirty-three patients (55%) received imatinib upfront, while 27 (45%) received imatinib post hydroxyurea. Imatinib was not given regularly in 50% of patients. In terms of response, only major molecular response and complete molecular response were statistically significant in favor of patients who were receiving imatinib regularly compared to those who had interruption (p < 0.001 and p < 0.001, respectively), while there was no difference in patients stratified according to prior hydroxyurea. The median progression free survival was 30.3 months (95% CI 24.3–36.3). Among the group of patients who received imatinib regularly, progression free survival was longer (p = 0.049); there was no difference between those who received prior hydroxyurea versus those who did not (p = 0.67). Conclusion: Duration of prior hydroxyurea had no impact on response or progression free survival, while patients regular on imatinib had a statistically significant difference with respect to major molecular response, complete molecular response and progression free survival compared to those who had periods of drug interruption; thus we need more governmental support to supply the drug without interruption to improve the outcome of therapy

  7. Prediction of Force Measurements of a Microbend Sensor Based on an Artificial Neural Network

    Directory of Open Access Journals (Sweden)

    Kemal Fidanboylu

    2009-09-01

    Artificial neural network (ANN) based prediction of the response of a microbend fiber optic sensor is presented. To the best of our knowledge, no similar work has been previously reported in the literature. Parallel corrugated plates with three deformation cycles, 6 mm thickness of the spacer material and 16 mm mechanical periodicity between deformations were used in the microbend sensor. A Multilayer Perceptron (MLP) with different training algorithms, a Radial Basis Function (RBF) network and a General Regression Neural Network (GRNN) are used as the ANN models in this work. All of these models can predict the sensor responses with considerable errors. RBF has the best performance, with the smallest mean square error (MSE) values for the training and test results. Among the MLP algorithms and the GRNN, the Levenberg-Marquardt algorithm has good results. These models successfully predict the sensor responses; hence ANNs can be used as a useful tool in the design of more robust fiber optic sensors.

  8. Accurate prediction of the dew points of acidic combustion gases by using an artificial neural network model

    International Nuclear Information System (INIS)

    ZareNezhad, Bahman; Aminian, Ali

    2011-01-01

    This paper presents a new approach based on using an artificial neural network (ANN) model for predicting the acid dew points of combustion gases in process and power plants. The most important acidic combustion gases, namely SO₃, SO₂, NO₂, HCl and HBr, are considered in this investigation. The proposed network is trained using the Levenberg-Marquardt back propagation algorithm, and the hyperbolic tangent sigmoid activation function is applied to calculate the output values of the neurons of the hidden layer. According to the network's training, validation and testing results, a three-layer neural network with nine neurons in the hidden layer is selected as the best architecture for accurate prediction of the acidic combustion gases' dew points over wide ranges of acid and moisture concentrations. The proposed neural network model can have significant application in predicting the condensation temperatures of different acid gases to mitigate corrosion problems in stacks, pollution control devices and energy recovery systems.

  9. Gpufit: An open-source toolkit for GPU-accelerated curve fitting.

    Science.gov (United States)

    Przybylski, Adrian; Thiel, Björn; Keller-Findeisen, Jan; Stock, Bernd; Bates, Mark

    2017-11-16

    We present a general purpose, open-source software library for estimation of non-linear parameters by the Levenberg-Marquardt algorithm. The software, Gpufit, runs on a Graphics Processing Unit (GPU) and executes computations in parallel, resulting in a significant gain in performance. We measured a speed increase of up to 42 times when comparing Gpufit with an identical CPU-based algorithm, with no loss of precision or accuracy. Gpufit is designed such that it is easily incorporated into existing applications or adapted for new ones. Multiple software interfaces, including to C, Python, and Matlab, ensure that Gpufit is accessible from most programming environments. The full source code is published as an open source software repository, making its function transparent to the user and facilitating future improvements and extensions. As a demonstration, we used Gpufit to accelerate an existing scientific image analysis package, yielding significantly improved processing times for super-resolution fluorescence microscopy datasets.
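
    For readers wanting to reproduce a fit of this kind on the CPU, the sketch below uses SciPy's Levenberg-Marquardt backend on a one-dimensional Gaussian peak, the canonical model in the super-resolution application mentioned above. This is a hedged illustration with synthetic data, not Gpufit's own API.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss1d(x, amp, x0, sigma, offset):
    # 1D Gaussian peak plus constant background.
    return amp * np.exp(-0.5 * ((x - x0) / sigma) ** 2) + offset

rng = np.random.default_rng(1)
x = np.linspace(-5.0, 5.0, 200)
y = gauss1d(x, 2.0, 0.3, 1.2, 0.5) + 0.05 * rng.normal(size=x.size)

# method="lm" selects the Levenberg-Marquardt algorithm.
popt, pcov = curve_fit(gauss1d, x, y, p0=[1.0, 0.0, 1.0, 0.0], method="lm")
print(popt)  # fitted (amp, x0, sigma, offset)
```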

  10. Prevalence and Correlates of Having a Regular Physician among Women Presenting for Induced Abortion.

    Science.gov (United States)

    Chor, Julie; Hebert, Luciana E; Hasselbacher, Lee A; Whitaker, Amy K

    2016-01-01

    To determine the prevalence and correlates of having a regular physician among women presenting for induced abortion. We conducted a retrospective review of women presenting to an urban, university-based family planning clinic for abortion between January 2008 and September 2011. We conducted bivariate analyses, comparing women with and without a regular physician, and multivariable regression modeling, to identify factors associated with not having a regular physician. Of 834 women, 521 (62.5%) had a regular physician and 313 (37.5%) did not. Women with a prior pregnancy, live birth, or spontaneous abortion were more likely than women without these experiences to have a regular physician. Women with a prior induced abortion were not more likely than women who had never had a prior induced abortion to have a regular physician. Compared with women younger than 18 years, women aged 18 to 26 years were less likely to have a physician (adjusted odds ratio [aOR], 0.25; 95% confidence interval [CI], 0.10-0.62). Women with a prior live birth had increased odds of having a regular physician compared with women without a prior pregnancy (aOR, 1.89; 95% CI, 1.13-3.16). Women without medical/fetal indications and who had not been victims of sexual assault (self-indicated) were less likely to report having a regular physician compared with women with medical/fetal indications (aOR, 0.55; 95% CI, 0.17-0.82). The abortion visit is a point of contact with a large number of women without a regular physician and therefore provides an opportunity to integrate women into health care. Copyright © 2016 Jacobs Institute of Women's Health. Published by Elsevier Inc. All rights reserved.

  11. Decoding β-decay systematics: A global statistical model for β⁻ half-lives

    International Nuclear Information System (INIS)

    Costiris, N. J.; Mavrommatis, E.; Gernoth, K. A.; Clark, J. W.

    2009-01-01

    Statistical modeling of nuclear data provides a novel approach to nuclear systematics complementary to established theoretical and phenomenological approaches based on quantum theory. Continuing previous studies in which global statistical modeling is pursued within the general framework of machine learning theory, we implement advances in training algorithms designed to improve generalization, in application to the problem of reproducing and predicting the half-lives of nuclear ground states that decay 100% by the β⁻ mode. More specifically, fully connected, multilayer feed-forward artificial neural network models are developed using the Levenberg-Marquardt optimization algorithm together with Bayesian regularization and cross-validation. The predictive performance of models emerging from extensive computer experiments is compared with that of traditional microscopic and phenomenological models as well as with the performance of other learning systems, including earlier neural network models as well as the support vector machines recently applied to the same problem. In discussing the results, emphasis is placed on predictions for nuclei that are far from the stability line, and especially those involved in r-process nucleosynthesis. It is found that the new statistical models can match or even surpass the predictive performance of conventional models for β-decay systematics and accordingly should provide a valuable additional tool for exploring the expanding nuclear landscape.

  12. Application of Entropy Ensemble Filter in Neural Network Forecasts of Tropical Pacific Sea Surface Temperatures

    Directory of Open Access Journals (Sweden)

    Hossein Foroozand

    2018-03-01

    Recently, the Entropy Ensemble Filter (EEF) method was proposed to mitigate the computational cost of the Bootstrap AGGregatING (bagging) method. This method uses the most informative training data sets in the model ensemble rather than all ensemble members created by conventional bagging. In this study, we evaluate, for the first time, the application of the EEF method in Neural Network (NN) modeling of the El Niño-Southern Oscillation. Specifically, we forecast the first five principal components (PCs) of sea surface temperature monthly anomaly fields over the tropical Pacific, at different lead times (from 3 to 15 months, with a three-month increment) for the period 1979–2017. We apply the EEF method in a multiple-linear regression (MLR) model and two NN models, one using Bayesian regularization and one using the Levenberg-Marquardt algorithm for training, and evaluate their performance and computational efficiency relative to the same models with conventional bagging. All models perform equally well at the lead times of 3 and 6 months, while at longer lead times the MLR model's skill deteriorates faster than that of the nonlinear models. The neural network models with both bagging methods produce equally successful forecasts with the same computational efficiency. It remains to be shown whether this finding is sensitive to the dataset size.

  13. An information geometric approach to least squares minimization

    Science.gov (United States)

    Transtrum, Mark; Machta, Benjamin; Sethna, James

    2009-03-01

    Parameter estimation by nonlinear least squares minimization is a ubiquitous problem that has an elegant geometric interpretation: all possible parameter values induce a manifold embedded within the space of data. The minimization problem is then to find the point on the manifold closest to the origin. The standard algorithm for minimizing sums of squares, the Levenberg-Marquardt algorithm, also has geometric meaning. When the standard algorithm fails to efficiently find accurate fits to the data, geometric considerations suggest improvements. Problems involving large numbers of parameters, such as often arise in biological contexts, are notoriously difficult. We suggest an algorithm based on geodesic motion that may offer improvements over the standard algorithm for a certain class of problems.
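
    For reference, the Levenberg-Marquardt step discussed here solves the damped normal equations (standard textbook form, not taken from this abstract): with residual vector r(θ) and Jacobian J = ∂r/∂θ,

```latex
(J^\top J + \lambda I)\,\delta = -J^\top r, \qquad \theta \leftarrow \theta + \delta
```

    Large λ gives short, gradient-descent-like steps that are robust far from the optimum, while small λ recovers the Gauss-Newton step that is fast near it; Marquardt's variant replaces I with diag(JᵀJ).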

  14. Flow injection analysis simulations and diffusion coefficient determination by stochastic and deterministic optimization methods.

    Science.gov (United States)

    Kucza, Witold

    2013-07-25

    Stochastic and deterministic simulations of dispersion in cylindrical channels under Poiseuille flow are presented. The random walk (stochastic) and the uniform dispersion (deterministic) models have been used for computations of flow injection analysis responses. These methods, coupled with the genetic algorithm and the Levenberg-Marquardt optimization methods, respectively, have been applied for the determination of diffusion coefficients. The diffusion coefficients of fluorescein sodium, potassium hexacyanoferrate and potassium dichromate have been determined by means of the presented methods and FIA responses available in the literature. The best-fit results agree with each other and with experimental data, thus validating both presented approaches. Copyright © 2013 The Author. Published by Elsevier B.V. All rights reserved.

  15. Note: Cold spectra of the electronic transition A²Σ⁺-X²Π of N₂O⁺ radical: High resolution analysis of the bands 000-100, 100-100, and 001-101

    Energy Technology Data Exchange (ETDEWEB)

    Lessa, L. L.; Martins, A. S.; Fellows, C. E., E-mail: fellows@if.uff.br [Departamento de Física, Instituto de Ciências Exatas–ICEx, Universidade Federal Fluminense, Campus do Aterrado, Volta Redonda, RJ 27213-415 (Brazil)

    2015-10-28

    In this note, three vibrational bands of the electronic transition A²Σ⁺-X²Π of the N₂O⁺ radical (000-100, 100-100, and 001-101) were theoretically analysed. Starting from Hamiltonian models proposed for this kind of molecule, their parameters were calculated using a Levenberg-Marquardt fit procedure in order to reduce the root mean square deviation from the experimental transitions to below 0.01 cm⁻¹. The main objective of this work is to obtain new and reliable values for the rotational constant B″ and the spin-orbit interaction parameter A of the analysed vibrational levels of the X²Π electronic state of this molecule.

  16. Design and Implementation of Recursive Model Predictive Control for Permanent Magnet Synchronous Motor Drives

    Directory of Open Access Journals (Sweden)

    Xuan Wu

    2015-01-01

    In order to control the permanent-magnet synchronous motor (PMSM) system under different disturbances and nonlinearity, an improved current control algorithm for PMSM systems using recursive model predictive control (RMPC) is developed in this paper. As conventional MPC has to be computed online, its iterative computational procedure needs a long computing time. To enhance computational speed, a recursive method based on the recursive Levenberg-Marquardt algorithm (RLMA) and iterative learning control (ILC) is introduced to solve the learning issue in MPC. RMPC is able to significantly decrease the computation cost of traditional MPC in the PMSM system. The effectiveness of the proposed algorithm has been verified by simulation and experimental results.

  17. Identification of subsurface structures using electromagnetic data and shape priors

    Energy Technology Data Exchange (ETDEWEB)

    Tveit, Svenn, E-mail: svenn.tveit@uni.no [Uni CIPR, Uni Research, Bergen 5020 (Norway); Department of Mathematics, University of Bergen, Bergen 5020 (Norway); Bakr, Shaaban A., E-mail: shaaban.bakr1@gmail.com [Department of Mathematics, Faculty of Science, Assiut University, Assiut 71516 (Egypt); Uni CIPR, Uni Research, Bergen 5020 (Norway); Lien, Martha, E-mail: martha.lien@octio.com [Uni CIPR, Uni Research, Bergen 5020 (Norway); Octio AS, Bøhmergaten 44, Bergen 5057 (Norway); Mannseth, Trond, E-mail: trond.mannseth@uni.no [Uni CIPR, Uni Research, Bergen 5020 (Norway); Department of Mathematics, University of Bergen, Bergen 5020 (Norway)

    2015-03-01

    We consider the inverse problem of identifying large-scale subsurface structures using the controlled source electromagnetic method. To identify structures in the subsurface where the contrast in electric conductivity can be small, regularization is needed to bias the solution towards preserving structural information. We propose to combine two approaches for regularization of the inverse problem. In the first approach we utilize a model-based, reduced, composite representation of the electric conductivity that is highly flexible, even for a moderate number of degrees of freedom. With a low number of parameters, the inverse problem is efficiently solved using a standard, second-order gradient-based optimization algorithm. Further regularization is obtained using structural prior information, available, e.g., from interpreted seismic data. The reduced conductivity representation is suitable for incorporation of structural prior information. Such prior information cannot, however, be accurately modeled with a Gaussian distribution. To alleviate this, we incorporate the structural information using shape priors. The shape prior technique requires the choice of kernel function, which is application dependent. We argue for using the conditionally positive definite kernel, which is shown to have computational advantages over the commonly applied Gaussian kernel for our problem. Numerical experiments on various test cases show that the methodology is able to identify fairly complex subsurface electric conductivity distributions while preserving structural prior information during the inversion.

  18. Theophylline toxicity leading to suicidal ideation in a patient with no prior psychiatric illness

    Directory of Open Access Journals (Sweden)

    Sumit Kapoor

    2015-04-01

    Suicidal behavior is a common psychiatric emergency and is associated with psychiatric illness and a history of prior suicide attempts. Neuropsychiatric manifestations related to theophylline toxicity are well described in the literature. We report a case of theophylline toxicity manifesting as suicidal ideation in a patient with no prior psychiatric illness.

  19. Selection of regularization parameter for l1-regularized damage detection

    Science.gov (United States)

    Hou, Rongrong; Xia, Yong; Bao, Yuequan; Zhou, Xiaoqing

    2018-06-01

    The l1 regularization technique has been developed for structural health monitoring and damage detection through employing the sparsity condition of structural damage. The regularization parameter, which controls the trade-off between data fidelity and solution size of the regularization problem, exerts a crucial effect on the solution. However, the l1 regularization problem has no closed-form solution, and the regularization parameter is usually selected by experience. This study proposes two strategies of selecting the regularization parameter for the l1-regularized damage detection problem. The first method utilizes the residual and solution norms of the optimization problem and ensures that they are both small. The other method is based on the discrepancy principle, which requires that the variance of the discrepancy between the calculated and measured responses is close to the variance of the measurement noise. The two methods are applied to a cantilever beam and a three-story frame. A range of the regularization parameter, rather than one single value, can be determined. When the regularization parameter in this range is selected, the damage can be accurately identified even for multiple damage scenarios. This range also indicates the sensitivity degree of the damage identification problem to the regularization parameter.
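
    The first strategy above lends itself to a simple parameter scan. The sketch below (a hedged illustration, not the authors' implementation) records the residual norm and the l1 solution norm over a grid of regularization parameters for a synthetic sparse problem, with A and y standing in for the structural sensitivity matrix and the measured response change; a suitable parameter range is then one where both norms stay small.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 20))
x_true = np.zeros(20)
x_true[[3, 11]] = [-0.3, 0.5]               # sparse "damage" on two elements
y = A @ x_true + 0.01 * rng.normal(size=50)

for alpha in np.logspace(-4, 0, 9):
    model = Lasso(alpha=alpha, max_iter=50_000).fit(A, y)
    res = np.linalg.norm(A @ model.coef_ - y)   # data-fidelity (residual) norm
    sol = np.linalg.norm(model.coef_, 1)        # solution (sparsity) norm
    print(f"alpha={alpha:8.1e}  residual={res:6.3f}  ||x||_1={sol:6.3f}")
```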

  20. Universal Natural Shapes: From Unifying Shape Description to Simple Methods for Shape Analysis and Boundary Value Problems

    Science.gov (United States)

    Gielis, Johan; Caratelli, Diego; Fougerolle, Yohan; Ricci, Paolo Emilio; Tavkelidze, Ilia; Gerats, Tom

    2012-01-01

    Gielis curves and surfaces can describe a wide range of natural shapes, and they have been used in various studies in biology and physics as a descriptive tool. This has stimulated the generalization of widely used computational methods. Here we show that proper normalization of the Levenberg-Marquardt algorithm allows for efficient and robust reconstruction of Gielis curves, including self-intersecting and asymmetric curves, without increasing the overall complexity of the algorithm. Then, we show how complex curves of k-type can be constructed and how solutions to the Dirichlet problem for the Laplace equation on these complex domains can be derived using a semi-Fourier method. In all three methods, descriptive and computational power and efficiency are obtained in a surprisingly simple way. PMID:23028417

  1. Code Samples Used for Complexity and Control

    Science.gov (United States)

    Ivancevic, Vladimir G.; Reid, Darryn J.

    2015-11-01

    The following sections are included: * Mathematica® Code * Generic Chaotic Simulator * Vector Differential Operators * NLS Explorer * C++ Code * C++ Lambda Functions for Real Calculus * Accelerometer Data Processor * Simple Predictor-Corrector Integrator * Solving the BVP with the Shooting Method * Linear Hyperbolic PDE Solver * Linear Elliptic PDE Solver * Method of Lines for a Set of the NLS Equations * C# Code * Iterative Equation Solver * Simulated Annealing: A Function Minimum * Simple Nonlinear Dynamics * Nonlinear Pendulum Simulator * Lagrangian Dynamics Simulator * Complex-Valued Crowd Attractor Dynamics * Freeform Fortran Code * Lorenz Attractor Simulator * Complex Lorenz Attractor * Simple SGE Soliton * Complex Signal Presentation * Gaussian Wave Packet * Hermitian Matrices * Euclidean L2-Norm * Vector/Matrix Operations * Plain C-Code: Levenberg-Marquardt Optimizer * Free Basic Code: 2D Crowd Dynamics with 3000 Agents

  2. Initialization of a fractional order identification algorithm applied for Lithium-ion battery modeling in time domain

    Science.gov (United States)

    Nasser Eddine, Achraf; Huard, Benoît; Gabano, Jean-Denis; Poinot, Thierry

    2018-06-01

    This paper deals with the initialization of a nonlinear identification algorithm used to accurately estimate the physical parameters of a Lithium-ion battery. A Randles electric equivalent circuit is used to describe the internal impedance of the battery. The diffusion phenomenon related to this modeling is represented using a fractional order method. The battery model is thus reformulated into a transfer function which can be identified through the Levenberg-Marquardt algorithm so as to ensure the algorithm's convergence to the physical parameters. An initialization method is proposed in this paper by taking into account previously acquired information about the static and dynamic system behavior. The method is validated using a noisy voltage response, while the precision of the final identification results is evaluated using the Monte-Carlo method.

  3. An approach to evaluate switching overvoltages during power system restoration

    Directory of Open Access Journals (Sweden)

    Sadeghkhani Iman

    2012-01-01

    Transformer switching is one of the important stages during power system restoration. This switching can cause harmonic overvoltages that might damage some equipment and delay power system restoration. Core saturation on the energisation of a transformer with residual flux is a noticeable factor in harmonic overvoltages. This work uses artificial neural networks (ANNs) to estimate the temporary overvoltages (TOVs) due to transformer energisation. In the proposed methodology, the Levenberg-Marquardt method is used to train the multilayer perceptron. The developed ANN is trained with the worst case of switching conditions and tested for typical cases. Simulated results for a partial 39-bus New England test system show that the proposed technique can accurately estimate the peak values and durations of switching overvoltages.

  4. Time sequence determination of parent–daughter radionuclides using gamma-spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Burnett, J. L.; Britton, R. E.; Abrecht, D. G.; Davies, A. V.

    2017-05-06

    The acquisition of time-stamped list (TLIST) data provides additional information useful to gamma-spectrometry analysis. A novel technique is described that uses non-linear least-squares fitting and the Levenberg-Marquardt algorithm to simultaneously determine parent-daughter atoms from time sequence measurements of only the daughter radionuclide. This has been demonstrated for the radioactive decay of short-lived radon progeny (²¹⁴Pb/²¹⁴Bi, ²¹²Pb/²¹²Bi) described using the Bateman first-order differential equation. The calculated atoms are in excellent agreement with the measured atoms, with a difference of 1.3–4.8% for parent atoms and 2.4–10.4% for daughter atoms. Measurements are also reported with reduced uncertainty. The technique has potential to redefine gamma-spectrometry analysis.
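
    The fitting idea can be sketched as follows: with the decay constants treated as known, the initial parent and daughter atom numbers are the only free parameters of the Bateman solution, and they can be fitted to a daughter-only time sequence with a Levenberg-Marquardt routine. The decay constants below are roughly those of the ²¹⁴Pb/²¹⁴Bi pair and the data are synthetic, so this is an illustration of the approach, not the paper's code.

```python
import numpy as np
from scipy.optimize import curve_fit

LAM_P, LAM_D = 4.31e-4, 5.81e-4   # parent/daughter decay constants, 1/s

def daughter_atoms(t, n_p0, n_d0):
    # Bateman solution for the daughter: in-growth from the parent
    # plus decay of the initial daughter population.
    grow = n_p0 * LAM_P / (LAM_D - LAM_P) * (np.exp(-LAM_P * t)
                                             - np.exp(-LAM_D * t))
    return grow + n_d0 * np.exp(-LAM_D * t)

t = np.linspace(0.0, 7200.0, 60)
rng = np.random.default_rng(2)
obs = daughter_atoms(t, 1.0e5, 2.0e4) * (1 + 0.02 * rng.normal(size=t.size))

(n_p0, n_d0), _ = curve_fit(daughter_atoms, t, obs, p0=[5e4, 1e4], method="lm")
print(n_p0, n_d0)   # recovered initial parent/daughter atom numbers
```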

  5. A SEASONAL AND MONTHLY APPROACH FOR PREDICTING THE DELIVERED ENERGY QUANTITY IN A PHOTOVOLTAIC POWER PLANT IN ROMANIA

    Directory of Open Access Journals (Sweden)

    George Căruțașu

    2016-12-01

    In this paper, we present solutions that facilitate the forecasting of the delivered energy quantity in a photovoltaic power plant using the data measured by the solar panels' sensors: solar irradiation level, present module temperature, environmental temperature, atmospheric pressure and humidity. We have developed and analyzed a series of Artificial Neural Networks (ANNs) based on the Levenberg-Marquardt algorithm, using seasonal and monthly approaches. We have also integrated the developed Artificial Neural Networks into callable functions that we have compiled using the Matlab Compiler SDK. Thus, our solution can be accessed by developers through multiple Application Programming Interfaces when programming software that predicts the photovoltaic renewable energy production considering the seasonal particularities of the Romanian weather patterns.

  6. Impact of imatinib interruption and duration of prior hydroxyurea on the treatment outcome in patients with chronic myeloid leukemia: Single institution experience.

    Science.gov (United States)

    Edesa, Wael Abdelgawad; Abdel-malek, Raafat Ragaey

    2015-06-01

    Optimal response requires that patients should be maintained on the drug continuously. To evaluate the influence of imatinib interruption and prior hydroxyurea use on the outcome of patients with chronic myeloid leukemia. Between January 2010 and November 2013, patients with chronic phase who received imatinib at the Kasr Al-ainy Center of Clinical Oncology were included. Sixty patients were included in this study; thirty-three patients (55%) received imatinib upfront, while 27 (45%) received imatinib post hydroxyurea. Imatinib was not given regularly in 50% of patients. In terms of response, only major molecular response and complete molecular response were statistically significant in favor of patients who were receiving imatinib regularly compared to those who had interruption (p < 0.001 and p < 0.001, respectively), while there was no difference in patients stratified according to prior hydroxyurea. The median progression free survival was 30.3 months (95% CI 24.3-36.3). Among the group of patients who received imatinib regularly, progression free survival was longer (p = 0.049); there was no difference between those who received prior hydroxyurea versus those who did not (p = 0.67). Duration of prior hydroxyurea had no impact on response or progression free survival, while patients regular on imatinib had a statistically significant difference with respect to major molecular response, complete molecular response and progression free survival compared to those who had periods of drug interruption; thus we need more governmental support to supply the drug without interruption to improve the outcome of therapy. Copyright © 2015 The Authors. Production and hosting by Elsevier B.V. All rights reserved.

  7. On the use of a pruning prior for neural networks

    DEFF Research Database (Denmark)

    Goutte, Cyril

    1996-01-01

    We address the problem of using a regularization prior that prunes unnecessary weights in a neural network architecture. This prior provides a convenient alternative to traditional weight-decay. Two examples are studied to support this method and illustrate its use. First we use the sunspots …

  8. Fatores associados ao consumo regular de refrigerante não dietético em adultos de Pelotas, RS Factores asociados al consumo regular de gaseosa no dietética en adultos de Pelotas, Sur de Brasil Factors associated with regular non-diet soft drink intake among adults in Pelotas, Southern Brazil

    Directory of Open Access Journals (Sweden)

    Airton José Rombaldi

    2011-04-01

    …general since the of last year, how many times did you drink non-diet soft drinks?". The categorized answers were dichotomized for the purposes of analysis. A frequency of five or more times per week was considered regular consumption of non-diet soft drinks. The association with demographic, socioeconomic, behavioral and nutritional variables was analyzed by the chi-square test for heterogeneity and linear trend, and the multivariable analysis was performed by means of Poisson regression with robust variance. RESULTS: About one fifth of the adult population of Pelotas (20.4%) regularly drank non-diet soft drinks. Male individuals (PR 1.50; 95%CI 1.20-2.00), current smokers (PR 1.60; 95%CI 1.20-2.10) and those who consumed snacks weekly (PR 2.10; 95%CI 1.60-2.70) showed a higher prevalence of non-diet soft drink consumption in the adjusted analysis. The analysis stratified by sex showed that regular consumption of fruits, legumes and vegetables was a protective factor against soft drink consumption among women (PR 0.50; 95%CI 0.30-0.90). CONCLUSIONS: The frequency of regular consumption of non-diet soft drinks in the adult population was high, particularly among men, the young and smokers. OBJECTIVE: To assess factors associated with regular intake of non-diet soft drinks among adults. METHODS: Population-based cross-sectional study including 972 adults (aged 20 to 69) in the city of Pelotas, Southern Brazil, conducted in 2006. The frequency of non-diet soft drink intake in the 12 months prior to the study was evaluated by the question: "In general since last , how many times did you have a non-diet soft drink?". The answers were dichotomized for the analysis. Intake of non-diet soft drinks five times or more per week was considered regular intake. The association between the outcome and sociodemographic, behavioral and nutritional variables was tested using the chi-square test for heterogeneity and linear …

  9. Prediction by Artificial Neural Networks (ANN of the diffusivity, mass, moisture, volume and solids on osmotically dehydrated yacon (Smallantus sonchifolius

    Directory of Open Access Journals (Sweden)

    Julio Rojas Naccha

    2012-09-01

    The predictive ability of an Artificial Neural Network (ANN) regarding the effect of the concentration (30, 40, 50 and 60% w/w) and temperature (30, 40 and 50°C) of a fructooligosaccharide solution on the mass, moisture, volume and solids of osmodehydrated yacon cubes, and on the coefficients of mean effective water diffusivity with and without shrinkage, was evaluated. A feedforward ANN with the backpropagation training algorithm and Levenberg-Marquardt weight adjustment was applied, using the following topology: 10⁻⁵ goal error, 0.01 learning rate, 0.5 momentum coefficient, 2 input neurons, 6 output neurons, one hidden layer with 18 neurons, 15 training stages and logsig-purelin transfer functions. The overall average error achieved by the ANN was 3.44% and the correlation coefficients were greater than 0.9. No significant differences were found between the experimental values and the values predicted by the ANN, nor with the values predicted by a statistical model of second-order polynomial regression (p > 0.95).

  10. Broiler weight estimation based on machine vision and artificial neural network.

    Science.gov (United States)

    Amraei, S; Abdanan Mehdizadeh, S; Salari, S

    2017-04-01

    1. Machine vision and artificial neural network (ANN) procedures were used to estimate the live body weight of broiler chickens, using 30 1-d-old broiler chickens reared for 42 d. 2. Imaging was performed twice daily. To localise chickens within the pen, an ellipse-fitting algorithm was used, and the chickens' head and tail were removed using the Chan-Vese method. 3. The correlations between body weight and 6 extracted physical features indicated that there were strong correlations between body weight and 5 features: area, perimeter, convex area, and major and minor axis length. 5. According to statistical analysis, there was no significant difference between morning and afternoon data over the 42 d. 6. In an attempt to improve the accuracy of live weight approximation, different ANN techniques, including Bayesian regularization, Levenberg-Marquardt, scaled conjugate gradient and gradient descent, were used. Bayesian regularization, with an R² value of 0.98, was the best network for prediction of broiler weight. 7. The accuracy of the machine vision technique was examined and most errors were less than 50 g.

  11. Artificial neural networks in knee injury risk evaluation among professional football players

    Science.gov (United States)

    Martyna, Michałowska; Tomasz, Walczak; Krzysztof, Grabski Jakub; Monika, Grygorowicz

    2018-01-01

    A lower limb injury risk assessment was proposed, based on the isokinetic examination that is part of a standard athlete's biomechanical evaluation, performed mainly twice a year. Information about non-contact knee injuries (or lack thereof) sustained within twelve months after the isokinetic test, confirmed by ultrasonography (USG), was verified. The three most common types of football injuries were taken into consideration: anterior cruciate ligament (ACL) rupture, and hamstring and quadriceps muscle injuries. The 22 parameters obtained from the isokinetic tests were divided into 4 groups and used as input parameters of five feedforward artificial neural networks (ANNs); the 5th group consisted of all considered parameters. The networks were trained with the Levenberg-Marquardt backpropagation algorithm to return a value close to 1 for parameter sets corresponding to an injury event and close to 0 for parameters with no injury recorded within 6-12 months after the isokinetic test. The results of this study show that ANNs might be useful tools which simplify the process of simultaneous interpretation of many numerical parameters, but the most important factor that significantly influences the results is the database used for ANN training.

  12. Comparative age and growth of common snook Centropomus undecimalis (Pisces: Centropomidae from coastal and riverine areas in Southern Mexico

    Directory of Open Access Journals (Sweden)

    Martha A. Perera-Garcia

    2013-06-01

    The common snook Centropomus undecimalis is an important commercial and fishery species in Southern Mexico; however, high exploitation rates have resulted in a strong reduction of its abundance. Since information about its population structure is scarce, the objective of the present research was to determine and compare the age structure at four important fishery sites. For this, age and growth of common snook were determined from specimens collected monthly, from July 2006 to March 2008, from two coastal (Barra Bosque and Barra San Pedro) and two riverine (San Pedro and Tres Brazos) commercial fishery sites in Tabasco, Mexico. Age was determined using sectioned sagittae otoliths, and the data were analyzed with the von Bertalanffy model and the Levenberg-Marquardt method, among others. Estimated ages ranged from 2 to 17 years. Monthly patterns of marginal increment formation and the percentage of otoliths with opaque rings on the outer edge demonstrated that a single annulus was formed each year. The von Bertalanffy parameters were calculated for males and females using linear adjustment and the non-linear method of Levenberg-Marquardt. The von Bertalanffy growth equations were FL_t = 109.21(1 − e^(−0.21(t+0.57))) for Barra Bosque, FL_t = 94.56(1 − e^(−0.27(t+0.48))) for Barra San Pedro, FL_t = 97.15(1 − e^(−0.17(t+1.32))) for San Pedro and FL_t = 83.77(1 − e^(−0.26(t+0.49))) for Tres Brazos. According to Hotelling's T² test (p < 0.05), growth was significantly greater for females than for males. Based on the Chen test, the von Bertalanffy growth curves differed among the study sites (RSS, p < 0.05). Based on the observed differences in growth parameters among sampling sites (coastal and riverine environments), future research needs to be conducted on migration and population genetics in order to delineate the stock structure of this population and support management programs.
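
    A von Bertalanffy fit of the kind reported above can be reproduced with a Levenberg-Marquardt least-squares routine; the sketch below uses synthetic age-length data, not the study's measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def von_bertalanffy(t, l_inf, k, t0):
    # FL_t = L_inf * (1 - exp(-k * (t - t0)))
    return l_inf * (1.0 - np.exp(-k * (t - t0)))

ages = np.arange(2, 18, dtype=float)
rng = np.random.default_rng(3)
lengths = von_bertalanffy(ages, 95.0, 0.22, -0.5) + rng.normal(0.0, 2.0, ages.size)

(l_inf, k, t0), _ = curve_fit(von_bertalanffy, ages, lengths,
                              p0=[100.0, 0.2, 0.0], method="lm")
print(l_inf, k, t0)   # recovered growth parameters
```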

  13. MODEL JARINGAN SYARAF TIRUAN UNTUK MEMPREDIKSI PARAMETER KUALITAS TOMAT BERDASARKAN PARAMETER WARNA RGB (An artificial neural network model for predicting tomato quality parameters based on RGB color parameters)

    Directory of Open Access Journals (Sweden)

    Rudiati Evi Masithoh

    2013-03-01

    Full Text Available Artificial neural networks (ANN) were used to predict the quality parameters of tomato, i.e. Brix, citric acid, total carotene, and vitamin C. The ANN was developed from Red Green Blue (RGB) image data of tomatoes measured using a developed computer vision system (CVS). Tomato quality data were obtained from laboratory analyses. The ANN model was based on a feedforward backpropagation network with different training functions, namely gradient descent (traingd), gradient descent with resilient backpropagation (trainrp), Broyden-Fletcher-Goldfarb-Shanno (BFGS) quasi-Newton (trainbfg), and Levenberg-Marquardt (trainlm). The network structure using the logsig and linear (purelin) activation functions at the hidden and output layers, respectively, and the trainlm training function resulted in the best performance. Correlation coefficients (r) of the training and validation processes were 0.97-0.99 and 0.92-0.99, whereas the MAE values ranged from 0.01 to 0.23 and 0.03 to 0.59, respectively. Keywords: Artificial neural network, trainlm, tomato, RGB

  14. Regularized forecasting of chaotic dynamical systems

    International Nuclear Information System (INIS)

    Bollt, Erik M.

    2017-01-01

    While local models of dynamical systems have been highly successful in using extensive data sets observing even a chaotic dynamical system to produce useful forecasts, there is a typical problem. With the k-nearest neighbors (kNN) method, local observations occur due to recurrences in a chaotic system, and this allows local models to be built by regression to low-dimensional polynomial approximations of the underlying system, estimating a Taylor series. This has been a popular approach, particularly in the context of scalar data observations represented by time-delay embedding methods. However, such local models can allow spatial discontinuities of forecasts when considered globally, meaning jumps in predictions, because the collected near neighbors vary from point to point. The source of these discontinuities is that the set of near neighbors varies discontinuously with respect to the position of the sample point, and so therefore does the model built from the near neighbors. It is possible to utilize local information inferred from near neighbors as usual while at the same time imposing a degree of regularity on a global scale. We present here a new global perspective extending the general local modeling concept. We then show how this perspective allows us to impose prior presumed regularity into the model by invoking Tikhonov regularity theory, since this classic perspective of optimization in ill-posed problems naturally balances fitting an objective with some prior assumed form of the result, such as continuity or derivative regularity. This all reduces to matrix manipulations, which we demonstrate on a simple data set, with the implication that it may find much broader context.
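
    A minimal sketch of the idea of imposing Tikhonov regularity on a local (kNN) forecast model, in the spirit described above but not the paper's exact formulation. The chaotic series, neighbor count and ridge weight are illustrative assumptions.

      import numpy as np

      def local_ridge_forecast(history, query, k=10, lam=1e-2):
          """Predict the next value at `query` from one-step pairs in `history`."""
          X, y = history[:-1], history[1:]                  # (x_t, x_{t+1}) pairs
          idx = np.argsort(np.abs(X - query))[:k]           # k nearest neighbors of the query
          A = np.column_stack([np.ones(k), X[idx]])         # local linear (degree-1) model
          # Tikhonov-regularized normal equations: (A^T A + lam*I) w = A^T y
          w = np.linalg.solve(A.T @ A + lam * np.eye(2), A.T @ y[idx])
          return w[0] + w[1] * query

      # Logistic map as a toy chaotic series
      x = np.empty(500); x[0] = 0.3
      for i in range(499):
          x[i + 1] = 3.9 * x[i] * (1 - x[i])
      print(local_ridge_forecast(x, x[-1]))

    The lam * I term is what balances fitting the neighbors against a prior preference for small (smooth) coefficients, so nearby query points yield nearby models.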

  15. Self-prior strategy for organ reconstruction in fluorescence molecular tomography.

    Science.gov (United States)

    Zhou, Yuan; Chen, Maomao; Su, Han; Luo, Jianwen

    2017-10-01

    The purpose of this study is to propose a strategy for organ reconstruction in fluorescence molecular tomography (FMT) without prior information from other imaging modalities, and to overcome the high cost and ionizing radiation caused by the traditional structural prior strategy. The proposed strategy is designed as an iterative architecture to solve the inverse problem of FMT. In each iteration, a short time Fourier transform (STFT) based algorithm is used to extract the self-prior information in the space-frequency energy spectrum, with the assumption that regions with higher fluorescence concentration have larger energy intensity; then the cost function of the inverse problem is modified by the self-prior information, and lastly an iterative Laplacian regularization algorithm is conducted to solve the updated inverse problem and obtain the reconstruction results. Simulations and in vivo experiments on liver reconstruction are carried out to test the performance of the self-prior strategy on organ reconstruction. The organ reconstruction results obtained by the proposed self-prior strategy are closer to the ground truth than those obtained by the iterative Tikhonov regularization (ITKR) method (traditional non-prior strategy). Significant improvements are shown in the evaluation indexes of relative locational error (RLE), relative error (RE) and contrast-to-noise ratio (CNR). The self-prior strategy improves the organ reconstruction results compared with the non-prior strategy and also overcomes the shortcomings of the traditional structural prior strategy. Various applications such as metabolic imaging and pharmacokinetic study can be aided by this strategy.

  16. MC ray-tracing optimization of lobster-eye focusing devices with RESTRAX

    International Nuclear Information System (INIS)

    Saroun, Jan; Kulda, Jiri

    2006-01-01

    The enhanced functionalities of the latest version of the RESTRAX software, providing a high-speed Monte Carlo (MC) ray-tracing code to represent a virtual three-axis neutron spectrometer, include representation of parabolic and elliptic guide profiles and facilities for numerical optimization of parameter values, characterizing the instrument components. As examples, we present simulations of a doubly focusing monochromator in combination with cold neutron guides and lobster-eye supermirror devices, concentrating a monochromatic beam to small sample volumes. A Levenberg-Marquardt minimization algorithm is used to optimize simultaneously several parameters of the monochromator and lobster-eye guides. We compare the performance of optimized configurations in terms of monochromatic neutron flux and energy spread and demonstrate the effect of lobster-eye optics on beam transformations in real and momentum subspaces.

  17. Application of back-propagation artificial neural network (ANN) to predict crystallite size and band gap energy of ZnO quantum dots

    Science.gov (United States)

    Pelicano, Christian Mark; Rapadas, Nick; Cagatan, Gerard; Magdaluyo, Eduardo

    2017-12-01

    Herein, the crystallite size and band gap energy of zinc oxide (ZnO) quantum dots were predicted using an artificial neural network (ANN). Three input factors, namely reagent ratio, growth time, and growth temperature, were examined with respect to crystallite size and band gap energy as response factors. The results generated by the neural network model were then compared with the experimental results. Experimental crystallite size and band gap energy of the ZnO quantum dots were measured from TEM images and absorbance spectra, respectively. The Levenberg-Marquardt (LM) algorithm was used as the learning algorithm for the ANN model. The performance of the ANN model was then assessed through the mean square error (MSE) and regression values. Based on the results, the ANN modelling results are in good agreement with the experimental data.

  18. Iterative Reconstruction Methods for Inverse Problems in Tomography with Hybrid Data

    DEFF Research Database (Denmark)

    Sherina, Ekaterina

    The goal of these modalities is to quantify physical parameters of materials or tissues inside an object from given interior data, which is measured everywhere inside the object. The advantage of these modalities is that large variations in physical parameters can be resolved and therefore, they have...... data is precisely the reason why reconstructions with a high contrast and a high resolution can be expected. The main contributions of this thesis consist in formulating the underlying mathematical problems with interior data as nonlinear operator equations, theoretically analysing them within...... iteration and the Levenberg-Marquardt method are employed for solving the problems. The first problem considered in this thesis is a problem of conductivity estimation from interior measurements of the power density, known as Acousto-Electrical Tomography. A special case of limited angle tomography...

  19. Motion of a Point Mass in a Rotating Disc: A Quantitative Analysis of the Coriolis and Centrifugal Force

    Science.gov (United States)

    Haddout, Soufiane

    2016-06-01

    In Newtonian mechanics, non-inertial reference frames arise from a generalization of Newton's laws to arbitrary reference frames. While this approach simplifies some problems, there is often little physical insight into the motion, in particular into the effects of the Coriolis force. The fictitious Coriolis force can be used by anyone in that frame of reference to explain why objects follow curved paths. In this paper, a mathematical solution based on differential equations in a non-inertial reference frame is used to study different types of motion in a rotating system. In addition, experimental data measured on a turntable device, using a video camera in a mechanics laboratory, were compared with the mathematical solution for the case of parabolically curved motion, solving the non-linear least-squares problems with the Levenberg-Marquardt and Gauss-Newton algorithms.

  20. Reducing Wind Tunnel Data Requirements Using Neural Networks

    Science.gov (United States)

    Ross, James C.; Jorgenson, Charles C.; Norgaard, Magnus

    1997-01-01

    The use of neural networks to minimize the amount of data required to completely define the aerodynamic performance of a wind tunnel model is examined. The accuracy requirements for commercial wind tunnel test data are very severe and are difficult to reproduce using neural networks. For the current work, multiple-input, single-output networks were trained using a Levenberg-Marquardt algorithm for each of the aerodynamic coefficients. When applied to the aerodynamics of a 55% scale model of a U.S. Air Force/NASA generic fighter configuration, this scheme provided accurate models of the lift, drag, and pitching-moment coefficients. Using only 50% of the data acquired during the wind tunnel test, the trained neural network had a predictive accuracy equal to or better than the accuracy of the experimental measurements.

  1. Modeling of wear behavior of Al/B_4C composites produced by powder metallurgy

    International Nuclear Information System (INIS)

    Sahin, Ismail; Bektas, Asli; Guel, Ferhat; Cinci, Hanifi

    2017-01-01

    Wear characteristics of composites, an Al matrix reinforced with B_4C particles at percentages of 5, 10, 15 and 20, produced by the powder metallurgy method, were studied. For this purpose, a mixture of Al and B_4C powders was pressed under 650 MPa pressure and then sintered at 635 °C. Analysis of hardness, density and microstructure was performed. The produced samples were worn using a pin-on-disk abrasion device under 10, 20 and 30 N loads with 500, 800 and 1200 mesh SiC abrasive papers. The obtained wear values were implemented in an artificial neural network (ANN) model having three inputs and one output, using the feedforward backpropagation Levenberg-Marquardt algorithm. Thus, the optimum wear conditions and hardness values were determined.

  2. Modeling of wear behavior of Al/B{sub 4}C composites produced by powder metallurgy

    Energy Technology Data Exchange (ETDEWEB)

    Sahin, Ismail; Bektas, Asli [Gazi Univ., Ankara (Turkey). Dept. of Industrial Design Engineering; Guel, Ferhat; Cinci, Hanifi [Gazi Univ., Ankara (Turkey). Dept. of Materials and Metallurgy Engineering

    2017-06-01

    Wear characteristics of composites, an Al matrix reinforced with B{sub 4}C particles at percentages of 5, 10, 15 and 20, produced by the powder metallurgy method, were studied. For this purpose, a mixture of Al and B{sub 4}C powders was pressed under 650 MPa pressure and then sintered at 635 °C. Analysis of hardness, density and microstructure was performed. The produced samples were worn using a pin-on-disk abrasion device under 10, 20 and 30 N loads with 500, 800 and 1200 mesh SiC abrasive papers. The obtained wear values were implemented in an artificial neural network (ANN) model having three inputs and one output, using the feedforward backpropagation Levenberg-Marquardt algorithm. Thus, the optimum wear conditions and hardness values were determined.

  3. A Model-Free Diagnosis Approach for Intake Leakage Detection and Characterization in Diesel Engines

    Directory of Open Access Journals (Sweden)

    Ghaleb Hoblos

    2015-07-01

    Full Text Available Feature selection is an essential step for data classification used in fault detection and diagnosis processes. In this work, a new approach is proposed, which combines a feature selection algorithm and a neural network tool for leak detection and characterization tasks in diesel engine air paths. The chi-square classifier is used as the feature selection algorithm, and a neural network trained with the Levenberg-Marquardt algorithm is used for system behavior modeling. The obtained neural network is used for leak detection and characterization. The model is learned and validated using data generated by xMOD, and this tool is used again for testing. The effectiveness of the proposed approach is illustrated in simulation when the system operates at a low speed/load and the considered leak affecting the air path is very small.
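
    A minimal sketch of the selection stage of the two-step scheme described above, using scikit-learn's chi-square score. scikit-learn's own MLP does not implement Levenberg-Marquardt, so only the feature-selection step is shown; the sensor data and labels are hypothetical.

      import numpy as np
      from sklearn.feature_selection import SelectKBest, chi2

      rng = np.random.default_rng(1)
      X = rng.uniform(0, 1, size=(200, 12))      # 12 air-path signals; chi2 requires non-negative features
      y = rng.integers(0, 2, size=200)           # leak / no-leak labels (synthetic)

      selector = SelectKBest(chi2, k=4).fit(X, y)
      print("selected signal indices:", selector.get_support(indices=True))
      X_reduced = selector.transform(X)          # reduced feature set passed on to the network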

  4. An interactive program for pharmacokinetic modeling.

    Science.gov (United States)

    Lu, D R; Mao, F

    1993-05-01

    A computer program, PharmK, was developed for pharmacokinetic modeling of experimental data. The program was written in the C computer language, based on the high-level user interface of the Macintosh operating system. The intention was to provide a user-friendly tool for users of Macintosh computers. An interactive algorithm based on the exponential stripping method is used for the initial parameter estimation. Nonlinear pharmacokinetic model fitting is based on the maximum likelihood estimation method and is performed by the Levenberg-Marquardt method based on the chi-square criterion. Several methods are available to aid the evaluation of the fitting results. Pharmacokinetic data sets have been examined with the PharmK program, and the results are comparable with those obtained with other programs that are currently available for IBM PC-compatible and other types of computers.
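
    A minimal sketch of the kind of fit such a program performs: a two-compartment (biexponential) concentration model fitted by Levenberg-Marquardt via curve_fit. This is not PharmK's code; the sampling times and concentrations are hypothetical.

      import numpy as np
      from scipy.optimize import curve_fit

      def biexponential(t, A, alpha, B, beta):
          # C(t) = A*exp(-alpha*t) + B*exp(-beta*t), the classic two-compartment decay
          return A * np.exp(-alpha * t) + B * np.exp(-beta * t)

      t = np.array([0.25, 0.5, 1, 2, 4, 6, 8, 12], dtype=float)      # hours (synthetic)
      c = np.array([9.2, 7.8, 5.9, 3.6, 1.7, 0.95, 0.58, 0.25])      # concentration (synthetic)

      # p0 would come from exponential stripping in an interactive workflow
      popt, _ = curve_fit(biexponential, t, c, p0=[8.0, 0.8, 2.0, 0.1])
      print(dict(zip(["A", "alpha", "B", "beta"], popt.round(3))))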

  5. Equações e programa computacional para cálculo do transporte de solutos do solo Equations and computer program for calculating the solute transport in soil

    Directory of Open Access Journals (Sweden)

    João C. F. Borges Júnior

    2006-09-01

    Full Text Available This study aimed to develop and test a computer program to calculate the parameters of solute transport equations in soil, based on fitting theoretical models to observed data, and to run simulations of the spatial and temporal variation of solute concentration and mass balance in the soil profile. The least-squares (Levenberg-Marquardt) method was used to obtain estimates of the dispersive-diffusive coefficient and retardation factor parameters. The developed program, called Disp, has a graphical interface that makes data input, execution of the calculations and access to the results simple. In the results forms, graphs and tables related to breakthrough curves can be generated, and simulations of the spatial and temporal variation of solute concentration and mass balance in the soil profile can be run. Comparative tests between Disp and the CXTFIT program, regarding the calculation of the Peclet number and retardation factor parameters, indicated equivalence between the two programs; however, the graphical interface of Disp makes it simpler to use than CXTFIT.

  6. Identificación de parámetros con métodos numéricos para el modelado de sistemas eléctricos con dependencia frecuencial;Identification of parameters with numerical methods for the modelling of electrical systems with frequency dependence.

    Directory of Open Access Journals (Sweden)

    Eduardo Salvador Bañuelos Cabral

    2015-06-01

    Full Text Available This paper provides a detailed description of the rational fitting techniques most used to approximate frequency-domain functions. The techniques are: Bode asymptotic approximation, Ordinary Least-Squares, Iteratively Reweighted Least-Squares, Vector Fitting and Levenberg-Marquardt. These techniques are compared by approximating an analytic function. The techniques are then applied to the rational fitting of the frequency-dependent parameters of a single-phase transmission line. The effect of the rational representations is evaluated by considering transients in the open-ended, short-circuited and perfectly matched line cases. The numerical Laplace transform (NLT) is used as the reference for the evaluations. It follows that the proper implementation of each fitting technique depends on various factors: the type of function being fitted, the fitting range, and others. Moreover, it is not possible to guarantee that one of the techniques always converges to the best result. The paper proposes some guidelines for selecting the most suitable technique for a particular application.

  7. Using Tranformation Group Priors and Maximum Relative Entropy for Bayesian Glaciological Inversions

    Science.gov (United States)

    Arthern, R. J.; Hindmarsh, R. C. A.; Williams, C. R.

    2014-12-01

    One of the key advances that has allowed better simulations of the large ice sheets of Greenland and Antarctica has been the use of inverse methods. These have allowed poorly known parameters such as the basal drag coefficient and ice viscosity to be constrained using a wide variety of satellite observations. Inverse methods used by glaciologists have broadly followed one of two related approaches. The first is minimization of a cost function that describes the misfit to the observations, often accompanied by some kind of explicit or implicit regularization that promotes smallness or smoothness in the inverted parameters. The second approach is a probabilistic framework that makes use of Bayes' theorem to update prior assumptions about the probability of parameters, making use of data with known error estimates. Both approaches have much in common and questions of regularization often map onto implicit choices of prior probabilities that are made explicit in the Bayesian framework. In both approaches questions can arise that seem to demand subjective input. What should the functional form of the cost function be if there are alternatives? What kind of regularization should be applied, and how much? How should the prior probability distribution for a parameter such as basal slipperiness be specified when we know so little about the details of the subglacial environment? Here we consider some approaches that have been used to address these questions and discuss ways that probabilistic prior information used for regularizing glaciological inversions might be specified with greater objectivity.

  8. Using Tikhonov Regularization for Spatial Projections from CSR Regularized Spherical Harmonic GRACE Solutions

    Science.gov (United States)

    Save, H.; Bettadpur, S. V.

    2013-12-01

    It has been demonstrated before that using Tikhonov regularization produces spherical harmonic solutions from GRACE that have very little residual stripes while capturing all the signal observed by GRACE within the noise level. This paper demonstrates a two-step process and uses Tikhonov regularization to remove the residual stripes in the CSR regularized spherical harmonic coefficients when computing the spatial projections. We discuss methods to produce mass anomaly grids that have no stripe features while satisfying the necessary condition of capturing all observed signal within the GRACE noise level.

  9. Evidence for the Modulation of Sub-Lexical Processing in Go No-Go Naming: The Elimination of the Frequency x Regularity Interaction

    Science.gov (United States)

    Cummine, Jacqueline; Amyotte, Josee; Pancheshen, Brent; Chouinard, Brea

    2011-01-01

    The Frequency (high vs. low) x Regularity (regular vs. exception) interaction found on naming response times is often taken as evidence for parallel processing of sub-lexical and lexical systems. Using a Go/No-go naming task, we investigated the effect of nonword versus pseudohomophone foils on sub-lexical processing and the subsequent Frequency x…

  10. UNFOLDED REGULAR AND SEMI-REGULAR POLYHEDRA

    Directory of Open Access Journals (Sweden)

    IONIŢĂ Elena

    2015-06-01

    Full Text Available This paper proposes a presentation of the unfolding of regular and semi-regular polyhedra. Regular polyhedra are convex polyhedra whose faces are regular and equal polygons, with the same number of sides, and whose polyhedral angles are also regular and equal. Semi-regular polyhedra are convex polyhedra with regular polygon faces of several types and equal solid angles of the same type. A net of a polyhedron is a collection of edges in the plane which are the unfolded edges of the solid. The Platonic and Archimedean polyhedra are modeled and unfolded using the 3dsMAX program. This paper is intended as an example of descriptive geometry applications.

  11. Reference Priors For Non-Normal Two-Sample Problems

    NARCIS (Netherlands)

    Fernández, C.; Steel, M.F.J.

    1997-01-01

    The reference prior algorithm (Berger and Bernardo, 1992) is applied to location-scale models with any regular sampling density. A number of two-sample problems are analyzed in this general context, extending the difference, ratio and product of Normal means problems outside Normality, while explicitly

  12. Analysis of Boiler Operational Variables Prior to Tube Leakage Fault by Artificial Intelligent System

    Directory of Open Access Journals (Sweden)

    Al-Kayiem Hussain H.

    2014-07-01

    Full Text Available Steam boilers are considered the core of any steam power plant. Boilers are subjected to various types of trips leading to shutdown of the entire plant. Tube leakage is the worst of the common boiler faults, with a shutdown period lasting around four to five days. This paper describes the use of artificial intelligent systems to diagnose the boiler variables prior to tube leakage occurrence. An intelligent system based on an artificial neural network was designed and coded in the MATLAB environment. The ANN was trained and validated using real site data acquired from a coal-fired power plant in Malaysia. Ninety-three boiler operational variables were identified for the present investigation based on the plant operators' experience. Various neural network topology combinations were investigated. The results showed that the NN with two hidden layers performed better than the one with a single hidden layer using the Levenberg-Marquardt training algorithm. Moreover, it was noticed that the hyperbolic tangent function for the input and output nodes performed better than other activation function types.

  13. Detecting regular sound changes in linguistics as events of concerted evolution.

    Science.gov (United States)

    Hruschka, Daniel J; Branford, Simon; Smith, Eric D; Wilkins, Jon; Meade, Andrew; Pagel, Mark; Bhattacharya, Tanmoy

    2015-01-05

    Concerted evolution is normally used to describe parallel changes at different sites in a genome, but it is also observed in languages where a specific phoneme changes to the same other phoneme in many words in the lexicon—a phenomenon known as regular sound change. We develop a general statistical model that can detect concerted changes in aligned sequence data and apply it to study regular sound changes in the Turkic language family. Linguistic evolution, unlike the genetic substitutional process, is dominated by events of concerted evolutionary change. Our model identified more than 70 historical events of regular sound change that occurred throughout the evolution of the Turkic language family, while simultaneously inferring a dated phylogenetic tree. Including regular sound changes yielded an approximately 4-fold improvement in the characterization of linguistic change over a simpler model of sporadic change, improved phylogenetic inference, and returned more reliable and plausible dates for events on the phylogenies. The historical timings of the concerted changes closely follow a Poisson process model, and the sound transition networks derived from our model mirror linguistic expectations. We demonstrate that a model with no prior knowledge of complex concerted or regular changes can nevertheless infer the historical timings and genealogical placements of events of concerted change from the signals left in contemporary data. Our model can be applied wherever discrete elements—such as genes, words, cultural trends, technologies, or morphological traits—can change in parallel within an organism or other evolving group. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.

  14. Reference Priors for the General Location-Scale Model

    NARCIS (Netherlands)

    Fernández, C.; Steel, M.F.J.

    1997-01-01

    The reference prior algorithm (Berger and Bernardo 1992) is applied to multivariate location-scale models with any regular sampling density, where we establish the irrelevance of the usual assumption of Normal sampling if our interest is in either the location or the scale. This result immediately

  15. Comparing the Selected Transfer Functions and Local Optimization Methods for Neural Network Flood Runoff Forecast

    Directory of Open Access Journals (Sweden)

    Petr Maca

    2014-01-01

    Full Text Available The presented paper aims to analyze the influence of the selection of the transfer function and training algorithm on neural network flood runoff forecasts. Nine of the most significant flood events, caused by extreme rainfall, were selected from 10 years of measurement on a small headwater catchment in the Czech Republic, and flood runoff forecasting was investigated using an extensive set of multilayer perceptrons with one hidden layer of neurons. The analyzed artificial neural network models with 11 different activation functions in the hidden layer were trained using 7 local optimization algorithms. The results show that the Levenberg-Marquardt algorithm was superior to the remaining tested local optimization methods. When comparing the 11 nonlinear transfer functions used in the hidden layer neurons, the RootSig function was superior to the rest of the analyzed activation functions.

  16. River suspended sediment estimation by climatic variables implication: Comparative study among soft computing techniques

    Science.gov (United States)

    Kisi, Ozgur; Shiri, Jalal

    2012-06-01

    Estimating the sediment volume carried by a river is an important issue in water resources engineering. This paper compares the accuracy of three different soft computing methods, Artificial Neural Networks (ANNs), the Adaptive Neuro-Fuzzy Inference System (ANFIS), and Gene Expression Programming (GEP), in estimating daily suspended sediment concentration in rivers using hydro-meteorological data. Daily rainfall, streamflow and suspended sediment concentration data from the Eel River near Dos Rios, California, USA are used as a case study. The comparison results indicate that the GEP model performs better than the other models in daily suspended sediment concentration estimation for the particular data sets used in this study. The Levenberg-Marquardt, conjugate gradient and gradient descent training algorithms were used for the ANN models. Of the three algorithms, the conjugate gradient algorithm was found to be better than the others.

  17. New numerical approximation for solving fractional delay differential equations of variable order using artificial neural networks

    Science.gov (United States)

    Zúñiga-Aguilar, C. J.; Coronel-Escamilla, A.; Gómez-Aguilar, J. F.; Alvarado-Martínez, V. M.; Romero-Ugalde, H. M.

    2018-02-01

    In this paper, we approximate the solution of fractional differential equations with delay using a new approach based on artificial neural networks. We consider fractional differential equations of variable order with the Mittag-Leffler kernel in the Liouville-Caputo sense. With this new neural network approach, an approximate solution of the fractional delay differential equation is obtained. Synaptic weights are optimized using the Levenberg-Marquardt algorithm. The neural network effectiveness and applicability were validated by solving different types of fractional delay differential equations, linear systems with delay, nonlinear systems with delay and a system of differential equations, for instance, the Newton-Leipnik oscillator. The solution of the neural network was compared with the analytical solutions and the numerical simulations obtained through the Adams-Bashforth-Moulton method. To show the effectiveness of the proposed neural network, different performance indices were calculated.

  18. Inverse Kinematics of a Humanoid Robot with Non-Spherical Hip: A Hybrid Algorithm Approach

    Directory of Open Access Journals (Sweden)

    Rafael Cisneros Limón

    2013-04-01

    Full Text Available This paper describes an approach to solve the inverse kinematics problem of humanoid robots whose construction shows a small but non-negligible offset at the hip, which prevents any purely analytical solution from being developed. Since a purely numerical solution is not feasible due to variable efficiency problems, the proposed approach first neglects the offset in order to obtain an approximate "solution" by means of an analytical algorithm based on screw theory, and then uses it as the initial condition of a numerical refinement procedure based on the Levenberg-Marquardt algorithm. In this way, few iterations are needed for any specified attitude, making it possible to implement the algorithm for real-time applications. As a way to show the algorithm's implementation, one case study is considered throughout the paper, represented by the SILO2 humanoid robot.
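
    A minimal sketch of the hybrid pattern described above: an approximate closed-form solution used as the initial guess for a Levenberg-Marquardt refinement. The 2-link planar arm here is a toy stand-in for the humanoid's offset hip chain; link lengths and the target are assumptions.

      import numpy as np
      from scipy.optimize import least_squares

      L1, L2 = 0.4, 0.3                        # link lengths (m), assumed

      def fk(q):
          # Forward kinematics of a 2-link planar arm
          return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                           L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

      def analytic_guess(target):
          # Standard 2-link closed form, playing the role of the offset-free approximation
          x, y = target
          c2 = np.clip((x**2 + y**2 - L1**2 - L2**2) / (2 * L1 * L2), -1, 1)
          q2 = np.arccos(c2)
          q1 = np.arctan2(y, x) - np.arctan2(L2 * np.sin(q2), L1 + L2 * np.cos(q2))
          return np.array([q1, q2])

      target = np.array([0.5, 0.2])
      q0 = analytic_guess(target)                              # step 1: analytic approximation
      sol = least_squares(lambda q: fk(q) - target, q0, method='lm')  # step 2: LM refinement
      print("refined joint angles:", sol.x)

    Because the analytic guess is already close to the true solution, the LM refinement typically needs only a few iterations, which is the property the paper exploits for real-time use.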

  19. MODELADO DE PARTÍCULAS PM10 Y PM2.5 MEDIANTE REDES NEURONALES ARTIFICIALES SOBRE CLIMA TROPICAL DE SAN FRANCISCO DE CAMPECHE, MÉXICO (Modeling of PM10 and PM2.5 particles using artificial neural networks in the tropical climate of San Francisco de Campeche, Mexico)

    Directory of Open Access Journals (Sweden)

    Alberto Antonio Espinosa Guzmán

    Full Text Available In this paper, a computational methodology based on Artificial Neural Networks (ANN) was developed to estimate the PM10 and PM2.5 concentrations in the air of the city of San Francisco de Campeche. A three-layer ANN architecture was trained using an experimental database composed of day of the week, time of day, ambient temperature, atmospheric pressure, wind speed, wind direction, relative humidity, and solar radiation. The best ANN architecture, composed of 30 neurons in the hidden layer, was obtained using the Levenberg-Marquardt (LM) optimization algorithm with logarithmic sigmoid and linear transfer functions. The model generates predictions with determination coefficients of 93.01% and 90.10% for PM2.5 and PM10, respectively. The proposed methodology can be applied in several areas such as public health, environmental studies, urban development, and the degradation of historical monuments.

  20. An algorithm for robust non-linear analysis of radioimmunoassays and other bioassays

    International Nuclear Information System (INIS)

    Normolle, D.P.

    1993-01-01

    The four-parameter logistic function is an appropriate model for many types of bioassays that have continuous response variables, such as radioimmunoassays. By modelling the variance of replicates in an assay, one can modify the usual parameter estimation techniques (for example, Gauss-Newton or Marquardt-Levenberg) to produce parameter estimates for the standard curve that are robust against outlying observations. This article describes the computation of robust (M-) estimates for the parameters of the four-parameter logistic function. It describes techniques for modelling the variance structure of the replicates, modifications to the usual iterative algorithms for parameter estimation in non-linear models, and a formula for inverse confidence intervals. To demonstrate the algorithm, the article presents examples where the robustly estimated four-parameter logistic model is compared with the logit-log and four-parameter logistic models with least-squares estimates. (author)
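
    A minimal sketch of robust estimation for the four-parameter logistic standard curve discussed above. scipy's least_squares with a soft-l1 loss gives M-estimator-like downweighting of outliers (MINPACK's method='lm' does not accept robust losses, so the default 'trf' solver is used). The assay data, including one injected outlier, are hypothetical.

      import numpy as np
      from scipy.optimize import least_squares

      def four_pl(x, a, b, c, d):
          # Four-parameter logistic: asymptotes a and d, slope b, inflection c
          return d + (a - d) / (1.0 + (x / c) ** b)

      dose = np.array([0.1, 0.3, 1, 3, 10, 30, 100.0])          # synthetic
      resp = np.array([1.95, 1.88, 1.62, 1.10, 0.55, 0.50, 0.30])
      resp[3] += 0.6                                            # injected outlier

      residuals = lambda p: four_pl(dose, *p) - resp
      fit = least_squares(residuals, x0=[2.0, 1.0, 3.0, 0.2],
                          loss='soft_l1', f_scale=0.1)          # robust (M-estimate-like) loss
      print("robust 4PL parameters:", fit.x.round(3))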

  1. Development of an Algorithm for Automatic Analysis of the Impedance Spectrum Based on a Measurement Model

    Science.gov (United States)

    Kobayashi, Kiyoshi; Suzuki, Tohru S.

    2018-03-01

    A new algorithm for the automatic estimation of an equivalent circuit and the subsequent parameter optimization is developed by combining the data-mining concept and the complex least-squares method. In this algorithm, the program generates an initial equivalent-circuit model based on the sampled data and then attempts to optimize the parameters. The basic hypothesis is that the measured impedance spectrum can be reproduced by the sum of the partial impedance spectra presented by a resistor, an inductor, a resistor connected in parallel to a capacitor, and a resistor connected in parallel to an inductor. The adequacy of the model is determined by a simple artificial-intelligence function, which is applied to the output of the Levenberg-Marquardt module. By iterating model modifications, the program finds an adequate equivalent-circuit model without any user input of an equivalent-circuit model.
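
    A minimal sketch of the complex least-squares step in such an algorithm: fitting a resistor in series with a parallel RC to an impedance spectrum by stacking real and imaginary residuals. The circuit values and frequencies are hypothetical, and the model-generation/AI stage is not shown.

      import numpy as np
      from scipy.optimize import least_squares

      def z_model(p, w):
          # R0 in series with R1 || C1:  Z = R0 + R1 / (1 + j*w*R1*C1)
          R0, R1, C1 = p
          return R0 + R1 / (1.0 + 1j * w * R1 * C1)

      w = 2 * np.pi * np.logspace(0, 5, 40)                    # angular frequencies, rad/s
      z_true = z_model([10.0, 100.0, 1e-6], w)
      z_meas = z_true + np.random.default_rng(2).normal(scale=0.3, size=w.size)

      def residuals(p):
          r = z_model(p, w) - z_meas
          return np.concatenate([r.real, r.imag])              # complex fit as real least squares

      fit = least_squares(residuals, x0=[5.0, 50.0, 1e-7], method='lm')
      print("R0, R1, C1 =", fit.x)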

  2. Redundant interferometric calibration as a complex optimization problem

    Science.gov (United States)

    Grobler, T. L.; Bernardi, G.; Kenyon, J. S.; Parsons, A. R.; Smirnov, O. M.

    2018-05-01

    Observations of the redshifted 21 cm line from the epoch of reionization have recently motivated the construction of low-frequency radio arrays with highly redundant configurations. These configurations provide an alternative calibration strategy - `redundant calibration' - and boost sensitivity on specific spatial scales. In this paper, we formulate calibration of redundant interferometric arrays as a complex optimization problem. We solve this optimization problem via the Levenberg-Marquardt algorithm. This calibration approach is more robust to initial conditions than current algorithms and, by leveraging an approximate matrix inversion, allows for further optimization and an efficient implementation (`redundant STEFCAL'). We also investigated using the preconditioned conjugate gradient method as an alternative to the approximate matrix inverse, but found that its computational performance is not competitive with respect to `redundant STEFCAL'. The efficient implementation of this new algorithm is made publicly available.

  3. Group-regularized individual prediction: theory and application to pain.

    Science.gov (United States)

    Lindquist, Martin A; Krishnan, Anjali; López-Solà, Marina; Jepma, Marieke; Woo, Choong-Wan; Koban, Leonie; Roy, Mathieu; Atlas, Lauren Y; Schmidt, Liane; Chang, Luke J; Reynolds Losin, Elizabeth A; Eisenbarth, Hedwig; Ashar, Yoni K; Delk, Elizabeth; Wager, Tor D

    2017-01-15

    Multivariate pattern analysis (MVPA) has become an important tool for identifying brain representations of psychological processes and clinical outcomes using fMRI and related methods. Such methods can be used to predict or 'decode' psychological states in individual subjects. Single-subject MVPA approaches, however, are limited by the amount and quality of individual-subject data. In spite of higher spatial resolution, predictive accuracy from single-subject data often does not exceed what can be accomplished using coarser, group-level maps, because single-subject patterns are trained on limited amounts of often-noisy data. Here, we present a method that combines population-level priors, in the form of biomarker patterns developed on prior samples, with single-subject MVPA maps to improve single-subject prediction. Theoretical results and simulations motivate a weighting based on the relative variances of biomarker-based prediction (based on population-level predictive maps from prior groups) and individual-subject, cross-validated prediction. Empirical results predicting pain using brain activity on a trial-by-trial basis (single-trial prediction) across 6 studies (N=180 participants) confirm the theoretical predictions. Regularization based on a population-level biomarker, in this case the Neurologic Pain Signature (NPS), improved single-subject prediction accuracy compared with idiographic maps based on the individuals' data alone. The regularization scheme that we propose, which we term group-regularized individual prediction (GRIP), can be applied broadly to within-person MVPA-based prediction. We also show how GRIP can be used to evaluate data quality and provide benchmarks for the appropriateness of population-level maps like the NPS for a given individual or study. Copyright © 2015 Elsevier Inc. All rights reserved.
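
    A minimal sketch of the variance-based blending that motivates GRIP: the population-level biomarker prediction and the individual's cross-validated prediction are combined in proportion to their inverse error variances. All numbers are illustrative, and this is a simplification of the paper's scheme.

      import numpy as np

      def grip_combine(pred_pop, var_pop, pred_ind, var_ind):
          # More weight goes to the population map when the individual model is noisy
          w = var_ind / (var_pop + var_ind)
          return w * pred_pop + (1 - w) * pred_ind

      # Noisy individual model (little data) vs a stable population biomarker
      print(grip_combine(pred_pop=5.2, var_pop=0.8, pred_ind=6.1, var_ind=2.4))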

  4. Performance Estimation and Fault Diagnosis Based on Levenberg–Marquardt Algorithm for a Turbofan Engine

    Directory of Open Access Journals (Sweden)

    Junjie Lu

    2018-01-01

    Full Text Available Establishing schemes for accurate and computationally efficient performance estimation and fault diagnosis of turbofan engines has become a new research focus and challenge; such schemes can increase the reliability and stability of the turbofan engine and reduce life-cycle costs. Accurate estimation of turbofan engine performance depends on a thorough understanding of the components' performance, which is described by component characteristic maps, and the fault of each component can be regarded as a change in its characteristic map. In this paper, a novel method based on the Levenberg-Marquardt (LM) algorithm is proposed to enhance the fidelity of the performance estimation and the credibility of the fault diagnosis for the turbofan engine. The presented method utilizes the LM algorithm to locate the operating point in the characteristic maps, preparing for performance estimation and fault diagnosis. The accuracy of the proposed method is evaluated for estimating performance parameters in the transient case with Rayleigh process noise and Gaussian measurement noise. A comparison among the extended Kalman filter (EKF) method, the particle filter (PF) method and the proposed method is carried out for the abrupt fault case and the gradual degeneration case, and it is shown that the proposed method leads to more accurate results for performance estimation and fault diagnosis of turbofan engines than the currently popular EKF and PF diagnosis methods.

  5. 5 CFR 551.421 - Regular working hours.

    Science.gov (United States)

    2010-01-01

    ... 5 Administrative Personnel 1 2010-01-01 2010-01-01 false Regular working hours. 551.421 Section... Activities § 551.421 Regular working hours. (a) Under the Act there is no requirement that a Federal employee... distinction based on whether the activity is performed by an employee during regular working hours or outside...

  6. Modeling the ultrasonic testing echoes by a combination of particle swarm optimization and Levenberg–Marquardt algorithms

    International Nuclear Information System (INIS)

    Gholami, Ali; Honarvar, Farhang; Moghaddam, Hamid Abrishami

    2017-01-01

    This paper presents an accurate and easy-to-implement algorithm for estimating the parameters of the asymmetric Gaussian chirplet model (AGCM) used for modeling echoes measured in ultrasonic nondestructive testing (NDT) of materials. The proposed algorithm is a combination of the particle swarm optimization (PSO) and Levenberg-Marquardt (LM) algorithms. PSO does not need an accurate initial guess and quickly converges to a reasonable output, while LM needs a good initial guess in order to provide an accurate output. In the combined algorithm, PSO is run first to provide a rough estimate of the output, and this result is subsequently input to the LM algorithm for more accurate estimation of the parameters. To apply the algorithm to signals with multiple echoes, space alternating generalized expectation maximization (SAGE) is used. The proposed combined algorithm is robust and accurate. To examine its performance, it is applied to a number of simulated echoes having various signal-to-noise ratios. The combined algorithm is also applied to a number of experimental ultrasonic signals. The results corroborate the accuracy and reliability of the proposed combined algorithm. (paper)
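
    A minimal sketch of the PSO-then-LM scheme described above, applied to a simple Gaussian echo rather than the full asymmetric Gaussian chirplet model. The PSO settings, bounds and synthetic echo are assumptions, not the paper's configuration.

      import numpy as np
      from scipy.optimize import least_squares

      t = np.linspace(0, 10, 400)
      true = [1.0, 5.0, 0.6]                                   # amplitude, delay, width
      echo = true[0] * np.exp(-((t - true[1]) / true[2]) ** 2)
      echo += np.random.default_rng(3).normal(scale=0.05, size=t.size)

      def residuals(p):
          return p[0] * np.exp(-((t - p[1]) / p[2]) ** 2) - echo

      def pso(n=30, iters=60, lo=(0.1, 0.0, 0.1), hi=(2.0, 10.0, 2.0)):
          # Plain global-best PSO over the 3 echo parameters
          rng = np.random.default_rng(4)
          x = rng.uniform(lo, hi, size=(n, 3)); v = np.zeros_like(x)
          pb = x.copy(); pb_cost = np.array([np.sum(residuals(p)**2) for p in x])
          gb = pb[pb_cost.argmin()].copy()
          for _ in range(iters):
              r1, r2 = rng.random((2, n, 3))
              v = 0.7 * v + 1.5 * r1 * (pb - x) + 1.5 * r2 * (gb - x)
              x = np.clip(x + v, lo, hi)
              cost = np.array([np.sum(residuals(p)**2) for p in x])
              better = cost < pb_cost
              pb[better], pb_cost[better] = x[better], cost[better]
              gb = pb[pb_cost.argmin()].copy()
          return gb

      rough = pso()                                           # stage 1: PSO, no initial guess needed
      fit = least_squares(residuals, rough, method='lm')      # stage 2: LM local refinement
      print("PSO estimate:", rough.round(3), "-> LM refined:", fit.x.round(3))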

  7. Periodic orbits around areostationary points in the Martian gravity field

    International Nuclear Information System (INIS)

    Liu Xiaodong; Baoyin Hexi; Ma Xingrui

    2012-01-01

    This study investigates the problem of areostationary orbits around Mars in three-dimensional space. Areostationary orbits are expected to be used to establish a future telecommunication network for the exploration of Mars. However, no artificial satellites have been placed in these orbits thus far. The characteristics of the Martian gravity field are presented, and areostationary points and their linear stability are calculated. By taking linearized solutions in the planar case as the initial guesses and utilizing the Levenberg-Marquardt method, families of periodic orbits around areostationary points are shown to exist. Short-period orbits and long-period orbits are found around linearly stable areostationary points, but only short-period orbits are found around unstable areostationary points. Vertical periodic orbits around both linearly stable and unstable areostationary points are also examined. Satellites in these periodic orbits could depart from areostationary points by a few degrees in longitude, which would facilitate observation of the Martian topography. Based on the eigenvalues of the monodromy matrix, the evolution of the stability index of periodic orbits is determined. Finally, heteroclinic orbits connecting the two unstable areostationary points are found, providing the possibility for orbital transfer with minimal energy consumption.

  8. MATLAB-implemented estimation procedure for model-based assessment of hepatic insulin degradation from standard intravenous glucose tolerance test data.

    Science.gov (United States)

    Di Nardo, Francesco; Mengoni, Michele; Morettini, Micaela

    2013-05-01

    The present study provides a novel MATLAB-based parameter estimation procedure for individual assessment of the hepatic insulin degradation (HID) process from standard frequently-sampled intravenous glucose tolerance test (FSIGTT) data. Direct access to the source code, offered by MATLAB, enabled us to design an optimization procedure based on the alternating use of the Gauss-Newton and Levenberg-Marquardt algorithms, which assures the full convergence of the process and the containment of computational time. Reliability was tested by direct comparison with the application, in eighteen non-diabetic subjects, of the well-known kinetic analysis software package SAAM II, and by application to different data. Agreement between MATLAB and SAAM II was warranted by intraclass correlation coefficients ≥0.73, no significant differences between corresponding mean parameter estimates and predictions of HID rate, and consistent residual analysis. Moreover, the MATLAB optimization procedure resulted in a significant 51% reduction of CV% for the parameter worst-estimated by SAAM II while maintaining all other model-parameter CV% values. The MATLAB-based procedure was suggested as a suitable tool for the individual assessment of the HID process. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  9. Hybrid neuro-fuzzy system for power generation control with environmental constraints

    International Nuclear Information System (INIS)

    Chaturvedi, Krishna Teerth; Pandit, Manjaree; Srivastava, Laxmi

    2008-01-01

    The real-time controls at the central energy management centre in a power system continuously track the load changes and endeavor to match the total power demand with the total generation in such a manner that the operating cost is least. However, due to strict government regulations on environmental protection, operation at minimum cost is no longer the only criterion for dispatching electrical power. The idea behind the environmentally constrained combined economic dispatch formulation is to estimate the optimal generation allocation to generating units in such a manner that fuel cost and harmful emission levels are both simultaneously minimized for a given load demand. Conventional optimization techniques are cumbersome for such complex optimization tasks and are not suitable for on-line use due to the increased computational burden. This paper proposes a neuro-fuzzy power dispatch method where the uncertainty involved in power demand is modeled as a fuzzy variable. A Levenberg-Marquardt neural network (LMNN) is then used to evaluate the optimal generation schedules. This model trains almost a hundred times faster than the popular BP neural network. The proposed method has been tested on two test systems and found to be suitable for on-line combined environmental economic dispatch.

  10. X-ray computed tomography using curvelet sparse regularization.

    Science.gov (United States)

    Wieczorek, Matthias; Frikel, Jürgen; Vogel, Jakob; Eggl, Elena; Kopp, Felix; Noël, Peter B; Pfeiffer, Franz; Demaret, Laurent; Lasser, Tobias

    2015-04-01

    Reconstruction of x-ray computed tomography (CT) data remains a mathematically challenging problem in medical imaging. Complementing the standard analytical reconstruction methods, sparse regularization is growing in importance, as it allows inclusion of prior knowledge. The paper presents a method for sparse regularization based on the curvelet frame for the application to iterative reconstruction in x-ray computed tomography. In this work, the authors present an iterative reconstruction approach based on the alternating direction method of multipliers using curvelet sparse regularization. Evaluation of the method is performed on a specifically crafted numerical phantom dataset to highlight the method's strengths. Additional evaluation is performed on two real datasets from commercial scanners with different noise characteristics, a clinical bone sample acquired in a micro-CT and a human abdomen scanned in a diagnostic CT. The results clearly illustrate that curvelet sparse regularization has characteristic strengths. In particular, it improves the restoration and resolution of highly directional, high contrast features with smooth contrast variations. The authors also compare this approach to the popular technique of total variation and to traditional filtered backprojection. The authors conclude that curvelet sparse regularization is able to improve reconstruction quality by reducing noise while preserving highly directional features.

  11. Superposing pure quantum states with partial prior information

    Science.gov (United States)

    Dogra, Shruti; Thomas, George; Ghosh, Sibasish; Suter, Dieter

    2018-05-01

    The principle of superposition is an intriguing feature of quantum mechanics, which is regularly exploited in many different circumstances. A recent work [M. Oszmaniec et al., Phys. Rev. Lett. 116, 110403 (2016), 10.1103/PhysRevLett.116.110403] shows that the fundamentals of quantum mechanics restrict the process of superimposing two unknown pure states, even though it is possible to superimpose two quantum states with partial prior knowledge. The prior knowledge imposes geometrical constraints on the choice of input states. We discuss an experimentally feasible protocol to superimpose multiple pure states of a d -dimensional quantum system and carry out an explicit experimental realization for two single-qubit pure states with partial prior information on a two-qubit NMR quantum information processor.

  12. Regularized Laplace-Fourier-Domain Full Waveform Inversion Using a Weighted l2 Objective Function

    Science.gov (United States)

    Jun, Hyunggu; Kwon, Jungmin; Shin, Changsoo; Zhou, Hongbo; Cogan, Mike

    2017-03-01

    Full waveform inversion (FWI) can be applied to obtain an accurate velocity model that contains important geophysical and geological information. FWI suffers from the local minimum problem when the starting model is not sufficiently close to the true model. Therefore, an accurate macroscale velocity model is essential for successful FWI, and Laplace-Fourier-domain FWI is appropriate for obtaining such a velocity model. However, conventional Laplace-Fourier-domain FWI remains an ill-posed and ill-conditioned problem, meaning that small errors in the data can result in large differences in the inverted model. This approach also suffers from certain limitations related to the logarithmic objective function. To overcome the limitations of conventional Laplace-Fourier-domain FWI, we introduce a weighted l2 objective function, instead of the logarithmic objective function, as the data-domain objective function, and we also introduce two different model-domain regularizations: first-order Tikhonov regularization and prior model regularization. The weighting matrix for the data-domain objective function is constructed to suitably enhance the far-offset information. Tikhonov regularization smoothes the gradient, and prior model regularization allows reliable prior information to be taken into account. Two hyperparameters are obtained through trial and error and used to control the trade-off and achieve an appropriate balance between the data-domain and model-domain gradients. The application of the proposed regularizations facilitates finding a unique solution via FWI, and the weighted l2 objective function ensures a more reasonable residual, thereby improving the stability of the gradient calculation. Numerical tests performed using the Marmousi synthetic dataset show that the use of the weighted l2 objective function and the model-domain regularizations significantly improves the Laplace-Fourier-domain FWI. Because the Laplace-Fourier-domain FWI is improved, the

  13. Result of 11th regular inspection of No.1 plant in Shimane Nuclear Power Station, Chugoku Electric Power Co., Inc

    International Nuclear Information System (INIS)

    Anon.

    1987-01-01

    The 11th regular inspection of the No.1 plant in Shimane Nuclear Power Station was carried out from January 9 to July 2, 1986. Parallel operation was resumed on June 19, 1986, 162 days after the parallel off. The facilities subject to inspection were the reactor proper, reactor cooling system, measurement and control system, fuel facilities, radiation control facilities, waste facilities, reactor containment installation, and emergency power generation system. Appearance, disassembly, leak, function, performance and other inspections were carried out on these facilities, and as a result, no abnormality was found. The works related to this regular inspection were accomplished within the range of the allowable radiation dose based on the relevant laws. The main reconstruction works carried out during this regular inspection were as follows: feed water spargers were replaced with those of welded type, the material of the drain pipe for the No.3 feed heater was changed to STPA 23, an exhaust compressor, an exhaust gas-water separator and other unused equipment were removed, and the connecting pipe for a liquid nitrogen evaporator was installed. (Kako, I.)

  14. Nonnegative Matrix Factorization with Rank Regularization and Hard Constraint.

    Science.gov (United States)

    Shang, Ronghua; Liu, Chiyang; Meng, Yang; Jiao, Licheng; Stolkin, Rustam

    2017-09-01

    Nonnegative matrix factorization (NMF) is well known to be an effective tool for dimensionality reduction in problems involving big data. For this reason, it frequently appears in many areas of the scientific and engineering literature. This letter proposes a novel semisupervised NMF algorithm for overcoming a variety of problems associated with NMF algorithms, including poor use of prior information, the negative impact of the sparse constraint on manifold structure, and inaccurate graph construction. Our proposed algorithm, nonnegative matrix factorization with rank regularization and hard constraint (NMFRC), incorporates label information into the data representation as a hard constraint, which makes full use of prior information. NMFRC also measures pairwise similarity according to geodesic distance rather than Euclidean distance. This results in more accurate measurement of pairwise relationships and thus more effective manifold information. Furthermore, NMFRC adopts a rank constraint instead of norm constraints for regularization to balance the sparseness and smoothness of data. In this way, the new data representation is more representative and has better interpretability. Experiments on real data sets suggest that NMFRC outperforms four other state-of-the-art algorithms in terms of clustering accuracy.

  15. Analysis of the IJCNN 2007 agnostic learning vs. prior knowledge challenge.

    Science.gov (United States)

    Guyon, Isabelle; Saffari, Amir; Dror, Gideon; Cawley, Gavin

    2008-01-01

    We organized a challenge for IJCNN 2007 to assess the added value of prior domain knowledge in machine learning. Most commercial data mining programs accept data pre-formatted in the form of a table, with each example being encoded as a linear feature vector. Is it worth spending time incorporating domain knowledge in feature construction or algorithm design, or can off-the-shelf programs working directly on simple low-level features do better than skilled data analysts? To answer these questions, we formatted five datasets using two data representations. The participants in the "prior knowledge" track used the raw data, with full knowledge of the meaning of the data representation. Conversely, the participants in the "agnostic learning" track used a pre-formatted data table, with no knowledge of the identity of the features. The results indicate that black-box methods using relatively unsophisticated features work quite well and rapidly approach the best attainable performance. The winners on the prior knowledge track used feature extraction strategies yielding a large number of low-level features. Incorporating prior knowledge in the form of generic coding/smoothing methods to exploit regularities in data is beneficial, but incorporating actual domain knowledge in feature construction is very time consuming and seldom leads to significant improvements. The AL vs. PK challenge web site remains open for post-challenge submissions: http://www.agnostic.inf.ethz.ch/.

  16. Elastography as a hybrid imaging technique : coupling with photoacoustics and quantitative imaging

    International Nuclear Information System (INIS)

    Widlak, T.G.

    2015-01-01

    While classical imaging methods, such as ultrasound, computed tomography or magnetic resonance imaging, are well known and mathematically understood, a host of physiological parameters relevant for diagnostic purposes cannot be obtained by them. This gap is recently being closed by the introduction of hybrid, or coupled-physics, imaging methods. They connect more than one physical modality, and aim to provide quantitative information on optical, electrical or mechanical parameters with high resolution. Central to this thesis is the mechanical contrast of elastic tissue, especially Young's modulus or the shear modulus. Different methods of qualitative elastography provide interior information on the mechanical displacement field. From this interior data, the nonlinear inverse problem of quantitative elastography aims to reconstruct the shear modulus. In this thesis, the elastography problem is seen from a hybrid imaging perspective; methods from the coupled-physics literature and regularization theory have been employed to recover displacement and shear modulus information. The overdetermined systems approach by G. Bal is applied to the quantitative problem, and ellipticity criteria are deduced, for one and several measurements, as well as injectivity results. Together with the geometric theory of G. Chavent, the results are used for analyzing convergence of Tikhonov regularization. Also, a convergence analysis for the Levenberg-Marquardt method is provided. As a second mainstream project in this thesis, elastography imaging is developed for extracting displacements from photoacoustic images. A novel method is provided for texturizing the images, and the optical flow problem for motion estimation is shown to be regularized with this texture generation. The results are tested in cooperation with the Medical University of Vienna, and the methods for quantitative determination of the shear modulus evaluated in first experiments. In summary, the overdetermined systems

  17. Impact of imatinib interruption and duration of prior hydroxyurea on the treatment outcome in patients with chronic myeloid leukemia: Single institution experience

    Directory of Open Access Journals (Sweden)

    Wael Abdelgawad Edesa

    2015-06-01

    Conclusion: The duration of prior hydroxyurea had no impact on response or progression-free survival, while patients regular on imatinib showed statistically significant differences in major molecular response, complete molecular response and progression-free survival compared with those who had periods of drug interruption. Thus, more governmental support is needed to supply the drug without interruption and so improve the outcome of therapy.

  18. Duas crianças cegas congênitas no primeiro ciclo da escola regular Two congenitally blind children in the first cycle of regular school

    Directory of Open Access Journals (Sweden)

    Fernando Jorge Costa Figueiredo

    2010-04-01

    Full Text Available This study aims to look in more detail into the mental representation of reality in congenitally blind children when compared with normal-sighted children in basic education in a regular school in Portugal. Starting with the theoretical fundamentals, its intention is to analyze different children over time, as well as the current society, vis-à-vis these children. We undertook two case studies and combined quantitative and qualitative data. The analysis of these cases reveals two different paths in the integration of the congenitally blind children in the first cycle of basic education, a differentiation that does not result from processes of adapting to the specific child from a humanistic perspective, but rather from the physical (schools) and organizational (Special Education) conditions.

  19. Incorporating a Spatial Prior into Nonlinear D-Bar EIT Imaging for Complex Admittivities.

    Science.gov (United States)

    Hamilton, Sarah J; Mueller, J L; Alsaker, M

    2017-02-01

    Electrical Impedance Tomography (EIT) aims to recover the internal conductivity and permittivity distributions of a body from electrical measurements taken on electrodes on the surface of the body. The reconstruction task is a severely ill-posed nonlinear inverse problem that is highly sensitive to measurement noise and modeling errors. Regularized D-bar methods have shown great promise in producing noise-robust algorithms by employing a low-pass filtering of nonlinear (nonphysical) Fourier transform data specific to the EIT problem. Including prior data with the approximate locations of major organ boundaries in the scattering transform provides a means of extending the radius of the low-pass filter to include higher frequency components in the reconstruction, in particular, features that are known with high confidence. This information is additionally included in the system of D-bar equations with an independent regularization parameter from that of the extended scattering transform. In this paper, this approach is used in the 2-D D-bar method for admittivity (conductivity as well as permittivity) EIT imaging. Noise-robust reconstructions are presented for simulated EIT data on chest-shaped phantoms with a simulated pneumothorax and pleural effusion. No assumption of the pathology is used in the construction of the prior, yet the method still produces significant enhancements of the underlying pathology (pneumothorax or pleural effusion) even in the presence of strong noise.

  20. Detection of Bundle Branch Block using Adaptive Bacterial Foraging Optimization and Neural Network

    Directory of Open Access Journals (Sweden)

    Padmavthi Kora

    2017-03-01

    Full Text Available Medical practitioners analyze the electrical activity of the human heart so as to predict various ailments by studying the data collected from the electrocardiogram (ECG). A Bundle Branch Block (BBB) is a type of heart disease which occurs when there is an obstruction along the pathway of an electrical impulse. This abnormality makes the heartbeat irregular, as the obstruction in the branches of the heart causes the impulses to travel more slowly than usual. Our current study diagnoses this heart problem using the Adaptive Bacterial Foraging Optimization (ABFO) algorithm. The data collected from the MIT/BIH arrhythmia BBB database are applied to the ABFO algorithm to obtain the best (most important) features from each ECG beat. These features are later fed to a Levenberg-Marquardt Neural Network (LMNN) based classifier. The results show that the proposed classification using ABFO is better than some recent algorithms reported in the literature.

  1. Usage of neural network to predict aluminium oxide layer thickness.

    Science.gov (United States)

    Michal, Peter; Vagaská, Alena; Gombár, Miroslav; Kmec, Ján; Spišák, Emil; Kučerka, Daniel

    2015-01-01

    This paper examines the influence of the chemical composition of the electrolyte, namely the amounts of sulphuric acid, oxalic acid and aluminium cations in the electrolyte, and of the operating parameters of the anodic oxidation of aluminium, namely the electrolyte temperature, anodizing time and voltage applied during the anodizing process, on the resulting thickness of the aluminium oxide layer. The impact of these variables is shown by using a central composite design of experiment for the six factors (amount of sulphuric acid, amount of oxalic acid, amount of aluminium cations, electrolyte temperature, anodizing time, and applied voltage) and by using a cubic neural unit with the Levenberg-Marquardt algorithm during the evaluation of the results. The paper also deals with current densities of 1 A·dm⁻² and 3 A·dm⁻² for creating the aluminium oxide layer.

  2. Usage of Neural Network to Predict Aluminium Oxide Layer Thickness

    Directory of Open Access Journals (Sweden)

    Peter Michal

    2015-01-01

    Full Text Available This paper examines the influence of the chemical composition of the electrolyte, namely the amounts of sulphuric acid, oxalic acid and aluminium cations in the electrolyte, and of the operating parameters of the anodic oxidation of aluminium, namely the electrolyte temperature, anodizing time and voltage applied during the anodizing process, on the resulting thickness of the aluminium oxide layer. The impact of these variables is shown by using a central composite design of experiment for the six factors (amount of sulphuric acid, amount of oxalic acid, amount of aluminium cations, electrolyte temperature, anodizing time, and applied voltage) and by using a cubic neural unit with the Levenberg-Marquardt algorithm during the evaluation of the results. The paper also deals with current densities of 1 A·dm⁻² and 3 A·dm⁻² for creating the aluminium oxide layer.

  3. Semi-empirical Algorithm for the Retrieval of Ecology-Relevant Water Constituents in Various Aquatic Environments

    Directory of Open Access Journals (Sweden)

    Robert Shuchman

    2009-03-01

    Full Text Available An advanced operational semi-empirical algorithm for processing satellite remote sensing data in the visible region is described. Based on the Levenberg-Marquardt multivariate optimization procedure, the algorithm is developed for retrieving major water colour producing agents: chlorophyll-a, suspended minerals and dissolved organics. Two assurance units incorporated by the algorithm are intended to flag pixels with inaccurate atmospheric correction and specific hydro-optical properties not covered by the applied hydro-optical model. The hydro-optical model is a set of spectral cross-sections of absorption and backscattering of the colour producing agents. The combination of the optimization procedure and a replaceable hydro-optical model makes the developed algorithm not specific to a particular satellite sensor or a water body. The algorithm performance efficiency is amply illustrated for SeaWiFS, MODIS and MERIS images over a variety of water bodies.

  4. Contribution to the modelling of induction machines by fractional order; Contribution a la modelisation dynamique d'ordre non entier de la machine asynchrone a cage

    Energy Technology Data Exchange (ETDEWEB)

    Canat, S.

    2005-07-15

    The induction machine is the most widespread machine in industry. Its traditional modeling does not take into account the eddy currents in the rotor bars, which however induce strong variations in both the resistance and the inductance of the rotor. This diffusive phenomenon, called 'skin effect', can be modeled by a compact transfer function using fractional (non-integer-order) derivatives. This report theoretically analyzes the electromagnetic phenomenon in a single rotor bar before approaching the rotor as a whole. The analysis is confirmed by the results of finite element calculations of the magnetic field, which are exploited to identify a fractional-order model of the induction machine (Levenberg-Marquardt identification method). The model is then confronted with an identification from experimental results. Finally, an automatic method is developed to approximate the dynamic model by an integer-order transfer function over a frequency band. (author)

  5. Super capacitor modeling with artificial neural network (ANN)

    Energy Technology Data Exchange (ETDEWEB)

    Marie-Francoise, J.N.; Gualous, H.; Berthon, A. [Universite de Franche-Comte, Lab. en Electronique, Electrotechnique et Systemes (L2ES), UTBM, INRETS (LRE T31) 90 - Belfort (France)

    2004-07-01

    This paper presents super-capacitor modeling using an Artificial Neural Network (ANN). The principle consists of a black-box nonlinear multiple-input single-output (MISO) model. The system inputs are temperature and current; the output is the super-capacitor voltage. The learning and validation of the ANN model from experimental charge and discharge of super-capacitors establish the relationship between inputs and output. The learning and validation of the ANN model use experimental results from 2700 F and 3700 F super-capacitors and a super-capacitor pack. Once the network is trained, the ANN model can predict the super-capacitor behaviour under temperature variations. The parameters of the ANN model are updated by the Levenberg-Marquardt method in order to minimize the error between the output of the system and the predicted output. The results obtained with the ANN model of the super-capacitor and the experimental ones are in good agreement. (authors)

  6. A fitting algorithm based on simulated annealing techniques for efficiency calibration of HPGe detectors using different mathematical functions

    Energy Technology Data Exchange (ETDEWEB)

    Hurtado, S. [Servicio de Radioisotopos, Centro de Investigacion, Tecnologia e Innovacion (CITIUS), Universidad de Sevilla, Avda. Reina Mercedes s/n, 41012 Sevilla (Spain)], E-mail: shurtado@us.es; Garcia-Leon, M. [Departamento de Fisica Atomica, Molecular y Nuclear, Facultad de Fisica, Universidad de Sevilla, Aptd. 1065, 41080 Sevilla (Spain); Garcia-Tenorio, R. [Departamento de Fisica Aplicada II, E.T.S.A. Universidad de Sevilla, Avda, Reina Mercedes 2, 41012 Sevilla (Spain)

    2008-09-11

    In this work several mathematical functions are compared in order to perform the full-energy peak efficiency calibration of HPGe detectors, using a 126 cm³ HPGe coaxial detector and gamma-ray energies ranging from 36 to 1460 keV. Statistical tests and Monte Carlo simulations were used to study the performance of the fitting curve equations. Furthermore, fitting these complex functional forms to experimental data is a non-linear multi-parameter minimization problem. In gamma-ray spectrometry, non-linear least-squares fitting algorithms (Levenberg-Marquardt method) usually provide fast convergence while minimizing χ_R²; however, they sometimes reach only local minima. In order to overcome that shortcoming, a hybrid algorithm based on simulated annealing (HSA) techniques is proposed. Additionally, a new function is suggested that models the efficiency curve of germanium detectors in gamma-ray spectrometry.
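
    To make the fitting step concrete, here is a minimal sketch of efficiency-curve fitting with a Levenberg-Marquardt least-squares solver. The log-polynomial form and the (energy, efficiency) points are illustrative assumptions, not the paper's calibration functions or data, and the hybrid simulated-annealing stage is omitted.

      import numpy as np
      from scipy.optimize import least_squares

      # illustrative calibration points: energy (keV) vs. full-energy peak efficiency
      energies = np.array([59.5, 88.0, 122.1, 391.7, 661.7, 1173.2, 1332.5])
      effs = np.array([0.052, 0.078, 0.085, 0.041, 0.028, 0.018, 0.016])

      def residuals(params):
          # ln(eff) modeled as a polynomial in ln(E), a common empirical choice
          return np.polyval(params, np.log(energies)) - np.log(effs)

      # third-degree polynomial in ln(E), refined by Levenberg-Marquardt
      fit = least_squares(residuals, x0=np.zeros(4), method='lm')
      print(fit.x, fit.cost)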

  7. A Computational Agent-Based Modeling Approach for Competitive Wireless Service Market

    KAUST Repository

    Douglas, C C

    2011-04-01

    Using an agent-based modeling method, we study market dynamism with regard to wireless cellular services that are in competition for a greater market share and profit. In the proposed model, service providers and consumers are described as agents who interact with each other and actively participate in an economically well-defined marketplace. Parameters of the model are optimized using the Levenberg-Marquardt method. The quantitative prediction capabilities of the proposed model are examined through data reproducibility using past data from the U.S. and Korean wireless service markets. Finally, we investigate a disruptive market event, namely the introduction of the iPhone into the U.S. in 2007 and the resulting changes in the modeling parameters. We predict and analyze the impacts of the introduction of the iPhone into the Korean wireless service market assuming a release date of 2Q09 based on earlier data. © 2011 IEEE.

  8. Diverse Regular Employees and Non-regular Employment (Japanese)

    OpenAIRE

    MORISHIMA Motohiro

    2011-01-01

    Currently there are high expectations for the introduction of policies related to diverse regular employees. These policies are a response to the problem of disparities between regular and non-regular employees (part-time, temporary, contract and other non-regular employees) and will make it more likely that workers can balance work and their private lives while companies benefit from the advantages of regular employment. In this paper, I look at two issues that underlie this discussion. The ...

  9. Condition Number Regularized Covariance Estimation.

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2013-06-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p, small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
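
    The core mechanism can be sketched as clipping the sample eigenvalues into a band [tau, kappa*tau], so the condition number of the estimate cannot exceed kappa. The simple rule for tau below is an illustrative assumption; the paper derives the maximum-likelihood truncation level and an adaptive choice of the regularization.

      import numpy as np

      def clipped_covariance(X, kappa=10.0):
          S = np.cov(X, rowvar=False)            # sample covariance (p x p)
          w, V = np.linalg.eigh(S)               # spectral decomposition
          # bracket the spectrum with a band whose width ratio is kappa
          tau = np.sqrt(w.max() * max(w.min(), 1e-12) / kappa)
          w_reg = np.clip(w, tau, kappa * tau)   # enforce cond(Sigma) <= kappa
          return V @ np.diag(w_reg) @ V.T

      rng = np.random.default_rng(0)
      X = rng.standard_normal((20, 50))          # "large p, small n": n=20, p=50
      Sigma = clipped_covariance(X)
      w = np.linalg.eigvalsh(Sigma)
      print(w.max() / w.min())                   # <= kappa by construction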

  10. Compositional-prior-guided image reconstruction algorithm for multi-modality imaging

    Science.gov (United States)

    Fang, Qianqian; Moore, Richard H.; Kopans, Daniel B.; Boas, David A.

    2010-01-01

    The development of effective multi-modality imaging methods typically requires an efficient information fusion model, particularly when combining structural images with a complementary imaging modality that provides functional information. We propose a composition-based image segmentation method for X-ray digital breast tomosynthesis (DBT) and a structural-prior-guided image reconstruction for a combined DBT and diffuse optical tomography (DOT) breast imaging system. Using the 3D DBT images from 31 clinically measured healthy breasts, we create an empirical relationship between the X-ray intensities for adipose and fibroglandular tissue. We use this relationship to segment another 58 healthy breast DBT images from 29 subjects into compositional maps of different tissue types. For each breast, we build a weighted graph in the compositional space and construct a regularization matrix to incorporate the structural priors into a finite-element-based DOT image reconstruction. Use of the compositional priors enables us to fuse tissue anatomy into optical images with less restriction than when using a binary segmentation. This allows us to recover the image contrast captured by DOT but not by DBT. We show that it is possible to fine-tune the strength of the structural priors by changing a single regularization parameter. The optical properties for adipose and fibroglandular tissue estimated using the proposed algorithm are comparable or superior to those estimated with expert segmentations, but do not involve the time-consuming manual selection of regions of interest. PMID:21258460

  11. Shift versus no-shift in local regularization of Chern-Simons theory

    International Nuclear Information System (INIS)

    Giavarini, G.; Parma Univ.; Martin, C.P.; Ruiz Ruiz, F.

    1994-01-01

    We consider a family of local BRS-invariant higher covariant derivative regularizations of SU(N) Chern-Simons theory that do not shift the value of the Chern-Simons parameter k to k + sign(k) c v at one loop. (orig.)

  12. Evaluation of Parallel Level Sets and Bowsher's Method as Segmentation-Free Anatomical Priors for Time-of-Flight PET Reconstruction.

    Science.gov (United States)

    Schramm, Georg; Holler, Martin; Rezaei, Ahmadreza; Vunckx, Kathleen; Knoll, Florian; Bredies, Kristian; Boada, Fernando; Nuyts, Johan

    2018-02-01

    In this article, we evaluate Parallel Level Sets (PLS) and Bowsher's method as segmentation-free anatomical priors for regularized brain positron emission tomography (PET) reconstruction. We derive the proximity operators for two PLS priors and use the EM-TV algorithm in combination with the first-order primal-dual algorithm by Chambolle and Pock to solve the non-smooth optimization problem for PET reconstruction with PLS regularization. In addition, we compare the performance of two PLS versions against the symmetric and asymmetric Bowsher priors with quadratic and relative difference penalty functions. For this aim, we first evaluate reconstructions of 30 noise realizations of simulated PET data derived from a real hybrid positron emission tomography/magnetic resonance imaging (PET/MR) acquisition in terms of regional bias and noise. Second, we evaluate reconstructions of a real brain PET/MR data set acquired on a GE Signa time-of-flight PET/MR in a similar way. The reconstructions of simulated and real 3D PET/MR data show that all priors were superior to post-smoothed maximum likelihood expectation maximization with ordered subsets (OSEM) in terms of bias-noise characteristics in different regions of interest where the PET uptake follows anatomical boundaries. Our implementation of the asymmetric Bowsher prior showed slightly superior performance compared with the two versions of PLS and the symmetric Bowsher prior. At very high regularization weights, all investigated anatomical priors suffer from the transfer of non-shared gradients.

  13. Condition Number Regularized Covariance Estimation*

    Science.gov (United States)

    Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala

    2012-01-01

    Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p, small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required. PMID:23730197

  14. Coordinate-invariant regularization

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1987-01-01

    A general phase-space framework for coordinate-invariant regularization is given. The development is geometric, with all regularization contained in regularized DeWitt Superstructures on field deformations. Parallel development of invariant coordinate-space regularization is obtained by regularized functional integration of the momenta. As representative examples of the general formulation, the regularized general non-linear sigma model and regularized quantum gravity are discussed. copyright 1987 Academic Press, Inc

  15. Dimensional regularization in configuration space

    International Nuclear Information System (INIS)

    Bollini, C.G.; Giambiagi, J.J.

    1995-09-01

    Dimensional regularization is introduced in configuration space by Fourier transforming in D dimensions the perturbative momentum-space Green functions. For this transformation, Bochner's theorem is used; no extra parameters, such as those of Feynman or Bogoliubov-Shirkov, are needed for convolutions. The regularized causal functions in x-space have ν-dependent moderated singularities at the origin. They can be multiplied together and Fourier transformed (Bochner) without divergence problems. The usual ultraviolet divergences appear as poles of the resultant functions of ν. Several examples are discussed. (author). 9 refs

  16. Dispersion in cylindrical channels on the laminar flow at low Fourier numbers.

    Science.gov (United States)

    Kucza, Witold; Dąbrowa, Juliusz; Nawara, Katarzyna

    2015-06-30

    A numerical solution of the uniform dispersion model in cylindrical channels at low Fourier numbers is presented. The presented setup made it possible to eliminate experimental non-idealities interfering with the laminar flow. Double-humped responses measured in a flow-injection system with impedance detection agreed with those predicted by theory. Simulated concentration profiles as well as flow injection analysis (FIA) responses show the predictive and descriptive power of the numerical approach. The strong dependence of peak shapes on the Fourier number, at its low values, makes the approach suitable for the determination of diffusion coefficients. In this work, the uniform dispersion model coupled with the Levenberg-Marquardt optimization method allowed the determination of the salt diffusion coefficients of KCl, NaCl, KMnO₄ and CuSO₄ in water. The determined values, (1.83, 1.53, 1.57 and 0.90)×10⁻⁹ m² s⁻¹ respectively, agree well with the literature data. Copyright © 2015 Elsevier B.V. All rights reserved.
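
    As a rough illustration of the fitting step, the sketch below extracts a diffusion coefficient from a simulated flow-injection response by Levenberg-Marquardt fitting. For brevity it uses the classical Taylor-Aris limit of the dispersion model rather than the full numerical solution at low Fourier numbers used in the paper; the tube geometry, flow velocity and synthetic trace are assumptions for demonstration.

      import numpy as np
      from scipy.optimize import curve_fit

      R, L, u = 0.4e-3, 0.5, 5e-3            # tube radius (m), length (m), velocity (m/s)

      def response(t, D, amp):
          K = u**2 * R**2 / (48.0 * D) + D   # Taylor-Aris dispersion coefficient
          return amp / np.sqrt(4*np.pi*K*t) * np.exp(-(L - u*t)**2 / (4*K*t))

      t = np.linspace(60.0, 140.0, 200)      # seconds around the mean residence time
      true_D = 1.8e-9                        # ~KCl in water, m^2/s
      data = response(t, true_D, 1.0) + np.random.default_rng(1).normal(0, 0.02, t.size)

      popt, _ = curve_fit(response, t, data, p0=(1e-9, 1.0))  # LM is the default here
      print(f"D = {popt[0]:.2e} m^2/s")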

  17. Design of alluvial Egyptian irrigation canals using artificial neural networks method

    Directory of Open Access Journals (Sweden)

    Hassan Ibrahim Mohamed

    2013-06-01

    Full Text Available In the present study, the artificial neural networks method (ANNs) is used to estimate the main parameters used in the design of stable alluvial channels. The capability of ANN models to predict the dimensions of stable alluvial channels is investigated, where the flow rate and the sediment mean grain size are considered as input variables, and the wetted perimeter, hydraulic radius and water surface slope are considered as output variables. The ANN models used are based on a back-propagation algorithm to train a multi-layer feed-forward network (Levenberg-Marquardt algorithm). The proposed models were verified using 311 data sets of field data collected from 61 man-made canals and drains. Several statistical measures and graphical representations are used to check the accuracy of the models in comparison with previous empirical equations. The results of the developed ANN model proved that this technique is reliable in this field compared with previously developed methods.

  18. Prediction of paddy drying kinetics: A comparative study between mathematical and artificial neural network modelling

    Directory of Open Access Journals (Sweden)

    Beigi Mohsen

    2017-01-01

    Full Text Available The present study investigated deep-bed drying of rough rice kernels in various thin layers at different drying air temperatures and flow rates. A comparative study was performed between mathematical thin-layer models and artificial neural networks to estimate the drying curves of rough rice. The suitability of nine mathematical models in simulating the drying kinetics was examined, and the Midilli model was determined to be the best approach for describing the drying curves. Different feed-forward back-propagation artificial neural networks were examined to predict the moisture content variations of the grains. The ANN with 4-18-18-1 topology, a hyperbolic tangent sigmoid transfer function and a Levenberg-Marquardt back-propagation training algorithm provided the best results, with the maximum correlation coefficient and the minimum mean square error values. Furthermore, it was revealed that ANN modeling had better performance in the prediction of drying curves, with lower root mean square error values.
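
    For reference, the Midilli model MR(t) = a*exp(-k*t^n) + b*t can be fitted with a Levenberg-Marquardt solver in a few lines, as sketched below. The moisture-ratio points are synthetic illustrative values, not the rough-rice measurements of the study.

      import numpy as np
      from scipy.optimize import curve_fit

      def midilli(t, a, k, n, b):
          # Midilli thin-layer drying model for the moisture ratio MR(t)
          return a * np.exp(-k * t**n) + b * t

      t = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0, 6.0])   # drying time, h
      mr = np.array([1.0, 0.82, 0.66, 0.54, 0.44, 0.30, 0.21, 0.15, 0.11])

      popt, _ = curve_fit(midilli, t, mr, p0=(1.0, 0.4, 1.0, 0.0))  # LM by default
      rmse = np.sqrt(np.mean((midilli(t, *popt) - mr) ** 2))
      print(popt, rmse)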

  19. Artificial neural network - Genetic algorithm to optimize wheat germ fermentation condition: Application to the production of two anti-tumor benzoquinones.

    Science.gov (United States)

    Zheng, Zi-Yi; Guo, Xiao-Na; Zhu, Ke-Xue; Peng, Wei; Zhou, Hui-Ming

    2017-07-15

    Methoxy-ρ-benzoquinone (MBQ) and 2,6-dimethoxy-ρ-benzoquinone (DMBQ) are two potential anticancer compounds in fermented wheat germ. In the present study, the modeling and optimization of added macronutrients, microelements and vitamins for producing MBQ and DMBQ were investigated using an artificial neural network (ANN) combined with a genetic algorithm (GA). A 16-11-1 ANN configuration with the Levenberg-Marquardt training algorithm was applied to model the complicated nonlinear interactions among the 16 nutrients in the fermentation process. Under the guidance of the optimized scheme, the total content of MBQ and DMBQ was improved by 117% compared with that in the control group. Further, by evaluating the relative importance of each nutrient for the yield of the two benzoquinones, macronutrients and microelements were found to have a greater influence than most of the vitamins. It was also observed that a number of interactions between nutrients affected the yield of MBQ and DMBQ remarkably. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Maximum solid concentrations of coal water slurries predicted by neural network models

    Energy Technology Data Exchange (ETDEWEB)

    Cheng, Jun; Li, Yanchang; Zhou, Junhu; Liu, Jianzhong; Cen, Kefa

    2010-12-15

    The nonlinear back-propagation (BP) neural network models were developed to predict the maximum solid concentration of coal water slurry (CWS) which is a substitute for oil fuel, based on physicochemical properties of 37 typical Chinese coals. The Levenberg-Marquardt algorithm was used to train five BP neural network models with different input factors. The data pretreatment method, learning rate and hidden neuron number were optimized by training models. It is found that the Hardgrove grindability index (HGI), moisture and coalification degree of parent coal are 3 indispensable factors for the prediction of CWS maximum solid concentration. Each BP neural network model gives a more accurate prediction result than the traditional polynomial regression equation. The BP neural network model with 3 input factors of HGI, moisture and oxygen/carbon ratio gives the smallest mean absolute error of 0.40%, which is much lower than that of 1.15% given by the traditional polynomial regression equation. (author)

  1. Analytical fits to the synchrotron functions

    Science.gov (United States)

    Fouka, Mourad; Ouichaoui, Saad

    2013-06-01

    Accurate fitting formulae for the synchrotron function, F(x), and its complementary function, G(x), are derived and presented. The corresponding relative errors are less than 0.26% and 0.035% for F(x) and G(x), respectively. To this end we have first fitted the modified Bessel functions, K5/3(x) and K2/3(x). For all the fitted functions, the general fit expression is the same, and is based on the well-known asymptotic forms for low and large values of x for each function. It consists of multiplying each asymptotic form by a function that tends to unity or zero for low and large values of x. Simple formulae depending on adjustable parameters are suggested in this paper. The parameters have been determined by adopting the Levenberg-Marquardt algorithm. The proposed formulae should be of great utility and simplicity for computing spectral powers and the degree of polarization of synchrotron radiation, both for laboratory and astrophysical applications.
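
    The approach can be sketched as follows: tabulate F(x) = x * integral from x to infinity of K_{5/3}(t) dt numerically, then LM-fit a blend of its known asymptotic forms. The crude two-parameter blend used here is an assumption for demonstration and is far looser than the sub-percent formulae derived in the paper.

      import numpy as np
      from scipy.special import kv, gamma
      from scipy.integrate import quad
      from scipy.optimize import curve_fit

      def F_exact(x):
          return x * quad(lambda t: kv(5.0/3.0, t), x, np.inf)[0]

      def F_low(x):    # small-x asymptote of the synchrotron function
          return 4*np.pi / (np.sqrt(3)*gamma(1.0/3.0)) * (x/2.0)**(1.0/3.0)

      def F_high(x):   # large-x asymptote
          return np.sqrt(np.pi*x/2.0) * np.exp(-x)

      def blend(x, a, b):
          # multiply each asymptote by a factor tending to 1 or 0, as in the paper
          return F_low(x)*np.exp(-a*x) + F_high(x)*(1.0 - np.exp(-b*x))

      x = np.logspace(-3, 1, 60)
      F = np.array([F_exact(xi) for xi in x])
      popt, _ = curve_fit(blend, x, F, p0=(1.0, 1.0))
      print(popt, np.max(np.abs(blend(x, *popt)/F - 1.0)))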

  2. ECG Based Heart Arrhythmia Detection Using Wavelet Coherence and Bat Algorithm

    Science.gov (United States)

    Kora, Padmavathi; Sri Rama Krishna, K.

    2016-12-01

    Atrial fibrillation (AF) is a type of heart abnormality; during AF the electrical discharges in the atrium are rapid, resulting in an abnormal heart beat. The morphology of the ECG changes due to the abnormalities in the heart. This paper consists of three major steps for the detection of heart diseases: signal pre-processing, feature extraction and classification. Feature extraction is the key process in detecting heart abnormality. Most ECG detection systems depend on time-domain features for cardiac signal classification. In this paper we propose a wavelet coherence (WTC) technique for ECG signal analysis. The WTC calculates the similarity between two waveforms in the frequency domain. Parameters extracted from the WTC function are used as the features of the ECG signal. These features are optimized using the Bat algorithm. The Levenberg-Marquardt neural network classifier is used to classify the optimized features. The performance of the classifier can be improved with the optimized features.

  3. Performance Evaluations for Super-Resolution Mosaicing on UAS Surveillance Videos

    Directory of Open Access Journals (Sweden)

    Aldo Camargo

    2013-05-01

    Full Text Available Unmanned Aircraft Systems (UAS) have been widely applied for reconnaissance and surveillance by exploiting information collected from the digital imaging payload. The super-resolution (SR) mosaicing of low-resolution (LR) UAS surveillance video frames has become a critical requirement for UAS video processing and is important for further effective image understanding. In this paper we develop a novel super-resolution framework which does not require the construction of sparse matrices. The proposed method implements image operations in the spatial domain and applies an iterated back-projection to construct super-resolution mosaics from the overlapping UAS surveillance video frames. The Steepest Descent method, the Conjugate Gradient method and the Levenberg-Marquardt algorithm are used to numerically solve the nonlinear optimization problem for estimating a super-resolution mosaic. A quantitative performance comparison in terms of computation time and visual quality of the super-resolution mosaics obtained through the three numerical techniques is presented.

  4. Computing Air Demand Using the Takagi–Sugeno Model for Dam Outlets

    Directory of Open Access Journals (Sweden)

    Mohammad Zounemat-Kermani

    2013-09-01

    Full Text Available An adaptive neuro-fuzzy inference system (ANFIS) was developed using the subtractive clustering technique to study the air demand in low-level outlet works. The ANFIS model was employed to calculate the vent air discharge at different gate openings for an embankment dam. A hybrid learning algorithm obtained by combining back-propagation and least-squares estimation was adopted to identify the linear and non-linear parameters in the ANFIS model. Empirical relationships based on the experimental information obtained from physical models were applied to 108 experimental data points to obtain more reliable evaluations. The feed-forward Levenberg-Marquardt neural network (LMNN) and multiple linear regression (MLR) models were also built using the same data to compare model performances with each other. The results indicated that the fuzzy rule-based model performed better than the LMNN and MLR models in terms of the established simulation performance criteria: the root mean square error, the Nash-Sutcliffe efficiency, the correlation coefficient and the bias.

  5. Mathematical Modelling and Optimization of Cutting Force, Tool Wear and Surface Roughness by Using Artificial Neural Network and Response Surface Methodology in Milling of Ti-6242S

    Directory of Open Access Journals (Sweden)

    Erol Kilickap

    2017-10-01

    Full Text Available In this paper, an experimental study was conducted to determine the effect of different cutting parameters, such as cutting speed, feed rate and depth of cut, on cutting force, surface roughness and tool wear in the milling of Ti-6242S alloy using cemented carbide (WC) end mills with a 10 mm diameter. Data obtained from the experiments were modeled with both an Artificial Neural Network (ANN) and Response Surface Methodology (RSM). The ANN was trained with the Levenberg-Marquardt (LM) algorithm. The mathematical models in RSM were created by applying a Box-Behnken design. Values obtained from the ANN and the RSM were found to be very close to the data obtained from the experimental studies. The lowest cutting force and surface roughness were obtained at high cutting speeds and low feed rate and depth of cut. The minimum tool wear was obtained at low cutting speed, feed rate and depth of cut.

  6. Artificial Neural Network Model for Monitoring Oil Film Regime in Spur Gear Based on Acoustic Emission Data

    Directory of Open Access Journals (Sweden)

    Yasir Hassan Ali

    2015-01-01

    Full Text Available The thickness of an oil film lubricant can contribute to less gear tooth wear and surface failure. The purpose of this research is to use artificial neural network (ANN) computational modelling to correlate spur gear data from acoustic emissions, lubricant temperature and specific film thickness (λ). The approach uses an algorithm to monitor the oil film thickness and to detect in which lubrication regime the gearbox is running: hydrodynamic, elastohydrodynamic or boundary. This monitoring can aid the identification of fault development. Feed-forward and recurrent Elman neural network algorithms were used to develop the ANN models, which were subjected to a training, testing and validation process. The Levenberg-Marquardt back-propagation algorithm was applied to reduce errors. Log-sigmoid and Purelin were identified as suitable transfer functions for the hidden and output nodes. The methods used in this paper show accurate predictions from the ANN, and the performance of the feed-forward network is superior to that of the Elman neural network.

  7. Application of Artificial Neural Networks in Canola Crop Yield Prediction

    Directory of Open Access Journals (Sweden)

    S. J. Sajadi

    2014-02-01

    Full Text Available Crop yield prediction has an important role in agricultural policies such as the specification of the crop price. Crop yield prediction research has been based on regression analysis. In this research, canola yield was predicted using Artificial Neural Networks (ANN) with 11 crop years of climate data (1998-2009) in the Gonbad-e-Kavoos region of Golestan province. The ANN inputs were mean weekly rainfall, mean weekly temperature, mean weekly relative humidity and mean weekly sunshine hours, and the ANN output was canola yield (kg/ha). Multi-Layer Perceptron (MLP) networks with the Levenberg-Marquardt back-propagation learning algorithm were used for crop yield prediction, and the Root Mean Square Error (RMSE) and the square of the correlation coefficient (R²) criteria were used to evaluate the performance of the ANN. The obtained results show that the 13-20-1 network has the lowest RMSE, equal to 101.235, and the maximum value of R², equal to 0.997, and is suitable for predicting canola yield from climate factors.

  8. Estimation of internal heat transfer coefficients and detection of rib positions in gas turbine blades from transient surface temperature measurements

    International Nuclear Information System (INIS)

    Heidrich, P; Wolfersdorf, J v; Schmidt, S; Schnieder, M

    2008-01-01

    This paper describes a non-invasive, non-destructive, transient inverse measurement technique that allows one to determine the internal heat transfer coefficients and rib positions of real gas turbine blades from outer surface temperature measurements after a sudden flow heating. The determination of internal heat transfer coefficients is important during the design process in order to adjust the local heat transfer to the spatial thermal load. The detection of rib positions is important during production in order to fulfill design and quality requirements. For the analysis, the one-dimensional transient heat transfer problem inside the turbine blade's wall was solved. This solution was combined with the Levenberg-Marquardt method to estimate the unknown boundary condition by an inverse technique. The method was tested with artificial data to determine uncertainties, with positive results. Then experimental testing with a reference model was carried out. Based on the results, it is concluded that the presented inverse technique can be used to determine internal heat transfer coefficients and to detect rib positions of real turbine blades.

  9. ALFITeX. A new code for the deconvolution of complex alpha-particle spectra

    International Nuclear Information System (INIS)

    Caro Marroyo, B.; Martin Sanchez, A.; Jurado Vargas, M.

    2013-01-01

    A new code for the deconvolution of complex alpha-particle spectra has been developed. The ALFITeX code is written in Visual Basic for Microsoft Office Excel 2010 spreadsheets, incorporating several features aimed at making it a fast, robust and useful tool with a user-friendly interface. The deconvolution procedure is based on the Levenberg-Marquardt algorithm, with the curve fitting the experimental data being the mathematical function formed by the convolution of a Gaussian with two left-handed exponentials in the low-energy-tail region. The code also includes the capability of fitting a possible constant background contribution. The application of the singular value decomposition method for matrix inversion permits the fit of any kind of alpha-particle spectrum, even those presenting singularities or an ill-conditioned curvature matrix. ALFITeX has been checked by its application to the deconvolution and the calculation of the alpha-particle emission probabilities of ²³⁹Pu, ²⁴¹Am and ²³⁵U. (author)

  10. New Hybrid Algorithms for Estimating Tree Stem Diameters at Breast Height Using a Two Dimensional Terrestrial Laser Scanner

    Directory of Open Access Journals (Sweden)

    Jianlei Kong

    2015-07-01

    Full Text Available In this paper, a new algorithm to improve the accuracy of estimating the diameter at breast height (DBH) of tree trunks in forest areas is proposed. First, the information is collected by a two-dimensional terrestrial laser scanner (2DTLS), which emits laser pulses to generate a point cloud. After extraction and filtration, the laser point clusters of the trunks are obtained and optimized by an arithmetic-means method. Then, an algebraic circle fitting algorithm in polar form is non-linearly optimized by the Levenberg-Marquardt method to form a new hybrid algorithm, which is used to acquire the diameters and positions of the trees. Compared with previous works, the proposed method improves the accuracy of tree diameter estimation significantly and effectively reduces the calculation time. Moreover, the experimental results indicate that this method is stable and suitable for the most challenging conditions, which has practical significance in improving the operating efficiency of forest harvesters and reducing the risk of accidents.
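
    The two-stage idea can be sketched as follows: an algebraic circle fit supplies a closed-form initial estimate, which Levenberg-Marquardt then refines against geometric distances. The Kasa algebraic fit in Cartesian form and the synthetic arc of trunk points are stand-in assumptions for the paper's polar-form formulation and real scanner clusters.

      import numpy as np
      from scipy.optimize import least_squares

      rng = np.random.default_rng(2)
      theta = np.linspace(0.3, 2.2, 40)              # partial arc, as a scanner sees
      pts = np.c_[1.5 + 0.18*np.cos(theta), 4.0 + 0.18*np.sin(theta)]
      pts += rng.normal(0, 0.004, pts.shape)         # ranging noise

      # stage 1: Kasa fit -- solve x^2 + y^2 + A*x + B*y + C = 0 linearly
      M = np.c_[pts, np.ones(len(pts))]
      b = -(pts**2).sum(axis=1)
      (A, B, C), *_ = np.linalg.lstsq(M, b, rcond=None)
      cx, cy = -A/2.0, -B/2.0
      r0 = np.sqrt(cx**2 + cy**2 - C)

      # stage 2: Levenberg-Marquardt refinement of geometric distances
      def resid(p):
          return np.hypot(pts[:, 0] - p[0], pts[:, 1] - p[1]) - p[2]

      fit = least_squares(resid, x0=[cx, cy, r0], method='lm')
      print("center:", fit.x[:2], "DBH:", 2*fit.x[2])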

  11. Applications of Monte Carlo method to nonlinear regression of rheological data

    Science.gov (United States)

    Kim, Sangmo; Lee, Junghaeng; Kim, Sihyun; Cho, Kwang Soo

    2018-02-01

    In rheological studies, it is often necessary to determine the parameters of rheological models from experimental data. Since both the rheological data and the parameter values vary on a logarithmic scale, and the number of parameters is quite large, conventional methods of nonlinear regression such as the Levenberg-Marquardt (LM) method are usually ineffective. Gradient-based methods such as LM are apt to be caught in local minima, which give unphysical values of the parameters whenever the initial guess is far from the global optimum. Although this problem can be solved by simulated annealing (SA), this Monte Carlo (MC) method needs an adjustable parameter that must be determined in an ad hoc manner. We suggest a simplified version of SA, a kind of MC method, which yields effective values of the parameters of the most complicated rheological models, such as the Carreau-Yasuda model of steady shear viscosity, the discrete relaxation spectrum, and the zero-shear viscosity as a function of concentration and molecular weight.
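
    In the spirit of the suggested simplified annealing, the sketch below runs a Metropolis-style random walk over log-scaled parameters of the Carreau-Yasuda model, minimizing a log-scale squared error. The step size, cooling schedule and synthetic data are assumptions for demonstration, not the authors' scheme.

      import numpy as np

      def carreau_yasuda(gd, eta0, eta_inf, lam, a, n):
          return eta_inf + (eta0 - eta_inf) * (1.0 + (lam*gd)**a) ** ((n - 1.0)/a)

      rng = np.random.default_rng(3)
      gd = np.logspace(-2, 3, 30)                    # shear rates, 1/s
      eta = carreau_yasuda(gd, 1e4, 1.0, 5.0, 2.0, 0.4) \
            * np.exp(rng.normal(0, 0.03, gd.size))   # synthetic noisy viscosities

      def cost(p):
          # first four parameters walk in log scale; the exponent n stays linear
          eta0, eta_inf, lam, a = np.exp(p[:4])
          model = carreau_yasuda(gd, eta0, eta_inf, lam, a, p[4])
          return np.sum((np.log(model) - np.log(eta))**2)

      cur = np.array([np.log(1e3), np.log(0.5), 0.0, np.log(1.5), 0.5])
      fcur = cost(cur)
      best, fbest, T = cur.copy(), fcur, 1.0
      for _ in range(20000):
          cand = cur + rng.normal(0, 0.05, 5)
          fc = cost(cand)
          # Metropolis acceptance: always downhill, sometimes uphill
          if np.isfinite(fc) and (fc < fcur or rng.random() < np.exp((fcur - fc)/T)):
              cur, fcur = cand, fc
              if fcur < fbest:
                  best, fbest = cur.copy(), fcur
          T *= 0.9997                                # slow cooling
      print(np.exp(best[:4]), best[4], fbest)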

  12. A simplified modeling of mechanical cooling tower for control and optimization of HVAC systems

    International Nuclear Information System (INIS)

    Jin, Guang-Yu; Cai, Wen-Jian; Lu Lu; Lee, Eng Lock; Chiang, Andrew

    2007-01-01

    This paper proposes a new, simple, yet accurate mechanical cooling tower model for the purposes of energy conservation and management. On the basis of Merkel's theory and the effectiveness-NTU method, the model is developed from an energy balance and heat and mass transfer analysis. Commissioning information is then used to identify only three model parameters by the Levenberg-Marquardt method. Compared with existing models, the proposed model has simple characteristic parameters to be determined and requires no iterative computation when the operating point changes. The model is validated with real operating data from the cooling towers of the heating, ventilating and air conditioning (HVAC) system of a commercial hotel. The testing results show that the performance of the cooling tower varies from time to time due to different operating conditions, and that the proposed model is able to reflect these changes by tuning its parameters. With this feature, the proposed model can be simply used to accurately predict the performance of a cooling tower operating in real time.

  13. Metaheuristic and Machine Learning Models for TFE-731-2, PW4056, and JT8D-9 Cruise Thrust

    Science.gov (United States)

    Baklacioglu, Tolga

    2017-08-01

    The requirement for an accurate engine thrust model is of major importance in airline fuel-saving programs, assessment of the environmental effects of fuel consumption, emissions-reduction studies, and air traffic management applications. In this study, utilizing engine manufacturers' real data, a metaheuristic model based on genetic algorithms (GAs) and a machine learning model based on neural networks (NNs) trained with the Levenberg-Marquardt (LM), delta-bar-delta (DBD) and conjugate gradient (CG) algorithms were developed to incorporate the effect of both flight altitude and Mach number in the estimation of thrust. For the GA model, the impact of population size on the model's accuracy and the effect of the number of data on the model coefficients were also analyzed. For the NN model, the optimum topology was sought for one- and two-hidden-layer networks. The predicted thrust values presented close agreement with the real thrust data for both models, among which the LM-trained NNs gave the best accuracies.

  14. Solution of axisymmetric transient inverse heat conduction problems using parameter estimation and multi block methods

    International Nuclear Information System (INIS)

    Azimi, A.; Hannani, S.K.; Farhanieh, B.

    2005-01-01

    In this article, a comparison between two iterative inverse techniques for simultaneously solving for two unknown functions in axisymmetric transient inverse heat conduction problems in semi-complex geometries is presented. A multi-block structured grid together with blocked-interface nodes is implemented for the geometric decomposition of the physical domain. The numerical scheme for the solution of the transient heat conduction equation is the finite element method, with a frontal technique to solve the algebraic system of discrete equations. The inverse heat conduction problem involves the simultaneous estimation of an unknown time-varying heat generation and a time- and space-varying boundary condition. Two parameter-estimation techniques are considered: the Levenberg-Marquardt scheme and the conjugate gradient method with an adjoint problem. Numerically computed exact and noisy data are used as the measured transient temperature data needed in the inverse solution. The results of the present study for a configuration including two joined disks with different heights are compared to those of the exact heat source and temperature boundary condition, and show good agreement. (author)

  15. Short-term electricity prices forecasting in a competitive market: A neural network approach

    International Nuclear Information System (INIS)

    Catalao, J.P.S.; Mariano, S.J.P.S.; Mendes, V.M.F.; Ferreira, L.A.F.M.

    2007-01-01

    This paper proposes a neural network approach for forecasting short-term electricity prices. Almost until the end of the last century, electricity supply was considered a public service, and any price forecasting that was undertaken tended to be over the longer term, concerning future fuel prices and technical improvements. Nowadays, short-term forecasts have become increasingly important since the rise of the competitive electricity markets. In this new competitive framework, short-term price forecasting is required by producers and consumers to derive their bidding strategies in the electricity market. Accurate forecasting tools are essential for producers to maximize their profits, avoiding profit losses from the misjudgement of future price movements, and for consumers to maximize their utilities. A three-layered feedforward neural network, trained by the Levenberg-Marquardt algorithm, is used for forecasting next-week electricity prices. We evaluate the accuracy of the price forecasting attained with the proposed neural network approach, reporting results from the electricity markets of mainland Spain and California. (author)

  16. Decomposing the permeability spectra of nanocrystalline finemet core

    Science.gov (United States)

    Varga, Lajos K.; Kovac, Jozef

    2018-04-01

    In this paper we present a theoretical and experimental investigation of the magnetization contributions to the permeability spectra of a normal-annealed Finemet core with a round-type hysteresis curve. The real and imaginary parts of the permeability were determined as a function of the exciting magnetic field (HAC) between 40 Hz and 110 MHz using an Agilent 4294A Precision Impedance Analyzer. The amplitude of the exciting field was below and around the coercive field of the sample. The spectra were decomposed, using the Levenberg-Marquardt algorithm running under Origin 9 software, into four contributions: i) eddy current; ii) Debye relaxation of magnetization rotation; iii) Debye relaxation of damped domain wall motion; and iv) resonant-type DW motion. For small exciting amplitudes the first two components dominate. The last two contributions, connected to the DW, appear only for relatively large HAC, around the coercive force. All the contributions will be discussed in detail, accentuating the role of the eddy current, which is not negligible even for the smallest applied exciting field.
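
    A minimal sketch of such a decomposition, assuming a model of two Debye relaxations plus one resonant term (the eddy-current contribution is omitted for brevity): real and imaginary residuals are stacked so a Levenberg-Marquardt solver can fit the complex spectrum. The synthetic spectrum and starting values are illustrative, not the Finemet measurements.

      import numpy as np
      from scipy.optimize import least_squares

      def mu_model(f, chi1, f1, chi2, f2, chi3, f0, beta):
          debye1 = chi1 / (1.0 + 1j*f/f1)                   # magnetization rotation
          debye2 = chi2 / (1.0 + 1j*f/f2)                   # damped DW relaxation
          reson = chi3 / (1.0 - (f/f0)**2 + 1j*beta*f/f0)   # resonant DW motion
          return 1.0 + debye1 + debye2 + reson

      f = np.logspace(3, 8, 120)                            # 1 kHz .. 100 MHz
      true = (4e4, 2e5, 2e4, 2e6, 1e4, 1e7, 0.8)
      data = mu_model(f, *true) * (1 + np.random.default_rng(4).normal(0, 0.01, f.size))

      def resid(p):
          m = mu_model(f, *p)
          # stack real and imaginary parts, weighted by the spectrum magnitude
          return np.r_[m.real - data.real, m.imag - data.imag] / np.abs(np.r_[data, data])

      fit = least_squares(resid, x0=(1e4, 1e5, 1e4, 1e6, 1e3, 5e6, 1.0), method='lm')
      print(fit.x)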

  17. The thermoluminescence glow-curve analysis using GlowFit - the new powerful tool for deconvolution

    International Nuclear Information System (INIS)

    Puchalska, M.; Bilski, P.

    2005-10-01

    A new computer program, GlowFit, for deconvoluting first-order kinetics thermoluminescence (TL) glow-curves has been developed. A non-linear function describing a single glow-peak is fitted to the experimental points using the Levenberg-Marquardt least-squares method. The main advantage of GlowFit is its ability to resolve complex TL glow-curves consisting of strongly overlapping peaks, such as those observed in heavily doped LiF:Mg,Ti (MTT) detectors. This resolution is achieved mainly by setting constraints or by fixing selected parameters. The initial values of the fitted parameters are placed in so-called pattern files. GlowFit is a user-friendly Microsoft Windows program. Its graphic interface enables easy, intuitive manipulation of glow-peaks at the initial stage (parameter initialization) and at the final stage (manual adjustment) of fitting the peak parameters to the glow-curves. The program is freely downloadable from the web site www.ifj.edu.pl/NPP/deconvolution.htm (author)
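
    The underlying fitting task can be sketched with the widely used Kitis first-order glow-peak expression I(T; Im, E, Tm), fitted here to two overlapping peaks by Levenberg-Marquardt least squares. The synthetic glow curve and starting values are illustrative assumptions, not GlowFit's pattern files.

      import numpy as np
      from scipy.optimize import least_squares

      k = 8.617e-5                                    # Boltzmann constant, eV/K

      def peak(T, Im, E, Tm):
          # Kitis et al. analytical approximation of a first-order glow peak
          d = E/(k*T) * (T - Tm)/Tm
          return Im * np.exp(1.0 + d - (T/Tm)**2 * np.exp(d) * (1.0 - 2*k*T/E)
                             - 2*k*Tm/E)

      T = np.linspace(350.0, 550.0, 300)              # temperature, K
      truth = peak(T, 1.0, 1.3, 430.0) + peak(T, 0.6, 1.5, 480.0)
      data = truth + np.random.default_rng(5).normal(0, 0.005, T.size)

      def resid(p):                                   # two strongly overlapping peaks
          return peak(T, *p[:3]) + peak(T, *p[3:]) - data

      fit = least_squares(resid, x0=[0.8, 1.2, 425.0, 0.5, 1.4, 475.0], method='lm')
      print(fit.x.reshape(2, 3))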

  18. Nonlinear Schrödinger approach to European option pricing

    Science.gov (United States)

    Wróblewski, Marcin

    2017-05-01

    This paper deals with numerical option pricing methods based on a Schrödinger model rather than the Black-Scholes model. Nonlinear Schrödinger boundary value problems seem to be alternatives to linear models which better reflect the complexity and behavior of real markets. Therefore, based on the nonlinear Schrödinger option pricing model proposed in the literature, a model augmented by external atomic potentials is proposed and numerically tested in this paper. In terms of statistical physics, the developed model describes the option in analogy to a pair of two identical quantum particles occupying the same state. The proposed model is used to price European call options on a stock index. The model is calibrated using the Levenberg-Marquardt algorithm based on market data. A Runge-Kutta method is used to solve the discretized boundary value problem numerically. Numerical results are provided and discussed. It seems that our proposal models the phenomena observed in the real market more accurately than linear models do.

  19. Image degradation characteristics and restoration based on regularization for diffractive imaging

    Science.gov (United States)

    Zhi, Xiyang; Jiang, Shikai; Zhang, Wei; Wang, Dawei; Li, Yun

    2017-11-01

    The diffractive membrane optical imaging system is an important development trend for ultra-large-aperture and lightweight space cameras. However, the physics-based degradation characteristics of diffractive imaging and the corresponding image restoration methods have received comparatively little study. In this paper, the image quality degradation model for the diffractive imaging system is first deduced mathematically from diffraction theory, and the degradation characteristics are analyzed. On this basis, a novel regularization model of image restoration that contains multiple prior constraints is established. After that, a solution approach for the equation, in which multiple norms coexist with multiple regularization (prior) parameters, is presented. Subsequently, a space-variant-PSF image restoration method for large-aperture diffractive imaging systems is proposed, combining the block idea of isoplanatic regions. Experimentally, the proposed algorithm demonstrates its capacity to achieve multi-objective improvement, including MTF enhancement, dispersion correction, noise and artifact suppression, and preservation of image detail, and produces satisfactory visual quality. This can provide a scientific basis for applications and has potential application prospects in future space deployments of diffractive membrane imaging technology.

  20. Regular periodical public disclosure obligations of public companies

    Directory of Open Access Journals (Sweden)

    Marjanski Vladimir

    2011-01-01

    Full Text Available Public companies, in their capacity as capital market participants, have the obligation to inform the public about their legal and financial status, their general business operations, and the issuance of securities and other financial instruments. Such obligations may be divided into two groups: the first group consists of regular periodical public disclosures, such as the publication of financial reports (annual, semi-annual and quarterly) and the management's reports on the public company's business operations. The second group comprises the obligation of occasional (ad hoc) public disclosure. The thesis analyses the obligation of public companies to inform the public in the course of their regular reporting. The new Capital Market Law, based on two EU Directives (the Transparency Directive and the Directive on Public Disclosure of Inside Information and the Definition of Market Manipulation), regulates this obligation of public companies in substantially more detail than the prior Law on the Market of Securities and Other Financial Instruments (hereinafter: ZTHV). For this reason, the ZTHV's provisions are compared to the new solutions of the Capital Market Law within the domain of regular periodical disclosure.

  1. Robust regularized least-squares beamforming approach to signal estimation

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2017-05-12

    In this paper, we address the problem of robust adaptive beamforming of signals received by a linear array. The challenge associated with the beamforming problem is twofold. Firstly, the process requires the inversion of the usually ill-conditioned covariance matrix of the received signals. Secondly, the steering vector pertaining to the direction of arrival of the signal of interest is not known precisely. To tackle these two challenges, the standard Capon beamformer is manipulated into a form where the beamformer output is obtained as a scaled version of the inner product of two vectors. The two vectors are linearly related to the steering vector and the received signal snapshot, respectively; the linear operator, in both cases, is the square root of the covariance matrix. A regularized least-squares (RLS) approach is proposed to estimate these two vectors and to provide robustness without exploiting prior information. Simulation results show that the RLS beamformer using the proposed regularization algorithm outperforms state-of-the-art beamforming algorithms, as well as other RLS beamformers that use standard regularization approaches.
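
    The paper's regularization is derived from an RLS criterion rather than a fixed loading level; the sketch below shows only the familiar diagonal-loading form of a regularized Capon beamformer for a uniform linear array, with a hand-picked loading factor, as rough background for the problem being solved.

        import numpy as np

        def capon_weights(R, steering, loading=1e-1):
            # Diagonal loading regularizes the ill-conditioned covariance inverse.
            n = R.shape[0]
            Rl = R + loading * (np.trace(R).real / n) * np.eye(n)
            Rinv_a = np.linalg.solve(Rl, steering)
            return Rinv_a / (steering.conj() @ Rinv_a)

        n, snapshots = 8, 100
        rng = np.random.default_rng(1)
        X = rng.standard_normal((n, snapshots)) + 1j * rng.standard_normal((n, snapshots))
        R = X @ X.conj().T / snapshots  # sample covariance of the received snapshots
        a = np.exp(1j * np.pi * np.arange(n) * np.sin(np.deg2rad(20.0)))  # ULA steering
        w = capon_weights(R, a)
        y = w.conj() @ X  # beamformer output, one value per snapshot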

  2. Diffusivity, mass, moisture, volume and solids in osmotically dehydrated yacón (Smallantus sonchifolius)

    Directory of Open Access Journals (Sweden)

    Julio Rojas Naccha

    2012-01-01

    Full Text Available The predictive capacity of an artificial neural network (ANN) was evaluated for the effect of the concentration (30, 40, 50 and 60% w/w) and temperature (30, 40 and 50°C) of the fructooligosaccharide (FOS) solution on the mass, moisture, volume and solids of osmodehydrated yacón cubes, and on the effective mean water diffusivity coefficient, with and without shrinkage. A feedforward ANN was applied with the backpropagation training algorithm and Levenberg-Marquardt weight adjustment, using the following topology: goal error of 10^-5, learning rate of 0.01, momentum coefficient of 0.5, 2 input neurons, 6 output neurons, one hidden layer with 18 neurons, 15 training stages, and logsig-purelin transfer functions. The overall average error of the ANN was 3.44% and the correlation coefficients were greater than 0.9. No significant differences were found between the experimental values and the values predicted by the ANN, nor with the values predicted by a second-order polynomial regression model (p > 0.95). Keywords: artificial neural network (ANN), effective diffusivity, yacón, osmotic dehydration

  3. A new approach using artificial neural networks for determination of the thermodynamic properties of fluid couples

    International Nuclear Information System (INIS)

    Sencan, Arzu; Kalogirou, Soteris A.

    2005-01-01

    This paper presents a new approach using artificial neural networks (ANN) to determine the thermodynamic properties of two alternative refrigerant/absorbent couples (LiCl-H2O and LiBr + LiNO3 + LiI + LiCl-H2O). These pairs can be used in absorption heat pump systems, and their main advantage is that they do not cause ozone depletion. In order to train the network, limited experimental measurements were used as training and test data. Two feedforward ANNs were trained, one for each pair, using the Levenberg-Marquardt algorithm. The training and validation were performed with good accuracy. The correlation coefficient obtained when unknown data were applied to the networks was 0.9997 and 0.9987 for the two pairs, respectively, which is very satisfactory. The present methodology proved to be much better than linear multiple regression analysis. Using the weights obtained from the trained network, a new formulation is presented for determination of the vapor pressures of the two refrigerant/absorbent couples. This new formulation, which can be employed with any programming language or spreadsheet program to estimate the vapor pressures of fluid couples as described in this paper, may make the use of dedicated ANN software unnecessary.
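
    The closed-form formulation described above amounts to writing the trained network out algebraically. A hedged sketch for a one-hidden-layer network with logistic-sigmoid (logsig) hidden units follows; the weight values and the 2-input/3-hidden/1-output shape are made up for illustration, not the paper's fitted parameters.

        import numpy as np

        # Hypothetical trained parameters (placeholders, not the published weights).
        W1 = np.array([[0.8, -1.2], [0.5, 0.3], [-0.7, 0.9]])
        b1 = np.array([0.1, -0.2, 0.05])
        W2 = np.array([1.5, -0.6, 0.9])
        b2 = 0.02

        def vapor_pressure(temperature, concentration):
            # Explicit formula P = W2 . logsig(W1 x + b1) + b2, reproducible in any
            # spreadsheet or programming language without dedicated ANN software.
            x = np.array([temperature, concentration])
            hidden = 1.0 / (1.0 + np.exp(-(W1 @ x + b1)))  # logsig activation
            return W2 @ hidden + b2

        print(vapor_pressure(40.0, 0.55))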

  4. Limited angle CT reconstruction by simultaneous spatial and Radon domain regularization based on TV and data-driven tight frame

    Science.gov (United States)

    Zhang, Wenkun; Zhang, Hanming; Wang, Linyuan; Cai, Ailong; Li, Lei; Yan, Bin

    2018-02-01

    Limited angle computed tomography (CT) reconstruction is widely performed in medical diagnosis and industrial testing because of object size, engine/armor inspection requirements, and limited scan flexibility. Limited angle reconstruction necessitates the use of optimization-based methods that utilize additional sparse priors. However, most conventional methods exploit only the sparsity priors of the spatial domain. When CT projections suffer from serious data deficiency or various noises, obtaining reconstructed images of acceptable quality becomes difficult and challenging. To solve this problem, this paper develops an adaptive reconstruction method for the limited angle CT problem. The proposed method simultaneously uses a spatial and Radon domain regularization model based on total variation (TV) and a data-driven tight frame. The data-driven tight frame, derived from the wavelet transform, aims at exploiting the sparsity priors of the sinogram in the Radon domain. Unlike existing works that utilize a pre-constructed sparse transformation, the framelets of the data-driven regularization model can be adaptively learned from the latest projection data during iterative reconstruction to provide optimal sparse approximations for a given sinogram. At the same time, an effective alternating direction method is designed to solve the simultaneous spatial and Radon domain regularization model. Experiments on both simulated and real data demonstrate that the proposed algorithm performs better in artifact suppression and detail preservation than algorithms using only a spatial-domain regularization model. Quantitative evaluations of the results also indicate that the proposed algorithm, with its learning strategy, performs better than dual-domain algorithms without a learned regularization model.

  5. Optical characterization of two-layered turbid media for non-invasive, absolute oximetry in cerebral and extracerebral tissue.

    Directory of Open Access Journals (Sweden)

    Bertan Hallacoglu

    Full Text Available We introduce a multi-distance, frequency-domain, near-infrared spectroscopy (NIRS) method to measure the optical coefficients of two-layered media and the thickness of the top layer from diffuse reflectance measurements. This method features a direct solution based on diffusion theory and an inversion procedure based on the Levenberg-Marquardt algorithm. We have validated our method through Monte Carlo simulations, experiments on tissue-like phantoms, and measurements on the forehead of three human subjects. The Monte Carlo simulations and phantom measurements have shown that, in ideal two-layered samples, our method accurately recovers the top layer thickness (L), the absorption coefficient (µa), and the reduced scattering coefficient (µ's) of both layers, with deviations that are typically less than 10% for all parameters. Our method is aimed at absolute measurements of hemoglobin concentration and saturation in cerebral and extracerebral tissue of adult human subjects, where the top layer (layer 1) represents extracerebral tissue (scalp, skull, dura mater, subarachnoid space, etc.) and the bottom layer (layer 2) represents cerebral tissue. Human subject measurements have shown a significantly greater total hemoglobin concentration in cerebral tissue (82±14 µM) with respect to extracerebral tissue (30±7 µM). By contrast, there was no significant difference between the hemoglobin saturation measured in cerebral tissue (56%±10%) and extracerebral tissue (62%±6%). To our knowledge, this is the first time that an inversion procedure in the frequency domain with six unknown parameters and no other prior knowledge is used for the retrieval of the optical coefficients and top layer thickness with high accuracy on two-layered media. Our absolute measurements of cerebral hemoglobin concentration and saturation are based on the discrimination of extracerebral and cerebral tissue layers, and they can enhance the impact of NIRS for cerebral hemodynamics and

  6. Results of 7th regular inspection of No.1 plant in Oi Power Station, Kansai Electric Power Co., Inc

    International Nuclear Information System (INIS)

    1989-01-01

    The 7th regular inspection of the No.1 plant in Oi Power Station was carried out from December 25, 1987 to July 15, 1988. Parallel operation was resumed on June 23, 1988, 182 days after the start of the inspection. The facilities inspected were the reactor proper, reactor cooling system, measurement and control system, fuel facilities, radiation control facilities, waste facilities, reactor containment installation and emergency power generation system. On these facilities, appearance, disassembly, leak, function, performance and other inspections were carried out. As a result, a part of the fitting of a water chamber partition cover for a steam generator was found broken off, significant signals were observed in 936 heating tubes of steam generators, 72 bolts fixing the blade of a primary coolant pump were damaged, and leakage was found in two fuel assemblies. The work related to this regular inspection was accomplished within the allowable radiation dose based on the relevant laws. The main reconstruction works carried out during the period of this regular inspection were the adoption of fuel containing gadolinia, the removal of a thermometer bypass piping and the repair of defective steam generator tubes. (K.I.)

  7. Results of 6th regular inspection of No.1 unit in Oi Power Plant

    International Nuclear Information System (INIS)

    Anon.

    1987-01-01

    This report presents the results of the 6th regular inspection of the No.1 unit in the Oi Power Plant, carried out during the period from July 11, 1986, to January 28, 1987. The inspection covered the main unit of the nuclear reactor, facilities for the nuclear reactor cooling system, facilities for the instrumentation control system, fuel facilities, radiation control facilities, disposal facilities, nuclear reactor containment facilities, and the emergency power generation system. Checking of the appearance, disassembly, leakage and functional performance of these facilities was conducted. No abnormalities were found except that significant signals were detected in 725 steam generator heat transfer pipes and leakage was suspected in 2 fuel assemblies. The pipes were repaired and the fuel assemblies were replaced. All operations involved in the inspection were performed under conditions within the permissible dose as specified in the applicable laws. Major modification work carried out during the inspection period included the adoption of a burnable poison (B Type) and the charging of fuel for a high burn-up demonstration test. The exposure dose of the company members and non-company members who performed the inspection work is also shown. (Nogami, K.)

  8. Regularity effect in prospective memory during aging

    Directory of Open Access Journals (Sweden)

    Geoffrey Blondelle

    2016-10-01

    Full Text Available Background: The regularity effect can affect performance in prospective memory (PM), but little is known about the cognitive processes linked to this effect. Moreover, its impact with regard to aging remains unknown. To our knowledge, this study is the first to examine the regularity effect in PM from a lifespan perspective, with a sample of young, intermediate, and older adults. Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 16 intermediate adults (40–55), and 25 older adults (65–80). The task, adapted from the Virtual Week, was designed to manipulate the regularity of the various activities of daily life that were to be recalled (regular repeated activities vs. irregular non-repeated activities). We examined the role of several cognitive functions, including certain dimensions of executive function (planning, inhibition, shifting, and binding), short-term memory, and retrospective episodic memory, to identify those involved in PM according to regularity and age. Results: A mixed-design ANOVA showed a main effect of task regularity and an interaction between age and regularity: an age-related difference in PM performance was found for irregular activities (older < young), but not for regular activities. All participants recalled more regular activities than irregular ones, with no age effect. Recalling regular activities involved only planning, for both intermediate and older adults, while recalling irregular ones was linked to planning, inhibition, short-term memory, binding, and retrospective episodic memory. Conclusion: Taken together, our data suggest that planning capacities play a major role in remembering to perform intended actions with advancing age. Furthermore, the age-PM paradox may be attenuated when the experimental design is adapted by implementing a familiar context through the use of activities of daily living. The clinical

  9. Comparative age and growth of common snook Centropomus undecimalis (Pisces: Centropomidae from coastal and riverine areas in Southern Mexico

    Directory of Open Access Journals (Sweden)

    Martha A. Perera-Garcia

    2013-06-01

    Full Text Available Common snook Centropomus undecimalis is an important commercial and fishery species in Southern Mexico; however, high exploitation rates have strongly reduced its abundance. Since information about its population structure is scarce, the objective of the present research was to determine and compare the age structure at four important fishery sites. Age and growth of common snook were determined from specimens collected monthly, from July 2006 to March 2008, from two coastal (Barra Bosque and Barra San Pedro) and two riverine (San Pedro and Tres Brazos) commercial fishery sites in Tabasco, Mexico. Age was determined using sectioned sagittal otoliths, and the data were analyzed with the von Bertalanffy model, among others. Estimated ages ranged from 2 to 17 years. Monthly patterns of marginal increment formation and the percentage of otoliths with opaque rings on the outer edge demonstrated that a single annulus was formed each year. The von Bertalanffy parameters were calculated for males and females using linear adjustment and the non-linear Levenberg-Marquardt method. The von Bertalanffy growth equations were FLt = 109.21(1 - e^(-0.21(t+0.57))) for Barra Bosque, FLt = 94.56(1 - e^(-0.27(t+0.48))) for Barra San Pedro, FLt = 97.15(1 - e^(-0.17(t+1.32))) for San Pedro and FLt = 83.77(1 - e^(-0.26(t+0.49))) for Tres Brazos. According to Hotelling's T², p The white snook Centropomus undecimalis represents a significant monetary income and a food resource for all rural communities near its distribution area. Age and growth of this species were determined. Organisms were collected monthly from the landings of the artisanal fishery cooperatives with the greatest contribution in the coastal zone (Barra Bosque and San Pedro) and the riverine zone (San Pedro and Tres Brazos) between July 2006 and March 2008. Age was determined using sectioned otoliths. Estimated ages ranged from 2 to 17 years. Monthly, the
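
    A minimal sketch of the nonlinear fitting step in such an analysis: SciPy's curve_fit, which uses Levenberg-Marquardt for unconstrained problems, recovers the von Bertalanffy parameters of FLt = Linf(1 - e^(-K(t - t0))) from age-length pairs. The data below are synthetic, generated from the Barra Bosque parameters quoted above.

        import numpy as np
        from scipy.optimize import curve_fit

        def von_bertalanffy(t, Linf, K, t0):
            return Linf * (1.0 - np.exp(-K * (t - t0)))

        rng = np.random.default_rng(2)
        ages = np.arange(2, 18, dtype=float)
        # Synthetic fork lengths from Linf=109.21, K=0.21, t0=-0.57 plus noise.
        lengths = von_bertalanffy(ages, 109.21, 0.21, -0.57) + rng.normal(0, 2, ages.size)

        popt, pcov = curve_fit(von_bertalanffy, ages, lengths, p0=[100.0, 0.2, 0.0])
        print(popt)  # recovered (Linf, K, t0)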

  10. Regularized lattice Boltzmann model for immiscible two-phase flows with power-law rheology

    Science.gov (United States)

    Ba, Yan; Wang, Ningning; Liu, Haihu; Li, Qiang; He, Guoqiang

    2018-03-01

    In this work, a regularized lattice Boltzmann color-gradient model is developed for the simulation of immiscible two-phase flows with power-law rheology. This model is as simple as the Bhatnagar-Gross-Krook (BGK) color-gradient model except that an additional regularization step is introduced prior to the collision step. In the regularization step, the pseudo-inverse method is adopted as an alternative solution for the nonequilibrium part of the total distribution function, and it can be easily extended to other discrete velocity models no matter whether a forcing term is considered or not. The obtained expressions for the nonequilibrium part are merely related to macroscopic variables and velocity gradients that can be evaluated locally. Several numerical examples, including the single-phase and two-phase layered power-law fluid flows between two parallel plates, and the droplet deformation and breakup in a simple shear flow, are conducted to test the capability and accuracy of the proposed color-gradient model. Results show that the present model is more stable and accurate than the BGK color-gradient model for power-law fluids with a wide range of power-law indices. Compared to its multiple-relaxation-time counterpart, the present model can increase the computing efficiency by around 15%, while keeping the same accuracy and stability. Also, the present model is found to be capable of reasonably predicting the critical capillary number of droplet breakup.

  11. Results of 8th regular inspection of No.2 plant in Hamaoka Nuclear Power Station, Chubu Electric Power Co., Inc

    International Nuclear Information System (INIS)

    1989-01-01

    The 8th regular inspection of the No.2 plant in Hamaoka Nuclear Power Station was carried out from January 23 to June 28, 1988. Parallel operation was resumed on June 13, 1988, 143 days after parallel-off. The facilities inspected were the reactor proper, reactor cooling system, measurement and control system, fuel facilities, radiation control facilities, waste facilities, reactor containment installation and emergency power generation system. On these facilities, appearance, disassembly, leak, function, performance and other inspections were carried out, and as a result, no abnormality was found. However, during the preparation for running-in after starting up the reactor, a leak from a steam drain piping was found and repaired. The work related to this regular inspection was accomplished within the allowable radiation dose based on the relevant laws. The main reconstruction works carried out during the period of this regular inspection were the replacement of components of the cooling seawater pumps, the repair of a steam drain piping in the high pressure injection system and the replacement of LP turbine rotors. (K.I.)

  12. Artificial intelligence techniques for embryo and oocyte classification.

    Science.gov (United States)

    Manna, Claudio; Nanni, Loris; Lumini, Alessandra; Pappalardo, Sebastiana

    2013-01-01

    One of the most relevant aspects in assisted reproduction technology is the possibility of characterizing and identifying the most viable oocytes or embryos. In most cases, embryologists select them by visual examination and their evaluation is totally subjective. Recently, due to the rapid growth in the capacity to extract texture descriptors from a given image, a growing interest has been shown in the use of artificial intelligence methods for embryo or oocyte scoring/selection in IVF programmes. This work concentrates its efforts on the possible prediction of the quality of embryos and oocytes in order to improve the performance of assisted reproduction technology, starting from their images. The artificial intelligence system proposed in this work is based on a set of Levenberg-Marquardt neural networks trained using textural descriptors (the local binary patterns). The proposed system was tested on two data sets of 269 oocytes and 269 corresponding embryos from 104 women and compared with other machine learning methods already proposed in the past for similar classification problems. Although the results are only preliminary, they show an interesting classification performance. This technique may be of particular interest in those countries where legislation restricts embryo selection.

  13. Prophylactic implantable defibrillator in patients with arrhythmogenic right ventricular cardiomyopathy/dysplasia and no prior ventricular fibrillation or sustained ventricular tachycardia.

    LENUS (Irish Health Repository)

    Corrado, Domenico

    2010-09-21

    The role of the implantable cardioverter-defibrillator (ICD) in patients with arrhythmogenic right ventricular cardiomyopathy/dysplasia and no prior ventricular fibrillation (VF) or sustained ventricular tachycardia is an unsolved issue.

  14. Smoothly Clipped Absolute Deviation (SCAD) regularization for compressed sensing MRI Using an augmented Lagrangian scheme

    NARCIS (Netherlands)

    Mehranian, Abolfazl; Rad, Hamidreza Saligheh; Rahmim, Arman; Ay, Mohammad Reza; Zaidi, Habib

    2013-01-01

    Purpose: Compressed sensing (CS) provides a promising framework for MR image reconstruction from highly undersampled data, thus reducing data acquisition time. In this context, sparsity-promoting regularization techniques exploit the prior knowledge that MR images are sparse or compressible in a transform domain.

  15. Improvements in GRACE Gravity Fields Using Regularization

    Science.gov (United States)

    Save, H.; Bettadpur, S.; Tapley, B. D.

    2008-12-01

    The unconstrained global gravity field models derived from GRACE are susceptible to systematic errors that show up as broad "stripes" aligned in a North-South direction on global maps of mass flux. These errors are believed to be a consequence of both systematic and random errors in the data that are amplified by the nature of the gravity field inverse problem. These errors impede scientific exploitation of the GRACE data products and limit the realizable spatial resolution of the GRACE global gravity fields in certain regions. We use regularization techniques to reduce these "stripe" errors in the gravity field products. The regularization criteria are designed such that there is no attenuation of the signal and that the solutions fit the observations as well as an unconstrained solution. We have used a computationally inexpensive method, normally referred to as the "L-ribbon", to find the regularization parameter. This paper discusses the characteristics and statistics of a 5-year time-series of regularized gravity field solutions. The solutions show markedly reduced stripes, are of uniformly good quality over time, and leave little or no systematic observation residuals, which is a frequent consequence of signal suppression from regularization. Up to degree 14, the signal in the regularized solutions shows correlation greater than 0.8 with the un-regularized CSR Release-04 solutions. Signals from large-amplitude, small-spatial-extent events - such as the Great Sumatra-Andaman Earthquake of 2004 - are visible in the global solutions without using the special post-facto error reduction techniques employed previously in the literature. Hydrological signals as small as 5 cm water-layer equivalent in small river basins, such as the Indus and Nile, are clearly evident, in contrast to the noisy estimates from RL04. The residual variability over the oceans relative to a seasonal fit is small except at higher latitudes, and is evident without the need for de-striping or

  16. Regularization Techniques for Linear Least-Squares Problems

    KAUST Repository

    Suliman, Mohamed

    2016-04-01

    Linear estimation is a fundamental branch of signal processing that deals with estimating the values of parameters from corrupted measured data. Throughout the years, several optimization criteria have been used to achieve this task. The most prominent among these is linear least-squares. Although this criterion has enjoyed wide popularity in many areas due to its attractive properties, it suffers from some shortcomings, and alternative optimization criteria have been proposed as a result. These new criteria allow, in one way or another, the incorporation of further prior information into the problem at hand. Among these alternative criteria is regularized least-squares (RLS). In this thesis, we propose two new algorithms to find the regularization parameter for linear least-squares problems. In the constrained perturbation regularization algorithm (COPRA) for random matrices and COPRA for linear discrete ill-posed problems, an artificial perturbation matrix with a bounded norm is forced into the model matrix. This perturbation is introduced to enhance the singular value structure of the matrix, so that the modified model is expected to provide a more stable solution when used to estimate the original signal by minimizing the worst-case residual error function. Unlike many other regularization algorithms that seek to minimize the estimated data error, the two new proposed algorithms are developed mainly to select the artificial perturbation bound and the regularization parameter in a way that approximately minimizes the mean-squared error (MSE) between the original signal and its estimate under various conditions. The first proposed COPRA method is developed mainly to estimate the regularization parameter when the measurement matrix is complex Gaussian, with centered unit variance (standard), and independent and identically distributed (i.i.d.) entries. Furthermore, the second proposed COPRA
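
    For background on the RLS criterion discussed here, a minimal sketch of Tikhonov-regularized least squares solved through the SVD is shown below. The regularization parameter is fixed by hand; COPRA's contribution is precisely an automatic, MSE-oriented choice of this value.

        import numpy as np

        def ridge_solve(A, y, lam):
            # x = argmin ||Ax - y||^2 + lam * ||x||^2, computed via the SVD of A.
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            filt = s / (s ** 2 + lam)  # filtered inverse singular values
            return Vt.T @ (filt * (U.T @ y))

        rng = np.random.default_rng(3)
        A = rng.standard_normal((50, 20))
        x_true = rng.standard_normal(20)
        y = A @ x_true + 0.1 * rng.standard_normal(50)
        x_hat = ridge_solve(A, y, lam=0.5)
        print(np.linalg.norm(x_hat - x_true))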

  17. Lessons learned: the effect of prior technology use on Web-based interventions.

    Science.gov (United States)

    Carey, Joanne C; Wade, Shari L; Wolfe, Christopher R

    2008-04-01

    This study examined the role of regular prior technology use in treatment response to an online family problem-solving (OFPS) intervention and an Internet resource intervention (IRI) for pediatric traumatic brain injury (TBI). Participants were 150 individuals in 40 families of children with TBI randomly assigned to the OFPS intervention or an IRI. All families received free computers and Internet access to TBI resources. OFPS families received Web-based sessions and therapist-guided synchronous videoconferences focusing on problem solving, communication skills, and behavior management. All participants completed measures of depression, anxiety, and computer usage. OFPS participants rated treatment satisfaction, therapeutic alliance, and Web site and technology comfort. With the OFPS intervention, depression and anxiety improved significantly more among technology-using parents (n = 14) than non-technology users (n = 6). Technology users reported increasing comfort with technology over time, and this change was predictive of depression at follow-up. Satisfaction and ease-of-use ratings did not differ by technology usage. Lack of regular prior home computer usage and nonadherence were predictive of anxiety at follow-up. The IRI was not globally effective. However, controlling for prior depression, age, and technology use at work, there was a significant effect of technology use at home on depression. Families with technology experience at home (n = 11) reported significantly greater improvements in depression than families without prior technology experience at home (n = 8). Although Web-based OFPS was effective in improving caregiver functioning, individuals with limited computer experience may benefit less from an online intervention due to increased nonadherence.

  18. Random template placement and prior information

    International Nuclear Information System (INIS)

    Roever, Christian

    2010-01-01

    In signal detection problems, one is usually faced with the task of searching a parameter space for peaks in the likelihood function which indicate the presence of a signal. Random searches have proven to be very efficient as well as easy to implement, compared, e.g., to searches along regular grids in parameter space. Knowledge of the parameterised shape of the signal searched for adds structure to the parameter space: there are usually regions that need to be densely searched, while in other regions a coarser search is sufficient. On the other hand, prior information identifies the regions in which a search will actually be promising or will likely be in vain. Defining specific figures of merit allows one to combine both template metric and prior distribution and devise optimal sampling schemes over the parameter space. We show an example related to the gravitational wave signal from a binary inspiral event. Here the template metric and prior information are particularly contradictory, since signals from low-mass systems tolerate the least mismatch in parameter space, while high-mass systems are far more likely, as they imply a greater signal-to-noise ratio (SNR) and hence are detectable to greater distances. The derived sampling strategy is implemented in a Markov chain Monte Carlo (MCMC) algorithm, where it improves convergence.

  19. Consistent Partial Least Squares Path Modeling via Regularization.

    Science.gov (United States)

    Jung, Sunho; Park, JaeHong

    2018-01-01

    Partial least squares (PLS) path modeling is a component-based structural equation modeling approach that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it takes the strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that regularized PLSc is recommended for use when serious multicollinearity is present.

  20. Consistent Partial Least Squares Path Modeling via Regularization

    Directory of Open Access Journals (Sweden)

    Sunho Jung

    2018-02-01

    Full Text Available Partial least squares (PLS) path modeling is a component-based structural equation modeling approach that has been adopted in social and psychological research due to its data-analytic capability and flexibility. A recent methodological advance is consistent PLS (PLSc), designed to produce consistent estimates of path coefficients in structural models involving common factors. In practice, however, PLSc may frequently encounter multicollinearity, in part because it takes the strategy of estimating path coefficients based on consistent correlations among independent latent variables. PLSc as yet has no remedy for this multicollinearity problem, which can cause loss of statistical power and accuracy in parameter estimation. Thus, a ridge type of regularization is incorporated into PLSc, creating a new technique called regularized PLSc. A comprehensive simulation study is conducted to evaluate the performance of regularized PLSc as compared to its non-regularized counterpart in terms of power and accuracy. The results show that regularized PLSc is recommended for use when serious multicollinearity is present.

  1. Physical model of dimensional regularization

    Energy Technology Data Exchange (ETDEWEB)

    Schonfeld, Jonathan F.

    2016-12-15

    We explicitly construct fractals of dimension 4-ε on which dimensional regularization approximates scalar-field-only quantum-field theory amplitudes. The construction does not require fractals to be Lorentz-invariant in any sense, and we argue that there probably is no Lorentz-invariant fractal of dimension greater than 2. We derive dimensional regularization's power-law screening first for fractals obtained by removing voids from 3-dimensional Euclidean space. The derivation applies techniques from elementary dielectric theory. Surprisingly, fractal geometry by itself does not guarantee the appropriate power-law behavior; boundary conditions at fractal voids also play an important role. We then extend the derivation to 4-dimensional Minkowski space. We comment on generalization to non-scalar fields, and speculate about implications for quantum gravity. (orig.)

  2. Multilinear Graph Embedding: Representation and Regularization for Images.

    Science.gov (United States)

    Chen, Yi-Lei; Hsu, Chiou-Ting

    2014-02-01

    Given a set of images, finding a compact and discriminative representation is still a big challenge especially when multiple latent factors are hidden in the way of data generation. To represent multifactor images, although multilinear models are widely used to parameterize the data, most methods are based on high-order singular value decomposition (HOSVD), which preserves global statistics but interprets local variations inadequately. To this end, we propose a novel method, called multilinear graph embedding (MGE), as well as its kernelization MKGE to leverage the manifold learning techniques into multilinear models. Our method theoretically links the linear, nonlinear, and multilinear dimensionality reduction. We also show that the supervised MGE encodes informative image priors for image regularization, provided that an image is represented as a high-order tensor. From our experiments on face and gait recognition, the superior performance demonstrates that MGE better represents multifactor images than classic methods, including HOSVD and its variants. In addition, the significant improvement in image (or tensor) completion validates the potential of MGE for image regularization.

  3. Distance-regular graphs

    NARCIS (Netherlands)

    van Dam, Edwin R.; Koolen, Jack H.; Tanaka, Hajime

    2016-01-01

    This is a survey of distance-regular graphs. We present an introduction to distance-regular graphs for the reader who is unfamiliar with the subject, and then give an overview of some developments in the area of distance-regular graphs since the monograph 'BCN'[Brouwer, A.E., Cohen, A.M., Neumaier,

  4. Regular expressions cookbook

    CERN Document Server

    Goyvaerts, Jan

    2009-01-01

    This cookbook provides more than 100 recipes to help you crunch data and manipulate text with regular expressions. Every programmer can find uses for regular expressions, but their power doesn't come worry-free. Even seasoned users often suffer from poor performance, false positives, false negatives, or perplexing bugs. Regular Expressions Cookbook offers step-by-step instructions for some of the most common tasks involving this tool, with recipes for C#, Java, JavaScript, Perl, PHP, Python, Ruby, and VB.NET. With this book, you will: Understand the basics of regular expressions through a

  5. Application of regularization technique in image super-resolution algorithm via sparse representation

    Science.gov (United States)

    Huang, De-tian; Huang, Wei-qin; Huang, Hui; Zheng, Li-xin

    2017-11-01

    To exploit the prior knowledge of the image more effectively and restore more detail of the edges and structures, a novel sparse coding objective function is proposed by applying the principles of non-local similarity and manifold learning to super-resolution via sparse representation. Firstly, a non-local similarity regularization term is constructed using similar image patches to preserve edge information. Then, a manifold learning regularization term is constructed utilizing the locally linear embedding approach to enhance structural information. The experimental results validate that the proposed algorithm achieves a significant improvement over several super-resolution algorithms in terms of both subjective visual effect and objective evaluation indices.
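
    The objective above augments sparse coding with non-local similarity and manifold terms; the sketch below shows only the baseline piece, an ISTA iteration for min (1/2)||y - D a||^2 + lam * ||a||_1 over a random, hypothetical dictionary D, to make the notion of a sparse coding objective concrete.

        import numpy as np

        def ista(D, y, lam, steps=200):
            # Iterative soft-thresholding for l1-regularized sparse coding.
            L = np.linalg.norm(D, 2) ** 2  # Lipschitz constant of the gradient
            alpha = np.zeros(D.shape[1])
            for _ in range(steps):
                grad = D.T @ (D @ alpha - y)
                z = alpha - grad / L
                alpha = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrink
            return alpha

        rng = np.random.default_rng(4)
        D = rng.standard_normal((32, 64))
        y = 2.0 * D[:, 3] + 0.01 * rng.standard_normal(32)  # sparse ground truth
        alpha = ista(D, y, lam=0.1)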

  6. Basis Expansion Approaches for Regularized Sequential Dictionary Learning Algorithms With Enforced Sparsity for fMRI Data Analysis.

    Science.gov (United States)

    Seghouane, Abd-Krim; Iqbal, Asif

    2017-09-01

    Sequential dictionary learning algorithms have been successfully applied to functional magnetic resonance imaging (fMRI) data analysis. fMRI data sets are, however, structured data matrices with a notion of temporal smoothness in the column direction. This prior information, which can be converted into a smoothness constraint on the learned dictionary atoms, has seldom been included in classical dictionary learning algorithms applied to fMRI data analysis. In this paper, we tackle this problem by proposing two new sequential dictionary learning algorithms dedicated to fMRI data analysis that account for this prior information. These algorithms differ from existing ones in their dictionary update stage, whose steps are derived as a variant of the power method for computing the SVD. The proposed algorithms generate regularized dictionary atoms via the solution of a left regularized rank-one matrix approximation problem, where temporal smoothness is enforced via regularization through basis expansion and sparse basis expansion in the dictionary update stage. Applications to synthetic data experiments and real fMRI data sets illustrating the performance of the proposed algorithms are provided.

  7. LL-regular grammars

    NARCIS (Netherlands)

    Nijholt, Antinus

    1980-01-01

    Culik II and Cohen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this paper we consider an analogous extension of the LL(k) grammars, called the LL-regular grammars. The relation of this class of grammars to other classes of grammars will be shown. Any LL-regular

  8. New model for prediction binary mixture of antihistamine decongestant using artificial neural networks and least squares support vector machine by spectrophotometry method

    Science.gov (United States)

    Mofavvaz, Shirin; Sohrabi, Mahmoud Reza; Nezamzadeh-Ejhieh, Alireza

    2017-07-01

    In the present study, artificial neural networks (ANNs) and least squares support vector machines (LS-SVM), as intelligent methods based on absorption spectra in the range of 230-300 nm, have been used for the determination of antihistamine decongestant contents. In the first step, one type of network (feed-forward back-propagation) from the artificial neural network family was employed with two different training algorithms, Levenberg-Marquardt (LM) and gradient descent with momentum and adaptive learning rate back-propagation (GDX), and their performance was evaluated; the performance of the LM algorithm was better than that of the GDX algorithm. In the second step, a radial basis function network was utilized and the results compared with those of the previous network. In the last step, another intelligent method, the least squares support vector machine, was proposed to construct the antihistamine decongestant prediction model, and the results were compared with those of the two aforementioned networks. The statistical parameters mean square error (MSE), coefficient of determination (R2), correlation coefficient (r), mean recovery (%) and relative standard deviation (RSD) were used for selecting the best model among these methods. Moreover, the proposed methods were compared to high-performance liquid chromatography (HPLC) as a reference method. A one-way analysis of variance (ANOVA) test at the 95% confidence level, applied to the comparison results of the suggested and reference methods, showed that there were no significant differences between them.

  9. neural control system

    International Nuclear Information System (INIS)

    Elshazly, A.A.E.

    2002-01-01

    Automatic power stabilization control is the desired objective for any reactor operation, especially in nuclear power plants. A major problem in this area is the inevitable gap between a real plant and the theory of conventional analysis and synthesis of linear time-invariant systems. In particular, the trajectory tracking control of a nonlinear plant is a class of problems in which the classical linear transfer function methods break down, because no transfer function can represent the system over the entire operating region. There is a considerable amount of research on the model-inverse approach using the feedback linearization technique. However, this method requires a precise plant model to implement the exact linearizing feedback. For nuclear reactor systems, this approach is not an easy task because of the uncertainty in the plant parameters and un-measurable state variables. Therefore, an artificial neural network (ANN) is used either in self-tuning control or in improving a conventional rule-based expert system. The main objective of this thesis is to suggest an ANN-based self-learning controller structure. This method is capable of on-line reinforcement learning and control for a nuclear reactor with a totally unknown dynamics model. Previous research is based on the back-propagation algorithm. Back-propagation (BP), fast back-propagation (FBP), and Levenberg-Marquardt (LM) algorithms are discussed and compared for reinforcement learning. It is found that the LM algorithm is quite superior.

  10. Fault Diagnosis Method of Polymerization Kettle Equipment Based on Rough Sets and BP Neural Network

    Directory of Open Access Journals (Sweden)

    Shu-zhi Gao

    2013-01-01

    Full Text Available The polyvinyl chloride (PVC) polymerization production process is a typical complex controlled object, with features such as nonlinearity, multiple variables, strong coupling, and large time delay. Aiming at the real-time fault diagnosis and optimized monitoring requirements of the large-scale key polymerization equipment of the PVC production process, a real-time fault diagnosis strategy is proposed based on rough sets theory with an improved discernibility matrix and BP neural networks. The improved discernibility matrix is adopted for rough-set attribute reduction in order to effectively decrease the input dimensionality of the fault characteristics. A Levenberg-Marquardt BP neural network is trained to diagnose the polymerization faults according to the reduced decision table, which realizes the nonlinear mapping from the fault symptom set to the polymerization fault set. Simulation experiments carried out on historical industrial data show the effectiveness of the proposed rough set neural network fault diagnosis method. The proposed strategy greatly increases the accuracy and efficiency of the polymerization fault diagnosis system.

  11. ANN Synthesis Model of Single-Feed Corner-Truncated Circularly Polarized Microstrip Antenna with an Air Gap for Wideband Applications

    Directory of Open Access Journals (Sweden)

    Zhongbao Wang

    2014-01-01

    Full Text Available A computer-aided design model based on the artificial neural network (ANN) is proposed to directly obtain the patch physical dimensions of the single-feed corner-truncated circularly polarized microstrip antenna (CPMA) with an air gap for wideband applications. To take account of the effect of the air gap, an equivalent relative permittivity is introduced and adopted to calculate the resonant frequency and Q-factor of square microstrip antennas for obtaining the training data sets. ANN architectures using multilayered perceptrons (MLPs) and radial basis function networks (RBFNs) are compared. Also, six learning algorithms are used to train the MLPs for comparison. It is found that MLPs trained with the Levenberg-Marquardt (LM) algorithm are better than RBFNs for the synthesis of the CPMA. An accurate model is achieved by using an MLP with three hidden layers. The model is validated by electromagnetic simulation and measurements. It is enormously useful to antenna engineers for facilitating the design of the single-feed CPMA with an air gap.

  12. Feedforward neural network model estimating pollutant removal process within mesophilic upflow anaerobic sludge blanket bioreactor treating industrial starch processing wastewater.

    Science.gov (United States)

    Antwi, Philip; Li, Jianzheng; Meng, Jia; Deng, Kaiwen; Koblah Quashie, Frank; Li, Jiuling; Opoku Boadi, Portia

    2018-06-01

    In this study, a three-layered feedforward-backpropagation artificial neural network (BPANN) model was developed and employed to evaluate COD removal in an upflow anaerobic sludge blanket (UASB) reactor treating industrial starch processing wastewater. At the end of UASB operation, microbial community characterization revealed a satisfactory composition of microbes, whereas morphology depicted rod-shaped archaea. pH, COD, NH4+, VFA, OLR and biogas yield were selected by principal component analysis and used as input variables. Whilst the tangent sigmoid function (tansig) and linear function (purelin) were assigned as activation functions at the hidden layer and output layer, respectively, the optimum BPANN architecture was achieved with the Levenberg-Marquardt algorithm (trainlm) after eleven training algorithms had been tested. Based on performance indicators such as the mean squared error, fractional variance, index of agreement and coefficient of determination (R2), the BPANN model demonstrated significant performance, with R2 reaching 87%. The study revealed that control and optimization of an anaerobic digestion process with a BPANN model is feasible. Copyright © 2018 Elsevier Ltd. All rights reserved.
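
    Levenberg-Marquardt training such as MATLAB's trainlm can be imitated outside MATLAB by handing the per-sample output errors of a small network to a generic LM solver. A hedged sketch with toy data and a 3-input, 6-hidden-unit, 1-output tansig/purelin network standing in for the paper's architecture:

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(5)
        X = rng.random((40, 3))  # toy inputs standing in for pH, COD, OLR, ...
        y = np.sin(X @ np.array([1.0, 2.0, -1.0]))  # toy target (e.g. COD removal)

        n_in, n_hid = 3, 6

        def unpack(w):
            i = n_hid * n_in
            W1 = w[:i].reshape(n_hid, n_in)
            b1 = w[i:i + n_hid]
            W2 = w[i + n_hid:i + 2 * n_hid]
            b2 = w[-1]
            return W1, b1, W2, b2

        def residuals(w):
            W1, b1, W2, b2 = unpack(w)
            hidden = np.tanh(X @ W1.T + b1)  # tansig hidden layer
            return hidden @ W2 + b2 - y      # purelin output minus target

        w0 = 0.1 * rng.standard_normal(n_hid * n_in + 2 * n_hid + 1)
        fit = least_squares(residuals, w0, method='lm')  # Levenberg-Marquardt
        print(np.mean(fit.fun ** 2))  # training MSE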

  13. DSP implementation of a PV system with GA-MLP-NN based MPPT controller supplying BLDC motor drive

    International Nuclear Information System (INIS)

    Akkaya, R.; Kulaksiz, A.A.; Aydogdu, O.

    2007-01-01

    This paper presents a brushless dc motor drive for heating, ventilating and air conditioning fans, which is utilized as the load of a photovoltaic system with a maximum power point tracking (MPPT) controller. The MPPT controller is based on a genetic-assisted, multi-layer perceptron neural network (GA-MLP-NN) structure and includes a DC-DC boost converter. Genetic assistance in the neural network is used to optimize the size of the hidden layer. Also, for training the network, a genetic-assisted Levenberg-Marquardt (GA-LM) algorithm is utilized. The GA-MLP-NN, trained offline by this hybrid algorithm, is utilized for online estimation of the voltage and current values at the maximum power point. A brushless dc (BLDC) motor drive system that incorporates a motor controller with a proportional integral (PI) speed control loop is successfully implemented to operate the fans. The digital signal processor (DSP) based unit provides rapid achievement of the MPPT and current control of the BLDC motor drive. The performance results of the system are given, and experimental results are presented for a laboratory prototype of 120 W.

  14. Optimized Structure of the Traffic Flow Forecasting Model With a Deep Learning Approach.

    Science.gov (United States)

    Yang, Hao-Fan; Dillon, Tharam S; Chen, Yi-Ping Phoebe

    2017-10-01

    Forecasting accuracy is an important issue for successful intelligent traffic management, especially in the domain of traffic efficiency and congestion reduction. The dawning of the big data era brings opportunities to greatly improve prediction accuracy. In this paper, we propose a novel model, stacked autoencoder Levenberg-Marquardt model, which is a type of deep architecture of neural network approach aiming to improve forecasting accuracy. The proposed model is designed using the Taguchi method to develop an optimized structure and to learn traffic flow features through layer-by-layer feature granulation with a greedy layerwise unsupervised learning algorithm. It is applied to real-world data collected from the M6 freeway in the U.K. and is compared with three existing traffic predictors. To the best of our knowledge, this is the first time that an optimized structure of the traffic flow forecasting model with a deep learning approach is presented. The evaluation results demonstrate that the proposed model with an optimized structure has superior performance in traffic flow forecasting.

  15. Fast algorithm for spectral processing with application to on-line welding quality assurance

    Science.gov (United States)

    Mirapeix, J.; Cobo, A.; Jaúregui, C.; López-Higuera, J. M.

    2006-10-01

    A new technique is presented in this paper for the analysis of welding process emission spectra to accurately estimate the plasma electronic temperature in real time. The estimation of the electronic temperature of the plasma, through the analysis of the emission lines from multiple atomic species, may be used to monitor possible perturbations during the welding process. Unlike traditional techniques, which usually involve fitting peaks to Voigt functions using the recursive Levenberg-Marquardt method, sub-pixel algorithms are used to more accurately estimate the central wavelength of the peaks. Three different sub-pixel algorithms are analysed and compared, and it is shown that the LPO (linear phase operator) sub-pixel algorithm is a better solution within the proposed system. Experimental tests during TIG welding, using a fibre optic to capture the arc light together with a low cost CCD-based spectrometer, show that some typical defects associated with perturbations in the electron temperature can be easily detected and identified with this technique. A typical processing time for multiple peak analysis is less than 20 ms running on a conventional PC.
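
    The LPO operator itself is specific to the paper, but the general idea of sub-pixel peak localisation can be illustrated with the standard three-point parabolic interpolation around the brightest sample of an emission line; this is a generic sketch, not the authors' algorithm.

        import numpy as np

        def subpixel_peak(spectrum):
            # Fit a parabola through the maximum sample and its two neighbours;
            # the vertex gives a sub-pixel estimate of the line centre.
            i = int(np.argmax(spectrum))
            ym, y0, yp = spectrum[i - 1], spectrum[i], spectrum[i + 1]
            return i + 0.5 * (ym - yp) / (ym - 2.0 * y0 + yp)

        x = np.arange(100, dtype=float)
        line = np.exp(-0.5 * ((x - 42.3) / 1.5) ** 2)  # synthetic emission line
        print(subpixel_peak(line))  # close to 42.3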

  16. Prediction and Optimization of Key Performance Indicators in the Production of Stator Core Using a GA-NN Approach

    Science.gov (United States)

    Rajora, M.; Zou, P.; Xu, W.; Jin, L.; Chen, W.; Liang, S. Y.

    2017-12-01

    With the rapidly changing demands of the manufacturing market, intelligent techniques are being used to solve engineering problems due to their ability to handle nonlinear complex problems. For example, the conventional production of stator cores relies on experienced engineers to make an initial plan for the number of compensation sheets to be added to achieve uniform pressure distribution throughout the laminations. Additionally, these engineers must use their experience to revise the initial plans based upon measurements made during the production of the stator core. However, this method yields inconsistent results, as humans are incapable of storing and analysing large amounts of data. In this article, first, a neural network (NN), trained using a hybrid Levenberg-Marquardt (LM)-genetic algorithm (GA) approach, is developed to assist the engineers with the decision-making process. Next, the trained NN is used as a fitness function in an optimization algorithm to find the optimal values of the initial compensation sheet plan, with the aim of minimizing the required revisions during the production of the stator core.

  17. CubiCal - Fast radio interferometric calibration suite exploiting complex optimisation

    Science.gov (United States)

    Kenyon, J. S.; Smirnov, O. M.; Grobler, T. L.; Perkins, S. J.

    2018-05-01

    It has recently been shown that radio interferometric gain calibration can be expressed succinctly in the language of complex optimisation. In addition to providing an elegant framework for further development, it exposes properties of the calibration problem which can be exploited to accelerate traditional non-linear least squares solvers such as Gauss-Newton and Levenberg-Marquardt. We extend existing derivations to chains of Jones terms: products of several gains which model different aberrant effects. In doing so, we find that the useful properties found in the single term case still hold. We also develop several specialised solvers which deal with complex gains parameterised by real values. The newly developed solvers have been implemented in a Python package called CubiCal, which uses a combination of Cython, multiprocessing and shared memory to leverage the power of modern hardware. We apply CubiCal to both simulated and real data, and perform both direction-independent and direction-dependent self-calibration. Finally, we present the results of some rudimentary profiling to show that CubiCal is competitive with respect to existing calibration tools such as MeqTrees.

  18. Artificial neural networks and adaptive neuro-fuzzy assessments for ground-coupled heat pump system

    Energy Technology Data Exchange (ETDEWEB)

    Esen, Hikmet; Esen, Mehmet [Department of Mechanical Education, Faculty of Technical Education, Firat University, 23119 Elazig (Turkey); Inalli, Mustafa [Department of Mechanical Engineering, Faculty of Engineering, Firat University, 23279 Elazig (Turkey); Sengur, Abdulkadir [Department of Electronic and Computer Science, Faculty of Technical Education, Firat University, 23119 Elazig (Turkey)

    2008-07-01

    This article presents a comparison of artificial neural network (ANN) and adaptive neuro-fuzzy inference system (ANFIS) models applied to a ground-coupled heat pump (GCHP) system. The aim of this study is to predict system performance related to ground and air (condenser inlet and outlet) temperatures by using the desired models. Performance forecasting is the precondition for the optimal design and energy-saving operation of air-conditioning systems, so the obtained models will help the system designer to realize this precondition. The most suitable algorithm and neuron number in the hidden layer are found to be Levenberg-Marquardt (LM) with seven neurons for the ANN model, whereas the most suitable membership function and number of membership functions are found to be Gauss and two, respectively, for the ANFIS model. The root-mean-square (RMS) value and the coefficient of variation in percent (cov) value are 0.0047 and 0.1363, respectively. The absolute fraction of variance (R2) is 0.9999, which can be considered very promising. This paper shows the appropriateness of ANFIS for the quantitative modeling of GCHP systems. (author)

  19. Artificial Neural Network (ANN) design for Hg-Se interactions and their effect on reduction of Hg uptake by radish plant

    International Nuclear Information System (INIS)

    Kumar Rohit Raj; Abhishek Kardam; Shalini Srivastava; Jyoti Kumar Arora

    2010-01-01

    The tendency of selenium to interact with heavy metals in the presence of naturally occurring species has been exploited for the development of green bioremediation of toxic metals from soil using artificial neural network (ANN) modeling. Cross-validated data on the reduction in uptake of Hg(II) ions by the plant R. sativus, grown in soil and sand culture in the presence of selenium, were used for ANN modeling. An ANN model based on the combination of back-propagation and principal component analysis was able to predict the reduction in Hg uptake with a sigmoid axon transfer function. The data of fifty laboratory experimental sets were used for structuring a single-layer ANN model. A series of experiments resulted in a performance evaluation considering 20% of the data for testing and 20% for cross-validation at 1,500 epochs with a momentum of 0.70. The Levenberg-Marquardt algorithm (LMA) was found to be the best of the BP algorithms, with a minimum mean squared error (MSE) at the eighth decimal place for both training and cross-validation. (author)

  20. Prediction of Film Cooling Effectiveness on a Gas Turbine Blade Leading Edge Using ANN and CFD

    Science.gov (United States)

    Dávalos, J. O.; García, J. C.; Urquiza, G.; Huicochea, A.; De Santiago, O.

    2018-05-01

    In this work, the area-averaged film cooling effectiveness (AAFCE) on a gas turbine blade leading edge was predicted by employing an artificial neural network (ANN) with the following input variables: hole diameter, injection angle, blowing ratio, and hole and column pitch. The database used to train the network was built using computational fluid dynamics (CFD) based on a two-level full factorial design of experiments. The CFD numerical model was validated with an experimental rig, where a first stage blade of a gas turbine was represented by a cylindrical specimen. The ANN architecture was composed of three layers with four neurons in the hidden layer, and Levenberg-Marquardt was selected as the ANN optimization algorithm. The AAFCE was successfully predicted by the ANN, with a regression coefficient R2 > 0.99 and a root mean square error RMSE = 0.0038. The ANN weight coefficients were used to estimate the relative importance of the input parameters. Blowing ratio was the most influential parameter, with a relative importance of 40.36%, followed by hole diameter. Additionally, by using the ANN model, the relationship between input parameters was analyzed.
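
    Estimating the relative importance of inputs from trained weights, as done above for the blowing ratio, is commonly carried out with Garson's algorithm. A hedged sketch for a single-hidden-layer network with random placeholder weight matrices (the paper's network and weights are not reproduced here):

        import numpy as np

        def garson_importance(W1, W2):
            # W1: (hidden, inputs) input-to-hidden weights; W2: (hidden,) to output.
            contrib = np.abs(W1) * np.abs(W2)[:, None]     # per-connection products
            contrib /= contrib.sum(axis=1, keepdims=True)  # normalise per neuron
            importance = contrib.sum(axis=0)
            return importance / importance.sum()

        rng = np.random.default_rng(6)
        W1 = rng.standard_normal((4, 5))  # 5 inputs: diameter, angle, ratio, pitches
        W2 = rng.standard_normal(4)
        print(garson_importance(W1, W2))  # relative importances, summing to 1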

  1. Trace analysis of acids and bases by conductometric titration with multiparametric non-linear regression.

    Science.gov (United States)

    Coelho, Lúcia H G; Gutz, Ivano G R

    2006-03-15

    A chemometric method for the analysis of conductometric titration data was introduced to extend its applicability to lower concentrations and more complex acid-base systems. Auxiliary pH measurements were made during the titration to assist the calculation of the distribution of protonable species on the basis of known or guessed equilibrium constants. Conductivity values of each ionized or ionizable species possibly present in the sample were introduced into a general equation in which the only unknown parameters were the total concentrations of (conjugated) bases and of strong electrolytes not involved in acid-base equilibria. All these concentrations were adjusted by a multiparametric nonlinear regression (NLR) method based on the Levenberg-Marquardt algorithm. This first conductometric titration method with NLR analysis (CT-NLR) was successfully applied to simulated conductometric titration data and to synthetic samples with multiple components at concentrations as low as those found in rainwater (approximately 10 micromol L(-1)). It was possible to resolve and quantify mixtures containing a strong acid, formic acid, acetic acid, ammonium ion, bicarbonate and inert electrolyte with an accuracy of 5% or better.
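
    The CT-NLR fit reduces to a standard Levenberg-Marquardt regression once the species distribution (from the auxiliary pH data) is known. The toy below fits two unknown total concentrations to a synthetic conductivity curve; the distribution function and molar conductivities are invented placeholders, not the paper's values.

      import numpy as np
      from scipy.optimize import curve_fit

      v = np.linspace(0.0, 2.0, 50)                 # titrant volume (mL)

      def kappa(v, c_base, c_inert):
          f = v / (v + 0.5)                         # stand-in ionized fraction from pH data
          lam_base, lam_inert = 4.0e3, 8.0e3        # molar conductivities (arbitrary units)
          return lam_base * c_base * f + lam_inert * c_inert

      rng = np.random.default_rng(7)
      y_obs = kappa(v, 1.0e-5, 5.0e-6) + rng.normal(0.0, 1e-3, v.size)
      popt, pcov = curve_fit(kappa, v, y_obs, p0=[1e-6, 1e-6], method='lm')
      print(popt)                                   # recovers ~[1e-5, 5e-6]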

  2. Characterization of acid functional groups of carbon dots by nonlinear regression data fitting of potentiometric titration curves

    Science.gov (United States)

    Alves, Larissa A.; de Castro, Arthur H.; de Mendonça, Fernanda G.; de Mesquita, João P.

    2016-05-01

    The oxygenated functional groups present on the surface of carbon dots with an average size of 2.7 ± 0.5 nm were characterized by a variety of techniques. In particular, we discussed the fitting of potentiometric titration curves using a nonlinear regression method based on the Levenberg-Marquardt algorithm. The results obtained by statistical treatment of the titration curve data showed that the best fit was obtained by considering the presence of five Brønsted-Lowry acids on the surface of the carbon dots, with ionization constants characteristic of carboxylic acid, cyclic ester, phenolic and pyrone-like groups. The total number of oxygenated acid groups obtained was 5 mmol g-1, with approximately 65% (∼2.9 mmol g-1) originating from groups with pKa titrated and initial concentration of HCl solution. Finally, we believe that the methodology used here, together with other characterization techniques, is a simple, fast and powerful tool to characterize the complex acid-base properties of these interesting and intriguing nanoparticles.

  3. Modeling and Multiresponse Optimization for Anaerobic Codigestion of Oil Refinery Wastewater and Chicken Manure by Using Artificial Neural Network and the Taguchi Method

    Directory of Open Access Journals (Sweden)

    Esmaeil Mehryar

    2017-01-01

    Full Text Available To study the optimum process conditions for pretreatments and anaerobic codigestion of oil refinery wastewater (ORWW) with chicken manure, an L9 (3^4) Taguchi orthogonal array was applied. The biogas production (BGP), biomethane content (BMP), and chemical oxygen demand solubilization (CODS) rate during stabilization were evaluated as the process outputs. The optimum conditions were obtained by using Design Expert software (Version 7.0.0). The results indicated that the optimum conditions could be achieved with 44% ORWW, 36°C temperature, 30 min sonication, and 6% TS in the digester. The optimum BGP, BMP, and CODS removal rates under these conditions were 294.76 mL/gVS, 151.95 mL/gVS, and 70.22%, respectively, as concluded from the experimental results. In addition, the artificial neural network (ANN) technique was implemented to develop an ANN model for predicting BGP yield and BMP content. The Levenberg-Marquardt algorithm was utilized to train the ANN, and an architecture of 9-19-2 was obtained for the ANN model.

  4. Machine learning modelling for predicting soil liquefaction susceptibility

    Directory of Open Access Journals (Sweden)

    P. Samui

    2011-01-01

    Full Text Available This study describes two machine learning techniques applied to predict the liquefaction susceptibility of soil based on standard penetration test (SPT) data from the 1999 Chi-Chi, Taiwan earthquake. The first machine learning technique uses an Artificial Neural Network (ANN) based on multi-layer perceptrons (MLP) trained with the Levenberg-Marquardt backpropagation algorithm. The second machine learning technique uses a Support Vector Machine (SVM), firmly grounded in statistical learning theory, as a classification technique. ANN and SVM models have been developed to predict liquefaction susceptibility using the corrected SPT value [(N1)60] and the cyclic stress ratio (CSR). Further, an attempt has been made to simplify the models, requiring only two parameters [(N1)60 and peak ground acceleration (amax/g)] for the prediction of liquefaction susceptibility. The developed ANN and SVM models have also been applied to different case histories available globally. The paper also highlights the capability of the SVM over the ANN models.

  5. Day-ahead price forecasting in restructured power systems using artificial neural networks

    International Nuclear Information System (INIS)

    Vahidinasab, V.; Jadid, S.; Kazemi, A.

    2008-01-01

    Over the past 15 years most electricity supply companies around the world have been restructured from monopoly utilities to deregulated competitive electricity markets. Market participants in the restructured electricity markets find short-term electricity price forecasting (STPF) crucial in formulating their risk management strategies. They need to know future electricity prices as their profitability depends on them. This research project classifies and compares different techniques of electricity price forecasting in the literature and selects artificial neural networks (ANN) as a suitable method for price forecasting. To perform this task, market knowledge should be used to optimize the selection of input data for an electricity price forecasting tool. Sensitivity analysis is then used to aid in the selection of the optimum inputs of the ANN, and the fuzzy c-means (FCM) algorithm is used for daily load pattern clustering. Finally, an ANN with a modified Levenberg-Marquardt (LM) learning algorithm is implemented for forecasting prices in the Pennsylvania-New Jersey-Maryland (PJM) market. The forecasting results were compared with previous works and show that the results are reasonable and accurate. (author)

  6. Multi-Scale Parameter Identification of Lithium-Ion Battery Electric Models Using a PSO-LM Algorithm

    Directory of Open Access Journals (Sweden)

    Wen-Jing Shen

    2017-03-01

    Full Text Available This paper proposes a multi-scale parameter identification algorithm for the lithium-ion battery (LIB) electric model by using a combination of particle swarm optimization (PSO) and Levenberg-Marquardt (LM) algorithms. Two-dimensional Poisson equations with unknown parameters are used to describe the potential and current density distribution (PDD) of the positive and negative electrodes in the LIB electric model. The model parameters are difficult to determine in simulation owing to the nonlinear complexity of the model. In the proposed identification algorithm, PSO is used for the coarse-scale parameter identification and the LM algorithm is applied for the fine-scale parameter identification. The experimental results show that the multi-scale identification not only improves the convergence rate and effectively escapes the stagnation of PSO, but also overcomes the local-minimum entrapment drawback of the LM algorithm. The terminal voltage curves from the PDD model with the identified parameter values are in good agreement with those from the experiments at different discharge/charge rates.
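
    The coarse-to-fine coupling can be sketched generically: a bare-bones PSO explores the parameter space, then LM polishes the best particle. The objective below is a toy two-parameter exponential fit, not the battery PDD model from the paper.

      import numpy as np
      from scipy.optimize import least_squares

      rng = np.random.default_rng(3)
      t = np.linspace(0.0, 1.0, 60)
      y = 2.5 * np.exp(-1.7 * t) + rng.normal(0.0, 0.01, t.size)

      def resid(p):
          return p[0] * np.exp(-p[1] * t) - y

      def cost(p):
          return np.sum(resid(p) ** 2)

      # coarse scale: minimal PSO over a box
      n, lo, hi = 30, np.array([0.0, 0.0]), np.array([10.0, 10.0])
      x = rng.uniform(lo, hi, (n, 2)); v = np.zeros((n, 2))
      pbest, pval = x.copy(), np.array([cost(p) for p in x])
      for _ in range(100):
          g = pbest[pval.argmin()]                  # global best so far
          v = 0.7 * v + 1.5 * rng.random((n, 2)) * (pbest - x) \
              + 1.5 * rng.random((n, 2)) * (g - x)
          x = np.clip(x + v, lo, hi)
          c = np.array([cost(p) for p in x])
          better = c < pval
          pbest[better], pval[better] = x[better], c[better]

      # fine scale: LM started from the PSO optimum
      fit = least_squares(resid, pbest[pval.argmin()], method='lm')
      print(fit.x)                                  # ~[2.5, 1.7]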

  7. Automatic Parking Based on a Bird's Eye View Vision System

    Directory of Open Access Journals (Sweden)

    Chunxiang Wang

    2014-03-01

    Full Text Available This paper aims at realizing an automatic parking method through a bird's eye view vision system. With this method, vehicles can achieve robust and real-time detection and recognition of parking spaces. During the parking process, omnidirectional information about the environment can be obtained from four on-board fisheye cameras around the vehicle, which are the main part of the bird's eye view vision system. To achieve this purpose, a polynomial fisheye distortion model is first used for camera calibration. An image mosaicking method based on the Levenberg-Marquardt algorithm is used to combine the four individual images from the fisheye cameras into one omnidirectional bird's eye view image. Secondly, features of the parking spaces are extracted with a Radon-transform-based method. Finally, double circular trajectory planning and a preview control strategy are utilized to realize autonomous parking. Experimental analysis shows that the proposed method achieves effective and robust real-time results in both parking space recognition and automatic parking.

  8. A Feature-Free 30-Disease Pathological Brain Detection System by Linear Regression Classifier.

    Science.gov (United States)

    Chen, Yi; Shao, Ying; Yan, Jie; Yuan, Ti-Fei; Qu, Yanwen; Lee, Elizabeth; Wang, Shuihua

    2017-01-01

    Alzheimer's disease patients are increasing rapidly every year. Scholars tend to use computer vision methods to develop automatic diagnosis systems. In 2015, Gorji et al. proposed a novel method using pseudo Zernike moments. They tested four classifiers: a learning vector quantization neural network and pattern recognition neural networks trained by Levenberg-Marquardt, by resilient backpropagation, and by scaled conjugate gradient. This study presents an improved method by introducing a relatively new classifier: linear regression classification. Our method selects one axial slice from the 3D brain image and employs pseudo Zernike moments with a maximum order of 15 to extract 256 features from each image. Finally, linear regression classification is harnessed as the classifier. The proposed approach obtains an accuracy of 97.51%, a sensitivity of 96.71%, and a specificity of 97.73%. Our method performs better than Gorji's approach and five other state-of-the-art approaches. Therefore, it can be used to detect Alzheimer's disease. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
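
    Linear regression classification itself is compact: each class is represented by the span of its training feature vectors, and a query is assigned to the class whose least-squares reconstruction leaves the smallest residual. The dimensions below stand in for the 256 pseudo Zernike features; the data are synthetic.

      import numpy as np

      def lrc_predict(x, class_mats):
          errs = []
          for A in class_mats:                      # A: d x n_i samples of one class
              beta, *_ = np.linalg.lstsq(A, x, rcond=None)
              errs.append(np.linalg.norm(x - A @ beta))
          return int(np.argmin(errs))               # class with smallest residual

      rng = np.random.default_rng(9)
      d = 256
      sub0, sub1 = rng.normal(size=(d, 3)), rng.normal(size=(d, 3))
      class_mats = [sub0 @ rng.normal(size=(3, 10)), sub1 @ rng.normal(size=(3, 10))]
      query = sub1 @ rng.normal(size=3)             # drawn from class 1's subspace
      print(lrc_predict(query, class_mats))         # -> 1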

  9. Computer-assisted detection of colonic polyps with CT colonography using neural networks and binary classification trees

    International Nuclear Information System (INIS)

    Jerebko, Anna K.; Summers, Ronald M.; Malley, James D.; Franaszek, Marek; Johnson, C. Daniel

    2003-01-01

    Detection of colonic polyps in CT colonography is problematic due to the complexities of polyp shape and the surface of the normal colon. Published results indicate the feasibility of computer-aided detection of polyps, but better classifiers are needed to improve specificity. In this paper we compare the classification results of two approaches: neural networks and recursive binary trees. As our starting point we collect surface geometry information from a three-dimensional reconstruction of the colon, followed by a filter based on selected variables such as region density, Gaussian and average curvature, and sphericity. The filter returns sites that are candidate polyps, based on earlier work using detection thresholds, to which the neural nets or the binary trees are applied. A data set of 39 polyps from 3 to 25 mm in size was used in our investigation. For both the neural net and the binary trees we use tenfold cross-validation to better estimate the true error rates. The backpropagation neural net with one hidden layer trained with the Levenberg-Marquardt algorithm achieved the best results: sensitivity 90% and specificity 95%, with 16 false positives per study.

  10. Reactive decontamination of absorbing thin film polymer coatings: model development and parameter determination

    Science.gov (United States)

    Varady, Mark; Mantooth, Brent; Pearl, Thomas; Willis, Matthew

    2014-03-01

    A continuum model of reactive decontamination in absorbing polymeric thin film substrates exposed to the chemical warfare agent O-ethyl S-[2-(diisopropylamino)ethyl] methylphosphonothioate (known as VX) was developed to assess the performance of various decontaminants. Experiments were performed in conjunction with an inverse analysis method to obtain the necessary model parameters. The experiments involved contaminating a substrate with a fixed VX exposure, applying a decontaminant, followed by a time-resolved, liquid phase extraction of the absorbing substrate to measure the residual contaminant by chromatography. Decontamination model parameters were uniquely determined using the Levenberg-Marquardt nonlinear least squares fitting technique to best fit the experimental time evolution of extracted mass. The model was implemented numerically in both a 2D axisymmetric finite element program and a 1D finite difference code, and it was found that the more computationally efficient 1D implementation was sufficiently accurate. The resulting decontamination model provides an accurate quantification of contaminant concentration profile in the material, which is necessary to assess exposure hazards.

  11. Satellite single-axis attitude determination based on Automatic Dependent Surveillance - Broadcast signals

    Science.gov (United States)

    Zhou, Kaixing; Sun, Xiucong; Huang, Hai; Wang, Xinsheng; Ren, Guangwei

    2017-10-01

    The space-based Automatic Dependent Surveillance - Broadcast (ADS-B) is a new technology for air traffic management. A satellite equipped with a spaceborne ADS-B system receives the broadcast signals from aircraft and transfers the messages to ground stations, so as to extend the coverage area of terrestrial-based ADS-B. In this work, a novel satellite single-axis attitude determination solution based on the ADS-B receiving system is proposed. This solution utilizes the signal-to-noise ratio (SNR) measurements of the broadcast signals from aircraft to determine the boresight orientation of the ADS-B receiving antenna fixed on the satellite. The basic principle of this solution is described. A feasibility study of this new attitude determination solution is carried out, including the link budget and access analysis. On this basis, nonlinear least squares estimation based on the Levenberg-Marquardt method is applied to estimate the single-axis orientation. A full digital simulation has been carried out to verify the effectiveness and performance of this solution. Finally, the corresponding results are processed and presented in detail.
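
    The estimation step can be pictured as fitting a unit boresight vector so that a gain model reproduces the SNR measured toward each aircraft. The cosine-power gain law, geometry and noise below are invented assumptions, not the paper's link model.

      import numpy as np
      from scipy.optimize import least_squares

      def unit(az, el):
          return np.array([np.cos(el) * np.cos(az), np.cos(el) * np.sin(az), np.sin(el)])

      rng = np.random.default_rng(5)
      true_b = unit(0.4, -0.3)
      dirs = rng.normal(size=(80, 3))
      dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # aircraft directions
      snr = 20 * np.clip(dirs @ true_b, 0, 1) ** 2 + rng.normal(0, 0.5, 80)

      def resid(p):
          return 20 * np.clip(dirs @ unit(*p), 0, 1) ** 2 - snr

      sol = least_squares(resid, x0=[0.0, 0.0], method='lm')  # fit azimuth/elevation
      print(unit(*sol.x), true_b)                   # estimated vs true boresight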

  12. Optimality in Microwave-Assisted Drying of Aloe Vera ( Aloe barbadensis Miller) Gel using Response Surface Methodology and Artificial Neural Network Modeling

    Science.gov (United States)

    Das, Chandan; Das, Arijit; Kumar Golder, Animes

    2016-10-01

    The present work illustrates the Microwave-Assisted Drying (MWAD) characteristics of aloe vera gel combined with process optimization and artificial neural network modeling. The influence of microwave power (160-480 W), gel quantity (4-8 g) and drying time (1-9 min) on the moisture ratio was investigated. The drying of aloe gel exhibited typical diffusion-controlled characteristics with a predominant interaction between input power and drying time. A falling-rate period was observed for the entire MWAD of aloe gel. A face-centered central composite design (FCCD) was used to develop a regression model to evaluate the effects of the process variables on the moisture ratio. The optimal MWAD conditions were established as a microwave power of 227.9 W, sample amount of 4.47 g and drying time of 5.78 min, corresponding to a moisture ratio of 0.15. A computer-simulated Artificial Neural Network (ANN) model was generated for mapping between the process variables and the desired response. A 'Levenberg-Marquardt Back Propagation' algorithm with a 3-5-1 architecture gave the best prediction, and it showed a clear superiority over the FCCD.

  13. An Improved Calibration Method for a Rotating 2D LIDAR System.

    Science.gov (United States)

    Zeng, Yadan; Yu, Heng; Dai, Houde; Song, Shuang; Lin, Mingqiang; Sun, Bo; Jiang, Wei; Meng, Max Q-H

    2018-02-07

    This paper presents an improved calibration method for a rotating two-dimensional light detection and ranging (R2D-LIDAR) system, which can obtain a 3D scanning map of the surroundings. The proposed R2D-LIDAR system, composed of a 2D LIDAR and a rotating unit, is pervasively used in the field of robotics owing to its low cost and dense scanning data. Nevertheless, the R2D-LIDAR system must be calibrated before building the geometric model because there are assembly deviations and abrasion between the 2D LIDAR and the rotating unit. Hence, the calibration procedure should address both the alignment between the two devices and the bias of the 2D LIDAR itself. The main purpose of this work is to resolve the 2D LIDAR bias issue with a flat plane based on the Levenberg-Marquardt (LM) algorithm. Experimental results for the calibration of the R2D-LIDAR system prove the reliability of this strategy to accurately estimate sensor offsets, with errors in the range of -15 mm to 15 mm when capturing scans.
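
    The plane-based bias idea can be sketched in 2D: points of a flat wall are reconstructed from ranges that carry an unknown additive offset, and LM jointly fits the plane and the offset so that point-to-plane distances vanish. Geometry and noise levels are invented for illustration.

      import numpy as np
      from scipy.optimize import least_squares

      rng = np.random.default_rng(2)
      beams = np.linspace(-1.0, 1.0, 180)            # beam angles (rad)
      r_meas = 2.0 / np.cos(beams) + 0.012 + rng.normal(0.0, 0.002, beams.size)

      def resid(p):
          phi, d, bias = p                           # plane normal angle, offset, range bias
          n = np.array([np.cos(phi), np.sin(phi)])
          pts = np.stack([(r_meas - bias) * np.cos(beams),
                          (r_meas - bias) * np.sin(beams)], axis=1)
          return pts @ n - d                         # signed point-to-plane distances

      sol = least_squares(resid, x0=[0.1, 1.5, 0.0], method='lm')
      print('estimated bias [m]:', sol.x[2])         # ~0.012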

  14. A Study on the Leakage Characteristic Evaluation of High Temperature and Pressure Pipeline at Nuclear Power Plants Using the Acoustic Emission Technique

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Young Hoon; Kim, Jin Hyun; Song, Bong Min; Lee, Joon Hyun; Cho, Youn Ho [Pusan National University, Busan (Korea, Republic of)

    2009-10-15

    An acoustic leak monitoring system (ALMS) using the acoustic emission (AE) technique was applied for leakage detection in a nuclear power plant pipeline that is operated under high temperature and pressure conditions. Since this system only monitors the existence of a leak using the root mean square (RMS) value of the raw signal from the AE sensor, difficulty arises when the characteristics of the leak size and shape need to be evaluated. In this study, a dual monitoring system using an AE sensor and an accelerometer was introduced in order to solve this problem. In addition, an artificial neural network (ANN) with the Levenberg-Marquardt (LM) training algorithm was applied owing to its rapid training rate, and it gave reliable classification performance. The input parameters of this ANN were extracted from signals received under varying experimental conditions, such as the fluid pressure inside the pipe and the shape and size of the leak area. Additional experiments were also carried out with a different objective: to study the generation and characteristics of Lamb and surface waves as a function of pipe thickness.

  15. A Study on the Leakage Characteristic Evaluation of High Temperature and Pressure Pipeline at Nuclear Power Plants Using the Acoustic Emission Technique

    International Nuclear Information System (INIS)

    Kim, Young Hoon; Kim, Jin Hyun; Song, Bong Min; Lee, Joon Hyun; Cho, Youn Ho

    2009-01-01

    An acoustic leak monitoring system (ALMS) using the acoustic emission (AE) technique was applied for leakage detection in a nuclear power plant pipeline that is operated under high temperature and pressure conditions. Since this system only monitors the existence of a leak using the root mean square (RMS) value of the raw signal from the AE sensor, difficulty arises when the characteristics of the leak size and shape need to be evaluated. In this study, a dual monitoring system using an AE sensor and an accelerometer was introduced in order to solve this problem. In addition, an artificial neural network (ANN) with the Levenberg-Marquardt (LM) training algorithm was applied owing to its rapid training rate, and it gave reliable classification performance. The input parameters of this ANN were extracted from signals received under varying experimental conditions, such as the fluid pressure inside the pipe and the shape and size of the leak area. Additional experiments were also carried out with a different objective: to study the generation and characteristics of Lamb and surface waves as a function of pipe thickness.

  16. Heart murmur detection based on wavelet transformation and a synergy between artificial neural network and modified neighbor annealing methods.

    Science.gov (United States)

    Eslamizadeh, Gholamhossein; Barati, Ramin

    2017-05-01

    Early recognition of heart disease plays a vital role in saving lives. Heart murmurs are one of the common heart problems. In this study, an Artificial Neural Network (ANN) is trained with Modified Neighbor Annealing (MNA) to classify heart cycles into normal and murmur classes. Heart cycles are separated from heart sounds using a wavelet transformer. The network inputs are features extracted from individual heart cycles, and the network has two classification outputs. The classification accuracy of the proposed model is compared with five multilayer perceptrons trained with the Levenberg-Marquardt, extreme-learning-machine, back-propagation, simulated-annealing, and neighbor-annealing algorithms. It is also compared with a Self-Organizing Map (SOM) ANN. The proposed model is trained and tested using real heart sounds available in the Pascal database to show the applicability of the proposed scheme. Also, a device to record real heart sounds has been developed and used for comparison purposes too. Based on the results of this study, MNA can be used to produce considerable results as a heart cycle classifier. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Simulation of CO2 Solubility in Polystyrene-b-Polybutadiene-b-Polystyrene (SEBS) by artificial intelligence network (ANN) method

    Science.gov (United States)

    Sharudin, R. W.; AbdulBari Ali, S.; Zulkarnain, M.; Shukri, M. A.

    2018-05-01

    This study reports on the integration of Artificial Neural Networks (ANNs) with experimental data to predict the solubility of the carbon dioxide (CO2) blowing agent in SEBS by generating the highest possible value of the regression coefficient (R2). Basically, the foaming of a thermoplastic elastomer with CO2 is highly affected by the CO2 solubility. The ability of the ANN to predict interpolated CO2 solubility data was investigated by comparing the training results of different network training methods. Regarding the final prediction of CO2 solubility by the ANN, the predicted trend (generated output) was corroborated with the experimental results. The results of the different training methods showed that Gradient Descent with Momentum & Adaptive LR (traingdx) required a longer training time and more accurate input to produce better output, with a final regression value of 0.88, whereas the Levenberg-Marquardt (trainlm) technique produced better output in a shorter training time, with a final regression value of 0.91.

  18. Nonlinear estimation of ring-down time for a Fabry-Perot optical cavity.

    Science.gov (United States)

    Kallapur, Abhijit G; Boyson, Toby K; Petersen, Ian R; Harb, Charles C

    2011-03-28

    This paper discusses the application of a discrete-time extended Kalman filter (EKF) to the problem of estimating the decay time constant of a Fabry-Perot optical cavity for cavity ring-down spectroscopy (CRDS). The data for the estimation process are obtained from a CRDS experimental setup as the light intensity at the output of the cavity. The cavity is held in lock with the input laser frequency by controlling the distance between the mirrors within the cavity by means of a proportional-integral (PI) controller. The cavity is purged with nitrogen and placed under vacuum before chopping the incident light at 25 kHz and recording the light intensity at its output. Despite beginning the EKF estimation process with an uncertain initial value for the decay time constant, its estimates converge to a small neighborhood of the expected value within a few ring-down cycles. The EKF estimates of the decay time constant are also compared to those obtained using the Levenberg-Marquardt estimation scheme.
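
    A discrete-time EKF for this problem can track the state (intensity, decay time): the model propagates the intensity by exp(-dt/tau) and the measurement is the intensity itself. All noise covariances and the deliberately wrong initial tau below are illustrative tuning choices, not the experimental values.

      import numpy as np

      dt, tau_true = 1e-6, 20e-6
      t = np.arange(0.0, 200e-6, dt)
      rng = np.random.default_rng(4)
      y = np.exp(-t / tau_true) + rng.normal(0.0, 0.01, t.size)   # measured intensity

      x = np.array([1.0, 10e-6])            # initial guess: I0 = 1, tau off by 2x
      P = np.diag([0.1, (10e-6) ** 2])
      Q = np.diag([1e-10, 1e-16])           # small process noise keeps the filter adaptive
      R = 0.01 ** 2
      H = np.array([1.0, 0.0])              # we observe the intensity only

      for yk in y:
          I, tau = x
          a = np.exp(-dt / tau)
          F = np.array([[a, I * a * dt / tau ** 2],   # Jacobian of f(I, tau) = I*exp(-dt/tau)
                        [0.0, 1.0]])
          x = np.array([I * a, tau])                  # predict
          P = F @ P @ F.T + Q
          S = H @ P @ H + R                           # update
          K = P @ H / S
          x = x + K * (yk - x[0])
          P = (np.eye(2) - np.outer(K, H)) @ P

      print('estimated tau:', x[1])          # should settle near tau_true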

  19. Iterative CT shading correction with no prior information

    Science.gov (United States)

    Wu, Pengwei; Sun, Xiaonan; Hu, Hongjie; Mao, Tingyu; Zhao, Wei; Sheng, Ke; Cheung, Alice A.; Niu, Tianye

    2015-11-01

    Shading artifacts in CT images are caused by scatter contamination, the beam-hardening effect and other non-ideal imaging conditions. The purpose of this study is to propose a novel and general correction framework to eliminate low-frequency shading artifacts in CT images (e.g. cone-beam CT, low-kVp CT) without relying on prior information. The method is based on the general knowledge that the CT number distribution within one tissue component is relatively uniform. The CT image is first segmented to construct a template image in which each structure is filled with the single CT number of a specific tissue type. Then, by subtracting the ideal template from the CT image, a residual image containing the various error sources is generated. Since forward projection is an integration process, non-continuous shading artifacts in the image become continuous signals in a line integral. Thus, the residual image is forward projected and its line integral is low-pass filtered in order to estimate the error that causes the shading artifacts. A compensation map is reconstructed from the filtered line-integral error using a standard FDK algorithm and added back to the original image for shading correction. As the segmented image does not accurately depict a shaded CT image, the proposed scheme is iterated until the variation of the residual image is minimized. The proposed method is evaluated using cone-beam CT images of a Catphan©600 phantom and a pelvis patient, and low-kVp CT angiography images for carotid artery assessment. Compared with the CT image without correction, the proposed method reduces the overall CT number error from over 200 HU to less than 30 HU and increases the spatial uniformity by a factor of 1.5. Low-contrast objects are faithfully retained after the proposed correction. An effective iterative algorithm for shading correction in CT imaging is proposed that is only assisted by general anatomical information, without relying on prior knowledge. The proposed method is thus practical.
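
    The correction loop translates into a few tomographic operations per iteration. The 2D toy below mirrors its structure, with skimage's radon/iradon standing in for the forward projector and the FDK reconstruction; the two-class threshold segmentation, filter width and synthetic shading are all illustrative simplifications, not the paper's implementation.

      import numpy as np
      from scipy.ndimage import gaussian_filter1d
      from skimage.data import shepp_logan_phantom
      from skimage.transform import iradon, radon

      def correct_shading(img, theta, mask, n_iter=5):
          corrected = img.copy()
          for _ in range(n_iter):
              # 1. template: fill each crude tissue class with its mean value
              m = corrected > corrected[mask].mean()
              template = np.where(m, corrected[m].mean(), corrected[~m].mean())
              # 2. forward project the residual image
              sino = radon((corrected - template) * mask, theta=theta, circle=True)
              # 3. shading is low frequency: low-pass along the detector axis
              sino_lp = gaussian_filter1d(sino, sigma=10, axis=0)
              # 4. reconstruct a compensation map and remove it
              corrected = corrected - iradon(sino_lp, theta=theta, circle=True)
          return corrected

      theta = np.linspace(0.0, 180.0, 180, endpoint=False)
      img = shepp_logan_phantom()
      yy, xx = np.mgrid[-1:1:img.shape[0] * 1j, -1:1:img.shape[1] * 1j]
      mask = xx ** 2 + yy ** 2 <= 1.0
      shaded = img + 0.1 * xx * mask                 # synthetic low-frequency shading
      out = correct_shading(shaded, theta, mask)
      print(np.abs(shaded - img).mean(), np.abs(out - img).mean())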

  20. Do the majority of South Africans regularly consult traditional healers?

    Directory of Open Access Journals (Sweden)

    Gabriel Louw

    2016-12-01

    Full Text Available Background The statutory recognition of traditional healers as healthcare practitioners in South Africa in terms of the Traditional Health Practitioners Act 22 of 2007 is based on various assumptions, opinions and generalizations. One of the prominent views is that the majority of South Africans regularly consult traditional healers. It has even been alleged that this number can be as high as 80 per cent of the South African population. For medical doctors and other health practitioners registered with the Health Professions Council of South Africa (HPCSA), this new statutory status of traditional health practitioners means the required presence not only of a healthcare competitor that can overstock the healthcare market with service lending, medical claims and healthcare costs, but also of a competitor prone to malpractice. Aims The study aimed to determine whether the majority of South Africans regularly consult traditional healers. Methods This is an exploratory and descriptive study following the modern historical approach of investigation and literature review. The emphasis is on using current documentation, such as articles, books and newspapers, as primary sources to determine whether the majority of South Africans regularly consult traditional healers. The findings are offered in narrative form. Results It is clear that there are no trustworthy statistics on the percentage of South Africans using traditional healers. A scientific survey is needed to determine the extent to which traditional healers are consulted. This will only be possible after the Traditional Health Practitioners Act No 22 has been fully enacted and traditional health practitioners have become fully active in the healthcare sector. Conclusion In poorer, rural areas no more than 11.2 per cent of the South African population regularly consult traditional healers, while the figure for the total population seems to be no more than 1.4 per cent. The argument that the majority of South

  1. An iterative method for Tikhonov regularization with a general linear regularization operator

    NARCIS (Netherlands)

    Hochstenbach, M.E.; Reichel, L.

    2010-01-01

    Tikhonov regularization is one of the most popular approaches for solving discrete ill-posed problems with error-contaminated data. A regularization operator and a suitable value of a regularization parameter have to be chosen. This paper describes an iterative method based on Golub-Kahan bidiagonalization.
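
    For reference, the underlying Tikhonov problem in its simplest direct form is: minimize ||Ax - b||^2 + lam^2 ||Lx||^2 for a regularization operator L. The paper's contribution is an iterative Golub-Kahan-based solver for large problems, which is not reproduced here; the small deblurring setup below is invented.

      import numpy as np

      def tikhonov(A, b, lam, L):
          # normal equations of the stacked system [A; lam*L] x = [b; 0]
          return np.linalg.solve(A.T @ A + lam ** 2 * (L.T @ L), A.T @ b)

      n = 100
      t = np.linspace(0.0, 1.0, n)
      A = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 0.05) ** 2)  # smoothing kernel
      A /= A.sum(axis=1, keepdims=True)                           # ill-conditioned forward map
      x_true = ((t > 0.3) & (t < 0.7)).astype(float)
      b = A @ x_true + np.random.default_rng(0).normal(0.0, 1e-3, n)
      L = (np.eye(n) - np.eye(n, k=1))[:-1]         # discrete first-derivative operator

      for lam in (1e-6, 1e-3, 1e-1):                # too-small lam under-regularizes
          print(lam, np.linalg.norm(tikhonov(A, b, lam, L) - x_true))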

  2. Method of transferring regular shaped vessel into cell

    International Nuclear Information System (INIS)

    Murai, Tsunehiko.

    1997-01-01

    The present invention concerns a method of transferring regular-shaped vessels from a non-contaminated area into a contaminated cell. A passage hole that allows the regular-shaped vessels to pass in the longitudinal direction is formed in a partition wall at the bottom of the contaminated cell. A plurality of regular-shaped vessels are stacked in multiple stages in the vertical direction from the non-contaminated area below the passage hole and are pushed through and transferred successively into the contaminated cell. As a result, since the passage hole is kept substantially closed by the vessels during transfer, radiation and contaminated materials are prevented from escaping from the contaminated cell to the non-contaminated area. Since there is no need to open and close an isolation door frequently, transfer workability is improved remarkably. In addition, since a sealing member for sealing the gap between a vessel passing through the passage hole and the bottom partition wall is disposed in the passage hole, contaminated materials in the contaminated cell are prevented from escaping through the gap to the non-contaminated area. (N.H.)

  3. EIT image reconstruction with four dimensional regularization.

    Science.gov (United States)

    Dai, Tao; Soleimani, Manuchehr; Adler, Andy

    2008-09-01

    Electrical impedance tomography (EIT) reconstructs internal impedance images of the body from electrical measurements on the body surface. The temporal resolution of EIT data can be very high, although the spatial resolution of the images is relatively low. Most EIT reconstruction algorithms calculate images from data frames independently, although the data are actually highly correlated, especially in high-speed EIT systems. This paper proposes a 4-D EIT image reconstruction for functional EIT. The new approach is developed to directly use prior models of the temporal correlations among images and the 3-D spatial correlations among image elements. A fast algorithm is also developed to reconstruct the regularized images. Image reconstruction is posed in terms of an augmented image and measurement vector, which are concatenated from a specific number of previous and future frames. The reconstruction is then based on an augmented regularization matrix which reflects the a priori constraints on the temporal and 3-D spatial correlations of image elements. A temporal factor reflecting the relative strength of the image correlation is objectively calculated from measurement data. Results show that reconstruction models which account for inter-element correlations, in both space and time, give improved resolution and noise performance in comparison to simpler image models.

  4. Regular Expression Pocket Reference

    CERN Document Server

    Stubblebine, Tony

    2007-01-01

    This handy little book offers programmers a complete overview of the syntax and semantics of regular expressions that are at the heart of every text-processing application. Ideal as a quick reference, Regular Expression Pocket Reference covers the regular expression APIs for Perl 5.8, Ruby (including some upcoming 1.9 features), Java, PHP, .NET and C#, Python, vi, JavaScript, and the PCRE regular expression libraries. This concise and easy-to-use reference puts a very powerful tool for manipulating text and data right at your fingertips. Composed of a mixture of symbols and text, regular exp

  5. Bayesian image reconstruction in SPECT using higher order mechanical models as priors

    International Nuclear Information System (INIS)

    Lee, S.J.; Gindi, G.; Rangarajan, A.

    1995-01-01

    While the ML-EM (maximum-likelihood expectation-maximization) algorithm for reconstruction in emission tomography is unstable due to the ill-posed nature of the problem, Bayesian reconstruction methods overcome this instability by introducing prior information, often in the form of a spatial smoothness regularizer. More elaborate forms of smoothness constraints may be used to extend the role of the prior beyond that of a stabilizer in order to capture actual spatial information about the object. Previously proposed forms of such prior distributions were based on the assumption of a piecewise constant source distribution. Here, the authors propose an extension to a piecewise linear model--the weak plate--which is more expressive than the piecewise constant model. The weak plate prior not only preserves edges but also allows for piecewise ramplike regions in the reconstruction. Indeed, for the application in SPECT, such ramplike regions are observed in ground-truth source distributions in the form of primate autoradiographs of rCBF radionuclides. To incorporate the weak plate prior in a MAP approach, the authors model the prior as a Gibbs distribution and use a GEM formulation for the optimization. They compare the quantitative performance of the ML-EM algorithm, a GEM algorithm with a prior favoring piecewise constant regions, and a GEM algorithm with the weak plate prior. Pointwise and regional bias and variance of ensemble image reconstructions are used as indications of image quality. The results show that the weak plate and membrane priors exhibit improved bias and variance relative to ML-EM techniques.

  6. TU-CD-BRA-12: Coupling PET Image Restoration and Segmentation Using Variational Method with Multiple Regularizations

    Energy Technology Data Exchange (ETDEWEB)

    Li, L; Tan, S [Huazhong University of Science and Technology, Wuhan, Hubei (China); Lu, W [University of Maryland School of Medicine, Baltimore, MD (United States)

    2015-06-15

    Purpose: To propose a new variational method which couples image restoration with tumor segmentation for PET images using multiple regularizations. Methods: Partial volume effect (PVE) is a major degrading factor impacting tumor segmentation accuracy in PET imaging. Existing segmentation methods usually need prior calibrations to compensate for PVE, and they are highly system-dependent. Taking into account that image restoration and segmentation can promote each other and are tightly coupled, we proposed a variational method to solve the two problems together. Our method integrates total variation (TV) semi-blind deconvolution and Mumford-Shah (MS) segmentation. The TV norm is used on edges to protect the edge information, and the L{sub 2} norm is used to avoid the staircase effect in non-edge areas. The blur kernel is constrained to a Gaussian model parameterized by its variance, and we assume that the variances in the X-Y and Z directions are different. The energy functional is iteratively optimized by an alternate minimization algorithm. Segmentation performance was tested on eleven patients with non-Hodgkin's lymphoma and evaluated by the Dice similarity index (DSI) and classification error (CE). For comparison, seven other widely used methods were also tested and evaluated. Results: The combination of TV and L{sub 2} regularizations effectively improved the segmentation accuracy. The average DSI increased by around 0.1 compared with using either the TV or the L{sub 2} norm alone. The proposed method was clearly superior to the other tested methods. It has an average DSI and CE of 0.80 and 0.41, while the FCM method — the second best one — has only an average DSI and CE of 0.66 and 0.64. Conclusion: Coupling image restoration and segmentation can handle PVE and thus improves tumor segmentation accuracy in PET. Alternate use of TV and L{sub 2} regularizations can further improve the performance of the algorithm. This work was supported in part by National Natural

  7. The geometry of continuum regularization

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1987-03-01

    This lecture is primarily an introduction to coordinate-invariant regularization, a recent advance in the continuum regularization program. In this context, the program is seen as fundamentally geometric, with all regularization contained in regularized DeWitt superstructures on field deformations

  8. Regular expression containment

    DEFF Research Database (Denmark)

    Henglein, Fritz; Nielsen, Lasse

    2011-01-01

    We present a new sound and complete axiomatization of regular expression containment. It consists of the conventional axiomatization of concatenation, alternation, empty set and (the singleton set containing) the empty string as an idempotent semiring, the fixed-point rule E* = 1 + E × E* for Kleene-star, and a general coinduction rule as the only additional rule. Our axiomatization gives rise to a natural computational interpretation of regular expressions as simple types that represent parse trees, and of containment proofs as coercions. This gives the axiomatization a Curry-Howard-style constructive interpretation: containment proofs do not only certify a language-theoretic containment, but, under our computational interpretation, constructively transform a membership proof of a string in one regular expression into a membership proof of the same string in another regular expression.

  9. Supersymmetric dimensional regularization

    International Nuclear Information System (INIS)

    Siegel, W.; Townsend, P.K.; van Nieuwenhuizen, P.

    1980-01-01

    There is a simple modification of dimensional regularization which preserves supersymmetry: dimensional reduction to real D < 4, followed by analytic continuation to complex D. In terms of component fields, this means fixing the ranges of all indices on the fields (and therefore the numbers of Fermi and Bose components). For superfields, it means continuing in the dimensionality of x-space while fixing the dimensionality of theta-space. This regularization procedure allows the simple manipulation of spinor derivatives in supergraph calculations. The resulting rules are: (1) first do all algebra exactly as in D = 4; (2) then do the momentum integrals as in ordinary dimensional regularization. This regularization procedure needs extra rules before one can say that it is consistent. Such extra rules, needed for superconformal anomalies, are discussed. Problems associated with renormalizability and higher-order loops are also discussed.

  10. An interior-point method for total variation regularized positron emission tomography image reconstruction

    Science.gov (United States)

    Bai, Bing

    2012-03-01

    There has been a lot of work on total variation (TV) regularized tomographic image reconstruction recently. Many of them use gradient-based optimization algorithms with a differentiable approximation of the TV functional. In this paper we apply TV regularization in Positron Emission Tomography (PET) image reconstruction. We reconstruct the PET image in a Bayesian framework, using Poisson noise model and TV prior functional. The original optimization problem is transformed to an equivalent problem with inequality constraints by adding auxiliary variables. Then we use an interior point method with logarithmic barrier functions to solve the constrained optimization problem. In this method, a series of points approaching the solution from inside the feasible region are found by solving a sequence of subproblems characterized by an increasing positive parameter. We use preconditioned conjugate gradient (PCG) algorithm to solve the subproblems directly. The nonnegativity constraint is enforced by bend line search. The exact expression of the TV functional is used in our calculations. Simulation results show that the algorithm converges fast and the convergence is insensitive to the values of the regularization and reconstruction parameters.

  11. Structure-Based Low-Rank Model With Graph Nuclear Norm Regularization for Noise Removal.

    Science.gov (United States)

    Ge, Qi; Jing, Xiao-Yuan; Wu, Fei; Wei, Zhi-Hui; Xiao, Liang; Shao, Wen-Ze; Yue, Dong; Li, Hai-Bo

    2017-07-01

    Nonlocal image representation methods, including group-based sparse coding and block-matching 3-D filtering, have shown great performance in low-level vision tasks. The nonlocal prior is extracted from each group consisting of patches with similar intensities. Grouping patches based on intensity similarity, however, gives rise to disturbance and inaccuracy in the estimation of the true images. To address this problem, we propose a structure-based low-rank model with graph nuclear norm regularization. We exploit the local manifold structure inside a patch and group the patches by the distance metric of manifold structure. With the manifold structure information, a graph nuclear norm regularization is established and incorporated into a low-rank approximation model. We then prove that the graph-based regularization is equivalent to a weighted nuclear norm and that the proposed model can be solved by a weighted singular-value thresholding algorithm. Extensive experiments on additive white Gaussian noise removal and mixed noise removal demonstrate that the proposed method achieves better performance than several state-of-the-art algorithms.
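
    The solver's core step, weighted singular-value thresholding, shrinks each singular value of a patch group by its own weight. The weights below are made up for illustration; the paper derives them from its graph regularization.

      import numpy as np

      def weighted_svt(Y, w):
          U, s, Vt = np.linalg.svd(Y, full_matrices=False)
          return (U * np.maximum(s - w, 0.0)) @ Vt   # soft-threshold each sigma_i

      Y = np.random.default_rng(8).normal(size=(32, 32))
      s = np.linalg.svd(Y, compute_uv=False)
      w = 2.0 / (s + 1e-3)                           # shrink small singular values harder
      X = weighted_svt(Y, w)
      print(np.linalg.matrix_rank(X, tol=1e-8))      # rank drops after thresholding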

  12. Implementation of a computationally efficient least-squares algorithm for highly under-determined three-dimensional diffuse optical tomography problems.

    Science.gov (United States)

    Yalavarthy, Phaneendra K; Lynch, Daniel R; Pogue, Brian W; Dehghani, Hamid; Paulsen, Keith D

    2008-05-01

    Three-dimensional (3D) diffuse optical tomography is known to be a nonlinear, ill-posed and sometimes under-determined problem, where regularization is added to the minimization to allow convergence to a unique solution. In this work, a generalized least-squares (GLS) minimization method was implemented, which employs weight matrices for both the data-model misfit and the optical properties to include their variances and covariances, using a computationally efficient scheme. This allows inversion of a matrix whose dimension is dictated by the number of measurements instead of by the number of imaging parameters, which increases the computation speed up to four times per iteration in most under-determined 3D imaging problems. An analytic derivation, using the Sherman-Morrison-Woodbury identity, is shown for this efficient alternative form, and it is proven to be equivalent not only analytically but also numerically. Equivalent alternative forms for other minimization methods, such as Levenberg-Marquardt (LM) and Tikhonov, are also derived. Three-dimensional reconstruction results indicate that the poor recovery of quantitatively accurate values in 3D optical images can also be a characteristic of the reconstruction algorithm, along with the target size. Interestingly, usage of GLS reconstruction methods reduces error in the periphery of the image, as expected, and improves by 20% the ability to quantify local interior regions in terms of the recovered optical contrast, as compared to LM methods. Characterization of detector photomultiplier tube noise has enabled the use of the GLS method for reconstructing experimental data and showed promise for better quantification of targets in 3D optical imaging. Use of these new alternative forms becomes effective when the number of imaging property parameters exceeds the number of measurements by a factor greater than 2.
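
    The measurement-space form can be checked numerically: by the Sherman-Morrison-Woodbury (push-through) identity, (J^T W J + L)^(-1) J^T W equals L^(-1) J^T (J L^(-1) J^T + W^(-1))^(-1), so only a matrix of the size of the measurement count must be inverted. Matrix sizes and weights below are toy values.

      import numpy as np

      rng = np.random.default_rng(6)
      m, n = 40, 400                            # measurements << parameters
      J = rng.normal(size=(m, n))               # Jacobian
      W = np.diag(rng.uniform(0.5, 2.0, m))     # data-weight matrix
      L = np.diag(rng.uniform(0.5, 2.0, n))     # parameter-weight (regularization) matrix
      r = rng.normal(size=m)                    # data-model misfit

      dx_primal = np.linalg.solve(J.T @ W @ J + L, J.T @ W @ r)          # n x n solve
      Li, Wi = np.diag(1.0 / np.diag(L)), np.diag(1.0 / np.diag(W))
      dx_dual = Li @ J.T @ np.linalg.solve(J @ Li @ J.T + Wi, r)         # m x m solve
      print(np.allclose(dx_primal, dx_dual))    # True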

  13. Soil hydraulic material properties and layered architecture from time-lapse GPR

    Science.gov (United States)

    Jaumann, Stefan; Roth, Kurt

    2018-04-01

    Quantitative knowledge of the subsurface material distribution and its effective soil hydraulic material properties is essential to predict soil water movement. Ground-penetrating radar (GPR) is a noninvasive and nondestructive geophysical measurement method that is suitable to monitor hydraulic processes. Previous studies showed that the GPR signal from a fluctuating groundwater table is sensitive to the soil water characteristic and the hydraulic conductivity function. In this work, we show that the GPR signal originating from both the subsurface architecture and the fluctuating groundwater table is suitable to estimate, with inversion methods, the position of layers within the subsurface architecture together with the associated effective soil hydraulic material properties. To that end, we parameterize the subsurface architecture, solve the Richards equation, convert the resulting water content to relative permittivity with the complex refractive index model (CRIM), and solve Maxwell's equations numerically. In order to analyze the GPR signal, we implemented a new heuristic algorithm that detects relevant signals in the radargram (events) and extracts the corresponding signal travel time and amplitude. This algorithm is applied to simulated as well as measured radargrams, and the detected events are associated automatically. Using events instead of the full wave regularizes the inversion, focusing it on the relevant measurement signal. For optimization, we use a global-local approach with preconditioning. Starting from an ensemble of initial parameter sets drawn with a Latin hypercube algorithm, we sequentially couple a simulated annealing algorithm with a Levenberg-Marquardt algorithm. The method is applied to synthetic as well as measured data from the ASSESS test site. We show that the method yields reasonable estimates for the position of the layers as well as for the soil hydraulic material properties by comparing the results to references derived from ground truth

  14. Annotation of Regular Polysemy

    DEFF Research Database (Denmark)

    Martinez Alonso, Hector

    Regular polysemy has received a lot of attention from the theory of lexical semantics and from computational linguistics. However, there is no consensus on how to represent the sense of underspecified examples at the token level, namely when annotating or disambiguating senses of metonymic words... and metonymic. We have conducted an analysis in English, Danish and Spanish. Later on, we have tried to replicate the human judgments by means of unsupervised and semi-supervised sense prediction. The automatic sense-prediction systems have been unable to find empirical evidence for the underspecified sense, even...

  15. Regularization by External Variables

    DEFF Research Database (Denmark)

    Bossolini, Elena; Edwards, R.; Glendinning, P. A.

    2016-01-01

    Regularization was a big topic at the 2016 CRM Intensive Research Program on Advances in Nonsmooth Dynamics. There are many open questions concerning well known kinds of regularization (e.g., by smoothing or hysteresis). Here, we propose a framework for an alternative and important kind of regularization...

  16. Age-related patterns of drug use initiation among polydrug using regular psychostimulant users.

    Science.gov (United States)

    Darke, Shane; Kaye, Sharlene; Torok, Michelle

    2012-09-01

    To determine age-related patterns of drug use initiation, drug sequencing and treatment entry among regular psychostimulant users. Cross-sectional study of 269 regular psychostimulant users, administered a structured interview examining onset of use for the major licit and illicit drugs. The mean age at first intoxication was not associated with age or gender. In contrast, younger age was associated with earlier ages of onset for all of the illicit drug classes. Each additional year of age was associated with a 4-month increase in onset age for methamphetamine, and 3 months for heroin. By the age of 17, those born prior to 1961 had, on average, used only tobacco and alcohol, whereas those born between 1986 and 1990 had used nine different drug classes. The period between initial use and the transition to regular use, however, was stable. Age was also negatively correlated with both age at initial injection and age at regular injecting. Onset sequences, however, remained stable. Consistent with the age-related patterns of drug use, each additional year of age was associated with a 0.47-year increase in the age at first treatment. While the age at first intoxication appeared stable, the trajectory through illicit drug use was substantially truncated. The data indicate that, at least among those who progress to regular illicit drug use, younger users are likely to be exposed to far broader polydrug use in their teens than has previously been the case. © 2012 Australasian Professional Society on Alcohol and other Drugs.

  17. Contour Propagation With Riemannian Elasticity Regularization

    DEFF Research Database (Denmark)

    Bjerre, Troels; Hansen, Mads Fogtmann; Sapru, W.

    2011-01-01

    Purpose/Objective(s): Adaptive techniques allow for the correction of spatial changes during the time course of fractionated radiotherapy. Spatial changes include tumor shrinkage and weight loss, causing tissue deformation and residual positional errors even after translational and rotational image... the planning CT onto the rescans and correcting to reflect actual anatomical changes. For deformable registration, a free-form, multi-level, B-spline deformation model with Riemannian elasticity, penalizing non-rigid local deformations and volumetric changes, was used. Regularization parameters were defined... on the original delineation and tissue deformation in the time course between scans form a better starting point than rigid propagation. There was no significant difference between locally and globally defined regularization. The method used in the present study suggests that deformed contours need to be reviewed...

  18. Regular Single Valued Neutrosophic Hypergraphs

    Directory of Open Access Journals (Sweden)

    Muhammad Aslam Malik

    2016-12-01

    Full Text Available In this paper, we define the regular and totally regular single valued neutrosophic hypergraphs, and discuss the order and size along with properties of regular and totally regular single valued neutrosophic hypergraphs. We also extend work on completeness of single valued neutrosophic hypergraphs.

  19. The persistence of the attentional bias to regularities in a changing environment.

    Science.gov (United States)

    Yu, Ru Qi; Zhao, Jiaying

    2015-10-01

    The environment often is stable, but some aspects may change over time. The challenge for the visual system is to discover and flexibly adapt to the changes. We examined how attention is shifted in the presence of changes in the underlying structure of the environment. In six experiments, observers viewed four simultaneous streams of objects while performing a visual search task. In the first half of each experiment, the stream in the structured location contained regularities, the shapes in the random location were randomized, and gray squares appeared in two neutral locations. In the second half, the stream in the structured or the random location may change. In the first half of all experiments, visual search was facilitated in the structured location, suggesting that attention was consistently biased toward regularities. In the second half, this bias persisted in the structured location when no change occurred (Experiment 1), when the regularities were removed (Experiment 2), or when new regularities embedded in the original or novel stimuli emerged in the previously random location (Experiments 3 and 6). However, visual search was numerically but no longer reliably faster in the structured location when the initial regularities were removed and new regularities were introduced in the previously random location (Experiment 4), or when novel random stimuli appeared in the random location (Experiment 5). This suggests that the attentional bias was weakened. Overall, the results demonstrate that the attentional bias to regularities was persistent but also sensitive to changes in the environment.

  20. Do specialist self-referral insurance policies improve access to HIV-experienced physicians as a regular source of care?

    Science.gov (United States)

    Heslin, Kevin C; Andersen, Ronald M; Ettner, Susan L; Kominski, Gerald F; Belin, Thomas R; Morgenstern, Hal; Cunningham, William E

    2005-10-01

    Health insurance policies that require prior authorization for specialty care may be detrimental to persons with HIV, according to evidence that having a regular physician with HIV expertise leads to improved patient outcomes. The objective of this study is to determine whether HIV patients who can self-refer to specialists are more likely to have physicians who mainly treat HIV. The authors analyze cross-sectional survey data from the HIV Costs and Services Utilization Study. At baseline, 67 percent of patients had insurance that permitted self-referral. In multivariate analyses, being able to self-refer was associated with an 8-12 percent increased likelihood of having a physician at a regular source of care that mainly treats patients with HIV. Patients who can self-refer are more likely to have HIV-experienced physicians than are patients who need prior authorization. Insurance policies allowing self-referral to specialists may result in HIV patients seeing physicians with clinical expertise relevant to HIV care.

  1. Background field removal technique using regularization enabled sophisticated harmonic artifact reduction for phase data with varying kernel sizes.

    Science.gov (United States)

    Kan, Hirohito; Kasai, Harumasa; Arai, Nobuyuki; Kunitomo, Hiroshi; Hirose, Yasujiro; Shibamoto, Yuta

    2016-09-01

    An effective background field removal technique is desired for more accurate quantitative susceptibility mapping (QSM) prior to dipole inversion. The aim of this study was to evaluate the accuracy of the regularization-enabled sophisticated harmonic artifact reduction for phase data with varying spherical kernel sizes (REV-SHARP) method using a three-dimensional head phantom and human brain data. The proposed REV-SHARP method used the spherical mean value operation and Tikhonov regularization in the deconvolution process, with kernel sizes varying from 2 to 14 mm. The kernel sizes were gradually reduced, similar to the SHARP with varying spherical kernel (VSHARP) method. We determined the relative errors and the relationships between the true local field and the estimated local field for REV-SHARP, VSHARP, projection onto dipole fields (PDF), and regularization-enabled SHARP (RESHARP). A human experiment was also conducted using REV-SHARP, VSHARP, PDF, and RESHARP. The relative errors in the numerical phantom study were 0.386, 0.448, 0.838, and 0.452 for REV-SHARP, VSHARP, PDF, and RESHARP, respectively. The REV-SHARP result exhibited the highest correlation between the true local field and the estimated local field. The linear regression slopes were 1.005, 1.124, 0.988, and 0.536 for REV-SHARP, VSHARP, PDF, and RESHARP in regions of interest on the three-dimensional head phantom. In the human experiment, no obvious errors due to artifacts were present with REV-SHARP. The proposed REV-SHARP is a new method combining a variable spherical kernel size with Tikhonov regularization. This technique might enable more accurate background field removal and help achieve better QSM accuracy. Copyright © 2016 Elsevier Inc. All rights reserved.

  2. Determining the boundary of inclusions with known conductivities using a Levenberg–Marquardt algorithm by electrical resistance tomography

    International Nuclear Information System (INIS)

    Tan, Chao; Xu, Yaoyuan; Dong, Feng

    2011-01-01

    Electrical resistance tomography (ERT) is a non-intrusive technique to image the electrical conductivity distribution of a closed vessel by injecting exciting current into the vessel and measuring the boundary voltages induced. ERT image reconstruction is characterized as a severely nonlinear and ill-posed inverse problem with many unknowns. In recent years, a growing number of papers have been published which aim to determine the locations and shapes of inclusions by assuming that their conductivities are piecewise constant and isotropic. In this work, the boundary of inclusions is reconstructed by ERT with a boundary element method. The Jacobian matrix of the forward problem is first calculated with a direct linearization method based on the boundary element method, and validated through comparison with that determined by the finite element method and an analytical method. A boundary reconstruction algorithm is then presented based on the Levenberg–Marquardt (L-M) method. Several numerical simulations and static experiments were conducted to study the reconstruction quality. Much importance was given to the smoothness of boundaries in the reconstruction; thus, a restriction on the curve radius is introduced to adjust the damping parameter of the L-M algorithm. Analytical results on the stability and precision of the boundary reconstruction demonstrate that stable reconstruction can be achieved when the conductivity of the objects differs considerably from that of the background medium, and that convex boundaries can be precisely reconstructed. In contrast, reconstructions of inclusions with conductivities similar to the background medium are not stable. The situation of an incorrect initial estimate of the number of inclusions is studied numerically, and the results show that the boundary of inclusions can still be correctly reconstructed with a splitting/merging function under the aforementioned proper operating conditions of the present algorithm.
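
    The damping parameter mentioned above is the central knob of the L-M iteration: the damped normal equations (JᵀJ + μI)d = -Jᵀr are solved, and μ is increased when a step fails and relaxed when it succeeds (the curve-radius restriction is a problem-specific way of adjusting it). A generic sketch of that loop on a toy exponential fit, not the ERT code:

    # Generic Levenberg-Marquardt iteration with accept/reject damping schedule
    import numpy as np

    def lm(residual, jacobian, p, mu=1e-2, iters=50):
        r = residual(p)
        for _ in range(iters):
            J = jacobian(p)
            d = np.linalg.solve(J.T @ J + mu * np.eye(p.size), -J.T @ r)
            r_new = residual(p + d)
            if r_new @ r_new < r @ r:        # step accepted: relax damping
                p, r, mu = p + d, r_new, mu * 0.5
            else:                            # step rejected: increase damping
                mu *= 2.0
        return p

    # Toy example: fit y = a * exp(b * t)
    t = np.linspace(0, 1, 30)
    y = 2.0 * np.exp(-1.5 * t)
    res = lambda p: p[0] * np.exp(p[1] * t) - y
    jac = lambda p: np.column_stack([np.exp(p[1] * t),
                                     p[0] * t * np.exp(p[1] * t)])
    print(lm(res, jac, np.array([1.0, 0.0])))   # converges to ~ [2.0, -1.5]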

  3. On a correspondence between regular and non-regular operator monotone functions

    DEFF Research Database (Denmark)

    Gibilisco, P.; Hansen, Frank; Isola, T.

    2009-01-01

    We prove the existence of a bijection between the regular and the non-regular operator monotone functions satisfying a certain functional equation. As an application we give a new proof of the operator monotonicity of certain functions related to the Wigner-Yanase-Dyson skew information....

  4. Influence of angiographic collateral circulation on myocardial perfusion in patients with chronic total occlusion of a single coronary artery and no prior myocardial infarction.

    Science.gov (United States)

    Aboul-Enein, Fatma; Kar, Saibal; Hayes, Sean W; Sciammarella, Maria; Abidov, Aiden; Makkar, Raj; Friedman, John D; Eigler, Neal; Berman, Daniel S

    2004-06-01

    The functional role of various angiographic grades for coronary collaterals remains controversial. The aim of this study was to assess the influence of the Rentrop angiographic grading of coronary collaterals on myocardial perfusion in patients with single-vessel chronic total occlusion (CTO) and no prior myocardial infarction (MI). The study included 56 patients with single-vessel CTO and no prior MI who underwent rest-stress myocardial perfusion SPECT and coronary angiography within 6 mo. All patients had angiographic evidence of coronary collaterals. Patients were divided according to the Rentrop classification: Group I had grade 1 or 2 (n = 25) and group II had grade 3 collaterals (n = 31). Group I had a higher frequency of resting regional wall motion abnormalities on left ventriculography (52.6% vs. 19.2% [P = 0.019]). The mean perfusion scores of the overall population showed severe and extensive stress perfusion defects (summed stress score of 14.1 +/- 7.1 and summed difference score of 12.9 +/- 6.9) but minimal resting perfusion defects (summed rest score of 1.0 +/- 2.7). No perfusion scores differed between the 2 groups. The perfusion findings suggested that chronic stunning rather than hibernation is the principal cause of regional wall motion abnormalities in these patients. In the setting of single-vessel CTO and no prior MI, coronary collaterals appear to protect against resting perfusion defects. Excellent angiographic collaterals may prevent resting regional wall motion abnormalities but do not appear to protect against stress-induced perfusion defects.

  5. Sets of priors reflecting prior-data conflict and agreement

    NARCIS (Netherlands)

    Walter, G.M.; Coolen, F.P.A.; Carvalho, J.P.; Lesot, M.-J.; Kaymak, U.; Vieira, S.; Bouchon-Meunier, B.; Yager, R.R.

    2016-01-01

    Bayesian inference enables combination of observations with prior knowledge in the reasoning process. The choice of a particular prior distribution to represent the available prior knowledge is, however, often debatable, especially when prior knowledge is limited or data are scarce, as then

  6. Stochastic analytic regularization

    International Nuclear Information System (INIS)

    Alfaro, J.

    1984-07-01

    Stochastic regularization is reexamined, pointing out a restriction on its use due to a new type of divergence which is not present in the unregulated theory. Furthermore, we introduce a new form of stochastic regularization which permits the use of a minimal subtraction scheme to define the renormalized Green functions. (author)

  7. HIERARCHICAL REGULARIZATION OF POLYGONS FOR PHOTOGRAMMETRIC POINT CLOUDS OF OBLIQUE IMAGES

    Directory of Open Access Journals (Sweden)

    L. Xie

    2017-05-01

    Despite the success of multi-view stereo (MVS) reconstruction from massive oblique images at city scale, only point clouds and triangulated meshes are available from existing MVS pipelines; these are laden with topological defects, free of semantic information, and hard to edit and manipulate interactively in further applications. On the other hand, 2D polygons and polygonal models are still the industrial standard. However, extraction of 2D polygons from MVS point clouds remains a non-trivial task, given that the boundaries of the detected planes are zigzagged and regularities, such as parallelism and orthogonality, are not preserved. Aiming to solve these issues, this paper proposes a hierarchical polygon regularization method for photogrammetric point clouds from existing MVS pipelines, which comprises local and global levels. After boundary point extraction, e.g. using alpha shapes, the local level is used to consolidate the original points by refining their orientation and position using linear priors. The points are then grouped into local segments by forward searching. At the global level, regularities are enforced through a labeling process that encourages segments to share the same label, where a shared label means the segments are parallel or orthogonal. This is formulated as a Markov Random Field and solved efficiently. Preliminary results are obtained with point clouds from aerial oblique images and compared with two classical regularization methods, revealing that the proposed method is more powerful at abstracting a single building and is promising for further 3D polygonal model reconstruction and GIS applications.

  8. The Effect of Three Months Regular Aerobic Exercise on Premenstrual Syndrome

    Directory of Open Access Journals (Sweden)

    Zinat Ghanbari

    2008-12-01

    Objective: To determine the effects of three months of regular aerobic exercise on PMS symptoms. Correlations with age, education, marital status and severity of PMS symptoms were also studied. Materials and Methods: A quasi-experimental study was conducted on 91 volunteer women with regular menstrual cycles and no history of gynecological, endocrinological or psychological disorders. The study was done during March 2005 - March 2007, at Tehran University of Medical Sciences. A Modified Menstrual Distress Questionnaire (MMDQ) was used in this study. Participants were divided into two groups: non-exercised, with no past experience of regular exercise (n = 48), and exercised (n = 43). The exercise sessions lasted one hour and were carried out three times per week for three months. Emotional, behavioral, electrolyte, autonomic, neurovegetative and skin symptoms of PMS were compared between the two groups. P values were considered significant at < 0.05. Results: A significant difference was observed for electrolytic, neurovegetative and cognitive symptoms before and after the exercise. The severity of skin and neurovegetative symptoms also differed between participants with and without a past history of regular exercise. There was no correlation between age, education, marital status and severity of PMS symptoms. Conclusion: Three months of regular aerobic exercise effectively reduces the severity of PMS symptoms.

  9. A constrained regularization method for inverting data represented by linear algebraic or integral equations

    Science.gov (United States)

    Provencher, Stephen W.

    1982-09-01

    CONTIN is a portable Fortran IV package for inverting noisy linear operator equations. These problems occur in the analysis of data from a wide variety of experiments. They are generally ill-posed problems, which means that errors in an unregularized inversion are unbounded. Instead, CONTIN seeks the optimal solution by incorporating parsimony and any statistical prior knowledge into the regularizer, and absolute prior knowledge into equality and inequality constraints. This can greatly increase the resolution and accuracy of the solution. CONTIN is very flexible, consisting of a core of about 50 subprograms plus 13 small "USER" subprograms, which the user can easily modify to specify special-purpose constraints, regularizers, operator equations, simulations, statistical weighting, etc. Special collections of USER subprograms are available for photon correlation spectroscopy, multicomponent spectra, and Fourier-Bessel, Fourier and Laplace transforms. Numerically stable algorithms are used throughout CONTIN. A fairly precise definition of information content in terms of degrees of freedom is given. The regularization parameter can be automatically chosen on the basis of an F-test and confidence region. The interpretation of the latter and of error estimates based on the covariance matrix of the constrained regularized solution are discussed. The strategies, methods and options in CONTIN are outlined. The program itself is described in the following paper.
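
    The essence of CONTIN's approach, regularized inversion under hard constraints, can be sketched compactly: fold a smoothness regularizer into an augmented least-squares system and impose non-negativity exactly. The kernel, grids, and regularization weight below are illustrative choices, not CONTIN's (which selects the weight by an F-test):

    # CONTIN-style constrained regularized inversion of a Laplace-type kernel
    import numpy as np
    from scipy.optimize import nnls

    s = np.linspace(0.1, 5, 40)          # decay-rate grid
    t = np.linspace(0.05, 3, 60)         # measurement times
    A = np.exp(-np.outer(t, s))          # Laplace-type kernel

    x_true = np.exp(-0.5 * ((s - 2.0) / 0.3) ** 2)   # smooth spectrum
    b = A @ x_true + 1e-3 * np.random.default_rng(1).normal(size=t.size)

    L = np.diff(np.eye(s.size), 2, axis=0)           # second-difference regularizer
    lam = 1e-2                                       # chosen by eye here
    A_aug = np.vstack([A, np.sqrt(lam) * L])         # augmented system
    b_aug = np.concatenate([b, np.zeros(L.shape[0])])
    x_hat, _ = nnls(A_aug, b_aug)                    # non-negativity enforced exactly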

  10. Penalised Complexity Priors for Stationary Autoregressive Processes

    KAUST Repository

    Sørbye, Sigrunn Holbek; Rue, Haavard

    2017-01-01

    The autoregressive (AR) process of order p, AR(p), is a central model in time series analysis. A Bayesian approach requires the user to define a prior distribution for the coefficients of the AR(p) model. Although it is easy to write down some prior, it is not at all obvious how to understand and interpret the prior distribution, to ensure that it behaves according to the users' prior knowledge. In this article, we approach this problem using the recently developed ideas of penalised complexity (PC) priors. These priors have important properties like robustness and invariance to reparameterisations, as well as a clear interpretation. A PC prior is computed based on specific principles, where model component complexity is penalised in terms of deviation from simple base model formulations. In the AR(1) case, we discuss two natural base model choices, corresponding to either independence in time or no change in time. The latter case is illustrated in a survival model with possible time-dependent frailty. For higher-order processes, we propose a sequential approach, where the base model for AR(p) is the corresponding AR(p-1) model expressed using the partial autocorrelations. The properties of the new prior distribution are compared with the reference prior in a simulation study.

  12. The regularized monotonicity method: detecting irregular indefinite inclusions

    DEFF Research Database (Denmark)

    Garde, Henrik; Staboulis, Stratos

    2018-01-01

    inclusions, where the conductivity distribution has both more and less conductive parts relative to the background conductivity; one such method is the monotonicity method of Harrach, Seo, and Ullrich. We formulate the method for irregular indefinite inclusions, meaning that we make no regularity assumptions...

  13. A fonoaudiologia na relação entre escolas regulares de ensino fundamental e escolas de educação especial no processo de inclusão Speech therapy in the interaction between regular primary schools and special education schools in the process of inclusion

    Directory of Open Access Journals (Sweden)

    Alice de Souza Ramos

    2008-08-01

    This study aimed to understand how the process of including children with special needs unfolds in primary education, how communication takes place between special education schools and regular schools, and how the many professionals involved act, with a focus on the role of the speech therapist. Methodologically, a descriptive and analytical design was adopted, using questionnaires administered in six special education schools and six regular primary schools of the municipal public network of Belo Horizonte. Six coordinators and 42 teachers from regular schools and nine coordinators and 61 teachers from special schools took part, for a total of 118 respondents. The questionnaires addressed aspects related to school management, teacher training, student profiles and the professionals active in the educational process, as well as the forms of contact between health and education services. Among other findings, the analysis revealed a large demand for speech therapy services, which are still scarce in the educational field. Communication between the two types of school does not occur in all the institutions surveyed, and both have limited knowledge of speech therapy, especially the regular schools. A lack of investment in the professional development of teachers was noted, as well as in guidance for parents about the inclusion process. We conclude that the field of speech therapy in the inclusion process is broad and open; its role in health promotion in the school setting depends directly on interdisciplinary work between education and health services, as well as on partnership among speech therapists, educators and parents.

  14. Application of the Polynomial-Based Least Squares and Total Least Squares Models for the Attenuated Total Reflection Fourier Transform Infrared Spectra of Binary Mixtures of Hydroxyl Compounds.

    Science.gov (United States)

    Shan, Peng; Peng, Silong; Zhao, Yuhui; Tang, Liang

    2016-03-01

    Analysis of binary mixtures of hydroxyl compounds by attenuated total reflection Fourier transform infrared spectroscopy (ATR FT-IR) and classical least squares (CLS) yields large model errors due to the presence of unmodeled components such as H-bonded components. To accommodate these spectral variations, polynomial-based least squares (LSP) and polynomial-based total least squares (TLSP) are proposed to capture the nonlinear absorbance-concentration relationship. LSP assumes that only absorbance noise exists, while TLSP takes both absorbance noise and concentration noise into consideration. In addition, based on different solving strategies, two optimization algorithms (the limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) algorithm and the Levenberg-Marquardt (LM) algorithm) are combined with TLSP, yielding two TLSP versions (termed TLSP-LBFGS and TLSP-LM). The optimum order of each nonlinear model is determined by cross-validation. Comparison and analyses of the four models are made from two aspects: absorbance prediction and concentration prediction. The results for a water-ethanol solution and an ethanol-ethyl lactate solution show that LSP, TLSP-LBFGS, and TLSP-LM can, for both absorbance prediction and concentration prediction, obtain a smaller root mean square error of prediction than CLS. Additionally, they can also greatly enhance the accuracy of the estimated pure component spectra. However, from the view of concentration prediction, the Wilcoxon signed rank test shows that there is no statistically significant difference between each nonlinear model and CLS. © The Author(s) 2016.
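
    The polynomial idea can be illustrated in a few lines: where a classical linear (CLS-like) fit leaves structured residuals on a curved absorbance-concentration relationship, a low-order polynomial fit captures it. The data and the degree-2 choice are synthetic assumptions; the paper selects the order by cross-validation:

    # Linear vs. polynomial least squares on a curved absorbance-concentration curve
    import numpy as np

    rng = np.random.default_rng(8)
    conc = np.linspace(0.1, 1.0, 25)
    absorb = 0.9 * conc + 0.35 * conc**2 + 0.005 * rng.normal(size=conc.size)

    cls_slope = np.linalg.lstsq(conc[:, None], absorb, rcond=None)[0]   # CLS-like
    poly = np.polynomial.Polynomial.fit(conc, absorb, deg=2)            # LSP-like

    rmse = lambda pred: np.sqrt(np.mean((pred - absorb) ** 2))
    print(rmse(conc * cls_slope), rmse(poly(conc)))   # polynomial fit is tighter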

  15. Risk, treatment duration, and recurrence risk of postpartum affective disorder in women with no prior psychiatric history

    DEFF Research Database (Denmark)

    Rasmussen, Marie-Louise H; Strøm, Marin; Wohlfahrt, Jan

    2017-01-01

    BACKGROUND: Some 5%-15% of all women experience postpartum depression (PPD), which for many is their first psychiatric disorder. The purpose of this study was to estimate the incidence of postpartum affective disorder (AD), duration of treatment, and rate of subsequent postpartum AD and other...... total of 789,068 births) and no prior psychiatric hospital contacts and/or use of antidepressants. These women were followed from 1 January 1996 to 31 December 2014. Postpartum AD was defined as use of antidepressants and/or hospital contact for PPD within 6 months after childbirth. The main outcome.......4%. The recurrence risk of postpartum AD for women with a PPD hospital contact after first birth was 55.4 per 100 person-years; for women with postpartum antidepressant medication after first birth, it was 35.0 per 100 person-years. The rate of postpartum AD after second birth for women with no history of postpartum...

  16. Total Variability Modeling using Source-specific Priors

    DEFF Research Database (Denmark)

    Shepstone, Sven Ewan; Lee, Kong Aik; Li, Haizhou

    2016-01-01

    sequence of an utterance. In both cases the prior for the latent variable is assumed to be non-informative, since for homogeneous datasets there is no gain in generality in using an informative prior. This work shows, in the heterogeneous case, that using informative priors for computing the posterior ... can lead to favorable results. We focus on modeling the priors using a minimum divergence criterion or factor analysis techniques. Tests on the NIST 2008 and 2010 Speaker Recognition Evaluation (SRE) datasets show that our proposed method beats four baselines: for i-vector extraction using an already ... trained matrix, for the short2-short3 task in SRE’08, five out of eight female and four out of eight male common conditions were improved. For the core-extended task in SRE’10, four out of nine female and six out of nine male common conditions were improved. When incorporating prior information...

  17. A Regularized Linear Dynamical System Framework for Multivariate Time Series Analysis.

    Science.gov (United States)

    Liu, Zitao; Hauskrecht, Milos

    2015-01-01

    The Linear Dynamical System (LDS) is an elegant mathematical framework for modeling and learning multivariate time series (MTS). However, in general, it is difficult to set the dimension of an LDS's hidden state space. A small number of hidden states may not be able to model the complexities of an MTS, while a large number of hidden states can lead to overfitting. In this paper, we study learning methods that impose various regularization penalties on the transition matrix of the LDS model and propose a regularized LDS learning framework (rLDS) which aims to (1) automatically shut down the LDS's spurious and unnecessary dimensions and, consequently, address the problem of choosing the optimal number of hidden states; (2) prevent overfitting given a small amount of MTS data; and (3) support accurate MTS forecasting. To learn the regularized LDS from data, we incorporate a second-order cone program and a generalized gradient descent method into the maximum a posteriori framework and use expectation maximization to obtain a low-rank transition matrix of the LDS model. We propose two priors for modeling the matrix, which lead to two instances of our rLDS. We show that our rLDS is able to recover the intrinsic dimensionality of the time series dynamics well and that it improves predictive performance when compared to baselines on both synthetic and real-world MTS datasets.

  18. Effective field theory dimensional regularization

    International Nuclear Information System (INIS)

    Lehmann, Dirk; Prezeau, Gary

    2002-01-01

    A Lorentz-covariant regularization scheme for effective field theories with an arbitrary number of propagating heavy and light particles is given. This regularization scheme leaves the low-energy analytic structure of Green's functions intact and preserves all the symmetries of the underlying Lagrangian. The power divergences of regularized loop integrals are controlled by the low-energy kinematic variables. Simple diagrammatic rules are derived for the regularization of arbitrary one-loop graphs and the generalization to higher loops is discussed.

  20. Inverse problems with non-trivial priors: efficient solution through sequential Gibbs sampling

    DEFF Research Database (Denmark)

    Hansen, Thomas Mejer; Cordua, Knud Skou; Mosegaard, Klaus

    2012-01-01

    Markov chain Monte Carlo methods such as the Gibbs sampler and the Metropolis algorithm can be used to sample solutions to non-linear inverse problems. In principle, these methods allow incorporation of prior information of arbitrary complexity. If an analytical closed form description of the prior...... is available, which is the case when the prior can be described by a multidimensional Gaussian distribution, such prior information can easily be considered. In reality, prior information is often more complex than can be described by the Gaussian model, and no closed form expression of the prior can be given....... We propose an algorithm, called sequential Gibbs sampling, allowing the Metropolis algorithm to efficiently incorporate complex priors into the solution of an inverse problem, also for the case where no closed form description of the prior exists. First, we lay out the theoretical background...
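
    A toy rendering of the idea: proposals resample one component from a conditional of the prior, so the Metropolis acceptance ratio reduces to a likelihood ratio and the prior is honored automatically. Here the prior conditional happens to be available in closed form (a 2-D Gaussian prior with a Gaussian likelihood) purely to keep the sketch short; the algorithm's point is that any mechanism able to resample from prior conditionals, e.g. a geostatistical simulator, can take its place:

    # Sequential-Gibbs-flavored Metropolis sampler (illustrative assumptions)
    import numpy as np

    rng = np.random.default_rng(2)
    C = np.array([[1.0, 0.8], [0.8, 1.0]])      # prior covariance (correlated)
    d_obs, sigma = 1.0, 0.3                     # datum: m[0] + m[1] ~ d_obs

    def loglik(m):
        return -0.5 * ((m[0] + m[1] - d_obs) / sigma) ** 2

    m = np.zeros(2)
    samples = []
    for it in range(5000):
        i = it % 2                              # component to resample
        j = 1 - i
        # conditional of the Gaussian prior: m_i | m_j
        mu_c = C[i, j] / C[j, j] * m[j]
        sd_c = np.sqrt(C[i, i] - C[i, j] ** 2 / C[j, j])
        prop = m.copy()
        prop[i] = rng.normal(mu_c, sd_c)
        # prior terms cancel: accept on the likelihood ratio alone
        if np.log(rng.uniform()) < loglik(prop) - loglik(m):
            m = prop
        samples.append(m.copy())
    print(np.mean(samples, axis=0))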

  1. Quality assurance in postgraduate pathology training the Dutch way: regular assessment, monitoring of training programs but no end of training examination.

    Science.gov (United States)

    van der Valk, Paul

    2016-01-01

    It might seem self-evident that in the transition from a supervised trainee to an independent professional who is no longer supervised, formal assessment of whether the trainee knows his/her trade well enough to function independently is necessary. This would then constitute an end of training examination. Such examinations are practiced in several countries but a rather heterogeneous situation exists in the EU countries. In the Netherlands, the training program is not concluded by a summative examination and reasons behind this situation are discussed. Quality assurance of postgraduate medical training in the Netherlands has been developed along two tracks: (1) not a single testing moment but continuous evaluation of the performance of the trainee in 'real time' situations and (2) monitoring of the quality of the offered training program through regular site-visits. Regular (monthly and/or yearly) evaluations should be part of every self-respecting training program. In the Netherlands, these evaluations are formative only: their intention is to provide the trainee a tool by which he or she can see whether they are on track with their training schedule. In the system in the Netherlands, regular site-visits to training programs constitute a crucial element of quality assurance of postgraduate training. During the site-visit, the position and perceptions of the trainee are key elements. The perception by the trainee of the training program, the institution (or department) offering the training program, and the professionals involved in the training program is explicitly solicited and systematically assessed. With this two-tiered approach high-quality postgraduate training is assured without the need for an end of training examination.

  2. Kinetic modeling and fitting software for interconnected reaction schemes: VisKin.

    Science.gov (United States)

    Zhang, Xuan; Andrews, Jared N; Pedersen, Steen E

    2007-02-15

    Reaction kinetics for complex, highly interconnected kinetic schemes are modeled using analytical solutions to a system of ordinary differential equations. The algorithm employs standard linear algebra methods implemented using MATLAB functions in a Visual Basic interface. A graphical user interface for simple entry of reaction schemes facilitates comparison of a variety of reaction schemes. To ensure microscopic balance, graph theory algorithms are used to detect violations of thermodynamic cycle constraints. Analytical solutions based on linear differential equations allow fast comparisons of first-order kinetic rates and amplitudes as a function of changing ligand concentrations. For the analysis of higher-order kinetics, we also implemented a solution using numerical integration. To determine rate constants from experimental data, fitting algorithms that adjust rate constants to fit the model to imported data were implemented using the Levenberg-Marquardt algorithm or Broyden-Fletcher-Goldfarb-Shanno methods. We have also included the ability to carry out global fitting of data sets obtained at varying ligand concentrations. These tools are combined in a single package, which we have dubbed VisKin, to guide and analyze kinetic experiments. The software is available online for use on PCs.
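
    Rate-constant fitting of the kind VisKin performs can be sketched with SciPy stand-ins for the two optimizer families the abstract names (this is an independent illustration, not VisKin's MATLAB/Visual Basic code); a single first-order rate constant is fitted to simulated decay data:

    # Fit k in y = exp(-k t) by Levenberg-Marquardt and by BFGS
    import numpy as np
    from scipy.optimize import least_squares, minimize

    t = np.linspace(0, 5, 50)
    k_true = 1.3
    data = np.exp(-k_true * t) + 0.01 * np.random.default_rng(3).normal(size=t.size)

    residual = lambda p: np.exp(-p[0] * t) - data

    fit_lm = least_squares(residual, x0=[0.5], method="lm")
    fit_bfgs = minimize(lambda p: np.sum(residual(p) ** 2), x0=[0.5], method="BFGS")
    print(fit_lm.x, fit_bfgs.x)    # both recover k ~ 1.3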

  3. Neural Network Based Intrusion Detection System for Critical Infrastructures

    Energy Technology Data Exchange (ETDEWEB)

    Todd Vollmer; Ondrej Linda; Milos Manic

    2009-07-01

    Resiliency and security of control systems, such as SCADA and nuclear plant systems, are a relevant concern in today's world of hackers and malware. Computer systems used within critical infrastructures to control physical functions are not immune to the threat of cyber attacks and may be potentially vulnerable. Tailoring an intrusion detection system to the specifics of critical infrastructures can significantly improve the security of such systems. The IDS-NNM – Intrusion Detection System using Neural Network based Modeling – is presented in this paper. The main contributions of this work are: 1) the use and analysis of real network data (data recorded from an existing critical infrastructure); 2) the development of a specific window-based feature extraction technique; 3) the construction of training datasets using randomly generated intrusion vectors; 4) the use of a combination of two neural network learning algorithms – error back-propagation and Levenberg-Marquardt – for normal behavior modeling. The presented algorithm was evaluated on previously unseen network data. The IDS-NNM algorithm proved capable of capturing all intrusion attempts presented in the network communication while not generating any false alerts.

  4. Identification and control of plasma vertical position using neural network in Damavand tokamak

    Energy Technology Data Exchange (ETDEWEB)

    Rasouli, H. [School of Plasma Physics and Nuclear Fusion, Institute of Nuclear Science and Technology, AEOI, P.O. Box 14155-1339, Tehran (Iran, Islamic Republic of); Advanced Process Automation and Control (APAC) Research Group, Faculty of Electrical Engineering, K.N. Toosi University of Technology, P.O. Box 16315-1355, Tehran (Iran, Islamic Republic of); Rasouli, C.; Koohi, A. [School of Plasma Physics and Nuclear Fusion, Institute of Nuclear Science and Technology, AEOI, P.O. Box 14155-1339, Tehran (Iran, Islamic Republic of)

    2013-02-15

    In this work, a nonlinear model is introduced to determine the vertical position of the plasma column in the Damavand tokamak. Using this model as a simulator, a nonlinear neural network controller has been designed. In the first stage, the electronic drive and sensory circuits of the Damavand tokamak were modified; these circuits control the vertical position of the plasma column inside the vacuum vessel. Since the vertical position of the plasma is an unstable parameter, a direct closed-loop system identification algorithm is performed. In the second stage, a nonlinear model for the plasma vertical position is identified, based on the multilayer perceptron (MLP) neural network (NN) structure. Estimation of the simulator parameters was performed with the error back-propagation algorithm using the Levenberg-Marquardt gradient descent optimization technique. The model is verified through simulation of the whole closed-loop system using both the simulator and the actual plant under similar conditions. In the final stage, an MLP neural network controller is designed for the simulator model, and online training is performed to tune the controller parameters. Simulation results justify the use of the NN controller for the actual plant.

  5. Inverse optimal design of the radiant heating in materials processing and manufacturing

    Science.gov (United States)

    Fedorov, A. G.; Lee, K. H.; Viskanta, R.

    1998-12-01

    Combined convective, conductive, and radiative heat transfer is analyzed during heating of a continuously moving load in the industrial radiant oven. A transient, quasi-three-dimensional model of heat transfer between a continuous load of parts moving inside an oven on a conveyor belt at a constant speed and an array of radiant heaters/burners placed inside the furnace enclosure is developed. The model accounts for radiative exchange between the heaters and the load, heat conduction in the load, and convective heat transfer between the moving load and oven environment. The thermal model developed has been used to construct a general framework for an inverse optimal design of an industrial oven as an example. In particular, the procedure based on the Levenberg-Marquardt nonlinear least squares optimization algorithm has been developed to obtain the optimal temperatures of the heaters/burners that need to be specified to achieve a prescribed temperature distribution of the surface of a load. The results of calculations for several sample cases are reported to illustrate the capabilities of the procedure developed for the optimal inverse design of an industrial radiant oven.

  6. Non-intrusive reduced order modeling of nonlinear problems using neural networks

    Science.gov (United States)

    Hesthaven, J. S.; Ubbiali, S.

    2018-06-01

    We develop a non-intrusive reduced basis (RB) method for parametrized steady-state partial differential equations (PDEs). The method extracts a reduced basis from a collection of high-fidelity solutions via a proper orthogonal decomposition (POD) and employs artificial neural networks (ANNs), particularly multi-layer perceptrons (MLPs), to accurately approximate the coefficients of the reduced model. The search for the optimal number of neurons and the minimum amount of training samples to avoid overfitting is carried out in the offline phase through an automatic routine, relying upon a joint use of the Latin hypercube sampling (LHS) and the Levenberg-Marquardt (LM) training algorithm. This guarantees a complete offline-online decoupling, leading to an efficient RB method - referred to as POD-NN - suitable also for general nonlinear problems with a non-affine parametric dependence. Numerical studies are presented for the nonlinear Poisson equation and for driven cavity viscous flows, modeled through the steady incompressible Navier-Stokes equations. Both physical and geometrical parametrizations are considered. Several results confirm the accuracy of the POD-NN method and show the substantial speed-up enabled at the online stage as compared to a traditional RB strategy.
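
    The offline POD step of such a non-intrusive method is compact enough to sketch: collect parametrized snapshots, extract a reduced basis by SVD, and form the coefficient targets that the network is later trained to predict from the parameters. The snapshot model and truncation tolerance below are illustrative assumptions, not those of the paper:

    # Offline POD step of a POD-NN-style reduced basis method
    import numpy as np

    rng = np.random.default_rng(4)
    x = np.linspace(0, 1, 200)
    params = rng.uniform(1, 5, size=30)                   # training parameters mu
    snapshots = np.column_stack([np.sin(mu * np.pi * x) * mu for mu in params])

    U, svals, _ = np.linalg.svd(snapshots, full_matrices=False)
    energy = np.cumsum(svals**2) / np.sum(svals**2)
    r = np.searchsorted(energy, 0.9999) + 1               # modes to retain
    V = U[:, :r]                                          # POD basis
    coeffs = V.T @ snapshots                              # regression targets: mu -> coeffs

    # Online (once a regressor `net` is trained): u_approx = V @ net(mu)
    print("retained modes:", r)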

  7. Ground Motion Prediction Model Using Artificial Neural Network

    Science.gov (United States)

    Dhanya, J.; Raghukanth, S. T. G.

    2018-03-01

    This article focuses on developing a ground motion prediction equation based on the artificial neural network (ANN) technique for shallow crustal earthquakes. A hybrid technique combining a genetic algorithm and the Levenberg-Marquardt technique is used for training the model. The present model is developed to predict peak ground velocity and 5% damped spectral acceleration. The input parameters for the prediction are moment magnitude (Mw), closest distance to the rupture plane (Rrup), shear wave velocity in the region (Vs30) and focal mechanism (F). A total of 13,552 ground motion records from 288 earthquakes provided by the updated NGA-West2 database released by the Pacific Earthquake Engineering Research Center are utilized to develop the model. The ANN architecture considered for the model consists of 192 unknowns, including the weights and biases of all the interconnected nodes. The performance of the model is observed to be within the prescribed error limits. In addition, the results from the study are found to be comparable with existing relations in the global database. The developed model is further demonstrated by estimating site-specific response spectra for Shimla city, located in the Himalayan region.

  8. Designing Artificial Neural Networks Using Particle Swarm Optimization Algorithms.

    Science.gov (United States)

    Garro, Beatriz A; Vázquez, Roberto A

    2015-01-01

    Artificial Neural Network (ANN) design is a complex task because its performance depends on the architecture, the selected transfer function, and the learning algorithm used to train the set of synaptic weights. In this paper we present a methodology that automatically designs an ANN using particle swarm optimization algorithms such as Basic Particle Swarm Optimization (PSO), Second Generation of Particle Swarm Optimization (SGPSO), and a New Model of PSO called NMPSO. The aim of these algorithms is to evolve, at the same time, the three principal components of an ANN: the set of synaptic weights, the connections or architecture, and the transfer functions for each neuron. Eight different fitness functions were proposed to evaluate the fitness of each solution and find the best design. These functions are based on the mean square error (MSE) and the classification error (CER) and implement a strategy to avoid overtraining and to reduce the number of connections in the ANN. In addition, the ANN designed with the proposed methodology is compared with those designed manually using the well-known Back-Propagation and Levenberg-Marquardt Learning Algorithms. Finally, the accuracy of the method is tested with different nonlinear pattern classification problems.
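
    For reference, the basic PSO velocity/position update that all three variants build on looks as follows (the inertia and acceleration constants are common textbook choices, not necessarily the paper's, and the toy fitness stands in for the MSE/CER of a candidate ANN):

    # Basic particle swarm optimization loop
    import numpy as np

    rng = np.random.default_rng(5)
    n_particles, dim, iters = 20, 5, 200
    w, c1, c2 = 0.7, 1.5, 1.5                      # inertia, cognitive, social

    fitness = lambda x: np.sum(x**2, axis=1)       # stand-in for MSE/CER of an ANN

    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), fitness(x)
    gbest = pbest[np.argmin(pbest_f)]

    for _ in range(iters):
        r1, r2 = rng.uniform(size=(2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = fitness(x)
        better = f < pbest_f                       # update personal bests
        pbest[better], pbest_f[better] = x[better], f[better]
        gbest = pbest[np.argmin(pbest_f)]          # update global best
    print(fitness(gbest[None, :]))                 # ~ 0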

  9. Neural network analysis of head-flow curves in deep well pumps

    International Nuclear Information System (INIS)

    Goelcue, Mustafa

    2006-01-01

    In impellers with splitter blades, the difficulty in calculating the flow area of the impeller arises because the flow rates in the two separate areas are unknown when the splitter blades are added. Experimental studies were made to investigate the effects of splitter blade length on deep well pump performance for different numbers of blades. Head-flow curves of deep well pump impellers with splitter blades were investigated using artificial neural networks (ANNs). Gradient descent (GD), gradient descent with momentum (GDM) and Levenberg-Marquardt (LM) learning algorithms were used in the networks. Experimental studies were completed to obtain training and test data. Blade number (z), non-dimensional splitter blade length (L̄) and flow rate (Q) were used as the input layer, while the output is head (Hm). For the testing data, the root mean squared error (RMSE), fraction of variance (R²) and mean absolute percentage error (MAPE) were found to be 0.1285, 0.9999 and 1.6821%, respectively. With these results, we believe that the ANN can be used for prediction of head-flow curves as an appropriate method for deep well pump impellers with splitter blades.

  10. Prediction of the antimicrobial activity of walnut (Juglans regia L.) kernel aqueous extracts using artificial neural network and multiple linear regression.

    Science.gov (United States)

    Kavuncuoglu, Hatice; Kavuncuoglu, Erhan; Karatas, Seyda Merve; Benli, Büsra; Sagdic, Osman; Yalcin, Hasan

    2018-04-09

    A mathematical model was established to determine the diameter of the inhibition zone of walnut extract on twelve bacterial species. Type of extraction, concentration, and pathogen were taken as input variables. Two models were used with the aim of designing this system: one was developed with artificial neural networks (ANN), and the other was formed with multiple linear regression (MLR). Four common training algorithms - Levenberg-Marquardt (LM), Bayesian regularization (BR), scaled conjugate gradient (SCG) and resilient back-propagation (RP) - were investigated and compared. Root mean squared error and correlation coefficient were evaluated as performance criteria. When these criteria were analyzed, ANN showed high prediction performance, while MLR showed low prediction performance. As a result, when different input values are provided to the system developed with ANN, the most accurate inhibition zone (IZ) estimates are obtained. The results of this study could offer new perspectives, particularly in the field of microbiology, because they could be applied to other types of extraction, concentrations, and pathogens, without resorting to experiments. Copyright © 2018 Elsevier B.V. All rights reserved.

  11. Artificial neural network based modelling approach for municipal solid waste gasification in a fluidized bed reactor.

    Science.gov (United States)

    Pandey, Daya Shankar; Das, Saptarshi; Pan, Indranil; Leahy, James J; Kwapinski, Witold

    2016-12-01

    In this paper, multi-layer feed-forward neural networks are used to predict the lower heating value of gas (LHV), the lower heating value of the gasification products including tars and entrained char (LHVp), and the syngas yield during gasification of municipal solid waste (MSW) in a fluidized bed reactor. These artificial neural networks (ANNs) with different architectures are trained using the Levenberg-Marquardt (LM) back-propagation algorithm, and cross-validation is performed to ensure that the results generalise to other unseen datasets. A rigorous study is carried out on optimally choosing the number of hidden layers, the number of neurons in the hidden layer and the activation function in a network using multiple Monte Carlo runs. Nine input and three output parameters are used to train and test various neural network architectures in both multiple-output and single-output prediction paradigms using the available experimental datasets. The model selection procedure is carried out to ascertain the best network architecture in terms of predictive accuracy. The simulation results show that the ANN based methodology is a viable alternative which can be used to predict the performance of a fluidized bed gasifier. Copyright © 2016 Elsevier Ltd. All rights reserved.

  12. Camera-pose estimation via projective Newton optimization on the manifold.

    Science.gov (United States)

    Sarkis, Michel; Diepold, Klaus

    2012-04-01

    Determining the pose of a moving camera is an important task in computer vision. In this paper, we derive a projective Newton algorithm on the manifold to refine the pose estimate of a camera. The main idea is to benefit from the fact that the 3-D rigid motion is described by the special Euclidean group, which is a Riemannian manifold. The latter is equipped with a tangent space defined by the corresponding Lie algebra. This enables us to compute the optimization direction, i.e., the gradient and the Hessian, at each iteration of the projective Newton scheme on the tangent space of the manifold. Then, the motion is updated by projecting back the variables on the manifold itself. We also derive another version of the algorithm that employs homeomorphic parameterization to the special Euclidean group. We test the algorithm on several simulated and real image data sets. Compared with the standard Newton minimization scheme, we are now able to obtain the full numerical formula of the Hessian with a 60% decrease in computational complexity. Compared with Levenberg-Marquardt, the results obtained are more accurate while having a rather similar complexity.

  13. Electron dose map inversion based on several algorithms

    International Nuclear Information System (INIS)

    Li Gui; Zheng Huaqing; Wu Yican; Fds Team

    2010-01-01

    The reconstruction of the electron dose map in radiation therapy was investigated by constructing an inversion model of the electron dose map with different algorithms. An inversion model based on nonlinear programming was used, in which the penetration dose map is used to invert the full spatial dose map. This inversion model was realized with several inversion algorithms. The test results with seven samples show that, except for the NMinimize algorithm, which worked for just one sample and with large error, all the inversion algorithms could be applied to our inversion model rapidly and accurately. The Levenberg-Marquardt algorithm, having the greatest accuracy and speed, can be considered the first choice for electron dose map inversion. Further tests show that more error is introduced when data close to the electron range are used (tail error). The tail error might be caused by the approximation of the mean energy spectra, and this should be considered to improve the method. The time-saving and accurate algorithms can be used to achieve real-time dose map inversion. By selecting the best inversion algorithm, the clinical need for real-time dose verification can be satisfied. (authors)

  14. A Parameterized Inversion Model for Soil Moisture and Biomass from Polarimetric Backscattering Coefficients

    Science.gov (United States)

    Truong-Loi, My-Linh; Saatchi, Sassan; Jaruwatanadilok, Sermsak

    2012-01-01

    A semi-empirical algorithm for the retrieval of soil moisture, root mean square (RMS) height and biomass from polarimetric SAR data is explained and analyzed in this paper. The algorithm is a simplification of the distorted Born model. It takes into account the physical scattering phenomenon and has three major components: volume, double-bounce and surface. This simplified model uses the three backscattering coefficients (σHH, σHV and σVV) at low frequency (P-band). The inversion process uses the Levenberg-Marquardt non-linear least-squares method to estimate the structural parameters. The estimation process is explained in full in this paper, from initialization of the unknowns to the retrievals. A sensitivity analysis is also performed in which the initial values of the inversion process are varied randomly. The results show that the inversion process is not very sensitive to the initial values, and the majority of the retrievals have a root-mean-square error lower than 5% for soil moisture, 24 Mg/ha for biomass and 0.49 cm for roughness, considering a soil moisture of 40%, roughness equal to 3 cm and biomass varying from 0 to 500 Mg/ha with a mean of 161 Mg/ha.

  15. SURFACE FITTING FILTERING OF LIDAR POINT CLOUD WITH WAVEFORM INFORMATION

    Directory of Open Access Journals (Sweden)

    S. Xing

    2017-09-01

    Full-waveform LiDAR is an active technology in photogrammetry and remote sensing. It provides more detailed information about objects along the path of a laser pulse than discrete-return topographic LiDAR. Point clouds and waveform information of high quality can be obtained by waveform decomposition, which can contribute to accurate filtering. A surface fitting filtering method using waveform information is proposed to exploit this advantage. First, the discrete point cloud and waveform parameters are resolved by globally convergent Levenberg-Marquardt decomposition. Second, the ground seed points are selected, and abnormal ones are detected using the waveform parameters and robust estimation. Third, the terrain surface is fitted and the height difference threshold is determined in consideration of the window size and mean square error. Finally, the points are classified gradually as the window size increases; the filtering process finishes when the window size exceeds a threshold. Waveform data over urban, farmland and mountain areas from WATER (Watershed Allied Telemetry Experimental Research) were selected for the experiments. The results show that, compared with the traditional method, the accuracy of point cloud filtering is further improved and the proposed method has high practical value.

  16. A Novel Intelligent Method for the State of Charge Estimation of Lithium-Ion Batteries Using a Discrete Wavelet Transform-Based Wavelet Neural Network

    Directory of Open Access Journals (Sweden)

    Deyu Cui

    2018-04-01

    State of charge (SOC) estimation is becoming increasingly important along with the rapid development of electric vehicles (EVs), as SOC is one of the most significant parameters for the battery management system, indicating the remaining energy and ensuring the safety and reliability of the EV. In this paper, a hybrid wavelet neural network (WNN) model combining the discrete wavelet transform (DWT) method and an adaptive WNN is proposed to estimate the SOC of lithium-ion batteries. The WNN model is trained by the Levenberg-Marquardt (L-M) algorithm, and its inputs are processed by discrete wavelet decomposition and reconstruction. Compared with a back-propagation neural network (BPNN), an L-M based BPNN (LMBPNN), an L-M based WNN (LMWNN), a DWT with L-M based BPNN (DWTLMBPNN) and an extended Kalman filter (EKF), the proposed intelligent SOC estimation method is validated and proved to be effective. Under the New European Driving Cycle (NEDC), the mean absolute error and maximum error can be reduced to 0.59% and 3.13%, respectively. The high accuracy and strong robustness of the proposed method are verified by a comparison study and robustness evaluations (e.g., a measurement noise test and an untrained driving cycle test).
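
    The DWT front end of such a hybrid model can be sketched with PyWavelets; the wavelet family, decomposition level, and soft threshold below are assumptions, not the paper's exact settings:

    # DWT decomposition/reconstruction preprocessing for a noisy input signal
    import numpy as np
    import pywt

    t = np.linspace(0, 1, 1024)
    current = np.sin(2 * np.pi * 3 * t) + 0.2 * np.random.default_rng(6).normal(size=t.size)

    coeffs = pywt.wavedec(current, "db4", level=3)          # decomposition
    coeffs = [coeffs[0]] + [pywt.threshold(c, 0.1, "soft") for c in coeffs[1:]]
    smoothed = pywt.waverec(coeffs, "db4")                  # reconstruction
    # `smoothed` (and/or the coefficient sub-bands) would feed the WNN inputs.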

  17. Predicting the supercritical carbon dioxide extraction of oregano bract essential oil

    Directory of Open Access Journals (Sweden)

    Abdolreza Moghadassi

    2011-10-01

    The extraction of essential oils using compressed carbon dioxide is a modern technique offering significant advantages over more conventional methods, especially in particular applications. The prediction of extraction efficiency is a powerful tool for designing and optimizing the process. The current work proposes a new method based on an artificial neural network (ANN) for estimating the extraction efficiency of oregano bract essential oil. The work used the back-propagation learning algorithm, incorporating different training methods. The required data were collected, and pre-treatment was applied before ANN training. The accuracy and trend stability of the trained networks were verified according to their ability to predict unseen data. The Levenberg-Marquardt algorithm was found to be the most suitable algorithm, with an appropriate number of neurons (ten neurons in the hidden layer) and a minimum average absolute relative error of 0.019164. In addition, some excellent predictions, with a maximum error of 0.039313, were observed. The results demonstrated the ANN's capability to predict the measured data. The ANN model's performance was also compared to a suitable mathematical model, confirming the superiority of the ANN model.

  18. The fatigue life prediction of aluminium alloy using genetic algorithm and neural network

    Science.gov (United States)

    Susmikanti, Mike

    2013-09-01

    The fatigue life behavior of industrial materials is very important. In many cases, fatigue of the material cannot be avoided; however, there are many ways to control its behavior. Many investigations of fatigue life phenomena in alloys have been done, but the computations are costly and time-consuming. This paper reports modeling and simulation approaches to predict the fatigue life behavior of aluminum alloys and resolves some computational problems. First, a simulation using a genetic algorithm was utilized to optimize the load and obtain the stress values; these results can be used to provide the N-cycle fatigue life of the material. Furthermore, the experimental data were applied as input data for neural network learning, while sample data were used for testing against the training data. Finally, the multilayer perceptron algorithm is applied to predict whether given data sets are in accordance with the fatigue life of the alloy. To achieve rapid convergence, the Levenberg-Marquardt algorithm was also employed. The simulation results show that the fatigue behavior of aluminum under pressure can be predicted. In addition, the neural network implementation successfully identified a model for the material's fatigue life.

  19. Hierarchical regular small-world networks

    International Nuclear Information System (INIS)

    Boettcher, Stefan; Goncalves, Bruno; Guclu, Hasan

    2008-01-01

    Two new networks are introduced that resemble small-world properties. These networks are recursively constructed but retain a fixed, regular degree. They possess a unique one-dimensional lattice backbone overlaid by a hierarchical sequence of long-distance links, mixing real-space and small-world features. Both networks, one 3-regular and the other 4-regular, lead to distinct behaviors, as revealed by renormalization group studies. The 3-regular network is planar, has a diameter growing as √N with system size N, and leads to super-diffusion with an exact, anomalous exponent d_w = 1.306..., but possesses only a trivial fixed point T_c = 0 for the Ising ferromagnet. In turn, the 4-regular network is non-planar, has a diameter growing as ∼2^√(log₂N²), exhibits 'ballistic' diffusion (d_w = 1), and a non-trivial ferromagnetic transition, T_c > 0. It suggests that the 3-regular network is still quite 'geometric', while the 4-regular network qualifies as a true small world with mean-field properties. As an engineering application we discuss synchronization of processors on these networks. (fast track communication)

  20. 75 FR 76006 - Regular Meeting

    Science.gov (United States)

    2010-12-07

    ... FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting AGENCY: Farm Credit System Insurance Corporation Board. ACTION: Regular meeting. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). Date and Time: The meeting of the Board will be held...

  1. General inverse problems for regular variation

    DEFF Research Database (Denmark)

    Damek, Ewa; Mikosch, Thomas Valentin; Rosinski, Jan

    2014-01-01

    Regular variation of distributional tails is known to be preserved by various linear transformations of some random structures. An inverse problem for regular variation aims at understanding whether the regular variation of a transformed random object is caused by regular variation of components ...

  2. Joint Segmentation and Shape Regularization with a Generalized Forward Backward Algorithm.

    Science.gov (United States)

    Stefanoiu, Anca; Weinmann, Andreas; Storath, Martin; Navab, Nassir; Baust, Maximilian

    2016-05-11

    This paper presents a method for the simultaneous segmentation and regularization of a series of shapes from a corresponding sequence of images. Such series arise as time series of 2D images when considering video data, or as stacks of 2D images obtained by slicewise tomographic reconstruction. We first derive a model where the regularization of the shape signal is achieved by a total variation prior on the shape manifold. The method employs a modified Kendall shape space to facilitate explicit computations together with the concept of Sobolev gradients. For the proposed model, we derive an efficient and computationally accessible splitting scheme. Using a generalized forward-backward approach, our algorithm treats the total variation atoms of the splitting via proximal mappings, whereas the data terms are dealt with by gradient descent. The potential of the proposed method is demonstrated on various application examples dealing with 3D data. We explain how to extend the proposed combined approach to shape fields which, for instance, arise in the context of 3D+t imaging modalities, and show an application in this setup as well.

  3. Prior Elicitation, Assessment and Inference with a Dirichlet Prior

    Directory of Open Access Journals (Sweden)

    Michael Evans

    2017-10-01

    Methods are developed for eliciting a Dirichlet prior based upon stating bounds on the individual probabilities that hold with high prior probability. This approach to selecting a prior is applied to a contingency table problem, where it is demonstrated how to assess the prior with respect to the bias it induces, as well as how to check for prior-data conflict. It is shown that the assessment of a hypothesis via relative belief can easily take into account what it means for the falsity of the hypothesis to correspond to a difference of practical importance, and can provide evidence in favor of a hypothesis.
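
    The elicitation check described above is straightforward to approximate by Monte Carlo: for a candidate Dirichlet(α), estimate the prior probability that every cell probability lies inside its elicited bounds, and adjust α until that probability is high. All numbers below are made up for illustration:

    # Monte Carlo check of elicited bounds under a candidate Dirichlet prior
    import numpy as np

    rng = np.random.default_rng(7)
    alpha = np.array([4.0, 8.0, 8.0])                     # candidate hyperparameters
    lo = np.array([0.05, 0.25, 0.25])                     # elicited lower bounds
    hi = np.array([0.45, 0.65, 0.65])                     # elicited upper bounds

    p = rng.dirichlet(alpha, size=100_000)
    inside = np.all((p >= lo) & (p <= hi), axis=1)
    print("prior prob. bounds hold:", inside.mean())      # want this high, e.g. >= 0.95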

  4. Continuum-regularized quantum gravity

    International Nuclear Information System (INIS)

    Chan Huesum; Halpern, M.B.

    1987-01-01

    The recent continuum regularization of d-dimensional Euclidean gravity is generalized to arbitrary power-law measure and studied in some detail as a representative example of coordinate-invariant regularization. The weak-coupling expansion of the theory illustrates a generic geometrization of regularized Schwinger-Dyson rules, generalizing previous rules in flat space and flat superspace. The rules are applied in a non-trivial explicit check of Einstein invariance at one loop: the cosmological counterterm is computed and its contribution is included in a verification that the graviton mass is zero. (orig.)

  5. Elapsed decision time affects the weighting of prior probability in a perceptual decision task

    Science.gov (United States)

    Hanks, Timothy D.; Mazurek, Mark E.; Kiani, Roozbeh; Hopp, Elizabeth; Shadlen, Michael N.

    2012-01-01

    Decisions are often based on a combination of new evidence with prior knowledge of the probable best choice. Optimal combination requires knowledge about the reliability of evidence, but in many realistic situations, this is unknown. Here we propose and test a novel theory: the brain exploits elapsed time during decision formation to combine sensory evidence with prior probability. Elapsed time is useful because (i) decisions that linger tend to arise from less reliable evidence, and (ii) the expected accuracy at a given decision time depends on the reliability of the evidence gathered up to that point. These regularities allow the brain to combine prior information with sensory evidence by weighting the latter in accordance with reliability. To test this theory, we manipulated the prior probability of the rewarded choice while subjects performed a reaction-time discrimination of motion direction using a range of stimulus reliabilities that varied from trial to trial. The theory explains the effect of prior probability on choice and reaction time over a wide range of stimulus strengths. We found that prior probability was incorporated into the decision process as a dynamic bias signal that increases as a function of decision time. This bias signal depends on the speed-accuracy setting of human subjects, and it is reflected in the firing rates of neurons in the lateral intraparietal cortex (LIP) of rhesus monkeys performing this task. PMID:21525274

  6. Image-guided regularization level set evolution for MR image segmentation and bias field correction.

    Science.gov (United States)

    Wang, Lingfeng; Pan, Chunhong

    2014-01-01

    Magnetic resonance (MR) image segmentation is a crucial step in surgical and treatment planning. In this paper, we propose a level-set-based segmentation method for MR images with the intensity inhomogeneity problem. To tackle the initialization sensitivity problem, we propose a new image-guided regularization to restrict the level set function. The maximum a posteriori inference is adopted to unify segmentation and bias field correction within a single framework. Under this framework, both the contour prior and the bias field prior are fully used. As a result, the image intensity inhomogeneity can be handled well. Extensive experiments are provided to evaluate the proposed method, showing significant improvements in both segmentation and bias field correction accuracies as compared with other state-of-the-art approaches. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Online co-regularized algorithms

    NARCIS (Netherlands)

    Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.

    2012-01-01

    We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks.
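
    A toy sketch of the sequential co-regularization idea: two linear predictors on different feature views take stochastic-gradient steps on labeled points and are pulled toward agreement on unlabeled points. The synthetic data, the half/half view split and all step sizes are assumptions for illustration only.

        import numpy as np

        rng = np.random.default_rng(0)
        d = 5
        w_true = rng.standard_normal(2 * d)

        def views(X):
            # Two "views" of each example: first and second halves of the features.
            return X[:, :d], X[:, d:]

        X_lab = rng.standard_normal((200, 2 * d))
        y_lab = X_lab @ w_true + 0.1 * rng.standard_normal(200)
        X_unl = rng.standard_normal((500, 2 * d))       # unlabeled stream

        w1, w2 = np.zeros(d), np.zeros(d)
        eta, lam, mu = 0.01, 0.01, 0.5   # step size, ridge weight, co-regularization

        for t in range(2000):
            # Labeled example: squared-loss gradient step for each view's predictor.
            i = rng.integers(len(X_lab))
            v1, v2 = views(X_lab[i:i + 1])
            for w, v in ((w1, v1), (w2, v2)):
                w -= eta * ((float(v @ w) - y_lab[i]) * v[0] + lam * w)
            # Unlabeled example: penalize disagreement between the two views.
            j = rng.integers(len(X_unl))
            u1, u2 = views(X_unl[j:j + 1])
            diff = float(u1 @ w1 - u2 @ w2)
            w1 -= eta * mu * diff * u1[0]
            w2 += eta * mu * diff * u2[0]

        V1, V2 = views(X_lab)
        print("mean view disagreement:", np.abs(V1 @ w1 - V2 @ w2).mean())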

  8. Geometric continuum regularization of quantum field theory

    International Nuclear Information System (INIS)

    Halpern, M.B.

    1989-01-01

    An overview of the continuum regularization program is given. The program is traced from its roots in stochastic quantization, with emphasis on the examples of regularized gauge theory, the regularized general nonlinear sigma model and regularized quantum gravity. In its coordinate-invariant form, the regularization is seen as entirely geometric: only the supermetric on field deformations is regularized, and the prescription provides universal nonperturbative invariant continuum regularization across all quantum field theory. 54 refs

  9. Bypassing the Limits of L1 Regularization: Convex Sparse Signal Processing Using Non-Convex Regularization

    Science.gov (United States)

    Parekh, Ankit

    Sparsity has become the basis of some important signal processing methods over the last ten years. Many signal processing problems (e.g., denoising, deconvolution, non-linear component analysis) can be expressed as inverse problems. Sparsity is invoked through the formulation of an inverse problem with suitably designed regularization terms. The regularization terms alone encode sparsity into the problem formulation. Often, the ℓ1 norm is used to induce sparsity, so much so that ℓ1 regularization is considered to be `modern least-squares'. The use of ℓ1 norm, as a sparsity-inducing regularizer, leads to a convex optimization problem, which has several benefits: the absence of extraneous local minima, well developed theory of globally convergent algorithms, even for large-scale problems. Convex regularization via the ℓ1 norm, however, tends to under-estimate the non-zero values of sparse signals. In order to estimate the non-zero values more accurately, non-convex regularization is often favored over convex regularization. However, non-convex regularization generally leads to non-convex optimization, which suffers from numerous issues: convergence may be guaranteed to only a stationary point, problem specific parameters may be difficult to set, and the solution is sensitive to the initialization of the algorithm. The first part of this thesis is aimed toward combining the benefits of non-convex regularization and convex optimization to estimate sparse signals more effectively. To this end, we propose to use parameterized non-convex regularizers with designated non-convexity and provide a range for the non-convex parameter so as to ensure that the objective function is strictly convex. By ensuring convexity of the objective function (sum of data-fidelity and non-convex regularizer), we can make use of a wide variety of convex optimization algorithms to obtain the unique global minimum reliably. The second part of this thesis proposes a non-linear signal
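
    The convexity-preserving trick can be shown in one dimension. With the minimax-concave (MC) penalty, the scalar cost 0.5*(y - x)^2 + lam*phi(x; a) remains strictly convex whenever the non-convexity parameter satisfies a < 1/lam, and its minimizer is the firm-threshold function sketched below; the particular lam, a and test values are illustrative assumptions.

        import numpy as np

        def mc_penalty(x, a):
            # Minimax-concave (MC) penalty; a >= 0 controls the non-convexity.
            ax = np.abs(x)
            return np.where(ax <= 1.0 / a, ax - 0.5 * a * x**2, 0.5 / a)

        def firm_threshold(y, lam, a):
            # Exact minimizer of 0.5*(y - x)**2 + lam * mc_penalty(x, a).
            # For a < 1/lam the total cost is strictly convex, so this
            # proximal map is single-valued despite the non-convex penalty.
            ay = np.abs(y)
            out = np.where(ay <= lam, 0.0,
                  np.where(ay <= 1.0 / a, (ay - lam) / (1.0 - lam * a), ay))
            return np.sign(y) * out

        lam = 1.0
        a = 0.9 / lam          # keeps the overall objective convex (a < 1/lam)
        y = np.linspace(-3.0, 3.0, 7)
        print(firm_threshold(y, lam, a))
        # Unlike soft thresholding, large inputs pass through unshrunk (x = y),
        # so non-zero values are not systematically underestimated.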

  10. Borderline personality disorder and regularly drinking alcohol before sex.

    Science.gov (United States)

    Thompson, Ronald G; Eaton, Nicholas R; Hu, Mei-Chen; Hasin, Deborah S

    2017-07-01

    Drinking alcohol before sex increases the likelihood of engaging in unprotected intercourse, having multiple sexual partners and becoming infected with sexually transmitted infections. Borderline personality disorder (BPD), a complex psychiatric disorder characterised by pervasive instability in emotional regulation, self-image, interpersonal relationships and impulse control, is associated with substance use disorders and sexual risk behaviours. However, no study has examined the relationship between BPD and drinking alcohol before sex in the USA. This study examined the association between BPD and regularly drinking before sex in a nationally representative adult sample. Participants were 17 491 sexually active drinkers from Wave 2 of the National Epidemiologic Survey on Alcohol and Related Conditions. Logistic regression models estimated effects of BPD diagnosis, specific borderline diagnostic criteria and BPD criterion count on the likelihood of regularly (mostly or always) drinking alcohol before sex, adjusted for controls. Borderline personality disorder diagnosis doubled the odds of regularly drinking before sex [adjusted odds ratio (AOR) = 2.26; confidence interval (CI) = 1.63, 3.14]. Of nine diagnostic criteria, impulsivity in areas that are self-damaging remained a significant predictor of regularly drinking before sex (AOR = 1.82; CI = 1.42, 2.35). The odds of regularly drinking before sex increased by 20% for each endorsed criterion (AOR = 1.20; CI = 1.14, 1.27). DISCUSSION AND CONCLUSIONS: This is the first study to examine the relationship between BPD and regularly drinking alcohol before sex in the USA. Substance misuse treatment should assess regularly drinking before sex, particularly among patients with BPD, and BPD treatment should assess risk at the intersection of impulsivity, sexual behaviour and substance use. [Thompson Jr RG, Eaton NR, Hu M-C, Hasin DS Borderline personality disorder and regularly drinking alcohol

  11. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan; Wang, Yunji; Jing, Bing-Yi; Gao, Xin

    2015-01-01

    In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.

  12. Regularized maximum correntropy machine

    KAUST Repository

    Wang, Jim Jing-Yan

    2015-02-12

    In this paper we investigate the usage of regularized correntropy framework for learning of classifiers from noisy labels. The class label predictors learned by minimizing transitional loss functions are sensitive to the noisy and outlying labels of training samples, because the transitional loss functions are equally applied to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criteria (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms the machines with transitional loss functions.
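
    The alternating optimization can be sketched with a half-quadratic reformulation: fixing per-sample weights derived from the current residuals turns the correntropy objective into a weighted ridge problem. The regression form, Gaussian kernel width and synthetic outlier-contaminated data below are simplifying assumptions, not the paper's exact classification setup.

        import numpy as np

        rng = np.random.default_rng(0)
        n, d = 200, 5
        X = rng.standard_normal((n, d))
        w_true = rng.standard_normal(d)
        y = X @ w_true + 0.1 * rng.standard_normal(n)
        y[:20] += 5.0                      # outlying "noisy labels"

        sigma, lam = 1.0, 1e-2
        w = np.zeros(d)
        for _ in range(20):
            r = y - X @ w
            q = np.exp(-r**2 / (2 * sigma**2))   # half-quadratic sample weights
            # Weighted ridge solve: samples with large residuals are down-weighted,
            # which is what makes the correntropy criterion robust to label noise.
            A = X.T @ (q[:, None] * X) + lam * np.eye(d)
            w = np.linalg.solve(A, X.T @ (q * y))

        print("error vs. true weights:", np.linalg.norm(w - w_true))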

  13. Block matching sparsity regularization-based image reconstruction for incomplete projection data in computed tomography

    Science.gov (United States)

    Cai, Ailong; Li, Lei; Zheng, Zhizhong; Zhang, Hanming; Wang, Linyuan; Hu, Guoen; Yan, Bin

    2018-02-01

    In medical imaging many conventional regularization methods, such as total variation or total generalized variation, impose strong prior assumptions which can only account for very limited classes of images. A more reasonable sparse representation frame for images is still badly needed. Visually understandable images contain meaningful patterns, and combinations or collections of these patterns can be utilized to form some sparse and redundant representations which promise to facilitate image reconstructions. In this work, we propose and study block matching sparsity regularization (BMSR) and devise an optimization program using BMSR for computed tomography (CT) image reconstruction for an incomplete projection set. The program is built as a constrained optimization, minimizing the L1-norm of the coefficients of the image in the transformed domain subject to data observation and positivity of the image itself. To solve the program efficiently, a practical method based on the proximal point algorithm is developed and analyzed. In order to accelerate the convergence rate, a practical strategy for tuning the BMSR parameter is proposed and applied. The experimental results for various settings, including real CT scanning, have verified the proposed reconstruction method showing promising capabilities over conventional regularization.
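
    Stripped of the block matching, the constrained program above is a data-consistency step plus an L1 proximal (soft-threshold) step in a sparsifying transform. The sketch below uses a plain DCT and a random toy system matrix as stand-ins for the block-matching transform and the CT projector; the paper additionally enforces positivity of the image, which is omitted here.

        import numpy as np
        from scipy.fft import dct, idct

        rng = np.random.default_rng(0)
        n, m = 128, 60                          # signal length, number of measurements
        A = rng.standard_normal((m, n)) / np.sqrt(m)    # toy "projection" operator
        coef = np.zeros(n); coef[[3, 17, 40]] = [5.0, -3.0, 2.0]
        x_true = idct(coef, norm='ortho')       # sparse in the DCT domain
        y = A @ x_true + 0.01 * rng.standard_normal(m)

        lam = 0.05
        L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
        x = np.zeros(n)
        for _ in range(500):
            x = x + A.T @ (y - A @ x) / L       # data-consistency gradient step
            c = dct(x, norm='ortho')            # into the sparsifying domain
            c = np.sign(c) * np.maximum(np.abs(c) - lam / L, 0.0)  # L1 prox
            x = idct(c, norm='ortho')           # back to image/signal space
        print("reconstruction error:", np.linalg.norm(x - x_true))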

  14. The insertion of deaf students in regular education: the view of a group of teachers from the State of Paraná, Brazil

    Directory of Open Access Journals (Sweden)

    Ana Cristina Guarinello

    2006-12-01

    Full Text Available In order to discuss the issues involved in the inclusion of deaf students in regular education, this study analyzes aspects of that problem from the perspective of a group of teachers. To that end, a questionnaire was administered to 36 teachers working in the public elementary, middle and high school network of the State of Paraná. The data analysis shows that the main difficulties cited relate either to the teachers themselves - lack of knowledge about deafness, difficulty interacting with deaf students, unfamiliarity with LIBRAS - or to the deaf subjects - deafness itself and the comprehension difficulties that, in the teachers' view, these subjects present. It is worth noting that the teachers surveyed did not relate their own difficulties in teaching to their students' difficulties in learning, as if the teachers' lack of knowledge about deafness, for example, had no direct implications for deaf students' learning. We conclude that the inclusion of deaf students in regular education means more than simply creating places and providing material resources; the school and society must themselves be inclusive, ensuring equal opportunities for all students and relying on qualified teachers committed to the education of all.

  15. Processing SPARQL queries with regular expressions in RDF databases

    Science.gov (United States)

    2011-01-01

    Background As the Resource Description Framework (RDF) data model is widely used for modeling and sharing a lot of online bioinformatics resources such as Uniprot (dev.isb-sib.ch/projects/uniprot-rdf) or Bio2RDF (bio2rdf.org), SPARQL - a W3C recommendation query for RDF databases - has become an important query language for querying the bioinformatics knowledge bases. Moreover, due to the diversity of users’ requests for extracting information from the RDF data as well as the lack of users’ knowledge about the exact value of each fact in the RDF databases, it is desirable to use the SPARQL query with regular expression patterns for querying the RDF data. To the best of our knowledge, there is currently no work that efficiently supports regular expression processing in SPARQL over RDF databases. Most of the existing techniques for processing regular expressions are designed for querying a text corpus, or only for supporting the matching over the paths in an RDF graph. Results In this paper, we propose a novel framework for supporting regular expression processing in SPARQL query. Our contributions can be summarized as follows. 1) We propose an efficient framework for processing SPARQL queries with regular expression patterns in RDF databases. 2) We propose a cost model in order to adapt the proposed framework in the existing query optimizers. 3) We build a prototype for the proposed framework in C++ and conduct extensive experiments demonstrating the efficiency and effectiveness of our technique. Conclusions Experiments with a full-blown RDF engine show that our framework outperforms the existing ones by up to two orders of magnitude in processing SPARQL queries with regular expression patterns. PMID:21489225

  16. Processing SPARQL queries with regular expressions in RDF databases.

    Science.gov (United States)

    Lee, Jinsoo; Pham, Minh-Duc; Lee, Jihwan; Han, Wook-Shin; Cho, Hune; Yu, Hwanjo; Lee, Jeong-Hoon

    2011-03-29

    As the Resource Description Framework (RDF) data model is widely used for modeling and sharing a lot of online bioinformatics resources such as Uniprot (dev.isb-sib.ch/projects/uniprot-rdf) or Bio2RDF (bio2rdf.org), SPARQL - a W3C recommendation query for RDF databases - has become an important query language for querying the bioinformatics knowledge bases. Moreover, due to the diversity of users' requests for extracting information from the RDF data as well as the lack of users' knowledge about the exact value of each fact in the RDF databases, it is desirable to use the SPARQL query with regular expression patterns for querying the RDF data. To the best of our knowledge, there is currently no work that efficiently supports regular expression processing in SPARQL over RDF databases. Most of the existing techniques for processing regular expressions are designed for querying a text corpus, or only for supporting the matching over the paths in an RDF graph. In this paper, we propose a novel framework for supporting regular expression processing in SPARQL query. Our contributions can be summarized as follows. 1) We propose an efficient framework for processing SPARQL queries with regular expression patterns in RDF databases. 2) We propose a cost model in order to adapt the proposed framework in the existing query optimizers. 3) We build a prototype for the proposed framework in C++ and conduct extensive experiments demonstrating the efficiency and effectiveness of our technique. Experiments with a full-blown RDF engine show that our framework outperforms the existing ones by up to two orders of magnitude in processing SPARQL queries with regular expression patterns.
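
    For reference, the class of queries in question can be written with a regex FILTER; here is a small self-contained illustration with the rdflib Python library. The tiny in-memory graph and the pattern are invented for the example; the papers' contribution is the efficient engine-side evaluation of such queries, not this client code.

        from rdflib import Graph, Literal, Namespace
        from rdflib.namespace import RDFS

        EX = Namespace("http://example.org/")
        g = Graph()
        g.add((EX.p1, RDFS.label, Literal("glucose-6-phosphate isomerase")))
        g.add((EX.p2, RDFS.label, Literal("hexokinase")))
        g.add((EX.p3, RDFS.label, Literal("phosphofructokinase")))

        # A SPARQL query with a regular-expression pattern over literal values.
        q = """
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT ?s ?label WHERE {
            ?s rdfs:label ?label .
            FILTER regex(str(?label), "phospho", "i")
        }
        """
        for row in g.query(q):
            print(row.s, row.label)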

  17. Nudging toward Inquiry: Awakening and Building upon Prior Knowledge

    Science.gov (United States)

    Fontichiaro, Kristin, Comp.

    2010-01-01

    "Prior knowledge" (sometimes called schema or background knowledge) is information one already knows that helps him/her make sense of new information. New learning builds on existing prior knowledge. In traditional reporting-style research projects, students bypass this crucial step and plow right into answer-finding. It's no wonder that many…

  18. Prior exercise and antioxidant supplementation: effect on oxidative stress and muscle injury

    Directory of Open Access Journals (Sweden)

    Schilling Brian K

    2007-10-01

    Full Text Available Abstract Background Both acute bouts of prior exercise (preconditioning) and antioxidant nutrients have been used in an attempt to attenuate muscle injury or oxidative stress in response to resistance exercise. However, most studies have focused on untrained participants rather than on athletes. The purpose of this work was to determine the independent and combined effects of antioxidant supplementation (vitamin C + mixed tocopherols/tocotrienols) and prior eccentric exercise in attenuating markers of skeletal muscle injury and oxidative stress in resistance trained men. Methods Thirty-six men were randomly assigned to: no prior exercise + placebo; no prior exercise + antioxidant; prior exercise + placebo; prior exercise + antioxidant. Markers of muscle/cell injury (muscle performance, muscle soreness, C-reactive protein, and creatine kinase activity), as well as oxidative stress (blood protein carbonyls and peroxides), were measured before and through 48 hours of exercise recovery. Results No group by time interactions were noted for any variable (P > 0.05). Time main effects were noted for creatine kinase activity, muscle soreness, maximal isometric force and peak velocity (P < 0.05). Conclusion There appears to be no independent or combined effect of a prior bout of eccentric exercise or antioxidant supplementation as used here on markers of muscle injury in resistance trained men. Moreover, eccentric exercise as used in the present study results in minimal blood oxidative stress in resistance trained men. Hence, antioxidant supplementation for the purpose of minimizing blood oxidative stress in relation to eccentric exercise appears unnecessary in this population.

  19. Regularities of Multifractal Measures

    Indian Academy of Sciences (India)

    First, we prove the decomposition theorem for the regularities of multifractal Hausdorff measure and packing measure in R R d . This decomposition theorem enables us to split a set into regular and irregular parts, so that we can analyze each separately, and recombine them without affecting density properties. Next, we ...

  20. Adaptive Regularization of Neural Classifiers

    DEFF Research Database (Denmark)

    Andersen, Lars Nonboe; Larsen, Jan; Hansen, Lars Kai

    1997-01-01

    We present a regularization scheme which iteratively adapts the regularization parameters by minimizing the validation error. It is suggested to use the adaptive regularization scheme in conjunction with optimal brain damage pruning to optimize the architecture and to avoid overfitting. Furthermore, we propose an improved neural classification architecture eliminating an inherent redundancy in the widely used SoftMax classification network. Numerical results demonstrate the viability of the method.
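
    A toy rendition of the adaptive idea: treat the weight-decay parameter of a simple regularized classifier as a knob and adjust it iteratively to reduce validation error. The multiplicative search below replaces the paper's gradient-based update, and the linear model and synthetic data are assumptions for brevity.

        import numpy as np

        rng = np.random.default_rng(0)
        d = 20
        w_true = rng.standard_normal(d)

        def make(n):
            X = rng.standard_normal((n, d))
            y = np.sign(X @ w_true + 0.5 * rng.standard_normal(n))
            return X, y

        Xtr, ytr = make(100)
        Xva, yva = make(500)

        def fit(alpha):
            # Regularized least-squares classifier on +/-1 labels.
            return np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(d), Xtr.T @ ytr)

        def val_err(alpha):
            return float(np.mean(np.sign(Xva @ fit(alpha)) != yva))

        alpha = 1.0
        for _ in range(30):      # iteratively adapt the regularization parameter
            alpha = min((alpha / 1.5, alpha, alpha * 1.5), key=val_err)
        print("adapted alpha:", round(alpha, 3), " validation error:", val_err(alpha))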

  1. Effects of attitude, social influence, and self-efficacy model factors on regular mammography performance in life-transition aged women in Korea.

    Science.gov (United States)

    Lee, Chang Hyun; Kim, Young Im

    2015-01-01

    This study analyzed predictors of regular mammography performance in Korea. In addition, we determined factors affecting regular mammography performance in life-transition aged women by applying an attitude, social influence, and self-efficacy (ASE) model. Data were collected from women aged over 40 years residing in province J in Korea. The 178 enrolled subjects provided informed voluntary consent prior to completing a structured questionnaire. The overall regular mammography performance rate of the subjects was 41.6%. Older age, city residency, high income and a part-time job were associated with high regular mammography performance. Among women who had undergone more breast self-examinations (BSE) or more doctors' physical examinations (PE), there were higher regular mammography performance rates. All three ASE model factors were significantly associated with regular mammography performance. Women with a high level of positive ASE values had a significantly high regular mammography performance rate. Within the ASE model, self-efficacy and social influence were particularly important. Logistic regression analysis explained 34.7% of the variance in regular mammography performance; PE experience (β=4.645, p=.003), part-time job (β=4.010, p=.050), self-efficacy (β=1.820, p=.026) and social influence (β=1.509, p=.038) were significant factors. Promotional strategies that could improve self-efficacy, reinforce social influence and reduce geographical, time and financial barriers are needed to increase the regular mammography performance rate in life-transition aged women.

  2. Trouble found during regular inspection of No.1 plant in Takahama Power Station, Kansai Electric Power Co., Inc

    International Nuclear Information System (INIS)

    1990-01-01

    No.1 plant in Takahama Power Station, Kansai Electric Power Co., Inc. is a PWR plant with the rated output of 826 MWe. Its regular inspection has been carried out since August 10, 1989, and eddy current flaw detection inspection was performed on the total number of steam generator heating tubes (9619 tubes except already plugged tubes). As the result, significant indication was observed in 6 tubes in the U-bend part, in 6 tubes in the tube-supporting plate part, in 4 tubes in the crevice part in the tube plate, in 9 tubes in the expanded part in the tube plate and in 11 tubes at the boundary of the expanded part, in total in 36 heating tubes, all of them on high temperature side. Consequently, it was decided to plug these 36 defective heating tubes. The heating tubes are those made of Inconel 600, having 22.2 mm outside diameter and 1.27 mm wall thickness. (K.I.)

  3. Trouble found during regular inspection of No.3 plant in Mihama Power Station, Kansai Electric Power Co., Inc

    International Nuclear Information System (INIS)

    1990-01-01

    No.3 plant in Mihama Power Station, Kansai Electric Power Co., Inc. is a PWR type plant with the rated output of 826 MWe. Its regular inspection has been carried out since September 11, 1989, and eddy current flaw detection inspection was carried out on the total number of steam generator heating tubes (9997 tubes except already plugged tubes). As the result, significant indication was observed in 24 tubes in the expanded parts in tube plates, and in 36 tubes at the boundary of the expanded parts (all on high temperature side), in total in 60 tubes. Consequently, it was decided to plug these 60 defective heating tubes. The heating tubes are those made of Inconel 600, having 22.2 mm outside diameter and 1.27 mm wall thickness. The total number of heating tubes is 10164 (3388 tubes x 3 steam generators), the number of plugged tubes is 227, and the ratio of plugging is 2.2 %. (K.I.)

  4. Inclusion of students with disabilities in regular school: profile of the city of Marília

    Directory of Open Access Journals (Sweden)

    Walkiria Gonçalves Reganhan

    2008-12-01

    Full Text Available The aim of this study was to identify the profile of regular school teachers in the city of Marília-SP who had students with disabilities enrolled in their classrooms, as well as the profile of the students these teachers worked with. Sixty-eight teachers from Marília-SP, all of whom had students with disabilities in their classrooms, participated in the study. The instrument used to collect data was a questionnaire with 13 questions divided into two parts: (1) identification of the participants and (2) identification of the students with disabilities. The data were submitted to analysis of absolute and relative frequency, and ten categories were identified. Based on the results of the study, we conclude that the inclusion of students with disabilities in regular school occurs through changes in teacher development that enable professionals to acquire knowledge and understanding of the distinct ways their students learn, so that they can structure their own pedagogical practice to serve diversity with quality.

  5. Processing SPARQL queries with regular expressions in RDF databases

    Directory of Open Access Journals (Sweden)

    Cho Hune

    2011-03-01

    Full Text Available Abstract Background As the Resource Description Framework (RDF) data model is widely used for modeling and sharing a lot of online bioinformatics resources such as Uniprot (dev.isb-sib.ch/projects/uniprot-rdf) or Bio2RDF (bio2rdf.org), SPARQL - a W3C recommendation query for RDF databases - has become an important query language for querying the bioinformatics knowledge bases. Moreover, due to the diversity of users' requests for extracting information from the RDF data as well as the lack of users' knowledge about the exact value of each fact in the RDF databases, it is desirable to use the SPARQL query with regular expression patterns for querying the RDF data. To the best of our knowledge, there is currently no work that efficiently supports regular expression processing in SPARQL over RDF databases. Most of the existing techniques for processing regular expressions are designed for querying a text corpus, or only for supporting the matching over the paths in an RDF graph. Results In this paper, we propose a novel framework for supporting regular expression processing in SPARQL query. Our contributions can be summarized as follows. 1) We propose an efficient framework for processing SPARQL queries with regular expression patterns in RDF databases. 2) We propose a cost model in order to adapt the proposed framework in the existing query optimizers. 3) We build a prototype for the proposed framework in C++ and conduct extensive experiments demonstrating the efficiency and effectiveness of our technique. Conclusions Experiments with a full-blown RDF engine show that our framework outperforms the existing ones by up to two orders of magnitude in processing SPARQL queries with regular expression patterns.

  6. The perception of regularity in an isochronous stimulus in zebra finches (Taeniopygia guttata) and humans

    NARCIS (Netherlands)

    van der Aa, J.; Honing, H.; ten Cate, C.

    2015-01-01

    Perceiving temporal regularity in an auditory stimulus is considered one of the basic features of musicality. Here we examine whether zebra finches can detect regularity in an isochronous stimulus. Using a go/no go paradigm we show that zebra finches are able to distinguish between an isochronous

  7. Median prior constrained TV algorithm for sparse view low-dose CT reconstruction.

    Science.gov (United States)

    Liu, Yi; Shangguan, Hong; Zhang, Quan; Zhu, Hongqing; Shu, Huazhong; Gui, Zhiguo

    2015-05-01

    It is known that lowering the X-ray tube current (mAs) or tube voltage (kVp) and simultaneously reducing the total number of X-ray views (sparse view) is an effective means to achieve a low dose in computed tomography (CT) scans. However, the associated image quality from conventional filtered back-projection (FBP) usually degrades due to the excessive quantum noise. Although sparse-view CT reconstruction via total variation (TV), in the scanning protocol of reducing X-ray tube current, has been demonstrated to yield significant radiation dose reduction while maintaining image quality, noticeable patchy artifacts still exist in reconstructed images. In this study, to address the problem of patchy artifacts, we proposed a median prior constrained TV regularization to retain the image quality by introducing an auxiliary vector m in register with the object. Specifically, the approximate action of m is to draw, in each iteration, an object voxel toward its own local median, aiming to improve low-dose image quality with sparse-view projection measurements. Subsequently, an alternating optimization algorithm is adopted to optimize the associated objective function. We refer to the median prior constrained TV regularization as "TV_MP" for simplicity. Experimental results on digital phantoms and a clinical phantom demonstrated that the proposed TV_MP with appropriate control parameters can ensure not only a higher signal-to-noise ratio (SNR) in the reconstructed image but also better resolution than the original TV method. Copyright © 2015 Elsevier Ltd. All rights reserved.
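
    The median prior can be sketched as an extra force in a gradient iteration: each sweep pulls the image toward the data, toward TV smoothness, and toward its own local median m. The toy denoising setting below (identity forward operator instead of CT projections) and all step sizes are illustrative assumptions.

        import numpy as np
        from scipy.ndimage import median_filter

        def tv_grad(u, eps=1e-8):
            # Gradient of a smoothed total-variation term (periodic boundaries).
            ux = np.roll(u, -1, 0) - u
            uy = np.roll(u, -1, 1) - u
            mag = np.sqrt(ux**2 + uy**2 + eps)
            px, py = ux / mag, uy / mag
            return -(px - np.roll(px, 1, 0) + py - np.roll(py, 1, 1))

        rng = np.random.default_rng(0)
        clean = np.zeros((64, 64)); clean[20:44, 20:44] = 1.0
        y = clean + 0.2 * rng.standard_normal(clean.shape)

        u = y.copy()
        step, beta_tv, beta_med = 0.2, 0.5, 0.3
        for _ in range(100):
            m = median_filter(u, size=3)        # auxiliary "median image"
            grad = (u - y) + beta_tv * tv_grad(u) + beta_med * (u - m)
            u -= step * grad                    # data fit + TV smoothness + median pull
        print("noisy error:", np.abs(y - clean).mean(),
              " denoised error:", np.abs(u - clean).mean())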

  8. GENERAL ASPECTS REGARDING THE PRIOR DISCIPLINARY RESEARCH

    Directory of Open Access Journals (Sweden)

    ANDRA PURAN (DASCĂLU)

    2012-05-01

    Full Text Available Disciplinary research is the first phase of the disciplinary action. According to art. 251 paragraph 1 of the Labour Code, no disciplinary sanction may be ordered before the prior disciplinary research is performed. These regulations provide one exception: the sanction of the written warning. The current regulations in question, kept from the old regulation, provide protection for employees against abuses by employers, since sanctions affect the salary or the position held, or even the continuation of the individual employment contract. Thus, prior research into the act that constitutes misconduct, before a disciplinary sanction is applied, is an essential condition for the validity of the measure ordered. Through this study we try to highlight some general issues concerning the characteristics, procedure and effects of prior disciplinary research.

  9. Regular-, irregular-, and pseudo-character processing in Chinese: The regularity effect in normal adult readers

    Directory of Open Access Journals (Sweden)

    Dustin Kai Yan Lau

    2014-03-01

    Full Text Available Background Unlike alphabetic languages, Chinese uses a logographic script. However, the phonetic radical of many characters has the same pronunciation as the character as a whole. These are considered regular characters and can be read through a lexical non-semantic route (Weekes & Chen, 1999). Pseudocharacters are another way to study this non-semantic route. A pseudocharacter is the combination of existing semantic and phonetic radicals in their legal positions, resulting in a non-existing character (Ho, Chan, Chung, Lee, & Tsang, 2007). Pseudocharacters can be pronounced by direct derivation from the sound of the phonetic radical. Conversely, if the pronunciation of a character does not follow that of the phonetic radical, it is considered irregular and can only be correctly read through the lexical-semantic route. The aim of the current investigation was to examine reading aloud in normal adults. We hypothesized that the regularity effect, previously described for alphabetical scripts and acquired dyslexic patients of Chinese (Weekes & Chen, 1999; Wu, Liu, Sun, Chromik, & Zhang, 2014), would also be present in normal adult Chinese readers. Method Participants. Thirty (50% female) native Hong Kong Cantonese speakers with a mean age of 19.6 years and a mean education of 12.9 years. Stimuli. Sixty regular-, 60 irregular-, and 60 pseudo-characters (with at least 75% name agreement) in Chinese were matched by initial phoneme, number of strokes and family size. Additionally, regular- and irregular-characters were matched by frequency (low) and consistency. Procedure. Each participant was asked to read aloud the stimuli presented on a laptop using the DMDX software. The order of stimuli presentation was randomized. Data analysis. ANOVAs were carried out by participants and items with RTs and errors as dependent variables and type of stimuli (regular-, irregular- and pseudo-character) as repeated measures (F1 or between subject

  10. A biological network-based regularized artificial neural network model for robust phenotype prediction from gene expression data.

    Science.gov (United States)

    Kang, Tianyu; Ding, Wei; Zhang, Luoyan; Ziemek, Daniel; Zarringhalam, Kourosh

    2017-12-19

    Stratification of patient subpopulations that respond favorably to treatment or experience an adverse reaction is an essential step toward development of new personalized therapies and diagnostics. It is currently feasible to generate omic-scale biological measurements for all patients in a study, providing an opportunity for machine learning models to identify molecular markers for disease diagnosis and progression. However, the high variability of genetic background in human populations hampers the reproducibility of omic-scale markers. In this paper, we develop a biological network-based regularized artificial neural network model for prediction of phenotype from transcriptomic measurements in clinical trials. To improve model sparsity and the overall reproducibility of the model, we incorporate regularization for simultaneous shrinkage of gene sets based on active upstream regulatory mechanisms into the model. We benchmark our method against various regression, support vector machine and artificial neural network models and demonstrate the ability of our method in predicting the clinical outcomes using clinical trial data on acute rejection in kidney transplantation and response to Infliximab in ulcerative colitis. We show that integration of prior biological knowledge into the classification, as developed in this paper, significantly improves the robustness and generalizability of predictions to independent datasets. We provide Java code for our algorithm along with a parsed version of the STRING DB database. In summary, we present a method for prediction of clinical phenotypes using baseline genome-wide expression data that makes use of prior biological knowledge on gene-regulatory interactions in order to increase robustness and reproducibility of omic-scale markers. The integrated group-wise regularization method increases the interpretability of biological signatures and gives stable performance estimates across independent test sets.
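
    The group-wise shrinkage can be illustrated with a plain group-lasso penalty on a linear model, where each group stands for the genes under one hypothetical upstream regulator; block soft-thresholding then zeroes out whole regulator groups at once. The group structure and data below are synthetic stand-ins for the paper's regulatory-network prior, and the linear model replaces its neural network.

        import numpy as np

        rng = np.random.default_rng(0)
        n, d, g = 120, 30, 6                      # samples, genes, regulator groups
        groups = np.repeat(np.arange(g), d // g)  # hypothetical regulator membership
        X = rng.standard_normal((n, d))
        w_true = np.zeros(d); w_true[groups == 2] = 1.0   # one active regulator group
        y = X @ w_true + 0.1 * rng.standard_normal(n)

        lam = 0.1
        L = np.linalg.norm(X, 2) ** 2 / n         # Lipschitz constant of the fit term
        w = np.zeros(d)
        for _ in range(500):
            w = w + X.T @ (y - X @ w) / (n * L)   # gradient step on the fit term
            for k in range(g):                    # block soft-threshold per group
                idx = groups == k
                nrm = np.linalg.norm(w[idx])
                w[idx] *= max(0.0, 1.0 - lam / (L * max(nrm, 1e-12)))
        print("groups kept:", sorted(set(groups[np.abs(w) > 1e-6].tolist())))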

  11. Dietary Sodium and Potassium Intake is Not Associated with Elevated Blood Pressure in US Adults with No Prior History of Hypertension

    Science.gov (United States)

    Sharma, Shailendra; McFann, Kim; Chonchol, Michel; Kendrick, Jessica

    2014-01-01

    The relationship between dietary sodium and potassium intake with elevated blood pressure (BP) levels is unclear. We examined the association between dietary sodium and potassium intake and BP levels in 6985 adults 18 years of age or older with no prior history of hypertension who participated in the National Health and Nutrition Examination Survey (2001–2006). After adjustment for age, sex, race, body mass index, diabetes and eGFR, there was no association between higher quartiles of sodium or potassium intake with the risk of a BP >140/90 mmHg or >130/80 mmHg. There was also no relationship between dietary sodium and potassium intake with BP when systolic and diastolic BP were measured as continuous outcomes (p=0.68 and p=0.74, respectively). Furthermore, no association was found between combinations of sodium and potassium intake with elevated BP. In the US adult population without hypertension, increased dietary sodium or low potassium intake was not associated with elevated BP levels. PMID:24720647

  12. J-regular rings with injectivities

    OpenAIRE

    Shen, Liang

    2010-01-01

    A ring $R$ is called a J-regular ring if R/J(R) is von Neumann regular, where J(R) is the Jacobson radical of R. It is proved that if R is J-regular, then (i) R is right n-injective if and only if every homomorphism from an $n$-generated small right ideal of $R$ to $R_{R}$ can be extended to one from $R_{R}$ to $R_{R}$; (ii) R is right FP-injective if and only if R is right (J, R)-FP-injective. Some known results are improved.

  13. Intelligence system based classification approach for medical disease diagnosis

    Science.gov (United States)

    Sagir, Abdu Masanawa; Sathasivam, Saratha

    2017-08-01

    The prediction of breast cancer in women who have no signs or symptoms of the disease, as well as of survivability after undergoing certain surgery, has been a challenging problem for medical researchers. The decision about the presence or absence of disease often depends more on the physician's intuition, experience and skill in comparing current indicators with previous ones than on the knowledge-rich data hidden in a database, which makes it a crucial and challenging task. The goal is to predict patient condition by using an adaptive neuro-fuzzy inference system (ANFIS) pre-processed by grid partitioning. To achieve an accurate diagnosis at this complex stage of symptom analysis, the physician may need an efficient diagnosis system. A framework describes the methodology for designing and evaluating the classification performance of two discrete ANFIS systems with hybrid learning algorithms, combining least-squares estimates with Modified Levenberg-Marquardt and with gradient descent, that can be used by physicians to accelerate the diagnosis process. The proposed method's performance was evaluated on training and test data from the Mammographic Mass and Haberman's Survival datasets obtained from the benchmark collection of the University of California at Irvine (UCI) machine learning repository. Robustness was examined in terms of total accuracy, sensitivity and specificity. In comparison, the proposed method achieves superior performance relative to the conventional gradient-descent-based ANFIS and some related existing methods. The software used for the implementation is MATLAB R2014a (version 8.3), executed on a PC with an Intel Pentium IV E7400 processor at 2.80 GHz and 2.0 GB of RAM.
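
    For reference, the Levenberg-Marquardt update that appears in such hybrid training blends gradient descent with Gauss-Newton through an adaptive damping factor mu. A generic least-squares sketch follows; the exponential toy model is an assumption purely for demonstration and is unrelated to the medical datasets above.

        import numpy as np

        def levenberg_marquardt(residual, jacobian, p, mu=1e-2, n_iter=50):
            # Minimize 0.5 * ||r(p)||^2 with adaptively damped Gauss-Newton steps.
            for _ in range(n_iter):
                r, J = residual(p), jacobian(p)
                step = np.linalg.solve(J.T @ J + mu * np.eye(len(p)), -J.T @ r)
                if np.sum(residual(p + step) ** 2) < np.sum(r ** 2):
                    p, mu = p + step, mu * 0.5   # accept; trust Gauss-Newton more
                else:
                    mu *= 2.0                    # reject; lean toward gradient descent
            return p

        # Toy exponential fit: r_i = p0 * exp(p1 * t_i) - y_i.
        t = np.linspace(0.0, 1.0, 20)
        y = 2.0 * np.exp(-1.5 * t)
        res = lambda p: p[0] * np.exp(p[1] * t) - y
        jac = lambda p: np.stack([np.exp(p[1] * t),
                                  p[0] * t * np.exp(p[1] * t)], axis=1)
        print(levenberg_marquardt(res, jac, np.array([1.0, 0.0])))   # ~ [2.0, -1.5]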

  14. Structural Damage Detection Using Changes in Natural Frequencies: Theory and Applications

    Science.gov (United States)

    He, K.; Zhu, W. D.

    2011-07-01

    A vibration-based method that uses changes in natural frequencies of a structure to detect damage has advantages over conventional nondestructive tests in detecting various types of damage, including loosening of bolted joints, using minimum measurement data. Two major challenges associated with applications of the vibration-based damage detection method to engineering structures are addressed: accurate modeling of structures and the development of a robust inverse algorithm to detect damage, which are defined as the forward and inverse problems, respectively. To resolve the forward problem, new physics-based finite element modeling techniques are developed for fillets in thin-walled beams and for bolted joints, so that complex structures can be accurately modeled with a reasonable model size. To resolve the inverse problem, a logistical function transformation is introduced to convert the constrained optimization problem to an unconstrained one, and a robust iterative algorithm using a trust-region method, called the Levenberg-Marquardt method, is developed to accurately detect the locations and extent of damage. The new methodology can ensure global convergence of the iterative algorithm in solving under-determined system equations and deal with damage detection problems with relatively large modeling error and measurement noise. The vibration-based damage detection method is applied to various structures including lightning masts, a space frame structure and one of its components, and a pipeline. The exact locations and extent of damage can be detected in the numerical simulation where there is no modeling error and measurement noise. The locations and extent of damage can be successfully detected in experimental damage detection.

  15. Structural Damage Detection Using Changes in Natural Frequencies: Theory and Applications

    International Nuclear Information System (INIS)

    He, K; Zhu, W D

    2011-01-01

    A vibration-based method that uses changes in natural frequencies of a structure to detect damage has advantages over conventional nondestructive tests in detecting various types of damage, including loosening of bolted joints, using minimum measurement data. Two major challenges associated with applications of the vibration-based damage detection method to engineering structures are addressed: accurate modeling of structures and the development of a robust inverse algorithm to detect damage, which are defined as the forward and inverse problems, respectively. To resolve the forward problem, new physics-based finite element modeling techniques are developed for fillets in thin-walled beams and for bolted joints, so that complex structures can be accurately modeled with a reasonable model size. To resolve the inverse problem, a logistical function transformation is introduced to convert the constrained optimization problem to an unconstrained one, and a robust iterative algorithm using a trust-region method, called the Levenberg-Marquardt method, is developed to accurately detect the locations and extent of damage. The new methodology can ensure global convergence of the iterative algorithm in solving under-determined system equations and deal with damage detection problems with relatively large modeling error and measurement noise. The vibration-based damage detection method is applied to various structures including lightning masts, a space frame structure and one of its components, and a pipeline. The exact locations and extent of damage can be detected in the numerical simulation where there is no modeling error and measurement noise. The locations and extent of damage can be successfully detected in experimental damage detection.
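
    The logistic transformation mentioned in both records can be sketched directly: a damage factor theta in (0, 1) (say, a fractional stiffness reduction) is written as a logistic function of an unconstrained variable z, and the mismatch between measured and model natural frequencies is minimized over z with a Levenberg-Marquardt solver. The two-degree-of-freedom spring-mass model and all numbers below are illustrative assumptions, not the structures studied in the papers.

        import numpy as np
        from scipy.optimize import least_squares

        def frequencies(theta, k=(100.0, 100.0), mass=1.0):
            # Natural frequencies of a 2-DOF chain whose two stiffnesses are
            # reduced by damage factors theta in (0, 1).
            k1, k2 = k[0] * (1 - theta[0]), k[1] * (1 - theta[1])
            K = np.array([[k1 + k2, -k2], [-k2, k2]])
            return np.sqrt(np.linalg.eigvalsh(K / mass))

        theta_true = np.array([0.30, 0.05])
        f_meas = frequencies(theta_true)              # "measured" frequencies

        sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))  # logistic map: R -> (0, 1)
        resid = lambda z: frequencies(sigmoid(z)) - f_meas

        # Unconstrained Levenberg-Marquardt least squares over z; the logistic
        # transform guarantees the recovered damage factors stay in (0, 1).
        sol = least_squares(resid, x0=np.zeros(2), method='lm')
        print("detected damage extents:", sigmoid(sol.x).round(3))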

  16. External Prior Guided Internal Prior Learning for Real-World Noisy Image Denoising

    Science.gov (United States)

    Xu, Jun; Zhang, Lei; Zhang, David

    2018-06-01

    Most existing image denoising methods learn image priors from either external data or the noisy image itself to remove noise. However, priors learned from external data may not be adaptive to the image to be denoised, while priors learned from the given noisy image may not be accurate due to the interference of corrupted noise. Meanwhile, the noise in real-world noisy images is very complex, which is hard to describe with simple distributions such as the Gaussian, making real noisy image denoising a very challenging problem. We propose to exploit the information in both external data and the given noisy image, and develop an external prior guided internal prior learning method for real noisy image denoising. We first learn external priors from an independent set of clean natural images. With the aid of the learned external priors, we then learn internal priors from the given noisy image to refine the prior model. The external and internal priors are formulated as a set of orthogonal dictionaries to efficiently reconstruct the desired image. Extensive experiments are performed on several real noisy image datasets. The proposed method demonstrates highly competitive denoising performance, outperforming state-of-the-art denoising methods including those designed for real noisy images.
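
    A toy rendition of the external-then-internal idea with orthogonal PCA patch dictionaries: a dictionary learned from clean external data drives a first denoising pass, and a second dictionary re-learned from that estimate refines the result. The non-overlapping patches, hard thresholding and stand-in images are simplifying assumptions; the paper's model is considerably richer.

        import numpy as np

        rng = np.random.default_rng(0)
        P = 8    # patch size; the 64x64 images below tile exactly into 8x8 patches

        def patches(img):
            h, w = img.shape
            return np.stack([img[i:i + P, j:j + P].ravel()
                             for i in range(0, h - P + 1, P)
                             for j in range(0, w - P + 1, P)])

        def pca_dict(X):
            # Orthogonal dictionary: principal axes of the patch collection.
            _, _, Vt = np.linalg.svd(X - X.mean(0), full_matrices=False)
            return Vt

        def denoise(img, D, thresh):
            X = patches(img); mu = X.mean(0)
            C = (X - mu) @ D.T
            C[np.abs(C) < thresh] = 0.0   # kill small (noise-dominated) coefficients
            Xh = C @ D + mu
            out = np.zeros_like(img); k = 0
            for i in range(0, img.shape[0] - P + 1, P):
                for j in range(0, img.shape[1] - P + 1, P):
                    out[i:i + P, j:j + P] = Xh[k].reshape(P, P); k += 1
            return out

        clean = np.outer(np.sin(np.linspace(0, 6, 64)), np.cos(np.linspace(0, 6, 64)))
        external = clean.T                   # stand-in for clean external images
        noisy = clean + 0.3 * rng.standard_normal(clean.shape)

        D_ext = pca_dict(patches(external))  # external prior
        pass1 = denoise(noisy, D_ext, thresh=0.5)
        D_int = pca_dict(patches(pass1))     # internal prior refined on the estimate
        pass2 = denoise(noisy, D_int, thresh=0.5)
        print("noisy:", np.abs(noisy - clean).mean(),
              " final:", np.abs(pass2 - clean).mean())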

  17. Fast Markov chain Monte Carlo sampling for sparse Bayesian inference in high-dimensional inverse problems using L1-type priors

    International Nuclear Information System (INIS)

    Lucka, Felix

    2012-01-01

    Sparsity has become a key concept for solving high-dimensional inverse problems using variational regularization techniques. Recently, using similar sparsity constraints in the Bayesian framework for inverse problems, by encoding them in the prior distribution, has attracted attention. Important questions about the relation between regularization theory and Bayesian inference still need to be addressed when using sparsity-promoting inversion. A practical obstacle to these examinations is the lack of fast posterior sampling algorithms for sparse, high-dimensional Bayesian inversion. Accessing the full range of Bayesian inference methods requires being able to draw samples from the posterior probability distribution in a fast and efficient way. This is usually done using Markov chain Monte Carlo (MCMC) sampling algorithms. In this paper, we develop and examine a new implementation of a single component Gibbs MCMC sampler for sparse priors relying on L1-norms. We demonstrate that the efficiency of our Gibbs sampler increases when the level of sparsity or the dimension of the unknowns is increased. This property is contrary to the properties of the most commonly applied Metropolis–Hastings (MH) sampling schemes. We demonstrate that the efficiency of MH schemes for L1-type priors dramatically decreases when the level of sparsity or the dimension of the unknowns is increased. Practically, Bayesian inversion for L1-type priors using MH samplers is not feasible at all. As this is commonly believed to be an intrinsic feature of MCMC sampling, the performance of our Gibbs sampler also challenges common beliefs about the applicability of sample based Bayesian inference. (paper)
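
    For the posterior exp(-||Ax - y||^2/(2*sigma^2) - lam*||x||_1), each single-component conditional is a mixture of two one-sided truncated Gaussians, which is what makes a component-wise Gibbs sweep tractable. The dense-matrix sketch below is a minimal illustration of that conditional, with made-up problem sizes and hyper-parameters; it is not the paper's optimized sampler.

        import numpy as np
        from scipy.stats import norm, truncnorm

        def gibbs_l1(A, y, sigma=0.1, lam=10.0, n_sweep=200, rng=None):
            # Single-component Gibbs sampler for
            #   p(x|y) ~ exp(-||Ax - y||^2 / (2 sigma^2) - lam * ||x||_1).
            # Each full conditional is a mixture of two one-sided
            # truncated Gaussians (one per sign of x_i).
            rng = rng or np.random.default_rng(0)
            n = A.shape[1]
            x = np.zeros(n)
            r = y - A @ x                      # running residual
            samples = []
            for _ in range(n_sweep):
                for i in range(n):
                    ai = A[:, i]
                    a = ai @ ai / sigma**2     # conditional precision
                    b = ai @ (r + ai * x[i]) / sigma**2
                    s = 1.0 / np.sqrt(a)
                    mp, mm = (b - lam) / a, (b + lam) / a   # branch modes
                    lwp = 0.5 * a * mp**2 + norm.logcdf(mp / s)    # log-mass, x >= 0
                    lwm = 0.5 * a * mm**2 + norm.logcdf(-mm / s)   # log-mass, x < 0
                    p_pos = 1.0 / (1.0 + np.exp(np.clip(lwm - lwp, -50, 50)))
                    old = x[i]
                    if rng.random() < p_pos:   # N(mp, s^2) truncated to [0, inf)
                        x[i] = truncnorm.rvs(-mp / s, np.inf, loc=mp, scale=s,
                                             random_state=rng)
                    else:                      # N(mm, s^2) truncated to (-inf, 0]
                        x[i] = truncnorm.rvs(-np.inf, -mm / s, loc=mm, scale=s,
                                             random_state=rng)
                    r -= ai * (x[i] - old)     # keep the residual in sync
                samples.append(x.copy())
            return np.array(samples)

        rng = np.random.default_rng(1)
        A = rng.standard_normal((30, 10))
        x_true = np.zeros(10); x_true[[2, 7]] = [1.0, -1.0]
        y = A @ x_true + 0.1 * rng.standard_normal(30)
        S = gibbs_l1(A, y, rng=rng)
        print("posterior mean:", S[100:].mean(0).round(2))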

  18. Use of regularized algebraic methods in tomographic reconstruction

    International Nuclear Information System (INIS)

    Koulibaly, P.M.; Darcourt, J.; Blanc-Ferraud, L.; Migneco, O.; Barlaud, M.

    1997-01-01

    Algebraic methods are used in emission tomography to facilitate the compensation of attenuation and of Compton scattering. We have tested on a phantom the use of regularization (a priori introduction of information), as well as the taking into account of the spatial resolution variation with depth (SRVD). Hence, we have compared the performances of two back-projection filtering (BPF) methods and of two algebraic methods (AM) in terms of FWHM (by means of a point source), of the reduction of background noise (σ/m) on the homogeneous part of Jaszczak's phantom, and of reconstruction speed (time unit = BPF). The BPF methods make use of a ramp filter (maximal resolution, no noise treatment), alone or combined with a Hann low-pass filter (fc = 0.4), as well as an attenuation correction. The AM, which embody attenuation and scattering corrections, are, on one side, OS-EM (Ordered Subsets: partitioning and rearranging of the projection matrix; Expectation Maximization) without regularization or SRVD correction, and, on the other side, OS-MAP-EM (Maximum A Posteriori), regularized and embodying the SRVD correction. A table is given containing, for each method used (ramp, Hann, OS-EM and OS-MAP-EM), the values of FWHM, σ/m and time, respectively. One can observe that the OS-MAP-EM algebraic method improves both the resolution, by taking the SRVD into account in the reconstruction process, and the noise, through regularization. In addition, thanks to the OS technique, the reconstruction times are acceptable.
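
    The OS-EM scheme named above can be sketched in a few lines: the projection rows are split into ordered subsets and the multiplicative EM update is applied subset by subset. Attenuation, scatter and SRVD corrections would enter through the system matrix and are omitted; the toy matrix and Poisson data are assumptions.

        import numpy as np

        rng = np.random.default_rng(0)
        n_pix, n_proj, n_sub = 64, 96, 4
        A = rng.random((n_proj, n_pix))          # toy system (projection) matrix
        x_true = rng.random(n_pix)
        y = rng.poisson(A @ x_true * 50) / 50.0  # noisy projection data

        x = np.ones(n_pix)
        subsets = np.array_split(np.arange(n_proj), n_sub)
        for _ in range(10):
            for s in subsets:                    # one EM update per ordered subset
                As, ys = A[s], y[s]
                ratio = ys / np.maximum(As @ x, 1e-12)
                x *= (As.T @ ratio) / np.maximum(As.T @ np.ones(len(s)), 1e-12)
        print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))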

  19. 7 CFR 4290.480 - Prior approval of changes to RBIC's business plan.

    Science.gov (United States)

    2010-01-01

    ... 7 Agriculture 15 2010-01-01 2010-01-01 false Prior approval of changes to RBIC's business plan... § 4290.480 Prior approval of changes to RBIC's business plan. Without the Secretary's prior written approval, no change in your business plan, upon which you were selected and licensed as a RBIC, may take...

  20. Iterative Regularization with Minimum-Residual Methods

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg; Hansen, Per Christian

    2007-01-01

    We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES - their success as regularization methods is highly problem dependent.

  1. Iterative regularization with minimum-residual methods

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg; Hansen, Per Christian

    2006-01-01

    We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES - their success as regularization methods is highly problem dependent.
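
    The regularizing role of the Krylov iteration can be reproduced with SciPy's MINRES on a symmetric ill-posed test problem, where the iteration count plays the part of the regularization parameter. The Gaussian "blurring" matrix and noise level below are assumptions standing in for a discrete ill-posed operator.

        import numpy as np
        from scipy.sparse.linalg import minres

        rng = np.random.default_rng(0)
        n = 200
        i = np.arange(n)
        A = np.exp(-0.1 * (i[:, None] - i[None, :]) ** 2)   # symmetric, ill-conditioned
        x_true = np.sin(2 * np.pi * i / n)
        b = A @ x_true + 1e-3 * rng.standard_normal(n)

        # The number of MINRES iterations acts as the regularization parameter:
        # too few iterations underfit, too many let the noise in (semiconvergence).
        for k in (2, 5, 10, 50, 200):
            x, info = minres(A, b, maxiter=k)
            print(k, np.linalg.norm(x - x_true))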

  2. Multiple graph regularized protein domain ranking.

    Science.gov (United States)

    Wang, Jim Jing-Yan; Bensmail, Halima; Gao, Xin

    2012-11-19

    Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.
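
    A compact sketch of the multiple-graph mechanism: ranking scores are propagated from a query over several candidate k-NN graphs whose combination weights are re-estimated alternately with the scores. The entropic reweighting used below is a common device and an assumption here, as are the random features; it is not necessarily MultiG-Rank's exact update.

        import numpy as np

        def knn_laplacian(X, k):
            # Unnormalized Laplacian of a symmetrized k-NN similarity graph.
            D2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            W = np.zeros_like(D2)
            nn = np.argsort(D2, axis=1)[:, 1:k + 1]
            for i, js in enumerate(nn):
                W[i, js] = np.exp(-D2[i, js] / D2[i, js].mean())
            W = np.maximum(W, W.T)
            return np.diag(W.sum(1)) - W

        rng = np.random.default_rng(0)
        X = rng.standard_normal((60, 5))      # items (e.g. domain feature vectors)
        q = np.zeros(60); q[0] = 1.0          # query indicator vector

        laps = [knn_laplacian(X, k) for k in (3, 5, 10)]  # multiple candidate graphs
        eta = np.ones(3) / 3                  # graph combination weights
        alpha, temp = 1.0, 5.0
        for _ in range(10):
            L = sum(e * Lg for e, Lg in zip(eta, laps))
            f = np.linalg.solve(L + alpha * np.eye(60), alpha * q)  # ranking scores
            smooth = np.array([f @ Lg @ f for Lg in laps])
            eta = np.exp(-temp * smooth)      # smoother graphs get more weight
            eta /= eta.sum()
        print("top-5 ranked items:", np.argsort(-f)[:5])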

  3. Behavior of the hypsometric relationship of Araucaria angustifolia in the forest copse of the Faculty of Forestry – Federal University of Paraná, Brazil

    Directory of Open Access Journals (Sweden)

    Sebastião do Amaral Machado

    2010-03-01

    Full Text Available The objective of this research was to test and select mathematical models for estimating total height (ht) and bole height (hb) as a function of DBH, as well as to establish the dendrometric relationship between ht and hb. The data came from measurements of diameter (DBH), total height and bole height of all Araucaria angustifolia trees in an Ombrophilous Mixed Forest fragment of 15.24 ha situated on the Botanical Garden Campus of the UFPR, Curitiba-PR, Brazil. Thirteen models were tested, including arithmetic, logarithmic and nonlinear models, such as Chapman-Richards and the adapted Mitscherlich (monomolecular); the nonlinear models were fitted by the Levenberg-Marquardt algorithm. The statistical criteria for selecting the best models were the graphic analysis of residuals, the standard error of estimate in percentage (Syx%) and the adjusted coefficient of determination (R2adj). The R2adj values were very low for all fitted models, characterizing an advanced and asymptotic growth stage of the species under study. The best equation for estimating ht was the one proposed by Stoffels & Van Soest, and for hb the Curtis equation in its logarithmic form, chosen for its statistics and ease of use. The adjusted linear equation for estimating total height as a function of bole height presented R2adj = 0.88 and Syx% = 5%, characterizing a strong relationship between these two variables.

  4. Regular use of dental care services by adults: patterns of utilization and types of services

    Directory of Open Access Journals (Sweden)

    Maria Beatriz J. Camargo

    2009-09-01

    Full Text Available The aim of this study was to estimate the prevalence of regular use of dental services by adults and to identify groups in which this behavior is more frequent. A cross-sectional population-based study was carried out in Pelotas, southern Brazil, including 2,961 individuals who answered a standardized questionnaire. Overall prevalence of regular use of dental services was 32.8%. The following variables were positively associated with regular use: female gender, age > 60 years, no partner, high educational level, high economic status, private service user, good/excellent self-rated oral health, and no perceived need for dental treatment. Those who had received orientation on prevention and expressed a favorable view of the dentist had higher odds of being regular users. Especially among lower-income individuals, regular use was infrequent (15%). When the analysis was restricted to users of public dental services, schooling remained positively associated with the outcome. Dental services, especially in the public sector, should develop strategies to increase regular and preventive use.

  5. Dynamics of coherent states in regular and chaotic regimes of the non-integrable Dicke model

    Science.gov (United States)

    Lerma-Hernández, S.; Chávez-Carlos, J.; Bastarrachea-Magnani, M. A.; López-del-Carpio, B.; Hirsch, J. G.

    2018-04-01

    The quantum dynamics of initial coherent states is studied in the Dicke model and correlated with the dynamics, regular or chaotic, of their classical limit. Analytical expressions for the survival probability, i.e. the probability of finding the system in its initial state at time t, are provided in the regular regions of the model. The results for regular regimes are compared with those of the chaotic ones. It is found that initial coherent states in regular regions have a much longer equilibration time than those located in chaotic regions. The properties of the distributions of the initial coherent states in the Hamiltonian eigenbasis are also studied. It is found that for regular states the components with non-negligible contributions are organized in sequences of energy levels distributed according to Gaussian functions. In the case of chaotic coherent states, the energy components do not have a simple structure and the number of participating energy levels is larger than in the regular cases.
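
    In the eigenbasis expansion |psi(0)> = sum_k c_k |E_k>, the survival probability quoted above takes the closed form P(t) = |sum_k |c_k|^2 exp(-i E_k t)|^2 (hbar = 1). A short numpy illustration with an assumed Gaussian profile for the components |c_k|^2, mimicking the regular-regime structure described in the abstract:

```python
import numpy as np

# hypothetical spectrum and Gaussian-distributed components (hbar = 1)
E = np.linspace(0.0, 10.0, 400)               # eigenenergies E_k
p = np.exp(-0.5 * ((E - 5.0) / 0.5) ** 2)     # |c_k|^2, Gaussian profile
p /= p.sum()

def survival_probability(t, E, p):
    """P(t) = |sum_k |c_k|^2 exp(-i E_k t)|^2."""
    amp = np.sum(p * np.exp(-1j * np.outer(t, E)), axis=1)
    return np.abs(amp) ** 2

t = np.linspace(0.0, 20.0, 500)
P = survival_probability(t, E, p)
print(P[0], P.min())    # P(0) = 1, followed by decay and partial revivals
```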

  6. Bilinear Regularized Locality Preserving Learning on Riemannian Graph for Motor Imagery BCI.

    Science.gov (United States)

    Xie, Xiaofeng; Yu, Zhu Liang; Gu, Zhenghui; Zhang, Jun; Cen, Ling; Li, Yuanqing

    2018-03-01

    In off-line training of motor imagery-based brain-computer interfaces (BCIs), the local information contained in test data can be used, in addition to the training data, to enhance the generalization performance of the learned classifier. Further considering that the covariance matrices of electroencephalogram (EEG) signals lie on a Riemannian manifold, in this paper we construct a Riemannian graph to incorporate the information of both training and test data into processing. The adjacency and weights in the Riemannian graph are determined by the geodesic distance on the Riemannian manifold. Then, a new graph embedding algorithm, called bilinear regularized locality preserving (BRLP), is derived upon the Riemannian graph to address the problem of high dimensionality frequently arising in BCIs. With a proposed regularization term encoding prior information about the EEG channels, BRLP obtains more robust performance. Finally, an efficient classification algorithm based on the extreme learning machine is proposed to operate on the tangent space of the learned embedding. Experimental evaluations on BCI competition and in-house data sets reveal that the proposed algorithms obtain significantly higher performance than many competing algorithms using the same filtering process.

  7. Regular examinations for toxic maculopathy in long-term chloroquine or hydroxychloroquine users.

    Science.gov (United States)

    Nika, Melisa; Blachley, Taylor S; Edwards, Paul; Lee, Paul P; Stein, Joshua D

    2014-10-01

    According to evidence-based, expert recommendations, long-term users of chloroquine or hydroxychloroquine sulfate should undergo regular visits to eye care providers and diagnostic testing to check for maculopathy. To determine whether patients with rheumatoid arthritis (RA) or systemic lupus erythematosus (SLE) taking chloroquine or hydroxychloroquine are regularly visiting eye care providers and being screened for maculopathy. Patients with RA or SLE who were continuously enrolled in a particular managed care network for at least 5 years between January 1, 2001, and December 31, 2011, were studied. Patients' amount of chloroquine or hydroxychloroquine use in the 5 years since the initial RA or SLE diagnosis was calculated, along with their number of eye care visits and diagnostic tests for maculopathy. Those at high risk for maculopathy were identified. Logistic regression was performed to assess potential factors associated with regular eye care visits (annual visits in ≥3 of 5 years) among chloroquine or hydroxychloroquine users, including those at highest risk for maculopathy. Among chloroquine or hydroxychloroquine users and those at high risk for toxic maculopathy, the proportions with regular eye care visits and diagnostic testing, as well as the likelihood of regular eye care visits. Among 18 051 beneficiaries with RA or SLE, 6339 (35.1%) had at least 1 record of chloroquine or hydroxychloroquine use, and 1409 (7.8%) had used chloroquine or hydroxychloroquine for at least 4 years. Among those at high risk for maculopathy, 27.9% lacked regular eye care visits, 6.1% had no visits to eye care providers, and 34.5% had no diagnostic testing for maculopathy during the 5-year period. Among high-risk patients, each additional month of chloroquine or hydroxychloroquine use was associated with a 2.0% increased likelihood of regular eye care (adjusted odds ratio, 1.02; 95% CI, 1.01-1.03). High-risk patients whose SLE or RA was managed by rheumatologists had a 77

  8. Troubles detected during regular inspection of No.1 plant in Oi Power Station, Kansai Electric Power Co., Inc

    International Nuclear Information System (INIS)

    1990-01-01

    No. 1 plant in Oi Power Station, Kansai Electric Power Co., Inc. is a PWR plant with a rated output of 1175 MW; its regular inspection has been under way since August 14, 1989. When eddy-current flaw detection was carried out on all heating tubes of the steam generators (11,426, excluding tubes already plugged), significant indications were observed in the tube supporting plate part of 279 tubes, at the boundary of the tube plate expanded part of 34 tubes, and in the tube plate expanded part of 99 tubes, 411 heating tubes in total (all on the high-temperature side). Consequently, it was decided to repair 367 tubes using sleeves and to plug the other 44 tubes. In addition, among the heating tubes plugged in the past, it was decided to remove the plugs from 161 tubes and, after repairing them with sleeves, to return them to service. Total number of heating tubes: 13,552 (3,388 tubes x 4 steam generators); number of plugged tubes: 2,009 (a decrease of 117 this time); plugging ratio: 14.8%. (K.I.)

  9. Higher derivative regularization and chiral anomaly

    International Nuclear Information System (INIS)

    Nagahama, Yoshinori.

    1985-02-01

    A higher derivative regularization which automatically leads to the consistent chiral anomaly is analyzed in detail. It explicitly breaks all the local gauge symmetry but preserves global chiral symmetry and leads to the chirally symmetric consistent anomaly. This regularization thus clarifies the physics content contained in the consistent anomaly. We also briefly comment on the application of this higher derivative regularization to massless QED. (author)

  10. Neutrino masses and their ordering: global data, priors and models

    Science.gov (United States)

    Gariazzo, S.; Archidiacono, M.; de Salas, P. F.; Mena, O.; Ternes, C. A.; Tórtola, M.

    2018-03-01

    We present a full Bayesian analysis of the combination of current neutrino oscillation, neutrinoless double beta decay and Cosmic Microwave Background observations. Our major goal is to carefully investigate the possibility to single out one neutrino mass ordering, namely Normal Ordering or Inverted Ordering, with current data. Two possible parametrizations (three neutrino masses versus the lightest neutrino mass plus the two oscillation mass splittings) and priors (linear versus logarithmic) are exhaustively examined. We find that the preference for NO is only driven by neutrino oscillation data. Moreover, the values of the Bayes factor indicate that the evidence for NO is strong only when the scan is performed over the three neutrino masses with logarithmic priors; for every other combination of parameterization and prior, the preference for NO is only weak. As a by-product of our Bayesian analyses, we are able to (a) compare the Bayesian bounds on the neutrino mixing parameters to those obtained by means of frequentist approaches, finding a very good agreement; (b) determine that the lightest neutrino mass plus the two mass splittings parametrization, motivated by the physical observables, is strongly preferred over the three neutrino mass eigenstates scan and (c) find that logarithmic priors guarantee a weakly-to-moderately more efficient sampling of the parameter space. These results establish the optimal strategy to successfully explore the neutrino parameter space, based on the use of the oscillation mass splittings and a logarithmic prior on the lightest neutrino mass, when combining neutrino oscillation data with cosmology and neutrinoless double beta decay. We also show that the limits on the total neutrino mass ∑ mν can change dramatically when moving from one prior to the other. These results have profound implications for future studies on the neutrino mass ordering, as they crucially state the need for self-consistent analyses which explore the

  11. Regularity theorem for functions that are extremal to Paley inequality ...

    African Journals Online (AJOL)

    Regularity theorem for functions that are extremal to Paley inequality. Seid Mohammed. Abstract. In this paper we study the asymptotic behavior of functions that are extremal to the inequality introduced by Paley (1932) via a normal family of subharmonic functions. SINET: Ethiopian Journal of Science Volume 24, No.

  12. Multiple graph regularized protein domain ranking

    KAUST Repository

    Wang, Jim Jing-Yan

    2012-11-19

    Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.

  14. Multiple graph regularized protein domain ranking

    Directory of Open Access Journals (Sweden)

    Wang Jim

    2012-11-01

    Full Text Available Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most of the protein domain ranking methods. Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned with ranking scores jointly and automatically, by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves a better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to solving protein domain ranking applications.

  15. The inclusion process in regular schools: an overview of perceptions

    Directory of Open Access Journals (Sweden)

    Anselmo Barce Furini

    2011-11-01

    Full Text Available This research article presents some considerations on the process of including children with special educational needs (SEN) in regular education, based on a master's research study carried out at PUCRS between 2004 and 2006. The study, entitled "The inclusion process: the child with special educational needs and those involved," aimed to investigate the perceptions of those involved in the inclusion process regarding the process itself and the children with SEN in the context of the early grades of elementary school. It was a qualitative study with an ethnographic approach. The results show that the inclusion of students with SEN in regular schools is fostering changes in the pedagogical structure, the curriculum, daily planning, and the physical space. Aspects that hinder and facilitate the inclusion process were also identified. It was found that families approve of their children's inclusion in regular schools, and that educators regard the inclusion process as something new that demands changes in the school and in how differences are viewed. The behavior of the children with SEN ranged from distraction to engagement in pedagogical activities, their predominant form of communication was verbal, and their relationship with the group oscillated between conflicts and moments of good rapport. The children with SEN who were studied showed that they felt comfortable in the regular school, communicated and related to the group, and appeared included. The inclusion process is not something ready-made; it must be built in each context by its participants. Keywords: Inclusion. Children with Special Educational Needs. Family. Educators. Society. Differences.

  16. 75 FR 53966 - Regular Meeting

    Science.gov (United States)

    2010-09-02

    FARM CREDIT SYSTEM INSURANCE CORPORATION Regular Meeting. AGENCY: Farm Credit System Insurance Corporation Board. SUMMARY: Notice is hereby given of the regular meeting of the Farm Credit System Insurance Corporation Board (Board). DATE AND TIME: The meeting of the Board will be held at the offices of the Farm...

  17. Inverse analysis of non-uniform temperature distributions using multispectral pyrometry

    Science.gov (United States)

    Fu, Tairan; Duan, Minghao; Tian, Jibin; Shi, Congling

    2016-05-01

    Optical diagnostics can be used to obtain sub-pixel temperature information in remote sensing. A multispectral pyrometry method was developed using multiple spectral radiation intensities to deduce the temperature area distribution in the measurement region. The method transforms a spot multispectral pyrometer with a fixed field of view into a pyrometer with enhanced spatial resolution that can give sub-pixel temperature information from a "one pixel" measurement region. A temperature area fraction function was defined to represent the spatial temperature distribution in the measurement region. The method is illustrated by simulations of a multispectral pyrometer with a spectral range of 8.0-13.0 μm measuring a non-isothermal region with a temperature range of 500-800 K in the spot pyrometer field of view. The inverse algorithm for the sub-pixel temperature distribution (temperature area fractions) in the "one pixel" verifies this multispectral pyrometry method. The results show that an improved Levenberg-Marquardt algorithm is effective for this ill-posed inverse problem with relative errors in the temperature area fractions of (-3%, 3%) for most of the temperatures. The analysis provides a valuable reference for the use of spot multispectral pyrometers for sub-pixel temperature distributions in remote sensing measurements.
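
    A schematic reconstruction of the inversion described above: the measured spectral intensities are modeled as an area-fraction-weighted sum of blackbody radiances over a fixed grid of temperatures, and the temperature area fractions are recovered by nonlinear least squares. Here scipy's bounded trust-region solver stands in for the improved Levenberg-Marquardt variant mentioned in the abstract, and emissivity and instrument effects are ignored.

```python
import numpy as np
from scipy.optimize import least_squares

C1, C2 = 1.191e8, 1.4388e4      # radiation constants (micrometer-based units)

def planck(lam_um, T):
    """Blackbody spectral radiance at wavelength lam_um (micrometers)."""
    return C1 / lam_um**5 / (np.exp(C2 / (lam_um * T)) - 1.0)

lam = np.linspace(8.0, 13.0, 30)              # spectral channels, 8-13 um
T_grid = np.array([500., 600., 700., 800.])   # assumed sub-pixel temperatures
B = np.stack([planck(lam, T) for T in T_grid], axis=1)

a_true = np.array([0.4, 0.3, 0.2, 0.1])       # true area fractions
noise = 1 + 0.01 * np.random.default_rng(1).standard_normal(lam.size)
I_meas = (B @ a_true) * noise                 # simulated pyrometer signal

def residuals(a):
    # data misfit plus a soft constraint that the fractions sum to one
    return np.concatenate([B @ a - I_meas, [10.0 * (a.sum() - 1.0)]])

sol = least_squares(residuals, x0=np.full(4, 0.25), bounds=(0.0, 1.0))
print(np.round(sol.x, 3))                     # recovered area fractions
```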

  18. EBaLM-THP - A neural network thermohydraulic prediction model of advanced nuclear system components

    International Nuclear Information System (INIS)

    Ridluan, Artit; Manic, Milos; Tokuhiro, Akira

    2009-01-01

    In light of worldwide energy demand, economics, and consensus concern regarding climate change, nuclear power - specifically near-term nuclear power plant designs - is receiving increased engineering attention. However, as the nuclear industry emerges from a lull in component modeling and analyses, optimization, for example using ANNs, has received little research attention. This paper presents a neural network approach, EBaLM, based on a specific combination of two training algorithms, error-back propagation (EBP) and Levenberg-Marquardt (LM), applied to the problem of thermohydraulic prediction (THP) in advanced nuclear heat exchangers (HXs). The suitability of the EBaLM-THP algorithm was tested on two different reference problems in thermohydraulic design analysis: convective heat transfer of supercritical CO2 through a single tube, and convective heat transfer through a printed circuit heat exchanger (PCHE) using CO2. Further, a comparison of EBaLM-THP with a polynomial fitting approach was considered. Within the defined reference problems, the neural network approach generated good results in both cases, in spite of highly fluctuating trends in the dataset used. In fact, the neural network approach demonstrated a cumulative measure of error one to three orders of magnitude smaller than that produced via polynomial fitting of 10th order.

  19. Highly efficient simultaneous ultrasonic assisted adsorption of brilliant green and eosin B onto ZnS nanoparticles loaded activated carbon: Artificial neural network modeling and central composite design optimization

    Science.gov (United States)

    Jamshidi, M.; Ghaedi, M.; Dashtian, K.; Ghaedi, A. M.; Hajati, S.; Goudarzi, A.; Alipanahpour, E.

    2016-01-01

    In this work, central composite design (CCD) combined with response surface methodology (RSM) and the desirability function approach (DFA) provides useful information about the operating conditions and about the interactions and main effects of the variables involved in the simultaneous ultrasound-assisted removal of brilliant green (BG) and eosin B (EB) by zinc sulfide nanoparticles loaded on activated carbon (ZnS-NPs-AC). The spectral overlap between the BG and EB dyes was extensively reduced and/or eliminated by a derivative spectrophotometric method, while a multi-layer artificial neural network (ML-ANN) model trained with the Levenberg-Marquardt (LM) algorithm was used to build a predictive model of BG and EB removal. The ANN was able to forecast the simultaneous BG and EB removal efficiently, as confirmed by reasonable numerical values, i.e., an MSE of 0.0021 with R2 of 0.9589 and an MSE of 0.0022 with R2 of 0.9455 for the testing data set, respectively. The results reveal acceptable agreement between the experimental data and the ANN predictions. The Langmuir model best fit the experimental data for BG and EB removal, indicating a high, economical, and profitable adsorption capacity (258.7 and 222.2 mg g-1) that supports and confirms its applicability for wastewater treatment.
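
    For reference, the Langmuir isotherm singled out above, q_e = q_max K_L C_e / (1 + K_L C_e), can be fitted directly with a Levenberg-Marquardt least-squares routine. A minimal sketch with placeholder data, not values from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(Ce, qmax, KL):
    """Langmuir isotherm: adsorbed amount vs. equilibrium concentration."""
    return qmax * KL * Ce / (1.0 + KL * Ce)

# placeholder equilibrium data (Ce in mg/L, qe in mg/g) -- not from the study
Ce = np.array([5., 10., 20., 40., 80., 160.])
qe = np.array([60., 105., 160., 205., 235., 250.])

# curve_fit uses the Levenberg-Marquardt algorithm when no bounds are given
(qmax, KL), _ = curve_fit(langmuir, Ce, qe, p0=[250.0, 0.05])
print(f"q_max = {qmax:.1f} mg/g, K_L = {KL:.4f} L/mg")
```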

  20. Short-Term Solar Irradiance Forecasting Model Based on Artificial Neural Network Using Statistical Feature Parameters

    Directory of Open Access Journals (Sweden)

    Hongshan Zhao

    2012-05-01

    Full Text Available Short-term solar irradiance forecasting (STSIF) is of great significance for the optimal operation and power prediction of grid-connected photovoltaic (PV) plants. However, STSIF is very complex to handle due to the random and nonlinear characteristics of solar irradiance under changeable weather conditions. An Artificial Neural Network (ANN) is suitable for STSIF modeling, and many research works on this topic have been presented, but the conciseness and robustness of the existing models still need to be improved. After discussing the relation between weather variations and irradiance, the characteristics of the statistical feature parameters of irradiance under different weather conditions are identified. A novel ANN model using statistical feature parameters (ANN-SFP) for STSIF is proposed in this paper. The input vector is reconstructed with several statistical feature parameters of irradiance and the ambient temperature. Thus sufficient information can be effectively extracted from relatively few inputs and the model complexity is reduced. The model structure is determined by cross-validation (CV), and the Levenberg-Marquardt algorithm (LMA) is used for the network training. Simulations are carried out to validate and compare the proposed model with the conventional ANN model using historical data series (ANN-HDS), and the results indicate that the forecast accuracy is obviously improved under variable weather conditions.

  1. Modeling of Flexible Polyurethane Foam Shrinkage for Bra Cup Moulding Process Control

    Directory of Open Access Journals (Sweden)

    Long Wu

    2018-04-01

    Full Text Available Nowadays, moulding technology has become a remarkable manufacturing process in the intimate apparel industry. Polyurethane (PU) foam sheets are used to mould three-dimensional (3D) seamless bra cups of various softness and shapes, which eliminates bulky seams and reduces production costs. However, it has been challenging to accurately and effectively control the moulding process and bra cup thickness. In this study, the theoretical mechanism of heat transfer and the thermal conductivity of PU foams are first examined. Experimental studies are carried out to investigate the changes in the foam materials at various moulding conditions (viz., temperatures and lengths of dwell time) in terms of surface morphology and thickness by using electron and optical microscopy. Based on the theoretical and experimental investigations of the thermal conductivity of the foam materials, empirical equations for the shrinkage ratio and thermal conduction of the foam materials were established. A regression model to predict flexible PU foam shrinkage during the bra cup moulding process was formulated by using the Levenberg-Marquardt nonlinear least-squares algorithm and verified for accuracy. This study therefore provides an effective approach that optimizes control of the bra cup moulding process and assures the ultimate quality and thickness of moulded foam cups.
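
    A hedged sketch of such a regression: the shrinkage-ratio form below (saturating in dwell time, Arrhenius-like in temperature) is an illustrative assumption rather than the empirical equation derived in the paper; scipy's method='lm' invokes the Levenberg-Marquardt algorithm.

```python
import numpy as np
from scipy.optimize import least_squares

def shrinkage(params, T, t):
    """Hypothetical shrinkage-ratio model: saturating in dwell time t (s),
    Arrhenius-like in moulding temperature T (K)."""
    a, b, c = params
    return a * (1.0 - np.exp(-b * t)) * np.exp(-c / T)

# placeholder observations (temperature, dwell time, measured shrinkage ratio)
T = np.array([443., 443., 463., 463., 483., 483.])
t = np.array([60., 120., 60., 120., 60., 120.])
s_obs = np.array([0.035, 0.049, 0.051, 0.068, 0.070, 0.091])

res = least_squares(lambda p: shrinkage(p, T, t) - s_obs,
                    x0=[0.1, 0.02, 400.0], method='lm')  # Levenberg-Marquardt
print(np.round(res.x, 4))
```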

  2. Can a one-layer optical skin model including melanin and inhomogeneously distributed blood explain spatially resolved diffuse reflectance spectra?

    Science.gov (United States)

    Karlsson, Hanna; Pettersson, Anders; Larsson, Marcus; Strömberg, Tomas

    2011-02-01

    Model-based analysis of calibrated diffuse reflectance spectroscopy can be used for determining the oxygenation and concentration of skin chromophores. This study aimed at assessing the effect of including melanin in addition to hemoglobin (Hb) as a chromophore, and of compensating for inhomogeneously distributed blood (vessel packaging), in a single-layer skin model. Spectra from four humans were collected during different provocations using a two-channel fiber optic probe with source-detector separations of 0.4 and 1.2 mm. Absolutely calibrated spectra using data from either a single distance or both distances were analyzed using inverse Monte Carlo for light transport and Levenberg-Marquardt for non-linear fitting. The model fitting was excellent using a single distance. However, the estimated model failed to explain spectra from the other distance, and the two-distance model did not fit the data well at either distance. Model fitting was significantly improved by including melanin and vessel packaging. The most prominent effect when fitting data from the larger separation compared to the smaller separation was a different light scattering decay with wavelength, while the tissue fraction of Hb and the saturation were similar. For modeling spectra at both distances, we propose using either a multi-layer skin model or a more advanced model for the scattering phase function.

  3. An internal reference model-based PRF temperature mapping method with Cramer-Rao lower bound noise performance analysis.

    Science.gov (United States)

    Li, Cheng; Pan, Xinyi; Ying, Kui; Zhang, Qiang; An, Jing; Weng, Dehe; Qin, Wen; Li, Kuncheng

    2009-11-01

    The conventional phase difference method for MR thermometry suffers from disturbances caused by the presence of lipid protons, motion-induced error, and field drift. A signal model based on a multi-echo gradient echo (GRE) sequence is presented that uses the fat signal as an internal reference to overcome these problems. The internal reference signal model is fit to the water and fat signals by the extended Prony algorithm and the Levenberg-Marquardt algorithm to estimate the chemical shifts between water and fat, which contain the temperature information. A noise analysis of the signal model was conducted using the Cramer-Rao lower bound to evaluate the noise performance of various algorithms, the effects of imaging parameters, and the influence of the water:fat signal ratio in a sample on the temperature estimate. Comparison of the calculated temperature map with thermocouple temperature measurements shows that the maximum temperature estimation error is 0.614 degrees C, with a standard deviation of 0.06 degrees C, confirming the feasibility of this model-based temperature mapping method. The influence of the sample water:fat signal ratio on the accuracy of the temperature estimate was evaluated in a water-fat mixed phantom experiment, with an optimal ratio of approximately 0.66:1.

  4. Development of a Nonlinear Soft-Sensor Using a GMDH Network for a Refinery Crude Distillation Tower

    Science.gov (United States)

    Fujii, Kenzo; Yamamoto, Toru

    In atmospheric distillation processes, stabilization of the process is required in order to optimize the crude-oil composition to match product market conditions. However, the process control systems sometimes fall into unstable states when unexpected disturbances are introduced, and these unusual phenomena have had an undesirable effect on certain products. Furthermore, a useful chemical engineering model has not yet been established for these phenomena, which remains a serious problem in the atmospheric distillation process. This paper describes a new modeling scheme to predict unusual phenomena in the atmospheric distillation process using the GMDH (Group Method of Data Handling) network, a type of network model in which the model structure can be determined systematically. The least squares method has commonly been utilized to determine the weight coefficients (model parameters), but the expected estimation accuracy is not always achieved when the sum of squared errors between the measured values and the estimates is evaluated. Therefore, instead of evaluating the sum of squared errors, the sum of absolute errors is introduced, and the Levenberg-Marquardt method is employed to determine the model parameters. The effectiveness of the proposed method is evaluated by foaming prediction during the crude oil switching operation in the atmospheric distillation process.
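
    A sketch of a single GMDH building block under the absolute-error idea described above: the partial description is the standard two-input Ivakhnenko quadratic, and scipy's smooth 'soft_l1' loss stands in for the paper's sum-of-absolute-errors criterion (plain Levenberg-Marquardt applies to squared residuals).

```python
import numpy as np
from scipy.optimize import least_squares

def gmdh_neuron(w, x1, x2):
    """Ivakhnenko polynomial: the standard two-input GMDH partial description."""
    a0, a1, a2, a3, a4, a5 = w
    return a0 + a1*x1 + a2*x2 + a3*x1*x2 + a4*x1**2 + a5*x2**2

rng = np.random.default_rng(2)
x1, x2 = rng.random(200), rng.random(200)
y = 1.0 + 2.0*x1 - x2 + 0.5*x1*x2 + rng.laplace(scale=0.05, size=200)

# soft_l1 is a smooth surrogate for the sum of absolute errors, making the
# fit robust to the heavy-tailed errors that squared-error criteria overweight
fit = least_squares(lambda w: gmdh_neuron(w, x1, x2) - y,
                    x0=np.zeros(6), loss='soft_l1', f_scale=0.1)
print(np.round(fit.x, 3))
```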

  5. Camera Calibration of Stereo Photogrammetric System with One-Dimensional Optical Reference Bar

    International Nuclear Information System (INIS)

    Xu, Q Y; Ye, D; Che, R S; Qi, X; Huang, Y

    2006-01-01

    To carry out precise measurement of large-scale complex workpieces, accurate calibration of the stereo photogrammetric system has become more and more important. This paper proposes a flexible and reliable camera calibration for a stereo photogrammetric system, based on quaternions, using a one-dimensional optical reference bar that carries three small collinear infrared LED marks whose separations have been precisely calibrated. By moving the optical reference bar to a number of locations/orientations over the measurement volume, we calibrate the stereo photogrammetric system using the geometric constraint of the optical reference bar. The extrinsic parameter calibration consists of linear parameter estimation based on quaternions and nonlinear refinement based on the maximum likelihood criterion. First, we linearly estimate the extrinsic parameters of the stereo photogrammetric system based on quaternions. Then, with the quaternion results as initial values, we refine the extrinsic parameters under the maximum likelihood criterion with the Levenberg-Marquardt algorithm. During the calibration process, the light intensity is automatically controlled and the exposure time optimized to obtain a uniform intensity profile of the image points at different distances and a higher S/N ratio. The experimental results prove that the proposed calibration method is flexible and valid, and obtains good results in application.
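
    A condensed sketch of the nonlinear refinement step: a quaternion parameterizes the rotation, and the reprojection residuals of the reference-bar marks are minimized with Levenberg-Marquardt. The unit-focal pinhole model and the data below are simplified placeholders for the full stereo calibration.

```python
import numpy as np
from scipy.optimize import least_squares

def quat_rotate(q, pts):
    """Rotate points by quaternion q = (w, x, y, z), normalized internally."""
    q = q / np.linalg.norm(q)
    w, v = q[0], q[1:]
    t = 2.0 * np.cross(v, pts)            # p' = p + 2w(v x p) + 2 v x (v x p)
    return pts + w * t + np.cross(v, t)

def project(pts):
    """Normalized pinhole projection (unit focal length, no distortion)."""
    return pts[:, :2] / pts[:, 2:3]

def residuals(params, P_world, uv_obs):
    q, t = params[:4], params[4:]
    return (project(quat_rotate(q, P_world) + t) - uv_obs).ravel()

# placeholder: LED mark positions at several reference-bar poses
rng = np.random.default_rng(3)
P_world = rng.uniform(-1, 1, (30, 3)) + [0., 0., 5.]
q_true = np.array([0.99, 0.05, -0.08, 0.03])
t_true = np.array([0.1, -0.2, 0.3])
uv_obs = project(quat_rotate(q_true, P_world) + t_true)

sol = least_squares(residuals, x0=np.r_[1., 0, 0, 0, 0, 0, 0],
                    args=(P_world, uv_obs), method='lm')
print(sol.x[:4] / np.linalg.norm(sol.x[:4]), sol.x[4:])
```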

  6. Prediction Model for Predicting Powdery Mildew using ANN for Medicinal Plant— Picrorhiza kurrooa

    Science.gov (United States)

    Shivling, V. D.; Ghanshyam, C.; Kumar, Rakesh; Kumar, Sanjay; Sharma, Radhika; Kumar, Dinesh; Sharma, Atul; Sharma, Sudhir Kumar

    2017-02-01

    A plant disease forecasting system is important because it can be used to predict disease and to warn farmers in advance so that they can protect their crops from becoming infected. The forecasting system predicts the risk of infection for a crop using the environmental factors that favor the germination of the disease. In this study, an artificial neural network based system for predicting the risk of powdery mildew in Picrorhiza kurrooa was developed using the Levenberg-Marquardt backpropagation algorithm with a single hidden layer of ten nodes. Temperature and duration of wetness are the major environmental factors that favor infection. Experimental data were used as the training set, and a percentage of the data was used for testing and validation. The performance of the system was measured in the form of the coefficient of correlation (R), coefficient of determination (R2), mean square error, and root mean square error. An interface was developed for simulating the network; using this interface, the network is given the temperature and wetness duration and predicts the level of risk for those input values.
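
    Mainstream neural-network toolkits rarely ship Levenberg-Marquardt, but a network this small can be trained directly as a nonlinear least-squares problem. A minimal sketch with the architecture given above (two inputs, one hidden layer of ten nodes) on synthetic placeholder data, not the study's measurements:

```python
import numpy as np
from scipy.optimize import least_squares

H = 10  # hidden nodes, as in the abstract

def unpack(w):
    W1 = w[:2*H].reshape(H, 2); b1 = w[2*H:3*H]
    W2 = w[3*H:4*H];            b2 = w[4*H]
    return W1, b1, W2, b2

def forward(w, X):
    W1, b1, W2, b2 = unpack(w)
    h = np.tanh(X @ W1.T + b1)     # hidden layer
    return h @ W2 + b2             # linear output: infection risk

# synthetic data: risk grows with leaf wetness at moderate temperatures
rng = np.random.default_rng(4)
X = np.column_stack([rng.uniform(5, 30, 300),    # temperature (deg C)
                     rng.uniform(0, 24, 300)])   # wetness duration (h)
y = 1/(1 + np.exp(-(X[:, 1] - 10)/3)) * np.exp(-((X[:, 0] - 18)/8)**2)

Xn = (X - X.mean(0)) / X.std(0)                  # normalize inputs
w0 = 0.1 * rng.standard_normal(4*H + 1)
fit = least_squares(lambda w: forward(w, Xn) - y, w0, method='lm')

r = np.corrcoef(forward(fit.x, Xn), y)[0, 1]
print(f"R = {r:.3f}, R2 = {r*r:.3f}")
```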

  7. Artificial neural network based modeling of performance characteristics of deep well pumps with splitter blade

    International Nuclear Information System (INIS)

    Goelcue, Mustafa

    2006-01-01

    Experimental studies were carried out to investigate the effects of splitter blade length (25%, 35%, 50%, 60% and 80% of the main blade length) on the pump characteristics of deep well pumps for different blade numbers (z = 3, 4, 5, 6 and 7). In this study, an artificial neural network (ANN) was used for modeling the performance of deep well pumps with splitter blades. Two hundred and ten experimental results were used for training and testing; forty-two patterns were randomly selected and used as the test data. The main parameters for the experiments are the blade number (z), non-dimensional splitter blade length (L̄), flow rate (Q, l/s), head (Hm, m), efficiency (η, %) and power (Pe, kW). z, L̄ and Q were used as the input layer, and Hm and η were used as the output layer. The best training algorithm and number of neurons were determined. Training of the network was performed using the Levenberg-Marquardt (LM) algorithm. To determine the effect of the transfer function, different ANN models were trained, and the results of these models were compared. Statistical measures, the fraction of variance (R2) and the root mean squared error (RMSE), were used for the comparison.

  8. A soft computing scheme incorporating ANN and MOV energy in fault detection, classification and distance estimation of EHV transmission line with FSC.

    Science.gov (United States)

    Khadke, Piyush; Patne, Nita; Singh, Arvind; Shinde, Gulab

    2016-01-01

    In this article, a novel and accurate scheme for fault detection, classification, and fault distance estimation for a fixed series compensated transmission line is proposed. The proposed scheme is based on an artificial neural network (ANN) and metal oxide varistor (MOV) energy, employing the Levenberg-Marquardt training algorithm. The novelty of this scheme is the use of the MOV energy signals of the fixed series capacitors (FSC) as input to train the ANN; such an approach has not been used in fault analysis algorithms in the past few decades. The proposed scheme uses only single-end measurements of the MOV energy signals in all three phases over one cycle from the occurrence of a fault. These MOV energy signals are then fed as input to the ANN for fault distance estimation. The feasibility and reliability of the proposed scheme have been evaluated for all ten fault types in a test power system model at different fault inception angles over numerous fault locations. Real transmission system parameters of the 3-phase 400 kV Wardha-Aurangabad transmission line (400 km) with 40% FSC at the Power Grid Wardha Substation, India, are considered in this research. Extensive simulation experiments show that the proposed scheme provides quite accurate results, demonstrating a complete protection scheme with high accuracy, simplicity, and robustness.

  9. Time series modeling with pruned multi-layer perceptron and 2-stage damped least-squares method

    International Nuclear Information System (INIS)

    Voyant, Cyril; Tamas, Wani; Paoli, Christophe; Balu, Aurélia; Muselli, Marc; Nivet, Marie-Laure; Notton, Gilles

    2014-01-01

    A Multi-Layer Perceptron (MLP) defines a family of artificial neural networks often used in time series modeling and forecasting. Because of its "black box" aspect, many researchers refuse to use it. Moreover, the optimization (often based on an exhaustive approach in which "all" configurations are tested) and learning phases of this artificial intelligence tool (often based on the Levenberg-Marquardt algorithm, LMA) are weaknesses of this approach (exhaustiveness and local minima). These two tasks must be repeated for each new problem studied, making the process long, laborious, and not systematically robust. In this paper a pruning process is proposed. During the training phase, this method carries out input selection by activating (or not) inter-node connections in order to verify whether forecasting is improved. We propose to use the popular damped least-squares method iteratively to activate inputs and neurons: a first pass is applied to 10% of the learning sample to determine the weights significantly different from 0 and delete the others; then a classical batch process based on the LMA is used with the new MLP. The validation is done using 25 measured meteorological time series, cross-comparing the prediction results of the classical LMA and the 2-stage LMA.
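
    A compact sketch of the 2-stage idea on a generic parameter vector: a first damped least-squares pass on 10% of the learning sample, a mask that deactivates the weights not significantly different from zero (here simplified to a magnitude threshold), then a classical full-batch refit of the surviving connections.

```python
import numpy as np
from scipy.optimize import least_squares

def lm_fit(residual_fn, w0, mask):
    """Damped least squares (Levenberg-Marquardt) over unmasked weights only."""
    active = np.flatnonzero(mask)
    def r(wa):
        w = np.zeros_like(w0); w[active] = wa
        return residual_fn(w)
    sol = least_squares(r, w0[active], method='lm')
    w = np.zeros_like(w0); w[active] = sol.x
    return w

# toy model: linear in 8 inputs, only 3 of which actually matter
rng = np.random.default_rng(5)
X = rng.standard_normal((400, 8))
y = X @ np.array([1.5, 0, 0, -2.0, 0, 0, 0.7, 0]) + 0.05*rng.standard_normal(400)

sub = slice(0, 40)                                 # ~10% of the learning sample
w1 = lm_fit(lambda w: X[sub] @ w - y[sub],
            np.zeros(8), np.ones(8, bool))         # stage 1: small-sample pass
mask = np.abs(w1) > 0.2                            # prune near-zero weights
w2 = lm_fit(lambda w: X @ w - y, np.zeros(8), mask)  # stage 2: full-batch refit
print(mask.astype(int), np.round(w2, 2))
```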

  10. Momentum-weighted conjugate gradient descent algorithm for gradient coil optimization.

    Science.gov (United States)

    Lu, Hanbing; Jesmanowicz, Andrzej; Li, Shi-Jiang; Hyde, James S

    2004-01-01

    MRI gradient coil design is a type of nonlinear constrained optimization. A practical problem in transverse gradient coil design using the conjugate gradient descent (CGD) method is that wire elements move at different rates along the orthogonal directions (r, phi, z) and tend to cross, breaking the constraints. A momentum-weighted conjugate gradient descent (MW-CGD) method is presented to overcome this problem. This method takes advantage of the efficiency of the CGD method combined with momentum weighting, which is also an intrinsic property of the Levenberg-Marquardt algorithm, to adjust the step sizes along the three orthogonal directions. A water-cooled, 12.8 cm inner diameter, three-axis torque-balanced gradient coil for rat imaging was developed based on this method, with efficiencies of 2.13, 2.08, and 4.12 mT·m⁻¹·A⁻¹ along X, Y, and Z, respectively. Experimental data demonstrate that this method can improve efficiency by 40% and field uniformity by 27%. The method has also been applied to the design of a gradient coil for the human brain employing remote current return paths; the benefits of this design include improved gradient field uniformity and efficiency, with a shorter length than gradient coil designs using coaxial return paths.
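
    The abstract does not spell out the exact update, so the sketch below only illustrates the general idea of weighting a conjugate-gradient search direction with per-group momentum so that coordinates in different direction groups advance at comparable rates; it is a schematic, not the published MW-CGD.

```python
import numpy as np

def mw_cgd(grad, x0, groups, beta_m=0.9, lr=0.1, n_iter=300):
    """Conjugate gradient descent with per-group momentum-weighted steps.

    groups maps each coordinate to a direction group (e.g. 0=r, 1=phi, 2=z)."""
    n_grp = len(set(groups))
    x, g = x0.astype(float).copy(), grad(x0)
    d = -g
    m = np.array([np.linalg.norm(d[groups == k]) for k in range(n_grp)])
    for _ in range(n_iter):
        cur = np.array([np.linalg.norm(d[groups == k]) for k in range(n_grp)])
        m = beta_m * m + (1 - beta_m) * cur        # momentum per group
        scale = m.min() / (m + 1e-12)              # damp fast-moving groups
        x = x + lr * scale[groups] * d
        g_new = grad(x)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g + 1e-12))  # Polak-Ribiere
        d, g = -g_new + beta * d, g_new
    return x

# toy quadratic whose curvature differs strongly between direction groups
A = np.diag([1., 1., 25., 25., 100., 100.])
x = mw_cgd(lambda v: A @ v, x0=np.ones(6), groups=np.array([0, 0, 1, 1, 2, 2]))
print(np.round(x, 6))    # all groups approach the optimum at comparable rates
```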

  11. Multiobjective generalized extremal optimization algorithm for simulation of daylight illuminants

    Science.gov (United States)

    Kumar, Srividya Ravindra; Kurian, Ciji Pearl; Gomes-Borges, Marcos Eduardo

    2017-10-01

    Daylight illuminants are widely used as references for color quality testing and optical vision testing applications. Presently used daylight simulators make use of fluorescent bulbs that are not tunable and occupy more space inside the quality testing chambers. By designing a spectrally tunable LED light source with an optimal number of LEDs, cost, space, and energy can be saved. This paper describes an application of the generalized extremal optimization (GEO) algorithm for selecting the appropriate quantity and quality of LEDs that compose the light source. The multiobjective formulation of the algorithm seeks the best spectral simulation with minimum fitness error with respect to the target spectrum, a correlated color temperature (CCT) equal to that of the target spectrum, a high color rendering index (CRI), and the luminous flux required for testing applications. GEO is a global search algorithm based on the phenomena of natural evolution and is especially designed for complex optimization problems. Several simulations have been conducted to validate the performance of the algorithm. The methodology applied to model the LEDs, together with the theoretical basis for the CCT and CRI calculations, is presented in this paper. A comparative analysis of the M-GEO evolutionary algorithm against the conventional deterministic Levenberg-Marquardt algorithm is also presented.

  12. An Artificial Neural Network for Analyzing Overall Uniformity in Outdoor Lighting Systems

    Directory of Open Access Journals (Sweden)

    Antonio del Corte-Valiente

    2017-02-01

    Full Text Available Street lighting installations are an essential service for modern life due to their capability of creating a welcoming feeling at nighttime. Nevertheless, several studies have highlighted that it is possible to improve the quality of the light significantly by improving the uniformity of the illuminance. The main difficulty arises when trying to improve some of the installation's characteristics based only on statistical analysis of the light distribution. This paper presents a new algorithm that is able to obtain the overall illuminance uniformity in order to improve this sort of installation. To develop this algorithm it was necessary to perform a detailed study of all the elements that are part of street lighting installations. Because classification is one of the most important tasks in the application areas of artificial neural networks, we compared the performance of six types of training algorithms in a feed-forward neural network for analyzing the overall uniformity in outdoor lighting systems. We found that the algorithm that best minimizes the error is Levenberg-Marquardt back-propagation, which approximates the desired output of the training patterns. By means of this kind of algorithm, it is possible to help lighting professionals optimize the quality of street lighting installations.

  13. Work and family life of childrearing women workers in Japan: comparison of non-regular employees with short working hours, non-regular employees with long working hours, and regular employees.

    Science.gov (United States)

    Seto, Masako; Morimoto, Kanehisa; Maruyama, Soichiro

    2006-05-01

    This study assessed the working and family life characteristics, and the degree of domestic and work strain of female workers with different employment statuses and weekly working hours who are rearing children. Participants were the mothers of preschoolers in a large Japanese city. We classified the women into three groups according to the hours they worked and their employment conditions. The three groups were: non-regular employees working less than 30 h a week (n=136); non-regular employees working 30 h or more per week (n=141); and regular employees working 30 h or more a week (n=184). We compared among the groups the subjective values of work, financial difficulties, childcare and housework burdens, psychological effects, and strains such as work and family strain, work-family conflict, and work dissatisfaction. Regular employees were more likely to report job pressures and inflexible work schedules and to experience more strain related to work and family than non-regular employees. Non-regular employees were more likely to be facing financial difficulties. In particular, non-regular employees working longer hours tended to encounter socioeconomic difficulties and often lacked support from family and friends. Female workers with children may have different social backgrounds and different stressors according to their working hours and work status.

  14. Efficient regularization with wavelet sparsity constraints in photoacoustic tomography

    Science.gov (United States)

    Frikel, Jürgen; Haltmeier, Markus

    2018-02-01

    In this paper, we consider the reconstruction problem of photoacoustic tomography (PAT) with a flat observation surface. We develop a direct reconstruction method that employs regularization with wavelet sparsity constraints. To that end, we derive a wavelet-vaguelette decomposition (WVD) for the PAT forward operator and a corresponding explicit reconstruction formula in the case of exact data. In the case of noisy data, we combine the WVD reconstruction formula with soft-thresholding, which yields a spatially adaptive estimation method. We demonstrate that our method is statistically optimal for white random noise if the unknown function is assumed to lie in any Besov-ball. We present generalizations of this approach and, in particular, we discuss the combination of PAT-vaguelette soft-thresholding with a total variation (TV) prior. We also provide an efficient implementation of the PAT-vaguelette transform that leads to fast image reconstruction algorithms supported by numerical results.
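
    The soft-thresholding step at the core of the method is S_lambda(c) = sign(c) * max(|c| - lambda, 0), applied coefficient-wise. A minimal PyWavelets sketch on a synthetic image, using an ordinary wavelet decomposition in place of the vaguelette coefficients of the PAT forward operator:

```python
import numpy as np
import pywt

def soft_threshold_denoise(img, wavelet='db4', level=3, lam=0.3):
    """Soft-threshold the detail bands of a 2-D wavelet decomposition."""
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    out = [coeffs[0]]                              # keep approximation band
    for details in coeffs[1:]:
        out.append(tuple(pywt.threshold(d, lam, mode='soft') for d in details))
    return pywt.waverec2(out, wavelet)

rng = np.random.default_rng(6)
x = np.zeros((64, 64)); x[16:48, 16:48] = 1.0      # synthetic "pressure" map
noisy = x + 0.2 * rng.standard_normal(x.shape)
denoised = soft_threshold_denoise(noisy)
print(np.abs(denoised - x).mean() < np.abs(noisy - x).mean())   # True
```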

  15. Analysis of the iteratively regularized Gauss-Newton method under a heuristic rule

    Science.gov (United States)

    Jin, Qinian; Wang, Wei

    2018-03-01

    The iteratively regularized Gauss-Newton method is one of the most prominent regularization methods for solving nonlinear ill-posed inverse problems when the data is corrupted by noise. In order to produce a useful approximate solution, this iterative method should be terminated properly. The existing a priori and a posteriori stopping rules require accurate information on the noise level, which may not be available or reliable in practical applications. In this paper we propose a heuristic selection rule for this regularization method, which requires no information on the noise level. By imposing certain conditions on the noise, we derive a posteriori error estimates on the approximate solutions under various source conditions. Furthermore, we establish a convergence result without using any source condition. Numerical results are presented to illustrate the performance of our heuristic selection rule.

  16. Incremental projection approach of regularization for inverse problems

    Energy Technology Data Exchange (ETDEWEB)

    Souopgui, Innocent, E-mail: innocent.souopgui@usm.edu [The University of Southern Mississippi, Department of Marine Science (United States); Ngodock, Hans E., E-mail: hans.ngodock@nrlssc.navy.mil [Naval Research Laboratory (United States); Vidard, Arthur, E-mail: arthur.vidard@imag.fr; Le Dimet, François-Xavier, E-mail: ledimet@imag.fr [Laboratoire Jean Kuntzmann (France)

    2016-10-15

    This paper presents an alternative approach to the regularized least squares solution of ill-posed inverse problems. Instead of solving a minimization problem with an objective function composed of a data term and a regularization term, the regularization information is used to define a projection onto a convex subspace of regularized candidate solutions. The objective function is modified to include the projection of each iterate in place of the regularization. Numerical experiments based on the problem of motion estimation for geophysical fluid images show the improvement of the proposed method over regularization methods. For the presented test case, the incremental projection method uses 7 times less computation time than the regularization method to reach the same error target. Moreover, at convergence, the incremental projection is two orders of magnitude more accurate than the regularization method.

  17. Geometric regularizations and dual conifold transitions

    International Nuclear Information System (INIS)

    Landsteiner, Karl; Lazaroiu, Calin I.

    2003-01-01

    We consider a geometric regularization for the class of conifold transitions relating D-brane systems on noncompact Calabi-Yau spaces to certain flux backgrounds. This regularization respects the SL(2,Z) invariance of the flux superpotential, and allows for computation of the relevant periods through the method of Picard-Fuchs equations. The regularized geometry is a noncompact Calabi-Yau which can be viewed as a monodromic fibration, with the nontrivial monodromy being induced by the regulator. It reduces to the original, non-monodromic background when the regulator is removed. Using this regularization, we discuss the simple case of the local conifold, and show how the relevant field-theoretic information can be extracted in this approach. (author)

  18. Adaptive regularization

    DEFF Research Database (Denmark)

    Hansen, Lars Kai; Rasmussen, Carl Edward; Svarer, C.

    1994-01-01

    Regularization, e.g., in the form of weight decay, is important for training and optimization of neural network architectures. In this work the authors provide a tool based on asymptotic sampling theory for iterative estimation of weight decay parameters. The basic idea is to do a gradient descent…

  19. Motion-aware temporal regularization for improved 4D cone-beam computed tomography

    Science.gov (United States)

    Mory, Cyril; Janssens, Guillaume; Rit, Simon

    2016-09-01

    Four-dimensional cone-beam computed tomography (4D-CBCT) of the free-breathing thorax is a valuable tool in image-guided radiation therapy of the thorax and the upper abdomen. It allows the determination of the position of a tumor throughout the breathing cycle, while only its mean position can be extracted from three-dimensional CBCT. The classical approaches are not fully satisfactory: respiration-correlated methods allow one to accurately locate high-contrast structures in any frame, but contain strong streak artifacts unless the acquisition is significantly slowed down. Motion-compensated methods can yield streak-free, but static, reconstructions. This work proposes a 4D-CBCT method that can be seen as a trade-off between respiration-correlated and motion-compensated reconstruction. It builds upon the existing reconstruction using spatial and temporal regularization (ROOSTER) and is called motion-aware ROOSTER (MA-ROOSTER). It performs temporal regularization along curved trajectories, following the motion estimated on a prior 4D CT scan. MA-ROOSTER does not involve motion-compensated forward and back projections: the input motion is used only during temporal regularization. MA-ROOSTER is compared to ROOSTER, motion-compensated Feldkamp-Davis-Kress (MC-FDK), and two respiration-correlated methods, on CBCT acquisitions of one physical phantom and two patients. It yields streak-free reconstructions, visually similar to MC-FDK, and robust information on tumor location throughout the breathing cycle. MA-ROOSTER also allows a variation of the lung tissue density during the breathing cycle, similar to that of planning CT, which is required for quantitative post-processing.

  20. Regularizing portfolio optimization

    International Nuclear Information System (INIS)

    Still, Susanne; Kondor, Imre

    2010-01-01

    The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
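
    A compact convex-optimization sketch of the formulation described above: expected shortfall in the Rockafellar-Uryasev form plus an L2 penalty on the weight vector (the 'diversification pressure') under the budget constraint. Solver choice and return data are illustrative only.

```python
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(7)
T, N = 500, 10
R = 0.001 + 0.02 * rng.standard_normal((T, N))   # simulated asset returns

beta, lam = 0.95, 0.1            # shortfall level and L2 regularizer
w = cp.Variable(N)               # portfolio weights
alpha = cp.Variable()            # VaR-like auxiliary variable

# Rockafellar-Uryasev expected shortfall + L2 "diversification pressure"
losses = -R @ w
es = alpha + cp.sum(cp.pos(losses - alpha)) / ((1 - beta) * T)
prob = cp.Problem(cp.Minimize(es + lam * cp.sum_squares(w)),
                  [cp.sum(w) == 1])
prob.solve()
print(np.round(w.value, 3), float(es.value))
```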

  2. Preference mapping of lemon lime carbonated beverages with regular and diet beverage consumers.

    Science.gov (United States)

    Leksrisompong, P P; Lopetcharat, K; Guthrie, B; Drake, M A

    2013-02-01

    The drivers of liking of lemon-lime carbonated beverages were investigated with regular and diet beverage consumers. Ten beverages were selected from a category survey of commercial beverages using a D-optimal procedure. Beverages were subjected to consumer testing (n = 101 regular beverage consumers, n = 100 diet beverage consumers). Segmentation of consumers was performed on overall liking scores, followed by external preference mapping of selected samples. Diet beverage consumers liked 2 diet beverages more than regular beverage consumers did. There were no differences in the overall liking scores between diet and regular beverage consumers for the other products, except for a sparkling beverage sweetened with juice, which was more liked by regular beverage consumers. Three subtle but distinct consumer preference clusters were identified. Two segments had evenly distributed diet and regular beverage consumers, but one segment had a greater percentage of regular beverage consumers (P < 0.05). … did not have a large impact on carbonated beverage liking. Instead, mouthfeel attributes were major drivers of liking when these beverages were tested in a blind tasting. Preference mapping of lemon-lime carbonated beverages with diet and regular beverage consumers allowed the determination of the drivers of liking for both populations. An understanding of how mouthfeel attributes, aromatics, and basic tastes impact liking or disliking of products was achieved. The preference drivers established in this study provide developers of carbonated lemon-lime beverages with additional information to develop beverages that may be suitable for different groups of consumers.

  3. Comparisons of the complementary effect on exhaled nitric oxide of salmeterol vs montelukast in asthmatic children taking regular inhaled budesonide

    DEFF Research Database (Denmark)

    Buchvald, Frederik; Bisgaard, Hans

    2003-01-01

    . OBJECTIVE: To compare the control of FeNO provided by salmeterol or montelukast add-on therapy in asthmatic children undergoing regular maintenance treatment with a daily dose of 400 microg of budesonide. METHODS: The study included children with increased FeNO despite regular treatment with budesonide, 400...... microg/d, and normal lung function. Montelukast, 5 mg/d, salmeterol, 50 microg twice daily, or placebo was compared as add-on therapy to budesonide, 400 microg, in a randomized, double-blind, double-dummy, crossover study. RESULTS: Twenty-two children completed the trial. The geometric mean FeNO level...... with placebo in this group of children taking regular budesonide, 400 microg....

  4. Decoding Skills Acquired by Low Readers Taught in Regular Classrooms Using Clinical Techniques. Research Report No. 35.

    Science.gov (United States)

    Gallistel, Elizabeth; Fischer, Phyllis

    This study evaluated the decoding skills acquired by low readers in an experimental project that taught low readers in regular class through the use of clinical procedures based on a synthetic phonic, multisensory approach. An evaluation instrument which permitted the tabulation of specific decoding skills was administered as a pretest and…

  5. Models for thermal and mechanical monitoring of power transformers

    Energy Technology Data Exchange (ETDEWEB)

    Vilaithong, Rummiya

    2011-07-01

    At present, for economic reasons, there is an increasing emphasis on keeping transformers in service for longer than in the past. A condition-based maintenance using an online monitoring and diagnostic system is one option to ensure reliability of the transformer operation. The key parameters for effectively monitoring equipment can be selected by failure statistics and estimated failure consequences. In this work, two key aspects of transformer condition monitoring are addressed in depth: thermal behaviour and behaviour of on-load tap changers. In the first part of the work, transformer thermal behaviour is studied, focussing on top-oil temperatures. Through online comparison of a measured value of the top-oil temperature and its calculated value, some rapidly developing failures in power transformers such as malfunction of the cooling unit may be detected. Predictions of top-oil temperature can be obtained by means of a mathematical model. Long-term investigations on some dynamic top-oil temperature models are presented for three different types of transformer units. The last-state top-oil temperature, load current, ambient temperature and the operating state of pumps and fans are applied as inputs of the top-oil temperature models. In the fundamental physical models presented, some constant parameters are required and can be estimated using a least-squares optimization technique. Multilayer Feed-forward and Recurrent neural network models are also proposed and investigated. The neural network models are trained with three different Backpropagation training algorithms: Levenberg-Marquardt, Scaled Conjugate Gradient and Automated Bayesian Regularization. The effect of varying operating conditions of the cooling units and the non-steady-state behaviour of loading conditions, as well as ambient temperature are noted. Results show sophisticated temperature prediction is possible using the neural network models that is generally more accurate than with the physical
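
    The record mentions neural network models trained with, among others, the Levenberg-Marquardt algorithm. A minimal sketch of that idea follows, assuming purely synthetic data and a tiny one-hidden-layer network (none of this reproduces the thesis' models): SciPy's least_squares with method='lm' runs a Levenberg-Marquardt fit of the network weights.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical stand-ins for the model inputs: normalized load current and
# ambient temperature; y mimics a measured top-oil temperature (all synthetic).
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(200, 2))
y = 0.6 * X[:, 0] ** 2 + 0.3 * X[:, 1] + 0.1 * rng.standard_normal(200)

H = 5  # hidden units in a one-hidden-layer feed-forward network

def unpack(p):
    W1 = p[:2 * H].reshape(H, 2)
    b1 = p[2 * H:3 * H]
    W2 = p[3 * H:4 * H]
    b2 = p[4 * H]
    return W1, b1, W2, b2

def residuals(p):
    W1, b1, W2, b2 = unpack(p)
    hidden = np.tanh(X @ W1.T + b1)   # network forward pass
    return hidden @ W2 + b2 - y       # one residual per training sample

p0 = 0.1 * rng.standard_normal(4 * H + 1)
fit = least_squares(residuals, p0, method="lm")  # Levenberg-Marquardt via MINPACK
print("training RMSE:", np.sqrt(np.mean(fit.fun ** 2)))
```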

  6. Tessellating the Sphere with Regular Polygons

    Science.gov (United States)

    Soto-Johnson, Hortensia; Bechthold, Dawn

    2004-01-01

    Tessellations in the Euclidean plane and regular polygons that tessellate the sphere are reviewed. The regular polygons that can possibly tesellate the sphere are spherical triangles, squares and pentagons.

  7. Accretion onto some well-known regular black holes

    International Nuclear Information System (INIS)

    Jawad, Abdul; Shahzad, M.U.

    2016-01-01

    In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes using the Fermi-Dirac distribution, logistic distribution, nonlinear electrodynamics, respectively, and Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of radial velocity, energy density, and the rate of change of the mass for each of the regular black holes. (orig.)

  8. Accretion onto some well-known regular black holes

    Energy Technology Data Exchange (ETDEWEB)

    Jawad, Abdul; Shahzad, M.U. [COMSATS Institute of Information Technology, Department of Mathematics, Lahore (Pakistan)

    2016-03-15

    In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes using the Fermi-Dirac distribution, logistic distribution, nonlinear electrodynamics, respectively, and Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of radial velocity, energy density, and the rate of change of the mass for each of the regular black holes. (orig.)

  9. Accretion onto some well-known regular black holes

    Science.gov (United States)

    Jawad, Abdul; Shahzad, M. Umair

    2016-03-01

    In this work, we discuss the accretion onto static spherically symmetric regular black holes for specific choices of the equation of state parameter. The underlying regular black holes are charged regular black holes using the Fermi-Dirac distribution, logistic distribution, nonlinear electrodynamics, respectively, and Kehagias-Sfetsos asymptotically flat regular black holes. We obtain the critical radius, critical speed, and squared sound speed during the accretion process near the regular black holes. We also study the behavior of radial velocity, energy density, and the rate of change of the mass for each of the regular black holes.

  10. Diagrammatic methods in phase-space regularization

    International Nuclear Information System (INIS)

    Bern, Z.; Halpern, M.B.; California Univ., Berkeley

    1987-11-01

    Using the scalar prototype and gauge theory as the simplest possible examples, diagrammatic methods are developed for the recently proposed phase-space form of continuum regularization. A number of one-loop and all-order applications are given, including general diagrammatic discussions of the nogrowth theorem and the uniqueness of the phase-space stochastic calculus. The approach also generates an alternate derivation of the equivalence of the large-β phase-space regularization to the more conventional coordinate-space regularization. (orig.)

  11. Metric regularity and subdifferential calculus

    International Nuclear Information System (INIS)

    Ioffe, A D

    2000-01-01

    The theory of metric regularity is an extension of two classical results: the Lyusternik tangent space theorem and the Graves surjection theorem. Developments in non-smooth analysis in the 1980s and 1990s paved the way for a number of far-reaching extensions of these results. It was also well understood that the phenomena behind the results are of metric origin, not connected with any linear structure. At the same time it became clear that some basic hypotheses of the subdifferential calculus are closely connected with the metric regularity of certain set-valued maps. The survey is devoted to the metric theory of metric regularity and its connection with subdifferential calculus in Banach spaces

  12. Constrained noninformative priors

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1994-10-01

    The Jeffreys noninformative prior distribution for a single unknown parameter is the distribution corresponding to a uniform distribution in the transformed model where the unknown parameter is approximately a location parameter. To obtain a prior distribution with a specified mean but with diffusion reflecting great uncertainty, a natural generalization of the noninformative prior is the distribution corresponding to the constrained maximum entropy distribution in the transformed model. Examples are given
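
    A compact sketch of the construction described above, in LaTeX; the exponential-tilt form of the constrained prior is our hedged reading of the abstract, not a formula quoted from it.

```latex
% Hedged sketch: Jeffreys prior and a mean-constrained variant.
% \phi(\theta) is the transform under which \theta behaves approximately
% like a location parameter; a flat prior on \phi pulls back to Jeffreys:
\[
  \phi(\theta) = \int^{\theta}\!\sqrt{I(t)}\,dt,
  \qquad
  \pi_J(\theta) \propto \sqrt{I(\theta)}.
\]
% Maximizing entropy on the \phi-scale subject to E[\theta] = \mu tilts the
% flat base measure exponentially, so the constrained prior takes the form
\[
  \pi_b(\theta) \propto \sqrt{I(\theta)}\, e^{b\theta},
  \qquad \text{with } b \text{ chosen so that } \int \theta\,\pi_b(\theta)\,d\theta = \mu .
\]
```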

  13. Temporal regularity of the environment drives time perception

    OpenAIRE

    van Rijn, H; Rhodes, D; Di Luca, M

    2016-01-01

    It’s reasonable to assume that a regularly paced sequence should be perceived as regular, but here we show that perceived regularity depends on the context in which the sequence is embedded. We presented one group of participants with perceptually regularly paced sequences, and another group of participants with mostly irregularly paced sequences (75% irregular, 25% regular). The timing of the final stimulus in each sequence could be varied. In one experiment, we asked whether the last stim...

  14. Prior knowledge regularization in statistical medical image tasks

    DEFF Research Database (Denmark)

    Crimi, Alessandro; Sporring, Jon; de Bruijne, Marleen

    2009-01-01

    The estimation of the covariance matrix is a pivotal step inseveral statistical tasks. In particular, the estimation becomes challeng-ing for high dimensional representations of data when few samples areavailable. Using the standard Maximum Likelihood estimation (MLE)when the number of samples ar...

  15. The perception of regularity in an isochronous stimulus in zebra finches (Taeniopygia guttata) and humans.

    Science.gov (United States)

    van der Aa, Jeroen; Honing, Henkjan; ten Cate, Carel

    2015-06-01

    Perceiving temporal regularity in an auditory stimulus is considered one of the basic features of musicality. Here we examine whether zebra finches can detect regularity in an isochronous stimulus. Using a go/no go paradigm we show that zebra finches are able to distinguish between an isochronous and an irregular stimulus. However, when the tempo of the isochronous stimulus is changed, it is no longer treated as similar to the training stimulus. Training with three isochronous and three irregular stimuli did not result in improvement of the generalization. In contrast, humans, exposed to the same stimuli, readily generalized across tempo changes. Our results suggest that zebra finches distinguish the different stimuli by learning specific local temporal features of each individual stimulus rather than attending to the global structure of the stimuli, i.e., to the temporal regularity. Copyright © 2015 Elsevier B.V. All rights reserved.

  16. Risk, treatment duration, and recurrence risk of postpartum affective disorder in women with no prior psychiatric history: A population-based cohort study.

    Directory of Open Access Journals (Sweden)

    Marie-Louise H Rasmussen

    2017-09-01

    Some 5%-15% of all women experience postpartum depression (PPD), which for many is their first psychiatric disorder. The purpose of this study was to estimate the incidence of postpartum affective disorder (AD), duration of treatment, and rate of subsequent postpartum AD and other affective episodes in a nationwide cohort of women with no prior psychiatric history. Linking information from several Danish national registers, we constructed a cohort of 457,317 primiparous mothers with first birth (and subsequent births) from 1 January 1996 to 31 December 2013 (a total of 789,068 births) and no prior psychiatric hospital contacts and/or use of antidepressants. These women were followed from 1 January 1996 to 31 December 2014. Postpartum AD was defined as use of antidepressants and/or hospital contact for PPD within 6 months after childbirth. The main outcome measures were risk of postpartum AD, duration of treatment, and recurrence risk. We observed 4,550 (0.6%) postpartum episodes of AD. The analyses of treatment duration showed that 1 year after the initiation of treatment for their first episode, 27.9% of women were still in treatment; after 4 years, 5.4%. The recurrence risk of postpartum AD for women with a PPD hospital contact after first birth was 55.4 per 100 person-years; for women with postpartum antidepressant medication after first birth, it was 35.0 per 100 person-years. The rate of postpartum AD after second birth for women with no history of postpartum AD was 1.2 per 100 person-years. After adjusting for year of birth and mother's age, women with PPD hospital contact after first birth had a 46.4 times higher rate (95% CI 31.5-68.4) and women with postpartum antidepressant medication after their first birth had a 26.9 times higher rate (95% CI 21.9-33.2) of a recurrent postpartum episode after their second birth compared to women with no postpartum AD history. Limitations include the use of registry data to identify cases and limited

  17. Physical examination prior to initiating hormonal contraception: a systematic review.

    Science.gov (United States)

    Tepper, Naomi K; Curtis, Kathryn M; Steenland, Maria W; Marchbanks, Polly A

    2013-05-01

    Provision of contraception is often linked with physical examination, including clinical breast examination (CBE) and pelvic examination. This review was conducted to evaluate the evidence regarding outcomes among women with and without physical examination prior to initiating hormonal contraceptives. The PubMed database was searched from database inception through March 2012 for all peer-reviewed articles in any language concerning CBE and pelvic examination prior to initiating hormonal contraceptives. The quality of each study was assessed using the United States Preventive Services Task Force grading system. The search did not identify any evidence regarding outcomes among women screened versus not screened with CBE prior to initiation of hormonal contraceptives. The search identified two case-control studies of fair quality which compared women who did or did not undergo pelvic examination prior to initiating oral contraceptives (OCs) or depot medroxyprogesterone acetate (DMPA). No differences in risk factors for cervical neoplasia, incidence of sexually transmitted infections, incidence of abnormal Pap smears or incidence of abnormal wet mount findings were observed. Although women with breast cancer should not use hormonal contraceptives, there is little utility in screening prior to initiation, due to the low incidence of breast cancer and uncertain value of CBE among women of reproductive age. Two fair quality studies demonstrated no differences between women who did or did not undergo pelvic examination prior to initiating OCs or DMPA with respect to risk factors or clinical outcomes. In addition, pelvic examination is not likely to detect any conditions for which hormonal contraceptives would be unsafe. Published by Elsevier Inc.

  18. The uniqueness of the regularization procedure

    International Nuclear Information System (INIS)

    Brzezowski, S.

    1981-01-01

    On the grounds of the BPHZ procedure, the criteria of correct regularization in perturbation calculations of QFT are given, together with the prescription for dividing the regularized formulas into the finite and infinite parts. (author)

  19. Coupling regularizes individual units in noisy populations

    International Nuclear Information System (INIS)

    Ly Cheng; Ermentrout, G. Bard

    2010-01-01

    The regularity of a noisy system can modulate in various ways. It is well known that coupling in a population can lower the variability of the entire network; the collective activity is more regular. Here, we show that diffusive (reciprocal) coupling of two simple Ornstein-Uhlenbeck (O-U) processes can regularize the individual, even when it is coupled to a noisier process. In cellular networks, the regularity of individual cells is important when a select few play a significant role. The regularizing effect of coupling surprisingly applies also to general nonlinear noisy oscillators. However, unlike with the O-U process, coupling-induced regularity is robust to different kinds of coupling. With two coupled noisy oscillators, we derive an asymptotic formula assuming weak noise and coupling for the variance of the period (i.e., spike times) that accurately captures this effect. Moreover, we find that reciprocal coupling can regularize the individual period of higher dimensional oscillators such as the Morris-Lecar and Brusselator models, even when coupled to noisier oscillators. Coupling can have a counterintuitive and beneficial effect on noisy systems. These results have implications for the role of connectivity with noisy oscillators and the modulation of variability of individual oscillators.

  20. Learning regularization parameters for general-form Tikhonov

    International Nuclear Information System (INIS)

    Chung, Julianne; Español, Malena I

    2017-01-01

    Computing regularization parameters for general-form Tikhonov regularization can be an expensive and difficult task, especially if multiple parameters or many solutions need to be computed in real time. In this work, we assume training data is available and describe an efficient learning approach for computing regularization parameters that can be used for a large set of problems. We consider an empirical Bayes risk minimization framework for finding regularization parameters that minimize average errors for the training data. We first extend methods from Chung et al (2011 SIAM J. Sci. Comput. 33 3132–52) to the general-form Tikhonov problem. Then we develop a learning approach for multi-parameter Tikhonov problems, for the case where all involved matrices are simultaneously diagonalizable. For problems where this is not the case, we describe an approach to compute near-optimal regularization parameters by using operator approximations for the original problem. Finally, we propose a new class of regularizing filters, where solutions correspond to multi-parameter Tikhonov solutions, that requires less data than previously proposed optimal error filters, avoids the generalized SVD, and allows flexibility and novelty in the choice of regularization matrices. Numerical results for 1D and 2D examples using different norms on the errors show the effectiveness of our methods. (paper)
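
    As an illustration of the learning idea only (a grid search standing in for the paper's empirical Bayes risk minimization machinery; all names are ours), one can pick the regularization parameter that minimizes the average reconstruction error over training pairs:

```python
import numpy as np

def tikhonov_solve(A, L, b, lam):
    # General-form Tikhonov: min ||A x - b||^2 + lam^2 ||L x||^2
    return np.linalg.solve(A.T @ A + lam**2 * (L.T @ L), A.T @ b)

def learn_lambda(A, L, X_train, B_train, grid):
    """Pick the lambda minimizing average reconstruction error on training pairs."""
    def avg_err(lam):
        errs = [np.linalg.norm(tikhonov_solve(A, L, b, lam) - x)
                for x, b in zip(X_train, B_train)]
        return np.mean(errs)
    return min(grid, key=avg_err)

# e.g. lam_star = learn_lambda(A, L, X_train, B_train, np.logspace(-6, 2, 50))
```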

  1. Regular extensions of some classes of grammars

    NARCIS (Netherlands)

    Nijholt, Antinus

    Culik and Cohen introduced the class of LR-regular grammars, an extension of the LR(k) grammars. In this report we consider the analogous extension of the LL(k) grammers, called the LL-regular grammars. The relations of this class of grammars to other classes of grammars are shown. Every LL-regular

  2. A New Pansharpening Method Based on Spatial and Spectral Sparsity Priors.

    Science.gov (United States)

    He, Xiyan; Condat, Laurent; Bioucas-Diaz, Jose; Chanussot, Jocelyn; Xia, Junshi

    2014-06-27

    The development of multisensor systems in recent years has led to great increase in the amount of available remote sensing data. Image fusion techniques aim at inferring high quality images of a given area from degraded versions of the same area obtained by multiple sensors. This paper focuses on pansharpening, which is the inference of a high spatial resolution multispectral image from two degraded versions with complementary spectral and spatial resolution characteristics: a) a low spatial resolution multispectral image; and b) a high spatial resolution panchromatic image. We introduce a new variational model based on spatial and spectral sparsity priors for the fusion. In the spectral domain we encourage low-rank structure, whereas in the spatial domain we promote sparsity on the local differences. Given the fact that both panchromatic and multispectral images are integrations of the underlying continuous spectra using different channel responses, we propose to exploit appropriate regularizations based on both spatial and spectral links between panchromatic and the fused multispectral images. A weighted version of the vector Total Variation (TV) norm of the data matrix is employed to align the spatial information of the fused image with that of the panchromatic image. With regard to spectral information, two different types of regularization are proposed to promote a soft constraint on the linear dependence between the panchromatic and the fused multispectral images. The first one estimates directly the linear coefficients from the observed panchromatic and low resolution multispectral images by Linear Regression (LR) while the second one employs the Principal Component Pursuit (PCP) to obtain a robust recovery of the underlying low-rank structure. We also show that the two regularizers are strongly related. The basic idea of both regularizers is that the fused image should have low-rank and preserve edge locations. We use a variation of the recently proposed

  3. Clinical utility of carotid duplex ultrasound prior to cardiac surgery.

    Science.gov (United States)

    Lin, Judith C; Kabbani, Loay S; Peterson, Edward L; Masabni, Khalil; Morgan, Jeffrey A; Brooks, Sara; Wertella, Kathleen P; Paone, Gaetano

    2016-03-01

    Clinical utility and cost-effectiveness of carotid duplex examination prior to cardiac surgery have been questioned by the multidisciplinary committee creating the 2012 Appropriate Use Criteria for Peripheral Vascular Laboratory Testing. We report the clinical outcomes and postoperative neurologic symptoms in patients who underwent carotid duplex ultrasound prior to open heart surgery at a tertiary institution. Using the combined databases from our clinical vascular laboratory and the Society of Thoracic Surgeons, a retrospective analysis of all patients who underwent carotid duplex ultrasound within 13 months prior to open heart surgery from March 2005 to March 2013 was performed. The outcomes between those who underwent carotid duplex scanning (group A) and those who did not (group B) were compared. Among 3233 patients in the cohort who underwent cardiac surgery, 515 (15.9%) patients underwent a carotid duplex ultrasound preoperatively, and 2718 patients did not (84.1%). Among the patients who underwent carotid screening vs no screening, there was no statistically significant difference in the risk factors of cerebrovascular disease (10.9% vs 12.7%; P = .26), prior stroke (8.2% vs 7.2%; P = .41), and prior transient ischemic attack (2.9% vs 3.3%; P = .24). For those undergoing isolated coronary artery bypass grafting (CABG), 306 (17.8%) of 1723 patients underwent preoperative carotid duplex ultrasound. Among patients who had carotid screening prior to CABG, the incidence of carotid disease was low: 249 (81.4%) had minimal or mild stenosis. Primary outcomes of patients who underwent open heart surgery also showed no difference in the perioperative mortality (5.1% vs 6.9%; P = .14) and stroke (2.6% vs 2.4%; P = .85) between patients undergoing preoperative duplex scanning and those who did not. Operative intervention of severe carotid stenosis prior to isolated CABG occurred in 2 of the 17 patients (11.8%) identified who

  4. Are Long-Term Chloroquine or Hydroxychloroquine Users Being Checked Regularly for Toxic Maculopathy?

    Science.gov (United States)

    Nika, Melisa; Blachley, Taylor S.; Edwards, Paul; Lee, Paul P.; Stein, Joshua D.

    2014-01-01

    Importance According to evidence-based, expert recommendations, long-term users of chloroquine (CQ) or hydroxychloroquine (HCQ) should undergo regular visits to eye-care providers and diagnostic testing to check for maculopathy. Objective To determine whether patients with rheumatoid arthritis (RA) or systemic lupus erythematosus (SLE) taking CQ or HCQ are regularly visiting eye-care providers and being screened for maculopathy. Setting, Design and Participants Patients with RA or SLE who were continuously enrolled in a particular managed-care network for ≥5 years during 2001-2011 were studied. Patients' amount of CQ/HCQ use in the 5 years since initial RA/SLE diagnosis was calculated, along with their number of eye-care visits and diagnostic tests for maculopathy. Those at high risk for maculopathy were identified. Visits to eye providers and diagnostic testing for maculopathy were assessed for each enrollee over the study period. Logistic regression was performed to assess potential factors associated with regular eye-care-provider visits (≥3 in 5 years) among CQ/HCQ users, including those at greatest risk for maculopathy. Main Outcome Measures Among CQ/HCQ users and those at high risk for toxic maculopathy, the proportions with regular eye-care visits and diagnostic testing, and the likelihood of regular eye-care visits (odds ratios [ORs] with 95% confidence intervals [CI]). Results Among 18,051 beneficiaries with RA or SLE, 6,339 (35.1%) had ≥1 record of HCQ/CQ use and 1,409 (7.8%) used HCQ/CQ for ≥4 years. Among those at high risk for maculopathy, 27.9% lacked regular eye-provider visits, 6.1% had no visits to eye providers, and 34.5% had no diagnostic testing for maculopathy during the 5-year period. Among high-risk patients, each additional month of HCQ/CQ use was associated with a 2.0%-increased likelihood of regular eye care (adjusted OR=1.02, CI=1.01-1.03). High-risk patients whose SLE/RA were managed by rheumatologists had a 77%-increased

  5. Regular non-twisting S-branes

    International Nuclear Information System (INIS)

    Obregon, Octavio; Quevedo, Hernando; Ryan, Michael P.

    2004-01-01

    We construct a family of time and angular dependent, regular S-brane solutions which corresponds to a simple analytical continuation of the Zipoy-Voorhees 4-dimensional vacuum spacetime. The solutions are asymptotically flat and turn out to be free of singularities without requiring a twist in space. They can be considered as the simplest non-singular generalization of the singular S0-brane solution. We analyze the properties of a representative of this family of solutions and show that it resembles to some extent the asymptotic properties of the regular Kerr S-brane. The R-symmetry corresponds, however, to the general lorentzian symmetry. Several generalizations of this regular solution are derived which include a charged S-brane and an additional dilatonic field. (author)

  6. Brain Stroke Detection by Microwaves Using Prior Information from Clinical Databases

    Directory of Open Access Journals (Sweden)

    Natalia Irishina

    2013-01-01

    Microwave tomographic imaging is an inexpensive, noninvasive modality of media dielectric properties reconstruction which can be utilized as a screening method in clinical applications such as breast cancer and brain stroke detection. For breast cancer detection, the iterative algorithm of structural inversion with level sets provides well-defined boundaries and incorporates an intrinsic regularization, which permits to discover small lesions. However, in case of brain lesion, the inverse problem is much more difficult due to the skull, which causes low microwave penetration and highly noisy data. In addition, cerebral liquid has dielectric properties similar to those of blood, which makes the inversion more complicated. Nevertheless, the contrast in the conductivity and permittivity values in this situation is significant due to blood high dielectric values compared to those of surrounding grey and white matter tissues. We show that using brain MRI images as prior information about brain's configuration, along with known brain dielectric properties, and the intrinsic regularization by structural inversion, allows successful and rapid stroke detection even in difficult cases. The method has been applied to 2D slices created from a database of 3D real MRI phantom images to effectively detect lesions larger than 2.5 × 10−2 m diameter.

  7. Trouble found in regular inspection of No.1 plant in Ikata Power Station, Shikoku Electric Power Co., Inc

    International Nuclear Information System (INIS)

    1989-01-01

    Since May 2, 1989, the regular inspection of No.1 plant which is a PWR plant with the rated output of 566 MW in Ikata Power Station, Shikoku Electric Power Co., Inc. has been carried out, and eddy current flaw detection inspection was conducted on the total 6585 heating tubes of steam generators except already plugged tubes. As the result, significant indication was observed in 12 heating tubes at the expanded part of the high temperature side tube plates. As to the cause, similarly to those observed in the same plant in the past, it is considered that the residual stress caused by expanding at the time of the manufacture and the internal pressure stress during the operation were superposed, and stress corrosion cracking occurred. It was decided that these 12 defective tubes are plugged. State of plugging in steam generators. Number of total heating tubes: 6776=3388 tubes x 2 steam generators. Number of plugged tubes: 203 including the increase of 12 this time. Ratio of plugging: 3.0 %. Heating tubes: Inconel 600 tubes of φ22.7 mm x 1.27 mm thickness. (K.I.)

  8. Near-Regular Structure Discovery Using Linear Programming

    KAUST Repository

    Huang, Qixing

    2014-06-02

    Near-regular structures are common in manmade and natural objects. Algorithmic detection of such regularity greatly facilitates our understanding of shape structures, leads to compact encoding of input geometries, and enables efficient generation and manipulation of complex patterns on both acquired and synthesized objects. Such regularity manifests itself both in the repetition of certain geometric elements, as well as in the structured arrangement of the elements. We cast the regularity detection problem as an optimization and efficiently solve it using linear programming techniques. Our optimization has a discrete aspect, that is, the connectivity relationships among the elements, as well as a continuous aspect, namely the locations of the elements of interest. Both these aspects are captured by our near-regular structure extraction framework, which alternates between discrete and continuous optimizations. We demonstrate the effectiveness of our framework on a variety of problems including near-regular structure extraction, structure-preserving pattern manipulation, and markerless correspondence detection. Robustness results with respect to geometric and topological noise are presented on synthesized, real-world, and also benchmark datasets. © 2014 ACM.

  9. Does regular practice of physical activity reduce the risk of dysphonia?

    Science.gov (United States)

    Assunção, Ada Avila; de Medeiros, Adriane Mesquita; Barreto, Sandhi Maria; Gama, Ana Cristina Cortes

    2009-12-01

    The purpose of this study was to investigate the association between regular physical activity and the prevalence of dysphonia. A cross-sectional study was conducted with 3142 teachers from 129 municipal public schools in the city of Belo Horizonte, Brazil. The dependent variable, dysphonia, was classified (absent or present) according to reported symptoms (fatigue when speaking and loss of voice quality), their frequency (occasionally and daily), and duration (past 15 days). The independent variable was regular physical activity. The degree of association was estimated based on the prevalence ratio and a 95% confidence interval obtained by the Poisson regression adapted for cross-sectional studies. In the study sample, the prevalence of dysphonia in teachers was 15.63%. Nearly half (47.52%) of the teachers reported no regular practice of physical exercises. The remaining teachers (52.48%) walked and did physical exercises, sports, and other activities; 31.25% undertook these activities once or twice a week, and 21.23% exercised three or more times a week. Teachers who did not practice physical activity were more likely to present dysphonia compared to those that exercised three or more times a week. Regular physical activity was thus associated with a lower prevalence of dysphonia.

  10. Regular Expression Matching and Operational Semantics

    Directory of Open Access Journals (Sweden)

    Asiri Rathnayake

    2011-08-01

    Many programming languages and tools, ranging from grep to the Java String library, contain regular expression matchers. Rather than first translating a regular expression into a deterministic finite automaton, such implementations typically match the regular expression on the fly. Thus they can be seen as virtual machines interpreting the regular expression much as if it were a program with some non-deterministic constructs such as the Kleene star. We formalize this implementation technique for regular expression matching using operational semantics. Specifically, we derive a series of abstract machines, moving from the abstract definition of matching to increasingly realistic machines. First a continuation is added to the operational semantics to describe what remains to be matched after the current expression. Next, we represent the expression as a data structure using pointers, which enables redundant searches to be eliminated via testing for pointer equality. From there, we arrive both at Thompson's lockstep construction and a machine that performs some operations in parallel, suitable for implementation on a large number of cores, such as a GPU. We formalize the parallel machine using process algebra and report some preliminary experiments with an implementation on a graphics processor using CUDA.
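
    A toy rendering of the lockstep idea in Python (the paper works with operational semantics and derived abstract machines; this hand-built NFA for a(b|c)* is only meant to show the set-of-states simulation):

```python
# Minimal lockstep simulation of a nondeterministic finite automaton, in the
# spirit of Thompson's construction: the machine advances a *set* of states
# in parallel for each input symbol, so matching never backtracks.

def eps_closure(states, eps):
    """Expand a state set through epsilon transitions until a fixed point."""
    stack, seen = list(states), set(states)
    while stack:
        s = stack.pop()
        for t in eps.get(s, ()):
            if t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def lockstep_match(nfa, string):
    # nfa = (start, accept, delta, eps); delta[(state, char)] -> set of states
    start, accept, delta, eps = nfa
    current = eps_closure({start}, eps)
    for ch in string:
        moved = set()
        for s in current:
            moved |= delta.get((s, ch), set())
        current = eps_closure(moved, eps)
    return accept in current

# Hand-built Thompson-style NFA for the regular expression a(b|c)*
delta = {(0, "a"): {1}, (2, "b"): {4}, (3, "c"): {4}}
eps = {1: {2, 3, 5}, 4: {2, 3, 5}}
nfa = (0, 5, delta, eps)
print(lockstep_match(nfa, "abcb"))  # True
print(lockstep_match(nfa, "ba"))    # False
```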

  11. Tetravalent one-regular graphs of order 4p²

    DEFF Research Database (Denmark)

    Feng, Yan-Quan; Kutnar, Klavdija; Marusic, Dragan

    2014-01-01

    A graph is one-regular if its automorphism group acts regularly on the set of its arcs. In this paper tetravalent one-regular graphs of order 4p², where p is a prime, are classified.

  12. Regularization and error assignment to unfolded distributions

    CERN Document Server

    Zech, Gunter

    2011-01-01

    The commonly used approach to present unfolded data only in graphical form with the diagonal error depending on the regularization strength is unsatisfactory. It does not permit the adjustment of parameters of theories, the exclusion of theories that are admitted by the observed data and does not allow the combination of data from different experiments. We propose fixing the regularization strength by a p-value criterion, indicating the experimental uncertainties independent of the regularization and publishing the unfolded data in addition without regularization. These considerations are illustrated with three different unfolding and smoothing approaches applied to a toy example.

  13. Low-FODMAP vs regular rye bread in irritable bowel syndrome: Randomized SmartPill® study.

    Science.gov (United States)

    Pirkola, Laura; Laatikainen, Reijo; Loponen, Jussi; Hongisto, Sanna-Maria; Hillilä, Markku; Nuora, Anu; Yang, Baoru; Linderborg, Kaisa M; Freese, Riitta

    2018-03-21

    To compare the effects of regular vs low-FODMAP rye bread on irritable bowel syndrome (IBS) symptoms and to study gastrointestinal conditions with SmartPill®. Our aim was to evaluate if rye bread low in FODMAPs would cause reduced hydrogen excretion, lower intraluminal pressure, higher colonic pH, different transit times, and fewer IBS symptoms than regular rye bread. The study was a randomized, double-blind, controlled cross-over meal study. Female IBS patients (n = 7) ate study breads at three consecutive meals during one day. The diet was similar for both study periods except for the FODMAP content of the bread consumed during the study day. Intraluminal pH, transit time, and pressure were measured by SmartPill, an indigestible motility capsule. Hydrogen excretion (a marker of colonic fermentation) expressed as area under the curve (AUC) (0-630 min) was [median (range)] 6300 (1785-10800) ppm∙min for low-FODMAP rye bread and 10635 (4215-13080) ppm∙min for regular bread (P = 0.028). Mean scores of gastrointestinal symptoms showed no statistically significant differences but suggested less flatulence after low-FODMAP bread consumption (P = 0.063). Intraluminal pressure correlated significantly with total symptom score after regular rye bread (ρ = 0.786, P = 0.036) and nearly significantly after low-FODMAP bread consumption (ρ = 0.75, P = 0.052). We found no differences in pH, pressure, or transit times between the breads. Gastric residence of SmartPill was slower than expected. SmartPill left the stomach in less than 5 h only during one measurement (out of 14 measurements in total) and therefore did not follow on par with the rye bread bolus. Low-FODMAP rye bread reduced colonic fermentation vs regular rye bread. No difference was found in median values of intraluminal conditions of the gastrointestinal tract.

  14. Impulsivity and related neuropsychological features in regular and addictive first person shooter gaming.

    Science.gov (United States)

    Metcalf, Olivia; Pammer, Kristen

    2014-03-01

    Putative cyber addictions are of significant interest. There remains little experimental research into excessive use of first person shooter (FPS) games, despite their global popularity. Moreover, the role between excessive gaming and impulsivity remains unclear, with previous research showing conflicting findings. The current study investigated performances on a number of neuropsychological tasks (go/no-go, continuous performance task, Iowa gambling task) and a trait measure of impulsivity for a group of regular FPS gamers (n=25), addicted FPS gamers (n=22), and controls (n=22). Gamers were classified using the Addiction-Engagement Questionnaire. Addicted FPS gamers had significantly higher levels of trait impulsivity on the Barratt Impulsiveness Scale compared to controls. Addicted FPS gamers also had significantly higher levels of disinhibition in a go/no-go task and inattention in a continuous performance task compared to controls, whereas the regular FPS gamers had better decision making on the Iowa gambling task compared to controls. The results indicate impulsivity is associated with FPS gaming addiction, comparable to pathological gambling. The relationship between impulsivity and excessive gaming may be unique to the FPS genre. Furthermore, regular FPS gaming may improve decision making ability.

  15. Travel time tomography with local image regularization by sparsity constrained dictionary learning

    Science.gov (United States)

    Bianco, M.; Gerstoft, P.

    2017-12-01

    We propose a regularization approach for 2D seismic travel time tomography which models small rectangular groups of slowness pixels, within an overall or 'global' slowness image, as sparse linear combinations of atoms from a dictionary. The groups of slowness pixels are referred to as patches and a dictionary corresponds to a collection of functions or 'atoms' describing the slowness in each patch. These functions could for example be wavelets. The patch regularization is incorporated into the global slowness image. The global image models the broad features, while the local patch images incorporate prior information from the dictionary. Further, high resolution slowness within patches is permitted if the travel times from the global estimates support it. The proposed approach is formulated as an algorithm, which is repeated until convergence is achieved: 1) From travel times, find the global slowness image with a minimum energy constraint on the pixel variance relative to a reference. 2) Find the patch level solutions to fit the global estimate as a sparse linear combination of dictionary atoms. 3) Update the reference as the weighted average of the patch level solutions. This approach relies on the redundancy of the patches in the seismic image. Redundancy means that the patches are repetitions of a finite number of patterns, which are described by the dictionary atoms. Redundancy in the earth's structure was demonstrated in previous works in seismics where dictionaries of wavelet functions regularized inversion. We further exploit redundancy of the patches by using dictionary learning algorithms, a form of unsupervised machine learning, to estimate optimal dictionaries from the data in parallel with the inversion. We demonstrate our approach on densely, but irregularly sampled synthetic seismic images.
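
    A schematic Python version of the three-step loop, under heavy assumptions (dense matrices, a hand-rolled matching pursuit, unit-norm dictionary atoms, uniform patch weighting); it mirrors the structure of the algorithm, not the authors' implementation:

```python
import numpy as np

def sparse_code(patch, D, k=3):
    """Greedy matching pursuit: approximate a patch as a k-term combination
    of dictionary atoms (columns of D, assumed unit-norm)."""
    r, coef = patch.copy(), np.zeros(D.shape[1])
    for _ in range(k):
        j = np.argmax(np.abs(D.T @ r))  # best-correlated atom
        a = D[:, j] @ r
        coef[j] += a
        r = r - a * D[:, j]
    return D @ coef

def patch_regularized_inversion(G, t, patches_idx, D, n_iter=10, mu=1.0):
    """Alternate (1) a damped global least-squares fit of slowness s to travel
    times t = G s, with (2) patch-level sparse coding against dictionary D,
    whose average becomes (3) the new reference. Names are illustrative only."""
    n = G.shape[1]
    ref = np.zeros(n)
    for _ in range(n_iter):
        # (1) global image: minimize ||G s - t||^2 + mu ||s - ref||^2
        s = np.linalg.solve(G.T @ G + mu * np.eye(n), G.T @ t + mu * ref)
        # (2) fit each patch sparsely, (3) update the reference by averaging
        new_ref, counts = np.zeros(n), np.zeros(n)
        for idx in patches_idx:  # idx: pixel indices of one patch
            new_ref[idx] += sparse_code(s[idx], D)
            counts[idx] += 1
        ref = new_ref / np.maximum(counts, 1)
    return s
```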

  16. Higher order total variation regularization for EIT reconstruction.

    Science.gov (United States)

    Gong, Bo; Schullcke, Benjamin; Krueger-Ziolek, Sabine; Zhang, Fan; Mueller-Lisse, Ullrich; Moeller, Knut

    2018-01-08

    Electrical impedance tomography (EIT) attempts to reveal the conductivity distribution of a domain based on the electrical boundary condition. This is an ill-posed inverse problem; its solution is very unstable. Total variation (TV) regularization is one of the techniques commonly employed to stabilize reconstructions. However, it is well known that TV regularization induces staircase effects, which are not realistic in clinical applications. To reduce such artifacts, modified TV regularization terms considering a higher order differential operator were developed in several previous studies. One of them is called total generalized variation (TGV) regularization. TGV regularization has been successfully applied in image processing in a regular grid context. In this study, we adapted TGV regularization to the finite element model (FEM) framework for EIT reconstruction. Reconstructions using simulation and clinical data were performed. First results indicate that, in comparison to TV regularization, TGV regularization promotes more realistic images. Graphical abstract: reconstructed conductivity changes along selected left and right vertical lines, comparing ground truth (GT), total variation (TV), and total generalized variation (TGV) reconstructions, with results from the GREIT algorithm also shown.
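
    For intuition only, here is a 1-D second-order TGV denoising sketch on a pixel grid (the paper's contribution is the FEM adaptation for EIT, which this does not attempt); the cvxpy formulation and the penalty weights are our choices:

```python
import numpy as np
import cvxpy as cp

def tgv2_denoise_1d(y, alpha1=0.2, alpha0=0.4):
    """Second-order TGV denoising of a 1-D signal: unlike plain TV, the
    auxiliary field v absorbs linear trends, so piecewise-linear regions
    avoid the staircase artifacts mentioned in the abstract."""
    n = len(y)
    D1 = np.diff(np.eye(n), axis=0)      # first-difference operator, (n-1) x n
    D2 = np.diff(np.eye(n - 1), axis=0)  # difference acting on v, (n-2) x (n-1)
    x = cp.Variable(n)
    v = cp.Variable(n - 1)
    obj = cp.Minimize(0.5 * cp.sum_squares(x - y)
                      + alpha1 * cp.norm1(D1 @ x - v)
                      + alpha0 * cp.norm1(D2 @ v))
    cp.Problem(obj).solve()
    return x.value

# Piecewise-linear ramp + noise: TGV should recover the slope without staircasing.
t = np.linspace(0.0, 1.0, 100)
noisy = np.clip(2 * t - 0.5, 0, 1) + 0.05 * np.random.default_rng(1).standard_normal(100)
denoised = tgv2_denoise_1d(noisy)
```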

  17. Application of Turchin's method of statistical regularization

    Science.gov (United States)

    Zelenyi, Mikhail; Poliakova, Mariia; Nozik, Alexander; Khudyakov, Alexey

    2018-04-01

    During analysis of experimental data, one usually needs to restore a signal after it has been convoluted with some kind of apparatus function. According to Hadamard's definition this problem is ill-posed and requires regularization to provide sensible results. In this article we describe an implementation of Turchin's method of statistical regularization, based on the Bayesian approach to the regularization strategy.

  18. On the regularized fermionic projector of the vacuum

    Science.gov (United States)

    Finster, Felix

    2008-03-01

    We construct families of fermionic projectors with spherically symmetric regularization, which satisfy the condition of a distributional MP-product. The method is to analyze regularization tails with a power law or logarithmic scaling in composite expressions in the fermionic projector. The resulting regularizations break the Lorentz symmetry and give rise to a multilayer structure of the fermionic projector near the light cone. Furthermore, we construct regularizations which go beyond the distributional MP-product in that they yield additional distributional contributions supported at the origin. The remaining freedom for the regularization parameters and the consequences for the normalization of the fermionic states are discussed.

  19. On the regularized fermionic projector of the vacuum

    International Nuclear Information System (INIS)

    Finster, Felix

    2008-01-01

    We construct families of fermionic projectors with spherically symmetric regularization, which satisfy the condition of a distributional MP-product. The method is to analyze regularization tails with a power law or logarithmic scaling in composite expressions in the fermionic projector. The resulting regularizations break the Lorentz symmetry and give rise to a multilayer structure of the fermionic projector near the light cone. Furthermore, we construct regularizations which go beyond the distributional MP-product in that they yield additional distributional contributions supported at the origin. The remaining freedom for the regularization parameters and the consequences for the normalization of the fermionic states are discussed

  20. Labor Dystocia and the Risk of Uterine Rupture in Women with Prior Cesarean.

    Science.gov (United States)

    Vachon-Marceau, Chantale; Demers, Suzanne; Goyet, Martine; Gauthier, Robert; Roberge, Stéphanie; Chaillet, Nils; Laroche, Jasmin; Bujold, Emmanuel

    2016-05-01

    Objective The objective of this study was to evaluate the association between labor dystocia and uterine rupture. Methods We performed a secondary analysis of a multicenter case-control study that included women with single, prior, low-transverse cesarean section who experienced complete uterine rupture during a trial of labor (TOL). For each case, three women who underwent a TOL without uterine rupture were selected as controls. Data were collected on cervical dilatations from admission to delivery. We evaluated the relationship between uterine rupture and labor dystocia according to several criteria, including the World Health Organization's (WHO's) partogram. Results Data were available for 90 cases and 260 controls. Compared with the controls, uterine rupture was associated with less cervical dilatation on admission, slower cervical dilatation in the first stage of labor, and a longer second stage of labor. Labor dystocia is a significant risk factor for uterine rupture. Labor progression should be assessed regularly in women with prior cesarean.

  1. Statistical regularities in art: Relations with visual coding and perception.

    Science.gov (United States)

    Graham, Daniel J; Redies, Christoph

    2010-07-21

    Since at least 1935, vision researchers have used art stimuli to test human response to complex scenes. This is sensible given the "inherent interestingness" of art and its relation to the natural visual world. The use of art stimuli has remained popular, especially in eye tracking studies. Moreover, stimuli in common use by vision scientists are inspired by the work of famous artists (e.g., Mondrians). Artworks are also popular in vision science as illustrations of a host of visual phenomena, such as depth cues and surface properties. However, until recently, there has been scant consideration of the spatial, luminance, and color statistics of artwork, and even less study of ways that regularities in such statistics could affect visual processing. Furthermore, the relationship between regularities in art images and those in natural scenes has received little or no attention. In the past few years, there has been a concerted effort to study statistical regularities in art as they relate to neural coding and visual perception, and art stimuli have begun to be studied in rigorous ways, as natural scenes have been. In this minireview, we summarize quantitative studies of links between regular statistics in artwork and processing in the visual stream. The results of these studies suggest that art is especially germane to understanding human visual coding and perception, and it therefore warrants wider study. Copyright 2010 Elsevier Ltd. All rights reserved.

  2. Gift from statistical learning: Visual statistical learning enhances memory for sequence elements and impairs memory for items that disrupt regularities.

    Science.gov (United States)

    Otsuka, Sachio; Saiki, Jun

    2016-02-01

    Prior studies have shown that visual statistical learning (VSL) enhances familiarity (a type of memory) of sequences. How do statistical regularities influence the processing of each triplet element and inserted distractors that disrupt the regularity? Given that increased attention to triplets induced by VSL and inhibition of unattended triplets, we predicted that VSL would promote memory for each triplet constituent, and degrade memory for inserted stimuli. Across the first two experiments, we found that objects from structured sequences were more likely to be remembered than objects from random sequences, and that letters (Experiment 1) or objects (Experiment 2) inserted into structured sequences were less likely to be remembered than those inserted into random sequences. In the subsequent two experiments, we examined an alternative account for our results, whereby the difference in memory for inserted items between structured and random conditions is due to individuation of items within random sequences. Our findings replicated even when control letters (Experiment 3A) or objects (Experiment 3B) were presented before or after, rather than inserted into, random sequences. Our findings suggest that statistical learning enhances memory for each item in a regular set and impairs memory for items that disrupt the regularity. Copyright © 2015 Elsevier B.V. All rights reserved.

  3. Regularization modeling for large-eddy simulation

    NARCIS (Netherlands)

    Geurts, Bernardus J.; Holm, D.D.

    2003-01-01

    A new modeling approach for large-eddy simulation (LES) is obtained by combining a "regularization principle" with an explicit filter and its inversion. This regularization approach allows a systematic derivation of the implied subgrid model, which resolves the closure problem. The central role of

  4. Spatially-Variant Tikhonov Regularization for Double-Difference Waveform Inversion

    Energy Technology Data Exchange (ETDEWEB)

    Lin, Youzuo [Los Alamos National Laboratory; Huang, Lianjie [Los Alamos National Laboratory; Zhang, Zhigang [Los Alamos National Laboratory

    2011-01-01

    Double-difference waveform inversion is a potential tool for quantitative monitoring for geologic carbon storage. It jointly inverts time-lapse seismic data for changes in reservoir geophysical properties. Due to the ill-posedness of waveform inversion, it is a great challenge to obtain reservoir changes accurately and efficiently, particularly when using time-lapse seismic reflection data. Regularization techniques can be utilized to address the issue of ill-posedness. The regularization parameter controls the smoothness of inversion results. A constant regularization parameter is normally used in waveform inversion, and an optimal regularization parameter has to be selected. The resulting inversion results are a trade-off among regions with different smoothness or noise levels; therefore, the images are over-regularized in some regions while under-regularized in the others. In this paper, we employ a spatially-variant parameter in the Tikhonov regularization scheme used in double-difference waveform tomography to improve the inversion accuracy and robustness. We compare the results obtained using a spatially-variant parameter with those obtained using a constant regularization parameter and those produced without any regularization. We observe that, utilizing a spatially-variant regularization scheme, the target regions are well reconstructed while the noise is reduced in the other regions. We show that the spatially-variant regularization scheme provides the flexibility to regularize local regions based on the a priori information without increasing computational costs and the computer memory requirement.
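
    A minimal linear-inverse-problem sketch of the spatially-variant idea (our notation, not the paper's waveform-inversion code): each row of the roughness operator gets its own weight, so the regularization strength varies across the model.

```python
import numpy as np

def spatially_variant_tikhonov(A, b, L, weights):
    """Solve min ||A m - b||^2 + ||W L m||^2 with W = diag(weights):
    a per-row (hence per-location) regularization strength instead of a
    single scalar. Larger weights smooth quiet background regions harder;
    smaller weights let the target region keep sharp updates."""
    WL = weights[:, None] * L          # row-scaled roughness operator
    lhs = A.T @ A + WL.T @ WL
    rhs = A.T @ b
    return np.linalg.solve(lhs, rhs)
```

    With L a first-difference operator, setting small weights inside the monitored reservoir zone and large weights elsewhere reproduces the behavior described in the abstract: sharp updates in the target region, damped noise outside it.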

  5. No. 127-The Evaluation of Stress Incontinence Prior to Primary Surgery.

    Science.gov (United States)

    Farrell, Scott A

    2018-02-01

    To provide clinical guidelines for the evaluation of women with stress urinary incontinence prior to primary anti-incontinence surgery. The modalities of evaluation range from basic pelvic examination through to the use of adjuncts including ultrasound and urodynamic testing. These guidelines provide a comprehensive approach to the preoperative evaluation of urinary incontinence to ensure that excessive evaluation is avoided without sacrificing diagnostic accuracy. Published opinions of experts, supplemented by evidence from clinical trials, where appropriate. The quality of the evidence is rated using the criteria described by the Canadian Task Force on the Periodic Health Examination. Comprehensive evaluation of women considering surgery to treat urinary incontinence is essential to rule out causes of incontinence that may not be amenable to surgical treatment. Simplifying the evaluation minimizes the discomfort and embarrassment potentially experienced by women. VALIDATION: These guidelines have been approved by the Urogynaecology Committee and the Executive and Council of The Society of Obstetricians and Gynaecologists of Canada. Copyright © 2018. Published by Elsevier Inc.

  6. Manifold Regularized Correlation Object Tracking.

    Science.gov (United States)

    Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling

    2018-05-01

    In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped from both target and nontarget regions. Thus, the final classifier in our method is trained with positive, negative, and unlabeled base samples, which is a semisupervised learning framework. A block optimization strategy is further introduced to learn a manifold regularization-based correlation filter for efficient online tracking. Experiments on two public tracking data sets demonstrate the superior performance of our tracker compared with the state-of-the-art tracking approaches.
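
    The closed-form flavor of manifold regularization can be sketched as Laplacian-regularized least squares (a generic stand-in, not the tracker itself; the k-NN graph, mask convention, and parameter names are our assumptions):

```python
import numpy as np

def laplacian_rls(X, y, labeled, k=5, lam=1.0, gamma=1.0):
    """Laplacian-regularized least squares in the spirit of manifold-
    regularized filters: neighbouring samples (labeled or not) are pushed
    toward similar responses. `labeled` is a boolean mask over the rows of X;
    entries of y at unlabeled rows are ignored by the data term."""
    n, d = X.shape
    # k-nearest-neighbour graph Laplacian over all samples (O(n^2), toy scale)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        for j in np.argsort(d2[i])[1:k + 1]:
            W[i, j] = W[j, i] = 1.0
    Lap = np.diag(W.sum(1)) - W
    J = np.diag(labeled.astype(float))  # selects labeled samples in the data term
    lhs = X.T @ J @ X + lam * np.eye(d) + gamma * X.T @ Lap @ X
    return np.linalg.solve(lhs, X.T @ J @ y)
```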

  7. From recreational to regular drug use

    DEFF Research Database (Denmark)

    Järvinen, Margaretha; Ravn, Signe

    2011-01-01

    This article analyses the process of going from recreational use to regular and problematic use of illegal drugs. We present a model containing six career contingencies relevant for young people’s progress from recreational to regular drug use: the closing of social networks, changes in forms...

  8. Regular variation on measure chains

    Czech Academy of Sciences Publication Activity Database

    Řehák, Pavel; Vitovec, J.

    2010-01-01

    Roč. 72, č. 1 (2010), s. 439-448 ISSN 0362-546X R&D Projects: GA AV ČR KJB100190701 Institutional research plan: CEZ:AV0Z10190503 Keywords : regularly varying function * regularly varying sequence * measure chain * time scale * embedding theorem * representation theorem * second order dynamic equation * asymptotic properties Subject RIV: BA - General Mathematics Impact factor: 1.279, year: 2010 http://www.sciencedirect.com/science/article/pii/S0362546X09008475

  9. New regular black hole solutions

    International Nuclear Information System (INIS)

    Lemos, Jose P. S.; Zanchin, Vilson T.

    2011-01-01

    In the present work we consider general relativity coupled to Maxwell's electromagnetism and charged matter. Under the assumption of spherical symmetry, there is a particular class of solutions that correspond to regular charged black holes whose interior region is de Sitter, the exterior region is Reissner-Nordstroem and there is a charged thin-layer in-between the two. The main physical and geometrical properties of such charged regular black holes are analyzed.

  10. Manifold Regularized Correlation Object Tracking

    OpenAIRE

    Hu, Hongwei; Ma, Bo; Shen, Jianbing; Shao, Ling

    2017-01-01

    In this paper, we propose a manifold regularized correlation tracking method with augmented samples. To make better use of the unlabeled data and the manifold structure of the sample space, a manifold regularization-based correlation filter is introduced, which aims to assign similar labels to neighbor samples. Meanwhile, the regression model is learned by exploiting the block-circulant structure of matrices resulting from the augmented translated samples over multiple base samples cropped fr...

  11. On geodesics in low regularity

    Science.gov (United States)

    Sämann, Clemens; Steinbauer, Roland

    2018-02-01

    We consider geodesics in both Riemannian and Lorentzian manifolds with metrics of low regularity. We discuss existence of extremal curves for continuous metrics and present several old and new examples that highlight their subtle interrelation with solutions of the geodesic equations. Then we turn to the initial value problem for geodesics for locally Lipschitz continuous metrics and generalize recent results on existence, regularity and uniqueness of solutions in the sense of Filippov.

  12. Ten years of MIPAS measurements with ESA Level 2 processor V6 – Part 1: Retrieval algorithm and diagnostics of the products

    Directory of Open Access Journals (Sweden)

    P. Raspollini

    2013-09-01

    the inversion. An expression specifically designed for the regularizing Levenberg–Marquardt method has been implemented for the computation of the covariance matrices and averaging kernels of the retrieved products. The regularization of the Levenberg–Marquardt method is controlled by the convergence criteria and is deliberately kept weak. The resulting oscillations of the retrieved profile are damped a posteriori by an innovative self-adapting Tikhonov regularization. The convergence criteria and the weakness of the self-adapting regularization ensure that minimal constraints are used and that the best vertical resolution obtainable from the measurements is achieved in all atmospheric conditions. Random and systematic errors, as well as vertical and horizontal resolution, are compared in the two phases of the mission for all products, namely: temperature, H2O, O3, HNO3, CH4, N2O, NO2, CFC-11, CFC-12, N2O5 and ClONO2. The use of different optimized sets of spectral intervals in the two phases of the mission ensures that, despite the different spectral resolutions, comparable performances are obtained across the whole MIPAS mission in terms of random and systematic errors, while the vertical and horizontal resolution are significantly better in the case of the optimized-resolution measurements.
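
    For orientation, the regularizing Levenberg–Marquardt iteration referred to here has the generic textbook form (the exact MIPAS V6 expressions are given in the paper):

        x_{k+1} = x_k + \left(K_k^\top S_y^{-1} K_k + \lambda_k I\right)^{-1} K_k^\top S_y^{-1}\left(y - F(x_k)\right),

    where F is the forward model, K_k its Jacobian, S_y the measurement covariance and \lambda_k the damping factor, which acts as a weak, iteration-dependent regularization. An a posteriori Tikhonov smoothing of the converged profile \hat{x} can then be written as

        \hat{x}_{\mathrm{reg}} = \arg\min_x \|x - \hat{x}\|^2 + \alpha \|L_1 x\|^2 = \left(I + \alpha L_1^\top L_1\right)^{-1} \hat{x},

    with L_1 a discrete first-derivative operator and \alpha chosen self-adaptively, which damps the residual oscillations mentioned above.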

  13. Manifold Regularized Reinforcement Learning.

    Science.gov (United States)

    Li, Hongliang; Liu, Derong; Wang, Ding

    2018-04-01

    This paper introduces a novel manifold regularized reinforcement learning scheme for continuous Markov decision processes. Smooth feature representations for value function approximation can be automatically learned using the unsupervised manifold regularization method. The learned features are data-driven, and can be adapted to the geometry of the state space. Furthermore, the scheme provides a direct basis representation extension for novel samples during policy learning and control. The performance of the proposed scheme is evaluated on two benchmark control tasks, i.e., the inverted pendulum and the energy storage problem. Simulation results illustrate the concepts of the proposed scheme and show that it can obtain excellent performance.
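
    One standard way to obtain manifold-regularized features for value-function approximation (in the spirit of this record, though not its exact algorithm) is to build a neighborhood graph over sampled states and use the smoothest Laplacian eigenvectors as basis features. A minimal sketch, with all names hypothetical:

        import numpy as np
        from scipy.sparse import csgraph
        from scipy.spatial.distance import cdist
        from scipy.linalg import eigh

        def laplacian_features(states, k=10, d=8):
            """Illustrative sketch: build a k-NN graph over sampled states
            (an (n, dim) array) and return the d smoothest Laplacian
            eigenvectors as data-driven features."""
            n = len(states)
            dist = cdist(states, states)
            idx = np.argsort(dist, axis=1)[:, 1:k + 1]  # k nearest neighbors, skip self
            W = np.zeros((n, n))
            for i in range(n):
                W[i, idx[i]] = np.exp(-dist[i, idx[i]] ** 2)
            W = np.maximum(W, W.T)                       # symmetrize the affinity graph
            L = csgraph.laplacian(W, normed=True)        # normalized graph Laplacian
            vals, vecs = eigh(L)                         # eigenvalues in ascending order
            return vecs[:, :d]                           # smoothest d eigenvectors

    New states can then be handled by nearest-neighbor interpolation or a Nystrom-style extension of the learned features, which is the kind of basis extension the record alludes to.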

  14. Laplacian manifold regularization method for fluorescence molecular tomography

    Science.gov (United States)

    He, Xuelei; Wang, Xiaodong; Yi, Huangjian; Chen, Yanrong; Zhang, Xu; Yu, Jingjing; He, Xiaowei

    2017-04-01

    Sparse regularization methods have been widely used in fluorescence molecular tomography (FMT) for stable three-dimensional reconstruction. Generally, ℓ1-regularization-based methods allow for utilizing the sparsity nature of the target distribution. However, in addition to sparsity, the spatial structure information should be exploited as well. A joint ℓ1 and Laplacian manifold regularization model is proposed to improve the reconstruction performance, and two algorithms (with and without the Barzilai-Borwein strategy) are presented to solve the regularization model. Numerical studies and an in vivo experiment demonstrate that the proposed gradient-projection-resolved Laplacian manifold regularization method for the joint model performs better than the comparative ℓ1-minimization algorithm in both spatial aggregation and location accuracy.
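
    The joint model described here can be written schematically (notation assumed, not copied from the paper) as

        \min_{x \geq 0}\ \tfrac{1}{2}\|Ax - b\|_2^2 + \lambda_1 \|x\|_1 + \lambda_2\, x^\top L x,

    where A is the FMT system (sensitivity) matrix, b the boundary measurements, the ℓ1 term promotes a sparse fluorophore distribution, and L is a graph Laplacian built on the reconstruction mesh so that the quadratic term enforces smoothness within spatial neighborhoods. Gradient projection handles the nonnegativity constraint, with the Barzilai-Borwein step size as an optional acceleration.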

  15. Learning Sparse Visual Representations with Leaky Capped Norm Regularizers

    OpenAIRE

    Wangni, Jianqiao; Lin, Dahua

    2017-01-01

    Sparsity-inducing regularization is an important part of learning over-complete visual representations. Despite the popularity of $\ell_1$ regularization, in this paper, we investigate the usage of non-convex regularizations in this problem. Our contribution consists of three parts. First, we propose the leaky capped norm regularization (LCNR), which allows model weights below a certain threshold to be regularized more strongly as opposed to those above, therefore imposes strong sparsity and...
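
    One plausible reading of such a regularizer (a hedged reconstruction for illustration; the paper's exact definition should be consulted) is a capped $\ell_1$ penalty with a small residual slope above the cap:

        R(w) = \lambda \sum_i \min(|w_i|, \theta) + \epsilon\,\lambda \sum_i \max(|w_i| - \theta,\, 0), \qquad 0 < \epsilon < 1,

    so that weights below the threshold \theta are penalized at the full rate \lambda, while weights above it "leak" only the smaller rate \epsilon\lambda, imposing strong sparsity on small weights without over-shrinking large ones.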

  16. Analysis of Gafchromic EBT3 film calibration irradiated with gamma rays from different systems: Gamma Knife and Cobalt-60 unit.

    Science.gov (United States)

    Najafi, Mohsen; Geraily, Ghazale; Shirazi, Alireza; Esfahani, Mahbod; Teimouri, Javad

    2017-01-01

    In recent years, Gafchromic films have been used as an advanced instrument for dosimetry systems. The EBT3 films are a new generation of Gafchromic films. Our main interest is to compare the response of the EBT3 films exposed to gamma rays provided by the Theratron 780C as a conventional radiotherapy system and the Leksell Gamma Knife as a stereotactic radiotherapy system (SRS). Both systems use Cobalt-60 sources and thus the same energy. However, other factors such as source-to-axis distance, number of sources, dose rate, direction of irradiation, shape of phantom, the field shape of radiation, and different scatter contribution may influence the calibration curve. Calibration curves for the 2 systems were measured and plotted for doses ranging from 0 to 40 Gy at the red and green channels. The best fitting curve was obtained with the Levenberg-Marquardt algorithm. Also, the component of dose uncertainty was obtained for each calibration curve. With the best fitting curve for the EBT3 films, the calibration curve can be used to measure the absolute dose in radiation therapy. Although there is a small deviation between the 2 curves, the p-value at each channel shows no significant difference between the 2 calibration curves. Therefore, the calibration curve for each system can be treated as the same because the differences are minor. The results show that, with the best fitting curve from the measured data, and with the associated measurement uncertainties taken into account, the EBT3 calibration curve can be used to measure an unknown dose both in SRS and in conventional radiotherapy. Copyright © 2017. Published by Elsevier Inc.
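
    As an illustration of the fitting step, the sketch below fits a power-law calibration family commonly used for radiochromic film with SciPy's Levenberg-Marquardt solver; the functional form, data values and names are assumptions for illustration, not taken from the study.

        import numpy as np
        from scipy.optimize import curve_fit

        # Hypothetical calibration points: delivered dose (Gy) vs. net optical
        # density (netOD) in the red channel. Illustrative values only.
        dose  = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 24.0, 32.0, 40.0])
        netod = np.array([0.05, 0.09, 0.16, 0.27, 0.42, 0.52, 0.60, 0.66])

        def dose_from_netod(x, a, b, n):
            # Power-law family often assumed for Gafchromic film: D = a*x + b*x**n
            return a * x + b * x ** n

        # method='lm' selects the Levenberg-Marquardt algorithm inside curve_fit.
        popt, pcov = curve_fit(dose_from_netod, netod, dose,
                               p0=[20.0, 60.0, 3.0], method='lm')
        perr = np.sqrt(np.diag(pcov))  # 1-sigma parameter uncertainties
        print("fit parameters:", popt, "uncertainties:", perr)

    The diagonal of the returned covariance matrix gives the per-parameter uncertainty that enters the kind of dose-uncertainty budget discussed above.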

  17. Modeling plant density and ponding water effects on flooded rice evapotranspiration and crop coefficients: critical discussion about the concepts used in current methods

    Science.gov (United States)

    Aschonitis, Vassilis; Diamantopoulou, Maria; Papamichail, Dimitris

    2018-05-01

    The aim of the study is to propose new modeling approaches for daily estimation of the crop coefficient Kc for flooded rice (Oryza sativa L., ssp. indica) under various plant densities. Non-linear regression (NLR) and artificial neural networks (ANN) were used to predict Kc based on leaf area index (LAI), crop height, wind speed, water albedo, and ponding water depth. Two years of evapotranspiration (ETc) measurements from lysimeters located in a Mediterranean environment were used in this study. The NLR approach combines bootstrapping and Bayesian sensitivity analysis based on a semi-empirical formula. This approach provided significant information about the hidden role of the same predictor variables in the Levenberg-Marquardt ANN approach, which improved Kc predictions. Relationships of production versus ETc were also built and verified with data obtained from Australia. The results of the study showed that the daily Kc values, under extremely high plant densities (e.g., for LAImax > 10), can reach extremely high values (Kc > 3) during the reproductive stage. Justifications given in the discussion question both the Kc values given by FAO and the energy-budget approaches, which assume that ETc cannot exceed a specific threshold defined by the net radiation. These approaches can no longer explain the continuous increase of global rice yields (currently more than double those of the 1960s) due to the improvement of cultivars and agricultural intensification. The study suggests that the safest method to verify predefined or modeled Kc values is through preconstructed relationships of production versus ETc based on field measurements.
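
    For reference, the crop coefficient under discussion is defined, following the standard FAO-56 convention, as the ratio of crop evapotranspiration to the grass reference evapotranspiration:

        K_c = \frac{ET_c}{ET_0},

    so the reported values Kc > 3 during the reproductive stage mean that measured crop water use exceeded three times the reference value, which is what puts the FAO tabulated coefficients and radiation-capped energy-budget estimates into question.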

  18. Robust Single Image Super-Resolution via Deep Networks With Sparse Prior.

    Science.gov (United States)

    Liu, Ding; Wang, Zhaowen; Wen, Bihan; Yang, Jianchao; Han, Wei; Huang, Thomas S

    2016-07-01

    Single image super-resolution (SR) is an ill-posed problem, which tries to recover a high-resolution image from its low-resolution observation. To regularize the solution of the problem, previous methods have focused on designing good priors for natural images, such as sparse representation, or directly learning the priors from a large data set with models, such as deep neural networks. In this paper, we argue that domain expertise from the conventional sparse coding model can be combined with the key ingredients of deep learning to achieve further improved results. We demonstrate that a sparse coding model particularly designed for SR can be incarnated as a neural network with the merit of end-to-end optimization over training data. The network has a cascaded structure, which boosts the SR performance for both fixed and incremental scaling factors. The proposed training and testing schemes can be extended for robust handling of images with additional degradation, such as noise and blurring. A subjective assessment is conducted and analyzed in order to thoroughly evaluate various SR techniques. Our proposed model is tested on a wide range of images, and it significantly outperforms the existing state-of-the-art methods for various scaling factors both quantitatively and perceptually.
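
    The sparse-coding-as-network construction alluded to here follows the general LISTA recipe (stated generically; the paper's cascaded SR network adds further structure). The sparse code of a patch x under dictionary D solves

        \min_z\ \tfrac{1}{2}\|x - Dz\|_2^2 + \lambda \|z\|_1,

    and each ISTA iteration can be written as a recurrent layer

        z^{t+1} = h_\theta\!\left(Wx + Sz^t\right), \qquad W = \tfrac{1}{L}D^\top, \quad S = I - \tfrac{1}{L}D^\top D,

    with h_\theta the soft-thresholding operator and L a Lipschitz constant of the data term. Unfolding a fixed number of iterations and learning W, S and \theta end-to-end is what turns the sparse prior into a trainable network.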

  19. Learning Errors by Radial Basis Function Neural Networks and Regularization Networks

    Czech Academy of Sciences Publication Activity Database

    Neruda, Roman; Vidnerová, Petra

    2009-01-01

    Roč. 1, č. 2 (2009), s. 49-57 ISSN 2005-4262 R&D Projects: GA MŠk(CZ) 1M0567 Institutional research plan: CEZ:AV0Z10300504 Keywords : neural network * RBF networks * regularization * learning Subject RIV: IN - Informatics, Computer Science http://www.sersc.org/journals/IJGDC/vol2_no1/5.pdf

  20. Exclusion of children with intellectual disabilities from regular ...

    African Journals Online (AJOL)

    The study investigated why teachers exclude children with intellectual disability (ID) from regular classrooms in Nigeria. Participants were 169 regular teachers randomly selected from Oyo and Ogun states. A questionnaire was used to collect data; results revealed that 57.4% of regular teachers could not cope with children with ID ...

  1. Challenges of the New Zealand healthcare disaster preparedness prior to the Canterbury earthquakes: a qualitative analysis.

    Science.gov (United States)

    Al-Shaqsi, Sultan; Gauld, Robin; Lovell, Sarah; McBride, David; Al-Kashmiri, Ammar; Al-Harthy, Abdullah

    2013-03-15

    Disasters are a growing global phenomenon. New Zealand has suffered several major disasters in recent times. The state of healthcare disaster preparedness in New Zealand prior to the Canterbury earthquakes is not well documented. To investigate the challenges of the New Zealand healthcare disaster preparedness prior to the Canterbury earthquakes. Semi-structured interviews with emergency planners in all the District Health Boards (DHBs) in New Zealand in the period between January and March 2010. The interview protocol revolved around the domains of emergency planning adopted by the World Health Organization. Seventeen interviews were conducted. The main themes included disinterest of clinical personnel in emergency planning, the need for communication backup, the integration of private services in disaster preparedness, the value of volunteers, the requirement for regular disaster training, and the need to enhance surge capability of the New Zealand healthcare system to respond to disasters. Prior to the Canterbury earthquakes, healthcare disaster preparedness faced multiple challenges. Despite these challenges, New Zealand's healthcare response was adequate. Future preparedness has to consider the lessons learnt from the 2011 earthquakes to improve healthcare disaster planning in New Zealand.

  2. Regular use of alcohol and tobacco in India and its association with age, gender, and poverty.

    Science.gov (United States)

    Neufeld, K J; Peters, D H; Rani, M; Bonu, S; Brooner, R K

    2005-03-07

    This study provides national estimates of regular tobacco and alcohol use in India and their associations with gender, age, and economic group obtained from a representative survey of 471,143 people over the age of 10 years in 1995-96, the National Sample Survey. The national prevalence of regular use of smoking tobacco is estimated to be 16.2%, chewing tobacco 14.0%, and alcohol 4.5%. Men were 25.5 times more likely than women to report regular smoking, 3.7 times more likely to regularly chew tobacco, and 9.7 times more likely to regularly use alcohol. Respondents belonging to scheduled castes and tribes (recognized disadvantaged groups) were significantly more likely to report regular use of alcohol as well as smoking and chewing tobacco. People from rural areas had higher rates compared to urban dwellers, as did those with no formal education. Individuals with incomes below the poverty line had higher relative odds of use of chewing tobacco and alcohol compared to those above the poverty line. The regular use of both tobacco and alcohol also increased significantly with each diminishing income quintile. Comparisons are made between these results and those found in the United States and elsewhere, highlighting the need to address control of these substances on the public health agenda.

  3. Aspirin and the risk of cardiovascular events in atherosclerosis patients with and without prior ischemic events.

    Science.gov (United States)

    Bavry, Anthony A; Elgendy, Islam Y; Elbez, Yedid; Mahmoud, Ahmed N; Sorbets, Emmanuel; Steg, Philippe Gabriel; Bhatt, Deepak L

    2017-09-01

    The benefit of aspirin among patients with stable atherosclerosis without a prior ischemic event is not well defined. Aspirin would be of benefit in outpatients with atherosclerosis with prior ischemic events, but not in those without ischemic events. Subjects from the Reduction of Atherothrombosis for Continued Health registry were divided according to prior ischemic event (n = 21 724) vs stable atherosclerosis, but no prior ischemic event (n = 11 872). Analyses were propensity score matched. Aspirin use was updated at each clinic visit and considered as a time-varying covariate. The primary outcome was the first occurrence of cardiovascular death, myocardial infarction, or stroke. In the group with a prior ischemic event, aspirin use was associated with a marginally lower risk of the primary outcome at a median of 41 months (hazard ratio [HR]: 0.81, 95% confidence interval [CI]: 0.65-1.01, P = 0.06). In the group without a prior ischemic event, aspirin use was not associated with a lower risk of the primary outcome at a median of 36 months (HR: 1.03, 95% CI: 0.73-1.45, P = 0.86). In this observational analysis of outpatients with stable atherosclerosis, aspirin was marginally beneficial among patients with a prior ischemic event; however, there was no apparent benefit among those with no prior ischemic event. © 2017 Wiley Periodicals, Inc.

  4. On infinite regular and chiral maps

    OpenAIRE

    Arredondo, John A.; Valdez, Camilo Ramírez y Ferrán

    2015-01-01

    We prove that infinite regular and chiral maps take place on surfaces with at most one end. Moreover, we prove that an infinite regular or chiral map on an orientable surface with genus can only be realized on the Loch Ness monster, that is, the topological surface of infinite genus with one end.

  5. 29 CFR 779.18 - Regular rate.

    Science.gov (United States)

    2010-07-01

    ... employee under subsection (a) or in excess of the employee's normal working hours or regular working hours... Relating to Labor (Continued) WAGE AND HOUR DIVISION, DEPARTMENT OF LABOR STATEMENTS OF GENERAL POLICY OR... not less than one and one-half times their regular rates of pay. Section 7(e) of the Act defines...

  6. Continuum regularized Yang-Mills theory

    International Nuclear Information System (INIS)

    Sadun, L.A.

    1987-01-01

    Using the machinery of stochastic quantization, Z. Bern, M. B. Halpern, C. Taubes and I recently proposed a continuum regularization technique for quantum field theory. This regularization may be implemented by applying a regulator to either the (d + 1)-dimensional Parisi-Wu Langevin equation or, equivalently, to the d-dimensional second order Schwinger-Dyson (SD) equations. This technique is non-perturbative, respects all gauge and Lorentz symmetries, and is consistent with a ghost-free gauge fixing (Zwanziger's). This thesis is a detailed study of this regulator, and of regularized Yang-Mills theory, using both perturbative and non-perturbative techniques. The perturbative analysis comes first. The mechanism of stochastic quantization is reviewed, and a perturbative expansion based on second-order SD equations is developed. A diagrammatic method (SD diagrams) for evaluating terms of this expansion is developed. We apply the continuum regulator to a scalar field theory. Using SD diagrams, we show that all Green functions can be rendered finite to all orders in perturbation theory. Even non-renormalizable theories can be regularized. The continuum regulator is then applied to Yang-Mills theory, in conjunction with Zwanziger's gauge fixing. A perturbative expansion of the regulator is incorporated into the diagrammatic method. It is hoped that the techniques discussed in this thesis will contribute to the construction of a renormalized Yang-Mills theory in 3 and 4 dimensions
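
    The starting point of the construction is the Parisi-Wu Langevin equation in a fictitious fifth time t:

        \frac{\partial \phi(x,t)}{\partial t} = -\frac{\delta S}{\delta \phi(x,t)} + \eta(x,t), \qquad \langle \eta(x,t)\,\eta(x',t')\rangle = 2\,\delta^{(d)}(x - x')\,\delta(t - t'),

    whose equilibrium distribution reproduces the Euclidean path integral. Schematically, the continuum regulator smears the delta-correlated noise, \langle \eta\,\eta \rangle \to 2\,R_\Lambda^2(x,x')\,\delta(t - t') with a smooth kernel R_\Lambda (a sketch of the mechanism only; the precise regulator is specified in the thesis).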

  7. Deformations in closed string theory: canonical formulation and regularization

    International Nuclear Information System (INIS)

    Cederwall, M.; Von Gussich, A.; Sundell, P.

    1996-01-01

    We study deformations of closed string theory by primary fields of conformal weight (1,1), using conformal techniques on the complex plane. A canonical surface integral formalism for computing commutators in a non-holomorphic theory is constructed, and explicit formulae for deformations of operators are given. We identify the unique regularization of the arising divergences that respects conformal invariance, and consider the corresponding parallel transport. The associated connection is metric compatible and carries no curvature. (orig.)

  8. Regular use of aspirin and pancreatic cancer risk

    Directory of Open Access Journals (Sweden)

    Mahoney Martin C

    2002-09-01

    Full Text Available Abstract Background Regular use of aspirin and other non-steroidal anti-inflammatory drugs (NSAIDs) has been consistently associated with reduced risk of colorectal cancer and adenoma, and there is some evidence for a protective effect for other types of cancer. As experimental studies reveal a possible role for NSAIDs in reducing the risk of pancreatic cancer, epidemiological studies examining similar associations in human populations become more important. Methods In this hospital-based case-control study, 194 patients with pancreatic cancer were compared to 582 age- and sex-matched patients with non-neoplastic conditions to examine the association between aspirin use and risk of pancreatic cancer. All participants received medical services at the Roswell Park Cancer Institute in Buffalo, NY and completed a comprehensive epidemiologic questionnaire that included information on demographics, lifestyle factors and medical history as well as frequency and duration of aspirin use. Patients using at least one tablet per week for at least six months were classified as regular aspirin users. Unconditional logistic regression was used to compute crude and adjusted odds ratios (ORs) with 95% confidence intervals (CIs). Results Pancreatic cancer risk in aspirin users was not changed relative to non-users (adjusted OR = 1.00; 95% CI 0.72–1.39). No significant change in risk was found in relation to greater frequency or prolonged duration of use, in the total sample or in either gender. Conclusions These data suggest that regular aspirin use may not be associated with lower risk of pancreatic cancer.

  9. Higher covariant derivative Pauli-Villars regularization does not lead to a consistent QCD

    Energy Technology Data Exchange (ETDEWEB)

    Martin, C P [Universidad Autonoma de Madrid (Spain). Dept. de Fisica Teorica]; Ruiz Ruiz, F [Nationaal Inst. voor Kernfysica en Hoge-Energiefysica (NIKHEF), Amsterdam (Netherlands). Sectie H]

    1994-12-31

    We compute the beta function at one loop for Yang-Mills theory using as regulator the combination of higher covariant derivatives and Pauli-Villars determinants proposed by Faddeev and Slavnov. This regularization prescription has the appealing feature that it is manifestly gauge invariant and essentially four-dimensional. It happens however that the one-loop coefficient in the beta function that it yields is not -11/3, as it should be, but -23/6. The difference is due to unphysical logarithmic radiative corrections generated by the Pauli-Villars determinants on which the regularization method is based. This no-go result discards the prescription as a viable gauge invariant regularization, thus solving a long-standing open question in the literature. We also observe that the prescription can be modified so as not to generate unphysical logarithmic corrections, but at the expense of losing manifest gauge invariance. (orig.).

  10. Higher covariant derivative Pauli-Villars regularization does not lead to a consistent QCD

    International Nuclear Information System (INIS)

    Martin, C.P.; Ruiz Ruiz, F.

    1994-01-01

    We compute the beta function at one loop for Yang-Mills theory using as regulator the combination of higher covariant derivatives and Pauli-Villars determinants proposed by Faddeev and Slavnov. This regularization prescription has the appealing feature that it is manifestly gauge invariant and essentially four-dimensional. It happens however that the one-loop coefficient in the beta function that it yields is not -11/3, as it should be, but -23/6. The difference is due to unphysical logarithmic radiative corrections generated by the Pauli-Villars determinants on which the regularization method is based. This no-go result discards the prescription as a viable gauge invariant regularization, thus solving a long-standing open question in the literature. We also observe that the prescription can be modified so as not to generate unphysical logarithmic corrections, but at the expense of losing manifest gauge invariance. (orig.)

  11. Convergence and fluctuations of Regularized Tyler estimators

    KAUST Repository

    Kammoun, Abla

    2015-10-26

    This article studies the behavior of regularized Tyler estimators (RTEs) of scatter matrices. The key advantages of these estimators are twofold. First, they guarantee by construction a good conditioning of the estimate and second, being a derivative of robust Tyler estimators, they inherit their robustness properties, notably their resilience to the presence of outliers. Nevertheless, one major problem raised by the use of RTEs in practice is the question of setting the regularization parameter p. While a high value of p is likely to push all the eigenvalues away from zero, it comes at the cost of a larger bias with respect to the population covariance matrix. A deep understanding of the statistics of RTEs is essential to come up with appropriate choices for the regularization parameter. This is not an easy task and might be out of reach, unless one considers asymptotic regimes wherein the number of observations n and/or their size N increase together. First asymptotic results have recently been obtained under the assumption that N and n are large and commensurable. Interestingly, no results concerning the regime of n going to infinity with N fixed exist, even though the investigation of this assumption has usually predated the analysis of the most difficult case of N and n large. This motivates our work. In particular, we prove in the present paper that the RTEs converge to a deterministic matrix when n → ∞ with N fixed, which is expressed as a function of the theoretical covariance matrix. We also derive the fluctuations of the RTEs around this deterministic matrix and establish that these fluctuations converge in distribution to a multivariate Gaussian distribution with zero mean and a covariance depending on the population covariance and the regularization parameter.
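
    A standard form of the RTE fixed-point equation studied in this line of work (notation assumed here: n samples x_i of dimension N) is

        \hat{C}_N(\rho) = (1-\rho)\,\frac{1}{n}\sum_{i=1}^{n} \frac{N\, x_i x_i^{*}}{x_i^{*}\,\hat{C}_N^{-1}(\rho)\, x_i} + \rho\, I_N, \qquad \rho \in (0, 1],

    where the data-dependent weights provide the robustness to outliers inherited from the Tyler estimator, and the \rho I_N term guarantees a well-conditioned, invertible estimate; the results described above characterize the limit and fluctuations of \hat{C}_N(\rho) as n grows with N fixed.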

  12. Convergence and fluctuations of Regularized Tyler estimators

    KAUST Repository

    Kammoun, Abla; Couillet, Romain; Pascal, Frederic; Alouini, Mohamed-Slim

    2015-01-01

    This article studies the behavior of regularized Tyler estimators (RTEs) of scatter matrices. The key advantages of these estimators are twofold. First, they guarantee by construction a good conditioning of the estimate and second, being a derivative of robust Tyler estimators, they inherit their robustness properties, notably their resilience to the presence of outliers. Nevertheless, one major problem raised by the use of RTEs in practice is the question of setting the regularization parameter p. While a high value of p is likely to push all the eigenvalues away from zero, it comes at the cost of a larger bias with respect to the population covariance matrix. A deep understanding of the statistics of RTEs is essential to come up with appropriate choices for the regularization parameter. This is not an easy task and might be out of reach, unless one considers asymptotic regimes wherein the number of observations n and/or their size N increase together. First asymptotic results have recently been obtained under the assumption that N and n are large and commensurable. Interestingly, no results concerning the regime of n going to infinity with N fixed exist, even though the investigation of this assumption has usually predated the analysis of the most difficult case of N and n large. This motivates our work. In particular, we prove in the present paper that the RTEs converge to a deterministic matrix when n → ∞ with N fixed, which is expressed as a function of the theoretical covariance matrix. We also derive the fluctuations of the RTEs around this deterministic matrix and establish that these fluctuations converge in distribution to a multivariate Gaussian distribution with zero mean and a covariance depending on the population covariance and the regularization parameter.

  13. Regularity effect in prospective memory during aging

    OpenAIRE

    Blondelle, Geoffrey; Hainselin, Mathieu; Gounden, Yannick; Heurley, Laurent; Voisin, Hélène; Megalakaki, Olga; Bressous, Estelle; Quaglino, Véronique

    2016-01-01

    Background: The regularity effect can affect performance in prospective memory (PM), but little is known about the cognitive processes linked to this effect. Moreover, its impact with regard to aging remains unknown. To our knowledge, this study is the first to examine the regularity effect in PM from a lifespan perspective, with a sample of young, intermediate, and older adults. Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 1...

  14. 20 CFR 226.14 - Employee regular annuity rate.

    Science.gov (United States)

    2010-04-01

    ... 20 Employees' Benefits 1 2010-04-01 2010-04-01 false Employee regular annuity rate. 226.14 Section... COMPUTING EMPLOYEE, SPOUSE, AND DIVORCED SPOUSE ANNUITIES Computing an Employee Annuity § 226.14 Employee regular annuity rate. The regular annuity rate payable to the employee is the total of the employee tier I...

  15. Regular algebra and finite machines

    CERN Document Server

    Conway, John Horton

    2012-01-01

    World-famous mathematician John H. Conway based this classic text on a 1966 course he taught at Cambridge University. Geared toward graduate students of mathematics, it will also prove a valuable guide to researchers and professional mathematicians. His topics cover Moore's theory of experiments, Kleene's theory of regular events and expressions, Kleene algebras, the differential calculus of events, factors and the factor matrix, and the theory of operators. Additional subjects include event classes and operator classes, some regular algebras, context-free languages, commutative regular alg

  16. 39 CFR 6.1 - Regular meetings, annual meeting.

    Science.gov (United States)

    2010-07-01

    ... 39 Postal Service 1 2010-07-01 2010-07-01 false Regular meetings, annual meeting. 6.1 Section 6.1 Postal Service UNITED STATES POSTAL SERVICE THE BOARD OF GOVERNORS OF THE U.S. POSTAL SERVICE MEETINGS (ARTICLE VI) § 6.1 Regular meetings, annual meeting. The Board shall meet regularly on a schedule...

  17. Bayesian 3D X-ray computed tomography image reconstruction with a scaled Gaussian mixture prior model

    International Nuclear Information System (INIS)

    Wang, Li; Gac, Nicolas; Mohammad-Djafari, Ali

    2015-01-01

    In order to improve the quality of 3D X-ray tomography reconstruction for Non Destructive Testing (NDT), we investigate in this paper hierarchical Bayesian methods. In NDT, useful prior information on the volume, such as the limited number of materials or the presence of homogeneous areas, can be included in the iterative reconstruction algorithms. In hierarchical Bayesian methods, not only is the volume estimated thanks to the prior model of the volume, but so are the hyperparameters of this prior. This additional complexity in the reconstruction methods, when applied to large volumes (from 512³ to 8192³ voxels), results in an increasing computational cost. To reduce it, the hierarchical Bayesian methods investigated in this paper lead to an algorithm acceleration by Variational Bayesian Approximation (VBA) [1] and to hardware acceleration thanks to projection and back-projection operators parallelized on many-core processors like GPUs [2]. In this paper, we consider a Student-t prior on the gradient of the image, implemented in a hierarchical way [3, 4, 1]. The operators H (forward or projection) and H^t (adjoint or back-projection) implemented on multi-GPU [2] have been used in this study. Different methods are evaluated on the synthetic 'Shepp and Logan' volume in terms of quality and time of reconstruction. We used several simple regularizations of order 1 and order 2. Other prior models also exist [5]. Sometimes, for a discrete image, the segmentation and the reconstruction can be done at the same time; the reconstruction can then be done with fewer projections
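
    The hierarchical Student-t construction mentioned above rests on the scale-mixture identity (stated generically): a Student-t prior on each gradient component u_j is a Gaussian whose precision is itself Gamma-distributed,

        p(u_j) = \int_0^\infty \mathcal{N}\!\left(u_j;\, 0,\, \lambda_j^{-1}\right)\mathcal{G}\!\left(\lambda_j;\, \tfrac{\nu}{2}, \tfrac{\nu}{2}\right) d\lambda_j,

    so that, conditionally on the hidden precisions \lambda_j, the model is Gaussian and the Variational Bayesian Approximation updates remain in closed form.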

  18. The relevance of metaphors as a conceptualization of experiences: a reflection on the teaching/learning of English in regular schools

    Directory of Open Access Journals (Sweden)

    Gabriela da Cunha Barbosa Saldanha

    2016-12-01

    Full Text Available Conceiving of metaphor as a means of conceptualizing the world through our everyday experiences, the central aim of this study is to identify the metaphors that high school students use to describe their experience of learning English in elementary school. The research, carried out in August 2015, has a mixed design, using qualitative and quantitative data. The analysis of the corpus was grounded in the assumptions of Conceptual Metaphor Theory, in previous work by renowned authors in the field, and in a frame of reference based on experiences. The results reveal that, although metaphors about unsuccessful experiences related to learning and to the teacher were more numerous, metaphors about successful experiences were also identified, which breaks with the logic of "school English" and supports the thesis that it is indeed possible to learn English in a regular school. Keywords: Metaphors. Experiences. English teaching/learning. Regular school.

  19. A regularized stationary mean-field game

    KAUST Repository

    Yang, Xianjin

    2016-01-01

    In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.
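
    A prototypical stationary mean-field game system of the kind regularized in the thesis (written here in a generic second-order form on the torus, not the thesis's exact model) couples a Hamilton-Jacobi equation for the value function u with a Fokker-Planck equation for the density m:

        -\Delta u + H(x, \nabla u) + \overline{H} = g(m), \qquad -\Delta m - \operatorname{div}\!\left(m\, D_p H(x, \nabla u)\right) = 0, \qquad \int_{\mathbb{T}^d} m\, dx = 1, \quad m > 0,

    where \overline{H} is the ergodic constant. The low-order regularization and the a priori estimates are what guarantee the positive density asserted above, and the monotone flow method exploits the monotonicity of this coupled system to compute solutions numerically.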

  20. A regularized stationary mean-field game

    KAUST Repository

    Yang, Xianjin

    2016-04-19

    In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.