WorldWideScience

Sample records for function method based

  1. Numerical methods for characterization of synchrotron radiation based on the Wigner function method

    Directory of Open Access Journals (Sweden)

    Takashi Tanaka

    2014-06-01

    Numerical characterization of synchrotron radiation based on the Wigner function method is explored in order to accurately evaluate light source performance. A number of numerical methods to compute the Wigner functions of typical synchrotron radiation sources such as bending magnets, undulators and wigglers are presented, which significantly improve the computation efficiency and reduce the total computation time. As a practical example of the numerical characterization, optimization of the betatron functions to maximize the brilliance of undulator radiation is discussed.

  2. Based on Penalty Function Method

    Directory of Open Access Journals (Sweden)

    Ishaq Baba

    2015-01-01

    The dual response surface approach for simultaneously optimizing the mean and variance models as separate functions suffers some deficiencies in handling the tradeoffs between the bias and variance components of the mean squared error (MSE). In this paper, the accuracy of the predicted response is given serious attention in the determination of the optimum setting conditions. We consider four different objective functions for the dual response surface optimization approach. The essence of the proposed method is to reduce the influence of the variance of the predicted response by minimizing the variability relative to the quality characteristics of interest while at the same time achieving the specified target output. The basic idea is to convert the constrained optimization function into an unconstrained problem by adding the constraint to the original objective function. Numerical examples and simulation studies are carried out to compare the performance of the proposed method with some existing procedures. Numerical results show that the performance of the proposed method is encouraging and exhibits clear improvement over the existing approaches.
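
    The penalty-function conversion described above can be sketched in a few lines. This is an illustrative example with assumed mean and variance response surfaces and an assumed target of 10, not the models from the paper:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical fitted response surfaces (NOT the paper's models)
def mean_hat(x):                      # predicted mean response
    return 8.0 + 1.5 * x[0] + 0.5 * x[1]

def var_hat(x):                       # predicted variance response
    return 1.0 + x[0] ** 2 + 0.5 * x[1] ** 2

TARGET, RHO = 10.0, 1e3               # assumed target and penalty weight

# Constrained problem  min var_hat(x)  s.t.  mean_hat(x) = TARGET
# converted to an unconstrained one by adding the constraint as a penalty:
def penalized(x):
    return var_hat(x) + RHO * (mean_hat(x) - TARGET) ** 2

res = minimize(penalized, x0=np.zeros(2))
print(res.x, mean_hat(res.x))         # mean_hat(res.x) approaches TARGET as RHO grows
```

    Larger values of RHO enforce the target more tightly, at the cost of a stiffer optimization problem.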

  3. Functional connectivity analysis of the neural bases of emotion regulation: A comparison of independent component method with density-based k-means clustering method.

    Science.gov (United States)

    Zou, Ling; Guo, Qian; Xu, Yi; Yang, Biao; Jiao, Zhuqing; Xiang, Jianbo

    2016-04-29

    Functional magnetic resonance imaging (fMRI) is an important tool in neuroscience for assessing connectivity and interactions between distant areas of the brain. To find and characterize the coherent patterns of brain activity as a means of identifying brain systems for the cognitive reappraisal of the emotion task, both density-based k-means clustering and independent component analysis (ICA) methods can be applied to characterize the interactions between brain regions involved in cognitive reappraisal of emotion. Our results reveal that, compared with the ICA method, the density-based k-means clustering method provides higher clustering sensitivity. In addition, it is more sensitive to relatively weak functional connection regions. The study concludes that, in the process of receiving emotional stimuli, the most clearly activated areas are mainly distributed in the frontal lobe, cingulum and near the hypothalamus. Furthermore, the density-based k-means clustering method provides a more reliable approach for follow-up studies of brain functional connectivity.

  4. Cross-Correlation-Function-Based Multipath Mitigation Method for Sine-BOC Signals

    Directory of Open Access Journals (Sweden)

    H. H. Chen

    2012-06-01

    Global Navigation Satellite System (GNSS) positioning accuracy in indoor and urban canyon environments is greatly affected by multipath, owing to the distortions it causes in the signal's autocorrelation function. In this paper, the cross-correlation function between the received sine-phased Binary Offset Carrier (sine-BOC) modulated signal and the local signal is first studied, and a new multipath mitigation method based on this cross-correlation function is proposed for sine-BOC signals. The method shapes the cross-correlation function by designing the modulated symbols of the local signal. Theoretical analysis and simulation results indicate that the proposed method exhibits better multipath mitigation performance than the traditional Double Delta Correlator (DDC) techniques, especially for medium/long-delay multipath signals, and it is also convenient and flexible to implement, requiring only one correlator, which suits low-cost mass-market receivers.
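
    As a toy illustration of the correlation functions involved, the following sketch builds a sine-BOC(1,1)-like baseband waveform from a random spreading code and computes its normalized circular correlation; the sharp main peak and negative side peaks are what make BOC signals both precise and multipath-sensitive. All parameters are assumed, not the paper's designed local symbols:

```python
import numpy as np

rng = np.random.default_rng(0)
SPC = 8                                    # samples per chip (assumed)
chips = rng.choice([-1.0, 1.0], size=128)  # toy spreading code, not a real PRN
# sine-phased square-wave subcarrier, one cycle per chip
sub = np.where(np.arange(SPC) < SPC // 2, 1.0, -1.0)
boc = np.repeat(chips, SPC) * np.tile(sub, chips.size)  # sine-BOC(1,1) waveform

def corr(local, received):
    """Normalized circular cross-correlation over code delays (in samples)."""
    n = received.size
    delays = range(-2 * SPC, 2 * SPC + 1)
    return np.array([np.dot(received, np.roll(local, d)) for d in delays]) / n

acf = corr(boc, boc)   # main peak of 1.0 at zero delay, negative side peaks
print(acf.max(), acf.min())
```

    The paper's method replaces the plain replica `boc` with a local signal whose symbols are redesigned to shape this correlation.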

  5. Modulating Function-Based Method for Parameter and Source Estimation of Partial Differential Equations

    KAUST Repository

    Asiri, Sharefa M.

    2017-10-08

    Partial Differential Equations (PDEs) are commonly used to model complex systems that arise, for example, in biology, engineering, and chemistry. The parameters (or coefficients) and the source of PDE models are often unknown and are estimated from available measurements. Despite its importance, solving the estimation problem is mathematically and numerically challenging, especially when the measurements are corrupted by noise, which is often the case. Various methods have been proposed to solve estimation problems in PDEs, and these can be classified into optimization methods and recursive methods. The optimization methods are usually computationally heavy, especially when the number of unknowns is large. In addition, they are sensitive to the initial guess and the stopping condition, and they lack robustness to noise. Recursive methods, such as observer-based approaches, are limited by their dependence on structural properties such as observability and identifiability, which might be lost when the PDE is approximated numerically. Moreover, most of these methods provide asymptotic estimates, which might not be useful for control applications, for example. An alternative non-asymptotic approach with a smaller computational burden has been proposed in engineering fields, based on the so-called modulating functions. In this dissertation, we mathematically and numerically analyze the modulating functions-based approaches and extend them to different situations. The contributions of this thesis are as follows. (i) Provide a mathematical analysis of the modulating function-based method (MFBM), including its well-posedness, statistical properties, and estimation errors. (ii) Provide a numerical analysis of the MFBM through some estimation problems, and study the sensitivity of the method to the modulating functions' parameters. (iii) Propose an effective algorithm for selecting the method's design parameters.

  6. Modulating functions-based method for parameters and source estimation in one-dimensional partial differential equations

    KAUST Repository

    Asiri, Sharefa M.

    2016-10-20

    In this paper, a modulating functions-based method is proposed for estimating space–time-dependent unknowns in one-dimensional partial differential equations. The proposed method simplifies the problem into a system of algebraic equations that is linear in the unknown parameters. The well-posedness of the modulating functions-based solution is proved. The wave and fifth-order KdV equations are used as examples to show the effectiveness of the proposed method in both noise-free and noisy cases.
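
    The core idea, transforming an estimation problem into algebraic equations that need no measured derivatives, can be illustrated on a first-order ODE rather than a PDE. This is a hypothetical sketch, not the paper's wave/KdV setting: multiplying y'(t) + a*y(t) = 0 by a modulating function that vanishes at both endpoints and integrating by parts yields one linear equation in the unknown a:

```python
import numpy as np

# Toy problem (NOT the paper's wave/KdV equations): estimate a in
#   y'(t) + a * y(t) = 0
# from samples of y alone, with no numerical differentiation of the data.
T, a_true = 1.0, 2.0
t = np.linspace(0.0, T, 2001)
y = np.exp(-a_true * t)                 # noise-free samples for the sketch

phi = np.sin(np.pi * t / T) ** 2        # modulating function, phi(0) = phi(T) = 0
dphi = np.pi / T * np.sin(2 * np.pi * t / T)

def integral(f):                        # trapezoidal quadrature
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(t)))

# Integration by parts gives  ∫ phi*y' dt = -∫ phi'*y dt,  and the ODE gives
# ∫ phi*y' dt = -a ∫ phi*y dt,  so a solves a single linear equation:
a_est = integral(dphi * y) / integral(phi * y)
print(a_est)   # ≈ 2.0
```

    With several unknowns and several modulating functions, the same construction yields the linear algebraic system mentioned in the abstract.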

  7. A point-value enhanced finite volume method based on approximate delta functions

    Science.gov (United States)

    Xuan, Li-Jun; Majdalani, Joseph

    2018-02-01

    We revisit the concept of an approximate delta function (ADF), introduced by Huynh (2011) [1], in the form of a finite-order polynomial that holds identical integral properties to the Dirac delta function when used in conjunction with a finite-order polynomial integrand over a finite domain. We show that the use of generic ADF polynomials can be effective at recovering and generalizing several high-order methods, including Taylor-based and nodal-based Discontinuous Galerkin methods, as well as the Correction Procedure via Reconstruction. Based on the ADF concept, we then proceed to formulate a Point-value enhanced Finite Volume (PFV) method, which stores and updates the cell-averaged values inside each element as well as the unknown quantities and, if needed, their derivatives on nodal points. The sharing of nodal information with surrounding elements saves the number of degrees of freedom compared to other compact methods at the same order. To ensure conservation, cell-averaged values are updated using an identical approach to that adopted in the finite volume method. Here, the updating of nodal values and their derivatives is achieved through an ADF concept that leverages all of the elements within the domain of integration that share the same nodal point. The resulting scheme is shown to be very stable at successively increasing orders. Both accuracy and stability of the PFV method are verified using a Fourier analysis and through applications to the linear wave and nonlinear Burgers' equations in one-dimensional space.

  8. New Method for Mesh Moving Based on Radial Basis Function Interpolation

    NARCIS (Netherlands)

    De Boer, A.; Van der Schoot, M.S.; Bijl, H.

    2006-01-01

    A new point-by-point mesh movement algorithm is developed for the deformation of unstructured grids. The method is based on using radial basis functions (RBFs) to interpolate the displacements of the boundary nodes to the whole flow mesh. A small system of equations has to be solved, involving only the boundary nodes.
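
    A minimal sketch of the idea, with an assumed Gaussian basis and toy boundary displacements: solving a small system sized by the boundary nodes alone gives a displacement field that can then be evaluated at any interior mesh node.

```python
import numpy as np

def rbf(r):
    return np.exp(-(r / 0.5) ** 2)      # assumed Gaussian basis, support radius 0.5

xb = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])    # boundary nodes
db = np.array([[0., 0.], [0., 0.], [0.1, 0.], [0.1, 0.]])  # prescribed displacements

# Small interpolation system: one weight per boundary node only
A = rbf(np.linalg.norm(xb[:, None, :] - xb[None, :, :], axis=2))
w = np.linalg.solve(A, db)

def move(x):
    """Interpolated displacement at arbitrary mesh nodes x, shape (n, 2)."""
    return rbf(np.linalg.norm(x[:, None, :] - xb[None, :, :], axis=2)) @ w

print(move(np.array([[0.5, 0.5]])))     # interior node follows the boundary smoothly
```

    By construction the interpolant reproduces the prescribed boundary displacements exactly.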

  9. [Cardiac Synchronization Function Estimation Based on ASM Level Set Segmentation Method].

    Science.gov (United States)

    Zhang, Yaonan; Gao, Yuan; Tang, Liang; He, Ying; Zhang, Huie

    At present, there are no accurate and quantitative methods for the determination of cardiac mechanical synchronism, and quantitative determination of the synchronization function of the four cardiac cavities from medical images has great clinical value. This paper uses whole-heart ultrasound image sequences and segments the left and right atria and left and right ventricles in each frame. After segmentation, the number of pixels in each cavity in each frame is recorded, and the areas of the four cavities across the image sequence are thereby obtained. The area change curves of the four cavities are then extracted, yielding the synchronization information of the four cavities. Because of the low SNR of ultrasound images, the boundaries of the cardiac cavities are vague, so the extraction of cardiac contours remains a challenging problem. Therefore, ASM model information is added to the traditional level set method to constrain the curve evolution process. According to the experimental results, the improved method increases the accuracy of the segmentation. Furthermore, based on the ventricular segmentation, the right and left ventricular systolic functions are evaluated, mainly according to the area changes. The synchronization of the four cavities of the heart is estimated based on the area changes and the volume changes.

  10. Modulation transfer function (MTF) measurement method based on support vector machine (SVM)

    Science.gov (United States)

    Zhang, Zheng; Chen, Yueting; Feng, Huajun; Xu, Zhihai; Li, Qi

    2016-03-01

    An imaging system's spatial quality can be expressed by the system's modulation transfer function (MTF) as a function of spatial frequency in terms of linear response theory. Methods have been proposed to assess the MTF of an imaging system using point, slit or edge techniques. The edge method is widely used because of its low requirements on targets. However, the traditional edge methods are limited by the edge angle. In addition, image noise impairs the measurement accuracy, making the measurement results unstable. In this paper, a novel measurement method based on the support vector machine (SVM) is proposed. Image patches with different edge angles and MTF levels are generated as the training set. Parameters related to the MTF and image structure are extracted from the edge images. Trained with the image parameters and the corresponding MTF, the SVM classifier can assess the MTF of any edge image. The results show that the proposed method has excellent measurement accuracy and stability.
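
    For context, the classical edge technique that the SVM method improves on can be sketched numerically: differentiate the edge spread function (ESF) into a line spread function (LSF) and take its Fourier magnitude. The Gaussian-blurred edge below is synthetic:

```python
import numpy as np
from math import erf

# Synthetic Gaussian-blurred ideal step edge (sigma is the assumed blur width)
x = np.arange(-32, 32)
sigma = 1.5
esf = np.array([(1 + erf(xi / (sigma * np.sqrt(2)))) / 2 for xi in x])

lsf = np.gradient(esf)                  # ESF -> LSF by differentiation
lsf /= lsf.sum()                        # normalize so that MTF(0) = 1
mtf = np.abs(np.fft.rfft(lsf))          # LSF -> MTF by Fourier transform
freqs = np.fft.rfftfreq(lsf.size)       # spatial frequency, cycles per pixel
print(mtf[:4])                          # monotonically falling for a Gaussian blur
```

    The differentiation step is exactly where real image noise destabilizes this estimate, which motivates the learned SVM approach.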

  11. A new CFD based non-invasive method for functional diagnosis of coronary stenosis.

    Science.gov (United States)

    Xie, Xinzhou; Zheng, Minwen; Wen, Didi; Li, Yabing; Xie, Songyun

    2018-03-22

    Accurate functional diagnosis of coronary stenosis is vital for decision making in coronary revascularization. With recent advances in computational fluid dynamics (CFD), fractional flow reserve (FFR) can be derived non-invasively from coronary computed tomography angiography images (FFRCT) for functional measurement of stenosis. However, the accuracy of FFRCT is limited due to the approximate modeling of maximal hyperemia conditions. To overcome this problem, a new CFD-based non-invasive method is proposed. Instead of modeling the maximal hyperemia condition, a series of boundary conditions are specified and the simulated results are combined to provide a pressure-flow curve for a stenosis. Functional diagnosis of the stenosis is then assessed using parameters derived from the obtained pressure-flow curve. The proposed method is applied to both idealized and patient-specific models, and validated against invasive FFR in six patients. Results show that additional hemodynamic information about the flow resistance of a stenosis is provided, which cannot be obtained directly from anatomical information. Parameters derived from the simulated pressure-flow curve show a linear and significant correlation with invasive FFR (r > 0.95, P < 0.05). The proposed method can assess flow resistance through pressure-flow-curve-derived parameters without modeling the maximal hyperemia condition, which is a promising new approach for non-invasive functional assessment of coronary stenosis.
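
    A hedged sketch of the post-processing step: assuming each CFD run yields one (flow, pressure-drop) pair, the runs can be combined into a pressure-flow curve by fitting the common two-coefficient stenosis loss model dP = a*Q + b*Q^2, whose coefficients then characterize the lesion's flow resistance. All numbers below are invented:

```python
import numpy as np

# Invented data: pressure drop dP (mmHg) across a stenosis from several CFD
# runs at different outlet flow rates Q (mL/s), one run per boundary condition
Q = np.array([0.5, 1.0, 1.5, 2.0, 2.5])
dP = np.array([2.5, 6.1, 10.4, 16.0, 22.6])

# Least-squares fit of the stenosis loss model  dP = a*Q + b*Q^2
M = np.column_stack([Q, Q ** 2])
a, b = np.linalg.lstsq(M, dP, rcond=None)[0]
print(a, b)                     # a: viscous losses, b: separation/inertial losses

def resistance(q):
    """Flow-dependent stenosis resistance dP/Q derived from the curve."""
    return a + b * q
```

    The quadratic term grows with lesion severity, so the fitted coefficients summarize resistance without requiring a hyperemia model.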

  12. Wave resistance calculation method combining Green functions based on Rankine and Kelvin source

    Directory of Open Access Journals (Sweden)

    LI Jingyu

    2017-12-01

    [Objectives] At present, the Boundary Element Method (BEM) for wave-making resistance mostly uses a model in which the velocity distribution near the hull is solved first, and the pressure integral is then calculated using the Bernoulli equation. However, this model of wave-making resistance is complex and has low accuracy. [Methods] To address this problem, the present paper derives a compound method for the quick calculation of ship wave resistance: the Rankine source Green function is used to solve for the hull surface's source density, and the Lagally theorem for the force on source points, based on the Kelvin source Green function, is then applied to obtain the wave resistance. A case study of the Wigley model is given. [Results] The results show that, in contrast to the thin-ship method of linear wave resistance theory, this method has higher precision, and in contrast to a method that uses the Kelvin source Green function throughout, this method has better computational efficiency. [Conclusions] In general, the algorithm in this paper provides a compromise between precision and efficiency in wave-making resistance calculation.

  13. Study on Feasibility of Applying Function Approximation Moment Method to Achieve Reliability-Based Design Optimization

    International Nuclear Information System (INIS)

    Huh, Jae Sung; Kwak, Byung Man

    2011-01-01

    Robust optimization and reliability-based design optimization are methodologies employed to take into account the uncertainties of a system at the design stage. For applying such methodologies to industrial problems, accurate and efficient methods for estimating statistical moments and failure probability are required, and further, the results of the sensitivity analysis, which is needed for determining the search direction during the optimization process, should also be accurate. The aim of this study is to incorporate the function approximation moment method into the sensitivity analysis formulation, which is expressed in integral form, to verify the accuracy of the sensitivity results, and to solve a typical reliability-based design optimization problem. These results are compared with those of other moment methods, and the feasibility of the function approximation moment method is verified. The sensitivity analysis formula in integral form is an efficient formulation for evaluating sensitivity because no additional function evaluations are needed once the failure probability or statistical moments have been calculated.

  14. New deconvolution method for microscopic images based on the continuous Gaussian radial basis function interpolation model.

    Science.gov (United States)

    Chen, Zhaoxue; Chen, Hao

    2014-01-01

    A deconvolution method based on Gaussian radial basis function (GRBF) interpolation is proposed. Both the original image and the Gaussian point spread function are expressed in the same continuous GRBF model; thus image degradation is simplified to the convolution of two continuous Gaussian functions, and image deconvolution is converted into calculating the weighted coefficients of two-dimensional control points. Compared with the Wiener filter and the Lucy-Richardson algorithm, the GRBF method has an obvious advantage in the quality of restored images. To overcome the drawback of long computation times, graphics processing unit multithreading or an increased spacing of the control points is adopted to speed up the implementation of the GRBF method. The experiments show that, based on the continuous GRBF model, image deconvolution can be efficiently implemented by the method, which also has considerable reference value for the study of three-dimensional microscopic image deconvolution.
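
    The key property behind the method is that the convolution of two Gaussians is again a Gaussian whose variance is the sum of the two variances. The 1-D sketch below (illustrative widths and control points, not the paper's) fits control-point weights to blurred data with the wide Gaussians and re-evaluates the same weights with the narrow ones:

```python
import numpy as np

def gauss(x, c, s):
    return np.exp(-(x - c) ** 2 / (2 * s ** 2))

# Illustrative 1-D setting: sharp image basis width s_img, Gaussian PSF width
# s_psf; their convolution is again Gaussian with the summed variances.
x = np.linspace(0.0, 10.0, 201)
ctrl = np.linspace(0.0, 10.0, 41)           # control points
s_img, s_psf = 0.3, 0.4
s_blur = np.hypot(s_img, s_psf)             # sqrt(s_img**2 + s_psf**2) = 0.5

w_true = np.random.default_rng(1).random(ctrl.size)
Phi_blur = gauss(x[:, None], ctrl[None, :], s_blur)
blurred = Phi_blur @ w_true                 # observed degraded signal

# Deconvolution reduces to recovering the control-point weights (linear LS)
w = np.linalg.lstsq(Phi_blur, blurred, rcond=None)[0]
restored = gauss(x[:, None], ctrl[None, :], s_img) @ w   # sharp reconstruction
```

    Widening the control-point spacing shrinks the linear system, which is one of the speed-ups the abstract mentions.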

  15. Modulating functions-based method for parameters and source estimation in one-dimensional partial differential equations

    KAUST Repository

    Asiri, Sharefa M.; Laleg-Kirati, Taous-Meriem

    2016-01-01

    In this paper, a modulating functions-based method is proposed for estimating space–time-dependent unknowns in one-dimensional partial differential equations. The proposed method simplifies the problem into a system of algebraic equations that is linear in the unknown parameters.

  16. A postprocessing method in the HMC framework for predicting gene function based on biological instrumental data

    Science.gov (United States)

    Feng, Shou; Fu, Ping; Zheng, Wenbin

    2018-03-01

    Predicting gene function based on biological instrumental data is a complicated and challenging hierarchical multi-label classification (HMC) problem. When using local approach methods to solve this problem, a method for processing the preliminary results is usually needed. This paper proposes a novel preliminary-results processing method called the nodes interaction method. The nodes interaction method revises the preliminary results and guarantees that the predictions are consistent with the hierarchy constraint. The method exploits label dependency and considers the hierarchical interaction between nodes when making decisions based on a Bayesian network in its first phase. In the second phase, the method further adjusts the results according to the hierarchy constraint. Implementing the nodes interaction method in the HMC framework also enhances HMC performance for solving the gene function prediction problem based on the Gene Ontology (GO), whose hierarchy is a directed acyclic graph and therefore more difficult to tackle. The experimental results validate the promising performance of the proposed method compared to state-of-the-art methods on eight benchmark yeast data sets annotated with the GO.
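
    The hierarchy-constraint adjustment (the second phase described above) can be sketched on a toy DAG: a child term's probability is capped by its parents' probabilities, processed in topological order, so no prediction violates its ancestor annotations. The graph and scores below are invented:

```python
# Toy GO-like DAG: each key's value lists its parent terms
parents = {"B": ["A"], "C": ["A"], "D": ["B", "C"]}
p = {"A": 0.6, "B": 0.7, "C": 0.4, "D": 0.9}   # preliminary classifier scores

# Enforce the hierarchy constraint: a term's score may not exceed any parent's
for node in ["B", "C", "D"]:                   # topological order of the DAG
    p[node] = min(p[node], min(p[q] for q in parents[node]))

print(p)   # {'A': 0.6, 'B': 0.6, 'C': 0.4, 'D': 0.4}
```

    After the pass, thresholding the scores at any level yields a set of terms closed under the ancestor relation.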

  17. A SVM-based quantitative fMRI method for resting-state functional network detection.

    Science.gov (United States)

    Song, Xiaomu; Chen, Nan-kuei

    2014-09-01

    Resting-state functional magnetic resonance imaging (fMRI) aims to measure baseline neuronal connectivity independent of specific functional tasks and to capture changes in that connectivity due to neurological diseases. Most existing network detection methods rely on a fixed threshold to identify functionally connected voxels under the resting state. Due to fMRI non-stationarity, the threshold cannot adapt to the variation of data characteristics across sessions and subjects, and it generates unreliable mapping results. In this study, a new method is presented for resting-state fMRI data analysis. Specifically, resting-state network mapping is formulated as an outlier detection process that is implemented using a one-class support vector machine (SVM). The results are refined using a spatial-feature domain prototype selection method and two-class SVM reclassification. The final decision on each voxel is made by comparing its probabilities of being functionally connected and unconnected, rather than by a threshold. Multiple features for resting-state analysis were extracted and examined using an SVM-based feature selection method, and the most representative features were identified. The proposed method was evaluated using synthetic and experimental fMRI data. A comparison study was also performed with independent component analysis (ICA) and correlation analysis. The experimental results show that the proposed method can provide comparable or better network detection performance than ICA and correlation analysis. The method is potentially applicable to various resting-state quantitative fMRI studies. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. A time-frequency analysis method to obtain stable estimates of magnetotelluric response function based on Hilbert-Huang transform

    Science.gov (United States)

    Cai, Jianhua

    2017-05-01

    The time-frequency analysis method represents a signal as a function of time and frequency, and it is considered a powerful tool for handling arbitrary non-stationary time series by using instantaneous frequency and instantaneous amplitude. It also provides a possible alternative for the analysis of the non-stationary magnetotelluric (MT) signal. Based on the Hilbert-Huang transform (HHT), a time-frequency analysis method is proposed to obtain stable estimates of the magnetotelluric response function. In contrast to conventional methods, the response function estimation is performed in the time-frequency domain using instantaneous spectra rather than in the frequency domain, which allows the response parameter content to be imaged as a function of time and frequency. The theory of the method is presented, and the mathematical model and calculation procedure used to estimate the response function from the HHT time-frequency spectrum are discussed. To evaluate the results, response function estimates are compared with estimates from a standard MT data processing method based on the Fourier transform. All results show that apparent resistivities and phases calculated with the HHT time-frequency method are generally more stable and reliable than those determined from simple Fourier analysis. The proposed method overcomes the drawbacks of the traditional Fourier methods, and the resulting estimates minimise the estimation bias caused by the non-stationary characteristics of MT data.

  19. Hybrid ICA-Seed-Based Methods for fMRI Functional Connectivity Assessment: A Feasibility Study

    Directory of Open Access Journals (Sweden)

    Robert E. Kelly

    2010-01-01

    Brain functional connectivity (FC) is often assessed from fMRI data using seed-based methods, such as detecting temporal correlation between a predefined region (seed) and all other regions in the brain, or using multivariate methods, such as independent component analysis (ICA). ICA is a useful data-driven tool, but reproducibility issues complicate group inferences based on FC maps derived with ICA. These reproducibility issues can be circumvented with hybrid methods that use information from ICA-derived spatial maps as seeds to produce seed-based FC maps. We report results from five experiments to demonstrate the potential advantages of hybrid ICA-seed-based FC methods, comparing results from regressing fMRI data against task-related a priori time courses, with “back-reconstruction” from a group ICA, and with five hybrid ICA-seed-based FC methods: ROI-based with (1) single-voxel, (2) few-voxel, and (3) many-voxel seeds; and dual-regression-based with (4) single ICA map and (5) multiple ICA map seeds.
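
    A minimal numpy sketch of the hybrid step: take an assumed ICA-derived ROI, average its voxels into a seed time course, and correlate that seed with every voxel to obtain a seed-based FC map. The data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)
T, V = 120, 500                            # time points, voxels (synthetic)
data = rng.standard_normal((T, V))
roi = [10, 11, 12]                         # assumed ICA-derived ROI voxels
data[:, roi] += rng.standard_normal((T, 1))   # shared signal inside the ROI

seed = data[:, roi].mean(axis=1)           # hybrid step: ICA map -> seed time course
z = (data - data.mean(axis=0)) / data.std(axis=0)
s = (seed - seed.mean()) / seed.std()
fc = z.T @ s / T                           # Pearson correlation, one r per voxel
print(fc[roi])                             # ROI voxels correlate strongly with the seed
```

    In the dual-regression variants, the ICA spatial map itself (rather than a hard ROI) weights the voxels when forming the seed time course.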

  20. Dominant partition method. [based on a wave function formalism]

    Science.gov (United States)

    Dixon, R. M.; Redish, E. F.

    1979-01-01

    By use of the L'Huillier, Redish, and Tandy (LRT) wave function formalism, a partially connected method, the dominant partition method (DPM), is developed for obtaining few-body reductions of the many-body problem in the LRT and Bencze, Redish, and Sloan (BRS) formalisms. The DPM maps the many-body problem to a fewer-body one by using the criterion that the truncated formalism must remain consistent with the full Schroedinger equation. The DPM is based on a class of new forms for the irreducible cluster potential, which is introduced in the LRT formalism. Connectivity is maintained with respect to all partitions containing a given partition, which is referred to as the dominant partition. Degrees of freedom corresponding to the breakup of one or more of the clusters of the dominant partition are treated in a disconnected manner. This approach for simplifying the complicated BRS equations is appropriate for physical problems where a few-body reaction mechanism prevails.

  1. O-hydroxy-functionalized diamines, polyimides, methods of making each, and methods of use

    KAUST Repository

    Ma, Xiaohua; Ghanem, Bader S.; Pinnau, Ingo

    2016-01-01

    Embodiments of the present disclosure provide for an ortho (o)-hydroxy-functionalized diamine, a method of making an o-hydroxy-functionalized diamine, an o-hydroxy-functionalized diamine-based polyimide, a method of making an o-hydroxy-functionalized diamine imide, methods of gas separation, and the like.

  2. O-hydroxy-functionalized diamines, polyimides, methods of making each, and methods of use

    KAUST Repository

    Ma, Xiaohua

    2016-01-21

    Embodiments of the present disclosure provide for an ortho (o)-hydroxy-functionalized diamine, a method of making an o-hydroxy-functionalized diamine, an o-hydroxy-functionalized diamine-based polyimide, a method of making an o-hydroxy-functionalized diamine imide, methods of gas separation, and the like.

  3. Estimation of functional failure probability of passive systems based on subset simulation method

    International Nuclear Information System (INIS)

    Wang Dongqing; Wang Baosheng; Zhang Jianmin; Jiang Jing

    2012-01-01

    In order to solve the problem of multi-dimensional epistemic uncertainties and the small functional failure probability of passive systems, an innovative reliability analysis algorithm called subset simulation, based on Markov chain Monte Carlo, was presented. The method is founded on the idea that a small failure probability can be expressed as a product of larger conditional failure probabilities by introducing a proper choice of intermediate failure events. Markov chain Monte Carlo simulation was implemented to efficiently generate conditional samples for estimating the conditional failure probabilities. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of the passive system and the numerical values of its input parameters were considered in this paper, and the probability of functional failure was then estimated with the subset simulation method. The numerical results demonstrate that the subset simulation method has high computational efficiency and excellent accuracy compared with traditional probability analysis methods. (authors)
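
    Subset simulation as described can be sketched on a toy problem: estimating the standard normal tail probability P(X > 3.5) (exact value ≈ 2.3e-4) as a product of larger conditional probabilities, with a simple Metropolis step generating the conditional samples. The parameters (N, p0, proposal width) are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N, p0, c = 2000, 0.1, 3.5        # samples per level, level probability, threshold
g = lambda x: x                  # toy limit-state function, X ~ N(0, 1)

x = rng.standard_normal(N)
prob = 1.0
for _ in range(20):                              # intermediate failure levels
    b = np.sort(g(x))[int((1 - p0) * N)]         # adaptive level threshold
    if b >= c:                                   # final level reached
        prob *= np.mean(g(x) > c)
        break
    prob *= p0
    seeds = x[g(x) > b]                          # conditional samples -> chain seeds
    chains = np.tile(seeds, int(np.ceil(N / seeds.size)))[:N]
    for _ in range(10):                          # Metropolis moves, target N(0,1)|g>b
        cand = chains + 0.5 * rng.standard_normal(N)
        accept = rng.random(N) < np.exp((chains ** 2 - cand ** 2) / 2)
        take = accept & (g(cand) > b)
        chains = np.where(take, cand, chains)
    x = chains
print(prob)   # same order of magnitude as the exact 2.3e-4
```

    Each level only needs to resolve a probability of about p0, which is why the product form is far cheaper than direct Monte Carlo for rare failures.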

  4. Method of Fusion Diagnosis for Dam Service Status Based on Joint Distribution Function of Multiple Points

    Directory of Open Access Journals (Sweden)

    Zhenxiang Jiang

    2016-01-01

    The traditional methods of diagnosing dam service status are generally suited to a single measuring point. These methods reflect only the local status of dams and do not merge multisource data effectively, so they are not suitable for diagnosing overall service status. This study proposes a new method involving multiple points to diagnose dam service status based on a joint distribution function. The function, incorporating monitoring data from multiple points, can be established with the t-copula function. The possibility, an important fusion value over different measuring-point combinations, can then be calculated, and the corresponding diagnostic criterion is established with small-probability theory. An engineering case study indicates that the fusion diagnosis method can be conducted in real time and that abnormal points can be detected, thereby providing a new early-warning method for engineering safety.

  5. An Interval-Valued Intuitionistic Fuzzy TOPSIS Method Based on an Improved Score Function

    Directory of Open Access Journals (Sweden)

    Zhi-yong Bai

    2013-01-01

    This paper proposes an improved score function for the effective ranking of interval-valued intuitionistic fuzzy sets (IVIFSs) and an interval-valued intuitionistic fuzzy TOPSIS method based on this score function to solve multicriteria decision-making problems in which all the preference information provided by the decision-makers is expressed as interval-valued intuitionistic fuzzy decision matrices, where each element is characterized by an IVIFS value, and the information about criterion weights is known. We apply the proposed score function to calculate the separation measures of each alternative from the positive and negative ideal solutions to determine the relative closeness coefficients. According to the values of the closeness coefficients, the alternatives can be ranked and the most desirable one(s) selected in the decision-making process. Finally, two illustrative examples of multicriteria fuzzy decision-making problems are used to demonstrate the applications and effectiveness of the proposed decision-making method.
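
    The TOPSIS backbone of the method, computing separation measures from the positive and negative ideal solutions and ranking by relative closeness, can be sketched with crisp numbers in place of IVIFS values (a simplification; the toy matrix and weights are invented):

```python
import numpy as np

X = np.array([[7., 9., 9.],      # alternatives (rows) x criteria (columns), toy
              [8., 7., 8.],
              [9., 6., 8.]])
w = np.array([0.3, 0.4, 0.3])    # known criterion weights

V = w * X / np.linalg.norm(X, axis=0)      # weighted normalized decision matrix
pis = V.max(axis=0)                        # positive ideal solution (benefit criteria)
nis = V.min(axis=0)                        # negative ideal solution
d_pos = np.linalg.norm(V - pis, axis=1)    # separation from the positive ideal
d_neg = np.linalg.norm(V - nis, axis=1)    # separation from the negative ideal
closeness = d_neg / (d_pos + d_neg)        # relative closeness coefficient
print(np.argsort(-closeness))              # ranking of alternatives, best first
```

    In the paper's setting, the crisp entries are replaced by IVIFS values and the distances are computed via the proposed score function.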

  6. Formal Analysis of SET and NSL Protocols Using the Interpretation Functions-Based Method

    Directory of Open Access Journals (Sweden)

    Hanane Houmani

    2012-01-01

    Most applications on the Internet, such as e-banking and e-commerce, use the SET and NSL protocols to protect the communication channel between the client and the server. It is therefore crucial to ensure that these protocols respect security properties such as confidentiality, authentication, and integrity. In this paper, we analyze the SET and NSL protocols with respect to the confidentiality (secrecy) property. To perform this analysis, we use the interpretation functions-based method. The main idea behind the interpretation functions-based technique is to give sufficient conditions that guarantee that a cryptographic protocol respects the secrecy property. The flexibility of the proposed conditions allows the verification of daily-life protocols such as SET and NSL. The method can also be used under different assumptions, such as a variety of intruder abilities, including algebraic properties of cryptographic primitives. The NSL protocol, for instance, is analyzed with and without the homomorphism property. We also show, using the SET protocol, the usefulness of this approach for correcting weaknesses and problems discovered during the analysis.

  7. Characteristics and functions for place brands based on a Delphi method

    Directory of Open Access Journals (Sweden)

    J de San Eugenio Vela

    2013-10-01

    Full Text Available Introduction. Representation of territories through brands is a recurring issue in today's society. The aim of this article is to establish certain characteristics and functions pertaining to brands linked to geographical areas. Methodology. The decision was made to conduct qualitative research based on a Delphi method comprising a panel of fourteen place branding experts. Results. In relation to commercial brands, it is found that, since they are publicly owned, place brands call for more complex management, preferably on three levels: public administration, private organisations and citizens. Conclusions. Based on the results obtained, it is concluded that the management of places centres on the projection of unique, spatial identities in the context of increasing competition between territories.

  8. A Statistical Method of Identifying Interactions in Neuron–Glia Systems Based on Functional Multicell Ca2+ Imaging

    Science.gov (United States)

    Nakae, Ken; Ikegaya, Yuji; Ishikawa, Tomoe; Oba, Shigeyuki; Urakubo, Hidetoshi; Koyama, Masanori; Ishii, Shin

    2014-01-01

    Crosstalk between neurons and glia may constitute a significant part of information processing in the brain. We present a novel method of statistically identifying interactions in a neuron–glia network. We attempted to identify neuron–glia interactions from neuronal and glial activities via maximum-a-posteriori (MAP)-based parameter estimation by developing a generalized linear model (GLM) of a neuron–glia network. The interactions of interest included functional connectivity and response functions. We evaluated the cross-validated likelihood of GLMs that resulted from the addition or removal of connections to confirm the existence of specific neuron-to-glia or glia-to-neuron connections. We accepted an addition or removal only when the modification improved the cross-validated likelihood. We applied the method to a high-throughput, multicellular in vitro Ca2+ imaging dataset obtained from the CA3 region of a rat hippocampus, and then evaluated the reliability of connectivity estimates using a statistical test based on a surrogate method. Our findings based on the estimated connectivity were in good agreement with currently available physiological knowledge, suggesting our method can elucidate undiscovered functions of neuron–glia systems. PMID:25393874
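The accept/reject step driven by cross-validated likelihood can be sketched with a toy Poisson GLM. This is a simplified stand-in, not the authors' model: a hypothetical glial signal is admitted as a covariate for a neuron's spike counts only if it improves the cross-validated log-likelihood over an unrelated covariate.

```python
import numpy as np
from sklearn.linear_model import PoissonRegressor
from sklearn.model_selection import KFold

def cv_loglik(X, y, folds=5):
    """Cross-validated Poisson log-likelihood of a GLM (up to constants)."""
    ll = 0.0
    for tr, te in KFold(folds, shuffle=True, random_state=0).split(X):
        mu = PoissonRegressor(alpha=1e-4).fit(X[tr], y[tr]).predict(X[te])
        ll += np.sum(y[te] * np.log(mu) - mu)       # Poisson log-likelihood
    return ll

# toy data: glial activity g drives neuronal spike counts y
rng = np.random.default_rng(4)
g = rng.uniform(0, 1, size=(500, 1))
y = rng.poisson(np.exp(0.5 + 2.0 * g[:, 0]))
noise = rng.uniform(0, 1, size=(500, 1))            # an unrelated "cell"

# a candidate glia-to-neuron connection is kept only if it improves
# the cross-validated likelihood, mirroring the selection rule above
keep_connection = cv_loglik(g, y) > cv_loglik(noise, y)
```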

  9. A MAP-based image interpolation method via Viterbi decoding of Markov chains of interpolation functions.

    Science.gov (United States)

    Vedadi, Farhang; Shirani, Shahram

    2014-01-01

    A new method of image resolution up-conversion (image interpolation) based on maximum a posteriori sequence estimation is proposed. Instead of making a hard decision about the value of each missing pixel, we estimate the missing pixels in groups. At each missing pixel of the high resolution (HR) image, we consider an ensemble of candidate interpolation methods (interpolation functions). The interpolation functions are interpreted as states of a Markov model. In other words, the proposed method undergoes state transitions from one missing pixel position to the next. Accordingly, the interpolation problem is translated to the problem of estimating the optimal sequence of interpolation functions corresponding to the sequence of missing HR pixel positions. We derive a parameter-free probabilistic model for this to-be-estimated sequence of interpolation functions. Then, we solve the estimation problem using a trellis representation and the Viterbi algorithm. Using directional interpolation functions and sequence estimation techniques, we classify the new algorithm as an adaptive directional interpolation using soft-decision estimation techniques. Experimental results show that the proposed algorithm yields images with higher or comparable peak signal-to-noise ratios compared with some benchmark interpolation methods in the literature while being efficient in terms of implementation and complexity considerations.
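The trellis search over interpolation functions is a standard Viterbi decode; a minimal sketch follows, with the emission and transition log-probabilities left as inputs (in the paper they come from the parameter-free probabilistic model, which is not reproduced here; states stand for candidate interpolation functions at successive missing-pixel positions).

```python
import numpy as np

def viterbi(log_emission, log_transition, log_prior):
    """Most likely state sequence (here: sequence of interpolation functions).

    log_emission   : (T x S) log-likelihood of each state at each pixel position
    log_transition : (S x S) log-probability of moving between states
    log_prior      : (S,) initial log-probabilities
    """
    T, S = log_emission.shape
    delta = np.empty((T, S))
    psi = np.zeros((T, S), dtype=int)
    delta[0] = log_prior + log_emission[0]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_transition  # (S x S) candidates
        psi[t] = scores.argmax(axis=0)                   # best predecessor
        delta[t] = scores.max(axis=0) + log_emission[t]
    path = np.empty(T, dtype=int)
    path[-1] = delta[-1].argmax()
    for t in range(T - 2, -1, -1):                       # backtrack
        path[t] = psi[t + 1, path[t + 1]]
    return path

# toy trellis: 3 candidate interpolation functions, 3 missing-pixel positions
S = 3
log_prior = np.log(np.full(S, 1 / 3))
log_transition = np.log(np.full((S, S), 1 / 3))
log_emission = np.array([[0.0, -5.0, -5.0],
                         [0.0, -5.0, -5.0],
                         [-5.0, -5.0, 0.0]])
path = viterbi(log_emission, log_transition, log_prior)
```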

  10. Predictive equation of state method for heavy materials based on the Dirac equation and density functional theory

    Science.gov (United States)

    Wills, John M.; Mattsson, Ann E.

    2012-02-01

    Density functional theory (DFT) provides a formally predictive base for equation of state properties. Available approximations to the exchange/correlation functional provide accurate predictions for many materials in the periodic table. For heavy materials however, DFT calculations, using available functionals, fail to provide quantitative predictions, and often fail to be even qualitative. This deficiency is due both to the lack of the appropriate confinement physics in the exchange/correlation functional and to approximations used to evaluate the underlying equations. In order to assess and develop accurate functionals, it is essential to eliminate all other sources of error. In this talk we describe an efficient first-principles electronic structure method based on the Dirac equation and compare the results obtained with this method with other methods generally used. Implications for the high-pressure equation of state of relativistic materials are demonstrated in application to Ce and the light actinides. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  11. A Matrix Method Based on the Fibonacci Polynomials to the Generalized Pantograph Equations with Functional Arguments

    Directory of Open Access Journals (Sweden)

    Ayşe Betül Koç

    2014-01-01

    Full Text Available A pseudospectral method based on the Fibonacci operational matrix is proposed to solve generalized pantograph equations with linear functional arguments. By using this method, approximate solutions of the problems are easily obtained in the form of truncated Fibonacci series. Some illustrative examples are given to verify the efficiency and effectiveness of the proposed method. Then, the numerical results are compared with other methods.
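The Fibonacci basis behind the operational matrix is built from the recurrence F1(x) = 1, F2(x) = x, Fk(x) = x·Fk−1(x) + Fk−2(x). A small sketch of generating the coefficient matrix of this basis follows; the operational matrix and collocation steps of the paper are not reproduced here.

```python
import numpy as np

def fibonacci_polys(n):
    """Coefficient rows (low -> high degree) of the first n Fibonacci
    polynomials: F1 = 1, F2 = x, F_k = x*F_{k-1} + F_{k-2}."""
    C = np.zeros((n, n))
    C[0, 0] = 1.0
    if n > 1:
        C[1, 1] = 1.0
    for k in range(2, n):
        C[k, 1:] = C[k - 1, :-1]                     # multiply F_{k-1} by x
        C[k] += C[k - 2]                             # add F_{k-2}
    return C

C = fibonacci_polys(6)
vals_at_1 = C.sum(axis=1)    # F_k(1) recovers the Fibonacci numbers
```

A truncated Fibonacci series is then a linear combination of these rows, with the coefficients determined by collocation.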

  12. Parameter Selection Method for Support Vector Regression Based on Adaptive Fusion of the Mixed Kernel Function

    Directory of Open Access Journals (Sweden)

    Hailun Wang

    2017-01-01

    Full Text Available The support vector regression algorithm is widely used in fault diagnosis of rolling bearings. A new model parameter selection method for support vector regression, based on adaptive fusion of the mixed kernel function, is proposed in this paper. We choose the mixed kernel function as the kernel function of support vector regression. The fusion coefficients of the mixed kernel function, the kernel function parameters, and the regression parameters are combined into a state vector, so that the model selection problem is transformed into a nonlinear system state estimation problem. We use a 5th-degree cubature Kalman filter to estimate the parameters. In this way, we realize the adaptive selection of the mixed kernel function weighting coefficients, the kernel parameters, and the regression parameters. Compared with a single kernel function, unscented Kalman filter (UKF) support vector regression algorithms, and genetic algorithms, the decision regression function obtained by the proposed method has better generalization ability and higher prediction accuracy.
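The mixed kernel itself is simply a weighted sum of two standard kernels; a minimal sketch with scikit-learn follows. Here the fusion coefficient and kernel parameters are fixed by hand for illustration, whereas in the paper they are estimated jointly by the 5th-degree cubature Kalman filter.

```python
import numpy as np
from sklearn.svm import SVR

def mixed_kernel(gamma=2.0, degree=2, coef0=1.0, w=0.7):
    """Weighted fusion of an RBF kernel and a polynomial kernel.

    w is the fusion coefficient; fixed here, but estimated adaptively
    (together with gamma and the regression parameters) in the paper.
    """
    def k(X, Y):
        X, Y = np.asarray(X), np.asarray(Y)
        sq = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        rbf = np.exp(-gamma * sq)                    # local kernel
        poly = (X @ Y.T + coef0) ** degree           # global kernel
        return w * rbf + (1.0 - w) * poly
    return k

# toy regression problem standing in for a bearing-fault feature signal
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(80, 1))
y = np.sin(3 * X).ravel()
model = SVR(kernel=mixed_kernel(), C=10.0, epsilon=0.01).fit(X, y)
err = np.mean((model.predict(X) - y) ** 2)           # training MSE
```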

  13. A numerical method to solve the 1D and the 2D reaction diffusion equation based on Bessel functions and Jacobian free Newton-Krylov subspace methods

    Science.gov (United States)

    Parand, K.; Nikarya, M.

    2017-11-01

    In this paper a novel method will be introduced to solve a nonlinear partial differential equation (PDE). In the proposed method, we use the spectral collocation method based on Bessel functions of the first kind and the Jacobian-free Newton-generalized minimum residual (JFNGMRes) method with adaptive preconditioner. In this work a nonlinear PDE has been converted to a nonlinear system of algebraic equations using the collocation method based on Bessel functions, without any linearization, discretization, or help from any other method. Finally, by using JFNGMRes, the solution of the nonlinear algebraic system is achieved. To illustrate the reliability and efficiency of the proposed method, we solve some examples of the famous Fisher equation and compare our results with other methods.
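A Jacobian-free Newton-Krylov solve of this kind can be sketched with SciPy's `newton_krylov`. In this sketch the Bessel-function collocation is replaced by a plain finite-difference discretization of the steady Fisher equation, so it illustrates only the JFNK step, not the authors' spectral method; the grid size and reaction coefficient are arbitrary.

```python
import numpy as np
from scipy.optimize import newton_krylov

# Steady 1-D Fisher equation  u'' + r*u*(1 - u) = 0  on (0, 1)
# with u(0) = 1, u(1) = 0, discretized by central differences.
n, r = 99, 6.0
h = 1.0 / (n + 1)

def residual(u):
    full = np.concatenate(([1.0], u, [0.0]))         # apply boundary values
    lap = (full[:-2] - 2 * full[1:-1] + full[2:]) / h**2
    return lap + r * u * (1.0 - u)

u0 = np.linspace(1.0, 0.0, n + 2)[1:-1]              # linear initial guess
# Jacobian-free Newton with a GMRES-type inner solver, as in JFNGMRes
u = newton_krylov(residual, u0, method="lgmres", f_tol=1e-9)
```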

  14. Introducing trimming and function ranking to Solid Works based on function analysis

    NARCIS (Netherlands)

    Chechurin, Leonid S.; Wits, Wessel Willems; Bakker, Hans M.; Cascini, G.; Vaneker, Thomas H.J.

    2011-01-01

    TRIZ based Function Analysis models existing products based on functional interactions between product parts. Such a function model description is the ideal starting point for product innovation. Design engineers can apply (TRIZ) methods such as trimming and function ranking to this function model

  15. Introducing Trimming and Function Ranking to SolidWorks based on Function Analysis

    NARCIS (Netherlands)

    Chechurin, L.S.; Wits, Wessel Willems; Bakker, Hans M.; Vaneker, Thomas H.J.

    2015-01-01

    TRIZ based Function Analysis models existing products based on functional interactions between product parts. Such a function model description is the ideal starting point for product innovation. Design engineers can apply (TRIZ) methods such as trimming and function ranking to this function model

  16. Finding function: evaluation methods for functional genomic data

    Directory of Open Access Journals (Sweden)

    Barrett Daniel R

    2006-07-01

    Full Text Available Abstract Background Accurate evaluation of the quality of genomic or proteomic data and computational methods is vital to our ability to use them for formulating novel biological hypotheses and directing further experiments. There is currently no standard approach to evaluation in functional genomics. Our analysis of existing approaches shows that they are inconsistent and contain substantial functional biases that render the resulting evaluations misleading both quantitatively and qualitatively. These problems make it essentially impossible to compare computational methods or large-scale experimental datasets and also result in conclusions that generalize poorly in most biological applications. Results We reveal issues with current evaluation methods here and suggest new approaches to evaluation that facilitate accurate and representative characterization of genomic methods and data. Specifically, we describe a functional genomics gold standard based on curation by expert biologists and demonstrate its use as an effective means of evaluation of genomic approaches. Our evaluation framework and gold standard are freely available to the community through our website. Conclusion Proper methods for evaluating genomic data and computational approaches will determine how much we, as a community, are able to learn from the wealth of available data. We propose one possible solution to this problem here but emphasize that this topic warrants broader community discussion.

  17. Structural properties of metal-organic frameworks within the density-functional based tight-binding method

    Energy Technology Data Exchange (ETDEWEB)

    Lukose, Binit; Supronowicz, Barbara; Kuc, Agnieszka B.; Heine, Thomas [School of Engineering and Science, Jacobs University Bremen (Germany); Petkov, Petko S.; Vayssilov, Georgi N. [Faculty of Chemistry, University of Sofia (Bulgaria); Frenzel, Johannes [Lehrstuhl fuer Theoretische Chemie, Ruhr-Universitaet Bochum (Germany); Seifert, Gotthard [Physikalische Chemie, Technische Universitaet Dresden (Germany)

    2012-02-15

    Density-functional based tight-binding (DFTB) is a powerful method to describe large molecules and materials. Metal-organic frameworks (MOFs), materials with interesting catalytic properties and with very large surface areas, have been developed and have become commercially available. Unit cells of MOFs typically include hundreds of atoms, which make the application of standard density-functional methods computationally very expensive, sometimes even unfeasible. The aim of this paper is to prepare and to validate the self-consistent charge-DFTB (SCC-DFTB) method for MOFs containing Cu, Zn, and Al metal centers. The method has been validated against full hybrid density-functional calculations for model clusters, against gradient corrected density-functional calculations for supercells, and against experiment. Moreover, the modular concept of MOF chemistry has been discussed on the basis of their electronic properties. We concentrate on MOFs comprising three common connector units: copper paddlewheels (HKUST-1), zinc oxide Zn₄O tetrahedron (MOF-5, MOF-177, DUT-6 (MOF-205)), and aluminum oxide AlO₄(OH)₂ octahedron (MIL-53). We show that SCC-DFTB predicts structural parameters with a very good accuracy (with less than 5% deviation, even for adsorbed CO and H₂O on HKUST-1), while adsorption energies differ by 12 kJ mol⁻¹ or less for CO and water compared to DFT benchmark calculations. (Copyright 2012 WILEY-VCH Verlag GmbH and Co. KGaA, Weinheim)

  18. In-depth performance evaluation of PFP and ESG sequence-based function prediction methods in CAFA 2011 experiment

    Directory of Open Access Journals (Sweden)

    Chitale Meghana

    2013-02-01

    Full Text Available Abstract Background Many Automatic Function Prediction (AFP) methods were developed to cope with an increasing growth of the number of gene sequences that are available from high throughput sequencing experiments. To support the development of AFP methods, it is essential to have community-wide experiments for evaluating the performance of existing AFP methods. Critical Assessment of Function Annotation (CAFA) is one such community experiment. The meeting of CAFA was held as a Special Interest Group (SIG) meeting at the Intelligent Systems in Molecular Biology (ISMB) conference in 2011. Here, we perform a detailed analysis of two sequence-based function prediction methods, PFP and ESG, which were developed in our lab, using the predictions submitted to CAFA. Results We evaluate PFP and ESG using four different measures in comparison with BLAST, Prior, and GOtcha. In addition to the predictions submitted to CAFA, we further investigate the performance of a different scoring function to rank order predictions by PFP, as well as PFP/ESG predictions enriched with Priors that simply add frequently occurring Gene Ontology terms as a part of predictions. Prediction accuracies of each method were also evaluated separately for different functional categories. Successful and unsuccessful predictions by PFP and ESG are also discussed in comparison with BLAST. Conclusion The in-depth analysis discussed here will complement the overall assessment by the CAFA organizers. Since PFP and ESG are based on sequence database search results, our analyses are not only useful for PFP and ESG users but will also shed light on the relationship of the sequence similarity space and functions that can be inferred from the sequences.

  19. A noise level prediction method based on electro-mechanical frequency response function for capacitors.

    Science.gov (United States)

    Zhu, Lingyu; Ji, Shengchang; Shen, Qi; Liu, Yuan; Li, Jinyu; Liu, Hao

    2013-01-01

    The capacitors in high-voltage direct-current (HVDC) converter stations radiate a lot of audible noise, which can reach higher than 100 dB. The existing noise level prediction methods are not satisfactory. In this paper, a new noise level prediction method is proposed based on a frequency response function considering both the electrical and mechanical characteristics of capacitors. The electro-mechanical frequency response function (EMFRF) is defined as the frequency-domain quotient of the vibration response and the squared capacitor voltage, and it is obtained from an impulse current experiment. Under given excitations, the vibration response of the capacitor tank is the product of the EMFRF and the square of the given capacitor voltage in the frequency domain, and the radiated audible noise is calculated by structure-acoustic coupling formulas. The noise level under the same excitations is also measured in the laboratory, and the results are compared with the prediction. The comparison proves that the noise prediction method is effective.
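The EMFRF definition translates directly into a frequency-domain quotient and product; a minimal sketch follows. The toy signals below stand in for the impulse current experiment, and the structure-acoustic coupling step is not reproduced.

```python
import numpy as np

def emfrf(voltage, vibration):
    """Electro-mechanical frequency response function: the frequency-domain
    quotient of the vibration response and the squared capacitor voltage."""
    V2 = np.fft.rfft(np.asarray(voltage) ** 2)
    A = np.fft.rfft(np.asarray(vibration))
    return A / V2

def predict_vibration(H, voltage):
    """Vibration response = EMFRF x spectrum of squared voltage."""
    V2 = np.fft.rfft(np.asarray(voltage) ** 2)
    return np.fft.irfft(H * V2, n=len(voltage))

# toy "experiment": vibration is a linear response to the squared voltage
rng = np.random.default_rng(1)
t = np.arange(256)
h_true = np.exp(-t / 10.0) * np.cos(0.5 * t)         # toy impulse response
v = rng.standard_normal(256)                          # impulse-test voltage
vib = np.fft.irfft(np.fft.rfft(v**2) * np.fft.rfft(h_true), n=256)

H = emfrf(v, vib)                                    # identify the EMFRF
v_new = rng.standard_normal(256)                     # a new excitation
vib_new = np.fft.irfft(np.fft.rfft(v_new**2) * np.fft.rfft(h_true), n=256)
pred = predict_vibration(H, v_new)                   # predicted response
```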

  20. Improved non-dimensional dynamic influence function method based on two-domain method for vibration analysis of membranes

    Directory of Open Access Journals (Sweden)

    SW Kang

    2015-02-01

    Full Text Available This article introduces an improved non-dimensional dynamic influence function method using a sub-domain method for efficiently extracting the eigenvalues and mode shapes of concave membranes with arbitrary shapes. The non-dimensional dynamic influence function (NDIF) method, which was developed by the authors in 1999, gives highly accurate eigenvalues for membranes, plates, and acoustic cavities, compared with the finite element method. However, it needs the inefficient procedure of calculating the singularity of a system matrix in the frequency range of interest for extracting eigenvalues and mode shapes. To overcome this inefficient procedure, this article proposes a practical approach to convert the system matrix equation of the concave membrane of interest into the form of an algebraic eigenvalue problem. It is shown by several case studies that the proposed method has good convergence characteristics and yields very accurate eigenvalues, compared with an exact method and the finite element method (ANSYS).

  1. A meta-analysis based method for prioritizing candidate genes involved in a pre-specified function

    Directory of Open Access Journals (Sweden)

    Jingjing Zhai

    2016-12-01

    Full Text Available The identification of genes associated with a given biological function in plants remains a challenge, although network-based gene prioritization algorithms have been developed for Arabidopsis thaliana and many non-model plant species. Nevertheless, these network-based gene prioritization algorithms have encountered several problems; one in particular is that of unsatisfactory prediction accuracy due to limited network coverage, varying link quality, and/or uncertain network connectivity. Thus a model that integrates complementary biological data may be expected to increase the prediction accuracy of gene prioritization. Towards this goal, we developed a novel gene prioritization method named RafSee, to rank candidate genes using a random forest algorithm that integrates sequence, evolutionary, and epigenetic features of plants. Subsequently, we proposed an integrative approach named RAP (Rank Aggregation-based data fusion for gene Prioritization), in which an order statistics-based meta-analysis is used to aggregate the ranks of the network-based gene prioritization method and RafSee, to accurately prioritize candidate genes involved in a pre-specified biological function. Finally, we showcased the utility of RAP by prioritizing 380 flowering-time genes in Arabidopsis. The ‘leave-one-out’ cross-validation experiment showed that RafSee could work as a complement to a current state-of-the-art network-based gene prioritization system (AraNet v2). Moreover, RAP ranked 53.68% (204/380) of the flowering-time genes higher than AraNet v2, resulting in a 39.46% improvement in terms of the first-quartile rank. Further evaluations also showed that RAP was effective in prioritizing genes related to different abiotic stresses. To enhance the usability of RAP for Arabidopsis and non-model plant species, an R package implementing the method is freely available at http://bioinfo.nwafu.edu.cn/software.
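The fusion step combines two rankings of the same candidate genes into one consensus ranking. The sketch below uses a simplified Borda-style mean-rank aggregation as a stand-in for the order-statistics (Q-statistic) meta-analysis used by RAP, and the two input rankings are hypothetical.

```python
import numpy as np

def aggregate_ranks(rank_lists):
    """Aggregate several rankings of the same genes into a consensus ranking.

    A simplified Borda-style stand-in for RAP's order-statistics aggregation:
    each gene gets the mean of its normalized ranks across methods, and genes
    are re-ranked by that score (lower = better).
    """
    R = np.asarray(rank_lists, dtype=float)          # (methods x genes)
    n = R.shape[1]
    score = (R / n).mean(axis=0)                     # mean normalized rank
    return np.argsort(score)                         # gene indices, best first

# two hypothetical prioritization methods ranking 5 candidate genes
network_ranks = [1, 3, 2, 5, 4]   # e.g. an AraNet-style network method
rafsee_ranks  = [2, 1, 3, 4, 5]   # e.g. a RafSee-style feature method
order = aggregate_ranks([network_ranks, rafsee_ranks])
```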

  2. Comprehensive reliability allocation method for CNC lathes based on cubic transformed functions of failure mode and effects analysis

    Science.gov (United States)

    Yang, Zhou; Zhu, Yunpeng; Ren, Hongrui; Zhang, Yimin

    2015-03-01

    Reliability allocation of computerized numerical control (CNC) lathes is very important in industry. Traditional allocation methods focus only on high-failure-rate components rather than moderate-failure-rate components, which is not applicable in some conditions. Aiming at solving the problem of reliability allocation for CNC lathes, a comprehensive reliability allocation method based on cubic transformed functions of failure mode and effects analysis (FMEA) is presented. Firstly, conventional reliability allocation methods are introduced. Then the limitations of directly combining the comprehensive allocation method with the exponential transformed FMEA method are investigated. Subsequently, a cubic transformed function is established in order to overcome these limitations. Properties of the new transformed function are discussed by considering the failure severity and the failure occurrence. Designers can choose appropriate transform amplitudes according to their requirements. Finally, a CNC lathe and a spindle system are used as an example to verify the new allocation method. Seven criteria are considered to compare the results of the new method with traditional methods. The allocation results indicate that the new method is more flexible than traditional methods. By employing the new cubic transformed function, the method covers a wider range of problems in CNC reliability allocation without losing the advantages of traditional methods.
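The general shape of such an allocation can be sketched as follows. Note that the weighting here uses a hypothetical cubic transform of normalized FMEA severity and occurrence scores, chosen only to illustrate the allocation mechanics; the paper's actual cubic transformed functions and transform amplitudes are not reproduced.

```python
import numpy as np

def allocate_failure_rates(system_rate, severity, occurrence, a=0.5):
    """Allocate a system failure rate to components by FMEA-derived weights.

    Uses a *hypothetical* cubic transform w = (1 - a*s*o)**3 of the normalized
    severity x occurrence product (not the paper's functions): components with
    milder failure modes receive a larger share of the allowable failure rate.
    """
    s = np.asarray(severity, float) / 10.0           # normalize 1..10 scales
    o = np.asarray(occurrence, float) / 10.0
    w = (1.0 - a * s * o) ** 3                       # milder -> larger weight
    return system_rate * w / w.sum()                 # shares sum to system_rate

# three hypothetical components of a spindle system
rates = allocate_failure_rates(1e-4, severity=[9, 5, 2], occurrence=[7, 4, 3])
```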

  3. Green's function method with energy-independent vertex functions

    International Nuclear Information System (INIS)

    Tsay Tzeng, S.Y.; Kuo, T.T.; Tzeng, Y.; Geyer, H.B.; Navratil, P.

    1996-01-01

    In conventional Green's function methods the vertex function Γ is generally energy dependent. However, a model-space Green's function method where the vertex function is manifestly energy independent can be formulated using energy-independent effective interaction theories based on folded diagrams and/or similarity transformations. This is discussed in general and then illustrated for a 1p1h model-space Green's function applied to a solvable Lipkin many-fermion model. The poles of the conventional Green's function are obtained by solving a self-consistent Dyson equation, and model space calculations may lead to unphysical poles. For the energy-independent model-space Green's function only the physical poles of the model problem are reproduced and are in satisfactory agreement with the exact excitation energies. copyright 1996 The American Physical Society

  4. Theoretical comparison of performance using transfer functions for reactivity meters based on inverse kinetic method and simple feedback method

    International Nuclear Information System (INIS)

    Shimazu, Yoichiro; Tashiro, Shoichi; Tojo, Masayuki

    2017-01-01

    The performance of two digital reactivity meters, one based on the conventional inverse kinetic method and the other one based on simple feedback theory, are compared analytically using their respective transfer functions. The latter one is proposed by one of the authors. It has been shown that the performance of the two reactivity meters become almost identical when proper system parameters are selected for each reactivity meter. A new correlation between the system parameters of the two reactivity meters is found. With this correlation, filter designers can easily determine the system parameters for the respective reactivity meters to obtain identical performance. (author)

  5. Design of New Test Function Model Based on Multi-objective Optimization Method

    Directory of Open Access Journals (Sweden)

    Zhaoxia Shang

    2017-01-01

    Full Text Available The space partitioning method, as a new algorithm, has been applied more and more often to planning and decision-making of investment portfolios. But currently there are very few test functions for this algorithm, which has greatly restricted its further development and application. An innovative test function model is designed in this paper and is used to test the algorithm. It is proved that for evaluation of the space partitioning method in certain applications, this test function has a fairly obvious advantage.

  6. Rotation Matrix Method Based on Ambiguity Function for GNSS Attitude Determination.

    Science.gov (United States)

    Yang, Yingdong; Mao, Xuchu; Tian, Weifeng

    2016-06-08

    Global navigation satellite systems (GNSS) are well suited for attitude determination. In this study, we use the rotation matrix method to resolve the attitude angle. This method achieves better performance in reducing computational complexity and selecting satellites. The condition of the baseline length is combined with the ambiguity function method (AFM) to search for the integer ambiguity, and it is validated in reducing the span of candidates. The noise error is always the key factor for the success rate. It is closely related to the satellite geometry model. In contrast to the AFM, the LAMBDA (Least-squares AMBiguity Decorrelation Adjustment) method gets better results in solving the relationship of the geometric model and the noise error. Although the AFM is more flexible, it lacks analysis in this respect. In this study, the influence of the satellite geometry model on the success rate is analyzed in detail. The computation error and the noise error are effectively treated. Not only is the flexibility of the AFM inherited, but the success rate is also increased. An experiment is conducted on a selected campus, and the performance is proved to be effective. Our results are based on simulated and real-time GNSS data and are applied to single-frequency processing, which is known as one of the challenging cases of GNSS attitude determination.

  7. Rotation Matrix Method Based on Ambiguity Function for GNSS Attitude Determination

    Directory of Open Access Journals (Sweden)

    Yingdong Yang

    2016-06-01

    Full Text Available Global navigation satellite systems (GNSS) are well suited for attitude determination. In this study, we use the rotation matrix method to resolve the attitude angle. This method achieves better performance in reducing computational complexity and selecting satellites. The condition of the baseline length is combined with the ambiguity function method (AFM) to search for the integer ambiguity, and it is validated in reducing the span of candidates. The noise error is always the key factor for the success rate. It is closely related to the satellite geometry model. In contrast to the AFM, the LAMBDA (Least-squares AMBiguity Decorrelation Adjustment) method gets better results in solving the relationship of the geometric model and the noise error. Although the AFM is more flexible, it lacks analysis in this respect. In this study, the influence of the satellite geometry model on the success rate is analyzed in detail. The computation error and the noise error are effectively treated. Not only is the flexibility of the AFM inherited, but the success rate is also increased. An experiment is conducted on a selected campus, and the performance is proved to be effective. Our results are based on simulated and real-time GNSS data and are applied to single-frequency processing, which is known as one of the challenging cases of GNSS attitude determination.

  8. A Method against Interrupted-Sampling Repeater Jamming Based on Energy Function Detection and Band-Pass Filtering

    Directory of Open Access Journals (Sweden)

    Hui Yuan

    2017-01-01

    Full Text Available Interrupted-sampling repeater jamming (ISRJ) is a new kind of coherent jamming against the large time-bandwidth linear frequency modulation (LFM) signal. Many jamming modes, such as lifelike multiple false targets and dense false targets, can be produced by setting different parameters. According to the “storage-repeater-storage-repeater” characteristics of the ISRJ and the differences in the time-frequency-energy domain between the ISRJ signal and the target echo signal, a new method based on energy function detection and band-pass filtering is proposed to suppress the ISRJ. The method mainly consists of two parts: extracting the signal segments without ISRJ and constructing a band-pass filtering function with low sidelobes. The simulation results show that the method is effective against ISRJ with different parameters.
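Only the first part, extracting the jamming-free segments by an energy function, is sketched below; the band-pass filter construction is omitted. The window length and threshold factor are illustrative assumptions, as is the toy signal.

```python
import numpy as np

def jamming_free_mask(x, win=64, k=3.0):
    """Flag samples belonging to low-energy (jamming-free) segments.

    Short-time energy is compared against k times the median energy; the
    ISRJ repeater bursts are much stronger than the target echo, so samples
    under the threshold are kept as jamming-free.
    """
    e = np.convolve(np.abs(x) ** 2, np.ones(win) / win, mode="same")
    return e < k * np.median(e)

# toy signal: weak echo everywhere, strong repeater bursts on two intervals
rng = np.random.default_rng(2)
n = 1000
echo = 0.1 * np.cos(0.2 * np.arange(n))
jam = np.zeros(n)
jam[200:300] = 5.0 * rng.standard_normal(100)
jam[600:700] = 5.0 * rng.standard_normal(100)
mask = jamming_free_mask(echo + jam)                 # True = keep sample
```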

  9. A prediction method for the wax deposition rate based on a radial basis function neural network

    Directory of Open Access Journals (Sweden)

    Ying Xie

    2017-06-01

    Full Text Available The radial basis function neural network is a popular supervised learning tool based on machine learning technology. Its high precision having been proven, the radial basis function neural network has been applied in many areas. The accumulation of deposited materials in the pipeline may lead to the need for increased pumping power, a decreased flow rate, or even the total blockage of the line, with losses of production and capital investment, so research on predicting the wax deposition rate is significant for the safe and economical operation of an oil pipeline. This paper adopts the radial basis function neural network to predict the wax deposition rate from four main influencing factors, the pipe wall temperature gradient, pipe wall wax crystal solubility coefficient, pipe wall shear stress, and crude oil viscosity, selected by the gray correlational analysis method. MATLAB software is employed to establish the RBF neural network. Compared with the previous literature, favorable consistency exists between the predicted outcomes and the experimental results, with a relative error of 1.5%. It can be concluded that the prediction method of wax deposition rate based on the RBF neural network is feasible.
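A minimal Gaussian RBF network of the kind described above can be sketched in a few lines. The four-factor training data here are synthetic stand-ins (the paper's experimental data are not reproduced), and only the training-set fit is checked; a real study would validate on held-out data.

```python
import numpy as np

class RBFNet:
    """Minimal Gaussian radial basis function network (interpolation form):
    one basis function per training sample, output weights by least squares."""
    def __init__(self, sigma=0.3):
        self.sigma = sigma
    def _phi(self, X):
        d2 = ((X[:, None, :] - self.c[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * self.sigma**2))
    def fit(self, X, y):
        self.c = np.asarray(X, float)                # centers = training points
        self.w, *_ = np.linalg.lstsq(self._phi(self.c),
                                     np.asarray(y, float), rcond=None)
        return self
    def predict(self, X):
        return self._phi(np.asarray(X, float)) @ self.w

# synthetic stand-in for the four factors (temperature gradient, wax crystal
# solubility coefficient, shear stress, viscosity) and the deposition rate
rng = np.random.default_rng(3)
X = rng.uniform(0, 1, size=(60, 4))
y = X @ np.array([0.5, 0.3, 0.15, 0.05]) + 0.1 * np.sin(6 * X[:, 0])
model = RBFNet().fit(X, y)
err = np.mean((model.predict(X) - y) ** 2)           # training MSE
```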

  10. Penalty parameter of the penalty function method

    DEFF Research Database (Denmark)

    Si, Cheng Yong; Lan, Tian; Hu, Junjie

    2014-01-01

    The penalty parameter of the penalty function method is systematically analyzed and discussed. For the problem that Deb's feasibility-based rule does not give detailed instructions on how to rank two solutions when they have the same constraint violation, an improved Deb's feasibility-based rule is proposed.

  11. A Rapid Method to Score Stream Reaches Based on the Overall Performance of Their Main Ecological Functions

    Science.gov (United States)

    Rowe, David K.; Parkyn, Stephanie; Quinn, John; Collier, Kevin; Hatton, Chris; Joy, Michael K.; Maxted, John; Moore, Stephen

    2009-06-01

    A method was developed to score the ecological condition of first- to third-order stream reaches in the Auckland region of New Zealand based on the performance of their key ecological functions. Such a method is required by consultants and resource managers to quantify the reduction in ecological condition of a modified stream reach relative to its unmodified state. This is a fundamental precursor for the determination of fair environmental compensation for achieving no-net-loss in overall stream ecological value. Field testing and subsequent use of the method indicated that it provides a useful measure of ecological condition related to the performance of stream ecological functions. It is relatively simple to apply compared to a full ecological study, is quick to use, and allows identification of the degree of impairment of each of the key ecological functions. The scoring system was designed so that future improvements in the measurement of stream functions can be incorporated into it. Although the methodology was specifically designed for Auckland streams, the principles can be readily adapted to other regions and stream types.

  12. A Method to Optimize Geometric Errors of Machine Tool based on SNR Quality Loss Function and Correlation Analysis

    Directory of Open Access Journals (Sweden)

    Cai Ligang

    2017-01-01

    Full Text Available Instead of blindly improving the accuracy of a machine tool by increasing the precision of its key components in the production process, a method combining the SNR quality loss function with machine tool geometric error correlation analysis is adopted to optimize the geometric errors of a five-axis machine tool. Firstly, the homogeneous transformation matrix method is used to build the five-axis machine tool geometric error model. Secondly, the SNR quality loss function is used for cost modeling. Then, the machine tool accuracy optimization objective function is established based on the correlation analysis. Finally, ISIGHT combined with MATLAB is applied to optimize each error. The results show that this method is reasonable and makes it possible to relax tolerance ranges, thereby reducing the manufacturing cost of machine tools.
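For orientation, the nominal-the-best quadratic quality loss and SNR that this kind of cost modeling builds on can be sketched as follows — a generic Taguchi-style computation, not the paper's machine tool model; the sample data are invented:

```python
import math

def quality_loss(values, target, k=1.0):
    """Taguchi nominal-the-best expected loss: L = k * (variance + bias^2)."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return k * (var + (mean - target) ** 2)

def snr_nominal_the_best(values):
    """Signal-to-noise ratio in dB: higher means less variation about the mean."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / (n - 1)
    return 10 * math.log10(mean ** 2 / var)

# Hypothetical repeated measurements of one dimension, nominal value 10.
errors = [10.02, 9.98, 10.05, 9.95, 10.00]
loss = quality_loss(errors, target=10.0)
snr = snr_nominal_the_best(errors)
```

In an optimization loop like the paper's, the loss ties each tolerance choice to a cost, so tolerances can be loosened wherever the loss (and hence the SNR) barely changes.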

  13. A novel method to solve functional differential equations

    International Nuclear Information System (INIS)

    Tapia, V.

    1990-01-01

    A method to solve differential equations containing the variational operator as the derivation operation is presented. They are called variational differential equations (VDE). The solution to a VDE should be a function containing the derivatives, with respect to the base space coordinates, of the fields up to a generic order s: an s-th-order function. The variational operator doubles the order of the function on which it acts. Therefore, in order to make the orders of the different terms appearing in a VDE compatible, the solution should be a function containing the derivatives of the fields at all orders. But this takes us back to functional methods. To avoid this, one must restrict the considerations, in the case of second-order VDEs, to the space of s-th-order functions on which the variational operator acts transitively. These functions have been characterized for a one-dimensional base space for the first- and second-order cases. They turn out to be polynomial in the highest-order derivatives of the fields, with functions of the lower-order derivatives as coefficients. VDEs then reduce to a system of coupled partial differential equations for the coefficients mentioned above. The importance of the method lies in the fact that the solutions to VDEs are in one-to-one correspondence with the solutions of functional differential equations. The method finds direct applications in quantum field theory, where the Schroedinger equation plays a central role. Since the Schroedinger equation is reduced to a system of coupled partial differential equations, this provides a nonperturbative scheme for quantum field theory. As an example, the massless scalar field is considered

  14. Frames and other bases in abstract and function spaces novel methods in harmonic analysis

    CERN Document Server

    Gia, Quoc; Mayeli, Azita; Mhaskar, Hrushikesh; Zhou, Ding-Xuan

    2017-01-01

    The first of a two volume set on novel methods in harmonic analysis, this book draws on a number of original research and survey papers from well-known specialists detailing the latest innovations and recently discovered links between various fields. Along with many deep theoretical results, these volumes contain numerous applications to problems in signal processing, medical imaging, geodesy, statistics, and data science. The chapters within cover an impressive range of ideas from both traditional and modern harmonic analysis, such as: the Fourier transform, Shannon sampling, frames, wavelets, functions on Euclidean spaces, analysis on function spaces of Riemannian and sub-Riemannian manifolds, Fourier analysis on manifolds and Lie groups, analysis on combinatorial graphs, sheaves, co-sheaves, and persistent homologies on topological spaces. Volume I is organized around the theme of frames and other bases in abstract and function spaces, covering topics such as: The advanced development of frames, including ...

  15. Annotating function to differentially expressed LincRNAs in myelodysplastic syndrome using a network-based method.

    Science.gov (United States)

    Liu, Keqin; Beck, Dominik; Thoms, Julie A I; Liu, Liang; Zhao, Weiling; Pimanda, John E; Zhou, Xiaobo

    2017-09-01

    Long non-coding RNAs (lncRNAs) have been implicated in the regulation of diverse biological functions. The number of newly identified lncRNAs has increased dramatically in recent years, but their expression and function have not yet been described in most diseases. To elucidate lncRNA function in human disease, we have developed a novel network-based method (NLCFA) integrating correlations among lncRNAs, protein-coding genes and noncoding miRNAs. We have also integrated target gene associations and protein-protein interactions and designed our model to provide information on the combined influence of mRNAs, lncRNAs and miRNAs on cellular signal transduction networks. We have generated lncRNA expression profiles from the CD34+ haematopoietic stem and progenitor cells (HSPCs) from patients with Myelodysplastic syndromes (MDS) and healthy donors. We report, for the first time, aberrantly expressed lncRNAs in MDS and further prioritize biologically relevant lncRNAs using the NLCFA. Taken together, our data suggest that aberrant levels of specific lncRNAs are intimately involved in network modules that control multiple cancer-associated signalling pathways and cellular processes. Importantly, our method can be applied to prioritize aberrantly expressed lncRNAs for functional validation in other diseases and biological contexts. The method is implemented in R and Matlab. Contact: xizhou@wakehealth.edu. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com

  16. Calculational model based on influence function method for power distribution and control rod worth in fast reactors

    International Nuclear Information System (INIS)

    Sanda, T.; Azekura, K.

    1983-01-01

    A model for calculating the power distribution and the control rod worth in fast reactors has been developed. This model is based on the influence function method. The characteristics of the model are as follows: Influence functions for any changes in the control rod insertion ratio are expressed by using an influence function for an appropriate control rod insertion in order to reduce the computer memory size required for the method. A control rod worth is calculated on the basis of a one-group approximation in which cross sections are generated by bilinear (flux-adjoint) weighting, not the usual flux weighting, in order to reduce the collapse error. An effective neutron multiplication factor is calculated by adjoint weighting in order to reduce the effect of the error in the one-group flux distribution. The results obtained in numerical examinations of a prototype fast reactor indicate that this method is suitable for on-line core performance evaluation because of a short computing time and a small memory size

  17. Relativistic density functional theory with picture-change corrected electron density based on infinite-order Douglas-Kroll-Hess method

    Science.gov (United States)

    Oyama, Takuro; Ikabata, Yasuhiro; Seino, Junji; Nakai, Hiromi

    2017-07-01

    This Letter proposes a density functional treatment based on the two-component relativistic scheme at the infinite-order Douglas-Kroll-Hess (IODKH) level. The exchange-correlation energy and potential are calculated using the electron density based on the picture-change corrected density operator transformed by the IODKH method. Numerical assessments indicated that the picture-change uncorrected density functional terms generate significant errors, on the order of hartree for heavy atoms. The present scheme was found to reproduce the energetics in the four-component treatment with high accuracy.

  18. Developing rapid methods for analyzing upland riparian functions and values.

    Science.gov (United States)

    Hruby, Thomas

    2009-06-01

    Regulators protecting riparian areas need to understand the integrity, health, beneficial uses, functions, and values of this resource. Until now, most methods providing information about riparian areas have been based on analyzing condition or integrity. These methods, however, provide little information about functions and values, so different methods are needed that specifically address this aspect of riparian areas. In addition to information on functions and values, regulators have very specific needs that include analysis at the site scale, low cost, usability, and the inclusion of policy interpretations. To meet these needs, a rapid method has been developed that uses a multi-criteria decision matrix to categorize riparian areas in Washington State, USA. Indicators are used to identify the potential of the site to provide a function, the potential of the landscape to support the function, and the value the function provides to society. To meet legal needs, fixed boundaries for assessment units are established based on geomorphology, the distance from the "Ordinary High Water Mark", and different categories of land use. Assessment units are first classified based on ecoregions, geomorphic characteristics, and land uses. This simplifies the data that need to be collected at a site, but it requires developing and calibrating a separate model for each "class." The approach to developing methods is adaptable to other locations, as its basic structure is not dependent on local conditions.

  19. A Numerical Method for Lane-Emden Equations Using Hybrid Functions and the Collocation Method

    Directory of Open Access Journals (Sweden)

    Changqing Yang

    2012-01-01

    Full Text Available A numerical method to solve Lane-Emden equations as singular initial value problems is presented in this work. The method is based on replacing the unknown function with a truncated series of hybrid block-pulse functions and Chebyshev polynomials. The collocation method then transforms the differential equation into a system of algebraic equations, and is applicable to a wide class of differential equations. Numerical examples are presented to demonstrate the accuracy of the proposed method.
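A hedged, simplified illustration of the collocation idea (an even-power polynomial basis, not the paper's hybrid block-pulse/Chebyshev basis): for the linear Lane-Emden case y'' + (2/x) y' + y = 0 with y(0) = 1, y'(0) = 0, whose exact solution is sin(x)/x, forcing the ODE residual to vanish at a few collocation points yields a small linear algebraic system:

```python
import math

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

# Ansatz y(x) = 1 + sum_{k=1..K} c_k x^(2k); y(0) = 1 and y'(0) = 0 hold by
# construction. Applying L[y] = y'' + (2/x) y' + y to each monomial gives
# L[x^(2k)] = 2k(2k+1) x^(2k-2) + x^(2k), and L[1] = 1.
K = 4
pts = [(j + 1) / (K + 1) for j in range(K)]       # collocation points in (0, 1)
A = [[2 * k * (2 * k + 1) * x ** (2 * k - 2) + x ** (2 * k)
      for k in range(1, K + 1)] for x in pts]
b = [-1.0] * K                                    # the L[1] = 1 term, moved right
c = [1.0] + solve(A, b)

def y(x):
    return sum(ck * x ** (2 * k) for k, ck in enumerate(c))
```

With only four collocation points the polynomial already matches sin(x)/x to well below 1e-3 on [0, 1]; the singular coefficient 2/x causes no trouble because the collocation points avoid x = 0.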

  20. A gold standard method for the evaluation of antibody-based materials functionality: Approach to forced degradation studies.

    Science.gov (United States)

    Coussot, Gaëlle; Le Postollec, Aurélie; Faye, Clément; Dobrijevic, Michel

    2018-04-15

    The scope of this paper is to present a gold standard method to evaluate the functional activity of antibody (Ab)-based materials during the different phases of their development, after their exposure to forced degradation, or during routine quality control. Ab-based materials play a central role in the development of diagnostic devices, for example, for screening or therapeutic target characterization, in formulation development, and in novel micro(nano)technology approaches to develop immunosensors useful for the analysis of trace substances in the pharmaceutical and food industries and in clinical and environmental fields. A very important aspect of diagnostic device development is the construction of its biofunctional surfaces. These Ab surfaces require biocompatibility, homogeneity, stability, specificity and functionality. Thus, this work describes the validation and applications of a unique ligand binding assay to directly perform the quantitative measurement of functional Ab binding sites immobilized on solid surfaces. The method, called the Antibody Anti-HorseRadish Peroxidase (A2HRP) method, uses a covalently coated anti-HRP antibody (anti-HRP Ab) and does not require a secondary Ab during the detection step. The A2HRP method was validated and gave reliable results over a wide range of absorbance values. The analyzed validation criteria were fulfilled as required by the Food and Drug Administration (FDA) and European Medicines Agency (EMA) guidance for the validation of bioanalytical methods, with 1) an accuracy mean value within ±15% of the nominal value; 2) within-assay precision less than 7.1%; and 3) inter-day variability under 12.1%. With the A2HRP method, it is thus possible to quantify from 0.04 × 10¹² to 2.98 × 10¹² functional Ab binding sites immobilized on solid surfaces. The A2HRP method was validated according to FDA and EMA guidance, allowing the creation of a gold standard method to evaluate Ab surfaces for their resistance under

  1. A new diffusion nodal method based on analytic basis function expansion

    International Nuclear Information System (INIS)

    Noh, J.M.; Cho, N.Z.

    1993-01-01

    The transverse integration procedure commonly used in most advanced nodal methods results in some limitations. The first is that the transverse leakage term that appears in the transverse integration procedure must be appropriately approximated; in most advanced nodal methods, this term is expanded in a quadratic polynomial. The second arises when reconstructing the pinwise flux distribution within a node: the available one-dimensional flux shapes from the nodal calculation in each spatial direction cannot be used directly in the flux reconstruction. Finally, the transverse leakage defined for a hexagonal node becomes too complicated to handle easily and contains nonphysical singular terms. In this paper, a new nodal method called the analytic function expansion nodal (AFEN) method is described for both rectangular and hexagonal geometry in order to overcome these limitations. This method does not solve the transverse-integrated one-dimensional diffusion equations but instead solves the original multidimensional diffusion equation directly within a node. This is accomplished by expanding the solution (or the intranodal homogeneous flux distribution) in terms of nonseparable analytic basis functions satisfying the diffusion equation at any point in the node

  2. Method for Car in Dangerous Action Detection by Means of Wavelet Multi Resolution Analysis Based on Appropriate Support Length of Base Function

    OpenAIRE

    Kohei Arai; Tomoko Nishikawa

    2013-01-01

    Multi-resolution analysis (MRA) based on mother wavelet functions of differing support lengths is applied to rear-view images of automobiles in motion, and the driving characteristics of the cars are extracted. Speed, deflection, etc. are analyzed, and a method for detecting vehicles with a high risk of accident is proposed. The experimental results show that vehicles performing dangerous actions can be detected by the proposed method.
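The record's MRA varies the support length of the mother wavelet; as a hedged illustration of the underlying idea only, here is a single-wavelet (Haar) multi-resolution decomposition locating an abrupt deflection in a 1-D trajectory. The trajectory data are invented for illustration:

```python
def haar_level(signal):
    """One level of the Haar transform: pairwise averages and differences."""
    half = len(signal) // 2
    approx = [(signal[2 * i] + signal[2 * i + 1]) / 2 for i in range(half)]
    detail = [(signal[2 * i] - signal[2 * i + 1]) / 2 for i in range(half)]
    return approx, detail

def haar_mra(signal, levels):
    """Multi-resolution analysis: detail bands (fine to coarse) + coarse trend."""
    details = []
    for _ in range(levels):
        signal, d = haar_level(signal)
        details.append(d)
    return details, signal

# Smooth trajectory with a sudden lateral deflection starting at sample 41.
track = [0.1 * i for i in range(64)]
for i in range(41, 64):
    track[i] += 5.0                      # the abrupt "dangerous" deflection

details, trend = haar_mra(track, levels=3)
jump = max(range(len(details[0])), key=lambda i: abs(details[0][i]))
# details[0][20] spans samples 40-41, which straddle the deflection, so the
# largest fine-scale detail coefficient points at the event.
```

A smooth motion produces small, uniform detail coefficients, while an abrupt maneuver concentrates energy in the fine-scale band at the moment it occurs — the cue the proposed detection method builds on.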

  3. Research on Fault Diagnosis Method Based on Rule Base Neural Network

    Directory of Open Access Journals (Sweden)

    Zheng Ni

    2017-01-01

    Full Text Available The relationship between a fault phenomenon and its cause is usually nonlinear, which affects the accuracy of fault location, and neural networks are effective at dealing with nonlinear problems. In order to improve the efficiency of uncertain fault diagnosis based on neural networks, a neural network fault diagnosis method based on a rule base is put forward. First, the structure of a BP neural network is built and its learning rule is given. Then, the rule base is built using fuzzy theory. An improved fuzzy neural construction model is designed, in which the calculation methods for the node function and the membership function are also given. Simulation results confirm the effectiveness of this method.

  4. Network-based functional enrichment

    Directory of Open Access Journals (Sweden)

    Poirel Christopher L

    2011-11-01

    Full Text Available Abstract Background Many methods have been developed to infer and reason about molecular interaction networks. These approaches often yield networks with hundreds or thousands of nodes and up to an order of magnitude more edges. It is often desirable to summarize the biological information in such networks. A very common approach is to use gene function enrichment analysis for this task. A major drawback of this method is that it ignores information about the edges in the network being analyzed, i.e., it treats the network simply as a set of genes. In this paper, we introduce a novel method for functional enrichment that explicitly takes network interactions into account. Results Our approach naturally generalizes Fisher’s exact test, a gene set-based technique. Given a function of interest, we compute the subgraph of the network induced by genes annotated to this function. We use the sequence of sizes of the connected components of this sub-network to estimate its connectivity, and we estimate the statistical significance of the connectivity empirically by a permutation test. We present three applications of our method: (i) determining which functions are enriched in a given network, (ii) given a network and an interesting sub-network of genes within that network, determining which functions are enriched in the sub-network, and (iii) given two networks, determining the functions for which the connectivity improves when we merge the second network into the first. Through these applications, we show that our approach is a natural alternative to network clustering algorithms. Conclusions We presented a novel approach to functional enrichment that takes into account the pairwise relationships among genes annotated by a particular function. Each of the three applications discovers highly relevant functions. We used our methods to study biological data from three different organisms. Our results demonstrate the wide applicability of our methods. Our algorithms are
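The connectivity statistic described above — component sizes of the induced subgraph, with significance from a permutation test — can be sketched in stdlib Python. The toy gene network below is invented for illustration and is not the paper's data:

```python
import random
from collections import defaultdict

def components(nodes, edges):
    """Sorted sizes of connected components of the subgraph induced by nodes."""
    nodes = set(nodes)
    adj = defaultdict(set)
    for u, v in edges:
        if u in nodes and v in nodes:
            adj[u].add(v)
            adj[v].add(u)
    seen, sizes = set(), []
    for start in nodes:
        if start in seen:
            continue
        stack, size = [start], 0
        seen.add(start)
        while stack:                      # depth-first traversal of one component
            cur = stack.pop()
            size += 1
            for nb in adj[cur]:
                if nb not in seen:
                    seen.add(nb)
                    stack.append(nb)
        sizes.append(size)
    return sorted(sizes, reverse=True)

def enrichment_pvalue(annotated, all_genes, edges, trials=2000, seed=0):
    """Empirical p-value: is the annotated set more connected than random sets?"""
    rng = random.Random(seed)
    observed = components(annotated, edges)[0]        # largest component size
    hits = sum(
        components(rng.sample(all_genes, len(annotated)), edges)[0] >= observed
        for _ in range(trials))
    return (hits + 1) / (trials + 1)                  # add-one pseudocount estimate

# Toy network: ten genes in a chain; the annotated set is a connected run.
genes = ["g%d" % i for i in range(10)]
edges = [("g%d" % i, "g%d" % (i + 1)) for i in range(9)]
p = enrichment_pvalue(["g2", "g3", "g4", "g5"], genes, edges)
```

A random four-gene subset of a ten-gene chain is rarely fully connected, so the contiguous annotated set receives a small p-value — the same signal the method uses to flag functions whose genes cluster in the network.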

  5. Adaptive oriented PDEs filtering methods based on new controlling speed function for discontinuous optical fringe patterns

    Science.gov (United States)

    Zhou, Qiuling; Tang, Chen; Li, Biyuan; Wang, Linlin; Lei, Zhenkun; Tang, Shuwei

    2018-01-01

    The filtering of discontinuous optical fringe patterns is a challenging problem in this field. This paper is concerned with oriented partial differential equations (OPDEs)-based image filtering methods for discontinuous optical fringe patterns. We redefine a new controlling speed function to depend on the orientation coherence. The orientation coherence can be used to distinguish continuous regions from discontinuous regions, and can be calculated from the fringe orientation. We introduce the new controlling speed function into the previous OPDEs and propose adaptive OPDE filtering models, under which filtering in the continuous and discontinuous regions can be carried out selectively. We demonstrate the performance of the proposed adaptive OPDEs via application to simulated and experimental fringe patterns, and compare our methods with the previous OPDEs.

  6. Optimization methods of pulse-to-pulse alignment using femtosecond pulse laser based on temporal coherence function for practical distance measurement

    Science.gov (United States)

    Liu, Yang; Yang, Linghui; Guo, Yin; Lin, Jiarui; Cui, Pengfei; Zhu, Jigui

    2018-02-01

    An interferometer technique based on the temporal coherence function of femtosecond pulses is demonstrated for practical distance measurement. Here, the pulse-to-pulse alignment is analyzed for large-delay distance measurement. Firstly, a temporal coherence function model between two femtosecond pulses is developed in the time domain for the dispersive unbalanced Michelson interferometer. Then, according to this model, the fringe analysis and the envelope extraction process are discussed, and optimization methods of pulse-to-pulse alignment for practical long-distance measurement are presented. The order of the curve fitting and the selection of points for envelope extraction are analyzed. Furthermore, an averaging method based on the symmetry of the coherence function is demonstrated. Finally, the performance of the proposed methods is evaluated in an absolute distance measurement of 20 μm at a path length difference of 9 m. The improvement in standard deviation in the experimental results shows that these approaches have potential for practical distance measurement.

  7. Calculational model based on influence function method for power distribution and control rod worth in fast reactors

    International Nuclear Information System (INIS)

    Toshio, S.; Kazuo, A.

    1983-01-01

    A model for calculating the power distribution and the control rod worth in fast reactors has been developed. This model is based on the influence function method. The characteristics of the model are as follows: 1. Influence functions for any changes in the control rod insertion ratio are expressed by using an influence function for an appropriate control rod insertion in order to reduce the computer memory size required for the method. 2. A control rod worth is calculated on the basis of a one-group approximation in which cross sections are generated by bilinear (flux-adjoint) weighting, not the usual flux weighting, in order to reduce the collapse error. 3. An effective neutron multiplication factor is calculated by adjoint weighting in order to reduce the effect of the error in the one-group flux distribution. The results obtained in numerical examinations of a prototype fast reactor indicate that this method is suitable for on-line core performance evaluation because of a short computing time and a small memory size

  8. Adaptive Functional-Based Neuro-Fuzzy-PID Incremental Controller Structure

    Directory of Open Access Journals (Sweden)

    Ashraf Ahmed Fahmy

    2014-03-01

    Full Text Available This paper presents an adaptive functional-based neuro-fuzzy-PID incremental (NFPID) controller structure that can be tuned either offline or online according to the required controller performance. First, differential membership functions are used to represent the fuzzy membership functions of the input-output space of the three-term controller. Second, controller rules are generated based on the discrete proportional, derivative, and integral functions for the fuzzy space. Finally, a fully differentiable fuzzy neural network is constructed to represent the developed controller for either offline or online controller parameter adaptation. Two different adaptation methods are used for controller tuning: an offline method based on optimizing a controller transient performance cost function using the Bees Algorithm, and an online method based on tracking error minimization using the back-propagation-with-momentum algorithm. The proposed control system was tested to show the validity of the controller structure over fixed PID controller gains in controlling a SCARA-type robot arm.
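The incremental (velocity-form) PID law that underlies this controller, before any fuzzy or neural adaptation is layered on top, can be sketched as follows; the plant model and gains are invented for illustration and are not from the paper:

```python
class IncrementalPID:
    """Velocity-form PID: the controller output is updated by increments."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e1 = self.e2 = 0.0      # errors at steps k-1 and k-2
        self.u = 0.0

    def step(self, e):
        du = (self.kp * (e - self.e1)                   # proportional increment
              + self.ki * e                             # integral increment
              + self.kd * (e - 2 * self.e1 + self.e2))  # derivative increment
        self.u += du
        self.e2, self.e1 = self.e1, e
        return self.u

# Track a unit step on a first-order plant y[k+1] = 0.9 y[k] + 0.1 u[k].
pid = IncrementalPID(kp=1.0, ki=0.4, kd=0.05)
y, setpoint = 0.0, 1.0
for _ in range(200):
    u = pid.step(setpoint - y)
    y = 0.9 * y + 0.1 * u
```

Because only the increment is computed each step, the integral state lives in the accumulated output itself, which is what makes the three gains natural targets for the offline or online adaptation the paper describes.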

  9. Generic primal-dual interior point methods based on a new kernel function

    NARCIS (Netherlands)

    EL Ghami, M.; Roos, C.

    2008-01-01

    In this paper we present generic primal-dual interior point methods (IPMs) for linear optimization in which the search direction depends on a univariate kernel function which is also used as a proximity measure in the analysis of the algorithm. The proposed kernel function does not satisfy all the

  10. Hankel Matrix Correlation Function-Based Subspace Identification Method for UAV Servo System

    Directory of Open Access Journals (Sweden)

    Minghong She

    2018-01-01

    Full Text Available For the closed-loop subspace model identification problem, we propose a zero-space projection method based on correlation function estimation to fill the block Hankel matrix of the identification model, combining linear algebra with geometry. By using the same projection of related data in the time offset set and LQ decomposition, the multiplication operation of the projection is achieved and a dynamics estimate of the unknown equipment system model is obtained. Consequently, we solve the problem of biased estimation caused when an open-loop subspace identification algorithm is applied to closed-loop identification. A simulation example is given to show the effectiveness of the proposed approach. Finally, the practicality of the identification algorithm is verified by hardware tests of a UAV servo system in a real environment.
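The basic objects of such a method — correlation function estimates arranged into a block Hankel matrix — can be sketched as follows. This is a plain Hankel matrix of scalar correlations, not the full closed-loop projection algorithm, and the system and signals are invented for illustration:

```python
import random

def xcorr(u, y, lag):
    """Biased sample cross-correlation r(lag) = mean of y[t+lag] * u[t]."""
    n = len(u)
    return sum(y[t + lag] * u[t] for t in range(n - lag)) / n

def block_hankel(r, rows, cols):
    """Hankel matrix H[i][j] = r[i + j + 1] filled from correlation estimates."""
    return [[r[i + j + 1] for j in range(cols)] for i in range(rows)]

# Toy first-order system y[t] = a y[t-1] + b u[t-1] driven by a white input.
rng = random.Random(1)
a, b = 0.8, 1.0
u = [rng.uniform(-1, 1) for _ in range(20000)]
y = [0.0]
for t in range(1, len(u)):
    y.append(a * y[-1] + b * u[t - 1])

r = [xcorr(u, y, k) for k in range(7)]  # here r[k] ~ E[u^2] * b * a^(k-1), k >= 1
H = block_hankel(r, rows=3, cols=3)
# Consecutive correlations decay by the factor a, so the Hankel matrix is
# numerically rank one -- the structure subspace identification exploits to
# recover the system order and dynamics.
```

Working with correlation estimates rather than raw input-output data is one standard way to suppress the noise feedback that biases open-loop subspace algorithms in closed loop.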

  11. Quality functions for requirements engineering in system development methods.

    Science.gov (United States)

    Johansson, M; Timpka, T

    1996-01-01

    Based on a grounded theory framework, this paper analyses the quality characteristics of methods to be used for requirements engineering in the development of medical decision support systems (MDSS). The results from a Quality Function Deployment (QFD) used to rank functions connected to user value and a focus group study were presented to a validation focus group. The focus group studies take advantage of a group process to collect data for further analyses. The results describe factors considered by the participants as important in the development of methods for requirements engineering in health care. Based on the findings, the content that, according to the users, an MDSS method should support is established.

  12. Protein Function Prediction Based on Sequence and Structure Information

    KAUST Repository

    Smaili, Fatima Z.

    2016-05-25

    The number of available protein sequences in public databases is increasing exponentially. However, a significant fraction of these sequences lack functional annotation which is essential to our understanding of how biological systems and processes operate. In this master thesis project, we worked on inferring protein functions based on the primary protein sequence. In the approach we follow, 3D models are first constructed using I-TASSER. Functions are then deduced by structurally matching these predicted models, using global and local similarities, through three independent enzyme commission (EC) and gene ontology (GO) function libraries. The method was tested on 250 “hard” proteins, which lack homologous templates in both structure and function libraries. The results show that this method outperforms the conventional prediction methods based on sequence similarity or threading. Additionally, our method could be improved even further by incorporating protein-protein interaction information. Overall, the method we use provides an efficient approach for automated functional annotation of non-homologous proteins, starting from their sequence.

  13. MR-based methods of the functional imaging of the CNS; MR-basierte Methoden der funktionellen Bildgebung des zentralen Nervensystems

    Energy Technology Data Exchange (ETDEWEB)

    Giesel, F.L.; Weber, M.A.; Zechmann, C.; Tengg-Kobligk, H. von; Essig, M.; Kauczor, H.U. [Radiologie, Deutsches Krebsforschungszentrum (DKFZ), Heidelberg (Germany); Wuestenberg, T. [Abt. fuer Medizinische Psychologie, Georg-August-Univ. Goettingen (Germany); Bongers, A.; Baudendistel, K.T. [Medizinische Physik in der Radiologie, Deutsches Krebsforschungszentrum (DKFZ), Heidelberg (Germany); Hahn, H.K. [MeVis, Zentrum fuer Medizinische Diagnosesysteme und Visualisierung, Bremen (Germany)

    2005-05-01

    This review presents the basic principles of functional imaging of the central nervous system utilizing magnetic resonance imaging. The focus is set on visualization of different functional aspects of the brain and related pathologies. Additionally, clinical cases are presented to illustrate the applications of functional imaging techniques in the clinical setting. The relevant physics and physiology of contrast-enhanced and non-contrast-enhanced methods are discussed. The two main functional MR techniques requiring contrast enhancement are dynamic T1- and T2*-MRI to image perfusion. Based on different pharmacokinetic models of contrast enhancement, diagnostic applications for neurology and radio-oncology are discussed. The functional non-contrast-enhanced imaging techniques are based on the blood oxygenation level dependent (BOLD) fMRI and arterial spin labeling (ASL) techniques. They have gained clinical impact particularly in the fields of psychiatry and neurosurgery. (orig.)

  14. Methods for selective functionalization and separation of carbon nanotubes

    Science.gov (United States)

    Strano, Michael S. (Inventor); Usrey, Monica (Inventor); Barone, Paul (Inventor); Dyke, Christopher A. (Inventor); Tour, James M. (Inventor); Kittrell, W. Carter (Inventor); Hauge, Robert H (Inventor); Smalley, Richard E. (Inventor); Marek, legal representative, Irene Marie (Inventor)

    2011-01-01

    The present invention is directed toward methods of selectively functionalizing carbon nanotubes of a specific type or range of types, based on their electronic properties, using diazonium chemistry. The present invention is also directed toward methods of separating carbon nanotubes into populations of specific types or range(s) of types via selective functionalization and electrophoresis, and also to the novel compositions generated by such separations.

  15. A semi-classical treatment of dissipative processes based on Feynman's influence functional method

    International Nuclear Information System (INIS)

    Moehring, K.; Smilansky, U.

    1980-01-01

    We develop a semi-classical treatment of dissipative processes based on Feynman's influence functional method. Applying it to deep inelastic collisions of heavy ions, we study inclusive transition probabilities corresponding to a situation in which only a set of collective variables is specified in the initial and final states. We show that the inclusive probabilities as well as the final energy distributions can be expressed in terms of properly defined classical paths and their corresponding stability fields. We present a uniform approximation for the study of quantal interference and focussing phenomena and discuss the conditions under which they are to be expected. For the dissipation mechanism we study three approximations: the harmonic model for the internal system, the weak (diabatic) coupling and the adiabatic coupling. We show that these three limits can be treated in the same manner. Finally, we compare the present formalism with other methods introduced for the description of dissipation in deep inelastic collisions. (orig.)

  16. Trial-Based Functional Analysis and Functional Communication Training in an Early Childhood Setting

    Science.gov (United States)

    Lambert, Joseph M.; Bloom, Sarah E.; Irvin, Jennifer

    2012-01-01

    Problem behavior is common in early childhood special education classrooms. Functional communication training (FCT; Carr & Durand, 1985) may reduce problem behavior but requires identification of its function. The trial-based functional analysis (FA) is a method that can be used to identify problem behavior function in schools. We conducted…

  17. Secure method for biometric-based recognition with integrated cryptographic functions.

    Science.gov (United States)

    Chiou, Shin-Yan

    2013-01-01

    Biometric systems refer to biometric technologies which can be used to achieve authentication. Unlike cryptography-based technologies, the certification ratio in biometric systems need not achieve 100% accuracy. However, biometric data can only be directly compared through proximal access to the scanning device and cannot be combined with cryptographic techniques. Moreover, repeated use, improper storage, or transmission leaks may compromise security. Prior studies have attempted to combine cryptography and biometrics, but these methods require the synchronization of internal systems and are vulnerable to power analysis attacks, fault-based cryptanalysis, and replay attacks. This paper presents a new secure cryptographic authentication method using biometric features. The proposed system combines the advantages of biometric identification and cryptographic techniques. By adding a subsystem to existing biometric recognition systems, we can simultaneously achieve the security of cryptographic technology and the error tolerance of biometric recognition. This method can be used for biometric data encryption, signatures, and other types of cryptographic computation. The method offers a high degree of security with protection against power analysis attacks, fault-based cryptanalysis, and replay attacks. Moreover, it can be used to improve the confidentiality of biological data storage and biodata identification processes. Remote biometric authentication can also be safely applied.

  18. Secure Method for Biometric-Based Recognition with Integrated Cryptographic Functions

    Directory of Open Access Journals (Sweden)

    Shin-Yan Chiou

    2013-01-01

    Full Text Available Biometric systems refer to biometric technologies which can be used to achieve authentication. Unlike cryptography-based technologies, certification in biometric systems need not achieve 100% accuracy. However, biometric data can only be compared directly through proximal access to the scanning device and cannot be combined with cryptographic techniques. Moreover, repeated use, improper storage, or transmission leaks may compromise security. Prior studies have attempted to combine cryptography and biometrics, but these methods require the synchronization of internal systems and are vulnerable to power analysis attacks, fault-based cryptanalysis, and replay attacks. This paper presents a new secure cryptographic authentication method using biometric features. The proposed system combines the advantages of biometric identification and cryptographic techniques. By adding a subsystem to existing biometric recognition systems, we can simultaneously achieve the security of cryptographic technology and the error tolerance of biometric recognition. This method can be used for biometric data encryption, signatures, and other types of cryptographic computation. The method offers a high degree of security with protection against power analysis attacks, fault-based cryptanalysis, and replay attacks. Moreover, it can be used to improve the confidentiality of biological data storage and biodata identification processes. Remote biometric authentication can also be applied safely.

  19. Systems and methods for interpolation-based dynamic programming

    KAUST Repository

    Rockwood, Alyn

    2013-01-03

    Embodiments of systems and methods for interpolation-based dynamic programming. In one embodiment, the method includes receiving an objective function and a set of constraints associated with the objective function. The method may also include identifying a solution on the objective function corresponding to intersections of the constraints. Additionally, the method may include generating an interpolated surface that is in constant contact with the solution. The method may also include generating a vector field in response to the interpolated surface.

  20. Systems and methods for interpolation-based dynamic programming

    KAUST Repository

    Rockwood, Alyn

    2013-01-01

    Embodiments of systems and methods for interpolation-based dynamic programming. In one embodiment, the method includes receiving an objective function and a set of constraints associated with the objective function. The method may also include identifying a solution on the objective function corresponding to intersections of the constraints. Additionally, the method may include generating an interpolated surface that is in constant contact with the solution. The method may also include generating a vector field in response to the interpolated surface.

  1. A G-function-based reliability-based design methodology applied to a cam roller system

    International Nuclear Information System (INIS)

    Wang, W.; Sui, P.; Wu, Y.T.

    1996-01-01

    Conventional reliability-based design optimization methods treat the reliability function as an ordinary function and apply existing mathematical programming techniques to solve the design problem. As a result, the conventional approach requires nested loops with respect to the g-function and is very time consuming. A new reliability-based design method is proposed in this paper that deals with the g-function directly instead of the reliability function. This approach has the potential to significantly reduce the number of calls for g-function calculations, since it requires only one full reliability analysis per design iteration. A cam roller system in a typical high-pressure fuel injection diesel engine is designed using both the proposed and the conventional approach. The proposed method is much more efficient for this application
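
The core computational object in such a method is the limit-state (g-) function, with failure defined by g ≤ 0. As a minimal illustration of a single g-function-based reliability evaluation (not the paper's cam roller model, whose g-function is not given here), the sketch below estimates the failure probability of a hypothetical capacity-minus-load limit state by Monte Carlo:

```python
import numpy as np

# Hypothetical limit-state (g-) function: failure when g(R, S) = R - S < 0,
# with capacity R ~ N(5, 1) and load S ~ N(3, 1). This is an illustration of
# one g-function-based reliability run, not the paper's cam roller model.
def g(r, s):
    return r - s

rng = np.random.default_rng(42)
n = 200_000
r = rng.normal(5.0, 1.0, n)
s = rng.normal(3.0, 1.0, n)

pf = np.mean(g(r, s) < 0.0)        # Monte Carlo failure probability
print(f"estimated Pf = {pf:.4f}")  # exact value is Phi(-sqrt(2)) ~ 0.0786
```

Each such estimate is one "full reliability analysis"; the point of the paper's approach is that a design iteration needs only one of them.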

  2. Modeling photonic crystal waveguides with noncircular geometry using green function method

    International Nuclear Information System (INIS)

    Uvarovaa, I.; Tsyganok, B.; Bashkatov, Y.; Khomenko, V.

    2012-01-01

    Fast and accurate simulation of photonic crystal waveguides with complex geometry is currently an acute problem in photonics. This paper describes an improved Green's function method for noncircular geometries. Based on a comparison of efficient numerical methods for finding the eigenvalues in the Green's function method for noncircular holes, an effective method was chosen for our purposes. The simulation is implemented in the Maple environment, and the simulation results were confirmed experimentally. Key words: photonic crystal, waveguide, modeling, Green function, complex geometry

  3. Recent Advances in the Korringa-Kohn-Rostoker Green Function Method

    Directory of Open Access Journals (Sweden)

    Zeller Rudolf

    2014-01-01

    Full Text Available The Korringa-Kohn-Rostoker (KKR) Green function (GF) method is a technique for all-electron full-potential density-functional calculations. Similar to the historical Wigner-Seitz cellular method, the KKR-GF method uses a partitioning of space into atomic Wigner-Seitz cells. However, the numerically demanding wave-function matching at the cell boundaries is avoided by use of an integral equation formalism based on the concept of reference Green functions. The advantage of this formalism will be illustrated by the recent progress made for very large systems with thousands of inequivalent atoms and for very accurate calculations of atomic forces and total energies.

  4. A Data Forward Stepwise Fitting Algorithm Based on Orthogonal Function System

    Directory of Open Access Journals (Sweden)

    Li Han-Ju

    2017-01-01

    Full Text Available Data fitting is the main method of functional data analysis, and it is widely used in economics, social science, engineering technology, and other fields. The least squares method is the principal data-fitting technique, but it does not converge, has no memory property, produces large fitting errors, and overfits easily. Based on an orthogonal trigonometric function system, this paper presents a forward stepwise data-fitting algorithm. The algorithm adopts a forward stepwise fitting strategy: at each step it uses the closest basis function to fit the residual error left by the previous basis-function fit, which minimizes the residual mean square error. We theoretically prove the convergence, the memory property, and the diminishing fitting error of the algorithm. Experimental results show that the proposed algorithm is effective and that its fitting performance is better than that of the least squares method and of the forward stepwise fitting algorithm based on a non-orthogonal function system.
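
The greedy residual-fitting loop described above can be sketched as follows. This is a generic reconstruction from the abstract, not the authors' code: candidates are drawn from an orthogonal trigonometric system, and each step fits only the residual left by the previous step, keeping earlier coefficients unchanged (the "memory property").

```python
import numpy as np

# Forward stepwise fitting with an orthogonal trigonometric basis (a generic
# sketch of the strategy in the abstract, not the paper's exact algorithm).
def stepwise_trig_fit(x, y, n_steps=10, k_max=8):
    # Candidate basis: 1, cos(kx), sin(kx), k = 1..k_max (orthogonal on [0, 2*pi))
    basis = [np.ones_like(x)]
    basis += [np.cos(k * x) for k in range(1, k_max + 1)]
    basis += [np.sin(k * x) for k in range(1, k_max + 1)]
    residual = y.copy()
    model = np.zeros_like(y)
    for _ in range(n_steps):
        # pick the candidate whose 1-D least-squares fit leaves the smallest MSE
        best = min(basis, key=lambda b: np.mean(
            (residual - (b @ residual) / (b @ b) * b) ** 2))
        coef = (best @ residual) / (best @ best)   # projection coefficient
        model += coef * best
        residual -= coef * best
    return model

x = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
y = 1.5 + 2.0 * np.sin(3.0 * x) - 0.7 * np.cos(5.0 * x)
fit = stepwise_trig_fit(x, y, n_steps=3)
print("RMSE:", np.sqrt(np.mean((fit - y) ** 2)))  # near zero: 3 steps recover 3 terms
```

Because the candidates are orthogonal on the uniform grid, removing one component does not disturb the others, which is why the residual error can only shrink step by step.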

  5. Performance of wave function and density functional methods for water hydrogen bond spin-spin coupling constants.

    Science.gov (United States)

    García de la Vega, J M; Omar, S; San Fabián, J

    2017-04-01

    Spin-spin coupling constants in the water monomer and dimer have been calculated using several wave function and density-functional-based methods. CCSD, MCSCF, and SOPPA wave function methods yield similar results, especially when an additive approach is used with MCSCF. Several functionals were used to analyze their performance along Jacob's ladder, and a set of functionals with different fractions of HF exchange was tested. Functionals with large HF exchange appropriately predict the ¹J(OH), ²J(HH), and ²ʰJ(OO) couplings, while ¹ʰJ(OH) is better calculated with functionals that include a reduced fraction of HF exchange. Accurate functionals for ¹J(OH) and ²J(HH) have been tested in a tetramer water model. The hydrogen bond effects on these intramolecular couplings are additive when they are calculated by the SOPPA(CCSD) wave function and DFT methods. Graphical Abstract Evaluation of the additive effect of the hydrogen bond on spin-spin coupling constants of water using WF and DFT methods.

  6. Determination of resonance parameters in QCD by functional analysis methods

    International Nuclear Information System (INIS)

    Ciulli, S.; Geniet, F.; Papadopoulos, N.A.; Schilcher, K.

    1988-01-01

    A mathematically rigorous method based on functional analysis is used to determine resonance parameters of an amplitude from its given asymptotic expression in the space-like region. This method is checked on a model amplitude where both the asymptotic expression and the exact function are known. This method is then applied to the determination of the mass and the width of the ρ-meson from the corresponding space-like asymptotic QCD expression. (orig.)

  7. A New Filled Function Method with One Parameter for Global Optimization

    Directory of Open Access Journals (Sweden)

    Fei Wei

    2013-01-01

    Full Text Available The filled function method is an effective approach for finding the global minimizer of multidimensional multimodal functions. Conventional filled functions are numerically unstable because of exponential or logarithmic terms and are sensitive to their parameters. In this paper, a new filled function with only one parameter is proposed; it is continuously differentiable and proved to satisfy all conditions of the filled function definition. Moreover, this filled function is not sensitive to its parameter, and overflow cannot occur for it. Based on these properties, a new filled function method is proposed that is numerically stable with respect to the initial point and the parameter value. Computer simulations indicate that the proposed filled function method is efficient and effective.
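
The abstract does not give the new filled function itself, so the sketch below uses a classic exponential filled function from the literature (with exactly the numerical caveats the paper criticizes) purely to illustrate the mechanism the method relies on: minimize locally, build a filled function around the local minimizer, and follow it downhill out of the basin. The step sizes, bounds, and parameters are illustrative.

```python
import numpy as np

# Filled-function global search on f(x) = x**2 + 10*sin(x) (1-D for clarity).
# P(x) = exp(-(x - x1)**2 / rho**2) / (r + f(x)) is a classic Ge-style filled
# function, NOT the paper's one-parameter construction.

def f(x):
    return x * x + 10.0 * np.sin(x)

def local_descent(x, step=0.1, tol=1e-6):
    # crude derivative-free descent: shrink the step until no improvement
    while step > tol:
        if f(x + step) < f(x):
            x += step
        elif f(x - step) < f(x):
            x -= step
        else:
            step /= 2.0
    return x

def filled_search(x0, lo=-10.0, hi=10.0, r=10.0, rho=2.0):
    x1 = local_descent(x0)              # current local minimizer
    f1 = f(x1)
    P = lambda x: np.exp(-((x - x1) / rho) ** 2) / (r + f(x))
    for direction in (+1.0, -1.0):      # walk downhill on P in each direction
        x = x1 + direction * 0.1
        while lo <= x <= hi:
            if f(x) < f1:               # left the basin of x1: re-minimize f
                return local_descent(x)
            nxt = x + direction * 0.1
            if not (lo <= nxt <= hi) or P(nxt) >= P(x):
                break
            x = nxt
    return x1                           # no better basin found

x_star = filled_search(4.0)             # starts near the local minimum ~3.84
print(x_star, f(x_star))                # global minimum near x = -1.31
```

The exponential term in P is exactly what makes such constructions overflow-prone and parameter-sensitive in higher dimensions, which is the deficiency the proposed one-parameter filled function is designed to remove.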

  8. SYNTHESIS METHODS OF ALGEBRAIC NORMAL FORM OF MANY-VALUED LOGIC FUNCTIONS

    Directory of Open Access Journals (Sweden)

    A. V. Sokolov

    2016-01-01

    Full Text Available The rapid development of error-correcting coding, cryptography, and signal synthesis theory based on the principles of many-valued logic calls for a more detailed study of the forms of representation of many-valued logic functions. In particular, the algebraic normal form of Boolean functions, also known as the Zhegalkin polynomial, which describes many cryptographic properties of Boolean functions well, is widely used. In this article, we formalize the notion of algebraic normal form for many-valued logic functions. We develop a fast method for synthesizing the algebraic normal form of 3-functions and 5-functions that works similarly to the Reed-Muller transform for Boolean functions: on the basis of recurrently synthesized transform matrices. We propose a hypothesis that determines the rules for synthesizing these matrices for the transformation from the truth table to the coefficients of the algebraic normal form, and for the inverse transform, for any given number of variables of 3-functions or 5-functions. The article also introduces a definition of the algebraic degree of nonlinearity of many-valued logic functions and of S-boxes based on the principles of many-valued logic. The methods for synthesizing the algebraic normal form of 3-functions are then applied to the known construction of recurrent synthesis of S-boxes of length N = 3^k, whereby their algebraic degrees of nonlinearity are computed. The results could form the basis for further theoretical research and practical applications such as the development of new cryptographic primitives, error-correcting codes, data compression algorithms, signal structures, and block and stream ciphers, all based on the promising principles of many-valued logic. In addition, the fast method of synthesizing the algebraic normal form of many-valued logic functions is the basis for their software and hardware implementation.
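
The Reed-Muller transform that the 3- and 5-valued methods generalize can be shown concretely in the Boolean case: the ANF (Zhegalkin) coefficients are obtained from the truth table by an in-place XOR butterfly, the binary Möbius transform. The recurrent 3- and 5-valued transform matrices themselves are not reproduced here.

```python
# Boolean (2-valued) analogue of the transform discussed above: ANF (Zhegalkin)
# coefficients from a truth table via the Reed-Muller XOR butterfly over GF(2).

def anf(truth_table):
    """In-place binary Mobius (Reed-Muller) transform."""
    a = list(truth_table)
    n = len(a)
    assert n & (n - 1) == 0, "length must be a power of two"
    step = 1
    while step < n:
        for i in range(n):
            if i & step:             # upper half of each butterfly pair
                a[i] ^= a[i ^ step]
        step <<= 1
    return a                         # a[m] = coefficient of monomial with mask m

# AND(x1, x2): truth table indexed by (x1 x2) as binary -> ANF = x1*x2
print(anf([0, 0, 0, 1]))             # [0, 0, 0, 1]
# XOR(x1, x2) -> ANF = x2 + x1 (no constant, no x1*x2 term)
print(anf([0, 1, 1, 0]))             # [0, 1, 1, 0]
```

The algebraic degree of nonlinearity mentioned in the abstract is, in this Boolean analogue, simply the largest Hamming weight of an index m with a[m] = 1.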

  9. Source-based neurofeedback methods using EEG recordings: training altered brain activity in a functional brain source derived from blind source separation

    Science.gov (United States)

    White, David J.; Congedo, Marco; Ciorciari, Joseph

    2014-01-01

    A developing literature explores the use of neurofeedback in the treatment of a range of clinical conditions, particularly ADHD and epilepsy, whilst neurofeedback also provides an experimental tool for studying the functional significance of endogenous brain activity. A critical component of any neurofeedback method is the underlying physiological signal which forms the basis for the feedback. While the past decade has seen the emergence of fMRI-based protocols training spatially confined BOLD activity, traditional neurofeedback has utilized a small number of electrode sites on the scalp. As scalp EEG at a given electrode site reflects a linear mixture of activity from multiple brain sources and artifacts, efforts to successfully acquire some level of control over the signal may be confounded by these extraneous sources. Further, in the event of successful training, these traditional neurofeedback methods are likely influencing multiple brain regions and processes. The present work describes the use of source-based signal processing methods in EEG neurofeedback. The feasibility and potential utility of such methods were explored in an experiment training increased theta oscillatory activity in a source derived from Blind Source Separation (BSS) of EEG data obtained during completion of a complex cognitive task (spatial navigation). Learned increases in theta activity were observed in two of the four participants to complete 20 sessions of neurofeedback targeting this individually defined functional brain source. Source-based EEG neurofeedback methods using BSS may offer important advantages over traditional neurofeedback, by targeting the desired physiological signal in a more functionally and spatially specific manner. Having provided preliminary evidence of the feasibility of these methods, future work may study a range of clinically and experimentally relevant brain processes where individual brain sources may be targeted by source-based EEG neurofeedback. 
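
The feedback signal in such a protocol is a band power of the trained source. A minimal sketch of that computation on a synthetic "source" (a 6 Hz oscillation in noise) is shown below; the BSS step itself and all acquisition details are omitted, and the sampling rate is an assumption.

```python
import numpy as np
from scipy.signal import welch

# Theta-band (4-8 Hz) power of one derived source -- the quantity reinforced
# in the neurofeedback protocol above. The 250 Hz sampling rate and the
# synthetic 6 Hz source are illustrative assumptions.
fs = 250.0
t = np.arange(0, 10.0, 1.0 / fs)
rng = np.random.default_rng(1)
source = np.sin(2 * np.pi * 6.0 * t) + 0.5 * rng.standard_normal(t.size)

def band_power(x, fs, lo, hi):
    freqs, psd = welch(x, fs=fs, nperseg=512)
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].sum() * (freqs[1] - freqs[0])  # integrate PSD over band

theta = band_power(source, fs, 4.0, 8.0)   # feedback value to reinforce
beta = band_power(source, fs, 13.0, 30.0)  # a comparison band
print(f"theta power {theta:.3f} vs beta power {beta:.3f}")
```

In a real source-based protocol the same computation would be applied to the BSS-unmixed component rather than to a raw scalp channel, which is precisely the specificity advantage the abstract describes.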

  10. Approximation of the exponential integral (well function) using sampling methods

    Science.gov (United States)

    Baalousha, Husam Musa

    2015-04-01

    The exponential integral (also known as the well function) is often used in hydrogeology to solve the Theis and Hantush equations. Many methods have been developed to approximate the exponential integral. Most of these methods are based on numerical approximations and are valid for a certain range of the argument value. This paper presents a new approach to approximating the exponential integral, based on sampling methods. Three different sampling methods: Latin Hypercube Sampling (LHS), Orthogonal Array (OA), and Orthogonal Array-based Latin Hypercube (OA-LH), have been used to approximate the function. Different argument values, covering a wide range, have been used. The results of the sampling methods were compared with results obtained with Mathematica software, which was used as a benchmark. All three sampling methods converge to the result obtained by Mathematica, at different rates. It was found that the orthogonal array (OA) method has the fastest convergence rate compared with LHS and OA-LH. The root mean square error (RMSE) of OA was on the order of 1E-08. This method can be used with any argument value and can be used to solve other integrals in hydrogeology, such as the leaky aquifer integral.
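
The sampling idea can be sketched in one dimension: the substitution t = u/s maps the well function E1(u) = ∫_u^∞ e^(-t)/t dt onto the unit interval as ∫_0^1 e^(-u/s)/s ds, which is then averaged over a Latin Hypercube sample of (0, 1). The sample size and seed below are illustrative, and scipy's exp1 plays the role of the paper's Mathematica benchmark.

```python
import numpy as np
from scipy.special import exp1         # benchmark value of the well function

# LHS approximation of the well function E1(u). With t = u/s,
# E1(u) = integral over (0, 1) of exp(-u/s)/s ds, so a stratified (Latin
# Hypercube) sample of the unit interval gives the estimate directly.
def well_function_lhs(u, n=4096, seed=0):
    rng = np.random.default_rng(seed)
    # one random point per cell (i + U)/n: a 1-D Latin Hypercube sample
    s = (np.arange(n) + rng.random(n)) / n
    return np.mean(np.exp(-u / s) / s)

u = 1.0
est = well_function_lhs(u)
print(est, exp1(u))                    # both ~0.2194
```

The stratification is what drives the fast convergence reported in the abstract: each cell of the partition contributes one sample, so the error shrinks much faster than for plain Monte Carlo on this smooth integrand.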

  11. A third-generation density-functional-theory-based method for calculating canonical molecular orbitals of large molecules.

    Science.gov (United States)

    Hirano, Toshiyuki; Sato, Fumitoshi

    2014-07-28

    We used grid-free modified Cholesky decomposition (CD) to develop a density-functional-theory (DFT)-based method for calculating the canonical molecular orbitals (CMOs) of large molecules. Our method can be used to calculate standard CMOs, analytically compute exchange-correlation terms, and maximise the capacity of next-generation supercomputers. Cholesky vectors were first analytically downscaled using low-rank pivoted CD and CD with adaptive metric (CDAM). The obtained Cholesky vectors were distributed and stored on each computer node in a parallel computer, and the Coulomb, Fock exchange, and pure exchange-correlation terms were calculated by multiplying the Cholesky vectors without evaluating molecular integrals in self-consistent field iterations. Our method enables DFT and massively distributed memory parallel computers to be used in order to very efficiently calculate the CMOs of large molecules.
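
The numerical kernel behind the downscaling step is low-rank pivoted Cholesky decomposition, which can be sketched generically (this is not the authors' grid-free implementation): for a positive semidefinite matrix M, build L with M ≈ LLᵀ, stopping once the largest remaining diagonal residual is small.

```python
import numpy as np

# Low-rank pivoted Cholesky decomposition of a PSD matrix M: a generic sketch
# of the factorization used to "downscale" Cholesky vectors, not the paper's
# grid-free CD/CDAM implementation.
def pivoted_cholesky(M, tol=1e-10, max_rank=None):
    M = np.array(M, dtype=float)
    n = M.shape[0]
    d = np.diag(M).copy()                  # running diagonal of the residual
    L = np.zeros((n, max_rank or n))
    k = 0
    while k < L.shape[1] and d.max() > tol:
        p = int(np.argmax(d))              # pivot: largest residual diagonal
        L[:, k] = (M[:, p] - L[:, :k] @ L[p, :k]) / np.sqrt(d[p])
        d -= L[:, k] ** 2
        k += 1
    return L[:, :k]

# PSD test matrix of numerical rank 4: the factorization stops at rank ~4
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 4))
M = A @ A.T
L = pivoted_cholesky(M)
print(L.shape, np.linalg.norm(M - L @ L.T))
```

The rank found by the pivoting is what keeps the number of Cholesky vectors small; in the electronic-structure setting this is what allows the Coulomb and exchange terms to be built from vector products without re-evaluating molecular integrals.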

  12. Reliability analysis of software based safety functions

    International Nuclear Information System (INIS)

    Pulkkinen, U.

    1993-05-01

    The methods applicable to the reliability analysis of software-based safety functions are described in the report. Although the safety functions also include other components, the main emphasis in the report is on the reliability analysis of software. Checklist-type qualitative reliability analysis methods, such as failure mode and effects analysis (FMEA), are described, as well as software fault tree analysis. Safety analysis based on Petri nets is discussed. The most essential concepts and models of quantitative software reliability analysis are described. The most common software metrics and their combined use with software reliability models are discussed. The application of software reliability models in PSA is evaluated; it is observed that recent software reliability models do not directly produce the estimates needed in PSA. Some recommendations and conclusions are drawn from the study: the need for formal methods in the analysis and development of software-based systems, the applicability of qualitative reliability engineering methods in connection with PSA, and the need to make the requirements for software-based systems and their analyses in the regulatory guides more precise. (orig.). (46 refs., 13 figs., 1 tab.)

  13. Total-energy Assisted Tight-binding Method Based on Local Density Approximation of Density Functional Theory

    Science.gov (United States)

    Fujiwara, Takeo; Nishino, Shinya; Yamamoto, Susumu; Suzuki, Takashi; Ikeda, Minoru; Ohtani, Yasuaki

    2018-06-01

    A novel tight-binding method is developed, based on the extended Hückel approximation and charge self-consistency, with reference to the band structure and total energy of the local density approximation of density functional theory. The parameters are adjusted automatically so that the result reproduces the band structure and the total energy, and an algorithm for determining the parameters is established. The set of determined parameters is applicable to a variety of crystalline compounds and to changes of lattice constants; in other words, it is transferable. Examples are demonstrated for Si crystals of several crystalline structures with varying lattice constants. Since the set of parameters is transferable, the present tight-binding method may also be applicable to molecular dynamics simulations of large-scale systems and long-time dynamical processes.
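
The parameter-adjustment step can be shown in miniature on a hypothetical 1-D nearest-neighbour chain, where the band E(k) = ε − 2t cos(ka) depends linearly on the on-site energy ε and the hopping t, so fitting to a reference band is a least-squares problem. The paper itself fits extended-Hückel parameters to LDA bands and total energies of Si; everything below is an illustrative toy.

```python
import numpy as np

# Fit tight-binding parameters (eps, t) of a 1-D nearest-neighbour chain so
# that the model band E(k) = eps - 2 t cos(k a) reproduces a reference band.
a = 1.0                                   # lattice constant (arbitrary units)
k = np.linspace(-np.pi / a, np.pi / a, 101)
eps_ref, t_ref = 0.3, 1.7                 # "reference" band to reproduce
E_ref = eps_ref - 2.0 * t_ref * np.cos(k * a)

# design matrix: E(k) = [1, -2 cos(ka)] @ [eps, t] -- linear least squares
X = np.column_stack([np.ones_like(k), -2.0 * np.cos(k * a)])
(eps_fit, t_fit), *_ = np.linalg.lstsq(X, E_ref, rcond=None)
print(eps_fit, t_fit)                     # recovers 0.3 and 1.7
```

In the real method the dependence of the bands on the parameters is nonlinear, so the adjustment is iterative rather than a single linear solve, but the objective, reproducing reference bands and energies, is the same.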

  14. DESCRIBING FUNCTION METHOD FOR PI-FUZZY CONTROLLED SYSTEMS STABILITY ANALYSIS

    Directory of Open Access Journals (Sweden)

    Stefan PREITL

    2004-12-01

    Full Text Available The paper proposes a global stability analysis method dedicated to fuzzy control systems containing Mamdani PI-fuzzy controllers with output integration to control SISO linear / linearized plants. The method is expressed in terms of relatively simple steps, and it is based on: the generalization of the describing function method for the considered fuzzy control systems to the MIMO case, the approximation of the describing functions by applying the least squares method. The method is applied to the stability analysis of a class of PI-fuzzy controlled servo-systems, and validated by considering a case study.
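
The describing function idea is easiest to see in the classical SISO setting rather than the paper's fuzzy MIMO generalization: for an ideal relay with output level M, N(A) = 4M/(πA), and a limit cycle is predicted where the Nyquist plot of G(jω) crosses −1/N(A), i.e. where the phase of G is −180°; the amplitude then follows from |G(jω_c)| = 1/N(A). The third-order plant below is an illustrative example, not the servo-system of the paper.

```python
import numpy as np

# Describing-function limit-cycle prediction for an ideal relay (level M) in
# feedback with G(s) = 1 / (s (s + 1)(s + 2)). For this G, Im G(jw) changes
# sign at the phase-crossover frequency, here exactly sqrt(2).
M = 1.0
G = lambda w: 1.0 / (1j * w * (1j * w + 1.0) * (1j * w + 2.0))

# bisection on the sign of Im G(jw) to locate the -180 degree crossing
lo, hi = 0.5, 5.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if np.imag(G(mid)) < 0.0:       # below the crossover frequency
        lo = mid
    else:
        hi = mid
w_c = 0.5 * (lo + hi)               # analytically sqrt(2) for this G
A = 4.0 * M * abs(G(w_c)) / np.pi   # |G| = 1/N(A) => A = 4 M |G| / pi
print(w_c, A)
```

The paper's contribution is to build the analogous N(A) numerically for a Mamdani PI-fuzzy controller (via least-squares approximation) and to extend the intersection argument to the MIMO case.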

  15. Exact density functional and wave function embedding schemes based on orbital localization

    International Nuclear Information System (INIS)

    Hégely, Bence; Nagy, Péter R.; Kállay, Mihály; Ferenczy, György G.

    2016-01-01

    Exact schemes for the embedding of density functional theory (DFT) and wave function theory (WFT) methods into lower-level DFT or WFT approaches are introduced utilizing orbital localization. First, a simple modification of the projector-based embedding scheme of Manby and co-workers [J. Chem. Phys. 140, 18A507 (2014)] is proposed. We also use localized orbitals to partition the system, but instead of augmenting the Fock operator with a somewhat arbitrary level-shift projector we solve the Huzinaga-equation, which strictly enforces the Pauli exclusion principle. Second, the embedding of WFT methods in local correlation approaches is studied. Since the latter methods split up the system into local domains, very simple embedding theories can be defined if the domains of the active subsystem and the environment are treated at a different level. The considered embedding schemes are benchmarked for reaction energies and compared to quantum mechanics (QM)/molecular mechanics (MM) and vacuum embedding. We conclude that for DFT-in-DFT embedding, the Huzinaga-equation-based scheme is more efficient than the other approaches, but QM/MM or even simple vacuum embedding is still competitive in particular cases. Concerning the embedding of wave function methods, the clear winner is the embedding of WFT into low-level local correlation approaches, and WFT-in-DFT embedding can only be more advantageous if a non-hybrid density functional is employed.

  16. Exact density functional and wave function embedding schemes based on orbital localization

    Science.gov (United States)

    Hégely, Bence; Nagy, Péter R.; Ferenczy, György G.; Kállay, Mihály

    2016-08-01

    Exact schemes for the embedding of density functional theory (DFT) and wave function theory (WFT) methods into lower-level DFT or WFT approaches are introduced utilizing orbital localization. First, a simple modification of the projector-based embedding scheme of Manby and co-workers [J. Chem. Phys. 140, 18A507 (2014)] is proposed. We also use localized orbitals to partition the system, but instead of augmenting the Fock operator with a somewhat arbitrary level-shift projector we solve the Huzinaga-equation, which strictly enforces the Pauli exclusion principle. Second, the embedding of WFT methods in local correlation approaches is studied. Since the latter methods split up the system into local domains, very simple embedding theories can be defined if the domains of the active subsystem and the environment are treated at a different level. The considered embedding schemes are benchmarked for reaction energies and compared to quantum mechanics (QM)/molecular mechanics (MM) and vacuum embedding. We conclude that for DFT-in-DFT embedding, the Huzinaga-equation-based scheme is more efficient than the other approaches, but QM/MM or even simple vacuum embedding is still competitive in particular cases. Concerning the embedding of wave function methods, the clear winner is the embedding of WFT into low-level local correlation approaches, and WFT-in-DFT embedding can only be more advantageous if a non-hybrid density functional is employed.

  17. Exact density functional and wave function embedding schemes based on orbital localization

    Energy Technology Data Exchange (ETDEWEB)

    Hégely, Bence; Nagy, Péter R.; Kállay, Mihály, E-mail: kallay@mail.bme.hu [MTA-BME Lendület Quantum Chemistry Research Group, Department of Physical Chemistry and Materials Science, Budapest University of Technology and Economics, P.O. Box 91, H-1521 Budapest (Hungary); Ferenczy, György G. [Medicinal Chemistry Research Group, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Magyar tudósok körútja 2, H-1117 Budapest (Hungary); Department of Biophysics and Radiation Biology, Semmelweis University, Tűzoltó u. 37-47, H-1094 Budapest (Hungary)

    2016-08-14

    Exact schemes for the embedding of density functional theory (DFT) and wave function theory (WFT) methods into lower-level DFT or WFT approaches are introduced utilizing orbital localization. First, a simple modification of the projector-based embedding scheme of Manby and co-workers [J. Chem. Phys. 140, 18A507 (2014)] is proposed. We also use localized orbitals to partition the system, but instead of augmenting the Fock operator with a somewhat arbitrary level-shift projector we solve the Huzinaga-equation, which strictly enforces the Pauli exclusion principle. Second, the embedding of WFT methods in local correlation approaches is studied. Since the latter methods split up the system into local domains, very simple embedding theories can be defined if the domains of the active subsystem and the environment are treated at a different level. The considered embedding schemes are benchmarked for reaction energies and compared to quantum mechanics (QM)/molecular mechanics (MM) and vacuum embedding. We conclude that for DFT-in-DFT embedding, the Huzinaga-equation-based scheme is more efficient than the other approaches, but QM/MM or even simple vacuum embedding is still competitive in particular cases. Concerning the embedding of wave function methods, the clear winner is the embedding of WFT into low-level local correlation approaches, and WFT-in-DFT embedding can only be more advantageous if a non-hybrid density functional is employed.

  18. Path Planning for Mobile Objects in Four-Dimension Based on Particle Swarm Optimization Method with Penalty Function

    Directory of Open Access Journals (Sweden)

    Yong Ma

    2013-01-01

    Full Text Available We present an algorithm based on particle swarm optimization (PSO) with a penalty function to determine the conflict-free path for mobile objects in four dimensions (three spatial and one time dimension) with obstacles. The shortest path of the mobile object is set as the goal function, which is constrained by the conflict-free criterion, path smoothness, and velocity and acceleration requirements. This problem is formulated as a calculus of variations problem (CVP). With a parametrization method, the CVP is converted to a time-varying nonlinear programming problem (TNLPP). The constraints of the TNLPP are absorbed into the objective through penalty functions, transforming it into an unconstrained TNLPP. Then, applying the PSO algorithm with few additional calculations yields the solution of the CVP. The efficiency of the approach is confirmed by numerical examples.
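
The constraint-absorption step can be sketched on a toy 2-D problem (the paper's 4-D path-planning objective is not reproduced): a quadratic penalty folds the constraint into the objective, and a bare-bones PSO minimizes the result. All coefficients below are illustrative.

```python
import numpy as np

# PSO with a quadratic penalty: minimize f(x, y) = (x-1)^2 + (y-2)^2 subject
# to x + y <= 2. The constrained optimum is (0.5, 1.5); the penalty turns the
# constrained problem into an unconstrained one, as in the abstract.
def penalized(p, mu=100.0):
    x, y = p
    return (x - 1.0) ** 2 + (y - 2.0) ** 2 + mu * max(0.0, x + y - 2.0) ** 2

rng = np.random.default_rng(0)
n, iters = 40, 300
pos = rng.uniform(-5.0, 5.0, (n, 2))
vel = np.zeros((n, 2))
pbest = pos.copy()
pbest_val = np.array([penalized(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(iters):
    r1, r2 = rng.random((n, 1)), rng.random((n, 1))
    # standard inertia + cognitive + social velocity update
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([penalized(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print(gbest)    # near the constrained optimum (0.5, 1.5)
```

A finite penalty weight leaves a small residual constraint violation (here of order 1/mu); the paper's formulation applies the same device to the conflict-free, smoothness, velocity, and acceleration constraints of the discretized path.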

  19. Calculation of neutron importance function in fissionable assemblies using Monte Carlo method

    International Nuclear Information System (INIS)

    Feghhi, S.A.H.; Shahriari, M.; Afarideh, H.

    2007-01-01

    The purpose of the present work is to develop an efficient solution method, based on Monte Carlo calculations, for the calculation of the neutron importance function in fissionable assemblies for all criticality conditions. The neutron importance function plays an important role in perturbation theory and reactor dynamics calculations. Usually this function is determined by calculating the adjoint flux, solving the adjoint-weighted transport equation with deterministic methods. However, in complex geometries these calculations are very complicated. In this article, considering the capabilities of the MCNP code in solving problems with complex geometries and its closeness to physical concepts, a comprehensive method based on the physical concept of neutron importance has been introduced for calculating the neutron importance function in sub-critical, critical and super-critical conditions. For this purpose a computer program has been developed. The results of the method have been benchmarked against ANISN code calculations in one- and two-group modes for simple geometries, and their correctness has been confirmed for all three criticality conditions. Finally, the efficiency of the method for complex geometries has been shown by the calculation of neutron importance in the Miniature Neutron Source Reactor (MNSR) research reactor

  20. Method of hyperspherical functions in a few-body quantum mechanics

    International Nuclear Information System (INIS)

    Dzhibuti, R.I.; Krupennikova, N.B.

    1984-01-01

    A new method for solving the few-body problem in quantum mechanics, based on the expansion of the wave function of a many-particle system in terms of basis hyperspherical functions, is outlined in the monograph. This method makes it possible to obtain important results in nuclear physics. Material of a general character is presented that can be useful when considering few-body problems in atomic and molecular physics as well as in elementary particle physics. The monograph deals with the theory of hyperspherical functions; the method of expansion in terms of a hyperspherical function basis can be formally considered a generalization of the partial-wave expansion method for the two-body problem. The Raynal-Revai theory is stated for the three-body problem, and coefficients of unitary transformations for four-particle hyperspherical functions are introduced. Five-particle hyperspherical functions are introduced, and an attempt is made to generalize the theory to systems with any number of particles. The rules for constructing symmetrized hyperspherical functions for three and four identical particles are given. Also described is the method of expansion in terms of the hyperspherical function basis in the coordinate and momentum representations for the discrete and continuous spectrum, respectively

  1. Systems and methods for producing low work function electrodes

    Science.gov (United States)

    Kippelen, Bernard; Fuentes-Hernandez, Canek; Zhou, Yinhua; Kahn, Antoine; Meyer, Jens; Shim, Jae Won; Marder, Seth R.

    2015-07-07

    According to an exemplary embodiment of the invention, systems and methods are provided for producing low work function electrodes. According to an exemplary embodiment, a method is provided for reducing a work function of an electrode. The method includes applying, to at least a portion of the electrode, a solution comprising a Lewis basic oligomer or polymer; and based at least in part on applying the solution, forming an ultra-thin layer on a surface of the electrode, wherein the ultra-thin layer reduces the work function associated with the electrode by greater than 0.5 eV. According to another exemplary embodiment of the invention, a device is provided. The device includes a semiconductor; at least one electrode disposed adjacent to the semiconductor and configured to transport electrons in or out of the semiconductor.

  2. Dynamic functional connectivity using state-based dynamic community structure: method and application to opioid analgesia.

    Science.gov (United States)

    Robinson, Lucy F; Atlas, Lauren Y; Wager, Tor D

    2015-03-01

    We present a new method, State-based Dynamic Community Structure, that detects time-dependent community structure in networks of brain regions. Most analyses of functional connectivity assume that network behavior is static in time, or differs between task conditions with known timing. Our goal is to determine whether brain network topology remains stationary over time, or if changes in network organization occur at unknown time points. Changes in network organization may be related to shifts in neurological state, such as those associated with learning, drug uptake or experimental conditions. Using a hidden Markov stochastic blockmodel, we define a time-dependent community structure. We apply this approach to data from a functional magnetic resonance imaging experiment examining how contextual factors influence drug-induced analgesia. Results reveal that networks involved in pain, working memory, and emotion show distinct profiles of time-varying connectivity. Copyright © 2014 Elsevier Inc. All rights reserved.

  3. Reduced density matrix functional theory via a wave function based approach

    Energy Technology Data Exchange (ETDEWEB)

    Schade, Robert; Bloechl, Peter [Institute for Theoretical Physics, Clausthal University of Technology, Clausthal (Germany); Pruschke, Thomas [Institute for Theoretical Physics, University of Goettingen, Goettingen (Germany)

    2016-07-01

We propose a new method for the calculation of the electronic and atomic structure of correlated electron systems based on reduced density matrix functional theory (rDMFT). The density-matrix functional is evaluated on the fly using Levy's constrained search formalism. The present implementation rests on a local approximation of the interaction reminiscent of that of dynamical mean field theory (DMFT). We focus here on additional approximations to the exact density-matrix functional in the local approximation and evaluate their performance.

  4. Pilates Method for Lung Function and Functional Capacity in Obese Adults.

    Science.gov (United States)

    Niehues, Janaina Rocha; Gonzáles, Inês; Lemos, Robson Rodrigues; Haas, Patrícia

    2015-01-01

Obesity is defined as the condition in which the body mass index (BMI) is ≥ 30 kg/m2 and is responsible for decreased quality of life and functional limitations. The harmful effects on ventilatory function include reduced lung capacity and volume; diaphragmatic muscle weakness; decreased lung compliance and stiffness; and weakness of the abdominal muscles, among others. Pilates is a method of resistance training that works with low-impact muscle exercises and is based on isometric exercises. The current article is a review of the literature that aims to investigate the hypothesis that the Pilates method, as a complementary method of training, might be beneficial to pulmonary function and functional capacity in obese adults. The intent of the review was to evaluate the use of Pilates as an innovative intervention in the respiratory dysfunctions of obese adults. In studies with other populations, it has been observed that Pilates can be effective in improving chest capacity and expansion and lung volume. That finding is due to the fact that Pilates works through the center of force, made up of the abdominal, gluteal, and lumbar muscles, which are responsible for static and dynamic stabilization of the body and are associated with breath control. It has been observed that different Pilates exercises increase the activation and recruitment of the abdominal muscles. Those muscles are important in respiration, both in expiration and inspiration, through the facilitation of diaphragmatic action. In that way, strengthening the abdominal muscles can help improve respiratory function, leading to improvements in lung volume and capacity. The results found in the current literature review support the authors' observations that Pilates promotes the strengthening of the abdominal muscles and that improvements in diaphragmatic function may result in positive outcomes in respiratory function, thereby improving functional capacity. However, the authors did not

  5. An advanced probabilistic structural analysis method for implicit performance functions

    Science.gov (United States)

    Wu, Y.-T.; Millwater, H. R.; Cruse, T. A.

    1989-01-01

    In probabilistic structural analysis, the performance or response functions usually are implicitly defined and must be solved by numerical analysis methods such as finite element methods. In such cases, the most commonly used probabilistic analysis tool is the mean-based, second-moment method which provides only the first two statistical moments. This paper presents a generalized advanced mean value (AMV) method which is capable of establishing the distributions to provide additional information for reliability design. The method requires slightly more computations than the second-moment method but is highly efficient relative to the other alternative methods. In particular, the examples show that the AMV method can be used to solve problems involving non-monotonic functions that result in truncated distributions.
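The mean-based, second-moment baseline that the AMV method improves on can be sketched in a few lines: linearize the performance function at the input means and propagate the input variances through the sensitivities. A minimal illustration — the performance function, names, and numbers here are hypothetical, not from the paper:

```python
# First-order, mean-based second-moment sketch: the baseline that the AMV
# method improves on. Inputs are assumed independent; g is linearized at the
# mean, so only the first two moments of the response are obtained.
import math

def mean_value_moments(g, mu, sigma, h=1e-6):
    """Approximate mean and std of g(X) for independent X_i with mean mu_i, std sigma_i."""
    g0 = g(mu)
    var = 0.0
    for i in range(len(mu)):
        x = list(mu)
        x[i] += h
        grad_i = (g(x) - g0) / h          # finite-difference sensitivity dg/dx_i
        var += (grad_i * sigma[i]) ** 2
    return g0, math.sqrt(var)

# Hypothetical performance function g(X) = X1 + 2*X2 (linear, so the result is exact):
m, s = mean_value_moments(lambda x: x[0] + 2.0 * x[1], [1.0, 2.0], [0.1, 0.2])
print(m, s)   # 5.0 and sqrt(0.1**2 + (2*0.2)**2) ≈ 0.412
```

For a linear g this linearization is exact; the value of the AMV approach lies in correcting it for nonlinear, implicitly defined g evaluated by finite element analysis.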

  6. A direct method to transform between expansions in the configuration state function and Slater determinant bases

    International Nuclear Information System (INIS)

    Olsen, Jeppe

    2014-01-01

A novel algorithm is introduced for the transformation of wave functions between the bases of Slater determinants (SD) and configuration state functions (CSF) in the genealogical coupling scheme. By modifying the expansion coefficients as each electron is spin-coupled, rather than performing a single many-electron transformation, the large transformation matrix that plagues previous approaches is avoided and the required number of operations is drastically reduced. As an example of the efficiency of the algorithm, the transformation for a configuration with 30 unpaired electrons and singlet spin is discussed. For this case, the 10 × 10⁶ coefficients in the CSF basis are obtained from the 150 × 10⁶ coefficients in the SD basis in 1 min, which should be compared with the seven years that the previously employed method is estimated to require.

  7. Methods of filtering the graph images of the functions

    Directory of Open Access Journals (Sweden)

    Олександр Григорович Бурса

    2017-06-01

Full Text Available The theoretical aspects of cleaning raster images of scanned graphs of functions from digital, chromatic and luminance distortions by using computer graphics techniques have been considered. The basic types of distortions characteristic of graph images of functions have been described. To suppress the distortions, several methods that provide high quality of the resulting images and preserve their topological features were suggested. The paper describes the techniques developed and improved by the authors: the method of cleaning the image of distortions by means of iterative contrasting, based on a step-by-step increase of the contrast in the graph image by 1%; the method of restoring distorted small entities, based on thinning the kernel of the known contrast-increase convolution filter (the allowable dilution radii of the convolution kernel that preserve the graph lines have been established); and a technique integrating the contrast-based noise reduction method and the small-entity restoration method with the known σ-filter. Each method in the complex has been theoretically substantiated. The developed methods involve treating graph images both as an entire image (global processing) and as fragments (local processing). Metrics assessing the quality of the resulting image under global and local processing have been chosen; the choice has been substantiated and the formulas have been given. The proposed complex of methods for cleaning graph images of functions from grayscale distortions is adaptive to the form of the image carrier, the distortion level in the image and its distribution. The presented results of testing the developed complex of methods on a representative sample of images confirm its effectiveness.

  8. Adaptive and non-adaptive data hiding methods for grayscale images based on modulus function

    Directory of Open Access Journals (Sweden)

    Najme Maleki

    2014-07-01

Full Text Available This paper presents two adaptive and non-adaptive data hiding methods for grayscale images based on the modulus function. Our adaptive scheme is based on the concept of human vision sensitivity, so pixels in edge areas can tolerate much more change than those in smooth areas without creating distortion visible to human eyes. In our adaptive scheme, the average differencing value of the four neighborhood pixels of a block, compared with a threshold secret key, determines whether the current block is located in an edge or a smooth area. Pixels in edge areas are embedded with Q bits of secret data, with a larger value of Q than that of pixels placed in smooth areas. We also present a non-adaptive data hiding algorithm. Our non-adaptive scheme, via an error reduction procedure, produces high visual quality for the stego-image. The proposed schemes present several advantages. 1- The embedding capacity and visual quality of the stego-image are scalable. In other words, the embedding rate as well as the image quality can be scaled for practical applications. 2- A high embedding capacity with minimal visual distortion can be achieved. 3- Our methods require little memory space for the secret data embedding and extracting phases. 4- Secret keys are used to protect the embedded secret data, so the level of security is high. 5- The problem of overflow or underflow does not occur. Experimental results indicate that the proposed adaptive scheme is significantly superior to the currently existing scheme in terms of stego-image visual quality, embedding capacity and level of security, and that our non-adaptive method is better than other non-adaptive methods in terms of stego-image quality. Results show that our adaptive algorithm can resist the RS steganalysis attack.
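The core modulus-function idea can be illustrated independently of the adaptive machinery: embed a base-m digit into a pixel by moving the pixel to the nearest value with the desired remainder. This is a minimal sketch, not the authors' scheme; the function names and the choice m = 8 (3 bits per pixel) are mine:

```python
# Minimal modulus-function embedding (illustrative, not the paper's adaptive
# scheme): hide a base-m digit d in a grayscale pixel p by moving p to the
# nearest value whose remainder mod m equals d. With m = 8, each pixel carries
# 3 secret bits, typically changed by at most m // 2 levels (more near 0/255).

def embed_digit(p, d, m=8):
    """Return the pixel value closest to p with value % m == d (0..255 range kept)."""
    r = p % m
    candidates = [p - ((r - d) % m),      # shift down to the target remainder
                  p + ((d - r) % m)]      # or shift up
    candidates = [c for c in candidates if 0 <= c <= 255]   # no over/underflow
    return min(candidates, key=lambda c: abs(c - p))

def extract_digit(p, m=8):
    return p % m                          # blind extraction: stego pixel only

print(embed_digit(130, 5))   # 133 (remainder 5, distortion 3)
print(extract_digit(133))    # 5
```

The adaptive scheme of the abstract would vary the payload Q (i.e. the modulus m = 2^Q) per block, using larger Q in edge areas.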

  9. A novel GLM-based method for the Automatic IDentification of functional Events (AIDE) in fNIRS data recorded in naturalistic environments.

    Science.gov (United States)

    Pinti, Paola; Merla, Arcangelo; Aichelburg, Clarisse; Lind, Frida; Power, Sarah; Swingler, Elizabeth; Hamilton, Antonia; Gilbert, Sam; Burgess, Paul W; Tachtsidis, Ilias

    2017-07-15

Recent technological advances have allowed the development of portable functional Near-Infrared Spectroscopy (fNIRS) devices that can be used to perform neuroimaging in the real-world. However, as real-world experiments are designed to mimic everyday life situations, the identification of event onsets can be extremely challenging and time-consuming. Here, we present a novel analysis method based on the general linear model (GLM) least square fit analysis for the Automatic IDentification of functional Events (or AIDE) directly from real-world fNIRS neuroimaging data. In order to investigate the accuracy and feasibility of this method, as a proof-of-principle we applied the algorithm to (i) synthetic fNIRS data simulating block-, event-related and mixed-design experiments and (ii) experimental fNIRS data recorded during a conventional lab-based task (involving maths). AIDE was able to recover functional events from simulated fNIRS data with an accuracy of 89%, 97% and 91% for the simulated block-, event-related and mixed-design experiments respectively. For the lab-based experiment, AIDE recovered more than 66.7% of the functional events from the measured fNIRS data. To illustrate the strength of this method, we then applied AIDE to fNIRS data recorded by a wearable system on one participant during a complex real-world prospective memory experiment conducted outside the lab. As part of the experiment, there were four and six events (actions where participants had to interact with a target) for the two conditions respectively (condition 1: social, interact with a person; condition 2: non-social, interact with an object). AIDE managed to recover 3/4 events and 3/6 events for conditions 1 and 2 respectively. The identified functional events were then matched against behavioural data from the video recordings of the movements and actions of the participant. Our results suggest that "brain-first" rather than "behaviour-first" analysis is

  10. PanFP: pangenome-based functional profiles for microbial communities.

    Science.gov (United States)

    Jun, Se-Ran; Robeson, Michael S; Hauser, Loren J; Schadt, Christopher W; Gorin, Andrey A

    2015-09-26

For decades there has been increasing interest in understanding the relationships between microbial communities and ecosystem functions. Current DNA sequencing technologies allow for the exploration of microbial communities in two principal ways: targeted rRNA gene surveys and shotgun metagenomics. For large study designs, it is often still prohibitively expensive to sequence metagenomes at both the breadth and depth necessary to statistically capture the true functional diversity of a community. Although rRNA gene surveys provide no direct evidence of function, they do provide a reasonable estimation of microbial diversity, while being a very cost-effective way to screen samples of interest for later shotgun metagenomic analyses. However, there is a great deal of 16S rRNA gene survey data currently available from diverse environments, and thus a need for tools to infer functional composition of environmental samples based on 16S rRNA gene survey data. We present a computational method called pangenome-based functional profiles (PanFP), which infers functional profiles of microbial communities from 16S rRNA gene survey data for Bacteria and Archaea. PanFP is based on pangenome reconstruction of a 16S rRNA gene operational taxonomic unit (OTU) from known genes and genomes pooled from the OTU's taxonomic lineage. From this lineage, we derive an OTU functional profile by weighting a pangenome's functional profile with the OTU's abundance observed in a given sample. We validated our method by comparing PanFP to the functional profiles obtained from the direct shotgun metagenomic measurement of 65 diverse communities via Spearman correlation coefficients. These correlations improved with increasing sequencing depth, within the range of 0.8-0.9 for the most deeply sequenced Human Microbiome Project mock community samples. PanFP is very similar in performance to another recently released tool, PICRUSt, for almost all of the survey data analysed here. However, our method is unique
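The weighting step described above reduces to an abundance-weighted sum of per-OTU pangenome profiles. A hedged sketch with made-up OTU names, function IDs and weights — not PanFP's actual data structures:

```python
# Hedged sketch of the weighting step: a community functional profile as the
# abundance-weighted sum of per-OTU pangenome profiles. OTU names, KEGG-style
# function IDs and weights below are made up for illustration.

def community_profile(abundances, pangenome_profiles):
    """abundances: {otu: count}; pangenome_profiles: {otu: {function: weight}}."""
    profile = {}
    for otu, count in abundances.items():
        for func, w in pangenome_profiles[otu].items():
            profile[func] = profile.get(func, 0.0) + count * w
    return profile

abund = {"OTU1": 10, "OTU2": 5}
pang = {"OTU1": {"K00001": 0.2, "K00002": 0.8},
        "OTU2": {"K00002": 0.5, "K00003": 0.5}}
print(community_profile(abund, pang))
# {'K00001': 2.0, 'K00002': 10.5, 'K00003': 2.5}
```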

  11. Machinery Fault Diagnosis Using Two-Channel Analysis Method Based on Fictitious System Frequency Response Function

    Directory of Open Access Journals (Sweden)

    Kihong Shin

    2015-01-01

    Full Text Available Most existing techniques for machinery health monitoring that utilize measured vibration signals usually require measurement points to be as close as possible to the expected fault components of interest. This is particularly important for implementing condition-based maintenance since the incipient fault signal power may be too small to be detected if a sensor is located further away from the fault source. However, a measurement sensor is often not attached to the ideal point due to geometric or environmental restrictions. In such a case, many of the conventional diagnostic techniques may not be successfully applicable. In this paper, a two-channel analysis method is proposed to overcome such difficulty. It uses two vibration signals simultaneously measured at arbitrary points in a machine. The proposed method is described theoretically by introducing a fictitious system frequency response function. It is then verified experimentally for bearing fault detection. The results show that the suggested method may be a good alternative when ideal points for measurement sensors are not readily available.

  12. Determination of an effective scoring function for RNA-RNA interactions with a physics-based double-iterative method.

    Science.gov (United States)

    Yan, Yumeng; Wen, Zeyu; Zhang, Di; Huang, Sheng-You

    2018-05-18

RNA-RNA interactions play fundamental roles in gene and cell regulation. Therefore, accurate prediction of RNA-RNA interactions is critical to determine their complex structures and understand the molecular mechanism of the interactions. Here, we have developed a physics-based double-iterative strategy to determine the effective potentials for RNA-RNA interactions based on a training set of 97 diverse RNA-RNA complexes. The double-iterative strategy circumvented the reference state problem in knowledge-based scoring functions by updating the potentials through iteration and also overcame the decoy-dependent limitation in previous iterative methods by constructing the decoys iteratively. The derived scoring function, which is referred to as DITScoreRR, was evaluated on an RNA-RNA docking benchmark of 60 test cases and compared with three other scoring functions. It was shown that for bound docking, our scoring function DITScoreRR obtained excellent success rates of 90% and 98.3% in binding mode predictions when the top 1 and 10 predictions were considered, compared to 63.3% and 71.7% for van der Waals interactions, 45.0% and 65.0% for ITScorePP, and 11.7% and 26.7% for ZDOCK 2.1, respectively. For unbound docking, DITScoreRR achieved good success rates of 53.3% and 71.7% in binding mode predictions when the top 1 and 10 predictions were considered, compared to 13.3% and 28.3% for van der Waals interactions, 11.7% and 26.7% for ITScorePP, and 3.3% and 6.7% for ZDOCK 2.1, respectively. DITScoreRR also performed significantly better in ranking decoys and obtained significantly higher score-RMSD correlations than the other three scoring functions. DITScoreRR will be of great value for the prediction and design of RNA structures and RNA-RNA complexes.

  13. A sensitivity function-based conjugate gradient method for optical tomography with the frequency-domain equation of radiative transfer

    International Nuclear Information System (INIS)

    Kim, Hyun Keol; Charette, Andre

    2007-01-01

The Sensitivity Function-based Conjugate Gradient Method (SFCGM) is described. This method is used to solve the inverse problems of function estimation, such as the local maps of absorption and scattering coefficients, as applied to optical tomography for biomedical imaging. A highly scattering, absorbing, non-reflecting, non-emitting medium is considered here, and simultaneous reconstructions of absorption and scattering coefficients inside the test medium are achieved with the proposed optimization technique, by using the exit intensity measured at the boundary surfaces. The forward problem is solved with a discrete-ordinates finite-difference method in the framework of the frequency-domain full equation of radiative transfer. The modulation frequency is set to 600 MHz and the frequency data, obtained with the source modulation, are used as the input data. The inversion results demonstrate that the SFCGM can simultaneously retrieve the spatial distributions of optical properties inside the medium with reasonable accuracy, while significantly reducing cross-talk between the parameters. It is also observed that objects closer to the detector are retrieved more accurately.

  14. Identification of fractional order systems using modulating functions method

    KAUST Repository

    Liu, Dayan

    2013-06-01

The modulating functions method has been used for the identification of linear and nonlinear systems. In this paper, we generalize this method to the on-line identification of fractional order systems based on the Riemann-Liouville fractional derivatives. First, a new fractional integration by parts formula involving the fractional derivative of a modulating function is given. Then, we apply this formula to a fractional order system, for which the fractional derivatives of the input and the output can be transferred into the ones of the modulating functions. By choosing a set of modulating functions, a linear system of algebraic equations is obtained. Hence, the unknown parameters of a fractional order system can be estimated by solving a linear system. Using this method, we do not need any initial values, which are usually unknown and not equal to zero. Nor do we need to estimate the fractional derivatives of the noisy output. Moreover, it is shown that the proposed estimators are robust against high frequency sinusoidal noises and the ones due to a class of stochastic processes. Finally, the efficiency and the stability of the proposed method are confirmed by some numerical simulations.
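The classical, integer-order version of the modulating-functions idea shows why no initial values or output derivatives are needed: integration by parts moves the derivative onto the known modulating function. A self-contained sketch for y'(t) + a·y(t) = u(t); the fractional generalization in the record replaces this step with a fractional integration by parts formula, which is not reproduced here:

```python
# Integer-order modulating-functions demo for y'(t) + a*y(t) = u(t).
# A modulating function phi with phi(0) = phi(T) = 0 lets integration by
# parts move the derivative off the measured output y:
#   a * ∫ phi*y dt = ∫ phi*u dt + ∫ phi'*y dt
# so a is recovered from integrals of y and u only — no y', no y(0).
import math

T, N = 1.0, 2000
ts = [i * T / N for i in range(N + 1)]
a_true = 2.0
u = [1.0] * (N + 1)                                        # step input
y = [(1.0 - math.exp(-a_true * t)) / a_true for t in ts]   # exact response, y(0) = 0

phi  = [math.sin(math.pi * t / T) ** 2 for t in ts]        # vanishes at 0 and T
dphi = [(math.pi / T) * math.sin(2.0 * math.pi * t / T) for t in ts]

def trapz(f):
    """Composite trapezoidal rule on the uniform grid ts."""
    h = T / N
    return h * (sum(f) - 0.5 * (f[0] + f[-1]))

a_hat = (trapz([p * ui for p, ui in zip(phi, u)]) +
         trapz([dp * yi for dp, yi in zip(dphi, y)])) / trapz([p * yi for p, yi in zip(phi, y)])
print(a_hat)   # close to a_true = 2.0
```

With several modulating functions instead of one, the same construction yields the linear system of algebraic equations mentioned in the abstract.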

  15. Curvelet-domain multiple matching method combined with cubic B-spline function

    Science.gov (United States)

    Wang, Tong; Wang, Deli; Tian, Mi; Hu, Bin; Liu, Chengming

    2018-05-01

Since the large amount of surface-related multiples present in marine data seriously influences the results of data processing and interpretation, many researchers have attempted to develop effective methods to remove them. The most successful surface-related multiple elimination method was proposed based on data-driven theory. However, the elimination effect was unsatisfactory due to the existence of amplitude and phase errors. Although the subsequent curvelet-domain multiple-primary separation method achieved better results, poor computational efficiency prevented its application. In this paper, we adopt the cubic B-spline function to improve the traditional curvelet multiple matching method. First, select a small number of unknowns as the basis points of the matching coefficient; second, apply the cubic B-spline function on these basis points to reconstruct the matching array; third, build the constraint solving equation based on the relationships of the predicted multiples, the matching coefficients, and the actual data; finally, use the BFGS algorithm to iterate and realize the fast-solving sparse-constraint multiple matching algorithm. Moreover, the soft-threshold method is used to make the method perform better. With the cubic B-spline function, the differences between the predicted multiples and the original data diminish, which results in less processing time to obtain optimal solutions and fewer iterative loops in the solving procedure based on the L1 norm constraint. The applications to synthetic and field-derived data both validate the practicability and validity of the method.
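The B-spline representation that keeps the matching coefficients sparse can be sketched directly: a dense coefficient array is reconstructed from a few basis-point values through the uniform cubic B-spline kernel. Only this interpolation step is shown; the constrained BFGS solve and soft thresholding of the paper are not reproduced, and the function names are mine:

```python
# Uniform cubic B-spline reconstruction sketch: a dense matching-coefficient
# array recovered from a few basis-point values.

def cubic_bspline(u):
    """Uniform cubic B-spline kernel; support is |u| < 2, peak 2/3 at u = 0."""
    u = abs(u)
    if u < 1.0:
        return (4.0 - 6.0 * u ** 2 + 3.0 * u ** 3) / 6.0
    if u < 2.0:
        return (2.0 - u) ** 3 / 6.0
    return 0.0

def reconstruct(control, n_dense):
    """Spread len(control) basis-point values smoothly over n_dense samples."""
    m = len(control)
    out = []
    for i in range(n_dense):
        x = i * (m - 1) / (n_dense - 1)        # position in basis-point coordinates
        out.append(sum(c * cubic_bspline(x - j) for j, c in enumerate(control)))
    return out

dense = reconstruct([1.0] * 8, 15)
print(dense[7])   # ≈ 1.0: the basis sums to one away from the edges
```

Because the basis is smooth and forms a partition of unity away from the edges, a few control values determine a smooth dense array — the property that reduces the unknowns in the matching step.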

  16. Collision analysis of one kind of chaos-based hash function

    International Nuclear Information System (INIS)

    Xiao Di; Peng Wenbing; Liao Xiaofeng; Xiang Tao

    2010-01-01

In the last decade, various chaos-based hash functions have been proposed. Nevertheless, the corresponding analyses of them lag far behind. In this Letter, we first take a chaos-based hash function proposed very recently in Amin, Faragallah and Abd El-Latif (2009) as a sample to analyze its computational collision problem, and then generalize the construction method of one kind of chaos-based hash function and summarize some precautions to avoid the collision problem. It is beneficial to the hash function design based on chaos in the future.
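To make the collision discussion concrete, here is a toy logistic-map hash of the general kind such analyses target. It is purely illustrative — not the scheme of Amin et al., and not suitable for any security use:

```python
# Toy chaos-based hash built on the logistic map — illustrative of the
# construction style only (NOT the analyzed scheme, NOT secure).

def chaos_hash(message: bytes, x0=0.345678, r=3.99, rounds=4):
    """Iterate the logistic map, mixing one message byte per step; 32-bit digest."""
    x = x0
    for _ in range(rounds):
        for b in message:
            x = r * x * (1.0 - x)             # chaotic logistic-map step
            x = (x + b / 256.0) % 1.0         # absorb the message byte
    return int(x * (1 << 32))                 # truncate the state to 32 bits

print(hex(chaos_hash(b"abc")))
```

A 32-bit digest already admits birthday collisions after roughly 2^16 inputs; the Letter's point is that finite-precision chaotic dynamics can make collisions in such constructions easier still.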

  17. Introduction to functional methods

    International Nuclear Information System (INIS)

    Faddeev, L.D.

    1976-01-01

    The functional integral is considered in relation to Feynman diagrams and phase space. The holomorphic form of the functional integral is then discussed. The main problem of the lectures, viz. the construction of the S-matrix by means of the functional integral, is considered. The functional methods described explicitly take into account the Bose statistics of the fields involved. The different procedure used to treat fermions is discussed. An introduction to the problem of quantization of gauge fields is given. (B.R.H.)

  18. Development of a neutronics code based on analytic function expansion nodal method for pebble-type High Temperature Gas-cooled Reactor design

    Energy Technology Data Exchange (ETDEWEB)

Cho, Nam Zin; Lee, Joo Hee; Lee, Jae Jun; Yu, Hui; Lee, Gil Soo [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)

    2006-03-15

There is growing interest in developing Pebble Bed Reactors (PBRs) as candidates for Very High Temperature gas-cooled Reactors (VHTRs). Until now, most existing methods of nuclear design analysis for this type of reactor are based on old finite-difference solvers or on statistical methods, and other existing nodal methods cannot be adapted for this kind of reactor because of the transverse integration problem. In this project, we developed the TOPS code in three-dimensional cylindrical geometry based on the Analytic Function Expansion Nodal (AFEN) method developed at KAIST. The TOPS code showed better results in computing time than FDM and MCNP. TOPS also showed very accurate results in reactor analysis.

  19. Development of a neutronics code based on analytic function expansion nodal method for pebble-type High Temperature Gas-cooled Reactor design

    International Nuclear Information System (INIS)

    Cho, Nam Zin; Lee, Joo Hee; Lee, Jae Jun; Yu, Hui; Lee, Gil Soo

    2006-03-01

There is growing interest in developing Pebble Bed Reactors (PBRs) as candidates for Very High Temperature gas-cooled Reactors (VHTRs). Until now, most existing methods of nuclear design analysis for this type of reactor are based on old finite-difference solvers or on statistical methods, and other existing nodal methods cannot be adapted for this kind of reactor because of the transverse integration problem. In this project, we developed the TOPS code in three-dimensional cylindrical geometry based on the Analytic Function Expansion Nodal (AFEN) method developed at KAIST. The TOPS code showed better results in computing time than FDM and MCNP. TOPS also showed very accurate results in reactor analysis

  20. How useful are prescribing indicators based on the DU90% method to distinguish the quality of prescribing between pharmacotherapy audit meetings with different levels of functioning?

    NARCIS (Netherlands)

    Teichert, M.; Aalst, A. van der; Wit, H. de; Stroo, M.; Smet, P.A.G.M. de

    2007-01-01

    OBJECTIVES: The objective of the study was to assess the association between the quality of drug prescribing based on three indicator types derived from the DU90% method and different levels of functioning in pharmacotherapy audit meetings (PTAMs). MATERIALS AND METHODS: The level of functioning in

  1. Exp-function method for solving fractional partial differential equations.

    Science.gov (United States)

    Zheng, Bin

    2013-01-01

    We extend the Exp-function method to fractional partial differential equations in the sense of modified Riemann-Liouville derivative based on nonlinear fractional complex transformation. For illustrating the validity of this method, we apply it to the space-time fractional Fokas equation and the nonlinear fractional Sharma-Tasso-Olver (STO) equation. As a result, some new exact solutions for them are successfully established.

  2. The Reliasep method used for the functional modeling of complex systems

    International Nuclear Information System (INIS)

    Dubiez, P.; Gaufreteau, P.; Pitton, J.P.

    1997-07-01

The RELIASEP(R) method and its support tool have been recommended for carrying out the functional analysis of large systems within the framework of the design of new power units. Let us first recall the principles of the method, based on the breakdown of functions into tree(s). These functions are characterised by their performance and constraints. Then the main modifications made under EDF requirements, and in particular the 'viewpoints' analyses, are presented. The knowledge obtained from the first studies carried out is discussed. (author)

  3. The Reliasep method used for the functional modeling of complex systems

    Energy Technology Data Exchange (ETDEWEB)

    Dubiez, P.; Gaufreteau, P.; Pitton, J.P

    1997-07-01

The RELIASEP(R) method and its support tool have been recommended for carrying out the functional analysis of large systems within the framework of the design of new power units. Let us first recall the principles of the method, based on the breakdown of functions into tree(s). These functions are characterised by their performance and constraints. Then the main modifications made under EDF requirements, and in particular the 'viewpoints' analyses, are presented. The knowledge obtained from the first studies carried out is discussed. (author)

  4. Information filtering via a scaling-based function.

    Science.gov (United States)

    Qiu, Tian; Zhang, Zi-Ke; Chen, Guang

    2013-01-01

    Finding a universal description of the algorithm optimization is one of the key challenges in personalized recommendation. In this article, for the first time, we introduce a scaling-based algorithm (SCL) independent of recommendation list length based on a hybrid algorithm of heat conduction and mass diffusion, by finding out the scaling function for the tunable parameter and object average degree. The optimal value of the tunable parameter can be abstracted from the scaling function, which is heterogeneous for the individual object. Experimental results obtained from three real datasets, Netflix, MovieLens and RYM, show that the SCL is highly accurate in recommendation. More importantly, compared with a number of excellent algorithms, including the mass diffusion method, the original hybrid method, and even an improved version of the hybrid method, the SCL algorithm remarkably promotes the personalized recommendation in three other aspects: solving the accuracy-diversity dilemma, presenting a high novelty, and solving the key challenge of cold start problem.
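The underlying hybrid of heat conduction and mass diffusion, whose tunable parameter the SCL scaling function selects, can be sketched in its standard form (lam = 1 recovers pure mass diffusion, lam = 0 pure heat conduction; the per-object choice of lam via the scaling function is not reproduced here, and the function name is mine):

```python
# Standard heat-conduction / mass-diffusion hybrid on a user-object bipartite
# graph. A[u][o] is 1 if user u collected object o. Scores for already
# collected objects would be excluded before recommending.

def hybrid_scores(A, user, lam=0.5):
    """Return hybrid diffusion scores over all objects for the given user."""
    n_users, n_obj = len(A), len(A[0])
    k_u = [sum(row) for row in A]                                        # user degrees
    k_o = [sum(A[u][o] for u in range(n_users)) for o in range(n_obj)]   # object degrees
    scores = [0.0] * n_obj
    for j in range(n_obj):                 # candidate object
        s = 0.0
        for i in range(n_obj):             # object already collected by `user`
            if not A[user][i] or k_o[i] == 0 or k_o[j] == 0:
                continue
            # resource passing through common users, normalized by user degree
            overlap = sum(A[l][i] * A[l][j] / k_u[l]
                          for l in range(n_users) if k_u[l])
            s += overlap / (k_o[j] ** (1.0 - lam) * k_o[i] ** lam)
        scores[j] = s
    return scores

print(hybrid_scores([[1, 1, 0], [0, 1, 1]], user=0, lam=1.0))
# [0.75, 1.0, 0.25] — pure mass diffusion conserves the 2 units of resource
```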

  5. Information filtering via a scaling-based function.

    Directory of Open Access Journals (Sweden)

    Tian Qiu

Full Text Available Finding a universal description of the algorithm optimization is one of the key challenges in personalized recommendation. In this article, for the first time, we introduce a scaling-based algorithm (SCL) independent of recommendation list length based on a hybrid algorithm of heat conduction and mass diffusion, by finding out the scaling function for the tunable parameter and object average degree. The optimal value of the tunable parameter can be abstracted from the scaling function, which is heterogeneous for the individual object. Experimental results obtained from three real datasets, Netflix, MovieLens and RYM, show that the SCL is highly accurate in recommendation. More importantly, compared with a number of excellent algorithms, including the mass diffusion method, the original hybrid method, and even an improved version of the hybrid method, the SCL algorithm remarkably promotes the personalized recommendation in three other aspects: solving the accuracy-diversity dilemma, presenting a high novelty, and solving the key challenge of cold start problem.

  6. GOMA: functional enrichment analysis tool based on GO modules

    Institute of Scientific and Technical Information of China (English)

    Qiang Huang; Ling-Yun Wu; Yong Wang; Xiang-Sun Zhang

    2013-01-01

Analyzing the function of gene sets is a critical step in interpreting the results of high-throughput experiments in systems biology. A variety of enrichment analysis tools have been developed in recent years, but most output a long list of significantly enriched terms that are often redundant, making it difficult to extract the most meaningful functions. In this paper, we present GOMA, a novel enrichment analysis method based on the new concept of enriched functional Gene Ontology (GO) modules. With this method, we systematically revealed functional GO modules, i.e., groups of functionally similar GO terms, via an optimization model and then ranked them by enrichment scores. Our new method simplifies enrichment analysis results by reducing redundancy, thereby preventing inconsistent enrichment results among functionally similar terms and providing more biologically meaningful results.

  7. A novel JPEG steganography method based on modulus function with histogram analysis

    Directory of Open Access Journals (Sweden)

    V. Banoci

    2012-06-01

Full Text Available In this paper, we present a novel steganographic method for embedding secret data in still grayscale JPEG images. In order to provide large capacity while maintaining good visual quality of the stego-image, the embedding process is performed in the quantized transform coefficients of the Discrete Cosine Transform (DCT) by modifying coefficients according to a modulo function, which gives the steganography system a blind extraction capability. The after-embedding histogram of the proposed Modulo Histogram Fitting (MHF) method is analyzed to secure the steganography system against steganalysis attacks. In addition, AES ciphering was implemented to increase security and improve the after-embedding histogram characteristics of the proposed steganography system, as experimental results show.

  8. New approach to equipment quality evaluation method with distinct functions

    Directory of Open Access Journals (Sweden)

    Milisavljević Vladimir M.

    2016-01-01

Full Text Available The paper presents a new approach to improving a method for the quality evaluation and selection of equipment (devices and machinery) by applying distinct functions. Quality evaluation and selection of devices and machinery is a multi-criteria problem that involves numerous parameters of various origins. The original selection method with distinct functions is based on technical parameters, with arbitrary weighting of each parameter's importance. The improvement of this method presented in this paper addresses the weighting of parameters by using the Delphi method. Finally, two case studies are provided, covering the quality evaluation of standard heating boilers and of load-haul-dump (LHD) machines, to demonstrate the applicability of this approach. The Analytic Hierarchy Process (AHP) is used as a control method.

  9. Hazard identification based on plant functional modelling

    International Nuclear Information System (INIS)

    Rasmussen, B.; Whetton, C.

    1993-10-01

    A major objective of the present work is to provide means for representing a process plant as a socio-technical system, so as to allow hazard identification at a high level. The method includes technical, human and organisational aspects and is intended to be used for plant level hazard identification so as to identify critical areas and the need for further analysis using existing methods. The first part of the method is the preparation of a plant functional model where a set of plant functions link together hardware, software, operations, work organisation and other safety related aspects of the plant. The basic principle of the functional modelling is that any aspect of the plant can be represented by an object (in the sense that this term is used in computer science) based upon an Intent (or goal); associated with each Intent are Methods, by which the Intent is realized, and Constraints, which limit the Intent. The Methods and Constraints can themselves be treated as objects and decomposed into lower-level Intents (hence the procedure is known as functional decomposition) so giving rise to a hierarchical, object-oriented structure. The plant level hazard identification is carried out on the plant functional model using the Concept Hazard Analysis method. In this, the user will be supported by checklists and keywords and the analysis is structured by pre-defined worksheets. The preparation of the plant functional model and the performance of the hazard identification can be carried out manually or with computer support. (au) (4 tabs., 10 ills., 7 refs.)

  10. Reliability-based design optimization via high order response surface method

    International Nuclear Information System (INIS)

    Li, Hong Shuang

    2013-01-01

To reduce the computational effort of reliability-based design optimization (RBDO), the response surface method (RSM) has been widely used to evaluate reliability constraints. We propose an efficient methodology for solving RBDO problems based on an improved high order response surface method (HORSM) that takes advantage of an efficient sampling method, Hermite polynomials and the uncertainty contribution concept to construct a high order response surface function with cross terms for reliability analysis. The sampling method generates supporting points from Gauss-Hermite quadrature points, which can be used to approximate the response surface function without cross terms, to identify the highest order of each random variable, and to determine the significant variables in connection with the point estimate method. The cross terms between two significant random variables are added to the response surface function to improve the approximation accuracy. Integrating the nested strategy, the improved HORSM is explored in solving RBDO problems. Additionally, a sampling-based reliability sensitivity analysis method is employed to further reduce the computational effort when the design variables are distributional parameters of input random variables. The proposed methodology is applied to two test problems to validate its accuracy and efficiency. The proposed methodology is more efficient than first order reliability method based RBDO and Monte Carlo simulation based RBDO, and enables the use of RBDO as a practical design tool.
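The sampling idea can be illustrated in one dimension. The sketch below assumes a hypothetical cubic performance function (not one of the paper's test problems): supporting points are drawn from Gauss-Hermite quadrature, rescaled for a standard normal variable, and a polynomial response surface is fitted through them; the same points and weights also give moments of the response.

```python
import numpy as np

# Supporting points from Gauss-Hermite quadrature, mapped to a standard
# normal variable (physicists' nodes scaled by sqrt(2))
nodes, wts = np.polynomial.hermite.hermgauss(5)
x = np.sqrt(2.0) * nodes
w = wts / np.sqrt(np.pi)          # probabilists' weights, sum to 1

def performance(u):
    # hypothetical performance function of one standardized variable
    return u**3 + 2.0 * u

# one-dimensional polynomial response surface through the support points
surrogate = np.poly1d(np.polyfit(x, performance(x), deg=3))

# the same points/weights also give moments, e.g. E[g(U)] for U ~ N(0,1)
mean_g = float(np.sum(w * performance(x)))
```

Five nodes integrate polynomials up to degree nine exactly, which is why the cubic surrogate here is recovered without error; the paper extends this to multiple variables and adds cross terms between the significant ones.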

  11. Quantal density functional theory II. Approximation methods and applications

    International Nuclear Information System (INIS)

    Sahni, Viraht

    2010-01-01

    This book is on approximation methods and applications of Quantal Density Functional Theory (QDFT), a new local effective-potential-energy theory of electronic structure. What distinguishes the theory from traditional density functional theory is that the electron correlations due to the Pauli exclusion principle, Coulomb repulsion, and the correlation contribution to the kinetic energy -- the Correlation-Kinetic effects -- are separately and explicitly defined. As such it is possible to study each property of interest as a function of the different electron correlations. Approximations methods based on the incorporation of different electron correlations, as well as a many-body perturbation theory within the context of QDFT, are developed. The applications are to the few-electron inhomogeneous electron gas systems in atoms and molecules, as well as to the many-electron inhomogeneity at metallic surfaces. (orig.)

  12. Estimation of functional failure probability of passive systems based on adaptive importance sampling method

    International Nuclear Information System (INIS)

    Wang Baosheng; Wang Dongqing; Zhang Jianmin; Jiang Jing

    2012-01-01

In order to estimate the functional failure probability of passive systems, an innovative adaptive importance sampling methodology is presented. In the proposed methodology, information about the variables is extracted by pre-sampling points in the failure region. An importance sampling density is then constructed from the sample distribution in the failure region. Taking the AP1000 passive residual heat removal system as an example, the uncertainties related to the model of a passive system and the numerical values of its input parameters are considered in this paper. The probability of functional failure is then estimated with a combination of the response surface method and the adaptive importance sampling method. The numerical results demonstrate the high computational efficiency and excellent accuracy of the methodology compared with traditional probability analysis methods. (authors)
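A one-dimensional sketch of the adaptive importance sampling idea, using a hypothetical linear limit state rather than the AP1000 model: pre-sampling locates the failure region, a unit normal importance density is centred on the failure samples, and the failure probability is estimated with likelihood-ratio weights.

```python
import math, random

random.seed(1)

def g(x):
    # hypothetical limit state: failure when g(x) < 0, i.e. x > 3
    return 3.0 - x

def norm_pdf(x, mu=0.0):
    return math.exp(-0.5 * (x - mu)**2) / math.sqrt(2 * math.pi)

# 1) pre-sampling from the nominal N(0,1) to locate the failure region
pre = [random.gauss(0, 1) for _ in range(20000)]
fails = [v for v in pre if g(v) < 0]

# 2) importance density: unit normal centred on the failure samples
mu_is = sum(fails) / len(fails)

# 3) weighted estimate of the functional failure probability
N = 20000
pf = sum(norm_pdf(v) / norm_pdf(v, mu_is)
         for v in (random.gauss(mu_is, 1) for _ in range(N)) if g(v) < 0) / N

exact = 0.5 * math.erfc(3.0 / math.sqrt(2.0))   # P(X > 3), X ~ N(0,1)
```

Centring the sampling density on the failure region makes failures frequent in the sample, which is what drives the variance reduction relative to crude Monte Carlo.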

  13. The Innovative Bike Conceptual Design by Using Modified Functional Element Design Method

    Directory of Open Access Journals (Sweden)

    Nien-Te Liu

    2016-11-01

Full Text Available The purpose of the study is to propose a new design process, created by modifying the functional element design approach, which can generate a large number of innovative concepts within a short period of time. Firstly, the original creative functional element design method is analyzed and its drawbacks are discussed. Then, the modified method is proposed and divided into 6 steps. Creative functional element representations, generalization, specialization, and particularization are used in this method. Every step is described clearly, and users can design easily by following the process. In this paper, a clear and accurate design process is proposed based on the creative functional element design method. By following this method, many innovative bicycle concepts can be created quickly.

  14. Adaptive endpoint detection of seismic signal based on auto-correlated function

    International Nuclear Information System (INIS)

    Fan Wanchun; Shi Ren

    2001-01-01

Based on an analysis of the auto-correlation function, the notion of the distance between auto-correlation functions is introduced, and the characteristics of noise and of signal with noise are discussed in terms of this distance. On this basis, a method for the adaptive endpoint detection of seismic signals based on auto-correlated similarity is developed. The implementation steps and the determination of the thresholds are presented in detail. Experimental results, compared with artificial (manual) detection methods, show that this method has higher sensitivity even at a low signal-to-noise ratio.
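The distance idea can be sketched on a synthetic trace. The window length, step, and threshold below are illustrative assumptions, not the paper's values: a noise window's auto-correlation is nearly a delta, while a coherent arrival produces an oscillatory auto-correlation, so the Euclidean distance to a noise reference jumps at the signal onset.

```python
import math, random

random.seed(0)

def autocorr(w, nlags=20):
    """Normalized auto-correlation of a window."""
    n = len(w)
    m = sum(w) / n
    d = [v - m for v in w]
    c0 = sum(v * v for v in d) / n or 1e-12
    return [sum(d[i] * d[i + k] for i in range(n - k)) / n / c0
            for k in range(nlags)]

def ac_distance(a, b):
    """Euclidean distance between two auto-correlation sequences."""
    return math.sqrt(sum((p - q)**2 for p, q in zip(a, b)))

# Synthetic record: white noise, then a low-frequency arrival at sample 400
trace = [random.gauss(0, 1) for _ in range(400)] + \
        [math.sin(0.2 * i) + random.gauss(0, 0.3) for i in range(400)]

ref = autocorr(trace[:100])            # reference window: pure noise
win, step, thresh = 100, 50, 1.5       # threshold chosen empirically here
dists = [(s, ac_distance(autocorr(trace[s:s + win]), ref))
         for s in range(0, len(trace) - win + 1, step)]
onset = next(s for s, d in dists if d > thresh)
```

In the adaptive scheme described in the abstract the threshold would be derived from the data rather than fixed in advance.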

  15. A numerical integration-based yield estimation method for integrated circuits

    International Nuclear Information System (INIS)

    Liang Tao; Jia Xinzhang

    2011-01-01

    A novel integration-based yield estimation method is developed for yield optimization of integrated circuits. This method tries to integrate the joint probability density function on the acceptability region directly. To achieve this goal, the simulated performance data of unknown distribution should be converted to follow a multivariate normal distribution by using Box-Cox transformation (BCT). In order to reduce the estimation variances of the model parameters of the density function, orthogonal array-based modified Latin hypercube sampling (OA-MLHS) is presented to generate samples in the disturbance space during simulations. The principle of variance reduction of model parameters estimation through OA-MLHS together with BCT is also discussed. Two yield estimation examples, a fourth-order OTA-C filter and a three-dimensional (3D) quadratic function are used for comparison of our method with Monte Carlo based methods including Latin hypercube sampling and importance sampling under several combinations of sample sizes and yield values. Extensive simulations show that our method is superior to other methods with respect to accuracy and efficiency under all of the given cases. Therefore, our method is more suitable for parametric yield optimization. (semiconductor integrated circuits)
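The normalization step can be sketched in one dimension. The sketch below uses synthetic lognormal "performance" data and a hypothetical spec limit; it selects the Box-Cox exponent by profile likelihood over a grid, fits a normal in the transformed space, and integrates it up to the spec, which is a scalar simplification of integrating the joint density over the acceptability region.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(42)

def boxcox(y, lam):
    """Box-Cox transformation; the log branch is the lam -> 0 limit."""
    return np.log(y) if abs(lam) < 1e-9 else (y**lam - 1.0) / lam

def profile_loglik(y, lam):
    z = boxcox(y, lam)
    return -0.5 * len(y) * np.log(np.var(z)) + (lam - 1.0) * np.sum(np.log(y))

# Skewed "simulated performance" data; assumed spec limit: y < 8.0
y = rng.lognormal(mean=1.0, sigma=0.4, size=4000)

# pick lambda by maximum profile likelihood over a grid
lams = np.arange(-20, 21) / 10.0
lam = lams[int(np.argmax([profile_loglik(y, l) for l in lams]))]

# fit a normal in the transformed space and integrate up to the spec
z = boxcox(y, lam)
mu, sd = float(z.mean()), float(z.std())
spec_z = float(boxcox(np.array([8.0]), lam)[0])
yield_est = 0.5 * (1.0 + erf((spec_z - mu) / (sd * sqrt(2.0))))

empirical = float(np.mean(y < 8.0))
```

The paper combines this transformation with OA-MLHS sampling and a multivariate normal fit; neither refinement is shown here.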

  16. A numerical integration-based yield estimation method for integrated circuits

    Energy Technology Data Exchange (ETDEWEB)

Liang Tao; Jia Xinzhang, E-mail: tliang@yahoo.cn [Key Laboratory of Ministry of Education for Wide Bandgap Semiconductor Materials and Devices, School of Microelectronics, Xidian University, Xi'an 710071 (China)

    2011-04-15

    A novel integration-based yield estimation method is developed for yield optimization of integrated circuits. This method tries to integrate the joint probability density function on the acceptability region directly. To achieve this goal, the simulated performance data of unknown distribution should be converted to follow a multivariate normal distribution by using Box-Cox transformation (BCT). In order to reduce the estimation variances of the model parameters of the density function, orthogonal array-based modified Latin hypercube sampling (OA-MLHS) is presented to generate samples in the disturbance space during simulations. The principle of variance reduction of model parameters estimation through OA-MLHS together with BCT is also discussed. Two yield estimation examples, a fourth-order OTA-C filter and a three-dimensional (3D) quadratic function are used for comparison of our method with Monte Carlo based methods including Latin hypercube sampling and importance sampling under several combinations of sample sizes and yield values. Extensive simulations show that our method is superior to other methods with respect to accuracy and efficiency under all of the given cases. Therefore, our method is more suitable for parametric yield optimization. (semiconductor integrated circuits)

  17. Calculating the knowledge-based similarity of functional groups using crystallographic data

    Science.gov (United States)

    Watson, Paul; Willett, Peter; Gillet, Valerie J.; Verdonk, Marcel L.

    2001-09-01

    A knowledge-based method for calculating the similarity of functional groups is described and validated. The method is based on experimental information derived from small molecule crystal structures. These data are used in the form of scatterplots that show the likelihood of a non-bonded interaction being formed between functional group A (the `central group') and functional group B (the `contact group' or `probe'). The scatterplots are converted into three-dimensional maps that show the propensity of the probe at different positions around the central group. Here we describe how to calculate the similarity of a pair of central groups based on these maps. The similarity method is validated using bioisosteric functional group pairs identified in the Bioster database and Relibase. The Bioster database is a critical compilation of thousands of bioisosteric molecule pairs, including drugs, enzyme inhibitors and agrochemicals. Relibase is an object-oriented database containing structural data about protein-ligand interactions. The distributions of the similarities of the bioisosteric functional group pairs are compared with similarities for all the possible pairs in IsoStar, and are found to be significantly different. Enrichment factors are also calculated showing the similarity method is statistically significantly better than random in predicting bioisosteric functional group pairs.

  18. The Boundary Function Method. Fundamentals

    Science.gov (United States)

    Kot, V. A.

    2017-03-01

    The boundary function method is proposed for solving applied problems of mathematical physics in the region defined by a partial differential equation of the general form involving constant or variable coefficients with a Dirichlet, Neumann, or Robin boundary condition. In this method, the desired function is defined by a power polynomial, and a boundary function represented in the form of the desired function or its derivative at one of the boundary points is introduced. Different sequences of boundary equations have been set up with the use of differential operators. Systems of linear algebraic equations constructed on the basis of these sequences allow one to determine the coefficients of a power polynomial. Constitutive equations have been derived for initial boundary-value problems of all the main types. With these equations, an initial boundary-value problem is transformed into the Cauchy problem for the boundary function. The determination of the boundary function by its derivative with respect to the time coordinate completes the solution of the problem.

  19. Optimising Job-Shop Functions Utilising the Score-Function Method

    DEFF Research Database (Denmark)

    Nielsen, Erland Hejn

    2000-01-01

During the last 1-2 decades, simulation optimisation of discrete event dynamic systems (DEDS) has made considerable theoretical progress with respect to computational efficiency. The score-function (SF) method and the infinitesimal perturbation analysis (IPA) are two candidates belonging to this ... of a Job-Shop can be handled by the SF method.

  20. An influence function method based subsidence prediction program for longwall mining operations in inclined coal seams

    Energy Technology Data Exchange (ETDEWEB)

    Yi Luo; Jian-wei Cheng [West Virginia University, Morgantown, WV (United States). Department of Mining Engineering

    2009-09-15

The distribution of the final surface subsidence basin induced by longwall operations in an inclined coal seam can differ significantly from that in a flat coal seam and demands special prediction methods. Though many empirical prediction methods have been developed, they are inflexible for varying geological and mining conditions. An influence function method has been developed to take advantage of its fundamentally sound nature and flexibility. In developing this method, significant modifications have been made to the original Knothe function to produce an asymmetrical influence function. The empirical equations for final subsidence parameters derived from US subsidence data and Chinese empirical values have been incorporated into the mathematical models to improve the prediction accuracy. A corresponding computer program has been developed. A number of subsidence cases for longwall mining operations in coal seams with varying inclination angles have been used to demonstrate the applicability of the developed subsidence prediction model. 9 refs., 8 figs.
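The basic influence function superposition can be sketched with the original symmetric Knothe function (the paper's contribution is an asymmetric modification for inclined seams, which is not reproduced here); panel geometry and parameters below are illustrative.

```python
import math

def knothe(x, r):
    """Symmetric Knothe influence function with radius of influence r."""
    return math.exp(-math.pi * x**2 / r**2) / r

def subsidence(x, panel=(-100.0, 100.0), s_max=1.0, r=50.0, n=2000):
    """Final subsidence at surface coordinate x: superpose the influence
    of each extracted element of the panel (midpoint rule)."""
    a, b = panel
    h = (b - a) / n
    return s_max * h * sum(knothe(x - (a + (i + 0.5) * h), r)
                           for i in range(n))

center = subsidence(0.0)      # approaches s_max above a wide panel
flank = subsidence(80.0)      # partial subsidence near the panel edge
outside = subsidence(150.0)   # small residual outside the panel
```

Because the influence function integrates to one over the whole line, the subsidence above the centre of a panel much wider than r approaches the maximum subsidence s_max.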

  1. History based batch method preserving tally means

    International Nuclear Information System (INIS)

    Shim, Hyung Jin; Choi, Sung Hoon

    2012-01-01

In Monte Carlo (MC) eigenvalue calculations, the sample variance of a tally mean calculated from its cycle-wise estimates is biased because of the inter-cycle correlations of the fission source distribution (FSD). Recently, we proposed a new real-variance estimation method, named the history-based batch method, in which a MC run is treated as multiple runs with a small number of histories per cycle to generate independent tally estimates. In this paper, the history-based batch method based on weight correction is presented to preserve the tally mean from the original MC run. The effectiveness of the new method is examined for the weakly coupled fissile array problem as a function of the dominance ratio and the batch size, in comparison with other available schemes.
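The underlying batching idea can be sketched with a surrogate: an AR(1) series stands in for correlated cycle-wise tally estimates (this is an assumption for illustration, not the paper's transport model). The naive variance of the mean ignores the correlation and is biased low, while the variance computed from batch means is not, and equal-size batching preserves the tally mean exactly.

```python
import random

random.seed(7)

# cycle-wise tally estimates with AR(1) correlation, mimicking the
# inter-cycle correlation of the fission source distribution
n_cycles, rho = 10000, 0.8
x = [random.gauss(0, 1)]
for _ in range(n_cycles - 1):
    x.append(rho * x[-1] + random.gauss(0, (1 - rho**2) ** 0.5))

def var_of_mean(vals):
    """Sample variance of the mean, assuming independent values."""
    n = len(vals)
    m = sum(vals) / n
    return sum((v - m)**2 for v in vals) / (n - 1) / n

naive = var_of_mean(x)        # biased low: ignores inter-cycle correlation

batch = 100                   # cycles grouped per batch
means = [sum(x[i:i + batch]) / batch for i in range(0, n_cycles, batch)]
batched = var_of_mean(means)  # batch means are nearly independent
```

For this AR(1) surrogate the true variance of the mean is about nine times the naive estimate, and the batch estimate recovers most of that factor.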

  2. Solution of the generalized Emden-Fowler equations by the hybrid functions method

    International Nuclear Information System (INIS)

    Tabrizidooz, H R; Marzban, H R; Razzaghi, M

    2009-01-01

    In this paper, we present a numerical algorithm for solving the generalized Emden-Fowler equations, which have many applications in mathematical physics and astrophysics. The method is based on hybrid functions approximations. The properties of hybrid functions, which consist of block-pulse functions and Lagrange interpolating polynomials, are presented. These properties are then utilized to reduce the computation of the generalized Emden-Fowler equations to a system of nonlinear equations. The method is easy to implement and yields very accurate results.
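The hybrid-function machinery itself is beyond a short sketch, but the target equation can be illustrated on a classical special case with a known closed form. The sketch below integrates the Lane-Emden equation (an Emden-Fowler equation with a polytropic nonlinearity) by plain RK4, which is a stand-in solver, not the authors' method.

```python
def lane_emden(n=5.0, h=1e-3, x_end=5.0):
    """RK4 integration of y'' + (2/x) y' + y**n = 0, y(0)=1, y'(0)=0,
    a classical special case of the Emden-Fowler family."""
    def f(x, y, z):                    # z = y'
        return z, -2.0 * z / x - max(y, 0.0)**n
    x = h                              # series start avoids the x=0 singularity
    y, z = 1.0 - h * h / 6.0, -h / 3.0
    while x < x_end:
        k1y, k1z = f(x, y, z)
        k2y, k2z = f(x + h/2, y + h/2 * k1y, z + h/2 * k1z)
        k3y, k3z = f(x + h/2, y + h/2 * k2y, z + h/2 * k2z)
        k4y, k4z = f(x + h, y + h * k3y, z + h * k3z)
        y += h * (k1y + 2*k2y + 2*k3y + k4y) / 6.0
        z += h * (k1z + 2*k2z + 2*k3z + k4z) / 6.0
        x += h
    return y

# n = 5 has the closed form y(x) = (1 + x**2/3) ** -0.5, a handy check
y_num = lane_emden()
y_exact = (1.0 + 25.0 / 3.0) ** -0.5
```

Closed-form special cases like n = 5 are exactly the kind of benchmark against which a hybrid block-pulse/Lagrange scheme would be validated.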

  3. Doubly stochastic radial basis function methods

    Science.gov (United States)

    Yang, Fenglian; Yan, Liang; Ling, Leevan

    2018-06-01

We propose a doubly stochastic radial basis function (DSRBF) method for function recoveries. Instead of a constant, we treat the RBF shape parameters as stochastic variables whose distributions are determined by a stochastic leave-one-out cross validation (LOOCV) estimation. A careful operation count is provided in order to determine the ranges of all the parameters in our methods. The overhead cost for setting up the proposed DSRBF method is O(n^2) for function recovery problems with n basis functions. Numerical experiments confirm that the proposed method not only outperforms the constant shape parameter formulation (in terms of accuracy with comparable computational cost) but also the optimal LOOCV formulation (in terms of both accuracy and computational cost).
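As background for the LOOCV baseline the paper compares against, the sketch below selects a single deterministic Gaussian shape parameter by Rippa's leave-one-out shortcut and interpolates with it; the doubly stochastic treatment of the shape parameters is the paper's contribution and is not reproduced here.

```python
import numpy as np

def rbf_matrix(x, eps):
    """Gaussian RBF interpolation matrix for 1-D centres x."""
    return np.exp(-(eps * (x[:, None] - x[None, :]))**2)

def loocv_error(x, y, eps):
    """Rippa's shortcut: the i-th leave-one-out residual equals
    c_i / (A^{-1})_{ii}, so no n separate refits are needed."""
    A = rbf_matrix(x, eps)
    Ainv = np.linalg.inv(A)
    c = Ainv @ y
    return float(np.linalg.norm(c / np.diag(Ainv)))

x = np.linspace(0.0, 1.0, 15)
y = np.sin(2 * np.pi * x)

# scan a grid of shape parameters and keep the LOOCV-optimal one
epss = np.linspace(3.0, 20.0, 35)
best = epss[int(np.argmin([loocv_error(x, y, e) for e in epss]))]

# interpolate at new points with the selected shape parameter
c = np.linalg.solve(rbf_matrix(x, best), y)
xt = np.array([0.13, 0.61])
pred = np.exp(-(best * (xt[:, None] - x[None, :]))**2) @ c
```

The grid is kept away from very small eps, where the Gaussian kernel matrix becomes severely ill-conditioned.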

  4. Application of microarray and functional-based screening methods for the detection of antimicrobial resistance genes in the microbiomes of healthy humans.

    Directory of Open Access Journals (Sweden)

    Roderick M Card

Full Text Available The aim of this study was to screen for the presence of antimicrobial resistance genes within the saliva and faecal microbiomes of healthy adult human volunteers from five European countries. Two non-culture based approaches were employed to obviate potential bias associated with difficult-to-culture members of the microbiota. In a gene target-based approach, a microarray was employed to screen for the presence of over 70 clinically important resistance genes in the saliva and faecal microbiomes. A total of 14 different resistance genes were detected, encoding resistances to six antibiotic classes (aminoglycosides, β-lactams, macrolides, sulphonamides, tetracyclines and trimethoprim). The most commonly detected genes were erm(B), blaTEM, and sul2. In a functional-based approach, DNA prepared from pooled saliva samples was cloned into Escherichia coli and screened for expression of resistance to ampicillin or sulphonamide, two of the most common resistances found by array. The functional ampicillin resistance screen recovered genes encoding components of a predicted AcrRAB efflux pump. In the functional sulphonamide resistance screen, folP genes were recovered encoding mutant dihydropteroate synthase, the target of sulphonamide action. The genes recovered from the functional screens were from the chromosomes of commensal species that are opportunistically pathogenic and capable of exchanging DNA with related pathogenic species. Genes identified by microarray were not recovered in the activity-based screen, indicating that these two methods can be complementary in facilitating the identification of a range of resistance mechanisms present within the human microbiome. It also provides further evidence of the diverse reservoir of resistance mechanisms present in bacterial populations in the human gut and saliva. In future, the methods described in this study can be used to monitor changes in the resistome in response to antibiotic therapy.

  5. Deterministic and fuzzy-based methods to evaluate community resilience

    Science.gov (United States)

    Kammouh, Omar; Noori, Ali Zamani; Taurino, Veronica; Mahin, Stephen A.; Cimellaro, Gian Paolo

    2018-04-01

Community resilience is becoming a growing concern for authorities and decision makers. This paper introduces two indicator-based methods to evaluate the resilience of communities based on the PEOPLES framework. PEOPLES is a multi-layered framework that defines community resilience using seven dimensions. Each dimension is described through a set of resilience indicators collected from the literature, each linked to a measure that allows the analytical computation of the indicator's performance. The first method proposed in this paper requires data on previous disasters as input and returns as output a performance function for each indicator and a performance function for the whole community. The second method exploits knowledge-based fuzzy modeling for its implementation. This method allows a quantitative evaluation of the PEOPLES indicators using descriptive knowledge rather than deterministic data, while including the uncertainty involved in the analysis. The output of the fuzzy-based method is a resilience index for each indicator as well as a resilience index for the community. The paper also introduces an open-source online tool in which the first method is implemented. A case study illustrating the application of the first method and the usage of the tool is also provided.

  6. Exp-function method for solving Maccari's system

    International Nuclear Information System (INIS)

    Zhang Sheng

    2007-01-01

    In this Letter, the Exp-function method is used to seek exact solutions of Maccari's system. As a result, single and combined generalized solitonary solutions are obtained, from which some known solutions obtained by extended sine-Gordon equation method and improved hyperbolic function method are recovered as special cases. It is shown that the Exp-function method provides a very effective and powerful mathematical tool for solving nonlinear evolution equations in mathematical physics
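For reference, the core of the Exp-function method as commonly formulated (He and Wu) is a rational-exponential ansatz in a travelling-wave variable; the orders below are determined by balancing the highest-order linear and nonlinear terms of the equation:

```latex
u(\eta) \;=\; \frac{\sum_{n=-c}^{d} a_n \exp(n\eta)}{\sum_{m=-p}^{q} b_m \exp(m\eta)},
\qquad \eta = kx + \omega t .
```

Substituting this ansatz into the evolution equation and collecting coefficients of like powers of \(\exp(\eta)\) yields an algebraic system for the constants \(a_n\), \(b_m\), \(k\) and \(\omega\), whose solutions give the solitonary solutions mentioned above.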

  7. Model-Based Method for Sensor Validation

    Science.gov (United States)

    Vatan, Farrokh

    2012-01-01

Fault detection, diagnosis, and prognosis are essential tasks in the operation of autonomous spacecraft, instruments, and in situ platforms. One of NASA's key mission requirements is robust state estimation. Sensing, using a wide range of sensors and sensor fusion approaches, plays a central role in robust state estimation, and there is a need to diagnose sensor failure as well as component failure. Sensor validation can be considered to be part of the larger effort of improving reliability and safety. The standard methods for solving the sensor validation problem are based on probabilistic analysis of the system, of which the method based on Bayesian networks is the most popular. Such methods can only predict the most probable faulty sensors, subject to the initial probabilities defined for the failures. The method developed in this work takes a model-based approach and identifies the faulty sensors (if any) that can be logically inferred from the model of the system and the sensor readings (observations). The method is also more suitable for systems where it is hard, or even impossible, to find the probability functions of the system. The method starts with a new mathematical description of the problem and develops a very efficient and systematic algorithm for its solution. The method builds on the concepts of analytical redundancy relations (ARRs).

  8. On the Reliability of Source Time Functions Estimated Using Empirical Green's Function Methods

    Science.gov (United States)

    Gallegos, A. C.; Xie, J.; Suarez Salas, L.

    2017-12-01

    The Empirical Green's Function (EGF) method (Hartzell, 1978) has been widely used to extract source time functions (STFs). In this method, seismograms generated by collocated events with different magnitudes are deconvolved. Under a fundamental assumption that the STF of the small event is a delta function, the deconvolved Relative Source Time Function (RSTF) yields the large event's STF. While this assumption can be empirically justified by examination of differences in event size and frequency content of the seismograms, there can be a lack of rigorous justification of the assumption. In practice, a small event might have a finite duration when the RSTF is retrieved and interpreted as the large event STF with a bias. In this study, we rigorously analyze this bias using synthetic waveforms generated by convolving a realistic Green's function waveform with pairs of finite-duration triangular or parabolic STFs. The RSTFs are found using a time-domain based matrix deconvolution. We find when the STFs of smaller events are finite, the RSTFs are a series of narrow non-physical spikes. Interpreting these RSTFs as a series of high-frequency source radiations would be very misleading. The only reliable and unambiguous information we can retrieve from these RSTFs is the difference in durations and the moment ratio of the two STFs. We can apply a Tikhonov smoothing to obtain a single-pulse RSTF, but its duration is dependent on the choice of weighting, which may be subjective. We then test the Multi-Channel Deconvolution (MCD) method (Plourde & Bostock, 2017) which assumes that both STFs have finite durations to be solved for. A concern about the MCD method is that the number of unknown parameters is larger, which would tend to make the problem rank-deficient. Because the kernel matrix is dependent on the STFs to be solved for under a positivity constraint, we can only estimate the rank-deficiency with a semi-empirical approach. 
Based on the results so far, we find that the
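The deconvolution at the heart of the EGF method can be sketched as a time-domain least-squares problem. The sketch below uses a synthetic random wavelet as a proxy Green's function and a finite triangular STF (illustrative data, not the study's records): the convolution is written as a Toeplitz matrix-vector product and inverted with `numpy.linalg.lstsq`.

```python
import numpy as np

rng = np.random.default_rng(3)

def conv_matrix(g, n):
    """Toeplitz matrix C such that C @ s equals np.convolve(g, s)."""
    C = np.zeros((len(g) + n - 1, n))
    for j in range(n):
        C[j:j + len(g), j] = g
    return C

# small-event record (proxy Green's function) and a finite triangular STF
g = rng.standard_normal(60)
stf = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0])
big = np.convolve(g, stf) + 0.01 * rng.standard_normal(60 + len(stf) - 1)

# time-domain deconvolution as a linear least-squares problem
rstf, *_ = np.linalg.lstsq(conv_matrix(g, len(stf)), big, rcond=None)
```

The abstract's point is what this sketch hides: if the small event's own STF is finite rather than a delta, the recovered `rstf` is the large STF convolved with the small one's inverse, and only duration differences and the moment ratio remain unambiguous.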

  9. Adaptive endpoint detection of seismic signal based on auto-correlated function

    International Nuclear Information System (INIS)

    Fan Wanchun; Shi Ren

    2000-01-01

There are certain shortcomings to endpoint detection by time-waveform envelope and/or by checking the travel table (both labelled as artificial detection methods). Based on an analysis of the auto-correlation function, the notion of the distance between auto-correlation functions is introduced, and the characteristics of noise and of signal with noise are discussed in terms of this distance. On this basis, a method for the adaptive endpoint detection of seismic signals based on auto-correlated similarity is developed. The implementation steps and the determination of the thresholds are presented in detail. Experimental results, compared with artificial detection methods, show that this method has higher sensitivity even in low-SNR circumstances

  10. Atlas-based functional radiosurgery: Early results

    Energy Technology Data Exchange (ETDEWEB)

Stancanello, J.; Romanelli, P.; Pantelis, E.; Sebastiano, F.; Modugno, N. [Politecnico di Milano, Bioengineering Department and NEARlab, Milano, 20133 (Italy) and Siemens AG, Research and Clinical Collaborations, Erlangen, 91052 (Germany); Functional Neurosurgery Department, Neuromed IRCCS, Pozzilli, 86077 (Italy); CyberKnife Center, Iatropolis, Athens, 15231 (Greece); Functional Neurosurgery Department, Neuromed IRCCS, Pozzilli, 86077 (Italy)

    2009-02-15

Functional disorders of the brain, such as dystonia and neuropathic pain, may respond poorly to medical therapy. Deep brain stimulation (DBS) of the globus pallidus pars interna (GPi) and the centromedian nucleus of the thalamus (CMN) may alleviate dystonia and neuropathic pain, respectively. A noninvasive alternative to DBS is radiosurgical ablation [internal pallidotomy (IP) and medial thalamotomy (MT)]. The main technical limitation of radiosurgery is that targets are selected only on the basis of MRI anatomy, without electrophysiological confirmation. This means that, to be feasible, image-based targeting must be highly accurate and reproducible. Here, we report on the feasibility of an atlas-based approach to targeting for functional radiosurgery. In this method, masks of the GPi, CMN, and medio-dorsal nucleus were nonrigidly registered to patients' T1-weighted MRI (T1w-MRI) and superimposed on patients' T2-weighted MRI (T2w-MRI). Radiosurgical targets were identified on the T2w-MRI registered to the planning CT by an expert functional neurosurgeon. To assess its feasibility, two patients were treated with the CyberKnife using this method of targeting; a patient with dystonia received an IP (120 Gy prescribed to the 65% isodose) and a patient with neuropathic pain received a MT (120 Gy to the 77% isodose). Six months after treatment, T2w-MRIs and contrast-enhanced T1w-MRIs showed edematous regions around the lesions; target placements were reevaluated by DW-MRIs. At 12 months post-treatment, steroids for radiation-induced edema and medications for dystonia and neuropathic pain were discontinued. Both patients experienced significant relief from pain and dystonia-related problems. Fifteen months after treatment, the edema had disappeared. Thus, this work shows the promising feasibility of atlas-based functional radiosurgery to improve patient condition. Further investigations are indicated for optimizing the treatment dose.

  11. Image based rendering of iterated function systems

    NARCIS (Netherlands)

    Wijk, van J.J.; Saupe, D.

    2004-01-01

    A fast method to generate fractal imagery is presented. Iterated function systems (IFS) are based on repeatedly copying transformed images. We show that this can be directly translated into standard graphics operations: Each image is generated by texture mapping and blending copies of the previous
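The paper renders IFS attractors by texture-mapping and blending transformed copies of an image; the classical point-based alternative, sketched below for the Sierpinski triangle, iterates the same contractive affine maps on a single point (the "chaos game") and is useful for seeing what the image-based method converges to.

```python
import random

random.seed(4)

# Sierpinski IFS: three contractive affine copies of the unit triangle
maps = [
    lambda x, y: (0.5 * x,        0.5 * y),
    lambda x, y: (0.5 * x + 0.5,  0.5 * y),
    lambda x, y: (0.5 * x + 0.25, 0.5 * y + 0.5),
]

# chaos game: repeatedly apply a randomly chosen map to a single point
x, y = 0.3, 0.3
pts = []
for i in range(20000):
    x, y = random.choice(maps)(x, y)
    if i > 100:                        # discard the initial transient
        pts.append((x, y))
```

Because every map is a contraction, the orbit is pulled onto the attractor geometrically fast, which is why a short transient suffices; the central removed triangle of the fractal stays empty.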

  12. Functional Size Measurement applied to UML-based user requirements

    NARCIS (Netherlands)

    van den Berg, Klaas; Dekkers, Ton; Oudshoorn, Rogier; Dekkers, T.

    There is a growing interest in applying standardized methods for Functional Size Measurement (FSM) to Functional User Requirements (FUR) based on models in the Unified Modelling Language (UML). No consensus exists on this issue. We analyzed the demands that FSM places on FURs. We propose a

  13. Functional Mobility Testing: A Novel Method to Create Suit Design Requirements

    Science.gov (United States)

    England, Scott A.; Benson, Elizabeth A.; Rajulu, Sudhakar L.

    2008-01-01

    This study was performed to aid in the creation of design requirements for the next generation of space suits that more accurately describe the level of mobility necessary for a suited crewmember, through the use of an innovative methodology utilizing functional mobility. A novel method was utilized involving the collection of kinematic data while 20 subjects (10 male, 10 female) performed pertinent functional tasks that will be required of a suited crewmember during various phases of a lunar mission. These tasks were selected based on relevance and criticality from a larger list of tasks that may be carried out by the crew. Kinematic data were processed through Vicon BodyBuilder software to calculate joint angles for the ankle, knee, hip, torso, shoulder, elbow, and wrist. Maximum functional mobility was consistently lower than maximum isolated mobility. This study suggests that conventional methods for establishing design requirements for human-systems interfaces based on maximal isolated joint capabilities may overestimate the required mobility. Additionally, this method provides a valuable means of evaluating systems created from these requirements by comparing the mobility available in a new spacesuit, or the mobility required to use a new piece of hardware, to this newly established database of functional mobility.

  14. Weighted functional linear regression models for gene-based association analysis.

    Science.gov (United States)

    Belonogova, Nadezhda M; Svishcheva, Gulnara R; Wilson, James F; Campbell, Harry; Axenovich, Tatiana I

    2018-01-01

    Functional linear regression models are effectively used in gene-based association analysis of complex traits. These models combine information about individual genetic variants, taking into account their positions and reducing the influence of noise and/or observation errors. To increase the power of methods, where several differently informative components are combined, weights are introduced to give the advantage to more informative components. Allele-specific weights have been introduced to collapsing and kernel-based approaches to gene-based association analysis. Here we have for the first time introduced weights to functional linear regression models adapted for both independent and family samples. Using data simulated on the basis of GAW17 genotypes and weights defined by allele frequencies via the beta distribution, we demonstrated that type I errors correspond to declared values and that increasing the weights of causal variants allows the power of functional linear models to be increased. We applied the new method to real data on blood pressure from the ORCADES sample. Five of the six known genes with P models. Moreover, we found an association between diastolic blood pressure and the VMP1 gene (P = 8.18×10-6), when we used a weighted functional model. For this gene, the unweighted functional and weighted kernel-based models had P = 0.004 and 0.006, respectively. The new method has been implemented in the program package FREGAT, which is freely available at https://cran.r-project.org/web/packages/FREGAT/index.html.

  15. A study of parallelizing O(N) Green-function-based Monte Carlo method for many fermions coupled with classical degrees of freedom

    International Nuclear Information System (INIS)

    Zhang Shixun; Yamagia, Shinichi; Yunoki, Seiji

    2013-01-01

    Models of fermions interacting with classical degrees of freedom are applied to a large variety of systems in condensed matter physics. For this class of models, Weiße [Phys. Rev. Lett. 102, 150604 (2009)] has recently proposed a very efficient numerical method, called the O(N) Green-Function-Based Monte Carlo (GFMC) method, in which a kernel polynomial expansion technique is used to avoid the full numerical diagonalization of the fermion Hamiltonian matrix of size N, which usually costs O(N^3) computational complexity. Motivated by this background, in this paper we apply the GFMC method to the double exchange model in three spatial dimensions. We mainly focus on the implementation of the GFMC method using both MPI on a CPU-based cluster and Nvidia's Compute Unified Device Architecture (CUDA) programming techniques on a GPU-based (Graphics Processing Unit based) cluster. The time complexity of the algorithm and the parallel implementation details on the clusters are discussed. We also show the performance scaling for increasing Hamiltonian matrix size and increasing number of nodes, respectively. The performance evaluation indicates that for a 32^3 Hamiltonian, a single GPU achieves performance equivalent to more than 30 CPU cores parallelized using MPI
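
The kernel-polynomial ingredient that makes the method O(N) can be illustrated in a few lines: Chebyshev moments of the Hamiltonian are estimated with random probe vectors and the three-term recurrence, using only matrix-vector products. The sketch below is a toy illustration (random test matrix, hypothetical sizes), not the GFMC algorithm itself; the full diagonalization appears only to check the stochastic estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200

# Random symmetric test matrix, scaled so its spectrum lies well inside [-1, 1]
# (semicircle edge near 0.9), as the Chebyshev recurrence requires.
A = rng.standard_normal((N, N))
H = (A + A.T) * 0.45 / np.sqrt(2 * N)

M, R = 30, 200  # Chebyshev order and number of random probe vectors (hypothetical)

# Stochastic estimate of the moments mu_m = tr T_m(H) / N via the three-term
# recurrence T_m = 2 H T_{m-1} - T_{m-2}: only matvecs, no diagonalization.
mu = np.zeros(M)
for _ in range(R):
    v0 = rng.choice([-1.0, 1.0], size=N)  # random +/-1 probe vector
    v1 = H @ v0
    mu[0] += v0 @ v0
    mu[1] += v0 @ v1
    vm, vc = v0, v1
    for m in range(2, M):
        vn = 2.0 * (H @ vc) - vm
        mu[m] += v0 @ vn
        vm, vc = vc, vn
mu /= R * N

# Reference values from full O(N^3) diagonalization, for comparison only.
eigs = np.clip(np.linalg.eigvalsh(H), -1.0, 1.0)
mu_exact = np.array([np.cos(m * np.arccos(eigs)).mean() for m in range(M)])

assert np.abs(mu - mu_exact).max() < 0.05
```

With the moments in hand, the kernel polynomial method reconstructs spectral densities or Green functions after damping the truncated series with a kernel (e.g. the Jackson kernel), which is where the GFMC scheme picks up.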

  16. Annotation and retrieval system of CAD models based on functional semantics

    Science.gov (United States)

    Wang, Zhansong; Tian, Ling; Duan, Wenrui

    2014-11-01

    CAD model retrieval based on functional semantics is more significant than content-based 3D model retrieval during the mechanical conceptual design phase. However, relevant research has still not been fully explored. Therefore, a functional semantic-based CAD model annotation and retrieval method is proposed to support mechanical conceptual design and design reuse, inspire designer creativity through existing CAD models, shorten the design cycle, and reduce costs. Firstly, the CAD model functional semantic ontology is constructed to formally represent the functional semantics of CAD models and describe the mechanical conceptual design space comprehensively and consistently. Secondly, an approach to represent CAD models as attributed adjacency graphs (AAG) is proposed. In this method, the geometry and topology data are extracted from STEP models. On the basis of AAG, the functional semantics of CAD models are annotated semi-automatically by matching CAD models that contain partial features whose functional semantics have been annotated manually, thereby constructing a CAD model repository that supports model retrieval based on functional semantics. Thirdly, a CAD model retrieval algorithm that supports multi-function extended retrieval is proposed to explore more potential creative design knowledge at the semantic level. Finally, a prototype system, called the Functional Semantic-based CAD Model Annotation and Retrieval System (FSMARS), is implemented. A case demonstrates that FSMARS can successfully obtain multiple potential CAD models that conform to the desired function. The proposed research addresses actual needs and presents a new way to acquire CAD models in the mechanical conceptual design phase.

  17. Generalization of the influence function method in mining subsidence

    International Nuclear Information System (INIS)

    Bello Garcia, A.; Mendendez Diaz, A.; Ordieres Mere, J.B.; Gonzalez Nicieza, C.

    1996-01-01

    A generic approach to subsidence prediction based on the influence function method is presented. The changes proposed to the classical approach are the result of a previous analysis stage in which a generalization to the 3D problem was made. In addition, other hypotheses are suggested in order to relax the structural principles of the classical model. The quantitative results of this process and a brief discussion of its method of employment are presented. 13 refs., 8 figs., 5 tabs
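
The superposition at the heart of the influence function method is easy to sketch. The following toy 1-D example (a hypothetical Gaussian-type influence function of the stochastic-medium family, with made-up radius of influence and extraction geometry) sums the influence of each extracted cell at every surface point:

```python
import numpy as np

R = 100.0    # radius of influence [m] (hypothetical; depends on depth and geology)
w_max = 2.0  # maximum possible subsidence [m] (hypothetical)

def influence(x):
    """1-D influence function (stochastic-medium form), normalized so that a
    fully extracted, infinitely wide panel yields exactly w_max."""
    return np.exp(-np.pi * x ** 2 / R ** 2) / R

# Extracted panel: cells of width dx from x = -300 m to x = +300 m.
dx = 5.0
cells = np.arange(-300.0, 300.0, dx) + dx / 2

# Superpose the contribution of every extracted cell at each surface point.
x_surface = np.linspace(-600.0, 600.0, 241)
s = w_max * dx * sum(influence(x_surface - c) for c in cells)

assert abs(s[120] - w_max) < 0.02  # full subsidence above the panel centre
assert s[0] < 0.1                  # negligible subsidence far from the panel
```

Because the kernel integrates to one, a wide fully extracted panel reproduces w_max above its centre while the trough decays smoothly beyond the panel edges; the 3D generalization discussed in the paper replaces this 1-D kernel with an areal one.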

  18. Harris functional and related methods for calculating total energies in density-functional theory

    International Nuclear Information System (INIS)

    Averill, F.W.; Painter, G.S.

    1990-01-01

    The simplified energy functional of Harris has given results of useful accuracy for systems well outside the limits of weakly interacting fragments for which the method was originally proposed. In the present study, we discuss the source of the frequent good agreement of the Harris energy with full Kohn-Sham self-consistent results. A procedure is described for extending the applicability of the scheme to more strongly interacting systems by going beyond the frozen-atom fragment approximation. A gradient-force expression is derived, based on the Harris functional, which accounts for errors in the fragment charge representation. Results are presented for some diatomic molecules, illustrating the points of this study

  19. A Hybrid Positioning Method Based on Hypothesis Testing

    DEFF Research Database (Denmark)

    Amiot, Nicolas; Pedersen, Troels; Laaraiedh, Mohamed

    2012-01-01

    maxima. We propose to first estimate the support region of the two peaks of the likelihood function using a set membership method, and then decide between the two regions using a rule based on the less reliable observations. Monte Carlo simulations show that the performance of the proposed method...

  20. Problem-Matched Basis Functions for Microstrip Coupled Slot Antennas based on Transmission Line Greens Functions

    NARCIS (Netherlands)

    Bruni, S.; Llombart Juan, N.; Neto, A.; Gerini, G.; Maci, S.

    2004-01-01

    A general algorithm for the analysis of microstrip coupled leaky wave slot antennas was discussed. The method was based on the construction of physically appealing entire-domain Method of Moments (MoM) basis functions that allowed a consistent reduction of the number of unknowns and of total

  1. The response-matrix based AFEN method for the hexagonal geometry

    International Nuclear Information System (INIS)

    Noh, Jae Man; Kim, Keung Koo; Zee, Sung Quun; Joo, Hyung Kook; Cho, Byng Oh; Jeong, Hyung Guk; Cho, Jin Young

    1998-03-01

    The analytic function expansion nodal (AFEN) method, developed to overcome the limitations caused by the transverse integration, has been successfully applied to predict the neutron behavior in hexagonal cores as well as rectangular cores. In a hexagonal node, the transverse leakage resulting from the transverse integration has singular terms such as delta-functions and step-functions near the node center line. In most nodal methods using the transverse integration, the accuracy of the nodal method is degraded because the transverse leakage is approximated as a smooth function across the node center line, ignoring the singular terms. However, the AFEN method, in which there is no transverse leakage term in deriving the nodal coupling equations, keeps good accuracy for hexagonal nodes. In this study, the AFEN method, which shows excellent accuracy in hexagonal core analyses, is reformulated in a response matrix form. This form of the AFEN method can be implemented easily in nodal codes based on the response matrix method. Therefore, the coarse mesh rebalance (CMR) acceleration technique, which is one of the main advantages of the response matrix method, can be utilized for the AFEN method. The response-matrix based AFEN method has been successfully implemented into the MASTER code, and its accuracy and computational efficiency were examined by analyzing the two- and three-dimensional benchmark problems of VVER-440. Based on the results, it can be concluded that the newly formulated AFEN method accurately predicts the assembly powers (within 0.2% average error) as well as the effective multiplication factor (within 20 pcm error). In addition, the CMR acceleration technique is quite efficient, reducing the computation time of the AFEN method by 8 to 10 times. (author). 22 refs., 1 tab., 4 figs

  2. Benchmarking Density Functional Theory Based Methods To Model NiOOH Material Properties: Hubbard and van der Waals Corrections vs Hybrid Functionals.

    Science.gov (United States)

    Zaffran, Jeremie; Caspary Toroker, Maytal

    2016-08-09

    NiOOH has recently been used to catalyze water oxidation by way of electrochemical water splitting. Few experimental data are available to rationalize the successful catalytic capability of NiOOH. Thus, theory has a distinctive role for studying its properties. However, the unique layered structure of NiOOH is associated with the presence of essential dispersion forces within the lattice. Hence, the choice of an appropriate exchange-correlation functional within Density Functional Theory (DFT) is not straightforward. In this work, we will show that standard DFT is sufficient to evaluate the geometry, but DFT+U and hybrid functionals are required to calculate the oxidation states. Notably, the benefit of DFT with van der Waals correction is marginal. Furthermore, only hybrid functionals succeed in opening a bandgap, and such methods are necessary to study NiOOH electronic structure. In this work, we expect to give guidelines to theoreticians dealing with this material and to present a rational approach in the choice of the DFT method of calculation.

  3. Modulation Based on Probability Density Functions

    Science.gov (United States)

    Williams, Glenn L.

    2009-01-01

    A proposed method of modulating a sinusoidal carrier signal to convey digital information involves the use of histograms representing probability density functions (PDFs) that characterize samples of the signal waveform. The method is based partly on the observation that when a waveform is sampled (whether by analog or digital means) over a time interval at least as long as one half cycle of the waveform, the samples can be sorted by frequency of occurrence, thereby constructing a histogram representing a PDF of the waveform during that time interval.
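
The sorting-into-a-histogram step is easy to visualize. This short sketch (hypothetical sample count and bin width, not the proposed modulator itself) builds the empirical PDF of one cycle of a unit sinusoid; the characteristic U-shape (arcsine density) is the baseline that such a modulation scheme would perturb to convey information:

```python
import numpy as np

# Sample one full cycle of a unit-amplitude sinusoid at uniform time steps.
t = np.linspace(0.0, 2.0 * np.pi, 10000, endpoint=False)
samples = np.sin(t)

# Sort the samples by frequency of occurrence: an empirical PDF histogram.
counts, edges = np.histogram(samples, bins=20, range=(-1.0, 1.0))
pdf = counts / counts.sum()

# A sinusoid spends most of its time near the extremes, so the histogram is
# U-shaped (arcsine distribution): the outer bins outweigh the central ones.
assert pdf[0] > pdf[10] and pdf[-1] > pdf[10]
```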

  4. Computational prediction of drug-drug interactions based on drugs functional similarities.

    Science.gov (United States)

    Ferdousi, Reza; Safdari, Reza; Omidi, Yadollah

    2017-06-01

    Therapeutic activities of drugs are often influenced by co-administration of drugs that may cause inevitable drug-drug interactions (DDIs) and inadvertent side effects. Prediction and identification of DDIs are extremely vital for patient safety and the success of treatment modalities. A number of computational methods have been employed for the prediction of DDIs based on drugs' structures and/or functions. Here, we report on a computational method for DDI prediction based on functional similarity of drugs. The model was built on key biological elements including carriers, transporters, enzymes and targets (CTET). The model was applied to 2189 approved drugs. For each drug, all the associated CTETs were collected, and the corresponding binary vectors were constructed to determine the DDIs. Various similarity measures were conducted to detect DDIs. Of the examined similarity methods, the inner product-based similarity measures (IPSMs) were found to provide improved prediction values. Altogether, 2,394,766 potential drug pair interactions were studied. The model was able to predict over 250,000 unknown potential DDIs. Based on our findings, we propose the current method as a robust, yet simple and fast, universal in silico approach for the identification of DDIs. We envision that this proposed method can be used as a practical technique for the detection of possible DDIs based on the functional similarities of drugs. Copyright © 2017. Published by Elsevier Inc.
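
The core of the approach, binary CTET vectors compared with an inner-product similarity, can be sketched as follows. The drug names and CTET annotations here are invented for illustration; the paper's model uses the full annotation sets of 2189 approved drugs:

```python
import numpy as np

# Hypothetical CTET (carrier/transporter/enzyme/target) universe and toy drugs.
ctet = ["CYP3A4", "CYP2D6", "P-gp", "OATP1B1", "ALB", "5-HT2A"]
drugs = {
    "drugA": {"CYP3A4", "P-gp", "ALB"},
    "drugB": {"CYP3A4", "P-gp", "OATP1B1"},
    "drugC": {"5-HT2A"},
}

def binary_vector(elements):
    """Encode a drug's associated CTET elements as a 0/1 vector."""
    return np.array([1.0 if e in elements else 0.0 for e in ctet])

def cosine_similarity(u, v):
    """Inner-product-based similarity (cosine of the angle between vectors)."""
    return float(u @ v) / (np.linalg.norm(u) * np.linalg.norm(v))

vecs = {name: binary_vector(els) for name, els in drugs.items()}
sim_ab = cosine_similarity(vecs["drugA"], vecs["drugB"])
sim_ac = cosine_similarity(vecs["drugA"], vecs["drugC"])

# Drugs sharing CTET elements score high; disjoint drugs score zero, flagging
# the A-B pair as a more likely interaction candidate than A-C.
assert sim_ab > sim_ac == 0.0
```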

  5. The Multi-Criteria Negotiation Analysis Based on the Membership Function

    Directory of Open Access Journals (Sweden)

    Roszkowska Ewa

    2014-08-01

    In this paper we propose a multi-criteria model based on the fuzzy preferences approach which can be implemented in the prenegotiation phase to evaluate negotiation packages. The applicability of some multi-criteria ranking methods was discussed for building a scoring function for negotiation packages. The first one is the Simple Additive Weighting (SAW) technique, which computes the partial satisfactions from each negotiation issue and aggregates them as a weighted sum using the issue weights. The other one is Distance Based Methods (DBM), with its extension based on the distances to the ideal or anti-ideal package, i.e. the TOPSIS procedure. In our approach the negotiator's preferences over the issues are represented by fuzzy membership functions, and then a selected multi-criteria decision making method is adopted to determine the global rating of each package. The membership functions are used here as the equivalents of utility functions spread over the negotiation issues, which lets us compare different types of data. One of the key advantages of the proposed approach is its usefulness for building a general scoring function in the ill-structured negotiation problem, namely the situation in which the problem itself, as well as the negotiators' preferences, cannot be precisely defined, and the available information is uncertain, subjective and vague. Secondly, all proposed variants of scoring functions produce consistent rankings, even when new packages are added (or removed), and do not result in rank reversal.
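
The SAW variant of the scoring function is straightforward to sketch. In this toy example (invented issues, weights, and triangular membership functions), each issue value is mapped to a fuzzy membership degree and the weighted degrees are summed:

```python
def triangular(x, a, b, c):
    """Triangular fuzzy membership: 0 outside (a, c), rising to 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def saw_score(package, weights, membership):
    """Simple Additive Weighting: weighted sum of per-issue membership degrees."""
    return sum(weights[i] * membership[i](package[i]) for i in package)

# Hypothetical negotiation issues: a price near 100 and delivery near 7 days.
membership = {
    "price": lambda x: triangular(x, 80.0, 100.0, 120.0),
    "delivery": lambda x: triangular(x, 0.0, 7.0, 21.0),
}
weights = {"price": 0.6, "delivery": 0.4}

pkg1 = {"price": 100.0, "delivery": 7.0}   # both issues at their ideal values
pkg2 = {"price": 110.0, "delivery": 14.0}  # both issues halfway down the slopes

s1 = saw_score(pkg1, weights, membership)
s2 = saw_score(pkg2, weights, membership)
assert s1 > s2  # the ideal package receives the higher global rating
```

A DBM/TOPSIS variant would instead rank packages by their distances to the ideal and anti-ideal packages in the same membership space.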

  6. Calculation of neutron importance function in fissionable assemblies using Monte Carlo method

    International Nuclear Information System (INIS)

    Feghhi, S. A. H.; Afarideh, H.; Shahriari, M.

    2007-01-01

    The purpose of the present work is to develop an efficient solution method to calculate the neutron importance function in fissionable assemblies for all criticality conditions, using the Monte Carlo method. The neutron importance function plays an important role in perturbation theory and reactor dynamics calculations. Usually this function is determined by calculating the adjoint flux through solving the adjoint transport equation with deterministic methods. However, in complex geometries these calculations are very difficult. In this article, considering the capabilities of the MCNP code in solving problems with complex geometries and its closeness to physical concepts, a comprehensive method based on the physical concept of neutron importance has been introduced for calculating the neutron importance function in sub-critical, critical and supercritical conditions. For this purpose, a computer program has been developed. The results of the method have been benchmarked against ANISN code calculations in 1- and 2-group modes for simple geometries, and their correctness has been confirmed for all three criticality conditions. Ultimately, the efficiency of the method for complex geometries has been demonstrated by calculation of the neutron importance in the MNSR research reactor

  7. [Standardization of the terms for Chinese herbal functions based on functional targeting].

    Science.gov (United States)

    Xiao, Bin; Tao, Ou; Gu, Hao; Wang, Yun; Qiao, Yan-Jiang

    2011-03-01

    Functional analysis concisely summarizes and concentrates on the therapeutic characteristics and features of Chinese herbal medicine. Standardization of the terms for Chinese herbal functions not only plays a key role in modern research and development of Chinese herbal medicine, but also has far-reaching clinical applications. In this paper, a new method for standardizing the terms for Chinese herbal function was proposed. Firstly, functional targets were collected. Secondly, the pathological conditions and the mode of action of every functional target were determined by analyzing the references. Thirdly, the relationships between the pathological condition and the mode of action were determined based on Chinese medicine theory and data. This three-step approach allows for standardization of the terms for Chinese herbal functions. Promoting the standardization of Chinese medicine terms will benefit the overall clinical application of Chinese herbal medicine.

  8. Mindfulness-Based Cognitive Therapy for severe Functional Disorders

    DEFF Research Database (Denmark)

    Fjorback, Lone Overby

    MINDFULNESS-BASED COGNITIVE THERAPY FOR FUNCTIONAL DISORDERS - A RANDOMISED CONTROLLED TRIAL   Background: Mindfulness-Based Stress Reduction (MBSR) is a group skills-training program developed by Kabat-Zinn. It is designed to teach patients to become more aware of and relate differently to their thoughts, feelings, and bodily sensations. Randomised controlled studies of MBSR have shown mitigation of stress, anxiety, and dysphoria in the general population and reduction in total mood disturbance and stress symptoms in a medical population. In Mindfulness-Based Cognitive Therapy, MBSR is recombined with cognitive therapy. Aim: To examine the efficacy of Mindfulness-Based Cognitive Therapy in severe functional disorders, defined as severe Bodily Distress Disorder. Method: 120 patients are randomised to either Mindfulness-Based Cognitive Therapy: a manualized programme with eight weekly 3½ hour group...

  10. A fast computation method for MUSIC spectrum function based on circular arrays

    Science.gov (United States)

    Du, Zhengdong; Wei, Ping

    2015-02-01

    The large computational load of the multiple signal classification (MUSIC) spectrum function seriously affects the timeliness of direction-finding systems using the MUSIC algorithm, especially in two-dimensional direction-of-arrival (DOA) estimation of azimuth and elevation with a large antenna array. This paper proposes a fast computation method for the MUSIC spectrum that is suitable for any circular array. First, the circular array is transformed into a virtual uniform circular array; then, in the calculation of the MUSIC spectrum, owing to the cyclic characteristics of the steering vector, the inner products in the spatial spectrum computation are realised by cyclic convolution. The computational load of the MUSIC spectrum is thus significantly lower than that of the conventional method, making this a very practical way to compute the MUSIC spectrum for circular arrays.
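
The speed-up rests on a standard identity: if the steering vectors of the (virtual) uniform circular array are cyclic shifts of one base vector, then all N inner products against a noise-subspace vector form a circular correlation, computable with FFTs in O(N log N) instead of O(N^2). A minimal sketch with random data (the array size and vectors are hypothetical stand-ins):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64
a = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # base steering vector
v = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # a noise-subspace column

# Direct computation: inner product of v with every cyclic shift of a, O(N^2).
direct = np.array([np.vdot(np.roll(a, k), v) for k in range(N)])

# FFT computation: all N inner products at once via circular correlation,
# O(N log N): correlation in time equals V(f) * conj(A(f)) in frequency.
fast = np.fft.ifft(np.fft.fft(v) * np.conj(np.fft.fft(a)))

assert np.allclose(direct, fast)
```

In a full MUSIC implementation, the N results would feed directly into the denominator of the spatial spectrum for the N scanned directions.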

  11. Computational Methods for Large Spatio-temporal Datasets and Functional Data Ranking

    KAUST Repository

    Huang, Huang

    2017-07-16

    This thesis focuses on two topics: computational methods for large spatial datasets, and functional data ranking. Both tackle the challenges of big and high-dimensional data. The first topic is motivated by the prohibitive computational burden of fitting Gaussian process models to large and irregularly spaced spatial datasets. Various approximation methods have been introduced to reduce the computational cost, but many rely on unrealistic assumptions about the process, and retaining statistical efficiency remains an issue. We propose a new scheme to approximate the maximum likelihood estimator and the kriging predictor when the exact computation is infeasible. The proposed method provides different types of hierarchical low-rank approximations that are both computationally and statistically efficient. We explore the improvement of the approximation theoretically and investigate the performance by simulations. For real applications, we analyze a soil moisture dataset with 2 million measurements with the hierarchical low-rank approximation and apply the proposed fast kriging to fill gaps in satellite images. The second topic is motivated by rank-based outlier detection methods for functional data. Compared to magnitude outliers, it is more challenging to detect shape outliers as they are often masked among samples. We develop a new notion of functional data depth by taking the integral of a univariate depth function. Having the form of an integrated depth, it shares many desirable features. Furthermore, the novel formulation leads to a useful decomposition for detecting both shape and magnitude outliers. Our simulation studies show the proposed outlier detection procedure outperforms competitors in various outlier models. We also illustrate our methodology using real datasets of curves, images, and video frames. Finally, we introduce the functional data ranking technique to spatio-temporal statistics for visualizing and assessing covariance properties, such as
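
The integrated-depth construction can be sketched in a few lines. Here a simple pointwise half-space depth stands in for the thesis's univariate depth (the exact depth function and its decomposition differ in the thesis); averaging it over time yields one depth value per curve:

```python
import numpy as np

def integrated_depth(curves):
    """Integrated functional depth: average a univariate depth over time.

    curves: array of shape (n_curves, n_timepoints). At each time point, the
    univariate half-space depth of curve i is the smaller of the fractions of
    sample values lying at-or-below / at-or-above its value there.
    """
    below = (curves[None, :, :] <= curves[:, None, :]).mean(axis=1)
    above = (curves[None, :, :] >= curves[:, None, :]).mean(axis=1)
    return np.minimum(below, above).mean(axis=1)  # integrate (average) over time

t = np.linspace(0.0, 1.0, 50)
# Nine flat curves forming a band, plus one shape outlier oscillating inside it.
curves = np.array([np.full(50, c) for c in np.linspace(-0.4, 0.4, 9)]
                  + [0.4 * np.sin(20.0 * t)])
depths = integrated_depth(curves)

# The central flat curve is deepest; the oscillating curve loses depth each
# time it swings to the edge of the band, so it ranks below the central curves
# even though its magnitude never leaves the band.
assert depths.argmax() == 4 and depths[9] < depths[4]
```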

  12. Applying the expansion method in hierarchical functions to the solution of Navier-Stokes equations for incompressible fluids

    International Nuclear Information System (INIS)

    Sabundjian, Gaiane

    1999-01-01

    This work presents a novel numerical method, based on the finite element method, applied to the solution of the Navier-Stokes equations for incompressible fluids in two dimensions in laminar flow. The method is based on the expansion of the variables in almost hierarchical functions. The expansion functions used are based on Legendre polynomials, adjusted on the rectangular elements in such a way that corner, side and area functions are defined. The order of the expansion functions associated with the sides and with the area of the elements can be adjusted to the necessary or desired degree. This novel numerical method is called the Hierarchical Expansion Method. In order to validate the proposed numerical method, three well-known two-dimensional problems from the literature are analyzed. The results show the method's capacity to supply precise results. From the results obtained in this thesis it can be concluded that the Hierarchical Expansion Method can be applied successfully to the solution of fluid dynamics problems involving incompressible fluids. (author)
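
The 'hierarchical' property rests on Legendre-based functions that vanish at the element boundaries, so raising the polynomial order adds side modes without disturbing the corner (nodal) values. A 1-D sketch using the common integrated-Legendre construction (the thesis's exact 2-D corner/side/area functions build on the same idea):

```python
import numpy as np
from numpy.polynomial import legendre as L

def hierarchical_side_function(k, xi):
    """Hierarchical shape function of order k >= 2 on [-1, 1], built from
    Legendre polynomials: N_k = (P_k - P_{k-2}) / (2k - 1). It vanishes at
    both endpoints, so higher orders can be added without changing nodal values."""
    Pk = L.legval(xi, [0.0] * k + [1.0])          # P_k(xi)
    Pk2 = L.legval(xi, [0.0] * (k - 2) + [1.0])   # P_{k-2}(xi)
    return (Pk - Pk2) / (2 * k - 1)

xi = np.linspace(-1.0, 1.0, 5)
for k in range(2, 8):
    N = hierarchical_side_function(k, xi)
    # Zero at both element corners for every order k.
    assert abs(N[0]) < 1e-12 and abs(N[-1]) < 1e-12
```

In 2-D, tensor products of such side functions with the linear corner functions yield the corner, side and area functions on rectangular elements described above.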

  13. Failure Probability Calculation Method Using Kriging Metamodel-based Importance Sampling Method

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Seunggyu [Korea Aerospace Research Institue, Daejeon (Korea, Republic of); Kim, Jae Hoon [Chungnam Nat’l Univ., Daejeon (Korea, Republic of)

    2017-05-15

    The kernel density was determined based on sampling points obtained in a Markov chain simulation and was taken as the importance sampling function. A Kriging metamodel was constructed in more detail in the vicinity of the limit state. The failure probability was calculated by importance sampling performed on the Kriging metamodel. A pre-existing method was modified to obtain more sampling points for the kernel density in the vicinity of the limit state, and a stable numerical method was proposed to find the parameter of the kernel density. To assess the completeness of the Kriging metamodel, the possible change in the calculated failure probability due to the uncertainty of the Kriging metamodel was evaluated.
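
The importance-sampling step itself can be illustrated independently of the Kriging and Markov-chain machinery. In this self-contained sketch, an analytic limit-state function g stands in for the Kriging metamodel, and a normal density shifted to the most probable failure point stands in for the kernel density (both are stand-ins, with invented numbers):

```python
import math
import numpy as np

rng = np.random.default_rng(1)

def g(x):
    """Hypothetical limit-state function: failure when g(x) < 0."""
    return 3.0 - (x[..., 0] + x[..., 1]) / math.sqrt(2.0)

# Design point: the point on the limit state closest to the origin in
# standard-normal space (the most probable failure point).
x_star = np.array([3.0 / math.sqrt(2.0)] * 2)

# Importance sampling: draw from a standard normal shifted to the design point.
n = 20000
z = rng.standard_normal((n, 2))
x = z + x_star

# Weight = (true density) / (sampling density) for independent standard normals.
log_w = -0.5 * (x ** 2).sum(axis=1) + 0.5 * (z ** 2).sum(axis=1)
p_is = np.mean((g(x) < 0.0) * np.exp(log_w))

p_exact = 0.5 * math.erfc(3.0 / math.sqrt(2.0))  # = Phi(-3), analytic reference
assert abs(p_is - p_exact) / p_exact < 0.1
```

Concentrating samples near the limit state is what makes the estimate accurate with few evaluations of g, which is also why the paper seeks extra kernel-density points in that region.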

  14. Bootstrapping conformal field theories with the extremal functional method.

    Science.gov (United States)

    El-Showk, Sheer; Paulos, Miguel F

    2013-12-13

    The existence of a positive linear functional acting on the space of (differences between) conformal blocks has been shown to rule out regions in the parameter space of conformal field theories (CFTs). We argue that at the boundary of the allowed region the extremal functional contains, in principle, enough information to determine the dimensions and operator product expansion (OPE) coefficients of an infinite number of operators appearing in the correlator under analysis. Based on this idea we develop the extremal functional method (EFM), a numerical procedure for deriving the spectrum and OPE coefficients of CFTs lying on the boundary (of solution space). We test the EFM by using it to rederive the low lying spectrum and OPE coefficients of the two-dimensional Ising model based solely on the dimension of a single scalar quasiprimary--no Virasoro algebra required. Our work serves as a benchmark for applications to more interesting, less known CFTs in the near future.

  15. Development of a code in three-dimensional cylindrical geometry based on analytic function expansion nodal (AFEN) method

    International Nuclear Information System (INIS)

    Lee, Joo Hee

    2006-02-01

    There is growing interest in developing pebble bed reactors (PBRs) as a candidate for very high temperature gas-cooled reactors (VHTRs). Until now, most existing methods of nuclear design analysis for this type of reactor are based on old finite-difference solvers or on statistical methods. For realistic analysis of PBRs, however, there is a strong desire to make available high-fidelity nodal codes in three-dimensional (r,θ,z) cylindrical geometry. Recently, the Analytic Function Expansion Nodal (AFEN) method, developed quite extensively in Cartesian (x,y,z) geometry and in hexagonal-z geometry, was extended to two-group (r,z) cylindrical geometry, and gave very accurate results. In this thesis, we develop a method for the full three-dimensional cylindrical (r,θ,z) geometry and implement the method into a code named TOPS. The AFEN methodology in this geometry, as in hexagonal geometry, is 'robust' (e.g., no occurrence of singularity), due to the unique feature of the AFEN method that it does not use the transverse integration. The transverse integration in the usual nodal methods, however, leads to an impasse, that is, failure of the azimuthal term to be transverse-integrated over the r-z surface. We use 13 nodal unknowns in an outer node and 7 nodal unknowns in an innermost node. The general solution of the node can be expressed in terms of these nodal unknowns, and can be updated using the nodal balance equation and the current continuity condition. For more realistic analysis of PBRs, we implemented the Marshak boundary condition to treat the zero incoming current boundary condition and the partial current translation (PCT) method to treat voids in the core. The TOPS code was verified on various numerical tests derived from the Dodds problem and the PBMR-400 benchmark problem. The results of the TOPS code show higher accuracy and faster computing time than the VENTURE code, which is based on the finite difference method (FDM)

  16. Assessment of soil microbial diversity with functional multi-endpoint methods

    DEFF Research Database (Denmark)

    Winding, Anne; Creamer, R. E.; Rutgers, M.

    Soil microbial diversity provides the cornerstone for support of soil ecosystem services through key roles in soil organic matter turnover, carbon sequestration and water infiltration. However, standardized methods to quantify the multitude of microbial functions in soils are lacking. Methods based on CO2 development by the microbes, such as substrate-induced respiration (SIR) on specific substrates, have led to the development of MicroResp™ and Community Level Physiological Profile (CLPP) with Biolog™ plates, and soil enzymatic activity assayed by Extracellular Enzyme Activity (EEA) based on MUF... Due to the lack of principle methods, the data obtained from these substitute methods are currently not used in classification and assessment schemes, making quantification of natural capital and ecosystem services of the soil a difficult venture. In this contribution, we compare and contrast the three...

  17. Development of thermal stress screening method. Application of green function method

    International Nuclear Information System (INIS)

    Furuhashi, Ichiro; Shibamoto, Hiroshi; Kasahara, Naoto

    2004-01-01

    This work was carried out to develop a screening method for thermal transient stresses in FBR components. We proposed an approximation method for evaluating thermal stresses under variable heat transfer coefficients (non-linear problems) using the Green functions of thermal stresses with constant heat transfer coefficients (linear problems). Detailed thermal stress analyses provided Green functions for a skirt structure and a tube-sheet of an intermediate heat exchanger. The upper bound Green functions were obtained from analyses using the upper bound heat transfer coefficients; the medium and lower bound Green functions were obtained from analyses under the medium and lower bound heat transfer coefficients. Conventional evaluations utilized only the upper bound Green functions, whereas we proposed a new evaluation method using the upper bound, medium and lower bound Green functions. Comparison of the above results showed the following. The conventional evaluations were conservative and appropriate for structures under a single-fluid thermal transient, such as the skirt. They were generally conservative for complicated structures under thermal transients of two or more fluids, such as the tube-sheet. However, dangerous locations can exist in such complicated structures, i.e., the conventional evaluations can be non-conservative there. The proposed evaluations gave good estimations for these complicated structures. Through the above results, we have prepared the basic documents for the screening method of thermal transient stresses using both the conventional method and the new method. (author)
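
The Green-function evaluation amounts to a Duhamel superposition: the stress history is the convolution of a unit-step response with the increments of fluid temperature. The sketch below uses an invented Green function and transient (in the work above, G comes from the detailed finite-element analyses, with upper, medium and lower bound variants from the corresponding heat transfer coefficients):

```python
import numpy as np

dt = 1.0                          # time step [s]
t = np.arange(0.0, 600.0, dt)

# Hypothetical Green function G(t): stress response [MPa/K] to a unit step in
# fluid temperature, rising quickly and then relaxing as the wall equilibrates.
tau_rise, tau_relax = 20.0, 150.0
G = np.exp(-t / tau_relax) - np.exp(-t / tau_rise)

# Fluid temperature history [K]: a 100 K down-transient starting at t = 60 s.
T_fluid = np.where(t < 60.0, 400.0,
                   400.0 - 100.0 * (1.0 - np.exp(-(t - 60.0) / 30.0)))

# Duhamel superposition: stress(t_k) = sum_j G(t_k - t_j) * dT(t_j).
dT = np.diff(T_fluid, prepend=T_fluid[0])
stress = np.convolve(dT, G)[: len(t)]

assert np.allclose(stress[:60], 0.0)  # nothing happens before the transient
assert stress.min() < -30.0           # thermal-shock stress peak
assert abs(stress[-1]) < 10.0         # stress relaxes after equilibration
```

Screening with bounding Green functions then reduces to repeating this convolution with the upper, medium and lower bound responses and taking the envelope.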

  18. Functional geometric method for solving free boundary problems for harmonic functions

    Energy Technology Data Exchange (ETDEWEB)

    Demidov, Aleksander S [M. V. Lomonosov Moscow State University, Moscow (Russian Federation)

    2010-01-01

    A survey is given of results and approaches for a broad spectrum of free boundary problems for harmonic functions of two variables. The main results are obtained by the functional geometric method. The core of this method is an interrelated analysis of the functional and geometric characteristics of the problems under consideration and of the corresponding non-linear Riemann-Hilbert problems. An extensive list of open questions is presented. Bibliography: 124 titles.

  19. Coupled double-distribution-function lattice Boltzmann method for the compressible Navier-Stokes equations.

    Science.gov (United States)

    Li, Q; He, Y L; Wang, Y; Tao, W Q

    2007-11-01

    A coupled double-distribution-function lattice Boltzmann method is developed for the compressible Navier-Stokes equations. Different from existing thermal lattice Boltzmann methods, this method can recover the compressible Navier-Stokes equations with a flexible specific-heat ratio and Prandtl number. In the method, a density distribution function based on a multispeed lattice is used to recover the compressible continuity and momentum equations, while the compressible energy equation is recovered by an energy distribution function. The energy distribution function is then coupled to the density distribution function via the thermal equation of state. In order to obtain an adjustable specific-heat ratio, a constant related to the specific-heat ratio is introduced into the equilibrium energy distribution function. Two different coupled double-distribution-function lattice Boltzmann models are also proposed in the paper. Numerical simulations are performed for the Riemann problem, the double-Mach-reflection problem, and the Couette flow with a range of specific-heat ratios and Prandtl numbers. The numerical results are found to be in excellent agreement with analytical and/or other solutions.

  20. An efficient method for hybrid density functional calculation with spin-orbit coupling

    Science.gov (United States)

    Wang, Maoyuan; Liu, Gui-Bin; Guo, Hong; Yao, Yugui

    2018-03-01

    In first-principles calculations, hybrid functionals are often used to improve accuracy over local exchange-correlation functionals. A drawback is that evaluating the hybrid functional requires significantly more computing effort. When spin-orbit coupling (SOC) is taken into account, the non-collinear spin structure increases the computing effort by at least eight times. As a result, hybrid functional calculations with SOC are intractable in most cases. In this paper, we present an approximate solution to this problem by developing an efficient method based on a mixed linear combination of atomic orbitals (LCAO) scheme. We demonstrate the power of this method using several examples and show that the results compare very well with those of direct hybrid functional calculations with SOC, yet the method only requires computing effort similar to that without SOC. The presented technique provides a good balance between computing efficiency and accuracy, and it can be extended to magnetic materials.

  1. Comparison of the auxiliary function method and the discrete-ordinate method for solving the radiative transfer equation for light scattering.

    Science.gov (United States)

    da Silva, Anabela; Elias, Mady; Andraud, Christine; Lafait, Jacques

    2003-12-01

    Two methods for solving the radiative transfer equation are compared with the aim of computing the angular distribution of the light scattered by a heterogeneous scattering medium composed of a single flat layer or a multilayer. The first method [auxiliary function method (AFM)], recently developed, uses an auxiliary function and leads to an exact solution; the second [discrete-ordinate method (DOM)] is based on the channel concept and needs an angular discretization. The comparison is applied to two different media presenting two typical and extreme scattering behaviors: Rayleigh and Mie scattering with smooth or very anisotropic phase functions, respectively. A very good agreement between the predictions of the two methods is observed in both cases. The larger the number of channels used in the DOM, the better the agreement. The principal advantages and limitations of each method are also listed.

  2. Springback Compensation Based on FDM-DTF Method

    International Nuclear Information System (INIS)

    Liu Qiang; Kang Lan

    2010-01-01

    Stamping part error caused by springback is usually considered a tooling defect in the sheet metal forming process. This problem can be corrected by adjusting the tooling shape appropriately. In this paper, springback compensation based on the FDM-DTF method is proposed for the design and modification of the tooling shape. Firstly, based on the FDM method, the tooling shape is designed by reversing the inner force's direction at the end of the forming simulation; the required tooling shape can be obtained after a few iterations. Secondly, the actual tooling is produced based on the results of the first step. By investigating the discrete surface data of the tooling and the part, the transfer function between the numerical springback error and the real springback error can be calculated from wavelet transform results, which can be used to predict the tooling shape for the desired product. Finally, the FDM-DTF method is shown to control springback effectively after being applied to springback control of a 2D irregular product.

  3. A Meshfree Cell-based Smoothed Point Interpolation Method for Solid Mechanics Problems

    International Nuclear Information System (INIS)

    Zhang Guiyong; Liu Guirong

    2010-01-01

    In the framework of a weakened weak (W²) formulation using a generalized gradient smoothing operation, this paper introduces a novel meshfree cell-based smoothed point interpolation method (CS-PIM) for solid mechanics problems. The W² formulation seeks solutions from a normed G space, which includes both continuous and discontinuous functions and allows many more types of methods to be used to create shape functions for numerical methods. When PIM shape functions are used, the constructed functions are in general not continuous over the entire problem domain and hence are not compatible. Such an interpolation is not in a traditional H¹ space, but in a G¹ space. By introducing the generalized gradient smoothing operation properly, the requirement on the functions is weakened further beyond the already weakened requirement for functions in an H¹ space, and the G¹ space can be viewed as a space of functions with a weakened weak (W²) requirement on continuity. The cell-based smoothed point interpolation method (CS-PIM) is formulated based on the W² formulation, in which the displacement field is approximated using the PIM shape functions, which possess the Kronecker delta property facilitating the enforcement of essential boundary conditions [3]. The gradient (strain) field is constructed by the generalized gradient smoothing operation within the cell-based smoothing domains, which are exactly the triangular background cells. A W² formulation of the generalized smoothed Galerkin (GS-Galerkin) weak form is used to derive the discretized system equations.
It was found that the CS-PIM possesses the following attractive properties: (1) it is very easy to implement and works well with the simplest linear triangular mesh without introducing additional degrees of freedom; (2) it is at least linearly conforming; (3) it is temporally stable and works well for dynamic analysis; (4) it possesses a close-to-exact stiffness, which is much softer than the overly-stiff FEM model and

  4. A recognition method research based on the heart sound texture map

    Directory of Open Access Journals (Sweden)

    Huizhong Cheng

    2016-06-01

    Full Text Available In order to improve the heart sound recognition rate and reduce the recognition time, this paper introduces a new method for heart sound pattern recognition using the Heart Sound Texture Map. Based on the heart sound model, we define the heart sound time-frequency diagram and the Heart Sound Texture Map, study the principle and realization of the Heart Sound Window Function, and then discuss how to use the Heart Sound Window Function and the short-time Fourier transform to obtain the two-dimensional heart sound time-frequency diagram. We then propose a corner correlation recognition algorithm based on the Heart Sound Texture Map, according to the characteristics of heart sounds. The simulation results show that, compared with traditional window functions, the Heart Sound Window Function makes the textures of the first (S1) and second (S2) heart sounds clearer, and that the corner correlation recognition algorithm based on the Heart Sound Texture Map can significantly improve the recognition rate and reduce the computational expense, making it an effective heart sound recognition method.
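
    The core step of building a time-frequency diagram with a windowed short-time Fourier transform can be sketched as follows. The toy signal, sampling rate, and Hann window are illustrative stand-ins only; the paper's own Heart Sound Window Function is not reproduced here:

```python
import numpy as np

fs = 2000  # Hz, assumed sampling rate
t = np.arange(0, 1.0, 1 / fs)
# Toy "heart sound": two short tone bursts standing in for S1 and S2.
x = (np.sin(2 * np.pi * 50 * t) * (t < 0.1)
     + np.sin(2 * np.pi * 120 * t) * ((t > 0.4) & (t < 0.5)))

win_len, hop = 128, 64
window = np.hanning(win_len)  # stand-in for the paper's window function

# Slide the window over the signal and take the magnitude spectrum of
# each frame; the resulting matrix is the time-frequency diagram.
frames = []
for start in range(0, len(x) - win_len + 1, hop):
    seg = x[start:start + win_len] * window
    frames.append(np.abs(np.fft.rfft(seg)))
spectrogram = np.array(frames).T  # rows: frequency bins, cols: time frames
```

In the resulting map the two bursts appear as separated bright patches, which is the texture that the recognition algorithm then correlates.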

  5. Structure-based inference of molecular functions of proteins of unknown function from Berkeley Structural Genomics Center

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Sung-Hou; Shin, Dong Hae; Hou, Jingtong; Chandonia, John-Marc; Das, Debanu; Choi, In-Geol; Kim, Rosalind; Kim, Sung-Hou

    2007-09-02

    Advances in sequence genomics have resulted in an accumulation of a huge number of protein sequences derived from genome sequences. However, the functions of a large portion of them cannot be inferred with current methods of detecting sequence homology to proteins of known function. Three-dimensional structure can play an important role in providing inference of the molecular function (physical and chemical function) of a protein of unknown function. Structural genomics centers worldwide have been determining many 3-D structures of proteins of unknown function, and their possible molecular functions have been inferred from their structures. Combined with bioinformatics and enzymatic assay tools, the successful acceleration of protein structure determination through high-throughput pipelines enables the rapid functional annotation of a large fraction of hypothetical proteins. We present a brief summary of the process we used at the Berkeley Structural Genomics Center to infer the molecular functions of proteins of unknown function.

  6. The Functional Resonance Analysis Method for a systemic risk based environmental auditing in a sinter plant: A semi-quantitative approach

    International Nuclear Information System (INIS)

    Patriarca, Riccardo; Di Gravio, Giulio; Costantino, Francesco; Tronci, Massimo

    2017-01-01

    Environmental auditing is a major concern for any production plant, and assessing environmental performance is crucial to identify risk factors. The complexity of current plants arises from interactions among technological, human and organizational system components, which are often transient and not easily detectable. Auditing thus requires a systemic perspective, rather than a focus on individual behaviors, as has emerged in recent research on safety in socio-technical systems. We explore the significance of modeling the interactions of system components in everyday work through the application of a recent systemic method, the Functional Resonance Analysis Method (FRAM), in order to define the system structure dynamically. We also present an innovative evolution of traditional FRAM following a semi-quantitative approach based on Monte Carlo simulation. This paper represents the first contribution related to the application of FRAM in the environmental context, moreover considering a consistent evolution based on Monte Carlo simulation. The case study of an environmental risk audit in a sinter plant validates the research, showing the benefits in terms of identifying potential critical activities, related mitigating actions and comprehensive environmental monitoring indicators. - Highlights: • We discuss the relevance of a systemic risk based environmental audit. • We present FRAM to represent functional interactions of the system. • We develop a semi-quantitative FRAM framework to assess environmental risks. • We apply the semi-quantitative FRAM framework to build a model for a sinter plant.

  7. The Functional Resonance Analysis Method for a systemic risk based environmental auditing in a sinter plant: A semi-quantitative approach

    Energy Technology Data Exchange (ETDEWEB)

    Patriarca, Riccardo, E-mail: riccardo.patriarca@uniroma1.it; Di Gravio, Giulio; Costantino, Francesco; Tronci, Massimo

    2017-03-15

    Environmental auditing is a major concern for any production plant, and assessing environmental performance is crucial to identify risk factors. The complexity of current plants arises from interactions among technological, human and organizational system components, which are often transient and not easily detectable. Auditing thus requires a systemic perspective, rather than a focus on individual behaviors, as has emerged in recent research on safety in socio-technical systems. We explore the significance of modeling the interactions of system components in everyday work through the application of a recent systemic method, the Functional Resonance Analysis Method (FRAM), in order to define the system structure dynamically. We also present an innovative evolution of traditional FRAM following a semi-quantitative approach based on Monte Carlo simulation. This paper represents the first contribution related to the application of FRAM in the environmental context, moreover considering a consistent evolution based on Monte Carlo simulation. The case study of an environmental risk audit in a sinter plant validates the research, showing the benefits in terms of identifying potential critical activities, related mitigating actions and comprehensive environmental monitoring indicators. - Highlights: • We discuss the relevance of a systemic risk based environmental audit. • We present FRAM to represent functional interactions of the system. • We develop a semi-quantitative FRAM framework to assess environmental risks. • We apply the semi-quantitative FRAM framework to build a model for a sinter plant.

  8. Approximation of the Doppler broadening function by Frobenius method

    International Nuclear Information System (INIS)

    Palma, Daniel A.P.; Martinez, Aquilino S.; Silva, Fernando C.

    2005-01-01

    An analytical approximation of the Doppler broadening function ψ(x,ξ) is proposed. This approximation is based on the solution of the differential equation for ψ(x,ξ) using the Frobenius method and variation of parameters. The analytical form derived for ψ(x,ξ) in terms of elementary functions is very simple and precise. It can be useful for applications related to the treatment of nuclear resonances, mainly for the calculation of multigroup parameters and self-protection factors of the resonances, the latter being used to correct microscopic cross-section measurements made by the activation technique. (author)
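
    For reference, the Doppler broadening function in its standard integral form is ψ(x,ξ) = (ξ/2√π) ∫ exp(−ξ²(x−y)²/4)/(1+y²) dy. A brute-force quadrature of this integral (a numerical reference point, not the paper's Frobenius-based analytical form) can be sketched as:

```python
import numpy as np

def psi(x, xi, n=20001, span=50.0):
    """Doppler broadening function psi(x, xi) by direct quadrature on a
    uniform grid (brute-force reference, adequate for moderate xi)."""
    y = np.linspace(-span, span, n)
    dy = y[1] - y[0]
    integrand = np.exp(-0.25 * xi**2 * (x - y) ** 2) / (1.0 + y**2)
    return xi / (2.0 * np.sqrt(np.pi)) * integrand.sum() * dy
```

As ξ grows the Gaussian kernel narrows, and ψ(x,ξ) approaches the natural Lorentzian line shape 1/(1+x²), which gives a quick sanity check on any approximation.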

  9. Exact solitary wave solutions for some nonlinear evolution equations via Exp-function method

    International Nuclear Information System (INIS)

    Ebaid, A.

    2007-01-01

    Based on the Exp-function method, exact solutions for some nonlinear evolution equations are obtained. The KdV equation, Burgers' equation and the combined KdV-mKdV equation are chosen to illustrate the effectiveness of the method.

  10. Lagrange polynomial interpolation method applied in the calculation of the J({xi},{beta}) function

    Energy Technology Data Exchange (ETDEWEB)

    Fraga, Vinicius Munhoz; Palma, Daniel Artur Pinheiro [Centro Federal de Educacao Tecnologica de Quimica de Nilopolis, RJ (Brazil)]. E-mails: munhoz.vf@gmail.com; dpalma@cefeteq.br; Martinez, Aquilino Senra [Universidade Federal do Rio de Janeiro (UFRJ), RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia (COPPE) (COPPE). Programa de Engenharia Nuclear]. E-mail: aquilino@lmp.ufrj.br

    2008-07-01

    The explicit dependence of the Doppler broadening function creates difficulties in obtaining an analytical expression for the J function. The objective of this paper is to present a method for the quick and accurate calculation of the J function, based on recent advances in the calculation of the Doppler broadening function and on a systematic analysis of its integrand. The proposed methodology, of a semi-analytical nature, uses the Lagrange polynomial interpolation method and the Frobenius formulation in the calculation of the Doppler broadening function. The results have proven satisfactory from the standpoint of accuracy and processing time. (author)

  11. Lagrange polynomial interpolation method applied in the calculation of the J(ξ,β) function

    International Nuclear Information System (INIS)

    Fraga, Vinicius Munhoz; Palma, Daniel Artur Pinheiro; Martinez, Aquilino Senra

    2008-01-01

    The explicit dependence of the Doppler broadening function creates difficulties in obtaining an analytical expression for the J function. The objective of this paper is to present a method for the quick and accurate calculation of the J function, based on recent advances in the calculation of the Doppler broadening function and on a systematic analysis of its integrand. The proposed methodology, of a semi-analytical nature, uses the Lagrange polynomial interpolation method and the Frobenius formulation in the calculation of the Doppler broadening function. The results have proven satisfactory from the standpoint of accuracy and processing time. (author)
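
    The Lagrange polynomial interpolation at the heart of this semi-analytical approach can be sketched generically. The cubic test function below is only an illustration of the interpolation step, not the J-function integrand itself:

```python
import numpy as np

def lagrange_eval(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through (xs, ys) at x."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        li = 1.0  # i-th Lagrange basis polynomial l_i(x)
        for j, xj in enumerate(xs):
            if j != i:
                li *= (x - xj) / (xi - xj)
        total += yi * li
    return total

# Four nodes reproduce any cubic exactly.
xs = np.array([0.0, 1.0, 2.0, 3.0])
f = lambda x: x**3 - 2 * x + 1
ys = f(xs)
```

A polynomial of degree n passes exactly through n+1 nodes, so tabulating a smooth integrand at a few well-chosen points and interpolating between them trades a costly evaluation for a cheap polynomial one.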

  12. Functional networks inference from rule-based machine learning models.

    Science.gov (United States)

    Lazzarini, Nicola; Widera, Paweł; Williamson, Stuart; Heer, Rakesh; Krasnogor, Natalio; Bacardit, Jaume

    2016-01-01

    Functional networks play an important role in the analysis of biological processes and systems. The inference of these networks from high-throughput (-omics) data is an area of intense research. So far, the similarity-based inference paradigm (e.g. gene co-expression) has been the most popular approach. It assumes a functional relationship between genes which are expressed at similar levels across different samples. An alternative to this paradigm is the inference of relationships from the structure of machine learning models. These models are able to capture complex relationships between variables that are often different from, or complementary to, those found by similarity-based methods. We propose a protocol to infer functional networks from machine learning models, called FuNeL. It assumes that genes used together within a rule-based machine learning model to classify the samples might also be functionally related at a biological level. The protocol is first tested on synthetic datasets and then evaluated on a test suite of 8 real-world datasets related to human cancer. The networks inferred from the real-world data are compared against gene co-expression networks of equal size, generated with 3 different methods. The comparison is performed from two different points of view. We analyse the enriched biological terms in the set of network nodes and the relationships between known disease-associated genes in the context of the network topology. The comparison confirms both the biological relevance and the complementary character of the knowledge captured by the FuNeL networks in relation to similarity-based methods and demonstrates its potential to identify known disease associations as core elements of the network. Finally, using a prostate cancer dataset as a case study, we confirm that the biological knowledge captured by our method is relevant to the disease and consistent with the specialised literature and with an independent dataset not used in the inference process. The
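
    The central idea, that attributes used together in a learned rule are candidate functional partners, can be sketched with a toy rule set. The gene names and rules below are hypothetical; FuNeL itself extracts rules from a trained rule-based classifier:

```python
from collections import Counter
from itertools import combinations

# Toy rule set: each rule is represented by the set of gene attributes it
# tests (hypothetical genes, for illustration only).
rules = [
    {"BRCA1", "TP53"},
    {"TP53", "MYC", "BRCA1"},
    {"MYC", "EGFR"},
]

# Network edge weight = number of rules in which a gene pair co-occurs.
edges = Counter()
for rule in rules:
    for pair in combinations(sorted(rule), 2):
        edges[pair] += 1
```

Pairs that recur across many rules form the core of the inferred network; in the toy set above, BRCA1-TP53 co-occurs twice and therefore gets the heaviest edge.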

  13. Convex-based void filling method for CAD-based Monte Carlo geometry modeling

    International Nuclear Information System (INIS)

    Yu, Shengpeng; Cheng, Mengyun; Song, Jing; Long, Pengcheng; Hu, Liqin

    2015-01-01

    Highlights: • We present a new void filling method named CVF for CAD-based MC geometry modeling. • We describe convex-based void description and quality-based space subdivision. • The results showed improvements provided by CVF in both modeling and MC calculation efficiency. - Abstract: CAD-based automatic geometry modeling tools have been widely applied to generate Monte Carlo (MC) calculation geometry for complex systems from CAD models. Automatic void filling is one of the main functions in CAD-based MC geometry modeling tools, because the void space between parts in CAD models is traditionally not modeled, while MC codes such as MCNP need the entire problem space to be described. A dedicated void filling method, named Convex-based Void Filling (CVF), is proposed in this study for efficient void filling and concise void descriptions. The method subdivides the problem space into disjoint regions using Quality-based Subdivision (QS) and describes the void space in each region with complementary descriptions of the convex volumes intersecting that region. It has been implemented in SuperMC/MCAM, the Multiple-Physics Coupling Analysis Modeling Program, and tested on the International Thermonuclear Experimental Reactor (ITER) Alite model. The results showed that the new method reduced both automatic modeling time and MC calculation time.

  14. BLUES function method in computational physics

    Science.gov (United States)

    Indekeu, Joseph O.; Müller-Nedebock, Kristian K.

    2018-04-01

    We introduce a computational method in physics that goes ‘beyond linear use of equation superposition’ (BLUES). A BLUES function is defined as a solution of a nonlinear differential equation (DE) with a delta source that is at the same time a Green’s function for a related linear DE. For an arbitrary source, the BLUES function can be used to construct an exact solution to the nonlinear DE with a different, but related source. Alternatively, the BLUES function can be used to construct an approximate piecewise analytical solution to the nonlinear DE with an arbitrary source. For this alternative use the related linear DE need not be known. The method is illustrated in a few examples using analytical calculations and numerical computations. Areas for further applications are suggested.

  15. Quantum master equation method based on the broken-symmetry time-dependent density functional theory: application to dynamic polarizability of open-shell molecular systems.

    Science.gov (United States)

    Kishi, Ryohei; Nakano, Masayoshi

    2011-04-21

    A novel method for the calculation of the dynamic polarizability (α) of open-shell molecular systems is developed based on the quantum master equation combined with the broken-symmetry (BS) time-dependent density functional theory within the Tamm-Dancoff approximation, referred to as the BS-DFTQME method. We investigate the dynamic α density distribution obtained from BS-DFTQME calculations in order to analyze the spatial contributions of electrons to the field-induced polarization and clarify the contributions of the frontier orbital pair to α and its density. To demonstrate the performance of this method, we examine the real part of dynamic α of singlet 1,3-dipole systems having a variety of diradical characters (y). The frequency dispersion of α, in particular in the resonant region, is shown to strongly depend on the exchange-correlation functional as well as on the diradical character. Under sufficiently off-resonant condition, the dynamic α is found to decrease with increasing y and/or the fraction of Hartree-Fock exchange in the exchange-correlation functional, which enhances the spin polarization, due to the decrease in the delocalization effects of π-diradical electrons in the frontier orbital pair. The BS-DFTQME method with the BHandHLYP exchange-correlation functional also turns out to semiquantitatively reproduce the α spectra calculated by a strongly correlated ab initio molecular orbital method, i.e., the spin-unrestricted coupled-cluster singles and doubles.

  16. A recursive Monte Carlo method for estimating importance functions in deep penetration problems

    International Nuclear Information System (INIS)

    Goldstein, M.

    1980-04-01

    A practical recursive Monte Carlo method for estimating the importance function distribution, aimed at importance sampling for the solution of deep penetration problems in three-dimensional systems, was developed. The efficiency of the recursive method was investigated for sample problems including one- and two-dimensional, monoenergetic and multigroup problems, as well as for a practical deep-penetration problem with streaming. The results of the recursive Monte Carlo calculations agree fairly well with S_n results. It is concluded that the recursive Monte Carlo method promises to become a universal method for estimating the importance function distribution for the solution of deep-penetration problems in all kinds of systems: for many systems the recursive method is likely to be more efficient than previously existing methods; for three-dimensional systems it is the first method that can estimate the importance function with the accuracy required for an efficient solution, based on importance sampling, of neutron deep-penetration problems in those systems.
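
    Why the importance function matters can be seen in a one-dimensional toy problem (a generic importance-sampling sketch, not the paper's recursive method). Estimating a deep-penetration probability of order e⁻²⁰ by analog sampling is hopeless, but biasing path lengths toward the tail and weighting by the density ratio recovers it; with a perfectly matched importance function, every weight is identical and the variance collapses:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 20.0                      # slab thickness in mean free paths
exact = np.exp(-d)            # analytic penetration probability, ~2e-9

# Analog sampling from Exp(1) almost never reaches x > d. Instead sample
# x = d + Exp(1) (all samples penetrate) and carry the weight
# f(x)/g(x) = e^-x / e^-(x-d), which equals e^-d for every sample.
n = 100_000
x = d + rng.exponential(1.0, size=n)
weights = np.exp(-x) / np.exp(-(x - d))
estimate = weights.mean()     # zero-variance estimator in this ideal case
```

In realistic multidimensional transport the ideal importance function is unknown, which is exactly why a method for estimating it, such as the recursive Monte Carlo scheme above, is valuable.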

  17. Density-functional expansion methods: Grand challenges.

    Science.gov (United States)

    Giese, Timothy J; York, Darrin M

    2012-03-01

    We discuss the source of errors in semiempirical density functional expansion (VE) methods. In particular, we show that VE methods are capable of well-reproducing their standard Kohn-Sham density functional method counterparts, but suffer from large errors upon using one or more of these approximations: the limited size of the atomic orbital basis, the Slater monopole auxiliary basis description of the response density, and the one- and two-body treatment of the core-Hamiltonian matrix elements. In the process of discussing these approximations and highlighting their symptoms, we introduce a new model that supplements the second-order density-functional tight-binding model with a self-consistent charge-dependent chemical potential equalization correction; we review our recently reported method for generalizing the auxiliary basis description of the atomic orbital response density; and we decompose the first-order potential into a summation of additive atomic components and many-body corrections, and from this examination, we provide new insights and preliminary results that motivate and inspire new approximate treatments of the core-Hamiltonian.

  18. Lung function imaging methods in Cystic Fibrosis pulmonary disease.

    Science.gov (United States)

    Kołodziej, Magdalena; de Veer, Michael J; Cholewa, Marian; Egan, Gary F; Thompson, Bruce R

    2017-05-17

    Monitoring of pulmonary physiology is fundamental to the clinical management of patients with Cystic Fibrosis. Current standard clinical practice uses spirometry to assess lung function, which delivers a clinically relevant readout of total lung function but does not supply any visual or localised information. High Resolution Computed Tomography (HRCT) is the well-established current 'gold standard' method for monitoring lung anatomical changes in Cystic Fibrosis patients. HRCT provides excellent morphological information; however, the X-ray radiation dose can become significant if multiple scans are required to monitor chronic diseases such as Cystic Fibrosis. X-ray phase-contrast imaging is another emerging X-ray based methodology for Cystic Fibrosis lung assessment which provides dynamic morphological and functional information, albeit with even higher X-ray doses than HRCT. Magnetic Resonance Imaging (MRI) is a non-ionising imaging method that is garnering growing interest among researchers and clinicians working with Cystic Fibrosis patients. Recent advances in MRI have opened up the possibility of observing lung function in real time, potentially allowing sensitive and accurate assessment of disease progression. The use of hyperpolarized gas or non-contrast enhanced MRI can be tailored to clinical needs. While MRI offers significant promise, it still suffers from poor spatial resolution and lacks an objective scoring system, especially for ventilation assessment.

  19. Characterization of adaptive statistical iterative reconstruction (ASIR) in low contrast helical abdominal imaging via a transfer function based method

    Science.gov (United States)

    Zhang, Da; Li, Xinhua; Liu, Bob

    2012-03-01

    Since the introduction of ASiR, its potential for noise reduction has been reported in various clinical applications. However, the influence of different scan and reconstruction parameters on the trade-off between ASiR's blurring effect and its noise reduction in low contrast imaging has not been fully studied. Simple measurements on low contrast images, such as CNR or phantom scores, cannot capture the nuanced nature of this problem. We tackled this topic with a method that characterizes the performance of ASiR in low contrast helical imaging through an assumed filter layer on top of the FBP reconstruction. Transfer functions of this filter layer were obtained from the noise power spectra (NPS) of corresponding FBP and ASiR images sharing the same scan and reconstruction parameters. 2D transfer functions were calculated as sqrt[NPS_ASiR(u, v)/NPS_FBP(u, v)]. Synthesized ACR phantom images were generated by filtering the FBP images with the transfer functions of specific (FBP, ASiR) pairs and were compared with the ASiR images. It is shown that the transfer functions can predict the deterministic blurring effect of ASiR on low contrast objects, as well as the degree of noise reduction. Using this method, the influence of dose, scan field of view (SFOV), display field of view (DFOV), ASiR level, and recon mode on the behavior of ASiR in low contrast imaging was studied. It was found that ASiR level, dose level, and DFOV play more important roles in determining the behavior of ASiR than the other two parameters.
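
    The transfer-function construction H(u,v) = sqrt[NPS_ASiR(u,v)/NPS_FBP(u,v)] can be sketched as follows. The noise realization and the low-pass NPS shaping below are synthetic stand-ins for measured phantom data; in practice the NPS is estimated from many subtracted phantom scans:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
fbp_noise = rng.normal(0.0, 10.0, (n, n))  # stand-in FBP noise image

# Single-realization NPS estimate: squared magnitude of the 2-D FFT.
nps_fbp = np.abs(np.fft.fft2(fbp_noise)) ** 2

# Hypothetical ASiR-like NPS: a low-pass shaped version of the FBP NPS.
u = np.fft.fftfreq(n)[:, None]
v = np.fft.fftfreq(n)[None, :]
shaping = 1.0 / (1.0 + (np.hypot(u, v) / 0.1) ** 2)
nps_asir = nps_fbp * shaping

# Transfer function of the assumed filter layer.
H = np.sqrt(nps_asir / nps_fbp)

# Applying H in the frequency domain synthesizes ASiR-like noise from FBP noise.
synth = np.real(np.fft.ifft2(np.fft.fft2(fbp_noise) * H))
```

Because H is unity at zero frequency and falls off at high frequency, the filtered image keeps its mean while losing high-frequency noise, which is the blurring/noise-reduction trade-off the paper quantifies.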

  20. Research on a Nonlinear Robust Adaptive Control Method of the Elbow Joint of a Seven-Function Hydraulic Manipulator Based on Double-Screw-Pair Transmission

    Directory of Open Access Journals (Sweden)

    Gaosheng Luo

    2014-01-01

    Full Text Available A robust adaptive control method with full-state feedback is proposed based on the fact that the elbow joint of a seven-function hydraulic manipulator with double-screw-pair transmission features the following control characteristics: a strongly nonlinear hydraulic system, parameter uncertainties susceptible to temperature and pressure changes of the external environment, and unknown external disturbances. Combined with the design method of the back-stepping controller, the asymptotic stability of the control system in the presence of disturbances from uncertain systematic parameters and unknown external disturbances was demonstrated using Lyapunov stability theory. Taking the elbow joint of the seven-function master-slave hydraulic manipulator for the 4500 m Deep-Sea Working System as the research subject, a comparative study was conducted using the control method presented in this paper under unknown external disturbances. Simulations and experiments with different unknown external disturbances showed that (1) the proposed controller could robustly track the desired reference trajectory with satisfactory dynamic performance and steady accuracy and that (2) the modified parameter adaptive laws could also guarantee that the estimated parameters are bounded.

  1. Taylor-series method for four-nucleon wave functions

    International Nuclear Information System (INIS)

    Sandulescu, A.; Tarnoveanu, I.; Rizea, M.

    1977-09-01

    The Taylor-series method for transforming the infinite or finite well two-nucleon wave functions from individual coordinates to relative and c.m. coordinates, by expanding the single particle shell model wave functions around the c.m. of the system, is generalized to four-nucleon wave functions. The connections with the Talmi-Moshinsky method for two and four harmonic oscillator wave functions are also deduced. For both methods, Fortran IV programs for the expansion coefficients have been written and the equivalence of the corresponding expressions proved numerically. (author)

  2. Sliding mode control of photoelectric tracking platform based on the inverse system method

    Directory of Open Access Journals (Sweden)

    Yao Zong Chen

    2016-01-01

    Full Text Available In order to improve the photoelectric tracking platform tracking performance, an integral sliding mode control strategy based on the inverse system decoupling method is proposed. The electromechanical dynamic model is established based on multi-body system theory and the Newton-Euler method. The coupled multi-input multi-output (MIMO nonlinear system is transformed into two pseudo-linear single-input single-output (SISO subsystems based on the inverse system method. An integral sliding mode control scheme is designed for the decoupled pseudo-linear system. In order to eliminate the system chattering phenomenon caused by the traditional sign function in the sliding-mode controller, the sign function is replaced by the Sigmoid function. Simulation results show that the proposed decoupling method and control strategy can restrain the influences of internal coupling and disturbance effectively, with better robustness and higher tracking accuracy.

  3. Multi-phase flow monitoring with electrical impedance tomography using level set based method

    International Nuclear Information System (INIS)

    Liu, Dong; Khambampati, Anil Kumar; Kim, Sin; Kim, Kyung Youn

    2015-01-01

    Highlights: • LSM has been used for shape reconstruction to monitor multi-phase flow using EIT. • Multi-phase level set model for conductivity is represented by two level set functions. • LSM handles topological merging and breaking naturally during evolution process. • To reduce the computational time, a narrowband technique was applied. • Use of narrowband and optimization approach results in efficient and fast method. - Abstract: In this paper, a level set-based reconstruction scheme is applied to multi-phase flow monitoring using electrical impedance tomography (EIT). The proposed scheme involves applying a narrowband level set method to solve the inverse problem of finding the interface between the regions having different conductivity values. The multi-phase level set model for the conductivity distribution inside the domain is represented by two level set functions. The key principle of the level set-based method is to implicitly represent the shape of the interface as the zero level set of a higher dimensional function and then solve a set of partial differential equations. The level set-based scheme handles topological merging and breaking naturally during the evolution process. It also offers several advantages compared to the traditional pixel-based approach. The level set-based method for multi-phase flow is tested with numerical and experimental data. It is found that the level set-based method has better reconstruction performance than the pixel-based method.
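The "topological merging" property can be seen numerically with a generic level set sketch (not the EIT reconstruction itself): represent each region by a signed distance function, take the pointwise minimum for the union, and the zero level set merges automatically once the regions overlap.

```python
import numpy as np

def circle_phi(X, Y, cx, cy, r):
    # Signed distance to a circle: negative inside, positive outside,
    # so the interface is the zero level set.
    return np.hypot(X - cx, Y - cy) - r

x = np.linspace(-2, 2, 401)
X, Y = np.meshgrid(x, x)

# Union of two regions = pointwise minimum of their level set functions.
disjoint = np.minimum(circle_phi(X, Y, -1.0, 0, 0.5), circle_phi(X, Y, 1.0, 0, 0.5))
merged   = np.minimum(circle_phi(X, Y, -0.3, 0, 0.5), circle_phi(X, Y, 0.3, 0, 0.5))
```

Counting grid cells with phi < 0 gives the enclosed area: two separated circles contribute their full areas, while the overlapping pair yields a single merged region of smaller total area, with no explicit handling of the topology change.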

  4. Optimizing distance-based methods for large data sets

    Science.gov (United States)

    Scholl, Tobias; Brenner, Thomas

    2015-10-01

    Distance-based methods for measuring the spatial concentration of industries have received increasing popularity in the spatial econometrics community. However, a limiting factor for using these methods is their computational complexity, since both their memory requirements and running times are in O(n^2). In this paper, we present an algorithm with constant memory requirements and shorter running time, enabling distance-based methods to deal with large data sets. We discuss three recent distance-based methods in spatial econometrics: the D&O-Index by Duranton and Overman (Rev Econ Stud 72(4):1077-1106, 2005), the M-function by Marcon and Puech (J Econ Geogr 10(5):745-762, 2010) and the Cluster-Index by Scholl and Brenner (Reg Stud (ahead-of-print):1-15, 2014). Finally, we present an alternative calculation for the latter index that allows the use of data sets with millions of firms.
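The memory side of the problem is easy to illustrate (this is a generic sketch, not the authors' algorithm): rather than materializing all n(n-1)/2 pairwise distances, which needs O(n^2) memory, the distances can be streamed into a fixed-size histogram, so the density estimate underlying a distance-based index needs only O(B) memory for B bins.

```python
import math

def distance_histogram(points, n_bins=10, d_max=2.0):
    # Stream all pairwise distances into a fixed-size histogram:
    # still O(n^2) time, but only O(n_bins) extra memory.
    counts = [0] * n_bins
    for i in range(len(points)):
        xi, yi = points[i]
        for j in range(i + 1, len(points)):
            xj, yj = points[j]
            d = math.hypot(xi - xj, yi - yj)
            b = min(int(d / d_max * n_bins), n_bins - 1)
            counts[b] += 1
    return counts
```

For three points at (0,0), (1,0), (0,1), the distances 1, 1 and sqrt(2) land in bins 5 and 7 of a ten-bin histogram over [0, 2].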

  5. Asynchronous Gossip-Based Gradient-Free Method for Multiagent Optimization

    OpenAIRE

    Deming Yuan

    2014-01-01

    This paper considers the constrained multiagent optimization problem. The objective function of the problem is a sum of convex functions, each of which is known by a specific agent only. For solving this problem, we propose an asynchronous distributed method that is based on gradient-free oracles and gossip algorithm. In contrast to the existing work, we do not require that agents be capable of computing the subgradients of their objective functions and coordinating their...

  6. Temporal quadratic expansion nodal Green's function method

    International Nuclear Information System (INIS)

    Liu Cong; Jing Xingqing; Xu Xiaolin

    2000-01-01

    A new approach is presented to efficiently solve the three-dimensional space-time reactor dynamics equations, overcoming the disadvantages of current methods. In the Temporal Quadratic Expansion Nodal Green's Function Method (TQE/NGFM), the Quadratic Expansion Method (QEM) is used for the temporal solution, with the Nodal Green's Function Method (NGFM) employed for the spatial solution. Test calculations using TQE/NGFM show that its time step size can be 5-20 times larger than that of the Fully Implicit Method (FIM) for similar precision. Additionally, the spatial mesh size with NGFM can be nearly 20 times larger than that of the finite difference method. TQE/NGFM is thus proved to be an efficient reactor dynamics analysis method.

  7. Forecasting nonlinear chaotic time series with function expression method based on an improved genetic-simulated annealing algorithm.

    Science.gov (United States)

    Wang, Jun; Zhou, Bi-hua; Zhou, Shu-dao; Sheng, Zheng

    2015-01-01

    The paper proposes a novel function expression method to forecast chaotic time series, using an improved genetic-simulated annealing (IGSA) algorithm to establish the optimum function expression that describes the behavior of the time series. To deal with the weaknesses of the genetic algorithm, the proposed algorithm incorporates the simulated annealing operation, with its strong local search ability, into the genetic algorithm to enhance the performance of optimization; the fitness function and genetic operators are also improved. Finally, the method is applied to the chaotic time series of the Quadratic and Rossler maps for validation. The effect of noise in the chaotic time series is also studied numerically. The numerical results verify that the method can forecast chaotic time series with high precision and effectiveness, and the forecasting precision in the presence of noise is also satisfactory. It can be concluded that the IGSA algorithm is efficient and superior.
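The core hybridization idea, a Metropolis acceptance step on mutations inside a genetic loop, can be sketched on a toy minimization problem (an assumption-laden illustration; the paper's IGSA additionally improves the fitness function and genetic operators, which are not reproduced here):

```python
import math
import random

def igsa_minimize(f, pop_size=20, gens=200, sigma=0.5, t0=1.0, seed=0):
    # Toy genetic algorithm with a simulated-annealing acceptance
    # rule on mutations (the SA ingredient of the hybrid).
    rng = random.Random(seed)
    pop = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for g in range(gens):
        temp = t0 * (1 - g / gens) + 1e-6       # cooling schedule
        pop.sort(key=f)
        # selection + crossover: breed children from the better half
        parents = pop[: pop_size // 2]
        children = [0.5 * (rng.choice(parents) + rng.choice(parents))
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
        # mutation with Metropolis acceptance: worsening moves are
        # accepted with probability exp(-delta/temp), shrinking as temp cools
        for i in range(len(pop)):
            cand = pop[i] + rng.gauss(0, sigma)
            delta = f(cand) - f(pop[i])
            if delta < 0 or rng.random() < math.exp(-delta / temp):
                pop[i] = cand
    return min(pop, key=f)
```

At high temperature the population explores widely; as the temperature decays only improving mutations survive, giving the local refinement that a plain genetic algorithm lacks.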

  8. Point Set Denoising Using Bootstrap-Based Radial Basis Function.

    Science.gov (United States)

    Liew, Khang Jie; Ramli, Ahmad; Abd Majid, Ahmad

    2016-01-01

    This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.

  9. The derivation of the Doppler broadening function using Frobenius method

    International Nuclear Information System (INIS)

    Palma, Daniel A.P.; Martinez, Aquilino S.; Silva, Fernando C.

    2006-01-01

    An analytical approximation of the Doppler broadening function ψ(ξ,x) is proposed. This approximation is based on the solution of the differential equation for ψ(ξ,x) using the Frobenius method and the variation of parameters. The analytical form derived for ψ(ξ,x) in terms of elementary functions is simple and precise. It can be useful for applications related to the treatment of nuclear resonances, mainly for the calculation of multigroup parameters and resonance self-shielding factors, the latter being used to correct microscopic cross-section measurements by the activation technique. (author)
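Any analytical approximation of ψ can be checked against brute-force quadrature of one common form of its defining integral, ψ(ξ,x) = (ξ/(2√π)) ∫ exp(−ξ²(x−y)²/4)/(1+y²) dy (a reference sketch, not the authors' Frobenius-based formula):

```python
import math

def psi(xi, x, y_max=50.0, n=50_000):
    # Doppler broadening function by trapezoidal quadrature:
    # psi(xi, x) = (xi / (2*sqrt(pi))) * Int exp(-xi^2 (x-y)^2 / 4) / (1 + y^2) dy
    h = 2 * y_max / n
    total = 0.0
    for k in range(n + 1):
        y = -y_max + k * h
        w = 0.5 if k in (0, n) else 1.0
        total += w * math.exp(-xi * xi * (x - y) ** 2 / 4.0) / (1.0 + y * y)
    return xi / (2.0 * math.sqrt(math.pi)) * total * h
```

Useful sanity checks: ψ is even in x, and in the large-ξ (zero-temperature) limit it approaches the natural line shape 1/(1+x²), so ψ(50, 0) ≈ 1 and ψ(50, 2) ≈ 0.2.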

  10. A Sequence and Structure Based Method to Predict Putative Substrates, Functions and Regulatory Networks of Endo Proteases

    Science.gov (United States)

    Venkatraman, Prasanna; Balakrishnan, Satish; Rao, Shashidhar; Hooda, Yogesh; Pol, Suyog

    2009-01-01

    Background Proteases play a central role in cellular homeostasis and are responsible for the spatio-temporal regulation of function. Many putative proteases have been recently identified through genomic approaches, leading to a surge in global profiling attempts to characterize their function. Through such efforts and others it has become evident that many proteases play non-traditional roles. Accordingly, the number and the variety of the substrate repertoire of proteases are expected to be much larger than previously assumed. In line with such global profiling attempts, we present here a method for the prediction of natural substrates of endo proteases (human proteases used as an example) by employing short peptide sequences as specificity determinants. Methodology/Principal Findings Our method incorporates specificity determinants unique to individual enzymes and physiologically relevant dual filters, namely solvent accessible surface area (a parameter dependent on protein three-dimensional structure) and subcellular localization. By incorporating such hitherto unused principles in prediction methods, a novel ligand docking strategy to mimic substrate binding at the active site of the enzyme, and GO functions, we identify and perform subjective validation on putative substrates of matriptase and highlight new functions of the enzyme. Using relative solvent accessibility to rank order, we show how new protease regulatory networks and enzyme cascades can be created. Conclusion We believe that our physiologically relevant computational approach would be a very useful complementary method in current-day attempts to profile proteases (endo proteases in particular) and their substrates. In addition, by using functional annotations, we have demonstrated how normal and unknown functions of a protease can be envisaged. We have developed a network which can be integrated to create a proteolytic world. This network can in turn be extended to integrate other regulatory

  11. A Layered Searchable Encryption Scheme with Functional Components Independent of Encryption Methods

    Science.gov (United States)

    Luo, Guangchun; Qin, Ke

    2014-01-01

    Searchable encryption techniques enable users to securely store and search their documents over a remote semitrusted server, which is especially suitable for protecting sensitive data in the cloud. However, various settings (based on symmetric or asymmetric encryption) and functionalities (ranked keyword query, range query, phrase query, etc.) are often realized by different methods with different searchable structures that are generally not compatible with each other, which limits the scope of application and hinders functional extension. We prove that an asymmetric searchable structure can be converted to a symmetric structure, and that functions can be modeled separately from the core searchable structure. Based on this observation, we propose a layered searchable encryption (LSE) scheme, which provides compatibility, flexibility, and security for various settings and functionalities. In this scheme, the outputs of the core searchable component based on either symmetric or asymmetric setting are converted to some uniform mappings, which are then transmitted to loosely coupled functional components to further filter the results. In such a way, all functional components could directly support both symmetric and asymmetric settings. Based on LSE, we propose two representative and novel constructions for ranked keyword query (previously only available in symmetric schemes) and range query (previously only available in asymmetric schemes). PMID:24719565
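As a toy illustration of how a symmetric searchable structure can expose only uniform, opaque mappings (an assumption-laden sketch using stdlib HMAC, not the LSE construction): keywords are mapped to deterministic keyed tokens, the server matches only token values, and downstream "functional components" could then filter or rank the returned document ids.

```python
import hmac
import hashlib

class SearchableIndex:
    # Minimal symmetric searchable index: the server-side dictionary
    # stores only HMAC tokens, never the plaintext keywords.
    def __init__(self, key: bytes):
        self._key = key
        self._index = {}          # token (hex) -> set of doc ids

    def _token(self, word: str) -> str:
        # Deterministic keyed mapping: the "uniform mapping" of a keyword.
        return hmac.new(self._key, word.lower().encode(), hashlib.sha256).hexdigest()

    def add_document(self, doc_id: str, text: str) -> None:
        for word in set(text.lower().split()):
            self._index.setdefault(self._token(word), set()).add(doc_id)

    def search(self, word: str) -> set:
        return self._index.get(self._token(word), set())
```

A client holding the key issues tokens; without the key, the stored index reveals only token-to-id associations.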

  12. Effects of Computer-Based Training on Procedural Modifications to Standard Functional Analyses

    Science.gov (United States)

    Schnell, Lauren K.; Sidener, Tina M.; DeBar, Ruth M.; Vladescu, Jason C.; Kahng, SungWoo

    2018-01-01

    Few studies have evaluated methods for training decision-making when functional analysis data are undifferentiated. The current study evaluated computer-based training to teach 20 graduate students to arrange functional analysis conditions, analyze functional analysis data, and implement procedural modifications. Participants were exposed to…

  13. An integrated miRNA functional screening and target validation method for organ morphogenesis.

    Science.gov (United States)

    Rebustini, Ivan T; Vlahos, Maryann; Packer, Trevor; Kukuruzinska, Maria A; Maas, Richard L

    2016-03-16

    The relative ease of identifying microRNAs and their increasing recognition as important regulators of organogenesis motivate the development of methods to efficiently assess microRNA function during organ morphogenesis. In this context, embryonic organ explants provide a reliable and reproducible system that recapitulates some of the important early morphogenetic processes during organ development. Here we present a method to target microRNA function in explanted mouse embryonic organs. Our method combines the use of peptide-based nanoparticles to transfect specific microRNA inhibitors or activators into embryonic organ explants, with a microRNA pulldown assay that allows direct identification of microRNA targets. This method provides effective assessment of microRNA function during organ morphogenesis, allows prioritization of multiple microRNAs in parallel for subsequent genetic approaches, and can be applied to a variety of embryonic organs.

  14. Nonlinear System Identification via Basis Functions Based Time Domain Volterra Model

    Directory of Open Access Journals (Sweden)

    Yazid Edwar

    2014-07-01

    Full Text Available This paper proposes a basis functions based time domain Volterra model for nonlinear system identification. The Volterra kernels are expanded using complex exponential basis functions and estimated via a genetic algorithm (GA). The accuracy and practicability of the proposed method are then assessed experimentally using a 1:100 scale model of a prototype truss spar platform. Identification results in the time and frequency domains are presented, and coherence functions are computed to check the quality of the identification results. It is shown that the experimental data and the results of the proposed method are in good agreement.

  15. Improved quasi-static nodal green's function method

    International Nuclear Information System (INIS)

    Li Junli; Jing Xingqing; Hu Dapu

    1997-01-01

    The Improved Quasi-Static Nodal Green's Function Method (IQS/NGFM) is presented as a new kinetic method. To solve the three-dimensional transient problem, the Improved Quasi-Static method is adopted for the temporal problem, increasing the time step as much as possible so as to decrease the number of spatial calculations. The time step of IQS/NGFM can be 5-10 times longer than that of the Fully Implicit Method. In the spatial calculation, the NGFM is used to obtain the distribution of the shape function, and its spatial mesh can be nearly 20 times larger than that of the finite difference method. The IQS/NGFM is therefore considered an efficient kinetic method.

  16. An overview of modal-based damage identification methods

    Energy Technology Data Exchange (ETDEWEB)

    Farrar, C.R.; Doebling, S.W. [Los Alamos National Lab., NM (United States). Engineering Analysis Group

    1997-09-01

    This paper provides an overview of methods that examine changes in measured vibration response to detect, locate, and characterize damage in structural and mechanical systems. The basic idea behind this technology is that modal parameters (notably frequencies, mode shapes, and modal damping) are functions of the physical properties of the structure (mass, damping, and stiffness). Therefore, changes in the physical properties will cause detectable changes in the modal properties. The motivation for the development of this technology is first provided. The methods are then categorized according to various criteria such as the level of damage detection provided, model-based vs. non-model-based methods and linear vs. nonlinear methods. This overview is limited to methods that can be adapted to a wide range of structures (i.e., methods that do not depend on a particular assumed model form for the system, such as beam-bending behavior, and that are not based on updating finite element models). Next, the methods are described in general terms including difficulties associated with their implementation and their fidelity. Past, current and future-planned applications of this technology to actual engineering systems are summarized. The paper concludes with a discussion of critical issues for future research in the area of modal-based damage identification.
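The basic premise, stiffness loss shifts the natural frequencies, shows up already in a two-degree-of-freedom example (an illustrative sketch, not one of the surveyed methods):

```python
import numpy as np

def natural_frequencies(k1, k2, m=1.0):
    # 2-DOF spring-mass chain: ground--k1--m--k2--m.
    # Undamped natural frequencies from K phi = omega^2 M phi, M = m*I.
    K = np.array([[k1 + k2, -k2], [-k2, k2]], dtype=float)
    return np.sqrt(np.linalg.eigvalsh(K) / m)

healthy = natural_frequencies(100.0, 100.0)
damaged = natural_frequencies(100.0, 70.0)   # 30% stiffness loss in spring 2
```

Because the stiffness reduction makes the change in K negative semidefinite, every natural frequency either stays the same or drops, which is exactly the signature modal-based damage detection looks for.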

  17. A functional-dependencies-based Bayesian networks learning method and its application in a mobile commerce system.

    Science.gov (United States)

    Liao, Stephen Shaoyi; Wang, Huai Qing; Li, Qiu Dan; Liu, Wei Yi

    2006-06-01

    This paper presents a new method for learning Bayesian networks from functional dependencies (FD) and third normal form (3NF) tables in relational databases. The method sets up a linkage between the theory of relational databases and probabilistic reasoning models, which is especially useful when data are incomplete or inaccurate. The effectiveness and practicability of the proposed method are demonstrated by its implementation in a mobile commerce system.

  18. A path-based measurement for human miRNA functional similarities using miRNA-disease associations

    Science.gov (United States)

    Ding, Pingjian; Luo, Jiawei; Xiao, Qiu; Chen, Xiangtao

    2016-09-01

    Compared with sequence and expression similarity, miRNA functional similarity is important for biological research and for many applications such as miRNA clustering, miRNA function prediction, miRNA synergism identification and disease miRNA prioritization. However, existing methods typically rely on predicted miRNA targets, which have high false positive and false negative rates, to calculate miRNA functional similarity. Meanwhile, it is difficult to achieve high reliability of miRNA functional similarity with miRNA-disease associations. Therefore, there is a growing need to improve the measurement of miRNA functional similarity. In this study, we develop a novel path-based calculation method of miRNA functional similarity based on miRNA-disease associations, called MFSP. Compared with other methods, our method obtains higher average functional similarity for intra-family and intra-cluster selected groups, and lower average functional similarity for inter-family and inter-cluster miRNA pairs. In addition, smaller p-values are achieved when applying the Wilcoxon rank-sum test and the Kruskal-Wallis test to different miRNA groups. The relationship between miRNA functional similarity and other information sources is exhibited. Furthermore, the miRNA functional network constructed with MFSP is a scale-free and small-world network. Moreover, the higher AUC for miRNA-disease prediction indicates the ability of MFSP to uncover miRNA functional similarity.

  19. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method

    International Nuclear Information System (INIS)

    Shidahara, Miho; Kato, Takashi; Kawatsu, Shoji; Yoshimura, Kumiko; Ito, Kengo; Watabe, Hiroshi; Kim, Kyeong Min; Iida, Hidehiro; Kato, Rikio

    2005-01-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I_AC^μb with Chang's attenuation correction factor. The scatter component image is estimated by convolving I_AC^μb with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine. (orig.)

  20. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method.

    Science.gov (United States)

    Shidahara, Miho; Watabe, Hiroshi; Kim, Kyeong Min; Kato, Takashi; Kawatsu, Shoji; Kato, Rikio; Yoshimura, Kumiko; Iida, Hidehiro; Ito, Kengo

    2005-10-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with 99mTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I_AC^μb with Chang's attenuation correction factor. The scatter component image is estimated by convolving I_AC^μb with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and 99mTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine.

  1. Point Set Denoising Using Bootstrap-Based Radial Basis Function.

    Directory of Open Access Journals (Sweden)

    Khang Jie Liew

    Full Text Available This paper examines the application of a bootstrap test error estimation of radial basis functions, specifically thin-plate spline fitting, in surface smoothing. The presence of noisy data is a common issue of the point set model that is generated from 3D scanning devices, and hence, point set denoising is one of the main concerns in point set modelling. Bootstrap test error estimation, which is applied when searching for the smoothing parameters of radial basis functions, is revisited. The main contribution of this paper is a smoothing algorithm that relies on a bootstrap-based radial basis function. The proposed method incorporates a k-nearest neighbour search and then projects the point set to the approximated thin-plate spline surface. Therefore, the denoising process is achieved, and the features are well preserved. A comparison of the proposed method with other smoothing methods is also carried out in this study.

  2. Computational Methods and Function Theory

    CERN Document Server

    Saff, Edward; Salinas, Luis; Varga, Richard

    1990-01-01

    The volume is devoted to the interaction of modern scientific computation and classical function theory. Many problems in pure and more applied function theory can be tackled using modern computing facilities: numerically as well as in the sense of computer algebra. On the other hand, computer algorithms are often based on complex function theory, and dedicated research on their theoretical foundations can lead to great enhancements in performance. The contributions - original research articles, a survey and a collection of problems - cover a broad range of such problems.

  3. Protein-protein interaction network-based detection of functionally similar proteins within species.

    Science.gov (United States)

    Song, Baoxing; Wang, Fen; Guo, Yang; Sang, Qing; Liu, Min; Li, Dengyun; Fang, Wei; Zhang, Deli

    2012-07-01

    Although functionally similar proteins across species have been widely studied, functionally similar proteins within species showing low sequence similarity have not been examined in detail. Identification of these proteins is of significant importance for understanding biological functions, the evolution of protein families, the progression of co-evolution, and convergent evolution, insights which cannot be obtained by detecting functionally similar proteins across species. Here, we explored a method of detecting functionally similar proteins within species based on graph theory. After denoting protein-protein interaction networks as graphs, we split the graphs into subgraphs using the 1-hop method. Proteins with functional similarities within a species were detected using a modified shortest path method to compare these subgraphs and find the eligible optimal results. Using seven protein-protein interaction networks and this method, some functionally similar proteins with low sequence similarity that cannot be detected by sequence alignment were identified. By analyzing the results, we found that it is sometimes difficult to separate homologous from convergent evolution. Evaluation of the performance of our method by gene ontology term overlap showed that its precision was excellent. Copyright © 2012 Wiley Periodicals, Inc.

  4. Relationships between the generalized functional method and other methods of nonimaging optical design.

    Science.gov (United States)

    Bortz, John; Shatz, Narkis

    2011-04-01

    The recently developed generalized functional method provides a means of designing nonimaging concentrators and luminaires for use with extended sources and receivers. We explore the mathematical relationships between optical designs produced using the generalized functional method and edge-ray, aplanatic, and simultaneous multiple surface (SMS) designs. Edge-ray and dual-surface aplanatic designs are shown to be special cases of generalized functional designs. In addition, it is shown that dual-surface SMS designs are closely related to generalized functional designs and that certain computational advantages accrue when the two design methods are combined. A number of examples are provided. © 2011 Optical Society of America

  5. A Sequential Optimization Sampling Method for Metamodels with Radial Basis Functions

    Science.gov (United States)

    Pan, Guang; Ye, Pengcheng; Yang, Zhidong

    2014-01-01

    Metamodels have been widely used in engineering design to facilitate analysis and optimization of complex systems that involve computationally expensive simulation programs. The accuracy of metamodels is strongly affected by the sampling methods. In this paper, a new sequential optimization sampling method is proposed. Based on the new sampling method, metamodels can be constructed repeatedly through the addition of sampling points, namely the extremum points of the metamodel and the minimum points of a density function. More accurate metamodels are then constructed by this procedure. The validity and effectiveness of the proposed sampling method are examined by studying typical numerical examples. PMID:25133206

  6. A Survey of Functional Behavior Assessment Methods Used by Behavior Analysts in Practice

    Science.gov (United States)

    Oliver, Anthony C.; Pratt, Leigh A.; Normand, Matthew P.

    2015-01-01

    To gather information about the functional behavior assessment (FBA) methods behavior analysts use in practice, we sent a web-based survey to 12,431 behavior analysts certified by the Behavior Analyst Certification Board. Ultimately, 724 surveys were returned, with the results suggesting that most respondents regularly use FBA methods, especially…

  7. A multi-label learning based kernel automatic recommendation method for support vector machine.

    Science.gov (United States)

    Zhang, Xueying; Song, Qinbao

    2015-01-01

    Choosing an appropriate kernel is critical when classifying a new problem with a Support Vector Machine. So far, more attention has been paid to constructing new kernels and choosing suitable parameter values for a specific kernel function than to kernel selection. Furthermore, most current kernel selection methods focus on seeking the single kernel with the highest classification accuracy via cross-validation; they are time consuming and ignore the differences in the number of support vectors and the CPU time of SVMs with different kernels. Considering the tradeoff between classification success ratio and CPU time, there may be multiple kernel functions performing equally well on the same classification problem. To automatically select appropriate kernel functions for a given data set, we propose a multi-label learning based kernel recommendation method built on the data characteristics. For each data set, a meta-knowledge data base is first created by extracting the feature vector of data characteristics and identifying the corresponding applicable kernel set. Then a kernel recommendation model is constructed on the generated meta-knowledge data base with the multi-label classification method. Finally, appropriate kernel functions are recommended for a new data set by the recommendation model according to the characteristics of the new data set. Extensive experiments over 132 UCI benchmark data sets, with five different types of data set characteristics, eleven typical kernels (Linear, Polynomial, Radial Basis Function, Sigmoidal function, Laplace, Multiquadric, Rational Quadratic, Spherical, Spline, Wave and Circular), and five multi-label classification methods demonstrate that, compared with existing kernel selection methods and the most widely used RBF kernel, the SVM with the kernel function recommended by our proposed method achieved the highest classification performance.
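The costly search that a recommender avoids can be sketched with a tiny kernel ridge classifier in NumPy (a minimal hold-out version under assumed hyperparameters; the paper's SVM-based cross-validation baseline and its meta-learning model are not reproduced):

```python
import numpy as np

def kernel(X, Z, kind, gamma=5.0):
    # Two candidate kernels; gamma is an assumed RBF width.
    if kind == "linear":
        return X @ Z.T
    if kind == "rbf":
        d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    raise ValueError(kind)

def select_kernel(Xtr, ytr, Xva, yva, kinds=("linear", "rbf"), lam=1e-3):
    # Fit kernel ridge on {-1, +1} labels and keep the kernel with the
    # best hold-out accuracy -- the per-dataset search a recommender skips.
    best = None
    for kind in kinds:
        K = kernel(Xtr, Xtr, kind)
        alpha = np.linalg.solve(K + lam * np.eye(len(Xtr)), ytr)
        acc = float(np.mean(np.sign(kernel(Xva, Xtr, kind) @ alpha) == yva))
        if best is None or acc > best[0]:
            best = (acc, kind)
    return best[1], best[0]
```

On an XOR-style problem (label = sign of x0*x1) the linear kernel cannot do better than chance, so the hold-out search correctly picks the RBF kernel, at the cost of fitting every candidate.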

  8. Sinc-function based Network

    DEFF Research Database (Denmark)

    Madsen, Per Printz

    1998-01-01

    The purpose of this paper is to describe a neural network (SNN) that is based on Shannon's ideas of reconstructing a real continuous function from its samples. The basic function used in this network is the Sinc-function. Two learning algorithms are described. A simple one called IM...

  9. A valuation method on physiological functionality of food materials

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2001-10-15

    This report is about a valuation method on the physiological functionality of food materials. It includes ten reports: the maintenance condition of functional foods in Korea by Kim, Byeong Tae; a management plan and classification of functional foods by Jung, Myeong Seop; a measurement method for the vitality of functional foods for preventing diabetes; a measurement method for aging-delay activation by Lee, Jae Yong; improvement of the anti-hypertension effectiveness of functional foods by Park, Jeon Hong; and a practice case for the method of testing anti-gastritis and anti-ulcer effects by Lee, Eun Bang.

  10. A valuation method on physiological functionality of food materials

    International Nuclear Information System (INIS)

    2001-10-01

    This report is about a valuation method on the physiological functionality of food materials. It includes ten reports: the maintenance condition of functional foods in Korea by Kim, Byeong Tae; a management plan and classification of functional foods by Jung, Myeong Seop; a measurement method for the vitality of functional foods for preventing diabetes; a measurement method for aging-delay activation by Lee, Jae Yong; improvement of the anti-hypertension effectiveness of functional foods by Park, Jeon Hong; and a practice case for the method of testing anti-gastritis and anti-ulcer effects by Lee, Eun Bang.

  11. LIF: A new Kriging based learning function and its application to structural reliability analysis

    International Nuclear Information System (INIS)

    Sun, Zhili; Wang, Jian; Li, Rui; Tong, Cao

    2017-01-01

    The main task of structural reliability analysis is to estimate the failure probability of a studied structure, taking the randomness of input variables into account. To represent structural behavior realistically, numerical models become more and more complicated and time-consuming, which increases the difficulty of reliability analysis. Therefore, sequential strategies for the design of experiments (DoE) have been raised. In this research, a new learning function, named the least improvement function (LIF), is proposed to update the DoE of the Kriging-based reliability analysis method. LIF quantifies how much the accuracy of the estimated failure probability will improve if a given point is added to the DoE. It takes both the statistical information provided by the Kriging model and the joint probability density function of the input variables into account, which is the most important difference from existing learning functions. The maximum point of LIF is approximately determined with Markov Chain Monte Carlo (MCMC) simulation. A new reliability analysis method is developed based on the Kriging model, in which LIF, MCMC and Monte Carlo (MC) simulation are employed. Three examples are analyzed. Results show that LIF and the new method proposed in this research are very efficient when dealing with nonlinear performance functions, small failure probabilities, complicated limit states and engineering problems of high dimension. - Highlights: • Least improvement function (LIF) is proposed for structural reliability analysis. • LIF takes both Kriging based statistical information and joint PDF into account. • A reliability analysis method is constructed based on Kriging, MCS and LIF.
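The sequential-DoE loop that this family of Kriging-based methods shares can be sketched in a few lines. The piecewise-linear surrogate and the pdf-weighted learning score below are simplified stand-ins, not the authors' Kriging model or LIF formula; the performance function `g` and all numbers are invented for illustration:

```python
import math
import random

def g(x):
    # true (illustrative) performance function: failure when g(x) < 0
    return 3.0 - x

def phi(x):
    # standard normal probability density
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def interp(doe, x):
    # 1-D piecewise-linear surrogate built on the current DoE
    pts = sorted(doe)
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    if x <= xs[0]:
        i = 0
    elif x >= xs[-1]:
        i = len(xs) - 2
    else:
        i = max(j for j in range(len(xs) - 1) if xs[j] <= x)
    t = (x - xs[i]) / (xs[i + 1] - xs[i])
    return ys[i] + t * (ys[i + 1] - ys[i])

random.seed(0)
cand = [random.gauss(0.0, 1.0) for _ in range(5000)]   # Monte Carlo population
doe = [(-2.0, g(-2.0)), (0.0, g(0.0)), (4.0, g(4.0))]  # initial design of experiments
chosen = {x for x, _ in doe}

for _ in range(10):
    # stand-in learning score: prefer candidates in high-density regions
    # that lie close to the predicted limit state (surrogate near zero)
    best = max((x for x in cand if x not in chosen),
               key=lambda x: phi(x) / (abs(interp(doe, x)) + 1e-9))
    chosen.add(best)
    doe.append((best, g(best)))          # evaluate the true model only here

# failure probability estimated on the cheap surrogate
pf = sum(1 for x in cand if interp(doe, x) < 0.0) / len(cand)
```

The point of the pattern is that the expensive model `g` is called only at the enrichment points, while the failure probability is counted on the surrogate over the whole Monte Carlo population.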

  12. Green's function method and its application to verification of diffusion models of GASFLOW code

    International Nuclear Information System (INIS)

    Xu, Z.; Travis, J.R.; Breitung, W.

    2007-07-01

    To validate the diffusion model and the aerosol particle model of the GASFLOW computer code, theoretical solutions of advection diffusion problems are developed by using the Green's function method. The work consists of a theory part and an application part. In the first part, the Green's functions of one-dimensional advection diffusion problems are solved in infinite, semi-infinite and finite domains with the Dirichlet, the Neumann and/or the Robin boundary conditions. Novel and effective image systems especially for the advection diffusion problems are made to find the Green's functions in a semi-infinite domain. Eigenfunction method is utilized to find the Green's functions in a bounded domain. In the case, key steps of a coordinate transform based on a concept of reversed time scale, a Laplace transform and an exponential transform are proposed to solve the Green's functions. Then the product rule of the multi-dimensional Green's functions is discussed in a Cartesian coordinate system. Based on the building blocks of one-dimensional Green's functions, the multi-dimensional Green's function solution can be constructed by applying the product rule. Green's function tables are summarized to facilitate the application of the Green's function. In the second part, the obtained Green's function solutions benchmark a series of validations to the diffusion model of gas species in continuous phase and the diffusion model of discrete aerosol particles in the GASFLOW code. Perfect agreements are obtained between the GASFLOW simulations and the Green's function solutions in case of the gas diffusion. Very good consistencies are found between the theoretical solutions of the advection diffusion equations and the numerical particle distributions in advective flows, when the drag force between the micron-sized particles and the conveying gas flow meets the Stokes' law about resistance. This situation is corresponding to a very small Reynolds number based on the particle
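For the infinite-domain case described above, the one-dimensional advection-diffusion Green's function is a Gaussian of variance 2Dt advected at the flow speed u. A minimal numerical check of its unit mass (a sketch with illustrative parameter values, not the GASFLOW validation itself) might look like:

```python
import math

def green_adv_diff(x, t, x0=0.0, u=1.0, D=0.1):
    """Green's function of c_t + u*c_x = D*c_xx on an infinite 1-D domain:
    a Gaussian spreading as sqrt(2*D*t) while advecting at speed u."""
    return (math.exp(-(x - x0 - u * t) ** 2 / (4.0 * D * t))
            / math.sqrt(4.0 * math.pi * D * t))

t = 0.5
xs = [i * 0.01 - 5.0 for i in range(1001)]            # grid covering the pulse
mass = sum(0.01 * green_adv_diff(x, t) for x in xs)   # should integrate to 1
peak = max(xs, key=lambda x: green_adv_diff(x, t))    # should sit near x = u*t
```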

  13. A Method for Functional Task Alignment Analysis of an Arthrocentesis Simulator.

    Science.gov (United States)

    Adams, Reid A; Gilbert, Gregory E; Buckley, Lisa A; Nino Fong, Rodolfo; Fuentealba, I Carmen; Little, Erika L

    2018-05-16

    During simulation-based education, simulators are subjected to procedures composed of a variety of tasks and processes. Simulators should functionally represent a patient in response to the physical actions of these tasks. The aim of this work was to describe a method for determining whether a simulator has sufficient functional task alignment (FTA) to be used in a simulation. Potential performance checklist items were gathered from published arthrocentesis guidelines and aggregated into a performance checklist using Lawshe's method. An expert panel used this performance checklist and an FTA analysis questionnaire to evaluate a simulator's ability to respond to the physical actions required by the performance checklist. Thirteen items, from a pool of 39, were included on the performance checklist. Experts had mixed reviews of the simulator's FTA and its suitability for use in simulation. Unexpectedly, some positive FTA ratings were found for several tasks where the simulator lacked functionality. By developing a detailed list of specific tasks required to complete a clinical procedure, and surveying experts on the simulator's response to those actions, educators can gain insight into the simulator's clinical accuracy and suitability. The unexpected positive FTA ratings for functional deficits suggest that further revision of the survey method is required.

  14. Development of diagnosis and maintenance support system for nuclear power plants with flexible inference function and knowledge base edition support function

    International Nuclear Information System (INIS)

    Fujii, Makoto; Seki, Eiji; Tai, Ichiro; Morioka, Toshihiko

    1988-01-01

    For reliable and efficient diagnosis and inspection work on nuclear power plant equipment, a 'Diagnosis and Maintenance Support System' has been developed. This system has functions to assist operators or engineers in observing and evaluating equipment conditions based on experts' knowledge. These functions are carried out through dialogue between the system and users. The system has two subsystems: a diagnosis subsystem and a knowledge base edition support subsystem. To achieve the functions of the diagnosis subsystem, a new method of knowledge processing for equipment diagnosis is adopted. This method is based on the concept of 'Cause Generation and Checking'. Knowledge for diagnosis is represented with modularized production rules, and each rule module consists of four different types of rules with a hierarchical structure. With this approach, the system is equipped with sufficient performance not only in its diagnosis function but also in a flexible man-machine interface. The knowledge base edition support subsystem (Graphical Rule Editor) is provided for this system. This editor has functions to display and edit the contents of the knowledge base as tree structures on a graphic display. With these functions, the efficiency of constructing the expert system is greatly improved. By applying the system to maintenance support of a neutron monitoring system, it is proved that it performs satisfactorily as a diagnosis and maintenance support system. (author)

  15. Integrative approaches to the prediction of protein functions based on the feature selection

    Directory of Open Access Journals (Sweden)

    Lee Hyunju

    2009-12-01

    Full Text Available Abstract Background Protein function prediction has been one of the most important issues in functional genomics. With the current availability of various genomic data sets, many researchers have attempted to develop integration models that combine all available genomic data for protein function prediction. These efforts have resulted in the improvement of prediction quality and the extension of prediction coverage. However, it has also been observed that integrating more data sources does not always increase the prediction quality. Therefore, selecting data sources that highly contribute to the protein function prediction has become an important issue. Results We present systematic feature selection methods that assess the contribution of genome-wide data sets to predict protein functions and then investigate the relationship between genomic data sources and protein functions. In this study, we use ten different genomic data sources in Mus musculus, including: protein-domains, protein-protein interactions, gene expressions, phenotype ontology, phylogenetic profiles and disease data sources to predict protein functions that are labelled with Gene Ontology (GO terms. We then apply two approaches to feature selection: exhaustive search feature selection using a kernel based logistic regression (KLR, and a kernel based L1-norm regularized logistic regression (KL1LR. In the first approach, we exhaustively measure the contribution of each data set for each function based on its prediction quality. In the second approach, we use the estimated coefficients of features as measures of contribution of data sources. Our results show that the proposed methods improve the prediction quality compared to the full integration of all data sources and other filter-based feature selection methods. We also show that contributing data sources can differ depending on the protein function. 
Furthermore, we observe that highly contributing data sets can be similar among

  16. Linear regression methods according to objective functions

    OpenAIRE

    Yasemin Sisman; Sebahattin Bektas

    2012-01-01

    The aim of the study is to explain the parameter estimation methods and the regression analysis. The simple linear regression methods grouped according to the objective function are introduced. The numerical solution is achieved for the simple linear regression methods according to the objective functions of the Least Squares and the Least Absolute Value adjustment methods. The success of the applied methods is analyzed using their objective function values.
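The comparison the abstract describes can be sketched by fitting a line with closed-form Least Squares and then evaluating both objective functions on the residuals (the data points below are invented for illustration; a full Least Absolute Value fit would additionally re-estimate the parameters under the L1 objective):

```python
# illustrative data, roughly y = 2x
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.0, 9.9]

# closed-form Least Squares estimates of intercept a and slope b
n = len(xs)
mx = sum(xs) / n
my = sum(ys) / n
b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
a = my - b * mx

residuals = [y - (a + b * x) for x, y in zip(xs, ys)]
sse = sum(r * r for r in residuals)    # Least Squares objective value
sae = sum(abs(r) for r in residuals)   # Least Absolute Value objective value
```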

  17. Effective-range function methods for charged particle collisions

    Science.gov (United States)

    Gaspard, David; Sparenberg, Jean-Marc

    2018-04-01

    Different versions of the effective-range function method for charged particle collisions are studied and compared. In addition, a novel derivation of the standard effective-range function is presented from the analysis of Coulomb wave functions in the complex plane of the energy. The recently proposed effective-range function denoted as Δℓ [Ramírez Suárez and Sparenberg, Phys. Rev. C 96, 034601 (2017), 10.1103/PhysRevC.96.034601] and an earlier variant [Hamilton et al., Nucl. Phys. B 60, 443 (1973), 10.1016/0550-3213(73)90193-4] are related to the standard function. The potential interest of Δℓ for the study of low-energy cross sections and weakly bound states is discussed in the framework of the proton-proton ¹S₀ collision. The resonant state of the proton-proton collision is successfully computed from the extrapolation of Δℓ instead of the standard function. It is shown that interpolating Δℓ can lead to useful extrapolation to negative energies, provided scattering data are known below one nuclear Rydberg energy (12.5 keV for the proton-proton system). This property is due to the connection between Δℓ and the effective-range function by Hamilton et al. that is discussed in detail. Nevertheless, such extrapolations to negative energies should be used with caution because Δℓ is not analytic at zero energy. The expected analytic properties of the main functions are verified in the complex energy plane by graphical color-based representations.

  18. A Multiple Criteria Decision Making Method Based on Relative Value Distances

    Directory of Open Access Journals (Sweden)

    Shyur Huan-jyh

    2015-12-01

    Full Text Available This paper proposes a new multiple criteria decision-making method called ERVD (election based on relative value distances. The s-shape value function is adopted to replace the expected utility function to describe the risk-averse and risk-seeking behavior of decision makers. Comparisons and experiments contrasting with the TOPSIS (Technique for Order Preference by Similarity to the Ideal Solution method are carried out to verify the feasibility of using the proposed method to represent the decision makers’ preference in the decision making process. Our experimental results show that the proposed approach is an appropriate and effective MCDM method.
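The s-shaped value function that ERVD adopts in place of the expected utility function is, in the classic Kahneman-Tversky form, concave for gains and convex but steeper for losses. The parameter values below are the standard literature estimates, assumed here for illustration, not necessarily the ones used in the paper:

```python
def value(x, alpha=0.88, beta=0.88, lam=2.25):
    """S-shaped prospect-theory value function: concave for gains (risk-averse),
    convex and steeper for losses (risk-seeking with loss aversion, lam > 1)."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** beta
```

A quick sanity check: a loss hurts more than an equal gain pleases (`-value(-1) > value(1)`), which is exactly the asymmetry the expected utility function cannot express.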

  19. Using Trial-Based Functional Analysis to Design Effective Interventions for Students Diagnosed with Autism Spectrum Disorder

    Science.gov (United States)

    Larkin, Wallace; Hawkins, Renee O.; Collins, Tai

    2016-01-01

    Functional behavior assessments and function-based interventions are effective methods for addressing the challenging behaviors of children; however, traditional functional analysis has limitations that impact usability in applied settings. Trial-based functional analysis addresses concerns relating to the length of time, level of expertise…

  20. Methods for deconvolving sparse positive delta function series

    International Nuclear Information System (INIS)

    Trussell, H.J.; Schwalbe, L.A.

    1981-01-01

    Sparse delta function series occur as data in many chemical analyses and seismic methods. These original data are often sufficiently degraded by the recording instrument response that the individual delta function peaks are difficult to distinguish and measure. A method, which has been used to measure these peaks, is to fit a parameterized model by a nonlinear least-squares fitting algorithm. The deconvolution approaches described have the advantage of not requiring a parameterized point spread function, nor do they expect a fixed number of peaks. Two new methods are presented. The maximum power technique is reviewed. A maximum a posteriori technique is introduced. Results on both simulated and real data by the two methods are presented. The characteristics of the data can determine which method gives superior results. 5 figures

  1. Comparison of lists of genes based on functional profiles

    Directory of Open Access Journals (Sweden)

    Salicrú Miquel

    2011-10-01

    Full Text Available Abstract Background How to compare studies on the basis of their biological significance is a problem of central importance in high-throughput genomics. Many methods for performing such comparisons are based on the information in databases of functional annotation, such as those that form the Gene Ontology (GO. Typically, they consist of analyzing gene annotation frequencies in some pre-specified GO classes, in a class-by-class way, followed by p-value adjustment for multiple testing. Enrichment analysis, where a list of genes is compared against a wider universe of genes, is the most common example. Results A new global testing procedure and a method incorporating it are presented. Instead of testing separately for each GO class, a single global test for all classes under consideration is performed. The test is based on the distance between the functional profiles, defined as the joint frequencies of annotation in a given set of GO classes. These classes may be chosen at one or more GO levels. The new global test is more powerful and accurate with respect to type I errors than the usual class-by-class approach. When applied to some real datasets, the results suggest that the method may also provide useful information that complements the tests performed using a class-by-class approach if gene counts are sparse in some classes. An R library, goProfiles, implements these methods and is available from Bioconductor, http://bioconductor.org/packages/release/bioc/html/goProfiles.html. Conclusions The method provides an inferential basis for deciding whether two lists are functionally different. For global comparisons it is preferable to the global chi-square test of homogeneity. Furthermore, it may provide additional information if used in conjunction with class-by-class methods.
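A functional profile, as defined above, is the vector of joint annotation frequencies over the chosen GO classes. A toy sketch of the profile distance the global test is built on (the counts are hypothetical, and the paper's actual test statistic additionally accounts for sampling variability of the frequencies):

```python
def functional_profile(counts):
    """Relative annotation frequencies of a gene list over a fixed set of GO classes."""
    total = sum(counts)
    return [c / total for c in counts]

def profile_distance(p, q):
    """Squared Euclidean distance between two functional profiles."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

# hypothetical annotation counts of two gene lists over three GO classes
p = functional_profile([30, 50, 20])
q = functional_profile([25, 55, 20])
d = profile_distance(p, q)
```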

  2. The Method of a Standalone Functional Verifying Operability of Sonar Control Systems

    Directory of Open Access Journals (Sweden)

    A. A. Sotnikov

    2014-01-01

    Full Text Available This article describes a method for standalone verification of a sonar control system, which is based on functional checking of control system operability. The main features of the realized method are the development of a valid mathematical model for simulation of sonar signals at the point of the hydroacoustic antenna, a valid representation of the sonar control system modes as a discrete Markov model, and functional object verification in real-time mode. Some ways are proposed to control computational complexity in case of insufficient computing resources of the simulation equipment, namely reduction of model functionality and reduction of adequacy. Experiments were made using testing equipment developed by a department of the Research Institute of Information Control Systems at Bauman Moscow State Technical University to verify the technical validity of industrial sonar complexes. On-board software was artificially changed to create malfunctions in the functionality of the sonar control systems during the verifying process in order to estimate the verifying system performance. The method's efficiency was proved by theory and experimental results in comparison with the basic methodology of verifying technical systems. This method could also be used in debugging on-board software of sonar complexes and in development of new promising algorithms of sonar signal processing.

  3. Effects of gross motor function and manual function levels on performance-based ADL motor skills of children with spastic cerebral palsy

    OpenAIRE

    Park, Myoung-Ok

    2017-01-01

    [Purpose] The purpose of this study was to determine effects of Gross Motor Function Classification System and Manual Ability Classification System levels on performance-based motor skills of children with spastic cerebral palsy. [Subjects and Methods] Twenty-three children with cerebral palsy were included. The Assessment of Motor and Process Skills was used to evaluate performance-based motor skills in daily life. Gross motor function was assessed using Gross Motor Function Classification S...

  4. Development of a practical image-based scatter correction method for brain perfusion SPECT: comparison with the TEW method

    Energy Technology Data Exchange (ETDEWEB)

    Shidahara, Miho; Kato, Takashi; Kawatsu, Shoji; Yoshimura, Kumiko; Ito, Kengo [National Center for Geriatrics and Gerontology Research Institute, Department of Brain Science and Molecular Imaging, Obu, Aichi (Japan); Watabe, Hiroshi; Kim, Kyeong Min; Iida, Hidehiro [National Cardiovascular Center Research Institute, Department of Investigative Radiology, Suita (Japan); Kato, Rikio [National Center for Geriatrics and Gerontology, Department of Radiology, Obu (Japan)

    2005-10-01

    An image-based scatter correction (IBSC) method was developed to convert scatter-uncorrected into scatter-corrected SPECT images. The purpose of this study was to validate this method by means of phantom simulations and human studies with ⁹⁹ᵐTc-labeled tracers, based on comparison with the conventional triple energy window (TEW) method. The IBSC method corrects scatter on the reconstructed image I_AC^μb with Chang's attenuation correction factor. The scatter component image is estimated by convolving I_AC^μb with a scatter function followed by multiplication with an image-based scatter fraction function. The IBSC method was evaluated with Monte Carlo simulations and ⁹⁹ᵐTc-ethyl cysteinate dimer SPECT human brain perfusion studies obtained from five volunteers. The image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were compared. Using data obtained from the simulations, the image counts and contrast of the scatter-corrected images obtained by the IBSC and TEW methods were found to be nearly identical for both gray and white matter. In human brain images, no significant differences in image contrast were observed between the IBSC and TEW methods. The IBSC method is a simple scatter correction technique feasible for use in clinical routine. (orig.)
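The convolve-and-subtract core of the IBSC idea can be sketched in one dimension. The count profile, the Gaussian-like kernel, and the constant scatter fraction below are invented for illustration; the real method additionally applies Chang's attenuation correction and an image-based (spatially varying) scatter fraction function:

```python
def convolve(signal, kernel):
    """Discrete convolution with a centred kernel, 'same'-size output,
    zero-padded at the borders."""
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(signal):
                acc += signal[idx] * k
        out.append(acc)
    return out

profile = [0, 0, 10, 80, 100, 80, 10, 0, 0]   # measured counts (illustrative)
kernel = [0.05, 0.25, 0.4, 0.25, 0.05]        # broad scatter response, sums to 1
fraction = 0.3                                # assumed scatter fraction

# scatter estimate = fraction * (image convolved with scatter function)
scatter = [fraction * v for v in convolve(profile, kernel)]
corrected = [max(0.0, m - s) for m, s in zip(profile, scatter)]
```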

  5. Impact of Base Functional Component Types on Software Functional Size based Effort Estimation

    OpenAIRE

    Gencel, Cigdem; Buglione, Luigi

    2008-01-01

    Software effort estimation is still a significant challenge for software management. Although Functional Size Measurement (FSM) methods have been standardized and have become widely used by the software organizations, the relationship between functional size and development effort still needs further investigation. Most of the studies focus on the project cost drivers and consider total software functional size as the primary input to estimation models. In this study, we investigate whether u...

  6. A Matrix Splitting Method for Composite Function Minimization

    KAUST Repository

    Yuan, Ganzhao

    2016-12-07

    Composite function minimization captures a wide spectrum of applications in both computer vision and machine learning. It includes bound constrained optimization and cardinality regularized optimization as special cases. This paper proposes and analyzes a new Matrix Splitting Method (MSM) for minimizing composite functions. It can be viewed as a generalization of the classical Gauss-Seidel method and the Successive Over-Relaxation method for solving linear systems in the literature. Incorporating a new Gaussian elimination procedure, the matrix splitting method achieves state-of-the-art performance. For convex problems, we establish the global convergence, convergence rate, and iteration complexity of MSM, while for non-convex problems, we prove its global convergence. Finally, we validate the performance of our matrix splitting method on two particular applications: nonnegative matrix factorization and cardinality regularized sparse coding. Extensive experiments show that our method outperforms existing composite function minimization techniques in terms of both efficiency and efficacy.
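The classical Gauss-Seidel iteration that MSM generalizes is itself a matrix splitting method: writing A = (D + L) + U, each sweep solves the lower-triangular part exactly while lagging the strictly upper part. A minimal sketch for a small symmetric positive definite system (the matrix and right-hand side are illustrative):

```python
def gauss_seidel(A, b, iters=50):
    """Solve A x = b by the Gauss-Seidel splitting A = (D + L) + U:
    sweep through the rows, always using the freshest values of x."""
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x[i] = (b[i] - s) / A[i][i]
    return x

A = [[4.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 5.0]]   # symmetric positive definite, so Gauss-Seidel converges
b = [1.0, 2.0, 3.0]
x = gauss_seidel(A, b)
```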

  7. A Matrix Splitting Method for Composite Function Minimization

    KAUST Repository

    Yuan, Ganzhao; Zheng, Wei-Shi; Ghanem, Bernard

    2016-01-01

    Composite function minimization captures a wide spectrum of applications in both computer vision and machine learning. It includes bound constrained optimization and cardinality regularized optimization as special cases. This paper proposes and analyzes a new Matrix Splitting Method (MSM) for minimizing composite functions. It can be viewed as a generalization of the classical Gauss-Seidel method and the Successive Over-Relaxation method for solving linear systems in the literature. Incorporating a new Gaussian elimination procedure, the matrix splitting method achieves state-of-the-art performance. For convex problems, we establish the global convergence, convergence rate, and iteration complexity of MSM, while for non-convex problems, we prove its global convergence. Finally, we validate the performance of our matrix splitting method on two particular applications: nonnegative matrix factorization and cardinality regularized sparse coding. Extensive experiments show that our method outperforms existing composite function minimization techniques in term of both efficiency and efficacy.

  8. Geometric optical transfer function and its computation method

    International Nuclear Information System (INIS)

    Wang Qi

    1992-01-01

    The geometric optical transfer function formula is derived after expounding some easily overlooked points, and the computation method is given using the zeroth-order Bessel function, numerical integration and spline interpolation. The method has the advantage of ensuring accuracy while saving computation

  9. Standardized reporting of functioning information on ICF-based common metrics.

    Science.gov (United States)

    Prodinger, Birgit; Tennant, Alan; Stucki, Gerold

    2018-02-01

    A variety of clinical data collection tools are used to collect information on people's functioning for clinical practice, research, and national health information systems. Reporting on ICF-based common metrics enables standardized documentation of functioning information in national health information systems. The objective of this methodological note on applying the ICF in rehabilitation is to demonstrate how to report functioning information collected with a data collection tool on ICF-based common metrics. We first specify the requirements for the standardized reporting of functioning information. Secondly, we introduce the methods needed for transforming functioning data to ICF-based common metrics. Finally, we provide an example. The requirements for standardized reporting are as follows: 1) having a common conceptual framework to enable content comparability between any health information; and 2) a measurement framework so that scores between two or more clinical data collection tools can be directly compared. The methods needed to achieve these requirements are the ICF Linking Rules and the Rasch measurement model. Using data collected incorporating the 36-item Short Form Health Survey (SF-36), the World Health Organization Disability Assessment Schedule 2.0 (WHODAS 2.0), and the Stroke Impact Scale 3.0 (SIS 3.0), the application of the standardized reporting based on common metrics is demonstrated. A subset of items from the three tools linked to common chapters of the ICF (d4 Mobility, d5 Self-care and d6 Domestic life), were entered as "super items" into the Rasch model. Good fit was achieved with no residual local dependency and a unidimensional metric. A transformation table allows for comparison between scales, and between a scale and the reporting common metric.
Being able to report functioning information collected with commonly used clinical data collection tools with ICF-based common metrics enables clinicians

  10. Method of applying single higher order polynomial basis function over multiple domains

    CSIR Research Space (South Africa)

    Lysko, AA

    2010-03-01

    Full Text Available A novel method has been devised where one set of higher order polynomial-based basis functions can be applied over several wire segments, thus permitting the number of unknowns to be decoupled from the number of segments, and so from the geometrical...

  11. A comparison of high-order polynomial and wave-based methods for Helmholtz problems

    Science.gov (United States)

    Lieu, Alice; Gabard, Gwénaël; Bériot, Hadrien

    2016-09-01

    The application of computational modelling to wave propagation problems is hindered by the dispersion error introduced by the discretisation. Two common strategies to address this issue are to use high-order polynomial shape functions (e.g. hp-FEM), or to use physics-based, or Trefftz, methods where the shape functions are local solutions of the problem (typically plane waves). Both strategies have been actively developed over the past decades and both have demonstrated their benefits compared to conventional finite-element methods, but they have yet to be compared. In this paper a high-order polynomial method (p-FEM with Lobatto polynomials) and the wave-based discontinuous Galerkin method are compared for two-dimensional Helmholtz problems. A number of different benchmark problems are used to perform a detailed and systematic assessment of the relative merits of these two methods in terms of interpolation properties, performance and conditioning. It is generally assumed that a wave-based method naturally provides better accuracy compared to polynomial methods since the plane waves or Bessel functions used in these methods are exact solutions of the Helmholtz equation. Results indicate that this expectation does not necessarily translate into a clear benefit, and that the differences in performance, accuracy and conditioning are more nuanced than generally assumed. The high-order polynomial method can in fact deliver comparable, and in some cases superior, performance compared to the wave-based DGM. In addition to benchmarking the intrinsic computational performance of these methods, a number of practical issues associated with realistic applications are also discussed.

  12. An Intuitionistic Fuzzy Stochastic Decision-Making Method Based on Case-Based Reasoning and Prospect Theory

    Directory of Open Access Journals (Sweden)

    Peng Li

    2017-01-01

    Full Text Available According to the case-based reasoning method and prospect theory, this paper mainly focuses on finding a way to obtain decision-makers’ preferences and the criterion weights for stochastic multicriteria decision-making problems and classify alternatives. Firstly, we construct a new score function for an intuitionistic fuzzy number (IFN considering the decision-making environment. Then, we aggregate the decision-making information in different natural states according to the prospect theory and test decision-making matrices. A mathematical programming model based on a case-based reasoning method is presented to obtain the criterion weights. Moreover, in the original decision-making problem, we integrate all the intuitionistic fuzzy decision-making matrices into an expectation matrix using the expected utility theory and classify or rank the alternatives by the case-based reasoning method. Finally, two illustrative examples are provided to illustrate the implementation process and applicability of the developed method.
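The paper constructs a new score function for IFNs tailored to the decision environment; as a baseline for comparison, the classical score/accuracy ranking of intuitionistic fuzzy numbers (mu = membership degree, nu = non-membership degree) can be sketched as:

```python
def score(ifn):
    """Classical score of an intuitionistic fuzzy number (mu, nu):
    membership minus non-membership."""
    mu, nu = ifn
    return mu - nu

def accuracy(ifn):
    """Classical accuracy: membership plus non-membership
    (1 - accuracy is the hesitancy degree)."""
    mu, nu = ifn
    return mu + nu

def compare(a, b):
    """Rank two IFNs by score, breaking ties with accuracy:
    returns 1 if a > b, -1 if a < b, 0 if indistinguishable."""
    if score(a) != score(b):
        return 1 if score(a) > score(b) else -1
    if accuracy(a) != accuracy(b):
        return 1 if accuracy(a) > accuracy(b) else -1
    return 0
```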

  13. Sum rules in the response function method

    International Nuclear Information System (INIS)

    Takayanagi, Kazuo

    1990-01-01

    Sum rules in the response function method are studied in detail. A sum rule can be obtained theoretically by integrating the imaginary part of the response function over the excitation energy with a corresponding energy weight. Generally, the response function is calculated perturbatively in terms of the residual interaction, and the expansion can be described by diagrammatic methods. In this paper, we present a classification of the diagrams so as to clarify which diagram has what contribution to which sum rule. This will allow us to get insight into the contributions to the sum rules of all the processes expressed by Goldstone diagrams. (orig.)
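
    In the notation of this abstract, the k-th sum rule is the energy-weighted integral of the imaginary part of the response function; schematically (generic symbols, not necessarily the paper's exact conventions):

```latex
m_k = \int_0^{\infty} dE\, E^{k}\, S(E),
\qquad
S(E) = -\frac{1}{\pi}\,\operatorname{Im} R(E),
```

    so each choice of energy weight E^k yields a different sum rule, and each class of Goldstone diagrams contributing to Im R(E) contributes a definite piece of m_k.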

  14. A calculation method for finite depth free-surface green function

    Directory of Open Access Journals (Sweden)

    Yingyi Liu

    2015-03-01

    Full Text Available An improved boundary element method is presented for numerical analysis of the hydrodynamic behavior of marine structures. A new algorithm for the numerical solution of the finite depth free-surface Green function in three dimensions is developed based on multiple series representations. The whole range of the key parameter R/h is divided into four regions, within which a different representation is used to achieve fast convergence. The well-known epsilon algorithm is also adopted to accelerate the convergence. The critical convergence criteria for each representation are investigated and provided. The proposed method is validated by several well-documented benchmark problems.
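
    The epsilon algorithm mentioned above is Wynn's classical sequence-acceleration scheme. A minimal sketch (not the paper's implementation) applied to a slowly converging alternating series shows the idea:

```python
import math

def wynn_epsilon(s):
    """Estimate the limit of a sequence of partial sums with Wynn's
    epsilon algorithm; the even-numbered columns approximate the limit.
    (Production code should guard against vanishing denominators.)"""
    prev = [0.0] * (len(s) + 1)      # epsilon_{-1} column: all zeros
    cur = list(s)                    # epsilon_0 column: the partial sums
    best, k = cur[-1], 0
    while len(cur) > 1:
        # eps_{k+1}^(j) = eps_{k-1}^(j+1) + 1 / (eps_k^(j+1) - eps_k^(j))
        nxt = [prev[j + 1] + 1.0 / (cur[j + 1] - cur[j])
               for j in range(len(cur) - 1)]
        prev, cur, k = cur, nxt, k + 1
        if k % 2 == 0:               # keep only even-column estimates
            best = cur[-1]
    return best

# Slowly converging alternating series for ln 2: 1 - 1/2 + 1/3 - ...
partial, sums = 0.0, []
for n in range(1, 11):
    partial += (-1) ** (n + 1) / n
    sums.append(partial)
estimate = wynn_epsilon(sums)        # far closer to ln 2 than sums[-1]
```

    Ten raw partial sums are still off by about 0.05, while the accelerated estimate agrees with ln 2 to several more digits, which is why the algorithm pays off for the slowly converging Green-function series.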

  15. OCOPTR, Minimization of Nonlinear Function, Variable Metric Method, Derivative Calculation. DRVOCR, Minimization of Nonlinear Function, Variable Metric Method, Derivative Calculation

    International Nuclear Information System (INIS)

    Nazareth, J. L.

    1979-01-01

    1 - Description of problem or function: OCOPTR and DRVOCR are computer programs designed to find minima of non-linear differentiable functions f: R^n → R with n-dimensional domains. OCOPTR requires that the user only provide function values (i.e. it is a derivative-free routine). DRVOCR requires the user to supply both function and gradient information. 2 - Method of solution: OCOPTR and DRVOCR use the variable metric (or quasi-Newton) method of Davidon (1975). For OCOPTR, the derivatives are estimated by finite differences along a suitable set of linearly independent directions. For DRVOCR, the derivatives are user-supplied. Some features of the codes are the storage of the approximation to the inverse Hessian matrix in lower trapezoidal factored form and the use of an optimally-conditioned updating method. Linear equality constraints are permitted subject to the initial Hessian factor being chosen correctly. 3 - Restrictions on the complexity of the problem: The functions to which the routine is applied are assumed to be differentiable. The routine also requires (n^2)/2 + O(n) storage locations, where n is the problem dimension
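
    The derivative-free mode can be sketched generically: finite-difference gradients driving a variable-metric (BFGS-style) update. This is an illustration of the idea only; Davidon's optimally-conditioned update and the factored inverse-Hessian storage used in the actual codes are not reproduced here.

```python
import numpy as np

def fd_grad(f, x, h=1e-6):
    """Central-difference gradient estimate (stands in for user-supplied derivatives)."""
    g = np.empty_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

def variable_metric(f, x0, tol=1e-6, max_iter=500):
    """Minimal BFGS variable-metric minimizer (illustrative, not the OCOPTR code)."""
    x = np.asarray(x0, dtype=float)
    H = np.eye(x.size)                       # inverse-Hessian approximation
    g = fd_grad(f, x)
    for _ in range(max_iter):
        p = -H @ g
        t = 1.0                              # backtracking Armijo line search
        while f(x + t * p) > f(x) + 1e-4 * t * (g @ p) and t > 1e-12:
            t *= 0.5
        s = t * p
        x_new = x + s
        g_new = fd_grad(f, x_new)
        y = g_new - g
        if s @ y > 1e-12:                    # curvature condition keeps H positive definite
            rho = 1.0 / (s @ y)
            I = np.eye(x.size)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
        if np.linalg.norm(g) < tol:
            break
    return x

# Rosenbrock test function; minimum at (1, 1)
rosen = lambda z: (1 - z[0]) ** 2 + 100 * (z[1] - z[0] ** 2) ** 2
x_min = variable_metric(rosen, [-1.2, 1.0])
```

    The curvature check `s @ y > 0` is what keeps the inverse-Hessian approximation positive definite when the finite-difference gradients are noisy.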

  16. Analysis of calculating methods for failure distribution function based on maximal entropy principle

    International Nuclear Information System (INIS)

    Guo Chunying; Lin Yuangen; Jiang Meng; Wu Changli

    2009-01-01

    The computation of failure distribution functions of electronic devices exposed to gamma rays is discussed here. First, the possible device failure distribution models are determined through tests of statistical hypotheses using the test data. The results show that the devices' failure behaviour can be fitted by several distributions when the test data are scarce. In order to decide on the optimum failure distribution model, the maximal entropy principle is used and the elementary failure models are determined. Then, the Bootstrap estimation method is used to simulate the interval estimation of the mean and the standard deviation. On this basis, the maximal entropy principle is used again and the simulated annealing method is applied to find the optimum values of the mean and the standard deviation. Accordingly, the optimum failure distributions of the electronic devices are finally determined and the survival probabilities are calculated. (authors)
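
    The Bootstrap step can be sketched as follows: a generic percentile bootstrap for interval estimates of the mean and standard deviation. The sample values below are invented for illustration; the paper combines this step with the maximal entropy principle and simulated annealing.

```python
import random
import statistics

def bootstrap_ci(data, stat, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap interval estimate for a statistic of the sample."""
    rng = random.Random(seed)
    reps = sorted(
        stat([rng.choice(data) for _ in range(len(data))])
        for _ in range(n_boot)
    )
    lo = reps[int(alpha / 2 * n_boot)]
    hi = reps[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical failure-threshold measurements (made-up numbers, arbitrary units)
doses = [102.0, 98.5, 110.2, 95.1, 104.7, 99.8, 107.3, 101.4, 96.9, 103.5]
mean_ci = bootstrap_ci(doses, statistics.fmean)
std_ci = bootstrap_ci(doses, statistics.stdev)
```

    The resulting intervals for the mean and standard deviation are exactly the quantities the maximal-entropy step then searches over.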

  17. The orthogonal gradients method: A radial basis functions method for solving partial differential equations on arbitrary surfaces

    KAUST Repository

    Piret, Cécile

    2012-05-01

    Much work has been done on reconstructing arbitrary surfaces using the radial basis function (RBF) method, but one can hardly find any work done on the use of RBFs to solve partial differential equations (PDEs) on arbitrary surfaces. In this paper, we investigate methods to solve PDEs on arbitrary stationary surfaces embedded in R^3 using the RBF method. We present three RBF-based methods that easily discretize surface differential operators. We take advantage of the meshfree character of RBFs, which gives us high accuracy and the flexibility to represent the most complex geometries in any dimension. Two out of the three methods, which we call the orthogonal gradients (OGr) methods, are the result of our work and are hereby presented for the first time. © 2012 Elsevier Inc.

  18. A Robust Algorithm of Multiquadric Method Based on an Improved Huber Loss Function for Interpolating Remote-Sensing-Derived Elevation Data Sets

    Directory of Open Access Journals (Sweden)

    Chuanfa Chen

    2015-03-01

    Full Text Available Remote-sensing-derived elevation data sets often suffer from noise and outliers due to various reasons, such as the physical limitations of sensors, multiple reflectance, occlusions and low contrast of texture. Outliers generally have a seriously negative effect on DEM construction. Some interpolation methods like ordinary kriging (OK) are capable of smoothing noise inherent in sample points, but are sensitive to outliers. In this paper, a robust algorithm of the multiquadric method (MQ) based on an improved Huber loss function (MQ-IH) has been developed to decrease the impact of outliers on DEM construction. Theoretically, the improved Huber loss function is null for outliers, quadratic for small errors, and linear for others. Simulated data sets drawn from a mathematical surface with different error distributions were employed to analyze the robustness of MQ-IH. Results indicate that MQ-IH obtains a good balance between efficiency and robustness. Namely, the performance of MQ-IH is comparable to those of the classical MQ and MQ based on the classical Huber loss function (MQ-CH) when sample points follow a normal distribution, and the former outperforms the latter two when sample points are subject to outliers. For example, for the Cauchy error distribution with a location parameter of 0 and scale parameter of 1, the root mean square errors (RMSEs) of MQ-CH and the classical MQ are 0.3916 and 1.4591, respectively, whereas that of MQ-IH is 0.3698. The performance of MQ-IH is further evaluated by qualitative and quantitative analysis through a real-world example of DEM construction with the stereo-images-derived elevation points. Results demonstrate that compared with the classical interpolation methods, including natural neighbor (NN), OK and ANUDEM (a program that calculates regular grid digital elevation models (DEMs) with sensible shape and drainage structure from arbitrarily large topographic data sets), and two versions of MQ, including the
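
    The three-regime behaviour described above (quadratic loss for small residuals, linear for moderate ones, null for outliers) translates into a residual weight for iteratively reweighted solving. The thresholds `a` and `b` below are illustrative assumptions; the paper's exact loss and constants may differ.

```python
def robust_weight(r, a=1.5, b=4.0):
    """Hypothetical IRLS weight implied by an improved-Huber-type loss:
    weight 1 (quadratic loss) for small residuals |r| <= a,
    weight a/|r| (linear loss) for moderate residuals a < |r| <= b,
    weight 0 (null loss) for outliers |r| > b.
    Thresholds a, b are illustrative, not the paper's constants."""
    r = abs(r)
    if r <= a:
        return 1.0
    if r <= b:
        return a / r
    return 0.0
```

    In an MQ-IH-style scheme, each interpolation equation would be rescaled by such a weight at every iteration, so flagged outliers stop influencing the fitted surface entirely.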

  19. Multi-function radar emitter identification based on stochastic syntax-directed translation schema

    OpenAIRE

    Liu, Haijun; Yu, Hongqi; Sun, Zhaolin; Diao, Jietao

    2014-01-01

    To cope with the emitter identification problem caused by the uncertainty of the radar words measured from multi-function radar emitters, this paper proposes a new identification method based on a stochastic syntax-directed translation schema (SSDTS). This method, which is deduced from the syntactic modeling of multi-function radars, considers the probabilities of radar phrase appearance in different radar modes as well as the probabilities of radar word errors occurring in different radar phrases...

  20. Fibonacci collocation method with a residual error function to solve linear Volterra integro-differential equations

    Directory of Open Access Journals (Sweden)

    Salih Yalcinbas

    2016-01-01

    Full Text Available In this paper, a new collocation method based on the Fibonacci polynomials is introduced to solve the high-order linear Volterra integro-differential equations under the conditions. Numerical examples are included to demonstrate the applicability and validity of the proposed method and comparisons are made with the existing results. In addition, an error estimation based on the residual functions is presented for this method. The approximate solutions are improved by using this error estimation.
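
    The basis behind the collocation scheme is the family of Fibonacci polynomials, generated by F_1(x) = 1, F_2(x) = x and F_n(x) = x·F_{n-1}(x) + F_{n-2}(x). A small generator (illustrative, not the paper's code):

```python
def fibonacci_poly(n):
    """Coefficients (lowest degree first) of the nth Fibonacci polynomial,
    from F_n(x) = x * F_{n-1}(x) + F_{n-2}(x), with F_1 = 1, F_2 = x."""
    if n == 1:
        return [1]
    if n == 2:
        return [0, 1]
    a, b = [1], [0, 1]
    for _ in range(n - 2):
        shifted = [0] + b                      # multiply F_{n-1}(x) by x
        padded = a + [0] * (len(shifted) - len(a))
        a, b = b, [u + v for u, v in zip(shifted, padded)]
    return b

def eval_poly(coeffs, x):
    """Evaluate a coefficient list at x by Horner's rule."""
    acc = 0.0
    for c in reversed(coeffs):
        acc = acc * x + c
    return acc
```

    As a sanity check, F_n(1) reproduces the Fibonacci numbers (1, 1, 2, 3, 5, ...), e.g. F_5(x) = x^4 + 3x^2 + 1 gives F_5(1) = 5. In the collocation method, the approximate solution is a linear combination of these polynomials whose coefficients are fixed by enforcing the integro-differential equation at collocation points.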

  1. Metamodel-based inverse method for parameter identification: elastic-plastic damage model

    Science.gov (United States)

    Huang, Changwu; El Hami, Abdelkhalak; Radi, Bouchaïb

    2017-04-01

    This article proposes a metamodel-based inverse method for material parameter identification and applies it to elastic-plastic damage model parameter identification. An elastic-plastic damage model is presented and implemented in numerical simulation. The metamodel-based inverse method is proposed in order to overcome the computational cost of the conventional inverse method. In the metamodel-based inverse method, a Kriging metamodel is constructed based on the experimental design in order to model the relationship between material parameters and the objective function values of the inverse problem, and the optimization procedure is then executed on the metamodel. The application of the presented material model and the proposed parameter identification method to the standard A 2017-T4 tensile test proves that the presented elastic-plastic damage model is adequate to describe the material's mechanical behaviour, and that the proposed metamodel-based inverse method not only enhances the efficiency of parameter identification but also gives reliable results.
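
    The metamodel idea can be sketched generically: sample the expensive objective at a few design points, fit a cheap interpolant, and optimize the interpolant instead of the simulation. Everything below is invented for illustration, and a plain Gaussian-kernel interpolant stands in for the Kriging model (which would also supply a prediction variance).

```python
import numpy as np

def fit_surrogate(X, y, width=0.3):
    """Gaussian-kernel interpolant standing in for a Kriging metamodel
    (1-D design; width is an assumed kernel length scale)."""
    K = np.exp(-((X[:, None] - X[None, :]) / width) ** 2)
    w = np.linalg.solve(K, y)
    return lambda t: np.exp(-((t - X) / width) ** 2) @ w

# "Expensive" objective of the inverse problem; here a cheap stand-in:
# misfit between simulated and measured response vs. one material parameter.
true_param = 0.7
objective = lambda t: (t - true_param) ** 2

X = np.linspace(0.0, 1.0, 11)          # experimental design
y = objective(X)
surrogate = fit_surrogate(X, y)

# Optimize the cheap surrogate on a fine grid instead of the simulation.
grid = np.linspace(0.0, 1.0, 1001)
best = grid[np.argmin([surrogate(t) for t in grid])]
```

    The simulation is run only at the design points; all subsequent optimizer evaluations hit the surrogate, which is where the cost saving comes from.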

  2. Verifying the functional ability of microstructured surfaces by model-based testing

    Science.gov (United States)

    Hartmann, Wito; Weckenmann, Albert

    2014-09-01

    Micro- and nanotechnology enables the use of new product features such as improved light absorption, self-cleaning or protection, which are based, on the one hand, on the size of functional nanostructures and, on the other hand, on material-specific properties. With the need to reliably measure progressively smaller geometric features, coordinate and surface-measuring instruments have been refined and now allow high-resolution topography and structure measurements down to the sub-nanometre range. Nevertheless, in many cases it is not possible to make a clear statement about the functional ability of the workpiece or its topography, because conventional concepts of dimensioning and tolerancing are solely geometry oriented and standardized surface parameters are not sufficient to account for interaction with non-geometric parameters, which are dominant for functions such as sliding, wetting, sealing and optical reflection. To verify the functional ability of microstructured surfaces, a method was developed based on a parameterized mathematical-physical model of the function. From this model, function-related properties can be identified and geometric parameters can be derived, which may be different for the manufacturing and verification processes. With this method it is possible to optimize the definition of the shape of the workpiece with regard to the intended function by applying theoretical and experimental knowledge, as well as modelling and simulation. Advantages of this approach will be discussed and demonstrated by the example of a microstructured inking roll.

  3. Explicit appropriate basis function method for numerical solution of stiff systems

    International Nuclear Information System (INIS)

    Chen, Wenzhen; Xiao, Hongguang; Li, Haofeng; Chen, Ling

    2015-01-01

    Highlights: • An explicit numerical method called the appropriate basis function method is presented. • The method differs from the power series method for obtaining approximate numerical solutions. • Two cases show the method is fit for linear and nonlinear stiff systems. • The method is very simple and effective for most differential equation systems. - Abstract: In this paper, an explicit numerical method, called the appropriate basis function method, is presented. The explicit appropriate basis function method differs from the power series method in that it employs an appropriate basis function, such as an exponential or periodic function, rather than a polynomial, to obtain approximate numerical solutions. The method is successful and effective for the numerical solution of first-order ordinary differential equations. Two examples are presented to show the ability of the method to deal with linear and nonlinear systems of differential equations

  4. Uncertainties of predictions from parton distribution functions. I. The Lagrange multiplier method

    International Nuclear Information System (INIS)

    Stump, D.; Pumplin, J.; Brock, R.; Casey, D.; Huston, J.; Kalk, J.; Lai, H. L.; Tung, W. K.

    2002-01-01

    We apply the Lagrange multiplier method to study the uncertainties of physical predictions due to the uncertainties of parton distribution functions (PDFs), using the cross section σ_W for W production at a hadron collider as an archetypal example. An effective χ² function based on the CTEQ global QCD analysis is used to generate a series of PDFs, each of which represents the best fit to the global data for some specified value of σ_W. By analyzing the likelihood of these 'alternative hypotheses', using available information on errors from the individual experiments, we estimate that the fractional uncertainty of σ_W due to current experimental input to the PDF analysis is approximately ±4% at the Fermilab Tevatron, and ±8-10% at the CERN Large Hadron Collider. We give sets of PDFs corresponding to these up and down variations of σ_W. We also present similar results on Z production at the colliders. Our method can be applied to any combination of physical variables in precision QCD phenomenology, and it can be used to generate benchmarks for testing the accuracy of approximate methods based on the error matrix
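
    The Lagrange-multiplier scan can be mimicked with a toy quadratic χ² and a linear observable (all numbers below are assumptions for illustration, not CTEQ quantities): minimizing Ψ(a) = χ²(a) + λ·σ(a) for a range of λ traces out the best attainable χ² as a function of the predicted observable.

```python
import numpy as np

A = np.array([[2.0, 0.3], [0.3, 1.0]])   # Hessian of the toy chi^2 (assumed)
a0 = np.array([0.5, -0.2])               # global best-fit parameters (assumed)
w = np.array([1.0, 2.0])                 # gradient of the observable (assumed)

chi2 = lambda a: (a - a0) @ A @ (a - a0)
sigma = lambda a: w @ a                  # toy stand-in for sigma_W(a)

def constrained_fit(lam):
    """Minimizer of Psi = chi2 + lam * sigma:
    grad Psi = 2 A (a - a0) + lam * w = 0."""
    return a0 - 0.5 * lam * np.linalg.solve(A, w)

# Scanning lam maps out Delta-chi^2 versus the shift in the observable.
points = [(sigma(constrained_fit(l)), chi2(constrained_fit(l)))
          for l in np.linspace(-2.0, 2.0, 9)]
```

    For this quadratic toy, χ² at the constrained minimum is (λ²/4)·wᵀA⁻¹w, so Δχ² grows quadratically with the forced shift in σ; reading off where Δχ² crosses the tolerance gives the quoted uncertainty band.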

  5. Psychophysical "blinding" methods reveal a functional hierarchy of unconscious visual processing.

    Science.gov (United States)

    Breitmeyer, Bruno G

    2015-09-01

    Numerous non-invasive experimental "blinding" methods exist for suppressing the phenomenal awareness of visual stimuli. Not all of these suppressive methods occur at, and thus index, the same level of unconscious visual processing. This suggests that a functional hierarchy of unconscious visual processing can in principle be established. The empirical results of extant studies that have used a number of different methods, together with additional reasonable theoretical considerations, suggest the following tentative hierarchy. At the highest level in this hierarchy is unconscious processing indexed by object-substitution masking. The functional levels indexed by crowding, the attentional blink (and other attentional blinding methods), backward pattern masking, metacontrast masking, continuous flash suppression, sandwich masking, and single-flash interocular suppression fall at progressively lower levels, while unconscious processing at the lowest levels is indexed by eye-based binocular-rivalry suppression. Although the unconscious processing levels indexed by additional blinding methods are yet to be determined, a tentative placement at lower levels in the hierarchy is also given for unconscious processing indexed by Troxler fading and adaptation-induced blindness, and at higher levels for processing indexed by attentional blinding effects in addition to the level indexed by the attentional blink. The full mapping of levels in the functional hierarchy onto cortical activation sites and levels is yet to be determined. The existence of such a hierarchy bears importantly on the search for, and the distinctions between, neural correlates of conscious and unconscious vision. Copyright © 2015 Elsevier Inc. All rights reserved.

  6. Atlas-based identification of targets for functional radiosurgery

    International Nuclear Information System (INIS)

    Stancanello, Joseph; Romanelli, Pantaleo; Modugno, Nicola; Cerveri, Pietro; Ferrigno, Giancarlo; Uggeri, Fulvio; Cantore, Giampaolo

    2006-01-01

    Functional disorders of the brain, such as Parkinson's disease, dystonia, epilepsy, and neuropathic pain, may exhibit poor response to medical therapy. In such cases, surgical intervention may become necessary. Modern surgical approaches to such disorders include radio-frequency lesioning and deep brain stimulation (DBS). The subthalamic nucleus (STN) is one of the most useful stereotactic targets available: STN DBS is known to induce substantial improvement in patients with end-stage Parkinson's disease. Other targets include the Globus Pallidus pars interna (GPi) for dystonia and Parkinson's disease, and the centromedian nucleus of the thalamus (CMN) for neuropathic pain. Radiosurgery is an attractive noninvasive alternative to treat some functional brain disorders. The main technical limitation to radiosurgery is that the target can be selected only on the basis of magnetic resonance anatomy without electrophysiological confirmation. The aim of this work is to provide a method for the correct atlas-based identification of the target to be used in functional neurosurgery treatment planning. The coordinates of STN, CMN, and GPi were identified in the Talairach and Tournoux atlas and transformed to the corresponding regions of the Montreal Neurological Institute (MNI) electronic atlas. Binary masks describing the target nuclei were created. The MNI electronic atlas was deformed onto the patient magnetic resonance imaging-T1 scan by applying an affine transformation followed by a local nonrigid registration. The first transformation was based on normalized cross correlation and the second on optimization of a two-part objective function consisting of similarity criteria and weighted regularization. The obtained deformation field was then applied to the target masks. The minimum distance between the surface of an implanted electrode and the surface of the deformed mask was calculated. The validation of the method consisted of comparing the electrode-mask distance to

  7. Availability of thermodynamic system with multiple performance parameters based on vector-universal generating function

    International Nuclear Information System (INIS)

    Cai Qi; Shang Yanlong; Chen Lisheng; Zhao Yuguang

    2013-01-01

    A vector-universal generating function was presented to analyze the availability of a thermodynamic system with multiple performance parameters. The vector-universal generating function of each component's performance was defined, an arithmetic model based on the vector-universal generating function was derived for the thermodynamic system, and a calculation method was given for the state probabilities of multi-state components. With stochastic simulation of the degeneration trend of the multiple factors, the system availability with multiple performance parameters was obtained under composite factors. An example shows that the availability results obtained by the binary availability analysis method are somewhat conservative, while the results based on the vector-universal generating function, which consider parameter failure, better reflect the operating characteristics of the thermodynamic system. (authors)
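
    A scalar universal generating function works as follows (minimal sketch with invented numbers; the paper's vector version carries a tuple of performance parameters per state, with the structure operator acting componentwise):

```python
from itertools import product

def compose(u1, u2, op):
    """Combine two u-functions (dict: performance -> probability)
    with a structure operator op (e.g. sum for parallel flow paths)."""
    out = {}
    for (g1, p1), (g2, p2) in product(u1.items(), u2.items()):
        g = op(g1, g2)
        out[g] = out.get(g, 0.0) + p1 * p2
    return out

def availability(u, demand):
    """Probability that system performance meets or exceeds the demand."""
    return sum(p for g, p in u.items() if g >= demand)

# Two pumps in parallel, each failed (0) or at rated flow 50 (made-up data)
pump1 = {0: 0.1, 50: 0.9}
pump2 = {0: 0.2, 50: 0.8}
system = compose(pump1, pump2, lambda g1, g2: g1 + g2)
A = availability(system, demand=50)
```

    For a vector version, the states would hold tuples such as (flow, temperature) and `op` might combine them as `(g1[0] + g2[0], min(g1[1], g2[1]))`, which is the kind of extension the abstract describes.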

  8. Performance analysis of demodulation with diversity -- A combinatorial approach I: Symmetric function theoretical methods

    OpenAIRE

    Jean-Louis Dornstetter; Daniel Krob; Jean-Yves Thibon; Ekaterina A. Vassilieva

    2002-01-01

    This paper is devoted to the presentation of a combinatorial approach, based on the theory of symmetric functions, for analyzing the performance of a family of demodulation methods used in mobile telecommunications.

  9. Application of a data-mining method based on Bayesian networks to lesion-deficit analysis

    Science.gov (United States)

    Herskovits, Edward H.; Gerring, Joan P.

    2003-01-01

    Although lesion-deficit analysis (LDA) has provided extensive information about structure-function associations in the human brain, LDA has suffered from the difficulties inherent to the analysis of spatial data, i.e., there are many more variables than subjects, and data may be difficult to model using standard distributions, such as the normal distribution. We herein describe a Bayesian method for LDA; this method is based on data-mining techniques that employ Bayesian networks to represent structure-function associations. These methods are computationally tractable, and can represent complex, nonlinear structure-function associations. When applied to the evaluation of data obtained from a study of the psychiatric sequelae of traumatic brain injury in children, this method generates a Bayesian network that demonstrates complex, nonlinear associations among lesions in the left caudate, right globus pallidus, right side of the corpus callosum, right caudate, and left thalamus, and subsequent development of attention-deficit hyperactivity disorder, confirming and extending our previous statistical analysis of these data. Furthermore, analysis of simulated data indicates that methods based on Bayesian networks may be more sensitive and specific for detecting associations among categorical variables than methods based on chi-square and Fisher exact statistics.

  10. Explicit symplectic algorithms based on generating functions for charged particle dynamics

    Science.gov (United States)

    Zhang, Ruili; Qin, Hong; Tang, Yifa; Liu, Jian; He, Yang; Xiao, Jianyuan

    2016-07-01

    Dynamics of a charged particle in the canonical coordinates is a Hamiltonian system, and the well-known symplectic algorithm has been regarded as the de facto method for numerical integration of Hamiltonian systems due to its long-term accuracy and fidelity. For long-term simulations with high efficiency, explicit symplectic algorithms are desirable. However, it is generally believed that explicit symplectic algorithms are only available for sum-separable Hamiltonians, and this restriction limits the application of explicit symplectic algorithms to charged particle dynamics. To overcome this difficulty, we combine the familiar sum-split method with a generating function method to construct second- and third-order explicit symplectic algorithms for charged particle dynamics. The generating function method is designed to generate explicit symplectic algorithms for product-separable Hamiltonians of the form H(x, p) = p_i f(x) or H(x, p) = x_i g(p). Applied to simulations of charged particle dynamics, the explicit symplectic algorithms based on generating functions demonstrate superior conservation properties and efficiency.
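
    For a one-dimensional product-separable Hamiltonian H(x, p) = p·f(x), even the first-order symplectic Euler map is explicit, because H_p is independent of p and the implicit p-update can be solved in closed form. The sketch below is a minimal illustration of this explicitness, not the paper's second- and third-order generating-function schemes; in 1-D, symplecticity is equivalent to the map's Jacobian having determinant 1, which is checked numerically.

```python
import math

def step(x, p, h, f, fp):
    """One symplectic-Euler step for H(x, p) = p * f(x):
    p' = p - h * p' * f'(x)  =>  p' = p / (1 + h * f'(x)),
    x' = x + h * f(x).
    fp is the derivative of f."""
    return x + h * f(x), p / (1.0 + h * fp(x))

def jacobian_det(x, p, h, f, fp, eps=1e-6):
    """Central-difference determinant of the step's Jacobian;
    for a symplectic 1-D map this is exactly 1."""
    xp1, pp1 = step(x + eps, p, h, f, fp)
    xm1, pm1 = step(x - eps, p, h, f, fp)
    xp2, pp2 = step(x, p + eps, h, f, fp)
    xm2, pm2 = step(x, p - eps, h, f, fp)
    dxdx = (xp1 - xm1) / (2 * eps)
    dpdx = (pp1 - pm1) / (2 * eps)
    dxdp = (xp2 - xm2) / (2 * eps)
    dpdp = (pp2 - pm2) / (2 * eps)
    return dxdx * dpdp - dxdp * dpdx

det = jacobian_det(0.3, 1.2, 0.1, math.sin, math.cos)   # f(x) = sin x
```

    Analytically the determinant is (1 + h·f'(x)) · 1/(1 + h·f'(x)) = 1, so phase-space area is preserved exactly at every step.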

  11. Influence function method for fast estimation of BWR core performance

    International Nuclear Information System (INIS)

    Rahnema, F.; Martin, C.L.; Parkos, G.R.; Williams, R.D.

    1993-01-01

    The model, which is based on the influence function method, provides rapid estimates of important quantities such as margins to fuel operating limits, the effective multiplication factor, nodal power, void and bundle flow distributions, as well as the traversing in-core probe (TIP) and local power range monitor (LPRM) readings. The fast model has been incorporated into GE's three-dimensional core monitoring system (3D Monicore). In addition to its predictive capability, the model adapts to LPRM readings in the monitoring mode. Comparisons have shown that the agreement between the results of the fast method and those of the standard 3D Monicore is within a few percent. (orig.)

  12. MHCcluster, a method for functional clustering of MHC molecules

    DEFF Research Database (Denmark)

    Thomsen, Martin Christen Frølund; Lundegaard, Claus; Buus, Søren

    2013-01-01

    The identification of peptides binding to major histocompatibility complexes (MHC) is a critical step in the understanding of T cell immune responses. The human MHC genomic region (HLA) is extremely polymorphic comprising several thousand alleles, many encoding a distinct molecule. The potentially...... binding specificity. The method has a flexible web interface that allows the user to include any MHC of interest in the analysis. The output consists of a static heat map and graphical tree-based visualizations of the functional relationship between MHC variants and a dynamic TreeViewer interface where...

  13. Topology optimization based on spline-based meshfree method using topological derivatives

    International Nuclear Information System (INIS)

    Hur, Junyoung; Youn, Sung-Kie; Kang, Pilseong

    2017-01-01

    The Spline-based meshfree method (SBMFM) originates from isogeometric analysis (IGA), which integrates design and analysis through Non-uniform rational B-spline (NURBS) basis functions. SBMFM utilizes the trimming technique of CAD systems by representing the domain using NURBS curves. In this work, an explicit boundary topology optimization using SBMFM is presented with an effective boundary update scheme. There have been similar works on this subject; however, unlike the previous works, where a semi-analytic method for calculating design sensitivities is employed, the design update here is done by using topological derivatives. In this research, the topological derivative is used to derive the sensitivity of the boundary curves and to create new holes. Based on the values of the topological derivatives, the shape of the boundary curves is updated. Also, topological change is achieved by insertion and removal of the inner holes. The presented approach is validated through several compliance minimization problems.

  14. Topology optimization based on spline-based meshfree method using topological derivatives

    Energy Technology Data Exchange (ETDEWEB)

    Hur, Junyoung; Youn, Sung-Kie [KAIST, Daejeon (Korea, Republic of); Kang, Pilseong [Korea Research Institute of Standards and Science, Daejeon (Korea, Republic of)

    2017-05-15

    The Spline-based meshfree method (SBMFM) originates from isogeometric analysis (IGA), which integrates design and analysis through Non-uniform rational B-spline (NURBS) basis functions. SBMFM utilizes the trimming technique of CAD systems by representing the domain using NURBS curves. In this work, an explicit boundary topology optimization using SBMFM is presented with an effective boundary update scheme. There have been similar works on this subject; however, unlike the previous works, where a semi-analytic method for calculating design sensitivities is employed, the design update here is done by using topological derivatives. In this research, the topological derivative is used to derive the sensitivity of the boundary curves and to create new holes. Based on the values of the topological derivatives, the shape of the boundary curves is updated. Also, topological change is achieved by insertion and removal of the inner holes. The presented approach is validated through several compliance minimization problems.

  15. Effects of gross motor function and manual function levels on performance-based ADL motor skills of children with spastic cerebral palsy.

    Science.gov (United States)

    Park, Myoung-Ok

    2017-02-01

    [Purpose] The purpose of this study was to determine the effects of Gross Motor Function Classification System and Manual Ability Classification System levels on the performance-based ADL motor skills of children with spastic cerebral palsy. [Subjects and Methods] Twenty-three children with cerebral palsy were included. The Assessment of Motor and Process Skills was used to evaluate performance-based motor skills in daily life. Gross motor function was assessed using the Gross Motor Function Classification System, and manual function was measured using the Manual Ability Classification System. [Results] Motor skills in daily activities differed significantly by Gross Motor Function Classification System level and Manual Ability Classification System level. According to the results of multiple regression analysis, children categorized as Gross Motor Function Classification System level III scored lower in performance-based motor skills than Gross Motor Function Classification System level I children. Also, when analyzed with respect to Manual Ability Classification System level, level II was lower than level I, and level III was lower than level II in terms of performance-based motor skills. [Conclusion] The results of this study indicate that performance-based motor skills differ among children with cerebral palsy categorized by Gross Motor Function Classification System and Manual Ability Classification System levels.

  16. Numerical solution to generalized Burgers'-Fisher equation using Exp-function method hybridized with heuristic computation.

    Science.gov (United States)

    Malik, Suheel Abdullah; Qureshi, Ijaz Mansoor; Amir, Muhammad; Malik, Aqdas Naveed; Haq, Ihsanul

    2015-01-01

    In this paper, a new heuristic scheme for the approximate solution of the generalized Burgers'-Fisher equation is proposed. The scheme is based on the hybridization of the Exp-function method with a nature-inspired algorithm. The given nonlinear partial differential equation (NPDE) is converted through substitution into a nonlinear ordinary differential equation (NODE). The travelling wave solution is approximated by the Exp-function method with unknown parameters. The unknown parameters are estimated by transforming the NODE into an equivalent global error minimization problem by using a fitness function. The popular genetic algorithm (GA) is used to solve the minimization problem and to achieve the unknown parameters. The proposed scheme is successfully implemented to solve the generalized Burgers'-Fisher equation. The comparison of numerical results with the exact solutions, and the solutions obtained using some traditional methods, including the Adomian decomposition method (ADM), homotopy perturbation method (HPM), and optimal homotopy asymptotic method (OHAM), shows that the suggested scheme is fairly accurate and viable for solving such problems.
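
    The parameter-estimation step reduces to global minimization of a fitness function over the unknown Exp-function parameters. A toy real-coded GA of the kind used is sketched below; the operators, rates, and the quadratic stand-in fitness are all assumptions for illustration, not the paper's settings.

```python
import random

def ga_minimize(fitness, bounds, pop_size=40, gens=150, seed=1):
    """Toy real-coded GA: tournament selection, blend crossover,
    Gaussian mutation. Illustrative only."""
    rng = random.Random(seed)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best = min(pop, key=fitness)
    for _ in range(gens):
        nxt = []
        while len(nxt) < pop_size:
            a = min(rng.sample(pop, 3), key=fitness)   # tournament of 3
            b = min(rng.sample(pop, 3), key=fitness)
            child = []
            for ai, bi, (lo, hi) in zip(a, b, bounds):
                c = ai + rng.uniform(-0.1, 1.1) * (bi - ai)  # blend crossover
                c += rng.gauss(0.0, 0.02)                    # Gaussian mutation
                child.append(min(max(c, lo), hi))
            nxt.append(child)
        pop = nxt
        cand = min(pop, key=fitness)
        if fitness(cand) < fitness(best):
            best = cand
    return best

# Stand-in fitness: squared-error surface with minimum at (1.0, -0.5);
# in the paper this would be the global error of the trial travelling-wave solution.
fit = lambda v: (v[0] - 1.0) ** 2 + (v[1] + 0.5) ** 2
sol = ga_minimize(fit, [(-5.0, 5.0), (-5.0, 5.0)])
```

    Tracking `best` across generations plays the role of elitism, so the returned parameters never get worse as the search proceeds.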

  17. Numerical solution to generalized Burgers'-Fisher equation using Exp-function method hybridized with heuristic computation.

    Directory of Open Access Journals (Sweden)

    Suheel Abdullah Malik

    Full Text Available In this paper, a new heuristic scheme for the approximate solution of the generalized Burgers'-Fisher equation is proposed. The scheme is based on the hybridization of the Exp-function method with a nature-inspired algorithm. The given nonlinear partial differential equation (NPDE) is converted through substitution into a nonlinear ordinary differential equation (NODE). The travelling wave solution is approximated by the Exp-function method with unknown parameters. The unknown parameters are estimated by transforming the NODE into an equivalent global error minimization problem by using a fitness function. The popular genetic algorithm (GA) is used to solve the minimization problem and to achieve the unknown parameters. The proposed scheme is successfully implemented to solve the generalized Burgers'-Fisher equation. The comparison of numerical results with the exact solutions, and the solutions obtained using some traditional methods, including the Adomian decomposition method (ADM), homotopy perturbation method (HPM), and optimal homotopy asymptotic method (OHAM), shows that the suggested scheme is fairly accurate and viable for solving such problems.

  18. BSSF: a fingerprint based ultrafast binding site similarity search and function analysis server

    Directory of Open Access Journals (Sweden)

    Jiang Hualiang

    2010-01-01

    Full Text Available. Background: Genome sequencing and post-genomics projects such as structural genomics are extending the frontier of the study of the sequence-structure-function relationship of genes and their products. Although many sequence/structure-based methods have been devised with the aim of deciphering this delicate relationship, large gaps remain in this fundamental problem, which continuously drives researchers to develop novel methods to extract relevant information from sequences and structures and to infer the functions of newly identified genes by genomics technology. Results: Here we present an ultrafast method, named BSSF (Binding Site Similarity & Function), which enables researchers to conduct similarity searches in a comprehensive three-dimensional binding site database extracted from PDB structures. This method utilizes a fingerprint representation of the binding site and a validated statistical Z-score function scheme to judge the similarity between the query and database items, even if their similarity is confined to a sub-pocket. This fingerprint-based similarity measurement was also validated on a known binding site dataset by comparison with geometric hashing, which is a standard 3D similarity method. The comparison clearly demonstrated the utility of this ultrafast method. After the database search, the hit list is further analyzed to provide basic statistical information about the occurrences of Gene Ontology terms and Enzyme Commission numbers, which may benefit researchers by helping them to design further experiments to study the query proteins. Conclusions: This ultrafast web-based system will not only help researchers interested in drug design and structural genomics to identify similar binding sites, but will also assist them by providing further analysis of the hit list from the database search.

  19. PatchSurfers: Two methods for local molecular property-based binding ligand prediction.

    Science.gov (United States)

    Shin, Woong-Hee; Bures, Mark Gregory; Kihara, Daisuke

    2016-01-15

    Protein function prediction is an active area of research in computational biology. Function prediction can help biologists make hypotheses for characterization of genes and help interpret biological assays, and thus is a productive area for collaboration between experimental and computational biologists. Among various function prediction methods, predicting binding ligand molecules for a target protein is an important class because ligand binding events for a protein are usually closely intertwined with the protein's biological function, and also because predicted binding ligands can often be directly tested by biochemical assays. Binding ligand prediction methods can be classified into two types: those which are based on protein-protein (or pocket-pocket) comparison, and those that compare a target pocket directly to ligands. Recently, our group proposed two computational binding ligand prediction methods, Patch-Surfer, which is a pocket-pocket comparison method, and PL-PatchSurfer, which compares a pocket to ligand molecules. The two programs apply surface patch-based descriptions to calculate similarity or complementarity between molecules. A surface patch is characterized by physicochemical properties such as shape, hydrophobicity, and electrostatic potentials. These properties on the surface are represented using three-dimensional Zernike descriptors (3DZD), which are based on a series expansion of a three-dimensional function. Utilizing 3DZD for describing the physicochemical properties has two main advantages: (1) rotational invariance and (2) fast comparison. Here, we introduce Patch-Surfer and PL-PatchSurfer with an emphasis on PL-PatchSurfer, which is more recently developed. Illustrative examples of PL-PatchSurfer performance on binding ligand prediction as well as virtual drug screening are also provided. Copyright © 2015 Elsevier Inc. All rights reserved.

  20. Study of a method to solve the one speed, three dimensional transport equation using the finite element method and the associated Legendre function

    International Nuclear Information System (INIS)

    Fernandes, A.

    1991-01-01

    A method to solve the three-dimensional neutron transport equation, based on the original work suggested by J.K. Fletcher (42, 43), is presented. The angular dependence of the flux is approximated by associated Legendre functions and the finite element method is applied to the spatial components. When the angular flux, the scattering cross section and the neutron source are expanded in associated Legendre functions, the first order neutron transport equation is reduced to a coupled set of second order diffusion-like equations. These equations are solved iteratively for the moments by the finite element method. (author)

  1. A Copula-Based Method for Estimating Shear Strength Parameters of Rock Mass

    Directory of Open Access Journals (Sweden)

    Da Huang

    2014-01-01

    Full Text Available. The shear strength parameters (i.e., the internal friction coefficient f and cohesion c) are very important in rock engineering, especially for the stability analysis and reinforcement design of slopes and underground caverns. In this paper, a probabilistic, Copula-based method is proposed for estimating the shear strength parameters of rock mass. The optimal Copula functions between rock mass quality Q and f, and between Q and c, for marbles are established based on correlation analyses of the results of 12 sets of in situ tests in the exploration adits of the Jinping I-Stage Hydropower Station. Although the Copula functions are derived from the in situ tests for marbles, they can be extended to other types of rock mass with similar geological and mechanical properties. For another 9 sets of in situ tests, as an extended application, comparison with the results from the Hoek-Brown criterion shows that the estimated values of f and c from the Copula-based method achieve better accuracy. Therefore, the proposed Copula-based method is an effective tool for estimating rock strength parameters.
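
    The general pattern behind such a method can be sketched generically: fit a copula to paired (Q, f) observations, then predict f for a new Q from the conditional distribution. The snippet below fits a Gaussian copula to synthetic data (illustrative values, not the Jinping measurements; the marginals, correlation, and copula family are all assumptions) and returns the conditional median of f given Q.

```python
import numpy as np
from statistics import NormalDist

nd = NormalDist()
rng = np.random.default_rng(1)

# Hypothetical paired observations (Q: rock mass quality, f: friction coefficient)
z = rng.multivariate_normal([0, 0], [[1, 0.8], [0.8, 1]], 200)
Q = np.exp(0.5 * z[:, 0] + 1.0)           # synthetic, positively correlated pair
f = 0.6 + 0.15 * z[:, 1]

def normal_scores(x):
    # map data to pseudo-observations in (0,1), then to standard-normal scores
    ranks = np.argsort(np.argsort(x)) + 1
    u = ranks / (len(x) + 1)
    return np.array([nd.inv_cdf(v) for v in u])

# Gaussian copula parameter = correlation of the normal scores
rho = np.corrcoef(normal_scores(Q), normal_scores(f))[0, 1]

def predict_f(q_new):
    # conditional median of f given Q = q_new under the fitted Gaussian copula
    u_q = (np.sum(Q <= q_new) + 0.5) / (len(Q) + 1)
    z_q = nd.inv_cdf(u_q)
    u_f = nd.cdf(rho * z_q)               # median of the conditional copula
    return float(np.quantile(f, u_f))
```

    With a positive copula correlation, higher rock mass quality maps to a higher predicted friction coefficient, matching the qualitative behaviour described in the abstract.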

  2. Recent Progress in Producing Lignin-Based Carbon Fibers for Functional Applications

    Energy Technology Data Exchange (ETDEWEB)

    Paul, Ryan [GrafTech International Holdings Inc.; Burwell, Deanna [GrafTech International Holdings Inc.; Dai, Xuliang [GrafTech International Holdings Inc.; Naskar, Amit [Oak Ridge National Laboratory; Gallego, Nidia [Oak Ridge National Laboratory; Akato, Kokouvi [Oak Ridge National Laboratory

    2015-10-29

    Lignin, a biopolymer, has been investigated as a renewable and low-cost carbon fiber precursor since the 1960s. Although successful lab-scale production of lignin-based carbon fibers has been reported, there are currently not any commercial producers. This paper will highlight some of the known challenges with converting lignin-based precursors into carbon fiber, and the reported methods for purifying and modifying lignin to improve it as a precursor. Several of the challenges with lignin are related to its diversity in chemical structure and purity, depending on its biomass source (e.g. hardwood, softwood, grasses) and extraction method (e.g. organosolv, kraft). In order to make progress in this field, GrafTech and Oak Ridge National Laboratory are collaborating to develop lignin-based carbon fiber technology and to demonstrate it in functional applications, as part of a cooperative agreement with the DOE Advanced Manufacturing Office. The progress made to date with producing lignin-based carbon fiber for functional applications, as well as developing and qualifying a supply chain and value proposition, are also highlighted.

  3. Level set method for image segmentation based on moment competition

    Science.gov (United States)

    Min, Hai; Wang, Xiao-Feng; Huang, De-Shuang; Jin, Jing; Wang, Hong-Zhi; Li, Hai

    2015-05-01

    We propose a level set method for image segmentation which introduces the moment competition and weakly supervised information into the energy functional construction. Different from the region-based level set methods which use force competition, the moment competition is adopted to drive the contour evolution. Here, a so-called three-point labeling scheme is proposed to manually label three independent points (weakly supervised information) on the image. Then the intensity differences between the three points and the unlabeled pixels are used to construct the force arms for each image pixel. The corresponding force is generated from the global statistical information of a region-based method and weighted by the force arm. As a result, the moment can be constructed and incorporated into the energy functional to drive the evolving contour to approach the object boundary. In our method, the force arm can take full advantage of the three-point labeling scheme to constrain the moment competition. Additionally, the global statistical information and weakly supervised information are successfully integrated, which makes the proposed method more robust than traditional methods for initial contour placement and parameter setting. Experimental results with performance analysis also show the superiority of the proposed method on segmenting different types of complicated images, such as noisy images, three-phase images, images with intensity inhomogeneity, and texture images.

  4. Feature-Based Classification of Amino Acid Substitutions outside Conserved Functional Protein Domains

    Directory of Open Access Journals (Sweden)

    Branislava Gemovic

    2013-01-01

    Full Text Available. There are more than 500 amino acid substitutions in each human genome, and bioinformatics tools contribute irreplaceably to the determination of their functional effects. We have developed a feature-based algorithm for the detection of mutations outside conserved functional domains (CFDs) and compared its classification efficacy with the most commonly used phylogeny-based tools, PolyPhen-2 and SIFT. The new algorithm is based on the informational spectrum method (ISM), a feature-based technique, and statistical analysis. Our dataset contained neutral polymorphisms and mutations associated with myeloid malignancies from the epigenetic regulators ASXL1, DNMT3A, EZH2, and TET2. PolyPhen-2 and SIFT had significantly lower accuracies in predicting the effects of amino acid substitutions outside CFDs than expected, with especially low sensitivity. On the other hand, only the ISM algorithm showed statistically significant classification of these sequences. It outperformed PolyPhen-2 and SIFT by 15% and 13%, respectively. These results suggest that feature-based methods, like ISM, are more suitable than phylogeny-based tools for the classification of amino acid substitutions outside CFDs.

  5. GA Based Optimal Feature Extraction Method for Functional Data Classification

    OpenAIRE

    Jun Wan; Zehua Chen; Yingwu Chen; Zhidong Bai

    2010-01-01

    Classification is an interesting problem in functional data analysis (FDA), because many science and application problems end up as classification problems, such as recognition, prediction, control, decision making, management, etc. Due to the high dimension and high correlation in functional data (FD), it is a key problem to extract features from FD while keeping its global characteristics, which strongly affects classification efficiency and precision. In this paper...

  6. Performance analysis of demodulation with diversity -- A combinatorial approach I: Symmetric function theoretical methods

    Directory of Open Access Journals (Sweden)

    Jean-Louis Dornstetter

    2002-12-01

    Full Text Available. This paper is devoted to the presentation of a combinatorial approach, based on the theory of symmetric functions, for analyzing the performance of a family of demodulation methods used in mobile telecommunications.

  7. A Clustering-Based Automatic Transfer Function Design for Volume Visualization

    Directory of Open Access Journals (Sweden)

    Tianjin Zhang

    2016-01-01

    Full Text Available. Two-dimensional transfer functions (TFs) designed on the intensity-gradient magnitude (IGM) histogram are effective tools for the visualization and exploration of 3D volume data. However, traditional design methods usually depend on multiple rounds of trial-and-error. We propose a novel method for the automatic generation of transfer functions by performing the affinity propagation (AP) clustering algorithm on the IGM histogram. Compared with previous clustering algorithms employed in volume visualization, the AP clustering algorithm has much faster convergence and achieves more accurate clustering results. In order to obtain meaningful clustering results, we introduce two similarity measurements: IGM similarity and spatial similarity. These two similarity measurements can effectively bring the voxels of the same tissue together and differentiate the voxels of different tissues, so that the generated TFs can assign different optical properties to different tissues. Before performing the clustering algorithm on the IGM histogram, we propose to remove noisy voxels based on the spatial information of voxels. Our method does not require users to input the number of clusters, and the classification and visualization process is automatic and efficient. Experiments on various datasets demonstrate the effectiveness of the proposed method.

  8. Full Waveform Inversion Using an Energy-Based Objective Function with Efficient Calculation of the Gradient

    KAUST Repository

    Choi, Yun Seok

    2017-05-26

    Full waveform inversion (FWI) using an energy-based objective function has the potential to provide long-wavelength model information even without low frequencies in the data. However, without the back-propagation (adjoint-state) method, its implementation is impractical for models of the size used in a general seismic survey. We derive the gradient of the energy-based objective function using the back-propagation method to make its FWI feasible. We also raise the energy signal to the power of a small positive number to properly handle the energy signal imbalance as a function of offset. Examples demonstrate that the proposed FWI algorithm provides a convergent long-wavelength structure model even without low-frequency information, which can be used as a good starting model for the subsequent conventional FWI.

  9. Assessment of Methods to Consolidate Iodine-Loaded Silver-Functionalized Silica Aerogel

    Energy Technology Data Exchange (ETDEWEB)

    Matyas, Josef; Engler, Robert K.

    2013-09-01

    The U.S. Department of Energy is currently investigating alternative sorbents for the removal and immobilization of radioiodine from the gas streams in a nuclear fuel reprocessing plant. One of these new sorbents, Ag0-functionalized silica aerogel, shows great promise as a potential replacement for Ag-bearing mordenites because of its high selectivity and sorption capacity for iodine. Moreover, the feasible consolidation of iodine-loaded Ag0-functionalized silica aerogels into a durable SiO2-based waste form makes this aerogel an attractive choice for sequestering radioiodine. This report provides a preliminary assessment of the methods that can be used to consolidate iodine-loaded Ag0-functionalized silica aerogels into a final waste form. In particular, it focuses on experimental investigation of the densification of as-prepared Ag0-functionalized silica aerogel powders, with or without organic moiety and with or without a sintering additive (colloidal silica), using three commercially available techniques: 1) hot uniaxial pressing (HUP), 2) hot isostatic pressing (HIP), and 3) spark plasma sintering (SPS). The densified products were evaluated with a helium gas pycnometer for apparent density, with the Archimedes method for apparent density and open porosity, and with high-resolution scanning electron microscopy and energy dispersive spectroscopy (SEM-EDS) for the extent of densification and the distribution of individual elements. The preliminary investigation of HUP, HIP, and SPS showed that these sintering methods can effectively consolidate powders of Ag0-functionalized silica aerogel into products of near-theoretical density. Also, removing the organic moiety and adding 5.6 mass% of colloidal silica to the Ag0-functionalized silica aerogel powders before processing produced denser products. Furthermore, the ram travel data for SPS indicated that rapid consolidation of the powders can be performed at temperatures below 950°C.

  10. A multivariate quadrature based moment method for LES based modeling of supersonic combustion

    Science.gov (United States)

    Donde, Pratik; Koo, Heeseok; Raman, Venkat

    2012-07-01

    The transported probability density function (PDF) approach is a powerful technique for large eddy simulation (LES) based modeling of scramjet combustors. In this approach, a high-dimensional transport equation for the joint composition-enthalpy PDF needs to be solved. Quadrature based approaches provide deterministic Eulerian methods for solving the joint-PDF transport equation. In this work, it is first demonstrated that the numerical errors associated with LES require special care in the development of PDF solution algorithms. The direct quadrature method of moments (DQMOM) is one quadrature-based approach developed for supersonic combustion modeling. This approach is shown to generate inconsistent evolution of the scalar moments. Further, gradient-based source terms that appear in the DQMOM transport equations are severely underpredicted in LES leading to artificial mixing of fuel and oxidizer. To overcome these numerical issues, a semi-discrete quadrature method of moments (SeQMOM) is formulated. The performance of the new technique is compared with the DQMOM approach in canonical flow configurations as well as a three-dimensional supersonic cavity stabilized flame configuration. The SeQMOM approach is shown to predict subfilter statistics accurately compared to the DQMOM approach.

  11. Concomitant prediction of function and fold at the domain level with GO-based profiles.

    Science.gov (United States)

    Lopez, Daniel; Pazos, Florencio

    2013-01-01

    Predicting the function of newly sequenced proteins is crucial due to the pace at which these raw sequences are being obtained. Almost all resources for predicting protein function assign functional terms to whole chains, and do not distinguish which particular domain is responsible for the allocated function. This is not a limitation of the methodologies themselves but is due to the fact that, in the databases of functional annotations these methods use for transferring functional terms to new proteins, the annotations are done on a whole-chain basis. Nevertheless, domains are the basic evolutionary, and often functional, units of proteins. In many cases, the domains of a protein chain have distinct molecular functions, independent from each other. For that reason, resources with functional annotations at the domain level, as well as methodologies for predicting function for individual domains adapted to these resources, are required. We present a methodology for predicting the molecular function of individual domains, based on a previously developed database of functional annotations at the domain level. The approach, which we show outperforms a standard method based on sequence searches in assigning function, concomitantly predicts the structural fold of the domains and can give hints on the functionally important residues associated with the predicted function.

  12. Gene function prediction based on Gene Ontology Hierarchy Preserving Hashing.

    Science.gov (United States)

    Zhao, Yingwen; Fu, Guangyuan; Wang, Jun; Guo, Maozu; Yu, Guoxian

    2018-02-23

    Gene Ontology (GO) uses structured vocabularies (or terms) to describe the molecular functions, biological roles, and cellular locations of gene products in a hierarchical ontology. GO annotations associate genes with GO terms and indicate that the given gene products carry out the biological functions described by the relevant terms. However, predicting correct GO annotations for genes from a massive set of GO terms as defined by GO is a difficult challenge. To combat this challenge, we introduce a Gene Ontology Hierarchy Preserving Hashing (HPHash) based semantic method for gene function prediction. HPHash first measures the taxonomic similarity between GO terms. It then uses a hierarchy preserving hashing technique to keep the hierarchical order between GO terms, and to optimize a series of hashing functions to encode massive GO terms via compact binary codes. After that, HPHash utilizes these hashing functions to project the gene-term association matrix into a low-dimensional one and performs semantic similarity based gene function prediction in the low-dimensional space. Experimental results on three model species (Homo sapiens, Mus musculus and Rattus norvegicus) for interspecies gene function prediction show that HPHash performs better than other related approaches and is robust to the number of hash functions. In addition, we also take HPHash as a plugin for BLAST based gene function prediction. From the experimental results, HPHash again significantly improves the prediction performance. The codes of HPHash are available at: http://mlda.swu.edu.cn/codes.php?name=HPHash. Copyright © 2018 Elsevier Inc. All rights reserved.
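
    The retrieval primitive such hashing methods build on, comparing items via compact binary codes under Hamming distance, fits in a few lines. The codes below are made-up toy values, not the output of HPHash's learned hash functions:

```python
# Toy binary codes standing in for hashed GO terms (illustrative values only)
codes = {"termA": 0b101100, "termB": 0b101101, "termC": 0b010011}

def hamming(a, b):
    # Hamming distance between two binary codes via XOR + popcount
    return bin(a ^ b).count("1")

def nearest(query, db):
    # retrieve the item whose code is closest to the query in Hamming space
    return min(db, key=lambda k: hamming(query, db[k]))
```

    Because XOR and popcount are constant-time on machine words, a nearest-code lookup over millions of terms stays cheap, which is the point of encoding a massive term set as short binary codes.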

  13. Bayesian risk-based decision method for model validation under uncertainty

    International Nuclear Information System (INIS)

    Jiang Xiaomo; Mahadevan, Sankaran

    2007-01-01

    This paper develops a decision-making methodology for computational model validation, considering the risk of using the current model, data support for the current model, and cost of acquiring new information to improve the model. A Bayesian decision theory-based method is developed for this purpose, using a likelihood ratio as the validation metric for model assessment. An expected risk or cost function is defined as a function of the decision costs, and the likelihood and prior of each hypothesis. The risk is minimized through correctly assigning experimental data to two decision regions based on the comparison of the likelihood ratio with a decision threshold. A Bayesian validation metric is derived based on the risk minimization criterion. Two types of validation tests are considered: pass/fail tests and system response value measurement tests. The methodology is illustrated for the validation of reliability prediction models in a tension bar and an engine blade subjected to high cycle fatigue. The proposed method can effectively integrate optimal experimental design into model validation to simultaneously reduce the cost and improve the accuracy of reliability model assessment.
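
    A minimal sketch of the likelihood-ratio decision rule described above: the model is accepted when the likelihood ratio exceeds a threshold formed from the priors and decision costs. The Gaussian likelihoods, priors, and costs here are illustrative assumptions, not values from the paper.

```python
import math

# H0: model valid; H1: model invalid (all numbers below are illustrative)
prior0, prior1 = 0.5, 0.5
cost_fa, cost_miss = 1.0, 2.0     # cost of false rejection / false acceptance

def gauss(x, mu, sigma):
    # Gaussian likelihood of an observed prediction error x
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def decide(error):
    lr = gauss(error, 0.0, 1.0) / gauss(error, 2.0, 1.5)   # p(x|H0) / p(x|H1)
    threshold = (prior1 * cost_miss) / (prior0 * cost_fa)  # risk-minimizing cutoff
    return "accept model" if lr > threshold else "reject model"
```

    Small prediction errors favor the valid-model hypothesis; raising `cost_miss` raises the threshold and makes acceptance harder, which is exactly the cost/risk trade-off the abstract describes.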

  14. Network reliability analysis of complex systems using a non-simulation-based method

    International Nuclear Information System (INIS)

    Kim, Youngsuk; Kang, Won-Hee

    2013-01-01

    Civil infrastructures such as transportation, water supply, sewer, telecommunication, and electrical and gas networks often form highly complex networks, due to their multiple source and distribution nodes, complex topology, and functional interdependence between network components. To understand the reliability of such complex network systems under catastrophic events such as earthquakes, and to provide proper emergency management actions under such situations, efficient and accurate reliability analysis methods are necessary. In this paper, a non-simulation-based network reliability analysis method is developed based on the Recursive Decomposition Algorithm (RDA) for risk assessment of generic networks whose operation is defined by the connections of multiple initial and terminal node pairs. The proposed method has two separate decomposition processes for the two logical functions, intersection and union, and combinations of these processes are used for the decomposition of any general system event with multiple node pairs. The proposed method is illustrated through numerical network examples with a variety of system definitions, and is applied to a benchmark gas transmission pipe network in Memphis, TN to estimate the seismic performance and functional degradation of the network under a set of earthquake scenarios.
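
    For intuition, the quantity being computed, two-terminal network reliability, can be obtained exactly (without simulation) by brute-force enumeration of component states on a tiny network; the RDA computes the same quantity far more efficiently by recursive decomposition rather than by this exponential enumeration. The three-edge network and its survival probabilities below are illustrative.

```python
from itertools import product

# toy network: source s, terminal t, and edge survival probabilities (assumed)
edges = {("s", "a"): 0.9, ("a", "t"): 0.9, ("s", "t"): 0.8}

def connected(up_edges):
    # BFS from s over surviving (undirected) edges; is t reachable?
    frontier, seen = ["s"], {"s"}
    while frontier:
        node = frontier.pop()
        for u, v in up_edges:
            for nxt in ((v,) if u == node else (u,) if v == node else ()):
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return "t" in seen

def reliability(edges):
    # exact s-t reliability: sum probabilities of all connected edge states
    r = 0.0
    es = list(edges)
    for states in product([0, 1], repeat=len(es)):
        p = 1.0
        for e, s in zip(es, states):
            p *= edges[e] if s else 1 - edges[e]
        if connected([e for e, s in zip(es, states) if s]):
            r += p
    return r
```

    Here the direct s-t edge (0.8) is in parallel with the s-a-t series path (0.81), so the exact value is 1 - 0.2 * 0.19 = 0.962; the enumeration reproduces it, but at O(2^n) cost, which is what decomposition algorithms like the RDA avoid.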

  15. FUSION SEGMENTATION METHOD BASED ON FUZZY THEORY FOR COLOR IMAGES

    Directory of Open Access Journals (Sweden)

    J. Zhao

    2017-09-01

    Full Text Available. The image segmentation method based on a two-dimensional histogram segments the image according to thresholds on the intensity of the target pixel and the average intensity of its neighborhood. This method is essentially a hard-decision method. Due to the uncertainties in labeling the pixels around the threshold, the hard-decision method can easily produce wrong segmentation results. Therefore, a fusion segmentation method based on fuzzy theory is proposed in this paper. We use membership functions to model the uncertainties on each color channel of the color image. Then, we segment the color image according to fuzzy reasoning. The experimental results show that our proposed method achieves better segmentation results on both natural scene images and optical remote sensing images compared with the traditional thresholding method. The fusion method in this paper can provide new ideas for information extraction from optical remote sensing images and polarization SAR images.
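
    A toy version of the soft-decision idea: replace the hard per-channel threshold with a sigmoid membership function and fuse the channels with a min t-norm (fuzzy AND). The thresholds and membership width are illustrative assumptions, not the paper's fuzzy-reasoning rules.

```python
import numpy as np

def membership(channel, threshold, width=10.0):
    # sigmoid membership: degree to which a pixel belongs to the "object" class,
    # soft near the threshold instead of a hard 0/1 cut
    return 1.0 / (1.0 + np.exp(-(channel - threshold) / width))

def fuzzy_segment(rgb, thresholds=(100, 100, 100)):
    mu = [membership(rgb[..., k].astype(float), t) for k, t in enumerate(thresholds)]
    fused = np.minimum.reduce(mu)    # fuse channels with a min t-norm (fuzzy AND)
    return fused > 0.5               # defuzzify to a binary mask

img = np.zeros((4, 4, 3), dtype=np.uint8)
img[:2, :2] = 200                    # bright "object" corner on all channels
mask = fuzzy_segment(img)
```

    Pixels far from the threshold get memberships near 0 or 1 on every channel, while pixels near it receive intermediate degrees, so the final label reflects all channels jointly rather than a single hard cut.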

  16. Multiquark masses and wave functions through modified Green's function Monte Carlo method

    International Nuclear Information System (INIS)

    Kerbikov, B.O.; Polikarpov, M.I.; Shevchenko, L.V.

    1987-01-01

    The Modified Green's function Monte Carlo method (MGFMC) is used to calculate the masses and ground-state wave functions of multiquark systems in the potential model. The previously developed MGFMC is generalized in order to treat systems containing quarks with unequal masses. The results obtained with the Cornell potential are presented for the masses and wave functions of light and heavy flavoured baryons and multiquark states (N=6, 9, 12) made of light quarks

  17. Structure-based Markov random field model for representing evolutionary constraints on functional sites.

    Science.gov (United States)

    Jeong, Chan-Seok; Kim, Dongsup

    2016-02-24

    Elucidating the cooperative mechanism of interconnected residues is an important component toward understanding the biological function of a protein. Coevolution analysis has been developed to model the coevolutionary information reflecting structural and functional constraints. Recently, several methods have been developed based on a probabilistic graphical model called the Markov random field (MRF), which have led to significant improvements for coevolution analysis; however, thus far, the performance of these models has mainly been assessed by focusing on the aspect of protein structure. In this study, we built an MRF model whose graphical topology is determined by the residue proximity in the protein structure, and derived a novel positional coevolution estimate utilizing the node weight of the MRF model. This structure-based MRF method was evaluated for three data sets, each of which annotates catalytic site, allosteric site, and comprehensively determined functional site information. We demonstrate that the structure-based MRF architecture can encode the evolutionary information associated with biological function. Furthermore, we show that the node weight can more accurately represent positional coevolution information compared to the edge weight. Lastly, we demonstrate that the structure-based MRF model can be reliably built with only a few aligned sequences in linear time. The results show that adoption of a structure-based architecture could be an acceptable approximation for coevolution modeling with efficient computation complexity.

  18. Analysis of the robustness of network-based disease-gene prioritization methods reveals redundancy in the human interactome and functional diversity of disease-genes.

    Directory of Open Access Journals (Sweden)

    Emre Guney

    Full Text Available. Complex biological systems usually pose a trade-off between robustness and fragility, where a small number of perturbations can substantially disrupt the system. Although biological systems are robust against changes in many external and internal conditions, even a single mutation can perturb the system substantially, giving rise to a pathophenotype. Recent advances in identifying and analyzing the sequence variations underlying human disorders help to build a systemic view of the mechanisms underlying various disease phenotypes. Network-based disease-gene prioritization methods rank the relevance of genes in a disease under the hypothesis that genes whose proteins interact with each other tend to exhibit similar phenotypes. In this study, we have tested the robustness of several network-based disease-gene prioritization methods with respect to perturbations of the system, using various disease phenotypes from the Online Mendelian Inheritance in Man database. These perturbations have been introduced either in the protein-protein interaction network or in the set of known disease-gene associations. As the network-based disease-gene prioritization methods are based on the connectivity between known disease-gene associations, we have further used these methods to categorize the pathophenotypes with respect to the recoverability of hidden disease-genes. Our results suggest that, in general, disease-genes are connected through multiple paths in the human interactome. Moreover, even when these paths are disturbed, network-based prioritization can reveal hidden disease-gene associations in some pathophenotypes, such as breast cancer, cardiomyopathy, diabetes, leukemia, Parkinson disease and obesity, to a greater extent than the rest of the pathophenotypes tested in this study. Gene Ontology (GO) analysis highlighted the role of functional diversity for such diseases.
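
    Network-based prioritization of the kind being stress-tested above is often implemented as a random walk with restart from the known disease genes; a self-contained toy version on a hypothetical five-gene interactome (the graph and the restart parameter are assumptions for illustration, not any method evaluated in the study):

```python
import numpy as np

# toy interactome: symmetric adjacency over five hypothetical genes
genes = ["g0", "g1", "g2", "g3", "g4"]
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], float)
W = A / A.sum(axis=0)                     # column-normalized transition matrix
seed = np.array([1, 0, 0, 0, 0], float)   # g0 is the known disease gene

def rwr(W, seed, restart=0.3, iters=200):
    # random walk with restart: iterate to the stationary score vector
    p = seed.copy()
    for _ in range(iters):
        p = (1 - restart) * W @ p + restart * seed
    return p

scores = rwr(W, seed)
ranking = [genes[i] for i in np.argsort(-scores)]
```

    Genes close to the seed in the network accumulate high scores through many short paths, which is why (as the study finds) prioritization can survive perturbation of individual paths: removing one route between a candidate and the seed still leaves the others.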

  19. A SYNTHESIS METHOD OF BASIC TERNARY BENT-SQUARES BASED ON THE TRIAD SHIFT OPERATOR

    Directory of Open Access Journals (Sweden)

    O. N. Zhdanov

    2017-01-01

    Full Text Available. Practical application of advanced algebraic constructions in modern communication systems based on MC-CDMA (Multi-Code Code Division Multiple Access) technology and in cryptography necessitates their further research. One of the most commonly used advanced algebraic constructions is the binary bent-function, which has a uniform amplitude spectrum under the Walsh-Hadamard transform and, accordingly, the maximal distance from the codewords of an affine code. In addition to binary bent-functions, researchers currently focus on the development of synthesis methods for their many-valued analogues. In particular, one of the most effective methods for the synthesis of many-valued bent-functions is the method based on the Agievich bent-squares. In this paper, we develop a regular synthesis method for ternary bent-squares on the basis of an arbitrary spectral vector and the regular operator of the triad shift. The classification of spectral vectors of lengths N = 3 and N = 9 is performed. On the basis of the spectral classification, a more precise definition of many-valued bent-sequences is given, taking into account the existence of many-valued bent-sequences for lengths determined by an odd power of the base. The paper's results are valuable for practical use: the development of new constant-amplitude codes for MC-CDMA technology, cryptographic primitives, data compression algorithms, signal structures, and algorithms of block and stream encryption based on advanced principles of many-valued logic. The developed bent-squares design method is also a basis for further theoretical research: the development of methods for the permutation of rows and columns of basic bent-squares and their sign coding, and the synthesis of composite bent-squares. In addition, the data on the spectral classification of vectors pose the task of constructing synthesis methods for bent-functions of lengths N = 3^(2k+1), k ∈ ℕ.
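
    The defining property mentioned above, that a binary bent function has a perfectly uniform Walsh-Hadamard amplitude spectrum, is easy to verify numerically. The sketch below checks it for the classic bent function f(x1..x4) = x1*x2 XOR x3*x4; this is the binary case only, since a ternary analogue as in the paper would require a three-valued transform instead.

```python
import numpy as np

n = 4

def f(x):
    # classic Maiorana-McFarland bent function: x1*x2 XOR x3*x4
    return (x[0] & x[1]) ^ (x[2] & x[3])

# +/-1 sequence of f over all 2^n inputs
seq = np.array([(-1) ** f([(i >> k) & 1 for k in range(n)])
                for i in range(2 ** n)])

def walsh_hadamard(v):
    # iterative fast Walsh-Hadamard transform (in-place butterflies)
    v = v.astype(float)
    h = 1
    while h < len(v):
        for i in range(0, len(v), 2 * h):
            for j in range(i, i + h):
                v[j], v[j + h] = v[j] + v[j + h], v[j] - v[j + h]
        h *= 2
    return v

spectrum = walsh_hadamard(seq)
```

    For a bent function on n variables every spectral coefficient has absolute value 2^(n/2), here 4, which is exactly the "uniform amplitude spectrum" property, and the reason bent sequences yield constant-amplitude MC-CDMA codes.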

  20. Reliability Evaluation of Bridges Based on Nonprobabilistic Response Surface Limit Method

    OpenAIRE

    Chen, Xuyong; Chen, Qian; Bian, Xiaoya; Fan, Jianping

    2017-01-01

    Due to many uncertainties in the nonprobabilistic reliability assessment of bridges, the limit state function is generally unknown. The traditional nonprobabilistic response surface method involves a lengthy, oscillating iteration process and makes the nonprobabilistic reliability index difficult to solve. This article proposes a nonprobabilistic response surface limit method based on the interval model. The intention of this method is to solve the upper and lower limits of the nonprobabilistic ...

  1. Characterizing Bonding Patterns in Diradicals and Triradicals by Density-Based Wave Function Analysis: A Uniform Approach.

    Science.gov (United States)

    Orms, Natalie; Rehn, Dirk R; Dreuw, Andreas; Krylov, Anna I

    2018-02-13

    Density-based wave function analysis enables unambiguous comparisons of the electronic structure computed by different methods and removes the ambiguity of orbital choice. We use this tool to investigate the performance of different spin-flip methods for several prototypical diradicals and triradicals. In contrast to previous calibration studies that focused on energy gaps between high- and low-spin states, we focus on the properties of the underlying wave functions, such as the number of effectively unpaired electrons. Comparison of different density functional and wave function theory results provides insight into the performance of the different methods when applied to strongly correlated systems such as polyradicals. We show that canonical molecular orbitals for species like large copper-containing diradicals fail to correctly represent the underlying electronic structure due to highly non-Koopmans character, while density-based analysis of the same wave function delivers a clear picture of the bonding pattern.

  2. Reliability analysis based on a novel density estimation method for structures with correlations

    Directory of Open Access Journals (Sweden)

    Baoyu LI

    2017-06-01

    Full Text Available Estimating the Probability Density Function (PDF) of the performance function is a direct way to perform structural reliability analysis, and the failure probability can then be obtained simply by integration over the failure domain. However, efficiently estimating the PDF remains an open problem. The existing fractional-moment-based maximum entropy approach provides a very advanced method for PDF estimation, but its main shortcoming is that it limits the reliability analysis to structures with independent inputs. In fact, structures with correlated inputs are common in engineering. This paper therefore improves the maximum entropy method by applying the Unscented Transformation (UT) technique, a very efficient moment estimation method for models with arbitrary inputs, to compute the fractional moments of the performance function for structures with correlations. The proposed method can precisely estimate the probability distributions of performance functions for structures with correlations. Moreover, the number of function evaluations required for reliability analysis, which is determined by the UT, is very small. Several examples are employed to illustrate the accuracy and advantages of the proposed method.
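    The UT step can be sketched as follows. This is a generic unscented transform for (possibly correlated) Gaussian inputs; the performance function and its input statistics are illustrative assumptions, and ordinary moments are estimated here, whereas the paper applies the UT to fractional moments:

    ```python
    # Estimate moments of a performance function g(X) from only 2n+1
    # evaluations, using sigma points that encode the input correlations.
    import numpy as np

    def unscented_moments(g, mean, cov, kappa=0.0):
        """Estimate E[g(X)] and Var[g(X)] from 2n+1 sigma points."""
        n = len(mean)
        L = np.linalg.cholesky((n + kappa) * cov)   # matrix square root
        pts = [mean] + [mean + L[:, i] for i in range(n)] \
                     + [mean - L[:, i] for i in range(n)]
        w = np.full(2 * n + 1, 1.0 / (2 * (n + kappa)))
        w[0] = kappa / (n + kappa)
        vals = np.array([g(p) for p in pts])
        m = np.dot(w, vals)
        v = np.dot(w, (vals - m) ** 2)
        return m, v

    # Correlated inputs: the off-diagonal covariance entry is handled by
    # the Cholesky factor, with no independence assumption.
    mean = np.array([1.0, 2.0])
    cov = np.array([[1.0, 0.5], [0.5, 2.0]])
    g = lambda x: 3.0 * x[0] - x[1]
    m, v = unscented_moments(g, mean, cov, kappa=1.0)
    print(m, v)   # ≈ 1.0 and 8.0 (the UT is exact for linear g)
    ```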

  3. Functions of Russian University During Formation of Innovation-Based Economy

    Directory of Open Access Journals (Sweden)

    Malika A. Kurdova

    2017-09-01

    Full Text Available Introduction: the formation of innovation-based model of the Russian economy promotes the appearance of new functions of higher education institutions’ activities aimed at sustainable development. The article analyzes various classifications of the above functions in the changed conditions of modern Russia. Materials and Methods: the authors draw on the publications by famous scientists, methods of logical analysis and synthesis, generalisation, sociological and statistical studies: a survey, expert evaluation method, documentation analysis, etc. Results: the authors’ classification of higher education institutions’ functions formed during transition to innovation-oriented model of economic development for training innovation-oriented experts is presented. The aim of new approaches is to generate innovative ideas, to transfer knowledge, to foster the skills of entrepreneurship, to ensure the competitiveness and the employability of graduates. The analysis of implementation of new functions on the example of the universities of the Penza region of the Russian Federation is made. Discussion and Conclusions: the modernisation of higher education, accompanied by update of its content and functions, involves the formation of a new national University model that reflects the specifics of modern stage of the country’s socio-economic development. Therefore changes in higher education affecting the processes of functioning of higher educational institutions are revealed. The influence of rapidly changing factors in the external and internal environment led to the formation of new and transformation of basic functions of Russian universities, the effective implementation of which will contribute to improving the quality of professional training of specialists and formation of innovative potential of new economy.

  4. New Multi-Criteria Group Decision-Making Method Based on Vague Set Theory

    OpenAIRE

    Kuo-Sui Lin

    2016-01-01

    In light of the deficiencies and limitations of existing score functions, Lin has proposed a more effective and reasonable score function for measuring vague values. Using Lin's score function and a new weighted aggregation score function, an algorithm for a multi-criteria group decision-making method was proposed to solve vague-set-based group decision-making problems under vague environments. Finally, a numerical example was given to show the effectiveness of the proposed multi-...

  5. Determination of acoustical transfer functions using an impulse method

    Science.gov (United States)

    MacPherson, J.

    1985-02-01

    The Transfer Function of a system may be defined as the relationship of the output response to the input of a system. Whilst recent advances in digital processing systems have enabled Impulse Transfer Functions to be determined by computation of the Fast Fourier Transform, there has been little work done in applying these techniques to room acoustics. Acoustical Transfer Functions have been determined for auditoria, using an impulse method. The technique is based on the computation of the Fast Fourier Transform (FFT) of a non-ideal impulsive source, both at the source and at the receiver point. The Impulse Transfer Function (ITF) is obtained by dividing the FFT at the receiver position by the FFT of the source. This quantity is presented both as linear frequency scale plots and also as synthesized one-third octave band data. The technique enables a considerable quantity of data to be obtained from a small number of impulsive signals recorded in the field, thereby minimizing the time and effort required on site. As the characteristics of the source are taken into account in the calculation, the choice of impulsive source is non-critical. The digital analysis equipment required for the analysis is readily available commercially.
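    The FFT-division step described above can be sketched as follows. The signals here are synthetic stand-ins for the recorded source and receiver-point impulses, not field data:

    ```python
    # ITF(f) = FFT(received) / FFT(source): dividing out the source
    # spectrum makes the choice of impulsive source non-critical.
    import numpy as np

    def impulse_transfer_function(source, received, eps=1e-12):
        """Ratio of spectra, with a small floor to avoid dividing by
        near-zero source bins."""
        S = np.fft.rfft(source)
        R = np.fft.rfft(received)
        return R / np.where(np.abs(S) < eps, eps, S)

    # Toy check: if the "room" just attenuates and delays the impulse,
    # the ITF magnitude is flat at the gain factor.
    n = 256
    src = np.zeros(n); src[0] = 1.0     # simplified impulsive source
    rec = np.zeros(n); rec[10] = 0.5    # attenuated, delayed copy
    H = impulse_transfer_function(src, rec)
    print(np.allclose(np.abs(H), 0.5))  # True: flat magnitude of 0.5
    ```

    One-third octave band data, as in the paper, would then be synthesized by averaging |H| over the corresponding frequency bins.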

  6. Analytic function expansion nodal method for nuclear reactor core design

    International Nuclear Information System (INIS)

    Noh, Hae Man

    1995-02-01

    than the analytic function. The second variation of the AFEN method we developed is the AFEN/PEN hybrid method. This method is designed especially for multigroup reactor analysis. The hybrid method solves the diffusion equations for the fast energy groups by the PEN method and those for the thermal energy groups by the AFEN method. It is based on the observation that the fast-group neutron flux distributions are generally so smooth that they can be approximated by a high-order polynomial, whereas the thermal fluxes require the analytic function expansion to represent their strong gradients near the interface between assemblies with different neutronic properties. The results of benchmark problems on which this method was tested indicate that the performance of the hybrid method is much better than that of the PEN method and nearly the same as that of the AFEN method. In order for the AFEN method and its variations to be used in analyzing neutron behavior in an actual reactor core, we also developed a new burnup correction model to reduce the errors in nodal flux distributions induced by intranodal burnup gradients. It is essential for nodal methods to maintain their accuracy in fuel depletion analysis. The burnup correction model developed in this study equivalently homogenizes the node with burnup-induced cross section variations into a homogeneous node with equivalent parameters such as flux-volume-weighted constant cross sections and discontinuity factors. The results of a benchmark problem show that this model eliminates almost all the errors in the nodal unknowns induced by intranodal burnup gradients

  7. Probabilistic Wind Power Ramp Forecasting Based on a Scenario Generation Method

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Qin [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Florita, Anthony R [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Krishnan, Venkat K [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Hodge, Brian S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Cui, Mingjian [University of Texas at Dallas; Feng, Cong [University of Texas at Dallas; Wang, Zhenke [University of Texas at Dallas; Zhang, Jie [University of Texas at Dallas

    2018-02-01

    Wind power ramps (WPRs) are particularly important in the management and dispatch of wind power and are currently drawing the attention of balancing authorities. With the aim of reducing the impact of WPRs on power system operations, this paper develops a probabilistic ramp forecasting method based on a large number of simulated scenarios. An ensemble machine learning technique is first adopted to forecast the basic wind power forecasting scenario and calculate the historical forecasting errors. A continuous Gaussian mixture model (GMM) is used to fit the probability density function (PDF) of the forecasting errors, and the cumulative distribution function (CDF) is analytically deduced. The inverse transform method, based on Monte Carlo sampling and the CDF, is used to generate a massive number of forecasting error scenarios. An optimized swinging door algorithm is adopted to extract all the WPRs from the complete set of wind power forecasting scenarios. The probabilistic forecasting results for ramp duration and start time are generated based on all scenarios. Numerical simulations on publicly available wind power data show that, within a predefined tolerance level, the developed probabilistic wind power ramp forecasting method is able to predict WPRs with a high level of sharpness and accuracy.
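    The scenario-generation step (GMM-based CDF plus inverse transform sampling) can be sketched as follows. The mixture parameters are illustrative assumptions, not fitted values from the paper, and the CDF is inverted numerically rather than analytically:

    ```python
    # Sample forecasting-error scenarios from a Gaussian mixture model by
    # inverse transform sampling: draw u ~ U(0,1), return CDF^{-1}(u).
    import math
    import random

    # Assumed 2-component GMM for the errors: (weights, means, stds).
    WEIGHTS, MEANS, STDS = [0.7, 0.3], [0.0, 0.5], [0.1, 0.3]

    def gmm_cdf(x):
        """Analytic CDF of the Gaussian mixture."""
        return sum(w * 0.5 * (1.0 + math.erf((x - m) / (s * math.sqrt(2))))
                   for w, m, s in zip(WEIGHTS, MEANS, STDS))

    def inverse_cdf(u, lo=-5.0, hi=5.0, iters=60):
        """Invert the CDF by bisection (it is monotone, so this converges)."""
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            if gmm_cdf(mid) < u:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)

    def sample_error_scenarios(n, seed=0):
        rng = random.Random(seed)
        return [inverse_cdf(rng.random()) for _ in range(n)]

    scenarios = sample_error_scenarios(5000)
    mean = sum(scenarios) / len(scenarios)
    print(mean)   # ≈ 0.15, the mixture mean 0.7*0.0 + 0.3*0.5
    ```

    Each sampled error would then be added to the base forecast to form one scenario, and the swinging door algorithm applied to every scenario to extract ramps.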

  8. Multi person detection and tracking based on hierarchical level-set method

    Science.gov (United States)

    Khraief, Chadia; Benzarti, Faouzi; Amiri, Hamid

    2018-04-01

    In this paper, we propose an efficient unsupervised method for multi-person tracking based on a hierarchical level-set approach. The proposed method uses both edge and region information in order to effectively detect objects. The persons are tracked in each frame of the sequence by minimizing an energy functional that combines color, texture, and shape information. These features are encoded in a covariance matrix used as a region descriptor. The method is fully automated, without the need to manually specify the initial contour of the level set: it is based on combined person detection and background subtraction. The edge-based term is employed to maintain a stable evolution, guide the segmentation towards apparent boundaries, and inhibit region fusion. The computational cost of the level-set evolution is reduced by using the narrow-band technique. Experiments on challenging video sequences show the effectiveness of the proposed method.

  9. One-way hash function based on hyper-chaotic cellular neural network

    International Nuclear Information System (INIS)

    Yang Qunting; Gao Tiegang

    2008-01-01

    The design of an efficient one-way hash function with good performance is a hot spot in modern cryptography research. In this paper, a hash function construction method based on a cellular neural network with hyper-chaotic characteristics is proposed. First, a chaotic sequence is obtained by iterating the cellular neural network with the Runge–Kutta algorithm, and then this sequence is iterated together with the message. The hash code is obtained through a corresponding transform of the resulting chaotic sequence. Simulation and analysis demonstrate that the new method has the merits of convenience, high sensitivity to initial values, and good hash performance, in particular strong stability. (general)
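    The general chaos-based hashing recipe (iterate a chaotic system, mix the trajectory with the message, transform the final sequence into a digest) can be sketched as below. Note the logistic map here is a simple stand-in for the paper's hyper-chaotic cellular neural network with Runge–Kutta integration, and all constants are illustrative assumptions, so this sketch has none of the paper's security analysis behind it:

    ```python
    # Toy chaos-based hash: perturb the chaotic state with each message
    # byte, iterate the map, then "squeeze" 16 digest bytes from the
    # trajectory. Illustrates sensitivity to the input, nothing more.

    def chaos_hash(message: bytes, rounds_per_byte=8) -> str:
        x = 0.6180339887          # initial value (acts as key material)
        r = 3.9999                # chaotic regime of the logistic map
        for b in message:
            # fold the message byte into the state, then iterate
            x = (x + b / 255.0) % 1.0 or 0.5
            for _ in range(rounds_per_byte):
                x = r * x * (1.0 - x)
        digest = []
        for _ in range(16):       # squeeze 16 bytes from the trajectory
            x = r * x * (1.0 - x)
            digest.append(int(x * 255))
        return bytes(digest).hex()

    h1 = chaos_hash(b"hello world")
    h2 = chaos_hash(b"hello worle")   # small change in the last byte
    print(h1 != h2)                   # sensitivity to the input
    ```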

  10. Theoretical method for determining particle distribution functions of classical systems

    International Nuclear Information System (INIS)

    Johnson, E.

    1980-01-01

    An equation which involves the triplet distribution function and the three-particle direct correlation function is obtained. This equation was derived using an analogue of the Ornstein–Zernike equation. The new equation is used to develop a variational method for obtaining the triplet distribution function of uniform one-component atomic fluids from the pair distribution function. The variational method may be used with the first and second equations in the YBG hierarchy to obtain pair and triplet distribution functions. It should be easy to generalize the results to the n-particle distribution function

  11. Numerical methods for hyperbolic differential functional problems

    Directory of Open Access Journals (Sweden)

    Roman Ciarski

    2008-01-01

    Full Text Available The paper deals with the initial boundary value problem for quasilinear first order partial differential functional systems. A general class of difference methods for the problem is constructed. Theorems on the error estimate of approximate solutions for difference functional systems are presented. The convergence results are proved by means of consistency and stability arguments. A numerical example is given.

  12. Exact solutions for nonlinear evolution equations using Exp-function method

    International Nuclear Information System (INIS)

    Bekir, Ahmet; Boz, Ahmet

    2008-01-01

    In this Letter, the Exp-function method is used to construct solitary and soliton solutions of nonlinear evolution equations. The Klein-Gordon, Burger-Fisher and Sharma-Tasso-Olver equations are chosen to illustrate the effectiveness of the method. The method is straightforward and concise, and its applications are promising. The Exp-function method presents a wider applicability for handling nonlinear wave equations

  13. Sliding mode control-based linear functional observers for discrete-time stochastic systems

    Science.gov (United States)

    Singh, Satnesh; Janardhanan, Sivaramakrishnan

    2017-11-01

    Sliding mode control (SMC) is one of the most popular techniques for stabilising linear discrete-time stochastic systems. However, the application of SMC becomes difficult when the system states are not available for feedback. This paper presents a new approach to designing an SMC-based functional observer for discrete-time stochastic systems. The functional observer is based on the Kronecker product approach. Existence conditions and a stability analysis of the proposed observer are given. The control input is estimated by a novel linear functional observer. This approach leads to a non-switching type of control, thereby eliminating the fundamental cause of chatter. Furthermore, the functional observer is designed in such a way that the effect of process and measurement noise is minimised. A simulation example is given to illustrate and validate the proposed design method.

  14. [The method of quality function deployment (QFD) in nursing services planning].

    Science.gov (United States)

    Matsuda, L M; Evora, Y D; Boan, F S

    2000-10-01

    "Focus on the client" is the posture that must be adopted in order to offer quality products. Based on the Total Quality Management approach, the Quality Function Deployment (QFD) method is a tool to achieve this goal. The purpose of this study is to create a proposal for planning nursing services following the steps and actions of this methodology. The basic procedure was to survey the needs of 106 hospitalized patients. Data were deployed using the seventeen proposed steps. Results showed that, according to the clients, interaction is more important than technique, and also that this method enables the implementation of quality in nursing care.

  15. Effect of the sequence data deluge on the performance of methods for detecting protein functional residues.

    Science.gov (United States)

    Garrido-Martín, Diego; Pazos, Florencio

    2018-02-27

    The exponential accumulation of new sequences in public databases is expected to improve the performance of all the approaches for predicting protein structural and functional features. Nevertheless, this was never assessed or quantified for some widely used methodologies, such as those aimed at detecting functional sites and functional subfamilies in protein multiple sequence alignments. Using raw protein sequences as only input, these approaches can detect fully conserved positions, as well as those with a family-dependent conservation pattern. Both types of residues are routinely used as predictors of functional sites and, consequently, understanding how the sequence content of the databases affects them is relevant and timely. In this work we evaluate how the growth and change with time in the content of sequence databases affect five sequence-based approaches for detecting functional sites and subfamilies. We do that by recreating historical versions of the multiple sequence alignments that would have been obtained in the past based on the database contents at different time points, covering a period of 20 years. Applying the methods to these historical alignments allows quantifying the temporal variation in their performance. Our results show that the number of families to which these methods can be applied sharply increases with time, while their ability to detect potentially functional residues remains almost constant. These results are informative for the methods' developers and final users, and may have implications in the design of new sequencing initiatives.

  16. The interpolation method of stochastic functions and the stochastic variational principle

    International Nuclear Information System (INIS)

    Liu Xianbin; Chen Qiu

    1993-01-01

    Uncertainties are attracting increasing attention in modern engineering structural design. Viewed on an appropriate scale, the inherent physical attributes (material properties) of many structural systems exhibit patterns of random variation in space and time; generally this random variation appears as a small parameter fluctuation. For a linear mechanical system, the random variation is modeled as a random variation of a linear partial differential operator and, in the stochastic finite element method, as a random variation of the stiffness matrix. Besides the stochasticity of the structural physical properties, the influence of random loads, which appear as random boundary conditions, adds considerable complexity to structural analysis. The stochastic (or probabilistic) finite element method is now used to study structural systems with random physical parameters, whether or not the loads are random. Differing from general finite element theory, the main difficulty the stochastic finite element method faces is the inversion of stochastic operators and stochastic matrices, since the inverse operators and inverse matrices are statistically correlated with the random parameters and random loads. So far, many efforts have been made to obtain reasonably approximate expressions for the inverse operators and inverse matrices, such as the Perturbation Method, the Neumann Expansion Method, the Galerkin Method (in appropriate Hilbert spaces defined for random functions), and the Orthogonal Expansion Method. Among these, the Perturbation Method appears to be the most widely applicable. The advantage of these methods is that fairly accurate response statistics can be obtained from finite information about the input. However, the second-order statistics obtained by the Perturbation Method and the Neumann Expansion Method are not always appropriate, because the relevant second

  17. Improved Real-time Denoising Method Based on Lifting Wavelet Transform

    Directory of Open Access Journals (Sweden)

    Liu Zhaohua

    2014-06-01

    Full Text Available Signal denoising can not only enhance the signal-to-noise ratio (SNR) but also reduce the effect of noise. In order to satisfy the requirements of real-time signal denoising, an improved semisoft shrinkage real-time denoising method based on the lifting wavelet transform is proposed. A moving data window realizes real-time wavelet denoising, employing a wavelet transform based on the lifting scheme to reduce computational complexity. The hyperbolic threshold function and recursive threshold computation ensure the dynamic characteristics of the system and also improve real-time computational efficiency. The simulation results show that the semisoft shrinkage real-time denoising method performs well in comparison with the traditional methods, namely soft and hard thresholding. Therefore, this method can solve more practical engineering problems.
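    The three shrinkage rules being compared can be sketched as below. The semisoft (firm) rule interpolates between hard and soft thresholding using two thresholds t1 < t2; the exact form of the paper's hyperbolic threshold function is not reproduced here:

    ```python
    # Wavelet-coefficient shrinkage rules applied to a single coefficient w.

    def hard(w, t):
        """Keep |w| > t unchanged, zero the rest."""
        return w if abs(w) > t else 0.0

    def soft(w, t):
        """Zero small coefficients, shrink the rest toward zero by t."""
        return 0.0 if abs(w) <= t else (abs(w) - t) * (1 if w > 0 else -1)

    def semisoft(w, t1, t2):
        """Zero below t1, keep above t2, linearly shrink in between."""
        a = abs(w)
        if a <= t1:
            return 0.0
        if a >= t2:
            return w                               # like hard thresholding
        sign = 1 if w > 0 else -1
        return sign * t2 * (a - t1) / (t2 - t1)    # like soft thresholding

    for w in (0.5, 1.5, 3.0):
        print(hard(w, 1.0), soft(w, 1.0), semisoft(w, 1.0, 2.0))
    # 0.0 0.0 0.0
    # 1.5 0.5 1.0
    # 3.0 2.0 3.0
    ```

    The semisoft rule avoids both the discontinuity of hard thresholding and the constant bias that soft thresholding imposes on large coefficients.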

  18. A Method to Measure the Bracelet Based on Feature Energy

    Science.gov (United States)

    Liu, Hongmin; Li, Lu; Wang, Zhiheng; Huo, Zhanqiang

    2017-12-01

    To measure the bracelet automatically, a novel method based on feature energy is proposed. Firstly, a morphological method is used to preprocess the image, and a contour consisting of concentric circles is extracted. Then, a feature energy function, which depends on the distances from a pixel to the edge points, is defined taking into account the geometric properties of the concentric circles. The input image is subsequently transformed into a feature energy distribution map (FEDM) by computing the feature energy of each pixel. The center of the concentric circles is then located by detecting the maximum on the FEDM, while the radii are determined from the feature energy function of the center pixel. Finally, with the use of a calibration template, the internal diameter and thickness of the bracelet are measured. The experimental results show that the proposed method can measure the true sizes of the bracelet accurately, with simplicity, directness, and robustness compared to existing methods.

  19. Grey Language Hesitant Fuzzy Group Decision Making Method Based on Kernel and Grey Scale.

    Science.gov (United States)

    Li, Qingsheng; Diao, Yuzhu; Gong, Zaiwu; Hu, Aqin

    2018-03-02

    Based on grey language multi-attribute group decision making, a kernel and grey scale scoring function is put forward according to the definition of grey language and the meaning of the kernel and grey scale. The function introduces grey scale into the decision-making method to avoid information distortion. This method is applied to the grey language hesitant fuzzy group decision making, and the grey correlation degree is used to sort the schemes. The effectiveness and practicability of the decision-making method are further verified by the industry chain sustainable development ability evaluation example of a circular economy. Moreover, its simplicity and feasibility are verified by comparing it with the traditional grey language decision-making method and the grey language hesitant fuzzy weighted arithmetic averaging (GLHWAA) operator integration method after determining the index weight based on the grey correlation.

  20. An efficient digital signal processing method for RRNS-based DS-CDMA systems

    Directory of Open Access Journals (Sweden)

    Peter Olsovsky

    2017-09-01

    Full Text Available This paper deals with an efficient method for achieving low power and high speed in advanced Direct-Sequence Code-Division Multiple-Access (DS-CDMA) wireless communication systems based on the Residue Number System (RNS). A modified algorithm for multiuser DS-CDMA signal generation in MATLAB is proposed and investigated, and the most important characteristics of the generated PN code are presented. Subsequently, a DS-CDMA system based on the RNS, or the so-called Redundant Residue Number System (RRNS), is proposed. An enhanced method using a spectrally efficient 8-PSK data modulation scheme to improve the bandwidth efficiency of RRNS-based DS-CDMA systems is presented. By using the C-measure (complexity measure) of the error detection function, it is possible to estimate the size of the circuit. The error detection function in RRNSs can be efficiently implemented by LookUp Table (LUT) cascades.
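    The RRNS idea underlying the error detection function can be sketched as follows. The moduli are illustrative assumptions, and the paper's LUT-cascade circuit implementation is not reproduced; this shows only the arithmetic principle:

    ```python
    # RRNS: represent x by its residues modulo pairwise-coprime moduli,
    # plus residues modulo redundant moduli. A corrupted residue pushes the
    # CRT reconstruction outside the legitimate (information) range.

    MODULI = [3, 5, 7]        # information moduli: legitimate range 0..104
    REDUNDANT = [11]          # redundant modulus extends the total range

    def rrns_encode(x):
        return [x % m for m in MODULI + REDUNDANT]

    def crt(residues, moduli):
        """Chinese Remainder Theorem reconstruction (Python 3.8+ pow)."""
        M = 1
        for m in moduli:
            M *= m
        x = 0
        for r, m in zip(residues, moduli):
            Mi = M // m
            x += r * Mi * pow(Mi, -1, m)
        return x % M

    def detect_error(residues):
        """A legitimate codeword reconstructs inside prod(MODULI);
        anything outside that range signals an error."""
        x = crt(residues, MODULI + REDUNDANT)
        return x >= 3 * 5 * 7

    code = rrns_encode(42)
    print(detect_error(code))            # False: clean codeword
    code[0] = (code[0] + 1) % MODULI[0]  # inject a single residue error
    print(detect_error(code))            # True: error detected
    ```

    The attraction for low-power hardware is that the residue channels are independent, so arithmetic proceeds in small parallel units with no carries between them.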

  1. Cone Beam X-ray Luminescence Computed Tomography Based on Bayesian Method.

    Science.gov (United States)

    Zhang, Guanglei; Liu, Fei; Liu, Jie; Luo, Jianwen; Xie, Yaoqin; Bai, Jing; Xing, Lei

    2017-01-01

    X-ray luminescence computed tomography (XLCT), which aims to achieve molecular and functional imaging with X-rays, has recently been proposed as a new imaging modality. Combining the principles of X-ray excitation of luminescence-based probes and optical signal detection, XLCT naturally fuses functional and anatomical images and provides complementary information for a wide range of applications in biomedical research. In order to improve the data acquisition efficiency of the previously developed narrow-beam XLCT, a cone beam XLCT (CB-XLCT) mode is adopted here to take advantage of the useful geometric features of cone beam excitation. In practice, a major hurdle in using a cone beam X-ray for XLCT is that the inverse problem is seriously ill-conditioned, hindering good image quality. In this paper, we propose a novel Bayesian method to tackle this bottleneck in CB-XLCT reconstruction. The method utilizes a local regularization strategy based on a Gaussian Markov random field to mitigate the ill-conditioning of CB-XLCT. An alternating optimization scheme is then used to automatically calculate all the unknown hyperparameters, while an iterative coordinate descent algorithm is adopted to reconstruct the image with a voxel-based closed-form solution. Results of numerical simulations and mouse experiments show that the self-adaptive Bayesian method significantly improves CB-XLCT image quality compared with conventional methods.

  2. A second-order unconstrained optimization method for canonical-ensemble density-functional methods

    Science.gov (United States)

    Nygaard, Cecilie R.; Olsen, Jeppe

    2013-03-01

    A second order converging method of ensemble optimization (SOEO) in the framework of Kohn-Sham Density-Functional Theory is presented, where the energy is minimized with respect to an ensemble density matrix. It is general in the sense that the number of fractionally occupied orbitals is not predefined, but rather it is optimized by the algorithm. SOEO is a second order Newton-Raphson method of optimization, where both the form of the orbitals and the occupation numbers are optimized simultaneously. To keep the occupation numbers between zero and two, a set of occupation angles is defined, from which the occupation numbers are expressed as trigonometric functions. The total number of electrons is controlled by a built-in second order restriction of the Newton-Raphson equations, which can be deactivated in the case of a grand-canonical ensemble (where the total number of electrons is allowed to change). To test the optimization method, dissociation curves for diatomic carbon are produced using different functionals for the exchange-correlation energy. These curves show that SOEO favors symmetry broken pure-state solutions when using functionals with exact exchange such as Hartree-Fock and Becke three-parameter Lee-Yang-Parr. This is explained by an unphysical contribution to the exact exchange energy from interactions between fractional occupations. For functionals without exact exchange, such as local density approximation or Becke Lee-Yang-Parr, ensemble solutions are favored at interatomic distances larger than the equilibrium distance. Calculations on the chromium dimer are also discussed. They show that SOEO is able to converge to ensemble solutions for systems that are more complicated than diatomic carbon.
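    The occupation-angle device mentioned above can be written out explicitly. The trigonometric form below is one common convention and is an assumption; the paper's exact parametrization may differ:

    ```latex
    % Occupation numbers kept in [0, 2] via occupation angles \theta_p:
    n_p = 2\sin^2\theta_p, \qquad 0 \le n_p \le 2 \quad \text{for all } \theta_p.
    % The electron-number constraint handled by the built-in restriction:
    \sum_p n_p = 2\sum_p \sin^2\theta_p = N.
    ```

    Because the bounds on $n_p$ are built into the parametrization, the Newton-Raphson optimization over $\theta_p$ is unconstrained, which is what makes a second-order scheme straightforward to apply.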

  3. Real reproduction and evaluation of color based on BRDF method

    Science.gov (United States)

    Qin, Feng; Yang, Weiping; Yang, Jia; Li, Hongning; Luo, Yanlin; Long, Hongli

    2013-12-01

    It is difficult to faithfully reproduce the original color of targets in different illumination environments using traditional methods, so a function that can reconstruct the reflection characteristics of every point on the surface of a target is urgently required to improve the authenticity of color reproduction; this function is known as the Bidirectional Reflectance Distribution Function (BRDF). A method of color reproduction based on BRDF measurement is introduced in this paper. Radiometry is combined with colorimetric theory to measure the irradiance and radiance of a GretagMacbeth 24-patch ColorChecker using a PR-715 Radiation Spectrophotometer from PHOTO RESEARCH, Inc., USA. The BRDF and BRF (Bidirectional Reflectance Factor) values of every color patch corresponding to the reference area are calculated from the irradiance and radiance, and the color tristimulus values of the 24 ColorChecker patches are thus reconstructed. The results reconstructed by the BRDF method are compared with values calculated from the reflectance measured by the PR-715; finally, the chromaticity coordinates in color space and the color differences between the two are analyzed. The experiments show that the average color difference and sample standard deviation between the method proposed in this paper and the traditional reflectance-based reconstruction method are 2.567 and 1.3049, respectively. Theoretical and experimental analysis indicates that the BRDF-based method describes the color information of an object more completely than the reflectance in hemispherical space, and that the method proposed in this paper is effective and feasible for chromaticity reproduction research.

  4. Imaging of brain function based on the analysis of functional ...

    African Journals Online (AJOL)

    Objective: This Study observed the relevant brain areas activated by acupuncture at the Taichong acupoint (LR3) and analyzed the functional connectivity among brain areas using resting state functional magnetic resonance imaging (fMRI) to explore the acupoint specificity of the Taichong acupoint. Methods: A total of 45 ...

  5. A logic circuit for solving linear function by digital method

    International Nuclear Information System (INIS)

    Ma Yonghe

    1986-01-01

    A mathematical method for determining the linear relation of a physical quantity with radiation intensity is described. A logic circuit has been designed for solving linear functions by a digital method. Some applications and the circuit function are discussed.

  6. A novel method to calibrate DOI function of a PET detector with a dual-ended-scintillator readout

    International Nuclear Information System (INIS)

    Shao Yiping; Yao Rutao; Ma Tianyu

    2008-01-01

    The detection of depth-of-interaction (DOI) is a critical detector capability to improve the PET spatial resolution uniformity across the field-of-view and will significantly enhance, in particular, small-bore system performance for brain, breast, and small animal imaging. One promising technique of DOI detection is to use a dual-ended-scintillator readout, which uses two photon sensors to detect scintillation light from both ends of a scintillator array and estimates DOI based on the ratio of signals (similar to Anger logic). This approach needs a careful DOI function calibration to establish an accurate relationship between DOI and signal ratio, and recalibration if the detection condition shifts due to drift of sensor gain, bias variations, degraded optical coupling, etc. However, the current calibration method, which uses coincident events to locate interaction positions inside a single scintillator crystal, has severe drawbacks, such as a complicated setup, long and repetitive measurements, and being prone to errors from various possible misalignments among the source and detector components. This method is also not practically suitable for calibrating multiple DOI functions of a crystal array. To solve these problems, a new method has been developed that requires only a uniform flood source to irradiate a crystal array, without the need to locate the interaction positions, and calculates DOI functions based solely on the uniform probability distribution of interactions over DOI positions, without knowledge or assumptions of detector responses. Simulations and experiments have been performed to validate the new method, and the results show that the new method, with a simple setup and a single measurement, can provide consistent and accurate DOI functions for an entire array of multiple scintillator crystals. This will enable accurate, simple, and practical DOI function calibration for PET detectors based on the dual-ended-scintillator readout design.
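The core of the flood-source idea is that, because interactions are uniformly distributed in depth, the empirical CDF of the measured signal ratio directly yields the DOI function without locating individual interactions. A minimal sketch, with an assumed sigmoidal detector response (crystal length, response shape, and noise level are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

crystal_length = 20.0  # mm (assumed)
doi_true = rng.uniform(0.0, crystal_length, 100_000)  # uniform flood irradiation

# Assumed monotonic detector response: the signal ratio rises with depth
# (a sigmoid standing in for exponential light sharing), plus readout noise.
ratio = 1.0 / (1.0 + np.exp(-(doi_true - 10.0) / 4.0))
ratio = ratio + rng.normal(0.0, 0.01, ratio.size)

# Calibration: sort the ratios; because the DOI distribution is uniform,
# the k-th smallest ratio corresponds to DOI = (k / N) * crystal_length.
r_sorted = np.sort(ratio)

def ratio_to_doi(r):
    frac = np.searchsorted(r_sorted, r) / r_sorted.size
    return frac * crystal_length

doi_est = ratio_to_doi(ratio)  # recovered DOI for every event
```

No knowledge of the response shape was used: the rank-based mapping inverts any monotonic ratio-versus-depth relationship, which is what lets a single flood measurement calibrate the whole array.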

  7. HMM-Based Gene Annotation Methods

    Energy Technology Data Exchange (ETDEWEB)

    Haussler, David; Hughey, Richard; Karplus, Keven

    1999-09-20

    Development of new statistical methods and computational tools to identify genes in human genomic DNA, and to provide clues to their functions by identifying features such as transcription factor binding sites, tissue-specific expression and splicing patterns, and remote homologies at the protein level with genes of known function.

  8. Approximation methods for the partition functions of anharmonic systems

    International Nuclear Information System (INIS)

    Lew, P.; Ishida, T.

    1979-07-01

    The analytical approximations for the classical, quantum mechanical, and reduced partition functions of a diatomic molecule oscillating internally under the influence of the Morse potential have been derived, and their convergence has been tested numerically. This successful analytical method is used in the treatment of anharmonic systems. Using the Schwinger perturbation method in the framework of the second quantization formalism, the reduced partition function of polyatomic systems can be put into an expression which consists separately of contributions from the harmonic terms, Morse potential correction terms, and interaction terms due to the off-diagonal potential coefficients. The calculated results of the reduced partition function from the approximation method on 2-D and 3-D model systems agree well with exact numerical calculations
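For the diatomic case, the quantum partition function of a Morse oscillator can be summed directly over its bound levels and compared with the harmonic closed form. The parameter values below are illustrative (energies in units of kT), not taken from the paper:

```python
import numpy as np

# Morse oscillator levels with anharmonicity constant x_e:
#   E_n = hw*(n + 1/2) - hw*x_e*(n + 1/2)^2,   n = 0 .. n_max
hw = 2.0   # harmonic spacing / kT (assumed)
xe = 0.02  # anharmonicity (assumed)
n_max = int(1.0 / (2.0 * xe) - 0.5)  # highest bound level of the Morse well

n = np.arange(n_max + 1)
E_morse = hw * (n + 0.5) - hw * xe * (n + 0.5) ** 2
q_morse = np.sum(np.exp(-E_morse))  # direct sum over bound states

# Harmonic partition function has the closed form q = e^{-hw/2} / (1 - e^{-hw}).
q_harm = np.exp(-hw / 2.0) / (1.0 - np.exp(-hw))
```

Anharmonicity lowers every level relative to the harmonic ladder, so the Morse sum exceeds the harmonic value; approximation schemes like the one in the abstract are judged by how closely they reproduce this direct sum.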

  9. Zero Field Splitting of the chalcogen diatomics using relativistic correlated wave-function methods

    DEFF Research Database (Denmark)

    Rota, Jean-Baptiste; Knecht, Stefan; Fleig, Timo

    2011-01-01

    The spectrum arising from the (π*)2 configuration of the chalcogen dimers, namely the X21, a2 and b0+ states, is calculated using Wave-Function Theory (WFT) based methods. Two-component (2c) and four-component (4c) MultiReference Configuration Interaction (MRCI) and Fock-Space Coupled Cluster (FSCC) methods are used as well as two-step methods Spin-Orbit Complete Active Space Perturbation Theory at 2nd order (SO-CASPT2) and Spin-Orbit Difference Dedicated Configuration Interaction (SODDCI). The energy of the X21 state corresponds to the Zero-Field Splitting (ZFS) of the ground state spin triplet...

  10. Development of one-energy group, two-dimensional, frequency dependent detector adjoint function based on the nodal method

    International Nuclear Information System (INIS)

    Khericha, Soli T.

    2000-01-01

    A one-energy-group, two-dimensional computer code was developed to calculate the response of a detector to a vibrating absorber in a reactor core. A concept of local/global components, based on the frequency-dependent detector adjoint function, and a nodalization technique were utilized. The frequency-dependent detector adjoint functions, represented by complex equations, were expanded into real and imaginary parts. In the nodalization technique, the flux is expanded into polynomials about the center point of each node. The purpose of this research was to investigate the applicability of the polynomial nodal model technique to the calculation of the real and imaginary parts of the detector adjoint function. Using the computer code developed for the nodal model technique, the magnitude and phase angle of the one-energy-group frequency-dependent detector adjoint function were calculated for a detector located in the center of a 200x200 cm homogeneous reactor. The real part of the detector adjoint function was compared with the results obtained from the EXTERMINATOR computer code as well as with an analytical solution based on a double sine series expansion using the classical Green's function method. The values were found to be less than 1% greater at 20 cm away from the source region and about 3% greater closer to the source, compared to the values obtained from the analytical solution and the EXTERMINATOR code. It is concluded that the nodal model technique can be used to calculate the detector adjoint function and the phase angle. The currents at the node interfaces matched within 1% of the average

  11. Accident Analysis and Barrier Function (AEB) Method. Manual for Incident Analysis

    International Nuclear Information System (INIS)

    Svenson, Ola

    2000-02-01

    The Accident Analysis and Barrier Function (AEB) Method models an accident or incident as a series of interactions between human and technical systems. In the sequence of human and technical errors leading to an accident there is, in principle, a possibility to arrest the development between each two successive errors. This can be done by a barrier function which, for example, can stop an operator from making an error. A barrier function can be performed by one or several barrier function systems. To illustrate, a mechanical system, a computer system or another operator can each perform a given barrier function to stop an operator from making an error. The barrier function analysis covers suggested improvements, the effectiveness of the improvements, the costs of implementation, the probability of implementation, the cost of maintaining the barrier function, the probability that maintenance will be kept up to standards, and the generalizability of the suggested improvement. The AEB method is similar to the US method called HPES, but differs from it in several ways. For example, the AEB method puts more emphasis on technical errors than HPES. In contrast to HPES, which describes a series of events, the AEB method models only errors. This gives a more focused analysis, making it well suited for checking other HPES-type accident analyses. However, the AEB method is a generic, stand-alone method that has also been applied in fields other than nuclear power, such as traffic accident analysis

  12. Accident Analysis and Barrier Function (AEB) Method. Manual for Incident Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Svenson, Ola [Stockholm Univ. (Sweden). Dept. of Psychology]

    2000-02-01

    The Accident Analysis and Barrier Function (AEB) Method models an accident or incident as a series of interactions between human and technical systems. In the sequence of human and technical errors leading to an accident there is, in principle, a possibility to arrest the development between each two successive errors. This can be done by a barrier function which, for example, can stop an operator from making an error. A barrier function can be performed by one or several barrier function systems. To illustrate, a mechanical system, a computer system or another operator can each perform a given barrier function to stop an operator from making an error. The barrier function analysis covers suggested improvements, the effectiveness of the improvements, the costs of implementation, the probability of implementation, the cost of maintaining the barrier function, the probability that maintenance will be kept up to standards, and the generalizability of the suggested improvement. The AEB method is similar to the US method called HPES, but differs from it in several ways. For example, the AEB method puts more emphasis on technical errors than HPES. In contrast to HPES, which describes a series of events, the AEB method models only errors. This gives a more focused analysis, making it well suited for checking other HPES-type accident analyses. However, the AEB method is a generic, stand-alone method that has also been applied in fields other than nuclear power, such as traffic accident analysis.

  13. Developing a Clustering-Based Empirical Bayes Analysis Method for Hotspot Identification

    Directory of Open Access Journals (Sweden)

    Yajie Zou

    2017-01-01

    Full Text Available Hotspot identification (HSID) is a critical part of network-wide safety evaluations. Typical methods for ranking sites are often rooted in using the Empirical Bayes (EB) method to estimate safety from both observed crash records and predicted crash frequency based on similar sites. The performance of the EB method is highly related to the selection of a reference group of sites (i.e., roadway segments or intersections) similar to the target site, from which the safety performance functions (SPFs) used to predict crash frequency will be developed. As crash data often contain underlying heterogeneity that, in essence, can make them appear to be generated from distinct subpopulations, methods are needed to select similar sites in a principled manner. To overcome this possible heterogeneity problem, EB-based HSID methods that use common clustering methodologies (e.g., mixture models, K-means, and hierarchical clustering) to select “similar” sites for building SPFs are developed. Performance of the clustering-based EB methods is then compared using real crash data. Here, HSID results, when computed on Texas undivided rural highway crash data, suggest that all three clustering-based EB analysis methods are preferred over the conventional statistical methods. Thus, properly classifying the road segments for heterogeneous crash data can further improve HSID accuracy.
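The clustering-plus-EB pipeline can be sketched as follows. The site feature, cluster count, and overdispersion parameter are all assumed for illustration; a real SPF would be a negative binomial regression rather than a cluster mean:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic sites from two latent subpopulations: low- and high-volume roads.
aadt = np.concatenate([rng.normal(2000, 200, 50), rng.normal(9000, 500, 50)])
crashes = np.concatenate([rng.poisson(3, 50), rng.poisson(12, 50)]).astype(float)

# Naive 1-D k-means (k = 2) on the site feature to form reference groups.
centers = np.array([aadt.min(), aadt.max()])
for _ in range(20):
    labels = np.argmin(np.abs(aadt[:, None] - centers[None, :]), axis=1)
    centers = np.array([aadt[labels == k].mean() for k in range(2)])

# Per-cluster "SPF" prediction, then the EB shrinkage estimate:
#   EB = w * predicted + (1 - w) * observed,  w = 1 / (1 + predicted / phi)
phi = 5.0  # assumed inverse-overdispersion parameter
predicted = np.array([crashes[labels == k].mean() for k in range(2)])[labels]
w = 1.0 / (1.0 + predicted / phi)
eb = w * predicted + (1.0 - w) * crashes

# Rank sites by EB estimate to flag candidate hotspots.
hotspots = np.argsort(eb)[::-1][:10]
```

Clustering first means each site's observed count is shrunk toward a prediction built only from genuinely similar sites, which is the heterogeneity fix the abstract describes.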

  14. Cross-organism learning method to discover new gene functionalities.

    Science.gov (United States)

    Domeniconi, Giacomo; Masseroli, Marco; Moro, Gianluca; Pinoli, Pietro

    2016-04-01

    Knowledge of gene and protein functions is paramount for the understanding of physiological and pathological biological processes, as well as for the development of new drugs and therapies. Analyses for biomedical knowledge discovery greatly benefit from the availability of gene and protein functional feature descriptions expressed through controlled terminologies and ontologies, i.e., of gene and protein biomedical controlled annotations. In recent years, several databases of such annotations have become available; yet, these valuable annotations are incomplete, include errors, and only some of them represent highly reliable human-curated information. Computational techniques able to reliably predict new gene or protein annotations with an associated likelihood value are thus paramount. Here, we propose a novel cross-organism learning approach to reliably predict new functionalities for the genes of an organism based on the known controlled annotations of the genes of another, evolutionarily related and better studied, organism. We leverage a new representation of the annotation discovery problem and a random perturbation of the available controlled annotations to allow the application of supervised algorithms to predict unknown gene annotations with good accuracy. Taking advantage of the numerous gene annotations available for a well-studied organism, our cross-organism learning method creates and trains better prediction models, which can then be applied to predict new gene annotations of a target organism. We tested and compared our method with the equivalent single-organism approach on different gene annotation datasets of five evolutionarily related organisms (Homo sapiens, Mus musculus, Bos taurus, Gallus gallus and Dictyostelium discoideum). Results show both the usefulness of the perturbation method of available annotations for better prediction model training and a great improvement of the cross-organism models with respect to the single-organism ones.

  15. Technical Note: Impact of the geometry dependence of the ion chamber detector response function on a convolution-based method to address the volume averaging effect

    Energy Technology Data Exchange (ETDEWEB)

    Barraclough, Brendan; Lebron, Sharon [Department of Radiation Oncology, University of Florida, Gainesville, Florida 32608 and J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, Florida 32611 (United States); Li, Jonathan G.; Fan, Qiyong; Liu, Chihray; Yan, Guanghua, E-mail: yangua@shands.ufl.edu [Department of Radiation Oncology, University of Florida, Gainesville, Florida 32608 (United States)

    2016-05-15

    Purpose: To investigate the geometry dependence of the detector response function (DRF) of three commonly used scanning ionization chambers and its impact on a convolution-based method to address the volume averaging effect (VAE). Methods: A convolution-based approach has been proposed recently to address the ionization chamber VAE. It simulates the VAE in the treatment planning system (TPS) by iteratively convolving the calculated beam profiles with the DRF while optimizing the beam model. Since the convolved and the measured profiles are subject to the same VAE, the calculated profiles match the implicit “real” ones when the optimization converges. Three DRFs (Gaussian, Lorentzian, and parabolic function) were used for three ionization chambers (CC04, CC13, and SNC125c) in this study. Geometry-dependent/independent DRFs were obtained by minimizing the difference between the ionization chamber-measured profiles and the diode-measured profiles convolved with the DRFs. These DRFs were used to obtain eighteen beam models for a commercial TPS. Accuracy of the beam models was evaluated by assessing the 20%–80% penumbra width difference (PWD) between the computed and diode-measured beam profiles. Results: The convolution-based approach was found to be effective for all three ionization chambers, with significant improvement for all beam models. Up to 17% geometry dependence of the three DRFs was observed for the studied ionization chambers. With geometry-dependent DRFs, the PWD was within 0.80 mm for the parabolic function and CC04 combination and within 0.50 mm for other combinations; with geometry-independent DRFs, the PWD was within 1.00 mm for all cases. When using the Gaussian function as the DRF, accounting for geometry dependence led to marginal improvement (PWD < 0.20 mm) for CC04; the improvement ranged from 0.38 to 0.65 mm for CC13; for SNC125c, the improvement was slightly above 0.50 mm. Conclusions: Although all three DRFs were found adequate to
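The central convolution step can be illustrated with a Gaussian DRF applied to an idealized step profile; the convolved result shows the broadened 20%-80% penumbra a finite-size chamber would actually measure. The kernel width and field size below are assumed, not taken from the paper:

```python
import numpy as np

x = np.arange(-50.0, 50.0, 0.1)                   # off-axis position (mm)
profile = np.where(np.abs(x) <= 20.0, 1.0, 0.0)   # idealized 40 mm field edge

sigma = 2.0                                       # assumed Gaussian DRF width (mm)
kernel = np.exp(-0.5 * (x / sigma) ** 2)
kernel /= kernel.sum()                            # unit-area DRF

# Simulated chamber measurement: ideal profile convolved with the DRF.
measured = np.convolve(profile, kernel, mode="same")

# 20%-80% penumbra width on the rising (x < 0) edge of the convolved profile.
rising = measured[x < 0]                          # monotone non-decreasing here
xr = x[x < 0]
x20 = xr[np.searchsorted(rising, 0.2)]
x80 = xr[np.searchsorted(rising, 0.8)]
pw = x80 - x20
```

For a Gaussian DRF this width is about 1.68*sigma, so fitting the measured penumbra fixes the DRF width, which is exactly the quantity whose chamber-geometry dependence the study quantifies.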

  16. DNA-based stable isotope probing: a link between community structure and function

    International Nuclear Information System (INIS)

    Uhlik, Ondrej; Jecna, Katerina; Leigh, Mary Beth; Mackova, Martina; Macek, Tomas

    2009-01-01

    DNA-based molecular techniques permit the comprehensive determination of microbial diversity but generally do not reveal the relationship between the identity and the function of microorganisms. The first direct molecular technique to enable the linkage of phylogeny with function is DNA-based stable isotope probing (DNA-SIP). Applying this method first helped describe the utilization of simple compounds, such as methane, methanol or glucose, and it has since been used to detect microbial communities active in the utilization of a wide variety of compounds, including various xenobiotics. The principle of the method lies in providing a 13C-labeled substrate to a microbial community and subsequently analyzing the 13C-DNA isolated from the community. Isopycnic centrifugation permits separating the 13C-labeled DNA of organisms that utilized the substrate from the 12C-DNA of the inactive majority. As the whole metagenome of the active populations is isolated, its follow-up analysis provides successful taxonomic identification as well as the potential for functional gene analyses. Because of its power, DNA-SIP has become one of the leading techniques of microbial ecology research. On the other hand, it is a labor-intensive method that requires careful attention to detail during each experimental step in order to avoid misinterpretation of results.

  17. On the trial functions in nested element method

    International Nuclear Information System (INIS)

    Altiparmakov, D.V.

    1985-01-01

    The R-function method is applied to the multidimensional steady-state neutron diffusion equation. Using a variational principle the nested element approximation is formulated. Trial functions taking into account the geometrical shape of material regions are constructed. The influence of both the surrounding regions and the corner singularities at the external boundary is incorporated into the approximate solution. Benchmark calculations show that such an approximation can yield satisfactory results. Moreover, in the case of complex geometry, the presented approach would result in a significant reduction of the number of unknowns compared to other methods

  18. MCDM based evaluation and ranking of commercial off-the-shelf using fuzzy based matrix method

    Directory of Open Access Journals (Sweden)

    Rakesh Garg

    2017-04-01

    Full Text Available In today’s scenario, software has become an essential component in all kinds of systems. The size and complexity of software increase with a corresponding increase in functionality, which leads to the development of modular software systems. Software developers emphasize the concept of component-based software engineering (CBSE) for the development of modular software systems. The CBSE concept consists of dividing the software into a number of modules; selecting Commercial Off-the-Shelf (COTS) components for each module; and finally integrating the modules to develop the final software system. The selection of COTS components for any module plays a vital role in software development. To address this selection problem, a framework for ranking and selection of various COTS components for any software system, based on expert opinion elicitation and a fuzzy-based matrix methodology, is proposed in this research paper. The selection problem is modeled as a multi-criteria decision making (MCDM) problem. The evaluation criteria are identified through an extensive literature study, and the COTS components are ranked on these selected evaluation criteria using the proposed methods according to the value of a permanent function of their criteria matrices. The methodology is explained through an example and is validated by comparing it with an existing method.
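Ranking by the permanent of a criteria matrix can be sketched directly. The matrices below are hypothetical normalized criteria matrices (diagonal entries as criteria scores, off-diagonal entries as relative-importance terms), not values from the paper:

```python
import itertools
import numpy as np

def permanent(a):
    """Permanent of a square matrix by direct expansion (fine for small n)."""
    n = a.shape[0]
    return sum(
        np.prod([a[i, p[i]] for i in range(n)])
        for p in itertools.permutations(range(n))
    )

# Hypothetical 3-criteria matrices for three COTS candidates.
candidates = {
    "COTS-A": np.array([[0.8, 0.3, 0.2], [0.3, 0.7, 0.4], [0.2, 0.4, 0.9]]),
    "COTS-B": np.array([[0.6, 0.3, 0.2], [0.3, 0.5, 0.4], [0.2, 0.4, 0.7]]),
    "COTS-C": np.array([[0.9, 0.3, 0.2], [0.3, 0.8, 0.4], [0.2, 0.4, 0.9]]),
}

# Rank candidates by the value of the permanent function of their matrices.
ranking = sorted(candidates, key=lambda k: permanent(candidates[k]), reverse=True)
```

Unlike the determinant, the permanent sums all signed-positive permutation products, so every criterion and interaction term contributes monotonically to the score, which is why matrix-permanent methods are used for MCDM ranking.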

  19. Simplified Method for Predicting a Functional Class of Proteins in Transcription Factor Complexes

    KAUST Repository

    Piatek, Marek J.

    2013-07-12

    Background: Initiation of transcription is essential for most of the cellular responses to environmental conditions and for cell and tissue specificity. This process is regulated through numerous proteins, their ligands and mutual interactions, as well as interactions with DNA. The key such regulatory proteins are transcription factors (TFs) and transcription co-factors (TcoFs). TcoFs are important since they modulate the transcription initiation process through interaction with TFs. In eukaryotes, transcription requires that TFs form different protein complexes with various nuclear proteins. To better understand transcription regulation, it is important to know the functional class of proteins interacting with TFs during transcription initiation. Such information is not fully available, since not all proteins that act as TFs or TcoFs are yet annotated as such, due to generally partial functional annotation of proteins. In this study we have developed a method to predict, using only sequence composition of the interacting proteins, the functional class of human TF binding partners to be (i) TF, (ii) TcoF, or (iii) other nuclear protein. This allows for complementing the annotation of the currently known pool of nuclear proteins. Since only the knowledge of protein sequences is required in addition to protein interaction, the method should be easily applicable to many species. Results: Based on experimentally validated interactions between human TFs with different TFs, TcoFs and other nuclear proteins, our two classification systems (implemented as a web-based application) achieve high accuracies in distinguishing TFs and TcoFs from other nuclear proteins, and TFs from TcoFs respectively. Conclusion: As demonstrated, given the fact that two proteins are capable of forming direct physical interactions and using only information about their sequence composition, we have developed a completely new method for predicting a functional class of TF interacting protein partners

  20. Provisional-Ideal-Point-Based Multi-objective Optimization Method for Drone Delivery Problem

    Science.gov (United States)

    Omagari, Hiroki; Higashino, Shin-Ichiro

    2018-04-01

    In this paper, we propose a new evolutionary multi-objective optimization method for solving drone delivery problems (DDP), which can be formulated as constrained multi-objective optimization problems. In our previous research, we proposed the "aspiration-point-based method" to solve multi-objective optimization problems. However, that method needs to calculate the optimal value of each objective function in advance. Moreover, it does not consider constraint conditions other than the objective functions. Therefore, it cannot be applied to the DDP, which has many constraint conditions. To solve these issues, we propose the "provisional-ideal-point-based method." The proposed method defines a "penalty value" to search for feasible solutions. It also defines a new reference solution, named the "provisional ideal point," to search for the solution preferred by a decision maker. In this way, we can eliminate the preliminary calculations and the limited application scope. The results on benchmark test problems show that the proposed method can generate the preferred solution efficiently. The usefulness of the proposed method is also demonstrated by applying it to the DDP. As a result, a delivery path combining one drone and one truck drastically reduces the traveling distance and the delivery time compared with using only one truck.
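The penalty-value idea, attaching a constraint-violation penalty to each objective so that infeasible candidates are dominated by feasible ones, can be sketched as follows. The toy objectives and constraint are illustrative, not the paper's DDP formulation:

```python
import numpy as np

def evaluate(x, penalty_weight=100.0):
    """Return the penalized objective vector for a candidate solution x."""
    f1 = np.sum(x ** 2)               # stand-in for, e.g., total travel distance
    f2 = np.sum(np.abs(x - 1.0))      # stand-in for, e.g., delivery time
    g = max(0.0, np.sum(x) - 3.0)     # assumed constraint: sum(x) <= 3
    penalty = penalty_weight * g      # the "penalty value" added when infeasible
    return np.array([f1 + penalty, f2 + penalty])

feasible = evaluate(np.array([0.5, 1.0, 1.0]))    # satisfies the constraint
infeasible = evaluate(np.array([2.0, 2.0, 2.0]))  # violates it, gets penalized
```

An evolutionary search ranking candidates by these penalized vectors is steered toward the feasible region first, after which selection toward a reference point (the provisional ideal point in the paper) picks the preferred trade-off.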

  1. Function combined method for design innovation of children's bike

    Science.gov (United States)

    Wu, Xiaoli; Qiu, Tingting; Chen, Huijuan

    2013-03-01

    As children mature, bike products for children develop at the same time, and market conditions are frequently updated. Certain problems occur when using such bikes, including overlapping product cycles, repeated functions, and short life cycles, which go against the principles of energy conservation, environmental protection, and the intensive design concept. In this paper, a rational multi-function design method based on functional superposition, transformation, and technical implementation is proposed. An organic combination of a frog-style scooter and a children's tricycle is developed using the multi-function method. From the ergonomic perspective, the paper elaborates on the body sizes of children aged 5 to 12 and effectively extracts data for a multi-function children's bike that can be used for both gliding and riding. By inverting the body, parts can be interchanged between the handles and the pedals of the bike. Finally, the paper provides a detailed analysis of the components and structural design, body material, and processing technology of the bike. This study of industrial product innovation design provides an effective design method that solves the identified bicycle problems, extends product function, improves the product's market situation, and enhances energy saving while implementing intensive product development effectively.

  2. An improved method for estimating the frequency correlation function

    KAUST Repository

    Chelli, Ali; Pä tzold, Matthias

    2012-01-01

    For time-invariant frequency-selective channels, the transfer function is a superposition of waves having different propagation delays and path gains. In order to estimate the frequency correlation function (FCF) of such channels, the frequency averaging technique can be utilized. The obtained FCF can be expressed as a sum of auto-terms (ATs) and cross-terms (CTs). The ATs are caused by the autocorrelation of individual path components. The CTs are due to the cross-correlation of different path components. These CTs have no physical meaning and lead to an estimation error. We propose a new estimation method aiming to improve the estimation accuracy of the FCF of a band-limited transfer function. The basic idea behind the proposed method is to introduce a kernel function aiming to reduce the CT effect, while preserving the ATs. In this way, we can improve the estimation of the FCF. The performance of the proposed method and the frequency averaging technique is analyzed using a synthetically generated transfer function. We show that the proposed method is more accurate than the frequency averaging technique. The accurate estimation of the FCF is crucial for the system design. In fact, we can determine the coherence bandwidth from the FCF. The exact knowledge of the coherence bandwidth is beneficial in both the design as well as optimization of frequency interleaving and pilot arrangement schemes. © 2012 IEEE.
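The frequency-averaging estimator and a kernel (window) modification can be sketched for a synthetic multipath transfer function. The path gains, delays, and the choice of a Hann taper as the kernel are illustrative assumptions, not the paper's exact kernel:

```python
import numpy as np

# Synthetic band-limited transfer function: a superposition of three paths.
gains = np.array([1.0, 0.7, 0.4])
delays = np.array([0.0, 1e-6, 3e-6])           # path delays (s)
f = np.arange(0, 20e6, 20e3)                   # frequency grid over the band (Hz)
H = np.sum(gains[None, :] * np.exp(-2j * np.pi * f[:, None] * delays[None, :]),
           axis=1)

def fcf_estimate(h, taper=None):
    """Frequency averaging: mean of h(f) * conj(h(f + dF)) for each lag."""
    if taper is not None:
        h = h * taper                          # kernel applied before averaging
    n = h.size
    return np.array([np.mean(h[: n - k] * np.conj(h[k:])) for k in range(n // 2)])

fcf_plain = fcf_estimate(H)                          # plain frequency averaging
fcf_kernel = fcf_estimate(H, taper=np.hanning(H.size))  # windowed variant

# Exact FCF of this channel: sum_l |a_l|^2 exp(-j 2 pi dF tau_l) (ATs only).
dF = np.arange(H.size // 2) * 20e3
fcf_exact = np.sum((gains ** 2)[None, :]
                   * np.exp(-2j * np.pi * dF[:, None] * delays[None, :]), axis=1)
```

Comparing either estimate against `fcf_exact` isolates the CT contribution; the coherence bandwidth is then read off as the lag where the normalized FCF magnitude first drops below a chosen threshold.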

  3. An improved method for estimating the frequency correlation function

    KAUST Repository

    Chelli, Ali

    2012-04-01

    For time-invariant frequency-selective channels, the transfer function is a superposition of waves having different propagation delays and path gains. In order to estimate the frequency correlation function (FCF) of such channels, the frequency averaging technique can be utilized. The obtained FCF can be expressed as a sum of auto-terms (ATs) and cross-terms (CTs). The ATs are caused by the autocorrelation of individual path components. The CTs are due to the cross-correlation of different path components. These CTs have no physical meaning and lead to an estimation error. We propose a new estimation method aiming to improve the estimation accuracy of the FCF of a band-limited transfer function. The basic idea behind the proposed method is to introduce a kernel function aiming to reduce the CT effect, while preserving the ATs. In this way, we can improve the estimation of the FCF. The performance of the proposed method and the frequency averaging technique is analyzed using a synthetically generated transfer function. We show that the proposed method is more accurate than the frequency averaging technique. The accurate estimation of the FCF is crucial for the system design. In fact, we can determine the coherence bandwidth from the FCF. The exact knowledge of the coherence bandwidth is beneficial in both the design as well as optimization of frequency interleaving and pilot arrangement schemes. © 2012 IEEE.

  4. Frequency response function (FRF) based updating of a laser spot welded structure

    Science.gov (United States)

    Zin, M. S. Mohd; Rani, M. N. Abdul; Yunus, M. A.; Sani, M. S. M.; Wan Iskandar Mirza, W. I. I.; Mat Isa, A. A.

    2018-04-01

    The objective of this paper is to present frequency response function (FRF) based updating as a method for matching the finite element (FE) model of a laser spot welded structure with a physical test structure. The FE model of the welded structure was developed using CQUAD4 and CWELD element connectors, and NASTRAN was used to calculate the natural frequencies, mode shapes and FRFs. Minimization of the discrepancies between the finite element and experimental FRFs was carried out using the numerical optimization capability of NASTRAN SOL 200. The experimental work was performed under free-free boundary conditions using LMS SCADAS. A vast improvement in the finite element FRF was achieved using the frequency response function (FRF) based updating with the two different proposed objective functions.
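FRF-based updating compares an analytical FRF of the FE model against the measured one and adjusts model parameters to close the gap. A minimal sketch of computing a receptance FRF for a toy 2-DOF system, H(w) = (K + jwC - w^2 M)^{-1}; all values are illustrative, not from the test structure:

```python
import numpy as np

M = np.diag([1.0, 1.0])                                 # mass matrix (kg)
K = np.array([[2000.0, -1000.0], [-1000.0, 1000.0]])    # stiffness (N/m)
C = 0.001 * K                                           # assumed proportional damping

freqs = np.linspace(1.0, 12.0, 200)                     # Hz
H11 = []
for fr in freqs:
    w = 2.0 * np.pi * fr
    H = np.linalg.inv(K + 1j * w * C - w ** 2 * M)      # receptance matrix
    H11.append(abs(H[0, 0]))                            # driving-point FRF
H11 = np.array(H11)

# Peaks of |H11| sit near the undamped natural frequencies of (K, M).
eigvals = np.linalg.eigvals(np.linalg.inv(M) @ K)
nat_freqs = np.sqrt(np.sort(eigvals.real)) / (2.0 * np.pi)
```

An updating loop would treat entries of K (e.g., the CWELD stiffness) as design variables and minimize the difference between this analytical H11 and the measured FRF, which is what SOL 200 automates for the full model.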

  5. Inferring biological functions of guanylyl cyclases with computational methods

    KAUST Repository

    Alquraishi, May Majed; Meier, Stuart Kurt

    2013-01-01

    A number of studies have shown that functionally related genes are often co-expressed and that computational based co-expression analysis can be used to accurately identify functional relationships between genes and by inference, their encoded proteins. Here we describe how a computational based co-expression analysis can be used to link the function of a specific gene of interest to a defined cellular response. Using a worked example we demonstrate how this methodology is used to link the function of the Arabidopsis Wall-Associated Kinase-Like 10 gene, which encodes a functional guanylyl cyclase, to host responses to pathogens. © Springer Science+Business Media New York 2013.

  7. A hybrid method for the parallel computation of Green's functions

    International Nuclear Information System (INIS)

    Petersen, Dan Erik; Li Song; Stokbro, Kurt; Sorensen, Hans Henrik B.; Hansen, Per Christian; Skelboe, Stig; Darve, Eric

    2009-01-01

    Quantum transport models for nanodevices using the non-equilibrium Green's function method require the repeated calculation of the block tridiagonal part of the Green's and lesser Green's function matrices. This problem is related to the calculation of the inverse of a sparse matrix. Because of the large number of times this calculation needs to be performed, this is computationally very expensive even on supercomputers. The classical approach is based on recurrence formulas which cannot be efficiently parallelized. This practically prevents the solution of large problems with hundreds of thousands of atoms. We propose new recurrences for a general class of sparse matrices to calculate Green's and lesser Green's function matrices which extend formulas derived by Takahashi and others. We show that these recurrences may lead to a dramatically reduced computational cost because they only require computing a small number of entries of the inverse matrix. Then, we propose a parallelization strategy for block tridiagonal matrices which involves a combination of Schur complement calculations and cyclic reduction. It achieves good scalability even on problems of modest size.
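    The forward/backward recurrences behind such methods can be illustrated in the scalar case (single-entry blocks) of a symmetric tridiagonal matrix; this sketch computes only the diagonal of the inverse, as in non-equilibrium Green's function codes, and checks it against a dense inverse. All values are synthetic:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
a = rng.uniform(2.0, 3.0, n)       # diagonal (made dominant for stability)
b = rng.uniform(0.1, 0.5, n - 1)   # sub/super-diagonal (symmetric matrix)

A = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)

# forward sweep: "left-connected" Green's functions
gL = np.empty(n)
gL[0] = 1.0 / a[0]
for i in range(1, n):
    gL[i] = 1.0 / (a[i] - b[i - 1] ** 2 * gL[i - 1])

# backward sweep: diagonal entries of the full inverse
G = np.empty(n)
G[-1] = gL[-1]
for i in range(n - 2, -1, -1):
    G[i] = gL[i] + gL[i] * b[i] ** 2 * G[i + 1] * gL[i]
```

    Only O(n) entries are touched, which is the point of such recurrences compared to forming the dense inverse.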

  8. Stepwise Analysis of Differential Item Functioning Based on Multiple-Group Partial Credit Model.

    Science.gov (United States)

    Muraki, Eiji

    1999-01-01

    Extended an Item Response Theory (IRT) method for detection of differential item functioning to the partial credit model and applied the method to simulated data using a stepwise procedure. Then applied the stepwise DIF analysis based on the multiple-group partial credit model to writing trend data from the National Assessment of Educational…

  9. Functional properties of edible agar-based and starch-based films for food quality preservation.

    Science.gov (United States)

    Phan, The D; Debeaufort, F; Luu, D; Voilley, A

    2005-02-23

    Edible films made of agar (AG), cassava starch (CAS), normal rice starch (NRS), and waxy (glutinous) rice starch (WRS) were elaborated and tested for a potential use as edible packaging or coating. Their water vapor permeabilities (WVP) were comparable with those of most of the polysaccharide-based films and with some protein-based films. Depending on the environmental moisture pressure, the WVP of the films varies and remains constant when the relative humidity (RH) is >84%. Equilibrium sorption isotherms of these films have been measured; the Guggenheim-Anderson-de Boer (GAB) model was used to describe the sorption isotherm and contributed to a better knowledge of hydration properties. Surface hydrophobicity and wettability of these films were also investigated using the sessile drop contact angle method. The results obtained suggested the migration of the lipid fraction toward evaporation surface during film drying. Among these polysaccharide-based films, AG-based film and CAS-based film displayed more interesting mechanical properties: they are transparent, clear, homogeneous, flexible, and easily handled. NRS- and WRS-based films were relatively brittle and have a low tension resistance. Microstructure of film cross section was observed by environmental scanning electron microscopy to better understand the effect of the structure on the functional properties. The results suggest that AG-based film and CAS-based films, which show better functional properties, are promising systems to be used as food packaging or coating instead of NRS- and WRS-based films.
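    A hedged sketch of the GAB isotherm used in the paper; the parameter values below are illustrative, not fitted to the films studied:

```python
import numpy as np

def gab(aw, wm, C, K):
    """Guggenheim-Anderson-de Boer sorption isotherm: equilibrium
    moisture content as a function of water activity aw."""
    return wm * C * K * aw / ((1 - K * aw) * (1 - K * aw + C * K * aw))

aw = np.linspace(0.1, 0.9, 9)
w = gab(aw, wm=0.08, C=10.0, K=0.85)  # illustrative monolayer/energy parameters
```

    For C > 1 and K·aw < 1 the curve is monotonically increasing in water activity, matching the typical sigmoid shape of food sorption isotherms.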

  10. A dental implant-based registration method for measuring mandibular kinematics using cone beam computed tomography-based fluoroscopy.

    Science.gov (United States)

    Lin, Cheng-Chung; Chen, Chien-Chih; Chen, Yunn-Jy; Lu, Tung-Wu; Hong, Shih-Wun

    2014-01-01

    This study aimed to develop and evaluate experimentally an implant-based registration method for measuring three-dimensional (3D) kinematics of the mandible and dental implants in the mandible based on dental cone beam computed tomography (CBCT), modified to include fluoroscopic function. The proposed implant-based registration method was based on the registration of CBCT data of implants/bones with single-plane fluoroscopy images. Seven registration conditions that included one to three implants were evaluated experimentally for their performance in a cadaveric porcine head model. The implant-based registration method was shown to have measurement errors (SD) of less than -0.2 (0.3) mm, 1.1 (2.2) mm, and 0.7 degrees (1.3 degrees) for the in-plane translation, out-of-plane translation, and all angular components, respectively, regardless of the number of implants used. The corresponding errors were reduced to less than -0.1 (0.1) mm, -0.3 (1.7) mm, and 0.5 degree (0.4 degree) when three implants were used. An implant-based registration method was developed to measure the 3D kinematics of the mandible/implants. With its high accuracy and reliability, the new method will be useful for measuring the 3D motion of the bones/implants for relevant applications.

  11. Correlation based method for comparing and reconstructing quasi-identical two-dimensional structures

    International Nuclear Information System (INIS)

    Mejia-Barbosa, Y.

    2000-03-01

    We show a method for comparing and reconstructing two similar amplitude-only structures, which are composed by the same number of identical apertures. The structures are two-dimensional and differ only in the location of one of the apertures. The method is based on a subtraction algorithm, which involves the auto-correlations and cross-correlation functions of the compared structures. Experimental results illustrate the feasibility of the method. (author)

  12. Soft Sensing of Key State Variables in Fermentation Process Based on Relevance Vector Machine with Hybrid Kernel Function

    Directory of Open Access Journals (Sweden)

    Xianglin ZHU

    2014-06-01

    Full Text Available To resolve the online detection difficulty of some important state variables in fermentation process with traditional instruments, a soft sensing modeling method based on relevance vector machine (RVM) with a hybrid kernel function is presented. Based on the characteristic analysis of two commonly-used kernel functions, that is, the local Gaussian kernel function and the global polynomial kernel function, a hybrid kernel function combining the merits of the Gaussian and polynomial kernel functions is constructed. To design optimal parameters of this kernel function, the particle swarm optimization (PSO) algorithm is applied. The proposed modeling method is used to predict the value of cell concentration in the lysine fermentation process. Simulation results show that the presented hybrid-kernel RVM model has better accuracy and performance than the single-kernel RVM model.
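    The hybrid kernel itself is straightforward to sketch; the mixing weight `lam` below is a hypothetical fixed value (the paper tunes kernel parameters with PSO, which is omitted here):

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # local kernel: strong response only for nearby points
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def poly_kernel(X, Y, degree=2, c=1.0):
    # global kernel: all points contribute through inner products
    return (X @ Y.T + c) ** degree

def hybrid_kernel(X, Y, lam=0.7, sigma=1.0, degree=2, c=1.0):
    # convex combination of a local (Gaussian) and a global (polynomial) kernel
    return lam * gaussian_kernel(X, Y, sigma) + (1 - lam) * poly_kernel(X, Y, degree, c)

X = np.random.default_rng(0).normal(size=(5, 3))
K = hybrid_kernel(X, X)
```

    A convex combination of two positive semidefinite kernels is itself a valid (positive semidefinite) kernel, so it can be dropped into an RVM or SVM unchanged.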

  13. Methods library of embedded R functions at Statistics Norway

    Directory of Open Access Journals (Sweden)

    Øyvind Langsrud

    2017-11-01

    Full Text Available Statistics Norway is modernising its production processes. An important element in this work is a library of functions for statistical computations. In principle, the functions in such a methods library can be programmed in several languages. A modernised production environment demands that these functions can be reused for different statistics products, and that they are embedded within a common IT system. The embedding should be done in such a way that the users of the methods do not need to know the underlying programming language. As a proof of concept, Statistics Norway has recently established a methods library offering a limited number of methods for macro-editing, imputation and confidentiality. This is done within an area of municipal statistics with R as the only programming language. This paper presents the details and experiences from this work. The problem of fitting real-world applications to simple and strict standards is discussed and exemplified by the development of solutions to regression imputation and table suppression.

  14. LEGO: a novel method for gene set over-representation analysis by incorporating network-based gene weights.

    Science.gov (United States)

    Dong, Xinran; Hao, Yun; Wang, Xiao; Tian, Weidong

    2016-01-11

    Pathway or gene set over-representation analysis (ORA) has become a routine task in functional genomics studies. However, currently widely used ORA tools employ statistical methods such as Fisher's exact test that reduce a pathway into a list of genes, ignoring the constitutive functional non-equivalent roles of genes and the complex gene-gene interactions. Here, we develop a novel method named LEGO (functional Link Enrichment of Gene Ontology or gene sets) that takes into consideration these two types of information by incorporating network-based gene weights in ORA analysis. In three benchmarks, LEGO achieves better performance than Fisher and three other network-based methods. To further evaluate LEGO's usefulness, we compare LEGO with five gene expression-based and three pathway topology-based methods using a benchmark of 34 disease gene expression datasets compiled by a recent publication, and show that LEGO is among the top-ranked methods in terms of both sensitivity and prioritization for detecting target KEGG pathways. In addition, we develop a cluster-and-filter approach to reduce the redundancy among the enriched gene sets, making the results more interpretable to biologists. Finally, we apply LEGO to two lists of autism genes, and identify relevant gene sets to autism that could not be found by Fisher.
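    The Fisher's-exact-test baseline that such network-weighted methods improve on reduces to a hypergeometric tail probability; a minimal standard-library sketch, with all counts illustrative:

```python
from math import comb

def ora_pvalue(N, K, n, k):
    """Hypergeometric upper tail: probability of observing >= k pathway
    genes in the study list, equivalent to a one-sided Fisher's exact test.
    N: background genes, K: pathway genes, n: study-list size, k: overlap."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# e.g. 10 of 200 study genes fall in a 100-gene pathway, 20000-gene background
p = ora_pvalue(N=20000, K=100, n=200, k=10)
```

    Note this treats every gene as interchangeable, which is exactly the limitation (ignoring gene weights and interactions) that LEGO addresses.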

  15. Managerial Methods Based on Analysis, Recommended to a Boarding House

    Directory of Open Access Journals (Sweden)

    Solomia Andreş

    2015-06-01

    Full Text Available The paper presents a few theoretical and practical contributions regarding the implementation of analysis-based methods, namely a SWOT analysis and an economic analysis, from the perspective of the management of a firm that operates profitably through the activity of a boarding house. The two types of managerial methods recommended to the firm offer the real and complex information necessary for understanding the firm's status and for elaborating the predictions needed to maintain business viability.

  16. Image Inpainting Based on Coherence Transport with Adapted Distance Functions

    KAUST Repository

    März, Thomas

    2011-01-01

    We discuss an extension of our image inpainting method based on coherence transport. For the latter method the pixels of the inpainting domain have to be serialized into an ordered list. Until now, to induce the serialization we have used the distance to boundary map. But there are inpainting problems where the distance to boundary serialization causes unsatisfactory inpainting results. In the present work we demonstrate cases where we can resolve the difficulties by employing other distance functions which better suit the problem at hand. © 2011 Society for Industrial and Applied Mathematics.
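    The distance-to-boundary serialization mentioned above can be sketched as a multi-source BFS over the inpainting mask (the mask here is synthetic, and the paper's adapted distance functions would replace this map):

```python
from collections import deque
import numpy as np

mask = np.zeros((7, 7), bool)
mask[2:5, 2:5] = True  # inpainting domain: a 3x3 hole

# multi-source BFS from all known pixels: distance-to-boundary map
dist = np.full(mask.shape, -1)
q = deque()
for i in range(mask.shape[0]):
    for j in range(mask.shape[1]):
        if not mask[i, j]:
            dist[i, j] = 0
            q.append((i, j))
while q:
    i, j = q.popleft()
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < mask.shape[0] and 0 <= nj < mask.shape[1] and dist[ni, nj] < 0:
            dist[ni, nj] = dist[i, j] + 1
            q.append((ni, nj))

# serialize inpainting pixels from the boundary inward
order = sorted(zip(*np.nonzero(mask)), key=lambda p: dist[p])
```

    Pixels nearest the known data are filled first; the hole's center is filled last.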

  17. Reference Function Based Spatiotemporal Fuzzy Logic Control Design Using Support Vector Regression Learning

    Directory of Open Access Journals (Sweden)

    Xian-Xia Zhang

    2013-01-01

    Full Text Available This paper presents a reference function based 3D FLC design methodology using support vector regression (SVR learning. The concept of reference function is introduced to 3D FLC for the generation of 3D membership functions (MF, which enhance the capability of the 3D FLC to cope with more kinds of MFs. The nonlinear mathematical expression of the reference function based 3D FLC is derived, and spatial fuzzy basis functions are defined. Via relating spatial fuzzy basis functions of a 3D FLC to kernel functions of an SVR, an equivalence relationship between a 3D FLC and an SVR is established. Therefore, a 3D FLC can be constructed using the learned results of an SVR. Furthermore, the universal approximation capability of the proposed 3D fuzzy system is proven in terms of the finite covering theorem. Finally, the proposed method is applied to a catalytic packed-bed reactor and simulation results have verified its effectiveness.

  18. Ionospheric forecasting model using fuzzy logic-based gradient descent method

    Directory of Open Access Journals (Sweden)

    D. Venkata Ratnam

    2017-09-01

    Full Text Available Space weather phenomena cause satellite to ground or satellite to aircraft transmission outages over the VHF to L-band frequency range, particularly in the low latitude region. The Global Positioning System (GPS) is primarily susceptible to this form of space weather. Faulty GPS signals are attributed to ionospheric error, which is a function of Total Electron Content (TEC). Importantly, precise forecasts of space weather conditions and the appropriate hazard warnings required for ionospheric space weather observations are limited. In this paper, a fuzzy logic-based gradient descent method has been proposed to forecast the ionospheric TEC values. In this technique, membership functions have been tuned based on the gradient descent estimated values. The proposed algorithm has been tested with the TEC data of two geomagnetic storms in the low latitude station of KL University, Guntur, India (16.44°N, 80.62°E. It has been found that the gradient descent method performs well and the predicted TEC values are close to the original TEC measurements.
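    A minimal flavor of gradient-descent tuning of a membership function, assuming a single Gaussian MF and a synthetic target (the paper tunes the MFs of full fuzzy rules against TEC data):

```python
import numpy as np

def mf(x, c, s=1.5):
    """Gaussian membership function with center c and fixed width s."""
    return np.exp(-(x - c) ** 2 / (2 * s ** 2))

x = np.linspace(0.0, 10.0, 80)
target = mf(x, 6.0)          # synthetic "true" membership grades

c, lr = 4.0, 0.5             # initial center, learning rate
for _ in range(2000):
    y = mf(x, c)
    # analytic gradient of the squared error with respect to the center
    grad = np.sum((y - target) * y * (x - c) / 1.5 ** 2)
    c -= lr * grad / len(x)
```

    The center drifts from its initial guess of 4.0 toward the target center 6.0 as the squared error is driven down.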

  19. Comparing and improving reconstruction methods for proxies based on compositional data

    Science.gov (United States)

    Nolan, C.; Tipton, J.; Booth, R.; Jackson, S. T.; Hooten, M.

    2017-12-01

    Many types of studies in paleoclimatology and paleoecology involve compositional data. Often, these studies aim to use compositional data to reconstruct an environmental variable of interest; the reconstruction is usually done via the development of a transfer function. Transfer functions have been developed using many different methods. Existing methods tend to relate the compositional data and the reconstruction target in very simple ways. Additionally, the results from different methods are rarely compared. Here we seek to address these two issues. First, we introduce a new hierarchical Bayesian multivariate Gaussian process model; this model allows for the relationship between each species in the compositional dataset and the environmental variable to be modeled in a way that captures the underlying complexities. Then, we compare this new method to machine learning techniques and commonly used existing methods. The comparisons are based on reconstructing the water table depth history of Caribou Bog (an ombrotrophic Sphagnum peat bog in Old Town, Maine, USA) from a new 7500 year long record of testate amoebae assemblages. The resulting reconstructions from different methods diverge in both their resulting means and uncertainties. In particular, uncertainty tends to be drastically underestimated by some common methods. These results will help to improve inference of water table depth from testate amoebae. Furthermore, this approach can be applied to test and improve inferences of past environmental conditions from a broad array of paleo-proxies based on compositional data.

  20. EPC: A Provably Secure Permutation Based Compression Function

    DEFF Research Database (Denmark)

    Bagheri, Nasour; Gauravaram, Praveen; Naderi, Majid

    2010-01-01

    The security of permutation-based hash functions in the ideal permutation model has been studied when the input-length of compression function is larger than the input-length of the permutation function. In this paper, we consider permutation based compression functions that have input lengths sh...

  1. Probabilistic Wind Power Ramp Forecasting Based on a Scenario Generation Method: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Qin [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Florita, Anthony R [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Krishnan, Venkat K [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Hodge, Brian S [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Cui, Mingjian [Univ. of Texas-Dallas, Richardson, TX (United States); Feng, Cong [Univ. of Texas-Dallas, Richardson, TX (United States); Wang, Zhenke [Univ. of Texas-Dallas, Richardson, TX (United States); Zhang, Jie [Univ. of Texas-Dallas, Richardson, TX (United States)

    2017-08-31

    Wind power ramps (WPRs) are particularly important in the management and dispatch of wind power, and they are currently drawing the attention of balancing authorities. With the aim to reduce the impact of WPRs for power system operations, this paper develops a probabilistic ramp forecasting method based on a large number of simulated scenarios. An ensemble machine learning technique is first adopted to forecast the basic wind power forecasting scenario and calculate the historical forecasting errors. A continuous Gaussian mixture model (GMM) is used to fit the probability distribution function (PDF) of forecasting errors. The cumulative distribution function (CDF) is analytically deduced. The inverse transform method based on Monte Carlo sampling and the CDF is used to generate a massive number of forecasting error scenarios. An optimized swinging door algorithm is adopted to extract all the WPRs from the complete set of wind power forecasting scenarios. The probabilistic forecasting results of ramp duration and start time are generated based on all scenarios. Numerical simulations on publicly available wind power data show that within a predefined tolerance level, the developed probabilistic wind power ramp forecasting method is able to predict WPRs with a high level of sharpness and accuracy.
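    The GMM-plus-inverse-transform step can be sketched with a numerically tabulated CDF; the mixture parameters below are illustrative stand-ins for a fitted forecasting-error model:

```python
import math
import numpy as np

rng = np.random.default_rng(3)

# illustrative 2-component Gaussian mixture "fitted" to forecasting errors
weights = np.array([0.6, 0.4])
means   = np.array([-0.05, 0.10])
stds    = np.array([0.02, 0.05])

# tabulate the mixture CDF on a fine grid
grid = np.linspace(-0.3, 0.4, 2001)
erf = np.vectorize(math.erf)
cdf = sum(w * 0.5 * (1.0 + erf((grid - m) / (s * math.sqrt(2.0))))
          for w, m, s in zip(weights, means, stds))

# inverse transform sampling: push uniforms through the inverted CDF
u = rng.uniform(cdf[0], cdf[-1], 100_000)
samples = np.interp(u, cdf, grid)
```

    Each generated error sample, added to the base forecast, yields one wind power scenario from which ramps can then be extracted.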

  2. The extended hyperbolic function method and exact solutions of the long-short wave resonance equations

    International Nuclear Information System (INIS)

    Shang Yadong

    2008-01-01

    The extended hyperbolic functions method for nonlinear wave equations is presented. Based on this method, we obtain multiple exact explicit solutions for the nonlinear evolution equations which describe the resonance interaction between the long wave and the short wave. The solutions obtained in this paper include (a) the solitary wave solutions of bell-type for S and L, (b) the solitary wave solutions of kink-type for S and bell-type for L, (c) the solitary wave solutions of a compound of the bell-type and the kink-type for S and L, (d) the singular travelling wave solutions, (e) periodic travelling wave solutions of triangle function types, and solitary wave solutions of rational function types. The variety of structures of the exact solutions of the long-short wave equation is illustrated. The methods presented here can also be used to obtain exact solutions of nonlinear wave equations in n dimensions
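    As an illustration, methods of this family typically posit a finite series in hyperbolic functions of a travelling-wave variable; one common form of the ansatz (the exact form varies by author, so this is a sketch rather than the paper's own expansion) is

```latex
u(\xi) = a_0 + \sum_{i=1}^{N}\Big(a_i\,\tanh^{i}(k\xi) + b_i\,\tanh^{i-1}(k\xi)\,\operatorname{sech}(k\xi)\Big),
\qquad \xi = x - ct,
```

    where the truncation order N is fixed by balancing the highest-order derivative against the strongest nonlinearity, and substituting the ansatz reduces the PDE to algebraic equations for the coefficients.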

  3. Digital Resonant Controller based on Modified Tustin Discretization Method

    Directory of Open Access Journals (Sweden)

    STOJIC, D.

    2016-11-01

    Full Text Available Resonant controllers are used in power converter voltage and current control due to their simplicity and accuracy. However, digital implementation of resonant controllers introduces problems related to zero and pole mapping from the continuous to the discrete time domain. Namely, some discretization methods introduce significant errors in the digital controller resonant frequency, resulting in the loss of the asymptotic AC reference tracking, especially at high resonant frequencies. The delay compensation typical for resonant controllers can also be compromised. Based on the existing analysis, it can be concluded that the Tustin discretization with frequency prewarping represents a preferable choice from the point of view of the resonant frequency accuracy. However, this discretization method has a shortcoming in applications that require real-time frequency adaptation, since complex trigonometric evaluation is required for each frequency change. In order to overcome this problem, in this paper the modified Tustin discretization method is proposed based on the Taylor series approximation of the frequency prewarping function. By comparing the novel discretization method with commonly used two-integrator-based proportional-resonant (PR digital controllers, it is shown that the resulting digital controller resonant frequency and time delay compensation errors are significantly reduced for the novel controller.
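    The prewarping approximation can be sketched numerically; the sampling period and resonant frequency below are illustrative:

```python
import numpy as np

T = 1e-4                # sampling period (illustrative)
w0 = 2 * np.pi * 550.0  # resonant frequency, e.g. 11th harmonic of 50 Hz

# Tustin with frequency prewarping maps w0 exactly, but needs tan():
prewarp_exact = (2 / T) * np.tan(w0 * T / 2)

# a Taylor-series approximation of the prewarping function avoids the
# trigonometric evaluation at every real-time frequency update:
v = w0 * T / 2
prewarp_taylor = (2 / T) * (v + v ** 3 / 3 + 2 * v ** 5 / 15)
```

    For resonant frequencies well below the Nyquist rate the truncated series agrees with the exact prewarped frequency to within a few parts per million, which is why the modified method keeps the resonant-frequency accuracy while being cheap to update.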

  4. A comparison of density functional theory and coupled cluster methods for the calculation of electric dipole polarizability gradients of methane

    DEFF Research Database (Denmark)

    Paidarová, Ivana; Sauer, Stephan P. A.

    2012-01-01

    We have compared the performance of density functional theory (DFT) using five different exchange-correlation functionals with four coupled cluster theory based wave function methods in the calculation of geometrical derivatives of the polarizability tensor of methane. The polarizability gradient...

  5. A Waterline Extraction Method from Remote Sensing Image Based on Quad-tree and Multiple Active Contour Model

    Directory of Open Access Journals (Sweden)

    YU Jintao

    2016-09-01

    Full Text Available After the characteristics of the geodesic active contour model (GAC), the Chan-Vese model (CV) and the local binary fitting model (LBF) are analyzed, and the active contour model based on regions and edges is combined with an image segmentation method based on quad-trees, a waterline extraction method based on quad-trees and a multiple active contour model is proposed in this paper. Firstly, the method provides an initial contour according to quad-tree segmentation. Secondly, a new signed pressure force (SPF) function based on the global image statistics of the CV model and the local image statistics of the LBF model is defined, and the edge stopping function (ESF) is replaced by the proposed SPF function, which solves problems such as premature stopping of the evolution and excessive evolution. Finally, the selective binary and Gaussian filtering level set method is used to avoid reinitialization and regularization and to improve the evolution efficiency. The experimental results show that this method can effectively extract weak edges and seriously concave edges, and offers sub-pixel accuracy, high efficiency and reliability for waterline extraction.
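    A sketch of a CV-style signed pressure force on a synthetic image (the paper's SPF additionally blends in local LBF statistics, which are omitted here):

```python
import numpy as np

def spf(I, phi):
    """Signed pressure force from the global region means (Chan-Vese style):
    positive inside the brighter region, negative outside, normalized to [-1, 1]."""
    inside = phi > 0
    c1 = I[inside].mean() if inside.any() else 0.0
    c2 = I[~inside].mean() if (~inside).any() else 0.0
    f = I - (c1 + c2) / 2
    return f / np.abs(f).max()

I = np.zeros((32, 32)); I[8:24, 8:24] = 1.0          # synthetic "land/water" image
phi = -np.ones_like(I); phi[12:20, 12:20] = 1.0      # initial contour inside the object
s = spf(I, phi)
```

    The sign of the SPF drives the contour to expand inside the object and shrink outside it, replacing the gradient-based edge stopping function.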

  6. Microscopically Based Nuclear Energy Functionals

    International Nuclear Information System (INIS)

    Bogner, S. K.

    2009-01-01

    A major goal of the SciDAC project 'Building a Universal Nuclear Energy Density Functional' is to develop next-generation nuclear energy density functionals that give controlled extrapolations away from stability with improved performance across the mass table. One strategy is to identify missing physics in phenomenological Skyrme functionals based on our understanding of the underlying internucleon interactions and microscopic many-body theory. In this contribution, I describe ongoing efforts to use the density matrix expansion of Negele and Vautherin to incorporate missing finite-range effects from the underlying two- and three-nucleon interactions into phenomenological Skyrme functionals.

  7. FUN-LDA: A Latent Dirichlet Allocation Model for Predicting Tissue-Specific Functional Effects of Noncoding Variation: Methods and Applications.

    Science.gov (United States)

    Backenroth, Daniel; He, Zihuai; Kiryluk, Krzysztof; Boeva, Valentina; Pethukova, Lynn; Khurana, Ekta; Christiano, Angela; Buxbaum, Joseph D; Ionita-Laza, Iuliana

    2018-05-03

    We describe a method based on a latent Dirichlet allocation model for predicting functional effects of noncoding genetic variants in a cell-type- and/or tissue-specific way (FUN-LDA). Using this unsupervised approach, we predict tissue-specific functional effects for every position in the human genome in 127 different tissues and cell types. We demonstrate the usefulness of our predictions by using several validation experiments. Using eQTL data from several sources, including the GTEx project, Geuvadis project, and TwinsUK cohort, we show that eQTLs in specific tissues tend to be most enriched among the predicted functional variants in relevant tissues in Roadmap. We further show how these integrated functional scores can be used for (1) deriving the most likely cell or tissue type causally implicated for a complex trait by using summary statistics from genome-wide association studies and (2) estimating a tissue-based correlation matrix of various complex traits. We found large enrichment of heritability in functional components of relevant tissues for various complex traits, and FUN-LDA yielded higher enrichment estimates than existing methods. Finally, using experimentally validated functional variants from the literature and variants possibly implicated in disease by previous studies, we rigorously compare FUN-LDA with state-of-the-art functional annotation methods and show that FUN-LDA has better prediction accuracy and higher resolution than these methods. In particular, our results suggest that tissue- and cell-type-specific functional prediction methods tend to have substantially better prediction accuracy than organism-level prediction methods. Scores for each position in the human genome and for each ENCODE and Roadmap tissue are available online (see Web Resources). Copyright © 2018 American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.

  8. A Modified Generalized Laguerre-Gauss Collocation Method for Fractional Neutral Functional-Differential Equations on the Half-Line

    Directory of Open Access Journals (Sweden)

    Ali H. Bhrawy

    2014-01-01

    Full Text Available The modified generalized Laguerre-Gauss collocation (MGLC) method is applied to obtain an approximate solution of fractional neutral functional-differential equations with proportional delays on the half-line. The proposed technique is based on modified generalized Laguerre polynomials and Gauss quadrature integration of such polynomials. The main advantage of the present method is to reduce the solution of fractional neutral functional-differential equations into a system of algebraic equations. Reasonable numerical results are achieved by choosing a few modified generalized Laguerre-Gauss collocation points. Numerical results demonstrate the accuracy, efficiency, and versatility of the proposed method on the half-line.
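    The Gauss quadrature ingredient can be illustrated with classical Gauss-Laguerre nodes and weights (NumPy's `laggauss`); the paper's modified generalized Laguerre polynomials are a generalization of this classical case:

```python
import numpy as np
from numpy.polynomial.laguerre import laggauss

# Gauss-Laguerre nodes/weights integrate f(x) * exp(-x) over [0, inf)
x, w = laggauss(8)

# example: integral of x * exp(-x) over [0, inf) equals 1! = 1
approx = np.sum(w * x)
```

    An n-point rule is exact for polynomials up to degree 2n - 1, so collocating at these nodes turns the half-line problem into a small algebraic system.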

  9. [Nonparametric method of estimating survival functions containing right-censored and interval-censored data].

    Science.gov (United States)

    Xu, Yonghong; Gao, Xiaohuan; Wang, Zhengxi

    2014-04-01

    Missing data represent a general problem in many scientific fields, especially in medical survival analysis. Interpolation is one of the important methods for dealing with censored data. However, most interpolation methods replace the censored data with exact data, which distorts the real distribution of the censored data and reduces the probability of the real data falling into the interpolation interval. In order to solve this problem, we propose in this paper a nonparametric method of estimating the survival function of right-censored and interval-censored data and compare its performance to the SC (self-consistent) algorithm. Compared to the average interpolation and the nearest neighbor interpolation methods, the proposed method replaces the right-censored data with interval-censored data, greatly improving the probability of the real data falling into the imputation interval. It then uses empirical distribution theory to estimate the survival function of right-censored and interval-censored data. The results of numerical examples and a real breast cancer data set demonstrated that the proposed method had higher accuracy and better robustness for different proportions of censored data. This paper provides a good method for comparing the performance of clinical treatments based on estimates of the patients' survival data, and offers some help to medical survival data analysis.
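    As background, the classic Kaplan-Meier product-limit estimator for right-censored data (the setting that interval-censored methods generalize) can be sketched on toy data:

```python
import numpy as np

# toy survival data: follow-up time and event indicator (1 = event, 0 = right-censored)
time  = np.array([2, 3, 3, 5, 6, 7, 9, 9])
event = np.array([1, 1, 0, 1, 0, 1, 1, 0])

# product-limit estimate of S(t) at each distinct event time
t_uniq = np.unique(time[event == 1])
S = 1.0
surv = []
for t in t_uniq:
    at_risk = np.sum(time >= t)                  # subjects still under observation
    d = np.sum((time == t) & (event == 1))       # events at time t
    S *= 1 - d / at_risk
    surv.append(S)
```

    Censored subjects leave the risk set without contributing an event, which is exactly the information an interval-censored generalization must handle differently.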

  10. Uncertainties of Predictions from Parton Distribution Functions 1, the Lagrange Multiplier Method

    CERN Document Server

    Stump, D R; Brock, R; Casey, D; Huston, J; Kalk, J; Lai, H L; Tung, W K

    2002-01-01

    We apply the Lagrange Multiplier method to study the uncertainties of physical predictions due to the uncertainties of parton distribution functions (PDFs), using the cross section for W production at a hadron collider as an archetypal example. An effective chi-squared function based on the CTEQ global QCD analysis is used to generate a series of PDFs, each of which represents the best fit to the global data for some specified value of the cross section. By analyzing the likelihood of these "alternative hypotheses", using available information on errors from the individual experiments, we estimate that the fractional uncertainty of the cross section due to current experimental input to the PDF analysis is approximately 4% at the Tevatron, and 10% at the LHC. We give sets of PDFs corresponding to these up and down variations of the cross section. We also present similar results on Z production at the colliders. Our method can be applied to any combination of physical variables in precision QCD phenomenology, an...
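    The Lagrange multiplier scan can be illustrated on a toy quadratic chi-squared with a linear observable; all functions and tolerances below are hypothetical, not the CTEQ fit:

```python
import numpy as np

# toy chi^2 in two "PDF parameters" and a derived observable X
chi2 = lambda th: (th[0] - 1.0) ** 2 + (th[1] - 2.0) ** 2
X    = lambda th: th[0] + th[1]

results = []
for lam in np.linspace(-2, 2, 41):
    # minimize Psi = chi2 + lam * X; for this quadratic toy the minimum is closed-form
    th = np.array([1.0 - lam / 2, 2.0 - lam / 2])
    results.append((X(th), chi2(th)))

Xs, chis = map(np.array, zip(*results))
# allowed range of X for Delta chi^2 <= 1 around the best fit (X = 3 here)
lo, hi = Xs[chis <= 1.0].min(), Xs[chis <= 1.0].max()
```

    Each value of the multiplier traces out the best fit subject to a fixed value of the observable, and the chi-squared tolerance then converts that curve into an uncertainty band on X.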

  11. convergent methods for calculating thermodynamic Green functions

    OpenAIRE

    Bowen, S. P.; Williams, C. D.; Mancini, J. D.

    1984-01-01

    A convergent method of approximating thermodynamic Green functions is outlined briefly. The method constructs a sequence of approximants which converges independently of the strength of the Hamiltonian's coupling constants. Two new concepts associated with the approximants are introduced: the resolving power of the approximation, and conditional creation (annihilation) operators. These ideas are illustrated on an exactly soluble model and a numerical example. A convergent expression for the s...

  12. Creep analysis by the path function method

    International Nuclear Information System (INIS)

    Akin, J.E.; Pardue, R.M.

    1977-01-01

    The finite element method has become a common analysis procedure for the creep analysis of structures. The most recent programs are designed to handle a general class of material properties and are able to calculate elastic, plastic, and creep components of strain under general loading histories. The constant stress approach is too crude a model to accurately represent the actual behaviour of the stress for large time steps. The true path of a point in the effective stress-effective strain (σ^e-ε^c) plane is often one in which the slope is rapidly changing. Thus the stress level quickly moves away from the initial stress level and then gradually approaches the final one. The result is that the assumed constant stress level quickly becomes inaccurate. What is required is a better method of approximation of the true path in the σ^e-ε^c space. The method described here is called the path function approach because it employs an assumed function to estimate the motion of points in the σ^e-ε^c space. (Auth.)

  13. A method for synthesizing response functions of NaI detectors to gamma rays

    International Nuclear Information System (INIS)

    Sie, S.H.

    1978-08-01

    A simple method of parametrizing the response function of NaI detectors to gamma rays is described, based on decomposition of the pulse-height spectrum into components associated with the actual detection processes. Smooth dependence of the derived parameters on the gamma-ray energy made it possible to generate a lineshape for any gamma-ray energy by suitable interpolation techniques. The method is applied in the analysis of spectra measured with a 7.6 x 7.6 cm NaI detector in a continuum gamma-ray study following (HI,xn) reactions
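    The interpolation step can be sketched with tabulated lineshape parameters; all numbers below are hypothetical, not calibration data from the paper:

```python
import numpy as np

# lineshape parameters tabulated at calibration energies (hypothetical values)
E_cal   = np.array([0.5, 1.0, 2.0, 4.0])          # gamma-ray energy, MeV
sigma   = np.array([0.030, 0.040, 0.055, 0.075])  # photopeak width parameter
pe_frac = np.array([0.55, 0.40, 0.25, 0.15])      # photopeak fraction

def lineshape_params(E):
    """Interpolate the tabulated parameters to an arbitrary gamma energy,
    from which a full synthetic lineshape can then be assembled."""
    return np.interp(E, E_cal, sigma), np.interp(E, E_cal, pe_frac)

s, f = lineshape_params(1.5)
```

    Because each parameter varies smoothly with energy, simple piecewise-linear interpolation between calibration points suffices to synthesize a response at any energy in range.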

  14. Network-based ranking methods for prediction of novel disease associated microRNAs.

    Science.gov (United States)

    Le, Duc-Hau

    2015-10-01

    Many studies have shown roles of microRNAs in human disease, and a number of computational methods have been proposed to predict such associations by ranking candidate microRNAs according to their relevance to a disease. Among them, machine learning-based methods usually have a limitation in specifying non-disease microRNAs as negative training samples. Meanwhile, network-based methods are becoming dominant since they exploit well the "disease module" principle in microRNA functional similarity networks. Among these, the random walk with restart (RWR) algorithm is currently the state-of-the-art method. The use of this algorithm was inspired by its success in predicting disease genes, because the "disease module" principle also exists in protein interaction networks. Besides, many algorithms designed for webpage ranking have been successfully applied to ranking disease candidate genes because web networks share topological properties with protein interaction networks. However, these algorithms have not yet been utilized for disease microRNA prediction. We constructed microRNA functional similarity networks based on shared targets of microRNAs, and then integrated them with a recently identified microRNA functional synergistic network. After analyzing topological properties of these networks, in addition to RWR, we assessed the performance of (i) PRINCE (PRIoritizatioN and Complex Elucidation), which was proposed for disease gene prediction; (ii) PageRank with Priors (PRP) and K-Step Markov (KSM), which were used for studying web networks; and (iii) a neighborhood-based algorithm. Analyses of topological properties showed that all microRNA functional similarity networks are small-world and scale-free. The performance of each algorithm was assessed based on average AUC values over 35 disease phenotypes and average rankings of newly discovered disease microRNAs. As a result, the performance on the integrated network was better than that on individual ones.
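    The RWR algorithm at the core of such methods is short to state: iterate p ← (1-r) W p + r p0 on a column-normalized similarity network until convergence, then rank candidates by their steady-state probability. The 5-node toy network, seed choice, and restart probability below are illustrative, not from the paper.

```python
import numpy as np

# Toy similarity network (adjacency matrix); node 0 is the known seed.
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 1, 0],
              [0, 0, 1, 0, 1],
              [0, 0, 0, 1, 0]], float)
W = A / A.sum(axis=0)            # column-normalized transition matrix
r = 0.7                          # restart probability (a common choice)
p0 = np.array([1., 0., 0., 0., 0.])   # seed: the known disease-associated node

p = p0.copy()
for _ in range(1000):            # iterate p <- (1-r) W p + r p0 to convergence
    p_new = (1.0 - r) * W @ p + r * p0
    if np.abs(p_new - p).sum() < 1e-12:
        p = p_new
        break
    p = p_new

ranking = np.argsort(-p)         # candidates ranked by steady-state probability
```

Nodes closer (in the network sense) to the seed end up higher in the ranking; here the seed itself ranks first and the most remote node last.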

  15. Investigation of MLE in nonparametric estimation methods of reliability function

    International Nuclear Information System (INIS)

    Ahn, Kwang Won; Kim, Yoon Ik; Chung, Chang Hyun; Kim, Kil Yoo

    2001-01-01

    There have been many attempts to estimate a reliability function. In the ESReDA 20th seminar, a new nonparametric method was proposed, the major point of which is how to use censored data efficiently. Generally there are three kinds of approach for estimating a reliability function in a nonparametric way, i.e., the Reduced Sample Method, the Actuarial Method and the Product-Limit (PL) Method. These three methods have some limits, so we suggest an advanced method that reflects censored information more efficiently. In many instances there will be a unique maximum likelihood estimator (MLE) of an unknown parameter, and often it may be obtained by the process of differentiation. It is well known that the three methods generally used to estimate a reliability function in a nonparametric way have maximum likelihood estimators that exist uniquely. Therefore, the MLE of the new method is derived in this study. The procedure to calculate the MLE is similar to that of the PL-estimator; the difference between the two is that in the new method the mass (or weight) of each observation influences the others, whereas in the PL-estimator it does not
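    For reference, the Product-Limit (Kaplan-Meier) estimator mentioned above can be written in a few lines; it shows concretely how censored observations enter only through the at-risk counts. The data below are made up.

```python
# Each observation is (time, event): event = 1 for failure, 0 for censoring.
data = [(2, 1), (3, 0), (4, 1), (4, 1), (5, 0), (7, 1), (9, 0)]

def product_limit(data):
    """Return [(t, S(t))] at each distinct failure time."""
    S = 1.0
    curve = []
    for t in sorted({ti for ti, e in data if e == 1}):
        d = sum(1 for ti, e in data if ti == t and e == 1)   # failures at t
        at_risk = sum(1 for ti, e in data if ti >= t)        # at risk just before t
        S *= 1.0 - d / at_risk                               # product-limit update
        curve.append((t, S))
    return curve

curve = product_limit(data)   # [(2, 6/7), (4, 18/35), (7, 9/35)]
```

Censored observations (event = 0) never trigger a downward step of S(t) themselves, but they do shrink the at-risk set for later failure times.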

  16. Airy function approach and Numerov method to study the anharmonic oscillator potentials V(x) = Ax^(2α) + Bx^2

    Directory of Open Access Journals (Sweden)

    N. Al Sdran

    2016-06-01

    Full Text Available The numerical solutions of the time-independent Schrödinger equation for different one-dimensional potential forms are sometimes achieved by the asymptotic iteration method. Its importance appears, for example, in its efficiency in describing vibrational systems in quantum mechanics. In this paper, the Airy function approach and the Numerov method have been used and presented to study the anharmonic oscillator potential V(x) = Ax^(2α) + Bx^2 (A>0, B<0), with α = 2 for quartic, α = 3 for sextic and α = 4 for octic anharmonic oscillators. The Airy function approach is based on the replacement of the real potential V(x) by a piecewise-linear potential v(x), while the Numerov method is based on the discretization of the wave function on the x-axis. The first energy levels have been calculated and the wave functions for the sextic system have been evaluated. These calculations are not limited by the magnitude of A, B and α. The obtained results are found to be in good agreement with the previous results obtained by the asymptotic iteration method for α = 3.
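    The Numerov discretization referred to above can be sketched as a shooting method. For checkability, the potential below is the harmonic oscillator x²/2 (exact ground-state energy 0.5 in units ħ = m = 1); the paper's anharmonic V(x) = Ax^(2α) + Bx² can be swapped into `V`. The grid, bracket, and iteration counts are illustrative choices.

```python
import numpy as np

def V(x):
    return 0.5 * x * x          # harmonic test potential; replace as needed

def shoot(E, x):
    """Numerov integration of psi'' = 2(V - E) psi; return the endpoint value."""
    h = x[1] - x[0]
    g = 2.0 * (E - V(x))        # write the equation as psi'' + g psi = 0
    f = 1.0 + h * h * g / 12.0
    psi = np.zeros_like(x)
    psi[0], psi[1] = 0.0, 1e-10  # start deep in the classically forbidden region
    for i in range(1, len(x) - 1):   # Numerov three-point recurrence
        psi[i + 1] = ((12.0 - 10.0 * f[i]) * psi[i] - f[i - 1] * psi[i - 1]) / f[i + 1]
    return psi[-1]               # vanishes (in the limit) at an eigenvalue

x = np.linspace(-5.0, 5.0, 2001)
lo, hi = 0.4, 0.6                # bracket assumed to contain the ground state
for _ in range(60):              # bisection on the sign of the endpoint value
    mid = 0.5 * (lo + hi)
    if shoot(lo, x) * shoot(mid, x) <= 0.0:
        hi = mid
    else:
        lo = mid
E0 = 0.5 * (lo + hi)
```

The endpoint value changes sign as E crosses an eigenvalue, so bisection on it homes in on the energy level; excited states are found by widening the bracket or counting nodes.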

  17. Modal parameter identification based on combining transmissibility functions and blind source separation techniques

    Science.gov (United States)

    Araújo, Iván Gómez; Sánchez, Jesús Antonio García; Andersen, Palle

    2018-05-01

    Transmissibility-based operational modal analysis is a recent and alternative approach used to identify the modal parameters of structures under operational conditions. This approach is advantageous compared with traditional operational modal analysis because it does not make any assumptions about the excitation spectrum (i.e., white noise with a flat spectrum). However, common methodologies do not include a procedure to extract closely spaced modes with low signal-to-noise ratios. This issue is relevant when considering that engineering structures generally have closely spaced modes and that their measured responses present high levels of noise. Therefore, to overcome these problems, a new combined method for modal parameter identification is proposed in this work. The proposed method combines blind source separation (BSS) techniques and transmissibility-based methods. Here, BSS techniques were used to recover source signals, and transmissibility-based methods were applied to estimate modal information from the recovered source signals. To achieve this combination, a new method to define a transmissibility function was proposed. The suggested transmissibility function is based on the relationship between the power spectral density (PSD) of mixed signals and the PSD of signals from a single source. The numerical responses of a truss structure with high levels of added noise and very closely spaced modes were processed using the proposed combined method to evaluate its ability to identify modal parameters in these conditions. Colored and white noise excitations were used for the numerical example. The proposed combined method was also used to evaluate the modal parameters of an experimental test on a structure containing closely spaced modes. The results showed that the proposed combined method is capable of identifying very closely spaced modes in the presence of noise and, thus, may be potentially applied to improve the identification of damping ratios.
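    The PSD-based definition at the heart of the proposed transmissibility function can be illustrated with a toy two-channel example: estimate T(f) as the ratio of cross- to auto-power spectral densities, averaged Welch-style over segments. The synthetic signals and segment count are assumptions for illustration; the paper's specific definition (relating PSDs of mixed signals to single-source PSDs) is not reproduced here.

```python
import numpy as np

# Synthetic "measurement": channel y is channel x scaled by 2, plus light noise,
# so the true transmissibility magnitude is 2 at every frequency.
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
y = 2.0 * x + 0.01 * rng.standard_normal(4096)

def transmissibility(x, y, nseg=8):
    """Estimate T(f) = S_yx(f) / S_xx(f) by averaging over nseg segments."""
    n = len(x) // nseg
    Sxx = np.zeros(n // 2 + 1)
    Syx = np.zeros(n // 2 + 1, complex)
    for k in range(nseg):                      # Welch-style segment averaging
        X = np.fft.rfft(x[k * n:(k + 1) * n])
        Y = np.fft.rfft(y[k * n:(k + 1) * n])
        Sxx += (X * np.conj(X)).real           # auto-PSD of the reference channel
        Syx += Y * np.conj(X)                  # cross-PSD between the channels
    return Syx / Sxx

T = transmissibility(x, y)
```

Averaging over segments suppresses the noise cross-terms, so |T(f)| settles near the true ratio of 2.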

  18. A Novel Method of Interestingness Measures for Association Rules Mining Based on Profit

    Directory of Open Access Journals (Sweden)

    Chunhua Ju

    2015-01-01

    Full Text Available Association rules mining is an important topic in the domain of data mining and knowledge discovery. Several papers have presented interestingness measures, the most typical being Support, Confidence, Lift, and Improve. However, their limitations are evident: the lack of an objective criterion, a weak statistical basis, the inability to capture negative relationships, and so forth. This paper proposes three new measures, Bi-lift, Bi-improve, and Bi-confidence, as replacements for Lift, Improve, and Confidence, respectively. Then, on the basis of a utility function and the execution cost of rules, we propose an interestingness function based on profit (IFBP) that considers subjective preferences and characteristics of the specific application object. Finally, a novel measure framework is proposed to improve the traditional one through experimental analysis. In conclusion, the new measures and framework are superior to the traditional ones in terms of objective criteria, comprehensive definition, and practical application.
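    For readers unfamiliar with the classical measures the paper builds on, here is a worked sketch of Support, Confidence, and Lift on a made-up transaction set (the paper's Bi-variants are not reproduced, since their formulas are not given in the abstract).

```python
# Toy transaction database.
transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"milk", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]

def support(itemset):
    """Fraction of transactions containing the itemset."""
    return sum(1 for t in transactions if itemset <= t) / len(transactions)

def confidence(antecedent, consequent):
    """P(consequent | antecedent)."""
    return support(antecedent | consequent) / support(antecedent)

def lift(antecedent, consequent):
    """> 1: positive association; < 1: negative; = 1: independence."""
    return confidence(antecedent, consequent) / support(consequent)

s = support({"bread", "milk"})          # 2 of 5 transactions
c = confidence({"bread"}, {"milk"})     # 2 of the 3 bread transactions
l = lift({"bread"}, {"milk"})           # (2/3) / (4/5): mildly negative
```

This also shows the limitation the paper targets: Lift reports a single symmetric number, which cannot distinguish the direction or profit impact of a negative relationship.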

  19. Direct functionalization of pristine single-walled carbon nanotubes by diazonium-based method with various five-membered S- or N- heteroaromatic amines

    International Nuclear Information System (INIS)

    Leinonen, Heli; Lajunen, Marja

    2012-01-01

    Reactivity of five-membered, variously substituted, heteroaromatic diazonium salts was studied toward pristine single-walled carbon nanotubes (SWCNTs), prepared by high-pressure CO conversion (HiPCO) method. Average size range of individual HiPCO SWCNTs was 0.8–1.2 nm (diameter) and 100–1,000 nm (length). Functionalizations were performed by a one-pot diazotization–dediazotization method with methyl-2-aminothiophene-3-carboxylate, 2-aminothiophene-3-carbonitrile, 2-aminoimidazole sulfate, or 3-aminopyrazole in acetic acid using sodium nitrite at room temperature or by heating. According to Raman and Fourier transform infrared spectroscopy, all used heterocyclic diazonium salts formed a covalent bond with SWCNTs and yielded new kinds of five-membered heterocycle-functionalized SWCNTs. Methyl-2-thiophenyl-3-carboxylate-functionalized SWCNTs formed a highly soluble, stable dispersion in tetrahydrofuran (THF), 3-pyrazoyl-functionalized SWCNTs in ethanol, and 2-imidazoyl- or 2-thiophenyl-3-carbonitrile-functionalized SWCNTs in ethanol and THF. The thermogravimetric analysis as well as energy-filtered transmission electron microscopy imaging of the products confirmed the successful functionalization of SWCNTs.

  20. Direct functionalization of pristine single-walled carbon nanotubes by diazonium-based method with various five-membered S- or N- heteroaromatic amines

    Energy Technology Data Exchange (ETDEWEB)

    Leinonen, Heli; Lajunen, Marja, E-mail: marja.lajunen@oulu.fi [University of Oulu, Department of Chemistry (Finland)

    2012-09-15

    Reactivity of five-membered, variously substituted, heteroaromatic diazonium salts was studied toward pristine single-walled carbon nanotubes (SWCNTs), prepared by high-pressure CO conversion (HiPCO) method. Average size range of individual HiPCO SWCNTs was 0.8-1.2 nm (diameter) and 100-1,000 nm (length). Functionalizations were performed by a one-pot diazotization-dediazotization method with methyl-2-aminothiophene-3-carboxylate, 2-aminothiophene-3-carbonitrile, 2-aminoimidazole sulfate, or 3-aminopyrazole in acetic acid using sodium nitrite at room temperature or by heating. According to Raman and Fourier transform infrared spectroscopy, all used heterocyclic diazonium salts formed a covalent bond with SWCNTs and yielded new kinds of five-membered heterocycle-functionalized SWCNTs. Methyl-2-thiophenyl-3-carboxylate-functionalized SWCNTs formed a highly soluble, stable dispersion in tetrahydrofuran (THF), 3-pyrazoyl-functionalized SWCNTs in ethanol, and 2-imidazoyl- or 2-thiophenyl-3-carbonitrile-functionalized SWCNTs in ethanol and THF. The thermogravimetric analysis as well as energy-filtered transmission electron microscopy imaging of the products confirmed the successful functionalization of SWCNTs.

  1. Direct functionalization of pristine single-walled carbon nanotubes by diazonium-based method with various five-membered S- or N- heteroaromatic amines

    Science.gov (United States)

    Leinonen, Heli; Lajunen, Marja

    2012-09-01

    Reactivity of five-membered, variously substituted, heteroaromatic diazonium salts was studied toward pristine single-walled carbon nanotubes (SWCNTs), prepared by high-pressure CO conversion (HiPCO) method. Average size range of individual HiPCO SWCNTs was 0.8-1.2 nm (diameter) and 100-1,000 nm (length). Functionalizations were performed by a one-pot diazotization-dediazotization method with methyl-2-aminothiophene-3-carboxylate, 2-aminothiophene-3-carbonitrile, 2-aminoimidazole sulfate, or 3-aminopyrazole in acetic acid using sodium nitrite at room temperature or by heating. According to Raman and Fourier transform infrared spectroscopy, all used heterocyclic diazonium salts formed a covalent bond with SWCNTs and yielded new kinds of five-membered heterocycle-functionalized SWCNTs. Methyl-2-thiophenyl-3-carboxylate-functionalized SWCNTs formed a highly soluble, stable dispersion in tetrahydrofuran (THF), 3-pyrazoyl-functionalized SWCNTs in ethanol, and 2-imidazoyl- or 2-thiophenyl-3-carbonitrile-functionalized SWCNTs in ethanol and THF. The thermogravimetric analysis as well as energy-filtered transmission electron microscopy imaging of the products confirmed the successful functionalization of SWCNTs.

  2. A novel method for one-way hash function construction based on spatiotemporal chaos

    Energy Technology Data Exchange (ETDEWEB)

    Ren Haijun [College of Software Engineering, Chongqing University, Chongqing 400044 (China); State Key Laboratory of Power Transmission Equipment and System Security and New Technology, Chongqing University, Chongqing 400044 (China)], E-mail: jhren@cqu.edu.cn; Wang Yong; Xie Qing [Key Laboratory of Electronic Commerce and Logistics of Chongqing, Chongqing University of Posts and Telecommunications, Chongqing 400065 (China); Yang Huaqian [Department of Computer and Modern Education Technology, Chongqing Education of College, Chongqing 400067 (China)

    2009-11-30

    A novel hash algorithm based on a spatiotemporal chaos is proposed. The original message is first padded with zeros if needed, and then divided into a number of blocks, each containing 32 bytes. In the hashing process, each block is partitioned into eight 32-bit values and input into the spatiotemporal chaotic system; after iterating the system four times, the next block is processed in the same way. To enhance the confusion and diffusion effect, the cipher block chaining (CBC) mode is adopted in the algorithm. The hash value is obtained from the final state value of the spatiotemporal chaotic system. Theoretical analyses and numerical simulations both show that the proposed hash algorithm possesses good statistical properties, strong collision resistance and high efficiency, as required by practical keyed hash functions.
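    The block-by-block structure described above can be sketched as a toy. Everything concrete here is an assumption for illustration: a logistic coupled map lattice stands in for the paper's spatiotemporal system, the initial state and parameters are arbitrary, and additive chaining replaces the paper's exact CBC construction. This toy has none of the analyzed security properties.

```python
def cml_step(state, eps=0.3, mu=3.99):
    """One step of a logistic coupled map lattice (values stay in (0, 1))."""
    f = [mu * s * (1.0 - s) for s in state]
    n = len(state)
    return [(1.0 - eps) * f[i] + 0.5 * eps * (f[(i - 1) % n] + f[(i + 1) % n])
            for i in range(n)]

def toy_hash(message: bytes) -> str:
    if len(message) % 32:
        message += b"\x00" * (32 - len(message) % 32)     # zero padding
    state = [0.23456789 + 0.01 * i for i in range(8)]     # arbitrary initial state
    prev = [0.0] * 8
    for b in range(0, len(message), 32):
        block = message[b:b + 32]
        words = [int.from_bytes(block[i:i + 4], "big") for i in range(0, 32, 4)]
        # chaining: inject the block combined additively with the previous state
        state = [(state[i] + words[i] / 2**32 + prev[i]) % 1.0 for i in range(8)]
        for _ in range(4):                                # iterate the lattice 4x
            state = cml_step(state)
        prev = state
    # derive a 256-bit digest from the final lattice state
    return "".join(f"{int(s * 2**32) & 0xffffffff:08x}" for s in state)

h1 = toy_hash(b"hello world")
h2 = toy_hash(b"hello worle")   # one-byte change
```

Because the lattice is chaotic, a one-byte change in the message perturbs the final state and hence the digest; the real algorithm's design work lies in making that sensitivity uniform and collision-resistant.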

  3. A novel method for one-way hash function construction based on spatiotemporal chaos

    International Nuclear Information System (INIS)

    Ren Haijun; Wang Yong; Xie Qing; Yang Huaqian

    2009-01-01

    A novel hash algorithm based on a spatiotemporal chaos is proposed. The original message is first padded with zeros if needed, and then divided into a number of blocks, each containing 32 bytes. In the hashing process, each block is partitioned into eight 32-bit values and input into the spatiotemporal chaotic system; after iterating the system four times, the next block is processed in the same way. To enhance the confusion and diffusion effect, the cipher block chaining (CBC) mode is adopted in the algorithm. The hash value is obtained from the final state value of the spatiotemporal chaotic system. Theoretical analyses and numerical simulations both show that the proposed hash algorithm possesses good statistical properties, strong collision resistance and high efficiency, as required by practical keyed hash functions.

  4. Advancing density functional theory to finite temperatures: methods and applications in steel design.

    Science.gov (United States)

    Hickel, T; Grabowski, B; Körmann, F; Neugebauer, J

    2012-02-08

    The performance of materials such as steels, their high strength and formability, is based on an impressive variety of competing mechanisms on the microscopic/atomic scale (e.g. dislocation gliding, solid solution hardening, mechanical twinning or structural phase transformations). Whereas many of the currently available concepts to describe these mechanisms are based on empirical and experimental data, it becomes increasingly apparent that further improvement of materials needs to be grounded on a more fundamental level. Recent progress in methods based on density functional theory (DFT) now makes the exploration of chemical trends, the determination of parameters for phenomenological models and the identification of new routes for the optimization of steel properties feasible. A major challenge in applying these methods to true materials design is, however, the inclusion of temperature-driven effects on the desired properties. Therefore, a large range of computational tools has been developed to improve the capability and accuracy of first-principles methods in determining free energies. These combine electronic, vibrational and magnetic effects as well as structural defects in an integrated approach. Based on these simulation tools, one is now able to successfully predict mechanical and thermodynamic properties of metals with hitherto unachievable accuracy.

  5. The method of images and Green's function for spherical domains

    International Nuclear Information System (INIS)

    Gutkin, Eugene; Newton, Paul K

    2004-01-01

    Motivated by problems in electrostatics and vortex dynamics, we develop two general methods for constructing Green's function for simply connected domains on the surface of the unit sphere. We prove a Riemann mapping theorem showing that such domains can be conformally mapped to the upper hemisphere. We then categorize all domains on the sphere for which Green's function can be constructed by an extension of the classical method of images. We illustrate our methods by several examples, such as the upper hemisphere, geodesic triangles, and latitudinal rectangles. We describe the point vortex motion in these domains, which is governed by a Hamiltonian determined by the Dirichlet Green's function
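    For orientation, the classical planar prototype of the image construction, which the paper extends to spherical domains, can be stated compactly. The formula below is the textbook Dirichlet Green's function of the upper half-plane, included for illustration; it is not taken from the paper.

```latex
% Upper half-plane, Dirichlet boundary condition on the real axis:
% place an image source of opposite sign at the reflected point \bar{z}_0.
G(z, z_0) \;=\; -\frac{1}{2\pi}\Bigl(\ln\lvert z - z_0\rvert \;-\; \ln\lvert z - \bar{z}_0\rvert\Bigr),
\qquad \operatorname{Im} z > 0,\ \operatorname{Im} z_0 > 0 .
```

    On the boundary, |z - z_0| = |z - \bar{z}_0|, so G vanishes there while retaining the logarithmic singularity at the source. The spherical construction in the paper plays the analogous game after conformally mapping the domain onto the upper hemisphere.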

  6. Assessment and in vitro experiment of artificial anal sphincter system based on rebuilding the rectal sensation function.

    Science.gov (United States)

    Zan, Peng; Liu, Jinding; Jiang, Enyu; Wang, Hua

    2014-05-01

    In this paper, a novel artificial anal sphincter (AAS) system based on rebuilding the rectal sensation function is proposed to treat human fecal incontinence. The executive mechanism of the traditional AAS system was redesigned and integrated for a simpler structure and better durability. The novel executive mechanism uses a sandwich structure to simulate the basic function of the natural human anal sphincter. To rebuild the rectal sensation function lost to fecal incontinence, we propose a novel method based on an Optimal Wavelet Packet Basis (OWPB) with the Davies-Bouldin (DB) index and a support vector machine (SVM): the OWPB selected using the DB index is used for feature vector extraction, while an SVM is adopted for pattern recognition. Furthermore, an in vitro experiment with the AAS system based on rectal sensation function rebuilding was carried out. Experimental results indicate that the novel executive mechanism can simulate the basic function of the natural human anal sphincter, and that the proposed method is quite effective for rebuilding rectal sensation in patients.
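    The shape of such a sensation-rebuilding pipeline (signal, feature extraction, classifier) can be sketched with stand-ins: simple spectral band energies replace the paper's OWPB features, and a nearest-centroid rule replaces the SVM, purely to keep the sketch self-contained. The synthetic "pressure" signals and all parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def band_energies(sig, nbands=4):
    """Normalized spectral energy per band: a crude stand-in for wavelet-packet features."""
    spec = np.abs(np.fft.rfft(sig)) ** 2
    e = np.array([b.sum() for b in np.array_split(spec, nbands)])
    return e / e.sum()

def make_signal(kind):
    t = np.arange(256)
    if kind == "full":                        # "rectum full": slow pressure swell
        return np.sin(2 * np.pi * t / 128) + 0.1 * rng.standard_normal(256)
    return 0.1 * rng.standard_normal(256)     # "empty": noise only

# Class centroids learned from a small synthetic training set.
centroids = {k: np.mean([band_energies(make_signal(k)) for _ in range(20)], axis=0)
             for k in ("full", "empty")}

def classify(sig):
    f = band_energies(sig)
    return min(centroids, key=lambda k: np.linalg.norm(f - centroids[k]))

pred = classify(make_signal("full"))
```

Swapping in a trained SVM and properly selected wavelet-packet features changes only the two stand-in components, not the pipeline structure.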

  7. A novel flow-based parameter of collateral function assessed by intracoronary thermodilution.

    Science.gov (United States)

    Lindner, Markus; Felix, Stephan B; Empen, Klaus; Reffelmann, Thorsten

    2014-04-01

    Currently, many methods for quantitation of coronary collateral function are based on intracoronary pressure measurements distal of an occluded balloon, which do not fully account for the dynamic nature of collateral flow. Therefore, a flow-based parameter of coronary collateral function based upon principles of thermodilution was evaluated. In 26 patients with a high-grade coronary artery stenosis, intracoronary hemodynamics were analyzed by the RadiAnalyzer system (St Jude Medical), including fractional flow reserve (FFR), index of microcirculatory resistance (IMR), and the pressure-based collateral flow index (CFI) during balloon occlusion and hyperemia (intravenous adenosine). Moreover, immediately after an intracoronary bolus of room-temperature saline, the balloon was occluded and the intracoronary temperature distal to the balloon was analyzed over time. The slope of the temperature-time curve was calculated after logarithmic transformation as an index of collateral blood flow (CBFI). The coefficient of variation between two measurements of CBFI amounted to 11 ± 2%. In patients with CFI ≥0.25, CBFI amounted to 0.55 ± 0.09, whereas it was lower in those with smaller CFI. CBFI thus provides a flow-based index of collateral function, and should be evaluated in further studies.

  8. Green's function matching method for adjoining regions having different masses

    International Nuclear Information System (INIS)

    Morgenstern Horing, Norman J

    2006-01-01

    We present a primer on the method of Green's function matching for the determination of the global Schroedinger Green's function for all space subject to joining conditions at an interface between two (or more) separate parts of the region having different masses. The object of this technique is to determine the full space Schroedinger Green's function in terms of the individual Green's functions of the constituent parts taken as if they were themselves extended to all space. This analytical method has had successful applications in the theory of surface states, and remains of interest for nanostructures

  9. COMPANY VALUATION METHODS BASED ON PATRIMONY

    Directory of Open Access Journals (Sweden)

    SUCIU GHEORGHE

    2013-02-01

    Full Text Available The methods used for the company valuation can be divided into 3 main groups: methods based on patrimony, methods based on financial performance, methods based both on patrimony and on performance. The company valuation methods based on patrimony are implemented taking into account the balance sheet or the financial statement. The financial statement refers to that type of balance in which the assets are arranged according to liquidity, and the liabilities according to their financial maturity date. The patrimonial methods are based on the principle that the value of the company equals that of the patrimony it owns. From a legal point of view, the patrimony refers to all the rights and obligations of a company. The valuation of companies based on their financial performance can be done in 3 ways: the return value, the yield value, the present value of the cash flows. The mixed methods depend both on patrimony and on financial performance or can make use of other methods.

  10. Anisotropy model for modern grain oriented electrical steel based on orientation distribution function

    Directory of Open Access Journals (Sweden)

    Fan Jiang

    2018-05-01

    Full Text Available Accurately modeling the anisotropic behavior of electrical steel is mandatory in order to perform reliable end simulations. Several approaches can be found in the literature for that purpose, but more often than not those methods are not able to deal with grain-oriented electrical steel. In this paper, a method based on the orientation distribution function is applied to modern grain-oriented laminations. In particular, two solutions are proposed in order to increase the accuracy of the results. The first consists in increasing the decomposition number of the cosine series on which the method is based. The second consists in modifying the determination method of the terms belonging to this cosine series.
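    The effect of the first solution (a larger decomposition number) can be illustrated with a least-squares fit of a truncated cosine series to an angle-dependent property. The synthetic "measured" curve and the series form Σ c_k cos(2kθ) below are illustrative assumptions, not the paper's actual model or data.

```python
import numpy as np

# Synthetic angle-dependent property (e.g. specific loss vs field angle).
theta = np.linspace(0.0, np.pi / 2, 91)
W = 1.0 + 0.6 * np.cos(2 * theta) - 0.3 * np.cos(4 * theta) + 0.15 * np.cos(6 * theta)

def fit_cosine_series(theta, W, N):
    """Least-squares fit of sum_{k=0}^{N} c_k cos(2 k theta); return the reconstruction."""
    A = np.column_stack([np.cos(2 * k * theta) for k in range(N + 1)])
    c, *_ = np.linalg.lstsq(A, W, rcond=None)
    return A @ c

err2 = np.abs(fit_cosine_series(theta, W, 2) - W).max()   # series truncated too early
err3 = np.abs(fit_cosine_series(theta, W, 3) - W).max()   # one more term
```

With N = 2 the cos(6θ) content of the curve cannot be represented and the error saturates; raising the decomposition number to N = 3 drives the error to numerical noise.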

  11. Improving protein function prediction methods with integrated literature data

    Directory of Open Access Journals (Sweden)

    Gabow Aaron P

    2008-04-01

    Full Text Available Abstract Background Determining the function of uncharacterized proteins is a major challenge in the post-genomic era due to the problem's complexity and scale. Identifying a protein's function contributes to an understanding of its role in the involved pathways, its suitability as a drug target, and its potential for protein modifications. Several graph-theoretic approaches predict unidentified functions of proteins by using the functional annotations of better-characterized proteins in protein-protein interaction networks. We systematically consider the use of literature co-occurrence data, introduce a new method for quantifying the reliability of co-occurrence and test how performance differs across species. We also quantify changes in performance as the prediction algorithms annotate with increased specificity. Results We find that including information on the co-occurrence of proteins within an abstract greatly boosts performance in the Functional Flow graph-theoretic function prediction algorithm in yeast, fly and worm. This increase in performance is not simply due to the presence of additional edges since supplementing protein-protein interactions with co-occurrence data outperforms supplementing with a comparably-sized genetic interaction dataset. Through the combination of protein-protein interactions and co-occurrence data, the neighborhood around unknown proteins is quickly connected to well-characterized nodes which global prediction algorithms can exploit. Our method for quantifying co-occurrence reliability shows superior performance to the other methods, particularly at threshold values around 10% which yield the best trade off between coverage and accuracy. In contrast, the traditional way of asserting co-occurrence when at least one abstract mentions both proteins proves to be the worst method for generating co-occurrence data, introducing too many false positives. Annotating the functions with greater specificity is harder

  12. The exact solutions and approximate analytic solutions of the (2 + 1)-dimensional KP equation based on symmetry method.

    Science.gov (United States)

    Gai, Litao; Bilige, Sudao; Jie, Yingmo

    2016-01-01

    In this paper, we obtained the exact solutions and the approximate analytic solutions of the (2 + 1)-dimensional KP equation based on the Lie symmetry, the extended tanh method and the homotopy perturbation method. In the first part, we obtained the symmetries of the (2 + 1)-dimensional KP equation based on the Wu-differential characteristic set algorithm and reduced the equation. In the second part, we constructed abundant exact travelling wave solutions by using the extended tanh method. These solutions are expressed by hyperbolic functions, trigonometric functions and rational functions, respectively. It should be noted that when the parameters are taken as special values, some solitary wave solutions are derived from the hyperbolic function solutions. Finally, we apply the homotopy perturbation method to obtain the approximate analytic solutions based on four kinds of initial conditions.
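    For context, the generic ansatz of the extended tanh method can be written down (the paper's specific solutions are not reproduced here): after a travelling wave reduction u(x, y, t) = U(ξ) with, say, ξ = x + y - ct, one seeks U as a finite series in Y = tanh(μξ), whose derivative closes on polynomials in Y.

```latex
U(\xi) \;=\; a_0 \;+\; \sum_{i=1}^{M}\Bigl(a_i Y^{i} + b_i Y^{-i}\Bigr),
\qquad Y = \tanh(\mu\xi),
\qquad \frac{dY}{d\xi} = \mu\,(1 - Y^{2}) .
```

    Balancing the highest-order derivative against the strongest nonlinearity fixes M; substituting the ansatz then turns the reduced ODE into a system of algebraic equations for the a_i, b_i, μ and c, whose solution branches yield the hyperbolic, trigonometric and rational solution families mentioned above.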

  13. Analysis of elastic-plastic problems using edge-based smoothed finite element method

    International Nuclear Information System (INIS)

    Cui, X.Y.; Liu, G.R.; Li, G.Y.; Zhang, G.Y.; Sun, G.Y.

    2009-01-01

    In this paper, an edge-based smoothed finite element method (ES-FEM) is formulated for stress field determination of elastic-plastic problems using triangular meshes, in which smoothing domains associated with the edges of the triangles are used for smoothing operations to improve the accuracy and the convergence rate of the method. The smoothed Galerkin weak form is adopted to obtain the discretized system equations, and the numerical integration becomes a simple summation over the edge-based smoothing domains. The pseudo-elastic method is employed for the determination of the stress field, and Hencky's total deformation theory is used to define effective elastic material parameters, which are treated as field variables and considered as functions of the final state of the stress fields. The effective elastic material parameters are then obtained in an iterative manner, based on the strain-controlled projection method, from the uniaxial material curve. Some numerical examples are investigated and excellent results have been obtained, demonstrating the effectiveness of the present method.
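    The pseudo-elastic iteration described above has a simple scalar skeleton: solve an elastic problem with current effective (secant) moduli, update those moduli from the uniaxial curve, and repeat until the stress state is consistent. The two-parallel-bar model, tanh-shaped material curves and numbers below are made up for illustration; the paper applies the idea field-wise inside ES-FEM.

```python
import math

E0 = 200e3                                   # initial Young's modulus (MPa), assumed

def sigma(eps, sy):
    """Hypothetical uniaxial curve: initial slope E0, saturating at stress sy."""
    return sy * math.tanh(E0 * eps / sy)

F, A1, A2 = 600.0, 1.0, 2.0                  # shared load and bar cross-sections
E1 = E2 = E0                                 # start from the elastic moduli
for _ in range(200):
    eps = F / (A1 * E1 + A2 * E2)            # "elastic" solve: common strain of bars
    E1 = sigma(eps, 400.0) / eps             # secant-modulus update from curve, bar 1
    E2 = sigma(eps, 250.0) / eps             # secant-modulus update from curve, bar 2

# Consistency check: the converged strain reproduces the applied load.
residual = A1 * sigma(eps, 400.0) + A2 * sigma(eps, 250.0) - F
```

The iteration converges because the secant moduli only soften as strain grows, and the converged strain exceeds the purely elastic estimate F / (E0 (A1 + A2)), reflecting the plastic part of the deformation.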

  14. Development of redesign method of production system based on QFD

    Science.gov (United States)

    Kondoh, Shinsuke; Umeda, Yasusi; Togawa, Hisashi

    In order to keep up with a rapidly changing market environment, rapid and flexible redesign of production systems is quite important, and a redesign support system is therefore strongly needed. To this end, this paper proposes a redesign method for production systems based on Quality Function Deployment (QFD). This method represents a designer's intention in the form of QFD, collects experts' knowledge as “Production Method (PM) modules,” and formulates redesign guidelines as seven redesign operations so as to support a designer in finding improvement ideas in a systematic manner. This paper also illustrates a redesign support tool we have developed based on this method, and demonstrates its feasibility with a practical example of a production system for a contact probe. The results of this example show that a novice designer can achieve cost reductions comparable to those of veteran designers. From this result, we conclude that our redesign method is effective and feasible for supporting the redesign of a production system.

  15. Developing A Method of Learning English Speaking Skills Based on the Language Functions Used in the Food and Beverage Service

    Directory of Open Access Journals (Sweden)

    Denok Lestari

    2017-01-01

    Full Text Available This research aims to analyse language functions in English, specifically those used in the context of Food and Beverage Service. The findings of the analysis related to the language functions are then applied in a teaching method designed to improve the students’ abilities in speaking English. There are two novelties in this research. The first is the theory of language functions, which is reconstructed in accordance with the Food and Beverage Service context. Those language functions are: permissive (to soften utterances, to avoid repetition, and to adjust intonation); interactive (to greet, to make small talk, and to say farewell); informative (to introduce, to show, to state, to explain, to ask, to agree, to reject, and to confirm); persuasive (to offer, to promise, to suggest, and to persuade); directive (to tell, to order, and to request); indicative (to praise, to complain, to thank, and to apologize). The second, more practical, novelty is the design of the ASRI method, which consists of four basic components, namely: Aims (the purpose in communicating); Sequence (the operational procedure in handling guests in the restaurant); Role play (the simulation activities in language learning); and Interaction (the interactive communication between participants). The ASRI method, with the application of the language functions in its ABCD procedure, namely Acquire, Brainstorm, Chance and Develop, is proven effective in improving the students’ abilities in speaking English, specifically in the context of Food and Beverage Service.

  16. Vibrational Spectroscopic Studies of Tenofovir Using Density Functional Theory Method

    Directory of Open Access Journals (Sweden)

    G. R. Ramkumaar

    2013-01-01

    Full Text Available A systematic vibrational spectroscopic assignment and analysis of tenofovir has been carried out by using FTIR and FT-Raman spectral data. The vibrational analysis was aided by electronic structure calculations with hybrid density functional methods (B3LYP/6-311++G(d,p), B3LYP/6-31G(d,p), and B3PW91/6-31G(d,p)). Molecular equilibrium geometries, electronic energies, IR intensities, and harmonic vibrational frequencies have been computed. The assignments proposed based on the experimental IR and Raman spectra have been reviewed, and a complete assignment of the observed spectra has been proposed. The UV-visible spectrum of the compound was also recorded, and electronic properties such as the HOMO and LUMO energies were determined by the time-dependent DFT (TD-DFT) method. The geometrical and thermodynamical parameters and the absorption wavelengths were compared with the experimental data. NMR calculations based on B3LYP/6-311++G(d,p), B3LYP/6-31G(d,p), and B3PW91/6-31G(d,p) were also performed and used to assign the 13C and 1H NMR chemical shifts of tenofovir.

  17. Method of vacuum correlation functions: Results and prospects

    International Nuclear Information System (INIS)

    Badalian, A. M.; Simonov, Yu. A.; Shevchenko, V. I.

    2006-01-01

    Basic results obtained within the QCD method of vacuum correlation functions over the past 20 years, in the context of investigations into strong-interaction physics at the Institute of Theoretical and Experimental Physics (ITEP, Moscow), are formulated. Emphasis is placed primarily on the prospects of the general theory developed within QCD by employing both nonperturbative and perturbative methods. On the basis of ab initio arguments, it is shown that the lowest two field correlation functions play a dominant role in QCD dynamics. A quantitative theory of confinement and deconfinement, as well as of the spectra of light and heavy quarkonia, glueballs, and hybrids, is given in terms of these two correlation functions. Perturbation theory in a nonperturbative vacuum (background perturbation theory) plays a significant role, as it does not possess the drawbacks of conventional perturbation theory and leads to the infrared freezing of the coupling constant α_s.

  18. Fast methods for spatially correlated multilevel functional data

    KAUST Repository

    Staicu, A.-M.

    2010-01-19

    We propose a new methodological framework for the analysis of hierarchical functional data when the functions at the lowest level of the hierarchy are correlated. For small data sets, our methodology leads to a computational algorithm that is orders of magnitude more efficient than its closest competitor (seconds versus hours). For large data sets, our algorithm remains fast and has no current competitors. Thus, in contrast to published methods, we can now conduct routine simulations, leave-one-out analyses, and nonparametric bootstrap sampling. Our methods are inspired by and applied to data obtained from a state-of-the-art colon carcinogenesis scientific experiment. However, our models are general and will be relevant to many new data sets where the objects of inference are functions or images that remain dependent even after conditioning on the subject on which they are measured. Supplementary materials are available at Biostatistics online.

  19. METHOD OF GREEN FUNCTIONS IN MATHEMATICAL MODELLING FOR TWO-POINT BOUNDARY-VALUE PROBLEMS

    Directory of Open Access Journals (Sweden)

    E. V. Dikareva

    2015-01-01

    Full Text Available Summary. In many applied problems of control, optimization, system theory, theoretical and construction mechanics, in problems with string and node structures, oscillation theory, the theory of elasticity and plasticity, and mechanical problems connected with fracture dynamics and shock waves, the main instrument of study is the theory of high-order ordinary differential equations. This methodology is also applied to studying mathematical models in graph theory with different partitionings based on differential equations. Such equations are used not only for the theoretical foundation of mathematical models but also for constructing numerical methods and computer algorithms. These models are studied with the Green function method. The paper first presents the necessary theoretical background on the Green function method for multi-point boundary-value problems. The main equation is discussed, and the notions of multi-point boundary conditions, boundary functionals, degenerate and non-degenerate problems, and the fundamental matrix of solutions are introduced. In the main part, the problem under study is formulated in terms of shocks and deformations in the boundary conditions. After that, the main results are formulated. Theorem 1 proves conditions for the existence and uniqueness of solutions. Theorem 2 proves conditions for strict positivity and equal measureness for a pair of solutions. Theorem 3 proves the existence of, and estimates for, the least eigenvalue, as well as spectral properties and the positivity of eigenfunctions. Theorem 4 proves the weighted positivity of the Green function. Some possible applications are considered for signal theory and transmutation operators.
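The multi-point setting of the paper is beyond a short sketch, but the basic Green-function machinery can be illustrated on the simplest two-point problem. The model problem below is my own hedged example, not the paper's: for −u″(x) = f(x) with u(0) = u(1) = 0, the Green function is G(x, s) = s(1−x) for s ≤ x and x(1−s) otherwise, and u(x) = ∫₀¹ G(x, s) f(s) ds.

```python
# Simplest two-point BVP: -u''(x) = f(x), u(0) = u(1) = 0.
# Its Green function is G(x, s) = s*(1 - x) for s <= x and x*(1 - s) otherwise,
# and the solution is u(x) = integral_0^1 G(x, s) f(s) ds.

def green(x, s):
    return s * (1.0 - x) if s <= x else x * (1.0 - s)

def solve_bvp(f, x, n=2000):
    """Evaluate u(x) by trapezoidal quadrature of G(x, s) f(s)."""
    h = 1.0 / n
    total = 0.0
    for i in range(n + 1):
        s = i * h
        w = 0.5 if i in (0, n) else 1.0
        total += w * green(x, s) * f(s)
    return total * h

# For f = 1 the exact solution is u(x) = x*(1 - x)/2.
u = solve_bvp(lambda s: 1.0, 0.3)
print(abs(u - 0.105) < 1e-6)  # True
```

Because the integrand is piecewise linear with its kink landing on a quadrature node, the trapezoidal rule reproduces the exact solution here up to rounding.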

  20. A shape-based quality evaluation and reconstruction method for electrical impedance tomography.

    Science.gov (United States)

    Antink, Christoph Hoog; Pikkemaat, Robert; Malmivuo, Jaakko; Leonhardt, Steffen

    2015-06-01

    Linear methods of reconstruction play an important role in medical electrical impedance tomography (EIT) and there is a wide variety of algorithms based on several assumptions. With the Graz consensus reconstruction algorithm for EIT (GREIT), a novel linear reconstruction algorithm as well as a standardized framework for evaluating and comparing methods of reconstruction were introduced that found widespread acceptance in the community. In this paper, we propose a two-sided extension of this concept by first introducing a novel method of evaluation. Instead of being based on point-shaped resistivity distributions, we use 2759 pairs of real lung shapes for evaluation that were automatically segmented from human CT data. Necessarily, the figures of merit defined in GREIT were adjusted. Second, a linear method of reconstruction that uses orthonormal eigenimages as training data and a tunable desired point spread function are proposed. Using our novel method of evaluation, this approach is compared to the classical point-shaped approach. Results show that most figures of merit improve with the use of eigenimages as training data. Moreover, the possibility of tuning the reconstruction by modifying the desired point spread function is shown. Finally, the reconstruction of real EIT data shows that higher contrasts and fewer artifacts can be achieved in ventilation- and perfusion-related images.

  1. A shape-based quality evaluation and reconstruction method for electrical impedance tomography

    International Nuclear Information System (INIS)

    Antink, Christoph Hoog; Pikkemaat, Robert; Leonhardt, Steffen; Malmivuo, Jaakko

    2015-01-01

    Linear methods of reconstruction play an important role in medical electrical impedance tomography (EIT) and there is a wide variety of algorithms based on several assumptions. With the Graz consensus reconstruction algorithm for EIT (GREIT), a novel linear reconstruction algorithm as well as a standardized framework for evaluating and comparing methods of reconstruction were introduced that found widespread acceptance in the community. In this paper, we propose a two-sided extension of this concept by first introducing a novel method of evaluation. Instead of being based on point-shaped resistivity distributions, we use 2759 pairs of real lung shapes for evaluation that were automatically segmented from human CT data. Necessarily, the figures of merit defined in GREIT were adjusted. Second, a linear method of reconstruction that uses orthonormal eigenimages as training data and a tunable desired point spread function are proposed. Using our novel method of evaluation, this approach is compared to the classical point-shaped approach. Results show that most figures of merit improve with the use of eigenimages as training data. Moreover, the possibility of tuning the reconstruction by modifying the desired point spread function is shown. Finally, the reconstruction of real EIT data shows that higher contrasts and fewer artifacts can be achieved in ventilation- and perfusion-related images. (paper)

  2. The effect of image enhancement on the statistical analysis of functional neuroimages : Wavelet-based denoising and Gaussian smoothing

    NARCIS (Netherlands)

    Wink, AM; Roerdink, JBTM; Sonka, M; Fitzpatrick, JM

    2003-01-01

    The quality of statistical analyses of functional neuroimages is studied after applying various preprocessing methods. We present wavelet-based denoising as an alternative to Gaussian smoothing, the standard denoising method in statistical parametric mapping (SPM). The wavelet-based denoising

  3. Improved WKB radial wave functions in several bases

    International Nuclear Information System (INIS)

    Durand, B.; Durand, L. (Department of Physics, University of Wisconsin, Madison, Wisconsin 53706)

    1986-01-01

    We develop approximate WKB-like solutions to the radial Schroedinger equation for problems with an angular momentum barrier using Riccati-Bessel, Coulomb, and harmonic-oscillator functions as basis functions. The solutions treat the angular momentum singularity near the origin more accurately in leading approximation than the standard WKB solutions based on sine waves. The solutions based on Riccati-Bessel and free Coulomb wave functions continue smoothly through the inner turning point and are appropriate for scattering problems. The solutions based on oscillator and bound Coulomb wave functions incorporate both turning points smoothly and are particularly appropriate for bound-state problems; no matching of piecewise solutions using Airy functions is necessary

  4. Heuristic method for searching global maximum of multimodal unknown function

    Energy Technology Data Exchange (ETDEWEB)

    Kamei, K; Araki, Y; Inoue, K

    1983-06-01

    The method is composed of three kinds of searches. They are called g (grasping)-mode search, f (finding)-mode search and c (confirming)-mode search. In the g-mode search and the c-mode search, a heuristic method is used which was extracted from search behaviors of human subjects. In f-mode search, the simplex method is used which is well known as a search method for unimodal unknown function. Each mode search and its transitions are shown in the form of flowchart. The numerical results for one-dimensional through six-dimensional multimodal functions prove the proposed search method to be an effective one. 11 references.
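The three-mode structure can be sketched in a few lines. This is only an illustration of the g/f/c-mode idea: the paper's actual heuristic rules are not reproduced, a simple shrinking pattern search stands in for the simplex method of the f-mode, and the test function is hypothetical.

```python
import math
import random

# Sketch of the three-mode search idea (assumed structure, not the paper's rules).

def g_mode(f, lo, hi, n, rng):
    """Grasping: coarse random sampling of the whole interval."""
    return max((lo + (hi - lo) * rng.random() for _ in range(n)), key=f)

def f_mode(f, x, step, iters):
    """Finding: local refinement by a shrinking pattern search
    (stand-in for the simplex method used in the paper)."""
    for _ in range(iters):
        best = max((x - step, x, x + step), key=f)
        if best == x:
            step *= 0.5
        x = best
    return x

def c_mode(f, x, radius, n, rng):
    """Confirming: re-sample around the candidate maximum."""
    return max([x] + [x + radius * (2 * rng.random() - 1) for _ in range(n)], key=f)

def search(f, lo, hi, seed=0):
    rng = random.Random(seed)
    x = g_mode(f, lo, hi, 200, rng)
    x = f_mode(f, x, (hi - lo) / 20, 60)
    return c_mode(f, x, 1e-3, 50, rng)

# Hypothetical bimodal function with its global maximum at x = 2
f = lambda x: math.exp(-(x - 2) ** 2) + 0.5 * math.exp(-(x + 2) ** 2)
best = search(f, -5.0, 5.0)
print(best)
```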

  5. ProLanGO: Protein Function Prediction Using Neural Machine Translation Based on a Recurrent Neural Network.

    Science.gov (United States)

    Cao, Renzhi; Freitas, Colton; Chan, Leong; Sun, Miao; Jiang, Haiqing; Chen, Zhangxin

    2017-10-17

    With the development of next-generation sequencing techniques, it is fast and cheap to determine protein sequences but relatively slow and expensive to extract useful information from them because of the limitations of traditional biological experimental techniques. Protein function prediction has been a long-standing challenge to fill the gap between the huge number of protein sequences and their known functions. In this paper, we propose a novel method that converts the protein function prediction problem into a language translation problem from the newly proposed protein sequence language "ProLan" to the protein function language "GOLan", and we build a neural machine translation model based on recurrent neural networks to translate the "ProLan" language into the "GOLan" language. We blindly tested our method by taking part in the latest (third) Critical Assessment of Function Annotation (CAFA 3) in 2016, and we also evaluated the performance of our method on selected proteins whose functions were released after the CAFA competition. The good performance on the training and testing datasets demonstrates that our newly proposed method is a promising direction for protein function prediction. In summary, we propose for the first time a method that converts the protein function prediction problem into a language translation problem and applies a neural machine translation model to protein function prediction.

  6. Micro-seismic imaging using a source function independent full waveform inversion method

    Science.gov (United States)

    Wang, Hanchen; Alkhalifah, Tariq

    2018-03-01

    At the heart of micro-seismic event measurements is the task of estimating the locations of micro-seismic event sources, as well as their ignition times. The accuracy of locating the sources is highly dependent on the velocity model. On the other hand, conventional micro-seismic source locating methods require, in many cases, manual picking of traveltime arrivals, which not only involves manual effort and human interaction, but is also prone to errors. Using full waveform inversion (FWI) to locate and image micro-seismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, full waveform inversion of micro-seismic events faces considerable nonlinearity due to the unknown source locations (space) and functions (time). We developed a source function independent full waveform inversion of micro-seismic events to invert for the source image, source function and velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradient for the source image, source function and velocity updates. The extended image for the source wavelet along the Z axis is extracted to check the accuracy of the inverted source image and velocity model. Also, angle gathers are calculated to assess the quality of the long-wavelength component of the velocity model. By inverting for the source image, source wavelet and velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and background velocity for the synthetic examples used here, such as those corresponding to the Marmousi model and the SEG/EAGE overthrust model.
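The convolution trick underlying the wavelet independence can be checked directly: if observed and modeled traces share the same impulse responses but differ in the (unknown) source wavelet, then observed_i ∗ modeled_ref equals modeled_i ∗ observed_ref (∗ denoting convolution), so a misfit built on that difference does not depend on either wavelet. A minimal sketch with hypothetical sequences:

```python
# Both sides below equal g_i * g_ref * s_true * s_trial by commutativity and
# associativity of convolution, so the comparison is wavelet-independent.

def conv(a, b):
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

g_ref = [1.0, 0.0, -0.5, 0.2]    # impulse response, reference trace (hypothetical)
g_i = [0.0, 0.8, 0.3, -0.1]      # impulse response, another trace (hypothetical)

s_true = [1.0, -2.0, 1.0]        # unknown true source wavelet
s_trial = [0.5, 0.5]             # wrong wavelet used for modeling

d_ref, d_i = conv(g_ref, s_true), conv(g_i, s_true)      # "observed"
p_ref, p_i = conv(g_ref, s_trial), conv(g_i, s_trial)    # "modeled"

lhs = conv(d_i, p_ref)   # observed_i * modeled_ref
rhs = conv(p_i, d_ref)   # modeled_i * observed_ref
print(max(abs(x - y) for x, y in zip(lhs, rhs)) < 1e-12)  # True
```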

  7. Micro-seismic imaging using a source function independent full waveform inversion method

    KAUST Repository

    Wang, Hanchen

    2018-03-26

    At the heart of micro-seismic event measurements is the task of estimating the locations of micro-seismic event sources, as well as their ignition times. The accuracy of locating the sources is highly dependent on the velocity model. On the other hand, conventional micro-seismic source locating methods require, in many cases, manual picking of traveltime arrivals, which not only involves manual effort and human interaction, but is also prone to errors. Using full waveform inversion (FWI) to locate and image micro-seismic events allows for an automatic process (free of picking) that utilizes the full wavefield. However, full waveform inversion of micro-seismic events faces considerable nonlinearity due to the unknown source locations (space) and functions (time). We developed a source function independent full waveform inversion of micro-seismic events to invert for the source image, source function and velocity model. It is based on convolving reference traces with the observed and modeled data to mitigate the effect of an unknown source ignition time. The adjoint-state method is used to derive the gradient for the source image, source function and velocity updates. The extended image for the source wavelet along the Z axis is extracted to check the accuracy of the inverted source image and velocity model. Also, angle gathers are calculated to assess the quality of the long-wavelength component of the velocity model. By inverting for the source image, source wavelet and velocity model simultaneously, the proposed method produces good estimates of the source location, ignition time and background velocity for the synthetic examples used here, such as those corresponding to the Marmousi model and the SEG/EAGE overthrust model.

  8. A fuzzy method for improving the functionality of search engines based on user's web interactions

    Directory of Open Access Journals (Sweden)

    Farzaneh Kabirbeyk

    2015-04-01

    Full Text Available Web mining has been widely used to discover knowledge from various sources on the web. One of the important tools in web mining is the mining of web users' behavior, which is considered a way to discover the potential knowledge in web users' interactions. Nowadays, website personalization is a popular phenomenon among web users, and it plays an important role in facilitating user access and providing information based on the users' own interests. Extracting important features of web user behavior plays a significant role in web usage mining; such features include the page visit frequency in each session, the visit duration, and the dates of visits to certain pages. This paper presents a method to predict users' interests and to propose a list of pages based on those interests, by identifying user behavior with a fuzzy clustering method. Because a user has different interests and may pursue one or more of them at a time, the user's interests may belong to several clusters, and fuzzy clustering allows for this overlap. The resulting clusters are used to extract fuzzy rules, which help detect the user's movement patterns; using a neural network, a list of suggested pages is then provided to the user.
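The overlapping-membership idea can be sketched with a minimal fuzzy c-means on one-dimensional session features. This is a generic textbook fuzzy c-means, not the paper's pipeline, and the feature values are hypothetical:

```python
def fuzzy_cmeans_1d(xs, c=2, m=2.0, iters=100):
    """Minimal 1-D fuzzy c-means: membership degrees may overlap across clusters."""
    centers = [min(xs), max(xs)][:c]          # simple deterministic init
    U = []
    for _ in range(iters):
        U = []
        for x in xs:
            row = []
            for ci in centers:
                di = abs(x - ci) or 1e-12     # guard against zero distance
                row.append(1.0 / sum((di / (abs(x - cj) or 1e-12)) ** (2.0 / (m - 1.0))
                                     for cj in centers))
            U.append(row)
        # centers are membership-weighted means
        centers = [sum(U[k][i] ** m * xs[k] for k in range(len(xs))) /
                   sum(U[k][i] ** m for k in range(len(xs)))
                   for i in range(c)]
    return centers, U

# Hypothetical "session features" (e.g. visit counts) forming two interest groups
xs = [0.9, 1.0, 1.1, 9.8, 10.0, 10.2]
centers, U = fuzzy_cmeans_1d(xs)
print(sorted(centers))
```

Each row of `U` sums to one, so a user whose features sit between clusters receives partial membership in both, which is exactly the overlap the abstract relies on.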

  9. On some methods of NPP functional diagnostics

    International Nuclear Information System (INIS)

    Babkin, N.A.

    1988-01-01

    Methods for NPP functional diagnosis are suggested in which the space and time dependences of the changes in anomalous deviations of controlled variables are used as characteristic features. The methods are oriented toward the prompt recognition of suddenly appearing defects and cover quite a wide range of possible anomalous effects in the object under diagnosis. The analysis of the dynamic properties of transients caused by a failure is performed according to rules that do not depend on the character of the anomalous situation's development.

  10. Model of a tunneling current in a p-n junction based on armchair graphene nanoribbons - an Airy function approach and a transfer matrix method

    International Nuclear Information System (INIS)

    Suhendi, Endi; Syariati, Rifki; Noor, Fatimah A.; Khairurrijal; Kurniasih, Neny

    2014-01-01

    We modeled the tunneling current in a p-n junction based on armchair graphene nanoribbons (AGNRs) by using an Airy function approach (AFA) and a transfer matrix method (TMM). We used β-type AGNRs, whose band gap energy and electron effective mass depend on the ribbon width as given by the extended Huckel theory. It was shown that the tunneling currents evaluated by employing the AFA are the same as those obtained with the TMM. Moreover, the calculated tunneling current was proportional to the bias voltage and inversely proportional to the temperature
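The transfer matrix idea can be illustrated on the simplest case. The sketch below treats a generic one-dimensional rectangular barrier in units with ħ = 1 and 2m = 1 (so k = √(E − V)); it does not include the AGNR band structure of the paper, and it checks the result against the textbook closed-form transmission for E < V0:

```python
import cmath
import math

def interface(k1, k2, x):
    """Match plane-wave amplitudes (A, B) across a potential step at position x."""
    p = (1 + k1 / k2) / 2
    q = (1 - k1 / k2) / 2
    return [[p * cmath.exp(1j * (k1 - k2) * x), q * cmath.exp(-1j * (k1 + k2) * x)],
            [q * cmath.exp(1j * (k1 + k2) * x), p * cmath.exp(-1j * (k1 - k2) * x)]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transmission(E, V0, a):
    """T through a rectangular barrier of height V0 on [0, a]."""
    k1 = cmath.sqrt(E)          # outside the barrier
    k2 = cmath.sqrt(E - V0)     # inside (imaginary for E < V0)
    M = matmul(interface(k2, k1, a), interface(k1, k2, 0.0))
    return abs(1.0 / M[1][1]) ** 2   # det(M) = 1 for equal outer media

E, V0, a = 0.5, 1.0, 2.0
T = transmission(E, V0, a)
kappa = math.sqrt(V0 - E)
T_exact = 1.0 / (1.0 + V0 ** 2 * math.sinh(kappa * a) ** 2 / (4 * E * (V0 - E)))
print(abs(T - T_exact) < 1e-12)  # True
```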

  11. Exponential function method for solving nonlinear ordinary ...

    Indian Academy of Sciences (India)

  12. From free energy to expected energy: Improving energy-based value function approximation in reinforcement learning.

    Science.gov (United States)

    Elfwing, Stefan; Uchibe, Eiji; Doya, Kenji

    2016-12-01

    Free-energy based reinforcement learning (FERL) was proposed for learning in high-dimensional state and action spaces. However, the FERL method only really works well with binary, or close to binary, state input, where the number of active states is smaller than the number of non-active states. In the FERL method, the value function is approximated by the negative free energy of a restricted Boltzmann machine (RBM). In our earlier study, we demonstrated that the performance and robustness of the FERL method can be improved by scaling the free energy by a constant related to the size of the network. In this study, we propose that RBM function approximation can be further improved by approximating the value function by the negative expected energy (EERL), instead of the negative free energy, which also makes it possible to handle continuous state input. We validate our proposed method by demonstrating that EERL: (1) outperforms FERL, as well as standard neural network and linear function approximation, on three versions of a gridworld task with high-dimensional image state input; (2) achieves new state-of-the-art results in stochastic SZ-Tetris in both model-free and model-based learning settings; and (3) significantly outperforms FERL and standard neural network function approximation on a robot navigation task with raw and noisy RGB images as state input and a large number of actions. Copyright © 2016 The Author(s). Published by Elsevier Ltd. All rights reserved.
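The two approximators differ by the conditional entropy of the hidden units. A minimal sketch with hypothetical RBM weights: for a visible vector v, the negative free energy is Σᵢ bᵢvᵢ + Σⱼ log(1 + exp(xⱼ)) with xⱼ = cⱼ + Σᵢ Wᵢⱼvᵢ, while the negative expected energy replaces log(1 + exp(xⱼ)) by σ(xⱼ)·xⱼ.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def pre_activations(v, W, c):
    return [c[j] + sum(v[i] * W[i][j] for i in range(len(v)))
            for j in range(len(c))]

def neg_free_energy(v, W, b, c):
    x = pre_activations(v, W, c)
    return (sum(bi * vi for bi, vi in zip(b, v))
            + sum(math.log(1.0 + math.exp(xj)) for xj in x))

def neg_expected_energy(v, W, b, c):
    x = pre_activations(v, W, c)
    return (sum(bi * vi for bi, vi in zip(b, v))
            + sum(sigmoid(xj) * xj for xj in x))

W = [[0.5, -0.2], [0.1, 0.4], [-0.3, 0.2]]   # 3 visible x 2 hidden (hypothetical)
b = [0.1, -0.1, 0.2]
c = [0.0, 0.3]
v = [1, 0, 1]

nfe = neg_free_energy(v, W, b, c)
nee = neg_expected_energy(v, W, b, c)
# -F(v) = -<E> + H(h|v), so -F always dominates by the conditional entropy
print(nfe >= nee)  # True
```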

  13. Subspace-based optimization method for inverse scattering problems with an inhomogeneous background medium

    International Nuclear Information System (INIS)

    Chen, Xudong

    2010-01-01

    This paper proposes a version of the subspace-based optimization method to solve the inverse scattering problem with an inhomogeneous background medium where the known inhomogeneities are bounded in a finite domain. Although the background Green's function at each discrete point in the computational domain is not directly available in an inhomogeneous background scenario, the paper uses the finite element method to simultaneously obtain the Green's function at all discrete points. The essence of the subspace-based optimization method is that part of the contrast source is determined from the spectrum analysis without using any optimization, whereas the orthogonally complementary part is determined by solving a lower dimension optimization problem. This feature significantly speeds up the convergence of the algorithm and at the same time makes it robust against noise. Numerical simulations illustrate the efficacy of the proposed algorithm. The algorithm presented in this paper finds wide applications in nondestructive evaluation, such as through-wall imaging

  14. Surface functionalization of two-dimensional metal chalcogenides by Lewis acid-base chemistry

    Science.gov (United States)

    Lei, Sidong; Wang, Xifan; Li, Bo; Kang, Jiahao; He, Yongmin; George, Antony; Ge, Liehui; Gong, Yongji; Dong, Pei; Jin, Zehua; Brunetto, Gustavo; Chen, Weibing; Lin, Zuan-Tao; Baines, Robert; Galvão, Douglas S.; Lou, Jun; Barrera, Enrique; Banerjee, Kaustav; Vajtai, Robert; Ajayan, Pulickel

    2016-05-01

    Precise control of the electronic surface states of two-dimensional (2D) materials could improve their versatility and widen their applicability in electronics and sensing. To this end, chemical surface functionalization has been used to adjust the electronic properties of 2D materials. So far, however, chemical functionalization has relied on lattice defects and physisorption methods that inevitably modify the topological characteristics of the atomic layers. Here we make use of the lone pair electrons found in most 2D metal chalcogenides and report a functionalization method via a Lewis acid-base reaction that does not alter the host structure. Atomic layers of n-type InSe react with Ti4+ to form planar p-type [Ti4+n(InSe)] coordination complexes. Using this strategy, we fabricate planar p-n junctions on 2D InSe with improved rectification and photovoltaic properties, without requiring heterostructure growth procedures or device fabrication processes. We also show that this functionalization approach works with other Lewis acids (such as B3+, Al3+ and Sn4+) and can be applied to other 2D materials (for example MoS2, MoSe2). Finally, we show that it is possible to use Lewis acid-base chemistry as a bridge to connect molecules to 2D atomic layers and fabricate a proof-of-principle dye-sensitized photosensing device.

  15. Variational second-order Moller-Plesset theory based on the Luttinger-Ward functional

    NARCIS (Netherlands)

    Dahlen, NE; von Barth, U

    2004-01-01

    In recent years there have been some rather successful applications of a new variational technique for calculating the total energies of electronic systems. The new method is based on many-body perturbation theory and uses the one-electron Green function as the basic "variable" rather than the wave

  16. Improved Cole parameter extraction based on the least absolute deviation method

    International Nuclear Information System (INIS)

    Yang, Yuxiang; Ni, Wenwen; Sun, Qiang; Wen, He; Teng, Zhaosheng

    2013-01-01

    The Cole function is widely used in bioimpedance spectroscopy (BIS) applications. Fitting the measured BIS data onto the model and then extracting the Cole parameters (R0, R∞, α and τ) is a common practice. Accurate extraction of the Cole parameters from the measured BIS data has great significance for evaluating the physiological or pathological status of biological tissue. The traditional least-squares (LS)-based curve fitting method for Cole parameter extraction is often sensitive to noise or outliers and becomes non-robust. This paper proposes an improved Cole parameter extraction based on the least absolute deviation (LAD) method. Comprehensive simulation experiments are carried out and the performances of the LAD method are compared with those of the LS method under the conditions of outliers, random noises and both disturbances. The proposed LAD method exhibits much better robustness under all circumstances, which demonstrates that the LAD method is deserving as an improved alternative to the LS method for Cole parameter extraction for its robustness to outliers and noises. (paper)
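The LS-versus-LAD robustness contrast can be demonstrated on a much simpler model than the Cole function. The sketch below fits a straight line with one gross outlier, computing the LAD fit by iteratively reweighted least squares (weights 1/|residual|); the data are hypothetical and the Cole model itself is not fitted here:

```python
def fit_line_ls(xs, ys, ws=None):
    """(Weighted) least-squares fit of y = slope*x + intercept."""
    ws = ws or [1.0] * len(xs)
    sw = sum(ws)
    sx = sum(w * x for w, x in zip(ws, xs))
    sy = sum(w * y for w, y in zip(ws, ys))
    sxx = sum(w * x * x for w, x in zip(ws, xs))
    sxy = sum(w * x * y for w, x, y in zip(ws, xs, ys))
    slope = (sw * sxy - sx * sy) / (sw * sxx - sx * sx)
    return slope, (sy - slope * sx) / sw

def fit_line_lad(xs, ys, iters=50, eps=1e-8):
    """LAD fit via iteratively reweighted least squares."""
    slope, icpt = fit_line_ls(xs, ys)
    for _ in range(iters):
        ws = [1.0 / max(abs(y - (slope * x + icpt)), eps) for x, y in zip(xs, ys)]
        slope, icpt = fit_line_ls(xs, ys, ws)
    return slope, icpt

xs = list(range(10))
ys = [2.0 * x + 1.0 for x in xs]   # true line: slope 2, intercept 1
ys[9] = 200.0                      # one gross outlier
ls_slope, _ = fit_line_ls(xs, ys)
lad_slope, _ = fit_line_lad(xs, ys)
print(ls_slope, lad_slope)
```

The LS slope is dragged far from 2 by the single outlier, while the LAD slope stays essentially at the true value, mirroring the robustness argument of the abstract.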

  17. Process identification method based on the Z transformation

    International Nuclear Information System (INIS)

    Zwingelstein, G.

    1968-01-01

    A simple method is described for identifying the transfer function of a linear delay-free system, based on the inversion of the Z transformation of the transmittance using a computer. It is assumed in this study that the signals at the input and at the output of the circuit considered are of the deterministic type. The study includes: the theoretical principle of the inversion of the Z transformation, details about programming the simulation, and the identification of filters whose degrees vary from the first to the fifth order. (authors) [fr
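A minimal discrete-time sketch of identification from deterministic signals: a first-order system y[k] = a·y[k−1] + b·u[k−1] corresponds to the pulse transfer function H(z) = b/(z − a), and (a, b) can be recovered from input/output samples by least squares. The plant values are hypothetical, and the paper's actual Z-transform inversion procedure is not reproduced:

```python
# First-order system y[k] = a*y[k-1] + b*u[k-1]  <=>  H(z) = b / (z - a).
a_true, b_true = 0.8, 0.5          # hypothetical plant
u = [1.0] * 30                     # step input (deterministic signal)
y = [0.0]
for k in range(1, 30):
    y.append(a_true * y[k - 1] + b_true * u[k - 1])

# Normal equations for min sum_k (y[k] - a*y[k-1] - b*u[k-1])^2
S_yy = sum(y[k - 1] ** 2 for k in range(1, 30))
S_uu = sum(u[k - 1] ** 2 for k in range(1, 30))
S_yu = sum(y[k - 1] * u[k - 1] for k in range(1, 30))
r_y = sum(y[k] * y[k - 1] for k in range(1, 30))
r_u = sum(y[k] * u[k - 1] for k in range(1, 30))
det = S_yy * S_uu - S_yu ** 2
a_est = (r_y * S_uu - r_u * S_yu) / det
b_est = (r_u * S_yy - r_y * S_yu) / det
print(round(a_est, 6), round(b_est, 6))  # 0.8 0.5
```

With noiseless deterministic data the normal equations recover the coefficients exactly, up to floating-point rounding.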

  18. Life prediction for high temperature low cycle fatigue of two kinds of titanium alloys based on exponential function

    Science.gov (United States)

    Mu, G. Y.; Mi, X. Z.; Wang, F.

    2018-01-01

    High temperature low cycle fatigue tests of TC4 titanium alloy and TC11 titanium alloy were carried out under strain control. The relationships between cyclic stress and life and between strain and life are analyzed. A high temperature low cycle fatigue life prediction model for the two titanium alloys is established by using the Manson-Coffin method. The relationship between the number of reversals to failure and the plastic strain range is nonlinear in double logarithmic coordinates, whereas the Manson-Coffin method assumes that it is linear. Therefore, there is bound to be a certain prediction error when using the Manson-Coffin method. In order to solve this problem, a new method based on an exponential function is proposed. The results show that the fatigue life of the two titanium alloys can be predicted accurately and effectively by using these two methods; the prediction accuracy is within a ±1.83 times scatter band. The life prediction capability of the new method based on an exponential function proves more effective and accurate than the Manson-Coffin method for the two titanium alloys: the new method gives better fatigue life predictions, with a smaller standard deviation and scatter band. The life prediction results of both methods for TC4 titanium alloy prove better than those for TC11 titanium alloy.
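For reference, the baseline Manson-Coffin fit, Δε_p/2 = ε_f′(2N_f)^c, reduces to linear regression in log-log coordinates. The coefficients and data below are hypothetical; the paper's point is precisely that measured data deviate from this log-log linearity, which motivates its exponential-function model:

```python
import math

# Synthetic strain-life data generated exactly from a Manson-Coffin law
eps_f, c = 0.35, -0.6                          # hypothetical coefficients
data = [(2 * N, eps_f * (2 * N) ** c) for N in (100, 500, 2000, 10000)]

# Fit log(amplitude) = log(eps_f') + c * log(2N) by linear regression
xs = [math.log(rev) for rev, _ in data]
ys = [math.log(amp) for _, amp in data]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
c_fit = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
eps_fit = math.exp(ybar - c_fit * xbar)
print(round(c_fit, 6), round(eps_fit, 6))  # -0.6 0.35
```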

  19. Functional Independence and Quality of Life for Persons with Locomotor Disabilities in Institutional Based Rehabilitation and Community Based Rehabilitation - A Comparative Study

    Directory of Open Access Journals (Sweden)

    A Amarnath

    2012-12-01

    Full Text Available Purpose: To compare the functional independence and quality of life of persons with locomotor disabilities who undergo Institutional Based Rehabilitation (IBR) and similar persons who undergo Community Based Rehabilitation (CBR). Methods: Purposive sampling was done. Thirty males with locomotor disabilities - 15 from IBR and 15 from CBR - were selected. Both groups were first administered the Functional Independence Measure (FIM) questionnaire, followed by the Quality of Life (WHOQOL-BREF) questionnaire. Results: There were no significant differences between IBR and CBR with regard to functional independence (t value = -1.810, P …). doi: 10.5463/dcid.v23i3.147

  20. Parameters extraction for perovskite solar cells based on Lambert W-function

    Directory of Open Access Journals (Sweden)

    Ge Junyu

    2016-01-01

    Full Text Available The behavior of a solar cell is determined by its device parameters; thus, it is necessary to extract these parameters to achieve the optimal working condition. Because the five-parameter model of solar cells involves an implicit current-voltage equation, it is difficult to obtain the parameters with conventional methods. In this work, an optimized method is presented to extract the device parameters from actual test data of a photovoltaic cell. Based on the Lambert W-function, an explicit formulation of the model can be deduced. The proposed technique uses a suitable method of selecting sample points, which are used to calculate the values of the model parameters. By comparing with the Quasi-Newton method, the results verify the accuracy and reliability of this method.
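The explicit formulation can be sketched directly: the single-diode equation I = Iph − I0(exp((V + I·Rs)/a) − 1) − (V + I·Rs)/Rsh (with a = n·Vt) has the known Lambert-W closed form, which the code below evaluates with a Newton iteration for W and then checks against the implicit equation. All parameter values are hypothetical, not fitted to a real cell:

```python
import math

def lambert_w(x, iters=50):
    """Principal branch W(x) for x > 0, by Newton's method."""
    w = math.log(1.0 + x)            # reasonable start for x > 0
    for _ in range(iters):
        ew = math.exp(w)
        w -= (w * ew - x) / (ew * (w + 1.0))
    return w

Iph, I0, a, Rs, Rsh = 5.0, 1e-9, 0.05, 0.02, 100.0   # hypothetical; a = n*Vt

def current(V):
    """Explicit I(V) solving I = Iph - I0*(exp((V+I*Rs)/a) - 1) - (V+I*Rs)/Rsh."""
    arg = (Rs * Rsh * I0 / (a * (Rs + Rsh))) * \
          math.exp(Rsh * (Rs * (Iph + I0) + V) / (a * (Rs + Rsh)))
    return (Rsh * (Iph + I0) - V) / (Rs + Rsh) - (a / Rs) * lambert_w(arg)

V = 0.4
I = current(V)
resid = Iph - I0 * (math.exp((V + I * Rs) / a) - 1.0) - (V + I * Rs) / Rsh - I
print(abs(resid) < 1e-9)  # True
```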

  1. NetGen: a novel network-based probabilistic generative model for gene set functional enrichment analysis.

    Science.gov (United States)

    Sun, Duanchen; Liu, Yinliang; Zhang, Xiang-Sun; Wu, Ling-Yun

    2017-09-21

    High-throughput experimental techniques have been dramatically improved and widely applied in the past decades. However, biological interpretation of high-throughput experimental results, such as differential expression gene sets derived from microarray or RNA-seq experiments, is still a challenging task. Gene Ontology (GO) is commonly used in functional enrichment studies. The GO terms identified via current functional enrichment analysis tools often contain direct parent or descendant terms in the GO hierarchical structure. Highly redundant terms make it difficult for users to analyze the underlying biological processes. In this paper, a novel network-based probabilistic generative model, NetGen, is proposed to perform functional enrichment analysis. An additional protein-protein interaction (PPI) network is explicitly used to assist the identification of significantly enriched GO terms. NetGen achieved superior performance over the existing methods in simulation studies. The effectiveness of NetGen was further explored on four real datasets. Notably, several GO terms which were not directly linked with the active gene list for each disease were identified. These terms were closely related to the corresponding diseases when checked against the curated literature. NetGen has been implemented in the R package CopTea, publicly available at GitHub ( http://github.com/wulingyun/CopTea/ ). Our procedure leads to a more reasonable and interpretable result of the functional enrichment analysis. As a novel term combination-based functional enrichment analysis method, NetGen is complementary to current individual term-based methods and can help to explore the underlying pathogenesis of complex diseases.
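For context, the classical individual-term enrichment test that methods like NetGen complement is a hypergeometric tail probability: the chance of seeing at least k term-annotated genes in an active list drawn from the genome. The gene counts below are hypothetical, and this is the baseline test, not NetGen itself:

```python
from math import comb

def hypergeom_enrichment_p(N, K, n, k):
    """One-sided P(X >= k) when drawing n genes from N, of which K carry the term."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# 20000 genes, a GO term annotating 100 of them, an active list of 50 genes
# containing 10 term members: expected overlap by chance is only 0.25
p = hypergeom_enrichment_p(20000, 100, 50, 10)
print(p < 1e-6)  # True: far more overlap than expected by chance
```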

  2. The Numerical Simulation of the Crack Elastoplastic Extension Based on the Extended Finite Element Method

    Directory of Open Access Journals (Sweden)

    Xia Xiaozhou

    2013-01-01

    Full Text Available In the framework of the extended finite element method, an exponential discontinuity function is introduced to reflect the discontinuous character of the crack, and a crack-tip enrichment function composed of a triangular basis function and a linear polar-radius function is adopted to describe the displacement field distribution around the elastoplastic crack tip. The linear polar-radius form is chosen to reduce the singularity induced by the plastic yield zone at the crack tip, and the triangular basis form is adopted to describe the variation of the displacement with the polar angle around the crack tip. Based on the displacement model containing the above enrichment functions, the incremental iterative form of the elastoplastic extended finite element method is deduced from the virtual work principle. For non-uniformly hardening materials such as concrete, in order to avoid the asymmetry of the stiffness matrix induced by the non-associated flow of plastic strain, a plastic flow rule containing a cross term, based on the least energy dissipation principle, is adopted. Finally, numerical examples show that the elastoplastic X-FEM constructed in this paper is valid.

  3. Variational and PDE-Based Methods for Big Data Analysis, Classification and Image Processing Using Graphs

    Science.gov (United States)

    2015-01-01

    Assistant for Calculus (winter 2011) xii CHAPTER 1 Introduction We present several methods, outlined in Chapters 3-5, for image processing and data...local calculus formulation [103] to generalize the continuous formulation to a (non-local) discrete setting, while other non-local versions for...graph-based model based on the Ginzburg-Landau functional in their work [9]. To define the functional on a graph, the spatial gradient is replaced by a

  4. Arrival-time picking method based on approximate negentropy for microseismic data

    Science.gov (United States)

    Li, Yue; Ni, Zhuo; Tian, Yanan

    2018-05-01

    Accurate and dependable picking of the first arrival time for microseismic data is an important part of microseismic monitoring, which directly affects post-processing analysis results. This paper presents a new method based on approximate negentropy (AN) theory for microseismic arrival time picking under conditions of low signal-to-noise ratio (SNR). According to the differences in information characteristics between microseismic data and random noise, an appropriate approximation of the negentropy function is selected to minimize the effect of low SNR. At the same time, a weighted function of the differences between the maximum and minimum values of the AN spectrum curve is designed to obtain a proper threshold function. In this way, the signal and noise regions are distinguished and the first arrival time is picked accurately. To demonstrate the effectiveness of the AN method, we perform experiments on a series of synthetic data with SNRs from -1 dB to -12 dB and compare it with the previously published Akaike information criterion (AIC) and short/long time average ratio (STA/LTA) methods. Experimental results indicate that all three methods achieve good picking performance when the SNR is between -1 dB and -8 dB. However, when the SNR is as low as -8 dB to -12 dB, the proposed AN method yields more accurate and stable picking results than the AIC and STA/LTA methods. Furthermore, application results on real three-component microseismic data also show that the new method is superior to the other two methods in accuracy and stability.
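
The STA/LTA baseline that the record compares against can be sketched in a few lines: trigger at the first sample where the short-term average energy exceeds a multiple of the long-term average. The window lengths and threshold below are illustrative, not the paper's settings:

```python
import numpy as np

def sta_lta_pick(x, fs, sta_win=0.05, lta_win=0.5, threshold=3.0):
    """Return the first sample index where the ratio of short-term to
    long-term average energy exceeds `threshold`, or None if it never does."""
    e = np.asarray(x, dtype=float) ** 2
    ns, nl = int(sta_win * fs), int(lta_win * fs)
    # Cumulative sum lets each window average be computed in O(1)
    csum = np.concatenate(([0.0], np.cumsum(e)))
    for i in range(nl, len(x) - ns):
        lta = (csum[i] - csum[i - nl]) / nl          # trailing long window
        sta = (csum[i + ns] - csum[i]) / ns          # leading short window
        if lta > 0 and sta / lta > threshold:
            return i
    return None

# Synthetic trace: weak noise everywhere, an oscillatory arrival at sample 1000
fs = 1000.0
rng = np.random.default_rng(0)
trace = 0.05 * rng.standard_normal(2000)
trace[1000:] += np.sin(2 * np.pi * 30.0 * np.arange(1000) / fs)
pick = sta_lta_pick(trace, fs)
```

On such a high-SNR trace the trigger fires within a few tens of samples of the true onset; the record's point is that this ratio test degrades sharply below about -8 dB SNR, where the AN statistic remains stable.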

  5. WebGimm: An integrated web-based platform for cluster analysis, functional analysis, and interactive visualization of results.

    Science.gov (United States)

    Joshi, Vineet K; Freudenberg, Johannes M; Hu, Zhen; Medvedovic, Mario

    2011-01-17

    Cluster analysis methods have been extensively researched, but the adoption of new methods is often hindered by technical barriers in their implementation and use. WebGimm is a free cluster analysis web-service, and an open source general purpose clustering web-server infrastructure designed to facilitate easy deployment of integrated cluster analysis servers based on clustering and functional annotation algorithms implemented in R. Integrated functional analyses and interactive browsing of both clustering structure and functional annotations provide a complete analytical environment for cluster analysis and interpretation of results. The Java Web Start client-based interface is modeled after the familiar cluster/treeview packages, making its use intuitive to a wide array of biomedical researchers. For biomedical researchers, WebGimm provides an avenue to access state-of-the-art clustering procedures. For Bioinformatics methods developers, WebGimm offers a convenient avenue to deploy their newly developed clustering methods. WebGimm server, software and manuals can be freely accessed at http://ClusterAnalysis.org/.

  6. Analysis of QCD sum rule based on the maximum entropy method

    International Nuclear Information System (INIS)

    Gubler, Philipp

    2012-01-01

    The QCD sum rule was developed about thirty years ago and has been used up to the present to calculate various physical quantities of hadrons. Conventional analyses, however, have had to assume a 'pole + continuum' form for the spectral function, and application of the method therefore runs into difficulties when this assumption is not satisfied. In order to avoid this difficulty, an analysis making use of the maximum entropy method (MEM) has been developed by the present author. It is reported here how far this new method can be successfully applied. In the first section, the general features of the QCD sum rule are introduced. In section 2, it is discussed why the analysis by the QCD sum rule based on the MEM is so effective. In section 3, the MEM analysis process is described: in subsection 3.1 the likelihood function and prior probability are considered, and in subsection 3.2 numerical analyses are discussed. In section 4, some applications are described, starting with ρ mesons, then charmonium at finite temperature, and finally recent developments. Some figures of the spectral functions are shown. In section 5, a summary of the present analysis method and an outlook are given. (S. Funahashi)

  7. A Robust Service Selection Method Based on Uncertain QoS

    Directory of Open Access Journals (Sweden)

    Yanping Chen

    2016-01-01

    Full Text Available Nowadays, the number of Web services on the Internet is quickly increasing. Meanwhile, different service providers offer numerous services with similar functions. Quality of Service (QoS) has become an important factor used to select the most appropriate service for users. The most prominent QoS-based service selection models only take certain (deterministic) attributes into account, which is an idealized assumption. In the real world, there are a large number of uncertain factors. In particular, at runtime, QoS may become very poor or unacceptable. In order to solve this problem, a global service selection model based on uncertain QoS is proposed, including the corresponding normalization and aggregation functions; a robust optimization approach is then adopted to transform the model. Experimental results show that the proposed method can effectively select services with high robustness and optimality.

  8. Critical node treatment in the analytic function expansion method for Pin Power Reconstruction

    International Nuclear Information System (INIS)

    Gao, Z.; Xu, Y.; Downar, T.

    2013-01-01

    Pin Power Reconstruction (PPR) was implemented in PARCS using the eight-term analytic function expansion method (AFEN). This method has been demonstrated to be both accurate and efficient. However, similar to all methods involving analytic functions, such as the analytic nodal method (ANM) and AFEN for the nodal solution, the use of AFEN for PPR also has a potential numerical issue with critical nodes. The conventional analytic functions are trigonometric or hyperbolic sine or cosine functions with an angular frequency proportional to the buckling. For a critical node the buckling is zero, so the sine functions become zero and the cosine functions become unity. In this case, the eight analytic functions are no longer distinguishable from each other, so their corresponding coefficients can no longer be determined uniquely. The flux distribution of a critical node can be linear, while the conventional analytic functions can only express a uniform distribution. If there is a critical or near-critical node in a plane, the reconstructed pin power distribution often shows negative or very large values with the conventional method. In this paper, we propose a new method to avoid the numerical problem with critical nodes, which uses modified trigonometric or hyperbolic sine functions defined as the ratio of the trigonometric or hyperbolic sine to its angular frequency. If no critical or near-critical nodes are present, the new pin power reconstruction method with modified analytic functions is equivalent to the conventional one. The new method is demonstrated using the L336C5 benchmark problem. (authors)
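
The fix described in the record amounts to replacing sin(Bx) by sin(Bx)/B, which tends to x as the buckling-related frequency B goes to 0, so a critical node degenerates to a linear profile instead of an indeterminate one. A small numerical illustration of that limit (not the PARCS implementation):

```python
import numpy as np

def modified_sin(x, B, eps=1e-8):
    """sin(B*x)/B with the B -> 0 limit (= x) handled explicitly, so the
    basis function degenerates gracefully to a linear profile at a
    critical node (zero buckling)."""
    B = np.asarray(B, dtype=float)
    safe = np.where(np.abs(B) < eps, 1.0, B)   # avoid division by zero
    return np.where(np.abs(B) < eps, x, np.sin(B * x) / safe)

# Near the critical point the modified basis approaches the linear limit x
vals = [float(modified_sin(0.7, B)) for B in (1e-3, 1e-6, 0.0)]
```

The conventional basis sin(Bx) would instead collapse to the zero function as B goes to 0, which is exactly the degeneracy the record describes.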

  9. Critical node treatment in the analytic function expansion method for Pin Power Reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Gao, Z. [Rice University, MS 318, 6100 Main Street, Houston, TX 77005 (United States); Xu, Y. [Argonne National Laboratory, 9700 South Case Ave., Argonne, IL 60439 (United States); Downar, T. [Department of Nuclear Engineering, University of Michigan, 2355 Bonisteel blvd., Ann Arbor, MI 48109 (United States)

    2013-07-01

    Pin Power Reconstruction (PPR) was implemented in PARCS using the eight-term analytic function expansion method (AFEN). This method has been demonstrated to be both accurate and efficient. However, similar to all methods involving analytic functions, such as the analytic nodal method (ANM) and AFEN for the nodal solution, the use of AFEN for PPR also has a potential numerical issue with critical nodes. The conventional analytic functions are trigonometric or hyperbolic sine or cosine functions with an angular frequency proportional to the buckling. For a critical node the buckling is zero, so the sine functions become zero and the cosine functions become unity. In this case, the eight analytic functions are no longer distinguishable from each other, so their corresponding coefficients can no longer be determined uniquely. The flux distribution of a critical node can be linear, while the conventional analytic functions can only express a uniform distribution. If there is a critical or near-critical node in a plane, the reconstructed pin power distribution often shows negative or very large values with the conventional method. In this paper, we propose a new method to avoid the numerical problem with critical nodes, which uses modified trigonometric or hyperbolic sine functions defined as the ratio of the trigonometric or hyperbolic sine to its angular frequency. If no critical or near-critical nodes are present, the new pin power reconstruction method with modified analytic functions is equivalent to the conventional one. The new method is demonstrated using the L336C5 benchmark problem. (authors)

  10. Real-time and wearable functional electrical stimulation system for volitional hand motor function control using the electromyography bridge method

    Directory of Open Access Journals (Sweden)

    Hai-peng Wang

    2017-01-01

    Full Text Available Voluntary participation of hemiplegic patients is crucial for functional electrical stimulation therapy. A wearable functional electrical stimulation system has been proposed for real-time volitional hand motor function control using the electromyography bridge method. Through a series of novel design concepts, including the integration of a detecting circuit and an analog-to-digital converter, a miniaturized functional electrical stimulation circuit technique, a low-power super-regeneration chip for wireless receiving, and two wearable armbands, a prototype system has been established with reduced size, power, and overall cost. Based on wrist joint torque reproduction and classification experiments performed on six healthy subjects, the optimized surface electromyography thresholds and trained logistic regression classifier parameters were statistically chosen to establish wrist and hand motion control with high accuracy. Test results showed that wrist flexion/extension, hand grasp, and finger extension could be reproduced with high accuracy and low latency. This system can build a bridge of information transmission between healthy limbs and paralyzed limbs, effectively improve voluntary participation of hemiplegic patients, and elevate efficiency of rehabilitation training.

  11. Improved method for prioritization of disease associated lncRNAs based on ceRNA theory and functional genomics data.

    Science.gov (United States)

    Wang, Peng; Guo, Qiuyan; Gao, Yue; Zhi, Hui; Zhang, Yan; Liu, Yue; Zhang, Jizhou; Yue, Ming; Guo, Maoni; Ning, Shangwei; Zhang, Guangmei; Li, Xia

    2017-01-17

    Although several computational models that predict disease-associated lncRNAs (long non-coding RNAs) exist, only a limited number of disease-associated lncRNAs are known. In this study, we mapped lncRNAs to their functional genomics context using competing endogenous RNAs (ceRNAs) theory. Based on the criterion that similar lncRNAs are likely involved in similar diseases, we proposed a disease lncRNA prioritization method, DisLncPri, to identify novel disease-lncRNA associations. Using a leave-one-out cross validation (LOOCV) strategy, DisLncPri achieved reliable area under curve (AUC) values of 0.89 and 0.87 for the LncRNADisease and Lnc2Cancer datasets, which further improved to 0.90 and 0.89 by integrating a multiple rank fusion strategy. We found that DisLncPri had the highest rank enrichment score and AUC value in comparison to several other methods for case studies of Alzheimer's disease, ovarian cancer, pancreatic cancer and gastric cancer. Several novel lncRNAs in the top ranks of these diseases were found to be newly verified by relevant databases or reported in recent studies. Prioritization of lncRNAs from a microarray (GSE53622) of oesophageal cancer patients highlighted ENSG00000226029 (top 2), a previously unidentified lncRNA, as a potential prognostic biomarker. Our analysis thus indicates that DisLncPri is an excellent tool for identifying lncRNAs that could be novel biomarkers and therapeutic targets in a variety of human diseases.

  12. A method for determining customer requirement weights based on TFMF and TLR

    Science.gov (United States)

    Ai, Qingsong; Shu, Ting; Liu, Quan; Zhou, Zude; Xiao, Zheng

    2013-11-01

    'Customer requirements' (CRs) management plays an important role in enterprise systems (ESs) by processing customer-focused information. Quality function deployment (QFD) is one of the main CR analysis methods. Because CR weights are crucial inputs to QFD, we developed a method for determining CR weights based on the trapezoidal fuzzy membership function (TFMF) and 2-tuple linguistic representation (TLR). To improve the accuracy of CR weights, we propose to apply TFMF to describe CR weights so that they can be appropriately represented. Because fuzzy logic cannot aggregate information without loss, the TLR model is adopted as well. We first describe the basic concepts of TFMF and TLR and then introduce an approach to compute CR weights. Finally, an example is provided to explain and verify the proposed method.
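
A trapezoidal fuzzy membership function of the kind the record builds on is fully defined by four breakpoints (a, b, c, d). A minimal sketch; the 0-10 importance scale and the breakpoints are illustrative, not taken from the paper:

```python
def trapezoidal_mf(x, a, b, c, d):
    """Trapezoidal fuzzy membership: 0 outside [a, d], 1 on [b, c],
    linear ramps on [a, b] and [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)   # rising ramp
    return (d - x) / (d - c)       # falling ramp

# A CR weight judged "around medium importance" on a 0-10 scale
weight = trapezoidal_mf(4.0, 2.0, 4.0, 6.0, 8.0)   # inside the plateau
edge = trapezoidal_mf(3.0, 2.0, 4.0, 6.0, 8.0)     # on the rising ramp
```

The TLR step of the paper then re-expresses such fuzzy assessments as 2-tuples so they can be aggregated without information loss, which a plain fuzzy average cannot guarantee.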

  13. Methods in Logic Based Control

    DEFF Research Database (Denmark)

    Christensen, Georg Kronborg

    1999-01-01

    Design and theory of Logic Based Control systems: Boolean algebra, Karnaugh maps, the Quine-McCluskey algorithm. Sequential control design. Logic Based Control Method, Cascade Control Method. Implementation techniques: relay, pneumatic, TTL/CMOS, PAL, and PLC- and Soft_PLC implementation. PLC

  14. A hybrid filtering method based on a novel empirical mode decomposition for friction signals

    International Nuclear Information System (INIS)

    Li, Chengwei; Zhan, Liwei

    2015-01-01

    During a measurement, the measured signal usually contains noise. To remove the noise and preserve the important features of the signal, we introduce a hybrid filtering method that uses a new intrinsic mode function (NIMF) and a modified Hausdorff distance. The NIMF is defined as the difference between the noisy signal and each intrinsic mode function (IMF), which is obtained by empirical mode decomposition (EMD), ensemble EMD, complementary ensemble EMD, or complete ensemble EMD with adaptive noise (CEEMDAN). The relevant mode selection is based on the similarity between the first NIMF and the rest of the NIMFs. With this filtering method, EMD and its improved versions are used to filter the simulation and friction signals. The friction signal between an airplane tire and the runway is recorded during a simulated airplane touchdown and features spikes of various amplitudes and noise. The filtering effectiveness of the four hybrid filtering methods is compared and discussed. The results show that the filtering method based on CEEMDAN outperforms the other signal filtering methods. (paper)
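
The NIMF construction and mode selection described above can be sketched once a decomposition is available. This is only an illustration: the IMFs are handed in ready-made, and plain correlation is substituted for the paper's modified Hausdorff distance as the similarity measure:

```python
import numpy as np

def filter_by_nimf(x, imfs, sim_threshold=0.95):
    """Hybrid-filtering sketch: NIMF_k = x - IMF_k.  Modes whose NIMF is
    highly similar to the first NIMF are treated as noise-dominated and
    dropped; the remaining IMFs are summed as the filtered signal."""
    nimfs = [x - imf for imf in imfs]
    ref = nimfs[0]   # the first NIMF serves as the reference
    kept = [imf for imf, nimf in zip(imfs, nimfs)
            if abs(np.corrcoef(ref, nimf)[0, 1]) < sim_threshold]
    return np.sum(kept, axis=0) if kept else np.zeros_like(x)

# Toy decomposition: pretend EMD returned the noise and the clean tone as two IMFs
t = np.linspace(0.0, 1.0, 500)
clean = np.sin(2 * np.pi * 3 * t)
noise = 0.3 * np.random.default_rng(1).standard_normal(500)
filtered = filter_by_nimf(clean + noise, [noise, clean])
```

In practice the IMFs would come from an EMD/CEEMDAN library rather than being supplied by hand, and the similarity measure and threshold would follow the paper's modified Hausdorff criterion.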

  15. A Method Based on Dial's Algorithm for Multi-time Dynamic Traffic Assignment

    Directory of Open Access Journals (Sweden)

    Rongjie Kuang

    2014-03-01

    Full Text Available Because static traffic assignment reflects actual conditions poorly and dynamic traffic assignment may incur excessive computational cost, multi-time dynamic traffic assignment, which combines static and dynamic assignment, balances precision and cost effectively. A method based on Dial's logit algorithm is proposed in this article to solve the dynamic stochastic user equilibrium problem in dynamic traffic assignment. Beforehand, a fitting function that approximately reflects the overloaded traffic condition of a link is proposed and used in the corresponding model. A numerical example is given to illustrate the heuristic procedure of the method and to compare its results with those obtained for the same example by an algorithm from the literature. Results show that the method based on Dial's algorithm is preferable to the algorithm from other studies.
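
The two ingredients named in the record can be sketched separately: the logit route-choice split at the heart of Dial's stochastic loading, and a link performance curve standing in for the overload fitting function (the record does not give its form, so a BPR-style curve is used here as an assumption):

```python
import math

def logit_split(route_costs, theta=1.0):
    """Logit route-choice probabilities, the core of Dial's stochastic
    loading: each route's share is proportional to exp(-theta * cost)."""
    w = [math.exp(-theta * c) for c in route_costs]
    s = sum(w)
    return [wi / s for wi in w]

def bpr_link_time(t0, flow, capacity, alpha=0.15, beta=4.0):
    """BPR-style link performance curve: travel time grows steeply once
    flow approaches capacity, a stand-in for the record's fitting function."""
    return t0 * (1.0 + alpha * (flow / capacity) ** beta)

shares = logit_split([10.0, 12.0], theta=0.5)  # the cheaper route gets the larger share
```

Iterating these two steps per time slice (load flows with the logit split, update link times with the performance curve) is the usual fixed-point scheme for stochastic user equilibrium.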

  16. Predictive Function Control for Communication-Based Train Control (CBTC) Systems

    Directory of Open Access Journals (Sweden)

    Bing Bu

    2013-01-01

    Full Text Available In Communication-Based Train Control (CBTC) systems, random transmission delays and packet drops are inevitable in the wireless networks, which can result in unnecessary traction, braking or even emergency braking of trains, loss of line capacity and passenger dissatisfaction. This paper applies predictive function control technology with a mixed H2/H∞ control approach to improve the control performance. The controller is in state feedback form and satisfies the quadratic input and state constraints. A linear matrix inequality (LMI) approach is developed to solve the control problem. The proposed method attenuates disturbances by incorporating H2/H∞ into the control scheme. The control command from the automatic train operation (ATO) system is included in the reward function to optimize the train's running profile. The influence of transmission delays and packet drops is alleviated by improving the performance of the controller. Simulation results show that the method is effective in improving the performance and robustness of CBTC systems.

  17. Optimizing top precision performance measure of content-based image retrieval by learning similarity function

    KAUST Repository

    Liang, Ru-Ze

    2017-04-24

    In this paper we study the problem of content-based image retrieval. In this problem, the most popular performance measure is the top precision measure, and the most important component of a retrieval system is the similarity function used to compare a query image against a database image. However, up to now, there is no existing similarity learning method proposed to optimize the top precision measure. To fill this gap, in this paper, we propose a novel similarity learning method to maximize the top precision measure. We model this problem as a minimization problem with an objective function as the combination of the losses of the relevant images ranked behind the top-ranked irrelevant image, and the squared Frobenius norm of the similarity function parameter. This minimization problem is solved as a quadratic programming problem. The experiments over two benchmark data sets show the advantages of the proposed method over other similarity learning methods when the top precision is used as the performance measure.
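
The objective described above, hinge losses for relevant images scored below the top-ranked irrelevant image plus a squared Frobenius-norm penalty, can be sketched for a bilinear similarity. This is an illustrative loss evaluation only, not the paper's quadratic-programming solver; the margin of 1 and the weight c are assumptions:

```python
import numpy as np

def top_precision_loss(M, query, relevant, irrelevant, c=0.1):
    """Loss sketch: sum of hinge losses of relevant images ranked behind
    the top-ranked irrelevant image under similarity s(q, x) = q^T M x,
    plus c * ||M||_F^2 as the regulariser."""
    sim = lambda x: float(query @ M @ x)
    top_irr = max(sim(x) for x in irrelevant)          # top-ranked irrelevant image
    hinge = sum(max(0.0, 1.0 - (sim(x) - top_irr)) for x in relevant)
    return hinge + c * float(np.sum(M * M))

M = np.eye(2)
q = np.array([1.0, 0.0])
loss = top_precision_loss(M, q,
                          relevant=[np.array([1.0, 0.0])],
                          irrelevant=[np.array([0.0, 1.0])])
```

Minimising this loss over M pushes every relevant image at least a margin above the best-scoring irrelevant one, which is what ties the objective to the top precision measure.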

  18. Optimizing top precision performance measure of content-based image retrieval by learning similarity function

    KAUST Repository

    Liang, Ru-Ze; Shi, Lihui; Wang, Haoxiang; Meng, Jiandong; Wang, Jim Jing-Yan; Sun, Qingquan; Gu, Yi

    2017-01-01

    In this paper we study the problem of content-based image retrieval. In this problem, the most popular performance measure is the top precision measure, and the most important component of a retrieval system is the similarity function used to compare a query image against a database image. However, up to now, there is no existing similarity learning method proposed to optimize the top precision measure. To fill this gap, in this paper, we propose a novel similarity learning method to maximize the top precision measure. We model this problem as a minimization problem with an objective function as the combination of the losses of the relevant images ranked behind the top-ranked irrelevant image, and the squared Frobenius norm of the similarity function parameter. This minimization problem is solved as a quadratic programming problem. The experiments over two benchmark data sets show the advantages of the proposed method over other similarity learning methods when the top precision is used as the performance measure.

  19. Theory of direct-interband-transition line shapes based on Mori's method

    International Nuclear Information System (INIS)

    Sam Nyung Yi; Jai Yon Ryu; Ok Hee Chung; Joung Young Sug; Sang Don Choi; Yeon Choon Chung

    1987-01-01

    A theory of direct interband optical transition in the electron-phonon system is introduced on the basis of the Kubo formalism and by using Mori's method of calculation. The line shape functions are introduced in two different ways and are compared with those obtained by Choi and Chung based on Argyres and Sigel's projection technique

  20. A local level set method based on a finite element method for unstructured meshes

    International Nuclear Information System (INIS)

    Ngo, Long Cu; Choi, Hyoung Gwon

    2016-01-01

    A local level set method for unstructured meshes has been implemented by using a finite element method. A least-square weighted residual method was employed for implicit discretization to solve the level set advection equation. By contrast, a direct re-initialization method, which is directly applicable to the local level set method for unstructured meshes, was adopted to re-correct the level set function to become a signed distance function after advection. The proposed algorithm was constructed such that the advection and direct reinitialization steps were conducted only for nodes inside the narrow band around the interface. Therefore, in the advection step, the Gauss–Seidel method was used to update the level set function using a node-by-node solution method. Some benchmark problems were solved by using the present local level set method. Numerical results have shown that the proposed algorithm is accurate and efficient in terms of computational time

  1. A local level set method based on a finite element method for unstructured meshes

    Energy Technology Data Exchange (ETDEWEB)

    Ngo, Long Cu; Choi, Hyoung Gwon [School of Mechanical Engineering, Seoul National University of Science and Technology, Seoul (Korea, Republic of)

    2016-12-15

    A local level set method for unstructured meshes has been implemented by using a finite element method. A least-square weighted residual method was employed for implicit discretization to solve the level set advection equation. By contrast, a direct re-initialization method, which is directly applicable to the local level set method for unstructured meshes, was adopted to re-correct the level set function to become a signed distance function after advection. The proposed algorithm was constructed such that the advection and direct reinitialization steps were conducted only for nodes inside the narrow band around the interface. Therefore, in the advection step, the Gauss–Seidel method was used to update the level set function using a node-by-node solution method. Some benchmark problems were solved by using the present local level set method. Numerical results have shown that the proposed algorithm is accurate and efficient in terms of computational time.

  2. Optimal protein library design using recombination or point mutations based on sequence-based scoring functions.

    Science.gov (United States)

    Pantazes, Robert J; Saraf, Manish C; Maranas, Costas D

    2007-08-01

    In this paper, we introduce and test two new sequence-based protein scoring systems (i.e. S1, S2) for assessing the likelihood that a given protein hybrid will be functional. By binning together amino acids with similar properties (i.e. volume, hydrophobicity and charge) the scoring systems S1 and S2 allow for the quantification of the severity of mismatched interactions in the hybrids. The S2 scoring system is found to be able to significantly functionally enrich a cytochrome P450 library over other scoring methods. Given this scoring base, we subsequently constructed two separate optimization formulations (i.e. OPTCOMB and OPTOLIGO) for optimally designing protein combinatorial libraries involving recombination or mutations, respectively. Notably, two separate versions of OPTCOMB are generated (i.e. model M1, M2) with the latter allowing for position-dependent parental fragment skipping. Computational benchmarking results demonstrate the efficacy of models OPTCOMB and OPTOLIGO to generate high scoring libraries of a prespecified size.

  3. Preparing primary school teachers in a computer science course with a learning method based on visualization

    Directory of Open Access Journals (Sweden)

    Елена Сергеевна Пучкова

    2011-03-01

    Full Text Available The paper considers the possibility of training future teachers in a computer science methods course through the creation and manipulation of visual images. Examples are given of practice-oriented assignments that form professional qualities through the explicit and implicit use of visual images, and whose solution relies on the cognitive function of visualization.

  4. Baryons with functional methods

    International Nuclear Information System (INIS)

    Fischer, Christian S.

    2017-01-01

    We summarise recent results on the spectrum of ground-state and excited baryons and their form factors in the framework of functional methods. As an improvement upon similar approaches we explicitly take into account the underlying momentum-dependent dynamics of the quark-gluon interaction that leads to dynamical chiral symmetry breaking. For light octet and decuplet baryons we find a spectrum in very good agreement with experiment, including the level ordering between the positive- and negative-parity nucleon states. Comparing the three-body framework with the quark-diquark approximation, we do not find significant differences in the spectrum for those states that have been calculated in both frameworks. This situation is different in the electromagnetic form factor of the Δ, which may serve to distinguish both pictures by comparison with experiment and lattice QCD.

  5. Stress assessment based on EEG univariate features and functional connectivity measures.

    Science.gov (United States)

    Alonso, J F; Romero, S; Ballester, M R; Antonijoan, R M; Mañanas, M A

    2015-07-01

    The biological response to stress originates in the brain but involves different biochemical and physiological effects. Many common clinical methods to assess stress are based on the presence of specific hormones and on features extracted from different signals, including electrocardiogram, blood pressure, skin temperature, or galvanic skin response. The aim of this paper was to assess stress using EEG-based variables obtained from univariate analysis and functional connectivity evaluation. Two different stressors, the Stroop test and sleep deprivation, were applied to 30 volunteers to find common EEG patterns related to stress effects. Results showed a decrease of the high alpha power (11 to 12 Hz), an increase in the high beta band (23 to 36 Hz, considered a busy brain indicator), and a decrease in the approximate entropy. Moreover, connectivity showed that the high beta coherence and the interhemispheric nonlinear couplings, measured by the cross mutual information function, increased significantly for both stressors, suggesting that useful stress indexes may be obtained from EEG-based features.
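
The relative band-power features the study relies on (high alpha, 11-12 Hz; high beta, 23-36 Hz) can be computed from a periodogram. A minimal illustration on a synthetic tone, not the study's pipeline; the sampling rate, duration and band edges are arbitrary:

```python
import numpy as np

def band_power(x, fs, f_lo, f_hi):
    """Relative power of x in the band [f_lo, f_hi] Hz, from the
    FFT periodogram."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].sum() / psd.sum()

# A 10 Hz (alpha-range) tone: nearly all power should fall in the alpha band
fs = 250.0
t = np.arange(1000) / fs          # 4 s at 250 Hz
alpha = np.sin(2 * np.pi * 10.0 * t)
p_alpha = band_power(alpha, fs, 8.0, 12.0)
```

In practice EEG studies average such estimates over epochs (e.g. with Welch's method) and per channel; the connectivity measures in the record (coherence, cross mutual information) are computed between channel pairs rather than per channel.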

  6. Efficient and accurate Greedy Search Methods for mining functional modules in protein interaction networks.

    Science.gov (United States)

    He, Jieyue; Li, Chaojun; Ye, Baoliu; Zhong, Wei

    2012-06-25

    Most computational algorithms mainly focus on detecting highly connected subgraphs in PPI networks as protein complexes but ignore their inherent organization. Furthermore, many of these algorithms are computationally expensive. However, recent analysis indicates that experimentally detected protein complexes generally contain core/attachment structures. In this paper, a Greedy Search Method based on Core-Attachment structure (GSM-CA) is proposed. The GSM-CA method detects densely connected regions in large protein-protein interaction networks based on the edge weight and two criteria for determining core nodes and attachment nodes. The GSM-CA method improves the prediction accuracy compared to other similar module detection approaches; however, it is computationally expensive. Many module detection approaches are based on traditional hierarchical methods, which are also computationally inefficient because the hierarchical tree structure produced by these approaches cannot provide adequate information to identify whether a network belongs to a module structure or not. In order to speed up the computational process, the Greedy Search Method based on Fast Clustering (GSM-FC) is proposed in this work. The edge-weight-based GSM-FC method uses a greedy procedure that traverses all edges just once to separate the network into a suitable set of modules. The proposed methods are applied to the protein interaction network of S. cerevisiae. Experimental results indicate that many significant functional modules are detected, most of which match known complexes. Results also demonstrate that the GSM-FC algorithm is faster and more accurate than other competing algorithms. Based on the new edge weight definition, the proposed algorithm takes advantage of the greedy search procedure to separate the network into a suitable set of modules. Experimental analysis shows that the identified modules are statistically significant. The algorithm can reduce the
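
Greedy seed-expansion of a dense module, the general idea behind the methods above, can be sketched on an unweighted graph. This is a generic sketch, not the paper's GSM-CA/GSM-FC procedure with its edge-weight definition; the stopping density is an assumption:

```python
def density(adj, nodes):
    """Internal edge density of `nodes` in the graph `adj` (dict of sets)."""
    n = len(nodes)
    if n < 2:
        return 0.0
    e = sum(1 for u in nodes for v in adj[u] if v in nodes) / 2
    return e / (n * (n - 1) / 2)

def greedy_module(adj, seed, min_density=0.75):
    """Grow a module from `seed`: repeatedly add the neighbour with the most
    edges into the module while the module's density stays above
    `min_density` (ties broken alphabetically for determinism)."""
    module = {seed}
    while True:
        cands = {v for u in module for v in adj[u]} - module
        if not cands:
            return module
        best = max(sorted(cands), key=lambda v: sum(w in module for w in adj[v]))
        if density(adj, module | {best}) < min_density:
            return module
        module.add(best)

# A 4-clique {a, b, c, d} with a pendant path a-e-f attached
adj = {
    'a': {'b', 'c', 'd', 'e'},
    'b': {'a', 'c', 'd'},
    'c': {'a', 'b', 'd'},
    'd': {'a', 'b', 'c'},
    'e': {'a', 'f'},
    'f': {'e'},
}
module = greedy_module(adj, 'a')
```

The clique is recovered and the low-density pendant nodes are rejected; the paper's contribution is to make this kind of expansion both weight-aware and fast enough for genome-scale PPI networks.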

  7. New strategy for surface functionalization of periodic mesoporous silica based on meso-HSiO1.5.

    Science.gov (United States)

    Xie, Zhuoying; Bai, Ling; Huang, Suwen; Zhu, Cun; Zhao, Yuanjin; Gu, Zhong-Ze

    2014-01-29

    Organic functionalization of periodic mesoporous silicas (PMSs) offers a way to improve their excellent properties and widen their applications owing to their structural superiority. In this study, a new strategy for organic functionalization of PMSs is demonstrated by hydrosilylation of the recently discovered "impossible" periodic mesoporous hydridosilica, meso-HSiO1.5. This method overcomes the disadvantages of present pathways for organic functionalization of PMSs with organosilica. Moreover, compared with traditional functionalization of porous silicon surfaces by hydrosilylation, the template-synthesized meso-HSiO1.5 is more flexible for accessing functional-group-loaded PMSs with adjustable microstructures. The new method and materials will find wider applications based on both their structural and surface superiorities.

  8. Human Detection System by Fusing Depth Map-Based Method and Convolutional Neural Network-Based Method

    Directory of Open Access Journals (Sweden)

    Anh Vu Le

    2017-01-01

    Full Text Available In this paper, the depth images and the colour images provided by Kinect sensors are used to enhance the accuracy of human detection. The depth-based human detection method is fast but less accurate. On the other hand, the Faster R-CNN (faster region-based convolutional neural network) human detection method is accurate but requires a rather complex hardware configuration. To simultaneously leverage the advantages and relieve the drawbacks of each method, a system with one master and one client is proposed. The final goal is to make a novel Robot Operating System (ROS)-based Perception Sensor Network (PSN) system, which is more accurate and ready for real-time application. The experimental results demonstrate that the proposed method outperforms conventional methods in challenging scenarios.

  9. Direct electrochemical sensing of glucose using glucose oxidase immobilized on functionalized carbon nanotubes via a novel metal chelate-based affinity method

    International Nuclear Information System (INIS)

    Tu, X.; Zhao, Y.; Luo, S.; Luo, X.; Feng, L.

    2012-01-01

We report on a novel amperometric glassy carbon biosensing electrode for glucose. It is based on the immobilization of a highly sensitive glucose oxidase (GOx) by affinity interaction on carbon nanotubes (CNTs) functionalized with iminodiacetic acid and metal chelates. The new immobilization technique exploits the affinity of Co(II) ions to the histidine and cysteine moieties on the surface of GOx. The direct electrochemistry of immobilized GOx revealed that the functionalized CNTs greatly improve the direct electron transfer between GOx and the surface of the electrode, giving a pair of well-defined and almost reversible redox peaks; the immobilized GOx undergoes fast heterogeneous electron transfer with a rate constant (k_s) of 0.59 s⁻¹. The GOx immobilized in this way fully retained its activity for the oxidation of glucose. The resulting biosensor is capable of detecting glucose at levels as low as 0.01 mM and has excellent operational stability (no decrease in enzyme activity over a 10-day period). The method of immobilizing GOx is easy and also provides a model technique for potential use with other redox enzymes and proteins. (author)

  10. Stand diameter distribution modelling and prediction based on Richards function.

    Directory of Open Access Journals (Sweden)

    Ai-guo Duan

Full Text Available The objective of this study was to introduce the application of the Richards equation to the modelling and prediction of stand diameter distributions. The long-term repeated measurement data sets, consisting of 309 diameter frequency distributions from Chinese fir (Cunninghamia lanceolata) plantations in southern China, were used; 150 stands were used as fitting data and the other 159 stands for testing. The nonlinear regression method (NRM) or the maximum likelihood estimation method (MLEM) were applied to estimate the parameters of the models, and the parameter prediction method (PPM) and parameter recovery method (PRM) were used to predict the diameter distributions of unknown stands. Four main conclusions were obtained: (1) the R distribution presented a more accurate simulation than the three-parameter Weibull function; (2) the parameters p, q and r of the R distribution proved to be its scale, location and shape parameters, and are closely related to stand characteristics, which means the parameters of the R distribution have a good theoretical interpretation; (3) the ordinate of the inflection point of the R distribution is significantly correlated with its skewness and kurtosis, and the fitted main distribution range for the cumulative diameter distribution of Chinese fir plantations was 0.4∼0.6; (4) the goodness-of-fit test showed that diameter distributions of unknown stands can be well estimated by applying the R distribution based on PRM, or on the combination of PPM and PRM, under the condition that only the quadratic mean DBH, or additionally the stand age, is known; the non-rejection rates were near 80%, higher than the 72.33% non-rejection rate of the three-parameter Weibull function based on the combination of PPM and PRM.

  11. Adaptive method for multi-dimensional integration and selection of a base of chaos polynomials

    International Nuclear Information System (INIS)

    Crestaux, T.

    2011-01-01

This research thesis addresses the propagation of uncertainty in numerical simulations and its treatment within a probabilistic framework, using a functional approach based on functions of random variables. The author reports the use of the spectral method to represent random variables by expansion in polynomial chaos. More precisely, the author uses the method of non-intrusive projection, which exploits the orthogonality of the chaos polynomials to compute the expansion coefficients by approximation of scalar products. The approach is applied to a cavity and to waste storage [fr
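The non-intrusive projection just described can be sketched in a few lines: each expansion coefficient is the scalar product of the model output with an orthogonal chaos polynomial, divided by that polynomial's norm. The sketch below is an illustration, not the thesis's implementation: it uses probabilists' Hermite polynomials for a single standard normal input and approximates the scalar products by Monte Carlo sampling rather than the quadrature an adaptive scheme would use.

```python
import math
import random

def hermite(k, x):
    """Probabilists' Hermite polynomial He_k(x) via the recurrence
    He_{n+1}(x) = x*He_n(x) - n*He_{n-1}(x)."""
    h_prev, h = 1.0, x
    if k == 0:
        return h_prev
    for n in range(1, k):
        h_prev, h = h, x * h - n * h_prev
    return h

def pce_coefficients(f, order, n_samples=100_000, seed=1):
    """Approximate the scalar products <f, He_k> over a standard normal
    input by Monte Carlo, then divide by the norm <He_k, He_k> = k!."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n_samples)]
    coeffs = []
    for k in range(order + 1):
        inner = sum(f(x) * hermite(k, x) for x in xs) / n_samples
        coeffs.append(inner / math.factorial(k))
    return coeffs

# f(x) = x^2 has the exact expansion He_0 + He_2, i.e. c0 = c2 = 1, c1 = c3 = 0
coeffs = pce_coefficients(lambda x: x * x, 3)
```

With the coefficients in hand, the surrogate is evaluated as a plain polynomial sum, which is what makes the spectral representation cheap to propagate.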

  12. Ankle-brachial index by automated method and renal function

    Directory of Open Access Journals (Sweden)

    Ricardo Pereira Silva

    2017-05-01

Full Text Available Background The ankle-brachial index (ABI) is a non-invasive method used for the diagnosis of peripheral arterial occlusive disease (PAOD). Aims To determine the clinical features of patients submitted to ABI measurement by an automatic method, and to investigate the association between ABI and renal function. Methods This is a cross-sectional study performed in a private clinic in the city of Fortaleza (CE, Brazil). For ABI analysis, we utilized an automatic method using a Microlife device. Data collection took place from March 2012 to January 2016. During this period, ABI was measured in 375 patients aged >50 years who had a diagnosis of hypertension, diabetes or vascular disease. Results Of the 375 patients, 18 were categorized as having abnormal ABI (4.8 per cent) and 357 as having normal ABI (95.2 per cent). Patients with abnormal ABI had an older mean age than patients with normal ABI. Among patients with normal renal function, only 0.95 per cent showed abnormal ABI; among patients with abnormal renal function, 6 per cent showed abnormal ABI. Conclusion (1) No differences were observed when comparing the groups regarding gender or the prevalence of hypertension, diabetes, dyslipidaemia or CAD. (2) The group with abnormal ABI had greater impairment of renal function.

  13. A quasiparticle-based multi-reference coupled-cluster method.

    Science.gov (United States)

    Rolik, Zoltán; Kállay, Mihály

    2014-10-07

    The purpose of this paper is to introduce a quasiparticle-based multi-reference coupled-cluster (MRCC) approach. The quasiparticles are introduced via a unitary transformation which allows us to represent a complete active space reference function and other elements of an orthonormal multi-reference (MR) basis in a determinant-like form. The quasiparticle creation and annihilation operators satisfy the fermion anti-commutation relations. On the basis of these quasiparticles, a generalization of the normal-ordered operator products for the MR case can be introduced as an alternative to the approach of Mukherjee and Kutzelnigg [Recent Prog. Many-Body Theor. 4, 127 (1995); Mukherjee and Kutzelnigg, J. Chem. Phys. 107, 432 (1997)]. Based on the new normal ordering any quasiparticle-based theory can be formulated using the well-known diagram techniques. Beyond the general quasiparticle framework we also present a possible realization of the unitary transformation. The suggested transformation has an exponential form where the parameters, holding exclusively active indices, are defined in a form similar to the wave operator of the unitary coupled-cluster approach. The definition of our quasiparticle-based MRCC approach strictly follows the form of the single-reference coupled-cluster method and retains several of its beneficial properties. Test results for small systems are presented using a pilot implementation of the new approach and compared to those obtained by other MR methods.

  14. A Regression-based K nearest neighbor algorithm for gene function prediction from heterogeneous data

    Directory of Open Access Journals (Sweden)

    Ruzzo Walter L

    2006-03-01

Full Text Available Abstract Background As a variety of functional genomic and proteomic techniques become available, there is an increasing need for functional analysis methodologies that integrate heterogeneous data sources. Methods In this paper, we address this issue by proposing a general framework for gene function prediction based on the k-nearest-neighbor (KNN) algorithm. The choice of KNN is motivated by its simplicity, flexibility to incorporate different data types and adaptability to irregular feature spaces. A weakness of traditional KNN methods, especially when handling heterogeneous data, is that performance is subject to the often ad hoc choice of similarity metric. To address this weakness, we apply regression methods to infer a similarity metric as a weighted combination of a set of base similarity measures, which helps to locate the neighbors that are most likely to be in the same class as the target gene. We also suggest a novel voting scheme to generate confidence scores that estimate the accuracy of predictions. The method gracefully extends to multi-way classification problems. Results We apply this technique to gene function prediction according to three well-known Escherichia coli classification schemes suggested by biologists, using information derived from microarray and genome sequencing data. We demonstrate that our algorithm dramatically outperforms the naive KNN methods and is competitive with support vector machine (SVM) algorithms for integrating heterogeneous data. We also show that by combining different data sources, prediction accuracy can improve significantly. Conclusion Our extension of KNN with automatic feature weighting, multi-class prediction, and probabilistic inference enhances prediction accuracy significantly while remaining efficient, intuitive and flexible. This general framework can also be applied to similar classification problems involving heterogeneous datasets.
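The core idea, a similarity metric built as a weighted combination of base measures feeding a KNN vote, can be sketched as follows. The weights here are fixed by hand for illustration, whereas in the paper they are inferred by regression, and the two base measures (a negative Euclidean distance and a dot product) are stand-ins for similarities derived from microarray and sequence data.

```python
def weighted_similarity(a, b, weights, metrics):
    """Similarity as a weighted combination of base similarity measures."""
    return sum(w * m(a, b) for w, m in zip(weights, metrics))

def knn_predict(query, data, labels, weights, metrics, k=3):
    """Majority vote among the k most similar training examples."""
    ranked = sorted(range(len(data)),
                    key=lambda i: weighted_similarity(query, data[i], weights, metrics),
                    reverse=True)
    votes = {}
    for i in ranked[:k]:
        votes[labels[i]] = votes.get(labels[i], 0) + 1
    return max(votes, key=votes.get)

# two hypothetical base measures over 2-D feature vectors
neg_euclidean = lambda a, b: -sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
dot = lambda a, b: sum(x * y for x, y in zip(a, b))

data = [(0, 0), (0, 1), (5, 5), (6, 5)]
labels = ["a", "a", "b", "b"]
pred = knn_predict((5, 6), data, labels,
                   weights=[1.0, 0.1], metrics=[neg_euclidean, dot], k=3)
```

A fraction of the vote mass going to each class is the natural basis for the confidence scores the paper mentions.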

  15. A New Method for Deriving the Stellar Birth Function of Resolved Stellar Populations.

    Science.gov (United States)

    Gennaro, M.; Tchernyshyov, K.; Brown, T. M.; Gordon, K. D.

    2015-07-01

    We present a new method for deriving the stellar birth function (SBF) of resolved stellar populations. The SBF (stars born per unit mass, time, and metallicity) is the combination of the initial mass function (IMF), the star formation history (SFH), and the metallicity distribution function (MDF). The framework of our analysis is that of Poisson Point Processes (PPPs), a class of statistical models suitable when dealing with points (stars) in a multidimensional space (the measurement space of multiple photometric bands). The theory of PPPs easily accommodates the modeling of measurement errors as well as that of incompleteness. Our method avoids binning stars in the color-magnitude diagram and uses the whole likelihood function for each data point; combining the individual likelihoods allows the computation of the posterior probability for the population's SBF. Within the proposed framework it is possible to include nuisance parameters, such as distance and extinction, by specifying their prior distributions and marginalizing over them. The aim of this paper is to assess the validity of this new approach under a range of assumptions, using only simulated data. Forthcoming work will show applications to real data. Although it has a broad scope of possible applications, we have developed this method to study multi-band Hubble Space Telescope observations of the Milky Way Bulge. Therefore we will focus on simulations with characteristics similar to those of the Galactic Bulge. Based on observations made with the NASA/ESA Hubble Space Telescope, obtained at STScI, which is operated by AURA, Inc., under NASA contract NAS 5-26555.

  16. On a method for generating inequalities for the zeros of certain functions

    Science.gov (United States)

    Gatteschi, Luigi; Giordano, Carla

    2007-10-01

    In this paper we describe a general procedure which yields inequalities satisfied by the zeros of a given function. The method requires the knowledge of a two-term approximation of the function with bound for the error term. The method was successfully applied many years ago [L. Gatteschi, On the zeros of certain functions with application to Bessel functions, Nederl. Akad. Wetensch. Proc. Ser. 55(3)(1952), Indag. Math. 14(1952) 224-229] and more recently too [L. Gatteschi and C. Giordano, Error bounds for McMahon's asymptotic approximations of the zeros of the Bessel functions, Integral Transform Special Functions, 10(2000) 41-56], to the zeros of the Bessel functions of the first kind. Here, we present the results of the application of the method to get inequalities satisfied by the zeros of the derivative of the function . This function plays an important role in the asymptotic study of the stationary points of the solutions of certain differential equations.

  17. Functional discriminant method and neuronal net

    International Nuclear Information System (INIS)

    Minh-Quan Tran.

    1993-02-01

The ZEUS detector at the ep storage ring HERA at DESY is equipped with a three-level trigger system. This enormous effort is necessary to fight the high proton beam-gas background, which was estimated to be at the level of 100 kHz. In this thesis two methods were investigated to calculate a trigger decision from a set of trigger parameters. Functional discriminant analysis evaluates a decision parameter that is optimized by means of a linear-algebra technique. A method is shown for determining the most important trigger parameters. A feed-forward neural network was analyzed in order to allow nonlinear cuts in the n-dimensional configuration space spanned by the trigger parameters. The error back-propagation method was used to train the neural network. It is shown that both decision methods are able to abstract the important characteristics of event samples. Once trained, they separate events from these classes even when the events were not part of the training sample. (orig.) [de
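A classical linear discriminant of this kind reduces to basic linear algebra: the discriminant direction is w = S_w⁻¹(m1 − m0), where S_w is the pooled within-class scatter matrix, and the decision parameter for an event is the projection w·x. The 2-D sketch below is Fisher's linear discriminant, used as a stand-in since the thesis's exact discriminant function is not given here.

```python
def mean2(points):
    n = float(len(points))
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def fisher_direction(c0, c1):
    """Discriminant direction w = S_w^{-1} (m1 - m0) for 2-D data,
    with the 2x2 pooled within-class scatter matrix inverted by hand."""
    m0, m1 = mean2(c0), mean2(c1)
    sxx = sxy = syy = 0.0
    for pts, m in ((c0, m0), (c1, m1)):
        for x, y in pts:
            dx, dy = x - m[0], y - m[1]
            sxx += dx * dx
            sxy += dx * dy
            syy += dy * dy
    det = sxx * syy - sxy * sxy
    dmx, dmy = m1[0] - m0[0], m1[1] - m0[1]
    return ((syy * dmx - sxy * dmy) / det, (sxx * dmy - sxy * dmx) / det)

# two well-separated event classes (e.g. physics vs. beam-gas, hypothetical data);
# the trigger decision would cut on the decision parameter w . x
c0 = [(0, 0), (1, 0), (0, 1), (1, 1)]
c1 = [(5, 0), (6, 0), (5, 1), (6, 1)]
w = fisher_direction(c0, c1)
```

For these clusters, which differ only along the first axis, the learned direction points purely along that axis, as expected.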

  18. FPGA Acceleration of the phylogenetic likelihood function for Bayesian MCMC inference methods

    Directory of Open Access Journals (Sweden)

    Bakos Jason D

    2010-04-01

Full Text Available Abstract Background Maximum likelihood (ML)-based phylogenetic inference has become a popular method for estimating the evolutionary relationships among species based on genomic sequence data. This method is used in applications such as RAxML, GARLI, MrBayes, PAML, and PAUP. The Phylogenetic Likelihood Function (PLF) is an important kernel computation for this method. The PLF consists of a loop with no conditional behavior or dependencies between iterations. As such it contains high potential for exploiting parallelism using micro-architectural techniques. In this paper, we describe a technique for mapping the PLF and supporting logic onto a Field Programmable Gate Array (FPGA)-based co-processor. By leveraging the FPGA's on-chip DSP modules and the high-bandwidth local memory attached to the FPGA, the resultant co-processor can accelerate ML-based methods and outperform state-of-the-art multi-core processors. Results We use the MrBayes 3 tool as a framework for designing our co-processor. For large datasets, we estimate that our accelerated MrBayes, if run on a current-generation FPGA, achieves a 10× speedup relative to software running on a state-of-the-art server-class microprocessor. The FPGA-based implementation achieves its performance by deeply pipelining the likelihood computations, performing multiple floating-point operations in parallel, and through a natural log approximation that is chosen specifically to leverage a deeply pipelined custom architecture. Conclusions Heterogeneous computing, which combines general-purpose processors with special-purpose co-processors such as FPGAs and GPUs, is a promising approach for high-performance phylogeny inference as shown by the growing body of literature in this field. FPGAs in particular are well-suited for this task because of their low power consumption as compared to many-core processors and Graphics Processor Units (GPUs).
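The PLF kernel referred to above is, per site, a branch-free loop: the parent's conditional likelihood for each state is the product over children of the branch substitution probabilities times the child's conditional likelihoods (one Felsenstein pruning step). A minimal sketch, illustrative only; the FPGA design pipelines many of these multiply-accumulate chains in parallel:

```python
def plf_node(p_left, p_right, l_left, l_right):
    """One pruning step of the PLF: conditional likelihoods at a parent
    node given its two children, over the four nucleotide states.
    p_left/p_right are 4x4 branch substitution-probability matrices;
    l_left/l_right are the children's conditional likelihood vectors."""
    return [
        sum(p_left[s][x] * l_left[x] for x in range(4)) *
        sum(p_right[s][y] * l_right[y] for y in range(4))
        for s in range(4)
    ]

# sanity check: with identity branch matrices and both children observed
# as 'A', the parent is also certainly 'A'
identity = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
parent = plf_node(identity, identity, [1.0, 0.0, 0.0, 0.0], [1.0, 0.0, 0.0, 0.0])
# parent == [1.0, 0.0, 0.0, 0.0]
```

Note the loop body has no conditionals and no cross-iteration dependencies, which is exactly the property the paper exploits for deep pipelining.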

  19. Some Remarks on Exp-Function Method and Its Applications

    International Nuclear Information System (INIS)

    Aslan Ismail; Marinakis Vangelis

    2011-01-01

    Recently, many important nonlinear partial differential equations arising in the applied physical and mathematical sciences have been tackled by a popular approach, the so-called Exp-function method. In this paper, we present some shortcomings of this method by analyzing the results of recently published papers. We also discuss the possible improvement of the effectiveness of the method. (general)

  20. Hash function based on chaotic map lattices.

    Science.gov (United States)

    Wang, Shihong; Hu, Gang

    2007-06-01

    A new hash function system, based on coupled chaotic map dynamics, is suggested. By combining floating point computation of chaos and some simple algebraic operations, the system reaches very high bit confusion and diffusion rates, and this enables the system to have desired statistical properties and strong collision resistance. The chaos-based hash function has its advantages for high security and fast performance, and it serves as one of the most highly competitive candidates for practical applications of hash function for software realization and secure information communications in computer networks.
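As an illustration only, not the authors' algorithm, a toy hash in this spirit can be built by perturbing a lattice of coupled logistic maps with the message bytes, iterating the chaotic dynamics to get confusion and diffusion, and quantizing the final states into a digest. All parameter choices below (lattice size, coupling, map parameter) are arbitrary assumptions.

```python
def chaotic_hash(message: bytes, n=8, rounds=32, r=3.99, eps=0.05):
    """Toy hash: absorb message bytes into a lattice of n coupled
    logistic maps, iterate the chaotic dynamics, quantize the states."""
    x = [(i + 1) / (n + 1.0) for i in range(n)]          # initial lattice states
    for byte in message:
        i0 = byte % n                                     # absorb the byte
        x[i0] = (x[i0] + (byte + 1) / 257.0) % 1.0
        x[i0] = min(max(x[i0], 1e-6), 1.0 - 1e-6)         # keep strictly inside (0, 1)
        for _ in range(rounds):                           # diffuse via coupled maps
            f = [r * v * (1.0 - v) for v in x]
            x = [(1.0 - eps) * f[i] + 0.5 * eps * (f[i - 1] + f[(i + 1) % n])
                 for i in range(n)]
    return bytes(int(v * 255.999) for v in x)             # n-byte digest
```

The sensitive dependence on initial conditions is what makes a one-byte change in the message cascade through every lattice site; a production design would add the algebraic mixing and collision analysis the paper describes.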

  1. Vibrationally resolved UV/Vis spectroscopy with time-dependent density functional based tight binding

    NARCIS (Netherlands)

    Ruger, R.; Niehaus, T.; van Lenthe, E.; Heine, T.; Visscher, L.

    2016-01-01

We report a time-dependent density functional based tight-binding (TD-DFTB) scheme for the calculation of UV/Vis spectra, explicitly taking into account the excitation of nuclear vibrations via the adiabatic Hessian Franck-Condon method with a harmonic approximation for the nuclear wavefunction.

  2. The improved business valuation model for RFID company based on the community mining method.

    Science.gov (United States)

    Li, Shugang; Yu, Zhaoxu

    2017-01-01

Nowadays, the appetite for investment and mergers and acquisitions (M&A) activity in RFID companies is growing rapidly. Although a huge number of papers has addressed business valuation models based on statistical or neural network methods, only a few are dedicated to constructing a general framework for business valuation that improves performance with a network graph (NG) and the corresponding community mining (CM) method. In this study, an NG-based business valuation model is proposed, in which a real options approach (ROA) integrating the CM method is designed to predict the company's net profit as well as estimate the company value. Three improvements are made in the proposed valuation model. Firstly, the model determines the credibility of each node's membership in each community and clusters the network according to an evolutionary Bayesian method. Secondly, the improved bacterial foraging optimization algorithm (IBFOA) is adopted to calculate the optimized Bayesian posterior probability function. Finally, in IBFOA, a bi-objective method is used to assess the accuracy of prediction, and these two objectives are combined into one objective function using a new Pareto boundary method. The proposed method returns lower forecasting error than 10 well-known forecasting models on three different time-interval valuing tasks in a real-life simulation of RFID companies.

  3. A nodal method based on matrix-response method

    International Nuclear Information System (INIS)

    Rocamora Junior, F.D.; Menezes, A.

    1982-01-01

A nodal method, based on the matrix-response method, is presented, and its application to spatial gradient problems, such as those that exist in fast reactors near the core-blanket interface, is investigated. (E.G.) [pt

  4. Environment-dependent crystal-field tight-binding based on density-functional theory

    International Nuclear Information System (INIS)

    Urban, Alexander

    2012-01-01

Electronic structure calculations based on Kohn-Sham density-functional theory (DFT) allow the accurate prediction of chemical bonding and materials properties. Due to the high computational demand, DFT calculations are, however, restricted to structures containing at most several hundred atoms, i.e., to length scales of a few nanometers. However, many processes of technological relevance, for example in the field of nanoelectronics, are governed by phenomena that occur on a slightly larger length scale of up to 100 nanometers, which corresponds to tens of thousands of atoms. The semiempirical Slater-Koster tight-binding (TB) method makes it feasible to calculate the electronic structure of such large systems. In contrast to first-principles-based DFT, which is universally applicable to almost all chemical species, the TB method is based on parametrized models that are usually specialized for a particular application or for one certain class of compounds. Usually the model parameters (Slater-Koster tables) are empirically adjusted to reproduce either experimental reference data (e.g., geometries, elastic constants) or data from first-principles methods such as DFT. Constructing a new TB model therefore involves considerable effort, which is often contrasted by the low transferability of the parametrization. In this thesis we develop a systematic methodology for the derivation of accurate and transferable TB models from DFT calculations. Our procedure exploits the formal relationship between the two methods, according to which the TB total energy can be understood as a direct approximation of the Kohn-Sham energy functional. The concept of our method differs from previous approaches such as the DFTB method, since it allows the extraction of TB parameters from converged DFT wave functions and Hamiltonians of arbitrary reference structures. In the following, the different subjects of this thesis are briefly summarized.
We introduce a new technique for the

  5. Functional Assessment-Based Interventions: Focusing on the Environment and Considering Function

    Science.gov (United States)

    Oakes, Wendy Peia; Lane, Kathleen Lynne; Hirsch, Shanna Eisner

    2018-01-01

    It can be challenging for educators to select intervention tactics based on the function of the student's behavior. In this article, authors offer practical information on behavioral function and environmental-focused intervention ideas for educators developing behavior intervention plans. Ideas are organized according to the hypothesized function…

  6. Matrix-based system reliability method and applications to bridge networks

    International Nuclear Information System (INIS)

    Kang, W.-H.; Song Junho; Gardoni, Paolo

    2008-01-01

    Using a matrix-based system reliability (MSR) method, one can estimate the probabilities of complex system events by simple matrix calculations. Unlike existing system reliability methods whose complexity depends highly on that of the system event, the MSR method describes any general system event in a simple matrix form and therefore provides a more convenient way of handling the system event and estimating its probability. Even in the case where one has incomplete information on the component probabilities and/or the statistical dependence thereof, the matrix-based framework enables us to estimate the narrowest bounds on the system failure probability by linear programming. This paper presents the MSR method and applies it to a transportation network consisting of bridge structures. The seismic failure probabilities of bridges are estimated by use of the predictive fragility curves developed by a Bayesian methodology based on experimental data and existing deterministic models of the seismic capacity and demand. Using the MSR method, the probability of disconnection between each city/county and a critical facility is estimated. The probability mass function of the number of failed bridges is computed as well. In order to quantify the relative importance of bridges, the MSR method is used to compute the conditional probabilities of bridge failures given that there is at least one city disconnected from the critical facility. The bounds on the probability of disconnection are also obtained for cases with incomplete information
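The core of the MSR idea, P(system event) = cᵀp with an event vector c over component states and a probability vector p, can be sketched for small systems by explicit enumeration (the paper's matrix machinery and linear-programming bounds scale this up). The two-component example below uses hypothetical bridge failure probabilities: disconnection through a single two-bridge route versus through two independent parallel routes.

```python
from itertools import product

def msr_probability(n, system_event, fail_probs):
    """P(system event) = c^T p: enumerate the 2^n component states,
    building the event vector c and state-probability vector p on the fly.
    Components are assumed statistically independent here."""
    total = 0.0
    for state in product((0, 1), repeat=n):            # 1 = component failed
        p = 1.0
        for failed, q in zip(state, fail_probs):
            p *= q if failed else (1.0 - q)
        c = 1.0 if system_event(state) else 0.0
        total += c * p
    return total

# a city reachable over one two-bridge route: disconnected if ANY bridge fails
p_series = msr_probability(2, any, [0.1, 0.1])   # ≈ 0.19
# two independent single-bridge routes: disconnected only if ALL bridges fail
p_parallel = msr_probability(2, all, [0.1, 0.1])  # ≈ 0.01
```

Because any system event is just a different 0/1 event vector over the same state enumeration, conditional quantities like "bridge i failed given some city is disconnected" follow from ratios of two such sums.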

  7. A reward optimization method based on action subrewards in hierarchical reinforcement learning.

    Science.gov (United States)

    Fu, Yuchen; Liu, Quan; Ling, Xionghong; Cui, Zhiming

    2014-01-01

Reinforcement learning (RL) is a kind of interactive learning method whose main characteristics are "trial and error" and "related reward." A hierarchical reinforcement learning method based on action subrewards is proposed to address the "curse of dimensionality," in which the state space grows exponentially with the number of features, leading to slow convergence. The method can reduce state spaces greatly and choose actions with favorable purpose and efficiency, so as to optimize the reward function and enhance convergence speed. Applied to online learning in the game of Tetris, the experimental results show that convergence speed is clearly improved by the new method, which combines the hierarchical reinforcement learning algorithm with action subrewards. The "curse of dimensionality" problem is also solved to a certain extent by the hierarchical method. Performance under different parameters is compared and analyzed as well.

  8. [Soil carbohydrates: their determination methods and indication functions].

    Science.gov (United States)

    Zhang, Wei; Xie, Hongtu; He, Hongbo; Zheng, Lichen; Wang, Ge

    2006-08-01

Soil carbohydrates are an important component of soil organic matter and play an important role in the formation of soil aggregates. Their hydrolysis methods involve sulfuric acid (H2SO4), hydrochloric acid (HCl), and trifluoroacetic acid (TFA) hydrolysis, and their determination methods include colorimetry, gas-liquid chromatography (GLC), high performance liquid chromatography (HPLC), and high performance anion-exchange chromatography with pulsed amperometric detection (HPAE-PAD). This paper summarizes the methods for carbohydrate hydrolysis, purification and detection, with a focus on the derivatization methods for GLC, and briefly introduces the indication functions of carbohydrates in soil organic matter turnover.

  9. Towards an Early Software Effort Estimation Based on Functional and Non-Functional Requirements

    Science.gov (United States)

    Kassab, Mohamed; Daneva, Maya; Ormandjieva, Olga

    The increased awareness of the non-functional requirements as a key to software project and product success makes explicit the need to include them in any software project effort estimation activity. However, the existing approaches to defining size-based effort relationships still pay insufficient attention to this need. This paper presents a flexible, yet systematic approach to the early requirements-based effort estimation, based on Non-Functional Requirements ontology. It complementarily uses one standard functional size measurement model and a linear regression technique. We report on a case study which illustrates the application of our solution approach in context and also helps evaluate our experiences in using it.

  10. Viability Study of a Safe Method for Health to Prepare Cement Pastes with Simultaneous Nanometric Functional Additions

    Directory of Open Access Journals (Sweden)

    M. A. de la Rubia

    2018-01-01

Full Text Available The use of a mixing method based on a novel dry dispersion procedure, which enables proper mixing of simultaneous nanometric functional additions while avoiding the health risks derived from exposure to nanoparticles, is reported and compared with common manual mixing in this work. Such a dry dispersion method allows greater workability by avoiding problems associated with the dispersion of the particles. The two mixing methods have been used to prepare Portland cement CEM I 52.5R pastes with additions of nano-ZnO with bactericide properties and micro- or nanopozzolanic SiO2. The hydration process for both mixing methods is compared in order to determine the efficiency of the new method. The hydration of these cement pastes is analysed at different ages (from one to twenty-eight days) by means of differential thermal analysis and thermogravimetry (DTA-TG), X-ray diffraction (XRD), scanning electron microscopy (SEM), and Fourier transform infrared spectroscopy (FTIR) analyses. Regardless of composition, all the cement paste mixtures obtained by the novel dispersion method showed a higher retardation of cement hydration at intermediate ages, which did not occur at later ages. In agreement with the resulting hydration behaviour, the new dispersion method makes it possible to prepare homogeneous cement pastes with simultaneous functional nanoparticles physically supported on the larger cement particles, avoiding exposure to the nanoparticles and therefore minimizing health risks. Manual mixing of cement-based materials with simultaneous nanometric functional additions on a large scale would make it difficult to obtain a homogeneous material, in addition to the health risks derived from handling nanoparticles.

  11. Grey situation group decision-making method based on prospect theory.

    Science.gov (United States)

    Zhang, Na; Fang, Zhigeng; Liu, Xiaqing

    2014-01-01

This paper puts forward a grey situation group decision-making method on the basis of prospect theory, addressing grey situation group decision-making problems in which decisions are made by multiple decision experts who have risk preferences. The method takes the positive and negative ideal situation distances as reference points, defines positive and negative prospect value functions, and introduces the decision experts' risk preferences into grey situation decision-making so that the final decision is more in line with the experts' psychological behavior. Based on the TOPSIS method, this paper determines the weight of each decision expert, sets up a comprehensive prospect value matrix for the experts' evaluations, and finally determines the optimal situation. Finally, this paper verifies the effectiveness and feasibility of the method by means of a specific example.
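The positive and negative prospect value functions referred to above build on the standard prospect-theory value function: concave for gains, convex for losses, and steeper on the loss side (loss aversion). A minimal sketch using the familiar Tversky-Kahneman parameter values (α = β = 0.88, λ = 2.25), which are assumptions here, not values from the paper:

```python
def prospect_value(x, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value function relative to a reference point at 0:
    concave for gains (x >= 0), convex and steeper (lam > 1) for losses."""
    if x >= 0:
        return x ** alpha
    return -lam * ((-x) ** beta)

# a situation's outcome scored against the two reference points:
# improvement over the negative ideal reads as a gain,
# shortfall from the positive ideal reads as a loss
gain = prospect_value(0.3)
loss = prospect_value(-0.3)
```

The asymmetry (|loss| > gain for the same magnitude) is exactly what lets the method reflect the experts' risk attitudes rather than treating deviations symmetrically.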

  12. Measurement of renal function in a kidney donor: a comparison of creatinine-based and volume-based GFRs

    Energy Technology Data Exchange (ETDEWEB)

Choi, Don Kyoung; Choi, See Min; Jeong, Byong Chang; Seo, Seong Il; Jeon, Seong Soo; Lee, Hyun Moo; Choi, Han-Yong; Jeon, Hwang Gyun [Sungkyunkwan University School of Medicine, Department of Urology, Samsung Medical Center, Seoul (Korea, Republic of); Park, Bong Hee [The Catholic University of Korea College of Medicine, Department of Urology, Incheon St. Mary's Hospital, Seoul (Korea, Republic of)]

    2015-11-15

We aimed to evaluate the performance of various GFR estimates compared with direct measurement of GFR (dGFR). We also sought to create a new formula for volume-based GFR (new-vGFR) using kidney volume determined by CT. GFR was measured using creatinine-based methods (MDRD, the Cockcroft-Gault equation, the CKD-EPI formula, and the Mayo clinic formula) and the Herts method, which is volume-based (vGFR). We compared performance between GFR estimates and created a new vGFR model by multiple linear regression analysis. Among the creatinine-based GFR estimates, the MDRD and C-G equations were similarly associated with dGFR (correlation and concordance coefficients of 0.359 and 0.369 and 0.354 and 0.318, respectively). We developed the following new kidney volume-based GFR formula: GFR = 217.48 - 0.39×A + 0.25×W - 0.46×H - 54.01×sCr + 0.02×V - 19.89 (if female), where A = age, W = weight, H = height, sCr = serum creatinine level, and V = total kidney volume. The MDRD and CKD-EPI had relatively better accuracy than the other creatinine-based methods (30.7 % vs. 32.3 % within 10 % and 78.0 % vs. 73.0 % within 30 %, respectively). However, the new-vGFR formula had the most accurate results among all of the analyzed methods (37.4 % within 10 % and 84.6 % within 30 %). The new-vGFR can replace dGFR or creatinine-based GFR for assessing kidney function in donors and healthy individuals. (orig.)
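The fitted formula above can be applied directly. The sketch below assumes the conventional units (years, kg, cm, mg/dL for creatinine, mL for kidney volume), which the abstract does not state:

```python
def new_vgfr(age, weight_kg, height_cm, serum_cr, kidney_volume_ml, female):
    """Volume-based GFR from the fitted formula in the abstract.
    Units (years, kg, cm, mg/dL, mL) are assumptions, not stated there."""
    gfr = (217.48 - 0.39 * age + 0.25 * weight_kg - 0.46 * height_cm
           - 54.01 * serum_cr + 0.02 * kidney_volume_ml)
    if female:
        gfr -= 19.89   # the "- 19.89 (if female)" term
    return gfr

# hypothetical donor: 40 y, 70 kg, 170 cm, sCr 1.0, total kidney volume 300 mL
g = new_vgfr(40, 70, 170, 1.0, 300, female=False)
# g ≈ 93.17 (in mL/min-type units, per the assumption above)
```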

  13. Measurement of renal function in a kidney donor: a comparison of creatinine-based and volume-based GFRs

    International Nuclear Information System (INIS)

    Choi, Don Kyoung; Choi, See Min; Jeong, Byong Chang; Seo, Seong Il; Jeon, Seong Soo; Lee, Hyun Moo; Choi, Han-Yong; Jeon, Hwang Gyun; Park, Bong Hee

    2015-01-01

    We aimed to evaluate the performance of various GFR estimates compared with direct measurement of GFR (dGFR). We also sought to create a new formula for volume-based GFR (new-vGFR) using kidney volume determined by CT. GFR was measured using creatinine-based methods (MDRD, the Cockcroft-Gault equation, CKD-EPI formula, and the Mayo clinic formula) and the Herts method, which is volume-based (vGFR). We compared performance between GFR estimates and created a new vGFR model by multiple linear regression analysis. Among the creatinine-based GFR estimates, the MDRD and C-G equations were similarly associated with dGFR (correlation and concordance coefficients of 0.359 and 0.369 and 0.354 and 0.318, respectively). We developed the following new kidney volume-based GFR formula: GFR = 217.48 − 0.39×A + 0.25×W − 0.46×H − 54.01×sCr + 0.02×V − 19.89 (if female), where A = age, W = weight, H = height, sCr = serum creatinine level, and V = total kidney volume. The MDRD and CKD-EPI had relatively better accuracy than the other creatinine-based methods (30.7 % vs. 32.3 % within 10 % and 78.0 % vs. 73.0 % within 30 %, respectively). However, the new-vGFR formula had the most accurate results among all of the analyzed methods (37.4 % within 10 % and 84.6 % within 30 %). The new-vGFR can replace dGFR or creatinine-based GFR for assessing kidney function in donors and healthy individuals. (orig.)

  14. a Comparison Study of Different Kernel Functions for Svm-Based Classification of Multi-Temporal Polarimetry SAR Data

    Science.gov (United States)

    Yekkehkhany, B.; Safari, A.; Homayouni, S.; Hasanlou, M.

    2014-10-01

    In this paper, a framework is developed based on Support Vector Machines (SVM) for crop classification using polarimetric features extracted from multi-temporal Synthetic Aperture Radar (SAR) images. The multi-temporal integration of data not only improves the overall retrieval accuracy but also provides more reliable estimates with respect to single-date data. Several kernel functions are employed and compared in this study for mapping the input space to a higher-dimensional Hilbert space. These kernel functions include the linear, polynomial and Radial Basis Function (RBF) kernels. The method is applied to several UAVSAR L-band SAR images acquired over an agricultural area near Winnipeg, Manitoba, Canada. In this research, the temporal alpha features of the H/A/α decomposition method are used in classification. The experimental tests show that an SVM classifier with the RBF kernel applied to three dates of data increases the Overall Accuracy (OA) by up to 3% in comparison to a linear kernel function, and by up to 1% in comparison to a 3rd-degree polynomial kernel function.
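
    The three kernel families compared above can be written down directly. A small numpy sketch follows; gamma, the polynomial degree, and the offset c are illustrative hyperparameters, not the study's tuned values:

```python
import numpy as np

def linear_kernel(x, y):
    """k(x, y) = <x, y>"""
    return float(np.dot(x, y))

def poly_kernel(x, y, degree=3, c=1.0):
    """k(x, y) = (<x, y> + c)^degree"""
    return float((np.dot(x, y) + c) ** degree)

def rbf_kernel(x, y, gamma=0.5):
    """k(x, y) = exp(-gamma * ||x - y||^2)"""
    return float(np.exp(-gamma * np.sum((x - y) ** 2)))

# two toy polarimetric feature vectors
x = np.array([0.2, 0.7, 0.1])
y = np.array([0.3, 0.5, 0.4])
```

    The RBF kernel is maximal for identical inputs and decays with distance, which is why it often separates crop classes that are not linearly separable in the original feature space.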

  15. Machine function based control code algebras

    NARCIS (Netherlands)

    Bergstra, J.A.

    Machine functions have been introduced by Earley and Sturgis in [6] in order to provide a mathematical foundation of the use of the T-diagrams proposed by Bratman in [5]. Machine functions describe the operation of a machine at a very abstract level. A theory of hardware and software based on

  16. An Intelligent Method for Structural Reliability Analysis Based on Response Surface

    Institute of Scientific and Technical Information of China (English)

    桂劲松; 刘红; 康海贵

    2004-01-01

    As water depth increases, the structural safety and reliability of a system become more and more important and challenging. Therefore, structural reliability methods must be applied in ocean engineering design, such as offshore platform design. If the performance function is known in structural reliability analysis, the first-order second-moment method is often used. If the performance function cannot be expressed explicitly, the response surface method is commonly used instead, because it follows a clear line of reasoning and is simple to program. However, the traditional response surface method fits a response surface of quadratic polynomials, whose accuracy cannot be guaranteed because the true limit state surface is fitted well only in the area near the checking point. In this paper, an intelligent computing method based on the whole response surface is proposed for situations where the performance function cannot be expressed explicitly in structural reliability analysis. In this method, a fuzzy-neural-network response surface is first constructed over the whole area, and the structural reliability is then calculated by a genetic algorithm. Because all the sample points for training the network come from the whole area, the true limit state surface over the whole area can be fitted. Computational examples and comparative analysis show that the proposed method is much better than the traditional quadratic-polynomial response surface method: the amount of finite element analysis is largely reduced, the accuracy of calculation is improved, and the true limit state surface can be fitted very well over the whole area. The method proposed in this paper is therefore suitable for engineering application.
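
    The workflow the paper improves upon, fit a surrogate to an expensive performance function and then evaluate reliability on the surrogate, can be sketched as follows. This toy uses a quadratic polynomial surface and plain Monte Carlo (not the paper's fuzzy neural network and genetic algorithm), with a made-up linear limit state standing in for a finite-element run:

```python
import numpy as np

rng = np.random.default_rng(1)

def g_true(u):
    """'Expensive' performance function (stand-in for a finite-element
    analysis); failure occurs when g <= 0."""
    return 3.0 - u[:, 0] - u[:, 1]

# 1) fit a quadratic response surface from a small design of experiments
U = rng.standard_normal((50, 2))
G = g_true(U)
# design matrix: 1, u1, u2, u1^2, u2^2, u1*u2
A = np.column_stack([np.ones(len(U)), U[:, 0], U[:, 1],
                     U[:, 0]**2, U[:, 1]**2, U[:, 0]*U[:, 1]])
coef, *_ = np.linalg.lstsq(A, G, rcond=None)

def g_surrogate(u):
    a = np.column_stack([np.ones(len(u)), u[:, 0], u[:, 1],
                         u[:, 0]**2, u[:, 1]**2, u[:, 0]*u[:, 1]])
    return a @ coef

# 2) cheap Monte Carlo on the surrogate (standard normal variables)
mc = rng.standard_normal((200_000, 2))
pf = float(np.mean(g_surrogate(mc) <= 0.0))   # failure probability
```

    The paper's point is precisely that a single quadratic surface like this is only locally accurate; the fuzzy-neural-network surrogate is trained on samples from the whole area instead.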

  17. Solving the Fully Fuzzy Bilevel Linear Programming Problem through Deviation Degree Measures and a Ranking Function Method

    Directory of Open Access Journals (Sweden)

    Aihong Ren

    2016-01-01

    This paper is concerned with a class of fully fuzzy bilevel linear programming problems where all the coefficients and decision variables of both objective functions and the constraints are fuzzy numbers. A new approach based on deviation degree measures and a ranking function method is proposed to solve these problems. We first introduce concepts of the feasible region and the fuzzy optimal solution of a fully fuzzy bilevel linear programming problem. In order to obtain a fuzzy optimal solution of the problem, we apply deviation degree measures to deal with the fuzzy constraints and use a ranking function method of fuzzy numbers to rank the upper and lower level fuzzy objective functions. Then the fully fuzzy bilevel linear programming problem can be transformed into a deterministic bilevel programming problem. Considering the overall balance between improving objective function values and decreasing allowed deviation degrees, the computational procedure for finding a fuzzy optimal solution is proposed. Finally, a numerical example is provided to illustrate the proposed approach. The results indicate that the proposed approach gives a better optimal solution in comparison with the existing method.
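
    A ranking function maps a fuzzy number to a crisp value so that fuzzy objectives can be compared. The abstract does not specify which ranking function is used; a common illustrative choice for triangular fuzzy numbers is the centroid, sketched here:

```python
def centroid_rank(tfn):
    """Centroid (defuzzified) value of a triangular fuzzy number (l, m, u).

    One common ranking function; the paper's specific ranking function
    may differ, so this is an illustrative stand-in.
    """
    l, m, u = tfn
    return (l + m + u) / 3.0

# two triangular fuzzy objective values (lower, modal, upper)
a = (1.0, 2.0, 3.0)
b = (1.5, 2.5, 4.0)
ranked = sorted([a, b], key=centroid_rank)   # a ranks below b
```

    With such a ranking function in hand, the fuzzy upper- and lower-level objectives can be compared as crisp numbers, which is what allows the bilevel problem to be transformed into a deterministic one.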

  18. Orbital-dependent exchange-correlation functionals in density-functional theory realized by the FLAPW method

    Energy Technology Data Exchange (ETDEWEB)

    Betzinger, Markus

    2011-12-14

    In this thesis, we extended the applicability of the full-potential linearized augmented-plane-wave (FLAPW) method, one of the most precise, versatile and generally applicable electronic structure methods for solids working within the framework of density-functional theory (DFT), to orbital-dependent functionals for the exchange-correlation (xc) energy. Two different schemes that deal with orbital-dependent functionals, the Kohn-Sham (KS) and the generalized Kohn-Sham (gKS) formalism, have been realized. Hybrid functionals, combining some amount of the orbital-dependent exact exchange energy with local or semi-local functionals of the density, are implemented within the gKS scheme. We work in particular with the PBE0 hybrid of Perdew, Burke, and Ernzerhof. Our implementation relies on a representation of the non-local exact exchange potential - its calculation constitutes the most time consuming step in a practical calculation - by an auxiliary mixed product basis (MPB). In this way, the matrix elements of the Hamiltonian corresponding to the non-local potential become a Brillouin-zone (BZ) sum over vector-matrix-vector products. Several techniques are developed and explored to further accelerate our numerical scheme. We show PBE0 results for a variety of semiconductors and insulators. In comparison with experiment, the PBE0 functional leads to improved band gaps and an improved description of localized states. Even for the ferromagnetic semiconductor EuO with localized 4f electrons, the electronic and magnetic properties are correctly described by the PBE0 functional. Subsequently, we discuss the construction of the local, multiplicative exact exchange (EXX) potential from the non-local, orbital-dependent exact exchange energy. For this purpose we employ the optimized effective potential (OEP) method. Central ingredients of the OEP equation are the KS wave-function response and the single-particle density response function. We show that a balance between the LAPW

  19. Orbital-dependent exchange-correlation functionals in density-functional theory realized by the FLAPW method

    International Nuclear Information System (INIS)

    Betzinger, Markus

    2011-01-01

    In this thesis, we extended the applicability of the full-potential linearized augmented-plane-wave (FLAPW) method, one of the most precise, versatile and generally applicable electronic structure methods for solids working within the framework of density-functional theory (DFT), to orbital-dependent functionals for the exchange-correlation (xc) energy. Two different schemes that deal with orbital-dependent functionals, the Kohn-Sham (KS) and the generalized Kohn-Sham (gKS) formalism, have been realized. Hybrid functionals, combining some amount of the orbital-dependent exact exchange energy with local or semi-local functionals of the density, are implemented within the gKS scheme. We work in particular with the PBE0 hybrid of Perdew, Burke, and Ernzerhof. Our implementation relies on a representation of the non-local exact exchange potential - its calculation constitutes the most time consuming step in a practical calculation - by an auxiliary mixed product basis (MPB). In this way, the matrix elements of the Hamiltonian corresponding to the non-local potential become a Brillouin-zone (BZ) sum over vector-matrix-vector products. Several techniques are developed and explored to further accelerate our numerical scheme. We show PBE0 results for a variety of semiconductors and insulators. In comparison with experiment, the PBE0 functional leads to improved band gaps and an improved description of localized states. Even for the ferromagnetic semiconductor EuO with localized 4f electrons, the electronic and magnetic properties are correctly described by the PBE0 functional. Subsequently, we discuss the construction of the local, multiplicative exact exchange (EXX) potential from the non-local, orbital-dependent exact exchange energy. For this purpose we employ the optimized effective potential (OEP) method. Central ingredients of the OEP equation are the KS wave-function response and the single-particle density response function. We show that a balance between the LAPW

  20. DincRNA: a comprehensive web-based bioinformatics toolkit for exploring disease associations and ncRNA function.

    Science.gov (United States)

    Cheng, Liang; Hu, Yang; Sun, Jie; Zhou, Meng; Jiang, Qinghua

    2018-06-01

    DincRNA aims to provide a comprehensive web-based bioinformatics toolkit to elucidate the entangled relationships among diseases and non-coding RNAs (ncRNAs) from the perspective of disease similarity. Quantitative ways to illustrate relationships between pair-wise diseases always depend on their molecular mechanisms and the structure of the directed acyclic graph of Disease Ontology (DO). Corresponding methods for calculating the similarity of pair-wise diseases involve Resnik's, Lin's, Wang's, PSB and SemFunSim methods. Recently, disease similarity was validated as suitable for calculating functional similarities of ncRNAs and prioritizing ncRNA-disease pairs, and it has been widely applied for predicting ncRNA function due to the limited biological knowledge about these RNAs from wet-lab experiments. For this purpose, a large number of algorithms and prior knowledge need to be integrated, e.g. the 'pair-wise best, pairs-average' (PBPA) and 'pair-wise all, pairs-maximum' (PAPM) methods for calculating functional similarities of ncRNAs, and the random walk with restart (RWR) method for prioritizing ncRNA-disease pairs. To facilitate the exploration of disease associations and ncRNA function, DincRNA implemented all of the above eight algorithms based on DO and disease-related genes. Currently, it provides the function to query disease similarity scores, miRNA and lncRNA functional similarity scores, and the prioritization scores of lncRNA-disease and miRNA-disease pairs. Availability: http://bio-annotation.cn:18080/DincRNAClient/. Contact: biofomeng@hotmail.com or qhjiang@hit.edu.cn. Supplementary data are available at Bioinformatics online.

  1. The analytic regularization ζ function method and the cut-off method in Casimir effect

    International Nuclear Information System (INIS)

    Svaiter, N.F.; Svaiter, B.F.

    1990-01-01

    The zero point energy associated to a hermitian massless scalar field in the presence of perfectly reflecting plates in a three dimensional flat space-time is discussed. A new technique to unify two different methods - the ζ function and a variant of the cut-off method - used to obtain the so called Casimir energy is presented, and the proof of the analytic equivalence between both methods is given. (author)
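
    For reference, the standard zeta-regularized result that such calculations reproduce for this configuration (quoted here from the general literature, not from the paper's derivation) is the Casimir energy per unit area of a massless scalar field with Dirichlet conditions on two parallel plates at separation a:

```latex
% Divergent mode sum regularized via the Riemann zeta function:
%   \sum_{n=1}^{\infty} n^{3} \;\to\; \zeta(-3) = \tfrac{1}{120}
\frac{E(a)}{A} \;=\; -\,\frac{\pi^{2}\hbar c}{1440\,a^{3}},
\qquad
F(a) \;=\; -\frac{\partial}{\partial a}\,\frac{E(a)}{A}
      \;=\; -\,\frac{\pi^{2}\hbar c}{480\,a^{4}} .
```

    The familiar electromagnetic result, smaller by a factor of 2 in the denominator (720 instead of 1440), follows from the two photon polarizations. The point of the paper is that the zeta-function route and a suitably defined cut-off route to this finite value are analytically equivalent.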

  2. A Numerical Matrix-Based method in Harmonic Studies in Wind Power Plants

    DEFF Research Database (Denmark)

    Dowlatabadi, Mohammadkazem Bakhshizadeh; Hjerrild, Jesper; Kocewiak, Łukasz Hubert

    2016-01-01

    In the low-frequency range, there are couplings between the positive- and negative-sequence small-signal impedances of the power converter due to nonlinear and low-bandwidth control loops such as the synchronization loop. In this paper, a new numerical method which also considers these couplings is presented. Numerical data are advantageous over parametric differential equations, because analysing the resulting high-order, complex transfer functions is very difficult, so numerical evaluation methods are used instead. This paper proposes a numerical matrix-based method, which ...

  3. Is function-based control room design human-centered?

    International Nuclear Information System (INIS)

    Norros, L.; Savioja, P.

    2006-01-01

    Function-based approaches to system interface design appear an appealing possibility for helping designers and operators cope with the vast amount of information needed to control complex processes. In this paper we provide evidence from operator performance analyses showing that outcome-centered performance measures may not be sufficiently informative for design. We need analyses that reveal habitual patterns of using information, i.e., operator practices. We argue that practices that portray functional orienting to the task support mastery of the process. They also create the potential to make use of function-based information presentation. We see that functional design is not an absolute value. Instead, such design should support communication of the functional significance of process information to the operators in variable situations. Hence, it should facilitate the development of practices that focus on interpreting this message. Successful function-based design facilitates putting operations into their contexts and is human-centered in an extended sense: it aids making sense of the complex, dynamic and uncertain environment. (authors)

  4. Exp-function method for solving Fisher's equation

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, X-W [Department of Mathematics, Kunming Teacher's College, Kunming, Yunnan 650031 (China)], E-mail: km_xwzhou@163.com

    2008-02-15

    There are many methods to solve Fisher's equation, but each method leads only to a special solution. In this paper, a new method, namely the exp-function method, is employed to solve Fisher's equation. The obtained result includes all solutions in the open literature as special cases, and the generalized solution with some free parameters might imply some fascinating meanings hidden in Fisher's equation.
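
    For context, Fisher's equation and one well-known closed-form travelling-wave solution (the Ablowitz-Zeppetella solution, quoted from the general literature rather than from this paper) are:

```latex
% Fisher's equation:
u_t \;=\; u_{xx} \;+\; u\,(1-u),
\qquad
% exact travelling wave with speed c = 5/\sqrt{6}:
u(x,t) \;=\; \Bigl[\,1 + C\,e^{\,x/\sqrt{6}\;-\;5t/6}\,\Bigr]^{-2},
```

    with C a free constant. Solutions of this family, containing free parameters, are the kind of generalized result the exp-function method is said to recover as special cases.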

  5. Triptycene-based dianhydrides, polyimides, methods of making each, and methods of use

    KAUST Repository

    Ghanem, Bader; Pinnau, Ingo; Swaidan, Raja

    2015-01-01

    A triptycene-based monomer, a method of making a triptycene-based monomer, a triptycene-based aromatic polyimide, a method of making a triptycene-based aromatic polyimide, methods of using triptycene-based aromatic polyimides, structures incorporating triptycene-based aromatic polyimides, and methods of gas separation are provided. Embodiments of the triptycene-based monomers and triptycene-based aromatic polyimides have high permeabilities and excellent selectivities. Embodiments of the triptycene-based aromatic polyimides have one or more of the following characteristics: intrinsic microporosity, good thermal stability, and enhanced solubility. In an exemplary embodiment, the triptycene-based aromatic polyimides are microporous and have a high BET surface area. In an exemplary embodiment, the triptycene-based aromatic polyimides can be used to form a gas separation membrane.

  6. Triptycene-based dianhydrides, polyimides, methods of making each, and methods of use

    KAUST Repository

    Ghanem, Bader

    2015-12-30

    A triptycene-based monomer, a method of making a triptycene-based monomer, a triptycene-based aromatic polyimide, a method of making a triptycene-based aromatic polyimide, methods of using triptycene-based aromatic polyimides, structures incorporating triptycene-based aromatic polyimides, and methods of gas separation are provided. Embodiments of the triptycene-based monomers and triptycene-based aromatic polyimides have high permeabilities and excellent selectivities. Embodiments of the triptycene-based aromatic polyimides have one or more of the following characteristics: intrinsic microporosity, good thermal stability, and enhanced solubility. In an exemplary embodiment, the triptycene-based aromatic polyimides are microporous and have a high BET surface area. In an exemplary embodiment, the triptycene-based aromatic polyimides can be used to form a gas separation membrane.

  7. A model-based radiography restoration method based on simple scatter-degradation scheme for improving image visibility

    Science.gov (United States)

    Kim, K.; Kang, S.; Cho, H.; Kang, W.; Seo, C.; Park, C.; Lee, D.; Lim, H.; Lee, H.; Kim, G.; Park, S.; Park, J.; Kim, W.; Jeon, D.; Woo, T.; Oh, J.

    2018-02-01

    In conventional planar radiography, image visibility is often limited mainly due to the superimposition of the object structure under investigation and the artifacts caused by scattered x-rays and noise. Several methods, including computed tomography (CT) as a multiplanar imaging modality, air-gap and grid techniques for the reduction of scatter, and phase-contrast imaging as another image-contrast modality, have been extensively investigated in an attempt to overcome these difficulties. However, those methods typically require higher x-ray doses or special equipment. In this work, as another approach, we propose a new model-based radiography restoration method based on a simple scatter-degradation scheme where the intensity of scattered x-rays and the transmission function of a given object are estimated from a single x-ray image to restore the original degraded image. We implemented the proposed algorithm and performed an experiment to demonstrate its viability. Our results indicate that the degradation of image characteristics by scattered x-rays and noise was effectively recovered by using the proposed method, which considerably improves image visibility in radiography.
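
    A deliberately simplified sketch of the scatter-degradation idea follows. It assumes a constant scatter fraction, whereas the paper estimates both the scatter intensity and the object transmission function from the single image itself, so this is an illustration of the model, not the authors' estimator:

```python
import numpy as np

def restore(measured, i0, scatter_fraction=0.3):
    """Toy scatter-degradation restoration.

    Model: measured = primary + scatter, with scatter approximated here
    as a constant fraction of the mean measured intensity (an assumption
    made purely for illustration). Returns the estimated object
    transmission T = primary / I0, where i0 is the unattenuated intensity.
    """
    measured = np.asarray(measured, dtype=float)
    scatter = scatter_fraction * measured.mean()
    primary = np.clip(measured - scatter, 1e-6, None)  # keep positive
    return primary / i0

# tiny 2x2 "image": bright background, dense object in the corner
img = np.array([[900.0, 400.0], [400.0, 150.0]])
T = restore(img, i0=1000.0)   # transmission estimate in (0, 1)
```

    Subtracting the scatter estimate before forming the transmission increases the contrast between the dense and transparent regions, which is the visibility gain the restoration targets.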

  8. Perturbation methods and the Melnikov functions for slowly varying oscillators

    International Nuclear Information System (INIS)

    Lakrad, Faouzi; Charafi, Moulay Mustapha

    2005-01-01

    A new approach to obtaining the Melnikov function for homoclinic orbits in slowly varying oscillators is proposed. The present method applies the Lindstedt-Poincare method to determine an approximation of homoclinic solutions. It is shown that the resultant Melnikov condition is the same as that obtained in the usual way involving distance functions in three dimensions by Wiggins and Holmes [Homoclinic orbits in slowly varying oscillators. SIAM J Math Anal 1987;18(3):612
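
    For reference, the planar Melnikov function whose simple zeros signal transverse homoclinic intersections, in its standard form from the general literature (the slowly varying setting of the paper generalizes this to three dimensions):

```latex
% Perturbed planar system \dot q = f(q) + \epsilon\, g(q, t),
% with q_0(t) the unperturbed homoclinic orbit:
M(t_0) \;=\; \int_{-\infty}^{\infty}
   f\bigl(q_0(t)\bigr) \wedge g\bigl(q_0(t),\, t + t_0\bigr)\, dt ,
\qquad
f \wedge g \;=\; f_1 g_2 - f_2 g_1 .
```

    The paper's contribution is to reach the same condition through a Lindstedt-Poincare approximation of the homoclinic solution rather than through the three-dimensional distance-function construction.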

  9. In search of functional association from time-series microarray data based on the change trend and level of gene expression

    Directory of Open Access Journals (Sweden)

    Zeng An-Ping

    2006-02-01

    Background: The increasing availability of time-series expression data opens up new possibilities to study functional linkages of genes. Present methods used to infer functional linkages between genes from expression data are mainly based on a point-to-point comparison. Change trends between consecutive time points in time-series data have so far not been well explored. Results: In this work we present a new method based on extracting main features of the change trend and level of gene expression between consecutive time points. The method, termed trend correlation (TC), includes two major steps: (1) calculating a maximal local alignment of change trend score by dynamic programming and a change trend correlation coefficient between the maximal matched change levels of each gene pair; (2) inferring relationships of gene pairs based on two statistical extraction procedures. The new method considers time shifts and inverted relationships in a similar way as the local clustering (LC) method, but the latter is merely based on a point-to-point comparison. The TC method is demonstrated with data from the yeast cell cycle and compared with the LC method and the widely used Pearson correlation coefficient (PCC) based clustering method. The biological significance of the gene pairs is examined with several large-scale yeast databases. Although the TC method predicts an overall lower number of gene pairs than the other two methods at the same p-value threshold, the additional number of gene pairs inferred by the TC method is considerable: e.g. 20.5% compared with the LC method and 49.6% with the PCC method for a p-value threshold of 2.7E-3. Moreover, the percentage of the inferred gene pairs consistent with databases by our method is generally higher than for the LC method and similar to the PCC method. A significant number of the gene pairs only inferred by the TC method are process-identity or function-similarity pairs or have well-documented biological
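
    The core idea, scoring whether two expression profiles move in the same direction between consecutive time points, can be illustrated with a much-simplified sketch. It omits the dynamic-programming alignment, time shifts, and statistical extraction procedures that the full TC method adds:

```python
import numpy as np

def trend_agreement(x, y):
    """Fraction of consecutive intervals in which two series move in the
    same direction (up, down, or flat).

    A simplified stand-in for the paper's trend-correlation score; no
    alignment, time shifting, or change-level matching is performed.
    """
    dx = np.sign(np.diff(np.asarray(x, dtype=float)))
    dy = np.sign(np.diff(np.asarray(y, dtype=float)))
    return float(np.mean(dx == dy))

g1 = [1.0, 2.0, 1.5, 2.5, 3.0]
g2 = [0.5, 1.2, 0.9, 1.8, 2.1]   # same up/down pattern as g1
g3 = [3.0, 2.0, 2.6, 1.4, 1.0]   # inverted pattern
score_same = trend_agreement(g1, g2)   # 1.0
score_inv = trend_agreement(g1, g3)    # 0.0
```

    Note that g1 and g2 agree in trend even though their expression levels differ, exactly the kind of pair a point-to-point comparison can miss.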

  10. A Lateral Control Method of Intelligent Vehicle Based on Fuzzy Neural Network

    Directory of Open Access Journals (Sweden)

    Linhui Li

    2015-01-01

    A lateral control method is proposed for an intelligent vehicle to track the desired trajectory. Firstly, a lateral control model is established based on the visual preview and dynamic characteristics of the intelligent vehicle. Then, the lateral error and orientation error are melded into an integrated error. Considering system parameter perturbation and external interference, sliding mode control is introduced in this paper. In order to design a sliding surface, the integrated error is chosen as the parameter of the sliding mode switching function. The sliding mode switching function and its derivative are selected as the two inputs of the controller, and the front wheel angle is selected as the output. Next, a fuzzy neural network is established, and the self-learning capability of the neural network is utilized to construct the fuzzy rules. Finally, the simulation results demonstrate the effectiveness and robustness of the proposed method.
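
    A minimal sketch of the sliding-mode part described above follows. The gains lam, k, and phi are illustrative, and the fixed tanh-smoothed switching law shown here is what the paper replaces with rules learned by the fuzzy neural network:

```python
import math

def switching_function(e, e_dot, lam=2.0):
    """Sliding-mode switching function s = e_dot + lam * e, built from
    the integrated lateral/orientation error e (lam is a design gain)."""
    return e_dot + lam * e

def smc_steering(e, e_dot, k=0.5, phi=0.1, lam=2.0):
    """Boundary-layer (tanh-smoothed) sliding-mode steering command.

    Illustrative only: the paper's controller maps s and its derivative
    to the front wheel angle through a fuzzy neural network instead.
    """
    s = switching_function(e, e_dot, lam)
    return -k * math.tanh(s / phi)

# positive tracking error -> steer in the negative direction
angle = smc_steering(e=0.2, e_dot=0.05)
```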

  11. An Efficient and Reliable Statistical Method for Estimating Functional Connectivity in Large Scale Brain Networks Using Partial Correlation.

    Science.gov (United States)

    Wang, Yikai; Kang, Jian; Kemmer, Phebe B; Guo, Ying

    2016-01-01

    Currently, network-oriented analysis of fMRI data has become an important tool for understanding brain organization and brain networks. Among the range of network modeling methods, partial correlation has shown great promise in accurately detecting true brain network connections. However, the application of partial correlation in investigating brain connectivity, especially in large-scale brain networks, has been limited so far due to the technical challenges in its estimation. In this paper, we propose an efficient and reliable statistical method for estimating partial correlation in large-scale brain network modeling. Our method derives partial correlation based on the precision matrix estimated via the Constrained L1-minimization Approach (CLIME), a recently developed statistical method that is more efficient and demonstrates better performance than the existing methods. To help select an appropriate tuning parameter for sparsity control in the network estimation, we propose a new Dens-based selection method that provides a more informative and flexible tool to allow the users to select the tuning parameter based on the desired sparsity level. Another appealing feature of the Dens-based method is that it is much faster than the existing methods, which provides an important advantage in neuroimaging applications. Simulation studies show that the Dens-based method demonstrates comparable or better performance with respect to the existing methods in network estimation. We applied the proposed partial correlation method to investigate resting state functional connectivity using rs-fMRI data from the Philadelphia Neurodevelopmental Cohort (PNC) study. Our results show that partial correlation analysis removed considerable between-module marginal connections identified by full correlation analysis, suggesting these connections were likely caused by global effects or common connection to other nodes. Based on partial correlation, we find that the most significant
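
    The partial-correlation step itself is straightforward once a precision matrix is available. A sketch follows, using a plain matrix inverse in place of the CLIME estimator (which adds L1-constrained sparsity and is not reproduced here):

```python
import numpy as np

def partial_correlation(precision):
    """Partial correlations from a precision (inverse covariance) matrix:
    rho_ij = -omega_ij / sqrt(omega_ii * omega_jj)."""
    p = np.asarray(precision, dtype=float)
    d = np.sqrt(np.diag(p))
    rho = -p / np.outer(d, d)
    np.fill_diagonal(rho, 1.0)
    return rho

# toy "fMRI" data: 200 time points, 4 nodes
rng = np.random.default_rng(0)
data = rng.standard_normal((200, 4))
cov = np.cov(data, rowvar=False)
prec = np.linalg.inv(cov)   # plain inverse; CLIME would regularize this
rho = partial_correlation(prec)
```

    Unlike full (marginal) correlation, each rho_ij here conditions on all other nodes, which is why partial correlation suppresses connections driven by global effects or shared neighbors.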

  12. Method of moments solution of volume integral equations using higher-order hierarchical Legendre basis functions

    DEFF Research Database (Denmark)

    Kim, Oleksiy S.; Jørgensen, Erik; Meincke, Peter

    2004-01-01

    An efficient higher-order method of moments (MoM) solution of volume integral equations is presented. The higher-order MoM solution is based on higher-order hierarchical Legendre basis functions and higher-order geometry modeling. An unstructured mesh composed of 8-node trilinear and/or curved 27 … of magnitude in comparison to existing higher-order hierarchical basis functions. Consequently, an iterative solver can be applied even for high expansion orders. Numerical results demonstrate excellent agreement with the analytical Mie series solution for a dielectric sphere as well as with results obtained
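
    The 1-D building blocks of such hierarchical bases are Legendre polynomials, whose mutual orthogonality underlies the favorable conditioning at high expansion orders. A quick numerical check of that orthogonality (a generic property, not the paper's full vector basis construction):

```python
import numpy as np
from numpy.polynomial import legendre as L

def legendre_basis(order, x):
    """Evaluate Legendre polynomials P_0..P_order at the points x; these
    are the 1-D building blocks of hierarchical higher-order bases."""
    x = np.asarray(x, dtype=float)
    # coefficient vector [0,...,0,1] selects the single polynomial P_n
    return np.stack([L.legval(x, [0] * n + [1]) for n in range(order + 1)])

# orthogonality check with 16-point Gauss-Legendre quadrature
nodes, weights = L.leggauss(16)
B = legendre_basis(4, nodes)
gram = (B * weights) @ B.T   # entries: integral of P_m P_n over [-1, 1]
```

    The Gram matrix comes out diagonal with entries 2/(2n+1), so raising the expansion order adds functions without degrading the conditioning of the lower-order ones, which is the hierarchical property exploited above.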

  13. An Extreme Learning Machine Based on the Mixed Kernel Function of Triangular Kernel and Generalized Hermite Dirichlet Kernel

    Directory of Open Access Journals (Sweden)

    Senyue Zhang

    2016-01-01

    Because the performance of an extreme learning machine (ELM) is strongly correlated with its kernel function, a novel extreme learning machine based on a generalized triangular Hermitian kernel function is proposed in this paper. First, the generalized triangular Hermitian kernel function was constructed as the product of the triangular kernel and the generalized Hermite Dirichlet kernel, and the proposed kernel function was proved to be a valid kernel function for an extreme learning machine. Then, the learning methodology of the extreme learning machine based on the proposed kernel function was presented. The biggest advantage of the proposed kernel is that its kernel parameter takes values only in the natural numbers, which greatly shortens the computational time of parameter optimization and retains more of the structural information of the sample data. Experiments were performed on a number of binary classification, multiclassification, and regression datasets from the UCI benchmark repository. The experimental results demonstrate that the robustness and generalization performance of the proposed method outperform those of other extreme learning machines with different kernels. Furthermore, the learning speed of the proposed method is faster than that of support vector machine (SVM) methods.
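
    A minimal kernel-ELM sketch follows, using only a plain triangular kernel; the paper's mixed triangular × generalized Hermite Dirichlet kernel is not reproduced here, and delta and C are illustrative values:

```python
import numpy as np

def triangular_kernel(X, Y, delta=2.0):
    """Triangular kernel k(x, y) = max(0, 1 - ||x - y|| / delta)."""
    d = np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=2)
    return np.maximum(0.0, 1.0 - d / delta)

def kernel_elm_fit(X, T, C=100.0, delta=2.0):
    """Kernel-ELM output weights: beta = (K + I/C)^(-1) T, the standard
    regularized kernel ELM solution."""
    K = triangular_kernel(X, X, delta)
    return np.linalg.solve(K + np.eye(len(X)) / C, T)

def kernel_elm_predict(Xnew, X, beta, delta=2.0):
    return triangular_kernel(Xnew, X, delta) @ beta

# toy 1-D regression: learn y = x^2 on [0, 1]
X = np.linspace(0, 1, 20).reshape(-1, 1)
T = (X ** 2).ravel()
beta = kernel_elm_fit(X, T)
pred = kernel_elm_predict(X, X, beta)   # close to T on training points
```

    Swapping in a different kernel only changes `triangular_kernel`; the closed-form solve is what gives kernel ELMs their speed advantage over iteratively trained SVMs.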

  14. DGDFT: A massively parallel method for large scale density functional theory calculations.

    Science.gov (United States)

    Hu, Wei; Lin, Lin; Yang, Chao

    2015-09-28

    We describe a massively parallel implementation of the recently developed discontinuous Galerkin density functional theory (DGDFT) method, for efficient large-scale Kohn-Sham DFT based electronic structure calculations. The DGDFT method uses adaptive local basis (ALB) functions generated on-the-fly during the self-consistent field iteration to represent the solution to the Kohn-Sham equations. The use of the ALB set provides a systematic way to improve the accuracy of the approximation. By using the pole expansion and selected inversion technique to compute electron density, energy, and atomic forces, we can make the computational complexity of DGDFT scale at most quadratically with respect to the number of electrons for both insulating and metallic systems. We show that for the two-dimensional (2D) phosphorene systems studied here, using 37 basis functions per atom allows us to reach an accuracy level of 1.3 × 10(-4) Hartree/atom in terms of the error of energy and 6.2 × 10(-4) Hartree/bohr in terms of the error of atomic force, respectively. DGDFT can achieve 80% parallel efficiency on 128,000 high performance computing cores when it is used to study the electronic structure of 2D phosphorene systems with 3500-14 000 atoms. This high parallel efficiency results from a two-level parallelization scheme that we will describe in detail.

  15. DGDFT: A massively parallel method for large scale density functional theory calculations

    International Nuclear Information System (INIS)

    Hu, Wei; Yang, Chao; Lin, Lin

    2015-01-01

    We describe a massively parallel implementation of the recently developed discontinuous Galerkin density functional theory (DGDFT) method, for efficient large-scale Kohn-Sham DFT based electronic structure calculations. The DGDFT method uses adaptive local basis (ALB) functions generated on-the-fly during the self-consistent field iteration to represent the solution to the Kohn-Sham equations. The use of the ALB set provides a systematic way to improve the accuracy of the approximation. By using the pole expansion and selected inversion technique to compute electron density, energy, and atomic forces, we can make the computational complexity of DGDFT scale at most quadratically with respect to the number of electrons for both insulating and metallic systems. We show that for the two-dimensional (2D) phosphorene systems studied here, using 37 basis functions per atom allows us to reach an accuracy level of 1.3 × 10^−4 Hartree/atom in terms of the error of energy and 6.2 × 10^−4 Hartree/bohr in terms of the error of atomic force, respectively. DGDFT can achieve 80% parallel efficiency on 128,000 high performance computing cores when it is used to study the electronic structure of 2D phosphorene systems with 3500-14 000 atoms. This high parallel efficiency results from a two-level parallelization scheme that we will describe in detail

  16. DGDFT: A massively parallel method for large scale density functional theory calculations

    Energy Technology Data Exchange (ETDEWEB)

    Hu, Wei, E-mail: whu@lbl.gov; Yang, Chao, E-mail: cyang@lbl.gov [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Lin, Lin, E-mail: linlin@math.berkeley.edu [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, California 94720 (United States); Department of Mathematics, University of California, Berkeley, California 94720 (United States)

    2015-09-28

    We describe a massively parallel implementation of the recently developed discontinuous Galerkin density functional theory (DGDFT) method for efficient large-scale Kohn-Sham DFT based electronic structure calculations. The DGDFT method uses adaptive local basis (ALB) functions generated on-the-fly during the self-consistent field iteration to represent the solution to the Kohn-Sham equations. The use of the ALB set provides a systematic way to improve the accuracy of the approximation. By using the pole expansion and selected inversion technique to compute electron density, energy, and atomic forces, we can make the computational complexity of DGDFT scale at most quadratically with respect to the number of electrons for both insulating and metallic systems. We show that for the two-dimensional (2D) phosphorene systems studied here, using 37 basis functions per atom allows us to reach an accuracy level of 1.3 × 10⁻⁴ Hartree/atom in terms of the error of energy and 6.2 × 10⁻⁴ Hartree/bohr in terms of the error of atomic force, respectively. DGDFT can achieve 80% parallel efficiency on 128,000 high performance computing cores when it is used to study the electronic structure of 2D phosphorene systems with 3500-14,000 atoms. This high parallel efficiency results from a two-level parallelization scheme that we will describe in detail.
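
    The strong-scaling figure quoted in the DGDFT abstracts above (80% parallel efficiency on 128,000 cores) follows from the standard relative-efficiency formula. A minimal sketch, with illustrative placeholder timings rather than measured data:

    ```python
    # Hedged sketch: relative strong-scaling parallel efficiency, the metric the
    # DGDFT abstracts report. The timings below are invented for illustration.

    def parallel_efficiency(t_ref: float, p_ref: int, t_p: float, p: int) -> float:
        """Efficiency of a run on p cores relative to a reference run on p_ref
        cores: (t_ref * p_ref) / (t_p * p). A value of 1.0 is ideal scaling."""
        return (t_ref * p_ref) / (t_p * p)

    # Example: doubling cores from 64,000 to 128,000 while wall time drops only
    # from 100 s to 62.5 s yields 80% relative efficiency.
    eff = parallel_efficiency(t_ref=100.0, p_ref=64_000, t_p=62.5, p=128_000)
    print(f"{eff:.0%}")
    ```

    Perfect scaling would have halved the wall time to 50 s; the shortfall to 62.5 s is what the 80% figure quantifies.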

  17. Mindfulness-Based Therapies in the Treatment of Functional Gastrointestinal Disorders: A Meta-Analysis

    Directory of Open Access Journals (Sweden)

    Monique Aucoin

    2014-01-01

    Full Text Available Background. Functional gastrointestinal disorders are highly prevalent and standard treatments are often unsatisfactory. Mindfulness-based therapy has shown benefit in conditions including chronic pain, mood, and somatization disorders. Objectives. To assess the quality and effectiveness reported in the existing literature, we conducted a meta-analysis of mindfulness-based therapy in functional gastrointestinal disorders. Methods. PubMed, EBSCO, and Cochrane databases were searched from inception to May 2014. Inclusion criteria required randomized, controlled studies of adults using mindfulness-based therapy in the treatment of functional gastrointestinal disorders. Study quality was evaluated using the Cochrane risk of bias tool. Effect sizes were calculated and pooled to obtain a summary effect for the intervention on symptom severity and quality of life. Results. Of 119 records, eight articles, describing seven studies, met the inclusion criteria. In six studies, significant improvements were achieved or maintained at the end of the intervention or at follow-up time points. The studies had an unclear or high risk of bias. Pooled effects were statistically significant for IBS severity (0.59, 95% CI 0.33 to 0.86) and quality of life (0.56, 95% CI 0.47 to 0.79). Conclusion. Studies suggest that mindfulness-based interventions may provide benefit in functional gastrointestinal disorders; however, substantial improvements in methodological quality and reporting are needed.
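
    The pooled effects with 95% confidence intervals reported above are typically computed by inverse-variance weighting of per-study effect sizes. A minimal fixed-effect sketch, with made-up study data (not the studies from the review):

    ```python
    # Hedged sketch of fixed-effect inverse-variance pooling, the standard way a
    # meta-analysis combines per-study effect sizes. Effects and standard errors
    # below are hypothetical, not taken from the cited review.
    import math

    def pool_fixed_effect(effects, std_errors):
        """Return (pooled effect, 95% CI lower, 95% CI upper)."""
        weights = [1.0 / se**2 for se in std_errors]          # inverse variance
        pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
        se_pooled = math.sqrt(1.0 / sum(weights))
        return pooled, pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled

    effects = [0.45, 0.70, 0.60]   # hypothetical standardized mean differences
    ses = [0.20, 0.25, 0.15]       # hypothetical standard errors
    est, lo, hi = pool_fixed_effect(effects, ses)
    print(f"pooled = {est:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
    ```

    More precise studies (smaller standard errors) receive larger weights, which is why the pooled estimate sits closest to the third study here; a random-effects model would additionally widen the interval for between-study heterogeneity.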

  18. PreSurgMapp: a MATLAB Toolbox for Presurgical Mapping of Eloquent Functional Areas Based on Task-Related and Resting-State Functional MRI.

    Science.gov (United States)

    Huang, Huiyuan; Ding, Zhongxiang; Mao, Dewang; Yuan, Jianhua; Zhu, Fangmei; Chen, Shuda; Xu, Yan; Lou, Lin; Feng, Xiaoyan; Qi, Le; Qiu, Wusi; Zhang, Han; Zang, Yu-Feng

    2016-10-01

    The main goal of brain tumor surgery is to maximize tumor resection while minimizing the risk of irreversible postoperative functional sequelae. Eloquent functional areas should be delineated preoperatively, particularly for patients with tumors near eloquent areas. Functional magnetic resonance imaging (fMRI) is a noninvasive technique that demonstrates great promise for presurgical planning. However, specialized data processing toolkits for presurgical planning remain lacking. Building on functions in open-source software such as Statistical Parametric Mapping (SPM), Resting-State fMRI Data Analysis Toolkit (REST), Data Processing Assistant for Resting-State fMRI (DPARSF), and Multiple Independent Component Analysis (MICA), here we introduce an open-source MATLAB toolbox named PreSurgMapp. This toolbox can reveal eloquent areas using comprehensive methods and various complementary fMRI modalities. For example, PreSurgMapp supports both model-based (general linear model, GLM, and seed correlation) and data-driven (independent component analysis, ICA) methods and processes both task-based and resting-state fMRI data. PreSurgMapp is designed for highly automatic, individualized functional mapping, with a user-friendly graphical user interface (GUI) for time-saving pipeline processing. For example, sensorimotor and language-related components can be identified automatically, without human intervention, by an effective and accurate component-identification algorithm based on a discriminability index. All generated results can be further evaluated and compared by neuro-radiologists or neurosurgeons. This software has substantial value for clinical neuro-radiology and neuro-oncology, including application to patients with low- and high-grade brain tumors and those with epilepsy foci in the dominant language hemisphere who are planning to undergo a temporal lobectomy.
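
    The seed-correlation mapping mentioned above correlates the time course of a seed region with every voxel's time course. A minimal sketch on synthetic data, not PreSurgMapp's actual (MATLAB) implementation:

    ```python
    # Hedged sketch of seed-based correlation mapping for resting-state fMRI,
    # one of the model-based methods the abstract lists. Data are synthetic
    # random "voxel" time courses, not real fMRI.
    import numpy as np

    rng = np.random.default_rng(0)
    n_timepoints, n_voxels = 200, 500
    data = rng.standard_normal((n_timepoints, n_voxels))

    # Seed time course: mean signal over the first 10 "voxels" (a stand-in for
    # an anatomically defined seed region).
    seed = data[:, :10].mean(axis=1)

    # Pearson correlation of the seed with every voxel, fully vectorized.
    data_c = data - data.mean(axis=0)
    seed_c = seed - seed.mean()
    corr_map = (data_c.T @ seed_c) / (
        np.linalg.norm(data_c, axis=0) * np.linalg.norm(seed_c)
    )

    print(corr_map.shape)  # one correlation value per voxel
    ```

    Voxels inside the seed region correlate strongly with the seed by construction; in practice the map would be thresholded (and corrected for multiple comparisons) before being overlaid on anatomy.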

  19. Dynamic Sensor Management Algorithm Based on Improved Efficacy Function

    Directory of Open Access Journals (Sweden)

    TANG Shujuan

    2016-01-01

    Full Text Available A dynamic sensor management algorithm based on an improved efficacy function is proposed to solve the multi-target, multi-sensor management problem. The tracking precision requirement (TPR) of the task, target priority, and sensor use cost are considered in establishing the efficacy function as a weighted sum of the normalized values of the three factors. The dynamic sensor management algorithm is carried out by controlling the divergence between the desired covariance matrix (DCM) and the filtering covariance matrix (FCM). The DCM is preassigned in terms of the TPR, and the FCM is obtained by a centralized sequential Kalman filtering algorithm. The simulation results show that the proposed method meets the requirements of desired tracking precision and adjusts sensor selection according to target priority and the cost of sensor usage. This makes the sensor management scheme more reasonable and effective.
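
    The weighted-sum efficacy function described above can be sketched as follows. The weights, factor values, and the convention that cost enters with a negative sign are illustrative assumptions, not the paper's exact formulation:

    ```python
    # Hedged sketch of a weighted-sum efficacy function over three normalized
    # factors: tracking precision requirement (TPR), target priority, and sensor
    # use cost. Weights and candidate values are made up for illustration.

    def efficacy(tpr: float, priority: float, cost: float,
                 w_tpr: float = 0.5, w_pri: float = 0.3, w_cost: float = 0.2) -> float:
        """Higher is better. All inputs are assumed pre-normalized to [0, 1];
        cost is subtracted so that cheaper sensors are preferred."""
        return w_tpr * tpr + w_pri * priority - w_cost * cost

    # Pick the sensor whose assignment to the target scores highest.
    candidates = {
        "radar": (0.9, 0.8, 0.7),   # precise but expensive
        "ir":    (0.6, 0.8, 0.2),   # cheaper, less precise
    }
    best = max(candidates, key=lambda name: efficacy(*candidates[name]))
    print(best)
    ```

    Normalizing each factor to [0, 1] before weighting keeps the weights interpretable as relative importances, which is the point of the weighted-sum construction.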

  20. Minimizing convex functions by continuous descent methods

    Directory of Open Access Journals (Sweden)

    Sergiu Aizicovici

    2010-01-01

    Full Text Available We study continuous descent methods for minimizing convex functions, defined on general Banach spaces, which are associated with an appropriate complete metric space of vector fields. We show that there exists an everywhere dense open set in this space of vector fields such that each of its elements generates strongly convergent trajectories.
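
    A continuous descent method of the kind studied above follows the trajectory of the gradient flow x'(t) = -∇f(x(t)) generated by a descent vector field. A minimal one-dimensional sketch with forward-Euler discretization; the quadratic objective is an illustration, not an example from the paper:

    ```python
    # Hedged sketch of a continuous descent method for a convex function:
    # integrate the gradient flow x'(t) = -f'(x(t)) with small Euler steps.
    # The step size, step count, and objective below are illustrative choices.

    def descend(grad, x0: float, dt: float = 0.01, steps: int = 2000) -> float:
        """Follow the trajectory of the vector field -grad f from x0."""
        x = x0
        for _ in range(steps):
            x -= dt * grad(x)   # Euler step along the descent direction
        return x

    # f(x) = (x - 3)^2 is convex with unique minimizer x* = 3.
    x_star = descend(lambda x: 2.0 * (x - 3.0), x0=10.0)
    print(round(x_star, 4))
    ```

    For this strongly convex objective the trajectory converges to the minimizer from any starting point; the paper's genericity result concerns which vector fields in a complete metric space of fields generate such strongly convergent trajectories in general Banach spaces.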